
refactor normalization to be operations-based (#2193)

#### Is this adding or improving a _feature_ or fixing a _bug_?

Improvement.

#### What's the new behavior?

This changes the normalization logic to be operations (and `key`) based, instead of the previous, more haphazard logic, which had bugs that could lead to non-normalized documents in certain cases.

#### How does this change work?

Now, every time a change method is called, after it applies its operations, those operations will be normalized. Based on each operation we can know exactly which nodes are "dirty" and need to be re-validated.

This change also makes it easy for the `withoutNormalizing` (previously `withoutNormalization`) helper to be much more performant, and only normalize the "dirty" nodes instead of being forced to handle the entire document.

To accommodate this new behavior, the old "operation flags" have been removed, replaced with a set of more consistent helpers:

- `withoutNormalizing`
- `withoutSaving`
- `withoutMerging`

All of them take functions that will be run with the desired behavior in scope, similar to how Immutable.js's own `withMutations` works. Previously this was done with a more complex set of flags, which could be set and unset in a confusing number of different ways, and it was generally not very well thought out. Hopefully this cleans it up, and makes it more approachable for people.
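As a rough illustration (the keys, text, and `decorations` value here are placeholders, not data from this PR), the three helpers read like this:

```js
// Batch several removals and defer validation until the function returns.
change.withoutNormalizing(() => {
  change.removeNodeByKey(aKey)
  change.removeNodeByKey(bKey)
})

// Apply a change without adding it to the undo history.
change.withoutSaving(() => {
  change.setValue({ decorations })
})

// Keep these operations as their own save point instead of merging them into
// the previous one.
change.withoutMerging(() => {
  change.insertText('!')
})
```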

We also automatically use the `withoutNormalizing` helper function for all of the changes that occur as part of schema `normalize` functions. Previously, people had to use `{ normalize: false }` everywhere in those functions, which was error-prone.
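As a hypothetical sketch (this particular rule and error code are illustrative, not taken from the PR), a schema rule's `normalize` function can now call change methods directly:

```js
const schema = {
  blocks: {
    list: {
      nodes: [{ match: { type: 'item' } }],
      normalize: (change, error) => {
        // This runs inside `withoutNormalizing` automatically, so there's no
        // need to pass `{ normalize: false }` to each change method anymore.
        if (error.code === 'child_type_invalid') {
          change.wrapBlockByKey(error.child.key, 'item')
        }
      },
    },
  },
}
```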

With this new architecture, you should almost never need to think about normalization. The exception is when you explicitly want to move through an interim state that is invalid according to Slate's default schema or your own custom schema, in which case you can wrap those changes in `withoutNormalizing` to allow the invalid interim state to be passed through.
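For example, mirroring one of the test fixtures in this PR, wrapping a block twice passes through a state where the `item` block briefly has no `list` parent, so both wraps go inside `withoutNormalizing`:

```js
change.withoutNormalizing(() => {
  // Between the two wraps the document is invalid according to the schema,
  // which would otherwise "fix" (remove) the lone `item` prematurely.
  let target = change.value.document.nodes.get(0)
  change.wrapBlockByKey(target.key, 'item')
  target = change.value.document.nodes.get(0)
  change.wrapBlockByKey(target.key, 'list')
})
```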

#### Have you checked that...?

* [x] The new code matches the existing patterns and styles.
* [x] The tests pass with `yarn test`.
* [x] The linter passes with `yarn lint`. (Fix errors with `yarn prettier`.)
* [x] The relevant examples still work. (Run examples with `yarn watch`.)

#### Does this fix any issues or need any specific reviewers?

Fixes: #1363
Fixes: #2134
Fixes: #2135
Fixes: #2136
Fixes: #1579
Fixes: #2132
Fixes: #1657
Ian Storm Taylor 2018-09-21 11:15:04 -07:00 committed by GitHub
parent da93937b19
commit c9cf16d019
26 changed files with 1058 additions and 1434 deletions

View File

@ -45,7 +45,7 @@ const schema = {
Hopefully just by reading this definition you'll understand what kinds of blocks are allowed in the document and what properties they can have—schemas are designed to prioritize legibility.
This schema defines a document that only allows `paragraph` and `image` blocks. In the case of `paragraph` blocks, they can only contain text nodes. And in the case of `image` blocks, they are always void nodes with a `data.src` property that is a URL. Simple enough, right?
This schema defines a document that only allows `paragraph` and `image` blocks. In the case of `paragraph` blocks, they can only contain text nodes. And in the case of `image` blocks, they are void nodes with a `data.src` property that is a URL. Simple enough, right?
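As a purely illustrative sketch of that kind of definition (not the exact schema from the guide, which lives just above this hunk):

```js
const schema = {
  document: {
    nodes: [{ match: [{ type: 'paragraph' }, { type: 'image' }] }],
  },
  blocks: {
    paragraph: {
      nodes: [{ match: { object: 'text' } }],
    },
    image: {
      isVoid: true,
      data: {
        // Assumes a simple URL check; the real guide may use a helper instead.
        src: src => typeof src == 'string' && /^https?:\/\//.test(src),
      },
    },
  },
}
```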
The magic is that by passing a schema like this into your editor, it will automatically "validate" the document when changes are made, to make sure the schema is being adhered to. If it is, great. But if it isn't, and one of the nodes in the document is invalid, the editor will automatically "normalize" the node, to make the document valid again.
@ -109,67 +109,3 @@ This validation defines a very specific (honestly, useless) behavior, where if a
When you need this level of specificity, using the `normalizeNode` property of the editor or plugins is handy.
However, only use it when you absolutely have to. And when you do, make sure to optimize the function's performance. `normalizeNode` will be called **every time the node changes**, so it should be as performant as possible. That's why the example above returns early, so that the smallest amount of work is done each time it is called.
## Multi-step Normalizations
Some normalizations will require multiple `change` function calls in order to complete. But after calling the first change function, the resulting document will be normalized, changing the document out from underneath you. This can cause unintended behaviors.
Consider the following validation function that merges adjacent text nodes together.
Note: This functionality is already correctly implemented in slate-core so you don't need to put it in yourself!
```js
/**
 * Merge adjacent text nodes.
 *
 * @type {Object}
 */

normalizeNode(node) {
  if (node.object != 'block' && node.object != 'inline') return

  const invalids = node.nodes
    .map((child, i) => {
      const next = node.nodes.get(i + 1)
      if (child.object != 'text') return
      if (!next || next.object != 'text') return
      return next
    })
    .filter(Boolean)

  if (!invalids.size) return

  return (change) => {
    // Reverse the list to handle consecutive merges, since the earlier nodes
    // will always exist after each merge.
    invalids.reverse().forEach((n) => {
      change.mergeNodeByKey(n.key)
    })
  }
}
```
There is actually a problem with this code. Because each `change` function call will cause the nodes impacted by the mutation to be normalized, it can interrupt carefully implemented sequences of `change` functions and may create performance problems or errors. The normalization logic in the above example will merge the last node in the `invalids` list into its previous sibling, but then it'll trigger another normalization and start over!
How can we deal with this? Well, normalization can be suppressed temporarily for multiple `change` function calls by using the `change.withoutNormalization` function. `withoutNormalization` accepts a function that takes a `change` object as a parameter, and executes the function while suppressing normalization. Once the function is done executing, the entire document is normalized to pick up any unnormalized transformations and ensure your document is in a normalized state.
The above validation function can then be written as below:
```js
/**
 * Merge adjacent text nodes.
 *
 * @type {Object}
 */

normalizeNode(node) {
  ...
  return (change) => {
    change.withoutNormalization((c) => {
      // Reverse the list to handle consecutive merges, since the earlier nodes
      // will always exist after each merge.
      invalids.reverse().forEach((n) => {
        c.mergeNodeByKey(n.key)
      })
    })
  }
}
```

View File

@ -24,9 +24,9 @@ A [`Value`](./value.md) with the change's current operations applied. Each time
### `call`
`call(customChange: Function, ...args) => Change`
`call(fn: Function, ...args) => Change`
This method calls the provided `customChange` function with the current instance of the `Change` object as the first argument and passes through the remaining `args`.
This method calls the provided function with the current instance of the `Change` object as the first argument and passes through the remaining `args`.
The purpose of `call` is to enable custom change methods to exist and be called in a chain. For example:
@ -53,40 +53,50 @@ function onSomeEvent(event, change) {
`normalize() => Void`
This method normalizes the document with the value's schema. This should run automatically; you should not need to call this method unless you have manually disabled normalization (and you should rarely, if ever, need to manually disable normalization). The vast majority of changes, whether by the user or invoked programmatically, will run `normalize` by default to ensure the document is always in adherence to its schema. `withoutNormalization` also runs `normalize` upon completion.
This method normalizes the document with the value's schema. This should run automatically; you should not need to call this method unless you have manually disabled normalization (and you should rarely, if ever, need to manually disable normalization). The vast majority of changes, whether by the user or invoked programmatically, will run `normalize` by default to ensure the document is always in adherence to its schema.
> 🤖 If you must use this method, use it sparingly and strategically. Calling this method can be very expensive as it will run normalization on all of the nodes in your document.
### `withoutNormalization`
### `withoutNormalizing`
`withoutNormalization(customChange: Function) => Change`
`withoutNormalizing(fn: Function) => Change`
This method calls the provided `customChange` function with the current instance of the `Change` object as the first argument. Normalization is suspended while `customChange` is executing, but will be run after `customChange` completes.
This method calls the provided function with the current instance of the `Change` object as the first argument. Normalization does not occur while the function is executing, and is instead deferred to run immediately after it completes.
This method can be used to allow a sequence of change operations that should not be interrupted by normalization. For example:
```js
/**
* Only allow block nodes in documents.
*
* @type {Object}
*/
validateNode(node) {
if (node.object != 'document') return
const invalids = node.nodes.filter(n => n.object != 'block')
if (!invalids.size) return
function removeManyNodes(node) {
const toRemove = node.nodes.filter(n => n.object != 'block')
if (!toRemove.size) return
return (change) => {
change.withoutNormalization((c) => {
invalids.forEach((child) => {
c.removeNodeByKey(child.key)
})
change.withoutNormalizing(() => {
toRemove.forEach(child => {
change.removeNodeByKey(child.key)
})
}
})
}
```
> 🤖 If you must use this method, use it sparingly and strategically. Calling this method can be very expensive as it will run normalization on all of the nodes in your document.
### `withoutSaving`
`withoutSaving(fn: Function) => Change`
By default all new operations are saved to the editor's history. If you have changes that you don't want to show up in the history when the user presses <kbd>cmd+z</kbd>, you can use `withoutSaving` to skip those changes.
```js
change.withoutSaving(() => {
change.setValue({ decorations })
})
```
However, be sure you know what you are doing because this will create changes that cannot be undone by the user, and might result in confusing behaviors.
### `withoutMerging`
`withoutMerging(fn: Function) => Change`
Usually, all of the operations in a `Change` are grouped into a single save point in the editor's history. However, sometimes you may want more control over this, to be able to create distinct save points in a single change. To do that, you can use the `withoutMerging` helper.
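A minimal sketch of usage (the inserted text is just a placeholder):

```js
change.withoutMerging(() => {
  // These operations form their own save point, so a single undo reverts just
  // this insertion rather than the entire surrounding change.
  change.insertText('!')
})
```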
## Full Value Change
@ -97,10 +107,6 @@ validateNode(node) {
Set the entire `value` using either a `properties` object or a `Value` object. Can be used to set `value.data` and other properties that cannot otherwise be easily set using the available methods.
Warning: Calling `setValue` with a `Value` object has unpredictable behavior, including the loss of the edit history. Only use a `Value` object if you know what you are doing. For most use cases, we recommend passing `properties` as an `Object` (e.g. `change.setValue({ data: myNewDataObject })`).
Hint: Wrapping the call to `setValue` as follows can be helpful if you want to update part of the value, like the value's `data`, but do not want to add another save point in the undo history: `change.setOperationFlag('save', false).setValue({ data: myNewDataObject }).setOperationFlag('save', true)`.
## Current Selection Changes
These changes act on the `document` based on the current `selection`. They are equivalent to calling the [Document Range Changes](#document-range-changes) with the current selection as the `range` argument, but they are there for convenience, since you often want to act with the current selection, as a user would.

View File

@ -229,7 +229,7 @@ Will validate a node's marks. The `marks` definitions can declare the `type` pro
A function that can be provided to override the default behavior in the case of a rule being invalid. By default, Slate will do what it can, but since it doesn't know much about your schema, it will often remove invalid nodes. If you want to override this behavior and "fix" the node instead of removing it, pass a custom `normalize` function.
For more information on the arguments passed to `normalize`, see the [Normalizing](#normalizing) section.
For more information on the arguments passed to `normalize`, see the [Errors](#errors) section.
### `parent`

View File

@ -150,14 +150,13 @@ class SearchHighlighting extends React.Component {
})
})
// Setting the `save` option to false prevents this change from being added
// to the undo/redo stack and clearing the redo stack if the user has undone
// changes.
const change = value
.change()
.setOperationFlag('save', false)
.setValue({ decorations })
.setOperationFlag('save', true)
const change = value.change()
// Make the change to decorations without saving it into the undo history,
// so that there isn't a confusing behavior when undoing.
change.withoutSaving(() => {
change.setValue({ decorations })
})
this.onChange(change)
}

View File

@ -115,7 +115,9 @@ function BeforePlugin() {
// happen on the initialization of the editor, or if the schema changes.
// This change isn't saved into history since only the schema is updated.
if (value.schema != editor.schema) {
change.setValue({ schema: editor.schema }, { save: false }).normalize()
change.withoutSaving(() => {
change.setValue({ schema: editor.schema }).normalize()
})
}
debug('onChange')

View File

@ -4,6 +4,36 @@ A list of changes to the `slate` package with each new version. Until `1.0.0` is
---
### `0.41.0` — September 21, 2018
###### DEPRECATED
**The `withoutNormalization` helper has been renamed to `withoutNormalizing`.** This is to stay consistent with the new helpers for `withoutSaving` and `withoutMerging`.
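In practice the rename is a one-for-one swap:

```js
// Before
change.withoutNormalization(fn)

// After
change.withoutNormalizing(fn)
```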
###### BREAKING
**The the "operation flags" concept was removed.** This was a confusing concept that was implemented in multiple different ways and led to the logic around normalizing, saving, and merging operations being more complex than it needed to be. These flags have been replaced with three simpler helper functions: `withoutNormalizing`, `withoutSaving` and `withoutMerging`.
```js
change.withoutNormalizing(() => {
nodes.forEach(node => change.removeNodeByKey(node.key))
})
```
```js
change.withoutSaving(() => {
change.setValue({ decorations })
})
```
This means that you no longer use the `{ normalize: false }` or `{ save: false }` options as arguments to individual change methods, and instead use these new helper methods to apply these behaviors to groups of changes at once.
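For instance, a rough before/after of the migration (the keys and `decorations` value are placeholders):

```js
// Before
change.removeNodeByKey(key, { normalize: false })
change.setValue({ decorations }, { save: false })

// After
change.withoutNormalizing(() => change.removeNodeByKey(key))
change.withoutSaving(() => change.setValue({ decorations }))
```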
**The "normalize" change methods have been removed.** Previously there were a handful of different normalization change methods like `normalizeNodeByPath`, `normalizeParentByKey`, etc. These were confusing because it put the onus on the implemented to know exact which nodes needed to be normalized. They have been removed, and implementers no longer ever need to worry about which specific nodes to normalize, as Slate will handle that for them.
**The internal `refindNode` and `refindPath` methods were removed.** These should never have been exposed in the first place, and are now no longer present on the `Element` interface. These were only used internally during the normalization process.
---
### `0.40.0` — August 22, 2018
###### BREAKING

File diff suppressed because it is too large

View File

@ -20,10 +20,9 @@ const Changes = {}
* @param {Number} offset
* @param {Number} length
* @param {Mixed} mark
* @param {Object} options
*/
Changes.addMarkByPath = (change, path, offset, length, mark, options) => {
Changes.addMarkByPath = (change, path, offset, length, mark) => {
mark = Mark.create(mark)
const { value } = change
const { document } = value
@ -62,7 +61,6 @@ Changes.addMarkByPath = (change, path, offset, length, mark, options) => {
})
change.applyOperations(operations)
change.normalizeParentByPath(path, options)
}
/**
@ -72,15 +70,12 @@ Changes.addMarkByPath = (change, path, offset, length, mark, options) => {
* @param {Array} path
* @param {Number} index
* @param {Fragment} fragment
* @param {Object} options
*/
Changes.insertFragmentByPath = (change, path, index, fragment, options) => {
Changes.insertFragmentByPath = (change, path, index, fragment) => {
fragment.nodes.forEach((node, i) => {
change.insertNodeByPath(path, index + i, node)
})
change.normalizeNodeByPath(path, options)
}
/**
@ -90,10 +85,9 @@ Changes.insertFragmentByPath = (change, path, index, fragment, options) => {
* @param {Array} path
* @param {Number} index
* @param {Node} node
* @param {Object} options
*/
Changes.insertNodeByPath = (change, path, index, node, options) => {
Changes.insertNodeByPath = (change, path, index, node) => {
const { value } = change
change.applyOperation({
@ -102,8 +96,6 @@ Changes.insertNodeByPath = (change, path, index, node, options) => {
path: path.concat(index),
node,
})
change.normalizeNodeByPath(path, options)
}
/**
@ -114,10 +106,9 @@ Changes.insertNodeByPath = (change, path, index, node, options) => {
* @param {Number} offset
* @param {String} text
* @param {Set<Mark>} marks (optional)
* @param {Object} options
*/
Changes.insertTextByPath = (change, path, offset, text, marks, options) => {
Changes.insertTextByPath = (change, path, offset, text, marks) => {
const { value } = change
const { document } = value
const node = document.assertNode(path)
@ -131,8 +122,6 @@ Changes.insertTextByPath = (change, path, offset, text, marks, options) => {
text,
marks,
})
change.normalizeParentByPath(path, options)
}
/**
@ -140,10 +129,9 @@ Changes.insertTextByPath = (change, path, offset, text, marks, options) => {
*
* @param {Change} change
* @param {Array} path
* @param {Object} options
*/
Changes.mergeNodeByPath = (change, path, options) => {
Changes.mergeNodeByPath = (change, path) => {
const { value } = change
const { document } = value
const original = document.getDescendant(path)
@ -171,8 +159,6 @@ Changes.mergeNodeByPath = (change, path, options) => {
},
target: null,
})
change.normalizeParentByPath(path, options)
}
/**
@ -182,10 +168,9 @@ Changes.mergeNodeByPath = (change, path, options) => {
* @param {Array} path
* @param {String} newPath
* @param {Number} index
* @param {Object} options
*/
Changes.moveNodeByPath = (change, path, newPath, newIndex, options) => {
Changes.moveNodeByPath = (change, path, newPath, newIndex) => {
const { value } = change
change.applyOperation({
@ -194,9 +179,6 @@ Changes.moveNodeByPath = (change, path, newPath, newIndex, options) => {
path,
newPath: newPath.concat(newIndex),
})
const ancestorPath = PathUtils.relate(path, newPath)
change.normalizeNodeByPath(ancestorPath, options)
}
/**
@ -207,10 +189,9 @@ Changes.moveNodeByPath = (change, path, newPath, newIndex, options) => {
* @param {Number} offset
* @param {Number} length
* @param {Mark} mark
* @param {Object} options
*/
Changes.removeMarkByPath = (change, path, offset, length, mark, options) => {
Changes.removeMarkByPath = (change, path, offset, length, mark) => {
mark = Mark.create(mark)
const { value } = change
const { document } = value
@ -249,7 +230,6 @@ Changes.removeMarkByPath = (change, path, offset, length, mark, options) => {
})
change.applyOperations(operations)
change.normalizeParentByPath(path, options)
}
/**
@ -257,10 +237,9 @@ Changes.removeMarkByPath = (change, path, offset, length, mark, options) => {
*
* @param {Change} change
* @param {Array} path
* @param {Object} options
*/
Changes.removeAllMarksByPath = (change, path, options) => {
Changes.removeAllMarksByPath = (change, path) => {
const { state } = change
const { document } = state
const node = document.assertNode(path)
@ -268,7 +247,7 @@ Changes.removeAllMarksByPath = (change, path, options) => {
texts.forEach(text => {
text.getMarksAsArray().forEach(mark => {
change.removeMarkByKey(text.key, 0, text.text.length, mark, options)
change.removeMarkByKey(text.key, 0, text.text.length, mark)
})
})
}
@ -278,10 +257,9 @@ Changes.removeAllMarksByPath = (change, path, options) => {
*
* @param {Change} change
* @param {Array} path
* @param {Object} options
*/
Changes.removeNodeByPath = (change, path, options) => {
Changes.removeNodeByPath = (change, path) => {
const { value } = change
const { document } = value
const node = document.assertNode(path)
@ -292,8 +270,6 @@ Changes.removeNodeByPath = (change, path, options) => {
path,
node,
})
change.normalizeParentByPath(path, options)
}
/**
@ -303,10 +279,9 @@ Changes.removeNodeByPath = (change, path, options) => {
* @param {Array} path
* @param {Number} offset
* @param {Number} length
* @param {Object} options
*/
Changes.removeTextByPath = (change, path, offset, length, options) => {
Changes.removeTextByPath = (change, path, offset, length) => {
const { value } = change
const { document } = value
const node = document.assertNode(path)
@ -344,9 +319,6 @@ Changes.removeTextByPath = (change, path, offset, length, options) => {
// Apply in reverse order, so subsequent removals don't impact previous ones.
change.applyOperations(removals.reverse())
const block = document.getClosestBlock(node.key)
change.normalizeNodeByKey(block.key, options)
}
/**
@ -355,16 +327,17 @@ Changes.removeTextByPath = (change, path, offset, length, options) => {
* @param {Change} change
* @param {Array} path
* @param {Object|Node} node
* @param {Object} options
*/
Changes.replaceNodeByPath = (change, path, newNode, options) => {
Changes.replaceNodeByPath = (change, path, newNode) => {
newNode = Node.create(newNode)
const index = path.last()
const parentPath = PathUtils.lift(path)
change.removeNodeByPath(path, { normalize: false })
change.insertNodeByPath(parentPath, index, newNode, { normalize: false })
change.normalizeParentByPath(path, options)
change.withoutNormalizing(() => {
change.removeNodeByPath(path)
change.insertNodeByPath(parentPath, index, newNode)
})
}
/**
@ -375,18 +348,9 @@ Changes.replaceNodeByPath = (change, path, newNode, options) => {
* @param {Number} length
* @param {string} text
* @param {Set<Mark>} marks (optional)
* @param {Object} options
*/
Changes.replaceTextByPath = (
change,
path,
offset,
length,
text,
marks,
options
) => {
Changes.replaceTextByPath = (change, path, offset, length, text, marks) => {
const { document } = change.value
const node = document.assertNode(path)
@ -401,22 +365,24 @@ Changes.replaceTextByPath = (
let activeMarks = document.getActiveMarksAtRange(range)
change.removeTextByPath(path, offset, length, { normalize: false })
change.withoutNormalizing(() => {
change.removeTextByPath(path, offset, length)
if (!marks) {
// Do not use mark at index when marks and activeMarks are both empty
marks = activeMarks ? activeMarks : []
} else if (activeMarks) {
// Do not use `has` because we may want to reset marks like font-size with
// an updated data;
activeMarks = activeMarks.filter(
activeMark => !marks.find(m => activeMark.type === m.type)
)
if (!marks) {
// Do not use mark at index when marks and activeMarks are both empty
marks = activeMarks ? activeMarks : []
} else if (activeMarks) {
// Do not use `has` because we may want to reset marks like font-size with
// an updated data;
activeMarks = activeMarks.filter(
activeMark => !marks.find(m => activeMark.type === m.type)
)
marks = activeMarks.merge(marks)
}
marks = activeMarks.merge(marks)
}
change.insertTextByPath(path, offset, text, marks, options)
change.insertTextByPath(path, offset, text, marks)
})
}
/**
@ -427,18 +393,9 @@ Changes.replaceTextByPath = (
* @param {Number} offset
* @param {Number} length
* @param {Mark} mark
* @param {Object} options
*/
Changes.setMarkByPath = (
change,
path,
offset,
length,
mark,
properties,
options
) => {
Changes.setMarkByPath = (change, path, offset, length, mark, properties) => {
mark = Mark.create(mark)
properties = Mark.createProperties(properties)
const { value } = change
@ -452,8 +409,6 @@ Changes.setMarkByPath = (
mark,
properties,
})
change.normalizeParentByPath(path, options)
}
/**
@ -462,10 +417,9 @@ Changes.setMarkByPath = (
* @param {Change} change
* @param {Array} path
* @param {Object|String} properties
* @param {Object} options
*/
Changes.setNodeByPath = (change, path, properties, options) => {
Changes.setNodeByPath = (change, path, properties) => {
properties = Node.createProperties(properties)
const { value } = change
const { document } = value
@ -478,8 +432,6 @@ Changes.setNodeByPath = (change, path, properties, options) => {
node,
properties,
})
change.normalizeNodeByPath(path, options)
}
/**
@ -489,15 +441,14 @@ Changes.setNodeByPath = (change, path, properties, options) => {
* @param {Array} path
* @param {String} text
* @param {Set<Mark>} marks (optional)
* @param {Object} options
*/
Changes.setTextByPath = (change, path, text, marks, options) => {
Changes.setTextByPath = (change, path, text, marks) => {
const { value } = change
const { document } = value
const node = document.assertNode(path)
const end = node.text.length
change.replaceTextByPath(path, 0, end, text, marks, options)
change.replaceTextByPath(path, 0, end, text, marks)
}
/**
@ -520,14 +471,12 @@ Changes.splitNodeByPath = (change, path, position, options = {}) => {
value,
path,
position,
target,
properties: {
type: node.type,
data: node.data,
},
target,
})
change.normalizeParentByPath(path, options)
}
/**
@ -537,18 +486,11 @@ Changes.splitNodeByPath = (change, path, position, options = {}) => {
* @param {Array} path
* @param {Array} textPath
* @param {Number} textOffset
* @param {Object} options
*/
Changes.splitDescendantsByPath = (
change,
path,
textPath,
textOffset,
options
) => {
Changes.splitDescendantsByPath = (change, path, textPath, textOffset) => {
if (path.equals(textPath)) {
change.splitNodeByPath(textPath, textOffset, options)
change.splitNodeByPath(textPath, textOffset)
return
}
@ -565,18 +507,14 @@ Changes.splitDescendantsByPath = (
let previous
let index
nodes.forEach(n => {
const prevIndex = index == null ? null : index
index = previous ? n.nodes.indexOf(previous) + 1 : textOffset
previous = n
change.splitNodeByKey(n.key, index, {
normalize: false,
target: prevIndex,
change.withoutNormalizing(() => {
nodes.forEach(n => {
const prevIndex = index == null ? null : index
index = previous ? n.nodes.indexOf(previous) + 1 : textOffset
previous = n
change.splitNodeByKey(n.key, index, { target: prevIndex })
})
})
change.normalizeParentByPath(path, options)
}
/**
@ -585,17 +523,16 @@ Changes.splitDescendantsByPath = (
* @param {Change} change
* @param {Array} path
* @param {Object|String} properties
* @param {Object} options
*/
Changes.unwrapInlineByPath = (change, path, properties, options) => {
Changes.unwrapInlineByPath = (change, path, properties) => {
const { value } = change
const { document, selection } = value
const node = document.assertNode(path)
const first = node.getFirstText()
const last = node.getLastText()
const range = selection.moveToRangeOfNode(first, last)
change.unwrapInlineAtRange(range, properties, options)
change.unwrapInlineAtRange(range, properties)
}
/**
@ -604,17 +541,16 @@ Changes.unwrapInlineByPath = (change, path, properties, options) => {
* @param {Change} change
* @param {Array} path
* @param {Object|String} properties
* @param {Object} options
*/
Changes.unwrapBlockByPath = (change, path, properties, options) => {
Changes.unwrapBlockByPath = (change, path, properties) => {
const { value } = change
const { document, selection } = value
const node = document.assertNode(path)
const first = node.getFirstText()
const last = node.getLastText()
const range = selection.moveToRangeOfNode(first, last)
change.unwrapBlockAtRange(range, properties, options)
change.unwrapBlockAtRange(range, properties)
}
/**
@ -626,10 +562,9 @@ Changes.unwrapBlockByPath = (change, path, properties, options) => {
*
* @param {Change} change
* @param {Array} path
* @param {Object} options
*/
Changes.unwrapNodeByPath = (change, path, options) => {
Changes.unwrapNodeByPath = (change, path) => {
const { value } = change
const { document } = value
document.assertNode(path)
@ -642,28 +577,21 @@ Changes.unwrapNodeByPath = (change, path, options) => {
const isFirst = index === 0
const isLast = index === parent.nodes.size - 1
if (parent.nodes.size === 1) {
change.moveNodeByPath(path, grandPath, parentIndex + 1, {
normalize: false,
})
change.removeNodeByPath(parentPath, options)
} else if (isFirst) {
change.moveNodeByPath(path, grandPath, parentIndex, options)
} else if (isLast) {
change.moveNodeByPath(path, grandPath, parentIndex + 1, options)
} else {
change.splitNodeByPath(parentPath, index, { normalize: false })
let updatedPath = PathUtils.increment(path, 1, parentPath.size - 1)
updatedPath = updatedPath.set(updatedPath.size - 1, 0)
change.moveNodeByPath(updatedPath, grandPath, parentIndex + 1, {
normalize: false,
})
change.normalizeNodeByPath(grandPath, options)
}
change.withoutNormalizing(() => {
if (parent.nodes.size === 1) {
change.moveNodeByPath(path, grandPath, parentIndex + 1)
change.removeNodeByPath(parentPath)
} else if (isFirst) {
change.moveNodeByPath(path, grandPath, parentIndex)
} else if (isLast) {
change.moveNodeByPath(path, grandPath, parentIndex + 1)
} else {
let updatedPath = PathUtils.increment(path, 1, parentPath.size - 1)
updatedPath = updatedPath.set(updatedPath.size - 1, 0)
change.splitNodeByPath(parentPath, index)
change.moveNodeByPath(updatedPath, grandPath, parentIndex + 1)
}
})
}
/**
@ -672,17 +600,19 @@ Changes.unwrapNodeByPath = (change, path, options) => {
* @param {Change} change
* @param {Array} path
* @param {Block|Object|String} block
* @param {Object} options
*/
Changes.wrapBlockByPath = (change, path, block, options) => {
Changes.wrapBlockByPath = (change, path, block) => {
block = Block.create(block)
block = block.set('nodes', block.nodes.clear())
const parentPath = PathUtils.lift(path)
const index = path.last()
const newPath = PathUtils.increment(path)
change.insertNodeByPath(parentPath, index, block, { normalize: false })
change.moveNodeByPath(newPath, path, 0, options)
change.withoutNormalizing(() => {
change.insertNodeByPath(parentPath, index, block)
change.moveNodeByPath(newPath, path, 0)
})
}
/**
@ -691,17 +621,19 @@ Changes.wrapBlockByPath = (change, path, block, options) => {
* @param {Change} change
* @param {Array} path
* @param {Block|Object|String} inline
* @param {Object} options
*/
Changes.wrapInlineByPath = (change, path, inline, options) => {
Changes.wrapInlineByPath = (change, path, inline) => {
inline = Inline.create(inline)
inline = inline.set('nodes', inline.nodes.clear())
const parentPath = PathUtils.lift(path)
const index = path.last()
const newPath = PathUtils.increment(path)
change.insertNodeByPath(parentPath, index, inline, { normalize: false })
change.moveNodeByPath(newPath, path, 0, options)
change.withoutNormalizing(() => {
change.insertNodeByPath(parentPath, index, inline)
change.moveNodeByPath(newPath, path, 0)
})
}
/**
@ -710,20 +642,15 @@ Changes.wrapInlineByPath = (change, path, inline, options) => {
* @param {Change} change
* @param {Array} path
* @param {Node|Object} node
* @param {Object} options
*/
Changes.wrapNodeByPath = (change, path, node) => {
node = Node.create(node)
if (node.object == 'block') {
if (node.object === 'block') {
change.wrapBlockByPath(path, node)
return
}
if (node.object == 'inline') {
} else if (node.object === 'inline') {
change.wrapInlineByPath(path, node)
return
}
}

View File

@ -4,7 +4,6 @@ import ByPath from './by-path'
import OnHistory from './on-history'
import OnSelection from './on-selection'
import OnValue from './on-value'
import WithSchema from './with-schema'
/**
* Export.
@ -19,5 +18,4 @@ export default {
...OnHistory,
...OnSelection,
...OnValue,
...WithSchema,
}

View File

@ -1,4 +1,3 @@
import invert from '../operations/invert'
import omit from 'lodash/omit'
/**
@ -38,7 +37,9 @@ Changes.redo = change => {
op = op.set('properties', omit(properties, 'isFocused'))
}
change.applyOperation(op, { save: false })
change.withoutSaving(() => {
change.applyOperation(op)
})
})
// Update the history.
@ -71,7 +72,7 @@ Changes.undo = change => {
previous
.slice()
.reverse()
.map(invert)
.map(op => op.invert())
.forEach(inverse => {
const { type, properties } = inverse
@ -81,7 +82,9 @@ Changes.undo = change => {
inverse = inverse.set('properties', omit(properties, 'isFocused'))
}
change.applyOperation(inverse, { save: false })
change.withoutSaving(() => {
change.applyOperation(inverse)
})
})
// Update the history.

View File

@ -13,21 +13,17 @@ const Changes = {}
*
* @param {Change} change
* @param {Object|Value} properties
* @param {Object} options
*/
Changes.setValue = (change, properties, options = {}) => {
Changes.setValue = (change, properties) => {
properties = Value.createProperties(properties)
const { value } = change
change.applyOperation(
{
type: 'set_value',
properties,
value,
},
options
)
change.applyOperation({
type: 'set_value',
properties,
value,
})
}
/**

View File

@ -1,205 +0,0 @@
import PathUtils from '../utils/path-utils'
/**
* Changes.
*
* @type {Object}
*/
const Changes = {}
/**
* Normalize the value with its schema.
*
* @param {Change} change
*/
Changes.normalize = (change, options) => {
change.normalizeDocument(options)
}
/**
* Normalize the document with the value's schema.
*
* @param {Change} change
*/
Changes.normalizeDocument = (change, options) => {
const { value } = change
const { document } = value
change.normalizeNodeByKey(document.key, options)
}
/**
* Normalize a `node` and its children with the value's schema.
*
* @param {Change} change
* @param {Node|String} key
*/
Changes.normalizeNodeByKey = (change, key, options = {}) => {
const normalize = change.getFlag('normalize', options)
if (!normalize) return
const { value } = change
const { document, schema } = value
const node = document.assertNode(key)
normalizeNodeAndChildren(change, node, schema)
change.normalizeAncestorsByKey(key)
}
/**
* Normalize a node's ancestors by `key`.
*
* @param {Change} change
* @param {String} key
*/
Changes.normalizeAncestorsByKey = (change, key) => {
const { value } = change
const { document, schema } = value
const ancestors = document.getAncestors(key)
if (!ancestors) return
ancestors.forEach(ancestor => {
if (change.value.document.getDescendant(ancestor.key)) {
normalizeNode(change, ancestor, schema)
}
})
}
Changes.normalizeParentByKey = (change, key, options) => {
const { value } = change
const { document } = value
const parent = document.getParent(key)
change.normalizeNodeByKey(parent.key, options)
}
/**
* Normalize a `node` and its children with the value's schema.
*
* @param {Change} change
* @param {Array} path
*/
Changes.normalizeNodeByPath = (change, path, options = {}) => {
const normalize = change.getFlag('normalize', options)
if (!normalize) return
const { value } = change
let { document, schema } = value
const node = document.assertNode(path)
normalizeNodeAndChildren(change, node, schema)
document = change.value.document
const ancestors = document.getAncestors(path)
if (!ancestors) return
ancestors.forEach(ancestor => {
if (change.value.document.getDescendant(ancestor.key)) {
normalizeNode(change, ancestor, schema)
}
})
}
Changes.normalizeParentByPath = (change, path, options) => {
const p = PathUtils.lift(path)
change.normalizeNodeByPath(p, options)
}
/**
* Normalize a `node` and its children with a `schema`.
*
* @param {Change} change
* @param {Node} node
* @param {Schema} schema
*/
function normalizeNodeAndChildren(change, node, schema) {
if (node.object == 'text') {
normalizeNode(change, node, schema)
return
}
let child = node.getFirstInvalidNode(schema)
let path = change.value.document.getPath(node.key)
while (node && child) {
normalizeNodeAndChildren(change, child, schema)
node = change.value.document.refindNode(path, node.key)
if (!node) {
path = []
child = null
} else {
path = change.value.document.refindPath(path, node.key)
child = node.getFirstInvalidNode(schema)
}
}
// Normalize the node itself if it still exists.
if (node) {
normalizeNode(change, node, schema)
}
}
/**
* Normalize a `node` with a `schema`, but not its children.
*
* @param {Change} change
* @param {Node} node
* @param {Schema} schema
*/
function normalizeNode(change, node, schema) {
const max =
schema.stack.plugins.length +
schema.rules.length +
(node.object === 'text' ? 1 : node.nodes.size)
let iterations = 0
function iterate(c, n) {
const normalize = n.normalize(schema)
if (!normalize) return
// Run the `normalize` function to fix the node.
let path = c.value.document.getPath(n.key)
normalize(c)
// Re-find the node reference, in case it was updated. If the node no longer
// exists, we're done for this branch.
n = c.value.document.refindNode(path, n.key)
if (!n) return
path = c.value.document.refindPath(path, n.key)
// Increment the iterations counter, and check to make sure that we haven't
// exceeded the max. Without this check, it's easy for the `normalize`
// function of a schema rule to be written incorrectly and for an infinite
// invalid loop to occur.
iterations++
if (iterations > max) {
throw new Error(
'A schema rule could not be normalized after sufficient iterations. This is usually due to a `rule.normalize` or `plugin.normalizeNode` function of a schema being incorrectly written, causing an infinite loop.'
)
}
// Otherwise, iterate again.
iterate(c, n)
}
iterate(change, node)
}
/**
* Export.
*
* @type {Object}
*/
export default Changes

View File

@ -404,6 +404,8 @@ class ElementInterface {
getClosestVoid(path, schema) {
const ancestors = this.getAncestors(path)
if (!ancestors) return null
const ancestor = ancestors.findLast(a => schema.isVoid(a))
return ancestor
}
@ -1542,36 +1544,6 @@ class ElementInterface {
return ret
}
/**
* Attempt to "refind" a node by a previous `path`, falling back to looking
* it up by `key` again.
*
* @param {List|String} path
* @param {String} key
* @return {Node|Null}
*/
refindNode(path, key) {
const node = this.getDescendant(path)
const found = node && node.key === key ? node : this.getDescendant(key)
return found
}
/**
* Attempt to "refind" the path to a node by a previous `path`, falling back
* to looking it up by `key`.
*
* @param {List|String} path
* @param {String} key
* @return {List|Null}
*/
refindPath(path, key) {
const node = this.getDescendant(path)
const found = node && node.key === key ? path : this.getPath(key)
return found
}
/**
* Remove mark from text at `offset` and `length` in node.
*

View File

@ -1,12 +1,12 @@
import Debug from 'debug'
import isPlainObject from 'is-plain-object'
import pick from 'lodash/pick'
import { List } from 'immutable'
import warning from 'slate-dev-warning'
import { List, Map } from 'immutable'
import MODEL_TYPES, { isType } from '../constants/model-types'
import Changes from '../changes'
import Operation from './operation'
import apply from '../operations/apply'
import PathUtils from '../utils/path-utils'
/**
* Debug.
@ -44,9 +44,11 @@ class Change {
this.value = value
this.operations = new List()
this.flags = {
this.tmp = {
dirty: [],
merge: null,
normalize: true,
...pick(attrs, ['merge', 'save', 'normalize']),
save: true,
}
}
@ -70,9 +72,10 @@ class Change {
*/
applyOperation(operation, options = {}) {
const { operations, flags } = this
const { operations } = this
let { value } = this
let { history } = value
const oldValue = value
// Add in the current `value` in case the operation was serialized.
if (isPlainObject(operation)) {
@ -83,25 +86,28 @@ class Change {
// Default options to the change-level flags, this allows for setting
// specific options for all of the operations of a given change.
options = { ...flags, ...options }
let { merge, save } = this.tmp
// Derive the default option values.
const {
merge = operations.size == 0 ? null : true,
save = true,
skip = null,
} = options
// If `merge` is non-committal, and this is not the first operation in a new change
// then we should merge.
if (merge == null && operations.size !== 0) {
merge = true
}
// Apply the operation to the value.
debug('apply', { operation, save, merge })
value = apply(value, operation)
value = operation.apply(value)
// If needed, save the operation to the history.
if (history && save) {
history = history.save(operation, { merge, skip })
history = history.save(operation, { merge })
value = value.set('history', history)
}
// Get the keys of the affected nodes, and mark them as dirty.
const keys = getDirtyKeys(operation, value, oldValue)
this.tmp.dirty = this.tmp.dirty.concat(keys)
// Update the mutable change object.
this.value = value
this.operations = operations.push(operation)
@ -131,22 +137,207 @@ class Change {
call(fn, ...args) {
fn(this, ...args)
this.normalizeDirtyOperations()
return this
}
/**
* Applies a series of change mutations, deferring normalization to the end.
* Normalize all of the nodes in the document from scratch.
*
* @return {Change}
*/
normalize() {
const { value } = this
const { document } = value
const keys = Object.keys(document.getKeysToPathsTable())
this.normalizeKeys(keys)
return this
}
/**
* Normalize any new "dirty" operations that have been added to the change.
*
* @return {Change}
*/
normalizeDirtyOperations() {
const { normalize, dirty } = this.tmp
if (!normalize) return this
if (!dirty.length) return this
this.tmp.dirty = []
this.normalizeKeys(dirty)
return this
}
/**
* Normalize a set of nodes by their `keys`.
*
* @param {Array} keys
* @return {Change}
*/
normalizeKeys(keys) {
const { value } = this
const { document } = value
// TODO: if we had an `Operations.transform` method, we could optimize this
// to not use keys, and instead use transformed operation paths.
const table = document.getKeysToPathsTable()
let map = Map()
// TODO: this could be optimized to not need the nested map, and instead use
// clever sorting to arrive at the proper depth-first normalizing.
keys.forEach(key => {
const path = table[key]
if (!path) return
if (!path.length) return
if (!map.hasIn(path)) map = map.setIn(path, Map())
})
// To avoid infinite loops, we need to defer normalization until the end.
this.withoutNormalizing(() => {
this.normalizeMapAndPath(map)
})
return this
}
/**
* Normalize all of the nodes in a normalization `map`, depth-first. An
* additional `path` argument specifies the current depth/location.
*
* @param {Map} map
* @param {Array} path (optional)
* @return {Change}
*/
normalizeMapAndPath(map, path = []) {
map.forEach((m, k) => {
const p = [...path, k]
this.normalizeMapAndPath(m, p)
})
this.normalizePath(path)
return this
}
/**
* Normalize the node at a specific `path`, iterating as many times as
* necessary until it satisfies all of the schema rules.
*
* @param {Array} path
* @return {Change}
*/
normalizePath(path) {
const { value } = this
let { document, schema } = value
let node = document.assertNode(path)
let iterations = 0
const max =
schema.stack.plugins.length +
schema.rules.length +
(node.object === 'text' ? 1 : node.nodes.size)
const iterate = () => {
const fn = node.normalize(schema)
if (!fn) return
// Run the normalize `fn` to fix the node.
fn(this)
// Attempt to re-find the node by path, or by key if it has changed
// locations in the tree, and continue iterating.
document = this.value.document
const { key } = node
let found = document.getDescendant(path)
if (found && found.key === key) {
node = found
} else {
found = document.getDescendant(key)
if (found) {
node = found
path = document.getPath(key)
} else {
// If it no longer exists by key, it was removed, so abort.
return
}
}
// Increment the iterations counter, and check to make sure that we haven't
// exceeded the max. Without this check, it's easy for the `normalize`
// function of a schema rule to be written incorrectly and for an infinite
// invalid loop to occur.
iterations++
if (iterations > max) {
throw new Error(
'A schema rule could not be normalized after sufficient iterations. This is usually due to a `rule.normalize` or `plugin.normalizeNode` function of a schema being incorrectly written, causing an infinite loop.'
)
}
// Otherwise, iterate again.
iterate()
}
iterate()
return this
}
/**
* Apply a series of changes inside a synchronous `fn`, deferring
* normalization until after the function has finished executing.
*
* @param {Function} fn
* @return {Change}
*/
withoutNormalization(fn) {
const original = this.flags.normalize
this.setOperationFlag('normalize', false)
withoutNormalizing(fn) {
const value = this.tmp.normalize
this.tmp.normalize = false
fn(this)
this.setOperationFlag('normalize', original)
this.normalizeDocument()
this.tmp.normalize = value
if (this.tmp.normalize) {
this.normalizeDirtyOperations()
}
return this
}
/**
* Apply a series of changes inside a synchronous `fn`, without merging any of
* the new operations into previous save point in the history.
*
* @param {Function} fn
* @return {Change}
*/
withoutMerging(fn) {
const value = this.tmp.merge
this.tmp.merge = false
fn(this)
this.tmp.merge = value
return this
}
/**
* Apply a series of changes inside a synchronous `fn`, without saving any of
* their operations into the history.
*
* @param {Function} fn
* @return {Change}
*/
withoutSaving(fn) {
const value = this.tmp.save
this.tmp.save = false
fn(this)
this.tmp.save = value
return this
}
@ -158,35 +349,108 @@ class Change {
* @return {Change}
*/
/**
* Deprecated.
*/
setOperationFlag(key, value) {
this.flags[key] = value
warning(
false,
'As of slate@0.41.0 the `change.setOperationFlag` method has been deprecated.'
)
this.tmp[key] = value
return this
}
/**
* Get the `value` of the specified flag by its `key`. Optionally accepts an `options`
* object with override flags.
*
* @param {String} key
* @param {Object} options
* @return {Change}
*/
getFlag(key, options = {}) {
return options[key] !== undefined ? options[key] : this.flags[key]
warning(
false,
'As of slate@0.41.0 the `change.getFlag` method has been deprecated.'
)
return options[key] !== undefined ? options[key] : this.tmp[key]
}
/**
* Unset an operation flag by `key`.
*
* @param {String} key
* @return {Change}
*/
unsetOperationFlag(key) {
delete this.flags[key]
warning(
false,
'As of slate@0.41.0 the `change.unsetOperationFlag` method has been deprecated.'
)
delete this.tmp[key]
return this
}
withoutNormalization(fn) {
warning(
false,
'As of slate@0.41.0 the `change.withoutNormalization` helper has been renamed to `change.withoutNormalizing`.'
)
return this.withoutNormalizing(fn)
}
}
/**
* Get the "dirty" nodes's keys for a given `operation` and values.
*
* @param {Operation} operation
* @param {Value} newValue
* @param {Value} oldValue
* @return {Array}
*/
function getDirtyKeys(operation, newValue, oldValue) {
const { type, node, path, newPath } = operation
const newDocument = newValue.document
const oldDocument = oldValue.document
switch (type) {
case 'insert_node': {
const table = node.getKeysToPathsTable()
const parent = newDocument.assertParent(path)
const keys = [parent.key, ...Object.keys(table)]
return keys
}
case 'split_node': {
const nextPath = PathUtils.increment(path)
const parent = newDocument.assertParent(path)
const target = newDocument.assertNode(path)
const split = newDocument.assertNode(nextPath)
const keys = [parent.key, target.key, split.key]
return keys
}
case 'merge_node': {
const previousPath = PathUtils.decrement(path)
const parent = newDocument.assertParent(path)
const merged = newDocument.assertNode(previousPath)
const keys = [parent.key, merged.key]
return keys
}
case 'move_node': {
const parentPath = PathUtils.lift(path)
const newParentPath = PathUtils.lift(newPath)
const oldParent = oldDocument.assertNode(parentPath)
const newParent = oldDocument.assertNode(newParentPath)
const keys = [oldParent.key, newParent.key]
return keys
}
case 'remove_node': {
const parentPath = PathUtils.lift(path)
const parent = newDocument.assertNode(parentPath)
const keys = [parent.key]
return keys
}
default: {
return []
}
}
}
/**

View File

@ -121,13 +121,14 @@ class History extends Record(DEFAULTS) {
let history = this
let { undos, redos } = history
let { merge, skip } = options
const prevBatch = undos.peek()
const prevOperation = prevBatch && prevBatch.last()
if (skip) {
return history
}
const prevBatch = undos.peek()
const prevOperation = prevBatch && prevBatch.last()
if (merge == null) {
merge = shouldMerge(operation, prevOperation)
}

View File

@ -7,6 +7,8 @@ import Node from './node'
import PathUtils from '../utils/path-utils'
import Selection from './selection'
import Value from './value'
import apply from '../operations/apply'
import invert from '../operations/invert'
/**
* Operation attributes.
@ -224,6 +226,29 @@ class Operation extends Record(DEFAULTS) {
return 'operation'
}
/**
* Apply the operation to a `value`.
*
* @param {Value} value
* @return {Value}
*/
apply(value) {
const next = apply(value, this)
return next
}
/**
* Invert the operation.
*
* @return {Operation}
*/
invert() {
const inverted = invert(this)
return inverted
}
/**
* Return a JSON representation of the operation.
*

View File

@ -125,7 +125,9 @@ class Value extends Record(DEFAULTS) {
})
if (options.normalize !== false) {
value = value.change({ save: false }).normalize().value
const change = value.change()
change.withoutSaving(() => change.normalize())
value = change.value
}
return value

View File

@ -9,7 +9,7 @@ import { List } from 'immutable'
*/
function compare(path, target) {
// PERF: if the paths are the same we can exit early.
// PERF: if the paths are not the same size we can exit early.
if (path.size !== target.size) return null
for (let i = 0; i < path.size; i++) {
@ -131,6 +131,33 @@ function isBefore(path, target) {
return compare(p, t) === -1
}
/**
* Is a `path` equal to another `target` path in a document?
*
* @param {List} path
* @param {List} target
* @return {Boolean}
*/
function isEqual(path, target) {
return path.equals(target)
}
/**
* Is a `path` a sibling of a `target` path?
*
* @param {List} path
* @param {List} target
* @return {Boolean}
*/
function isSibling(path, target) {
if (path.size !== target.size) return false
const p = path.butLast()
const t = target.butLast()
return p.equals(t)
}
/**
* Lift a `path` to refer to its parent.
*
@ -210,6 +237,8 @@ export default {
isAbove,
isAfter,
isBefore,
isEqual,
isSibling,
lift,
max,
min,

View File

@ -11,9 +11,8 @@ export const input = (
<value>
<document>
<paragraph key="a">
<cursor />one
<cursor />word
</paragraph>
<paragraph>two</paragraph>
</document>
</value>
)
@ -23,9 +22,8 @@ export const output = (
<document>
<paragraph>
<emoji />
<cursor />one
<cursor />word
</paragraph>
<paragraph>two</paragraph>
</document>
</value>
)

View File

@ -3,11 +3,9 @@
import h from '../../helpers/h'
export default function(value) {
return value
.change()
.moveNodeByKey('h', 'a', 0)
.value.change()
.undo().value
const next = value.change().moveNodeByKey('h', 'a', 0).value
const undo = next.change().undo().value
return undo
}
export const input = (

View File

@ -36,19 +36,6 @@ describe('slate', () => {
assert.deepEqual(actual, expected)
})
fixtures(__dirname, 'models/change', ({ module }) => {
const { input, output, schema, flags, customChange } = module
const s = Schema.create(schema)
const expected = output.toJSON()
const actual = input
.change(flags)
.setValue({ schema: s })
.withoutNormalization(customChange)
.value.toJSON()
assert.deepEqual(actual, expected)
})
fixtures(__dirname, 'serializers/raw/deserialize', ({ module }) => {
const { input, output, options } = module
const actual = Value.fromJSON(input, options).toJSON()

View File

@ -1,51 +0,0 @@
/** @jsx h */
import h from '../../helpers/h'
export const flags = { normalize: false }
export const schema = {
blocks: {
paragraph: {},
item: {
parent: { type: 'list' },
nodes: [
{
match: [{ object: 'text' }],
},
],
},
list: {},
},
}
export const customChange = change => {
// this change function and schema are designed such that if
// validation takes place before both wrapBlock calls complete
// the node gets deleted by the default schema
// and causes a test failure
let target = change.value.document.nodes.get(0)
change.wrapBlockByKey(target.key, 'item')
target = change.value.document.nodes.get(0)
change.wrapBlockByKey(target.key, 'list')
}
export const input = (
<value>
<document>
<paragraph />
</document>
</value>
)
export const output = (
<value>
<document>
<list>
<item>
<paragraph />
</item>
</list>
</document>
</value>
)

View File

@ -1,49 +0,0 @@
/** @jsx h */
import h from '../../helpers/h'
export const flags = { normalize: true }
export const schema = {
blocks: {
paragraph: {},
item: {
parent: { type: 'list' },
nodes: [
{
match: [{ object: 'text' }],
},
],
},
list: {},
},
}
export const customChange = change => {
// this change function and schema are designed such that if
// validation takes place before both wrapBlock calls complete
// the node gets deleted by the default schema
// and causes a test failure
let target = change.value.document.nodes.get(0)
change.wrapBlockByKey(target.key, 'item')
target = change.value.document.nodes.get(0)
change.wrapBlockByKey(target.key, 'list')
}
export const input = (
<value>
<document>
<paragraph />
</document>
</value>
)
export const output = (
<value>
<document>
<list>
<item />
</list>
</document>
</value>
)

View File

@ -1,41 +0,0 @@
/** @jsx h */
import h from '../../helpers/h'
export const flags = {}
export const schema = {
blocks: {
paragraph: {},
item: {
parent: { type: 'list' },
nodes: [
{
match: [{ object: 'text' }],
},
],
},
list: {},
},
}
export const customChange = change => {
// see if we can break the expected validation sequence by toggling
// the normalization option
const target = change.value.document.nodes.get(0)
change.wrapBlockByKey(target.key, 'item', { normalize: true })
}
export const input = (
<value>
<document>
<paragraph />
</document>
</value>
)
export const output = (
<value>
<document />
</value>
)

View File

@ -1,49 +0,0 @@
/** @jsx h */
import h from '../../helpers/h'
export const flags = {}
export const schema = {
blocks: {
paragraph: {},
item: {
parent: { type: 'list' },
nodes: [
{
match: [{ object: 'text' }],
},
],
},
list: {},
},
}
export const customChange = change => {
// this change function and schema are designed such that if
// validation takes place before both wrapBlock calls complete
// the node gets deleted by the default schema
// and causes a test failure
let target = change.value.document.nodes.get(0)
change.wrapBlockByKey(target.key, 'item')
target = change.value.document.nodes.get(0)
change.wrapBlockByKey(target.key, 'list')
}
export const input = (
<value>
<document>
<paragraph />
</document>
</value>
)
export const output = (
<value>
<document>
<list>
<item />
</list>
</document>
</value>
)

View File

@ -18,9 +18,7 @@ export const schema = {
const offset = previous.nodes.size
child.nodes.forEach((n, i) =>
change.moveNodeByKey(n.key, previous.key, offset + i, {
normalize: false,
})
change.moveNodeByKey(n.key, previous.key, offset + i)
)
change.removeNodeByKey(child.key)