Benchmarks

This folder contains a set of benchmarks used to compare performance between versions of Slate. We use BenchmarkJS to measure performance.

Running the benchmarks

The following command compiles Slate before running the benchmarks:

npm run perf

You can skip Slate's compilation by running the benchmarks directly with:

npm run benchmarks

Comparing results

You can save the results of the benchmarks with:

npm run perf:save

The results are saved as JSON in ./perf/reference.json. You can then check out a different implementation and run a comparison benchmark with the usual command:

npm run perf

Both perf and benchmarks automatically look for an existing reference.json to use for comparison.
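
For example, a full comparison session could look like the following (the branch name is purely illustrative):

npm run perf:save          # saves the current results to ./perf/reference.json
git checkout other-branch  # switch to the implementation you want to compare against
npm run perf               # runs the suite again and compares it to reference.json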

Understanding the results

Each benchmark prints its results, showing:

  • The number of operations per second. This is the relevant value, and the one to compare across implementations.
  • The number of samples run. BenchmarkJS uses a heuristic to decide how many samples to take. Results are more accurate with a high number of samples; a low sample count often goes together with a high relative margin of error.
  • The relative margin of error of the measurement. The lower the value, the more accurate the results are. When comparing with previous results, we display the average relative margin of error.
  • (comparison only) A comparison of the two implementations, according to BenchmarkJS. It can be Slower, Faster, or Indeterminate.
  • (comparison only) The difference in operations per second, expressed as a percentage of the reference.
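
For example, if the reference measured 1,000 operations per second and the new implementation measures 1,150, the reported difference is +15% of the reference.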

Writing a benchmark

To add a benchmark, create a new folder in the perf/benchmarks/ directory. It must contain two files:

  1. input.yaml, which provides the initial State
  2. index.js, which defines what to run

index.js must export a run(state) function. This whole function is benchmarked: it is run several times, each time with the state parsed from input.yaml as its parameter. You can optionally export a setup(state) -> state function to modify the state parsed from input.yaml before the runs.

Note 1: Everything must be synchronous.

Note 2: To avoid unwanted memoization, a different instance of the state is passed to every run call.
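
As an illustration, here is a minimal sketch of such an index.js. Only the run and setup exports come from the contract above; the folder name and the transform calls are assumptions, written against the 2016-era state.transform()...apply() API, and stand in for whatever operations you actually want to measure.

// perf/benchmarks/example-benchmark/index.js

// Optional: synchronously prepare the state parsed from input.yaml.
// Whatever this returns is the state that run() will receive.
export function setup(state) {
  return state
    .transform()
    .collapseToStartOf(state.document)   // illustrative: place the selection at the start
    .apply()
}

// Required: this whole function is benchmarked. It is called many times,
// each time with a different instance of the state, to avoid memoization.
export function run(state) {
  state
    .transform()
    .splitBlock()                        // illustrative: the operation being measured
    .apply()
}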

Detailed options for the benchmark script

You can also launch the benchmark script directly. See usage:

babel-node ./perf/index.js -h