Right now, code coverage produces a lot of false positives that fail builds. This makes it hard to tell at a glance whether a PR is passing or failing something more important, like linting or tests.
Disabling it for now until we can find a better approach.
* Run mocha tests with module alias (see the module alias sketch after this list)
* Run tests with babel module alias
* Fix module alias
* Fix module alias
* Resolve module alias
* Run tests with babel module alias
* Connect to Codecov
* Add Codecov to Travis
* Stop if yarn test has errors
* Still cannot collect data from slate modules
* Check whether it works with Codecov
* Move config to .nycrc (see the coverage config sketch after this list)
* Remove nyc require
* Update nyc to use src
* Better before_script
* First stab
* Refactor to nanobench
* Refactor to matcha
* Use hand-rolled comparison logic, ugh
* Update threshold
* Remove unused dependencies
* Remove benchmarks from Travis CI
* Add script for benchmark
* Add error handling
* Rename folder to perf/benchmarks
* Add README
* Avoid memoization between benchmark runs
* Handle multiple benchmarks. Add setup to benchmarks
* Run benchmarks through Travis
* Add command line options for JSON output
* Add export to JSON, and comparison with reference
* Improve serialize and fix results display
* Add perf/ to .npmignore
* Print error message
* Create normal example for normalize
* Add normalize-document wide and deep
* Add split-block normal, deep and wide
* Add delete-backward benchmarks
* Fix too many newlines
* Use microtime for better results maybe?
* Print number of runs
* Add minSamples option for better accuracy (see the benchmark suite sketch after this list)
* Use babel-node to launch benchmarks
* Use jsdom-global instead of the deprecated mocha-jsdom (see the jsdom-global sketch after this list)
* Add rendering benchmark example
* Fix jsdom usage
* Use JSX because we can
* Only use on('cycle'), which is called even on error
* Example of successive rendering benchmark
* Rename README, and explain how to add a benchmark
* Add C++11 to Travis to install microtime
* Update Readme.md # Understanding the results
* Try to fix Travis build with microtime
* Travis: use before_install
Instead of overwriting install
* Forgot to remove mocha-jsdom import
Thanks node_modules...
* Add jsdom as devDependency
(required as peer dependency by jsdom-global)
* Add --only option to run only a specific benchmark (see the --only sketch after this list)
* Print name onStart rather than at the end
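
For reference, the module alias commits above don't name the aliasing mechanism; a minimal sketch, assuming babel-plugin-module-resolver and a hypothetical `slate` alias, written as a JS Babel config:

```js
// A sketch only: the plugin choice (babel-plugin-module-resolver) and the
// `slate` alias are assumptions, not taken from the commits above.
module.exports = {
  plugins: [
    [
      'module-resolver',
      {
        alias: {
          // let require('slate') in tests resolve to the untranspiled sources
          slate: './src',
        },
      },
    ],
  ],
}
```

With something like this in place, mocha can be launched through Babel so both the tests and the aliased imports are transpiled on the fly.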
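The coverage commits move the nyc configuration into .nycrc and point it at src; the exact contents aren't in this log, so the options below are a sketch using documented nyc settings, shown in the JS config form nyc also accepts:

```js
// A sketch of plausible coverage settings; the real change used a .nycrc file
// and its exact options are not shown in the commits above.
module.exports = {
  // collect coverage from the untranspiled sources
  include: ['src/**/*.js'],
  // transpile on the fly so coverage maps back to the original files
  require: ['babel-register'],
  // lcov output is what Codecov ingests; text gives a console summary
  reporter: ['lcov', 'text'],
}
```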
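Several benchmark commits (per-benchmark setup, minSamples, on('cycle')) read like the Benchmark.js API, so the sketch below assumes that library; the benchmark name and function bodies are placeholders:

```js
// A sketch assuming Benchmark.js; the benchmark name, setup, and fn bodies are
// placeholders rather than the actual benchmarks referenced above.
const Benchmark = require('benchmark')

new Benchmark.Suite()
  .add('normalize-document-wide', {
    // ask for more samples per benchmark for better accuracy
    minSamples: 50,
    // setup runs outside the timed region; rebuilding fixtures here also
    // avoids memoized state carrying over between runs
    setup: () => {
      // build a fresh document to operate on
    },
    fn: () => {
      // run the operation being measured
    },
  })
  // 'cycle' fires for every completed benchmark, including ones that errored,
  // so results and failures can be reported from one handler
  .on('cycle', (event) => {
    console.log(String(event.target))
  })
  .run()
```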
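jsdom-global provides the global window/document the rendering benchmarks need outside a browser, and it expects jsdom to be installed alongside it (hence the devDependency commit). A minimal usage sketch; the file and the rendered element are hypothetical:

```js
// A sketch only: calling jsdom-global registers global window/document and
// returns a cleanup function; jsdom itself must be installed as well.
const cleanup = require('jsdom-global')()

const React = require('react')
const ReactDOM = require('react-dom')

// render into the jsdom-provided document
const container = document.createElement('div')
document.body.appendChild(container)
ReactDOM.render(React.createElement('div', null, 'hello'), container)

// tear the simulated DOM back down when finished
cleanup()
```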
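The --only flag restricts a run to a single benchmark; the runner below is a hypothetical sketch of how such a flag can be honored (the benchmark names and the argument parsing are illustrative, not the actual code):

```js
// Hypothetical sketch of an --only flag; names and bodies are placeholders.
const Benchmark = require('benchmark')

const args = process.argv.slice(2)
const onlyIndex = args.indexOf('--only')
const only = onlyIndex !== -1 ? args[onlyIndex + 1] : null

const benchmarks = {
  'normalize-document-wide': () => { /* ... */ },
  'split-block-deep': () => { /* ... */ },
}

const suite = new Benchmark.Suite()

Object.keys(benchmarks)
  // when --only <name> is passed, keep just that benchmark
  .filter((name) => !only || name === only)
  .forEach((name) => suite.add(name, benchmarks[name]))

suite
  .on('cycle', (event) => console.log(String(event.target)))
  .run()
```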