Speed up Test::Unit + RSpec + Cucumber + Spinach by running tests in parallel on multiple CPU cores.
ParallelTests splits tests into even groups (by number of lines or runtime) and runs each group in a single process with its own database.
RailsCasts episode #413 Fast Tests
Gemfile:
```ruby
gem 'parallel_tests', group: [:development, :test]
```

ParallelTests uses 1 database per test-process.
| Process number           | 1    | 2     | 3     |
|--------------------------|------|-------|-------|
| `ENV['TEST_ENV_NUMBER']` | `''` | `'2'` | `'3'` |
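Note that the first process sees an empty `TEST_ENV_NUMBER`, so test code that needs a numeric process id has to normalize it. A minimal sketch (the local variable name is just illustrative):

```ruby
# TEST_ENV_NUMBER is '' (or unset) for process 1 and '2', '3', ... for the others.
process_number = ENV['TEST_ENV_NUMBER'].to_i # ''.to_i and nil.to_i are both 0
process_number = 1 if process_number.zero?
```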
Add to your `config/database.yml`:

```yaml
test:
  database: yourproject_test<%= ENV['TEST_ENV_NUMBER'] %>
```

Create the additional databases, copy the development schema (repeat after migrations), or do both from scratch (useful for CI):

```
rake parallel:create
rake parallel:prepare
rake parallel:setup
```

Run the tests:

```
rake parallel:test             # Test::Unit
rake parallel:spec             # RSpec
rake parallel:features         # Cucumber
rake parallel:features-spinach # Spinach
```

```
rake parallel:test[1] --> force 1 CPU --> 86 seconds
rake parallel:test    --> got 2 CPUs? --> 47 seconds
rake parallel:test    --> got 4 CPUs? --> 26 seconds
...
```

Test by pattern with a regex (e.g. use one integration server per subfolder / see if you broke any 'user'-related tests):
```
rake parallel:test[^test/unit]            # every test file in test/unit folder
rake parallel:test[user]                  # run users_controller + user_helper + user tests
rake parallel:test['user|product']        # run user and product related tests
rake parallel:spec['spec\/(?!features)']  # run RSpec tests except the tests in spec/features
```

Example output:

```
2 processes for 210 specs, ~ 105 specs per process
... test output ...

843 examples, 0 failures, 1 pending

Took 29.925333 seconds
```

Run an arbitrary task in parallel:

```
RAILS_ENV=test parallel_test -e "rake my:custom:task"
# or
rake parallel:rake[my:custom:task]
# limited parallelism
rake parallel:rake[my:custom:task,2]
```

To run setup or teardown code only once, at the beginning or end:

```ruby
# preparation:
# affected by race-condition: first process may boot slower than the second,
# so either sleep a bit or use a lock, for example File.lock
ParallelTests.first_process? ? do_something : sleep(1)

# cleanup:
# last_process? does NOT mean the last finished process, just the last started
ParallelTests.last_process? ? do_something : sleep(1)

at_exit do
  if ParallelTests.first_process?
    ParallelTests.wait_for_other_processes_to_finish
    undo_something
  end
end
```

Test groups are often not balanced and will run for different times, making everything wait for the slowest group. Use these loggers to record test runtime and then use the recorded runtime to balance test groups more evenly.
RSpec: add to your `.rspec_parallel` (or `.rspec`):

```
--format progress
--format ParallelTests::RSpec::RuntimeLogger --out tmp/parallel_runtime_rspec.log
```

To use a custom logfile location (default: `tmp/parallel_runtime_rspec.log`), use the CLI: `parallel_test spec -t rspec --runtime-log my.log`
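Once the runtime log is filled, grouping by runtime becomes the default (see the options below), or it can be requested explicitly. A sketch, assuming the default log location:

```
# assumes tmp/parallel_runtime_rspec.log was recorded by the RuntimeLogger
parallel_rspec --group-by runtime spec/
```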
Test::Unit: add to your `test_helper.rb`:

```ruby
require 'parallel_tests/test/runtime_logger' if ENV['RECORD_RUNTIME']
```

Results will be logged to `tmp/parallel_runtime_test.log` when `RECORD_RUNTIME` is set, so the logger is not always required and the log is not always overwritten.
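For example, to record runtimes during one full run (using the rake task from above):

```
RECORD_RUNTIME=true rake parallel:test
```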
Log the test output without the different processes overwriting each other.
Add the following to your `.rspec_parallel` (or `.rspec`):

```
--format progress
--format ParallelTests::RSpec::SummaryLogger --out tmp/spec_summary.log
```

Produce pasteable command-line snippets for each failed example.
E.g.

```
rspec /path/to/my_spec.rb:123 # should do something
```

Add the following to your `.rspec_parallel` (or `.rspec`):

```
--format progress
--format ParallelTests::RSpec::FailuresLogger --out tmp/failing_specs.log
```

Log failed cucumber scenarios to the specified file. The filename can be passed to cucumber, prefixed with '@', to rerun failures.
Usage:
```
cucumber --format ParallelTests::Cucumber::FailuresLogger --out tmp/cucumber_failures.log
```

Or add the formatter to the `parallel:` profile of your `cucumber.yml`:
```yaml
parallel: --format progress --format ParallelTests::Cucumber::FailuresLogger --out tmp/cucumber_failures.log
```

Note: if your `cucumber.yml` default profile uses `<%= std_opts %>`, you may need to insert this as follows: `parallel: <%= std_opts %> --format progress ...`
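A sketch of how such a `cucumber.yml` might look when `std_opts` is already defined via ERB (the `std_opts` contents here are only placeholders for whatever your default profile uses):

```yaml
<% std_opts = "--format progress --strict" %>
default: <%= std_opts %> features
parallel: <%= std_opts %> --format progress --format ParallelTests::Cucumber::FailuresLogger --out tmp/cucumber_failures.log
```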
To rerun failures:
```
cucumber @tmp/cucumber_failures.log
```

For non-Rails projects, install the gem and run from your project directory:

```
gem install parallel_tests
# go to your project dir
parallel_test test/
parallel_rspec spec/
parallel_cucumber features/
parallel_spinach features/
```
Use `ENV['TEST_ENV_NUMBER']` inside your tests to select separate db/memcache/etc. (a sketch follows the examples below).

Only run selected files & folders:
```
parallel_test test/bar test/baz/foo_text.rb
```

Pass test-options and files via `--`:
```
parallel_test -- -t acceptance -f progress -- spec/foo_spec.rb spec/acceptance
```
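As mentioned above, `ENV['TEST_ENV_NUMBER']` can keep per-process resources apart. A minimal sketch, assuming a Redis-backed test setup (the database-number scheme is only an example):

```ruby
# spec_helper.rb (sketch): give each test process its own Redis database.
require 'redis'

# '' for process 1 becomes 0; processes 2, 3, ... keep their own numbers.
redis_db = ENV['TEST_ENV_NUMBER'].to_i
$redis = Redis.new(db: redis_db)
```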
Options are:
```
    -n [PROCESSES]                   How many processes to use, default: available CPUs
    -p, --pattern [PATTERN]          run tests matching this regex pattern
        --exclude-pattern [PATTERN]  exclude tests matching this regex pattern
        --group-by [TYPE]            group tests by:
                                       found - order of finding files
                                       steps - number of cucumber/spinach steps
                                       scenarios - individual cucumber scenarios
                                       filesize - by size of the file
                                       runtime - info from runtime log
                                       default - runtime when runtime log is filled otherwise filesize
    -m, --multiply-processes [FLOAT] use given number as a multiplier of processes to run
    -s, --single [PATTERN]           Run all matching files in the same process
    -i, --isolate                    Do not run any other tests in the group used by --single(-s)
        --only-group INT[, INT]
    -e, --exec [COMMAND]             execute this code parallel and with ENV['TEST_ENV_NUMBER']
    -o, --test-options '[OPTIONS]'   execute test commands with those options
    -t, --type [TYPE]                test(default) / rspec / cucumber / spinach
        --suffix [PATTERN]           override built in test file pattern (should match suffix):
                                       '_spec.rb$' - matches rspec files
                                       '_(test|spec).rb$' - matches test or spec files
        --serialize-stdout           Serialize stdout output, nothing will be written until everything is done
        --prefix-output-with-test-env-number  Prefixes test env number to the output when not using --serialize-stdout
        --combine-stderr             Combine stderr into stdout, useful in conjunction with --serialize-stdout
        --non-parallel               execute same commands but not in parallel, needs --exec
        --no-symlinks                Do not traverse symbolic links to find test files
        --ignore-tags [PATTERN]      When counting steps ignore scenarios with tags that match this pattern
        --nice                       execute test commands with low priority
        --runtime-log [PATH]         Location of previously recorded test runtimes
        --allowed-missing            Allowed percentage of missing runtimes (default = 50)
        --unknown-runtime [FLOAT]    Use given number as unknown runtime (otherwise use average time)
        --verbose                    Print debug output
        --verbose-process-command    Print the command that will be executed by each process before it begins
        --verbose-rerun-command      After a process fails, print the command executed by that process
        --quiet                      Print only test output
    -v, --version                    Show Version
    -h, --help                       Show this
```

You can run any kind of code in parallel with `-e` / `--exec`:
```
parallel_test -n 5 -e 'ruby -e "puts %[hello from process #{ENV[:TEST_ENV_NUMBER.to_s].inspect}]"'
hello from process "2"
hello from process ""
hello from process "3"
hello from process "5"
hello from process "4"
```

|                  | 1 Process | 2 Processes | 4 Processes |
|------------------|-----------|-------------|-------------|
| RSpec spec-suite | 18s       | 14s         | 10s         |
| Rails-ActionPack | 88s       | 53s         | 44s         |
- Add a `.rspec_parallel` to use different options, e.g. no --drb
- Remove `--loadby` from `.rspec`
- Instantly see failures (instead of just a red F) with rspec-instafail
- Use rspec-retry (not rspec-rerun) to rerun failed tests
- JUnit formatter configuration
- Add a `parallel: foo` profile to your `config/cucumber.yml` and it will be used to run parallel tests
- ReportBuilder can help with combining parallel test results
  - Supports Cucumber 2.0+ and is actively maintained
  - Combines many JSON files into a single file
  - Builds a HTML report from JSON with support for debug msgs & embedded Base64 images
- [SQL schema format] use :ruby schema format to get faster `parallel:prepare`
- [ZSH] use quotes to pass rake arguments: `rake "parallel:prepare[3]"`
- [Memcached] use different namespaces, e.g. `config.cache_store = ..., namespace: "test_#{ENV['TEST_ENV_NUMBER']}"`
- Debug errors that only happen with multiple files using `--verbose` and cleanser
- `export PARALLEL_TEST_PROCESSORS=13` to override the default processor count
- Shell alias: `alias prspec='parallel_rspec -m 2 --'`
- [Spring] to use spring you have to patch it
- `--first-is-1` will make the first environment be `1`, so you can test while running your full suite; `export PARALLEL_TEST_FIRST_IS_1=true` will provide the same result
- email_spec and/or action_mailer_cache_delivery
- zeus-parallel_tests
- Distributed parallel test (e.g. Travis support)
- Capybara setup
- Sphinx setup
- Capistrano setup: let your tests run on a big box instead of your laptop
Contribute your own gotchas to the Wiki or even better open a PR :)
inspired by pivotal labs
- Charles Finkel
- Indrek Juhkam
- Jason Morrison
- jinzhu
- Joakim Kolsjö
- Kevin Scaldeferri
- Kpumuk
- Maksim Horbul
- Pivotal Labs
- Rohan Deshpande
- Tchandy
- Terence Lee
- Will Bryant
- Fred Wu
- xxx
- Levent Ali
- Michael Kintzer
- nathansobo
- Joe Yates
- asmega
- Doug Barth
- Geoffrey Hichborn
- Trae Robrock
- Lawrence Wang
- Sean Walbran
- Lawrence Wang
- Potapov Sergey
- Łukasz Tackowiak
- Pedro Carriço
- Pablo Manrubia Díez
- Slawomir Smiechura
- Georg Friedrich
- R. Tyler Croy
- Ulrich Berkmüller
- Grzegorz Derebecki
- Florian Motlik
- Artem Kuzko
- Zeke Fast
- Joseph Shraibman
- David Davis
- Ari Pollak
- Aaron Jensen
- Artur Roszczyk
- Caleb Tomlinson
- Jawwad Ahmad
- Iain Beeston
- Alejandro Pulver
- Felix Clack
- Izaak Alpert
- Micah Geisel
- Exoth
- sidfarkus
- Colin Harris
- Wataru MIYAGUNI
- Brandon Turner
- Matt Hodgson
- bicarbon8
- seichner
- Matt Southerden
- Stanislaw Wozniak
- Dmitry Polushkin
- Samer Masry
- Volodymyr Mykhailyk
- Mike Mueller
- Aaron Jensen
- Ed Slocomb
- Cezary Baginski
- Marius Ioana
- Lukas Oberhuber
- Ryan Zhang
- Rhett Sutphin
- Doc Ritezel
- Alexandre Wilhelm
- Jerry
- Aleksei Gusev
- Scott Olsen
- Andrei Botalov
- Zachary Attas
- David Rodríguez
- Justin Doody
- Sandeep Singh
- Calaway
- alboyadjian
Michael Grosser
[email protected]
License: MIT