fixtures.parallelizer package

Module contents

Parallel testing, supporting arbitrary collection ordering

The Workflow

  • Master py.test process starts up, inspects config to decide how many slaves to start, if at all
  • py.test config.option.appliances and the related --appliance cmdline flag are used to count the number of needed slaves
  • Slaves are started
  • Master runs collection, blocks until slaves report their collections
  • Slaves each run collection and submit them to the master, then block inside their runtest loop, waiting for tests to run
  • Master diffs slave collections against its own; the test ids are verified to match across all nodes
  • Master enters main runtest loop, uses a generator to build lists of test groups which are then sent to slaves, one group at a time
  • For each phase of each test, the slave serializes test reports, which are then unserialized on the master and handed to the normal pytest reporting hooks; these hooks are able to deal with test reports arriving out of order
  • Before running the last test in a group, the slave will request more tests from the master (this hand-off is sketched after the list)
    • If more tests are received, they are run
    • If no tests are received, the slave will shut down after running its final test
  • After all slaves are shut down, the master will do its end-of-session reporting as usual, and shut down
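
The group hand-off between master and slaves can be pictured with a small self-contained simulation. This is a sketch of the protocol described above, not the module's actual code; the queue stands in for the zmq socket and all names are illustrative.

    from collections import deque

    def master_groups(collection, group_size=3):
        """Yield lists of test ids, one group at a time (the master side)."""
        for i in range(0, len(collection), group_size):
            yield collection[i:i + group_size]

    def run_slave(groups):
        """Simulate one slave's runtest loop: ask for more work before the
        final test of each group, and shut down when nothing comes back."""
        pending = deque(next(groups, []))
        while pending:
            if len(pending) == 1:
                # about to run the last test of the group: request more first
                pending.extend(next(groups, []))
            print('running', pending.popleft())
        print('no more tests received, shutting down')

    run_slave(master_groups(['test_a', 'test_b', 'test_c', 'test_d', 'test_e']))
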
class fixtures.parallelizer.Outcome(word, markup)

Bases: tuple

markup

Alias for field number 1

word

Alias for field number 0
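
Given the tuple base class and the field aliases above, Outcome behaves like a two-field named tuple. A minimal equivalent sketch (the example values are illustrative):

    from collections import namedtuple

    # field 0 is the outcome word (e.g. 'passed'), field 1 the terminal markup
    Outcome = namedtuple('Outcome', ['word', 'markup'])

    outcome = Outcome(word='passed', markup={'green': True})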

class fixtures.parallelizer.ParallelSession(config, appliances)[source]

Bases: object

ack(slave, event_name)[source]

Acknowledge a slave’s message

get(slave)[source]
interrupt(slave, **kwargs)[source]

Nicely ask a slave to terminate

kill(slave, **kwargs)[source]

Rudely kill a slave
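
Since each SlaveDetail carries a subprocess handle (the process attribute below), interrupt and kill amount to the polite and forceful ways of stopping that process. A sketch under that assumption, not the module's exact code:

    import signal

    def interrupt_sketch(process):
        # nicely ask the slave to stop so it can clean up after itself
        if process is not None and process.poll() is None:
            process.send_signal(signal.SIGINT)

    def kill_sketch(process):
        # last resort when the slave ignores the interrupt
        if process is not None and process.poll() is None:
            process.kill()
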

monitor_shutdown(slave)[source]
print_message(message, prefix='master', **markup)[source]

Print a message from a node to the py.test console

Parameters:
  • message – The message to print
  • **markup – If set, overrides the default markup when printing the message
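
A hypothetical call, assuming the markup keywords follow py.test's terminal writer conventions (green, red, bold, and so on); session and the slave ids are illustrative:

    # prefix identifies the node the message came from; markup overrides the default
    session.print_message('collection finished', prefix='slave0', green=True)
    session.print_message('slave died unexpectedly', prefix='slave2', red=True, bold=True)
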
pytest_runtestloop()[source]

pytest runtest loop

  • Disable the master terminal reporter hooks, so we can add our own handlers that include the slaveid in the output
  • Send tests to slaves when they ask
  • Log the starting of tests and test results, including slave id
  • Handle clean slave shutdown when they finish their runtest loops
  • Restore the master terminal reporter after testing so we get the final report (the reporter swap is sketched below)
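
The reporter swap in the first and last bullets can be sketched with pytest's plugin manager. How the trdist reporter is registered and named here is an assumption, not a copy of the real hook:

    def runtestloop_reporter_swap(config, trdist_reporter):
        # stash and unregister the stock terminal reporter for the duration of the loop
        standard_reporter = config.pluginmanager.get_plugin('terminalreporter')
        config.pluginmanager.unregister(standard_reporter)
        config.pluginmanager.register(trdist_reporter, 'terminaldistreporter')
        try:
            pass  # send test groups to slaves, log results with slave ids, handle shutdown
        finally:
            # restore the stock reporter so the usual end-of-session summary still prints
            config.pluginmanager.unregister(trdist_reporter)
            config.pluginmanager.register(standard_reporter, 'terminalreporter')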
pytest_sessionstart(session)[source]

pytest sessionstart hook

  • sets up distributed terminal reporter
  • sets up zmq ipc socket for the slaves to use
  • writes pytest options and args to slave_config.yaml
  • starts the slaves
  • registers atexit kill hooks to destroy the slaves at exit if things go terribly wrong (a condensed sketch of these steps follows)
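
A condensed sketch of the start-up steps above; the ipc path, the yaml layout, and the slave entry point module are assumptions for illustration:

    import atexit
    import subprocess

    import yaml
    import zmq

    def sessionstart_sketch(config, slave_ids):
        # zmq ipc socket the slaves will connect back to
        ctx = zmq.Context.instance()
        sock = ctx.socket(zmq.ROUTER)
        sock.bind('ipc:///tmp/parallelizer.ipc')

        # hand the pytest options and args over to the slaves
        with open('slave_config.yaml', 'w') as conf_file:
            yaml.safe_dump({'args': list(config.args), 'options': vars(config.option)}, conf_file)

        processes = []
        for slave_id in slave_ids:
            proc = subprocess.Popen(['python', '-m', 'fixtures.parallelizer.remote', slave_id])
            processes.append(proc)
            # make sure the slave is destroyed at exit if things go terribly wrong
            atexit.register(proc.kill)
        return sock, processes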
recv()[source]
send(slave, event_data)[source]

Send data to a slave.

event_data will be serialized as JSON, and so must be JSON serializable
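
Given the JSON requirement, send and recv can be pictured as a thin JSON layer over the zmq socket; the multipart framing with the slave id as the first frame is an assumption, not the documented wire format:

    import json

    def send_sketch(sock, slave_id, event_data):
        # event_data must be JSON serializable: dicts, lists, strings, numbers
        sock.send_multipart([slave_id.encode('utf-8'), json.dumps(event_data).encode('utf-8')])

    def recv_sketch(sock):
        # returns (slave_id, decoded event) for whichever slave spoke last
        slave_id, payload = sock.recv_multipart()
        return slave_id.decode('utf-8'), json.loads(payload)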

send_tests(slave)[source]

Send a slave a group of tests

class fixtures.parallelizer.SlaveDetail(appliance, id=NOTHING, tests=NOTHING, process=None, provider_allocation=NOTHING)[source]

Bases: object

appliance = Attribute(name='appliance', default=NOTHING, validator=None, repr=True, cmp=True, hash=None, init=True, convert=None, metadata=mappingproxy({}))
forbid_restart = Attribute(name='forbid_restart', default=False, validator=None, repr=True, cmp=True, hash=None, init=False, convert=None, metadata=mappingproxy({}))
id = Attribute(name='id', default=Factory(factory=<function <lambda>>, takes_self=False), validator=None, repr=True, cmp=True, hash=None, init=True, convert=None, metadata=mappingproxy({}))
poll()[source]
process = Attribute(name='process', default=None, validator=None, repr=False, cmp=True, hash=None, init=True, convert=None, metadata=mappingproxy({}))
provider_allocation = Attribute(name='provider_allocation', default=Factory(factory=<type 'list'>, takes_self=False), validator=None, repr=False, cmp=True, hash=None, init=True, convert=None, metadata=mappingproxy({}))
slaveid_generator = <generator object <genexpr>>
start()[source]
tests = Attribute(name='tests', default=Factory(factory=<type 'set'>, takes_self=False), validator=None, repr=False, cmp=True, hash=None, init=True, convert=None, metadata=mappingproxy({}))
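
Reading the attribute descriptors above back into source form, SlaveDetail is an attrs class roughly along these lines; the slave id format and the generator expression are assumptions:

    from itertools import count

    import attr

    @attr.s
    class SlaveDetailSketch(object):
        slaveid_generator = ('slave{}'.format(i) for i in count())

        appliance = attr.ib()
        id = attr.ib(default=attr.Factory(lambda: next(SlaveDetailSketch.slaveid_generator)))
        tests = attr.ib(default=attr.Factory(set), repr=False)
        process = attr.ib(default=None, repr=False)
        provider_allocation = attr.ib(default=attr.Factory(list), repr=False)
        forbid_restart = attr.ib(default=False, init=False)
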
class fixtures.parallelizer.TerminalDistReporter(config, terminal)[source]

Bases: object

Terminal Reporter for Distributed Testing

The trdist reporter exists to make sure we get good distributed logging during the runtest loop, which means the normal terminal reporter must be disabled while the loop runs.

This class is responsible for making the terminal reporter aware of whatever state it needs to report properly once it is turned back on after the runtest loop.

It has special versions of pytest reporting hooks that, where possible, try to include a slave ID. These hooks are called in ParallelSession’s runtestloop hook.

runtest_logreport(slaveid, report)[source]
runtest_logstart(slaveid, nodeid, location)[source]
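
A sketch of how these hooks can work the slave id into the output; treating terminal as pytest's terminal reporter (with its write_line method) is an assumption about the constructor argument:

    def runtest_logstart_sketch(terminal, slaveid, nodeid, location):
        # same start line the stock reporter would print, plus which node runs the test
        terminal.write_line('({}) {}'.format(slaveid or 'master', nodeid))

    def runtest_logreport_sketch(terminal, slaveid, report):
        # only the call phase is summarized here; setup/teardown reports pass through quietly
        if report.when == 'call':
            markup = {'green': True} if report.passed else {'red': True}
            terminal.write_line('({}) {} {}'.format(slaveid, report.nodeid, report.outcome), **markup)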
fixtures.parallelizer.handle_end_session(signal, frame)[source]
fixtures.parallelizer.pytest_addhooks(pluginmanager)[source]
fixtures.parallelizer.pytest_configure(config)[source]

Configures the parallel session, then fires pytest_parallel_configured.
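
How the pieces plug together at configure time can be sketched as follows; the plugin name and the hook's keyword argument are assumptions based on the workflow description, and ParallelSession is the class documented above:

    from fixtures.parallelizer import ParallelSession

    def pytest_configure_sketch(config):
        appliances = config.option.appliances
        if len(appliances) > 1:
            # more than one appliance requested: start a distributed session
            session = ParallelSession(config, appliances)
            config.pluginmanager.register(session, 'parallel_session')
            config.hook.pytest_parallel_configured(parallel_session=session)
        else:
            # single appliance: no slaves, but fire the hook so listeners still run
            config.hook.pytest_parallel_configured(parallel_session=None)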

fixtures.parallelizer.report_collection_diff(slaveid, from_collection, to_collection)[source]

Report differences, if any exist, between master and a slave collection

Raises RuntimeError if collections differ

Note

This function will sort the collections before comparing them.
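
The comparison can be sketched with difflib; whether the real implementation uses unified_diff is an assumption, but the sort-compare-raise shape follows the docstring:

    import difflib

    def report_collection_diff_sketch(slaveid, from_collection, to_collection):
        from_collection, to_collection = sorted(from_collection), sorted(to_collection)
        if from_collection == to_collection:
            return None  # collections match, nothing to report
        diff = '\n'.join(difflib.unified_diff(
            from_collection, to_collection, fromfile='master', tofile=slaveid))
        raise RuntimeError('collection mismatch on {}:\n{}'.format(slaveid, diff))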

fixtures.parallelizer.unserialize_report(reportdict)[source]

Generate a TestReport from a serialized report
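
A sketch of the reverse of the slave-side serialization; the import location and the assumption that the dict keys map straight onto the constructor arguments both depend on the pytest version in use:

    from _pytest.runner import TestReport  # lives in _pytest.reports in newer pytest

    def unserialize_report_sketch(reportdict):
        # reportdict is the JSON-safe dict a slave produced for one phase of one test
        return TestReport(**reportdict)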