Speedily Practical Large-Scale Tests

Type: Talk
Audience level: Intermediate
Category: Testing
March 9th, 12:10 p.m. – 12:55 p.m.

Description

Mozilla's projects have thousands of tests, so we've had to venture beyond vanilla test runners to keep things manageable. Our secret sauce can be used with your project as well. Reach beyond the test facilities that came with your app framework: harness pluggable test frameworks, dynamically reorder tests for speed, explore various mocking libraries, and profile your way to testing nirvana.

Abstract

A partial outline:

  • Intro
    • Motivation: a test not run is no test at all.
    • For most web apps, the easiest test speed win is a conquest of I/O.
  • The nose test runner
    • Test discovery lets you organize tests well.
    • Pluggability
    • Gluing nose to projects with custom test runners: django-nose and test-utils
  • py.test
    • Compare to nose, which forked from py.test; explain the history.
    • Very cool assertion re-evaluation: plain asserts whose failure output shows the values involved (sketched after this outline)
    • Plugin compatibility between py.test and nose
  • Profiling
    • Start here. Premature optimization sucks.
    • Using time on the command line to separate CPU time from I/O
    • nose's --with-profile switch for a per-function breakdown
  • Killing I/O for speedy justice: case study of support.mozilla.com
    • Fixture speed hacks (a 5x improvement!)
      • Once-per-class setup
        • How to use DB transactions to avoid repetitive I/O (a sketch follows the outline)
      • Dynamic test reordering and fixture sharing
      • DB reuse and other startup optimizations
      • 37,583 queries to 4,116. Watch them fly by!
    • What to do instead of fixtures: the model-maker pattern (sketched after the outline)
      • Lexical proximity: test data lives right next to the test that uses it
      • Lower coupling
      • Speed
  • Using mocking to kill the fixtures altogether
    • mock, the canonical lib
    • fudge, new declarative hotness
      • Syntax, capabilities (example after the outline)
      • Example: oedipus, a better API for the Sphinx search engine. I used fudge to unit-test oedipus without requiring devs to set up and populate Sphinx.
    • Dangers of mocking
      • Don't mock out your caching unless your invalidation is perfect.
      • Some of our mistakes in oedipus
  • The nose-progressive display engine
    • Test results that are a pain to read don't get read.
    • Progress indication
    • Elision of junk stack frames
    • Easier round-tripping from test failure to source code
  • Continuous integration
    • Motivation
    • Jenkins
    • Buildbot
    • IRC bots
  • Next steps: what to do once you're CPU-bound
    • More parallelization.
      • Multithreading buys you no speedup for CPU-bound tasks in Python because of the GIL; only I/O-bound work benefits from threads. (Ref: David Beazley's PyCodeConf talk.)
      • State of multiprocess plugins in various test runners.
    • Mozilla's Jenkins test farm
    • QA's big stacks of Mac Minis
    • What global warming? ;-)
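
A minimal illustration of py.test's assertion re-evaluation mentioned above; add() is a made-up function under test:

    # A bare assert is enough. On failure, py.test re-evaluates the expression
    # and prints the values involved (e.g. "assert 4 == 5"), so there is no
    # need for assertEqual-style helper methods.
    def add(x, y):
        return x + y

    def test_add():
        assert add(2, 2) == 5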
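
A sketch of the once-per-class fixture idea, assuming Django; the class name and details are hypothetical, not the exact code from support.mozilla.com:

    import unittest

    from django.core.management import call_command
    from django.db import transaction

    class FastFixtureTestCase(unittest.TestCase):
        """Load fixtures once per class; roll back each test's changes."""
        fixtures = []  # e.g. ['users.json', 'forums.json']

        @classmethod
        def setUpClass(cls):
            super().setUpClass()
            # One round of fixture I/O for the whole class, not one per test.
            call_command('loaddata', *cls.fixtures, verbosity=0)

        def setUp(self):
            # Wrap each individual test in a transaction...
            self._atomic = transaction.atomic()
            self._atomic.__enter__()

        def tearDown(self):
            # ...and roll it back, leaving the shared fixture data pristine.
            transaction.set_rollback(True)
            self._atomic.__exit__(None, None, None)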
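
A sketch of the model-maker pattern; Document and its fields are placeholders for whatever model a test touches:

    from myapp.models import Document  # hypothetical model

    def document(**kwargs):
        """Build a saved Document with defaults any test can override."""
        defaults = {'title': 'Deleting your profile',
                    'locale': 'en-US',
                    'is_archived': False}
        defaults.update(kwargs)
        return Document.objects.create(**defaults)

    def test_archived_documents_are_excluded():
        # The data this test needs is declared right here (lexical proximity),
        # and only that data gets created (speed, low coupling).
        doc = document(is_archived=True)
        assert doc not in Document.objects.filter(is_archived=False)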
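
And a taste of fudge's declarative style; send_alert() and its use of SMTP are invented for the example rather than taken from oedipus:

    import fudge

    from alerts import send_alert  # hypothetical code under test

    @fudge.patch('smtplib.SMTP')
    def test_alert_sends_mail(FakeSMTP):
        # Declare up front what the code under test must do with SMTP; fudge
        # verifies the expectations when the test finishes, and no real mail
        # server is ever touched.
        (FakeSMTP.expects_call()
                 .returns_fake()
                 .expects('sendmail')
                 .expects('quit'))
        send_alert('admin@example.com', 'the tests are fast now')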