Distributed signal processing using PyBlockWork

Christian O'Reilly

PyBlockWork allows defining signal processing operations as modules and chaining them into a block schema. Each block executes in its own process and is responsible for launching its downstream blocks in cascade. PyBlockWork is still under development but has already been used successfully with Spyndle to analyze polysomnographic recordings on a 12-core server.
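This cascaded, one-process-per-block architecture can be illustrated with a minimal sketch using Python's standard `multiprocessing` module. Note that this is not PyBlockWork's actual API (the function and variable names below are invented for illustration); it only mimics the execution model described above, with each block running in its own process and feeding the next through a queue.

```python
# Illustrative sketch (NOT PyBlockWork's API): a chain of processing
# "blocks", each running in its own process and streaming its output
# to the next block through a queue.
from multiprocessing import Process, Queue

SENTINEL = None  # marks the end of the data stream


def square(x):
    return x * x


def offset(x):
    return x + 1


def block(func, inq, outq):
    """Apply `func` to every item from inq and push results to outq."""
    for item in iter(inq.get, SENTINEL):
        outq.put(func(item))
    outq.put(SENTINEL)  # propagate shutdown to the downstream block


def run_pipeline(funcs, data):
    """Chain one process per function; the last queue collects results."""
    queues = [Queue() for _ in range(len(funcs) + 1)]
    procs = [Process(target=block, args=(f, queues[i], queues[i + 1]))
             for i, f in enumerate(funcs)]
    for p in procs:
        p.start()
    for item in data:       # feed the first block
        queues[0].put(item)
    queues[0].put(SENTINEL)
    results = list(iter(queues[-1].get, SENTINEL))
    for p in procs:
        p.join()
    return results


if __name__ == "__main__":
    # A two-block schema: square each sample, then add an offset.
    print(run_pipeline([square, offset], [1, 2, 3]))  # -> [2, 5, 10]
```

Because each block is a separate OS process consuming from its input queue as data arrives, all blocks work concurrently on different items, which is what makes this model attractive on a multi-core server such as the 12-core machine mentioned above.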


Contemporary academic researchers face heavy pressure under the current “publish or perish” paradigm that guides scientific endeavor. In this context, they need more efficient tools for extracting meaningful information from large databases. PyBlockWork has been developed as one such tool: it lets researchers describe their data-crunching needs as block schemas whose execution can be distributed over a computer grid. Being at an early stage of development, PyBlockWork suffers from some usability shortcomings, which are expected to be remedied by a graphical interface wrapper. Nevertheless, the core features of the library are fully functional and show interesting potential for Big Data management. This poster presentation will describe how PyBlockWork works and how it can be used to implement distributed data-crunching tasks, taking as an example our work on analyzing the sleep spindle properties of whole-night polysomnographic recording databases. As a teaser, the figure below shows the general architecture of the library, which will be explained in detail during the poster presentation.

![][1]

[1]: https://bitbucket.org/christian_oreilly/pyblockwork/wiki/images/global_schema.png