Friday 11:30 a.m.–noon

Cache me if you can: memcached, caching patterns and best practices

Guillaume Ardaud

Audience level:
Intermediate
Category:
Best Practices & Patterns

Description

Memcached is a popular, blazing-fast in-RAM key/object store mainly used in web applications (although it can be used in virtually any software). You will walk out of this talk with a solid understanding of what memcached does under the hood, and with several patterns and best practices for making the most of it in your own Python applications.
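
To make that concrete, here is a minimal sketch of the basic set/get round trip, assuming a memcached server running on localhost's default port 11211 and the pymemcache client (any Python memcached client exposes a similar API):

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    # Store a value under a key with a 60-second TTL, then read it back.
    client.set("user:42:name", "Guillaume", expire=60)
    print(client.get("user:42:name"))  # b'Guillaume', or None on a cache miss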

Abstract

Memcached is a distributed, in-RAM key/object store which does O(1) everything. I will first describe the memcached basics: what a typical memcached API looks like, how memcached behaves as a distributed system, how to name your keys smartly, and what memcached can't do for you.

I will then detail some memcached internals that are essential to know in order to use it properly: how memcached divides its memory into pages and slab classes, how it decides which data to evict when its memory is full, and how the memcached LRU cache behaves.

Finally, we will get our hands really dirty with concrete Python examples of using memcached in the wild: how to use memcached to speed up an object's JSON serialization, how to cache large objects with either a paginated cache or a 2-phase fetch approach, what a "thundering herd" is and how to prevent it, and how to write smart decorators that minimize code changes and avoid cache misses. A few of these patterns are sketched below.
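
Example sketches

Memcached rejects items over 1 MB by default, so large objects have to be split up. Below is a sketch of a 2-phase fetch, assuming the pymemcache client; the set_large/get_large helpers and chunk size are illustrative, not library functions. A small metadata key records the chunk count; readers fetch it first (phase one), then fetch the chunks (phase two, which a real implementation would batch with get_many):

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    CHUNK = 900 * 1024  # stay safely under memcached's default 1 MB item limit

    def set_large(key, blob, ttl=300):
        """Store a large bytes object as numbered chunks plus a metadata key."""
        chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
        for i, chunk in enumerate(chunks):
            client.set("%s:%d" % (key, i), chunk, expire=ttl)
        # Write the metadata key last, so a reader never finds the
        # metadata before its chunks exist.
        client.set(key, str(len(chunks)), expire=ttl)

    def get_large(key):
        """Phase 1: fetch the metadata. Phase 2: fetch the chunks it points to."""
        count = client.get(key)
        if count is None:
            return None  # cache miss
        chunks = [client.get("%s:%d" % (key, i)) for i in range(int(count))]
        if any(c is None for c in chunks):
            return None  # a chunk was evicted out from under us: treat as a miss
        return b"".join(chunks)

A thundering herd happens when a hot key expires and every client that misses recomputes the value at once. One common mitigation, sketched below with an illustrative helper name (get_or_recompute is not part of any library), uses memcached's atomic add as a short-lived lock so that only one caller regenerates the value while the others wait and retry:

    import time

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def get_or_recompute(key, compute, ttl=300, lock_ttl=10):
        """Fetch a key; on a miss, let exactly one caller recompute it."""
        while True:
            value = client.get(key)
            if value is not None:
                return value
            # add() is atomic and only succeeds if the key does not already
            # exist, so exactly one caller wins the lock and recomputes.
            if client.add(key + ":lock", b"1", expire=lock_ttl, noreply=False):
                value = compute()
                client.set(key, value, expire=ttl)
                client.delete(key + ":lock")
                return value
            time.sleep(0.05)  # another worker is recomputing; retry shortly

A caching decorator can hide all of this behind an ordinary function call, and caching the JSON-serialized result also avoids re-serializing it on every hit. A minimal sketch, again assuming pymemcache; the memcached_cached name and key scheme are illustrative, results must be JSON-serializable, and real keys must stay under 250 bytes with no whitespace:

    import functools
    import json

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def memcached_cached(key_prefix, ttl=300):
        """Cache a function's JSON-serializable result in memcached."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args):
                # Derive a deterministic key from the prefix and arguments.
                key = key_prefix + ":" + ":".join(str(a) for a in args)
                cached = client.get(key)
                if cached is not None:
                    return json.loads(cached)  # cache hit: skip the computation
                result = func(*args)           # cache miss: compute...
                client.set(key, json.dumps(result), expire=ttl)  # ...and store
                return result
            return wrapper
        return decorator

    @memcached_cached("user", ttl=60)
    def load_user(user_id):
        return {"id": user_id}  # stands in for an expensive lookup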