Friday 11:30 a.m.–noon
Cache me if you can: memcached, caching patterns and best practices
Guillaume Ardaud
- Audience level:
- Intermediate
- Category:
- Best Practices & Patterns
Abstract
Memcached is a distributed, in-RAM key/object store which does O(1) everything. I first describe the memcached basics: what a typical memcached API looks like, how memcached behaves as a distributed system, how to name your keys smartly, and what memcached can't do for you.
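As a small illustration of those basics, here is a sketch of versioned key naming and of how a client maps a key to one server in the pool. The server names and the modulo-hashing scheme are illustrative assumptions, not the talk's code; real client libraries (pymemcache, python-memcached) do this mapping for you, usually with consistent hashing.

```python
import hashlib

# Hypothetical server pool for illustration only.
SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]

def make_key(namespace: str, version: int, object_id: str) -> str:
    # Namespaced, versioned keys make invalidation cheap: bump the
    # version number to "flush" a whole class of keys at once.
    return f"{namespace}:v{version}:{object_id}"

def pick_server(key: str) -> str:
    # Naive modulo hashing as a stand-in; production clients prefer
    # consistent hashing so that adding or removing a server only
    # remaps roughly 1/N of the keys.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

key = make_key("user", 2, "42")       # -> "user:v2:42"
server = pick_server(key)             # always the same server for this key
```

The same key always hashes to the same server, which is what lets a fleet of memcached nodes act as one large cache with no coordination between them.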
I then detail some memcached internals that are essential to know to use it properly: how memcached divides its memory in pages and slab classes, how it decides what data to evict when its memory is full, and how the memcached LRU cache behaves.
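The slab-class geometry described above can be sketched in a few lines. The constants below (1 MB pages, 1.25 growth factor, 96-byte smallest chunk, 8-byte alignment) are typical defaults used for illustration, not authoritative values; check your server's `-f` and `-n` settings for the real ones.

```python
# Illustrative sketch of memcached's slab sizing: each 1 MB page is
# assigned to one slab class and carved into fixed-size chunks, with
# chunk sizes growing geometrically from class to class.
PAGE_SIZE = 1024 * 1024   # 1 MB pages
GROWTH_FACTOR = 1.25      # assumed growth factor between classes
MIN_CHUNK = 96            # assumed smallest chunk size, in bytes

def slab_classes(max_size=PAGE_SIZE):
    sizes, size = [], MIN_CHUNK
    while size < max_size:
        sizes.append(size)
        size = int(size * GROWTH_FACTOR)
        if size % 8:                  # round up to 8-byte alignment
            size += 8 - size % 8
    return sizes

def class_for(item_size: int, classes) -> int:
    # An item lands in the smallest class whose chunk fits it; the gap
    # between item size and chunk size is wasted memory, which is why
    # item sizes just over a class boundary cache inefficiently.
    for i, chunk in enumerate(classes):
        if item_size <= chunk:
            return i
    raise ValueError("item larger than a page")

classes = slab_classes()
idx = class_for(100, classes)   # a 100-byte item lands in the 120-byte class
```

Eviction follows from this layout: when no free chunks remain in an item's slab class, memcached evicts from that class's LRU, even if other classes still have free memory.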
Finally, we get our hands really dirty with concrete Python examples of using memcached in the wild: how to use memcached to speed up an object's JSON serialization, how we can cache large objects in memcached with either a paginated cache or a 2-phase fetch approach, what a "thundering herd" is and how to prevent it, and how we can write smart decorators to minimize code rewriting and avoid cache misses.