
Memcached for small objects


Memcached quite often ends up as a store for very small objects (a small key and some integer value), though it isn't really designed for this kind of work by default. Current memory management is based on slabs (200 of them), where objects are grouped by similar size – the actual sizes are pre-defined at startup based on a few configuration parameters.

By default memcached builds its slabs on the assumption that the smallest object will carry 48 bytes of data (that's without the item header), and increases slab sizes in +25% steps:

slab class   1: chunk size    104 perslab 10082
slab class   2: chunk size    136 perslab  7710
slab class   3: chunk size    176 perslab  5957
slab class   4: chunk size    224 perslab  4681
...

So, in this case, it allocates at least 104 bytes per object, and the next classes are spaced far apart. Fortunately, there are some quick ways to get better efficiency:

Configuration!

There are two parameters for this:

  • -n – minimum space allocated for key+value+flags, defaults to 48
  • -f – chunk growth factor, defaults to 1.25

With the default settings memcached defines just 38 slab classes, with chunk sizes ranging from 104 to 458992 bytes. Even for mixed workloads one can use -n 5 -f 1.05 – this defines ~170 slab classes, where the low end grows in 8-byte steps, and the smallest classes look like this:

slab class   1: chunk size     64 perslab 16384
slab class   2: chunk size     72 perslab 14563
slab class   3: chunk size     80 perslab 13107
...

This gives much higher memory efficiency for larger objects too (6k steps at 100k object sizes, rather than 30k steps with the default configuration). Of course, more slab classes means more eviction queues, and if the distribution of object sizes changes, there will be more memory fragmentation – though that's nothing a restart can't resolve ;-)
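
For reference, this is roughly how memcached derives those chunk sizes from -n and -f (the listings above are what memcached prints at startup with -vv). A simplified sketch, loosely following slabs_init() in slabs.c – the 56-byte item header is an assumption chosen so the numbers line up with the default listing above, and the real code also appends a final class at the maximum item size:

/*
 * Simplified sketch of memcached's slab class setup: grow chunk sizes
 * from (-n + item header) by the -f factor, rounding each size up to
 * the alignment boundary.
 */
#include <stdio.h>

#define ITEM_HEADER_SIZE  56          /* assumed 64-bit per-item overhead */
#define CHUNK_ALIGN_BYTES 8           /* default alignment in slabs.c */
#define SLAB_PAGE_SIZE    (1024 * 1024)

int main(void)
{
    double factor = 1.25;             /* -f */
    int min_chunk = 48;               /* -n */
    int size = ITEM_HEADER_SIZE + min_chunk;

    for (int i = 1; size <= SLAB_PAGE_SIZE / factor; i++) {
        /* round up to the alignment boundary, as slabs.c does */
        if (size % CHUNK_ALIGN_BYTES)
            size += CHUNK_ALIGN_BYTES - (size % CHUNK_ALIGN_BYTES);
        printf("slab class %3d: chunk size %6d perslab %7d\n",
               i, size, SLAB_PAGE_SIZE / size);
        size *= factor;
    }
    return 0;
}

Running this with the defaults reproduces the 104/136/176/224 progression shown earlier; plugging in -n 5 -f 1.05 shows how much denser the low classes become.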

Internal storage

Every data item has a header, which includes pointers to other item structures and additional metadata. One of the obvious things to be fixed in later releases is the CAS (compare-and-swap) metadata (8 bytes per object), which is stored for every object even though it is very rarely used (one needs a special breed of commands for it). Until future versions resolve this, a very dirty hack is changing cas_id to uint8_t in memcached.h (heeeee!).

There are also multiple pointers inside the header – and on 64-bit systems each of them takes 64 bits – though in theory chunks inside memory pages (a slab class can have multiple memory pages assigned) could be addressed with 16-bit pointers. Of course, the easy workaround here is simply compiling memcached as a 32-bit binary – though then it won't be able to address more than ~3GB per instance (and running multiple memcached instances is straightforward).
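
To see where those bytes go, here is a rough approximation of the per-item header (struct _stritem in memcached.h); field names and layout follow the source of that era only loosely, so treat the exact details as an assumption:

/* Sketch of memcached's per-item header; not the verbatim struct. */
#include <stdio.h>
#include <stdint.h>

typedef uint32_t rel_time_t;

typedef struct _stritem {
    struct _stritem *next;        /* LRU list */
    struct _stritem *prev;
    struct _stritem *h_next;      /* hash chain */
    rel_time_t       time;        /* last access time */
    rel_time_t       exptime;     /* expiry time */
    int              nbytes;      /* size of data */
    unsigned short   refcount;
    uint8_t          nsuffix;     /* length of the flags-and-length suffix */
    uint8_t          it_flags;
    uint8_t          slabs_clsid; /* slab class this item lives in */
    uint8_t          nkey;        /* key length */
    uint64_t         cas_id;      /* the 8 bytes the dirty hack above shrinks */
} item;

int main(void)
{
    /* on a 64-bit build the three pointers alone are 24 bytes */
    printf("item header: %zu bytes\n", sizeof(item));
    return 0;
}

On a 64-bit build this sketch comes out to 56 bytes, which is consistent with the 104-byte first chunk in the default listing (56 bytes of header plus the default 48 bytes of payload).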

There's another internal CPU-vs-memory tradeoff: objects end up aligned at 8-byte boundaries. Hacking CHUNK_ALIGN_BYTES in slabs.c (I set it to 2) allows chunk sizes to grow in much smaller steps.

Data!

Have small keys. Have small data. Compression at the application level reduces network I/O, means fewer roundtrips and system calls, and gives better memory efficiency in the end. Pack integers inside keys into base-250 or so (skipping whitespace), and store binary data.

More efficient storage usually costs way fewer cycles than the additional cycles spent whenever a cache miss happens :)
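
As a minimal sketch of the key-packing idea (my own example, assuming only the ASCII protocol's rule that keys may not contain whitespace or control characters – which leaves a bit over 220 usable byte values, "base250 or so"):

/*
 * Encode an unsigned 64-bit integer into a short memcached-safe key.
 * The alphabet skips NUL, control characters, space and DEL, leaving
 * 222 usable byte values.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static size_t pack_key(uint64_t v, char *out, size_t outlen)
{
    static char alphabet[222];
    static size_t base = 0;
    if (base == 0) {
        for (int c = 0x21; c <= 0x7E; c++) alphabet[base++] = (char)c;
        for (int c = 0x80; c <= 0xFF; c++) alphabet[base++] = (char)c;
    }

    size_t n = 0;
    do {
        if (n >= outlen) return 0;            /* buffer too small */
        out[n++] = alphabet[v % base];
        v /= base;
    } while (v);
    return n;                                 /* key length, not NUL-terminated */
}

int main(void)
{
    char key[16];
    size_t len = pack_key(1234567890123ULL, key, sizeof(key));
    printf("packed 1234567890123 into %zu bytes\n", len);
    return 0;
}

For typical integer IDs this roughly halves the key length compared to decimal strings.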

Testing & summary

I tried to simulate the most edge case of all edge cases – integer keys and one-byte data objects being inserted into a 64M memcached instance (a rough sketch of such a fill loop is shown after the slab listing below).

  • I could fit 645k objects into regular memcached, and 932k after the factor changes.
  • A simple 32-bit build fit 763k objects, 1164k after the factor changes, and 1180k after the alignment change.
  • After removing CAS support it fit 1378k, and 1461k after packing the keys in base-250.

So, I could end up with a slab size distribution like this (it accommodates objects up to 45k in size):

slab class   1: chunk size     33 perslab 31775
slab class   2: chunk size     34 perslab 30840
slab class   3: chunk size     35 perslab 29959
slab class   4: chunk size     36 perslab 29127
...
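
The fill loop implied by the test could look roughly like this – a sketch over the plain ASCII protocol; the host, port, object count, and the -M flag (which makes memcached return an error instead of evicting, so the cutoff is visible) are my assumptions, not necessarily what was used:

/* Fill a memcached instance with integer keys and one-byte values. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(11211);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    char cmd[128], reply[64];
    for (long i = 0; i < 2000000; i++) {
        /* "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n" */
        int len = snprintf(cmd, sizeof(cmd), "set %ld 0 0 1\r\nX\r\n", i);
        if (write(fd, cmd, len) != len)
            break;
        /* each set gets one reply line; stop counting on the first error */
        ssize_t got = read(fd, reply, sizeof(reply) - 1);
        if (got <= 0 || strncmp(reply, "STORED", 6) != 0) {
            printf("stopped at %ld items\n", i);
            break;
        }
    }
    close(fd);
    return 0;
}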

MySQL!

A 64M MEMORY table can store 2087k (INT, TINYINT) entries. When people aim for no-eviction storage, MySQL can be way more efficient.

Interestingly, the PK takes as much space as the data itself – which simply begs for the PK to be held together with the data, like InnoDB does (that would actually fit 2500k entries in 64M). With custom MySQL engines this shouldn't be too difficult for those who really hit the edge cases, right? :)

Oh well, last I've heard, memcached is going to get storage engine support too – I wonder how much fun hacking those will be (someone ages ago plugged BDB into memcached and called it Tugela.. ;)


