Memcached Memory Allocation

Memcached, a distributed memory caching system, is often used to improve the performance and availability of hosted applications by reducing database load. It creates a shared cache across all application nodes and serves as your application's short-term memory.

Let's find out how memory allocation works in Memcached and how to avoid memory fragmentation while using the platform.

Memcached uses slab allocation instead of allocating memory item by item. This improves memory usage and prevents fragmentation when data expires from the cache.

Each slab consists of 1 MB pages, and each page, in its turn, is divided into an equal number of fixed-size blocks, or chunks. Before storing an item, Memcached determines its size and looks for a slab class with a suitable chunk size. If one exists, the item is written to a free chunk there. If there is no suitable slab class, Memcached creates a new slab and divides it into chunks of the necessary size. If you update an already stored item and its new value no longer fits into the chunk it occupied before, Memcached moves it to another suitable slab.
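The chunk lookup described above can be sketched in Python. The chunk sizes below are hypothetical placeholders, not values from a real instance; the actual sizes depend on the -n and -f settings:

```python
import bisect

# Hypothetical per-class chunk sizes in bytes; real values depend on
# the minimum item size (-n) and the growth factor (-f).
chunk_sizes = [96, 120, 152, 192, 240, 304, 384, 480, 600]

def pick_slab_class(item_size):
    """Return the smallest chunk size that fits the item, or None
    if a new slab class would have to be created."""
    i = bisect.bisect_left(chunk_sizes, item_size)
    return chunk_sizes[i] if i < len(chunk_sizes) else None

print(pick_slab_class(250))  # -> 304: the smallest chunk that fits 250 bytes
```

A 250-byte item lands in the 304-byte class, which already hints at the per-chunk slack discussed next.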


As a result, every instance has multiple pages distributed across the Memcached memory. This allocation method prevents memory fragmentation. On the other hand, it can waste memory if you don't have enough items of a similar size, i.e. only a few chunks in each page are filled. So one more important point is how the stored items are distributed across slab classes.
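To get a feel for this overhead, here is a rough Python estimate of how many bytes a single 1 MB page loses when every chunk holds an item smaller than the chunk size (the item and chunk sizes are illustrative, not measured on a real instance):

```python
PAGE_SIZE = 1024 * 1024  # Memcached pages are 1 MB

def wasted_bytes(chunk_size, item_size, page=PAGE_SIZE):
    """Bytes lost per full page: slack inside each chunk plus the
    unusable tail of the page that doesn't fit a whole chunk."""
    chunks = page // chunk_size
    slack = chunks * (chunk_size - item_size)
    tail = page - chunks * chunk_size
    return slack + tail

# Storing 250-byte items in 304-byte chunks wastes roughly 18% of the page.
print(wasted_bytes(304, 250))  # -> 186326 bytes
```

With a smaller growth factor the chunk sizes are closer together, so the slack per item shrinks at the cost of more slab classes.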

The platform lets you modify the slab growth factor while your application is running. To do so, click the Config button next to the Memcached node, navigate to the conf directory, and open the memcached file. Edit it, e.g. in the following way:

OPTIONS="-vv 2>> /var/log/memcached/memcached.log -f 2 -n 32"


In this example, -f 2 sets the chunk growth factor so that each slab class doubles the chunk size of the previous one, giving 14 slab classes, while the value after -n defines the minimum space allocated for an item's key, flags, and value.
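The number of slab classes follows from the geometric series of chunk sizes. A rough sketch of that series (the ~80-byte base, standing in for -n 32 plus Memcached's per-item overhead, is an assumption for illustration):

```python
def slab_chunk_sizes(base=80, factor=2.0, page=1024 * 1024):
    """Chunk sizes grow by `factor` until the next class would exceed
    a fraction of the page; the final class is one full page."""
    sizes, size = [], base
    while size <= page / factor:
        sizes.append(int(size))
        size *= factor
    sizes.append(page)
    return sizes

classes = slab_chunk_sizes()
print(len(classes))  # -> 14 slab classes with a factor of 2
```

A smaller factor (the default is 1.25) produces many more, finer-grained classes, which is why the default run below shows classes only tens of bytes apart.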

We’ve got the following results:

  • Chunk details:

#  Item_Size  Max_age  Pages  Count  Full?  Evicted  Evict_Time  OOM
3     320B      550s       1    113    yes        0           0    0
4     640B      681s       1    277    yes        0           0    0
  • Memory usage:

                    total  used  free  shared  buffers  cached
Mem:                  128    84    43       0        0      70
-/+ buffers/cache:           14   113
Swap:                   0     0     0

Now let's restore the default settings and check what values we get:

OPTIONS="-vv 2>> /var/log/memcached/memcached.log"

  • Chunk details:

#  Item_Size  Max_age  Pages  Count  Full?  Evicted  Evict_Time  OOM
5     240B      765s       1     27    yes        0           0    0
6     304B      634s       1     93    yes        0           0    0
7     384B      634s       1    106    yes        0           0    0
8     480B      703s       1    133    yes        0           0    0
9     600B      634s       1     57    yes        0           0    0
  • Memory usage:

                    total  used  free  shared  buffers  cached
Mem:                  128    87    40       0        0      70
-/+ buffers/cache:           17   110
Swap:                   0     0     0

You can also add the -L parameter to enable large memory pages, which reduces the number of TLB misses and improves performance.
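For example, the OPTIONS line from above with large pages enabled might look like this (assuming your OS has huge pages configured):

```shell
OPTIONS="-vv 2>> /var/log/memcached/memcached.log -f 2 -n 32 -L"
```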

Thanks to this easy and straightforward optimization, we can make much better use of the allocated memory.

Updated on March 10, 2026