Object Pooling

Object pooling is a technique for reusing memory in garbage-collected runtimes, reducing allocation pressure. We might wish to reuse objects because initialisation is expensive, or because we want to avoid garbage collection pauses.

In garbage-collected environments, it is usual to rely on the garbage collector to recycle memory once objects are no longer reachable. Modern runtimes such as the JVM and .NET have been highly tuned to make object allocation and automatic memory reuse extremely efficient.

For most use cases, relying on the garbage collector is acceptable for application performance, and it is preferable to manual memory management, which adds code complexity and increases the chance of bugs.

When it is necessary to manage object life-cycles manually, several things need to be considered.

Object allocation is optimised

To keep allocation fast, runtimes like the JVM use a technique called thread-local allocation buffers (TLABs), where each thread has its own private memory area for allocating new objects. Without this, it would be necessary to synchronize access to a global memory pool, causing contention between threads. 

When allocating a new object, a thread simply claims the required amount of memory from its private buffer and bumps a pointer value. If the object lives long enough to be promoted to another memory region, it is copied out of the TLAB into a survivor space (or elsewhere, depending on the garbage collector in use). If there is insufficient space in the TLAB, memory is allocated from the global pool on the slow path.
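
To make the fast path concrete, here is a minimal sketch, in Java, of bump-pointer allocation within a thread-private buffer. It is purely illustrative: the class and field names are invented, and the real TLAB machinery lives inside the JVM rather than in application code.

```java
// Illustrative sketch of bump-pointer allocation in a thread-local buffer.
// This is not the JVM's real TLAB code; it only shows the fast-path idea:
// claim memory by advancing an offset, fall back to a slower path when full.
final class ThreadLocalBuffer {
    private final byte[] memory;   // the thread's private region
    private int top;               // bump pointer: next free offset

    ThreadLocalBuffer(int sizeInBytes) {
        this.memory = new byte[sizeInBytes];
    }

    /** Fast path: returns the offset of the claimed block, or -1 if the buffer is full. */
    int allocate(int sizeInBytes) {
        if (top + sizeInBytes > memory.length) {
            return -1; // caller must take the slow path (global pool or a new buffer)
        }
        int offset = top;
        top += sizeInBytes; // "bump" the pointer past the claimed block
        return offset;
    }
}
```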

Better cache use

Object allocation, while cheap thanks to techniques such as the TLAB, still causes a program to churn through memory, with the associated cache misses. For objects with a medium lifetime, or when garbage-collection pauses must be avoided, it can be a better use of memory to pool and reuse instances.

Object pools can take advantage of the temporal locality of memory caches by acting as a LIFO (last-in, first-out) structure. When a pooled instance is released to the pool, it has just been in use by the program, so its backing memory is likely to still be in one of the CPU caches. When another instance of the object is required, the pool should return the most-recently released object, as it is the most likely to avoid cache misses upon re-use.
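
As a sketch of this idea, the following single-threaded pool uses an ArrayDeque as a stack, so acquire() hands back the most-recently released instance. The StackPool name and the use of a Supplier factory are illustrative choices, not a standard API.

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Minimal single-threaded LIFO pool: release pushes onto a stack, acquire pops,
// so the most-recently released (and therefore cache-warm) instance is reused first.
final class StackPool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    StackPool(Supplier<T> factory) {
        this.factory = factory;
    }

    T acquire() {
        T instance = free.pollFirst();   // most recently released, if any
        return instance != null ? instance : factory.get();
    }

    void release(T instance) {
        free.addFirst(instance);         // LIFO: goes to the head of the deque
    }
}
```

A caller would typically reset an instance before re-use (for example, clearing a StringBuilder obtained from a StackPool<StringBuilder>), so that stale state does not leak from one use to the next.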

If a thread has been pinned to a specific CPU core, as is often the case in high-performance applications, then it can repeatedly reuse the same chunk of memory, keeping that memory resident in the core's private caches and avoiding cache-to-cache transfers.

Pooling problems

While it reduces memory churn and cache misses, pooling comes with its own problems. If we pre-size the pools too generously, or if a burst of inputs causes a large number of objects to be created, we can end up with many effectively dead objects occupying valuable memory. Because the pool keeps them reachable, they also add overhead during garbage collection cycles, as there are more live objects to account for.

One way to avoid this is to set a maximum size for the pool, so that excess pooled objects are discarded once a burst has been dealt with. This does mean there will be some memory churn, with the possibility of the garbage collector interfering with application throughput.

Pooling must also be safe to use across threads. If we are lucky, we only need to deal with a single thread, where a pool can be a simple stack. However, in a multi-threaded scenario where pooled objects are passed between threads, synchronization will be required to hand released objects back to the pool.
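
Combining the two previous points, here is a sketch of a bounded, thread-safe pool built on LinkedBlockingDeque. The capacity bound means excess objects from a burst are simply dropped on release, and the deque's own synchronisation handles hand-off between threads. The names and structure are illustrative assumptions, not a prescribed design.

```java
import java.util.concurrent.LinkedBlockingDeque;
import java.util.function.Supplier;

// Sketch of a bounded, thread-safe pool. Released objects go to the head of the
// deque (preserving the LIFO, cache-friendly ordering), objects beyond maxSize
// are dropped and left for the garbage collector, and the deque handles
// synchronisation between threads.
final class BoundedPool<T> {
    private final LinkedBlockingDeque<T> free;
    private final Supplier<T> factory;

    BoundedPool(int maxSize, Supplier<T> factory) {
        this.free = new LinkedBlockingDeque<>(maxSize);
        this.factory = factory;
    }

    T acquire() {
        T instance = free.pollFirst();   // most recently released, or null if empty
        return instance != null ? instance : factory.get();
    }

    void release(T instance) {
        // offerFirst fails when the pool is at capacity; the object is then
        // discarded and reclaimed by the GC once no other references remain.
        free.offerFirst(instance);
    }
}
```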

When using object pooling, careful tuning of the pool size and the number of pre-allocated instances is required. This exercise should be carried out by modelling realistic system inputs to determine the steady-state number of instances required for a given type. Once this experiment has been completed, it should be possible to safely initialise object pools with the correct number of instances, so that memory churn is minimised.

Finally, one of the main issues experienced with object pooling is corruption of object state due to use-after-free bugs. It is a common error to return an instance to the pool while mistakenly retaining a reference to it in some other data structure, so the same object later ends up in use by two owners at once. For this reason, the best pooling schemes have some kind of reference-checking built in. This checking can be enabled during development and testing, and optionally disabled in production deployments to reduce overhead.
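
One simple way to build such checking in is to track, on each pooled object, whether it currently belongs to the pool, and to guard every use with that flag. The sketch below assumes a hypothetical pool.checks.enabled system property to toggle the checks; the class and method names are likewise invented for illustration.

```java
// Sketch of simple use-after-free and double-release checking for pooled objects.
// The pool calls onAcquire()/onRelease(); pooled classes call checkLive() at the
// start of their public methods. The checks are driven by a system property so
// they can be enabled in development and disabled in production.
abstract class PooledObject {
    static final boolean CHECKS_ENABLED = Boolean.getBoolean("pool.checks.enabled");

    private boolean released = false;

    /** Called by the pool when the object is handed out. */
    final void onAcquire() {
        released = false;
    }

    /** Called by the pool when the object is returned. */
    final void onRelease() {
        if (CHECKS_ENABLED && released) {
            throw new IllegalStateException("Object released to the pool twice");
        }
        released = true;
    }

    /** Guard: pooled classes call this before touching their state. */
    protected final void checkLive() {
        if (CHECKS_ENABLED && released) {
            throw new IllegalStateException("Object used after release to the pool");
        }
    }
}
```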

Summary

Object pooling is a useful technique for reducing memory churn, but it comes at a cost: the requirement to perform manual memory management, with all the pitfalls that we tend to ignore when using garbage-collected runtimes. If pooling is used, safety mechanisms should be employed to help detect and fix the use-after-free bugs and memory leaks that can result from object reuse.