Netty Source Code: the arena, chunk, page, and subpage Concepts
In this section we walk through several very important concepts in Netty's memory allocation: arena, chunk, page, and subpage.
First, let's look at what an arena is:
When a thread requests memory, it first obtains the current thread's PoolThreadCache object through a ThreadLocal mechanism and then calls its allocate method. Allocation happens in two stages. The first is the cache: the request is first tried against previously cached memory. The second is the arena: if nothing suitable is found in the cache, the memory is obtained from the memory pool through the arena.
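The cache-then-arena fallback described above can be sketched roughly as follows. This is a hypothetical simplification: the real PoolThreadCache caches freed buffers per size class and region type, not raw sizes, and the class and method names here are illustrative only.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative stand-in for the PoolThreadCache-then-PoolArena flow.
class CacheFirstAlloc {
    // Per-thread cache of previously freed sizes (stand-in for PoolThreadCache).
    private final Deque<Integer> cache = new ArrayDeque<>();
    private int arenaAllocations = 0; // counts fall-through allocations from the arena

    // Try the thread-local cache first; fall back to the arena on a miss.
    int allocate(int size) {
        if (cache.remove(Integer.valueOf(size))) {
            return size;          // cache hit: reuse previously freed memory
        }
        arenaAllocations++;       // cache miss: go to the arena (the memory pool)
        return size;
    }

    // Freed memory goes back into the thread-local cache for reuse.
    void free(int size) {
        cache.push(size);
    }

    int arenaAllocations() {
        return arenaAllocations;
    }
}
```

The point of this layering is that a cache hit never touches the shared arena, which keeps the common allocation path free of cross-thread contention.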
Let's look at the arena's data structure:
An arena maintains several ChunkLists, each representing a range of usage rates for the Chunks it holds. Chunks move between these lists dynamically: after every allocation a Chunk's usage rate is recalculated, and the Chunk is moved into whichever ChunkList that rate now belongs to. On every allocation, the arena first locates the appropriate ChunkList through a fixed algorithm, then picks a Chunk from it to allocate from. Let's go into the code and get a concrete feel for this:
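The usage-rate bookkeeping can be made concrete with a small sketch. The usage formula below matches how Netty's PoolChunk computes it (100 minus the free fraction), while the `listFor` thresholds are an illustrative simplification: the list names mirror Netty's qInit/q000/q025/q050/q075/q100, but the real lists have overlapping min/max usage ranges.

```java
// Sketch: mapping a chunk's usage percentage to a chunk list.
class ChunkListDemo {
    static final int CHUNK_SIZE = 16 * 1024 * 1024; // 16 MiB per chunk (default)

    // Usage in percent: 100 minus the free fraction of the chunk.
    static int usage(int freeBytes) {
        if (freeBytes == 0) {
            return 100;
        }
        return (int) (100 - (long) freeBytes * 100 / CHUNK_SIZE);
    }

    // Pick a list name from the usage percentage (illustrative, non-overlapping
    // thresholds; Netty's actual ranges overlap to reduce list churn).
    static String listFor(int usage) {
        if (usage < 25)  return "qInit";
        if (usage < 50)  return "q000";
        if (usage < 75)  return "q025";
        if (usage < 100) return "q050";
        return "q100";
    }
}
```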
enum SizeClass {
    Tiny,
    Small,
    Normal
}
static final int numTinySubpagePools = 512 >>> 4;
final PooledByteBufAllocator parent;
private final int maxOrder;
final int pageSize;
final int pageShifts;
final int chunkSize;
final int subpageOverflowMask;
final int numSmallSubpagePools;
private final PoolSubpage<T>[] tinySubpagePools;
private final PoolSubpage<T>[] smallSubpagePools;
private final PoolChunkList<T> q050;
private final PoolChunkList<T> q025;
private final PoolChunkList<T> q000;
private final PoolChunkList<T> qInit;
private final PoolChunkList<T> q075;
private final PoolChunkList<T> q100;
private final List<PoolChunkListMetric> chunkListMetrics;
// Metrics for allocations and deallocations
private long allocationsNormal;
// We need to use the LongCounter here as this is not guarded via synchronized block.
private final LongCounter allocationsTiny = PlatformDependent.newLongCounter();
private final LongCounter allocationsSmall = PlatformDependent.newLongCounter();
private final LongCounter allocationsHuge = PlatformDependent.newLongCounter();
private final LongCounter activeBytesHuge = PlatformDependent.newLongCounter();
private long deallocationsTiny;
private long deallocationsSmall;
private long deallocationsNormal;
// We need to use the LongCounter here as this is not guarded via synchronized block.
private final LongCounter deallocationsHuge = PlatformDependent.newLongCounter();
// Number of thread caches backed by this arena.
final AtomicInteger numThreadCaches = new AtomicInteger();
That is the arena's data structure.
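One line in that listing deserves a quick worked example: `numTinySubpagePools = 512 >>> 4` means there are 32 tiny subpage pools, one per 16-byte step below 512 bytes, and the pool index for a tiny size is simply the size divided by 16. This is a sketch of just the index arithmetic, not the full PoolArena lookup code:

```java
// Sketch of the tiny subpage pool sizing and index math from PoolArena.
class SubpagePoolIndex {
    static final int NUM_TINY = 512 >>> 4; // 32 tiny pools, one per 16-byte step

    // Tiny sizes (< 512 bytes) are normalized to multiples of 16,
    // so the pool index is the size shifted right by 4, i.e. size / 16.
    static int tinyIdx(int normCapacity) {
        return normCapacity >>> 4;
    }

    // Requests below 512 bytes fall into the tiny size class.
    static boolean isTiny(int normCapacity) {
        return normCapacity < 512;
    }
}
```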
A Chunk is 16MB by default. A single allocation almost never needs an entire Chunk at once, so the Chunk is split into smaller Pages.
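The 16MB figure follows from Netty's defaults of pageSize = 8192 and maxOrder = 11, since chunkSize = pageSize << maxOrder. A quick sketch of the arithmetic:

```java
// Sketch: how the default chunk size is derived from page size and maxOrder.
class ChunkGeometry {
    static final int PAGE_SIZE = 8192; // 8 KiB default page
    static final int MAX_ORDER = 11;   // default depth of the chunk's buddy tree

    // chunkSize = pageSize << maxOrder = 8 KiB * 2^11 = 16 MiB
    static int chunkSize() {
        return PAGE_SIZE << MAX_ORDER;
    }

    // A chunk therefore contains 2048 pages.
    static int pagesPerChunk() {
        return chunkSize() / PAGE_SIZE;
    }
}
```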
A Chunk is divided into 8KB Pages, so memory can then be handed out in units of Pages. But if I only need 2KB of memory, giving me a whole Page would again be wasteful, so each Page is divided further into smaller subpages. Let's look at the subpage's data structure:
//which chunk this subpage belongs to
final PoolChunk<T> chunk;
private final int memoryMapIdx;
private final int runOffset;
private final int pageSize;
//bitmap recording this subpage's memory allocation state
private final long[] bitmap;
PoolSubpage<T> prev;
PoolSubpage<T> next;
boolean doNotDestroy;
int elemSize;
private int maxNumElems;
private int bitmapLength;
private int nextAvail;
private int numAvail;
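To see how the `bitmap` field above records allocation, here is a hedged sketch of bitmap-based element allocation: each bit marks one equal-sized element of the page, and allocating means finding and setting the first clear bit. This is simplified from PoolSubpage's actual nextAvail/findNextAvail logic, and the class name is illustrative.

```java
// Sketch: per-element allocation tracking with a long[] bitmap,
// simplified from PoolSubpage (each long word covers 64 elements).
class BitmapAlloc {
    private final long[] bitmap;
    private final int maxNumElems;

    BitmapAlloc(int pageSize, int elemSize) {
        maxNumElems = pageSize / elemSize;          // equal-sized elements per page
        bitmap = new long[(maxNumElems + 63) >>> 6]; // one bit per element
    }

    // Returns the index of the allocated element, or -1 if the page is full.
    int allocate() {
        for (int i = 0; i < bitmap.length; i++) {
            long bits = bitmap[i];
            if (~bits != 0) { // at least one clear bit in this word
                int j = Long.numberOfTrailingZeros(~bits); // first free element
                int idx = (i << 6) + j;
                if (idx >= maxNumElems) {
                    return -1; // clear bits beyond maxNumElems are padding
                }
                bitmap[i] |= 1L << j; // mark the element as used
                return idx;
            }
        }
        return -1;
    }

    // Clear the element's bit so it can be handed out again.
    void free(int idx) {
        bitmap[idx >>> 6] &= ~(1L << (idx & 63));
    }
}
```

With pageSize = 8192 and elemSize = 2048 (the 2KB request from the example above), the page holds four elements, so a single bit per element is all the bookkeeping a subpage needs.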