Netty4 study notes


Architecture

Server side:



Normally, all handlers in a server-side pipeline are executed in the same thread, sequentially. Alternatively, a handler can be executed in a separate thread, asynchronously, by adding it to the pipeline with a separate EventExecutorGroup. Netty will then wrap the invocation in a OneTimeTask and execute it in that EventExecutorGroup.

EventExecutorGroup separateExecutorGroup = new DefaultEventExecutorGroup(10);

p.addLast(separateExecutorGroup, bizHandler);


On the server side, each channel is identified by the pair (client IP address, client port). Multiple messages sent from the client over the same channel are handled by the same server handler instance, in the same thread, sequentially. This holds regardless of whether a separate EventExecutorGroup is assigned to execute the handler (DefaultEventExecutorGroup creates DefaultEventExecutor instances, which inherit from SingleThreadEventExecutor; each one takes a task from its internal queue and runs it, one at a time). Messages sent via different channels (each call to the Netty client's bootstrap.connect(server address, server port) creates a different channel on the Netty server) may be handled by different handler instances and different threads.


Based on the above discussion, a channel handler is effectively confined to a single thread per channel and can be treated as thread safe: instance-level (class member) variables can be declared and used without synchronization.
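
For example, a per-channel counter can be kept as a plain instance field. A minimal sketch (the handler and field names are illustrative; this only holds when the handler is not shared/@Sharable across channels):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    // One instance per channel (added via a ChannelInitializer), so only one thread ever touches it.
    public class CountingHandler extends ChannelInboundHandlerAdapter {

        private int messagesSeen;  // plain field, no synchronization needed

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            messagesSeen++;
            ctx.fireChannelRead(msg);  // pass the message to the next handler in the pipeline
        }
    }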


Inbound handlers and outbound handlers can be added to the same pipeline and are executed by the same I/O thread. This is different from Netty 3, where outbound handlers are executed in the business-logic thread.


It is recommended to use a separate thread pool for complex business logic; otherwise the I/O thread can be blocked for a long time and performance degrades. A decoded request can be put in an internal queue, and a separate set of threads polls the queue, executes the actual business logic, and writes the response back on the same channel. However, this may cause multiple responses to be sent back on the same channel out of order, so the client must explicitly track request/response correspondence, which adds some overhead.
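
A minimal sketch of this hand-off, assuming the decoded request and response are plain POJOs carrying a correlation id (BizRequest, BizResponse and process() are illustrative names, not Netty API):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.SimpleChannelInboundHandler;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BizDispatchHandler extends SimpleChannelInboundHandler<BizRequest> {

        // Business-logic threads, separate from the Netty I/O threads.
        private static final ExecutorService BIZ_POOL = Executors.newFixedThreadPool(16);

        @Override
        protected void channelRead0(final ChannelHandlerContext ctx, final BizRequest request) {
            BIZ_POOL.submit(new Runnable() {
                @Override
                public void run() {
                    BizResponse response = process(request);                 // slow work, off the I/O thread
                    response.setCorrelationId(request.getCorrelationId());   // client re-orders responses by this id
                    ctx.writeAndFlush(response);                             // the write is scheduled back onto the channel's I/O thread
                }
            });
        }

        private BizResponse process(BizRequest request) {
            // ... actual business logic ...
            return new BizResponse();
        }
    }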


Receive buffer configuration:

    FixedRecvByteBufAllocator (default) or AdaptiveRecvByteBufAllocator
    ChannelOption.RCVBUF_ALLOCATOR -> AdaptiveRecvByteBufAllocator.DEFAULT

    UnpooledByteBufAllocator (default) or PooledByteBufAllocator
    ChannelOption.ALLOCATOR -> PooledByteBufAllocator.DEFAULT
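
These are typically set as child options on the ServerBootstrap, for example (a sketch; the bootstrap is otherwise configured as usual):

    ServerBootstrap b = new ServerBootstrap();
    // Size each read buffer adaptively based on the message sizes actually observed.
    b.childOption(ChannelOption.RCVBUF_ALLOCATOR, AdaptiveRecvByteBufAllocator.DEFAULT);
    // Use the pooled ByteBuf allocator for the child channels.
    b.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);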

To use PooledByteBufAllocator, one must explicitly call ReferenceCountUtil.release() after decoding, and the release must be performed on the same thread that allocated the buffer, since buffers are cached in a ThreadLocal. Releasing the same buffer more than once must also be avoided. Memory leaks can happen if the ByteBuf pool is not used correctly.
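
A minimal sketch of releasing on the I/O thread right after decoding (MyDecoder and the decode step are illustrative; note that ByteToMessageDecoder and SimpleChannelInboundHandler release the incoming message for you):

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.ReferenceCountUtil;

    public class MyDecoder extends ChannelInboundHandlerAdapter {

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ByteBuf buf = (ByteBuf) msg;
            try {
                Object decoded = decode(buf);      // hypothetical decode step
                ctx.fireChannelRead(decoded);
            } finally {
                ReferenceCountUtil.release(buf);   // release exactly once, on the allocating I/O thread
            }
        }

        private Object decode(ByteBuf buf) {
            // ... read fields from buf ...
            return new Object();
        }
    }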


Client side:

Bootstrap is not thread safe, so create one Bootstrap per server IP/port combination and initialize it in a synchronized method. Bootstrap's connect() method is thread safe, though.

Multiple connect requests on the same Bootstrap share the same EventLoopGroup.
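
A minimal sketch of reusing one Bootstrap for several connections (host, port and MyClientInitializer are illustrative):

    EventLoopGroup group = new NioEventLoopGroup();
    Bootstrap b = new Bootstrap()
            .group(group)
            .channel(NioSocketChannel.class)
            .handler(new MyClientInitializer());   // hypothetical ChannelInitializer

    ChannelFuture f1 = b.connect(host, port);      // creates channel 1
    ChannelFuture f2 = b.connect(host, port);      // creates channel 2, same EventLoopGroup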


Normally each "connect" call of bootstrap creates a new channel. However, a typical rest client may send concurrent restful request to same server in multiple thread and is normally able to complete fast, so it is a good idea to store channel in a pool to avoid creating new channel per request. The channel pool will create a few channels on initialization. The channel pool is wired to a NettyHttpClientHandler which is shared by all channels' pipeline. Each request to server will look for a free connection in the pool. At the end of ChannelHandler's channelRead0 method, the channel will be freed and return to the pool. If server sends header ''CONNECTION" = "close", the channel is physically closed. 




Common:

EventLoopGroup thread count: normally between (CPU core count + 1) and 2 × CPU core count.
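
For example (a sketch; the exact count should be tuned for the workload):

    int cores = Runtime.getRuntime().availableProcessors();
    // Somewhere between cores + 1 and 2 * cores; the upper bound is used here.
    EventLoopGroup workerGroup = new NioEventLoopGroup(2 * cores);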


SSL is quite expensive to use, especially when the ChannelInitializer is created with sslCtx = sslContextFactory.create() in its constructor.
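
A sketch of what that initializer can look like (sslContextFactory is the factory mentioned above and is assumed to return an io.netty.handler.ssl.SslContext; everything else is illustrative):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.ssl.SslContext;

    public class SecureServerInitializer extends ChannelInitializer<SocketChannel> {

        private final SslContext sslCtx;

        public SecureServerInitializer(SslContextFactory sslContextFactory) throws Exception {
            this.sslCtx = sslContextFactory.create();   // expensive: done once, in the constructor
        }

        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(sslCtx.newHandler(ch.alloc()));   // cheap per-channel step
            // ... other handlers ...
        }
    }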


Common configuration with Netty:

    TCP:

    SO_SNDBUF and SO_RCVBUF: 32 KB is normally a good choice (a ServerBootstrap sketch setting these options follows this list)

    SO_LINGER: whether unsent TCP data is discarded after close (disabled by default)

    SO_TIMEOUT: how long a blocking read waits before a SocketTimeoutException is thrown. Depending on whether the SO_KEEPALIVE option is set, the connection can then be closed, or a heartbeat message can be sent to detect whether the connection is still alive.

    TCP_NODELAY: disables the Nagle algorithm; packets are not combined and are sent as soon as possible

    HTTP keep-alive setting: whether a single TCP connection is used to send multiple requests. A Connection: Keep-Alive header is set by the client. In Netty, the server's HTTP handler has to interpret this header explicitly:

        boolean keepAlive = HttpHeaders.isKeepAlive(request);  // read the Connection header from the request
        if (!keepAlive) {
            channel.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
        } else {
            response.headers().set(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.KEEP_ALIVE);
            channel.writeAndFlush(response);
        }
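
A sketch of setting the TCP options above on a ServerBootstrap (the buffer sizes follow this note's suggestion; the SO_LINGER value is illustrative):

    ServerBootstrap b = new ServerBootstrap();
    b.childOption(ChannelOption.SO_SNDBUF, 32 * 1024);    // 32 KB send buffer
    b.childOption(ChannelOption.SO_RCVBUF, 32 * 1024);    // 32 KB receive buffer
    b.childOption(ChannelOption.SO_LINGER, 0);            // discard unsent data immediately on close
    b.childOption(ChannelOption.TCP_NODELAY, true);       // disable the Nagle algorithm
    b.childOption(ChannelOption.SO_KEEPALIVE, true);      // enable TCP-level keep-alive probes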


    

IdleStateHandler can be used to exchange heartbeats etc. when the connection is idle. However, if the timeout values (set in IdleStateHandler's constructor) are not chosen carefully, it tends to add a lot of ReaderIdleTimeoutTask, WriterIdleTimeoutTask or AllIdleTimeoutTask instances to the NioEventLoop's task queue as ScheduledFutures. A longer timeout value causes a task to stay in the queue longer before it can be processed and garbage collected, so shorter timeout values are preferred. A sketch follows.
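
A minimal sketch of wiring an IdleStateHandler with a heartbeat (p is the channel pipeline as in the earlier snippet; the timeout values and the PING payload are illustrative):

    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelDuplexHandler;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.timeout.IdleState;
    import io.netty.handler.timeout.IdleStateEvent;
    import io.netty.handler.timeout.IdleStateHandler;
    import io.netty.util.CharsetUtil;

    // readerIdle = 60s, writerIdle = 30s, allIdle disabled (0)
    p.addLast(new IdleStateHandler(60, 30, 0));
    p.addLast(new ChannelDuplexHandler() {
        @Override
        public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
            if (evt instanceof IdleStateEvent) {
                IdleState state = ((IdleStateEvent) evt).state();
                if (state == IdleState.WRITER_IDLE) {
                    // Nothing written for a while: send a heartbeat so the peer knows we are alive.
                    ctx.writeAndFlush(Unpooled.copiedBuffer("PING", CharsetUtil.UTF_8));
                } else if (state == IdleState.READER_IDLE) {
                    // Nothing read for a while: assume the peer is gone and close the channel.
                    ctx.close();
                }
            } else {
                super.userEventTriggered(ctx, evt);
            }
        }
    });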


Spring Integration:

Client side: a customized ClientHttpRequestFactory for Netty is plugged into Spring's RestTemplate.

NettyClientHttpRequestFactory creates a NettyClientHttpRequest, which wraps a NettyClient.


Server side: create a DispatcherServlet with a mock servlet context and config. A customized converter is needed to convert Netty's io.netty.handler.codec.http.HttpRequest into Spring's MockHttpServletRequest, and to convert Spring's MockHttpServletResponse back into io.netty.handler.codec.http.FullHttpResponse.


Reference: