Brief introduction
Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients.

In other words, Netty is an asynchronous (asynchronous in the sense of handing work off between threads, not asynchronous IO in the AIO sense) and event-driven network application framework, used to quickly develop maintainable, high-performance network servers and clients.
Hello World
Server
```java
public static void main(String[] args) {
    new ServerBootstrap()
            // Group of EventLoops; Netty uses it internally to handle the accept event
            .group(new NioEventLoopGroup())
            // Choose the ServerSocketChannel implementation: NIO (vs. OIO)
            .channel(NioServerSocketChannel.class)
            // The ChannelInitializer initializes the NioSocketChannel that reads and
            // writes data with the client; its job is to add the other handlers
            .childHandler(new ChannelInitializer<NioSocketChannel>() {
                @Override // Called after the channel has been initialized
                protected void initChannel(NioSocketChannel channel) {
                    channel.pipeline().addLast(new StringDecoder());
                    channel.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                            System.out.println(msg);
                        }
                    });
                }
            })
            .bind(25565);
}
```
Client
```java
public static void main(String[] args) throws InterruptedException {
    new Bootstrap()
            .group(new NioEventLoopGroup())
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<NioSocketChannel>() {
                @Override // Called after the connection is established
                protected void initChannel(NioSocketChannel channel) {
                    channel.pipeline().addLast(new StringEncoder());
                }
            })
            .connect(new InetSocketAddress("localhost", 25565))
            .sync()       // Block until the connection is established
            .channel()    // The connection object
            .writeAndFlush("Hello World");
}
```
Components
EventLoop
An EventLoop is essentially a single-threaded executor (which also maintains a Selector); its run method handles the continuous stream of IO events on its Channels.
Inheritance relationships (interfaces):

- First, it inherits from java.util.concurrent.ScheduledExecutorService, so it contains all the methods of a thread pool
- Second, it inherits from Netty's own OrderedEventExecutor, which
  - provides boolean inEventLoop(Thread thread) to determine whether a thread belongs to this EventLoop
  - provides parent() to find out which EventLoopGroup it belongs to
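The inEventLoop check drives much of Netty's thread-confinement logic. As a rough illustration (a plain-Java sketch of the idea, not Netty's actual implementation), a single-thread executor can remember its worker thread and compare it to the caller:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

// Plain-Java sketch of the inEventLoop(Thread) idea: a single-thread
// executor that can tell whether the calling thread is its own worker.
public class InEventLoopSketch {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final AtomicReference<Thread> workerThread = new AtomicReference<>();

    public InEventLoopSketch() {
        // Capture the worker thread once, from inside the executor itself
        worker.submit(() -> workerThread.set(Thread.currentThread()));
    }

    public boolean inEventLoop() {
        return Thread.currentThread() == workerThread.get();
    }

    // Run inEventLoop() on the worker thread and return the answer
    public boolean checkFromWorker() throws Exception {
        return worker.submit(this::inEventLoop).get();
    }

    public void shutdown() {
        worker.shutdown();
    }

    public static void main(String[] args) throws Exception {
        InEventLoopSketch loop = new InEventLoopSketch();
        System.out.println(loop.checkFromWorker()); // true: asked from the worker itself
        System.out.println(loop.inEventLoop());     // false: asked from the main thread
        loop.shutdown();
    }
}
```

Netty uses exactly this kind of check to decide whether a handler call can run directly or must be submitted as a task to the owning EventLoop.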
EventLoopGroup

An EventLoopGroup is a group of EventLoops. A Channel generally calls the register method of an EventLoopGroup to bind itself to one EventLoop in the group; all subsequent IO events on that Channel are then handled by that same EventLoop (which guarantees thread safety while processing IO events).
It inherits from Netty's own EventExecutorGroup, which

- implements the Iterable interface, providing the ability to iterate over the EventLoops
- provides next(), which returns the next EventLoop in the group
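The hand-out behaviour of next() is, in essence, round-robin over the group's members. A hedged sketch in plain Java (illustrative names, not Netty's internals):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of EventLoopGroup#next(): hand out the members
// of a fixed group in round-robin order, safely under concurrency.
public class RoundRobinGroup<T> {
    private final T[] members;
    private final AtomicInteger idx = new AtomicInteger();

    public RoundRobinGroup(T... members) {
        this.members = members;
    }

    public T next() {
        // floorMod keeps the index valid even after integer overflow
        return members[Math.floorMod(idx.getAndIncrement(), members.length)];
    }

    public static void main(String[] args) {
        RoundRobinGroup<String> group = new RoundRobinGroup<>("loop-0", "loop-1");
        System.out.println(group.next()); // loop-0
        System.out.println(group.next()); // loop-1
        System.out.println(group.next()); // loop-0 again: wrapped around
    }
}
```

Once a Channel has been handed an EventLoop this way, it sticks with it; the round-robin only decides the initial assignment.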
EventLoopServer
```java
public static void main(String[] args) {
    EventLoopGroup group = new DefaultEventLoop();
    new ServerBootstrap()
            // The first group only handles the accept events of the ServerSocketChannel;
            // the second handles the read/write events of the SocketChannels
            .group(new NioEventLoopGroup(), new NioEventLoopGroup(2))
            .channel(NioServerSocketChannel.class)
            .childHandler(new ChannelInitializer<NioSocketChannel>() {
                @Override
                protected void initChannel(NioSocketChannel channel) {
                    // Run this handler on a separate EventLoopGroup (the DefaultEventLoop above)
                    // so that long-running non-IO work does not block the IO threads
                    // (see the figure below for details)
                    channel.pipeline().addLast(group, "handler1", new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                            ByteBuf buf = (ByteBuf) msg;
                            log.debug(buf.toString(StandardCharsets.UTF_8));
                        }
                    });
                }
            })
            .bind(25565);
}
```
Switching threads between handlers

Key code: io.netty.channel.AbstractChannelHandlerContext#invokeChannelRead
```java
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
    final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
    // The EventExecutor (EventLoop) of the next handler
    EventExecutor executor = next.executor();
    // Is the current thread the same thread as that EventLoop's?
    if (executor.inEventLoop()) {
        next.invokeChannelRead(m);
    } else {
        // Otherwise, submit the call as a task to the next handler's thread
        executor.execute(new Runnable() {
            @Override
            public void run() {
                next.invokeChannelRead(m);
            }
        });
    }
}
```
Channel

Main methods of Channel:

- close() closes the Channel
- closeFuture() is used to react to the Channel being closed
  - sync() waits synchronously for the Channel to close
  - addListener() registers a callback that runs asynchronously when the Channel closes
- pipeline() adds handlers to the pipeline
- write() writes data into the buffer
- writeAndFlush() writes data into the buffer and flushes it out
ChannelFutureClient
```java
public static void main(String[] args) throws InterruptedException {
    ChannelFuture channelFuture = new Bootstrap()...; // set up as in the client example; connect() returns this future

    // Handle the result synchronously:
    // channelFuture.sync();
    // Channel channel = channelFuture.channel();
    // channel.writeAndFlush("Hello World");

    // Handle the result asynchronously:
    channelFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            Channel channel = future.channel();
            channel.writeAndFlush("Hello World");
        }
    });
}
```
Key points

- For synchronous handling, use ChannelFuture#sync to block until the ChannelFuture completes
- For asynchronous handling, add a ChannelFutureListener with ChannelFuture#addListener; it is invoked by the EventLoop thread
- To react to the connection closing, use Channel#closeFuture to obtain the ChannelFuture for the close event
```java
Channel channel = channelFuture.sync().channel();
new Thread(() -> {
    Scanner scanner = new Scanner(System.in);
    while (true) {
        String line = scanner.nextLine();
        if ("q".equals(line)) {
            channel.close(); // Do not put post-close work after this call: close() is
                             // asynchronous, so the relative order would be undefined
            break;
        }
        channel.writeAndFlush(line);
    }
}, "input").start();

ChannelFuture closeFuture = channel.closeFuture();
// Synchronous version:
// log.debug("waiting close...");
// closeFuture.sync();
// log.debug("operation after shutdown");

// Asynchronous version:
closeFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        log.debug("operation after shutdown");
        group.shutdownGracefully();
    }
});
```
Future & Promise
Note first that Netty's Future shares its name with the JDK's Future, but they are two different interfaces: Netty's Future inherits from the JDK's Future, and Netty's Promise in turn extends Netty's Future.
- A JDK Future can only obtain the result by synchronously waiting for the task to finish (whether it succeeds or fails)
- A Netty Future can wait for the task to finish synchronously or be notified of the result asynchronously, but either way the result is only available once the task ends
- A Netty Promise has all the capabilities of a Netty Future and also exists independently of any task: it can be used purely as a container for passing a result between two threads
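The "result container" role of a Promise can be illustrated with the closest JDK analogue, CompletableFuture (this is an analogy, not Netty's Promise API): one thread fills the container in, another waits on it, and no task object is involved at all.

```java
import java.util.concurrent.CompletableFuture;

// Analogy for Netty's Promise using the JDK's CompletableFuture:
// a standalone container that one thread completes and another reads.
public class PromiseAsContainer {
    public static int compute() throws Exception {
        CompletableFuture<Integer> promise = new CompletableFuture<>();
        new Thread(() -> {
            try {
                Thread.sleep(100);                    // simulate the computation
                promise.complete(666);                // like promise.setSuccess(666)
            } catch (InterruptedException e) {
                promise.completeExceptionally(e);     // like promise.setFailure(e)
            }
        }).start();
        return promise.get();                         // blocks until the result arrives
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // 666
    }
}
```

The key difference from a plain Future is visible here: the producer side explicitly sets the result, rather than the result coming from a submitted task's return value.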
io.netty.util.concurrent.Future
```java
public static void main(String[] args) throws ExecutionException, InterruptedException {
    NioEventLoopGroup group = new NioEventLoopGroup();
    EventLoop eventLoop = group.next();
    Future<Integer> future = eventLoop.submit(() -> {
        log.debug("computing...");
        Thread.sleep(2000);
        return 0;
    });
    log.debug("waiting for the result");
    future.addListener(new GenericFutureListener<Future<? super Integer>>() {
        @Override
        public void operationComplete(Future<? super Integer> future) {
            log.debug("the result is {}", future.getNow());
            group.shutdownGracefully();
        }
    });
    log.debug("the main thread keeps running");
}
```
Promise
```java
public static void main(String[] args) throws ExecutionException, InterruptedException {
    EventLoop eventLoop = new NioEventLoopGroup().next();
    DefaultPromise<Integer> promise = new DefaultPromise<>(eventLoop);
    new Thread(() -> {
        log.debug("start computing");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            promise.setFailure(e);
            return; // without this, setSuccess below would throw: the promise is already completed
        }
        promise.setSuccess(666);
    }).start();
    log.debug("waiting for the result");
    log.debug("result: {}", promise.get());
}
```
Handler & Pipeline
A ChannelHandler handles the various events on a Channel, both inbound and outbound. All the ChannelHandlers chained together form the Pipeline.
- An inbound handler is usually a subclass of ChannelInboundHandlerAdapter; it mainly reads client data and writes back results
- An outbound handler is usually a subclass of ChannelOutboundHandlerAdapter; it mainly processes the data being written back
In one sentence: the chain-of-responsibility pattern.
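The pattern the pipeline implements can be reduced to a few lines of plain Java (illustrative names, not Netty's API): each handler may transform the message and decide whether to pass it along.

```java
import java.util.ArrayList;
import java.util.List;

// Bare-bones chain of responsibility: each handler may transform the
// message and decide whether to pass it on to the next handler.
public class MiniPipeline {
    interface Handler {
        Object handle(Object msg); // return null to stop propagation
    }

    private final List<Handler> handlers = new ArrayList<>();

    public MiniPipeline addLast(Handler h) {
        handlers.add(h);
        return this;
    }

    public Object fire(Object msg) {
        for (Handler h : handlers) {
            msg = h.handle(msg);
            if (msg == null) break; // handler consumed the message
        }
        return msg;
    }

    public static void main(String[] args) {
        MiniPipeline p = new MiniPipeline()
                .addLast(msg -> ((String) msg).trim()) // plays the role of a decoder
                .addLast(msg -> "handled:" + msg);     // plays the role of a business handler
        System.out.println(p.fire("  hello  ")); // handled:hello
    }
}
```

Netty's real pipeline adds what this sketch omits: separate inbound/outbound directions, per-handler executors, and the head/tail sentinel handlers.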
Server-side example of the Handler + Pipeline chain of responsibility
```java
public static void main(String[] args) {
    new ServerBootstrap()
            .group(new NioEventLoopGroup())
            .channel(NioServerSocketChannel.class)
            .childHandler(new ChannelInitializer<NioSocketChannel>() {
                @Override
                protected void initChannel(NioSocketChannel channel) throws Exception {
                    ChannelPipeline pipeline = channel.pipeline();
                    // head -> h1 -> h2 -> h3 -> h4 -> h5 -> h6 -> tail
                    pipeline.addLast("h1", new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                            log.debug("1");
                            ctx.fireChannelRead(msg);
                        }
                    }).addLast("h2", new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                            log.debug("2");
                            ctx.fireChannelRead(msg);
                        }
                    }).addLast("h3", new ChannelInboundHandlerAdapter() {
                        @Override
                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                            log.debug("3");
                            // Writing triggers the outbound handlers, from tail towards head
                            channel.writeAndFlush(ctx.alloc().buffer()
                                    .writeBytes("server...".getBytes(StandardCharsets.UTF_8)));
                        }
                    }).addLast("h4", new ChannelOutboundHandlerAdapter() {
                        @Override
                        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
                            log.debug("4");
                            ctx.write(msg, promise);
                        }
                    }).addLast("h5", new ChannelOutboundHandlerAdapter() {
                        @Override
                        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
                            log.debug("5");
                            ctx.write(msg, promise);
                        }
                    }).addLast("h6", new ChannelOutboundHandlerAdapter() {
                        @Override
                        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
                            log.debug("6");
                            ctx.write(msg, promise);
                        }
                    });
                }
            })
            .bind(25565);
}
```
Netty's built-in handler-debugging class

Using EmbeddedChannel avoids the repetitive ServerBootstrap setup that otherwise gets in the way of debugging handlers.
```java
public static void main(String[] args) {
    ChannelInboundHandlerAdapter h1 = new ChannelInboundHandlerAdapter() {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            System.out.println(1);
            super.channelRead(ctx, msg);
        }
    };
    ChannelInboundHandlerAdapter h2 = ...; // same as h1, printing 2
    ChannelInboundHandlerAdapter h3 = ...; // same as h1, printing 3
    EmbeddedChannel channel = new EmbeddedChannel(h1, h2, h3);
    // Simulate an inbound event travelling through the pipeline
    channel.writeInbound(ByteBufAllocator.DEFAULT.buffer()
            .writeBytes("Hello World".getBytes(StandardCharsets.UTF_8)));
}
```
ByteBuf

ByteBuf is Netty's wrapper around byte data.
Creation
Heap-memory based:

```java
ByteBuf buf = ByteBufAllocator.DEFAULT.heapBuffer();
```
Direct-memory based (the default):

```java
ByteBuf buf = ByteBufAllocator.DEFAULT.directBuffer(); // buffer() also returns a direct buffer by default
```
- Direct memory is expensive to allocate and free, but offers better read/write performance (one less memory copy), which makes it a good fit for pooling
- Direct memory puts little pressure on the GC, since it is not managed by JVM garbage collection, but for the same reason it must be released promptly
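The heap/direct distinction ultimately comes from JDK NIO, which Netty's buffers build on; it can be observed with the JDK alone:

```java
import java.nio.ByteBuffer;

// Heap vs direct buffers at the JDK level (which Netty's ByteBuf builds on):
// heap buffers are backed by a byte[] managed by the GC; direct buffers
// live outside the Java heap.
public class HeapVsDirect {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(16);
        ByteBuffer direct = ByteBuffer.allocateDirect(16);
        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
        System.out.println(heap.hasArray());   // true: backed by a byte[]
        System.out.println(direct.hasArray()); // false: no backing array on the heap
    }
}
```

The missing backing array on the direct buffer is exactly why the GC cannot see that memory, and why explicit release matters.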
Pooled vs. unpooled
The greatest benefit of pooling is that ByteBuf instances can be reused:

- Without pooling, a new ByteBuf instance must be created every time; this is expensive for direct memory, and even for heap memory it increases GC pressure
- With pooling, ByteBuf instances in the pool are reused, and a memory allocation algorithm similar to jemalloc improves allocation efficiency
- Under high concurrency, pooling saves memory and reduces the likelihood of memory exhaustion
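The reuse idea behind pooling can be sketched with a toy pool (nothing like Netty's jemalloc-style allocator, just the core recycle-on-release behaviour):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Toy buffer pool illustrating the reuse idea behind pooled allocators:
// release() returns a buffer to the pool instead of dropping it, so a
// later acquire() can skip the (expensive) allocation.
public class ToyBufferPool {
    private final ArrayDeque<ByteBuffer> free = new ArrayDeque<>();
    private final int bufferSize;
    public int allocations = 0; // how many real allocations happened

    public ToyBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    public ByteBuffer acquire() {
        ByteBuffer b = free.poll();
        if (b == null) {
            allocations++;
            b = ByteBuffer.allocate(bufferSize);
        }
        b.clear(); // reset indices for the new user
        return b;
    }

    public void release(ByteBuffer b) {
        free.push(b);
    }

    public static void main(String[] args) {
        ToyBufferPool pool = new ToyBufferPool(1024);
        ByteBuffer a = pool.acquire();
        pool.release(a);
        ByteBuffer b = pool.acquire();        // reuses a: no new allocation
        System.out.println(a == b);           // true
        System.out.println(pool.allocations); // 1
    }
}
```

Real pooled allocators add size classes, thread-local caches, and arena management on top of this basic recycle loop.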
Whether pooling is enabled can be set through the following system property:

-Dio.netty.allocator.type={unpooled|pooled}
- Since 4.1, the pooled implementation is enabled by default on non-Android platforms, while Android defaults to the unpooled implementation
- Before 4.1, pooling was not yet mature, so unpooled was the default
Composition

A ByteBuf consists of four parts: capacity, max capacity, read index (readerIndex) and write index (writerIndex).
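The separate read and write indices are what remove the need for a flip() between writing and reading. A minimal sketch of the layout and its invariant (0 <= readerIndex <= writerIndex <= capacity), in plain Java rather than Netty's implementation:

```java
// Minimal sketch of ByteBuf's layout: separate read and write indices
// over one byte[], so reads never require a mode switch like
// ByteBuffer#flip(). Invariant: 0 <= readerIndex <= writerIndex <= capacity.
public class MiniByteBuf {
    private final byte[] data;
    private int readerIndex = 0;
    private int writerIndex = 0;

    public MiniByteBuf(int capacity) {
        this.data = new byte[capacity];
    }

    public void writeByte(byte b) {
        if (writerIndex == data.length) throw new IndexOutOfBoundsException();
        data[writerIndex++] = b;
    }

    public byte readByte() {
        if (readerIndex == writerIndex) throw new IndexOutOfBoundsException();
        return data[readerIndex++];
    }

    public int readableBytes() {
        return writerIndex - readerIndex;
    }

    public static void main(String[] args) {
        MiniByteBuf buf = new MiniByteBuf(8);
        buf.writeByte((byte) 1);
        buf.writeByte((byte) 2);
        System.out.println(buf.readableBytes()); // 2
        System.out.println(buf.readByte());      // 1: reading does not disturb writing
        buf.writeByte((byte) 3);                 // no flip() needed in between
        System.out.println(buf.readableBytes()); // 2
    }
}
```

Netty's real ByteBuf adds max capacity (the growth limit) and automatic expansion on top of this.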
retain & release
Because Netty's ByteBuf implementations may use off-heap (direct) memory, it is best to release that memory explicitly rather than wait for GC.
- UnpooledHeapByteBuf uses JVM heap memory; simply waiting for GC to reclaim it is enough
- UnpooledDirectByteBuf uses direct memory and requires a special method to reclaim it
- PooledByteBuf and its subclasses use a pooling mechanism and require more complex rules to reclaim memory
In the source, memory reclamation comes down to:

```java
protected abstract void deallocate()
```
Netty uses reference counting to control memory reclamation; every ByteBuf implements the ReferenceCounted interface.
- Each ByteBuf object starts with a count of 1
- Calling release() decreases the count by 1; when the count reaches 0, the ByteBuf's memory is reclaimed
- Calling retain() increases the count by 1, which means that even if other handlers call release() before the caller is done, the buffer will not be reclaimed
- Once the count reaches 0, the underlying memory is reclaimed; even though the ByteBuf object still exists, its methods can no longer be used
Convention: the last handler to use a ByteBuf is responsible for releasing it. If no handler does, the pipeline's head (for outbound) and tail (for inbound) handlers act as a safety net and release it.
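The retain/release contract above can be sketched in plain Java (a simplification of Netty's ReferenceCounted, not its actual implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the ReferenceCounted contract: the count starts
// at 1, retain() increments it, release() decrements it, and the
// underlying resource is freed exactly when the count reaches 0.
public class RefCounted {
    private final AtomicInteger refCnt = new AtomicInteger(1);
    private volatile boolean freed = false;

    public RefCounted retain() {
        if (refCnt.getAndIncrement() <= 0) throw new IllegalStateException("already freed");
        return this;
    }

    // Returns true when this call freed the underlying resource
    public boolean release() {
        int cnt = refCnt.decrementAndGet();
        if (cnt == 0) {
            freed = true; // deallocate() would run here in Netty
            return true;
        }
        if (cnt < 0) throw new IllegalStateException("over-released");
        return false;
    }

    public boolean isFreed() {
        return freed;
    }

    public static void main(String[] args) {
        RefCounted buf = new RefCounted();
        buf.retain();                      // count 2: another handler holds it
        System.out.println(buf.release()); // false: count back to 1, still usable
        System.out.println(buf.release()); // true: count 0, memory reclaimed
        System.out.println(buf.isFreed()); // true
    }
}
```

This is why calling a method on a ByteBuf after its count hits 0 fails in Netty: the object still exists, but its backing memory is gone.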
slice

One embodiment of zero copy: the original ByteBuf is sliced into multiple ByteBufs without any memory copy. The slices still use the original ByteBuf's memory; they are merely new objects exposing a bounded view of it.
```java
public static void main(String[] args) {
    ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(8);
    buf.writeBytes(new byte[] {'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'});
    log.debug("{}", buf.toString(StandardCharsets.UTF_8));

    ByteBuf f1 = buf.slice(0, 4);
    f1.retain(); // Each slice adds 1 to the count, so releasing the original cannot free the shared memory prematurely
    ByteBuf f2 = buf.slice(4, 4);
    f2.retain();
    log.debug("{}", f1.toString(StandardCharsets.UTF_8));
    log.debug("{}", f2.toString(StandardCharsets.UTF_8));

    buf.setByte(2, ' '); // Visible through f1: the memory is shared
    log.debug("{}", f1.toString(StandardCharsets.UTF_8));
    f1.release(); // Done with this slice: count - 1
    log.debug("{}", f2.toString(StandardCharsets.UTF_8));
    f2.release();
    log.debug("{}", buf.toString(StandardCharsets.UTF_8));
    buf.release(); // Release the original
}
```
CompositeByteBuf

Another embodiment of zero copy, the inverse of slice: several ByteBufs are merged into a new CompositeByteBuf. No memory is copied during the merge; the composite simply aggregates references to the originals.
```java
public static void main(String[] args) {
    ByteBuf buf1 = ByteBufAllocator.DEFAULT.buffer();
    buf1.writeBytes(new byte[] {1, 2, 3, 4});
    buf1.retain();
    ByteBuf buf2 = ByteBufAllocator.DEFAULT.buffer();
    buf2.writeBytes(new byte[] {7, 8, 9, 10});
    buf2.retain();

    CompositeByteBuf buffer = ByteBufAllocator.DEFAULT.compositeBuffer();
    // The first argument (true) makes the composite advance its write index automatically
    buffer.addComponents(true, buf1, buf2);
    buffer.retain();
    System.out.println(buffer.toString(StandardCharsets.US_ASCII));
    buffer.release();
    buf1.release();
    buf2.release();
}
```
duplicate

Another embodiment of zero copy: duplicate() covers the entire content of the original ByteBuf, not a bounded range like slice. It shares the same underlying memory with the original ByteBuf, but its read and write indices are independent.
copy

copy() performs a deep copy of the underlying data; reads and writes on the copy are completely independent of the original.
Unpooled

A utility class that provides operations such as creating, combining, and copying unpooled ByteBufs.
Advantages

- Pooling: ByteBuf instances in the pool can be reused, saving memory and reducing the chance of memory exhaustion
- Separate read and write indices, so there is no need to switch between read and write modes as with ByteBuffer
- Automatic capacity expansion
- Method chaining, which makes for smoother use
- Zero copy in more places, e.g. slice, duplicate and CompositeByteBuf