

This post introduces Wangle, a C++ library for building protocols, servers, and clients in an asynchronous, clean, composable, and scalable manner. Wangle is heavily influenced by the JVM-based libraries Netty, a Java network application framework, and Finagle, a Scala RPC system built at Twitter on top of Netty. Wangle aims to provide a similar set of abstractions for C++ as the combination of Netty and Finagle does for the JVM.

Wangle has been open-sourced since summer 2015 and is a stable component of several Facebook projects, including the open source fbthrift and Proxygen. We’ll give a brief overview of the core Wangle components and then walk through an example to demonstrate their power. Note that we’ll focus on the Netty-like components and leave the Finagle-like pieces to a later post.

Wangle builds on top of two other Facebook-developed libraries that provide asynchronous interfaces, both of which live in Folly, our open source C++ library.

First is folly/io/async, a set of object-oriented asynchronous IO wrappers around libevent. Folly/io/async provides event bases, sockets (including TCP sockets, UDP sockets, server sockets, and SSL sockets), and async timeout/callback interfaces, among other things.
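
To give a flavor of the folly/io/async style, here’s a minimal sketch that drives a libevent-backed event loop with a delayed callback (the 100ms delay is arbitrary):

```cpp
#include <folly/io/async/EventBase.h>

int main() {
  folly::EventBase evb;  // wraps a libevent event loop

  // Schedule a callback to fire on this loop after a delay.
  evb.runAfterDelay([] { /* timeout fired */ }, /*milliseconds=*/100);

  // Run the loop until no registered events remain.
  evb.loop();
  return 0;
}
```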

Second are Futures, an implementation of the Promise/Future pattern for C++11. Futures provide a monadic pattern for expressing asynchronous computations that is simultaneously performant, clean, and composable.
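
To make that concrete, here’s a minimal sketch of the pattern with folly::Promise and folly::Future. (We use the thenValue combinator from current Folly releases; earlier releases spelled it then.)

```cpp
#include <iostream>
#include <string>

#include <folly/futures/Future.h>
#include <folly/futures/Promise.h>

int main() {
  folly::Promise<int> promise;
  folly::Future<int> future = promise.getFuture();

  // Compose transformations up front; each callback fires once the
  // previous value becomes available.
  auto chained = std::move(future)
      .thenValue([](int x) { return x * 2; })
      .thenValue([](int x) { return "answer: " + std::to_string(x); });

  promise.setValue(21);  // fulfill the promise; the chain now runs
  std::cout << std::move(chained).get() << std::endl;  // prints "answer: 42"
  return 0;
}
```

Nothing here blocks until the final get(); the chain simply records what should happen as each value arrives.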

Wangle provides a set of useful concurrency primitives, the most important of which are a pair of thread pool implementations - CPUThreadPoolExecutor and IOThreadPoolExecutor. It is useful to separate IO-bound work (e.g., asynchronous socket IO) from CPU-bound work (e.g., application business logic) for a couple of reasons.

First, in event-driven server environments like Wangle’s, M connections are typically multiplexed across N IO threads for M > N, since one thread per connection does not scale effectively (see the C10K problem). It is important, then, that those threads block as little as possible, because every cycle spent on one connection’s work blocks all the other connections hosted on that thread. Therefore, applications should strive to relocate long-running work or blocking operations to a separate CPU-bound thread pool in order to reduce tail latency. Wangle makes this easy, as we’ll see later on in our example.

Second, thread pools can be optimized for either CPU-bound or IO-bound workloads, which is why we have separate implementations for each. For IO pools, the threads cannot share a single epoll file descriptor, as epoll_wait() wakes up all waiters on an active event, which would lead to contention on processing the event. Instead, each thread has its own epoll fd and thus its own task queue. This precludes fair scheduling, as there is no way for the epoll_wait() calls to cooperate - we employ round-robin scheduling instead.
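
Here’s a rough sketch of that handoff, with hypothetical pool sizes, using the folly/executors headers where these pools live in current releases: work hops from an IO thread to the CPU pool and back.

```cpp
#include <folly/executors/CPUThreadPoolExecutor.h>
#include <folly/executors/IOThreadPoolExecutor.h>
#include <folly/futures/Future.h>

int main() {
  folly::IOThreadPoolExecutor ioPool(4);    // threads driving event bases
  folly::CPUThreadPoolExecutor cpuPool(8);  // workers for heavy lifting

  // Suppose a request was just parsed on an IO thread: hop to the CPU
  // pool for the expensive part, then hop back to IO for the write.
  folly::via(&cpuPool)
      .thenValue([](folly::Unit) {
        return 42;  // stand-in for CPU-bound business logic
      })
      .via(&ioPool)
      .thenValue([](int result) {
        (void)result;  // back on an IO thread: serialize and send the reply
      })
      .get();  // block here only for the sake of the demo
  return 0;
}
```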

For CPU pools, we can do better. Threads wait on a last-in first-out (LIFO) semaphore to pull tasks from a multiproducer, multiconsumer, lockless shared queue. A LIFO semaphore has several desirable performance characteristics. First, the LIFO policy means that as few threads as possible are actively processing tasks, leading to increased cache locality. Second, our implementation madvises away the stacks of inactive threads, meaning that CPU pools can be sized quite liberally without excess memory usage. Additionally, our CPU pool supports task priorities via multiple shared queues.

Both thread pools provide a host of useful features, including per-pool and per-task statistics, pool resizing, task expiration (including CoDel expiration), custom thread factories, and thread event observers.
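
For example, here’s a sketch of a prioritized pool (the thread and priority counts are illustrative; the constructor overload shown is the one in current Folly releases):

```cpp
#include <folly/executors/CPUThreadPoolExecutor.h>

int main() {
  // Two worker threads draining three priority queues
  // (one shared queue per priority level).
  folly::CPUThreadPoolExecutor pool(/*numThreads=*/2, /*numPriorities=*/3);

  pool.addWithPriority(
      [] { /* background housekeeping */ }, folly::Executor::LO_PRI);
  pool.addWithPriority(
      [] { /* latency-sensitive request work */ }, folly::Executor::HI_PRI);

  pool.join();  // drain the queues, then stop the threads
  return 0;
}
```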

Global executors

A common anti-pattern that we have observed at Facebook is for shared components to spin up their own thread pools for their own work. Complex applications might use a multitude of such components, and all of a sudden there is a gross excess of (likely idle) threads lying around. Wangle provides a simple GlobalExecutor abstraction to encourage moving away from this pattern. Using Folly’s Singleton implementation, we provide access to global, overridable, and lazily created IO and CPU thread pools via getIOExecutor() and getCPUExecutor(), respectively. Users get easy access to the executors while avoiding the overhead and code complexity of self-managed executors.
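
A minimal sketch of that usage, assuming the wangle/concurrent/GlobalExecutor.h header as shipped around the time of this post (these helpers have since migrated into Folly proper):

```cpp
#include <folly/futures/Future.h>
#include <wangle/concurrent/GlobalExecutor.h>

int main() {
  // The first call lazily creates the shared CPU pool; later calls,
  // from any component, reuse the same pool.
  auto cpu = wangle::getCPUExecutor();

  folly::via(cpu.get(), [] {
    // CPU-bound work on the shared, application-wide pool
  }).get();

  // IO counterpart: an executor whose threads each drive an event base.
  auto io = wangle::getIOExecutor();
  (void)io;
  return 0;
}
```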

Pipelines and codecs - sending your socket data through a series of tubes
