Part 1: Why we needed NIO… and why Virtual Threads may finally free us from it. (WSO2 Integrator: MI Perspective)

Why Java Servers Chose Complexity — and How Loom Lets Us Escape It

Modern integration platforms like WSO2 Integrator: MI can handle thousands of concurrent HTTP connections with ease. But that apparent simplicity hides a long-running struggle against one fundamental problem: blocking I/O.

When a traditional Java thread performs a blocking read or write, it occupies an OS thread until the operation completes. OS threads are expensive: they require significant memory, incur context-switching overhead, and do not scale linearly with connection count.

Much of server-side architecture over the past decades, including thread pools, non-blocking I/O, and reactor patterns, exists largely to mitigate this cost.

Before we talk about replacing NIO and reactor patterns with Virtual Threads, we must understand why NIO existed in the first place, and why systems like WSO2 Integrator: MI’s pass-through transport became architecturally complex.

This article is the first in a series exploring how we can move from:

NIO + Reactor Pattern + Async Callbacks
To
Virtual Threads + Simple Blocking Architecture

The Execution Reality: Java Threads

A Java thread does not execute directly on a CPU core.

The execution chain is:

Java Thread → OS Thread → Physical CPU Core

Important consequence:

  • Each Java (platform) thread maps one-to-one to a dedicated OS thread

Why OS Threads Are a Scalability Problem

OS threads are heavyweight:

  • Expensive to create
  • Expensive to context-switch
  • Limited by memory (each thread is allocated its own stack)

So creating thousands of Java threads means:

  • Creating many Java threads = creating many OS threads
  • Too many = memory pressure + context switching overhead
  • Blocking I/O wastes expensive OS threads
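To make the memory point concrete, here is a small illustrative sketch (the thread count and sleep duration are arbitrary assumptions, not a benchmark). Each platform thread reserves its own stack, so raising the count far enough eventually shows up as memory pressure or an OutOfMemoryError:

public class ManyThreadsSketch {
    public static void main(String[] args) throws InterruptedException {
        int count = 10_000; // illustrative value; raise it to observe the memory cost
        for (int i = 0; i < count; i++) {
            new Thread(() -> {
                try {
                    Thread.sleep(60_000); // keeps the thread, and its OS thread, alive
                } catch (InterruptedException ignored) {
                }
            }).start();
        }
        System.out.println("Started " + count + " platform threads");
        Thread.sleep(60_000); // keep the JVM up long enough to inspect memory usage
    }
}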

Now consider the following classic blocking call, which shows how blocking I/O wastes those expensive threads:

socket.getInputStream().read()

This causes the calling thread to:

  • Enter the kernel
  • Wait for network data to arrive
  • Hold an OS thread the entire time

This is wasted capacity.
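Scaled up to a server, this is the classic thread-per-connection model. The sketch below is a simplified skeleton (not MI code): it dedicates one platform thread to each accepted socket, and whenever a client is idle, its thread is parked inside read(), still holding an OS thread and its stack:

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingServerSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket socket = server.accept();          // blocks until a client connects
                new Thread(() -> handle(socket)).start(); // one OS thread per connection
            }
        }
    }

    static void handle(Socket socket) {
        try (socket; InputStream in = socket.getInputStream()) {
            byte[] buffer = new byte[4096];
            while (in.read(buffer) != -1) {
                // The thread is parked inside read() whenever the client sends nothing,
                // holding its OS thread and stack the whole time.
            }
        } catch (IOException ignored) {
        }
    }
}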

Why Servers Adopted NIO and the Reactor Pattern

To avoid blocking OS threads, Java servers adopted:

  • Thread pools
  • Non-blocking I/O (NIO)
  • Reactor pattern
  • Async callbacks

The idea:

In the reactor pattern, a small number of threads run an event loop.
Each event loop uses a Selector to multiplex many socket channels onto a single thread.

Instead of blocking on each connection, the selector waits for readiness events (such as accept, read, or write) from any of the registered channels. When an event occurs, the event loop reacts to it and dispatches the corresponding task.

Actual request processing is typically handed off to a bounded pool of worker threads.
This pool is intentionally kept small in order to:

  • Avoid creating excessive OS threads
  • Control memory usage

As a result, thousands of connections can be served by a small number of selector threads and a carefully sized worker pool, with multiplexing ensuring that idle connections do not consume threads.
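The sketch below shows the shape of such an event loop. It is a deliberately simplified illustration, not the actual pass-through transport: handleRequest() is a hypothetical placeholder, and real implementations such as HTTP Core NIO layer protocol state machines, buffer management, and back-pressure handling on top of this loop:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReactorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Bounded worker pool: actual request processing is handed off here.
        ExecutorService workers = Executors.newFixedThreadPool(8);

        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(4096);
                    int n = client.read(buffer); // non-blocking: returns whatever is available
                    if (n > 0) {
                        buffer.flip();
                        workers.submit(() -> handleRequest(client, buffer)); // dispatch
                    } else if (n == -1) {
                        client.close();
                    }
                }
            }
        }
    }

    // Hypothetical worker task; a real transport would parse the request here and
    // later register interest in OP_WRITE to send the response without blocking.
    static void handleRequest(SocketChannel client, ByteBuffer request) {
        // ... application logic ...
    }
}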

This allows:

✔ Thousands of connections
✔ Few OS threads
✔ Predictable memory usage

This is the model used by HTTP Core NIO, and it forms the foundation of WSO2 Integrator: MI’s pass-through transport.

The Cost of This Design

NIO solves the OS thread problem, but introduces a new one:

Architectural complexity.

To remain non-blocking, the system must:

  • Split logic into stages
  • Use callbacks
  • Track partial reads and writes (see the sketch after these lists)
  • Avoid blocking anywhere
  • Manage selector loops
  • Handle back-pressure explicitly

This leads to:

  • Hard-to-read code
  • Complex control flow
  • Difficult debugging
  • Subtle race conditions
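As one concrete example of that complexity, consider partial writes. The sketch below uses a hypothetical PendingWrite helper (real transports keep comparable per-connection state): a non-blocking write() may accept only part of a buffer, so the remainder has to be stashed and retried when the selector later reports the channel writable.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

final class PendingWrite {
    private final ByteBuffer pending;

    PendingWrite(ByteBuffer data) {
        this.pending = data;
    }

    // Attempts to flush the buffered bytes; returns true once everything is written.
    boolean flush(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        channel.write(pending); // may write anywhere from 0 bytes to the full buffer
        if (pending.hasRemaining()) {
            // Not done yet: ask the selector to signal us when the channel is writable.
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
            return false;
        }
        // Fully written: stop watching for writability to avoid a busy selector loop.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        return true;
    }
}

Multiply this kind of bookkeeping across reads, writes, timeouts, and connection lifecycle events, and the control flow quickly stops resembling the simple "read request, process, write response" logic it actually implements.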

Conclusion

NIO and the reactor pattern were not chosen because they are elegant.
They were chosen because OS threads are expensive and blocking I/O wastes them.

For systems like WSO2 Integrator: MI’s pass-through transport, this led to an architecture built around:

  • Selectors and event loops
  • Non-blocking reads and writes
  • Worker pools
  • Async control flow

This design achieves scalability, but at a cost:

  • Increased architectural complexity
  • Harder reasoning about execution flow
  • More difficult debugging
  • Tight coupling between transport logic and concurrency model

In other words, much of this complexity exists not because the business problem is complex,
but because the execution model forced it to be.

Understanding this trade-off is critical before proposing any alternative.
Only by seeing why NIO and reactor patterns became necessary can we fairly evaluate whether they are still necessary today.

In the next article, we will introduce Virtual Threads and examine how they change the cost model of blocking I/O — and why that may allow us to rethink the architecture of WSO2 Integrator: MI’s pass-through transport entirely.

