Project Loom in Production: Migrating Legacy Java Applications to Virtual Threads

Strategies to migrate to virtual threads


When Project Loom and virtual threads were announced, I was excited but cautious. After 15 years of working with threads in server-side services, I knew this could provide a huge benefit, but only if we could migrate safely.

Virtual threads promise to simplify how we handle concurrency in Java, but migrating a legacy application is not as simple as swapping platform threads for virtual ones. Having recently moved a high-traffic e-commerce platform and guided my team through the transition, I want to share the real-world lessons, pitfalls, and practical strategies that made our migration successful.

Why Migration Matters More Than Adoption

Before diving into the how-to, let’s talk about why migrating an old application is different from writing a new service on virtual threads. Your legacy application has lots of assumptions baked in:

  • Thread pools sized around operating-system thread limits
  • Connection pools tuned for specific concurrency levels
  • Synchronization patterns designed around expensive thread creation
  • Monitoring and observability tools that expect hundreds of threads, not millions

The promise of virtual threads is compelling: the same hardware that previously maxed out at a few hundred platform threads can handle hundreds of thousands of concurrent operations. But getting there requires strategy, not just code changes.

Understanding What You’re Migrating From

Most legacy Java applications follow the traditional thread-per-task model backed by a carefully sized pool:

// Traditional approach - expensive threads, careful resource management
public class LegacyOrderService {
    private final ExecutorService threadPool =
        Executors.newFixedThreadPool(200); // Carefully tuned number

    public CompletableFuture<OrderResult> processOrder(Order order) {
        return CompletableFuture.supplyAsync(() -> {
            // Call payment service
            PaymentResponse payment = paymentClient.processPayment(order);

            // Call inventory service
            InventoryResponse inventory = inventoryClient.reserveItems(order);

            // Call shipping service
            ShippingResponse shipping = shippingClient.scheduleDelivery(order);

            return new OrderResult(payment, inventory, shipping);
        }, threadPool);
    }
}

This works, but you’re constantly juggling:

  • Thread pool sizing: Too few threads = poor throughput, too many = memory issues
  • Resource contention: 200 threads competing for database connections
  • Blocking operations: Each I/O call ties up a precious OS thread

The Virtual Thread Alternative

Here’s the same logic with virtual threads:

public class ModernOrderService {
    // No thread pool needed - every task gets a fresh virtual thread

    public OrderResult processOrder(Order order) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            // Each operation gets its own virtual thread
            Future<PaymentResponse> paymentTask =
                executor.submit(() -> paymentClient.processPayment(order));
            Future<InventoryResponse> inventoryTask =
                executor.submit(() -> inventoryClient.reserveItems(order));
            Future<ShippingResponse> shippingTask =
                executor.submit(() -> shippingClient.scheduleDelivery(order));

            try {
                // Wait for all to complete - blocking is cheap on virtual threads
                PaymentResponse payment = paymentTask.get();
                InventoryResponse inventory = inventoryTask.get();
                ShippingResponse shipping = shippingTask.get();

                return new OrderResult(payment, inventory, shipping);
            } catch (Exception e) {
                // Cancel whatever is still running before propagating the failure
                paymentTask.cancel(true);
                inventoryTask.cancel(true);
                shippingTask.cancel(true);
                throw e;
            }
        }
    }
}

Notice what changed:

  • No thread pool management
  • Direct, blocking-style code (no CompletableFuture complexity)
  • Per-task threads instead of shared resources
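
If you are on JDK 21 or newer with preview features enabled, the same fan-out can also be written with structured concurrency. StructuredTaskScope is still a preview API, so treat this as a sketch rather than the pattern we shipped, but it cancels sibling subtasks automatically when one of them fails, which removes the manual cancellation above:

import java.util.concurrent.StructuredTaskScope;

public OrderResult processOrderStructured(Order order) throws Exception {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        var payment = scope.fork(() -> paymentClient.processPayment(order));
        var inventory = scope.fork(() -> inventoryClient.reserveItems(order));
        var shipping = scope.fork(() -> shippingClient.scheduleDelivery(order));

        // Wait for all subtasks; if one fails, the rest are cancelled
        scope.join().throwIfFailed();

        return new OrderResult(payment.get(), inventory.get(), shipping.get());
    }
}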

Step-by-Step Migration Strategy

Phase 1: Preparation and Assessment

Inventory Your Thread Usage

Start by understanding where your application creates threads. A quick shell script (or your IDE’s search) is enough to build the inventory:

# Find ExecutorService usage
grep -r "ExecutorService" src/
grep -r "ThreadPoolExecutor" src/
grep -r "ForkJoinPool" src/

# Find direct Thread usage
grep -r "new Thread" src/
grep -r "Thread.start" src/

# Find CompletableFuture async operations
grep -r "CompletableFuture.supplyAsync" src/
grep -r "CompletableFuture.runAsync" src/

Identify Migration Candidates

Look for these patterns that benefit most from virtual threads:

  • I/O-heavy operations (HTTP calls, database queries)
  • High-concurrency request handlers
  • Task processing with many blocking operations
  • Producer-consumer patterns with queues
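
That last pattern deserves a closer look. With platform threads, a producer-consumer setup usually means a small, carefully sized consumer pool; with virtual threads you can give every consumer its own thread and simply let it block on the queue. A minimal sketch, assuming the usual java.util.concurrent imports and an Order type plus a handle() method like the ones used elsewhere in this article:

BlockingQueue<Order> queue = new LinkedBlockingQueue<>();
ExecutorService consumers = Executors.newVirtualThreadPerTaskExecutor();

Runnable consumer = () -> {
    try {
        while (true) {
            Order order = queue.take(); // blocking here is cheap - the carrier thread is released
            handle(order);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // exit cleanly on shutdown
    }
};

// Thousands of blocked consumers are fine; each is just a small object on the heap
for (int i = 0; i < 1_000; i++) {
    consumers.submit(consumer);
}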

Phase 2: Create a Virtual Thread-Aware Executor Factory

Instead of changing everything at once, create an abstraction:

public class ThreadingStrategy {
    private final boolean useVirtualThreads;

    public ThreadingStrategy() {
        // Feature flag for gradual rollout
        this.useVirtualThreads = Boolean.parseBoolean(
            System.getProperty("app.virtual.threads.enabled", "false"));
    }

    public ExecutorService createExecutor(String purpose) {
        if (useVirtualThreads) {
            return createVirtualThreadExecutor(purpose);
        } else {
            return createTraditionalExecutor(purpose);
        }
    }

    private ExecutorService createVirtualThreadExecutor(String purpose) {
        ThreadFactory factory = Thread.ofVirtual()
            .name(purpose + "-", 0)
            .factory();
        return Executors.newThreadPerTaskExecutor(factory);
    }

    private ExecutorService createTraditionalExecutor(String purpose) {
        return Executors.newFixedThreadPool(
            getThreadCount(purpose),
            new ThreadFactoryBuilder()
                .setNameFormat(purpose + "-%d")
                .build());
    }
}
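
Call sites then ask the strategy for an executor instead of building their own pool, and the flag can be flipped per environment with -Dapp.virtual.threads.enabled=true. A hypothetical consumer might look like this (OrderProcessingService is an illustrative name, not from our codebase, and it delegates to the order service shown earlier):

public class OrderProcessingService {
    private final ExecutorService executor;
    private final ModernOrderService orders;

    public OrderProcessingService(ThreadingStrategy threading, ModernOrderService orders) {
        // This class neither knows nor cares which threading model is active
        this.executor = threading.createExecutor("order-processing");
        this.orders = orders;
    }

    public Future<OrderResult> submitOrder(Order order) {
        return executor.submit(() -> orders.processOrder(order));
    }
}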

Phase 3: Migrate High-Impact, Low-Risk Components

Start with components that are:

  • Well-tested
  • I/O bound
  • Not using complex synchronization
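
One more shortcut worth knowing: if the application happens to run on Spring Boot 3.2 or newer, the embedded web server is often the lowest-risk first candidate, because the spring.threads.virtual.enabled=true property moves request handling onto virtual threads without touching application code. The equivalent explicit configuration for a Tomcat-based setup looks roughly like the sketch below; it assumes Spring Boot and Tomcat, which your stack may not use:

@Configuration
public class VirtualThreadWebConfig {

    // Handle each incoming HTTP request on its own virtual thread
    @Bean
    TomcatProtocolHandlerCustomizer<?> virtualThreadRequestHandling() {
        return protocolHandler ->
            protocolHandler.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
    }
}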

Before: Traditional HTTP Client

public class PaymentServiceClient {
    private final ExecutorService executor = Executors.newFixedThreadPool(50);
    private final HttpClient httpClient = HttpClient.newHttpClient();

    public CompletableFuture<PaymentResponse> processPayment(PaymentRequest request) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                HttpRequest httpRequest = HttpRequest.newBuilder()
                    .uri(URI.create("https://payment-service/process"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                        objectMapper.writeValueAsString(request)))
                    .build();

                HttpResponse<String> response = httpClient.send(
                    httpRequest, HttpResponse.BodyHandlers.ofString());

                return objectMapper.readValue(response.body(), PaymentResponse.class);
            } catch (Exception e) {
                throw new PaymentException("Payment processing failed", e);
            }
        }, executor);
    }
}

After: Virtual Threads

public class PaymentServiceClient {
    private final HttpClient httpClient = HttpClient.newHttpClient();

    // Synchronous method - virtual threads make this scalable
    public PaymentResponse processPayment(PaymentRequest request) {
        try {
            HttpRequest httpRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://payment-service/process"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                    objectMapper.writeValueAsString(request)))
                .build();

            HttpResponse<String> response = httpClient.send(
                httpRequest, HttpResponse.BodyHandlers.ofString());

            return objectMapper.readValue(response.body(), PaymentResponse.class);
        } catch (Exception e) {
            throw new PaymentException("Payment processing failed", e);
        }
    }

    // Async wrapper when needed - each call runs on a freshly started virtual thread
    public CompletableFuture<PaymentResponse> processPaymentAsync(PaymentRequest request) {
        return CompletableFuture.supplyAsync(
            () -> processPayment(request),
            Thread::startVirtualThread);
    }
}

Key improvements:

  • Simpler code: No CompletableFuture chains
  • Better error handling: Stack traces make sense again
  • Easier testing: Synchronous code is easier to unit test

Production Issues and How to Handle Them

1. The ThreadLocal Trap

Problem: With millions of virtual threads, every thread gets its own copy of each ThreadLocal value, so per-thread caching of expensive objects multiplies work instead of saving it.

// DANGEROUS - with millions of virtual threads
public class BadCachingService {
    private static final ThreadLocal<ExpensiveResource> CACHE =
        ThreadLocal.withInitial(() -> new ExpensiveResource());

    public String processData(String data) {
        ExpensiveResource resource = CACHE.get();
        return resource.process(data);
    }
}

Solution: Use proper caching or scoped values:

public class GoodCachingService {
    // Use a proper cache instead of ThreadLocal
    private final Cache<String, ExpensiveResource> cache =
        Caffeine.newBuilder()
            .maximumSize(100)
            .expireAfterAccess(Duration.ofMinutes(10))
            .build();

    public String processData(String data) {
        ExpensiveResource resource = cache.get(
            Thread.currentThread().getName(),
            k -> new ExpensiveResource());
        return resource.process(data);
    }
}
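
The other half of that solution, scoped values, is aimed at the context-passing uses of ThreadLocal (request IDs, tenant IDs, trace context) rather than caching. ScopedValue is still a preview API in JDK 21 and later (JEP 446/464), so take this as a sketch rather than production guidance:

public class RequestContext {
    // One shared key; the value is bound per call chain instead of being stored per thread
    private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    public void handleRequest(String requestId, Runnable work) {
        // The binding is visible to everything work() calls, then disappears automatically
        ScopedValue.where(REQUEST_ID, requestId).run(work);
    }

    public static String currentRequestId() {
        return REQUEST_ID.get(); // throws if called outside a binding
    }
}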

2. Synchronization and Thread Pinning

Problem: Traditional synchronized blocks can “pin” virtual threads to carrier threads.

// Can cause pinning in older JDK versions
public class SynchronizedService {
    private int counter = 0;

    public synchronized int incrementCounter() { // Potential pinning!
        return ++counter;
    }
}

Solution: Use java.util.concurrent locks, or upgrade to JDK 24+, where JEP 491 removes pinning for synchronized blocks:

public class NonPinningService {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    public int incrementCounter() {
        lock.lock();
        try {
            return ++counter;
        } finally {
            lock.unlock();
        }
    }
}
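
It also pays to verify that pinning is actually gone rather than assume it. On JDK 21 to 23 you can start the JVM with -Djdk.tracePinnedThreads=full to print a stack trace whenever a virtual thread pins, or subscribe to the jdk.VirtualThreadPinned JFR event from inside the application. A small sketch of the JFR approach:

import jdk.jfr.consumer.RecordingStream;

public class PinningDetector {
    private final RecordingStream recording = new RecordingStream();

    public void start() {
        // Warn whenever a virtual thread is pinned to its carrier thread
        recording.enable("jdk.VirtualThreadPinned").withStackTrace();
        recording.onEvent("jdk.VirtualThreadPinned", event ->
            System.err.printf("Virtual thread pinned for %d ms%n",
                event.getDuration().toMillis()));
        recording.startAsync();
    }

    public void stop() {
        recording.close();
    }
}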

3. Resource Limiting Without Thread Pools

Problem: Without thread pools there is no built-in limit on concurrency, so downstream resources such as database connections can be overwhelmed.

Solution: Use Semaphores for resource limiting:

public class DatabaseService {
    // Limit concurrent database operations
    private final Semaphore dbConnectionLimit = new Semaphore(50);
    private final DataSource dataSource;

    public DatabaseService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public UserData fetchUser(Long userId) throws InterruptedException {
        dbConnectionLimit.acquire();
        try (Connection conn = dataSource.getConnection()) {
            // Database operation
            return queryUser(conn, userId);
        } catch (SQLException e) {
            throw new DatabaseException("Failed to fetch user", e);
        } finally {
            dbConnectionLimit.release();
        }
    }
}
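
One refinement worth considering: under heavy load you may prefer to fail fast rather than let callers queue on the semaphore indefinitely. A hypothetical variation with a bounded wait (the two-second timeout is illustrative):

// Hypothetical variation: reject work instead of queueing forever when the database is saturated
public UserData fetchUserWithTimeout(Long userId) throws InterruptedException {
    if (!dbConnectionLimit.tryAcquire(2, TimeUnit.SECONDS)) {
        throw new DatabaseException("Database is saturated, try again later", null);
    }
    try (Connection conn = dataSource.getConnection()) {
        return queryUser(conn, userId);
    } catch (SQLException e) {
        throw new DatabaseException("Failed to fetch user", e);
    } finally {
        dbConnectionLimit.release();
    }
}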

4. Monitoring Millions of Threads

Traditional thread dumps don’t scale to millions of virtual threads, and standard jstack output doesn’t include them at all. Create custom monitoring:

public class VirtualThreadMonitor {
    private final AtomicInteger activeVirtualThreads = new AtomicInteger(0);

    public void executeWithMonitoring(Runnable task, String taskName) {
        Thread.startVirtualThread(() -> {
            activeVirtualThreads.incrementAndGet();
            try {
                MDC.put("taskName", taskName);
                MDC.put("threadType", "virtual");
                task.run();
            } finally {
                activeVirtualThreads.decrementAndGet();
                MDC.clear();
            }
        });
    }

    @Scheduled(fixedRate = 30000) // Every 30 seconds
    public void logVirtualThreadMetrics() {
        logger.info("Active virtual threads: {}", activeVirtualThreads.get());
    }
}
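
When you do need the full picture, the JDK itself can help: since JDK 21, jcmd can write a thread dump that includes virtual threads, which classic jstack output omits. The PID and output path below are placeholders:

jcmd <pid> Thread.dump_to_file -format=json /tmp/threads.json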

Real-World Migration Results

After migrating our e-commerce platform’s order processing system:

Before (Platform Threads):

  • 200 threads in main thread pool
  • 95th percentile response time: 450ms
  • Memory usage: 2.1GB under load
  • Max concurrent orders: ~180

After (Virtual Threads):

  • No thread pool limits
  • 95th percentile response time: 280ms
  • Memory usage: 1.8GB under load
  • Max concurrent orders: 2000+

The improvement came from eliminating thread pool contention and allowing truly parallel I/O operations.

Migration Checklist

Before You Start:

  • Audit existing thread usage patterns
  • Identify I/O-bound vs CPU-bound operations
  • Create feature flags for gradual rollout
  • Update monitoring and alerting

During Migration:

  • Replace ThreadLocal with proper caching
  • Convert synchronized blocks to j.u.c locks
  • Add Semaphores for resource limiting
  • Simplify CompletableFuture chains to blocking code

After Migration:

  • Monitor virtual thread creation patterns
  • Validate no thread pinning occurs
  • Performance test under realistic load
  • Update documentation and team training

The Bottom Line

Virtual threads aren’t just a performance optimisation — they’re a paradigm shift back to simpler, more intuitive concurrent programming. The migration effort is worth it, but success requires understanding both the technology and your existing codebase.

Start small, measure everything, and don’t be afraid to keep some legacy patterns during transition. Your future self will thank you for the simpler, more maintainable code that virtual threads enable.

Have you started exploring virtual threads in your applications? What migration challenges are you facing? I’d love to hear about your experiences in the comments.

