Mastering Multi-threading in Java Spring Boot (Part 1): Fundamentals of Multi-threading in Java

Divyansh Tripathi

Spring Boot: Multi-threading in Java

Multi-threading is a crucial component of modern applications, especially in environments where scalability, performance, and responsiveness are key. This comprehensive guide begins with the fundamentals of multi-threading in Java and lays the groundwork for more advanced topics. By the end of this section, you will have a solid grasp of concurrency, basic thread operations, and thread safety mechanisms.

Introduction to Concurrency

Why Multi-threading Matters

In modern applications, users expect real-time responsiveness and scalability. Multi-threading allows programs to perform multiple tasks simultaneously, making it vital for applications that handle multiple users, perform background processing, or rely on high-throughput services.

Key Use Cases:

  • Web servers handling concurrent client requests.
  • Background processing of logs or analytics data.
  • Real-time data processing (e.g., financial applications).

Concurrency vs. Parallelism

Many developers confuse concurrency and parallelism, but they address different aspects of multi-tasking:

  • Concurrency: The ability to switch between tasks efficiently (context switching).
  • Parallelism: The actual simultaneous execution of tasks on multiple CPU cores.

Think of concurrency as a juggler managing multiple balls, while parallelism is having multiple jugglers managing one ball each.

CPU-bound vs. IO-bound Operations

  • CPU-bound tasks: Computation-heavy operations that require significant processing power (e.g., encryption algorithms, large data transformations). For CPU-bound tasks, FixedThreadPool or ForkJoinPool with a size equal to the number of available cores (Runtime.getRuntime().availableProcessors()) is generally optimal.
  • IO-bound tasks: Operations dependent on input/output (e.g., database access, file reading). Since these tasks involve waiting, using a CachedThreadPool or configuring ThreadPoolExecutor with a higher number of threads is recommended to ensure that blocked threads don’t limit throughput.

Choosing the right threading strategy depends on profiling the workload and balancing compute vs. wait times. We will discuss thread pool configurations and performance optimizations in greater depth in the next blog.
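The sizing guidance above can be sketched as follows. This is a minimal illustration, not a tuned configuration; the class name and the IO multiplier of 4 are illustrative assumptions, and real pool sizes should come from profiling:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizingExample {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound work: one thread per core avoids oversubscription,
        // since each thread keeps a core busy with computation.
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

        // IO-bound work: threads spend most of their time blocked waiting,
        // so a larger pool keeps the CPU busy. The 4x multiplier is a
        // rough illustrative heuristic, not a recommendation.
        ExecutorService ioPool = Executors.newFixedThreadPool(cores * 4);

        cpuPool.shutdown();
        ioPool.shutdown();
    }
}
```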

The Evolution of Multi-threading in Java

Java’s multi-threading capabilities have evolved from basic thread management using Thread and Runnable to a comprehensive framework in java.util.concurrent, which introduced modern APIs such as ExecutorService, ForkJoinPool, and CompletableFuture.

Core Threading Concepts

Thread Lifecycle and States

A thread in Java can be in one of the following states:

  • NEW: Created but not started.
  • RUNNABLE: Ready to run but waiting for CPU time.
  • BLOCKED: Waiting for a monitor lock.
  • WAITING: Waiting indefinitely for another thread to signal.
  • TIMED_WAITING: Waiting for a specified time.
  • TERMINATED: Completed execution.

Use the following diagram to visualize the transitions:

NEW -> RUNNABLE -> (BLOCKED/WAITING) -> RUNNABLE -> TERMINATED
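The transitions can be observed directly with Thread.getState(). A minimal sketch (the class name and sleep durations are illustrative; the 100 ms pause simply gives the worker time to reach its sleep() call):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500); // puts the worker into TIMED_WAITING
            } catch (InterruptedException ignored) { }
        });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(100);                // let the worker reach sleep()
        System.out.println(t.getState()); // TIMED_WAITING while sleeping
        t.join();
        System.out.println(t.getState()); // TERMINATED after run() completes
    }
}
```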

Creating Threads in Java

Java offers several ways to create and manage threads:

Extending the Thread Class

class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("Thread is running");
    }
}

public class ThreadExample {
    public static void main(String[] args) {
        MyThread thread = new MyThread();
        thread.start(); // start() runs run() on a new thread; calling run() directly would not
    }
}

Implementing the Runnable Interface

class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Runnable thread is running");
    }
}

public class RunnableExample {
    public static void main(String[] args) {
        Thread thread = new Thread(new MyRunnable());
        thread.start();
    }
}

Using Callable and Future (for return values)

import java.util.concurrent.*;

public class CallableExample {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Callable<String> task = () -> "Callable task completed";

        Future<String> future = executor.submit(task);
        System.out.println(future.get()); // Blocking call to get the result
        executor.shutdown();
    }
}

Note: The thread created by Executors.newSingleThreadExecutor() is not from a common pool; it is a fresh thread in a dedicated single-threaded pool managed by the executor. Other executors manage their own pools of threads: newFixedThreadPool keeps a fixed number of threads alive, while newCachedThreadPool grows and shrinks on demand.

Key Exception Handling: If an uncaught exception occurs in a thread, it terminates that thread but does not terminate the main thread. In executor services, an exception thrown by a submitted task is captured by its Future and only surfaces when you call Future.get(), which wraps the failure in an ExecutionException; if you never inspect the Future, the task fails silently.

Practical Note: Consider how uncaught exceptions propagate in different implementations; for example, CompletableFuture wraps task failures in a CompletionException when you call join(), while ForkJoinTask rethrows the underlying exception.
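The silent-failure point can be demonstrated with a deliberately failing Callable. A minimal sketch (the class name and exception message are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorExceptionExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        Callable<Integer> failing = () -> {
            throw new IllegalStateException("task failed");
        };
        Future<Integer> future = executor.submit(failing);

        try {
            future.get(); // rethrows the failure, wrapped in ExecutionException
        } catch (ExecutionException e) {
            // getCause() recovers the original exception thrown by the task
            System.out.println("Caught: " + e.getCause().getMessage());
        } finally {
            executor.shutdown();
        }
    }
}
```

Without the future.get() call, the IllegalStateException would never be seen anywhere.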

Thread Pools and Where Threads Reside

Thread pools provide significant performance benefits by reusing existing threads instead of creating new ones each time a task is executed. Here are the key details about how threads are managed internally:

  • FixedThreadPool: Creates a fixed number of threads upfront and reuses them for tasks.
  • CachedThreadPool: Creates new threads as needed but reuses previously constructed threads if available. Idle threads are terminated after a default timeout (60 seconds).
  • ForkJoinPool: Designed for divide-and-conquer tasks. It splits tasks into subtasks recursively and uses a work-stealing mechanism where idle threads can “steal” tasks from busy threads. The default common pool size is determined by the number of available processors.

Thread Pool Overview:

When you submit a task to an executor service, the following happens:

  • For newCachedThreadPool(), threads are created on demand and reside within the pool. If a thread is idle for more than 60 seconds, it is terminated.
  • newFixedThreadPool() keeps threads alive until explicitly shut down.
  • The ForkJoinPool maintains worker threads internally and manages them based on parallelism requirements. The common pool is shared across APIs like parallelStream() and CompletableFuture unless explicitly overridden.

Key Limitation: The parallelism of the common ForkJoinPool defaults to Runtime.getRuntime().availableProcessors() - 1 (with a minimum of 1), unless overridden by the system property java.util.concurrent.ForkJoinPool.common.parallelism.
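You can inspect the common pool's parallelism directly (the printed values will vary by machine, so no specific output is shown here):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolSize {
    public static void main(String[] args) {
        // Parallelism of the shared common pool, used by parallelStream()
        // and CompletableFuture unless another executor is supplied.
        System.out.println("Common pool parallelism: "
                + ForkJoinPool.commonPool().getParallelism());
        System.out.println("Available processors: "
                + Runtime.getRuntime().availableProcessors());
    }
}
```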

We will explore a deeper comparison between these thread pools, including configurations and tuning strategies, in Part 2.

Thread Priorities and Daemon Threads

  • Thread priorities: Help the OS determine the importance of a thread (ranging from MIN_PRIORITY to MAX_PRIORITY). In practice, priority changes rarely have a significant impact due to OS-level scheduling policies.
  • Daemon threads: Background threads that terminate when all user threads have completed.

Example:

Thread daemonThread = new Thread(() -> {
    while (true) {
        System.out.println("Daemon thread running");
    }
});

daemonThread.setDaemon(true); // must be called before start()
daemonThread.start();

Thread Safety and Synchronization

Understanding Race Conditions

A race condition occurs when multiple threads access shared resources concurrently and the outcome depends on the timing of their execution.

Example of a Race Condition:

public class RaceConditionExample {
    private int counter = 0;

    public void increment() {
        counter++; // not atomic: read, increment, write
    }

    public static void main(String[] args) throws InterruptedException {
        RaceConditionExample example = new RaceConditionExample();

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                example.increment();
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);

        t1.start();
        t2.start();

        t1.join();
        t2.join();

        // Often less than 2000, because increments from the two threads interleave
        System.out.println("Final counter: " + example.counter);
    }
}

Without proper synchronization, the counter may not reflect the correct value.

The `synchronized` Keyword

Synchronizing blocks or methods prevents race conditions by ensuring mutual exclusion.

Object-level Lock:

public synchronized void increment() {
    counter++;
}

Class-level Lock:

public static synchronized void incrementGlobalCounter() {
    globalCounter++;
}

Pitfall: Overusing synchronized can lead to thread contention and reduced performance. Avoid synchronizing large code blocks.
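Applying synchronized to the earlier race-condition example makes the result deterministic. A minimal sketch (the class name SynchronizedCounter is illustrative):

```java
public class SynchronizedCounter {
    private int counter = 0;

    // Both threads must acquire the same object lock, so increments
    // can no longer interleave mid-update.
    public synchronized void increment() {
        counter++;
    }

    public synchronized int get() {
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                c.increment();
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(c.get()); // always 2000 with synchronization
    }
}
```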

The `volatile` Keyword

The volatile keyword ensures visibility of changes to variables across threads but does not provide atomicity.

Example:

private volatile boolean running = true;

public void run() {
    while (running) {
        // Do work
    }
}

Tip: For atomic updates, prefer classes from java.util.concurrent.atomic such as AtomicInteger.
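AtomicInteger gives the same correctness as the synchronized counter without holding a lock. A minimal sketch (the class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicExample {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.incrementAndGet(); // lock-free atomic read-modify-write
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counter.get()); // 2000, with no synchronized blocks
    }
}
```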

Happens-before Relationship

The Java Memory Model defines a happens-before relationship that guarantees that changes made by one thread are visible to another. For example, releasing a lock happens-before acquiring the same lock by another thread.
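A volatile write/read pair also establishes happens-before: everything the writer did before the volatile write is visible to a reader after the volatile read. A minimal sketch (the class and method names are illustrative; the busy-wait loop is for demonstration only):

```java
public class HappensBeforeExample {
    private static int data = 0;
    private static volatile boolean ready = false;

    static int demo() throws InterruptedException {
        int[] observed = new int[1];

        Thread writer = new Thread(() -> {
            data = 42;    // ordinary write...
            ready = true; // ...published by the volatile write (happens-before)
        });
        Thread reader = new Thread(() -> {
            while (!ready) {
                // spin until the volatile read sees true
            }
            observed[0] = data; // guaranteed to see 42
        });

        reader.start();
        writer.start();
        reader.join();
        writer.join();
        return observed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 42
    }
}
```

Without volatile on ready, the reader could spin forever or observe a stale value of data.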

Conclusion

In this part, we covered the essential concepts of multi-threading, from basic thread creation and lifecycle management to understanding race conditions and synchronization mechanisms. We also highlighted key strategies for different workloads and potential pitfalls in real-world applications. Additionally, we provided a deep dive into thread pools, where threads reside, and their default upper limits.

With this foundation, you are now ready to explore advanced topics such as the Java Memory Model, in-depth comparisons of thread pools, and concurrency utilities in the next section.

Stay tuned for Part 2: Advanced Threading Concepts and Java Concurrency API.
