Thread synchronization tools in detail
Max Bazhenov
Senior Full Stack Developer | 8+ Years in Software Engineering | .NET (C#), Angular, TypeScript, Azure | Enterprise Web Applications | Real-time UI, Performance Optimization
Thread management in .NET offers multiple synchronization tools, each suited to different concurrency needs.
Here's a detailed look at key thread management mechanisms.
1. Lock (Monitor)
Purpose: Ensures that only one thread can access a particular section of code at a time.
How it works. The lock statement (or Monitor.Enter/Monitor.Exit) enforces mutual exclusion on a specified object, preventing other threads from entering the code block until the lock is released. When a thread encounters a lock statement, it waits until it acquires the lock on the specified object. It does not spin in a loop or continuously poll for the lock. Instead, .NET uses an efficient blocking mechanism in which the operating system puts the thread into a waiting state:
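A minimal sketch of typical usage (the counter field and method names are illustrative):
private static readonly object _lockObject = new object();
private static int _counter;

public void IncrementCounter()
{
    lock (_lockObject) // blocks here until the lock on _lockObject is acquired
    {
        _counter++; // read-modify-write is safe: no other thread is inside this block
    }
}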
Performance Considerations: lock is suitable for quick operations but can create bottlenecks if overused. Deadlocks can occur if locks are not handled carefully.
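The classic deadlock shape, as an illustrative sketch: two threads acquiring the same pair of locks in opposite order. Always taking locks in one consistent order avoids it.
private static readonly object _lockA = new object();
private static readonly object _lockB = new object();

public void Thread1Work()
{
    lock (_lockA)
    {
        lock (_lockB) { /* ... */ } // waits for _lockB while holding _lockA
    }
}

public void Thread2Work()
{
    lock (_lockB) // opposite order: if both threads hold their first lock,
    {             // each waits forever for the other's lock
        lock (_lockA) { /* ... */ }
    }
}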
2. Semaphore and SemaphoreSlim
Purpose: Controls access to a resource by limiting the number of concurrent threads that can enter a specific code block. Useful for limiting concurrency in scenarios like database connections, file access, or throttling requests.
How it works. Semaphore allows multiple threads (up to a set limit) to access a resource concurrently, while others wait until one of the slots becomes available.
private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(3); // Allow 3 threads at a time
A semaphore has a count and a maximum capacity. The count represents the number of available "permits" or slots for threads. Each time a thread acquires a slot, the count decreases by one; when the thread releases the slot, the count increases by one. If the count reaches zero, any additional thread attempting to acquire the semaphore blocks until a slot is released.
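A minimal usage sketch building on the declaration above (the simulated work is illustrative):
public async Task AccessResourceAsync()
{
    await _semaphore.WaitAsync(); // take a slot, or wait until one of the 3 is free
    try
    {
        // Work with the limited resource, e.g. a throttled external call
        await Task.Delay(100); // simulated work
    }
    finally
    {
        _semaphore.Release(); // return the slot so a waiting thread can proceed
    }
}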
3. Mutex (Mutual Exclusion)
Purpose: Similar to lock, but designed for inter-process synchronization, allowing threads in different processes to access a shared resource sequentially.
How it works. When a thread needs access to a protected resource, it calls Mutex.WaitOne to request ownership. Once it’s done, it releases the mutex by calling Mutex.ReleaseMutex, allowing other threads to acquire it. Mutex can be named, making it accessible across processes on the same machine.
private static readonly Mutex _mutex = new Mutex(false, "GlobalMutexName");

public void AccessResource()
{
    _mutex.WaitOne();
    try
    {
        // Access and work with shared resource
    }
    finally
    {
        _mutex.ReleaseMutex();
    }
}
4. Concurrent Collections
Purpose: Offer built-in thread safety for common collection scenarios (dictionaries, queues, stacks), reducing the need for explicit locks when multiple threads access and modify shared data. The most popular concurrent collections are ConcurrentDictionary, ConcurrentQueue, ConcurrentStack, and ConcurrentBag. Here, we’ll focus on ConcurrentDictionary.
How it works. In a ConcurrentDictionary, the dictionary's internal structure is divided into multiple segments, which act as smaller, independent dictionaries within the main dictionary (logical segmentation, not physical separation). Each segment has its own lock, which allows multiple threads to read from and write to different segments concurrently.
When a ConcurrentDictionary is initialized, it divides its data into segments based on the default or specified concurrency level. The concurrency level determines the number of segments, with each segment responsible for a portion of the data. When a thread accesses or modifies data, it only locks the specific segment holding that data, leaving other segments unlocked. The number of segments is typically set based on the expected level of concurrency (the estimated number of threads accessing the dictionary simultaneously). More segments mean higher parallelism but also a larger memory footprint.
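A small sketch of how the concurrency level is specified and what typical updates look like (the word-count scenario and the values 4 and 100 are illustrative):
private static readonly ConcurrentDictionary<string, int> _counts =
    new ConcurrentDictionary<string, int>(concurrencyLevel: 4, capacity: 100);

public void CountWord(string word)
{
    // AddOrUpdate locks only the portion of the dictionary holding this key
    _counts.AddOrUpdate(word, 1, (key, current) => current + 1);
}

public int GetCount(string word)
{
    // Reads are mostly lock-free and do not block writers to other segments
    return _counts.TryGetValue(word, out var count) ? count : 0;
}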