Thread synchronization tools in detail

Thread management in .NET offers multiple synchronization tools, each suited to different concurrency needs.

Here's a detailed look at key thread management mechanisms.


1. Lock (Monitor)

Purpose: Ensures that only one thread can access a particular section of code at a time.

How it works. The lock statement (or Monitor.Enter/Monitor.Exit) provides mutual exclusion on a specified object, preventing other threads from entering a code block until the lock is released. When a thread encounters a lock statement, it waits until it acquires the lock on that object. It does not wait by spinning in a simple loop or continuously polling; instead, .NET uses an efficient blocking mechanism in which the operating system puts the thread into a waiting state:

  1. Lock Request: When a thread requests a lock on an object, it checks if the lock is currently available.
  2. Immediate Acquisition or Waiting: If the lock is free, the thread acquires it and proceeds. If the lock is already held by another thread, the requesting thread goes into a waiting state.
  3. Waiting State: In this state, the thread is temporarily paused, allowing the CPU to work on other tasks. The waiting is handled at the OS level using low-level synchronization primitives.
  4. Wake-up and Resume: When the thread holding the lock finishes its work and releases it, the .NET Monitor (which implements the lock statement) signals the OS or runtime that the lock is free; one of the waiting threads is then awakened, acquires the lock, and resumes execution.

Performance Considerations: lock is suitable for quick operations but can become a bottleneck if a lock is held for long-running work or used too broadly. Deadlocks can occur if multiple locks are acquired in an inconsistent order or are otherwise not handled carefully.
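
For illustration, here is a minimal sketch of guarding a shared counter with lock. The _sync and _counter fields and the method names are examples rather than anything from the article, and the snippet assumes using System.Threading inside a containing class. The second method shows the roughly equivalent Monitor pattern that the compiler generates for the lock statement.

private static readonly object _sync = new object();
private static int _counter;

public void Increment()
{
    lock (_sync) // only one thread at a time can execute this block
    {
        _counter++;
    }
}

// Roughly what the compiler generates for the lock statement above
public void IncrementWithMonitor()
{
    bool lockTaken = false;
    try
    {
        Monitor.Enter(_sync, ref lockTaken); // blocks until the lock is acquired
        _counter++;
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(_sync); // releases the lock and wakes a waiting thread, if any
        }
    }
}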


2. Semaphore and SemaphoreSlim

Purpose: Controls access to a resource by limiting the number of concurrent threads that can enter a specific code block. Useful for limiting concurrency in scenarios like database connections, file access, or throttling requests.

How it works. A semaphore allows multiple threads (up to a set limit) to access a resource concurrently, while others wait until one of the slots becomes available. SemaphoreSlim is a lightweight, in-process version; the classic Semaphore wraps an operating-system handle and, when named, can also be shared across processes.

private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(3); // Allow 3 threads at a time

A semaphore has a count and a maximum capacity. The count represents the number of available "permits" or slots for threads. Every time a thread acquires a slot, the count decreases by one. When it releases the slot, the count increments by one. If the count reaches zero, any additional threads attempting to acquire the semaphore will be blocked until a slot is released.
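
As an illustration, here is a sketch of how the _semaphore field declared above is typically used. The DoWorkAsync method and the simulated work are placeholders, and the snippet assumes using System.Threading and System.Threading.Tasks inside a containing class.

public async Task DoWorkAsync()
{
    await _semaphore.WaitAsync(); // count goes down by one; waits (without blocking a thread) when the count is zero
    try
    {
        // Work with the limited resource, e.g. a throttled external call
        await Task.Delay(100);
    }
    finally
    {
        _semaphore.Release(); // count goes back up by one, letting a waiting caller in
    }
}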


3. Mutex (Mutual Exclusion)

Purpose: Similar to lock, but designed for inter-process synchronization, allowing threads in different processes to access a shared resource sequentially.

How it works. When a thread needs access to a protected resource, it calls Mutex.WaitOne to request ownership. Once it’s done, it releases the mutex by calling Mutex.ReleaseMutex, allowing other threads to acquire it. Mutex can be named, making it accessible across processes on the same machine.

private static readonly Mutex _mutex = new Mutex(false, "GlobalMutexName"); // false = not initially owned; the name makes it visible to other processes

public void AccessResource()
{
    _mutex.WaitOne();
    try
    {
        // Access and work with shared resource
    }
    finally
    {
        _mutex.ReleaseMutex();
    }
}

4. Concurrent Collections

Purpose: Provide built-in thread-safe counterparts to standard collections such as List and Dictionary, reducing the need for explicit locks. These collections handle multiple threads accessing and modifying shared data without requiring the caller to lock anything. The most popular concurrent collections are ConcurrentDictionary, ConcurrentQueue, ConcurrentStack, and ConcurrentBag. Here, we’ll focus on ConcurrentDictionary.

How it works. In a ConcurrentDictionary, the internal structure is divided into multiple segments, which act as smaller, independent dictionaries within the main dictionary (a logical segmentation, not a physical separation). Each segment is protected by its own lock, so threads writing to different segments do not block each other, and most read operations do not take a lock at all.

When a ConcurrentDictionary is initialized, it divides its data into segments based on the default or specified concurrency level. The concurrency level determines the number of segments, with each segment responsible for a portion of the data. When a thread accesses or modifies data, it only locks the specific segment holding that data, leaving other segments unlocked. The number of segments is typically set based on the expected level of concurrency (the estimated number of threads accessing the dictionary simultaneously). More segments mean higher parallelism but also a larger memory footprint.
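
As a sketch (the page-hit counter scenario and names are illustrative, and the snippet assumes using System.Collections.Concurrent inside a containing class), the code below shows the typical usage pattern: AddOrUpdate and TryGetValue combine the lookup and the modification into a single thread-safe call, so the caller never takes a lock itself.

private static readonly ConcurrentDictionary<string, int> _hits =
    new ConcurrentDictionary<string, int>();

public void RegisterHit(string page)
{
    // Adds the key with value 1, or applies the update delegate to the existing value.
    // The delegate may run more than once under contention, so it should be side-effect free.
    _hits.AddOrUpdate(page, 1, (key, current) => current + 1);
}

public int GetHits(string page)
{
    // Reads are lock-free.
    return _hits.TryGetValue(page, out var count) ? count : 0;
}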
