Concurrency is a fundamental concept in modern software development, enabling programs to handle multiple tasks at once and thus improving efficiency and performance. In this blog series, we’ll cover concurrency concepts specific to Golang; however, the concepts themselves are generic and can be applied in a language-agnostic manner!
Concurrency vs. Parallelism
Before diving into concurrency, it’s important to understand the difference between concurrency and parallelism, as these terms are often used interchangeably but have distinct meanings.
Concurrency is about dealing with multiple tasks at once. It involves structuring a program to handle multiple tasks simultaneously, even if not all are progressing at the same instant. It's more about the composition of independently executing processes.
Parallelism, on the other hand, is about doing multiple tasks at the same time. It requires a multi-core processor where separate tasks run simultaneously on different cores.
Let's consider the operations at a busy international airport to further illustrate the concepts of concurrency and parallelism.
Situation: At every airport, we have an air traffic controller, who is responsible for performing multiple tasks like managing takeoffs, landings and ground traffic.
Scenario 1: Concurrency in Action
- The controller instructs a plane to taxi to the runway, then switches to clear another plane for landing, and later coordinates with ground vehicles. The controller is handling multiple tasks but not executing them simultaneously. The controller switches between these tasks so quickly that it gives the illusion of doing everything in parallel, but in reality, it’s just interleaving them. Here we can say that the air traffic controller is handling multiple tasks concurrently.
Scenario 2: Parallelism in Action
- Let’s imagine we now have multiple air traffic controllers (a multi-core CPU). The controllers can split the responsibilities: one handles takeoffs, another handles landings, and a third manages ground traffic. Now each task can be performed in parallel because it’s being managed by a separate controller (core).
So, coming back to computer science, a single-threaded program running on a single core can be concurrent by handling multiple tasks and switching between them. Parallelism is about performing multiple operations at the same time, and it can only be achieved if you have the resources for it, i.e. multiple cores/processors.
Parallelism can also occur within a single core through instruction-level parallelism or data parallelism techniques like SIMD, but for now, let’s focus on parallelism through multiple cores.
Go's concurrency model allows for both concurrent and parallel execution, but its primary focus is on making concurrent programming more accessible and safe.
Goroutines: The Heart of Go Concurrency
Goroutines, in the Go programming language, are a fundamental abstraction and represent a unit of concurrency. They are lightweight, self-contained threads of execution that operate independently.
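Here is a minimal sketch of launching goroutines (the function name `squares` and the use of a `sync.WaitGroup` are our own illustrative choices): the `go` keyword starts each function call in its own goroutine, and the WaitGroup lets the caller wait for all of them to finish.

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes n*n for each input value, one goroutine per value.
func squares(nums []int) []int {
	out := make([]int, len(nums))
	var wg sync.WaitGroup
	for i, n := range nums {
		wg.Add(1)
		go func(i, n int) { // each call runs in its own goroutine
			defer wg.Done()
			out[i] = n * n // safe: each goroutine writes a distinct index
		}(i, n)
	}
	wg.Wait() // block until every goroutine has finished
	return out
}

func main() {
	fmt.Println(squares([]int{1, 2, 3, 4})) // [1 4 9 16]
}
```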
Here's how a Goroutine differs from a traditional thread:
- Concurrency vs. Parallelism:
  - Goroutine: Goroutines are designed for concurrency. They allow multiple functions to execute independently, making it easy to write concurrent code. Goroutines run in the same address space and share memory. Historically they were cooperatively scheduled, yielding control voluntarily at blocking operations; since Go 1.14, the runtime can also preempt long-running goroutines asynchronously.
  - Traditional thread: Threads are a lower-level operating-system construct that can be used for both concurrency and parallelism. Threads can run in parallel on multiple CPU cores and may or may not share memory, depending on the programming language and threading model used. Threads typically follow preemptive scheduling, which means the operating system can forcibly interrupt a running thread and schedule another thread in its place.
- Concurrency model:
  - Goroutine: Goroutines are managed by the Go runtime, which multiplexes many goroutines onto a small number of operating-system threads (typically one per CPU core). This user-level scheduling allows for efficient concurrent execution.
  - Traditional thread: Threads are managed directly by the operating system’s thread scheduler, which can be less efficient when dealing with a large number of threads.
- Context switching:
  - Goroutine: A goroutine’s stack is much smaller than that of a traditional thread (a few kilobytes to start) but can grow dynamically as needed. This smaller size means less state information to save and load during a context switch. The Go runtime scheduler handles the context switching of goroutines; since it operates in user space, a switch is more lightweight and efficient than an OS-level context switch. The scheduler only needs to save and load a minimal amount of state (such as the stack pointer and registers) for each goroutine, significantly less than the full thread state in traditional threading. This reduced overhead makes goroutines ideal for high-concurrency applications where numerous small tasks need to be handled concurrently.
  - Traditional thread: Traditional threads, managed by the operating system, are heavier in terms of resources. Each thread has its own stack, registers, and other kernel structures. When the OS switches context from one thread to another, it must save and load a significant amount of state information, which can be resource-intensive. This overhead grows with the number of threads and reduces overall system efficiency.
- Communication:
  - Goroutine: Goroutines communicate via channels, built-in concurrency primitives in Go. Channels simplify and encourage safe communication between goroutines.
  - Traditional thread: Threads often rely on lower-level synchronization mechanisms such as mutexes, semaphores, and condition variables for communication, which can be error-prone and susceptible to deadlocks.
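As a brief sketch of channel-based communication (the function name `sumViaChannel` is our own illustrative choice): one goroutine computes a result and sends it over a channel, and the receiver blocks until the value arrives, so no mutex is needed.

```go
package main

import "fmt"

// sumViaChannel computes 1+2+...+n in a separate goroutine and
// delivers the result back to the caller over a channel.
func sumViaChannel(n int) int {
	results := make(chan int) // unbuffered: send and receive synchronize

	go func() {
		sum := 0
		for i := 1; i <= n; i++ {
			sum += i
		}
		results <- sum // blocks until the caller receives
	}()

	return <-results // blocks until the goroutine sends
}

func main() {
	fmt.Println(sumViaChannel(10)) // 55
}
```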
As we mentioned, Goroutines are managed by the Go runtime, specifically the Go scheduler, and we’ll dive into the Go scheduler in our next article!
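To make the “lightweight” claim from the context-switching discussion concrete, here’s a small sketch (the helper name `runMany` is our own) that launches 100,000 goroutines. Each goroutine starts with only a couple of kilobytes of stack, so this is routine in Go, whereas spawning 100,000 OS threads would exhaust most systems.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runMany launches n goroutines that each increment a shared counter,
// then waits for all of them to finish and returns the final count.
func runMany(n int) int64 {
	var count int64
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			atomic.AddInt64(&count, 1) // atomic: many goroutines touch count
		}()
	}
	wg.Wait()
	return count
}

func main() {
	// 100,000 goroutines is unremarkable for the Go runtime.
	fmt.Println(runMany(100000)) // 100000
}
```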
This will be a multi-part series on concurrency in Golang, where we’ll not only dive into the internals of concurrency in Golang but also work through hands-on examples implementing different concurrency patterns! So subscribe to the newsletter so that you don’t miss out on those updates!