A goroutine is a lightweight, independently executing function that runs concurrently with other goroutines in the same address space. Think of it as:
- In JavaScript, we have an event loop that handles async tasks (e.g., promises, async/await).
- In Go, instead of a single-threaded event loop, we have goroutines managed by the Go runtime.
They allow us to perform tasks like handling requests, I/O operations, or computations in parallel without manually managing threads.
Feature | Goroutine | OS Thread |
---|---|---|
Size at start | ~2 KB stack | ~1 MB stack |
Managed by | Go runtime scheduler (M:N model) | OS Kernel |
Number you can create | Millions | Limited (a few thousand) |
Switching | Very fast, done in user space | Slower, done by OS |
Creation cost | Extremely cheap | Expensive |
👉 This is why we say goroutines are lightweight threads.
package main
import (
"fmt"
"time"
)
func printMessage(msg string) {
for i := 0; i < 5; i++ {
fmt.Println(msg, i)
time.Sleep(500 * time.Millisecond)
}
}
func main() {
go printMessage("goroutine") // runs concurrently
printMessage("main") // runs in main goroutine
}
- The `go` keyword starts a new goroutine.
- Here, `main()` itself runs in the main goroutine, and `go printMessage("goroutine")` starts another goroutine.
- If `main()` exits before the new goroutine finishes, the program ends immediately.
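To make the table's "millions" claim concrete, here is a minimal runnable sketch (not from the original; it borrows `sync.WaitGroup`, which is introduced properly below) that spawns 100,000 goroutines:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100_000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done() // each goroutine does trivial work and exits
		}()
	}
	wg.Wait()
	fmt.Println("spawned and finished 100,000 goroutines")
}
```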
Go runtime uses an M:N scheduler, meaning:
- M goroutines are multiplexed onto N OS threads.
- This is different from 1:1 (like Java threads) or N:1 (like cooperative multitasking).
The scheduler ensures:
- Goroutines are distributed across multiple threads.
- When one blocks (e.g., waiting on I/O), another is scheduled.
Think of goroutines as tasks in a work-stealing scheduler.
Since goroutines run concurrently, we need synchronization tools:
package main
import (
"fmt"
"sync"
)
func worker(id int, wg *sync.WaitGroup) {
defer wg.Done() // signals completion
fmt.Printf("Worker %d starting\n", id)
// simulate work
fmt.Printf("Worker %d done\n", id)
}
func main() {
var wg sync.WaitGroup
for i := 1; i <= 3; i++ {
wg.Add(1) // add to wait counter
go worker(i, &wg)
}
wg.Wait() // wait for all to finish
}
✅ Ensures the program won't exit before all goroutines finish.
Channels are Go's big idea for concurrency. Instead of sharing memory and locking it, goroutines communicate by passing messages.
package main
import "fmt"
func worker(ch chan string) {
ch <- "task finished" // send data into channel
}
func main() {
ch := make(chan string)
go worker(ch)
msg := <-ch // receive data
fmt.Println("Message:", msg)
}
👉 Think of it like JavaScript's `Promise.resolve("task finished")`, but the communication is synchronous unless the channel is buffered.
ch := make(chan int, 2) // capacity = 2
ch <- 10
ch <- 20
fmt.Println(<-ch)
fmt.Println(<-ch)
- Unbuffered channel: a send blocks until a receive is ready.
- Buffered channel: a send doesn't block until the buffer is full.
select {
case msg := <-ch1:
fmt.Println("Received", msg)
case msg := <-ch2:
fmt.Println("Received", msg)
default:
fmt.Println("No message")
}
Like `Promise.race()` in JS.
- Main goroutine exit kills all child goroutines. ✅ Always use WaitGroups or channels to synchronize.
- Race conditions happen if goroutines write/read shared data without synchronization. ✅ Use `sync.Mutex`, `sync.RWMutex`, or better: channels (see the sketch below).
- Too many goroutines can cause memory pressure, but they are still far cheaper than threads.
- Don't block forever: unreceived channel sends cause deadlocks.
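As a concrete illustration of the race-condition bullet above, here is a minimal sketch (the counter and loop bound are illustrative) that protects shared state with `sync.Mutex`:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // guard the shared counter
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("counter:", counter) // always 1000; without the mutex it could be less
}
```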
- Web servers: Each request can run in its own goroutine.
- Scraping / Crawling: Launch a goroutine for each URL fetch.
- Background jobs: Run tasks concurrently (DB writes, logging, metrics).
- Pipelines: Process data in multiple stages with goroutines + channels.
- JavaScript → concurrency = single-threaded event loop + async callbacks.
- Go → concurrency = many goroutines scheduled onto multiple OS threads.
So:
- In JS, concurrency = illusion via async.
- In Go, concurrency = real, parallel execution when multiple CPU cores exist.
✅ To summarize:
- Goroutines = cheap concurrent tasks managed by Go runtime.
- Not OS threads, but multiplexed onto threads.
- Communicate via channels instead of shared memory.
- Powerful with WaitGroups, select, and synchronization tools.
Concurrency vs parallelism is a core concept in computer science and in Go (since Go was built with concurrency in mind). Let's break it down step by step in detail.
- Concurrency = Dealing with many tasks at once (managing multiple things).
- Parallelism = Doing many tasks at the same time (executing multiple things simultaneously).
Both sound similar, but they're not the same.
Imagine we're in a restaurant kitchen:
- Concurrency (chef multitasking): One chef handles multiple dishes by switching between them. He cuts vegetables for Dish A, stirs the sauce for Dish B, and checks the oven for Dish C. He's not doing them at the exact same time, but he's managing multiple tasks in progress.
- Parallelism (many chefs working together): Three chefs cook three different dishes at the same time. Tasks truly happen simultaneously.
👉 Concurrency is about structure (how tasks are managed). 👉 Parallelism is about execution (how tasks are run in hardware).
- Concurrency: Multiple tasks make progress in overlapping time periods. It doesn't require multiple processors/cores. Even with a single CPU core, the system can interleave execution of tasks via context switching.
- Parallelism: Multiple tasks run at the exact same instant, usually on different CPU cores or processors.
Go is famous for concurrency with goroutines.
package main
import (
"fmt"
"time"
)
func task(name string) {
for i := 1; i <= 3; i++ {
fmt.Println(name, ":", i)
time.Sleep(500 * time.Millisecond)
}
}
func main() {
go task("Task A") // run concurrently
go task("Task B")
time.Sleep(3 * time.Second)
fmt.Println("Done")
}
- Concurrency: Both `Task A` and `Task B` appear to run at the same time because Go schedules goroutines across available cores. If you run this on a single-core CPU, Go interleaves execution; that's concurrency.
- Parallelism: If you run this on a multi-core CPU, `Task A` might run on Core 1 and `Task B` on Core 2 simultaneously; that's parallelism.
Aspect | Concurrency | Parallelism |
---|---|---|
Definition | Managing multiple tasks at once | Executing multiple tasks at once |
Focus | Task switching and scheduling | Simultaneous execution |
CPU Requirement | Can happen on a single-core CPU | Requires multi-core CPU |
Analogy | One chef multitasking across dishes | Many chefs cooking different dishes |
In Go | Achieved via goroutines & channels | Achieved when goroutines run on multiple cores |
- Concurrency (single-core):
Time: |----A----|----B----|----A----|----B----|
^ Task A and Task B interleaved
- Parallelism (multi-core):
Core1: |----A----|----A----|----A----|
Core2: |----B----|----B----|----B----|
^ Tasks running truly at the same time
- Concurrency is a design approach: "How do we structure a program so that it can handle many things at once?"
- Parallelism is an execution strategy: "How do we use hardware to literally do many things at once?"
Go is concurrent by design (goroutines + channels) and parallel by runtime (GOMAXPROCS decides how many cores are used).
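To see that knob in practice, here is a small sketch using the standard `runtime` package; forcing `GOMAXPROCS(1)` is illustrative and yields concurrency without parallelism:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("CPU cores:", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 0 queries without changing

	runtime.GOMAXPROCS(1) // all goroutines now share one core: concurrent, not parallel
	fmt.Println("GOMAXPROCS after set:", runtime.GOMAXPROCS(0))
}
```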
✅ Final takeaway:
- Concurrency = composition of independently executing tasks.
- Parallelism = simultaneous execution of tasks.
They are related, but not the same. A program can be concurrent but not parallel, parallel but not concurrent, or both.
Let's go step by step and dive deep into channels in Go, because they're one of the most powerful concurrency primitives in the language.
In Go, a channel is a typed conduit (pipe) through which goroutines can communicate with each other.
- They allow synchronization (ensuring goroutines coordinate properly).
- They allow data exchange between goroutines safely, without explicit locking (like mutexes).
👉 Think of a channel as a "queue" or "pipeline" where one goroutine can send data and another goroutine can receive it.
var ch chan int // declare a channel of type int
ch := make(chan int) // make allocates memory for a channel
Here, `ch` is a channel of integers, and `make(chan int)` initializes it.
We use the `<-` operator.
ch <- 10 // send value 10 into channel
value := <-ch // receive value from channel
- Send (`ch <- value`): puts data into the channel.
- Receive (`value := <-ch`): gets data from the channel.
- Both operations block until the other side is ready (unless buffered).
package main
import (
"fmt"
"time"
)
func worker(ch chan string) {
time.Sleep(2 * time.Second)
ch <- "done" // send message
}
func main() {
ch := make(chan string)
go worker(ch)
fmt.Println("Waiting for worker...")
msg := <-ch // blocks until worker sends data
fmt.Println("Worker says:", msg)
}
✅ Output:
Waiting for worker...
Worker says: done
Here:
- `main` waits on `<-ch` until the goroutine sends "done".
- This synchronizes `main` and the worker.
- No capacity β send blocks until a receiver is ready, and receive blocks until a sender is ready.
- Ensures synchronization.
ch := make(chan int) // unbuffered
- Created with a capacity.
- Allows sending multiple values before blocking, up to the capacity.
ch := make(chan int, 3) // capacity = 3
ch <- 1
ch <- 2
ch <- 3
// sending a 4th value will block until receiver consumes one
👉 Buffered channels provide asynchronous communication.
We can close a channel when no more values will be sent:
close(ch)
After closing:
- Further sends → panic.
- Receives → still possible, but they yield zero values once the channel is empty.
Example:
package main
import "fmt"
func main() {
ch := make(chan int, 2)
ch <- 10
ch <- 20
close(ch)
for val := range ch {
fmt.Println(val)
}
}
✅ Output:
10
20
We can restrict channels to send-only or receive-only.
func sendData(ch chan<- int) { // send-only
ch <- 100
}
func receiveData(ch <-chan int) { // receive-only
fmt.Println(<-ch)
}
This enforces clear contracts between functions.
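A short usage sketch (an addition, not from the original) wiring the two functions above together; the bidirectional channel converts implicitly to `chan<- int` and `<-chan int` at the call sites:

```go
package main

import "fmt"

func sendData(ch chan<- int) { // send-only view of the channel
	ch <- 100
}

func receiveData(ch <-chan int) { // receive-only view of the channel
	fmt.Println(<-ch)
}

func main() {
	ch := make(chan int) // bidirectional; narrows when passed to each function
	go sendData(ch)
	receiveData(ch) // prints 100
}
```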
The `select` statement is like a `switch` for channels.
It waits on multiple channel operations and executes whichever is ready first.
select {
case msg1 := <-ch1:
fmt.Println("Received", msg1)
case msg2 := <-ch2:
fmt.Println("Received", msg2)
default:
fmt.Println("No messages")
}
👉 Useful for:
- Handling multiple channels.
- Adding timeouts with `time.After` (see the sketch below).
- Preventing blocking with `default`.
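Here is a hedged sketch of the timeout case, assuming a worker that is slower than our 2-second budget; `time.After` returns a channel that fires once after the given duration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan string)

	go func() {
		time.Sleep(3 * time.Second) // worker is slower than our patience
		ch <- "result"
	}()

	select {
	case msg := <-ch:
		fmt.Println("Received", msg)
	case <-time.After(2 * time.Second):
		fmt.Println("Timed out waiting for worker") // this case fires first here
	}
}
```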
Channels make it easy to build worker pools.
package main
import (
"fmt"
"time"
)
func worker(id int, jobs <-chan int, results chan<- int) {
for job := range jobs {
fmt.Printf("Worker %d processing job %d\n", id, job)
time.Sleep(time.Second)
results <- job * 2
}
}
func main() {
jobs := make(chan int, 5)
results := make(chan int, 5)
// Start 3 workers
for i := 1; i <= 3; i++ {
go worker(i, jobs, results)
}
// Send jobs
for j := 1; j <= 5; j++ {
jobs <- j
}
close(jobs)
// Collect results
for r := 1; r <= 5; r++ {
fmt.Println("Result:", <-results)
}
}
✅ Output (order may vary):
Worker 1 processing job 1
Worker 2 processing job 2
Worker 3 processing job 3
Worker 1 processing job 4
Worker 2 processing job 5
Result: 2
Result: 4
Result: 6
Result: 8
Result: 10
This shows how channels + goroutines → powerful concurrent systems.
- Channels are typed pipes for goroutine communication.
- Unbuffered channels synchronize sender and receiver.
- Buffered channels allow limited async communication.
- Use `close()` to signal no more values.
- Directional channels (`chan<-`, `<-chan`) enforce contracts.
- `select` helps multiplex multiple channels.
- Channels + goroutines = safe, concurrent, and elegant design.
Now we're going into the guts of channels in Go, the kind of stuff that matters if we want a CS-level understanding of why channels are so powerful and how they avoid race conditions.
Channels in Go aren't magic: they're implemented in the Go runtime (part of the scheduler and memory model). Let's break down their internal structure, blocking mechanism, and scheduling behavior.
Internally, every channel is represented by a structure called `hchan` (defined in Go's runtime source, `runtime/chan.go`):
type hchan struct {
qcount uint // number of elements currently in queue
dataqsiz uint // size of the circular buffer
buf unsafe.Pointer // circular buffer (for buffered channels)
elemsize uint16 // size of each element
closed uint32 // is channel closed?
sendx uint // send index (next slot to write to)
recvx uint // receive index (next slot to read from)
recvq waitq // list of goroutines waiting to receive
sendq waitq // list of goroutines waiting to send
lock mutex // protects all fields
}
- Circular Buffer → if the channel is buffered, data lives here.
- Send/Recv Index → used for round-robin access in the buffer.
- Wait Queues → goroutines that are blocked are put here.
- Lock → ensures safe concurrent access (the Go runtime manages locking, so we don't).
Unbuffered channels are the simplest case:
- Send (`ch <- x`):
  - If there's already a goroutine waiting to receive, the value is copied directly into its stack.
  - If not, the sender blocks: it's enqueued into `sendq` until a receiver arrives.
- Receive (`<-ch`):
  - If there's a waiting sender, the value is copied directly.
  - If not, the receiver blocks: it's enqueued into `recvq` until a sender arrives.
👉 This is why unbuffered channels synchronize goroutines. No buffer exists; the transfer happens only when both sides are ready.
Buffered channels add a queue (circular buffer):
- Send:
  - If the buffer is not full → put the value in the buffer, increment `qcount`, update `sendx`.
  - If the buffer is full → block, enqueue the sender in `sendq`.
- Receive:
  - If the buffer is not empty → take a value from the buffer, decrement `qcount`, update `recvx`.
  - If the buffer is empty → block, enqueue the receiver in `recvq`.
👉 Buffered channels provide asynchronous communication, but when full/empty they still enforce synchronization.
When a goroutine cannot proceed (because the channel is full or empty), Go's runtime parks it:
- Parking = goroutine is put to sleep, removed from runnable state.
- Unparking = when the condition is satisfied (e.g., sender arrives), runtime wakes up the goroutine and puts it back on the scheduler queue.
This avoids busy-waiting (goroutines don't spin-loop; they sleep efficiently).
When we `close(ch)`:
- The `closed` flag in `hchan` is set.
- All goroutines in `recvq` are woken up and return the zero value.
- Any new send → panic.
- Receives on an empty closed channel → return the zero value immediately.
`select` in Go is implemented like a non-deterministic choice operator:
- The runtime looks at all channel cases.
- If multiple channels are ready → pick one pseudo-randomly (to avoid starvation).
- If none are ready → block the goroutine and enqueue it on all those channels' `sendq`/`recvq`.
- When one channel becomes available, the runtime wakes up the goroutine, executes that case, and unregisters it from the others.
👉 This is why `select` is fair and efficient.
Channels follow Go's happens-before relationship:
- A send on a channel happens before the corresponding receive completes.
- This ensures visibility of writes: when one goroutine sends a value, all memory writes before the send are guaranteed visible to the receiver after the receive.
This is similar to release-acquire semantics in CPU memory models.
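A minimal sketch of that guarantee (variable names are illustrative): the write to `msg` happens before the send, so `main` is guaranteed to observe it after the receive, with no lock:

```go
package main

import "fmt"

var msg string

func setup(done chan struct{}) {
	msg = "hello"      // this write happens before the send below
	done <- struct{}{} // the send happens-before the corresponding receive
}

func main() {
	done := make(chan struct{})
	go setup(done)
	<-done           // after this receive, the write to msg is visible
	fmt.Println(msg) // guaranteed to print "hello"
}
```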
- Channels avoid explicit locks for user code; the runtime lock inside `hchan` is optimized with CAS (Compare-And-Swap) instructions when possible.
- For heavy concurrency, channels can become a bottleneck (due to contention on `hchan.lock`). In such cases, Go devs sometimes use lock-free data structures or sharded channels.
- But for safe communication, channels are much cleaner than manual locking.
Imagine a mailbox system:
- Unbuffered channel → one person waits at the mailbox until another arrives.
- Buffered channel → the mailbox has slots; the sender can drop letters until it's full.
- `select` → a person waiting at multiple mailboxes, ready to grab whichever letter arrives first.
- Closing → the post office shuts down; no new letters are allowed, but old ones can still be collected.
- Channels are backed by a lock-protected struct (`hchan`) with a buffer and wait queues.
- Unbuffered channels → synchronous handoff (sender and receiver meet at the same time).
- Buffered channels → async up to capacity, but they still block when full/empty.
- Blocked goroutines are parked efficiently, not spin-looping.
- Select allows non-deterministic, fair channel multiplexing.
- Closing signals termination and wakes receivers.
- Channels provide happens-before memory guarantees, making them safer than manual synchronization.
Let's go deep into unbuffered vs buffered channels in Go, both conceptually and under the hood (CS-level).
A channel in Go is essentially a typed conduit that goroutines use to communicate. Think of it like a pipe with synchronization built in. Under the hood, Go implements channels as a struct (`hchan`) in the runtime, which manages:
- A queue (circular buffer) of values
- A list of goroutines waiting to send
- A list of goroutines waiting to receive
- Locks for synchronization
An unbuffered channel is created like this:
ch := make(chan int) // no buffer size specified
- Synchronous communication.
  - A send (`ch <- v`) blocks until another goroutine executes a receive (`<-ch`).
  - A receive blocks until another goroutine sends.
- This creates a rendezvous point between goroutines: both must be ready simultaneously.
- Since the buffer capacity = 0, the channel cannot hold values.
- When a goroutine executes `ch <- v`:
  - The runtime checks if there's a waiting receiver in the channel's `recvq`.
  - If yes → it directly transfers the value from sender to receiver (no buffer copy).
  - If not → the sender goroutine is put to sleep and added to the `sendq`.
- Similarly, a receiver blocks until there's a sender.
So data is passed directly, goroutine-to-goroutine, like a handoff.
func main() {
ch := make(chan int)
go func() {
ch <- 42 // blocks until receiver is ready
}()
val := <-ch // blocks until sender is ready
fmt.Println(val) // 42
}
This ensures synchronization: the print only happens after the send completes.
A buffered channel is created like this:
ch := make(chan int, 3) // capacity = 3
- Asynchronous communication up to capacity.
  - A send (`ch <- v`) only blocks if the buffer is full.
  - A receive (`<-ch`) only blocks if the buffer is empty.
- Acts like a queue between goroutines.
- The channel has a circular buffer (`qcount`, `dataqsiz`, `buf`).
- On `ch <- v`:
  - If a receiver is waiting → the value bypasses the buffer and is sent directly.
  - Else, if the buffer is not full → the value is enqueued in the buffer.
  - Else (buffer full) → the sender goroutine is parked in `sendq`.
- On `<-ch`:
  - If the buffer has elements → dequeue and return.
  - Else, if a sender is waiting → take the value directly.
  - Else → the receiver goroutine is parked in `recvq`.
So buffered channels allow decoupling: senders and receivers don't have to line up perfectly in time (up to buffer capacity).
func main() {
ch := make(chan int, 2)
ch <- 1 // does not block
ch <- 2 // does not block
go func() {
ch <- 3 // blocks until someone reads
}()
fmt.Println(<-ch) // frees up buffer, goroutine sends 3
fmt.Println(<-ch)
fmt.Println(<-ch)
}
Feature | Unbuffered Channel | Buffered Channel |
---|---|---|
Capacity | 0 | > 0 |
Communication style | Synchronous (rendezvous) | Asynchronous (queue, but limited) |
Blocking (Send) | Until a receiver is ready | Until buffer has space |
Blocking (Receive) | Until a sender is ready | Until buffer has data |
Use case | Enforce synchronization, handoff semantics | Decouple producer/consumer timing |
- Unbuffered channel: Like passing a note hand-to-hand; both people must meet at the same time.
- Buffered channel: Like a mailbox; the sender can drop the note and leave, and the receiver can pick it up later (until the mailbox is full).
- The Go runtime (`runtime/chan.go`) maintains the `hchan` struct:

type hchan struct {
    qcount   uint           // number of elements in queue
    dataqsiz uint           // size of circular buffer
    buf      unsafe.Pointer // actual buffer
    sendx    uint           // next send index
    recvx    uint           // next receive index
    recvq    waitq          // list of waiting receivers
    sendq    waitq          // list of waiting senders
    lock     mutex
}

- Unbuffered: `dataqsiz == 0`; always uses `sendq`/`recvq`.
- Buffered: uses the circular buffer for queuing, and only parks goroutines when full/empty.
👉 So in short:
- Unbuffered channels couple goroutines tightly in time (synchronous handoff).
- Buffered channels decouple them (asynchronous, but limited by capacity).
Perfect: this is where Go really shines. Let's break down how the Go runtime orchestrates goroutines + channels together, in detail, both conceptually and at the runtime (CS-level) implementation.
Go doesn't rely on the OS to schedule lightweight tasks. Instead, it implements its own scheduler inside the runtime. This allows goroutines and channels to work smoothly together.
- A goroutine is a lightweight thread of execution, managed by the Go runtime (not the OS).
- Under the hood:
  - Each goroutine is represented by a `g` struct.
  - Each has its own stack (starts tiny, grows/shrinks dynamically).
  - Thousands (even millions) of goroutines can run inside one OS thread.
- M = OS threads
- N = goroutines
- The runtime maps N goroutines onto M OS threads.
- Key runtime structs:
  - M (Machine) → an OS thread
  - P (Processor) → a logical processor, responsible for scheduling goroutines on an M
  - G (Goroutine) → a goroutine itself
- Scheduling is cooperative + preemptive:
  - Goroutines yield at certain safe points (e.g., blocking operations, function calls).
  - Since Go 1.14, asynchronous preemption also lets the runtime interrupt tight loops.
So: goroutines are not OS-level threads; they're scheduled by Go's own runtime.
Channels are the synchronization primitive between goroutines.
Runtime implementation: `runtime/chan.go`.
Struct:
type hchan struct {
qcount uint // # of elements in queue
dataqsiz uint // buffer size
buf unsafe.Pointer // circular buffer
sendx uint // next send index
recvx uint // next receive index
recvq waitq // waiting receivers
sendq waitq // waiting senders
lock mutex
}
- Channels are queues with wait lists:
  - If buffered → goroutines enqueue/dequeue values.
  - If unbuffered → goroutines handshake directly.
- Senders and receivers that cannot proceed are parked (suspended) into the `sendq` or `recvq`.
ch := make(chan int)
go func() { ch <- 42 }()
val := <-ch
- Sender (`ch <- 42`):
  - Lock channel.
  - Check `recvq` (waiting receivers).
  - If a receiver is waiting → value copied directly → receiver wakes up → sender continues.
  - If no receiver → sender is parked (blocked) and added to `sendq`.
- Receiver (`<-ch`):
  - Lock channel.
  - Check `sendq` (waiting senders).
  - If a sender is waiting → value copied → sender wakes up → receiver continues.
  - If no sender → receiver is parked and added to `recvq`.
This ensures synchronous handoff.
ch := make(chan int, 2)
- Sender (`ch <- v`):
  - Lock channel.
  - If `recvq` has waiting receivers → skip the buffer, deliver directly.
  - Else, if the buffer has space → enqueue the value → done.
  - Else (buffer full) → park the sender in `sendq`.
- Receiver (`<-ch`):
  - Lock channel.
  - If the buffer has values → dequeue → done.
  - Else, if `sendq` has waiting senders → take the value directly.
  - Else → park the receiver in `recvq`.
So buffered channels act as a mailbox (async up to capacity).
When goroutines can't make progress (a blocked send/recv), the runtime:
- Parks them: puts them in the channel queues (`sendq` or `recvq`) and removes them from the scheduler's run queue.
- Stores a `sudog` (suspended goroutine) object in the queue with metadata (which goroutine, element pointer, etc.).
When the condition is satisfied (buffer space, sender arrives, etc.):
- The runtime wakes up a waiting goroutine by moving it back into the schedulerβs run queue.
- The scheduler later assigns it to a P (processor) → M (thread) → execution resumes.
This is why Go channels feel seamless: the runtime transparently parks and wakes goroutines.
`select` is also handled in the runtime:
- The runtime checks multiple channels in random order to avoid starvation.
- If one is ready → it proceeds immediately.
- If none are ready → the goroutine is parked, attached to all the involved channels' queues, and woken up when one becomes available.
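A small sketch (not from the original) that makes the pseudo-random choice visible: both cases are ready on every iteration, so over many rounds both branches fire roughly equally:

```go
package main

import "fmt"

func main() {
	ch1 := make(chan int, 1)
	ch2 := make(chan int, 1)
	counts := map[string]int{}

	for i := 0; i < 1000; i++ {
		ch1 <- 1 // make both cases ready
		ch2 <- 2

		select {
		case <-ch1:
			counts["ch1"]++
			<-ch2 // drain the other channel for the next round
		case <-ch2:
			counts["ch2"]++
			<-ch1
		}
	}
	fmt.Println(counts) // each case chosen roughly 500 times
}
```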
- Channel operations are protected by a mutex + atomic ops → very efficient.
- Goroutines are cheap (KB stacks, small structs).
- Parking/waking is implemented purely in the runtime: no heavy syscalls unless all goroutines block (then Go hands the thread back to the OS).
G1: ch <- 42 <-----> G2: val := <-ch
(synchronous handoff, both must rendezvous)
G1: ch <- 42 ---> [ buffer ] ---> G2: val := <-ch
(asynchronous until buffer full/empty)
[M:OS Thread] <----> [P:Logical Processor] <----> [G:Goroutine Queue]
- Goroutines = cheap lightweight threads managed by Go runtime.
- Scheduler = M:N model with P (processor) abstraction.
- Channels = safe queues with wait lists.
- Interaction = senders/receivers park & wake, enabling CSP-style concurrency.
- Runtime magic = efficient, cooperative scheduling + lightweight context switching.
👉 So: goroutines are like "actors," channels are "mailboxes," and the Go runtime is the "stage manager" that schedules actors and delivers their messages efficiently.
Let's build a step-by-step execution timeline for how the Go runtime handles goroutines + channels.
Two cases: unbuffered and buffered channels.
Code:
ch := make(chan int)
go func() {
ch <- 42
fmt.Println("Sent 42")
}()
val := <-ch
fmt.Println("Received", val)
- Main goroutine (G_main) creates channel `ch` (capacity = 0).
  - Runtime allocates an `hchan` struct with empty `sendq` and `recvq`.
- Spawn goroutine (G1) → scheduled by the runtime onto an M (OS thread) via some P.
- G1 executes `ch <- 42`:
  - Lock channel.
  - Since `recvq` is empty, no receiver is waiting.
  - Create a `sudog` for G1 (stores goroutine pointer + value).
  - Add the `sudog` to `sendq`.
  - G1 is parked (blocked) → removed from the run queue.
- Main goroutine executes `<-ch`:
  - Lock channel.
  - Sees `sendq` has a waiting sender (G1).
  - Runtime copies `42` from G1's stack to G_main's stack.
  - Removes G1 from `sendq`.
  - Marks G1 as runnable → puts it back in the scheduler's run queue.
  - G_main continues with value `42`.
- Scheduler resumes G1 → prints "Sent 42". Main goroutine prints "Received 42".
🔸 Key point: In unbuffered channels, send/recv must rendezvous. One goroutine blocks until the other arrives.
Code:
ch := make(chan int, 2)
go func() {
ch <- 1
ch <- 2
ch <- 3
fmt.Println("Sent all")
}()
time.Sleep(time.Millisecond) // give sender time
fmt.Println(<-ch)
fmt.Println(<-ch)
fmt.Println(<-ch)
- Main goroutine (G_main) creates channel `ch` (capacity = 2).
  - Runtime allocates the buffer (circular queue), size = 2.
- Spawn goroutine (G1).
- G1 executes `ch <- 1`:
  - Lock channel.
  - Buffer not full (0/2).
  - Enqueue `1` at `buf[0]`.
  - Increment `qcount` = 1.
  - Return immediately (non-blocking).
- G1 executes `ch <- 2`:
  - Lock channel.
  - Buffer not full (1/2).
  - Enqueue `2` at `buf[1]`.
  - `qcount` = 2.
  - Return immediately.
- G1 executes `ch <- 3`:
  - Lock channel.
  - Buffer is full (2/2).
  - No receivers waiting (`recvq` empty).
  - Create a `sudog` for G1.
  - Put it in `sendq`.
  - Park G1 (blocked).
- Main goroutine executes `<-ch`:
  - Lock channel.
  - Buffer has elements (`qcount` = 2).
  - Dequeue `1`.
  - `qcount` = 1.
  - Since there's a blocked sender in `sendq` (G1 with value `3`), the runtime:
    - Wakes G1.
    - Copies `3` into the buffer (at the freed slot), so `qcount` = 2 again.
    - G1 resumes later.
- Main goroutine executes `<-ch` again:
  - Dequeue `2`.
  - `qcount` = 1 (still holds `3`).
- Main goroutine executes `<-ch` a final time:
  - Dequeue `3`.
  - `qcount` = 0 (buffer empty).
- Scheduler resumes G1 → "Sent all" printed.
🔸 Key point: Buffered channels decouple sender/receiver timing. G1 only blocked when the buffer was full.
G1: send(42) ---- waits ----> G_main: recv()
<--- wakes ----
Buffer: [ 1 ][ 2 ] <- send 1, send 2
Buffer: full <- send 3 blocks
Recv 1 → slot frees <- wakes sender, puts 3 in
Recv 2, Recv 3 <- empties buffer
👉 In both cases, the Go runtime orchestrates this:
- `sendq` and `recvq` hold waiting goroutines (`sudog` objects).
- Blocked goroutines are parked (suspended).
- When conditions change (the buffer frees, a peer arrives), goroutines are woken and put back into the scheduler's run queue.
A buffered channel is a channel with capacity > 0:
ch := make(chan int, 3) // capacity 3
It provides a small queue (a circular buffer) between senders and receivers. A send (`ch <- v`) only blocks when the buffer is full; a receive (`<-ch`) only blocks when the buffer is empty, unless there are waiting peers, in which case the runtime can do a direct handoff.
Use it when we want to decouple producer and consumer timing (allow short bursts) but still bound memory and concurrency.
- Create: `ch := make(chan T, capacity)` where `capacity >= 1`.
- Zero value is `nil`: `var ch chan int` → a nil channel (send/recv block forever).
- Inspect: `len(ch)` gives the number of queued elements, `cap(ch)` gives the capacity (see the snippet below).
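A quick runnable check of `len` and `cap` (the values are illustrative):

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	fmt.Println(len(ch), cap(ch)) // 2 3: two queued items, capacity three
}
```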
When sending (`ch <- v`):
- If there is a waiting receiver (parked on `recvq`) → direct transfer: the runtime copies `v` to the receiver and wakes it (no buffer enqueue).
- Else, if the buffer has free slots (`len < cap`) → enqueue the value into the circular buffer and return immediately.
- Else (buffer full and no receiver) → park the sender (sudog) on the channel's `sendq` and block.
When receiving (`<-ch`):
- If the buffer has queued items (`len > 0`) → dequeue an item and return it.
- Else, if there is a waiting sender (in `sendq`) → direct transfer: take the sender's value and wake the sender.
- Else (buffer empty and no sender) → park the receiver on `recvq` and block.
Important: the runtime prefers delivering directly to a waiting peer if one exists; this avoids unnecessary buffer operations and wake-ups.
Channels are implemented by the runtime in a structure conceptually like:
// simplified conceptual fields
type hchan struct {
qcount uint // number of elements currently in buffer
dataqsiz uint // capacity (buffer size)
buf unsafe.Pointer // pointer to circular buffer memory
sendx uint // next index to send (enqueue)
recvx uint // next index to receive (dequeue)
sendq waitq // queue of waiting senders (sudog)
recvq waitq // queue of waiting receivers (sudog)
lock mutex // protects the channel's state
}
- The buffer is a circular array indexed by `sendx`/`recvx` modulo `dataqsiz`.
- `sendq` and `recvq` are queues of parked goroutines (sudog objects) waiting for a send/receive.
- Operations lock the channel, check the queues and buffer, then either enqueue/dequeue or park/unpark goroutines.
- Parked goroutines are moved back to the scheduler run queue when woken.
package main
import (
"fmt"
"time"
)
func main() {
ch := make(chan int, 2) // capacity 2
go func() {
ch <- 1 // does NOT block
fmt.Println("sent 1")
ch <- 2 // does NOT block
fmt.Println("sent 2")
ch <- 3 // blocks until receiver consumes one
fmt.Println("sent 3")
}()
time.Sleep(100 * time.Millisecond) // let sender run
fmt.Println("recv:", <-ch) // receives 1; this will unblock sender for 3
fmt.Println("recv:", <-ch) // receives 2
fmt.Println("recv:", <-ch) // receives 3
}
Expected printed sequence (order may vary slightly with scheduling, but logically):
sent 1
sent 2
recv: 1
sent 3 // unblocks here after first recv frees slot
recv: 2
recv: 3
- `close(ch)`:
  - Makes the channel no longer accept sends. Any send to a closed channel panics.
  - Receivers can still drain buffered items.
  - Once the buffer is empty, subsequent receives return the zero value and `ok == false`.
- Example:
ch := make(chan int, 2)
ch <- 10
ch <- 20
close(ch)
v, ok := <-ch // v==10, ok==true
v, ok = <-ch // v==20, ok==true
v, ok = <-ch // v==0, ok==false (channel drained and closed)
- Closing is normally done by the sender/owner side. Closing from multiple places or closing when other senders still send is dangerous.
We often use a `select` with `default` to attempt a non-blocking send/recv:
select {
case ch <- v:
// succeeded
default:
// buffer full → do alternate action
}
This is how we implement try-send / try-receive semantics.
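For symmetry, here is the receive side as a runnable sketch (the channel and messages are illustrative):

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)

	select {
	case v := <-ch:
		fmt.Println("got", v)
	default:
		fmt.Println("no value ready") // buffer empty and no waiting sender
	}
}
```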
- Bounded buffer / producer-consumer
  - The buffer provides smoothing for bursts.
- Worker pool (task queue)
  - `tasks := make(chan Task, queueSize)` → spawn worker goroutines that `for t := range tasks { ... }`.
- Semaphore / concurrency limiter (see the full sketch below)

  sem := make(chan struct{}, N) // allow N concurrent active tasks
  sem <- struct{}{}             // acquire (blocks when N reached)
  <-sem                         // release

- Pipelines
  - Stage outputs into buffered channels to decouple stages.
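Expanding the semaphore bullet above into a minimal runnable sketch (the limit of 3, the task count, and the sleep are illustrative assumptions):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	sem := make(chan struct{}, 3) // at most 3 tasks active at once
	var wg sync.WaitGroup

	for i := 1; i <= 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when 3 are taken)
			defer func() { <-sem }() // release the slot on exit
			fmt.Println("task", id, "running")
			time.Sleep(200 * time.Millisecond) // simulate work
		}(i)
	}
	wg.Wait()
}
```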
- A successful send on a channel synchronizes with the corresponding receive that receives the value. That means the receive sees all memory writes that happened before the send (happens-before guarantee).
- Using channels for signalling is safe: if we send after setting fields, the receiver will see those fields set.
- Buffered channels improve throughput where producers and consumers are not tightly synchronized.
- Too-large buffers:
  - Consume more memory.
  - Increase latency for consumers (items may sit in the buffer).
  - Mask backpressure (producers can outrun consumers).
- Too-small buffers:
  - Lead to frequent blocking and context switching.
- Tuning:
  - Choose `cap` to match burst size / acceptable queueing.
  - For heavy throughput, benchmark channels vs other concurrency primitives (e.g., pools, atomics); channels are convenient and fast, but not free (see the benchmark sketch below).
- Deadlock: If producers fill the buffer and nobody consumes, they block. If blocked sends prevent the program from progressing, deadlock occurs.
- Send on closed channel: panic → avoid by ensuring only the owner closes the channel.
- Nil channel: `var ch chan T` without `make` is `nil` → send/recv block forever.
β send/recv block forever. - Large struct values: sending large values copies them into the buffer; prefer pointers or smaller structs if copying is expensive.
- Mixing close and multiple senders: close only from a single owner to avoid races/panics.
- The runtime enqueues waiting senders/receivers (sudogs) and generally wakes them in FIFO order, so waiting goroutines are served in roughly the order they arrived. For `select` across multiple channels, selection is randomized among the ready cases to avoid starvation.
- `make(chan T, n)` → buffered channel with capacity `n`.
- `len(ch)` → items queued now.
- `cap(ch)` → total capacity.
- `close(ch)` → no more sends; readers drain the buffer, then get `ok == false`.
- `select { case ch <- v: default: }` → non-blocking send attempt.
- When producers produce in bursts and consumers are slower but able to catch up.
- When you want some decoupling but still bounded memory/queueing.
- When you need a simple concurrency limiter (semaphore style).
Channel synchronization is one of the most important and elegant parts of Go's concurrency model.
- In Go, channels are not just for communication (passing values between goroutines).
- They are also a synchronization primitive: they coordinate execution order between goroutines.
Think of it like: 👉 Send blocks until the receiver is ready (unbuffered). 👉 Receive blocks until the sender provides data. 👉 This mutual blocking acts as a synchronization point.
Unbuffered channels enforce strict rendezvous synchronization:
- When goroutine A sends (`ch <- x`), it is blocked until goroutine B executes a receive (`<-ch`).
- Both goroutines meet at the channel, exchange data, and continue.
package main
import (
"fmt"
"time"
)
func worker(done chan bool) {
fmt.Println("Worker: started")
time.Sleep(2 * time.Second)
fmt.Println("Worker: finished")
// notify main goroutine
done <- true
}
func main() {
done := make(chan bool)
go worker(done)
// wait for worker to finish
<-done
fmt.Println("Main: all done")
}
👉 Here:
- `done <- true` synchronizes the worker with the main goroutine.
- Main will block on `<-done` until the worker signals.
- No explicit `mutex` or condition variable is needed; the channel ensures correct ordering.
Buffered channels allow decoupling between sender and receiver, but can still be used for synchronization.
Rules:
- Sending blocks only if buffer is full.
- Receiving blocks only if buffer is empty.
package main
import (
"fmt"
"time"
)
func worker(tasks chan int, done chan bool) {
for {
task, more := <-tasks
if !more {
fmt.Println("Worker: all tasks done")
done <- true
return
}
fmt.Println("Worker: processing task", task)
time.Sleep(500 * time.Millisecond)
}
}
func main() {
tasks := make(chan int, 3)
done := make(chan bool)
go worker(tasks, done)
for i := 1; i <= 5; i++ {
fmt.Println("Main: sending task", i)
tasks <- i
}
close(tasks) // signals no more tasks
<-done // wait for worker
fmt.Println("Main: worker finished")
}
👉 Here:
- The buffer allows temporary queuing of tasks.
- Synchronization happens when `tasks` is full (main blocks) or empty (worker blocks).
- Closing the channel signals the worker to stop.
Now let's peek under the hood.
Every channel maintains:
- A buffer (circular queue, if buffered).
- Two wait queues:
  - `sendq` → goroutines waiting to send.
  - `recvq` → goroutines waiting to receive.
- A send operation checks `recvq`:
  - If a goroutine is waiting to receive → direct handoff (the value is copied and the receiver resumed).
  - If not → the sender parks itself in `sendq` (blocked).
- A receive operation checks `sendq`:
  - If a goroutine is waiting to send → direct handoff.
  - If not → the receiver parks itself in `recvq`.
This ensures synchronous rendezvous.
- Send:
  - If the buffer is not full → enqueue the value, return immediately.
  - If the buffer is full → block in `sendq`.
- Receive:
  - If the buffer is not empty → dequeue a value, return immediately.
  - If the buffer is empty → block in `recvq`.
- When a goroutine blocks, the runtime:
  - Saves its state (stack, registers).
  - Moves it off the run queue.
  - Adds it to the channel's wait queue.
- When the opposite operation happens, the runtime:
  - Wakes a goroutine from the wait queue.
  - Puts it back on the scheduler run queue.
- This is how Go synchronizes goroutines without explicit locks.
- Signaling (done channels, as in worker example).
- Worker pools (tasks + done channels).
- Bounded queues (buffered channels to control throughput).
- Fan-in / Fan-out (multiple producers and consumers).
- Rate limiting (token buckets using buffered channels).
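As one concrete illustration of the last pattern above, here is a hedged sketch of rate limiting with a token channel fed by `time.Tick` (the request count and the 200ms interval are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	requests := make(chan int, 5)
	for i := 1; i <= 5; i++ {
		requests <- i
	}
	close(requests)

	limiter := time.Tick(200 * time.Millisecond) // one token every 200ms

	for req := range requests {
		<-limiter // wait for a token before handling each request
		fmt.Println("request", req, "handled at", time.Now().Format("15:04:05.000"))
	}
}
```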
✅ Summary
- Channels synchronize goroutines naturally: send blocks until receive, receive blocks until send (with buffering rules).
- Runtime uses wait queues (sendq, recvq) and goroutine parking/unparking for this.
- This synchronization mechanism replaces the need for explicit mutexes in many cases.