callmeskyy111/golang-advanced

🌱 What is a Goroutine?

A goroutine is a lightweight, independently executing function that runs concurrently with other goroutines in the same address space. Think of it as:

  • In JavaScript, we have an event loop that handles async tasks (e.g., promises, async/await).
  • In Go, instead of a single-threaded event loop, we have goroutines managed by the Go runtime.

They allow us to perform tasks like handling requests, I/O operations, or computations in parallel without manually managing threads.


βš–οΈ Goroutine vs OS Thread

Feature Goroutine OS Thread
Size at start ~2 KB stack ~1 MB stack
Managed by Go runtime scheduler (M:N model) OS Kernel
Number you can create Millions Limited (few thousands)
Switching Very fast, done in user space Slower, done by OS
Creation cost Extremely cheap Expensive

πŸ‘‰ This is why we say goroutines are lightweight threads.


βš™οΈ How to Start a Goroutine

package main

import (
	"fmt"
	"time"
)

func printMessage(msg string) {
	for i := 0; i < 5; i++ {
		fmt.Println(msg, i)
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	go printMessage("goroutine") // runs concurrently
	printMessage("main")         // runs in main goroutine
}
  • The go keyword starts a new goroutine.

  • Here:

    • main() itself runs in the main goroutine.
    • go printMessage("goroutine") starts another goroutine.
  • If main() exits before the new goroutine finishes, the program ends immediately.

⚠️ Unlike Node.js, where pending timers and I/O callbacks keep the process alive, Go doesn't wait for goroutines unless you explicitly synchronize with them.
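A minimal sketch of that pitfall (our own example, not from the repo): this program usually prints nothing, because main returns before the scheduler ever runs the goroutine.

package main

import "fmt"

func main() {
	go fmt.Println("from goroutine") // scheduled concurrently...
	// ...but main returns immediately, ending the program
	// before the goroutine gets a chance to run
}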


🧵 Go's Concurrency Model (M:N Scheduler)

The Go runtime uses an M:N scheduler, meaning:

  • M goroutines are multiplexed onto N OS threads.
  • This is different from 1:1 (like Java threads) or N:1 (like cooperative multitasking).

The scheduler ensures:

  • Goroutines are distributed across multiple threads.
  • When one blocks (e.g., waiting on I/O), another is scheduled.

Think of goroutines as tasks in a work-stealing scheduler.


πŸ› οΈ Synchronization with Goroutines

Since goroutines run concurrently, we need synchronization tools:

1. WaitGroup – Wait for Goroutines to Finish

package main

import (
	"fmt"
	"sync"
	"time"
)

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done() // signals completion
	fmt.Printf("Worker %d starting\n", id)
	time.Sleep(time.Second) // simulate work
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	var wg sync.WaitGroup

	for i := 1; i <= 3; i++ {
		wg.Add(1)            // add to wait counter
		go worker(i, &wg)
	}

	wg.Wait() // wait for all to finish
}

✅ Ensures the program won't exit before all goroutines finish.


2. Channels – Communication Between Goroutines

Channels are Go's big idea for concurrency. Instead of sharing memory and locking it, goroutines communicate by passing messages.

package main

import "fmt"

func worker(ch chan string) {
	ch <- "task finished" // send data into channel
}

func main() {
	ch := make(chan string)

	go worker(ch)

	msg := <-ch // receive data
	fmt.Println("Message:", msg)
}

👉 Think of it as similar to awaiting a JavaScript promise, except the communication is synchronous unless the channel is buffered.


3. Buffered Channels – Queue of Messages

ch := make(chan int, 2) // capacity = 2
ch <- 10
ch <- 20
fmt.Println(<-ch)
fmt.Println(<-ch)
  • Unbuffered channel: send blocks until a receiver is ready.
  • Buffered channel: send doesn't block until the buffer is full.

4. select – Multiplexing Channels

select {
case msg := <-ch1:
	fmt.Println("Received", msg)
case msg := <-ch2:
	fmt.Println("Received", msg)
default:
	fmt.Println("No message")
}

Like Promise.race() in JS.


🔥 Key Gotchas with Goroutines

  1. Main goroutine exit kills all child goroutines. → Always use WaitGroups or channels to synchronize.

  2. Race conditions happen if goroutines write/read shared data without synchronization. → Use sync.Mutex, sync.RWMutex, or better: channels (see the sketch after this list).

  3. Too many goroutines can cause memory pressure, but they are still far cheaper than threads.

  4. Don't block forever – unreceived channel sends cause deadlocks.
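As a quick illustration of gotcha 2, here is a minimal sketch (names and counts are our own) of guarding a shared counter with sync.Mutex; without the lock, go run -race typically reports a data race:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // without this, concurrent increments race
			counter++
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter) // always 100 with the mutex
}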


📊 Real-World Use Cases

  • Web servers: Each request can run in its own goroutine.
  • Scraping / Crawling: Launch a goroutine for each URL fetch.
  • Background jobs: Run tasks concurrently (DB writes, logging, metrics).
  • Pipelines: Process data in multiple stages with goroutines + channels.

🧠 Mental Model (JS vs Go)

  • JavaScript → concurrency = single-threaded event loop + async callbacks.
  • Go → concurrency = many goroutines scheduled onto multiple OS threads.

So:

  • In JS, concurrency = illusion via async.
  • In Go, concurrency = real, parallel execution when multiple CPU cores exist.

✅ To summarize:

  • Goroutines = cheap concurrent tasks managed by Go runtime.
  • Not OS threads, but multiplexed onto threads.
  • Communicate via channels instead of shared memory.
  • Powerful with WaitGroups, select, and synchronization tools.

Concurrency vs parallelism is a core concept in computer science and in Go (since Go was built with concurrency in mind). Let's break it down step by step.


1. The Core Idea

  • Concurrency = Dealing with many tasks at once (managing multiple things).
  • Parallelism = Doing many tasks at the same time (executing multiple things simultaneously).

Both sound similar, but they're not the same.


2. Analogy

Imagine we're in a restaurant kitchen:

  • Concurrency (chef multitasking): One chef handles multiple dishes by switching between them. He cuts vegetables for Dish A, stirs the sauce for Dish B, and checks the oven for Dish C. He's not doing them at the exact same time, but he's managing multiple tasks in progress.

  • Parallelism (many chefs working together): Three chefs cook three different dishes at the same time. Tasks truly happen simultaneously.

👉 Concurrency is about structure (how tasks are managed).
👉 Parallelism is about execution (how tasks are run in hardware).


3. Technical Definition

  • Concurrency: Multiple tasks make progress in overlapping time periods. It doesn't require multiple processors/cores. Even with a single CPU core, the system can interleave execution of tasks via context switching.

  • Parallelism: Multiple tasks run at the exact same instant, usually on different CPU cores or processors.


4. Example with Go

Go is famous for concurrency with goroutines.

package main

import (
	"fmt"
	"time"
)

func task(name string) {
	for i := 1; i <= 3; i++ {
		fmt.Println(name, ":", i)
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	go task("Task A") // run concurrently
	go task("Task B")

	time.Sleep(3 * time.Second)
	fmt.Println("Done")
}

What happens:

  • Concurrency: Both Task A and Task B appear to run at the same time because Go schedules goroutines across available cores. If you run this on a single-core CPU, Go interleaves execution → that's concurrency.
  • Parallelism: If you run this on a multi-core CPU, Task A might run on Core 1 and Task B on Core 2 simultaneously → that's parallelism.

5. Key Differences Table

| Aspect | Concurrency | Parallelism |
| --- | --- | --- |
| Definition | Managing multiple tasks at once | Executing multiple tasks at once |
| Focus | Task switching and scheduling | Simultaneous execution |
| CPU requirement | Can happen on a single-core CPU | Requires a multi-core CPU |
| Analogy | One chef multitasking across dishes | Many chefs cooking different dishes |
| In Go | Achieved via goroutines & channels | Achieved when goroutines run on multiple cores |

6. Visual Representation

  • Concurrency (single-core):
Time: |----A----|----B----|----A----|----B----|
       ^ Task A and Task B interleaved
  • Parallelism (multi-core):
Core1: |----A----|----A----|----A----|
Core2: |----B----|----B----|----B----|
       ^ Tasks running truly at the same time

7. In Practice

  • Concurrency is a design approach: "How do we structure a program so that it can handle many things at once?"
  • Parallelism is an execution strategy: "How do we use hardware to literally do many things at once?"

Go is concurrent by design (goroutines + channels) and parallel by runtime (GOMAXPROCS decides how many cores are used).
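A quick way to see this on our own machine (a minimal sketch; the numbers printed depend on your hardware):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("CPU cores:", runtime.NumCPU())
	// GOMAXPROCS(0) queries the current setting without changing it;
	// by default it equals the number of available cores.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}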


✅ Final takeaway:

  • Concurrency = composition of independently executing tasks.
  • Parallelism = simultaneous execution of tasks.

They are related, but not the same. A program can be concurrent but not parallel, parallel but not concurrent, or both.


Let's go step by step and dive deep into channels in Go, because they're one of the most powerful concurrency primitives in the language.


🔹 What are Channels in Go?

In Go, a channel is a typed conduit (pipe) through which goroutines can communicate with each other.

  • They allow synchronization (ensuring goroutines coordinate properly).
  • They allow data exchange between goroutines safely, without explicit locking (like mutexes).

👉 Think of a channel as a "queue" or "pipeline" where one goroutine can send data and another goroutine can receive it.


🔹 Syntax of Channels

Declaring a channel

var ch chan int // declare a channel of type int

Creating a channel

ch := make(chan int) // make allocates memory for a channel

Here:

  • ch is a channel of integers.
  • make(chan int) initializes it.

🔹 Sending and Receiving on Channels

We use the <- operator.

ch <- 10       // send value 10 into channel
value := <-ch  // receive value from channel
  • Send (ch <- value): Puts data into the channel.
  • Receive (value := <-ch): Gets data from the channel.
  • Both operations block until the other side is ready (unless buffered).

🔹 Example: Simple Goroutine Communication

package main

import (
	"fmt"
	"time"
)

func worker(ch chan string) {
	time.Sleep(2 * time.Second)
	ch <- "done" // send message
}

func main() {
	ch := make(chan string)
	go worker(ch)

	fmt.Println("Waiting for worker...")
	msg := <-ch // blocks until worker sends data
	fmt.Println("Worker says:", msg)
}

✅ Output:

Waiting for worker...
Worker says: done

Here:

  • main waits on <-ch until the goroutine sends "done".
  • This synchronizes main and the worker.

🔹 Buffered vs Unbuffered Channels

1. Unbuffered Channels (default)

  • No capacity → send blocks until a receiver is ready, and receive blocks until a sender is ready.
  • Ensures synchronization.
ch := make(chan int) // unbuffered

2. Buffered Channels

  • Created with a capacity.
  • Allows sending multiple values before blocking, up to the capacity.
ch := make(chan int, 3) // capacity = 3
ch <- 1
ch <- 2
ch <- 3
// sending a 4th value will block until receiver consumes one

👉 Buffered channels provide asynchronous communication.


🔹 Closing a Channel

We can close a channel when no more values will be sent:

close(ch)

After closing:

  • Further sends → panic.
  • Receives → still possible, but they yield zero values once the channel is empty.

Example:

package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 10
	ch <- 20
	close(ch)

	for val := range ch {
		fmt.Println(val)
	}
}

✅ Output:

10
20

🔹 Directional Channels

We can restrict channels to send-only or receive-only.

func sendData(ch chan<- int) { // send-only
	ch <- 100
}

func receiveData(ch <-chan int) { // receive-only
	fmt.Println(<-ch)
}

This enforces clear contracts between functions.


🔹 Select Statement (Channel Multiplexing)

The select statement is like a switch for channels. It waits on multiple channel operations and executes whichever is ready first.

select {
case msg1 := <-ch1:
	fmt.Println("Received", msg1)
case msg2 := <-ch2:
	fmt.Println("Received", msg2)
default:
	fmt.Println("No messages")
}

👉 Useful for:

  • Handling multiple channels.
  • Adding timeouts with time.After (see the snippet below).
  • Preventing blocking with default.
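For instance, a timeout case with time.After could look like this (a sketch; ch and its sender are assumed to exist elsewhere):

select {
case msg := <-ch:
	fmt.Println("Received", msg)
case <-time.After(2 * time.Second):
	fmt.Println("Timed out waiting for ch")
}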

🔹 Real Example: Worker Pool with Channels

Channels make it easy to build worker pools.

package main

import (
	"fmt"
	"time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
	for job := range jobs {
		fmt.Printf("Worker %d processing job %d\n", id, job)
		time.Sleep(time.Second)
		results <- job * 2
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	// Start 3 workers
	for i := 1; i <= 3; i++ {
		go worker(i, jobs, results)
	}

	// Send jobs
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	// Collect results
	for r := 1; r <= 5; r++ {
		fmt.Println("Result:", <-results)
	}
}

✅ Output (order may vary):

Worker 1 processing job 1
Worker 2 processing job 2
Worker 3 processing job 3
Worker 1 processing job 4
Worker 2 processing job 5
Result: 2
Result: 4
Result: 6
Result: 8
Result: 10

This shows how channels + goroutines → powerful concurrent systems.


🔹 Key Takeaways

  • Channels are typed pipes for goroutine communication.
  • Unbuffered channels synchronize sender and receiver.
  • Buffered channels allow limited async communication.
  • Use close() to signal no more values.
  • Directional channels (chan<-, <-chan) enforce contracts.
  • select helps multiplex multiple channels.
  • Channels + goroutines = safe, concurrent, and elegant design.

Now we're going into the guts of channels in Go, the kind of stuff that matters if we want a CS-level understanding of why channels are so powerful and how they avoid race conditions. 🚀


🔬 Channels in Go: Under the Hood

Channels in Go aren't magic: they're implemented in the Go runtime (part of the scheduler and memory model). Let's break down their internal structure, blocking mechanism, and scheduling behavior.


1. Channel Data Structure (hchan)

Internally, every channel is represented by a structure called hchan (defined in Go's runtime source, runtime/chan.go):

type hchan struct {
    qcount   uint           // number of elements currently in queue
    dataqsiz uint           // size of the circular buffer
    buf      unsafe.Pointer // circular buffer (for buffered channels)
    elemsize uint16         // size of each element
    closed   uint32         // is channel closed?

    sendx    uint   // send index (next slot to write to)
    recvx    uint   // receive index (next slot to read from)

    recvq    waitq  // list of goroutines waiting to receive
    sendq    waitq  // list of goroutines waiting to send

    lock mutex       // protects all fields
}

Key things to notice:

  • Circular Buffer → if channel is buffered, data lives here.
  • Send/Recv Index → used for round-robin access in buffer.
  • Wait Queues → goroutines that are blocked are put here.
  • Lock → ensures safe concurrent access (Go runtime manages locking, so we don't).

2. Unbuffered Channels (Zero-Capacity)

Unbuffered channels are the simplest case:

  • Send (ch <- x):

    • If there's already a goroutine waiting to receive, value is copied directly into its stack.
    • If not, sender blocks → it's enqueued into sendq until a receiver arrives.
  • Receive (<-ch):

    • If there's a waiting sender, value is copied directly.
    • If not, receiver blocks → it's enqueued into recvq until a sender arrives.

👉 This is why unbuffered channels synchronize goroutines. No buffer exists; transfer happens only when both sides are ready.


3. Buffered Channels

Buffered channels add a queue (circular buffer):

  • Send:

    • If buffer not full → put value in buffer, increment qcount, update sendx.
    • If buffer full → block, enqueue sender in sendq.
  • Receive:

    • If buffer not empty → take value from buffer, decrement qcount, update recvx.
    • If buffer empty → block, enqueue receiver in recvq.

👉 Buffered channels provide asynchronous communication, but when full/empty they still enforce synchronization.


4. Blocking and Goroutine Parking

When a goroutine cannot proceed (because the channel is full or empty), Go's runtime parks it:

  • Parking = goroutine is put to sleep, removed from runnable state.
  • Unparking = when the condition is satisfied (e.g., sender arrives), runtime wakes up the goroutine and puts it back on the scheduler queue.

This avoids busy-waiting (goroutines don't spin-loop, they sleep efficiently).


5. Closing a Channel

When we close(ch):

  • closed flag in hchan is set.
  • All goroutines in recvq are woken up and return the zero value.
  • Any new send → panic.
  • Receives on an empty closed channel → return the zero value immediately.

6. Select Statement Internals

select in Go is implemented like a non-deterministic choice operator:

  1. The runtime looks at all channel cases.
  2. If multiple channels are ready → pick one pseudo-randomly (to avoid starvation).
  3. If none are ready → block the goroutine, enqueue it on all those channels' sendq/recvq.
  4. When one channel becomes available, runtime wakes up the goroutine, executes that case, and unregisters it from others.

👉 This is why select is fair and efficient.


7. Memory Model Guarantees

Channels follow Go's happens-before relationship:

  • A send on a channel happens before the corresponding receive completes.
  • This ensures visibility of writes: when one goroutine sends a value, all memory writes before the send are guaranteed visible to the receiver after the receive.

This is similar to release-acquire semantics in CPU memory models.
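A minimal sketch of that guarantee (variable names are our own): the receiver is guaranteed to observe the write to data, because the write happens before the send, and the send happens before the receive completes.

var data int
done := make(chan struct{})

go func() {
	data = 42          // this write...
	done <- struct{}{} // ...happens before the send
}()

<-done            // ...and the send happens before this receive completes,
fmt.Println(data) // so this always prints 42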


8. Performance Notes

  • Channels avoid explicit locks for user code; the runtime lock inside hchan is optimized with CAS (Compare-And-Swap) instructions when possible.
  • For heavy concurrency, channels can become a bottleneck (due to contention on hchan.lock). In such cases, Go devs sometimes use lock-free data structures or sharded channels.
  • But for safe communication, channels are much cleaner than manual locking.

9. Analogy

Imagine a mailbox system:

  • Unbuffered channel → one person waits at the mailbox until another arrives.
  • Buffered channel → mailbox has slots; sender can drop letters until it's full.
  • select → person waiting at multiple mailboxes, ready to grab whichever letter arrives first.
  • Closing → post office shuts down; no new letters allowed, but old ones can still be collected.

🔑 Key Takeaways (CS-level)

  1. Channels are backed by a lock-protected struct (hchan) with a buffer and wait queues.
  2. Unbuffered channels → synchronous handoff (sender ↔ receiver meet at the same time).
  3. Buffered channels → async up to capacity, but still block when full/empty.
  4. Blocked goroutines are parked efficiently, not spin-looping.
  5. Select allows non-deterministic, fair channel multiplexing.
  6. Closing signals termination and wakes receivers.
  7. Channels provide happens-before memory guarantees, making them safer than manual synchronization.

Let's go deep into unbuffered vs buffered channels in Go, both conceptually and under the hood (CS-level).


🔹 Channels Recap

A channel in Go is essentially a typed conduit that goroutines use to communicate. Think of it like a pipe with synchronization built-in. Under the hood, Go implements channels as a struct (hchan) in the runtime, which manages:

  • A queue (circular buffer) of values
  • A list of goroutines waiting to send
  • A list of goroutines waiting to receive
  • Locks for synchronization

🔹 Unbuffered Channels

An unbuffered channel is created like this:

ch := make(chan int) // no buffer size specified

✅ Key Behavior:

  • Synchronous communication.

    • A send (ch <- v) blocks until another goroutine executes a receive (<-ch).
    • A receive blocks until another goroutine sends.
  • This creates a rendezvous point between goroutines: both must be ready simultaneously.

πŸ” Under the hood:

  • Since the buffer capacity = 0, the channel cannot hold values.

  • When a goroutine executes ch <- v:

    1. The runtime checks if there's a waiting receiver in the channel's recvq.
    2. If yes → it directly transfers the value from sender to receiver (no buffer copy).
    3. If not → the sender goroutine is put to sleep and added to the sendq.
  • Similarly, a receiver blocks until there's a sender.

So data is passed directly, goroutine-to-goroutine, like a handoff.

Example:

func main() {
    ch := make(chan int)

    go func() {
        ch <- 42 // blocks until receiver is ready
    }()

    val := <-ch // blocks until sender is ready
    fmt.Println(val) // 42
}

This ensures synchronization: the print only happens after the send completes.


🔹 Buffered Channels

A buffered channel is created like this:

ch := make(chan int, 3) // capacity = 3

✅ Key Behavior:

  • Asynchronous communication up to capacity.

    • A send (ch <- v) only blocks if the buffer is full.
    • A receive (<-ch) only blocks if the buffer is empty.
  • Acts like a queue between goroutines.

πŸ” Under the hood:

  • Channel has a circular buffer (qcount, dataqsiz, buf).

  • On ch <- v:

    1. If a receiver is waiting → value bypasses buffer, sent directly.
    2. Else, if buffer is not full → value is enqueued in buffer.
    3. Else (buffer full) → sender goroutine is parked in sendq.
  • On <-ch:

    1. If buffer has elements → dequeue and return.
    2. Else, if a sender is waiting → take value directly.
    3. Else → receiver goroutine is parked in recvq.

So buffered channels allow decoupling: senders and receivers don't have to line up perfectly in time (up to buffer capacity).

Example:

func main() {
    ch := make(chan int, 2)

    ch <- 1 // does not block
    ch <- 2 // does not block

    go func() {
        ch <- 3 // blocks until someone reads
    }()

    fmt.Println(<-ch) // frees up buffer, goroutine sends 3
    fmt.Println(<-ch)
    fmt.Println(<-ch)
}

🔹 Comparison

| Feature | Unbuffered Channel | Buffered Channel |
| --- | --- | --- |
| Capacity | 0 | > 0 |
| Communication style | Synchronous (rendezvous) | Asynchronous (queue, but limited) |
| Blocking (send) | Until a receiver is ready | Until buffer has space |
| Blocking (receive) | Until a sender is ready | Until buffer has data |
| Use case | Enforce synchronization, handoff semantics | Decouple producer/consumer timing |

🔹 Analogy

  • Unbuffered channel: Like passing a note hand-to-hand; both people must meet at the same time.
  • Buffered channel: Like a mailbox; the sender can drop the note and leave, and the receiver can pick it up later (until the mailbox is full).

🔹 CS-Level Insight

  • The Go runtime (runtime/chan.go) maintains:

    • hchan struct:

      type hchan struct {
          qcount   uint           // number of elements in queue
          dataqsiz uint           // size of circular buffer
          buf      unsafe.Pointer // actual buffer
          sendx    uint           // next send index
          recvx    uint           // next receive index
          recvq    waitq          // list of waiting receivers
          sendq    waitq          // list of waiting senders
          lock     mutex
      }
  • Unbuffered: dataqsiz == 0, always uses sendq/recvq.

  • Buffered: Uses circular buffer for queuing, only parks goroutines when full/empty.


👉 So in short:

  • Unbuffered channels couple goroutines tightly in time (synchronous handoff).
  • Buffered channels decouple them (asynchronous, but limited by capacity).

This is where Go really shines. Let's break down how the Go runtime orchestrates goroutines + channels together, in detail, both conceptually and at the runtime level (CS-level implementation).


🔹 The Go Runtime Model

Go doesn't rely on the OS to schedule lightweight tasks. Instead, it implements its own scheduler inside the runtime. This allows goroutines and channels to work smoothly together.


1. Goroutines in the Runtime

  • A goroutine is a lightweight thread of execution, managed by the Go runtime (not OS).

  • Under the hood:

    • Each goroutine is represented by a g struct.
    • Each has its own stack (starts tiny, grows/shrinks dynamically).
    • Thousands (even millions) of goroutines can run inside one OS thread (see the sketch below).
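As a quick demonstration of how cheap they are (a sketch of ours; the count is arbitrary), spawning 100,000 goroutines is unremarkable:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100_000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done() // each goroutine starts with only a ~2 KB stack
		}()
	}
	wg.Wait()
	fmt.Println("spawned and finished 100000 goroutines")
}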

Scheduler: M:N model

  • M = OS threads

  • N = Goroutines

  • The runtime maps N goroutines onto M OS threads.

  • Key runtime structs:

    • M (Machine) → OS thread
    • P (Processor) → Logical processor, responsible for scheduling goroutines on an M
    • G (Goroutine) → A goroutine itself
  • Scheduling is cooperative + preemptive:

    • Goroutines yield at certain safe points (e.g., blocking operations, function calls).
    • Since Go 1.14, the runtime can also preempt goroutines asynchronously, so tight loops no longer block the scheduler.

So: goroutines are not OS-level threads; they're scheduled by Go's own runtime.


2. Channels in the Runtime

Channels are the synchronization primitive between goroutines.

Runtime implementation: runtime/chan.go.

Struct:

type hchan struct {
    qcount   uint           // # of elements in queue
    dataqsiz uint           // buffer size
    buf      unsafe.Pointer // circular buffer
    sendx    uint           // next send index
    recvx    uint           // next receive index
    recvq    waitq          // waiting receivers
    sendq    waitq          // waiting senders
    lock     mutex
}

Core idea:

  • Channels are queues with wait lists:

    • If buffered → goroutines enqueue/dequeue values.
    • If unbuffered → goroutines handshake directly.
  • Senders & receivers that cannot proceed are parked (suspended) into the sendq or recvq.


3. How Goroutines & Channels Interact

Case A: Unbuffered channel

ch := make(chan int)
go func() { ch <- 42 }()
val := <-ch
  1. Sender (ch <- 42):

    • Lock channel.
    • Check recvq (waiting receivers).
    • If receiver waiting → value copied directly → receiver wakes up → sender continues.
    • If no receiver → sender is parked (blocked) and added to sendq.
  2. Receiver (<-ch):

    • Lock channel.
    • Check sendq (waiting senders).
    • If sender waiting → value copied → sender wakes up → receiver continues.
    • If no sender → receiver is parked and added to recvq.

This ensures synchronous handoff.


Case B: Buffered channel

ch := make(chan int, 2)
  1. Sender (ch <- v):

    • Lock channel.
    • If recvq has waiting receivers → skip buffer, deliver directly.
    • Else if buffer has space → enqueue value → done.
    • Else (buffer full) → park sender in sendq.
  2. Receiver (<-ch):

    • Lock channel.
    • If buffer has values → dequeue → done.
    • Else if sendq has waiting senders → take value directly.
    • Else → park receiver in recvq.

So buffered channels act as a mailbox (async up to capacity).


4. Parking & Resuming Goroutines

When goroutines can’t make progress (blocked send/recv), the runtime:

  • Parks them: puts them in channel queues (sendq or recvq) and removes them from the scheduler's run queue.
  • Stores a sudog (suspended goroutine) object in the queue with metadata (which goroutine, element pointer, etc.).

When the condition is satisfied (buffer space, sender arrives, etc.):

  • The runtime wakes up a waiting goroutine by moving it back into the scheduler's run queue.
  • The scheduler later assigns it to a P (processor) → M (thread) → resumes execution.

This is why Go channels feel seamless: the runtime transparently parks and wakes goroutines.


5. Select & Channels

select is also handled in runtime:

  • The runtime checks multiple channels in random order to avoid starvation.
  • If one is ready → proceeds immediately.
  • If none are ready → goroutine is parked, attached to all involved channels' queues, and woken up when one becomes available.

6. Performance & Efficiency

  • Channel operations are protected by mutex + atomic ops → very efficient.
  • Goroutines are cheap (KB stack, small structs).
  • Parking/waking is implemented in pure runtime → no heavy syscalls unless all goroutines block (then Go hands the thread back to the OS).

🔹 Visual Summary

Unbuffered

G1: ch <- 42   <----->   G2: val := <-ch
(synchronous handoff, both must rendezvous)

Buffered

G1: ch <- 42 ---> [ buffer ] ---> G2: val := <-ch
(asynchronous until buffer full/empty)

Runtime scheduling

[M:OS Thread] <----> [P:Logical Processor] <----> [G:Goroutine Queue]

🔹 Big Picture

  • Goroutines = cheap lightweight threads managed by Go runtime.
  • Scheduler = M:N model with P (processor) abstraction.
  • Channels = safe queues with wait lists.
  • Interaction = senders/receivers park & wake, enabling CSP-style concurrency.
  • Runtime magic = efficient, cooperative scheduling + lightweight context switching.

👉 So: goroutines are like "actors," channels are "mailboxes," and the Go runtime is the "stage manager" that schedules actors and delivers their messages efficiently.


Let's build a step-by-step execution timeline for how the Go runtime handles goroutines + channels.

Two cases: unbuffered and buffered channels.


🔹 Case 1: Unbuffered Channel

Code:

ch := make(chan int)

go func() {
    ch <- 42
    fmt.Println("Sent 42")
}()

val := <-ch
fmt.Println("Received", val)

Execution Timeline (runtime flow)

  1. Main goroutine (G_main) creates channel ch (capacity = 0).

    • Runtime allocates an hchan struct with empty sendq and recvq.
  2. Spawn goroutine (G1) → scheduled by runtime onto an M (OS thread) via some P.

  3. G1 executes ch <- 42:

    • Lock channel.
    • Since recvq is empty, no receiver is waiting.
    • Create a sudog for G1 (stores goroutine pointer + value).
    • Add sudog to sendq.
    • G1 is parked (blocked) → removed from run queue.
  4. Main goroutine executes <-ch:

    • Lock channel.
    • Sees sendq has a waiting sender (G1).
    • Runtime copies 42 from G1's stack to G_main's stack.
    • Removes G1 from sendq.
    • Marks G1 as runnable → puts it back in the scheduler's run queue.
    • G_main continues with value 42.
  5. Scheduler resumes G1 → prints "Sent 42". Main goroutine prints "Received 42".


🔸 Key point: In unbuffered channels, send/recv must rendezvous. One goroutine blocks until the other arrives.


🔹 Case 2: Buffered Channel

Code:

ch := make(chan int, 2)

go func() {
    ch <- 1
    ch <- 2
    ch <- 3
    fmt.Println("Sent all")
}()

time.Sleep(time.Millisecond) // give sender time
fmt.Println(<-ch)
fmt.Println(<-ch)
fmt.Println(<-ch)

Execution Timeline (runtime flow)

  1. Main goroutine (G_main) creates channel ch (capacity = 2).

    • Runtime allocates buffer (circular queue), size = 2.
  2. Spawn goroutine (G1).

  3. G1 executes ch <- 1:

    • Lock channel.
    • Buffer not full (0/2).
    • Enqueue 1 at buf[0].
    • Increment qcount = 1.
    • Return immediately (non-blocking).
  4. G1 executes ch <- 2:

    • Lock channel.
    • Buffer not full (1/2).
    • Enqueue 2 at buf[1].
    • qcount = 2.
    • Return immediately.
  5. G1 executes ch <- 3:

    • Lock channel.
    • Buffer is full (2/2).
    • No receivers waiting (recvq empty).
    • Create sudog for G1.
    • Put it in sendq.
    • Park G1 (blocked).
  6. Main goroutine executes <-ch:

    • Lock channel.

    • Buffer has elements (qcount = 2).

    • Dequeue 1.

    • qcount = 1.

    • Since there's a blocked sender in sendq (G1 with value 3), the runtime:

      • Wakes G1.
      • Copies 3 into buffer (at freed slot).
      • G1 resumes later.
  7. Main goroutine executes <-ch again:

    • Dequeue 2.
    • qcount = 1 (still has 3).
  8. Main goroutine executes <-ch final time:

    • Dequeue 3.
    • qcount = 0 (buffer empty).
  9. Scheduler resumes G1 → "Sent all" printed.


🔸 Key point: Buffered channels decouple sender/receiver timing. G1 only blocked when the buffer was full.


🔹 Visual Snapshot

Unbuffered

G1: send(42) ---- waits ----> G_main: recv() 
             <--- wakes ----

Buffered (capacity = 2)

Buffer: [ 1 ][ 2 ]    <- send 1, send 2
Buffer: full          <- send 3 blocks
Recv 1 → slot frees   <- wakes sender, puts 3 in
Recv 2, Recv 3        <- empties buffer

👉 In both cases, the Go runtime orchestrates this:

  • sendq & recvq hold waiting goroutines (sudog objects).
  • Blocked goroutines are parked (suspended).
  • When conditions change (buffer frees, peer arrives), goroutines are woken and put back into the scheduler's run queue.

Buffered channels in Go: deep dive 🔎

A buffered channel is a channel with capacity > 0:

ch := make(chan int, 3) // capacity 3

It provides a small queue (a circular buffer) between senders and receivers. A send (ch <- v) only blocks when the buffer is full; a receive (<-ch) only blocks when the buffer is empty, unless there are waiting peers, in which case the runtime can do a direct handoff.

Use it when we want to decouple producer and consumer timing (allow short bursts) but still bound memory and concurrency.


Creation & introspection

  • Create: ch := make(chan T, capacity) where capacity >= 1.
  • Zero value is nil: var ch chan int → nil channel (send/recv block forever).
  • Inspect: len(ch) gives number of queued elements, cap(ch) gives capacity.
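A tiny sketch of the introspection helpers (values in the comment assume exactly these sends):

ch := make(chan int, 3)
ch <- 1
ch <- 2
fmt.Println(len(ch), cap(ch)) // prints: 2 3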

High-level send/receive rules (precise)

When sending (ch <- v):

  1. If there is a waiting receiver (parked on recvq) → direct transfer: runtime copies v to receiver and wakes it (no buffer enqueue).
  2. Else if the buffer has free slots (len < cap) → enqueue the value into the circular buffer and return immediately.
  3. Else (buffer full and no receiver) → park the sender (sudog) on the channel's sendq and block.

When receiving (<-ch):

  1. If buffer has queued items (len > 0) → dequeue an item and return it.
  2. Else if there is a waiting sender (in sendq) → direct transfer: take the sender's value and wake the sender.
  3. Else (buffer empty and no sender) → park the receiver on recvq and block.

Important: the runtime prefers delivering directly to a waiting peer if one exists; this avoids unnecessary buffer operations and wake-ups.


Under-the-hood (simplified runtime view)

Channels are implemented by the runtime in a structure conceptually like:

// simplified conceptual fields
type hchan struct {
    qcount   uint         // number of elements currently in buffer
    dataqsiz uint         // capacity (buffer size)
    buf      unsafe.Pointer // pointer to circular buffer memory
    sendx    uint         // next index to send (enqueue)
    recvx    uint         // next index to receive (dequeue)
    sendq    waitq        // queue of waiting senders (sudog)
    recvq    waitq        // queue of waiting receivers (sudog)
    lock     mutex        // protects the channel's state
}
  • The buffer is a circular array indexed by sendx/recvx modulo dataqsiz.
  • sendq and recvq are queues of parked goroutines (sudog objects) waiting for a send/receive.
  • Operations lock the channel, check queues and buffer, then either enqueue/dequeue or park/unpark goroutines.
  • Parked goroutines are moved back to the scheduler run queue when woken.

Example: behavior & output

package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 2) // capacity 2

	go func() {
		ch <- 1 // does NOT block
		fmt.Println("sent 1")
		ch <- 2 // does NOT block
		fmt.Println("sent 2")
		ch <- 3 // blocks until receiver consumes one
		fmt.Println("sent 3")
	}()

	time.Sleep(100 * time.Millisecond) // let sender run

	fmt.Println("recv:", <-ch) // receives 1; this will unblock sender for 3
	fmt.Println("recv:", <-ch) // receives 2
	fmt.Println("recv:", <-ch) // receives 3
}

Expected printed sequence (order may vary slightly with scheduling, but logically):

sent 1
sent 2
recv: 1
sent 3       // unblocks here after first recv frees slot
recv: 2
recv: 3

Closing a buffered channel

  • close(ch):

    • Makes the channel no longer accept sends. Any send to a closed channel panics.
    • Receivers can still drain buffered items.
    • Once buffer is empty, subsequent receives return the zero value and ok == false.
  • Example:

ch := make(chan int, 2)
ch <- 10
ch <- 20
close(ch)

v, ok := <-ch // v==10, ok==true
v, ok = <-ch  // v==20, ok==true
v, ok = <-ch  // v==0, ok==false (channel drained and closed)
  • Closing is normally done by the sender/owner side. Closing from multiple places, or closing while other senders are still sending, is dangerous.

select + buffered channels (non-blocking tries)

We often use a select with default to attempt a non-blocking send/recv:

select {
case ch <- v:
    // succeeded
default:
    // buffer full; do an alternate action
}

This is how we implement try-send / try-receive semantics.


Typical patterns & idioms

  1. Bounded buffer / producer-consumer

    • Buffer provides smoothing for bursts.
  2. Worker pool (task queue)

    • tasks := make(chan Task, queueSize); spawn worker goroutines that for t := range tasks { ... }.
  3. Semaphore / concurrency limiter (a runnable sketch follows this list)

    sem := make(chan struct{}, N) // allow N concurrent active tasks
    sem <- struct{}{}             // acquire (blocks when N reached)
    <-sem                        // release
  4. Pipelines

    • Stage outputs into buffered channels to decouple stages.
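Expanding pattern 3 into a runnable sketch (N, the task count, and the worker body are our own illustrative choices):

package main

import (
	"fmt"
	"sync"
)

func main() {
	const N = 2                   // at most N tasks run at once
	sem := make(chan struct{}, N) // buffered channel used as a semaphore
	var wg sync.WaitGroup

	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{} // acquire (blocks once N slots are taken)
			fmt.Println("task", id, "running")
			<-sem // release
		}(i)
	}

	wg.Wait()
}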

Synchronization & memory visibility

  • A successful send on a channel synchronizes with the corresponding receive that receives the value. That means the receive sees all memory writes that happened before the send (happens-before guarantee).
  • Using channels for signalling is safe: if we send after setting fields, the receiver will see those fields set.

Performance considerations

  • Buffered channels improve throughput where producers and consumers are not tightly synchronized.

  • Too large buffers:

    • Consume more memory.
    • Increase latency for consumers (items may sit in buffer).
    • Mask backpressure (producers can outrun consumers).
  • Too small buffers:

    • Lead to frequent blocking and context switching.
  • Tuning:

    • Choose cap to match burst size / acceptable queueing.
    • For heavy throughput, benchmark channels vs other concurrency primitives (e.g., pools, atomics); channels are convenient and fast but not free.

Common pitfalls & gotchas

  • Deadlock: If producers fill the buffer and nobody consumes, they block. If blocked sends prevent the program from progressing, deadlock occurs.
  • Send on closed channel: panic; avoid by ensuring only the owner closes the channel.
  • Nil channel: var ch chan T without make is nil; send/recv block forever.
  • Large struct values: sending large values copies them into the buffer; prefer pointers or smaller structs if copying is expensive.
  • Mixing close and multiple senders: close only from a single owner to avoid races/panics.

FIFO & fairness

  • The runtime enqueues waiting senders/receivers (sudogs) and generally wakes them in FIFO order, so waiting goroutines are served in roughly the order they arrived. For select across multiple channels, selection is randomized among ready cases to avoid starvation.

Quick cheatsheet

  • make(chan T, n) → buffered channel with capacity n.
  • len(ch) → items queued now.
  • cap(ch) → total capacity.
  • close(ch) → no more sends; readers drain buffer then get ok==false.
  • select { case ch<-v: default: } → non-blocking send attempt.

When to use buffered channels

  • When producers produce in bursts and consumers are slower but able to catch up.
  • When you want some decoupling but still bounded memory/queueing.
  • When you need a simple concurrency limiter (semaphore style).

Channel Synchronization is one of the most important and elegant parts of Go's concurrency model.


🔹 What is Channel Synchronization?

  • In Go, channels are not just for communication (passing values between goroutines).
  • They are also a synchronization primitive: they coordinate execution order between goroutines.

Think of it like:
👉 Send blocks until the receiver is ready (unbuffered).
👉 Receive blocks until the sender provides data.
👉 This mutual blocking acts as a synchronization point.


🔹 Case 1: Synchronization with Unbuffered Channels

Unbuffered channels enforce strict rendezvous synchronization:

  • When goroutine A sends (ch <- x), it is blocked until goroutine B executes a receive (<- ch).
  • Both goroutines meet at the channel, exchange data, and continue.

Example:

package main

import (
	"fmt"
	"time"
)

func worker(done chan bool) {
	fmt.Println("Worker: started")
	time.Sleep(2 * time.Second)
	fmt.Println("Worker: finished")

	// notify main goroutine
	done <- true
}

func main() {
	done := make(chan bool)

	go worker(done)

	// wait for worker to finish
	<-done
	fmt.Println("Main: all done")
}

🔎 Here:

  • done <- true synchronizes the worker with the main goroutine.
  • Main will block on <-done until the worker signals.
  • No explicit mutex or condition variable is needed; the channel ensures correct ordering.

🔹 Case 2: Synchronization with Buffered Channels

Buffered channels allow decoupling between sender and receiver, but can still be used for synchronization.

Rules:

  • Sending blocks only if buffer is full.
  • Receiving blocks only if buffer is empty.

Example:

package main

import (
	"fmt"
	"time"
)

func worker(tasks chan int, done chan bool) {
	for {
		task, more := <-tasks
		if !more {
			fmt.Println("Worker: all tasks done")
			done <- true
			return
		}
		fmt.Println("Worker: processing task", task)
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	tasks := make(chan int, 3)
	done := make(chan bool)

	go worker(tasks, done)

	for i := 1; i <= 5; i++ {
		fmt.Println("Main: sending task", i)
		tasks <- i
	}
	close(tasks) // signals no more tasks

	<-done // wait for worker
	fmt.Println("Main: worker finished")
}

🔎 Here:

  • Buffer allows temporary queuing of tasks.
  • Synchronization happens when tasks is full (main blocks) or empty (worker blocks).
  • Closing the channel signals the worker to stop.

🔹 How the Go Runtime Synchronizes with Channels

Now let's peek under the hood.

1. Each channel (hchan) has:

  • A buffer (circular queue, if buffered).

  • Two wait queues:

    • sendq → goroutines waiting to send.
    • recvq → goroutines waiting to receive.

2. Unbuffered channel (capacity = 0):

  • A send operation checks recvq:

    • If a goroutine is waiting to receive → direct handoff (value copied, receiver resumed).
    • If not → sender parks itself in sendq (blocked).
  • A receive operation checks sendq:

    • If a goroutine is waiting to send → direct handoff.
    • If not → receiver parks itself in recvq.

This ensures synchronous rendezvous.


3. Buffered channel (capacity > 0):

  • Send:

    • If buffer is not full → enqueue value, return immediately.
    • If buffer is full → block in sendq.
  • Receive:

    • If buffer is not empty → dequeue value, return immediately.
    • If buffer is empty → block in recvq.

4. Synchronization = parking and unparking goroutines

  • When a goroutine blocks, the runtime:

    • Saves its state (stack, registers).
    • Moves it off the run queue.
    • Adds it to the channel's wait queue.
  • When the opposite operation happens, the runtime:

    • Wakes a goroutine from the wait queue.
    • Puts it back on the scheduler run queue.
  • This is how Go synchronizes goroutines without explicit locks.


🔹 Real-world Patterns of Channel Synchronization

  1. Signaling (done channels, as in worker example).
  2. Worker pools (tasks + done channels).
  3. Bounded queues (buffered channels to control throughput).
  4. Fan-in / Fan-out (multiple producers and consumers).
  5. Rate limiting (token buckets using buffered channels; see the sketch below).
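As a sketch of pattern 5 (the tick interval and burst size are our own illustrative choices), a buffered channel holds tokens and a ticker refills it:

package main

import (
	"fmt"
	"time"
)

func main() {
	tokens := make(chan struct{}, 3) // burst capacity of 3

	// refill one token every 200ms; drop it if the bucket is full
	go func() {
		for range time.Tick(200 * time.Millisecond) {
			select {
			case tokens <- struct{}{}:
			default: // bucket full
			}
		}
	}()

	for i := 1; i <= 5; i++ {
		<-tokens // each request must take a token first
		fmt.Println("request", i, "allowed")
	}
}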

✅ Summary

  • Channels synchronize goroutines naturally: send blocks until receive, receive blocks until send (with buffering rules).
  • Runtime uses wait queues (sendq, recvq) and goroutine parking/unparking for this.
  • This synchronization mechanism replaces the need for explicit mutexes in many cases.
