Go Concurrency Tutorial
Concurrency is a fundamental strength of Go, designed into the language from the beginning. Go's concurrency model is based on goroutines and channels, providing an elegant way to write concurrent programs that can efficiently utilize multi-core processors.
1. Goroutines Basics
1.1 What are Goroutines?
Goroutines are lightweight threads managed by the Go runtime. They are much more efficient than OS threads and enable you to run thousands or even millions of concurrent operations.
```go
// Starting a goroutine
func sayHello() {
    fmt.Println("Hello from goroutine!")
}

func main() {
    // Start a new goroutine
    go sayHello()

    // Give the goroutine time to complete (demonstration only;
    // prefer sync.WaitGroup over Sleep in real code)
    time.Sleep(100 * time.Millisecond)
}
```
1.2 Goroutines vs Threads
Key differences between goroutines and traditional threads:
- Memory usage: Goroutines start with small stack (2KB) that grows as needed
- Creation cost: Much cheaper than OS threads
- Scheduling: Managed by Go runtime, not OS kernel
- Communication: Designed to use channels for safe communication
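To see this lightness concretely, here is a minimal sketch (the `spawn` helper and the count of 10,000 are our own choices, not a standard API) that parks thousands of goroutines on a shared gate and asks the runtime how many are alive:

```go
package main

import (
    "fmt"
    "runtime"
    "sync"
)

// spawn starts n goroutines that park on a shared gate, reports how many
// goroutines the runtime sees while they are alive, then releases them.
func spawn(n int) int {
    var started, gate sync.WaitGroup
    gate.Add(1)
    for i := 0; i < n; i++ {
        started.Add(1)
        go func() {
            started.Done()
            gate.Wait() // park cheaply until released
        }()
    }
    started.Wait() // all n goroutines are now running
    alive := runtime.NumGoroutine()
    gate.Done() // release them
    return alive
}

func main() {
    // 10,000 concurrent goroutines still fit in a modest amount of memory.
    fmt.Println(spawn(10000) >= 10000) // true
}
```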
1.3 Multiple Goroutines
You can easily start many goroutines:
```go
func worker(id int) {
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    for i := 1; i <= 5; i++ {
        go worker(i)
    }
    time.Sleep(2 * time.Second)
}
```
2. Channels
2.1 Channel Basics
Channels are typed conduits for communication between goroutines. They provide synchronization and safe data exchange.
```go
// Creating and using channels
func main() {
    // Create a channel of strings
    messages := make(chan string)

    // Send a message in a goroutine
    go func() { messages <- "ping" }()

    // Receive the message
    msg := <-messages
    fmt.Println(msg) // "ping"
}
```
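Receives can also detect when a channel has been closed. Here is a sketch (the `drain` helper is our own name) using the comma-ok receive form:

```go
package main

import "fmt"

// drain receives until the channel is closed, using the comma-ok form
// to distinguish a real zero value from a closed, empty channel.
func drain(ch <-chan int) []int {
    var out []int
    for {
        v, ok := <-ch
        if !ok { // channel closed and drained
            return out
        }
        out = append(out, v)
    }
}

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3
    close(ch) // no more sends; receives still drain the buffer
    fmt.Println(drain(ch)) // [1 2 3]
}
```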
2.2 Channel Buffering
Buffered channels accept a limited number of values without a corresponding receiver; a send blocks only when the buffer is full:
```go
func main() {
    // Buffered channel with capacity of 2
    messages := make(chan string, 2)

    // Send without blocking
    messages <- "buffered"
    messages <- "channel"

    // Receive later
    fmt.Println(<-messages) // "buffered"
    fmt.Println(<-messages) // "channel"
}
```
2.3 Channel Directions
You can specify channel direction in function parameters for type safety:
```go
// Only accepts a channel for sending
func ping(pings chan<- string, msg string) {
    pings <- msg
}

// Accepts one channel for receiving and another for sending
func pong(pings <-chan string, pongs chan<- string) {
    msg := <-pings
    pongs <- msg
}
```
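A short main wiring these directional channels together might look like the following (buffered channels of capacity 1 keep the sketch single-goroutine):

```go
package main

import "fmt"

// ping may only send on its channel.
func ping(pings chan<- string, msg string) {
    pings <- msg
}

// pong may only receive from pings and only send on pongs.
func pong(pings <-chan string, pongs chan<- string) {
    msg := <-pings
    pongs <- msg
}

func main() {
    pings := make(chan string, 1)
    pongs := make(chan string, 1)
    ping(pings, "passed message")
    pong(pings, pongs)
    fmt.Println(<-pongs) // "passed message"
}
```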
3. Synchronization
3.1 WaitGroups
sync.WaitGroup waits for a collection of goroutines to finish:
```go
func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // Notify WaitGroup when done
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1) // Increment the counter before starting the goroutine
        go worker(i, &wg)
    }
    wg.Wait() // Block until all workers complete
}
```
3.2 Mutexes
Mutexes protect shared state from concurrent access:
```go
type SafeCounter struct {
    mu    sync.Mutex
    count int
}

func (c *SafeCounter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}
```
3.3 Once
sync.Once ensures a function runs exactly once, even when called concurrently from many goroutines:
```go
type Singleton struct{}

var (
    once     sync.Once
    instance *Singleton
)

func getInstance() *Singleton {
    once.Do(func() {
        instance = &Singleton{}
    })
    return instance
}
```
4. Concurrency Patterns
4.1 Worker Pools
A pool of goroutines processing work from a channel:
```go
func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        fmt.Printf("worker %d started job %d\n", id, j)
        time.Sleep(time.Second)
        fmt.Printf("worker %d finished job %d\n", id, j)
        results <- j * 2
    }
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)

    // Start 3 workers
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }

    // Send 5 jobs
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)

    // Collect all 5 results
    for a := 1; a <= 5; a++ {
        <-results
    }
}
```
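The collection loop above works because main knows there are exactly 5 results. When the count isn't known up front, a common variation (sketched here with our own function names) uses a WaitGroup to close the results channel once every worker returns, so the collector can simply range:

```go
package main

import (
    "fmt"
    "sync"
)

// runPool doubles each job using numWorkers goroutines and returns all
// results. A WaitGroup closes the results channel once every worker
// has returned, so the collecting loop terminates on its own.
func runPool(numWorkers int, jobs []int) []int {
    jobCh := make(chan int)
    results := make(chan int)
    var wg sync.WaitGroup

    for w := 0; w < numWorkers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := range jobCh {
                results <- j * 2
            }
        }()
    }

    go func() {
        for _, j := range jobs {
            jobCh <- j
        }
        close(jobCh)   // workers' range loops exit
        wg.Wait()      // every worker has finished sending
        close(results) // safe: no further sends are possible
    }()

    var out []int
    for r := range results {
        out = append(out, r)
    }
    return out
}

func main() {
    out := runPool(3, []int{1, 2, 3, 4, 5})
    sum := 0
    for _, v := range out {
        sum += v
    }
    fmt.Println(sum) // 30: each job doubled, order of arrival may vary
}
```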
4.2 Pipelines and Fan-out/Fan-in
A pipeline chains processing stages connected by channels. Fan-out means several goroutines read from the same channel to split work between them; fan-in means merging several channels into one. The example below is a simple two-stage pipeline, the building block for both:
```go
func producer(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}

func main() {
    // Set up the pipeline
    c := producer(1, 2, 3, 4)
    out := square(c)

    // Consume the output
    for n := range out {
        fmt.Println(n) // 1, 4, 9, 16
    }
}
```
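To turn the pipeline into true fan-out/fan-in, run several square stages off one producer and merge their outputs. The merge function below is a widely used sketch (it is not part of the pipeline shown above); producer and square are repeated so the example is self-contained:

```go
package main

import (
    "fmt"
    "sync"
)

func producer(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}

// merge fans in: it multiplexes several input channels onto one output
// channel, closing the output once every input has been drained.
func merge(chans ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, c := range chans {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for n := range c {
                out <- n
            }
        }(c)
    }
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

func main() {
    c := producer(1, 2, 3, 4)
    // Fan out: two square stages read from the same producer.
    out := merge(square(c), square(c))
    sum := 0
    for n := range out {
        sum += n
    }
    fmt.Println(sum) // 30 = 1 + 4 + 9 + 16, regardless of arrival order
}
```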
4.3 Context
The context package carries deadlines, cancellation signals, and other request-scoped values across API boundaries:
```go
func worker(ctx context.Context, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        select {
        case <-ctx.Done():
            fmt.Println("Worker cancelled")
            return
        default:
            // Do work
            fmt.Println("Working...")
            time.Sleep(500 * time.Millisecond)
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    var wg sync.WaitGroup
    wg.Add(1)
    go worker(ctx, &wg)
    wg.Wait()
}
```
5. Advanced Topics
5.1 Select Statement
select lets a goroutine wait on multiple communication operations:
```go
func main() {
    c1 := make(chan string)
    c2 := make(chan string)

    go func() {
        time.Sleep(1 * time.Second)
        c1 <- "one"
    }()
    go func() {
        time.Sleep(2 * time.Second)
        c2 <- "two"
    }()

    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-c1:
            fmt.Println("received", msg1)
        case msg2 := <-c2:
            fmt.Println("received", msg2)
        }
    }
}
```
5.2 Timer and Ticker
Timers and tickers help with delayed or periodic execution:
```go
func main() {
    // Timer fires once after the duration
    timer := time.NewTimer(2 * time.Second)
    <-timer.C
    fmt.Println("Timer fired")

    // Ticker fires repeatedly
    ticker := time.NewTicker(500 * time.Millisecond)
    done := make(chan bool)
    go func() {
        for {
            select {
            case <-done:
                return
            case t := <-ticker.C:
                fmt.Println("Tick at", t)
            }
        }
    }()

    time.Sleep(1600 * time.Millisecond)
    ticker.Stop()
    done <- true
}
```
5.3 Atomic Operations
sync/atomic provides low-level atomic memory operations:
```go
type AtomicCounter struct {
    count int64
}

func (c *AtomicCounter) Inc() {
    atomic.AddInt64(&c.count, 1)
}

func (c *AtomicCounter) Value() int64 {
    return atomic.LoadInt64(&c.count)
}
```
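Since Go 1.19, sync/atomic also provides typed values such as atomic.Int64, which cannot be accidentally read or written non-atomically. A sketch (the countTo helper is our own name):

```go
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

// countTo increments a typed atomic counter from n goroutines
// and returns the final value.
func countTo(n int) int64 {
    var count atomic.Int64 // zero value is ready to use
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            count.Add(1)
        }()
    }
    wg.Wait()
    return count.Load()
}

func main() {
    fmt.Println(countTo(1000)) // 1000
}
```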
6. Common Pitfalls
6.1 Goroutine Leaks
Goroutines can leak if they're blocked indefinitely:
```go
// Leaking goroutine example
func leak() {
    ch := make(chan int)
    go func() {
        val := <-ch // Blocked forever: nothing ever sends on ch
        fmt.Println(val)
    }()
    return // The goroutine above can never exit
}

// Fixed version
func noLeak() {
    ch := make(chan int)
    done := make(chan bool)
    go func() {
        val := <-ch
        fmt.Println(val)
        done <- true
    }()
    ch <- 42 // Send the value
    <-done   // Wait for completion
}
```
6.2 Race Conditions
Data races occur when goroutines access shared data without synchronization:
```go
// Race condition example
var counter int

func increment() {
    counter++
}

func main() {
    for i := 0; i < 1000; i++ {
        go increment()
    }
    time.Sleep(1 * time.Second)
    fmt.Println(counter) // May not be 1000
}
```
The fix guards every access to the shared counter with a mutex:
```go
var (
    counter int
    mu      sync.Mutex
)

func safeIncrement() {
    mu.Lock()
    defer mu.Unlock()
    counter++
}
```
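Putting the pieces together, the mutex fix becomes fully deterministic when combined with a WaitGroup instead of Sleep (the safeCount wrapper is our own name):

```go
package main

import (
    "fmt"
    "sync"
)

// safeCount increments a mutex-guarded counter from n goroutines and
// waits for all of them, so the result is always exactly n.
func safeCount(n int) int {
    var (
        counter int
        mu      sync.Mutex
        wg      sync.WaitGroup
    )
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock()
            counter++
            mu.Unlock()
        }()
    }
    wg.Wait()
    return counter
}

func main() {
    fmt.Println(safeCount(1000)) // always 1000
}
```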
6.3 Deadlocks
Deadlocks occur when goroutines wait for each other indefinitely:
```go
// Deadlock example
func main() {
    ch := make(chan int)
    ch <- 42    // Send blocks until someone receives
    val := <-ch // But we never get here
    fmt.Println(val)
}

// Fixed by using a buffered channel (or a separate goroutine)
func noDeadlock() {
    ch := make(chan int, 1) // Buffered: the send does not block
    ch <- 42
    val := <-ch
    fmt.Println(val)
}
```
7. Best Practices
7.1 Principles of Go Concurrency
Key principles for effective concurrent programming in Go:
- Don't communicate by sharing memory; share memory by communicating
- Keep goroutines focused on a single task
- Use channels to coordinate goroutines
- Prefer small, independent goroutines
7.2 When to Use What
Choosing the right concurrency tool:
| Scenario | Solution |
|---|---|
| Independent tasks | Goroutines |
| Communication between goroutines | Channels |
| Shared state access | Mutexes |
| Wait for completion | WaitGroup |
| Deadlines/cancellations | Context |
7.3 Debugging Concurrent Programs
Tools and techniques for debugging:
- Use `go run -race` to detect race conditions
- Add logging with timestamps
- Use `pprof` for performance analysis
- Keep goroutines simple and testable