I love Go’s concurrency story. Goroutines + channels are still one of the most productive ways to build concurrent services without drowning in callbacks.
But there’s a trap I’ve watched teams fall into (and I’ve done it myself): using channels as a “locking replacement” for shared state in hot paths.
Under extreme web load, where high-QPS handlers hit tiny, repeated critical sections (counters, rate-limit state, in-memory maps, per-tenant stats), channels often lose to sync.Mutex. Not because channels are “bad”, but because they do more work than a lock/unlock in the common uncontended-to-mildly-contended cases. In a classic benchmark, a channel send/receive pair comes in around 100ns, while an uncontended mutex lock/unlock can be roughly 4x faster.
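If you want to sanity-check that claim on your own hardware, a minimal pair of benchmarks along these lines will do; the function names are mine and the exact numbers vary by CPU and Go version:

package bench

import (
    "sync"
    "testing"
)

// One buffered-channel send plus one receive per iteration.
func BenchmarkChannelSendRecv(b *testing.B) {
    ch := make(chan int, 1)
    for i := 0; i < b.N; i++ {
        ch <- i
        <-ch
    }
}

// One uncontended Lock/Unlock pair per iteration.
func BenchmarkMutexLockUnlock(b *testing.B) {
    var mu sync.Mutex
    for i := 0; i < b.N; i++ {
        mu.Lock()
        mu.Unlock()
    }
}

Run them with go test -bench=. and compare ns/op.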
Go’s own guidance is pragmatic here: use what’s simplest/most expressive; don’t be afraid to use a mutex when it fits.
Let’s prove it with code.
The scenario: a hot counter in an HTTP handler
Imagine you want to count requests per route (or per tenant), and you’re doing it in-process.
Two common approaches:
- Channel-owned state (“share memory by communicating”): send increments to an aggregator goroutine.
- Mutex-protected shared state: lock, update, unlock.
Here’s what those look like.
package stats

import "sync/atomic"

type ChannelCounter struct {
    ch     chan string
    closed atomic.Bool
    // state lives in the goroutine below
}

// NewChannelCounter returns the counter plus the map the aggregator goroutine
// writes into. Reading counts while that goroutine may still be writing is a
// data race; as written, there is no signal for when it has drained and exited.
func NewChannelCounter(buffer int) (*ChannelCounter, map[string]uint64) {
    cc := &ChannelCounter{ch: make(chan string, buffer)}
    counts := make(map[string]uint64)
    go func() {
        for key := range cc.ch {
            counts[key]++
        }
    }()
    return cc, counts
}

func (c *ChannelCounter) Inc(key string) {
    // Under heavy load, this send can become the dominant cost:
    // - contention on the channel
    // - scheduling/parking if the buffer fills
    // - serialization through a single reader
    c.ch <- key
}

func (c *ChannelCounter) Close() {
    if c.closed.CompareAndSwap(false, true) {
        close(c.ch)
    }
}
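To make the hot path concrete, here’s roughly how that counter would sit in a handler; the module path and route are made up for illustration:

package main

import (
    "net/http"

    "example.com/app/stats" // hypothetical import path for the package above
)

func main() {
    counter, _ := stats.NewChannelCounter(1024)
    defer counter.Close()

    http.HandleFunc("/api/orders", func(w http.ResponseWriter, r *http.Request) {
        counter.Inc(r.URL.Path) // one channel send per request, on the hot path
        w.WriteHeader(http.StatusOK)
    })

    http.ListenAndServe(":8080", nil)
}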
What happens at high QPS?
Even if your HTTP server is handling requests across many goroutines, the aggregation is serialized through the one goroutine reading the channel.
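You can see that funnel directly by hammering Inc from many goroutines. A sketch of such a benchmark, reusing the hypothetical import path from above:

package stats_test

import (
    "testing"

    "example.com/app/stats" // hypothetical import path for the package above
)

func BenchmarkChannelCounterParallel(b *testing.B) {
    c, _ := stats.NewChannelCounter(1024)
    defer c.Close()

    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            // Every goroutine's increment funnels into the single reader.
            c.Inc("/api/orders")
        }
    })
}

The exact numbers don’t matter much here; the point is that adding more producer goroutines can’t speed up the single consumer.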
