When CPU Is Low, but Load Average Is High
Load average represents the number of processes that are:
Running on the CPU, or runnable and waiting for a CPU time slice, or
Blocked in uninterruptible sleep (most commonly on disk I/O)
So:
Load average ≠ CPU usage
A system can have idle CPUs and still be overloaded.
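On Linux the kernel publishes the figures directly in /proc/loadavg, which makes the definition easy to check by hand. A minimal sketch (assuming a Linux system with /proc mounted):

```shell
# /proc/loadavg holds the 1-, 5-, and 15-minute load averages,
# followed by runnable/total task counts and the last PID used.
read -r one five fifteen rest < /proc/loadavg
echo "1-min load: $one  5-min: $five  15-min: $fifteen"

# Rough rule of thumb: sustained load above the CPU count means
# tasks are queuing faster than the machine can serve them.
echo "CPU count:  $(nproc)"
```

Comparing the load figure against `nproc` is what turns a raw number into a judgment: a load of 8 is calm on a 32-core box and alarming on a 2-core one.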
The Most Common Cause: Disk I/O Wait
When processes are blocked on disk reads or writes:
CPU stays idle
Load average increases
The system feels slow
This happens because processes are stuck in uninterruptible sleep (D state) and cannot be scheduled.
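You can see those D-state processes directly with ps. A quick sketch (the `wchan` column, where supported, shows the kernel function each task is blocked in):

```shell
# List tasks in uninterruptible sleep (STAT starts with "D").
# These are the processes inflating load without consuming CPU.
ps -eo pid,stat,wchan:20,comm | awk 'NR == 1 || $2 ~ /^D/'
```

If this list is long and the same command name keeps appearing, you already have a strong suspect before touching any other tool.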
How to Confirm It Quickly
Run:
vmstat 1
Example output (simplified):
procs -----------memory---------- -----cpu-----
 r  b    free    cache           us sy id wa
 1  6   82000   450000            4  3 68 25
What to focus on:
us (user CPU) → application work
sy (system CPU) → kernel work
wa (I/O wait) → CPU idle, waiting on disk
Here:
us + sy is low
wa is high
→ CPU is available, but processes are blocked.
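The same "wa" figure can be derived without vmstat: field 6 of the aggregate "cpu" line in /proc/stat is cumulative iowait time, in clock ticks since boot. A minimal sketch:

```shell
# Sum all fields of the "cpu" line and report iowait's share.
# Note this is a since-boot average, not a per-second sample like vmstat's.
awk '/^cpu / {total = 0; for (i = 2; i <= NF; i++) total += $i
              printf "iowait: %.1f%% of all CPU ticks\n", 100 * $6 / total}' /proc/stat
```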
Identify the Disk Bottleneck
Next, run:
iostat -x 1
Example output:
Device    r/s  w/s  rkB/s  wkB/s  await  avgqu-sz  %util
nvme0n1   120   85   4800   6200   45.3       7.8   98.6
Red flags:
High await → slow I/O response
High avgqu-sz → requests piling up
High %util (~100%) → disk fully saturated
This confirms disk I/O is the bottleneck, not CPU.
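Once the disk is implicated, the next question is which process is generating the I/O. One portable sketch (assuming the kernel exposes per-task I/O accounting) reads the cumulative counters in /proc/<pid>/io — note that without root you will only see your own processes:

```shell
# Rank processes by cumulative bytes written to storage.
# Each /proc/<pid>/io has read_bytes/write_bytes counters.
for p in /proc/[0-9]*; do
  [ -r "$p/io" ] || continue
  awk -v pid="${p#/proc/}" '/^write_bytes/ {print $2, pid}' "$p/io" 2>/dev/null
done | sort -rn | head -5
```

If pidstat is available (it ships in the same sysstat package as iostat), `pidstat -d 1` gives a live per-process view of the same data with friendlier formatting.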
Key Takeaway
Low CPU + High Load = Something else is blocking progress
Most of the time, that “something” is disk I/O.
Understanding this distinction prevents chasing the wrong problem
and saves real incident response time.
One Rule to Remember
CPU usage shows work being done.
Load average shows work waiting to be done.