Ever had a perfectly good backup drive suddenly start throwing Input/output errors on Linux? It doesn’t explain what failed. It doesn’t explain why. It just tells you something at the block layer went wrong. That could mean minor corruption, or a disk quietly dying. I recently had an external (USB) backup drive do exactly this: a specific path on the drive became impossible to touch. Since this was a rotating backup and I knew I still had 12 good days after this one, I wanted to simply delete the folder that errored out, but I got the following:

sudo rm -rf DockerStorage_XXXX/
 rm: cannot remove 'DockerStorage_XXXX/docker/rootfs/overlayfs/e7230360c36905826e6318f169ce29e0ee6a788cf5b2ce57066cd86cc626b60c/usr/share/X11/locale/microsoft-cp1256': Input/output error
 rm: cannot remove 'DockerStorage_XXXX/docker/rootfs/overlayfs/e7230360c36905826e6318f169ce29e0ee6a788cf5b2ce57066cd86cc626b60c/usr/share/X11/locale/nokhchi-1': Input/output error
 rm: cannot remove 'DockerStorage_XXXX/docker/rootfs/overlayfs/e7230360c36905826e6318f169ce29e0ee6a788cf5b2ce57066cd86cc626b60c/usr/share/X11/locale/ja_JP.UTF-8': Input/output error
 rm: cannot remove 'DockerStorage_XXXX/docker/rootfs/overlayfs/e7230360c36905826e6318f169ce29e0ee6a788cf5b2ce57066cd86cc626b60c/usr/share/X11/locale/koi8-u': Input/output error

What an I/O Error Actually Means

An Input/output error means: The kernel attempted to read or write a block and failed. That failure can originate from:
  • Filesystem corruption
  • Transport instability (USB/SATA controller)
  • Power interruption - including yanking the USB from the computer
  • Physical sector damage
  • Failing firmware
The mistake I see people make most often: I/O error = dead disk. Well... sometimes yes. Often no.
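Under the hood, that error string is just the kernel's EIO errno surfacing through whatever syscall touched the bad block (rm, dd, a plain read(), it doesn't matter). A quick way to see the mapping for yourself, assuming python3 is installed:

```shell
# "Input/output error" is the C library's message for errno EIO (5 on Linux).
# Any syscall that hits a failed block I/O can return it.
python3 -c 'import errno, os; print(errno.EIO, os.strerror(errno.EIO))'
```

On a glibc system this prints the errno number and its message: 5 Input/output error.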

How It Looked Before Fixing It

First, a full read of the entire disk. My 1TB USB drive was sitting at /dev/sdb at this point (before the fix).
sudo dd if=/dev/sdb of=/dev/null bs=1M status=progress
Result from this test:

841105801216 bytes (841 GB, 783 GiB) copied, 9191 s, 91.5 MB/s
dd: error reading '/dev/sdb': Input/output error
802188+0 records in
802188+0 records out
841155084288 bytes (841 GB, 783 GiB) copied, 9197.1 s, 91.5 MB/s
The disk read cleanly for 841 GB. Then it hit a wall. The kernel confirmed it: running dmesg | tail -50, I was seeing this spam:

[955180.685995] Buffer I/O error on dev sdb, logical block 205360128, async page read
[955180.686004] Buffer I/O error on dev sdb, logical block 205360129, async page read
[955180.686009] Buffer I/O error on dev sdb, logical block 205360130, async page read
[955180.686015] Buffer I/O error on dev sdb, logical block 205360131, async page read
[955180.686021] Buffer I/O error on dev sdb, logical block 205360132, async page read
[955180.686027] Buffer I/O error on dev sdb, logical block 205360133, async page read
[955180.686033] Buffer I/O error on dev sdb, logical block 205360134, async page read
[955180.686038] Buffer I/O error on dev sdb, logical block 205360135, async page read
[955180.686044] Buffer I/O error on dev sdb, logical block 205360136, async page read
[955180.686050] Buffer I/O error on dev sdb, logical block 205360137, async page read
This tells us:
  • The failure occurred at specific logical blocks
  • The block layer could not retrieve those sectors
  • The kernel reported async page read failures
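The two views of the failure even line up exactly, assuming the kernel's buffer layer is reporting logical blocks in 4 KiB units (which the arithmetic confirms): the first failing block from dmesg sits at precisely the byte offset where dd stopped.

```shell
# First failing logical block (from dmesg) times the 4 KiB block size:
echo $((205360128 * 4096))          # -> 841155084288
# dd's final byte count: 802188 records of 1 MiB each:
echo $((802188 * 1024 * 1024))      # -> 841155084288
```

Same number both ways, so the dd failure and the kernel spam describe one and the same bad region, not two separate problems.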
At this stage, it absolutely looks like hardware failure. But I still wanted to verify (while browsing new hard drives, just in case).

Why This Happens

There are several realistic causes:
  • Unsafe Removal: USB drives pulled without unmounting can leave metadata inconsistent.
  • Power Interruptions: Write caches not flushed = corrupted filesystem state.
  • USB Bridge Instability: Cheap controllers and cables introduce failure points.
  • Aging Hardware: Bad sectors develop over time.
  • Heavy Write Workloads: Docker, backups, and databases stress disks constantly (This could very well be what caused it for me).

Backing up the backup

If there is anything important on the drive that you want to save, you can use a tool like rsync to copy the readable files from the mount point to a temporary backup path:

sudo rsync -av --ignore-errors /mnt/BackupExternal/ /path/to/safe/backup/
rsync skips files it cannot read and reports them at the end of the run; --ignore-errors additionally stops those I/O errors from disabling deletions should you ever combine this with --delete. At this point I wanted to unmount the drive, which the system just would not let me do: one moment it claimed the filesystem was already unmounted, yet I could not work on the partition because the device was still in use:
sudo umount -f /mnt/BackupExternal
As even this command failed for me, I ended up pulling the drive from its port and plugging it into another. The drive therefore moved to /dev/sdd, and will be referred to as such from now on.
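When umount insists the target is busy, the usual tools are fuser -vm /mnt/BackupExternal (psmisc) or lsof +D /mnt/BackupExternal, which name the offending processes. If neither is installed, /proc itself holds the same information. The loop below is a rough, Linux-only sketch of that idea, not a polished replacement for fuser:

```shell
#!/bin/sh
# List processes whose working directory or an open file descriptor points
# under $target. Run as root to see other users' processes too.
target=/mnt/BackupExternal    # the mountpoint from this article
for pid in /proc/[0-9]*; do
  for link in "$pid/cwd" "$pid"/fd/*; do
    dest=$(readlink "$link" 2>/dev/null) || continue
    case "$dest" in
      "$target"/*|"$target") echo "${pid#/proc/}: $dest" ;;
    esac
  done
done
```

Killing (or cleanly stopping) whatever this prints is usually enough to let a normal umount succeed, without resorting to pulling the cable.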

The Great Wipe & The Phoenix

The Great Wipe

To make sure the old partition table and filesystem signatures were wiped, I wrote zeroes over the first 10MB of the drive. DO NOT PLAY AROUND WITH THIS COMMAND, IT IS VERY DESTRUCTIVE:
sudo dd if=/dev/zero of=/dev/sdd bs=1M count=10
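Before pointing dd at a real device, it can be worth rehearsing on a throwaway image file (disk.img here is just a scratch name) to convince yourself the command does exactly what you think:

```shell
truncate -s 100M disk.img                                               # scratch stand-in for /dev/sdd
dd if=/dev/urandom of=disk.img bs=1M count=20 conv=notrunc status=none  # fake old data
dd if=/dev/zero    of=disk.img bs=1M count=10 conv=notrunc status=none  # the wipe
# First 10 MiB are now zeros; everything past that is untouched:
cmp -n $((10 * 1024 * 1024)) disk.img /dev/zero && echo "first 10 MiB zeroed"
rm disk.img
```

Note conv=notrunc: without it, dd would truncate the image file, which a block device would never do, so the rehearsal would behave differently from the real run.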

... & The Phoenix

I then created a fresh ext4 filesystem directly on the device (no partition table; a partitionless disk is fine for a dedicated backup drive).
sudo mkfs.ext4 -F /dev/sdd
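mkfs.ext4 works on an image file too, which is a harmless way to see what the result looks like; blkid can then read back the freshly written superblock (fs.img is a scratch name for this sketch):

```shell
truncate -s 64M fs.img
mkfs.ext4 -q -F fs.img            # -F: don't balk at a non-block-device target
blkid -o value -s TYPE fs.img     # prints the detected filesystem type: ext4
rm fs.img
```

The same blkid invocation against /dev/sdd is how you confirm the real device now carries an ext4 signature.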

And theeen

I set up the mounts for the drive for use once again:

sudo mkdir -p /mnt/BackupExternal
sudo mount /dev/sdd /mnt/BackupExternal

The Critical Test: Full Sequential Read

After wiping and reinitializing, the disk was tested again:

sudo dd if=/dev/sdd of=/dev/null bs=1M status=progress
Result this time:

1000111865856 bytes (1.0 TB, 931 GiB) copied, 11935 s, 83.8 MB/s
953819+1 records in
953819+1 records out
1000152457216 bytes (1.0 TB, 931 GiB) copied, 11935.8 s, 83.8 MB/s
What changed:
  • Entire 1TB device read successfully
  • No I/O errors
  • Stable throughput (~84 MB/s)
Conclusion: The hardware survived a full read scan. That strongly suggests the original issue was filesystem corruption, not physical disk death - phew.

Repair Strategy

If data is disposable:

sudo umount /dev/sdX
sudo mkfs.ext4 -F /dev/sdX
Recreate mount:

sudo mkdir -p /mnt/BackupExternal
sudo mount /dev/sdX /mnt/BackupExternal
Update /etc/fstab so the mount persists through reboots:

sudo blkid /dev/sdX
sudo nano /etc/fstab
Example:

UUID=<new-uuid> /mnt/BackupExternal ext4 defaults,nofail 0 2
Test:

sudo mount -a
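After mount -a, findmnt (part of util-linux) confirms whether the target is actually mounted; it also exits non-zero when the target is not a mountpoint, which makes it handy in scripts:

```shell
findmnt /mnt/BackupExternal        # prints source device, fstype, and options
findmnt -n /mnt/BackupExternal >/dev/null \
  && echo "mounted" || echo "not mounted"
```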

When It Really Is Hardware

If a full sequential read repeatedly fails at the same blocks even after reformat:
  • The sectors are physically damaged
  • The controller is failing
  • The disk cannot be trusted
At this point - Replace it. Backups do not belong on “mostly working” disks - needless to say, but here we are.
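One way to gather that evidence is a read-only badblocks scan (from e2fsprogs), which prints the number of every block it cannot read. Since badblocks also accepts a plain file, you can try the tool harmlessly before committing to a multi-hour scan of a real device:

```shell
# The real thing -- read-only, nothing is written, but slow on a 1TB disk:
# sudo badblocks -sv /dev/sdX
# Harmless dry run against a scratch file (always reads cleanly):
truncate -s 16M scratch.img
badblocks -sv scratch.img          # reports "0 bad blocks found" on a healthy target
rm scratch.img
```

If the device scan keeps printing the same block numbers across runs, that is your confirmation the damage is physical and the disk belongs in the bin.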

The Real Takeaway

An I/O error means: The kernel failed to complete a block operation. It does not automatically mean the disk is dead.