You're seeing (at least) two different things at work.
The "bad sectors" are marked as such in the file system, for reasons lost to time. Possibly a transient seek error or a recoverable read timed out, causing Windows to report an I/O error. In the old days (before manufacturers loaded tons of software into the drive controller and introduced LBA), the only way to track potentially bad sectors was for the file system to mark them in its own disk allocation tables.
Then came IDE and LBA, where the drive automagically tracked bad sectors internally and did automatic bad sector relocation/reallocation. The idea was that the drive would always appear "perfect" to the attached computer, so the OS could stop worrying about bad sectors. (The OS didn't stop, which is why "bad cluster" marking still goes on in the file system. Of course, if the drive lived up to its claim of always appearing "perfect", the file system would never mark any space as bad/unusable...)
Finally SMART (which is dumb) comes in. The drive is supposed to track statistics and warn when the numbers indicate impending failure. However, what manufacturer is going to confess that their drive is going flaky? Instead, they wait until the last possible moment before sending the SMART "imminent death" message. I've had drives last only 90 minutes from first SMART warning to total failure!
And while the drive keeps statistics, manufacturers fudge and clean them. Somehow the "worst experienced" number will slowly creep back up, after a long period of no errors, to near the original "never had a problem" values, wiping the memory of any problems encountered.
I've seen drives that have two levels of sector reallocation (alternate sector on the same track, and full track reallocation) not bother to count (or at least report in the defined SMART attribute) the sector reallocations; just the track reallocations. These drives also reported nearly perfect SMART numbers, including no reallocations, until just before the bitter end.
In short, the manufacturers will do anything to make their drives' SMART data look as good as possible for as long as possible, which defeats the whole goal of failure prediction.
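If you want to watch the counters yourself rather than trust the drive's verdict, the raw values of attributes like Reallocated_Sector_Ct are usually more honest than the normalized VALUE/WORST columns described above. Here is a minimal sketch that scans `smartctl -A`-style text (from smartmontools) for nonzero raw reallocation counts; the sample output, attribute list, and function name are illustrative assumptions, not part of any real tool.

```python
# Sketch: flag suspicious SMART attributes in `smartctl -A`-style output.
# The SAMPLE text below is fabricated for illustration; real output would
# come from running e.g. `smartctl -A /dev/sda` (smartmontools).

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  12
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   3
"""

# Raw counters worth watching regardless of the normalized VALUE/WORST
# numbers, which (as noted above) vendors tend to massage.
WATCH = {"Reallocated_Sector_Ct", "Reallocated_Event_Count",
         "Current_Pending_Sector"}

def suspicious_attributes(report: str) -> dict:
    """Return watched attributes whose raw value is nonzero."""
    flagged = {}
    for line in report.splitlines():
        fields = line.split()
        if len(fields) >= 8 and fields[1] in WATCH:
            raw = int(fields[-1])
            if raw > 0:
                flagged[fields[1]] = raw
    return flagged

print(suspicious_attributes(SAMPLE))
# → {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 3}
```

Any nonzero pending or reallocated count on a drive that still reports "perfect" normalized values is exactly the kind of discrepancy described above.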
So what you probably saw was a glitch in an average drive which isn't going to fail any sooner than most of its production-run siblings, resulting in the file system creating a "bad area" record for an area that the drive would say is perfectly usable. I'd wager a true full-read surface test (a chkdsk /R pass, or chkdsk /B on NTFS, which clears the bad-cluster list and rescans) would return the failed area to use.
- Marked as answer by Jeffery Smith Monday, February 08, 2010 9:19 PM