(this was posted to the Classic Computer mailing list; please disregard if you're on the list. I think this is an important topic)
The last two nights I’ve been busy archiving some of my Amiga floppy collection. Most disks were written over 20 years ago.
Out of a sample of about 150 floppies, most were perfectly readable by my homegrown USB external Amiga floppy drive controller.
I paid very close attention to the failures, and to the disks my controller struggled with.
Without sounding too obvious here, the times between the pulses (which more or less define the data) were grossly out of spec. The DD pulses should nominally be 4us, 6us, and 8us apart before pre-write compensation. Most good disks are slightly faster. Normal times for the 4us range, for example, are 3.2-4.2us, with many pulses around 3.75us (notice the margins between adjacent ranges are only around 1-1.3us).
My original microcontroller implementation used acceptance windows of 3.2-4.2us, 5.2-6.2us, and 7.2-8.2us.
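To make the bucketing concrete, here is a minimal sketch in C of that fixed-window scheme. It's illustrative only: the function name is made up, and real firmware would count timer ticks rather than work in floating-point microseconds.

    /* Fixed-window bucketing, as in my original microcontroller
     * implementation. Input is the measured time between two flux
     * transitions, in microseconds. Returns the nominal bucket
     * (4, 6, or 8us), or 0 when the pulse misses every window. */
    static int classify_fixed(double us)
    {
        if (us >= 3.2 && us <= 4.2) return 4;
        if (us >= 5.2 && us <= 6.2) return 6;
        if (us >= 7.2 && us <= 8.2) return 8;
        return 0;  /* out of spec: exactly the failures described here */
    }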
When my current FPGA controller had a problem, I'd notice it was right on a boundary: maybe pulses were coming in 3.1us apart instead of 3.2us, or 4.3us instead of 4.2us. So I kept bumping the intervals apart, making a larger range of pulse times acceptable, and since the XOR sector checksums were passing, I was likely making the right choices. The bits were ending up in the right buckets.
But as I went through some of these disks, the difference between the ranges (and basically my noise margin) kept shrinking, to the point where an incoming pulse time might fall darn smack in the middle of the noise margin. Which bucket does THAT one go into?
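Widening the windows until they touch effectively turns classification into a comparison against the midpoints of the old gaps. A hedged sketch of that end state (the 5.0us and 7.0us thresholds are just the midpoints of my original gaps, not the actual values in my FPGA):

    /* With the windows bumped apart until they meet, classification
     * degenerates to midpoint thresholds. A 4.7us pulse sitting in the
     * old no-man's-land between 4.2 and 5.2 now lands in the 4us bucket
     * by a hair, and only the sector checksum can confirm the guess. */
    static int classify_widened(double us)
    {
        if (us < 5.0) return 4;   /* midpoint of the 4.2-5.2 gap */
        if (us < 7.0) return 6;   /* midpoint of the 6.2-7.2 gap */
        return 8;
    }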
My approach has been very successful (easily 95%+), but it makes me wonder about Phil's DiscFerret dynamic adaptive approach, where a sample of the incoming data defines the ranges.
Some disk drives and controllers might be faster or slower than others, and if you create custom ranges for each disk (each track?), perhaps you’ll have better luck.
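For what it's worth, here is one way that per-disk (or per-track) adaptation could look. This is my guess at the spirit of the idea, not Phil's actual DiscFerret code: sample a track's intervals, let a tiny 1-D k-means find where the three clusters really sit for that disk/drive combination, and place the decision thresholds midway between the recovered centers.

    #include <stddef.h>

    /* Estimate the actual 4/6/8us cluster centers for one track by
     * running a few rounds of 1-D k-means over its measured intervals.
     * Seeded at the nominal spacing; entirely a sketch of the idea. */
    static void adapt_centers(const double *us, size_t n, double c[3])
    {
        c[0] = 4.0; c[1] = 6.0; c[2] = 8.0;
        for (int iter = 0; iter < 8; iter++) {
            double sum[3] = {0, 0, 0};
            size_t cnt[3] = {0, 0, 0};
            for (size_t i = 0; i < n; i++) {
                int k = 0;  /* nearest current center */
                if (us[i] > (c[0] + c[1]) / 2) k = 1;
                if (us[i] > (c[1] + c[2]) / 2) k = 2;
                sum[k] += us[i];
                cnt[k]++;
            }
            for (int k = 0; k < 3; k++)
                if (cnt[k]) c[k] = sum[k] / (double)cnt[k];
        }
    }

    /* Classify against the adapted centers: the nearest cluster wins,
     * so the thresholds move with the drive and disk speed. */
    static int classify_adaptive(double us, const double c[3])
    {
        if (us < (c[0] + c[1]) / 2) return 4;
        if (us < (c[1] + c[2]) / 2) return 6;
        return 8;
    }

The appeal is that a disk written on a fast drive shifts all three clusters down together, and the midpoints follow them, which is exactly what fixed windows can't do.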