Jan 02

recent status

Well, I’ve put a decent number of hours in on the project over the last couple weeks.

I’ve mostly been “spinning my wheels” because I haven’t made much progress, but I think I’ve stumbled on something this morning that will help.

I’ve long suspected a problem with the MFM decoding routines, mainly due to my lack of understanding of how Marco’s stuff works, how to interface it to my code, and so on.  But I haven’t been able to get any traction on any of those issues.

I have a better handle on how this MFM decoding works and plan to do some manual decodes this afternoon.  Bit shifts make this somewhat of a problem, but I’m working on that too.

I actually started to write an “amiga mfm explorer”, a graphical application to read in raw MFM, display it, decode it, and so on, and I mostly finished it in one day.  It doesn’t work yet but the framework is there.

Here’s what I found today:

Remember this post that talks about the various possibilities for MFM?  This has proven to be good stuff.  Well worth the minor effort in putting it together.

Basically, I’ve found that there are impossible byte sequences coming out of my SX.  Remember that I’m checksumming the data leaving the SX and coming into the PC, so I know the data was transferred correctly.  When I say “impossible”, I mean bytes like D5, 5A, B5, and so on, which look like 1101 0101, 0101 1010, and 1011 0101.  Notice the back-to-back ones.  This makes them illegal bytes according to MFM rules: no back-to-back ones and no more than 3 consecutive zeros.

As I’ve mentioned, I have two test disks: one that is the full range of ASCII characters, and one that is alternating blocks of 0x00’s and 0xFF’s.  The full ASCII range is obviously the “tougher” of the disks, and if that works, anything should work.  I put the all-0x00’s-and-0xFF’s disk in, and I was able to successfully read the entire track, decode everything, and all checksums (transfer, header, and data) come out correctly.

When I use the “character distribution” function of Hex Workshop, I see that the “bad reads”, i.e. reading the full ASCII disk, have between 1/1000 and 4/1000 errors.  So 1-4 errors per thousand bytes.  This error rate is way too high; it works out to 20-50 errors per track.

The good read had exactly ONE byte (0xB5, actually) not found on the “possibilities” chart out of 13,824 (a full over-sized track read), and this single byte did not affect the proper decoding of the data.  Very acceptable.

So to define the problem:

My SX is still not correctly reading all of the data, although it’s not far off.  There are some particular cases where it fails.  I don’t know yet whether the bad bytes fall between sectors, or are all grouped together, etc.  I still have to look at the relationship between the bad data and good data in terms of location.

What’s nice is that I now have an easy-to-run litmus test that will say immediately (and quantify!!) whether the data is good or bad.  Maybe I’ll throw a quick utility together that provides drag-and-drop results.

What’s also nice is that, bit-shifted or not, the results of that test are still valid.  This means I don’t have to bitshift before testing. Very nice.

I’m REALLY hoping that the errors happen consistently, in the same place, following the same data, etc., so that I can identify the problem.  I have yet to see any type of error actually occur on the scope, though.  I can tell because I can relate the ISR firing to the data: if the ISR fires when it’s not supposed to (or doesn’t when it should), then that’s an error.

At least I now have a problem defined, and a test to see if the problem still exists.

Aug 10

minor success at 2:00am

Well, I’ve got MFM flowing from the PC drive now.  A couple of things:

Pins 10 and 14, the pins I’ve been using since last night, appear to be correct.

Pin 10 is the motor lead.  Dropping a corresponding SX lead causes the motor to come on.

Pin 14 is the drive select lead.  Although this should technically be an input as far as the drive is concerned, I’m getting a +5v from the drive.  I’m measuring from a ground lead to pin 14, and getting +5 with the SX completely out of the picture.  If I take this lead and short it to ground, then the drive is “selected” and it works.

If I simply cause the SX to drop its lead (pin = 0) and attach it to pin 14, this does NOT work, and the multimeter still shows +5v.

Why doesn’t dropping the SX lead cause that line to go to 0v? (Incidentally, measuring the SX by itself, the lead does in fact go to 0v, so the SX/programming/pin/etc. is ok.)

And why am I getting +5v from the drive on an INPUT pin?!@#

The pull-up resistor makes an amazing difference on the output READ DATA lead.  Without one, the signal is screwed up: looking at the scope shows NOTHING happening.  Put a 1k pull-up to 5v (well, brown-black-red; I’m too tired to go check, but for the record’s sake) and bam-o, I have a perfect signal with much faster rise-times.  It also looks, ummm, cleaner than the amiga floppy drive’s signal; not sure why yet.

Herb sent me some good links to his site here



Jul 30

more testing

So I put the amiga floppy drive back on the amiga, and the amiga reads disks no problem.  Glad to see that my drive hadn’t fried through all my messing around.

I haven’t seen valid MFM coming from the drive in so long, I’m going to sniff the connection once again and take a peek.

Jacob: as far as collaboration, this sounds good — but I really have to get over this hurdle and get the thing basically working.

I’ve found a nice pdf, “How Magnetic Disks Work” which appears to be some bonus chapter of an A+ guide.  Pretty decent.

I’m still plugging away — I just can’t for the life of me figure out how I broke this damn thing.


Jan 21

first two bytes of checksum bad

Despite the data being received 100% correct, the data checksum fails in certain cases.

It’s always the first two bytes of the 4-byte, 32-bit checksum that are wrong.  The last two are always right.  What does this mean!?

In my test track, track 0 of my 0x00-0xFF floppy, here are the results:

sector nos. 0-5, and 8: all computed checksums match stored checksums.

sector nos. 6, 7, 9, 10: last two bytes of computed checksum match, first two are wrong.

The checksum routine is pretty simple and based on exclusive-or:

unsigned long CalcSectorDataCHK(void)
{
    int i;
    unsigned long odd, even;
    unsigned long chk = 0L;

    for (i = 0; i < 128; i++) {
        odd  = GetMFMLong(64  + (i << 2));
        even = GetMFMLong(576 + (i << 2));
        chk ^= odd ^ even;
    }

    return chk;
}

Note GetMFMLong returns a 32-bit long, so we are processing 128*4 bytes = 512 real bytes.

I guess the question of the week is really whether I can get away with ignoring the first two bytes of the checksum.  I’m still working on the fix.  I really think this has to do with differences in the sizes of variable types between old C and .NET compilers, but I can’t nail the problem down.


Jan 21

updated code and comments


Here’s a current version of the code.  I cleaned up my code tonight, added a bunch of comments, and tried to explain and document what I’m doing and the logic behind it.

The current code is so much better than older versions; the time spent in the ISR is tiny in comparison to what it used to be.  I think I’m roughly under 600ns, which I’m happy about.  I’m too tired to fire up the scope and double-check this value, but I’m almost positive it’s 580ns or something.

The main gain in terms of processing speeds comes from the fact that I’m not messing with any ugly bit-shifters.  I simply detect either the pin change or a rollover condition, and immediately write this bit to the FRAM.

I have separate code that xfers the FRAM contents to the PC via USB, which I’ll put up if there’s interest.  It’s easy enough and uses the same SENDFRAM and RECVFRAM routines that I wrote for the main program.  I want to eventually integrate this separate xfer code into the main program, but this will involve two different modes, i.e. a “receive from floppy drive” mode and a “transfer to PC” mode.  I say two different modes because each uses the ISR for something different: I use the ISR during the PC transfer to make sure I’m respecting the ASYNC baud rate requirements.

I’m also using some terminal software ala hyperterminal to store the raw mfm data into a file to be processed by the PC software.  I should be able to integrate this read right into the processing software.  It just needs to grab bytes and throw them into an array.

comments are welcome.


Oct 21

today’s successes

I really stumbled on some stuff, and I think I have traction on a couple of problems.

1. My sizes are absolutely perfect. For me, this is a big milestone, and it tells me a lot. I’m getting perfect distances between mostly perfect sector headers. This tells me that I’m really not dropping even a single byte across a whole track. Or, if I am, it gets made up elsewhere, which really wouldn’t produce a perfect 1:1 ratio.

2. I’m getting repeatable results, even if they aren’t 100%. This means there isn’t any sort of flaky, intermittent, non-deterministic behavior.

3. On my current ‘problem’ track, I’m getting 7 of 11 good sectors. Sectors 6, 7, 9, and 10 are failing the data checksum.

4. My original read routines, which have always appeared solid, are performing perfectly with the ram. And much, much faster: something like 580ns total for the whole ISR, including a write to the ram chip. I’m very happy with the speed/length of the code. Very clean, easy, and nice.

5. When comparing the decoded bad-read sector with the actual contents of the sector, they match identically in content, BUT the read data is consistently shifted one byte to the right. So even “bad” sectors are only “bad” in that my decode routines aren’t checking the checksum properly. The routines I’m currently using are Marco Veneri’s ported code. If you recall from this post,


I think I’m running into similar problems, but poring over his code hasn’t produced anything interesting. What’s weird is that the checksum does sometimes work, and sometimes doesn’t. You say, how can it work if the data is consistently shifted? It just so happens that the value I lose at the beginning (normally a 0x00, probably a file-system character) is the same as the character at the end after a data segment. So the checksum comes out right, but like I said, only in certain cases.

It’s refreshing to see the bad sectors aren’t really bad and that it’s a SOFTWARE problem on the PC.  Mainly because it does heavy bit-level math using tricks to combine data, I’d like to rewrite it, but I don’t have a grasp YET on how to do that. I think that floppy.c code that I mention here


works well, is easy to read, etc. But I haven’t looked yet to see if they even check the checksum. I see where they decode data, and that’s nice, BUT do they just ignore the checksum? Like I said, though, I haven’t checked to be sure if they do it elsewhere. I’d really like to write my own code, so that I can truly understand how it works. Fixing or porting other people’s code that I don’t truly understand is a pain in the butt, although lord knows I’ve done plenty of it.

If the data contained in the bad sector is 100% correct, which it is, that means that my SX code is working perfectly. I checked the stuff byte for byte on the MFM-decoded side.

The main problem this whole time has been the PC not being fast enough, which I’ve resolved through USB and FRAM. Tomorrow’s task will be to narrow down the problem in the PC software, or figure out how the checksum works and write the darn routines myself.

Aug 31

Amiga MFM possibilities

Today, while thinking about how the heck I’m going to integrate my 3mbps Parallax USB-to-serial converter, I was trying to figure out exactly how many different possibilities there are coming out of an Amiga floppy drive. Since there are strict limitations on the number of ones in a row (one!) and the number of zeros (three), this seriously limits the character set of the output. So the output is no longer 256 different choices. I’ve long known that, and doing character distribution tests on my output files, even though they were partially corrupt, I could see that this set was very small. I wanted to find out exactly how small.

There are only 32 possible MFM bytes! Too many zeros or too many ones ruled out all the others.

Note that you could go further and create valid groupings like cryptographers do with knowing that only certain letters follow only certain other letters, and NEVER follow a different letter, etc.

In HEX, the following bytes are the only possible MFM output from an amiga floppy disk:

0x11, 0x12, 0x14, 0x15
0x22, 0x24, 0x25, 0x28, 0x29, 0x2A
0x44, 0x45, 0x48, 0x49, 0x4A
0x51, 0x52, 0x54, 0x55
0x88, 0x89, 0x8A
0x91, 0x92, 0x94, 0x95
0xA2, 0xA4, 0xA5, 0xA8, 0xA9, 0xAA

I’m not sure exactly how this helps my cause, but it means only 5 bits really *need* to get communicated back to the PC to represent all the possibilities, not 8. This may save transfer time or something.

Aug 08

losing data

I’ve been thinking more and more about how exactly I’ve been losing data.

It’s a lot of data. The worst case I’ve seen is about 140 bytes. That’s 1120 bits, or 2240us. This is a LONG time for my SX to be doing nothing, especially during an active transfer: 2.24 milliseconds!

This means my clock is rolling over 1120 times with no activity.

The only thing I can think this means is that I’m not seeing any edges within that timeframe. But why not? What’s different about this dead-period than before and after?

And why does this almost always happen DURING the data-portion of the sector, and not usually during the sector header? The sector headers almost always checksum right.

And if this isn’t obvious: the sector headers are almost always the perfect length, so we don’t drop bits during a sector header. Not all sector headers are perfect, but 95% of the time they are. So the length is perfect, and the checksum is perfect.

It sounds like something is losing sync, but the sync occurs with every edge, not just the ones at the beginning of the sector.

The header is 32 bytes, 64 RAW MFM bytes. So this is 512 mfm bits without error. Without dropping a bit, or adding a bit. Or losing data.

This is the question of the week.

Jul 17

current results

Now that I have a portion of Marco’s old code implemented, I’ve run an output file from my SX through it. It’s one whole track from a DiskMonTools capture. It’s almost 14k long, which sounds short to me, but I’m improving.

I apologize for the format; these are my debug messages from the code:

result from first gotonext is bPosByte = 204
header checksum ok for sector 3
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=3971, and i=0
sync was found!@# at pos=9460, and i=1
sync was found!@# at pos=1288, and i=6
header checksum ok for sector 4
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=3971, and i=0
sync was found!@# at pos=9460, and i=1
sync was found!@# at pos=2369, and i=6
header checksum ok for sector 5
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=3971, and i=0
sync was found!@# at pos=9460, and i=1
header checksum ok for sector 6
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=5055, and i=0
sync was found!@# at pos=9460, and i=1
header checksum ok for sector 7
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=5454, and i=0
sync was found!@# at pos=9460, and i=1
header checksum ok for sector 8
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=6537, and i=0
sync was found!@# at pos=9460, and i=1
header checksum wrong.
in gotonextsync()
sync was found!@# at pos=8375, and i=0
sync was found!@# at pos=9460, and i=1
header checksum ok for sector 0
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=9460, and i=1
header checksum ok for sector 1
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=11251, and i=1
header checksum ok for sector 3
checksum error occurred.
in gotonextsync()
sync was found!@# at pos=12328, and i=1
header checksum ok for sector 4
checksum error occurred.

The key thing to notice here, really, is that I’m detecting most of the SYNC’s, and most of the header checksums are passing. I’m seeing 11 sectors, and 10 of the 11 passed the header-checksum test. This is good.

Now, none of the data checksums are passing. This is bad. :) Then again, a data checksum requires a series of 1088+ raw MFM bytes correct in a row. Obviously, I’m not that close yet.

Jul 12

Creating more test disks

Both Tim’s suggestions and Agans’ Debugging book (Chapter 6, “Divide and Conquer”) say to “Inject Easy-to-Spot Patterns”. Of course, I knew this from before, and I picked “AMIGA”, which is truly easy to spot, but only AFTER it’s been decoded from MFM. I couldn’t tell by looking in a hex editor whether I was seeing AMIGA or just plain junk (because of the odd/even bit separation).

I created a test disk that had 100k of repeating 0xFF’s followed by 100k of 0x00’s. This worked well, and the results from the current version of my software aren’t incredibly horrible. Obviously, just enough of the sector headers are getting corrupted to make larger sections look incorrect. The bulk of the file is close. There are some problems with stuff getting shifted, though, and if you shift LEFT one bit from 0x55’s you get 0xAA’s, so those two aren’t ideal choices.

I have now created a file with 200k of 0x00-0xFF’s, i.e. the full ASCII range. This should help me identify exactly how the data is being corrupted.

The good news is that I am still receiving a good number of sectors with my current software, and some of the sectors actually look complete! Also, at least a couple of sector offsets are EXACTLY 1088 bytes apart, which tells me that I’m not completely off target; my code is working in some cases.

I’ve contemplated spending time on getting the checksum working, which should be easy enough, but I’d rather see more headers and work on that stuff later.