techtravels.org

recent status

Well, I’ve put a decent number of hours in on the project over the last couple weeks.

I’ve mostly been “spinning my wheels” because I haven’t made much progress, but I think I’ve stumbled on something this morning that will help.

I’ve really thought there’s been a problem with the MFM decoding routines, mainly just my lack of understanding of how marco’s stuff works, how to interface it with my code, etc.  But I haven’t been able to get any traction on any of those issues.

I have a better handle on how this MFM decoding works and plan to do some manual decodes this afternoon.  Bit shifts make this somewhat of a problem, but I’m working on that too.
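For reference while doing those manual decodes: the usual Amiga trick is that each data block’s odd bits and even bits are stored as two separate MFM longwords, with clock bits interleaved, so decoding is just masking out the clock positions and recombining. A minimal sketch in C (this is the standard technique, not necessarily marco’s exact routine):

```c
#include <stdint.h>

/* Decode one 32-bit data longword from Amiga-style MFM.
 * odd_bits and even_bits are the two MFM longwords holding the odd
 * and even data bits; 0x55555555 keeps the data bit positions and
 * drops the interleaved clock bits. */
uint32_t mfm_decode_long(uint32_t odd_bits, uint32_t even_bits)
{
    const uint32_t MFM_MASK = 0x55555555u;
    return ((odd_bits & MFM_MASK) << 1) | (even_bits & MFM_MASK);
}
```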

I actually started writing an “amiga mfm explorer”, a graphical application to read in raw MFM, display it, decode it, and so on. I mostly finished it in one day; it doesn’t work yet, but the framework is there.

Here’s what I found today:

Remember this post that talks about the various possibilities for MFM?  It has proven to be good stuff, well worth the minor effort of putting it together.

Basically, I’ve found that there are impossible byte sequences coming out of my SX.  Remember that I’m checksumming the data leaving the SX and coming into the PC, so I know the data was transferred correctly.  When I say “impossible”, I mean bytes like D5, 5A, and B5, which in binary are 1101 0101, 0101 1010, and 1011 0101.  Notice the back-to-back ones.  That makes them illegal bytes according to the MFM rules: no back-to-back ones, and no more than three consecutive zeros.
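Those two rules are easy to check mechanically. Here’s a minimal per-byte check in C; note it only looks inside a single byte, so violations that span a byte boundary (a trailing 1 followed by a leading 1, or a zero run crossing bytes) aren’t caught by it:

```c
#include <stdio.h>
#include <stdint.h>

/* Returns 1 if the byte obeys the MFM rules within itself:
 * no two adjacent 1 bits, and no run of four or more 0 bits. */
int mfm_byte_legal(uint8_t b)
{
    /* Adjacent ones: a bit set in both b and b>>1 means "11" somewhere. */
    if (b & (b >> 1))
        return 0;

    /* Slide a 4-bit window across the byte looking for four zeros in a row. */
    for (int shift = 0; shift <= 4; shift++) {
        if (((b >> shift) & 0x0F) == 0)
            return 0;
    }
    return 1;
}

int main(void)
{
    /* All three "impossible" bytes trip the adjacent-ones rule. */
    uint8_t bad[] = { 0xD5, 0x5A, 0xB5 };
    for (int i = 0; i < 3; i++)
        printf("0x%02X legal? %d\n", bad[i], mfm_byte_legal(bad[i]));
    return 0;
}
```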

As I’ve mentioned, I have two test disks: one filled with the full range of ASCII characters, and one with alternating blocks of 0x00s and 0xFFs.  The full ASCII range is obviously the “tougher” of the disks, and if that works, anything should work.  I put the all-00s-and-FFs disk in, and I was able to successfully read the entire track, decode everything, and all the checksums (transfer, header, and data) came out correct.

When I use the “character distribution” function of hexworkshop, I see that the “bad reads”, i.e. reads of the full ASCII disk, have between 1 and 4 illegal bytes per thousand.  This error rate is way too high; it works out to 20-50 errors per track.

The good read had exactly ONE byte (0xB5, actually) not found on the “possibilities” chart out of 13824 (a full, over-sized track read), and this single byte did not affect the proper decoding of the data.  Very acceptable.

So to define the problem:

My SX is still not correctly reading all of the data, although it’s not far off.  There are some particular cases where it fails.  I don’t know yet whether the bad bytes fall between sectors, or are all grouped together, etc.; I still have to look at the relationship between the bad data and the good data in terms of location.

What’s nice is that I now have an easy-to-run litmus test that will say immediately (and quantify!!) whether the data is good or bad.  Maybe I’ll throw a quick utility together that provides drag-and-drop results.
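Here’s roughly what that quick utility could look like, minus the drag-and-drop part: scan a raw dump, count bytes that fail the mfm_byte_legal() check from the sketch above, and report the rate per thousand (the filename argument is just an assumption about how it’d be invoked):

```c
#include <stdio.h>
#include <stdint.h>

int mfm_byte_legal(uint8_t b); /* from the earlier sketch */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s rawdump.bin\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror(argv[1]);
        return 1;
    }

    long total = 0, illegal = 0;
    int c;
    while ((c = fgetc(f)) != EOF) {   /* count every byte in the dump */
        total++;
        if (!mfm_byte_legal((uint8_t)c))
            illegal++;
    }
    fclose(f);

    printf("%ld bytes, %ld illegal (%.2f per thousand)\n",
           total, illegal, total ? 1000.0 * illegal / total : 0.0);
    return 0;
}
```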

What’s also nice is that, bit-shifted or not, the results of that test are still valid: the MFM rules constrain the raw bitstream itself, so any eight-bit window of a legal stream is legal no matter where the byte boundaries fall.  This means I don’t have to bitshift before testing.  Very nice.

I’m REALLY hoping that the errors happen consistently, in the same place, following the same data, etc. so that I can identify the problem.  I have yet to actually see, on the scope, any type of error occur.  I can tell because I can relate the ISR firing to the data: if the ISR fires when it’s not supposed to (or doesn’t when it should), then that’s an error.

At least I now have a problem defined, and a test to see if the problem still exists.

keith

Amateur Electronics Design Engineer and Hacker
