Tag - sample

 1. Visualizing amiga track data with matlab
 2. on recent disk reading results
 3. bought a Saleae Logic. Another tool for my toolbox
 4. Intronix LA1034
 5. mid-january status
 6. Latest attempt
 7. not the PC again?
 8. review of timing details
 9. Changes to the PC side
10. still hashing some ideas around

Visualizing amiga track data with matlab

So looking at a lot of raw data is pretty tough, but matlab handles it with ease.

So the above image shows a sample Amiga track from a brand new disk, just recently formatted within the Amiga.  It is basically as perfect a sample as one could expect.  A few things to notice about it:

  • Nice tight groupings
  • Lots of space between ranges
  • No real data points fall in the middle
  • 3 separate ranges, all within a reasonable, definable tolerance

Now let's look at a bad track.  This track was written on my Amiga 15-20 years ago, and to tell you how bad this disk was getting — it literally self-destructed a day or two after I sampled this data.  I'm not sure if I posted about this, but at least one track was completely scraped off the disk — it could have been lots of wear and tear with the read head constantly touching one particular track, but in any event, the disk was in bad shape.

Now the two images aren’t exactly the same size, so don’t inadvertently read into that.  But now let’s notice what’s so wrong with this picture:

  • Much fatter groupings
  • A bunch of data in no-man's land. Compare especially the space between 4us and 5us.
  • Almost the antithesis of the first picture!

You can see how reading the second disk poses some real problems, and I don't think a simple PLL is going to deal well with this!

Now that this disk is gone, I really can’t work with it.  I’ve got to find other samples that aren’t as bad that I can work with to further refine my hardware/software.

If anyone is interested, I took a capture using my Saleae Logic analyzer and then processed it with a small C program that spits out a basic .csv.  Then I imported the .csv into matlab, turned off the lines connecting the data points, and turned on a dot to represent each data point.  And just for clarification, if you haven't been following the blog, the y index represents the time between low-going edges.  The x index is basically the edge number, so the zeroth delta-t is on the left, and the rightmost dot is the last delta-t sampled.
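
For the curious, the conversion step is nothing fancy.  Here's a sketch of the idea in C (this assumes a plain text file of edge timestamps, one per line, which is not the exact format my little program reads, so treat it as an illustration only):

    /* Sketch: turn a list of falling-edge timestamps into delta-t values.
     * Assumes a plain text input with one edge time (in seconds) per line,
     * not the exact Saleae export format, so adjust to taste.  Output is
     * one delta-t in microseconds per line, easy to pull into matlab. */
    #include <stdio.h>

    int main(void)
    {
        double prev = 0.0, t;
        int first = 1;

        while (scanf("%lf", &t) == 1) {
            if (!first)
                printf("%.3f\n", (t - prev) * 1e6);  /* seconds -> microseconds */
            prev = t;
            first = 0;
        }
        return 0;
    }

Once that column is in matlab, plotting it with markers only (something like plot(dt,'.')) gives exactly the kind of scatter shown above.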

I’m very happy to have the ability to visualize this stuff!

on recent disk reading results

(This was posted to the Classic Computer mailing list; please disregard if you're on the list.  I think this is an important topic.)

The last two nights I’ve been busy archiving some of my Amiga floppy collection.  Most disks were written over 20 years ago.

Out of a sample of about 150 floppies, most were perfectly readable by my homegrown USB external Amiga floppy drive controller.

I paid very close attention to the failures or ones where my controller struggled.

Without sounding too obvious here, the times between the pulses (which more or less define the data) were grossly out of spec.  The DD pulses should nominally be 4us, 6us, and 8us apart before pre-write compensation.  Most good disks are slightly faster, and normal times for these ranges are:

4us: 3.2-4.2us (many around 3.75us)
6us: 5.5-6.2us
8us: 7.5-8.2us

(notice the margins between ranges are around 1-1.3us)

My original microcontroller implementation was 3.2-4.2, 5.2-6.2, and 7.2-8.2.

When my current FPGA controller would have a problem, I’d notice that there were problems right on a boundary.  So maybe pulses were coming in at 3.1us apart instead of 3.2.  Or maybe 4.3 instead of 4.2.  So I kept bumping the intervals apart, making a larger range of pulse times acceptable — the XOR sector checksums were passing, so I was likely making the right choices.  The bits were ending up in the right buckets.
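
Just to make the bucketing concrete, the classification itself boils down to something like this (a sketch only; the function name is made up, and the boundaries are my original microcontroller ranges from above):

    /* Sketch: classify a delta-t (in microseconds) into the 4/6/8us buckets.
     * Returns the number of '0' bits that sit between the two '1's
     * (one, two, or three), or -1 if the pulse time falls outside
     * every range. */
    int classify_dt(double dt_us)
    {
        if (dt_us >= 3.2 && dt_us <= 4.2) return 1;  /* "4us" bucket */
        if (dt_us >= 5.2 && dt_us <= 6.2) return 2;  /* "6us" bucket */
        if (dt_us >= 7.2 && dt_us <= 8.2) return 3;  /* "8us" bucket */
        return -1;                                   /* no-man's land */
    }

Widening the ranges just means nudging those boundary constants around, which is exactly what I'd been doing.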

But as I went through some of these disks, the difference between ranges (and basically my noise margin) got reduced smaller and smaller, some to the point where an incoming pulse time might fall darn smack in the middle of the noise margin.  Which bucket does THAT one go into?

My approach has been very successful (easily 95%+), but it makes me wonder about Phil's DiscFerret dynamic adaptive approach, where a sample of the incoming data defines the ranges.

Some disk drives and controllers might be faster or slower than others, and if you create custom ranges for each disk (each track?), perhaps you’ll have better luck.
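
To illustrate what an adaptive version might look like (this is just my sketch of the general idea, not Phil's actual algorithm), you could histogram a sample of the incoming delta-t values and put the boundaries at the quietest spots between the three peaks:

    /* Sketch of an adaptive threshold idea: histogram a sample of delta-t
     * values (in microseconds) and place the 4/6 and 6/8 boundaries at the
     * emptiest bins between the nominal peaks.  The bin width and search
     * windows are guesses, purely for illustration. */
    #include <stddef.h>

    #define BIN_US 0.1   /* each bin is 0.1us wide */
    #define NBINS  120   /* covers 0 through 12us  */

    static int emptiest_bin(const int *hist, int lo, int hi)
    {
        int best = lo;
        for (int i = lo; i <= hi; i++)
            if (hist[i] < hist[best])
                best = i;
        return best;
    }

    /* Fills t1/t2 with the 4/6 and 6/8 boundaries, in microseconds. */
    void adapt_thresholds(const double *dt_us, size_t n, double *t1, double *t2)
    {
        int hist[NBINS] = {0};

        for (size_t i = 0; i < n; i++) {
            int bin = (int)(dt_us[i] / BIN_US);
            if (bin >= 0 && bin < NBINS)
                hist[bin]++;
        }
        *t1 = emptiest_bin(hist, 45, 55) * BIN_US;  /* look between 4.5us and 5.5us */
        *t2 = emptiest_bin(hist, 65, 75) * BIN_US;  /* look between 6.5us and 7.5us */
    }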

bought a Saleae Logic. Another tool for my toolbox

[image: Saleae Logic]

I bought a Saleae Logic, which is an inexpensive logic analyzer.  See link here.

It isn't nearly as fast (it only samples at 24 MHz max), and its triggering capabilities aren't as advanced, but it does capture millions to billions of samples.

So, of course, I put it to the test!  I recorded 5 million samples at 24 MHz, which works out to about 208ms, just slightly over a floppy track time of 203ms.  I sampled an entire track, which is FANTASTIC if you know anything about logic analyzers.  They just don't usually store much.

Then I wrote a small C program which converts the exported binary data to RAW AMIGA MFM.  I searched for binary patterns of the sync code 0x94489, and exactly 11 of them came up.  That means my little code is working and the logic analyzer is correctly reading the data.  I still have to try to decode this track and see if it decodes properly, but this is pretty neat.  It's like third-party verification of what I'm doing.
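
The search itself is just a shift-and-compare over the raw bits.  Something like this sketch (I'm showing the plain 16-bit $4489 sync word here; the pattern you hunt for is just a parameter):

    /* Sketch: slide a 16-bit window across a raw MFM byte stream (MSB
     * first) and count how many times a sync pattern shows up. */
    #include <stdint.h>
    #include <stddef.h>

    #define SYNC_WORD 0x4489u   /* the standard Amiga MFM sync word */

    size_t count_syncs(const uint8_t *mfm, size_t nbytes)
    {
        uint16_t window = 0;
        size_t hits = 0;

        for (size_t i = 0; i < nbytes; i++) {
            for (int b = 7; b >= 0; b--) {
                window = (uint16_t)((window << 1) | ((mfm[i] >> b) & 1));
                if (window == SYNC_WORD)
                    hits++;
            }
        }
        return hits;
    }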

I have these odd exception cases where sometimes a track refuses to read even though the Amiga reads it perfectly.  I'm going to get to the bottom of those cases.

I hate to say this, but everything just worked tonight.  No problems. Brought back up the java client, programmed the SX, and off I went.  Pretty neat.

I’ll have more to say on this logic analyzer, but the software is really nice, clean, simple.  It does its job.

I can't tell you how long I've wanted to sample a whole track with some test equipment.  You can buy $$$ logic analyzers and not get this type of buffer space.  It achieves it by streaming real-time samples straight to the PC's RAM.

Intronix LA1034

I’m seriously looking at the Intronix LA1034 logic analyzer.

It's a USB PC-based logic analyzer: 34 channels, samples up to 500 MHz, and has nice trigger options, etc.  I've downloaded the software and tried it out in demo mode, and it looks pretty powerful.

I could monitor multiple pins, namely the floppy data lead, the debug ISR pin, and the memory output.  By comparing the floppy data lead and the memory output pin, I could actually see if the correct bits are being written for each.

It will store at least 1023 transitions (more if they happen inside the same sample period) and with rough numbers, this looks like I could probably store upwards of 150 – 200 bytes of data.

What I'm missing now is visibility.  I know something's going wrong, but I can't see it.  Sure, double 1's are being written, but WHY?  What's the status of the various leads when that happens?
The trigger options look nice.  I'm not sure if I can trigger on a "11" being written to memory, but they have a pre-trigger buffer, so you can see what leads up to something happening.

They also have interpreters for displaying real data, like decoding RS-232 into values, etc.

mid-january status

So here’s where I’m at:

I know my memory read and write routines are good.  I calculate a checksum as I'm writing the data, and when I output that data to the PC, I calculate the checksum again.  They match; no problems there.

I know my USB-to-PC routines are good.  I calculate a different byte-based checksum (an 8-bit checksum) from the data I get from the FRAM, and then I have the PC software calculate it too.  They always match.  I'm using Parallax's UART for the time being, mainly for reliability.
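
The checksum itself is nothing fancy; conceptually it's just a running 8-bit sum, something like this (a simplified sketch with made-up names, not the exact firmware or PC code):

    /* Sketch: byte-based 8-bit checksum.  Both sides compute this over
     * the same buffer and the results get compared. */
    #include <stdint.h>
    #include <stddef.h>

    uint8_t checksum8(const uint8_t *buf, size_t len)
    {
        uint8_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = (uint8_t)(sum + buf[i]);   /* wraps naturally at 8 bits */
        return sum;
    }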

I'm using a new basic ISR routine, which I posted a post or two back.  It's simple, and it doesn't force any particular type of encoding: what comes off the drive goes into memory.  There are some drawbacks; for instance, I don't support any type of idling.  For the initial data, I wait to see a transition, and then I turn on interrupts and start recording.  I don't check for double 1's now, and I don't check for more than three 0's.  The data SHOULD be coming out of the drive correct, and force-fitting it into correct form just doesn't work; while it does fix SOME situations, I *REALLY* have to get to the bottom of why this happens.

My MFMSanityCheck software is telling me 0.3% of the data is bad.  I think 99.7% still isn't anything to complain about, but I really have to find the source of the problems.

All 0.3% of at least one sample file is a double 1's situation.  And I've seen this before.  It's NEVER triple 1's, and it's NEVER too many zeros.  Just double 1's.  Two 1's back to back.
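
For reference, the check itself boils down to something like this (just a sketch of the idea with illustrative names, not the actual MFMSanityCheck code):

    /* Sketch: walk an MFM bitstream (one bit per array element) and flag
     * two 1's back to back, or more than three 0's in a row. */
    #include <stdio.h>
    #include <stddef.h>

    size_t mfm_sanity_check(const unsigned char *bits, size_t n)
    {
        size_t violations = 0, zero_run = 0;
        int prev_one = 0;

        for (size_t i = 0; i < n; i++) {
            if (bits[i]) {
                if (prev_one) {
                    printf("double 1 at bit %zu\n", i);
                    violations++;
                }
                prev_one = 1;
                zero_run = 0;
            } else {
                prev_one = 0;
                if (++zero_run == 4) {   /* report once per too-long run */
                    printf("too many 0's at bit %zu\n", i);
                    violations++;
                }
            }
        }
        return violations;
    }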

So now that I’ve tested my memory, the problem is either the drive is REALLY spitting out two 1’s (and I have no clue how to fix that problem), or my ISR is triggering twice on the same edge.

I’m leaning towards the second choice but I really have to figure out how that is happening.

Latest attempt

Tim mentioned something in the last post regarding SYNC which really had me thinking about how to SYNC the data coming in from the drive with the data going to the UART of the USB2SER converter.

Then I was thinking: now that I have this super-fast serial port on my machine, why not just SAMPLE the data, a la a PC-based o-scope?  I can choose my sample rate and then forward the raw data to the PC to process it.

The fastest rate I can achieve (given my ISR code size) is about 1.25 Mbps for a sample rate.  Not too bad.  I'm running it at 921600 right now, because that's a standard rate.  Maybe I'll bump it to 1 Mbps just to round off the numbers.

Anyway, what I do is write a 1-bit if I've seen a falling edge within the last 1us, and write a 0-bit if I haven't.  Since the clock is very accurate, I can have the PC count the number of bits/bytes between edges, and I should be on target.
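
The PC-side step is then trivial: assuming one sample bit per 1us window (1 means a falling edge was seen in that window), the time between edges is just the number of samples between 1's.  Roughly (names are just for illustration):

    /* Sketch: recover the delta-t's from the sampled bitstream.
     * samples[] holds one bit per element, one element per 1us window. */
    #include <stdio.h>
    #include <stddef.h>

    void print_deltas(const unsigned char *samples, size_t n)
    {
        long last_edge = -1;

        for (size_t i = 0; i < n; i++) {
            if (samples[i]) {
                if (last_edge >= 0)
                    printf("%ld us\n", (long)i - last_edge);
                last_edge = (long)i;
            }
        }
    }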

I've tried a couple of terminal programs (namely RealTerm, 'The Terminal', HyperTerminal), but they all sort of choke when handling such a high baud rate.  I can easily write some software that will capture this, as long as the PC can handle the speed.  I may have to adjust buffer sizes, etc., but it should be doable.

The USB2SER supports 3 Mbps, so I'm guessing the PC/OS/driver etc. should handle 1 Mbps in terms of real throughput.

I originally was just sampling every 1us, but if I miss the low, which is quite possible, I miss the 'edge.'  This way is much better.

There is the issue of LSB vs MSB, since the drive spits out the stuff MSB-first and the UART takes stuff LSB-first.  I haven't thought about the impact of that.  I can certainly do whatever on the PC — I don't want to add anything else to the SX at this point.
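
If the bit order does turn out to matter, flipping it on the PC is trivial; a sketch of the per-byte reversal:

    /* Sketch: reverse the bit order of a received byte on the PC side. */
    #include <stdint.h>

    uint8_t reverse_bits(uint8_t b)
    {
        uint8_t r = 0;
        for (int i = 0; i < 8; i++) {
            r = (uint8_t)((r << 1) | (b & 1));
            b >>= 1;
        }
        return r;
    }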

I gotta do more manipulation of the data on the PC now, and I’ll post back this weekend about the results there.

not the PC again?

I was doing some more tests trying to get the PC to fail.

It seems that the PC really likes EVEN delays between data, and it's weird because faster data that comes in evenly is handled better than slower data that comes in unevenly.

In some cases I was dropping multiple bytes. Now I’m at the point where I’m not sure if the PC is screwing things up.

I’m going to go BACK to some sort of ACK’d protocol. But I don’t care for the one that I have used in the past.

In the meantime, I've been researching memory options.  I'm looking for something that is fast (I have to write to it at about 600 kbps), reads/writes data in BYTES (so something x 8 bits), has about 32K bytes, is SERIAL because I don't have a lot of pins, and has an easy enough interface.

I’ve been told that Ramtron has what I’m looking for, and I’m waiting to see if I can obtain some samples from them. I’d outright buy them if I could find a place that sells the one I need.

The one I narrowed it down to was the FM25256 (datasheet).

I’d be open to alternatives that are more easily available through digi-key, newark, mouser, etc.

I also might buy Parallax’s USB2SER development tool that supports super high baud rates.

review of timing details

I set up a couple more hardware flags in my software so I could see things like edge detections vs. rollover detections vs. PC transmissions, etc.  It all looks perfect to me.  I took maybe a dozen different samples, looking for problems, and everything lines up well.  Edges are detected where they should be, highs are detected properly, transmissions happen every 8 bits, etc.

I also manually counted cycles for both the ISR and the main routine, and made sure that I wasn't exceeding the number of cycles available in the ISR.  I also checked to see if the ISR is starving main; it's not — main is so tiny that it only takes at most 200-300 ns to process the "send", which is really all main does.

I did find one problem.  I noticed that the number of stored bits BEFORE I reached the PC send routine was reaching exactly 9 bits (never 10 or greater) for some reason.  I also checked to see whether an edge or a HIGH was consistently the trigger that caused the shift register to overflow.  I found that an edge trigger consistently overflowed it, and that a HIGH never caused it to overflow.

This means I'm losing one bit on a regular basis — the lost bit just gets shifted off the left-hand side.  I made a correction by forcing a PC xfer IN THE ISR right before the 9th bit gets added (normally main handles the PC comm).  This did fix the problem I was seeing, but it really did not improve the results at all.  I was hoping that would be the "big problem".  I want to narrow down WHY that was happening though, and see if that sheds light.  It has something to do with the timing of an edge vs. the transmission.

I guess the problem I'm having now is that everything looks kosher.  I want to put up some more timing images showing just how well it looks like it's working.

Onward and forward.

Changes to the PC side

With more and more things pointing to the PC, I've decided to really reduce the amount of work the PC has to do in its receive loop.  I looked at Marco's code because his was designed to run (albeit in DOS, or a DOS window under 95/98) on much slower machines — and his worked.

I think I've optimized the work the PC does per received byte down to about half of what it was before.

What I did mainly to optimize it was to remove the PCACK portion of the protocol.  This means that the PC no longer ACKs the byte back to the SX.  Although the ACK lead was nice for determining what state the PC was in, the overhead associated with it wasn't worth the benefit.

Marco’s didn’t ack it, so maybe I won’t either.

I basically put the byte on the port and raise byteready.  Then, after half a byte has been received from the drive, I drop byteready; it gets raised again another 4 bits later when the next byte is ready.

So I basically have byteready on for 4*2us, and off for 4*2us.

The PC looks for the rising edge by first looking for a low, and then looking for a high. I would think 8us should be plenty for the PC to detect the change in state.
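
The PC side is basically this (a sketch; read_byteready() and read_data_byte() are hypothetical stand-ins for however the actual port reads happen):

    /* Sketch of the PC receive step: wait for byteready to go low, then
     * high, then grab the byte.  The two read_* functions are placeholders
     * for the real port I/O. */
    #include <stdint.h>

    extern int     read_byteready(void);   /* hypothetical: returns 0 or 1    */
    extern uint8_t read_data_byte(void);   /* hypothetical: reads the 8 leads */

    uint8_t wait_for_byte(void)
    {
        while (read_byteready())   /* wait for the low  */
            ;
        while (!read_byteready())  /* wait for the rise */
            ;
        return read_data_byte();
    }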

I'm getting the best results so far: about 38 correct SYNCs and sector headers, and partially correct decodes.  This puts me at about 20%.  Not too bad, but not too good either.

I hope I have the PC issue worked out, and I’m going to further investigate how I sample the data.

still hashing some ideas around

I've been getting lots of good ideas from the smart guys over at Parallax.  They are throwing out a variety of methods and ways I can get this job done.  I will say that I'm a little overwhelmed by the pure number of ways to attack the problem.  I'm glad there are multiple ways, but it also lets me instill doubt in the current method and leaves an easy way out.  It's easy to say, "oh, this isn't working, forget about debugging it, just switch methods."

I now have at least 5 different versions of code written by different people, none of which currently work.  Then, of course, I have my own version of code, which is the only version to have produced any valid data whatsoever.  So I'm partial to my own code.  It's easy, simple, straightforward... and it's not in assembly.  I will say, however, that I'm getting much better at reading assembly, since all these other versions of code are written in it.

Some of the different methods presented are:

Two interrupt sources:

Mine (in SX/B): the ISR handles both the edge detection (via wkpnd_b) and the timeout condition.  The main code handles PC communication solely.  All shifting happens in the ISR.  An edge triggers the ISR.

I actually have several different versions that handle it all sorts of ways, but the only version to produce any results is this one.

Peter’s: the ISR handles both edge detection (via RTCC values) and the timeout condition. All shifting done in the ISR. Main handles PC communications only. Edge triggers the ISR.

His version is very similar to mine in overall layout, although I’m sure his is much better done. This is reassuring though, because this was the method I ended up choosing — he determines an edge differently from mine, but the overall idea is the same.

Michael’s: the ISR is triggered both by timeout and by edge detection. ISR handles the shifter. Main handles the PC communication only.

One Interrupt source:

Guenther’s: ISR triggered only by RTCC rollover. The ISR keeps track of the number of interrupts that have occurred between edges. ISR handles PC comm. Main polls wkpnd_b for an edge and does the shifting. Similar method suggested by Jim P.

Brainstorming indeed. I just have to figure out what’s going to be the best method.

I'd be thrilled to debug my code and get it working.  I have to sit down, list the possible things that could be going wrong, and design tests to rule them out.  The fact that others' code is similar to mine, and that mine is producing some data, is encouraging.  It tells me I'm not entirely on the wrong track.

Some questions on my code might be:

1. Are we missing edges because we spend too much time processing a high, which is a very common and regular task?  Edges are less frequent, but more important.  Losing an edge causes us to lose at least one '0', and potentially up to three '0's — because the SX will think we're idle when we are not.

2. Is our rollover not happening at the right time?  For instance, is a rollover happening too close to an edge, which further complicates problem one?  It complicates it because if it samples too close to the edge, then we're in the ISR during the edge event, and so the interrupt doesn't re-fire.  And then we only detect that the edge has occurred as a result of entering the interrupt from a rollover, which is BAD BAD BAD, because we should be storing a '0' at that point, not an edge.  We should never sample an edge or a low during a "0"-cell.

3. Does the PC perchance get overwhelmed with the flow of data and, because of multitasking, etc., start dropping bytes?  My code currently wouldn't notice this — and this wouldn't affect new data…

I guess what I really need to do is start writing more checks into my code that will generate a breakpoint if one of these bad things happen. Run the real-time debugger, and see if it breaks someplace.

It all comes down to timing, and this is why I have pjv really reviewing my code for timing issues, etc.