Category - Amiga Floppy

1. Bug Katcher on Agnus
2. EEVBLOG does a video teardown on the Amiga 500!
3. Visualizing amiga track data with matlab
4. status updates
5. Amiga Forever
6. on floppy adpll, this time, my solution
7. results from adding the PJL
8. motor speed variation tests
9. on recent disk reading results
10. New demo video for the FPGA version of the AFP!

Bug Katcher on Agnus

I swear I already posted about this but heck if I can find the post.  Anyways, here’s a photo of a Bug Katcher installed on Agnus on a Commodore Amiga 500.

This gives me the ability to attach a logic analyzer to Agnus, which is normally difficult because of the PLCC package that Agnus uses.  You insert the Bug Katcher into the PLCC socket on the amiga motherboard, and then plug Agnus into the Katcher.  You can then attach the logic analyzer leads to the labelled (very nice) pins on top of the Bug Katcher.  And speaking of labels, how cool are the HP logic analyzer leads, which are color coded and numbered as well.  In both cases, this really helps prevent mistakes when hooking up the test equipment.

The one I’ve got is here.

http://www.emulation.com/pdf/F4535.pdf

[Photos: bug_katcher_installed, bk_in_action]

EEVBLOG does a video teardown on the Amiga 500!

Dave just posted this! I’m so glad to see him take this on!

http://www.eevblog.com/2013/03/13/eevblog-438-amiga-500-retro-computer-teardown

Visualizing amiga track data with matlab

So looking at a lot of raw data is pretty tough, but matlab handles it with ease.

So the above image shows a sample amiga track from a brand new disk, just recently formatted within the amiga.  It is basically as perfect a sample as one could expect.  A few things to notice about it:

  • Nice, tight groupings
  • Lots of space between ranges
  • No real data points fall in the middle
  • Three separate ranges, all within a reasonable, definable tolerance

Now let’s look at a bad track.  This track was written on my amiga 15-20 years ago, and to tell you how bad this disk was getting: it literally self-destructed a day or two after I sampled this data.  I’m not sure if I posted about this, but at least one track was completely scraped off the disk.  It could have been lots of wear and tear from the read head constantly touching one particular track, but in any event, the disk was in bad shape.

Now, the two images aren’t exactly the same size, so don’t inadvertently read anything into that.  But let’s notice what’s so wrong with this picture:

  • Much fatter groupings.
  • A bunch of data points in no-man’s-land; compare especially the space between 4us and 5us.
  • Almost the antithesis of the first picture!

You can see how reading the second disk poses some real problems, and I don’t think just any PLL is going to deal well with this!

Now that this disk is gone, I really can’t work with it.  I’ve got to find other samples that aren’t as bad, ones I can work with to further refine my hardware and software.

If anyone is interested, I took a capture using my Saleae logic analyzer and then processed it with a small C program that spits out a basic .csv.  Then I imported the .csv into matlab, turned off the lines connecting the data points, and turned on a dot to represent each data point.  And just for clarification, if you haven’t been following the blog, the y axis represents the time between low-going edges.  The x axis is basically the edge number, so the zeroth delta t is on the left, and the rightmost dot is the last delta t sampled.
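
The conversion step is nothing fancy.  Here’s a sketch along these lines (not the actual program from the project, and the single-column input format is an assumption) that turns a list of falling-edge timestamps into index,delta-t pairs that matlab can scatter-plot directly:

/* Minimal sketch of the capture-to-CSV step described above -- not the
   project's actual tool.  It assumes the Saleae export has been reduced to
   one falling-edge timestamp per line (in seconds) on stdin, and it writes
   "index,delta_t_us" lines to stdout. */
#include <stdio.h>

int main(void)
{
    double prev = 0.0, t;
    long index = 0;
    int first = 1;

    while (scanf("%lf", &t) == 1) {
        if (!first) {
            /* delta t between successive low-going edges, in microseconds */
            printf("%ld,%.3f\n", index++, (t - prev) * 1e6);
        }
        first = 0;
        prev = t;
    }
    return 0;
}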

I’m very happy to have the ability to visualize this stuff!

status updates

I’ve been more active on the project than the posts have illustrated. I’ve been working on a few different things:

Been trying different PLL solutions, examining how Paula does it, and actually analyzing my read data with Matlab to get a graphical view of the “problem.”  Most of my solutions, while they technically work (i.e., they have the desired behavior), don’t actually solve the problem at hand.  I have to get a better handle on what “bad data” looks like, and then try to invent solutions that address the specific problems I’ve seen. I’m also using Paula as one metric of “goodness.”  In my mind, my controller should be better: it should read disks that the amiga is not able to.

I don’t think I fully understand the problem.  I know what the SYMPTOMS are: flux transitions are sometimes read as being “in the middle”, say 5us between pulses, where 4us means ’10’ and 6us means ’100’. What the heck does 5us mean?  How do we bias it in the right direction?  Many of the controllers use some sort of PLL, but I see values one after another that span the whole acceptable range.  You can’t track something that doesn’t trend!
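
To make the bucket problem concrete, here’s a toy decoder (my own illustration, not the controller’s actual code) using the kind of static ranges discussed elsewhere on this page (3.2-4.2us, 5.2-6.2us, 7.2-8.2us).  A pulse at 5us simply doesn’t land in any bucket:

#include <stdio.h>

/* Toy illustration of static interval bucketing -- not the project's real
   decoder.  The windows are the ones quoted for the original microcontroller
   implementation: 3.2-4.2us, 5.2-6.2us, 7.2-8.2us. */
static const char *mfm_bits(double dt_us)
{
    if (dt_us >= 3.2 && dt_us <= 4.2) return "10";    /* nominal 4us cell */
    if (dt_us >= 5.2 && dt_us <= 6.2) return "100";   /* nominal 6us cell */
    if (dt_us >= 7.2 && dt_us <= 8.2) return "1000";  /* nominal 8us cell */
    return "?";   /* no bucket: this is exactly the 5us problem */
}

int main(void)
{
    double samples[] = { 3.8, 6.0, 5.0, 7.9 };   /* 5.0us falls in no-man's-land */
    for (int i = 0; i < 4; i++)
        printf("%.1fus -> %s\n", samples[i], mfm_bits(samples[i]));
    return 0;
}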

I also want to get a better handle on how Paula works.  I’ve got it hooked back up to the logic analyzer, and I’ve been trading messages on the English Amiga Board about disk DMA and the like.  I’d like to do automatic comparisons between Paula’s output and my project’s, to see when Paula gets it right but I get it wrong!

Amiga Forever

I should have mentioned this before, but the Amiga Forever CDs and DVDs are really pretty awesome. The original Commodore-era videos, the pre-installed games and environment, everything is done very nicely.

I have the 2008 Premium edition and have meant to post about it for a long time. I’ve got to break it out and check it out again. I don’t think I ever watched all the videos.

on floppy adpll, this time, my solution

This isn’t by any stretch finished, but it does do what I expected it to do.

It responds both to differences in phase (the counter is set to 0) and to differences in frequency (the period is adjusted in the direction of the frequency error).

I did this at 3am last night, so there could be a couple bugs in there.

With all this being said, I’m not entirely sure that PLLs are actually required for good reading of amiga floppy disks. My regular “static interval” solution works about 95%-98% of the time. I’m going to come up with a list of problem disks and see if this solution works better/worse/otherwise.

I’ve used a read data window of 1us, which starts out centered on 2us and is automatically adjusted as data comes in. This produces windows around 2us, 4us, 6us, and 8us. I output the overall error, which is the deviation from the center of the window, as each pulse is received. I’d like to graph this error, but it doesn’t look like Xilinx’s iSim will export a particular output as CSV or the like. One possible workaround is sketched after the module below.

module floppy_pll(
    input clk,
    input floppy_data,
    input reset,
    output reg window_open,
    output reg [7:0] data,     // not driven yet; the module isn't finished
    output reg strobe,         // not driven yet
    output reg [7:0] error
    );

// window is 1us wide
// starts .5us before counter rollover
// ends .5us after counter rollover
// ideally, edges should be arriving right when the counter rolls over

reg [7:0] period = 100;        // nominal 2us bit cell (100 counts at 20ns per count)
reg [10:0] counter = 0;

// edge detection flops for data coming in direct from the floppy
reg IN_D1, IN_D2;
wire out_negedge = ~IN_D1 & IN_D2;

always @(posedge clk) begin

    if (reset) begin
        period  <= 100;
        counter <= 0;
    end else begin

        // free-running counter that wraps at the current period
        if (counter == period) counter <= 0;
        else counter <= counter + 1;

        // the window straddles the rollover point by +/- 25 counts (0.5us)
        if ( (counter > (period-25)) || (counter < 25) ) window_open <= 1;
        else window_open <= 0;

        if (out_negedge) begin

            // if counter == 0 when the edge arrives, we are perfectly aligned
            // and don't need to adjust anything
            if (counter != 0 && window_open) begin

                counter <= 0;   // phase correction: align counter to the incoming edge

                if (counter < 25) begin
                    // we rolled over before the pulse was seen, so make the period larger
                    // error values will be over 128
                    period <= period + 1;
                    error  <= 128 + counter;
                end

                if (counter > (period-25)) begin
                    // we hadn't rolled over when we saw the pulse, so make the period shorter
                    // error values will be less than 128
                    period <= period - 1;
                    error  <= 128 - (period-counter);
                end

            end

        end

    end

    // synchronize the raw floppy data for edge detection
    IN_D1 <= floppy_data;
    IN_D2 <= IN_D1;
end

endmodule
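
Since iSim apparently won’t dump the error output as CSV, one workaround would be to mirror the same adjust-by-one rule on the PC and feed it measured delta-t values. This is only a sketch, not a cycle-accurate model of the RTL; it assumes a 20ns clock (50 counts per microsecond) and reads delta-t values in microseconds from stdin, one per line:

/* Host-side model of the period-adjustment idea above, so the per-edge error
   can be written as CSV for MATLAB.  A sketch only: assumes 20ns counts and
   delta-t values (us) on stdin. */
#include <stdio.h>

int main(void)
{
    int period  = 100;   /* 2us nominal cell, in 20ns counts */
    int counter = 0;
    double dt_us;
    long edge = 0;

    while (scanf("%lf", &dt_us) == 1) {
        int ticks = (int)(dt_us * 50.0 + 0.5);   /* us -> 20ns counts */

        /* run the counter forward until the edge arrives */
        for (int i = 0; i < ticks; i++)
            counter = (counter == period) ? 0 : counter + 1;

        int window_open = (counter > period - 25) || (counter < 25);
        int error = 128;   /* 128 = centered / no adjustment on this edge */

        if (counter != 0 && window_open) {
            if (counter < 25) {                   /* edge arrived after rollover */
                error = 128 + counter;
                period++;
            } else {                              /* edge arrived before rollover */
                error = 128 - (period - counter);
                period--;
            }
            counter = 0;                          /* phase: re-align to the edge */
        }
        printf("%ld,%d,%d\n", edge++, period, error);
    }
    return 0;
}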

results from adding the PJL

So, as I mentioned in another post, I added Jim Thompson’s phase-jerked loop to the project.  Phil had good luck using it with his DiscFerret, but my problem disks are still problem disks, and the data returned still has the same problems.

This PJL really didn’t help me.

http://www.analog-innovations.com/SED/FloppyDataExtractor.pdf

I expected the numbers to look a little more stable, as if we were tracking them better, but I really just didn’t see this.

I’m planning on reverting my changes soon unless some more playing around helps.

motor speed variation tests

I collected roughly 1,000 index pulses from the Sony MPF920-E with one common floppy inserted.

The motor speed variation was very, very small.  I am impressed by the accuracy.

Across 967 index pulses, all were within 56 microseconds of each other.  The average was 200.487ms.  Most of the group were within 20 microseconds of the average.

The standard deviation was 9.46 microseconds.
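
The arithmetic behind those figures is simple enough.  A sketch like this (assuming one index-pulse period per line on stdin, in milliseconds) computes the mean, standard deviation, and the implied RPM; a 300 RPM drive is nominally 200ms per revolution, so an average of 200.487ms works out to roughly 299.3 RPM:

/* Quick sketch of the number-crunching behind the motor-speed figures above,
   not the actual analysis script.  Input: one index period (ms) per line. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x, sum = 0.0, sumsq = 0.0;
    long n = 0;

    while (scanf("%lf", &x) == 1) {
        sum   += x;
        sumsq += x * x;
        n++;
    }
    if (n < 2) return 1;

    double mean   = sum / n;
    double stddev = sqrt((sumsq - n * mean * mean) / (n - 1));

    printf("n=%ld  mean=%.3fms  stddev=%.2fus  rpm=%.2f\n",
           n, mean, stddev * 1000.0, 60000.0 / mean);
    return 0;
}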

on recent disk reading results

(This was posted to the Classic Computer mailing list, so please disregard if you’re on the list.  I think this is an important topic.)

The last two nights I’ve been busy archiving some of my Amiga floppy collection.  Most disks were written over 20 years ago.

Out of a sample of about 150 floppies, most were perfectly readable by my homegrown USB external amiga floppy drive controller.

I paid very close attention to the failures, or the ones where my controller struggled.

Without sounding too obvious here, the times between the pulses (which more or less define the data) were grossly out of spec.  The DD pulses should nominally be 4us, 6us, and 8us apart before pre-write compensation.  Most good disks are slightly faster, and normal times for these ranges are:

4us: 3.2-4.2us, many around 3.75us
6us: 5.5-6.2us
8us: 7.5-8.2us

(notice the margins of around 1-1.3us between the ranges)

My original microcontroller implementation was 3.2-4.2, 5.2-6.2, and 7.2-8.2.

When my current FPGA controller had a problem, I’d notice the trouble was right on a boundary.  Maybe pulses were coming in 3.1us apart instead of 3.2us, or 4.3us instead of 4.2us.  So I kept nudging the interval boundaries outward, making a larger range of pulse times acceptable.  The XOR sector checksums were passing, so I was likely making the right choices; the bits were ending up in the right buckets.

But as I went through some of these disks, the gap between the ranges (basically my noise margin) got smaller and smaller, to the point where an incoming pulse time might fall darn smack in the middle of the noise margin.  Which bucket does THAT one go into?

My approach has been very successful (easily 95%+), but it makes me wonder about Phil’s DiscFerret dynamic adaptive approach, where a sample of the incoming data defines the ranges.

Some disk drives and controllers might be faster or slower than others, and if you create custom ranges for each disk (each track?), perhaps you’ll have better luck.
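
For what it’s worth, here’s a sketch of what such a per-track adaptive scheme might look like.  This is my own illustration, not DiscFerret’s actual algorithm: start from the nominal 4/6/8us centers, assign each delta-t to the nearest center, recompute the centers from those assignments, and then place the decision cutoffs halfway between the new centers.

#include <stdio.h>
#include <math.h>

/* Sketch of a per-track adaptive bucketing scheme -- an illustration only. */
void adapt_ranges(const double *dt_us, int n, double bounds[2])
{
    double center[3] = { 4.0, 6.0, 8.0 };   /* nominal DD cell times */
    double sum[3] = {0}, cnt[3] = {0};

    /* assign each delta-t to the nearest nominal center */
    for (int i = 0; i < n; i++) {
        int best = 0;
        for (int k = 1; k < 3; k++)
            if (fabs(dt_us[i] - center[k]) < fabs(dt_us[i] - center[best]))
                best = k;
        sum[best] += dt_us[i];
        cnt[best] += 1.0;
    }

    /* recompute the centers from the actual data on this track */
    for (int k = 0; k < 3; k++)
        if (cnt[k] > 0.0) center[k] = sum[k] / cnt[k];

    bounds[0] = (center[0] + center[1]) / 2.0;   /* '10'  vs '100'  cutoff */
    bounds[1] = (center[1] + center[2]) / 2.0;   /* '100' vs '1000' cutoff */
}

int main(void)
{
    /* made-up delta-ts from a fast-ish track, for illustration only */
    double track[] = { 3.7, 3.8, 5.6, 3.75, 7.6, 5.7, 3.8, 7.7 };
    double bounds[2];
    adapt_ranges(track, 8, bounds);
    printf("cutoffs: %.2fus and %.2fus\n", bounds[0], bounds[1]);
    return 0;
}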

New demo video for the FPGA version of the AFP!

Enjoy!

(45MB download warning)

FPGA implementation of AFP demo