techtravels.org

of bit-shifted sectors

I’m beginning to think more and more that my software on the SX is just fine, perhaps minus the idle routine mentioned in the last post. I have to do something there.

However, without the idle routine, only a couple of the 11+ sector headers in one diskmontools read were properly byte-aligned. I was wondering if other sectors were there but hidden from view because they were bit-shifted. So I calculated 0xAAAA 0xAAAA 0x4489 0x4489 shifted left by one through seven bits, giving seven different results, searched on the remaining center two bytes of each, and got hits on practically all of them.
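
The shifted patterns are easy to generate. Something like this sketch (illustrative code, not my actual search tool; treating the stable middle sixteen bits as the search key is one reading of “center two bytes”):

    /* Print the seven bit-shifted variants of the MFM sync pattern
     * 0xAAAA 0xAAAA 0x4489 0x4489, plus the middle bytes of each,
     * which stay intact no matter what data borders the pattern. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t sync = 0xAAAAAAAA44894489ULL;
        int shift;

        for (shift = 1; shift <= 7; shift++) {
            uint64_t v = sync << shift;
            printf("shift %d: %016llx  middle bytes: %04x\n",
                   shift, (unsigned long long)v,
                   (unsigned)((v >> 24) & 0xFFFF));
        }
        return 0;
    }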

Out of a 13k+ read, I should be getting 11-12 sectors, and I found 11 sector headers. This is very good news. I have to automate this process to really see the results of the overall read, but I think I’m onto something here.

I’m going to update Marco’s code from afr (the Amiga floppy reader), which looks pretty portable. His code relies on some assumptions about global variables and such, so I’ll have to write some code to drive his routines, but this should be easy.

Right now I’m remaining cautiously optimistic about the results.

What Marco does is basically do a bit-by-bit search for the FIRST sector header, and then assume the rest of the sector headers are shifted by the same amount. In my most recent case, I found that there were five sectors shifted by x, three shifted by y, and three shifted by z. It’s interesting that they are grouped together. If Marco’s code can’t find the next sector header byte-aligned (after adjusting for the preliminary bit-shift result), then he goes right back to bit-shifting.

This tells me that he expected sectors not to be byte-aligned AND he expected the sector headers, data, etc. to be bit-shifted too. This is important, and the results will be interesting here.
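
As I understand it, that search boils down to something like this (my own sketch and names, not Marco’s actual code):

    #include <stdint.h>
    #include <stddef.h>

    #define SYNC 0x4489

    /* The 16-bit word that starts 'bitpos' bits into 'buf'. */
    static uint16_t word_at(const uint8_t *buf, size_t bitpos)
    {
        uint32_t w = ((uint32_t)buf[bitpos / 8]     << 16) |
                     ((uint32_t)buf[bitpos / 8 + 1] <<  8) |
                      (uint32_t)buf[bitpos / 8 + 2];
        return (uint16_t)(w >> (8 - bitpos % 8));
    }

    /* Bit-by-bit search for the next sync word at or after 'start';
     * returns its bit position, or (size_t)-1 if none is found. */
    static size_t find_sync(const uint8_t *buf, size_t nbytes, size_t start)
    {
        size_t bit;
        for (bit = start; bit / 8 + 3 <= nbytes; bit++)
            if (word_at(buf, bit) == SYNC)
                return bit;
        return (size_t)-1;
    }

The caller decodes a sector after each hit, and if the sync word isn’t at the predicted offset for the next sector, it just calls find_sync() again from there.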

keith

Amateur Electronics Design Engineer and Hacker

10 comments

  • If that bit-shift skew (the number of bits sectors get shifted by) only increases with time – that is, if x is less than y and y is less than z – then I’d suggest you relax your timeout condition.

    Say, if it is set to 8us or 9.5us, try 10us or even 15us.

  • What I have in mind is that if sectors get ‘delayed’, then a number of ‘phantom’ bits must occur. That can hardly be due to the edges of some garbage bits; I could believe one shift from stray edges, but not two (the second group of sectors) and definitely not three (the third group of sectors) per track.

    So it seems to me there can be some ‘stretched’ bit (or bits) because of unstable spindle speed, for example, or for some other reason.

    Let me try it myself in ASCII art:

    a) “1” followed by “0” (2us + 2us):
        __        __
       |  |______|  |
    b) “1” followed by two “0”s (2us + 4us):
        __            __
       |  |__________|  |
    c) “1” followed by three “0”s (2us + 6us):
        __                __
       |  |______________|  |

    Now, if your timeout happens in the middle of a “0” that is longer than 6us, here’s what happens:
    d) “1” followed by three “0”s (2us + 6us and then some):
        __                    __
       |  |__________________|  |
                         ^
    You sample the last “0” at the “^” mark, and
    e) the rest of the last “0” after the “^” mark gets sampled as a ‘phantom bit’.

    Hope I make myself as clear as possible with all those HTML tags 🙂

  • OK, I’ve successfully failed to illustrate my point in ASCII, so here’s one picture that should talk better than I do:

    Here is a link to it in case it won’t show up:
    https://milliways.chance.ru/~tnt23/pics/misc/stretched_bit.jpg

    So, if the RTCC roll-over condition occurs as shown at the last bar, to the left of the red line, then you get “1” and three “0”s as you should. What probably happens next is that your code senses the rest of that stretched “0”, to the right of the red line, as another “0”, effectively shifting the next data right by that phantom bit.

    Neat idea I think, which of course can be totally wrong.
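
    In code terms, here’s a toy model of that idea (my sketch, with nominal numbers; not your firmware):

        /* Decode flux intervals into bits by rounding to 2us cells,
         * and watch what one stretched interval does. */
        #include <stdio.h>

        int main(void)
        {
            /* Intervals between flux transitions, in microseconds.
             * Nominal values are 4, 6, or 8; the 9.2 entry is an 8us
             * interval stretched by, say, spindle speed wobble. */
            double intervals[] = { 4.0, 6.0, 8.0, 9.2, 4.0 };
            int i, n = sizeof intervals / sizeof intervals[0];

            for (i = 0; i < n; i++) {
                /* Each interval decodes to a "1" plus one "0" per
                 * extra 2us cell; rounding a stretched interval up
                 * yields an extra, phantom "0" that shifts all the
                 * following data to the right. */
                int zeros = (int)(intervals[i] / 2.0 + 0.5) - 1;
                printf("%.1fus -> 1 followed by %d zero(s)\n",
                       intervals[i], zeros);
            }
            return 0;
        }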

  • Unfortunately, the shifts aren’t as you describe: x is not less than y, which is not less than z.

    The syncs are all shifted to the right, so this means you must shift left a certain number of bits to correct them.

    Here’s one file:

    The first three sectors are shifted by seven bits (I guess we could consider this a one-bit shift of the next byte? maybe?)

    Next five shifted by one bit.

    Last three shifted by two bits.

    So although I understand what you are saying, I’m not sure this applies to me, and if so I have no idea how to apply it.

    I will say that Marco’s code specifically allows for sectors to be shifted by different amounts within the same track, so he must have expected this type of behavior.
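
    For what it’s worth, once you know the shift, undoing it is just a buffer-wide left shift. A sketch (mine, not from afr):

        #include <stdint.h>
        #include <stddef.h>

        /* Shift the whole buffer left by 'bits' (1..7) to undo a
         * rightward bit-shift, pulling bits up from the next byte. */
        static void shift_left(uint8_t *buf, size_t len, int bits)
        {
            size_t i;
            for (i = 0; i + 1 < len; i++)
                buf[i] = (uint8_t)((buf[i] << bits) |
                                   (buf[i + 1] >> (8 - bits)));
            buf[len - 1] = (uint8_t)(buf[len - 1] << bits);
        }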

  • The GetMFMData() function seems to break under Visual Studio .NET C++. I’m not sure exactly what the problem is, but I think it has to do with the fact that unsigned ints are larger today than they used to be — or maybe VC++ interprets them differently than Borland did.

    Since his code does a lot of shifting, he assumed that trashed bits would be shifted off to the left as a result of exceeding the size of the variable. Because the sizes of the variables are different (the new ones are larger), those bits stayed behind as high-order bits and screwed up everything.

    This function is relied on for every operation in his code. This really screwed me up, and it took at least a few hours to figure out what the heck was going on. Bit operations are my weakest area, so I had to break out the graph paper and a pencil to figure out exactly how stuff was getting screwed up.

    I did eventually fix it, although the fix might be ugly: I forced the shifting to happen in an 8-bit char, and then left-shifted that into the high-order bits of an unsigned int.
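
    The shape of the fix is something like this (a sketch of my reconstruction, not Marco’s actual GetMFMData()):

        #include <stdint.h>

        /* On the old compiler, bits shifted past the top of an
         * unsigned int apparently fell off; under 32-bit VC++ they
         * survive as high-order bits.  Confining the shift to an
         * 8-bit variable restores the truncation, and the clean byte
         * is then shifted into the high-order bits of the unsigned. */
        static unsigned int clean_shift(unsigned int value, int bits)
        {
            uint8_t b = (uint8_t)value;    /* do the shift in 8 bits      */
            b = (uint8_t)(b << bits);      /* excess bits really fall off */
            return (unsigned int)b << 24;  /* park it in the high byte    */
        }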

    His code compiles just about perfectly though, and I really appreciate that he used C instead of C++, and that his code was a single monolithic source file. Forget this modular shit.

  • Still, can you give my idea of increasing the timeout value a try? I will feel better even if it fails 🙂

  • Sure. I was planning on it anyways 🙂 My timeout right now is just short of 2us, about 1.95us. Don’t ask me where I got that number; it was mostly trial and error. Before, I didn’t have a tool that could automatically shift, so for “success” I looked at the number of byte-aligned 4489s. I had no easy way of checking whether all the sectors were shifted — but I do now.

    The timing looks perfect on the scope, though, as it is. I meant to throw an image up last night, but I was too tired 🙂

    Also, I was going to create a simple scoring system, where an output file is given a number that tells how accurate it is, based on a few factors: the number of SYNCs found, the number of headers whose checksum is valid, the number of sectors whose data checksum is valid, etc.
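
    Something like this, maybe (a sketch; the weights are made up):

        /* Score one decoded track read; higher is better. */
        struct read_score {
            int syncs_found;   /* SYNCs located, aligned or shifted    */
            int headers_ok;    /* headers whose checksum is valid      */
            int data_ok;       /* sectors whose data checksum is valid */
        };

        static int score(const struct read_score *r)
        {
            return r->syncs_found + 5 * r->headers_ok + 20 * r->data_ok;
        }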

    I’ll try a bunch of different timings, try to get some pictures, correlate the results, and put up a post.

  • OK. So I tried increasing the timeout, and it didn’t really help. My current “best result” was with 97, so I decided to go crazy and go to 127. That’s something like 30% higher. No pictures today, but suffice it to say that every number between 103 and 127 was just too late. On short 4us groups it was OK, but on 6us or 8us groups it completely missed entire bitcells when sampling a “high.”

    I found 102 to be basically the fringe: the highest number that produced even a very small good result.

    On the other side, I went down to 85. From 85 to 90, it produced garbage as well.

    The range of 91-96/97 was the best — in some cases finding 12 correct headers! (Remember, diskmontools gets 13 sectors or something.) In one case, 93, it fully decoded a complete sector! So all checksums cleared for one sector!

    From 97 to 102, the results were marginal. Still OK, ranging from 2 to 6 good sector headers.

    I know these numbers seem arbitrary, so let me briefly explain. Multiply them by 0.02 to get the delay in microseconds: 100 = 2us, 50 = 1us, 75 = 1.5us. You can also look at it in terms of cycles: at 20ns per cycle, 50 cycles = 1us.
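
    Or, in code form, just that arithmetic (nothing more than the conversion above):

        #include <stdio.h>

        int main(void)
        {
            /* RTCC counts at 20ns per cycle: count * 0.02 = microseconds */
            int counts[] = { 85, 93, 97, 102, 127 };
            int i;

            for (i = 0; i < 5; i++)
                printf("count %3d -> %.2f us\n", counts[i], counts[i] * 0.02);
            return 0;
        }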

    Now the RTCC is CLEARED at entry to the ISR, no matter the cause. There is some delay between an edge and entering the ISR, so these timeouts are shifted — adjusted relative to the beginning of the bitcell.

    Still really no closer than when I started today, but I’m used to it! 🙂