techtravels.org

more updates

I know I’ve been bad about keeping you guys updated… This doesn’t mean I haven’t been working on it, however.

I recoded the ISR completely in assembly because I was just plain sick of the damn mandatory 27 cycles’ worth of instructions that SX/B forces on you. I also recoded things so the very first instruction toggles the debug pin, so I can now clearly see that the SX is reacting quickly to the edges.
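The pattern, sketched in C-style pseudocode (the real routine is SX assembly, and the port/pin names here are only placeholders — only the “raise the pin first” part is what the ISR actually does; dropping it again at the end is just to make the idea complete):

    /* Illustrative C sketch only -- the actual ISR is SX assembly.
       RB and the bit mask are stand-ins for the real port and pin. */

    volatile unsigned char RB;                 /* stand-in for the port register */

    #define DEBUG_PIN_HIGH()  (RB |=  0x01)    /* very first thing in the ISR    */
    #define DEBUG_PIN_LOW()   (RB &= ~0x01)    /* last thing before returning    */

    void isr(void)
    {
        DEBUG_PIN_HIGH();   /* scope: delay from the edge to this rise = reaction time */

        /* ... rollover/edge handling goes here ... */

        DEBUG_PIN_LOW();    /* scope: pulse width = total time spent in the ISR */
    }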

Because I’ve recoded the routines, they’re both very streamlined but about the same size, which means that telling an edge from a high isn’t easily done… I don’t care, because I think I’m getting this part right anyway.

I did change the way I detect an edge: I now look at the RTCC value to determine whether it’s a rollover or an edge. If it’s a rollover, the value is ALWAYS 3 (or in my case 4, because I store it in another variable). Anything else (and I can add tolerance to this, of course) is an edge.
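In C-style pseudocode the test amounts to something like this (just a sketch — the real version is a couple of assembly instructions, and the tolerance isn’t in there yet):

    /* Sketch of the rollover-vs-edge test (C, illustrative only).
       ROLLOVER_VALUE is what the saved RTCC reads on a rollover (4 here,
       since RTCC gets copied into another variable first); TOLERANCE is
       the slack that could be added later. */

    #define ROLLOVER_VALUE 4
    #define TOLERANCE      0

    static int is_edge(unsigned char saved_rtcc)
    {
        if (saved_rtcc >= ROLLOVER_VALUE - TOLERANCE &&
            saved_rtcc <= ROLLOVER_VALUE + TOLERANCE)
            return 0;    /* timer rollover */
        return 1;        /* real edge from the drive */
    }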

I have my whole ISR down to approximately 580ns total. Much better than the 900 or 1000ns I was seeing before.

Even with this recode, it still appears that, for whatever reason, main isn’t running after the 8th high and before the edge which would make up the 9th bit. I really can’t figure this out. And the RTCC gets reloaded, so using it to tell the real time between events might be tough, but I might be able to adjust for it.
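One way I could conceivably adjust for it (just an idea sketched in C, not something that’s in the ISR now) would be to count the rollovers between edges and add the timer period back in:

    /* Hypothetical adjustment for the RTCC reload -- not current code.
       TIMER_PERIOD is a placeholder for the counts between reloads. */

    #define TIMER_PERIOD 256

    volatile unsigned int rollovers_since_edge;   /* bumped on every rollover */

    unsigned int time_since_last_edge(unsigned char rtcc_now)
    {
        /* whole timer periods that went by, plus the partial count so far */
        return (rollovers_since_edge * TIMER_PERIOD) + rtcc_now;
    }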

Before I recoded, the problem “edges” always occurred whenever the RTCC showed a value of 206-219. Any value of 221 and up was a good edge that main took care of transferring beforehand… The numbers have changed because the ISR is more efficient, but the problem still shows up.

I still don’t know why the hell main is not sending the byte before processedge in the ISR runs again.

keith

Amateur Electronics Design Engineer and Hacker

2 comments

  • I see lots of people doing things this way:
    * Collect information 1 bit at a time in some variable in the ISR
    * Hope that the main loop realizes the byte is there, and grabs it, in the narrow window of time between the last bit of one byte and the first bit of the next byte.

    (Even more people collect an entire packet of data 1 byte at a time in an ISR,
    then hope that the main loop can decode the entire packet in the narrow window of time between the last byte of one packet and the first byte of the next packet).

    You have a spare byte of RAM somewhere, right?

    I find it far better to
    * Collect information 1 bit at a time in some variable in the ISR, a variable that the main loop neither knows about nor cares about.
    * Once I have all 8 bits, the ISR *copies* the byte to some public variable.
    * The main loop now has a full 8 bit-times to grab that byte before it’s overwritten by the next byte.

    Doesn’t that sound simple?
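    In rough C terms, the hand-off is just something like this (a sketch only; the names are made up):

        /* Rough sketch of the double-buffer hand-off (names are illustrative). */

        volatile unsigned char bit_accum;   /* ISR-private: bits collected so far  */
        volatile unsigned char bit_count;   /* ISR-private: how many are in it     */
        volatile unsigned char rx_byte;     /* public: the finished byte           */
        volatile unsigned char rx_ready;    /* public: set by ISR, cleared by main */

        void isr_got_bit(unsigned char bit)
        {
            bit_accum = (unsigned char)((bit_accum << 1) | (bit & 1));
            if (++bit_count == 8) {
                rx_byte  = bit_accum;   /* *copy* the finished byte out         */
                rx_ready = 1;           /* main has 8 bit-times to come get it  */
                bit_count = 0;
            }
        }

        void main_loop_poll(void)
        {
            if (rx_ready) {
                unsigned char b = rx_byte;
                rx_ready = 0;
                /* ... send b to the PC ... */
            }
        }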

    Double-buffering also gives you more places to add test code to toggle LEDs or whatever.
    (For example, the ISR could *increment* a variable “unhandled_bytes” every time it gives a byte to the main loop, and the main loop could *clear bit 0* of that variable every time it grabs a byte. Test code in the ISR *or* in the main loop could toggle a LED or something whenever that variable is anything other than 0 or 1).
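    In C-ish terms (again just a sketch), that test could look like:

        /* Sketch of the "unhandled_bytes" check (illustrative only).
           If main keeps up, the counter only ever reads 0 or 1;
           anything else means a byte sat around long enough to be lost. */

        volatile unsigned char unhandled_bytes;

        void isr_published_byte(void)      /* ISR just handed main a byte */
        {
            unhandled_bytes++;             /* ISR *increments*            */
            if (unhandled_bytes > 1) {
                /* toggle an LED, set a flag, etc. -- main is falling behind */
            }
        }

        void main_grabbed_byte(void)       /* main just took the byte     */
        {
            unhandled_bytes &= 0xFE;       /* main *clears bit 0*         */
        }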

    Why more people don’t use double-buffering, or think it’s “only for graphics”, mystifies me.
    https://en.wikipedia.org/wiki/Double_buffering


    David Cary

  • Hi David,

    Thanks for the post. I appreciate all the ideas I can get.

    I did implement your suggestion today, and it has fixed the current problem I was seeing with getting the byte out in time.

    I actually had something like this implemented a few versions back, and I’ve changed designs so much that the particular ‘feature’ went away. It still doesn’t really answer the question as to WHY it was happening — as there was plenty of time — but the question is moot at this point.

    I’m still not “fixed,” however: I’m still having problems with sectors where the DATA portion is getting corrupted. The sector headers are decoding properly, and the checksum on the sector headers is passing.

    The data checksum isn’t passing because the amount of data between sector headers is too small. This means that I’m dropping bytes somewhere. I did use an “unhandled bytes” counter, and the bytes aren’t being dropped because the code is too slow, etc. Main is properly dispatching the bytes to the PC, fast enough that the ISR isn’t overflowing the temporary holding byte.

    In one test, the sectors were short by anywhere from 140 bytes down to about 4 bytes.