Although I’m transmitting from the Parallax SX microcontroller to the PC via USB at a data rate of 2 Mbps, the actual throughput is much less. First, there’s framing overhead: a start and a stop bit wrapped around every 8 data bits, which adds 25%. Second, I actually have to read the data out of the FRAM, and that takes time.
It takes approximately 1.7us to read 1 byte, and then 5us to transmit that byte. The 5us is 500ns (1/2 Mbps) * 10 bits (8 data + 1 start + 1 stop). So 6.7us per byte total. This doesn’t include the time it takes to findsync().
So the read-then-send loop alone caps me at about 1.2 Mbps (8 bits every 6.7us), and with findsync() on top of that my real throughput is roughly 800 kbps on a 2 Mbps data rate.
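As a sanity check on those numbers, here’s the arithmetic in Python (findsync() excluded, same as above):

```python
# Per-byte timing for the read-then-send loop, using the numbers above.
BIT_TIME_US = 0.5                   # 500ns per bit at 2 Mbps
FRAME_BITS = 10                     # 8 data + 1 start + 1 stop
READ_US = 1.7                       # FRAM read time per byte
SEND_US = FRAME_BITS * BIT_TIME_US  # 5us on the wire per byte

per_byte_us = READ_US + SEND_US            # 6.7us per byte
throughput_kbps = 8 / per_byte_us * 1000   # data bits/us is Mbps; x1000 for kbps

print(f"{per_byte_us:.1f}us/byte -> ~{throughput_kbps:.0f} kbps ceiling")
```

That works out to a hard ceiling just under 1.2 Mbps; the gap down to the ~800 kbps figure is presumably findsync() and other per-track overhead.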
Kind of sucks, but getting to 2 Mbps is impossible unless I integrate the reading/findsync’ing into the transmit routine, and I think that’s generally a bad idea. I really want to protect my bit times so I get a quality transmission. I don’t want to get into the uh-oh situation where processing something else overruns the time slot.
Yeah, so right now the loop looks like <READ DATA aka pause><SEND BYTE><READ><SEND><READ><SEND> and so on. To get to 2 Mbps, I’d basically have to go <SEND><SEND><SEND><SEND>. Now, if I could use the “dead time” between bits to actually read the data, then I might be closer. Remember, too, that I’m bit-banging, so anything interrupt-driven is out of the question.
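To put numbers on that dead-time idea: the 1.7us FRAM read is 3.4 bit times of work, so hiding it inside the 10-bit frame means slicing it across the bit slots. A rough budget (this assumes the read really can be chopped into even pieces, which the FRAM interface may or may not allow):

```python
# Budget for hiding the FRAM read inside the transmit frame.
BIT_TIME_US = 0.5   # one bit slot at 2 Mbps
FRAME_BITS = 10     # start + 8 data + stop
READ_US = 1.7       # whole-byte FRAM read

read_in_bit_times = READ_US / BIT_TIME_US  # bit slots' worth of read work
slice_us = READ_US / FRAME_BITS            # read work per bit, if spread evenly

print(f"read = {read_in_bit_times:.1f} bit times; "
      f"{slice_us:.2f}us of read work per {BIT_TIME_US}us bit slot")
```

So about a third of every bit slot would have to go to the read; whether the SX has that many spare cycles per slot while bit-banging the pin is the open question.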
I’m not 100% sure the PC isn’t introducing some of this delay, which is why I’ve been looking at revamping the read routines. First, they’re butt-ugly, and second, they don’t handle error cases well. Actually, they don’t handle them at all: they just hang.
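For the PC-side rewrite, the hang-on-error problem mostly comes down to never reading without a deadline. A minimal sketch of the shape I have in mind, with read_byte() standing in for whatever the serial layer actually exposes (hypothetical interface, not my current code):

```python
import time

def read_exact(read_byte, n, timeout_s=1.0):
    """Read exactly n bytes via read_byte() (returns an int 0-255, or
    None when nothing is available yet), giving up instead of hanging
    when the data stream stalls.

    read_byte is a placeholder for the real serial read; the point here
    is the deadline check, not the I/O API.
    """
    deadline = time.monotonic() + timeout_s
    buf = bytearray()
    while len(buf) < n:
        if time.monotonic() > deadline:
            # Fail loudly with how far we got, instead of spinning forever.
            raise TimeoutError(f"got {len(buf)}/{n} bytes before timeout")
        b = read_byte()
        if b is not None:
            buf.append(b)
    return bytes(buf)
```

The caller then gets either the full buffer or a TimeoutError it can turn into a retry, rather than a wedged program.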
I’m still floating around the idea of error CORRECTION by taking advantage of the inherent structure of MFM. I really think that there is something here.
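The structure I mean: a legal MFM clock+data stream never has two 1s in a row and never more than three 0s in a row (it’s a (1,3) run-length-limited code), so a single flipped bit almost always produces a detectable, localizable rule violation. A sketch of spotting those violations (assuming standard MFM; note that sync marks deliberately break these rules with missing clock bits, so this would only run on the data region):

```python
def mfm_violations(bits):
    """Return positions where an MFM clock+data bit stream breaks the
    MFM run-length rules: no two adjacent 1s, and no more than three
    0s between 1s. bits is a string like "0100100010".
    """
    bad = []
    # Rule 1: flux transitions (1s) are never adjacent.
    for i in range(1, len(bits)):
        if bits[i] == "1" and bits[i - 1] == "1":
            bad.append(i)
    # Rule 2: never more than three 0s in a row.
    run = 0
    for i, b in enumerate(bits):
        if b == "0":
            run += 1
            if run > 3:
                bad.append(i)
        else:
            run = 0
    return sorted(set(bad))
```

Since a violation pins down roughly where the bad bit is, flipping candidate bits near the violation and re-checking (plus a CRC) could actually correct single-bit errors, not just detect them. That’s the “something here” I want to chase.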
Next steps are to try to work out a better read routine, and then implement a retry system for bad tracks.
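The retry system could be as simple as re-reading a track until it passes a sanity check. A sketch of that loop (read_track and is_good are placeholders for the real read routine and whatever goodness test I settle on, e.g. the MFM rules or a CRC):

```python
def read_track_with_retry(read_track, is_good, track, max_tries=5):
    """Hypothetical retry wrapper: re-read a track until its data passes
    a sanity check, or give up after max_tries attempts.

    Returns (data, attempts_used) on success; raises IOError so the
    caller can log the track as bad instead of silently keeping garbage.
    """
    for attempt in range(1, max_tries + 1):
        data = read_track(track)
        if is_good(data):
            return data, attempt
    raise IOError(f"track {track}: still bad after {max_tries} tries")
```

Keeping the retry policy outside the low-level read routine also means the bit-timing-critical code doesn’t grow any new responsibilities.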