Yeah, I’ve been slacking lately on the project. School has been keeping me busy, work too.
My current problem is that something seems off in the data checksum routine, i.e. the code on the PC side that processes the data. The SX code seems fine, and I'm getting very reliable data, both in terms of length (exactly 1088 bytes between headers, consistently) and in terms of the actual bytes.
My tests in October showed that although I was receiving the data perfectly (verified byte for byte against the real original data on the Amiga in DiskMonTools), the data checksum was still failing.
The best I can figure out so far is that one of two things is happening: either (1) the data isn't being stored properly in the array, i.e. it's shifted by a byte somehow, or (2) the checksum routine is reading the data from the array incorrectly. I know the data is RIGHT, so it's just a matter of getting the darn checksum routine to realize it.
The problem may lie in the difference in variable sizes between an old-school compiler (Borland or the like) and .NET; we've seen that crop up before. I'm weakest on understanding exactly what those differences are and what the easiest way to fix them is. It doesn't help that the checksum routines are semi-cryptic, or at least written in the "smallest and fastest possible" style of old-school C programmers. Forget about readability, make it fast and small.
See the previous post for details from the last session.