Another Y2K Bug?

2 minute read

There is a bug that is starting to rear its ugly head across the software industry. The new bug will target older software that deals with calendar dates, just like the Y2K bug. Everybody has heard of the Y2K bug... but does everybody know why it was a big deal?
Way back when even the best computers had a whopping 4MB of memory (some 40 years ago), software engineers needed to cut corners in every aspect of their code, streamlining it to make it more efficient. Dates were stored as text back then, so every bit counted. When storing text, each letter or digit typically takes 8 bits. The engineers figured that instead of representing the year with 4 digits (32 bits) they could use only the last 2 digits (16 bits), cutting the bits used in half and freeing the rest for other things.
1960 = 00110001 00111001 00110110 00110000
  60 =                   00110110 00110000
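The savings are easy to reproduce. Here is a short sketch in Python (purely illustrative; the original systems would have done this in whatever language they were written in) that prints the ASCII bit patterns shown above:

```python
# Each ASCII character takes 8 bits, so a 4-digit year costs 32 bits
# and a 2-digit year costs only 16.
full_year = "1960"
short_year = full_year[-2:]  # "60"

full_bits = " ".join(f"{ord(c):08b}" for c in full_year)
short_bits = " ".join(f"{ord(c):08b}" for c in short_year)

print(full_bits)   # 00110001 00111001 00110110 00110000
print(short_bits)  # 00110110 00110000
```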
The problem arose when we hit the year 2000: software using this scheme would think the current year was 1900, not 2000. This could have created problems across interest-calculating programs, scheduling programs, satellites and who knows what else... it needed to be fixed everywhere, and was.
Now onto the new bug. Most software written since 1970 uses the Unix standard for representing time: the number of seconds since January 1st, 1970. The current value (as of the time this post was written) is 1201761450. From the seconds it is easy to do time comparisons and conversions in software. Although it is now standard to use a 64-bit value to hold this number, older software (and poorly written software too) uses a 32-bit signed integer to hold the value. Right now, the current time (in seconds) can be represented using 32 bits:

1201761450 = 01000111 10100001 01101100 10101010
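As a sanity check, here is a quick sketch in Python (illustrative only) decoding that timestamp and confirming it still fits in 32 bits:

```python
import datetime

ts = 1201761450  # the value quoted above
dt = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)

print(dt.date())        # 2008-01-31 -- when this post was written
print(f"{ts:032b}")     # 01000111101000010110110010101010
print(ts.bit_length())  # 31 -- fits in a 32-bit signed integer, for now
```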
So the current time fits fine, but the largest value we can hold in a 32-bit signed integer is 2^31-1 or 2147483647, and that many seconds past January 1st, 1970 lands on January 19th, 2038.

2147483647 = 01111111 11111111 11111111 11111111
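The exact rollover moment is easy to compute. A short Python check (illustrative only):

```python
import datetime

limit = 2**31 - 1  # 2147483647, the largest 32-bit signed value
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
rollover = epoch + datetime.timedelta(seconds=limit)

print(rollover)  # 2038-01-19 03:14:07+00:00
```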
So in that magic first second AFTER 2147483647, machines that use a 32-bit integer to hold time values will wrap around to -2147483648, which gets interpreted as December 13th, 1901, whereas machines that use 64-bit values will correctly show 2147483648 seconds:
long: 00000000 00000000 00000000 00000000 10000000 00000000 00000000 00000000 *GOOD
int:                                      10000000 00000000 00000000 00000000 *BAD
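The reinterpretation can be sketched in Python using the `struct` module to mimic a 32-bit signed integer (an illustration of two's-complement wraparound, not production code):

```python
import datetime
import struct

t = 2**31  # one second past the 32-bit signed maximum
# Pack as unsigned 32-bit, unpack as signed: same bits, reinterpreted.
wrapped, = struct.unpack("<i", struct.pack("<I", t))
print(wrapped)  # -2147483648

epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
print(epoch + datetime.timedelta(seconds=wrapped))  # 1901-12-13 20:45:52+00:00
```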
Of course we expect the problem to be fixed by 2038, but what about software in use NOW? For example, it is fairly common to take out a 30-year mortgage. The dates go into a computer that calculates interest and manages the details, and if that software uses 32-bit integers to hold time, it will fail. What about satellites designed to orbit the earth for 100 years, which need the time to calculate which way to point? While the extent of this problem is not yet clearly visible, it is easy to imagine scenarios in which something bad could occur.
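The mortgage arithmetic checks out. A hypothetical example in Python (the dates are illustrative, not from any real loan):

```python
import datetime

epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
start = datetime.datetime(2008, 2, 1, tzinfo=datetime.timezone.utc)
maturity = start.replace(year=start.year + 30)  # 2038-02-01

seconds = int((maturity - epoch).total_seconds())
print(seconds > 2**31 - 1)  # True -- the payoff date overflows a 32-bit time value
```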
The moral of the story: store your time values in 64-bit integers!
