The Y2K Problem at the End of the 20th Century

While many were ready to party "like it was 1999," many others predicted catastrophe as the year ended, a catastrophe traceable to a small assumption made decades earlier, when computers were first being programmed.

The Y2K (Year 2000) problem arose from the fear that computers would fail when their clocks rolled over to January 1, 2000. Because many programs stored only the last two digits of the year and assumed the date began with "19," as in "1977" or "1988," people feared that when December 31, 1999, turned to January 1, 2000, computers would misread the new year as 1900, or shut down completely.
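The failure mode can be sketched in a few lines. This is a hypothetical illustration, not code from any specific system; it assumes a program that stores only the last two digits of the year, as many pre-Y2K programs did:

```python
# Hypothetical two-digit date arithmetic of the kind common before Y2K.
# Years are stored as their last two digits; the "19" prefix is implicit.

def years_elapsed(start_yy, end_yy):
    """Naive elapsed-time calculation on two-digit years."""
    return end_yy - start_yy

# An account opened in 1975, checked in 1999: the result is correct.
print(years_elapsed(75, 99))   # 24

# The same account checked in 2000 (stored as 00): the result goes
# negative, because "00" implicitly means 1900 rather than 2000.
print(years_elapsed(75, 0))    # -75
```

A negative elapsed time like this is exactly the kind of value that could cascade into billing errors, expired records, or outright crashes downstream.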

The Age of Technology and Fear

Considering how much of our everyday lives were run by computers by the end of 1999, the new year was expected to bring serious computer repercussions. Some doomsayers warned that the Y2K bug was going to end civilization as we know it.

Other people worried more specifically about banks, traffic lights, the power grid, and airports, all of which were run by computers by 1999.

Even microwaves and televisions were predicted to be affected by the Y2K bug. As computer programmers madly dashed to update computers with new information, many in the public prepared themselves by storing extra cash and food supplies.

Preparations for the Bug

By 1997, a few years ahead of the widespread panic over the millennium problem, computer scientists were already working toward a solution. The British Standards Institution (BSI) developed a new computer standard to define conformity requirements for the year 2000. Known as DISC PD2000-1, the standard outlined four rules:

Rule 1: No value for current date will cause any interruption in operation.
Rule 2: Date-based functionality must behave consistently for dates prior to, during and after year 2000.
Rule 3: In all interfaces and data storage, the century in any date must be specified either explicitly or by unambiguous algorithms or inferencing rules.
Rule 4: Year 2000 must be recognized as a leap year.

Essentially, the standard traced the bug to two key issues: the existing two-digit representation of years was ambiguous in date processing, and a misunderstanding of the Gregorian calendar's leap-year rules had led some programs to treat the year 2000 as a common year.

The first problem was solved by reprogramming dates so that years were stored and entered as four-digit numbers (e.g., 2000, 2001, 2002), where they had previously been represented by only two (97, 98, 99). The second was solved by correcting the leap-year algorithm: a year divisible by 100 is not a leap year, unless it is also divisible by 400, which makes the year 2000 a leap year (as it in fact was).
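The corrected Gregorian rule can be written out directly. A minimal sketch (the function name is illustrative):

```python
def is_leap_year(year):
    """Gregorian leap-year rule: every fourth year is a leap year,
    except century years, unless the century is divisible by 400."""
    if year % 400 == 0:
        return True   # e.g. 2000: century divisible by 400 -> leap year
    if year % 100 == 0:
        return False  # e.g. 1900: century not divisible by 400 -> common year
    return year % 4 == 0

print(is_leap_year(2000))  # True
print(is_leap_year(1900))  # False
print(is_leap_year(1999))  # False
```

A buggy implementation that stopped at the "divisible by 100" exception would wrongly treat 2000 as a common year, which is precisely the error Rule 4 of the standard was written to catch.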

What Happened on January 1, 2000?

When the prophesied date came and computer clocks around the world updated to January 1, 2000, very little actually happened. With so much preparation and reprogramming done before the change of date, the catastrophe was averted: only a few relatively minor millennium-bug problems occurred, and even fewer were reported.