What's the final word about Y2K? We were told this was a serious problem, and that huge dollars and man-hours were needed to head off trouble. Why didn't the sky fall, as predicted? Were the dollars spent before January 1, 2000, well spent or not? The date change seemed seamless to a layman. Was this because we headed off most trouble before it happened, or because it wasn't as serious as predicted? -Paul Wheeler
One may inquire: Why am I answering this now? Because the question keeps coming in, and at some point you have to ask, if I don't take it on, who will? So here's the best answer you're likely to get: (1) While the true extent of Y2K issues will never be known, what we do know suggests the problem was wildly exaggerated. In retrospect, it would have been smarter to focus resources on a few truly high-risk areas, wait till 1/1/2000 for everything else, and fix what broke. Looked at in that light, the money spent on remediation, estimated at between $100 billion and $600 billion, was mostly wasted. (2) That's hindsight talking. Many now say the world as we know it is going to end due to global warming. You think the smart choice is to say: relax?
Y2K fears arose because of the old programming practice of truncating dates to save memory: 1964 = 64. As century's end approached, people realized computers wouldn't be able to distinguish 2000 from 1900. Nightmare scenarios abounded: aircraft falling out of the sky, nuclear reactors melting down, bank accounts wiped out.
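The truncation problem can be shown with a toy calculation. This sketch is purely illustrative (the function names are invented for this example, not drawn from any real system), but it captures why a two-digit year makes 2000 indistinguishable from 1900:

```python
# Toy illustration of two-digit date truncation: storing 1964 as 64
# saves memory but makes arithmetic wrap at the century boundary.

def age_two_digit(birth_yy: int, current_yy: int) -> int:
    """Age computed the pre-Y2K way, with two-digit years."""
    return current_yy - birth_yy

def age_four_digit(birth_year: int, current_year: int) -> int:
    """Age computed with full four-digit years."""
    return current_year - birth_year

# Someone born in 1964, checked in 1999: both methods agree.
print(age_two_digit(64, 99))        # 35
print(age_four_digit(1964, 1999))   # 35

# The same person on January 1, 2000: "00" is read as 1900.
print(age_two_digit(64, 0))         # -64, nonsense
print(age_four_digit(1964, 2000))   # 36
```

Any computation built on the truncated form (interest accrual, expiration checks, scheduling) inherits the same wraparound, which is why the fears weren't confined to one kind of software.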
On New Year's Eve 1999, millions stared at their TVs as midnight approached in the easternmost time zones, waiting for the worst. Nothing much happened. Y2K postmortems fell into two categories. Early ones often took a self-congratulatory tone: Due to our heroic efforts, civilization was saved! Later analyses tended to the opposite view: Y2K panic was a gross overreaction to a minor problem.
A few observations:
• Some problems did surface. In February 2000 the Senate Special Committee on Y2K listed more than 50 incidents in the U.S. and more than 100 elsewhere, all minor. Probably the scariest news was an alert that three Russian Scud missiles had been launched. Turns out this wasn't a Y2K bug, just another day in the Chechnya conflict. I don't claim the things that went wrong were inconsequential: in the UK, for example, a medical software application gave incorrect Down syndrome test results. But software bugs show up all the time, and none has yet brought civilization to its knees.
• The Y2K-was-real crowd explained the quiet millennial dawn by saying developed countries that depended most on computers marshaled the most resources and fixed the problems. Less developed countries didn't do as much but used fewer computers, so less could go wrong. That's not credible. Italy had plenty of computers but its Y2K effort lagged; despite this, its problems were no worse than elsewhere.
• Great anxiety was expressed about the millions of individuals and enterprises relying on personal computers, but few problems turned up. Two things may account for this. First, PCs are replaced frequently, and Microsoft software was largely Y2K-compliant by 1997. Conceivably those few problems might have included some critical applications, except for the second factor, which I offer in all seriousness: Windows is so notoriously unreliable that no one would ever build a life-or-death system around it.
• Another concern was the embedded microchips built into cars, medical devices, etc. Of the 7 to 25 billion such chips worldwide, initial estimates suggested 2 to 3 percent might fail. By late 1999 the risk had been downgraded to 0.001 percent, and even that estimate was probably high.
Some contend much Y2K expenditure was an effort to fend off litigation. But so what? Would you want to be the bean counter whose attempts to economize let the nuclear missiles accidentally launch?
You can make the argument that after a year or so of intensive work it should have been clear that the worst fears were unjustified. But really, who knew? Are your insurance premiums wasted if your house doesn't burn down?