Boston: In the early 1940s, IBM’s president, Thomas J Watson, reputedly said: “I think there is a world market for about five computers.” Watson’s legendary misjudgement did not prove fatal to his company. When businesses began buying mainframes in large numbers in the early 1950s, he quickly steered IBM into the new business. The proliferation of computers has, of course, accelerated ever since. But Watson’s prediction is suddenly coming back into vogue. In fact, some leading computer scientists believe that his seemingly ludicrous forecast may yet be proven correct.
Greg Papadopoulos, the chief technology officer at Sun Microsystems, recently declared on his blog: “The world needs only five computers.” Yahoo’s head researcher, Prabhakar Raghavan, seconds Papadopoulos’s view. In an interview in Business Week last December, he said: “In a sense, there are only five computers on Earth.” Most striking of all, some researchers at IBM believe that five computers may be four too many. In a new paper, they describe how a single IBM supercomputer, codenamed Kittyhawk, may be all we need. “One global-scale shared computer,” they say, may be able to run “the entire internet”.
The idea isn’t that we’ll all end up using one big, central box to run our software and store our data. What these experts are saying is that the very nature of computing is changing. As individual computers are wired together with the fibre-optic cables of the internet, the boundaries between them blur. They start to act like a single machine, their chips and drives melding into a shared pool. Rather than writing software that runs on just one microprocessor inside one box, programmers can write code that runs simultaneously, or in parallel, on thousands of networked machines.
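As an illustration (my own, not the article's), the divide-and-combine pattern described above can be sketched with Python's standard multiprocessing pool standing in for a grid of networked machines; the `word_count` task and the sample chunks are invented for the example:

```python
from multiprocessing import Pool

def word_count(chunk):
    """Count the words in one chunk of text; each worker handles one chunk."""
    return len(chunk.split())

if __name__ == "__main__":
    # Split the job into pieces and farm them out to parallel workers --
    # the same divide-and-combine pattern a grid applies across thousands
    # of machines, here scaled down to three processes on one box.
    chunks = ["the world needs", "only five computers", "says the pundit"]
    with Pool(processes=3) as pool:
        counts = pool.map(word_count, chunks)
    print(sum(counts))  # combine the partial results into one answer
```

In a real grid, the same split/map/combine shape runs over machines connected by network links rather than processes sharing one chip, but the programmer's view is similar.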
Such giant computing grids, explains Papadopoulos, “will comprise millions of processing, storage and networking elements, globally distributed into critical-mass clusters”. His point in calling them “computers,” he says, “is that there will be some organisation, a corporation or government, that will ultimately control” their construction and operation. Their many pieces will work in harmony, like the components inside your PC.
This is not just a futuristic theory. High-tech companies such as Google, Amazon, IBM and Deutsche Telekom are already building powerful computing grids that can do the work of thousands or even millions of individual servers and PCs. The computer scientist Danny Hillis, who is one of the pioneers of the parallel-processing method that the grids use, has called Google’s global network of data centres “the biggest computer in the world”. It could be argued that the current consolidation of computing power is the fulfilment of the computer’s destiny. In 1936, the great Cambridge mathematician Alan Turing laid out a theoretical blueprint for what he called a “universal computing machine” — a blueprint that would take physical form in the electronic digital computer.
Turing showed that such a machine could be programmed to carry out any computing job. Given the right instructions and enough time, any computer would be able to replicate the functions of any other computer.
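Turing's point can be made concrete with a toy sketch (my own illustration, not from the article or Turing's paper): one simulator function that, handed a different transition table, behaves like a different machine. The `run_machine` helper and the bit-flipping program below are invented for the example:

```python
def run_machine(program, tape, state="start", head=0, max_steps=1000):
    """Simulate any machine described as a transition table mapping
    (state, symbol) -> (new_symbol, move, new_state). Stops at 'halt'."""
    cells = dict(enumerate(tape))  # sparse tape, blank cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# One program among the infinitely many this single simulator can run:
# a machine that flips every bit on the tape, then halts.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_machine(flipper, "1011"))  # prints "0100"
```

Swapping in a different table makes the same simulator act as a different computer, which is the sense in which one machine can replicate the functions of any other.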
In theory, then, it has always been possible to imagine a single giant computer taking over the work of the millions of little ones in operation today. But until recently the idea remained firmly in the realm of science fiction. There was no practical way to build a computing grid that would run fast and efficiently enough. Lots of little computers were the only way to go.
Now, thanks to the explosion in computing power and network bandwidth, the barriers to building a universal computer are falling. Very bright people can talk seriously about a world where there are only five computers — or even just a single one — that all of us share. It’s not a world that Thomas J Watson would recognise, even if it represents the future he accidentally foretold.
—Dawn/Guardian News Service