Back in the 1940s, when scientists like Richard Feynman were developing nuclear weapons, they employed teams of young mathematics whiz kids to perform the complex mathematical calculations -- early models, in effect -- for the Manhattan Project.
But as government requirements for more complex calculations have grown, so has the need for specially designed supercomputers that can handle the massive operations involved in projects like the nuclear weapons Stockpile Stewardship Program and climate modeling.
Since signing the international ban on nuclear tests last year, the US government has been investing more resources in creating virtual nuclear weapons tests on supercomputers. Last February, the US Department of Energy purchased a US$85 million supercomputer from International Business Machines that runs at 10 teraops (10 trillion operations per second). This summer, the department is making even bigger technological strides with the multimillion-dollar purchase of two SRC-6 supercomputers, which possess four times the processing power of that IBM machine.
The new supercomputers will primarily be used for nuclear weapons test modeling under the government's Accelerated Strategic Computing Initiative (ASCI), a project charged with designing simulations that can test the stability of the nuclear arsenal without requiring underground nuclear tests.
"Nuclear weapons modeling and climate modeling take up a lot of computing power," said Darrol Hammer, group leader of high performance computing procurement at the Lawrence Livermore National Laboratory in California. "The supercomputer systems lead to better models of events, and better predictions about outcomes."
"Dramatic advances in computer technology have made virtual testing and prototyping viable alternatives to traditional nuclear and non-nuclear test-based methods," a Lawrence Livermore fact sheet states. Primarily, the simulations help scientists evaluate the security and reliability of weapons as they age, and verify that repaired or replaced components are functioning correctly.
But while the government is ramping up its virtual nuclear weapons tests, researchers at universities across the country are seeing a decline in the time available for other supercomputer research projects that the government previously funded.
In the recent past, the National Science Foundation funded four supercomputer centers: at Cornell University, the University of Pittsburgh, the University of Illinois, and the University of California at San Diego. This year, however, it stopped funding Cornell and Pittsburgh, roughly halving the number of machines available to US scientists for general research, said Peter Pacheco, chairman of the department of computer science at the University of San Francisco. Pacheco confirmed that the government is shifting funding from civilian computing toward more military-minded applications.
"At the same time that all this has been going on [general research funding cuts], the Department of Energy has been spending millions on supercomputers. These machines are being principally used for the study of the nation's nuclear stockpile," Pacheco said.
But this may not necessarily be bad, he said, because civilian scientists can still get time on the supercomputers if their projects are relevant to the goals of the Department of Energy laboratories.
Hammer said his lab recently sent out a notice of procurement, seeking scientists to propose projects for the supercomputers at the lab. The lab received 189 proposals from more than 100 universities across the United States. "All of them had relevance to the problems we are working on here," Hammer said, noting that scientists have historically tailored much of their research to government priorities.
Climate modeling and nuclear test simulation are done on these supercomputers because only they can handle the mathematical algorithms and formulas that go into developing such models. For example, in a climate modeling project, satellite images of the ozone level over San Francisco can be compared with images of the city taken a decade ago. Has anything changed? If so, the change can be expressed mathematically, and extrapolations can be built from it. Those models can estimate what the environment may look like in 20 years and, if need be, point toward intervention strategies.
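As a rough illustration of the kind of trend extrapolation described above -- not the laboratories' actual climate codes, which run across thousands of processors -- the short Python sketch below fits a straight-line trend to two hypothetical ozone readings taken a decade apart and projects it 20 years forward. The readings, dates, and function names are invented for illustration.

    # Minimal sketch of trend extrapolation; not a real climate model.
    # The ozone values below (Dobson units) are invented for illustration.

    def linear_trend(value_then: float, value_now: float, years_apart: float) -> float:
        """Average change per year between two measurements."""
        return (value_now - value_then) / years_apart

    def extrapolate(value_now: float, rate_per_year: float, years_ahead: float) -> float:
        """Project a measurement forward, assuming the trend holds."""
        return value_now + rate_per_year * years_ahead

    ozone_1988 = 320.0   # hypothetical reading a decade ago
    ozone_1998 = 305.0   # hypothetical reading today

    rate = linear_trend(ozone_1988, ozone_1998, years_apart=10)
    projection = extrapolate(ozone_1998, rate, years_ahead=20)

    print(f"Observed change: {rate:+.2f} DU per year")
    print(f"Projected level in 20 years: {projection:.1f} DU")

Real climate models track far more than a single quantity over a single city, of course; the point is only that observed change is turned into a mathematical rate and then projected forward.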
To keep the costs of these projects low, the computers at the Lawrence Livermore lab and the Oak Ridge National Laboratory in Tennessee have been designed with off-the-shelf components. The SRC-6 supercomputers use Intel's 400-MHz Pentium II, while Intel's 64-bit chip, the Merced, is under serious consideration for future use. The system uses a parallel architecture -- all the Pentium chips simultaneously perform the same operation on different pieces of the problem -- and is three to four times faster than previous supercomputers.
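A minimal sketch of that data-parallel style, assuming nothing about the ASCI machines' actual software, looks like this: many workers apply the same operation to different slices of one large problem. The sketch uses a handful of ordinary CPU cores through Python's standard multiprocessing module rather than thousands of Pentium II nodes, and the workload is invented for illustration.

    # Data-parallel sketch: every worker performs the same operation on a
    # different slice of one large array. Illustrative only; the update
    # rule is made up and this is not the laboratories' code.
    from multiprocessing import Pool

    def simulate_cell(temperature: float) -> float:
        """Stand-in for the identical operation every processor performs."""
        return temperature * 1.01 + 0.5   # toy update rule

    if __name__ == "__main__":
        # One big array of cells, divided among the worker processes.
        cells = [15.0 + 0.01 * i for i in range(1_000_000)]

        with Pool(processes=8) as pool:   # 8 workers stand in for thousands of nodes
            updated = pool.map(simulate_cell, cells, chunksize=50_000)

        print(f"Updated {len(updated)} cells; first cell is now {updated[0]:.3f}")

On a real parallel supercomputer the same division of labor happens across physically separate processors linked by a fast network, rather than processes sharing one machine.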
The 10 teraops supercomputer at Livermore will consist of more than 8,000 of IBM's newest and fastest RS/6000 processors, and future plans call for acquisition of a 30 teraops and a 100 teraops system.
"The government is moving away from big, fast systems to parallel systems, which use lots of parallel memory processors," said Hammer. The move toward parallel processing is being carried out because it is less expensive to develop a computer based on commodity processors -- like the Pentium -- than it is to build a completely new processor for each supercomputer.
Parallel processing is also faster than the older approach, in which a single specialized processor performed one operation across long arrays of numbers, called vectors. As a result, virtually all US supercomputers are being built with commodity processors: The IBM SP, Cray T3E, HP Exemplar, SGI Origin, and the Intel ASCI machine at Sandia National Labs are all built with processors that are used in ordinary desktop workstations.
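The contrast can be sketched in a few lines, purely for illustration: the vector style sweeps a single operation across an entire array on one specialized processor, while the commodity-parallel style splits the array into chunks that separate, ordinary processors handle independently. The data and operation below are invented, and the chunks are processed one after another here only to keep the example simple.

    # Illustrative contrast between vector-style and commodity-parallel
    # decomposition. Both produce the same result.
    data = [float(i) for i in range(12)]

    # Vector style: one processor applies one operation across the whole array.
    vector_result = [x * 2.0 for x in data]

    # Commodity-parallel style: split the array into chunks, each of which
    # could be handed to a different off-the-shelf processor.
    def split(seq, n_chunks):
        """Divide a list into roughly equal pieces."""
        size = (len(seq) + n_chunks - 1) // n_chunks
        return [seq[i:i + size] for i in range(0, len(seq), size)]

    parallel_result = []
    for chunk in split(data, n_chunks=4):   # each chunk stands in for one processor's share
        parallel_result.extend(x * 2.0 for x in chunk)

    assert vector_result == parallel_result
    print("Both approaches agree on", len(parallel_result), "values")

All of the machines named above divide their problems in roughly this chunked fashion, each using its own brand of workstation processor.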
But none, until now, have used the Pentium II.
The shift to Pentiums is being made for the same reason that PC users upgrade their computers: They want improved processing power. And the Pentium II chips are the best available for the task of modeling, said Betsy Riley, manager of user services at the Oak Ridge facility.
At Oak Ridge, scientists plan to evaluate the SRC-6, made by Supercomputer Research Corp., the company founded by the late Seymour Cray. They will examine computer systems and components, as well as methods of connecting multiprocessor units.
Speed is what makes the new machine attractive: its performance can exceed 40 teraops, compared with the 10 teraops of the IBM machine purchased earlier this year. Overall system performance is also expected to improve dramatically through the machine's memory banks and individual memory ports for each processor.
"We expect the SRC-6 to out-perform other designs of comparable CPU power and to be far easier to program," said Ken Kliewer, director of the Oak Ridge lab's Center for Computational Sciences. "We are anticipating the participation of many users with a large variety of applications codes in our SRC-6 evaluation. And we expect them to be impressed with the machine, both in performance and ease of use."
In addition to the increased funding for supercomputer purchases, the government is working with universities to develop software for the supercomputers at Oak Ridge and Lawrence Livermore, which will have commercial applications as well, said a government official involved in supercomputing.
"We think the rising tide of funding for [Department of Energy] projects can lift all boats if industry and universities pick projects that have dual applications," he said.
According to Pacheco, outside of the department acquisitions, the supercomputing industry is not experiencing much growth, and these projects may provide the only chance that some scientists have to work with supercomputers at all.
"In the US, SGI/Cray, IBM, and H-P are really the only super manufacturers," he said. "At this point, SRC and Tera are making experimental and largely unproven machines.
"Further, the government still provides much of the support for supercomputing, and, in spite of increases at [the Department of Energy], I think most other agencies have cut back on spending on supers," said Pacheco. "In the early '90s it seemed that there was almost no limit on the amount of support available for supercomputing. I definitely don't have that feeling now."