It’s not all that hard to imagine a world without supercomputers—just look back several decades. Without high-performance computing (HPC) systems and the software, networks, and supersmart people who service them, our world would be stuck with technology resembling that of the Cold War era. Teletypes would be all the rage, cell phones would weigh 2 pounds and refrigerators would run constantly. In short, the world would be far less efficient without HPC.
But what is HPC, and how does it differ from the desktop computers we use to check e-mail and compose to-do lists? The answer is a bit fuzzy. Matt Dunbar, director of software architecture for the Simulia realistic simulation software from Dassault Systèmes, Waltham, Mass., defines HPC as the “execution of tightly coupled [lines of code] on multiple cores.”
Suppose you have a difficult math problem, like figuring out what the price of gas will be 2 years from now, or calculating the odds of the Cubs winning the World Series. You might sit down with a stack of legal pads and a box of #2 pencils and start crunching numbers. Several weeks later, you’d determine that the Cubs winning the World Series is as mathematically likely as goats grazing the foul line at Wrigley Field.
Coming to this understanding would be much faster if you called for pizza delivery and invited all your friends over to help with the statistical modeling. Bob can work on one part of the calculation and Carol on another, while Ted and Alice divvy up the rest. Collectively, you’ll determine the answer before the pizza kid knocks on the door. If computers were people, this would be the essence of HPC.
The brains of any computer are composed of one or more processors, each with multiple cores and sharing a common bank of memory, or RAM. My MacBook Air has a dual-core Intel i7 processor with 8GB of RAM. It cost a smidge over $2,000. By comparison, a business-class cluster node with 64GB of RAM and a pair of 6-core processors might cost 5 times that. Purchase three of these machines, equip them with load-balancing tools and a fast interconnect, and you have a 3-node cluster. You’re on the way to HPC.
Provided you have the right software, the Cubs calculation can now be split up into bite-size pieces, or parallelized, and distributed across the system’s various processors, cores and memory. You’d understand the fate of the Cubbies within seconds. Dassault Systèmes offers several such software programs, specializing in finite element analysis (FEA), multiphysics simulation and other complex modeling tasks that often require the computing power of cluster servers and HPC.
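To make the idea of parallelizing concrete, here is a minimal, hypothetical sketch in Python (not any Dassault Systèmes product) of the divide-and-conquer pattern described above: a statistical simulation is split into independent chunks, each chunk runs on its own processor core, and the partial results are combined at the end. The example estimates π by random sampling; the function names and worker counts are illustrative assumptions, not anything from the article.

```python
import random
from multiprocessing import Pool  # one process per core, like one friend per pizza slice


def count_hits(args):
    """Count random points in the unit square that land inside the quarter-circle.

    Each worker gets its own seed so the chunks are independent and repeatable.
    """
    seed, n = args
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits


def parallel_pi(workers=4, samples_per_worker=250_000):
    """Split the sampling across processes, then combine the partial counts."""
    tasks = [(seed, samples_per_worker) for seed in range(workers)]
    with Pool(workers) as pool:
        total_hits = sum(pool.map(count_hits, tasks))  # scatter, then gather
    # Area ratio of quarter-circle to unit square is pi/4.
    return 4.0 * total_hits / (workers * samples_per_worker)


if __name__ == "__main__":
    print(parallel_pi())
```

On a single laptop this uses the cores of one processor; on a cluster, tools such as MPI apply the same scatter-and-gather pattern across nodes connected by a fast interconnect.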