*New Cluster Compute Instances provide scalable, elastic, cost-efficient AWS cloud resources for advanced HPC workloads*
SEATTLE, July 13, 2010 – Amazon Web Services LLC, an Amazon.com company, today announced Cluster Compute Instances for Amazon EC2, a new instance type specifically designed for high-performance computing (HPC) applications and other demanding network-bound applications.
Customers with complex computational workloads such as tightly coupled parallel processes, or with applications sensitive to network performance, can now achieve the same high compute and networking performance provided by custom-built infrastructure while benefiting from the elasticity, flexibility, and cost advantages of Amazon EC2. To get started using Cluster Compute Instances for Amazon EC2, visit https://aws.amazon.com.
Prior to Cluster Compute Instances for Amazon EC2, organizations with advanced HPC needs had to fund expensive, in-house compute clusters by purchasing dedicated, purpose-built hardware. As a result, demand for high-performance cluster computing often exceeds what many organizations can provision, and many projects are cut altogether or wait in long queues to access shared resources. With Cluster Compute Instances, businesses and researchers now have access to the high-performance computing capabilities they need – with pay-as-you-go pricing, the ability to scale on-demand, and no upfront investments.
Cluster Compute Instances provide similar functionality to other Amazon EC2 instances but are specifically engineered for high-performance compute and networking. Cluster Compute Instances provide more CPU than any other Amazon EC2 instance. Customers can also group Cluster Compute Instances into clusters – allowing applications to get the low-latency network performance required for tightly coupled, node-to-node communication (typical of many HPC applications). Cluster Compute Instances also provide significantly increased network throughput, making them well suited for customer applications that need to perform network-intensive operations. Depending on usage patterns, applications can see up to 10 times the network throughput of the largest current Amazon EC2 instance types.
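The cluster grouping described above is exposed in EC2 as a placement group. As a rough illustration (not part of the original announcement), the sketch below assembles RunInstances-style parameters for launching instances into a cluster placement group; the AMI ID and group name are hypothetical, and in practice the resulting dict would be passed to an EC2 client such as boto3's `run_instances`.

```python
def build_cluster_launch_request(ami_id, count, group_name,
                                 instance_type="cc1.4xlarge"):
    """Assemble RunInstances-style parameters for a cluster launch.

    The keys mirror the EC2 RunInstances API; this sketch only builds
    the request, it does not call AWS.
    """
    return {
        "ImageId": ami_id,              # hypothetical AMI ID
        "InstanceType": instance_type,  # cc1.4xlarge was the launch type
        "MinCount": count,
        "MaxCount": count,
        # Instances in the same "cluster" placement group share the
        # low-latency, high-throughput network fabric described above.
        "Placement": {"GroupName": group_name},
    }

request = build_cluster_launch_request("ami-12345678", 8, "hpc-cluster")
print(request["Placement"]["GroupName"])
```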
“Businesses and researchers have long been utilizing Amazon EC2 to run highly parallel workloads ranging from genomics sequence analysis and automotive design to financial modeling. At the same time, these customers have told us that many of their largest, most complex workloads required additional network performance,” said Peter De Santis, General Manager of Amazon EC2.
“Cluster Compute Instances provide network latency and bandwidth that previously could only be obtained with expensive, capital-intensive, custom-built compute clusters. For perspective, in one of our pre-production tests, an 880-server sub-cluster achieved 41.82 TFlops on a LINPACK test run – we’re very excited that Amazon EC2 customers now have access to this type of HPC performance with the low per-hour pricing, elasticity, and functionality they have come to expect from Amazon EC2.”
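For scale, the quoted LINPACK result works out to roughly 47.5 GFLOPS per server. A quick back-of-the-envelope check (assuming 1 TFLOPS = 1,000 GFLOPS):

```python
total_tflops = 41.82   # LINPACK result quoted above
servers = 880          # size of the pre-production sub-cluster

gflops_per_server = total_tflops * 1_000 / servers
print(f"{gflops_per_server:.1f} GFLOPS per server")  # → 47.5 GFLOPS per server
```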
The National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory is the primary high-performance computing facility supporting scientific research sponsored by the U.S. Department of Energy. “Many of our scientific research areas require high-throughput, low-latency, interconnected systems where applications can quickly communicate with each other, so we were happy to collaborate with Amazon Web Services to test drive our HPC applications on Cluster Compute Instances for Amazon EC2,” said Keith Jackson, a computer scientist at the Lawrence Berkeley National Lab. “In our series of comprehensive benchmark tests, we found our HPC applications ran 8.5 times faster on Cluster Compute Instances for Amazon EC2 than the previous EC2 instance types.”
MathWorks is a leading developer and supplier of software for technical computing and model-based design. The company now enables its customers, using MATLAB and Parallel Computing Toolbox on their desktops, to scale data-intensive computations up to the greater compute power of Cluster Compute Instances for Amazon EC2 running MATLAB Distributed Computing Server. “Cluster Compute Instances give MATLAB users the opportunity to test and run their high performance computing problems for data-intensive applications in the cloud at a price and performance level that allows us to continually innovate and meet customer needs,” said Silvina Grad-Freilich, Senior Manager of Parallel Computing at MathWorks. “We’re thrilled to allow our customers to leverage Amazon Web Services as an easily accessible way to meet their needs for increased compute power.”
Adaptive Computing provides automation intelligence software, powered by its Moab technology, for HPC, data center and cloud environments. Moab is the management layer for more than 50 percent of the resources at the top computing systems in the world. “The availability of Cluster Compute Instances on Amazon EC2 gives organizations access to on-demand and highly available HPC resources,” said Michael Jackson, COO and President of Adaptive Computing. “For years we’ve helped customers build and manage the world’s most complex large-scale computing clusters, and now with Cluster Compute Instances, customers can leverage Adaptive Computing’s familiar automation software tools to manage HPC resources on Amazon’s leading cloud infrastructure.”
David Patterson is a world-renowned expert, author and academic who has been recognized with more than 30 awards for research, teaching and service. He is the co-inventor of RAID, RISC and several other computer innovations and has taught computer architecture at the University of California, Berkeley, since joining the faculty in 1977. “The high-performance networking of Cluster Compute Instances for Amazon EC2 fills an important need among scientific computing professionals, making the on-demand and scalable cloud environment more viable for technical computing,” said Patterson.
Cluster Compute Instances complement other AWS offerings designed to make large-scale computing easier and more cost effective. For example, Public Data Sets on AWS provide a repository of useful public data sets that can be easily accessed from Amazon EC2, allowing fast, cost-effective data analysis by researchers and businesses. These large data sets are hosted on AWS at no charge to the community. Additionally, the Amazon Elastic MapReduce service enables low-friction, cost effective implementation of the Hadoop framework on Amazon EC2. Hadoop is a popular tool for analyzing very large data sets in a highly parallel environment, and Amazon EC2 provides the scale-out environment to run Hadoop clusters of all sizes.
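The MapReduce model that Hadoop (and Amazon Elastic MapReduce) implements can be illustrated with a tiny word-count sketch. This is a pure-Python illustration of the map and reduce phases, not the EMR or Hadoop API:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

corpus = ["high performance computing", "performance matters"]
print(reduce_phase(map_phase(corpus)))
# → {'high': 1, 'performance': 2, 'computing': 1, 'matters': 1}
```

In a real Hadoop cluster the map and reduce phases run in parallel across many nodes, which is what makes the scale-out environment described above effective for very large data sets.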
For more information on Amazon EC2 and Cluster Compute Instances, visit https://aws.amazon.com/hpc-applications.