Science On the Outskirts of In-Memory

In-memory computing has been an active topic of discussion in commercial “big data” circles, but a recent use case, in which it was applied to demanding computational chemistry problems, highlights its potential in scientific computing.

Today, GridGain Systems detailed how Portland State University used its In-Memory HPC offering in an effort to build an adaptive learning system able to detect diseases, deliver medical diagnoses, and make therapeutic recommendations via biocompatible computers embedded in living cells.
To highlight how the technology fits into the broader spectrum of high performance computing applications, we talked with the company’s VP of Product Management, Jon Webster. Beyond how this approach to in-memory computing differs from what is being touted in the commercial sector, we discussed how it compares with other modes of handling massive datasets for scientific simulations, and where the ROI lies for users who could otherwise simply throw more hardware at their problem.

GridGain’s approach to in-memory for HPC applications is somewhat different from what some of the other “in-memory” companies are producing. In fact, the company describes the offering as a “real-time, high performance distributed computation framework,” which, for those who follow another important movement on the more commercial side of the performance house, sounds a lot like Hadoop/MapReduce.
The two are different, says GridGain, noting that while “Hadoop MapReduce tasks take input from disk, produce intermediate results on disk and then output that result onto disk, GridGain does everything Hadoop does in memory”: it takes input from memory via direct API calls, produces intermediate results in memory, and creates final results in memory as well.
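To make that contrast concrete, here is a minimal, self-contained sketch of the in-memory pattern in Java. It is purely illustrative and uses none of GridGain’s actual API; the point is simply that input arrives in memory, intermediate results accumulate in an in-memory structure, and the output stays in memory for the next step, with no disk round trips anywhere in the pipeline.

```java
import java.util.HashMap;
import java.util.Map;

public class InMemoryMapReduce {
    public static void main(String[] args) {
        // Input arrives in memory (e.g., via a direct API call) rather than
        // being read from disk, as a Hadoop MapReduce job would be.
        String[] input = { "fast data", "fast compute", "in memory compute" };

        // Map + reduce: count words. Intermediate results live in this
        // in-memory map and are never spilled to disk.
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String line : input) {
            for (String word : line.split("\\s+")) {
                Integer n = counts.get(word);
                counts.put(word, n == null ? 1 : n + 1);
            }
        }

        // The output also stays in memory, ready for the next computation,
        // instead of taking a round trip through the filesystem.
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```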
In other words, it’s not just about storage, which is what so many mainstream vendors mean when they use the term in-memory, says Webster. Current approaches aren’t enough to handle complex workloads, he argues; HPC workloads require what GridGain offers, which is a purpose-built compute engine embedded across a large, partitioned in-memory database (another component of the offering), so that all the data resides in memory. In essence, instead of grabbing data, processing it, and putting it back, users can process it with locality in mind so that data movement is minimized.
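The locality principle Webster describes can likewise be sketched in a few lines. In the illustrative Java snippet below, a partitioned in-memory store is simulated as one list per “node” (a hypothetical structure, not GridGain’s actual API): the reduction runs against each partition where it lives, and only the small partial results are combined, which is exactly the data movement being minimized.

```java
import java.util.ArrayList;
import java.util.List;

public class CollocatedCompute {
    public static void main(String[] args) {
        final int PARTITIONS = 4;

        // A partitioned in-memory store, simulated as one list per "node."
        List<List<Double>> partitions = new ArrayList<List<Double>>();
        for (int p = 0; p < PARTITIONS; p++) {
            partitions.add(new ArrayList<Double>());
        }

        // Hash-partition the records across the nodes as they are loaded.
        for (int key = 0; key < 1000; key++) {
            partitions.get(key % PARTITIONS).add(Math.sin(key));
        }

        // Ship the computation to the data instead of pulling the data out:
        // each "node" reduces its own local slice, and only the small
        // per-partition results cross the (simulated) network.
        double total = 0.0;
        for (List<Double> localPartition : partitions) {
            double partial = 0.0; // computed where the data lives
            for (double v : localPartition) {
                partial += v;
            }
            total += partial; // the tiny partial result moves, not the raw data
        }
        System.out.println("sum = " + total);
    }
}
```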

GridGain’s framework supports multiple execution modes, including MapReduce-style processing, along with other models common in HPC (MPP, MPI, and RPC), to help broaden its base of HPC customers.
The company has served a number of other HPC-oriented users with this approach, including financial-markets customers using it for fast trade matching and risk management. Industries that fall outside what has traditionally been considered HPC, such as online gaming and real-time ad targeting, represent a “filtering down” of technologies that have been proven at massive scale.
As you’ll hear in the interview above, this approach allowed Portland State’s researchers to do something that would not have been possible with their existing infrastructure. Webster also details what in-memory means for other HPC applications and how the company’s approach to data-intensive workloads might evolve to meet emerging hardware and software architectures and frameworks.
