As a charitable service-based nonprofit organization (NPO) coordinating individuals, businesses, academia and governments with interests in High Technology, Big Data and Cybersecurity, we bridge the global digital divide by providing supercomputing access, applied research, training, tools and other digital incentives “to empower the underserved and disadvantaged.”

SHPCP @ SEG 2018

Schedule | Abstracts  | SEG Floor Layout (download) | Presentations

Society of High Performance Computing Professionals Focus Group and Presentation Theater at Digital Arena # 341

Geophysics has been associated with computing technology since its inception. High Performance Computing (HPC) consistently provides challenges and opportunities to enhance hardware, software and infrastructure solutions to match the performance and quality of the industry’s algorithms, data consumption and exploration advances.


The geophysical industry constantly increases the demands placed on HPC suppliers to process more data faster than ever before.

Considering the SEG’s interest in and need for HPC, the SEG and The Society of HPC Professionals (SHPCP) have brought together a group of HPC organizations to exhibit in a common area of the Exhibit Hall and make technical presentations. This theater will showcase two state-of-the-art 84” LCD panel systems provided by Prysm. 


Several SHPCP (www.hpcsociety.org) member organizations are regular exhibitors at the SEG Annual Meetings; however, the theater will also include participants that are first-time SEG exhibitors and will feature new industry technology. Sponsors of this area will deliver presentations in the theater according to daily programs. 


The following organizations will be participating in the HPC Focus Group: Altair, Bluware, Energistics, KOVE, Nvidia, Prairie View A&M (PVAMU), Red Hat, Rescale and Western Digital.


PRESENTATIONS

 


 

SCHEDULE

Monday, OCTOBER 15th  

8:30 AM

SEG

Opening Session and Presidential Address
Level 3, Ballroom AB


10:00 AM

Bluware

An Elastic, Distributed Dataflow Architecture for Seismic Computations in the Cloud  
Alberto Melacini - Senior Cloud Architect


11:00 AM

KOVE

Software Defined Memory  
John Overton - CEO


1:00 PM

Altair

Altair PBS Works Software Suite  
Victor Wright - Western Region Enterprise Computing Account Manager


2:00 PM

Energistics

Reservoir Model Transfer with Enrichment Using The RESQML Standard   
Jay Hollingsworth - Chief Technology Officer


3:00 PM

Red Hat

Open Agnostic Cloud  
E.G. Nadhan - Chief Strategist


4:00 PM

Western Digital

The Premise of On-Premises Storage  
Brian Bashaw - System Engineer, Director


5:00 PM

Rescale

How to Incorporate Cloud Computing into Simulation Method and IT Environment  
Fanny Treheux - Director of Solution

Tuesday, OCTOBER 16th 

9:00 AM

Altair

Altair PBS Works Software Suite   
Victor Wright - Western Region Enterprise Computing Account Manager


10:00 AM

Nvidia

Nvidia DGX for Deep Learning 
Osama Qazi


11:00 AM

Energistics

Reservoir Model Transfer with Enrichment Using The RESQML Standard  
Jay Hollingsworth - Chief Technology Officer


1:00 PM

Red Hat

Open Agnostic Cloud  
E.G. Nadhan - Chief Strategist


2:00 PM

Nvidia

Accelerated Analytics for the right insights at the right time
Ken Hester


3:00 PM

KOVE

Software Defined Memory  
John Overton - CEO


4:00 PM

Western Digital

The Premise of On-Premises Storage  
Brian Bashaw - System Engineer, Director


5:00 PM

Rescale

How to Incorporate Cloud Computing into Simulation Method and IT Environment  
Fanny Treheux - Director of Solution

Wednesday, OCTOBER 17th 

9:00 AM

Prairie View A&M

When Data Science Meets Geophysics: Apply Deep Learning to Seismic Inversion   
Lei Huang, Ph.D. - Associate Professor


10:00 AM

KOVE

Software Defined Memory  
John Overton - CEO


11:00 AM

Nvidia

Accelerated Analytics for the right insights at the right time
Ken Hester



1:00 PM

Rescale

How to Incorporate Cloud Computing into Simulation Method and IT Environment  
Fanny Treheux - Director of Solution


2:00 PM

Red Hat

Open Agnostic Cloud  
E.G. Nadhan - Chief Strategist


3:00 PM

ADJOURN





ABSTRACTS


10:00 AM - Monday, OCTOBER 15th

Bluware

An Elastic, Distributed Dataflow Architecture for Seismic Computations in the Cloud  
Alberto Melacini - Senior Cloud Architect


Abstract: The Headwave platform dataflow architecture is modeled as a Directed Acyclic Graph (DAG), where each node describes its input/output (e.g. datatype, dimensionality) as well as the spatial extent of input data (the "halo") required to produce a subset of the output. Computation is triggered on demand: when output data is requested (either for visualization or to perform complex calculations), the architecture ensures that all dependencies are resolved while caching intermediate results. Such a design is well suited for deployment as a distributed system in which multiple instances of the graph (not copies of the data!) are deployed and each instance can produce a subset of the data. Using wavelet-based compression, the client receives and assembles the results from the participating graph instances. Although the overall cloud architecture is conceptually platform agnostic, the processing elements are instances of Docker containers orchestrated by Kubernetes, and the cloud technology is provided by Amazon Web Services (AWS). Here we present a case study of stacking massive seismic datasets, with considerations regarding scalability as the number of processing-element instances and compute nodes gradually increases.
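The core mechanism the abstract describes — a DAG whose nodes declare their inputs and are evaluated on demand with cached intermediates — can be sketched in a few lines. This is an illustrative toy, not Bluware's code; the node names and halo field are assumptions for the example.

```python
# Toy sketch (not the Headwave implementation): on-demand evaluation of a
# dataflow DAG with memoized (cached) intermediate results.

class Node:
    def __init__(self, name, func, inputs=(), halo=0):
        self.name = name        # identifier for this processing step
        self.func = func        # computation applied to resolved inputs
        self.inputs = inputs    # upstream Node dependencies
        self.halo = halo        # extra input samples needed per output sample

    def request(self, cache):
        """Resolve dependencies recursively, caching intermediate results."""
        if self.name in cache:
            return cache[self.name]
        args = [dep.request(cache) for dep in self.inputs]
        result = self.func(*args)
        cache[self.name] = result
        return result

# A tiny graph: read traces -> filter (declares a 1-sample halo) -> stack
read = Node("read", lambda: [[1, 2, 3], [3, 2, 1]])
filt = Node("filter", lambda traces: [[2 * s for s in t] for t in traces],
            inputs=(read,), halo=1)
stack = Node("stack", lambda traces: [sum(col) for col in zip(*traces)],
             inputs=(filt,))

cache = {}
print(stack.request(cache))   # computed on demand when output is requested
print(sorted(cache))          # intermediates cached for later requests
```

In the distributed setting the abstract describes, each graph instance would run this resolution over its own subset of the output, with the client assembling the pieces.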


Presenter’s Bio: Mr. Melacini has a background in computing science, with a Master’s in Artificial Intelligence from Aberdeen University (Scotland, UK). For over two decades he has been working in the oil and gas industry as a software engineer. Mr. Melacini recently joined Bluware, where he is working on the next-generation cloud-native architecture for the Headwave platform.



11:00 AM - Monday, OCTOBER 15th

KOVE

Software Defined Memory  
John Overton - CEO


Abstract: Software-Defined technologies for storage, networking, and CPUs have provided flexibility to computing infrastructures, but Software-Defined Memory (SDM) has remained elusive because of latency, distance-of-cable versus speed-of-light concerns. Kove® External Memory is a mature SDM implementation, providing a fast & reliable memory meshed approach to computing at scale, via protected, server-attachable, external memory. Kove’s SDM approach integrates directly with internal server memory, providing practically unlimited external memory on-demand, regardless of memory capacity in the local server. SDM provides excellent performance, dynamic provisioning, high CPU utilization rates, and flexibility of local memory performance with an arbitrarily scaled external memory resource. No code changes required for adoption. SDM has direct applicability across a diverse set of use cases, including databases, analytics, animation, graphs, visualization, financial services, web services, virtual machines, communications, and genomics.


Presenter’s Bio: John Overton, Ph.D., is the CEO of Kove. In the late 1980s John worked for the Open Software Foundation, where he wrote software used in 67% of the worldwide workstation market. John’s first company built the world’s fastest network-optimized database. Correctly identifying the need for memory hierarchies, John founded Kove in 2004, delivering a patented and mature product suite used in numerous countries around the world.



1:00 PM - Monday, OCTOBER 15th

Altair

Altair PBS Works Software Suite  
Victor Wright - Western Region Enterprise Computing Account Manager


Abstract: With PBS Works 2018, Altair is reimagining the HPC experience to Control, Access, and Optimize HPC to increase productivity and reduce expenses. For administrators, the Altair Control portal provides 360-degree visibility and control to configure, deploy, monitor, troubleshoot, report on, and simulate clusters and clouds, including bursting peak workloads and managing cloud appliances. For engineers and researchers, Altair Access portals provide natural access to HPC (no IT expertise needed) to run solvers, view progress, manage data, and use 3D remote visualization via web, desktop, and mobile. This presentation will also showcase how Altair’s HyperWorks Unlimited (HWUL) physical appliance helps customers overcome the complexities normally associated with HPC and hybrid cloud. Since it is completely managed by Altair as a secure ‘black box’ private cloud environment, it frees the end customer from all the typical administrative and technical challenges of managing an HPC infrastructure. HWUL removes all software licensing restrictions by providing unlimited use of Altair’s entire HyperWorks® CAE software suite. It also allows customers to add 3rd-party tools under a BYOL model.


Presenter’s Bio: Bio coming soon.



2:00 PM - Monday, OCTOBER 15th

Energistics

Reservoir Model Transfer with Enrichment Using The RESQML Standard  
Jay Hollingsworth - Chief Technology Officer


Abstract: Many companies, led by Energistics and including operators and software providers, have invested significant time and money over the past decade to develop and deliver a vendor-neutral industry standard for reservoir model data exchange. In 2015 a significant milestone was achieved with the delivery of version 2 of the RESQML standard. From an end-user’s perspective, this was the first version of the standard to deliver capability above and beyond RESCUE, its precursor. Many active RESQML SIG members are anxious to start benefitting from the efforts. A multi-company implementation pilot of the latest version in commercial use (v2.0.1) is demonstrating the benefits of RESQML and the goal is to promote widespread adoption. Energistics and members associated with this pilot will make a multi-vendor live demonstration of RESQML data transfers at the HPC Showcase planned for the October 2018 SEG Annual Meeting.


Presenter’s Bio: Jay Hollingsworth, Energistics CTO, will be the main speaker. Participants representing the following companies will demonstrate data transfers live on the Amazon Web Services and Delfi cloud environments: CMG, Dynamic Graphics, Emerson (Roxar & Paradigm), IFPEN/Beicip, and Schlumberger. BP and Shell, the operators leading the pilot, will not be presenting.



3:00 PM - Monday, OCTOBER 15th

Red Hat

Open Agnostic Cloud  
E.G. Nadhan - Chief Strategist


Abstract: Abstract coming soon


Presenter’s Bio: Bio coming soon.



4:00 PM - Monday, OCTOBER 15th

Western Digital

The Premise of On-Premises Storage  
Brian Bashaw - System Engineer, Director


Abstract: The Cloud: it’s everywhere you turn, yet many are still struggling with what it really means to them and how best to use it. Come along on one man’s journey through the cloud. Along the way, we’ll explore what makes the cloud great, what to look out for when selecting a cloud provider, and ultimately why a hybrid approach is likely best for most.


Presenter’s Bio: Brian Bashaw is the Director of System Engineering for Western Digital’s Data Center Systems Group. He spent the first 11 years of his career optimizing HPC environments for upstream geophysical research companies. In 2004, Brian joined NetApp, where he applied that expertise to NetApp’s emerging products team, assisting in the evolution from 1st platform to 2nd platform computing. Realizing that there was more potential to evolve the datacenter, Brian joined Avere Systems in 2010 and began looking towards solving the challenges associated with providing tier 1 performance, while leveraging the economics of converged and cloud infrastructures. Now with Western Digital, Brian is focusing on furthering that evolution into the 3rd platform world and beyond, helping businesses realize the benefits of deploying their own private & hybrid cloud infrastructure.



5:00 PM - Monday, OCTOBER 15th

Rescale

How to Incorporate Cloud Computing into Simulation Method and IT Environment  
Fanny Treheux - Director of Solution


Abstract: Abstract coming soon


Presenter’s Bio: Bio coming soon.



9:00 AM - Tuesday, OCTOBER 16th

Altair

Altair PBS Works Software Suite  
Victor Wright - Western Region Enterprise Computing Account Manager


Abstract: With PBS Works 2018, Altair is reimagining the HPC experience to Control, Access, and Optimize HPC to increase productivity and reduce expenses. For administrators, the Altair Control portal provides 360-degree visibility and control to configure, deploy, monitor, troubleshoot, report on, and simulate clusters and clouds, including bursting peak workloads and managing cloud appliances. For engineers and researchers, Altair Access portals provide natural access to HPC (no IT expertise needed) to run solvers, view progress, manage data, and use 3D remote visualization via web, desktop, and mobile. This presentation will also showcase how Altair’s HyperWorks Unlimited (HWUL) physical appliance helps customers overcome the complexities normally associated with HPC and hybrid cloud. Since it is completely managed by Altair as a secure ‘black box’ private cloud environment, it frees the end customer from all the typical administrative and technical challenges of managing an HPC infrastructure. HWUL removes all software licensing restrictions by providing unlimited use of Altair’s entire HyperWorks® CAE software suite. It also allows customers to add 3rd-party tools under a BYOL model.


Presenter’s Bio: Bio coming soon.



10:00 AM - Tuesday, OCTOBER 16th

Nvidia

Nvidia DGX for Deep Learning
Osama Qazi


Abstract: "DGX: the instrument of AI research and HPC." This talk will provide a brief overview of the hardware and software that make up the AI supercomputer, and how Nvidia is able to continue delivering performance gains of 10x beyond Moore’s law.


Presenter’s Bio: Bio coming soon.



11:00 AM - Tuesday, OCTOBER 16th

Energistics

Reservoir Model Transfer with Enrichment Using The RESQML Standard  
Jay Hollingsworth - Chief Technology Officer


Abstract: Many companies, led by Energistics and including operators and software providers, have invested significant time and money over the past decade to develop and deliver a vendor-neutral industry standard for reservoir model data exchange. In 2015 a significant milestone was achieved with the delivery of version 2 of the RESQML standard. From an end-user’s perspective, this was the first version of the standard to deliver capability above and beyond RESCUE, its precursor. Many active RESQML SIG members are anxious to start benefitting from the efforts. A multi-company implementation pilot of the latest version in commercial use (v2.0.1) is demonstrating the benefits of RESQML and the goal is to promote widespread adoption. Energistics and members associated with this pilot will make a multi-vendor live demonstration of RESQML data transfers at the HPC Showcase planned for the October 2018 SEG Annual Meeting.


Presenter’s Bio: Jay Hollingsworth, Energistics CTO, will be the main speaker. Participants representing the following companies will demonstrate data transfers live on the Amazon Web Services and Delfi cloud environments: CMG, Dynamic Graphics, Emerson (Roxar & Paradigm), IFPEN/Beicip, and Schlumberger. BP and Shell, the operators leading the pilot, will not be presenting.



1:00 PM - Tuesday, OCTOBER 16th

Red Hat

Open Agnostic Cloud  
E.G. Nadhan - Chief Strategist


Abstract: Abstract coming soon


Presenter’s Bio: Bio coming soon.



2:00 PM - Tuesday, OCTOBER 16th

Nvidia

Accelerated Analytics for the right insights at the right time
Ken Hester


Abstract: RAPIDS offers a suite of open-source libraries for GPU-accelerated analytics, machine learning and, soon, data visualization. It has been developed over the past two years by NVIDIA engineers in close collaboration with key open-source contributors.
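As a concrete illustration (ours, not the speaker's), the centerpiece of RAPIDS is cuDF, a GPU dataframe library whose API mirrors pandas. The sketch below uses pandas syntax; on a CUDA-capable machine, essentially the same code runs on the GPU by swapping the import for `import cudf as pd`. The "line"/"amplitude" table is a made-up example.

```python
# Illustrative RAPIDS-style dataframe workload, written with pandas.
# cuDF mirrors the pandas API, so on a CUDA-capable machine one would
# write `import cudf as pd` instead and run the same aggregation on GPU.
import pandas as pd

# Hypothetical trace-attribute table: survey line and amplitude readings
df = pd.DataFrame({
    "line": [1, 1, 2, 2, 2],
    "amplitude": [0.5, 1.5, 2.0, 4.0, 6.0],
})

# Group-by aggregation: a typical accelerated-analytics operation
means = df.groupby("line")["amplitude"].mean()
print(means[1], means[2])  # per-line mean amplitudes
```

The point of the API compatibility is that existing analytics pipelines need minimal changes to move to GPU execution.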


Presenter’s Bio: Bio coming soon.



3:00 PM - Tuesday, OCTOBER 16th

KOVE

Software Defined Memory  
John Overton - CEO


Abstract: Software-Defined technologies for storage, networking, and CPUs have provided flexibility to computing infrastructures, but Software-Defined Memory (SDM) has remained elusive because of latency, distance-of-cable versus speed-of-light concerns. Kove® External Memory is a mature SDM implementation, providing a fast & reliable memory meshed approach to computing at scale, via protected, server-attachable, external memory. Kove’s SDM approach integrates directly with internal server memory, providing practically unlimited external memory on-demand, regardless of memory capacity in the local server. SDM provides excellent performance, dynamic provisioning, high CPU utilization rates, and flexibility of local memory performance with an arbitrarily scaled external memory resource. No code changes required for adoption. SDM has direct applicability across a diverse set of use cases, including databases, analytics, animation, graphs, visualization, financial services, web services, virtual machines, communications, and genomics.


Presenter’s Bio: John Overton, Ph.D., is the CEO of Kove. In the late 1980s John worked for the Open Software Foundation, where he wrote software used in 67% of the worldwide workstation market. John’s first company built the world’s fastest network-optimized database. Correctly identifying the need for memory hierarchies, John founded Kove in 2004, delivering a patented and mature product suite used in numerous countries around the world.



4:00 PM - Tuesday, OCTOBER 16th

Western Digital

The Premise of On-Premises Storage  
Brian Bashaw - System Engineer, Director


Abstract: The Cloud: it’s everywhere you turn, yet many are still struggling with what it really means to them and how best to use it. Come along on one man’s journey through the cloud. Along the way, we’ll explore what makes the cloud great, what to look out for when selecting a cloud provider, and ultimately why a hybrid approach is likely best for most.


Presenter’s Bio: Brian Bashaw is the Director of System Engineering for Western Digital’s Data Center Systems Group. He spent the first 11 years of his career optimizing HPC environments for upstream geophysical research companies. In 2004, Brian joined NetApp, where he applied that expertise to NetApp’s emerging products team, assisting in the evolution from 1st platform to 2nd platform computing. Realizing that there was more potential to evolve the datacenter, Brian joined Avere Systems in 2010 and began looking towards solving the challenges associated with providing tier 1 performance, while leveraging the economics of converged and cloud infrastructures. Now with Western Digital, Brian is focusing on furthering that evolution into the 3rd platform world and beyond, helping businesses realize the benefits of deploying their own private & hybrid cloud infrastructure.



5:00 PM - Tuesday, OCTOBER 16th

Rescale

How to Incorporate Cloud Computing into Simulation Method and IT Environment  
Fanny Treheux - Director of Solution


Abstract: Abstract coming soon


Presenter’s Bio: Bio coming soon.



9:00 AM - Wednesday, OCTOBER 17th

Prairie View A&M

When Data Science Meets Geophysics: Apply Deep Learning to Seismic Inversion  
Lei Huang, Ph.D. - Associate Professor


Abstract: Seismic inversion is a geophysics-based solution for reconstructing the earth’s subsurface image by inverting seismic data observed via distributed sensors on the surface, where a forward model simulates the physical rules of wave propagation through different earth materials. The solution relies heavily on physics and numerical optimization methods. Physical theories not only allow us to understand how the world works but also enable us to simulate it; they are accurate and universal. In contrast, data science, as a fast-spreading field, is based on probability and statistics, which are very sensitive to data quality and quantity. As data science rapidly reshapes the methodology used in the geophysics community, we are studying how to leverage its power to facilitate seismic inversion and interpretation. When data science meets geophysics, it requires us to rethink the classical workflow, data augmentation, model training, and optimization methodology. The talk will introduce our research findings in applying data science to seismic inversion and interpretation. It will also present our HPC environment, equipped with data science software, used in the seismic inversion research. The presentation is intended to lead a discussion on data-driven applications in seismic imaging, seismology, and interpretation. The work is sponsored by the National Science Foundation (NSF).
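The optimization view of inversion described above can be made concrete with a toy example (ours, not the speaker's code): assume a known linear forward operator G mapping a subsurface model m to observed data d, then recover m by minimizing the least-squares misfit with gradient descent. Real wave-propagation forward models are nonlinear and far more complex; the linear G here is a deliberate simplification.

```python
# Toy illustration of seismic inversion as numerical optimization.
# Assumption: a known LINEAR forward operator G (real wave propagation
# is nonlinear); we recover model m from data d by gradient descent
# on the misfit 0.5 * ||G m - d||^2.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))              # forward operator: model -> data
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d = G @ m_true                            # noise-free synthetic observations

m = np.zeros(5)                           # initial model guess
step = 1e-2                               # fixed gradient-descent step size
for _ in range(2000):
    residual = G @ m - d                  # data misfit
    m -= step * (G.T @ residual)          # gradient of the misfit

print(np.allclose(m, m_true, atol=1e-3))  # recovered model matches m_true
```

Deep-learning approaches, as discussed in the talk, replace or augment this hand-crafted optimization loop with a learned mapping from data to model.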


Presenter’s Bio: Dr. Lei Huang is an Associate Professor in the Department of Computer Science at Prairie View A&M University (PVAMU), where he leads research at the Cloud Computing Research Lab. He also serves as the Associate Director of Research in the Center of Excellence in Research and Education for Big Military Data Intelligence at PVAMU, sponsored by the Department of Defense (DoD). He is currently the Principal Investigator of multiple NSF-sponsored research projects in Big Data analytics, cloud computing, and High Performance Computing (HPC). He joined PVAMU in 2011 with research experience in HPC at the University of Houston and industry experience in seismic software R&D. Huang earned his Ph.D. from the Department of Computer Science at the University of Houston in 2006.



10:00 AM - Wednesday, OCTOBER 17th

KOVE

Software Defined Memory  
John Overton - CEO


Abstract: Software-Defined technologies for storage, networking, and CPUs have provided flexibility to computing infrastructures, but Software-Defined Memory (SDM) has remained elusive because of latency, distance-of-cable versus speed-of-light concerns. Kove® External Memory is a mature SDM implementation, providing a fast & reliable memory meshed approach to computing at scale, via protected, server-attachable, external memory. Kove’s SDM approach integrates directly with internal server memory, providing practically unlimited external memory on-demand, regardless of memory capacity in the local server. SDM provides excellent performance, dynamic provisioning, high CPU utilization rates, and flexibility of local memory performance with an arbitrarily scaled external memory resource. No code changes required for adoption. SDM has direct applicability across a diverse set of use cases, including databases, analytics, animation, graphs, visualization, financial services, web services, virtual machines, communications, and genomics.


Presenter’s Bio: John Overton, Ph.D., is the CEO of Kove. In the late 1980s John worked for the Open Software Foundation, where he wrote software used in 67% of the worldwide workstation market. John’s first company built the world’s fastest network-optimized database. Correctly identifying the need for memory hierarchies, John founded Kove in 2004, delivering a patented and mature product suite used in numerous countries around the world.



11:00 AM - Wednesday, OCTOBER 17th

Nvidia

Accelerated Analytics for the right insights at the right time
Ken Hester


Abstract: RAPIDS offers a suite of open-source libraries for GPU-accelerated analytics, machine learning and, soon, data visualization. It has been developed over the past two years by NVIDIA engineers in close collaboration with key open-source contributors.


Presenter’s Bio: Bio coming soon.



1:00 PM - Wednesday, OCTOBER 17th

Rescale

How to Incorporate Cloud Computing into Simulation Method and IT Environment  
Fanny Treheux - Director of Solution


Abstract: Abstract coming soon


Presenter’s Bio: Bio coming soon.



2:00 PM - Wednesday, OCTOBER 17th

Red Hat

Open Agnostic Cloud  
E.G. Nadhan - Chief Strategist


Abstract: Abstract coming soon


Presenter’s Bio: Bio coming soon.



FOUNDATION SPONSORS


PREMIERE SPONSORS


ENTERPRISE SPONSORS