The Department of Energy (DOE) has again recognized the value of Intelligent Light’s efforts to support innovation by awarding us a Phase IIB SBIR follow-on grant to continue promising R&D on integrating FieldView and VisIt. This brings the total committed to $2 million to enable FieldView to use VisIt’s scalable back-end server. Bringing FieldView and VisIt together will empower FieldView users across many disciplines to gain useful insights from the largest datasets generated on the largest computers. The FieldView-VisIt integration extends FieldView’s power into the High Performance Computing (HPC) regime and brings to bear exciting VisIt technologies such as scalable rendering. Intelligent Light’s success during the Phase II SBIR grant has already translated into useful improvements to the VisIt code, and there is more to come during Phase IIB.
Whereas prior work on FieldView-VisIt integration focused on initial coupling techniques that allow the codes to exchange data, the new work addresses the performance of that coupling as well as the performance of VisIt itself. In the early days of VisIt development at LLNL, the pressure was to add features rather than to make those features run with the utmost efficiency. As a result, there are many places where VisIt can be sped up considerably and otherwise improved.
Related: DOE Invites Intelligent Light to Present In Situ with VisIt, Libsim, & FieldView
Performance improvements are one of the main objectives of the new work. Some of that performance will come from better utilization of parallel resources. For instance, processing an ensemble of datasets or multiple time steps can be accelerated through changes in how VisIt handles the data. We plan to modify VisIt’s core infrastructure so it can process multiple datasets simultaneously in parallel, letting us apply more compute cores to large collections of intermediate-sized data. These large modifications will be challenging, but we know the DOE selected Intelligent Light for our ability to carry out demanding work like this, which will benefit the larger VisIt community.
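To make the idea concrete, here is a small illustrative sketch (in Python, not VisIt’s actual C++ infrastructure) of the pattern described above: independent time steps of an ensemble are farmed out to a pool of worker processes so that many compute cores can chew through intermediate-sized datasets at once. The function names and the synthetic per-step analysis are hypothetical stand-ins.

```python
from multiprocessing import Pool

def process_timestep(step):
    # Stand-in for per-time-step analysis (e.g., computing a derived
    # quantity or statistic); here it just averages a synthetic field.
    field = [(step + i) * 0.5 for i in range(1000)]
    return step, sum(field) / len(field)

def process_ensemble(steps, workers=4):
    # Farm independent time steps out to a pool of worker processes,
    # the same basic idea as processing multiple datasets in parallel.
    with Pool(processes=workers) as pool:
        return dict(pool.map(process_timestep, steps))

if __name__ == "__main__":
    averages = process_ensemble(range(8), workers=4)
    print(averages[0])  # average of 0.5 * (0..999) = 249.75
```

Because each time step is independent, the speedup is limited mainly by the number of available cores and by I/O, which is why this kind of restructuring pays off for ensembles and long unsteady runs.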
On a personal note, this will be my first time as Principal Investigator on a project of this scale. I have been a VisIt developer from the start and a figure in the VisIt community so this is a great chance for me to continue making important contributions to a code I am passionate about.
The Department of Energy hosts an annual meeting called the Computer Graphics Forum which brings together leading visualization experts who carry out DOE-supported research. Experts from National Laboratories, Department of Defense Research Institutions, Universities and select companies are invited to present updates on their research. Intelligent Light was invited for a special vendor participation session and gave a talk called “Promoting In Situ with VisIt, Libsim, and FieldView”.
Topics of interest selected by the DOE this year included computer procurement, status updates for software packages, and research on in situ processing and parallel programming on advanced HPC systems.
Advanced HPC systems have special challenges as they are increasingly heterogeneous architectures (often consisting of CPUs plus accelerators such as GPUs) with deep memory hierarchies. Several talks focused on new programming paradigms that are being created to develop large code bases that are both portable and efficient on heterogeneous architectures.
In situ processing was also a prominent research topic. In situ brings data analysis and visualization into solvers as they run, extracting information from the data while it is resident in memory so that smaller, more concentrated data can be written out. Saving smaller, more concentrated data is important because the compute capacity of HPC systems has grown far faster than the I/O bandwidth and storage needed to write full-resolution results.
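The data-reduction argument can be sketched in a few lines. This is an illustrative toy (plain Python, not the VisIt/Libsim API): an in situ routine runs inside the solver’s time loop and writes a small extract, here a mid-plane slice plus summary statistics, instead of the full 3D field. All function names are hypothetical.

```python
def solver_step(nx, ny, nz, t):
    # Stand-in for one solver time step producing a full 3D field.
    return [[[float(i + j + k + t) for k in range(nz)]
             for j in range(ny)] for i in range(nx)]

def in_situ_extract(field):
    # Reduce the resident data while it is still in memory:
    # keep one 2D slice plus a couple of scalar statistics.
    nx = len(field)
    mid_slice = field[nx // 2]
    flat = [v for plane in field for row in plane for v in row]
    stats = {"min": min(flat), "max": max(flat)}
    return mid_slice, stats

field = solver_step(32, 32, 32, t=0)
slc, stats = in_situ_extract(field)
full_count = 32 * 32 * 32
extract_count = len(slc) * len(slc[0])
print(full_count // extract_count)  # 32x fewer values written to disk
```

Even this trivial slice extract cuts the written data by a factor equal to the grid dimension; real extract workflows (surfaces, cut planes, XDBs) achieve similar or larger reductions while preserving the quantities engineers actually need.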
Related: DOE Awards Follow-On Grant for FieldView / VisIt Integration
Live Event – Thursday, May 21, 2015 – 12:00 Noon, EDT
CREATE™–AV team and Intelligent Light tackle 45 seconds of flight time with unique In-situ XDB workflow
With the goal of improved pilot training for sea-based aircraft operations, the CREATE™–AV team took on the task of coupling CREATE–AV Kestrel to CASTLE, the Navy flight simulator. This two-way coupling could lead to better simulation of a difficult landing environment.
The high temporal fidelity required, 45 seconds of flight at 60 files per second, meant that an innovative approach to handling the data would be needed. The CREATE™–AV team reached out to Intelligent Light for help.
I hope you will join me for this live event.
Jim Forsythe, Ph.D., Software Quality Assurance, CREATE-AV
Brad Whitlock, Post-Processing & Visualization Engineer, Intelligent Light
Accelerating the Post‐Processing of Large Scale Unsteady CFD Applications via In Situ Data Reduction and Extracts
Dr. Earl P.N. Duque
Manager of Applied Research, Intelligent Light
Tuesday, April 14th, 2015
Lehman Building Room 272
Writing, storing, moving, and post-processing vast unsteady datasets can interfere with an engineer’s interpretation and reporting of results. This seminar will present ongoing research into new methods for extracting and reducing large volumetric data derived from unsteady CFD. The first step is in situ data extraction: subsetting and segmenting the volume data with data extraction and analysis libraries integrated directly within the solver codes themselves. To further reduce the amount of unsteady CFD extract data written to disk, methods such as Proper Orthogonal Decomposition can be used to reconstruct the solution data within a given error band. The seminar will present preliminary research and show how CFD practitioners could use these techniques to analyze their large-scale CFD solutions.
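The Proper Orthogonal Decomposition step mentioned in the abstract can be sketched briefly, assuming snapshots of an unsteady field are stacked as columns of a matrix: an SVD yields modes, and keeping only enough modes to stay within a chosen error band compresses the data. This is an illustrative sketch with hypothetical function names, not Intelligent Light’s implementation.

```python
import numpy as np

def pod_compress(snapshots, rel_error=1e-3):
    # Truncated POD via SVD: keep the smallest number of modes whose
    # discarded energy stays within the requested relative error band.
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2)
    target = (1.0 - rel_error**2) * energy[-1]
    r = int(np.searchsorted(energy, target)) + 1
    return U[:, :r], s[:r], Vt[:r, :]

def pod_reconstruct(U, s, Vt):
    # Rebuild the snapshot matrix from the retained modes.
    return U @ np.diag(s) @ Vt

# Synthetic "unsteady CFD" data: 200 spatial points x 50 time steps,
# built from two space-time modes, so two POD modes suffice exactly.
x = np.linspace(0.0, 2.0 * np.pi, 200)
t = np.linspace(0.0, 1.0, 50)
X = (np.outer(np.sin(x), np.cos(2.0 * np.pi * t))
     + 0.1 * np.outer(np.sin(3.0 * x), np.sin(4.0 * np.pi * t)))

U, s, Vt = pod_compress(X, rel_error=1e-3)
Xr = pod_reconstruct(U, s, Vt)
err = np.linalg.norm(X - Xr) / np.linalg.norm(X)
print(U.shape[1], err <= 1e-3)
```

Storing only the truncated factors `U`, `s`, and `Vt` in place of the full snapshot matrix is where the disk savings come from: for this 200 x 50 example, two modes mean roughly 500 numbers instead of 10,000.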
BIO: Dr. Duque manages the Applied Research Group at Intelligent Light, the makers of the leading CFD post-processing software FieldView. Before joining Intelligent Light, he was a tenured Professor of Mechanical Engineering at Northern Arizona University, and before that a Research Scientist for the Army’s Rotorcraft CFD Applications Group at the Numerical Aerodynamic Simulation Facility at NASA Ames Research Center. His current research focuses on the development of large-scale data management techniques for multi-physics simulations. He has been awarded the Lichten Medal from the American Helicopter Society for his pioneering CFD studies of the BERP helicopter rotor and the Army Superior Civilian Service Medal for his lead role in using CFD to study and alleviate vibratory load problems on the Apache-Longbow and Comanche helicopters, and he is an Associate Fellow of the AIAA.
The accessibility of HPC via cloud computing offers tremendous flexibility for CFD users with peak workload demands as well as for organizations and consultants who do not maintain HPC systems in house.
By designing a CFD workflow that maximizes the use of HPC systems and eliminates the transfer of volume datasets, productivity gains can be tremendous. The ability to run high-resolution, time-dependent simulations and full suites of design points allows every idea to be thoroughly vetted. Intelligent Light-sponsored research used this approach to help a single researcher perform over 60 simulations and evaluate nearly 3 TB of data for the AIAA High Lift Prediction Workshop. Result files were post-processed remotely, and only compact XDB files were transferred to the user’s local workstation.
Learn how this was accomplished and see how this approach can make your CFD workflow more capable and productive.