Abstract
In the last decade, supercomputers and the scientific simulations performed on them have dramatically increased in size. Currently, simulations can use hundreds of TeraFLOPs (trillions of floating point operations per second) and generate many terabytes of data. In the near future, we will see PetaFLOP computing and petabytes of data. Further, a critical step in the simulation process is "postprocessing": applying visualization and analysis techniques to better understand the simulation. As a result, the issues of visualizing and analyzing massive data sets have never been more important. This puts the spotlight on two key questions. One, are we prepared for the unprecedented scale of data that we will need to postprocess? And, two, assuming that we can handle data of this size, can we intelligently analyze these simulations? I will argue that, for both of these questions, we must "change the rules" and make dramatic departures from our current modus operandi.
© 2008 Springer-Verlag Berlin Heidelberg
Childs, H. (2008). Why Petascale Visualization and Analysis Will Change the Rules. In: Bubak, M., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds) Computational Science – ICCS 2008. ICCS 2008. Lecture Notes in Computer Science, vol 5101. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69384-0_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-69383-3
Online ISBN: 978-3-540-69384-0
eBook Packages: Computer Science, Computer Science (R0)