Abstract
Supercomputing involves not only the development and provision of large-capacity infrastructures for the scientific and business community, but also a new way of managing research, development and innovation tasks, making it necessary to use high-capacity communication networks that allow the transfer of large volumes of data between research and high-performance computing facilities. When first implemented, supercomputers were used mainly in the military field; at that point they were very rudimentary and offered little possibility of network communication. Over the years, improvements in security, privacy and service quality in information exchange have facilitated the creation of large networks for scientific communication, which in turn have allowed high-performance computing infrastructures to be incorporated into the advancement of science. This paper analyzes the evolution of Supercomputing and Scientific Communications Networks by means of a critical review of their present state, identifies their main uses today and anticipates the challenges of future uses of this type of advanced service.
1 Introduction
Supercomputing is currently one of the three pillars, along with theory and laboratory research, on which much of the progress of science and engineering rests. Research on Distributed and Parallel Systems is one of the most developed lines in current Computer Science [38]. These systems allow operations on large volumes of data and the implementation of simulation programs in the most varied fields of science through their processing elements. To facilitate access to these infrastructures and promote the efficient operation of the whole innovation system [102], Supercomputing Centers have been created, which are developing a new generation of professionals, companies and organizations related to Science and Technology [23].
Investment in these infrastructures has positive effects on the productivity growth of economies [50, 66, 91], allowing improvements in quality, innovation and competitiveness [36]. It is also necessary for improving the creation and exploitation of scientific knowledge and for ensuring the quality of higher education [22], implying a shift from a change in the concept of "the object of research" towards a change in the "ways of doing research" [21].
Nowadays, Supercomputing, as part of e-Science, has transformed the traditional way of scientific work through global collaborations among researchers, the use of large quantities of data, high-speed networks and a large display capacity, allowing a type of research that was not possible a decade ago [14]. Through the "Atkins Report on Cyberinfrastructure" of the National Science Foundation (NSF) in the United States [13] and the "Towards 2020 Science" report by Microsoft Research [42], it is now widely recognized that no scientist can be productive or efficient by global research standards without being able to integrate Supercomputing into the research process as a binding factor.
This article analyzes the history of Supercomputing and Scientific Communications Networks, as well as their evolution and future challenges, especially in light of the significant increase of joint work in the scientific community and the process of globalization based on transnational research agreements, collaboration, resource-sharing and joint activities. For the development of the research, a Group of Experts on Scientific Supercomputing and Networking Communication was established and tasked with clarifying concepts to fulfill the goals set. The issue will be analyzed not only technologically but mainly with respect to the different uses that have been made of it over time and that have influenced its progress, as well as what is expected in the future in various sectors. Accordingly, the study focuses on the analysis of historical facts regarding Supercomputing and Scientific Communications Networks, establishing a treatment distinct from the "systematic review" and providing a basis for future prospective studies on the subject. The paper is structured as follows: Sect. 2 describes the method and research objectives; Sect. 3 details the major findings; Sect. 4 is a discussion; Sect. 5 details the main limitations of the study; and Sect. 6 relates the main conclusions and proposals for future research.
2 Methodology and research objectives
An extensive review, with a historical perspective, of the specialized databases (Scopus, Web of Science and Science Direct) was carried out to detect, obtain and consult the relevant, specialized literature on the topic under study. Other useful materials, such as websites and relevant reports on the subject, were also analyzed to extract and compile the necessary information. This review led to the establishment of five objectives to ascertain the relevant aspects of Supercomputing and the role that the evolution of Scientific Communications Networks plays in this respect. Advice on these matters was obtained from a group of experts in the management of these infrastructures.
2.1 Objectives and research questions
The objectives of this work can be summarized in the following questions:
-
Q1 Determining the historical moment considered to be the birth of Supercomputing and of each of the different stages it has gone through in its evolution. This may be done by answering the questions below: When was Supercomputing born? Which are its current major developmental milestones and the ones for the future?
-
Q2 Establishing how and when the use of Supercomputing for scientific purposes began. The question raised by this objective is: How and when did the transition from the initial uses of Supercomputing to scientific uses occur?
-
Q3 Understanding the uses of Supercomputing and its challenges in future uses. The question aiming to meet this objective is: Which are the current uses of Supercomputing and what is the forecast for the future?
-
Q4 Analyzing the development of Scientific Communications Networks. This may be done answering the question: How have Scientific Communications Networks developed?
-
Q5 Determining the support that Scientific Communications Networks provide for Supercomputing. The question attempting to meet this objective is: How does the development of Scientific Communications Networks help Supercomputing?
3 Major findings of the research
In this section, the results of the review are presented. The following sub-sections present the results obtained for the five research goals previously introduced.
3.1 Q1: When was Supercomputing born? Which are its current major developmental milestones and the ones for the future?
In the development of Supercomputing, two eras are distinguished: the sequential era, beginning in the 1940s, and the parallel era, beginning in the 1960s and continuing until today. Each era is composed of three distinct phases: one phase of architecture, related to the system's hardware, and two phases of software, one related to compilers and the other to libraries/application packages that let users sidestep the need to write certain parts of code [18, 20, 34].
The history of supercomputers dates back to 1943, when the Colossus was introduced, the first supercomputer in history, designed by a group pioneering the theory of computation [54], whose aim was the decryption of communications during World War II [100]. In the same year, work began in the USA on the Electronic Numerical Integrator and Computer (ENIAC) [74], one of the largest computers of the time, intended for general large-scale purposes. In 1949, the University of Cambridge completed the Electronic Delay Storage Automatic Calculator (EDSAC), considered the first practical stored-program computer for general use and one of the first digital, electronic, stored-program computers in the world [117]. The architecture used in that period, known as the "Von Neumann architecture" [114], is still in use and consists of a processor capable of reading and writing to a memory that stores a series of commands or instructions, performing calculations on large quantities of input data.
In the following decades, development continued at a fast pace, more so in the US than in Europe, as indicated in the 1956 report on high-performance computing by the UK Department of Scientific and Industrial Research. In the 50s, new supercomputers were created, such as the SEAC, the ERA 1101 and the ERA 1103. Later, IBM developed several models as well and was responsible for the creation of much of the infrastructure of this decade [81]. In 1959, a significant milestone occurred when the University of Manchester and the Ferranti company cooperated to create the supercomputer known as Atlas [64]. It was introduced in 1962 and was 80 times more powerful than Meg/Mercury and 2400 times more powerful than Mark 1, the other large computational infrastructures of that time. In 1964, the first commercially successful supercomputer, the CDC 6600 [108], was launched, far surpassing the most powerful computers of the time in computing power. In the late 60s, the CDC 7600 [95] was released, which many have considered to be the first supercomputer in the strict sense, by current standards.
The introduction of Supercomputing in industry began in the 60s, when the first parallel computers were built. Most of these machines were single-vector-processor systems [48]. Multiprocessor vector machines were created in the 70s. All of these machines included integrated-circuit memory, whose cost was very high, and the number of processors did not surpass 16 [48].
In the 80s, supercomputers increasingly attracted scientific attention [44], mainly due to the beginning of distributed computing, which produced a 16-fold increase in the speed and main-memory capacity of existing equipment. An example of this was the CRAY-2 supercomputer [99], released in 1985, which was between 6 and 12 times faster than its predecessor. The high-performance computers of the 90s were characterized more by architectural innovations and software-level advances [56, 105]. In the 90s, it became possible to parallelize complex tasks such as statistical procedures and the processing of digitized pictures using new algorithms such as the distributed stereo-correlation algorithm. These advances were based on multi-ring architectures with scalable topology, which allowed their use as building blocks for more complex parallel algorithms [4, 9, 24].
In 1991, the Congress of the United States passed the High Performance Computing Act (HPCA) [55], which allowed the development of the National Information Infrastructure. In 1998, the first supercomputer that exceeded the gigaflop barrier of 10\(^{9}\) operations per second was created, according to the Linpack test [37]. In the same year, in response to a report by the Advisory Committee on Information Technology of the Presidency of the United States [59], the National Science Foundation (NSF/USA) developed several "TeraScale" initiatives for the acquisition of computers capable of performing trillions of operations per second (teraflops), storage disks with capacities of trillions of bytes (terabytes) and networks with bandwidths of billions of bits (gigabits) per second. Based on this initiative, the TeraGrid project [31] began in 2001 and entered full production mode in 2004, providing coordinated and comprehensive services for general academic research in the US. In 2005, the NSF extended its support to TeraGrid with a $150 million investment for operations, user support and improvement of the facility over the following 5 years.
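The Linpack test mentioned above rates a machine by timing the solution of a dense linear system Ax = b and converting that time into a flop rate, using the standard count of roughly (2/3)n³ floating-point operations for an n × n system. The sketch below is an illustrative, simplified version of that measurement in plain Python (real Linpack/HPL implementations are heavily optimized; the function names here are our own):

```python
import random
import time

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting on a dense system Ax = b.
    The dominant flop count for an n x n system is ~ (2/3) n^3, the figure
    the Linpack benchmark uses when converting solve time into flop/s."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies; leave caller's data intact
    x = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest remaining entry in column k up.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    # Back substitution.
    for k in range(n - 1, -1, -1):
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (x[k] - s) / A[k][k]
    return x

def linpack_rate(n=100):
    """Time one dense solve and return an (unoptimized) flop/s estimate."""
    A = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    solve_dense(A, b)
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2  # standard Linpack count
    return flops / elapsed
```

On a gigaflop-class machine of 1998, the sustained rate reported by an optimized equivalent of `linpack_rate` would exceed 10\(^{9}\); interpreted Python will of course fall far short, which is precisely why benchmark codes are written in tuned, compiled kernels.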
In 2006, the "HPC in Europe Task Force," a working group of experts that analyzes the evolution of Supercomputing in Europe, published a White Paper entitled "Scientific Case for Advanced Computing in Europe" [62]. This report was a boost for the Partnership for Advanced Computing in Europe (PRACE), concluding that only through a joint and coordinated effort would Europe be able to remain competitive, mainly because the cost of Supercomputing systems is expected to be of such a magnitude that no European country alone could compete with the US and other countries in Asia or Latin America. In the same vein, the IDC report [57] provides a number of recommendations for Europe to lead scientific research and industry in 2020. In 2012, the European Commission (EC) announced a plan to double its investment in Supercomputing from 630 to 1200 million euros [113], with a focus on the development of 'exa-scale' supercomputers by 2020, capable of performing 10\(^{18}\) operations per second.
In 2008, the first supercomputer reaching petaflop speed (10\(^{15}\) operations per second) was created, more than one million times faster than the first gigaflop systems [46]. This system had almost 20,000 times the number of processors of the fastest supercomputer 20 years earlier, and each of its processors was almost 50 times faster. In the first decade of the twenty-first century, this scenario of continued exponential growth was interrupted by factors related to Moore's law [97], which observes that the number of transistors that can be integrated on a chip doubles roughly every 24 months, with a corresponding growth in energy consumption [19]. Because of this relationship, large cooling systems are required, which is a limiting factor for Supercomputing. As a result, in recent years a genuine concern for energy efficiency has arisen. This is reflected in the establishment of the Green 500 list in November 2007, a ranking of the 500 most efficient supercomputers in the world by computing speed per unit of energy consumed.
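The orders of magnitude in the preceding paragraphs fit together arithmetically: a petaflop is 10\(^{6}\) times a gigaflop, which matches the quoted factors of ~20,000 more processors each ~50 times faster. The snippet below checks this and sketches the Green 500 efficiency metric (the helper name `green500_metric` is ours; the list itself reports sustained flop/s per watt):

```python
# SI scales used in the text, in operations per second.
GIGA, TERA, PETA, EXA = 1e9, 1e12, 1e15, 1e18

# 2008 petaflop system vs. the fastest machine 20 years earlier:
# ~20,000x the processor count, each processor ~50x faster.
speedup = 20_000 * 50
assert speedup == 1_000_000      # "more than one million times faster"
assert PETA / GIGA == speedup    # petaflop = 10^6 x gigaflop

def green500_metric(rmax_flops, power_watts):
    """Green 500-style efficiency: sustained flop/s per watt of power drawn.
    Illustrative helper, not the official tooling."""
    return rmax_flops / power_watts
```

For example, a hypothetical 1-petaflop system drawing 2 MW would score `green500_metric(1e15, 2e6)` = 5 × 10\(^{8}\) flop/s per watt; raising this figure, rather than raw speed, is what the Green 500 ranking rewards.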
Answering question Q1, Fig. 1 indicates the main dates from the birth of Supercomputing to predicted future developments, describing the major milestones identified in the literature review. Horizontal lines show major development eras: blue for the past, yellow for the future. Figure 1 shows that the use of Supercomputing has been growing since 1940, helping the development of industry and science.
3.2 Q2: How and when did the transition from the initial uses of Supercomputing to scientific uses occur?
Before the advent of Supercomputing, experimentation was basically done in a laboratory or in the field, and Information and Communication Technology (ICT) only served to assist in verification. As seen in Fig. 2, over time the increasing use of computers made it possible to create better models for scientific simulation, allowing more time to be devoted to experimentation and less to verification. From the moment known as the "Silicon Shift" onwards, a period began in which computers served not only to improve science but also to enable its development.
The 1950s and 60s were years of great competition between the two blocs that divided the world, led respectively by the Soviet Union and the United States. In 1957, the Soviet Union launched the Sputnik Program [53], which consisted of a series of unmanned space missions to demonstrate the feasibility of artificial satellites in Earth orbit. In response, the United States created the Advanced Research Projects Agency (ARPA), whose aim was to go beyond purely military applications. The National Science Foundation (NSF) was created to boost basic research and education in all non-medical fields of science and engineering. This was the moment when the real impulse was given to Supercomputing [12], which ceased to serve a purely military purpose and became a tool supporting mainly public research institutions (universities and government agencies). The first private users of supercomputers were large companies such as oil companies and banks.
In the 60s, the United States launched the Apollo Project [78], which made extensive use of large-scale Supercomputing. The goal of Project Apollo was to simulate a manned flyby of the moon in order to locate a suitable area for a possible moon landing by astronauts. This project represented a radical change in the way Supercomputing was understood, because of the highly complex simulations of the physical equipment and the operational procedures used in the mission. This led to the use of simulation in large and complex systems such as models of biological systems, Artificial Intelligence, particle physics, weather forecasting and aerodynamic design.
In the 80s, new algorithms were developed for digitized images [6, 8], designed to work on a transputer network with a simple topology.
In the 90s, distributed computing systems were increasingly used to solve complex problems, with notable improvements in evolutionary computation, that is, computational intelligence methods that emulate the natural evolution of living beings to solve optimization, search and learning problems [49]. In these years, the development of algorithms for more complex parallel operations continued, notably in computer vision and image processing. A multi-ring network called the Reconfigurable Multi-Ring System (RMRS), in which each node has a fixed degree of connectivity, was also developed and shown to be a viable architecture for image processing and computer vision problems via parallel computation [3, 5, 7, 25, 26]. Supercomputing also became an indispensable tool for industry in the late twentieth and early twenty-first century. Accordingly, in the 2004 study by the International Data Corporation (IDC) [57] exploring the use and impact of Supercomputing resources in industry and other sectors, almost all respondents indicated that its use was essential for their business.
In the first decade of the twenty-first century, new algorithms improved the use of parallel computing such as those based on edge detection in 3D images, that are targeted at a Multi-Ring network [10].
In 2007, it became possible to transmit data one hundred times faster, using one-tenth of the energy required by the technologies existing at the time, by means of light pulses on silicon. This allowed a substantial change in the contribution of supercomputers to science through simulation and numerical calculation [79], as a key to running experiments.
In response to the question Q2, it can be concluded that the 60s were the turning point for the use of Supercomputing in the scientific field.
3.3 Q3: Which are the current uses of Supercomputing and the forecast for the future?
Supercomputing has revolutionized design and manufacturing, allowing better products to be manufactured, and reduced risks by means of better analysis and appropriate design decisions. There is a reduction of time and cost not only in design but also in production [96], as simulations of the final product lessen the need for making prototypes. This fact is presented graphically in Fig. 3.
Figure 3 represents the process from basic research to the creation of the final product, carried out by the main actors involved (universities, laboratories and industry), using the appropriate Supercomputing tools (hardware, software, compilers and algorithms). Generally, basic research is done in small projects, exploring a multitude of ideas. Applied research projects normally validate ideas from basic research and often involve larger groups. When it is possible to develop a prototype that may become a product, the integration of multiple technologies (e.g., hardware, software, new compilers and algorithms) becomes necessary, thereby validating the design by showing the interplay of these technologies. Such development includes many interactions, whereby projects inspire one another, moving quickly from basic research to final products, sometimes requiring multiple iterations of applied research. In this context, failures are as important as successes in motivating new basic research and the search for new products.
The improvement of Supercomputing resources provides new capacities for managing and analyzing information, as well as facilities for archiving, conserving and exploiting many kinds of data through which the researchers interpret scientific phenomena. The devices of future supercomputers will provide a new way for application developers to tackle new challenges through the use of open languages and other tools [35] in various scientific and economic sectors.
The improvement of data acquisition devices, the availability of distribution networks and the increased storage capacity of computers have made it possible for supercomputers to acquire and manage large quantities of data, in the range of terabytes (a trillion bytes) or petabytes (a quadrillion bytes), and even greater (exa-, zetta-, yotta-, etc.). This fact has been highlighted in various scientific publications, for instance in a special edition of Nature in 2008 under the title "Big Data: Welcome to the petacentre, science in the petabyte era." In recent years, a great number of international initiatives also stand out, based on the anticipation of exa-scale hardware becoming available in the coming years [15, 47], specifically around 2018 according to some authors [80].
In fact, the most important current challenges of science [28] and engineering, both in simulation and in data analysis, are beyond the capacity of petaflop systems and are quickly approaching the need for exaflop computing [39]. Processing these volumes of data poses problems whose computing requirements are beyond the scope of a single machine [83], marking the need to improve the design of high-performance computers through a mathematical process that allows adequate use of the whole Supercomputing infrastructure.
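The "beyond the scope of a single machine" observation is the motivation for domain decomposition: split the data, reduce each part independently, then merge the partial results. A minimal, hypothetical sketch in Python follows, with threads standing in for the compute nodes or MPI ranks a real facility would use (all function names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_stats(chunk):
    """Per-worker reduction over one slice of the data."""
    return (len(chunk), sum(chunk), min(chunk), max(chunk))

def distributed_stats(data, workers=4):
    """Split a dataset into chunks, reduce each independently, merge results.
    A toy stand-in for the pattern used when a dataset exceeds one machine:
    here the 'nodes' are threads; in practice they would be MPI ranks on
    separate nodes, each holding only its own slice of the data."""
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(partial_stats, chunks))
    n = sum(p[0] for p in parts)
    total = sum(p[1] for p in parts)
    return {"count": n,
            "mean": total / n,
            "min": min(p[2] for p in parts),
            "max": max(p[3] for p in parts)}
```

The key design point is that only small partial results (counts, sums, extrema) cross machine boundaries, never the raw data, which is what makes the pattern viable when the dataset itself cannot fit on any single node.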
Some uses of Supercomputing in various industries and sectors are detailed in the following Table 1.
The table above is based on a major study by the University of Edinburgh in 2011. The analysis proposed in this paper compares Supercomputing applications described in Table 1 with the cases describing the state-of-the-art for the period 2012–2014 to analyze the development of uses in a field exhibiting very rapid advancement.
In the last few years, new uses and applications of Supercomputing have been described that will shape future trends in this discipline. In the following, we provide a description of uses of Supercomputing in the early twenty-first century in various fields from a historical viewpoint, by analyzing bibliographical references concerning the use of Supercomputing in the Web of Science in the period 2012–2014. Details of these challenges are as follows:
3.3.1 Health care sector
-
Development of parallelization techniques for the analysis of multiple concurrent genomes not only greatly reduces computation time, but also increases the usable sequence per genome [88]. New techniques for DNA sequencing, such as the translocation of molecules through biological and synthetic nanopores [72], are being used, among others, for genome assembly by means of Next-Generation Sequencing (NGS) techniques [76]. This enables personalized cancer treatments by developing virtualization techniques and by improving the utilization of resources and the scalability of NGS [111]. Furthermore, tools for large-scale maximum-likelihood phylogenetic inference on supercomputers [103] stand out.
-
Cardiology has managed to build models through the use of complex algorithms [121] that show the full three-dimensional interaction of blood flow with the arterial wall. It has also improved the understanding of cardiac function in health and disease, using anatomically realistic and biophysically detailed multiscale computer models. These require a high level of computational capacity and highly scalable algorithms to reduce execution times [82].
-
The Human Brain Project (HBP) will develop a new integrated strategy for understanding the human brain and a novel research platform that will integrate all data and knowledge about the structure and function of the brain to build models valid for simulation. The project will promote the development of Supercomputing in the field of life sciences and will generate new neuroscientific data as a reference point for modeling. It will develop new tools for computing, modeling and simulation, and will allow the construction of virtual laboratories for basic and clinical studies, the simulation of drug use, and the creation of virtual prototypes of brain function and robotic devices [71].
-
In oncology, diagnostic systems for colon cancer based on virtual colonoscopies, processed by computationally intensive algorithms, have been described. These study aspects such as bowel preparation, computer-assisted screening and examination of colon cancer, and real-time computer-assisted detection, with the aim of improving sensitivity in detecting colon polyps. There are also mobile systems with high-resolution displays connected to the virtual colonoscopy system that allow the visualization of the entire intestinal lumen and the diagnosis of colon lesions anytime and anywhere [125].
-
A study from 2014 uses computational intelligence to analyze large-scale next-generation sequencing data. These approaches can identify genetic diseases and their regulators, which is important for effective biomarker identification in early cancer diagnosis and for treatment planning with therapeutic drug targets for kidney cancer [123].
-
The area of pharmacy has seen the development of polypharmacology, which studies the ability of drugs to interact with multiple targets, thus addressing the current problems of rising drug-development costs and decreasing productivity, and incorporating applications such as high-performance virtual screening (docking) [40].
-
In relation to data processing in the health care sector, a visualization study based on self-organizing maps (SOM) has been carried out on a corpus of over two million medical publications. Its results show that it is possible to transform a large corpus of documents into a map that is visually appealing and conceptually relevant for experts [101]. The growing volume of biomedical data, including next-generation sequencing in clinical records, will also have to be accommodated, requiring large storage capacity and new calculation methodologies [29]. The use of Supercomputing in complex statistical techniques will allow the accumulation of worldwide data on epidemiology, survival and pathology to discover more about genetic and environmental risk, biology and etiology [90].
-
High-performance applications will be useful for large-scale projects of virtual screening, bioinformatics, structural systems biology and basic research in understanding protein-ligand recognition [58].
-
It will be possible to estimate biologically realistic models of neurons, based on electrophysiological data, which is a key issue in neuroscience for the understanding of neuronal function [69].
3.3.2 Aerospace sector
-
The new generation of radiotelescopes offers a vision of the universe with greater sensitivity [115]. The latest generation of interferometers for astronomy will conduct sky surveys, generating petabyte volumes of spectral line data [116].
-
Simulations of core-collapse supernovae in galaxies [70] are being developed. This is a difficult phenomenon to analyze, even after the extensive studies done over many decades. This unresolved issue involves nuclear and neutrino physics in extreme conditions, as well as hydrodynamic aspects of astrophysics [107], thus creating an interesting field of study for the future.
-
Supercomputing is used as a fundamental tool for NASA missions [27] and for scientific and engineering applications of NASA [93].
-
Another important application is the study of the properties of core convection in rotating A-type stars and their ability to create strong magnetic fields. 3D simulations can serve to provide data regarding asteroseismology and magnetism [43], as does NASA’s Kepler mission, which is currently collecting data on a frequent timetable on the asteroseismology of hundreds of stars [77]. This will allow the Sun to be understood in a broader context than it is nowadays, providing comparable structural information on hundreds of solar-type stars. Simulations of emerging data of solar-magneto flow are likewise being carried out [104]. Recent advances in asteroseismology and spectropolarimetry are beginning to provide estimates of differential rotation and magnetic structures for G-type stars and core convection in A-type stars [109].
-
Research in astronomy will soon pose serious computational challenges, especially in the petascale data era, when surveys provide an unprecedented level of accuracy and coverage, and not every task (e.g., calculating a histogram or computing minimum/maximum values) may be achievable without access to a Supercomputing facility. The analysis of GPUs and many-core CPUs is important in this context because they provide a tool that is easy to use for the wider astronomical community and enable more optimized utilization of the underlying hardware infrastructure [52].
3.3.3 Aeronautical sector
-
Development of new aerodynamic designs via a simulation consisting of three subparts: core geometry, a computational fluid dynamics (CFD) flow analysis, and an optimization algorithm [63]. Calculations are made on the aerodynamics of the vertical stabilizer, as well as an accurate estimation of its contribution to the directional stability and control of aircraft, especially during the preliminary design phase [84]. Another remarkable application is the study of airflow using Supercomputing [65].
3.3.4 Meteorology
-
The running of simulations on HPC platforms supports a climate model used for climate research at 24 academic institutions and meteorological services in 11 European countries [11].
-
Using CFD to model onshore wind farms, the prediction and optimization of farm production through the assimilation of meteorological data [17] is possible.
-
Simulations of atmospheric dust storms [2], based on data from an experiment using lasers for remote sensing of aerosol layers in the atmosphere above Sofia (Bulgaria) during an episode of Saharan dust storms [106].
3.3.5 Environment
-
Development of climate modeling through international multi-institutional collaboration on global climate models and prior knowledge of the climate systems inspired by the World Modeling Summit 2008 [60].
-
Modeling of chemical transport emissions (MCTs) to estimate anthropogenic and biogenic emissions for Spain with a temporal and spatial resolution of 1 h and 1 km\(^{2}\), taking 2004 as the reference period [51].
-
Assistance with the generation of clean energy [61].
3.3.6 Biological sector
-
Creation of databases for the analysis of plant genes [124].
-
Development of parallel Supercomputing systems for solving large-scale biological problems using protein–protein interaction (PPI) [73].
3.3.7 Emergencies
-
Development of algorithms related to seismic tomography [67].
-
Investigation of both tropical cyclones and the impact of climate change through modern models based on Supercomputing work done by NASA [98].
3.3.8 Naval sector
-
Forecasting of real situations on three-dimensional models set up for the Navy and using virtual simulation [30].
3.3.9 National security
The support of supercomputers will be essential in national security, one of the main users of Big Data in a wide range of case studies and application scenarios, such as the fight against terrorism and crime, which requires high-performance analysis [1].
Data-intensive applications will gain importance in the future. The volume of measurements, observations and results of simulations will increase exponentially, so that future research efforts should be focused on the collection, storage and exploitation of data as well as on knowledge extraction from these databases.
In summary, in response to question Q3, the survey of Supercomputing applications shows that virtually all fields of science and industry will experience breakthroughs through the use of Supercomputing.
3.4 Q4: How have Scientific Communications Networks developed?
The development of Scientific Communications Networks began in the United States in the 1960s, when ARPANET came into existence. This computer communication network was created by the United States Department of Defense, and its first node was opened in 1969 at the University of California, Los Angeles (UCLA). This network was funded by the Defense Advanced Research Projects Agency (DARPA) and can be considered the first scientific communication network in history [85]. One of its alleged origins lies in the space race between the United States and the Soviet Union in the 1950s and 60s, especially after the launch of the Soviet ’Sputnik’ satellite in 1957 [53].
1983 is considered the year in which the Internet as it is known today emerged, when the military and civilian parts of the network were separated. 1984 was an important milestone for the interconnection of supercomputers: the US National Science Foundation (NSF) started to design a high-speed successor to ARPANET that would create a backbone network connecting its six Supercomputer centers in San Diego, Boulder, Champaign, Pittsburgh, Ithaca and Princeton. In 1986, the NSF permanently established its own network, called NSFnet, motivated by the bureaucratic impediments to using ARPANET, which disappeared as a carrier of general traffic in 1989. By that time many institutions already had their own networks and the number of servers on the network exceeded 100,000. The aforementioned developments can be seen in Fig. 4 (credited to the Internet Society).
The High Performance Computing Act (HPCA) was passed in the United States in 1991, which allowed the funding of a National Research and Education Network (NREN). The law was popularly referred to as “the information superhighway,” and primarily allowed the development of high-performance computing and advanced communication, giving a boost to many important technological developments. The experts concluded that if the development of the areas covered by the Act had been left to private industry, it would not have been possible to reach the scientific development achieved through the Act [87].
From the early 90s onward, the Supercomputing Centers of Illinois, Pittsburgh and San Diego all contributed to the development of high-capacity networks through their participation in the Gigabit Network Project [92], supported by the NSF and the Defense Advanced Research Projects Agency (DARPA). In 1994, this support was extended for another 2 years and in 1995, after the end of the NSFnet project, these centers became the NSF’s first high-performance Backbone Service nodes for research and education. Finally, on April 30, 1995, the NSFnet closed down. Since then, the Internet has consisted entirely of various commercial ISPs and private networks (including networks between universities).
In 1996, Internet 2 was created, based on a consortium that emerged as an idea similar to those of the Scientific Communications Networks of the 70s, bringing together over 200 universities, mainly American, in cooperation with 70 leading corporations, 45 government agencies, laboratories and other institutions of higher education in addition to more than 50 international partners [16]. The project’s main objectives were to provide the academic community with an extended network for collaboration and research among different members, thereby enabling the development of applications and protocols that can then be commercialized through the Internet and to develop the next generation of telematics applications, facilitating research and education as well as promoting a generation of new commercial or non-commercial technologies.
The national cyberinfrastructure in the United States [28] was the result of the Next Generation Internet Research Act of 1998, the HPCA of 1991 [55], the American Competitiveness Initiative (ACI) and TeraGrid created in 2001. In 2003, TeraGrid capabilities were expanded through high-speed network connections to link the resources of the University of Indiana, Purdue University, Oak Ridge National Laboratory, and the Texas Advanced Computing Center at the University of Texas, Austin. With this investment, the TeraGrid facilitated access to large volumes of data and other computing resources within the scope of research and education. Early in 2006, these integrated resources included more than 102 teraflops of computing power and more than 15 petabytes (a quadrillion bytes) of online and file data storage with an access and retrieval system using high-performance networks. Through the TeraGrid, researchers could access more than 100 databases specific to each discipline.
It must be noted that in the early stages of ARPANET few attempts were made in Europe to join the new network, with the exception of the National Physical Laboratory (NPL) and University College London in England and the Royal Radar Establishment in Norway [94]. However, despite these limited early initiatives, real interest in the technology developed in the United States did not begin until the second half of the 80s, when a large number of TCP/IP networks were operating in Europe in an isolated fashion. Some of them began to enjoy the first transatlantic connections to the Internet, usually via dedicated lines financed by US agencies such as the NSF, NASA and the Department of Energy (DoE), which were very interested in cooperating with certain European research centers. Thus, in 1988 and 1989, prestigious European institutions in the Nordic countries (through NORDUnet/KTH), France (INRIA), Italy (CNUCE), Germany (Universities of Dortmund and Karlsruhe), the Netherlands (CWI, NIKHEF) and the UK (UCL) became connected. Some supranational organizations also established dedicated links to the Internet in those years, such as the European Laboratory for Nuclear Research (CERN), the European Space Agency (ESA) and the European UNIX Users Group (EUUG).
To coordinate the various national initiatives for academic and research networks appearing in most Western European countries, both economic investment and possible technical solutions were rationalized. Thus emerged such organizations as JANET (UK), DFN (Germany) and SUNET (Sweden) in 1984, SURFnet (the Netherlands) and ACOnet (Austria) in 1986, SWITCH (Switzerland) in 1987, and RedIRIS (Spain) and GARR (Italy) in 1988. These networks were interdisciplinary: their aim was to serve the whole of the academic and research community, regardless of area of activity, by using a single centralized infrastructure, which meant joining forces and benefiting from the resulting synergies and economies of scale.
In order to optimize the use of these networks, the European Union is currently promoting the technological development by establishing a network for the joint use of Supercomputing resources by its member countries and the support for studies related to high-performance computing [15]. Through these advanced networks, Europe makes Supercomputing resources more accessible to the projects of scientific and industrial research and participates in important world-class collaborations that improve productivity by providing Supercomputing resources for general and scientific researchers.
Over the years, the development of Scientific Communications Networks has continued in many countries and continents beyond the US and European cases cited above. For instance, Latin America has been developing such networks since the 90s [89]. Currently, there is a network called CLARA (Latin American Cooperation of Advanced Networks), which supports research networks in Latin America and the Caribbean, and a project called ALICE for the interconnection between Latin America and Europe to create an infrastructure for research networks using the Internet Protocol (IP). Likewise, the pan-European research network GÉANT aims to lead this operation through a partnership with four European National Research and Education Networks (NRENs) with close historical and social ties to Latin America. Figure 5 shows the details.
Since the 90s, the governments of other countries such as China have effectively used public-sector research potential to boost the knowledge-based economy [110], drawing on a virtually unlimited pool of highly skilled human resources and becoming the fifth leading nation in terms of its share of the world’s scientific publications, with exponential growth in the rate of papers published, thus making it a major player in critical technologies like nanotechnology. The construction of networks of scientific communication [126] has been outlined, and many studies have been carried out in China on how to develop an effective national system or environment for innovation and for increased collaboration between industry and higher education, leading to knowledge transfer between the two [120].
In Japan, the development of a new research system throughout the 90s has led to the emergence of new innovation systems in which university–industry linkages have been sought as a means of stimulating regional economic growth. The idea of a regional innovation system (RIS) is relatively new and did not receive much attention in policy frameworks until recently. In 2004, a ‘radical’ change [122] was introduced to Japanese national universities through the National University Incorporation Law (2003), which meant a change of roles for the universities because of the concentration of resources in ‘elite’ institutions, and the ‘regionalisation’ of science and innovation policies. This included ‘cluster’ initiatives and policies promoting wider university–industry links at a regional level and the promotion of networks among industry, universities and public research institutes, by supporting the creation of new businesses and new industries [119].
3.5 Q5: How does the development of Scientific Communications Networks help Supercomputing?
For many years, the management and analysis of data produced by Supercomputing applications were a minimal component of the process of modeling and simulation, in which the management of user data was neglected. With the growing complexity of systems, the complexity of input and output data has also increased. In the future, the volume of data will greatly exceed the current volume, so processing it will become very important, while privacy must be preserved. At the current rate of progress, it is projected that exaflop-capacity systems (EFLOPs) will be available around 2019 and zettaflop-capacity systems (ZFLOPs) in 2030 [75]. To achieve this predicted increase, highly effective memory will be required, as well as the development of effective programming methodologies, languages and new algorithms capable of exploiting the new, massive, heterogeneous parallel systems with multiple cores. Irregular non-local communication patterns might cause bottlenecks in multi-core supercomputers given the increased data volume. New efficient parallelization algorithms are being developed, but this problem remains one of the most complex issues in Supercomputing [118].
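The projected timeline implies a striking growth rate. As a back-of-the-envelope check, using only the two projected milestones quoted above (projections, not measurements):

```python
import math

# Projected milestones quoted above: exaflop (1e18 FLOPS) ~2019,
# zettaflop (1e21 FLOPS) ~2030 -- a factor of 1000 in 11 years.
annual_growth = (1e21 / 1e18) ** (1 / (2030 - 2019))   # ~1.87x per year
doubling_time = math.log(2) / math.log(annual_growth)  # ~1.1 years
```

That is, the projection assumes peak capacity roughly doubling every year, somewhat faster than the classic Moore's-law cadence, which underlines why memory, programming models and algorithms must all improve together.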
The NSFnet allowed a large number of connections, especially from universities. Although its initial objective was to share the use of expensive Supercomputing resources, the organizations connected soon discovered that they had a superb medium for communication and collaboration with each other. Its success was such that successive enlargements of the capacity of the NSFnet and its trunk lines became necessary, at a multiplication rate of 30 every 3 years: 56,000 bits per second (bps) in 1986, 1.5 million bps in 1989 and 45 million bps in 1992. In 1993, the National Information Infrastructure (NII) was announced, one aspect of which was the National Research and Education Network (NREN), a billion-bps backbone completed in 1996. Currently the Internet2 Network offers 8.8 terabits of capacity and 100 gigabit Ethernet technology across its entire footprint, as well as connection to an international 100 Gbps network backbone.
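The quoted backbone capacities can be checked directly; the "multiplication rate of 30 every 3 years" corresponds to the link capacity roughly tripling every year:

```python
# NSFnet backbone capacities quoted above, in bits per second.
backbone_bps = {1986: 56_000, 1989: 1_500_000, 1992: 45_000_000}

factor_86_89 = backbone_bps[1989] / backbone_bps[1986]  # ~26.8x
factor_89_92 = backbone_bps[1992] / backbone_bps[1989]  # 30.0x
annual = factor_89_92 ** (1 / 3)                        # ~3.1x per year
```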
It is significant that the assessment of the effectiveness of research communities must consider not only quantitative, scientific-production factors but also the qualitative factors that influence the successful or unsuccessful integration of research communities. Usually, these platforms are geographically dispersed and interconnected by communication systems that allow the implementation of new grid and cloud computing platforms [45]. For this reason, task scheduling becomes very important in order to manage different users and avoid long queueing delays for computing resources [83].
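As a toy illustration of why scheduling matters on shared platforms, the sketch below simulates a first-come-first-served (FCFS) queue for node-based jobs. The job format is an assumption for illustration only; real HPC schedulers (e.g. with backfilling) are far more sophisticated:

```python
import heapq

def fcfs_schedule(jobs, total_nodes):
    """FCFS sketch: jobs are (job_id, nodes_needed, runtime) tuples.
    Returns {job_id: start_time}. Later jobs wait until enough
    nodes are free -- the queueing delay the text refers to."""
    free = total_nodes
    running = []   # min-heap of (end_time, nodes) for running jobs
    starts = {}
    t = 0
    for job_id, nodes, runtime in jobs:
        # Advance time, releasing nodes, until this job fits.
        while nodes > free:
            end, released = heapq.heappop(running)
            t = max(t, end)
            free += released
        starts[job_id] = t
        free -= nodes
        heapq.heappush(running, (t + runtime, nodes))
    return starts
```

With jobs `[("a", 2, 10), ("b", 2, 1), ("c", 4, 5)]` on 4 nodes, "a" and "b" start immediately while "c" must wait until time 10 for the whole machine, illustrating how one large request can sit in the queue behind smaller ones.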
In 2005, the Council and the Commission of the European Union agreed, through resolutions, to promote and encourage the growth of innovation, research and joint work to attract researchers and encourage trans-disciplinary research projects from global research networks [33]. Currently, new collaborative research projects are being launched, enabling a new way of doing research by linking research communities remotely, via e-science, or e-knowledge.
A clear example of the necessity of using Scientific Communications Networks connected to large computing capabilities is the Square Kilometer Array (SKA), considered an unprecedented global science project in terms of its size and scale in the field of radio astronomy, whose mission is to build the world’s largest radio telescope, with a square kilometer of collecting area. It will constitute the largest array of radio telescopes ever built and represent a qualitative leap in engineering and research. The resulting increase in scientific capacity is expected to revolutionize fields such as astronomy, astrophysics, astrobiology and fundamental physics. The radio telescopes will be located in South Africa and Australia, while the processing of data by supercomputers will be conducted mainly in Europe and the United States. The large volume of data to be handled demonstrates the need for good communication networks that allow optimal data transport from the point of collection to the places of processing, thousands of miles away. The project will run from January 2013 to 31 December 2023. Another example is the European Laboratory for Nuclear Research (CERN) [32], whose particle accelerator generates large amounts of data per second, making it necessary to distribute the analysis across facilities situated in various countries.
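The scale of this transport problem can be illustrated with simple arithmetic; the figures below are illustrative assumptions, not SKA or CERN specifications:

```python
# Moving 1 petabyte of observation data over a dedicated 100 Gbps
# research link, assuming full utilization and ignoring protocol overhead.
data_bits = 1e15 * 8      # 1 PB expressed in bits
link_bps = 100e9          # 100 Gbps link
transfer_hours = data_bits / link_bps / 3600   # ~22 hours
```

Even on a state-of-the-art research link, a single petabyte occupies the link for nearly a full day, which is why sustained high-capacity networks between collection sites and processing centers are indispensable.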
In line with this strategy of large-scale development of communications, it must be noted that, according to Robert Vietzke, executive director of Internet 2 [16], the United States will in the future be interconnected by a network using 100 Gbps wavelengths. Currently, organizations can work with Internet2 and advanced regional networks based on 100 gigabit Ethernet (GE) Layer 2 connections, support for software-defined networking (SDN), and the implementation of a model developed by the Department of Energy’s ESnet, called the Science DMZ. Thus, more than 200,000 academic centers, libraries, health centers, government and research organizations will be connected, which will permit the network transport of special applications for health, safety and public administration, and improve the transport of data to be analyzed by Supercomputers [16].
In response to question Q5, we can conclude that the increase in the volume of data to be processed by supercomputers, both now and in the future, requires harmonized development not only of supercomputers themselves, as estimated by some authors [80], but also of the capacity and reliability of the Scientific Communications Networks that transport the data, as well as the consolidation of research communities.
4 Discussion
This study is based on an extensive historical review of the literature of the evolution of Supercomputing and Scientific Communications Networks, infrastructures that help to carry out simulations [96], essential for scientific work [13, 42], which in turn will promote the development of various sectors.
Supercomputing will be the driving force behind the most important milestones of science. The development of Supercomputing is based on processing large volumes of data, especially when exaflop capacity arrives in a few years’ time. It will become necessary to implement parallel processes that require complex algorithms, as well as to improve and expand the capacity offered by Scientific Communications Networks, whose improved connectivity will enable a new generation of applications to interact with machines based on cloud computing. The big volumes of data used in various fields will create new challenges and opportunities in the modeling, simulation and theory of Supercomputing. Computational challenges open up new opportunities for research. In many areas (as can be seen in Sect. 3), specialists in each field of knowledge will be essential for modeling; otherwise, Supercomputing facilities alone will not be enough to meet the future challenges of advancing science and technology.
It must not be forgotten, however, that the exponential growth in the processing power of Supercomputers, which requires constant technological advances [97] arriving every few months, is limited by the considerable increase in power consumption associated with the new infrastructures. Nowadays, growth in capacity is linked to concerns about providing services with the lowest possible energy consumption. The next development milestone, scheduled for 2018, will involve “exa-scale” Supercomputers [80], capable of processing volumes vastly superior to current limits, together with the Scientific Communications Networks needed to transport such huge data volumes.
In summary, the analysis of the five research questions demonstrates how, anticipating the future evolution of scientific infrastructures, it is possible to improve their present use and gain extensive insight regarding future use. Furthermore, it is clear that with knowledge of the different uses and future possibilities, performance will improve, not only in those areas of greatest use described, but also in new fields yet to develop.
5 Limitations of the study
This study has a number of limitations that should be considered when interpreting its results and conclusions:
-
Only academic publications relating to Supercomputing and Scientific Communications Networks in indexed journals have been examined, as well as presentations from seminars and conferences on technical matters. In this, as in other matters related to technology, there is a wide range of relevant, albeit informal, information in which experiences and projects are detailed in blogs and technical reports; this could also provide very important information and could be used as a supplement to the basis of the study.
-
Some relevant issues might remain unanswered, though of interest to further research. To know whether the questions accurately meet the objectives, the collaboration of a Group of Experts on Scientific Supercomputing and Networking Communication was requested, with the aim of verifying that the approach taken was consistent with the responses needed to fulfill the objectives.
-
We have tried to analyze the largest possible number of studies on the subject, based on a historical perspective on the main milestones, but it has been impossible to guarantee 100% inclusion of all studies of potential interest, as set out in systematic reviews. The reason for this limitation is the large amount of existing information and the excessive workload full coverage would imply, which still would not guarantee quality references.
-
The search was conducted primarily through digital databases, and one constraint encountered was that in some cases the searches had to be done by author. In other cases it was only possible to search for content inferred from keywords or the purpose of the study, with the advice of the Group of Experts on Scientific Supercomputing and Networking Communication, whose profile is more technical than academic. The number of references to the word ’Supercomputing’ in the Web of Science was 1627, of which 10% have been used to conduct the study in this article.
-
Due to the numerous fields in which Supercomputing and Scientific Communications Networks are used, there are a large number of studies that analyze the infrastructures as a means, without taking into account the ultimate goal of either this research project or the aspects covered by the objectives and questions in this article. Therefore, in many cases the information obtained was not relevant.
6 Conclusion
This paper provides a historical review of Supercomputing and Scientific Communications Networks, as well as of their current and future uses in optimizing the work of organizations, observing that the progress made by the academic and research community has historically contributed decisively to paradigm shifts. During this study, an in-depth analysis of the existing literature was conducted, identifying five research questions to demonstrate the importance of Supercomputing and Scientific Communications Networks in the advancement of science, thus enabling new paradigms that will allow high-quality, competitive research.
In particular, we have observed from the overall data collected that Supercomputing has progressed on a broad scale since its inception in the 1940s, when it was exclusive to the military field, until today, when, apart from being applied more intensively to science and various fields of knowledge, issues such as energy efficiency have become matters of great importance. The challenge for the future is the processing of large volumes of information, which requires large communication networks.
Based on the above-mentioned historical analysis, we may highlight the following conclusions about the past, present and future of Supercomputing services and Scientific Communications Networks, through answers elicited by five explorative research questions: (1) reviewing the main milestones of the past helps us to face the challenges of the future, especially with exaflop supercomputers expected in the coming years; (2) the use of supercomputers for scientific purposes has a long history, and we can conclude that hardly any scientific research in the future will be carried out without Supercomputing tools; (3) practically all fields of science and industry will experience breakthroughs using Supercomputing, so new projects and businesses can consider Supercomputing a basis for their research; (4) the rise of high-speed networks, differentiated from the commercial Internet, creates new spaces for sharing, discussion and joining forces without restrictions of space, time or distance, and for transferring large amounts of data across regions, countries and continents, making the harmonized development of Supercomputing and Scientific Communications Networks essential; and (5) it is clear that, in general, greater capacity in Scientific Communications Networks permits more optimal performance of Supercomputing services.
This study has found that the available Supercomputing facilities must meet the purpose of being suitable instruments for simulation processes in various fields, especially when increasingly often the vast majority of problems must be solved by a joint effort of multiple scientific disciplines. The development of collaborative research should be sought to optimize the use of Supercomputers.
The models for simulations in Supercomputers, due to the large volume of data, will be algorithmically and structurally complex and will contain large amounts of information. Therefore, the hardware should be efficiently used, while simultaneously trying to minimize elevated power consumption. The designs of interconnection networks among processors in each chip and among system nodes are issues that require new ideas, as do communication networks for the exchange of data. It is essential to have the means to train personnel adequately in the use of these technologies.
Finally, we must note that it will be necessary to use the current knowledge related to these matters to provide a starting point for further research and to explore new fields where the use of Supercomputing will be helpful.
References
Akhgar B, Saathoff GB, Arabnia HR, Hill R, Staniforth A, Bayerl PS (2015) Application of big data for national security: a practitioner’s guide to emerging technologies. Butterworth-Heinemann Elsevier Ltd, Oxford, United Kingdom
Alonso-Pérez S, Cuevas E, Pérez C, Querol X, Baldasano JM, Draxler R, De Bustos JJ (2011) Trend changes of African airmass intrusions in the marine boundary layer over the subtropical Eastern North Atlantic region in winter. Tellus Ser B-Chem Phys Meteorol 63(2):255–265
Arabnia HR (1990) A parallel algorithm for the arbitrary rotation of digitized images using process-and-data-decomposition approach. J Parallel Distrib Comput 10(2):188–193
Arabnia HR (1995) A distributed stereocorrelation algorithm. In: Proceedings of computer communications and networks (ICCCN’95) Fourth International Conference. IEEE, pp 479–482. doi:10.1109/ICCCN.1995.540163
Arabnia HR, Bhandarkar SM (1996) Parallel stereocorrelation on a reconfigurable multi-ring network. J Supercomput 10(3):243–270 (Springer Publishers)
Arabnia HR, Oliver MA (1987) A transputer network for the arbitrary rotation of digitised images. Comput J 30(5):425–433
Arabnia HR, Oliver MA (1987) Arbitrary rotation of raster images with SIMD machine architectures. Comput Graph Forum 6(1):3–11. doi:10.1111/j.1467-8659.1987.tb00340.x
Arabnia HR, Oliver MA (1989) A transputer network for fast operations on digitised images. Comput Graph Forum 8(1):3–11. doi:10.1111/j.1467-8659.1989.tb00448.x
Arabnia HR, Smith JW (1993) A reconfigurable interconnection network for imaging operations and its implementation using a multi-stage switching box. In: Proceedings of the 7th annual international high performance computing conference. The 1993 high performance computing: new horizons supercomputing symposium, Calgary, Alberta, Canada pp 349–357
Arif WM, Arabnia HR (2003) Parallel edge-region-based segmentation algorithm targeted at reconfigurable multi-ring network. J Supercomput 25(1):43–63
Asif M, Cencerrado A, Mula-Valls O, Manubens D, Doblas-Reyes F, Cortés A (2014) Impact of I/O and data management in ensemble large scale climate forecasting using Ec-Earth3. Procedia Comput Sci 29:2370–2379. doi:10.1016/j.procs.2014.05.221
Aspray W, Williams BO (1994) Arming American scientists: NSF and the provision of scientific computing facilities for universities, 1950–1973. Ann Hist Comput IEEE 16(4):60–74
Atkins D (2003) Report of the National Science Foundation Blue-Ribbon Advisory Panel on Cyberinfrastructure. National Science Foundation
Atkins D, Borgman C, Bindoff N, Ellisman M, Feldman S, Foster I, Ynnerman A (2010) Building a UK foundation for the transformative enhancement of research and innovation. Report of the international panel for the 2009 review of the UK research councils e-science programme, Swindon
Attig N, Gibbon P, Lippert T (2011) Trends in supercomputing: the European path to exascale. Comput Phys Commun 182(9):2041–2046
Aulkner L (2006) Internet 2 about us. http://www.internet2.edu/about. Accessed 12 Nov 2014
Avila M, Folch A, Houzeaux G, Eguzkitza B, Prieto L, Cabezón D (2013) A parallel CFD model for wind farms. Procedia Comput Sci 18:2157–2166
Bacon J (1998) Concurrent system. Operating systems, database and distributed systems: an integrated approach, 3rd edn. Addison Wesley
Balladini J, Grosclaude E, Hanzich M, Suppi R, Rexachs del Rosario D, Luque Fadón E (2010) Incidence of parallel models and scaling CPU frequency in the energy consumption of the HPC systems programming. In: XVI Congreso Argentino de Ciencias de la Computación (CACIC’10)
Banerjee P (1994) Parallel algorithms for VLSI computer-aided design. Prentice-Hall, Inc., Upper Saddle River, NJ, USA
Bermeo HP, De los Reyes E, Bonavia T (2008) Dimensions of the scientific collaboration and its contribution to the academic research groups’ scientific quality. Res Eval 18(4):301–311
Bernhard A (2009) A knowledge-based society needs quality in higher education. Probl Educ 21st Century 12:15–21
Bethel EW, Van Rosendale J, Southard D, Gaither K, Childs H, Brugger E, Ahern S (2011) Visualization at supercomputing centers: the tale of little big iron and the three skinny guys. IEEE Comput Graph 31(1):90–95
Bhandarkar SM, Arabnia HR (1995) The REFINE multiprocessor: theoretical properties and algorithms. Parallel Comput 21(11):1783–1806. doi:10.1016/0167-8191(95)00032-9
Bhandarkar SM, Arabnia HR (1995) The Hough transform on a reconfigurable multi-ring network. J Parallel Distrib Comput 24(1):107–114
Bhandarkar SM, Arabnia HR, Smith JW (1995) A reconfigurable architecture for image processing and computer vision. Int J Pattern Recognit Artif Intell (IJPRAI) 9(2):201–229 (Special issue on VLSI algorithms and architectures for computer vision, image processing, pattern recognition and AI). doi:10.1142/S0218001495000110
Biswas R, Dunbar J, Hardman J, Bailey FR, Wheeler L, Rogers S (2012) The impact of high-end computing on NASA missions. IT Prof 14(2):20–28
Bollen J, Fox G, Singhal P (2011) How and where the teragrid supercomputing infrastructure benefits science. J Informetr 5(1):114–121
Brown JR, Dinu V (2013) High performance computing methods for the integration and analysis of biomedical data using SAS. Comput Methods Progr Biomed 112(3):553–562
Bub FL, Mask AC, Wood KR, Krynen DG, Lunde BN, DeHaan CJ, Wallmark JA (2014) The Navy’s application of ocean forecasting to decision support. Oceanography 27(3):126–137
Catlett C, Allcock WE, Andrews P, Aydt RA, Bair R, Balac N, Marsteller J (2006) TeraGrid: analysis of organization, system architecture, and middleware enabling new types of applications. High perfomance computing and grids in action, vol 16, IOS Press, pp 225–249
CERN-European Organization for Nuclear Research (2011) Web. http://public.web.cern.ch/public/. Accessed July 2011
Comunidad Europea (2010) Assessing Europe’s university-based research
Cosnard M, Trystran D (1995) Parallel algorithms and architectures. International Thomson Computer, Boston
Davis NE, Robey W, Ferenbaugh CR, Nicholaeff D, Trujillo DP (2012) Paradigmatic shifts for exascale supercomputing. J Supercomput 62(2):1023–1044
De Filippo D, Morillo F, Fernández MT (2008) Indicadores de colaboración científica del CSIC con Latinoamérica en base de datos internacionales. [Indicators of Scientific Collaboration between CSIC and Latin America in International Databases]. Rev Esp de Doc Cient 31(1):66–84
Dongarra JJ, Bunch JR, Moler CB, Stewart GW (1979) LINPACK users’ guide, SIAM. doi:10.1137/1.9781611971811
Dongarra JJ, Foster I, Fox G, Gropp W (2002) The sourcebook of parallel computing. Morgan Kaufmann, San Francisco, USA
Dongarra JJ, Van Der Steen AJ (2012) High-performance computing systems: status and outlook. Acta Numér 21:379–474
Ellingson SR, Smith JC, Baudry J (2014) Polypharmacology and supercomputer-based docking: opportunities and challenges. Mol Simul 40(1):10–11
Elmagarmid AK, Samuel A, Ouzzani M (2008) Community-cyberinfrastructure-enabled discovery in science and engineering. IEEE Comput Sci Eng 10(5):46–53
Emmot S, Rison S (2005) Towards 2020 science report. Microsoft Research
Featherstone NA, Browning MK, Brun AS, Toomre J (2009) Effects of fossil magnetic fields on convective core dynamos in A-type stars. Astrophys J 705(1):1000
Fernbach S (1984) Supercomputers—past, present, prospects. Future Gener Comput Syst 1(1):23–30
Foster I, Kesselman C, Tuecke S (2001) The anatomy of the grid: enabling scalable virtual organizations. Int J High Perform Comput Appl 15(3):200–222
Gaudiani A (2012) Análisis del rendimiento de algoritmos paralelos de propósito general en GPGPU [Performance analysis of parallel algorithms for general purposes in GPGPU]. Doctoral dissertation, Facultad de Informática. http://sedici.unlp.edu.ar/bitstream/handle/10915/22691/Documento_completo__.pdf?sequence=1. Retrieved 10 Aug 2014
Geller T (2011) Supercomputing’s exaflop target. Commun ACM 54(8):16–18
Gengler M, Ubeda S, Desprez F (1996) Initiation au parallélisme: concepts, architectures et algorithmes [Introduction to parallelism: concepts, architectures and algorithms]. Masson. ISBN 2-225-85014-3
Goldberg D (1989) Genetic algorithms in search, optimization and machine learning. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA
Guellec D, Van Pottelsberghe de la Potterie B (2001) R&D and productivity growth: panel data analysis of 16 OECD countries. OECD Economic Studies No. 33/2, OECD, Paris
Guevara M, Martínez F, Arevalo G, Gassó S, Baldasano J (2013) An improved system for modelling spanish emissions: HERMESv2.0. Atmos Environ 81:209–222
Hassan AH, Fluke CJ, Barnes DG (2011) Unleashing the power of distributed CPU/GPU architectures: massive astronomical data analysis and visualization case study. arXiv:1111.6661
Hauben M (2010) History of ARPANET. http://pages.infinit.net/jbcoco/Arpa-Arpanet-Internet.pdf. Retrieved 23 Oct 2014
Hermes H (1969) Enumerability. Decidability. Computability. Die Grundlehren der Mathematischen Wissenschaften in Einzeldarstellungen, Band 127. Springer, Berlin
High Performance Computing Act of 1991 (HPCA) (1991) Act of congress promulgated in the 102nd United States Congress as (Pub.L. 102–194)
Hwang K (1993) Advanced computer architecture: parallelism, scalability, programmability. McGraw-Hill, USA
IDC (2004) White paper: council on competitiveness study of US industrial HPC users. http://www.compete.org/storage/images/uploads/File/PDF%20Files/HPC_Users_Survey%202004.pdf. Retrieved 15 June 2014
Kantardjiev AA (2012) Quantum.Ligand.Dock: protein-ligand docking with quantum entanglement refinement on a GPU system. Nucl Acids Res 40(W1):W415–W422
Kennedy K, Joy W (1998) Interim report to the president, President’s Information Technology Advisory Committee (PITAC). National Coordination Office for Computing, Information and Communication, 4201 Wilson Blvd, Suite 690, Arlington, VA 22230
Kinter JL, Cash B, Achuthavarier D, Adams J, Alshuler E, Dirmeyer P, Wong K et al (2013) Revolutionizing climate modeling with project Athena. A multi-institutional, international collaboration. Bull Am Meteorol Soc 94(2):231–245
Kramer D (2011) Supercomputing has a future in clean energy. Phys Today 64(7):27–29
Kupczyk M, Meyer N (2010) PRACE world-class computational facilities ready for Polish scientific community. Comput Methods Sci Technol 2010:57–62
Kwon HI, Kim S, Lee H, Ryu M, Kim T, Choi S (2013) Development of an engineering education framework for aerodynamic shape optimization. Int J Aeronaut Space Sci 14(4):297–309
Lavington SH (1978) The Manchester Mark I and atlas: a historical perspective. Commun ACM 21(1):4–12
Lawson SJ, Woodgate M, Steijl R, Barakos GN (2012) High performance computing for challenging problems in computational fluid dynamics. Prog Aerosp Sci 52:19–29
Lederman D, Maloney W (2003) R&D and development. World Bank Policy Research Working Paper 3024
Lee E, Huang H, Dennis JM, Chen P, Wang L (2013) An optimized parallel LSQR algorithm for seismic tomography. Comput Geosci 61:184–197
Leiner BM, Cerf VG, Clark DD, Kahn RE, Kleinrock L, Lynch DC, Wolff S (2009) A brief history of the internet. ACM SIGCOMM Comput Commun Rev 39(5):22–31
Lepora NF, Overton PG, Gurney K (2012) Efficient fitting of conductance-based model neurons from somatic current clamp. J Comput Neurosci 32(1):1–24
Lingerfelt EJ, Messer OE, Desai SS, Holt CA, Lentz EJ (2014) Near real-time data analysis of core-collapse supernova simulations with Bellerophon. Procedia Comput Sci 29:1504–1514
Markram H, Meier K, Lippert T, Grillner S, Frackowiak R, Dehaene S, Saria A (2011) Introducing the human brain project. Procedia Comput Sci 7:39–42
Martin HSC, Jha S, Coveney PV (2014) Comparative analysis of nucleotide translocation through protein nanopores using steered molecular dynamics and an adaptive biasing force. J Comput Chem 35(9):692–702
Matsuzaki Y, Uchikoga N, Ohue M, Shimoda T, Sato T, Ishida T, Akiyama Y (2013) MEGADOCK 3.0: a high-performance protein-protein interaction prediction software using hybrid parallel computing for petascale supercomputing environments. Source Code Biol Med 8:18
McCartney S (1999) ENIAC: the triumphs and tragedies of the world’s first computer. Walker & Company. ISBN: 0802713483
Meuer HW, Gietl H (2013) Supercomputers-prestige objects or crucial tools for science and industry? PIK-Praxis der Informationsverarbeitung und Kommunikation 36(2):117–128
Menhorn F, Reumann M (2013) Genome assembly framework on massively parallel, distributed memory supercomputers. Biomed Eng/Biomed Tech. doi:10.1515/bmt-2013-4309
Metcalfe TS, Mathur S, Dogan G, Woitaszek M (2012) First results from the asteroseismic modeling portal. Prog Sol/Stellar Phys Helio- Asteroseismol 462:213
Milone D, Azar A, Rufiner L (2002) Supercomputadoras basadas en “clusters” de PCs. Trabajo de desarrollo tecnológico realizado en el Laboratorio de Cibernética de la Facultad de Ingeniería (UNER) [Supercomputers based on PC “clusters”. Technological development project carried out in the Cybernetics Laboratory of the Faculty of Engineering (UNER)]. Revista Ciencia, Docencia y Tecnología 8(25):173–208
Moraleda A (2007) Supercomputing: a qualitative leap for competitiveness. Economistas 26(116):294–297
Munetomo M (2011) Realizing robust and scalable evolutionary algorithms toward exascale era. In: IEEE congress on evolutionary computation (CEC), pp 312–317
National Academy of Sciences (2005) Getting up to speed: the future of supercomputing. In: Committee on the future of supercomputing, computer science and telecommunications board, division on engineering and physical sciences. National Research Council of the National Academies. The National Academies Press, Washington, D.C
Neic A, Liebmann M, Hoetzl E, Mitchell L, Vigmond EJ, Haase G, Plank G (2012) Accelerating cardiac biodomain simulations using graphics processing units. IEEE Trans Biomed Eng 59(8):2281–2290
Nesmachnow S (2014) Planificación de tareas en sistemas cluster, grid y cloud utilizando algoritmos evolutivos. [Scheduling in cluster systems, grids and clouds using evolutionary algorithms]. Komputer Sapiens 6 (1)
Nicolosi F, Della Vecchia P, Ciliberti D (2013) An investigation on vertical tail plane contribution to aircraft sideforce. Aerosp Sci Technol 28(1):401–416
O’Neill JE (1995) The role of ARPA in the development of the ARPANET, 1961–1972. Ann Hist Comput IEEE 17(4):76–81
Patterson CA, Snir M, Graham SL (2005) Getting up to speed: the future of supercomputing. National Academies Press, Washington, D.C
Perine K (2000) The early adopter—Al Gore and the internet—government activity. The Industry Standard
Puckelwartz MJ, Pesce L, Nelakuditi V, Dellefave-Castillo L, Golbus JR, Day SM, McNally EM (2014) Supercomputing for the parallelization of whole genome analysis. Bioinformatics 30(11):1508–1513
RedCLARA (2011) Compendio RedCLARA de Redes Nacionales de Investigación y Educación Latinoamericanas. [CLARA compendium of Latin American national research networks and education]. http://dspace.redclara.net/bitstream/10786/918/1/2011_CompendioRedCLARA_ES.pdf. Retrieved 10 June 2014
Reumann M, Makalic E, Goudey BW, Inouye M, Bickerstaffe A, Bui M, Hopper JL (2012) Supercomputing enabling exhaustive statistical analysis of genome wide association study data: preliminary results. In: Engineering in medicine and biology society (EMBC), 2012 annual international conference of the IEEE, pp 1258–1261
Romer P (1990) Endogenous technological change. J Polit Econ 98(5):71–102
Rosenberg LC (1991) Update on national science foundation funding of the “collaboratory”. Commun ACM 34(12):83
Saini S, Rappleye J, Chang J, Barker D, Mehrotra P, Biswas R (2012) I/O performance characterization of Lustre and NASA applications on Pleiades. In: 19th international conference on high performance computing. IEEE High Performance Computing (HiPC), pp 1–10
Sanz MA (1998) Fundamentos históricos de la Internet en Europa y en España. [Historical foundations of the internet in Europe and Spain]. Boletín Rediris 45. http://www.rediris.es/difusion/publicaciones/boletin/45/enfoque2.html. Retrieved 12 May 2014
Saunders VR, Guest MF (1982) Applications of the CRAY-1 for quantum chemistry calculations. Comput Phys Commun 26(3):389–395
Sawyer M, Parsons M (2011) A strategy for research and innovation through high performance computing. The University of Edinburgh, Edinburgh
Schaller R (1997) Moore’s law: past, present and future. Spectrum IEEE 34(6):52–59
Shen B, Nelson B, Cheung S, Tao WK (2013) Improving NASA’s multiscale modeling framework for tropical cyclone climate study. Comput Sci Eng 15(5):56–67
Simmons ML, Wasserman HJ (1990) Performance comparison of the CRAY-2 and CRAY X-MP/416 supercomputers. J Supercomput 4(2):153–167
Singh S (2000) The code book: the secret history of codes and code-breaking. Fourth Estate, London, pp 77–85
Skupin A, Biberstine J, Boerner K (2013) Visualizing the topical structure of the medical sciences: a self-organizing map approach. PLoS One 8(3):e58779. doi:10.1371/journal.pone.0058779
Soete L, O’Doherty D, Arnold E, Bounfour A, Fagerberg J, Farinello U, Schiestock G (2002) Benchmarking national research policies: the impact of RTD on competitiveness and employment (IRCE). A Strata-ETAN Expert Working Group, European Commission DG Research, Brussels
Stamatakis A, Aberer AJ, Goll C, Smith SA, Berger SA, Izquierdo-Carrasco F (2012) RAxML-light: a tool for computing terabyte phylogenies. Bioinformatics 28(15):2064–2066
Stein RF, Lagerfjärd A, Nordlund Å, Georgobiani D (2012) Helioseismic data from emerging flux simulations. Prog Sol/Stellar Phys Helio-Asteroseismol 462:345
Stone H (1993) High performance computer architectures. Addison Wesley, Massachusetts, USA
Stoyanov D, Grigorov I, Deleva A, Kolev N, Peshev Z, Kolarov G, Ivanov D (2013) Remote monitoring of aerosol layers over Sofia during Sahara dust transport episode (April 2012). In: Seventeenth international school on quantum electronics: laser physics and applications, pp 87700Y–87700Y. International society for optics and photonics. doi:10.1117/12.2014154
Sumiyoshi K (2011) A numerical challenge on the core-collapse supernovae: physics of neutrino and matter at extreme conditions. J Phys Conf Ser 302(1):012060
Thornton JE (1970) Design of a computer—the control data 6600, Scott Foresman & Co., Glenview, Illinois, USA
Toomre J, Augustson KC, Brown BP, Browning MK, Brun AS, Featherstone NA, Miesch MS (2012) New era in 3-D modeling of convection and magnetic dynamos in stellar envelopes and cores. In: Progress in solar/stellar physics with Helio- and asteroseismology, vol 462, pp 331
Turpin T, Lian Y, Tong J, Fang X (1995) Technology and innovation networks in the People’s Republic of China. J Ind Stud 2(2):63–74
Um J, Choi H, Song SK, Choi SP, Mook Yoon H, Jung H, Kim TH (2013) Development of a virtualized supercomputing environment for genomic analysis. J Supercomput 65(1):71–85
Utreras F (2014) Visión y Proyectos: el presente y el futuro. [Vision and projects: present and future]. XI Encuentro Temático Nacional Renata—RUP. RedClara, Popayán Colombia
Villarubia C (2012) La Comisión Europea dobla el presupuesto para HPC [The European commission doubles the budget for HPC]. http://www.bsc.es/sites/default/files/public/about/news/hpc-22022012-datacenterdynamics.pdf. Retrieved 27 Aug 2014
Von Neumann J (1945) First draft of a report on the EDVAC. Contract between the United States Army Ordnance Department and the University of Pennsylvania Moore School of Electrical Engineering. Contract No. W-670-ORD-4926
Wang R, Harris C (2013) Scaling radio astronomy signal correlation on heterogeneous supercomputers using various data distribution methodologies. Exp Astron 36(3):433–449
Westerlund S, Harris C (2014) A framework for HI spectral source finding using distributed-memory supercomputing. Publ Astron Soc Aust 31:e023. doi:10.1017/pasa.2014.18
Wilkes MV, Renwick W (1950) The EDSAC (electronic delay storage automatic calculator). Math Comput 4(30):61–65
Winkel M, Speck R, Huebner H, Arnols L, Krause R, Gibbon P (2012) A massively parallel, multi-disciplinary Barnes–Hut tree code for extreme-scale n-body simulations. Comput Phys Commun 183(4):880–889
Woolgar L (2007) New institutional policies for university-industry links. Jpn Res Policy 36:1261–1274
Wu W (2007) Cultivating research universities and industrial linkages in China: the case of Shanghai. World Dev 35(6):1075–1093
Wu Y, Cai X-C (2014) A fully implicit domain decomposition based ale framework for three-dimensional fluid-structure interaction with application in blood flow computation. J Comput Phys 258:524–537
Yamamoto K (2004) Corporatization of national universities in Japan: revolution for governance or rhetoric for downsizing? Financ Account Manag 20(2):153–181
Yang W, Yoshigoe K, Qin X, Liu JS, Yang JY, Niemierko A, Deng Y, Liu Y, Dunker AK, Chen Z, Wang L, Xu D, Arabnia HR, Tong W, Yang MQ (2014) Identification of genes and pathways involved in kidney renal clear cell carcinoma. BMC Bioinform 15(Suppl 17):S2
Yin S, Luo H, Ding S (2013) Real-time implementation of fault-tolerant control systems with performance optimization. IEEE Trans Ind Electron 61(5):2402–2411
Yoshida H (2013) Cloud-super-computing virtual colonoscopy with motion-based navigation for colon cancer screening. In: IEEE third international conference on Consumer
Zhou P, Leydesdorff L (2006) The emergence of China as a leading nation in science. Res Policy 35(1):83–104
Acknowledgments
The authors acknowledge financial support from the Spanish Ministry of Science and Competitiveness Grant (ECO2012-35439). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper.
Cite this article
Fernández-González, Á., Rosillo, R., Miguel-Dávila, J.Á. et al. Historical review and future challenges in Supercomputing and Networks of Scientific Communication. J Supercomput 71, 4476–4503 (2015). https://doi.org/10.1007/s11227-015-1544-3