
1 Introduction

1.1 What is High Performance Computing?

Computing is the process of using computer technology, both hardware and software, to complete tasks. According to Dayanni and Khayyambashi (2013), high performance refers to the speed at which data can be accessed and shared among a set of distributed systems. Growing demands for faster data processing have driven extensive progress in High Performance Computing (HPC), which takes forms such as the commodity HPC cluster, the dedicated supercomputer, HPC cloud computing and grid computing (Eadline 2009). HPC uses the largest and fastest computers to solve complex problems, particularly in modern science and engineering. It provides an exceptional computing environment and has become an indispensable tool in scientific and industrial communities (Ranilla et al. 2014). However, the collected data are sometimes inconsistent and ambiguous, making them difficult to interpret; HPC systems are prone to such failures because they are usually used to run advanced application programs (Shawky 2014). HPC draws on computer architecture, computer networks, algorithms, programs, electronics, system software and operating environments to create adaptable problem-solving systems.

Today, thanks to cutting-edge computing technology, HPC techniques are applied to devices as varied as smartphones, tablets, laptops and stand-alone computers. These devices have become computational resources built on multi-core processors, possibly including coprocessors that reduce the load on the main microprocessor and allow it to perform at optimum speed. Organizations benefit from HPC through faster times to solution, better science, informed decisions, more competitive products, and the higher profits these can lead to. HPC represents a tremendous competitive edge in the marketplace because it gives users the capability to quickly model and then manipulate a product or process, observing the impact of a range of decisions before they are implemented.

1.2 What is Grid Computing?

From an information systems perspective, grid computing consists of computing infrastructure technologies or machines that gather different computer resources, such as storage and processors, to provide computing support for various applications, turning a computer network into a powerful supercomputer (Oesterle et al. 2015). Such a grid disseminates information and requires an environment that enables sharing through virtualization, not only of servers and CPUs but also of storage, networks and applications. This virtualization differs inside and outside the enterprise. Inside the business enterprise, virtualization is carried out across distributed technologies, platforms and organizations, emphasizing the social aspect of organizations. Virtualizing externally, however, can span the Internet, making it possible for enterprises to draw on resources from manufacturers and suppliers. It makes networking and collaboration easier, as communication is enhanced through shared information (Jacob et al. 2005), and it is often used to complete complicated or tedious mathematical or scientific calculations, in standard application areas such as scheduling, security, accounting and systems management. Grid also enables massively scaled architectures and brings with it a host of organizational as well as technical challenges. No special networking is required for a grid, and it is not limited to the local LAN (Vile and Liddle 2008).

2 Discussions

2.1 How are High Performance Computing and Grid Computing interrelated?

Applying compute cycles to a problem can be done in many ways. According to Eadline (2009), there are four different types of High Performance Computing systems: the commodity HPC cluster, the dedicated supercomputer, HPC cloud computing and grid computing.

The dedicated supercomputer does not offer the commodity bargain, but such machines are still produced. Before the commodity HPC cluster, a dedicated supercomputer was the only way to bring a large number of compute cycles to bear on a problem, and supercomputers remain in use where the occasion demands it.

HPC cloud computing is distributed computing built from services scattered over multiple locations; compared with supercomputers, it offers the end-user virtualization flexibility and progressively extensible resources as a service. The HPC cloud offers a range of benefits, including elasticity, small start-up and maintenance costs, and economies of scale. However, it also places some layers between the user and the hardware, which may reduce performance.

Grid computing and HPC cloud computing are complementary, but grid computing requires more control on the part of the user. It is a more economical way of achieving the processing capability of HPC, as running an analysis on a grid is effectively free of charge for the individual researcher since no dedicated system has to be purchased. A computational grid resembles a massively parallel supercomputer in that its software makes existing computer hardware work in concert.

In large HPC systems, input/output (I/O) forwarding isolates local compute clients, connected by a high-bandwidth, low-latency interconnect, from the more distant, higher-latency parallel file system. It protects the file system from being overwhelmed by a bombardment of requests by collecting and merging requests before transmitting them to the file system. From the perspective of the remote file system, this cuts down the number of visible clients and requests, and hence improves performance. In a Grid computing environment these optimizations are also applicable, albeit at a different scale. While latencies may be much higher, the same discontinuity occurs when an application running on a local Grid resource needs to retrieve data from a remote data store. As in large HPC systems, a substantial number of simultaneous requests to a remote site might adversely affect the stability and security of the remote file server. This observation is valid both for data staging and for wide-area Grid file systems (Cope et al. 2010).
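To make the aggregation idea concrete, the sketch below shows, in plain Python, how an I/O forwarding layer might merge many small, contiguous client read requests into a few larger ones before passing them to the parallel file system. It is a toy illustration under our own assumptions, not the mechanism described by Cope et al. (2010).

```python
# Toy model of I/O request aggregation at a forwarding node: contiguous
# (offset, length) client reads are merged before reaching the file system.

def merge_requests(requests):
    """Merge contiguous or overlapping (offset, length) reads into fewer, larger ones."""
    merged = []
    for offset, length in sorted(requests):
        if merged and offset <= merged[-1][0] + merged[-1][1]:
            # Request touches or overlaps the previous one: extend it.
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_len, offset + length - prev_off))
        else:
            merged.append((offset, length))
    return merged

if __name__ == "__main__":
    # Six small client requests become three larger ones seen by the file system.
    client_requests = [(0, 4), (4, 4), (8, 4), (100, 16), (116, 8), (400, 32)]
    print(merge_requests(client_requests))   # [(0, 12), (100, 24), (400, 32)]
```

The file system then sees three requests instead of six, which is exactly the reduction in visible clients and requests described above.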

Grids and HPC are therefore interrelated in the sense that an organization or institution can provide and access resources at the same time. This is specifically applicable to grid computing, where access is 'paid' for in kind by donating access to one's own resources. Upfront development for grid computing can be a long process, as many different systems must be dealt with and one cannot control what a specific institution runs on its machines. Using Grid and HPC tools in the cloud also allows smaller businesses and individuals to pay only for what they use, and allows new business models to come to life without the large price tag of assembling all the required software licences and hardware.

2.2 Application 1 – Autonomic Management

2.2.1 What is Autonomic Management?

Autonomic management aims to tame the complexity that entangles the management of large systems, mainly by equipping their components with self-management facilities.

2.2.2 The Relationship of Autonomic Management with Grid Computing and High Performance Computing

2.2.2.1 The Relationship of Autonomic Management with Grid Computing

Grid computing, as a large-scale distributed system, has its own specific complexity that makes its management a highly troublesome task. A possible solution is autonomic computing, which supplies the grid system with tools that may simplify the system administrator's decisions (Sanchez 2010). An alternative way to better understand the grid is to study it as a single entity, since the system involves so many resources that analysing and applying policies to each one individually would be inefficient. This alternative approach applies self-adaptive techniques to a single-entity view of the grid in order to support autonomic management and raise the level of dependability.

Agustin et al. (2007) proposed an autonomic network-aware scheduling infrastructure that is capable of adapting its behaviour to the current status of the environment. Grid technologies have enabled the aggregation of geographically distributed resources in the context of a particular application. The network remains an important requirement for any grid application, as the entities involved in a grid system (such as users, services and data) need to communicate with each other over a network. The performance of the network must therefore be considered when carrying out tasks such as scheduling, migration or monitoring of jobs. Making use of the network in an efficient and fault-tolerant manner, in the context of such existing research, raises a significant number of research challenges. One way to address these problems is to make grid middleware incorporate the concept of autonomic systems. Such a change would involve the development of "self-configuring" systems that are able to make decisions autonomously and adapt themselves as the system status changes.
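The following is a minimal sketch of the kind of monitor-analyse-plan-execute loop that such a self-configuring, network-aware scheduler could run: jobs are periodically reassigned to the grid site whose measured network latency is currently lowest. The site names, the latency probe and the adaptation policy are illustrative assumptions, not the infrastructure proposed by Agustin et al. (2007).

```python
# Minimal autonomic scheduling loop: monitor network latency to each site,
# analyse which is best, and adapt the job's placement when the best changes.
import random
import time

SITES = ["site-a", "site-b", "site-c"]

def monitor_latency(site):
    # Placeholder for a real network measurement (e.g. a ping or monitoring probe).
    return random.uniform(5.0, 50.0)   # milliseconds

def autonomic_schedule(job, rounds=3, interval=1.0):
    assignment = None
    for _ in range(rounds):
        latencies = {site: monitor_latency(site) for site in SITES}   # Monitor
        best = min(latencies, key=latencies.get)                      # Analyse
        if best != assignment:                                        # Plan
            print(f"adapting: moving {job} to {best} ({latencies[best]:.1f} ms)")
            assignment = best                                         # Execute
        time.sleep(interval)
    return assignment

if __name__ == "__main__":
    autonomic_schedule("climate-model-run")
```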

2.2.2.2 The Relationship of Autonomic Management with High Performance Computing

High Performance Computing (HPC) systems have evolved into complex and powerful systems. Their components include complete computers with on-board CPUs, storage, power supplies and network interfaces, connected to a network (private, public or the Internet) through a conventional interface such as Ethernet; together these components take on different roles and build up the high performance computing system. The relationship between autonomic management and high performance computing can be seen in the fact that current HPC systems consume several megawatts of power, so energy and power efficiency has to be addressed alongside requirements such as solution quality, performance, reliability and other objectives.

2.2.3 Autonomic Management Issues in Grid Computing

The intrinsic complexity of grid systems is difficult to address with traditional management techniques. In grids, the system characteristics – heterogeneity, variability and decentralization – require several varied perspectives on system management that traditional behaviour patterns do not easily accommodate. Sanchez (2010) states that although these characteristics can be accommodated by adopting an autonomic computing approach, the grid's complexity still manifests itself in its four main areas: self-configuration, self-healing, self-optimization and self-protection.

  • Self-Configuration issues

The variability and heterogeneity of most grids make resource configuration a considerable problem. Most traditional distributed approaches – cluster computing among others – frequently display desirable characteristics that make system configuration almost exclusively a design-time problem. In those settings, reconfiguration can usually be performed in an off-line or semi-off-line state and typically entails some degree of redesign of the system's structure.

  • Self-Healing issues

Various situations arise naturally from the characteristics of grid systems: resources may appear and disappear at unpredictable times and rates, network links may be temporarily or even permanently disrupted, and parts of the system beyond the control of global system administrators may become overloaded. All these situations are considered normal in a grid's typical operating environment, even though they have a direct impact on its dependability. However, a grid's role is to act as a large pool of resources that together fulfil a set of services, so individual faults are difficult to categorize as such because they are diluted by the grid's overall cohesion. What should be valued in its operation is therefore the provision of high-quality services rather than the state of its individual internal resources.

  • Self-Optimization issues

From an abstract point of view, optimizing the level of service provided entails modelling the functionality of the system as well as recognizing and acting on the performance limitations of its environment. To make the most of the system's available resources, a fundamental understanding of the system helps in developing advanced management strategies and policies, since performance improvement relies heavily on understanding the behaviour of the system.

  • Self-Protection issues

Proactive identification of and protection from external attacks is of particular importance, given the distributed, heterogeneous and decentralized nature of grids. An obligatory first step is to shield each individual resource, which can be done by applying traditional, proven techniques to prevent undisciplined usage and other security threats. Unfortunately, these techniques are insufficient on their own because of the large amount of resource interaction in grid systems, which calls for protection mechanisms focused on the system's global perspective. The grid's complexity makes this a heavy task, requiring study of the system and in-depth analysis of the resources' internal and external interactions.

2.2.4 Autonomic Management Issues in High Performance Computing

Viewing the relationship between autonomic management and high performance computing in terms of the main goal of extending power management to different levels, several autonomic mechanisms have been adapted to manage the energy efficiency of HPC in an integrated manner across a dynamic cross-layer base. This adaptation is necessary because current data management approaches operate on centralized data repositories and cannot support extreme rates of data generation and large distribution scales (Rodero and Parashar 2012).

The energy trade-offs of power management techniques have been modelled at varying levels, and model-driven autonomic optimizations and adaptations have been designed around a cross-layer approach to produce effective adaptations and optimizations. Innovative test beds for experimental evaluation and modern platforms such as the Intel SCC many-core system have also been explored thoroughly, along with energy-efficient and thermal-aware autonomic management of high performance computing workloads.

The proposed approach can be decomposed into the following stages: (1) characterize relevant data-intensive applications and high performance computing benchmarks and thoroughly explore their power/performance trade-offs in order to give the models concrete meaning, (2) develop a set of strategies, application extensions and usage modes to be exploited by the model-driven optimizations and adaptations, (3) enable cross-layer interactions and integrate them with the runtime system, and lastly (4) use simulation to explore the suggested approach at a larger scale, taking into account non-standard hardware configurations and hardware/software co-design, and determine how these components affect the applications and the system.

Furthermore, the main challenges of this cross-layer approach can be summed up as: (1) understanding and maintaining application models that facilitate effective optimizations and adaptations at varying levels; (2) studying the effect of these adaptations on other dimensions, including performance, data management and the way the different levels of the memory hierarchy are used; and (3) understanding the control plane and the mechanisms needed to implement and enforce such a management approach (Rodero and Parashar 2012).
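As a simplified illustration of one such power/performance trade-off, the sketch below picks the lowest processor frequency that still satisfies an observed utilization target. The frequency/power table and the policy are assumptions made here for illustration only; they do not describe Rodero and Parashar's (2012) cross-layer framework.

```python
# Minimal sketch of an autonomic power-management decision: choose the
# cheapest P-state (frequency, power) that still meets the performance need.

# (frequency in GHz, estimated socket power in watts) - illustrative values
P_STATES = [(1.2, 45.0), (1.8, 70.0), (2.4, 95.0), (3.0, 130.0)]

def choose_p_state(utilization, max_freq=3.0):
    """Scale the needed frequency from observed utilization, then pick the
    lowest-power P-state that satisfies it."""
    needed = utilization * max_freq
    for freq, power in P_STATES:          # ordered from lowest to highest power
        if freq >= needed:
            return freq, power
    return P_STATES[-1]                   # saturated: run at the top state

if __name__ == "__main__":
    for util in (0.2, 0.55, 0.95):
        freq, power = choose_p_state(util)
        print(f"utilization {util:.0%} -> {freq} GHz, ~{power} W")
```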

2.3 Application 2 – HPC and Grid Computing in the Finance Sector

2.3.1 The Role of High Performance Computing in Finance Sector

The impact of the financial turmoil on ICT has been quick and far-reaching, resulting in a general reduction in investment and making clear that ICT priorities must focus on helping the business recover its profitability and on supporting change and streamlining. At the same time, ICT departments find themselves hard pressed by the conflicting demands of cutting costs while improving responsiveness and response times to the needs of the business (Nicoletti 2013).

HPC has been prominent in the financial sector for years, even when it was bulky and complicated to use. Today, HPC is used to solve complicated and advanced computational problems; in financial services it is mainly applied to regulatory compliance, pricing, pre-trade risk analysis and the forecasting of future trends. The financial sector opts for HPC mainly because of its ability to handle massive amounts of data at very high speed and to solve complex computations.

Garg et al. (2012) give an example in which an investor wants to decide whether to buy a stock at a future date at a particular price, based on basic information visible to the public through sources such as Yahoo! Finance. This type of future investment is known as an option, and pricing it requires knowledge of the current stock price and of the chances of exercising the option at a profit. There are many algorithms for pricing an option, giving rise to what finance calls the option pricing problem. Investors interested in pricing an option need a working knowledge of these algorithms to make a well-informed decision, yet the algorithms are computationally intensive, require parallel processing to obtain results in real time, and are difficult to understand without the appropriate background. Option pricing therefore falls into the category of HPC applications, and considerable work has been done in this field.
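One widely used family of option pricing algorithms is Monte Carlo simulation, which is naturally parallel because each simulated price path is independent. The sketch below prices a European call under standard Black-Scholes assumptions on a single machine; in an HPC or grid setting the paths would simply be split across many nodes and the partial averages combined. This is a generic textbook method, not the specific algorithms discussed by Garg et al. (2012).

```python
# Monte Carlo pricing of a European call under geometric Brownian motion
# (Black-Scholes assumptions): simulate terminal prices, average the payoff,
# and discount it back to today.
import numpy as np

def monte_carlo_call(spot, strike, rate, vol, maturity, n_paths=1_000_000, seed=0):
    """Estimate the price of a European call option by simulation."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Risk-neutral terminal price: S_T = S0 * exp((r - 0.5*sigma^2)*T + sigma*sqrt(T)*Z)
    s_t = spot * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

if __name__ == "__main__":
    price = monte_carlo_call(spot=100, strike=105, rate=0.02, vol=0.25, maturity=1.0)
    print(f"estimated call price: {price:.2f}")
```

Because the paths are independent, the same routine parallelizes trivially: each node simulates a share of the paths and the discounted averages are combined, which is why such workloads suit HPC clusters and grids.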

One well-known episode that ushered the financial sector towards HPC was the financial crisis of 2007-2008, where miscalculations occurred in part because traditional computing technology could not reliably handle such complex and heavy computations. Ever since then, HPC has gained prominence across the financial sector.

As Stanoevska-Slabeva et al. (2010) state, companies now need to survive and develop competitive advantage in a dynamic and unstable environment of global competition and accelerated business change.

2.3.2 The Role of Grid Computing in Finance Sector

As researched by Bingxiang and Lihua (2012), the application of grid technology can strengthen the financial system, improve its ability to process data and address various technical problems, thereby cutting the financial cost of doing business. Many international companies have established their own financial grid systems to meet demand, deploy resources and achieve efficient cost savings.

They also note that this technology helps firms find solutions to complex computational problems and reduce the costs of doing business, and that many financial companies have set up their own financial grid systems to meet their demands for deploying resources, improving resource utilization and efficiency, and saving cost.

The financial system needs to build a virtual grid focal point through which computing, information services and other resources can be shared. Problems arise when the intensity of calculation that can be carried out lags behind what the financial sector requires, which affects product innovation, pricing, loan analysis and the full utilization of resources towards the business's objectives; in particular, rather expensive machines may be stopped or left idle, which incurs cost (Zhang 2014).

According to Zhang (2014), the grid system used in finance can be divided into a resource layer, a middleware layer that acts as a middleman between the software and the database, and a web environment layer for financial applications. The resource layer includes the grid system's main groundwork: the grid nodes and broadband network systems, where a node consists of resources such as supercomputers, application software, databases and cluster systems. Its main objective is to expose the hardware and software resources, provide access to the platform and control their management.

2.3.3 The Future of High Performance Computing and Grid Computing

International Data Corporation (2015) found that business firms are moving to HPC rapidly because of its advantages and the reduction in cost. Even though recent trends point to great potential growth for HPC, challenges remain, such as the difficulty of finding enough data centre space and power. Bandwidth limitations also play a huge role when moving large data sets around geographically dispersed networks that require high reliability and high-bandwidth interconnectivity.

In the future, it is predicted that cloud computing may expand greatly, as it has clear advantages over traditional HPC. Despite the similarities between HPC and cloud computing, they differ in many known ways: both are driven by cost, reliability and energy-efficiency imperatives, but the commercial cloud must provide continuous service even in the presence of failures (Geist and Reed 2015).

Furthermore, HPC is bulky and consumes a lot of space compared with the more recent innovation of cloud computing. The trend towards the cloud also makes it easier for businesses to scale up or down rapidly and to ensure availability.

For instance, NASDAQ OMX Group announced around 2012 the launch of a new cloud computing platform named FinQloud. It is powered by Amazon Web Services and is built specifically for the financial services industry. The platform caters to regulators, broker-dealers and advisory services, allowing them to exchange data within a global framework.

Another example is Aneka, an enterprise cloud computing solution that harnesses the power of computing resources in private and public clouds and delivers the desired quality of service to users. It is highly adaptable and supports multiple programming models, which sets Aneka apart; it is designed for applications ranging from finance to computational science, as Vecchiola et al. (2009) state.

2.4 Application 3 – Service Level System Management

In this section, we use published research as evidence to support our claim that grid computing and high performance computing are used in service-level system management. Examples of service-level system management include health care, e-science and schools. We also cover some of the drawbacks of grid computing and high performance computing and how to overcome those challenges.

Health care

Grid computing and high performance computing play an important role in health care. Karthikeyan and Manjula (2010) state that grid computing helps doctors make medical decisions, eases the transfer of and collaboration over information, and improves medication management. They further add that grid computing may offer more advanced services such as telemedicine, daily monitoring and the exchange of information between two or more parties.

Electronic services (e-services) such as storing patients' records can be improved with the help of grid computing (Liu and Zhu 2013). Keeping patients' records on paper is not environmentally friendly, since editing a record means attaching more paper to the file; it is also time consuming, and the records can be lost (Liu and Zhu 2013). These problems can be addressed if grid computing is used instead. In the same article, the authors mention that the resource grid layer, where grid computing is used, has two primary functions: to help providers and clients connect, and to build a reliable setting in which business operations and transactions can take place over the Internet via grid computing.

High performance computing (HPC) is vital for capturing more biological knowledge, according to Guerro et al. (2014). Their study focused on how HPC can benefit bioinformatics. They state that having the most efficient performance is favourable, but that some organisations should focus more on cutting the cost of building an HPC system. They found that volunteer computing is not a remedy for every case, because its suitability depends on the nature of the problem and on the compatible resources available during a project. Overall, HPC is used mainly to gather knowledge with less delay, thanks to the GPUs that the computers use.

E-science

E-science is a branch of science that requires significant amounts of computing resources and massive data sets in order to perform scientific enquiry, and the data produced may need experts' scrutiny (Mustafee 2010). Mustafee (2010) explains that grid computing is an asset for e-science because of its wide application across various fields. Projects that rely on grid computing include the Earth System Grid (ESG), whose focal point is climatology, the Large Hadron Collider, which focuses on particle physics, and simulations for earthquake engineering (Mustafee 2010). Grid computing is costly to set up, yet it is regarded as an investment because it is used to solve bigger problems (Mustafee 2010).

Barati et al. (2014) proposed an algorithm to minimise failures in grid computing so that it can be applied to resource management in e-science projects. They sought to reduce the errors that occur when an agent fails to complete the resource chain by defining a new agent called the task agent. The task agent creates sub-agents that communicate with the resource agents to check their success or failure. If a resource agent breaks down, the sub-agent finds another resource agent that holds the required data and maintains connectivity with the new resource agent. They found an inverse relationship between the number of resource types and the success percentage of the data transferred, but they still intend to use the algorithm in future, presumably with further refinement, and hope to overcome this inverse relationship. The failover idea is sketched below.
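The sketch that follows is a minimal, hypothetical illustration of that failover idea; the class and method names are our own assumptions and do not reproduce Barati et al.'s (2014) implementation.

```python
# Hypothetical sketch of sub-agent failover: if one resource agent breaks
# down, the sub-agent looks for another agent that holds the required data.
import random

class ResourceAgent:
    def __init__(self, name, data):
        self.name = name
        self.data = data            # resource types this agent can serve

    def fetch(self, resource_type):
        if random.random() < 0.2:   # simulate an unpredictable breakdown
            raise ConnectionError(f"{self.name} failed")
        if resource_type not in self.data:
            raise KeyError(f"{self.name} lacks {resource_type}")
        return f"{resource_type} from {self.name}"

class SubAgent:
    """Checks resource agents and fails over to another holder of the data."""
    def __init__(self, agents):
        self.agents = agents

    def retrieve(self, resource_type):
        candidates = [a for a in self.agents if resource_type in a.data]
        for agent in candidates:            # try each candidate in turn
            try:
                return agent.fetch(resource_type)
            except (ConnectionError, KeyError):
                continue                    # breakdown: look for another holder
        raise RuntimeError(f"no resource agent could supply {resource_type}")

if __name__ == "__main__":
    agents = [ResourceAgent("ra1", {"climate"}),
              ResourceAgent("ra2", {"climate", "physics"})]
    print(SubAgent(agents).retrieve("climate"))
```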

HPC is also used in the sciences as well as in many other areas, as Bennedict (2013) demonstrates. In his survey, he found that although HPC-based cloud applications are used in various fields of work, they still face obstacles such as resource outages, SLA violations and the lack of adequate performance analysis tools. In his research he proposed several solutions to these problems, and it is hoped that some of them will be implemented, as this would bring greater advantage to all parties concerned.

School

Given that schools usually occupy large premises, grid computing can certainly be utilised, as machines can be linked over the Internet or low-speed networks. Grid computing is particularly useful because a school tends to have at least hundreds of students accessing the school's network, which can slow the Internet connection. Lee et al. (2008) explain the execution process for a person who wishes to use the learning platform. First, the learner enters the grid portal and keys in a login name and password. If the login is acknowledged, the system provides a list of available resources along with their status. The distribution of computation or data across other computers in the same grid is assigned by a broker. GridFTP (a file transfer protocol) is used to access a computer's resources for the distribution of data, whereas for high-performance resources the Replica Location Service is used instead. Thus, students can benefit from grid computing. A simplified sketch of this broker-based flow appears below.
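The sketch below mimics that flow in miniature: a login check at the portal, a listing of resources with their status, and a broker that assigns the job to the least-loaded idle resource. The resource table and functions are illustrative assumptions; they do not call the real Globus or GridFTP APIs.

```python
# Simplified, hypothetical model of the portal/broker flow described above.

RESOURCES = [
    {"name": "lab-cluster-a", "status": "busy", "load": 0.9},
    {"name": "lab-cluster-b", "status": "idle", "load": 0.1},
    {"name": "library-node",  "status": "idle", "load": 0.3},
]

def login(username, password):
    # Placeholder check; a real portal would consult its user database.
    return bool(username and password)

def list_resources():
    return [(r["name"], r["status"]) for r in RESOURCES]

def broker_assign(job):
    """Pick the least-loaded idle resource for the learner's job."""
    idle = [r for r in RESOURCES if r["status"] == "idle"]
    if not idle:
        raise RuntimeError("no idle resource available")
    chosen = min(idle, key=lambda r: r["load"])
    return f"job '{job}' assigned to {chosen['name']}"

if __name__ == "__main__":
    if login("student01", "secret"):
        print(list_resources())
        print(broker_assign("physics-simulation"))
```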

Another study, by Goldsborough (2010), argues that grid computing is not the most popular form of file sharing; cloud computing is. Cloud computing is more favourable than grid computing because it does not require intensive processing power or space to hold valuable information, it is cost friendly (services are often free of charge) and it is less time consuming. It benefits not just businesses but also other organizations such as schools. Among the many advantages cloud computing offers, perhaps the most important is cloud backup: we never know when our technology will fail us, whether through viruses, our own clumsiness, blackouts and so on, and services such as Google Drive act as a very user-friendly 'safe house'. Hence, grid computing appears to be losing ground to cloud computing, which seems to be in higher demand.

One benefit of HPC is that it can be used to shape business models, as shown by Eurich et al. (2013). They discuss the pros and cons that HPC offers for business models, which directors of higher-education HPC centres, schools and governments can use to design an acceptable business model. They also predict that pricing strategies and business model design will become more relevant in the future; decision makers will have to research their consumers and investigate the national e-infrastructure to keep up with current trends.

3 Conclusion

The grid can be defined as the sharing of resources between multiple parties – for example governments, universities and computing centres – that need to work jointly on a project for a period of time. It is not only concerned with file exchange but also allows open access to computers, software, data and other resources.

Access to high-end computational resources, with their hardware and software facilities, needs to be dependable, consistent, pervasive and inexpensive. It is also expected that the use of grid computing in everyday life can serve multiple communities, such as government and the private sector. Hence, some national-level problems for which government is responsible can be solved through the use of the nation's super-fast computers and data archives.

There are many areas of service-level management that call for grid computing; the examples given were health care, e-science and schools. The use of grid computing in these areas helps to reduce hassle in the working environment: bureaucratic paperwork and the time needed to complete a specific task are reduced, and risks such as the loss of records are lessened, replaced by real-time tracking and recording.

In conclusion, grid computing is important in many areas. Governments and important IT bodies such as AITI play an important role in enhancing and implementing these technologies. First, however, grid computing has to be integrated with several other computing approaches, such as cloud computing, distributed computing, object-oriented programming and web services, in order to work more effectively and efficiently. Once both effectiveness and efficiency are achieved in the long run, productivity can increase and operating costs can be reduced, which ultimately helps to save and reduce the budget of an institute or organization.