
1 Introduction

Heterogeneous CPU-GPU architectures are now broadly used in a wide variety of computing systems, including mobile devices, PCs, datacenter servers, and HPC systems. For instance, over 160 of the 500 top-ranked supercomputers are GPU-accelerated systems (as of June 2022) [1]. This trend is driven by the end of Dennard scaling [14]: the exponential growth of single-thread performance in microprocessors has ceased, and the industry has shifted toward thread-level parallelism and heterogeneous computing using domain-specific accelerators [15]. GPUs are one of the most commonly used accelerators due to their wide range of application areas, including image processing, scientific computing, artificial intelligence, and data mining.

As computing systems become more powerful and more heterogeneous, drawing on a wide variety of resources, it is also becoming more difficult for a single application to fully utilize all of the compute resources. One reason behind this trend is that it is not always easy to identify a large enough fraction of a code that can be ported to GPUs (or any other accelerator) while balancing loads across all the processing units (CPUs, GPUs, or other accelerators). Further, the scalability of applications within a chip can be limited by various factors such as intensive memory accesses and shared-resource contention, which can waste a significant amount of compute resources.

Therefore, co-scheduling, i.e., co-locating multiple processes in a space-sharing manner, is a key feature to mitigate resource waste and to maximize throughput on such systems, provided the processes are complementary in their resource usage. To achieve the latter, a sophisticated mechanism that partitions the resources of each component and allocates them accordingly to co-scheduled processes is indispensable. Recent commercial CPUs and GPUs support such resource partitioning features: (1) one can designate physical core allocations to co-scheduled processes on CPUs; and (2) GPUs have begun to support hardware-level resource partitioning for co-locating multiple processes. One example is NVIDIA's Multi-Instance GPU (MIG) feature, supported in the most recent high-end GPUs, which enables co-locating multiple programs on the same chip while partitioning it at the granularity of GPCs [21].

Meanwhile, as power (or energy) consumption is now a first-order concern in almost all computing systems, from embedded devices to HPC systems [10, 22, 23], performance optimizations for modern computing systems, including co-scheduling, must consider power optimization and, in most cases, also hard power limits or constraints. Once a power constraint is set on a system, power budgets must be distributed to components accordingly so as to maximize performance while honoring the constraint. To realize such a mechanism, modern CPUs and GPUs now support power capping features that set a power limit at the granularity of a chip (or even at a finer granularity for some hardware).

Driven by the above trends, this paper explicitly targets the combination of co-scheduling, resource partitioning, and power capping on CPU-GPU heterogeneous systems, and provides a systematic solution to co-optimize them using a machine-learning-based performance model as well as a graph-based scheduling algorithm. Our model takes both application profiles and hardware knob states into account as its inputs and returns the estimated performance of the co-located applications as the output. More specifically, the profiles are based on hardware performance counters, and the hardware knob states include resource partitioning and power capping on both the CPU and GPU. We use this performance model to estimate the best performance of different hardware setups for a given application pair, which is used to determine the best co-scheduling pairs in a graph-based algorithm, i.e., Edmonds’ algorithm [13].

The following are the major contributions of this paper:

  1. We comprehensively and systematically optimize (1) co-scheduling pair selections, (2) resource partitioning at both CPU and GPU, and (3) power budgeting on both CPU and GPU, using a real CPU-GPU heterogeneous hardware platform.

  2. We define an optimization problem and provide a systematic solution to select the best job pairs and the best hardware setups, including resource partitioning and power capping on CPU/GPU.

  3. We develop a machine-learning-based performance model that takes both the characteristics of the co-located applications and the hardware states (including partitioning and power capping on CPU/GPU) into account.

  4. We solve the optimization problem by applying the above performance model within the graph-based Edmonds' algorithm.

  5. We quantify the benefits of our approach on real hardware and show that we improve the system throughput by up to 67% compared to time-sharing-based scheduling with a naive power capping that evenly distributes the power budget across the CPU and GPU.

2 Related Work

Ever since multi-core processors appeared on the market, co-scheduling, resource partitioning, and power capping have been studied. However, ours is the first work that covers all of the following aspects simultaneously: (1) targeting CPU-GPU heterogeneous systems; (2) comprehensively co-optimizing co-scheduling pair selections, resource partitioning, and power capping, using machine-learning-based performance modeling and a graph-based algorithm; and (3) quantifying the benefits on real hardware that is capable of both resource partitioning and power capping at both the CPU and the discrete GPU.

M. Bhadauria et al. explored co-scheduling multi-threaded programs in a space-sharing manner on a multi-core processor [9]. S. Zhuravlev et al. focused on the shared resource contention across co-located programs on multi-core processors and proposed a scheduler-based solution to mitigate the interference [26]. R. Cochran et al. proposed Pack & Cap, which optimizes the scale of multi-threaded applications via the thread packing technique while applying power capping [12]. H. Sasaki et al. then provided a sophisticated power-performance optimization method that coordinates the thread packing technique and DVFS for multi-programmed and multi-threaded environments [24]. These seminal studies provided insightful ideas; however, they did not target CPU-GPU heterogeneous systems.

A few studies have examined the combination of co-scheduling and power capping on CPU-GPU heterogeneous systems. Q. Zhu et al. worked on the combination of job scheduling and power capping for integrated CPU-GPU systems [25], but they did not cover resource partitioning inside the CPU/GPU or co-scheduling multiple processes on the GPU in a space-sharing manner. R. Azimi et al. developed a framework called PowerCoord that allocates power budgets to components on CPU-GPU heterogeneous systems for co-scheduled workloads [5], but their work did not target adjusting the resource partitions either. Recent hardware advances (e.g., NVIDIA's MIG feature [4, 21]) have made it possible to apply both process co-location and resource partitioning on both CPUs and GPUs, which opens up new optimization opportunities.

Several studies in the literature have utilized machine learning (including linear regression) for performance/power modeling. B. Lee et al. utilized linear regression to predict CPU performance [18]. E. İpek et al. conducted microarchitectural design space explorations using a neural network [17]. B. Barnes et al. proposed a statistical approach to predicting the performance of parallel applications [7]. H. Nagasaka et al. constructed a power consumption model for GPUs based on linear regression and hardware performance counters [19]. Beyond these pioneering studies, machine-learning-based approaches have also been utilized for more complicated system design and optimization purposes, such as: setting the clock frequencies of both CPU and GPU at the same time on a CPU-GPU integrated chip [6]; power capping setups on CPU, DRAM, and NVRAM [3]; coordination of thread/page mapping and prefetcher configurations [8]; and CPU-GPU heterogeneous chip designs in industry [16]. We follow the literature and utilize a neural network tailored to solve our new problem.

3 Motivation, Problem, and Solution Overview

3.1 Motivation: Technology Trends

Setting a power cap on a processor is a crucial feature and is now supported on a variety of commercial CPUs and GPUs. One prominent use case for this feature is protecting a chip from overheating while adapting the settings to the machine environment, such as the cooling facility, instead of having to be conservative. Another prominent use case is enabling hierarchical and cooperative power budgeting across components or compute nodes under a total power constraint, which has been widely studied from standalone computers to large-scale systems, including datacenters and supercomputers [10, 22, 23]. In our work, we target CPU-GPU heterogeneous computing systems (or nodes) and optimize the power cap setups on both CPU and GPU in order to maximize performance under a total node power constraint.

As compute nodes become fatter and systems more heterogeneous, it is also becoming more difficult for one single process to fully utilize an entire node's resources. For instance, compute resources are typically underutilized by memory-bound applications, while memory bandwidth is often wasted by compute-bound applications. Further, some applications are suitable for running on GPUs, but others are not. To improve resource utilization, mixing different kinds of processes and co-scheduling (or co-locating) them on a node at the same time, while setting resource allocations accordingly at both the CPU and GPU, is a desired feature.

Fig. 1. Problem overview

3.2 Problem Definition

Figure 1 illustrates the overall problem we target in this paper. Here, we assume that we have one single job (or process) queue on the system (\(\textrm{Q}\)). We convert the job queue into a list of job sets (or pairs) to co-schedule (\(\mathrm {JS_1}, \mathrm {JS_2}, \cdots \)). Note that these jobs are selected from within a window (W) of the queue. The concurrency, i.e., the maximum number of jobs launched at a time, is limited by a given parameter (C), and we particularly target \(C=2\) in this paper, meaning that no more than 2 CPU-GPU jobs are co-scheduled at any given time. This value was chosen because, for higher values, no polynomial-time algorithm for job-set selection is known. We represent the set of these scheduling parameters as \(\textrm{SP} = \{C, W\}\). When launching/co-locating the ith job set (\(\mathrm {JS_i}\)), we optimize the hardware knob configurations (\(\mathrm {HC_i}\)), i.e., resource partitioning on CPU/GPU (\(R^c_i\)/\(R^g_i\)) as well as the power cap setups on CPU/GPU (\(P^c_i\)/\(P^g_i\)). Note that the sum of the power caps must be less than or equal to the given total power constraint \(P_{total}\).

The following is the mathematical formulation as an optimization problem:

$$\begin{aligned}
&\text{inputs:} && \textrm{Q} = \{\mathrm{J_1}, \mathrm{J_2}, \cdots, \mathrm{J_W}\},\ P_{total},\ \textrm{SP} \\
&\text{outputs:} && \mathrm{L_{JS}} = \{\mathrm{JS_1}, \mathrm{JS_2}, \cdots\},\ \mathrm{L_{HC}} = \{\mathrm{HC_1}, \mathrm{HC_2}, \cdots\} \\
&\min && \sum_{i=1}^{|\mathrm{L_{JS}}|} CoRunTime(\mathrm{JS_i}, \mathrm{HC_i}) \\
&\text{s.t.} && CoRunTime(\mathrm{JS_i}, \mathrm{HC_i}) \le SoloRunTime(\mathrm{JS_i}, P_{total}) \\
& && P_i^c + P_i^g \le P_{total},\quad 1 \le |\mathrm{JS_i}| \le C \\
& && (\forall i : 1 \le i \le |\mathrm{L_{JS}}|\ (= |\mathrm{L_{HC}}|)) \\
& && \mathrm{JS_1} \cup \cdots \cup \mathrm{JS_{|L_{JS}|}} = \textrm{Q},\quad |\mathrm{JS_1}| + \cdots + |\mathrm{JS_{|L_{JS}|}}| = W
\end{aligned}$$

The inputs are the job queue, the total power cap setup, and the set of scheduling parameters. The outputs are the list of job sets (\(\mathrm {L_{JS}}\)) and the associated hardware configurations (\(\mathrm {L_{HC}}\)). The objective is to minimize the sum of the co-run execution times (CoRunTime), each of which is a function of the co-located jobs as well as the hardware configuration.

We take several constraints into account for this optimization problem. The first is the requirement that the space-sharing co-run execution should take less time than the time-sharing execution with exclusive solo runs under the power cap (SoloRunTime); otherwise we should not co-schedule the jobs. The second is the power constraint, i.e., the sum of the CPU/GPU power caps must be less than or equal to the total node power cap. The next constraint regulates the concurrency on the system, i.e., the number of jobs in a set to be co-scheduled (\(\mathrm {JS_i}\)) must be less than or equal to C. The last two constraints specify that the list of job sets (\(\mathrm {L_{JS}}\)) must be created from the job queue (\(\textrm{Q}\)) in a mutually exclusive and collectively exhaustive manner. Note that Table 1 summarizes the parameters/functions used.
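To make the constraints concrete, the following is a minimal sketch (in Python, not the authors' implementation) of how a candidate schedule could be checked against them; the functions co_run_time and solo_run_time are assumed to be provided by the performance model of Sect. 4.

```python
def is_feasible(schedule, queue, P_total, C, co_run_time, solo_run_time):
    """schedule: list of (job_set, hc) pairs, where hc = (Rc, Rg, Pc, Pg)."""
    covered = []
    for job_set, hc in schedule:
        Pc, Pg = hc[2], hc[3]
        if not (1 <= len(job_set) <= C):      # concurrency limit (C = 2 in this paper)
            return False
        if Pc + Pg > P_total:                 # node power constraint
            return False
        if co_run_time(job_set, hc) > solo_run_time(job_set, P_total):
            return False                      # co-run must beat time sharing
        covered.extend(job_set)
    # job sets must partition the queue (mutually exclusive, collectively exhaustive)
    return sorted(covered) == sorted(queue)

def objective(schedule, co_run_time):
    # total co-run execution time to be minimized
    return sum(co_run_time(js, hc) for js, hc in schedule)
```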

Table 1. Definitions of parameters/functions
Fig. 2. Workflow of our solution

3.3 Solution Overview

Figure 2 depicts the overall workflow of our approach. As shown in the figure, it consists of an offline (right) and an online part (left).

During the offline part, we train the coefficients of our performance model, which we describe later in the paper, using a predetermined benchmark set. More specifically, by executing various job sets while changing the hardware configurations, we generate a sufficiently large number of data points, which are used as inputs for the model training.

During the online part, we solve the optimization problem described in Sect. 3.2. The solution process consists of three parts (from top to bottom), which work in a cooperative manner. The first component determines the list of co-scheduling job sets (\(\mathrm {L_{JS}}\)) and returns it with the associated list of optimal hardware configurations (\(\mathrm {L_{HC}}\)) (top part of the left figure). This component communicates with the next stage (middle part of the left figure), i.e., it continuously sends a tentative job set (\(\textrm{JS}\)) and receives the estimated co-run execution time (CoRunTime) along with the optimal hardware configurations (\(\textrm{HC}\)) and the solo-run time (SoloRunTime), which are used for the scheduling decisions. The component in the middle optimizes the hardware configurations (\(\textrm{HC}\)) for the job set (\(\textrm{JS}\)) given by the previous component. More specifically, it continuously sends the job set (\(\textrm{JS}\)) and a tentative hardware configuration (\(\textrm{HC}\)) to the third part (bottom part of the left figure) and receives the estimated slowdowns until it finds the optimal hardware configuration. The latter component estimates the slowdowns for the given jobs (\(\textrm{JS}\)) with the given hardware configuration (\(\textrm{HC}\)) by using the associated job profiles as well as the model coefficients obtained in the offline model training. Here, we assume that the profile of a job is collected beforehand during its first run without co-scheduling or power capping (Footnote 1). The details are described in the next section, which also provides the definitions of \(SoloRunAppTime(P_{max})\), \(F^j_k\), and Slowdown shown in the figure.

4 Modeling and Optimization

4.1 Slowdown Estimation for a Given Job Set and Hardware Setup

Metric Formulations: We first provide simple formulations for the metrics introduced in Sect. 3.2:

$$\begin{aligned}
CoRunTime(\textrm{JS},\textrm{HC}) &= \max_{\mathrm{J_j}\in \textrm{JS}} CoRunAppTime_j(\textrm{JS},\textrm{HC}) \\
SoloRunTime(\textrm{JS},P_{total}) &= \sum_{\mathrm{J_j}\in \textrm{JS}} SoloRunAppTime_j(P_{total}) \\
CoRunAppTime_j(\textrm{JS},\textrm{HC}) &= Slowdown_j(\textrm{JS},\textrm{HC}) \cdot SoloRunAppTime_j(P_{max}) \\
SoloRunAppTime_j(P_{total}) &= Slowdown_j(\{\mathrm{J_j}\},\{R^c_{max}, R^g_{max}, OptP^c_j, OptP^g_j\}) \cdot SoloRunAppTime_j(P_{max})
\end{aligned}$$

The parameters and functions used to formulate them are summarized in Table 2. The first equation denotes that the total execution time when co-scheduling a job set (\(\textrm{JS}\)) is determined by the longest execution time in the set. The second equation represents that the total execution time of time-shared scheduling is simply the sum of the solo-run execution time of the jobs in the set. The third equation shows that the execution time of a co-scheduled job (\(CoRunAppTime_j\)) is equal to the slowdown (\(Slowdown_j\)) multiplied by the solo-run execution time without power capping (\(SoloRunAppTime_j(P_{max})\)). In the fourth one, the performance degradation caused by power capping for a solo run can be described by using the same slowdown function used in the third equation. In this paper, the solo-run execution time without power capping is given by the associated profile, and we predict the slowdown part in those last two equations.
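As an illustration, the following sketch shows how these formulations could be computed in code, assuming a trained slowdown estimator and profiled solo-run times at the maximum power cap; all names here are placeholders, not the authors' API.

```python
def co_run_time(job_set, hc, slowdown, solo_time_pmax):
    # co-run time = the longest (slowed-down) execution time in the set
    return max(slowdown(job_set, hc, j) * solo_time_pmax[j] for j in job_set)

def solo_run_time(job_set, P_total, slowdown, solo_time_pmax, opt_caps, R_max):
    # time-shared execution: sum of solo runs, each under its best (Pc, Pg) split
    total = 0.0
    for j in job_set:
        Pc, Pg = opt_caps[j]                      # OptP^c_j + OptP^g_j = P_total
        hc = (R_max["core"], R_max["gpc"], Pc, Pg)
        total += slowdown((j,), hc, j) * solo_time_pmax[j]
    return total
```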

Table 2. Definitions of parameters or functions to formulate CoRunTime()/SoloRunTime()
Fig. 3. General structure of our performance modeling (\(C=2\))

Performance Modeling: Figure 3 illustrates the general structure used to model the slowdown function provided above. Here, we utilize a simple feedforward neural network (FNN) to estimate the slowdown for the first job (\(\mathrm {J_1}\)) in the job set (\(\textrm{JS}\)) when co-scheduling. We regard the slowdown as a function of the job features (\(F^j_k\)) of all the co-located jobs as well as the hardware configuration (\(\textrm{HC}\)), so that various factors such as scalability, interference, and resource allocations are taken into account. The job features here are the hardware performance counters collected from both the CPU and GPU during the profile run described in Sect. 3.3. The exact definitions of the job features used in our evaluation are listed in Sect. 5.1. As for the slowdowns of the other co-located job(s), we simply reorder the inputs (i.e., exchange the locations of \(\mathrm {J_1}\) and \(\mathrm {J_j}\)) and modify the resource allocation parameters (\(R_*\)) accordingly so that the allocations are associated with the new job order. Further, we also utilize the model to estimate the impact of power capping on solo-run performance. To do so, we simply designate \(\textrm{HC}=\{R^c_{max}, R^g_{max}, OptP^c_j, OptP^g_j\}\) as previously mentioned and set all the job features other than those of the first job to zero in the model inputs. The detailed network configurations, such as the exact inputs, the layer setups, the activation function, and the loss function, are described in Sect. 5.1.
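The sketch below shows one possible realization of this structure in PyTorch; the hidden-layer width and feature count are assumptions for illustration only, as the exact setup is given in Sect. 5.1 (Table 7).

```python
import torch
import torch.nn as nn

N_FEATURES = 10   # performance counters per job (assumed; see Table 5)
N_HC = 4          # R^c, R^g, P^c, P^g, normalized to [0, 1]

class SlowdownModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # two hidden layers; ReLU on all layers except the input layer
        self.net = nn.Sequential(
            nn.Linear(2 * N_FEATURES + N_HC, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.ReLU(),   # slowdown of the first job J_1
        )

    def forward(self, feat_j1, feat_j2, hc):
        # For J_2's slowdown, swap feat_j1/feat_j2 and reorder hc accordingly;
        # for a solo run under power capping, set feat_j2 to zeros.
        return self.net(torch.cat([feat_j1, feat_j2, hc], dim=-1))
```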

Fig. 4. Overview of graph-based job sets creation (\(W=6\), \(C=2\))

4.2 Hardware Setup Optimization for a Given Job Set

By using the performance model provided above, we optimize the hardware configuration parameters (\(\textrm{HC}\)) for a given job set (\(\textrm{JS}\)) when co-scheduling. Here, we attempt to pick the best hardware configuration (\(\textrm{HC}\)) from all possible configurations so as to minimize \(CoRunTime(\textrm{JS}, \textrm{HC})\). In this study, we simply utilize exhaustive search, i.e., we test all possible \(\textrm{HC}\) as model inputs and choose the one that minimizes CoRunTime for the given job set (\(\textrm{JS}\)). This is feasible because the number of possible setups for \(\textrm{HC}\) on our target platform (or other systems available today) is limited, as described later in Sect. 5.1. If the configuration space explodes in future systems, applying heuristic algorithms (e.g., hill climbing) would be a promising option. In addition, we select the pair (\(OptP^c_j\), \(OptP^g_j\)) for each job (\(\mathrm {J_j}\)) in a given job set so as to obtain \(SoloRunTime(\textrm{JS}, P_{total})\), which we also explore exhaustively under the constraint \(OptP^c_j\) + \(OptP^g_j = P_{total}\).
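A minimal sketch of this exhaustive search could look as follows, assuming the candidate partitionings and power-cap pairs of Table 4 are provided as lists (all names are illustrative, not the authors' code).

```python
from itertools import product

def best_hw_config(job_set, P_total, co_run_time, core_splits, gpc_splits, cap_pairs):
    """Return the hardware configuration minimizing the predicted co-run time."""
    best_hc, best_t = None, float("inf")
    for Rc, Rg, (Pc, Pg) in product(core_splits, gpc_splits, cap_pairs):
        if Pc + Pg > P_total:
            continue                     # respect the total node power constraint
        t = co_run_time(job_set, (Rc, Rg, Pc, Pg))
        if t < best_t:
            best_hc, best_t = (Rc, Rg, Pc, Pg), t
    return best_hc, best_t
```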

Algorithm 1

4.3 Job Sets Selection

We then make scheduling decisions using the above hardware setup optimization functionality, which builds on the results of our performance model. We regard the job co-scheduling problem as a minimum weight perfect matching problem and solve it using Edmonds' algorithm [13]. Figure 4 depicts an overview of the solution. In the figure, the vertices represent the jobs in the queue (\(\textrm{Q}\) = \(\{\mathrm {J_1}, \cdots , \mathrm {J_W}\}\)), and the weights represent the minimum execution time for the associated job sets. To obtain each weight, we estimate both the best CoRunTime and the best SoloRunTime for each edge (or job pair) using the model-based hardware configuration optimization described above, and choose whichever minimizes the execution time. Then, using the graph, we create the list of job sets (\(\mathrm {L_{JS}}\) = \(\{\mathrm {JS_1}, \mathrm {JS_2}, \cdots \}\)) that includes all jobs in the queue in a mutually exclusive and collectively exhaustive manner, while minimizing the sum of the weights of \(\mathrm {L_{JS}}\). This is a well-known minimum weight perfect matching problem and is identical to the optimization problem defined in Sect. 3.2, except that a job set can be executed in a time-sharing manner, which we can easily convert to meet the problem definition in Sect. 3.2 by simply dividing such a job set into multiple job sets, each of which includes only one job. Edmonds' algorithm provides the optimal solution in polynomial time, particularly when the scheduling parameter set (\(\textrm{SP}\)) meets both of the following conditions: (1) W is an even number; and (2) C is equal to 2 [13]. For the former, we simply set the window size to an even number; as for the latter, we focus on \(C=2\) to limit the complexity, as described before. Note that a more precise version of the solution is described in Algorithm 1, and a simplified sketch follows below.
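The following sketch illustrates the matching step, assuming the networkx library is available; the minimum-weight perfect matching is obtained here by running the maximum-weight matching on reversed weights with maximum cardinality enforced.

```python
import networkx as nx

def select_job_sets(jobs, edge_weight):
    """jobs: list of even length W; edge_weight(Ji, Jj) -> estimated best time,
    i.e., min(best co-run time, time-shared solo-run time) as in the paper."""
    G = nx.Graph()
    for a in range(len(jobs)):
        for b in range(a + 1, len(jobs)):
            G.add_edge(jobs[a], jobs[b], weight=edge_weight(jobs[a], jobs[b]))
    # reverse the weights so that a maximum-weight matching minimizes total time
    w_max = max(d["weight"] for _, _, d in G.edges(data=True))
    for _, _, d in G.edges(data=True):
        d["neg"] = w_max - d["weight"] + 1.0
    matching = nx.max_weight_matching(G, maxcardinality=True, weight="neg")
    return [tuple(pair) for pair in matching]   # list of co-scheduling pairs
```

Because W is kept even and the graph is complete, a perfect matching always exists, so maximizing the reversed weights over maximum-cardinality matchings is equivalent to minimizing the sum of the original weights.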

5 Evaluation

5.1 Evaluation Setup

Environment. For our evaluation, we use the platform summarized in Table 3. Our approach is applicable when both the CPU and GPU are capable of both resource partitioning and power capping. This is usually the case for most commercial CPUs today, and we utilize an NVIDIA A100 GPU card that supports the MIG feature and power capping [21].

Table 4 summarizes the resource partitioning and power capping settings we explore in this evaluation. We allocate CPU cores in a compact fashion, i.e., physically adjacent cores are assigned to the same program. We partition the GPU into 3 GPCs/4 GPCs or 4 GPCs/3 GPCs, where the lower levels of the memory hierarchy, including the L2 caches and memory modules, are shared across all the GPCs (Footnote 2). To collect performance counter values when profiling, we utilize the Linux perf [2] command for the CPU and NSight Compute [20] for the GPU. Using these profiling frameworks, we collect the performance counter values listed in Table 5. The definitions of these performance counters are the same as those shown in the tools (Table 5).
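As an illustration of the profile run, a hypothetical Python wrapper around the two tools might look as follows; the metric names are placeholders and not the exact counters of Table 5.

```python
import subprocess

def profile_cpu(cmd, events=("instructions", "cache-misses")):
    # perf stat prints the counter values to stderr
    out = subprocess.run(
        ["perf", "stat", "-e", ",".join(events), "--"] + cmd,
        capture_output=True, text=True, check=True)
    return out.stderr

def profile_gpu(cmd, metrics=("sm__cycles_active.avg",)):
    # NSight Compute CLI; metric names are illustrative placeholders
    out = subprocess.run(
        ["ncu", "--csv", "--metrics", ",".join(metrics)] + cmd,
        capture_output=True, text=True, check=True)
    return out.stdout
```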

Table 3. Evaluation system specifications
Table 4. Power cap and partitioning setups

Benchmarks and Dataset. We use the Rodinia benchmarks [11], a well-known benchmark suite widely used in heterogeneous computing studies, as well as a synthetic compute-intensive dense matrix-vector multiplication program (matvec). In particular, from the Rodinia benchmark suite, we pick seven programs that utilize both CPU and GPU extensively and cooperatively. Further, the matvec program uses both CPU and GPU in a cooperative manner, i.e., part of the computation is offloaded to the GPU and the rest is performed on the CPU. We then create three different job queues (JobMix1, JobMix2, and JobMix3) with different window sizes (W) ranging from 4 to 8. The programs in the queues are selected mutually exclusively (and randomly for JobMix1/JobMix2) from the eight benchmarks.

We then generate the training/validation/test datasets using the benchmarks. More specifically, we randomly select 8\(\,\times \,\)2=16 job pairs out of all possible \({}_8 C_2=28\) pairs and measure the co-scheduling slowdowns for each of them while testing 100 different hardware setups, which correspond to all the co-run hardware setups in Table 4 that meet \(P_{total}\) = 350 or 400 [W]. To validate the performance model, we divide the dataset in the following way: the first 12 pairs multiplied by 100 hardware configurations (= 2,400 data points) are used for training and validation, and the remaining 800 data points are used for inference testing. Note that the above division is based on random pair selections. The training and validation here correspond to the offline procedure shown in Fig. 2 in Sect. 3.3.

Neural Network Architecture and Training. Table 7 lists our neural network architecture and training setups, which follow the general structure described in Sect. 4.1. In our neural network, all inputs are normalized between 0 and 1 (including the hardware configuration) in order to equalize their significance, which ultimately helps convergence. To normalize the resource partitioning states (\(R^*_i\)), we simply pick the first element, which represents the number of cores or GPCs allocated to the first job (\(\mathrm {J_1}\)), and divide it by the maximum resource allocation (32 for cores and 8 for GPCs in our environment). We set up two hidden layers to recognize the patterns in the input values well, which works better than relying on a single hidden layer for this purpose. The rectified linear activation function is applied to all layers except the input layer, and all neurons in both the hidden layers and the output have biases. The input layer is fully connected with the first hidden layer so that the model can be used while re-ordering the job inputs (see also Sect. 4.1). In our Python implementation, training with the dataset described above takes only a few minutes, and the slowdown estimations for all the jobs in a job set take only 1.17 ms in total.
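A minimal sketch of this normalization is shown below; scaling the power caps by per-device maxima and the counters by their observed maxima are assumptions for illustration, as the exact scheme is summarized in Table 7.

```python
def normalize_inputs(cores_j1, gpcs_j1, p_cpu, p_gpu, p_cpu_max, p_gpu_max,
                     features, feature_max):
    hc = [
        cores_j1 / 32.0,        # core allocation of the first job (max 32 cores)
        gpcs_j1 / 8.0,          # GPC allocation of the first job (max 8 GPCs)
        p_cpu / p_cpu_max,      # CPU power cap (assumed scaling)
        p_gpu / p_gpu_max,      # GPU power cap (assumed scaling)
    ]
    # performance-counter features scaled to [0, 1] by their observed maxima
    feats = [f / m if m > 0 else 0.0 for f, m in zip(features, feature_max)]
    return hc + feats
```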

Table 5. Collected performance counters (\(\textbf{F}\))
Table 6. Tested job mixes

5.2 Experimental Results

Figures 5 and 6 demonstrate the total execution time comparisons across multiple scheduling and resource management policies for different total power cap setups (\(P_{total}=350,400\)[W]). The vertical axis indicates the total execution time, while the horizontal axis lists the job queues (JobMix1-JobMix3) in both figures. The details of the compared policies listed in the legends are as follows: Time Sharing + Naive Pow Cap schedules jobs in a time-sharing manner while setting the power caps of the CPU and GPU equally; Time Sharing + Opt Pow Cap also utilizes time-shared scheduling, but the power caps are set optimally; and Our Co-scheduling schedules jobs and configures the hardware using our proposed approach. As shown in these figures, our approach achieves significant speedups of up to 67.4% (= (108.8/65.0 − 1)*100) compared with Time Sharing + Naive Pow Cap. Note that the hardware partitioning is done only at job launches, so the overhead is negligible here. Table 8 presents the list of job sets created by our approach for each queue under the different power caps. The job set selections can change depending on the total power cap setup, which implies that our approach can flexibly deal with hardware environment changes, e.g., changes in the power supply level.

Fig. 5. Execution time comparison (\(P_{total}=350[W]\))

Fig. 6. Execution time comparison (\(P_{total}=400[W]\))

Fig. 7. Comparison of measured vs estimated time (JobMix3, \(P_{total}=350[W]\))

Fig. 8. Comparison of measured vs estimated time (JobMix3, \(P_{total}=400[W]\))

Table 7. Model and training setups
Table 8. Lists of job sets created by our approach

We then compare the measured and estimated execution times (excluding online scheduling time) for the different power cap setups in Figs. 7 and 8. The X-axis indicates the accumulated execution time of all the co-scheduled job sets created from JobMix3 by our approach. As shown in the figures, the estimated execution times are close to the measured ones, and the total estimation error is only 0.4% or 3.1% for \(P_{total}=350\) or 400, respectively. Note that our approach achieves performance closer to the optimal as the error becomes smaller, because Edmonds' algorithm returns the optimal scheduling job sets if the performance estimation is 100% accurate.

Fig. 9. Power cap setups (JobMix3, \(P_{total}=350[W]\))

Fig. 10. Core allocation (JobMix3, \(P_{total}=350[W]\))

Fig. 11. GPC allocation (JobMix3, \(P_{total}=350[W]\))

Finally, we demonstrate the hardware setup decisions made by our scheduler in Figs. 9, 10, and 11, in particular for JobMix3 under the total power cap of \(P_{total}=350\)[W]. The X-axis indicates the job sets created from JobMix3 by our approach, while the Y-axis represents the breakdown of power caps, core allocations, or GPC allocations in Figs. 9, 10, or 11, respectively. As shown in these figures, the hardware knobs are set very differently in accordance with the characteristics of the co-located jobs, including the task sizes on CPU/GPU, the compute/memory intensity, and the interference on shared resources. As our performance model recognizes these features well, based on the corresponding hardware performance counters and the well-structured neural network, our approach achieves a significant performance improvement of up to 67%.

6 Conclusion

In this paper, we comprehensively targeted co-scheduling, resource partitioning, and power capping for CPU-GPU heterogeneous systems and proposed an approach to optimize them, consisting of performance modeling and a graph-based scheduling algorithm. We demonstrated how a machine learning model, namely a neural network, can successfully be used to predict the performance of co-scheduled applications, using the application characteristics and partitioning/power states as inputs. We then moved on to the application pair selection, where we successfully applied Edmonds' algorithm to determine the mathematically optimal pairing. The experimental results on a real system show that our approach improves system throughput by up to 67% compared with time-sharing-based scheduling with a naive power capping that evenly distributes power budgets across the CPU and GPU.