1 Introduction

The use of deep neural network (DNN) techniques in intelligent vehicles has expedited the development of self-driving vehicles in research and industry. Self-driving cars can operate automatically because equipped perception, planning, and control modules operate cooperatively [1,2,3]. The most common perception components used in autonomous vehicles include cameras and radar/lidar devices; cameras are combined with DNNs to recognize relevant objects, and radars/lidars are mainly used for distance measurement [2, 4]. Because of limitations related to sensor cost and size, current Active Driving Assistance Systems (ADASs) primarily rely on camera-based perception modules with supplementary radars [5].

To understand complex driving scenes, multi-task DNN (MTDNN) models that output multiple predictions simultaneously are often applied in autonomous vehicles to reduce inference time and device power consumption. In [6], street classification, vehicle detection, and road segmentation problems were solved using a single MultiNet model. In [7], the researchers trained an MTDNN to detect drivable areas and road classes for vehicle navigation. DLT-Net, presented in [8], is a unified neural network for the simultaneous detection of drivable areas, lane lines, and traffic objects. The network localizes the vehicle when a high-definition (HD) map is unavailable. The context tensors between subtask decoders in DLT-Net share mutual features learned from different tasks. A lightweight multi-task semantic attention network was proposed in [9] to achieve simultaneous object detection and semantic segmentation; this network boosts detection performance and reduces computational costs through the use of a semantic attention module. YOLOP [10] is a panoptic driving perception network that simultaneously performs traffic object detection, drivable area segmentation, and lane detection on an NVIDIA TITAN XP GPU at a speed of 41 FPS (frames per second). In the commercially available TESLA Autopilot system [11], images from cameras with different viewpoints are entered into separate MTDNNs that perform driving scene semantic segmentation, monocular depth estimation, and object detection tasks. The outputs of these MTDNNs are further fused in bird’s-eye-view (BEV) networks to directly output a reconstructed aerial-view map of traffic objects, static infrastructure, and the road itself.

In a modular self-driving system, the environmental perception results can be sent to an optimization-based model predictive control (MPC) planner to generate spatiotemporal curves over a time horizon. The system then reactively selects optimal solutions over a short interval as control inputs to minimize the gap between the target and current states [12]. These MPC models can be realized with various methods [e.g., active set, augmented Lagrangian, interior point, or sequential quadratic programming (SQP)] [13, 14] and are promising for vehicle optimal control problems. In [15], a linear MPC control model was proposed that addresses vehicle lane-keeping and obstacle avoidance problems through lateral automation. In [16], an MPC control scheme combining longitudinal and lateral dynamics was designed for following velocity trajectories. Ref. [17] proposed a scale reduction method for reducing the online computational effort of MPC controllers and applied it to longitudinal vehicle automation, achieving an average computational time of approximately 4 ms. In [14], a linear time-varying MPC scheme was proposed for lateral automobile trajectory optimization; the cycle time for the optimized trajectory to be communicated to the feedback controller was 10 ms. In addition, [18] investigated automatic weight determination for car-following control, and the corresponding linear MPC algorithm was implemented using CVXGEN [19], which solves the relevant problem within 1 ms.

The constrained iterative linear quadratic regulator (CILQR) method was proposed to solve online trajectory optimization problems with nonlinear system dynamics and general constraints [20, 21]. The CILQR algorithm, constructed on the basis of differential dynamic programming (DDP) [22], is also an MPC method. The computational load of the well-established SQP solver is higher than that of DDP [24]. Thus, the CILQR solver outperforms the standard SQP approach in terms of computational efficiency; compared with the CILQR solver, the SQP approach requires a computation time that is 40.4 times longer per iteration [21]. However, previous CILQR-related studies [20, 21, 23, 24] have focused on nonlinear Cartesian-frame motion planning. Alternatively, planning within the Frenet frame can reduce the problem dimensions because it enables the vehicle dynamics to be solved in the tangential and normal directions separately with the aid of a road reference line [25]; furthermore, the corresponding linear dynamic equations [18, 26] are not adversely affected when high-order Taylor expansion coefficients are truncated in the CILQR framework [cf. Section 2]. These considerations motivated us to use linear CILQR planners to control automated vehicles.

Fig. 1

Proposed vision-based automated driving framework. The system comprises the following modules: a multi-task DNN for perceiving the surroundings, vision predictive control (VPC) and CILQR controllers for vehicle motion planning and the issuing of driving commands (steering, acceleration, and braking), and a PI controller combined with the longitudinal CILQR algorithm for velocity tracking. These modules receive input data from a monocular camera and a few inexpensive radars and operate collaboratively to drive the automated vehicle. The DNN, VPC, lateral CILQR, and longitudinal CILQR algorithms run efficiently every 24.52, 15.56, 0.58, and 0.65 ms, respectively. In our simulation, the end-to-end latency from the camera output to the lateral controller output (\(T_{a \rightarrow b} \equiv T_{lat} \)) is longer than the actuator latency (\(T_{c\rightarrow d} \equiv T_{act}= 6.66 \) ms)

We proposed an MTDNN in [27] to directly perceive the ego vehicle’s heading angle (\(\theta \)) and distance from the lane centerline (\(\Delta \)) for autonomous driving. The vision-based MTDNN model in [27] essentially provides the information necessary for ego car navigation in Frenet coordinates without the need for HD maps. Nevertheless, this end-to-end autonomous driving approach performs poorly in environments that are not shown during the training phase [2]. In [28], we proposed an improved control algorithm based on a multi-task UNet architecture (MTUNet) that comprises lane line segmentation and pose estimation subnets. A Stanley controller [30] was then designed to control the lateral automation of an automobile. The Stanley controller takes the \(\theta \) and \(\Delta \) yielded by the network as its inputs for lane-centering [29]. The improved algorithm outperforms the model in [27] and has performance comparable to that of a multi-task-learning reinforcement-learning (MTL-RL) model [31], which integrates RL and deep-learning algorithms for autonomous driving. However, the algorithms presented in [28] have several shortcomings. 1) Vehicle dynamic models are not considered in the Stanley controller, which performs poorly on lanes with rapid curvature changes [32]. 2) The proposed self-driving system does not consider road curvature, resulting in poor vehicle control on curvy roads [33]. 3) The corresponding DNN perception network lacks object detection capability, which is a core task in automated driving. 4) The DNN input has a high resolution of 400 \(\times \) 400 pixels, which results in long training and inference times.

To address these shortcomings, this paper proposes a new system for real-time automated driving based on the developments described in [28, 29]. First, a YOLOv4 detector [34] is added to the MTUNet for object detection. Second, the inference speed of the MTUNet is increased by reducing the input size without sacrificing network performance. Third, a vision predictive control (VPC) algorithm is proposed to reduce the steering command delay by correcting the steering angle at a look-ahead point on the basis of road curvature information. The VPC algorithm can also be combined with the lateral CILQR algorithm (denoted VPC-CILQR) to rapidly perform motion planning and automobile control. As shown in Fig. 1, the vehicle actuation latency (\(T_{act}\)) was shorter than the steering command latency (\(T_{lat}\)) in our simulation. This delay may also be present in automated vehicles [35] or autonomous racing systems [36] and may induce instability in the controlled system. Equipping the vehicle with low-performance computers could further increase this steering command lag. Therefore, compensating algorithms such as VPC are key to cost-efficient automated vehicle systems.

In general, the research method of this paper is similar to those in [51, 52], which also presented self-driving systems based on lane detection results. In [51], an optimal LQR scheme with the sliding-mode approach was proposed to implement preview path tracking control for intelligent electric vehicles with optimal torque distribution between their motors. In [52], a safeguard-protected preview path tracking control algorithm was presented. The proposed preview control strategy comprises feedback and feedforward controllers for stabilizing tracking errors and preview control, respectively. The controller was implemented and validated on open roads and in Mcity, an automated vehicle testing facility. The tested vehicle was equipped with a commercial Mobileye module to detect lane markings.

The main goal of this work was to design a computationally efficient automated driving system for real-time lane-keeping and car-following. The contributions of this paper are as follows:

  1.

    The proposed MTDNN scheme can execute simultaneous driving perception tasks at a speed of 40 FPS. The main difference between this scheme and previous MTDNN schemes is that the post-processing methods provide crucial parameters (lateral offset, road curvature, and heading angle) that improve local vehicular navigation.

  2.

    The VPC-CILQR controller comprising the VPC algorithm and lateral CILQR solver is proposed to improve driverless vehicle path tracking. The method has a low online computational burden and can respond to steering commands in accordance with the concept of look-ahead distance.

  3.

    We propose a vision-based framework comprising the aforementioned MTDNN scheme and CILQR-based controllers for operating an autonomous vehicle; the effectiveness of the proposed framework was demonstrated in challenging simulation environments without maps.

The remainder of this paper is organized as follows: the research methodology is presented in Section 2, the experimental setup is described in Section 3, and the results are presented and discussed in Section 4. Section 5 concludes this paper.

Fig. 2

Overview of the proposed MTUNet architecture. The input RGB image of size 228 \(\times \) 228 is fed into the model, which then performs lane line segmentation, ego vehicle pose estimation, and traffic object detection at the same time. The backbone and seg subnet form a UNet-based network; three variants of UNet (UNet\(\_\)2\(\times \) [28], UNet\(\_\)1\(\times \) [37], and MResUNet [37]) are compared in this work. The ReLU activation functions in the pose and det subnets are not shown for simplicity

2 Methodology

The following section introduces each part of the proposed self-driving system. As depicted in Fig. 1, our system comprises several modules. The DNN is an MTUNet that can solve multiple perception problems simultaneously. The CILQR controllers receive data from the DNN and radars to compute driving commands for lateral and longitudinal motion planning. In the lateral direction, the lane line detection results from the DNN are input to the VPC module to compute steering angle corrections at a certain distance in front of the ego car. These corrections are then sent to the CILQR solver to predict a steering angle for the lane-keeping task. This two-step algorithm is denoted VPC-CILQR throughout the article. The other CILQR controller handles the car-following task in the longitudinal direction.

Table 1 Conv layers used in the UNet-based networks

2.1 MTUNet network

As indicated in Fig. 2, the proposed MTDNN is a neural network with an MTUNet architecture featuring a common backbone encoder and three subnets for completing multiple tasks at the same time. The following sections describe each part.

2.1.1 Backbone and segmentation subnet

The shared backbone and segmentation (seg) subnet employ encoder-decoder UNet-based networks for the pixel-level lane line classification task. Two classical UNets (UNet\(\_\)2\(\times \) [28] and UNet\(\_\)1\(\times \) [37]) and one enhanced version (MultiResUNet [37], denoted as MResUNet throughout the paper) were used to investigate the effects of model size and complexity on task performance. For UNet\(\_\)2\(\times \) and UNet\(\_\)1\(\times \), each repeated block includes two convolutional (Conv) layers, and the first UNet has twice as many filters as the second. For MResUNet, each modified block consists of three 3 \(\times \) 3 Conv layers and one 1 \(\times \) 1 Conv layer. Table 1 summarizes the filter number and related kernel size of the Conv layers used in these models. The resulting total number of parameters of UNet\(\_\)2\(\times \)/UNet\(\_\)1\(\times \)/MResUNet is 31.04/7.77/7.26 M, and the corresponding total number of multiply accumulate operations (MACs) is 38.91/9.76/12.67 G. All 3 \(\times \) 3 Conv layers are padded with one pixel to preserve the spatial resolution after the convolution operations are applied [38]. This setting allows the network input size to be reduced from 400 \(\times \) 400 to 228 \(\times \) 228 while preserving model performance and increasing inference speed relative to the models in our previous work [28] (the experimental results are presented in Section 4). That earlier network used unpadded 3 \(\times \) 3 Conv layers [39], and zero padding was therefore applied to the input to equalize the input-output resolutions [40]. In the training phase, the weighted cross-entropy loss is adopted to deal with the lane detection sample imbalance problem [41, 42] and is represented as

$$\begin{aligned} L_S = - \frac{N^{-}}{N^{+}+N^{-}}\sum \limits _{\tilde{y}=1} \log \left( \sigma (y) \right) - \frac{N^{+}}{N^{+}+N^{-}} \sum \limits _{\tilde{y}=0} \log \left( 1-\sigma (y) \right) , \end{aligned}$$
(1)

where \(N^{+}\) and \(N^{-}\) are the numbers of foreground and background samples in a batch of images, respectively; y is a predicted score; \(\tilde{y}\) is the corresponding label; and \(\sigma \) is the sigmoid function.
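For illustration, a minimal numpy sketch of Eq. (1) follows; the function and array names (`weighted_bce_loss`, `scores`, `labels`) are our own placeholders rather than identifiers from the actual implementation:

```python
import numpy as np

def weighted_bce_loss(scores, labels, eps=1e-7):
    """Weighted cross-entropy of Eq. (1) for lane-pixel classification.

    scores: raw network outputs y; labels: binary ground-truth map.
    Each class term is weighted by the opposite class frequency to
    counter the foreground/background sample imbalance.
    """
    sig = 1.0 / (1.0 + np.exp(-scores))           # sigma(y)
    n_pos = np.sum(labels == 1)                   # N+
    n_neg = np.sum(labels == 0)                   # N-
    n = n_pos + n_neg
    fg = -np.sum(np.log(sig[labels == 1] + eps))  # foreground sum
    bg = -np.sum(np.log(1.0 - sig[labels == 0] + eps))
    return (n_neg / n) * fg + (n_pos / n) * bg
```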

2.1.2 Pose subnet

This subnet is mainly responsible for the whole-image angle regression and road type classification problems, where roads are divided into three categories (left turn, straight, and right turn) to prevent mode collapse in the angle estimation [28, 43]. The network architecture of the pose subnet is presented in Fig. 2; the pose subnet takes the fourth Conv-block output feature maps of the backbone as its input. Subsequently, the input maps are fed into shared parts, including two consecutive Conv layers and one global average pooling (GAP) layer, to extract general features. Lastly, the resulting vectors are passed separately through two fully connected (FC) layers before being mapped into a sigmoid/softmax activation layer for the regression/classification task. Table 2 summarizes the number of filters and output units of the corresponding Conv and FC layers, respectively. The expression MTUNet\(\_\)2\(\times \)/MTUNet\(\_\)1\(\times \)/MTMResUNet in Table 2 represents a multi-task UNet scheme in which the subnets are built on the UNet\(\_\)2\(\times \)/UNet\(\_\)1\(\times \)/MResUNet model throughout the article. The pose task loss function, including the L2 regression loss (\(L_R\)) and cross-entropy loss (\(L_C\)), is employed for network training; these losses are represented as follows:

$$\begin{aligned} L_R = \frac{1}{2B}\sum \limits _{i = 1}^B \left| \sigma (\tilde{\theta }_i)- \sigma (\theta _i) \right| ^2 , \end{aligned}$$
(2a)
$$\begin{aligned} L_C = -\frac{1}{B}\sum \limits _{i = 1}^B \sum \limits _{j = 1}^3 \tilde{p}_{ij} \log (p_{ij}), \end{aligned}$$
(2b)

where \(\tilde{\theta }\) and \(\theta \) are the ground truth and estimated value, respectively; B is the input batch size; and \(\tilde{p}\) and p are true and softmax estimation values, respectively.

Table 2 Conv and FC layers used in the pose subnet of various MTUNets
Table 3 Conv layers used in the detection subnet of various MTUNets

2.1.3 Detection subnet

The detection (det) subnet takes advantage of a simplified YOLOv4 detector [34] for real-time traffic object (leading car) detection. This fully convolutional subnet, which has three branches for multi-scale detection, takes the output feature maps of the backbone as its input, as illustrated in Fig. 2. The initial part of each branch is composed of single or consecutive 3 \(\times \) 3 filters for extracting contextual information at different scales [37], and a shortcut connection with one 1 \(\times \) 1 filter from the input layer for residual mapping. The top of the addition layer contains sequential 1 \(\times \) 1 filters for reducing the number of channels. The resulting feature maps of each branch have six channels (five for bounding box offset and confidence score predictions, and one for class probability estimation) with a size of K = 15 \(\times \) 15 to divide the input image into K grids. In this article, we select M = 3 anchor boxes, which are then shared among the three branches according to the context size. Ultimately, spatial features from the three detection scales are concatenated together and sent to the output layer. Table 3 presents the design of the detection subnet of the MTUNets. The overall loss function for training comprises the objectness (\(L_{O}\)), classification (\(L_{CL}\)), and complete intersection over union (CIoU) losses (\(L_{CI}\)) [44, 45]; these losses are constructed as follows:

$$\begin{aligned} L_{O} = &- \sum \limits _{i = 1}^{K \times M} I_{i}^{o} \left[ \tilde{Q}_i \log \left( Q_i \right) + \left( 1 - \tilde{Q}_i \right) \log \left( 1 - Q_i \right) \right] \\ &- \sum \limits _{i = 1}^{K \times M} \lambda _{n} I_{i}^{n} \left[ \tilde{Q}_i \log \left( Q_i \right) + \left( 1 - \tilde{Q}_i \right) \log \left( 1 - Q_i \right) \right] , \end{aligned}$$
(3a)
$$\begin{aligned} L_{CL} = - \sum \limits _{i = 1}^{K \times M} I_{i}^{o} \sum \limits _{c \in classes} \left[ \tilde{p}_i \left( c \right) \log \left( p_i \left( c \right) \right) + \left( 1 - \tilde{p}_i \left( c \right) \right) \log \left( 1 - p_i \left( c \right) \right) \right] , \end{aligned}$$
(3b)
$$\begin{aligned} L_{CI} = 1 - IoU + \frac{E^2 \left( \tilde{\textbf{o}}, \textbf{o} \right) }{\beta ^2 } + \alpha \gamma , \end{aligned}$$
(3c)

where \(I_{i}^{o} = 1\) (and \(I_{i}^{n} = 0\)) if the i-th predicted bounding box contains an object and \(I_{i}^{o} = 0\) (and \(I_{i}^{n} = 1\)) otherwise; \(\tilde{Q}_i\)/\(Q_i\) and \(\tilde{p}_i\)/\(p_i\) are the true/estimated objectness and class scores corresponding to each box, respectively; and \(\lambda _{n}\) is a hyperparameter intended for balancing positive and negative samples. With regard to the CIoU loss, \({\tilde{{\textbf {o}}}}\) and \(\textbf{o} \) are the central points of the prediction (\(B_p\)) and ground truth (\(B_{gt}\)) boxes, respectively; E is the related Euclidean distance; \(\beta \) is the diagonal distance of the smallest enclosing box covering \(B_p\) and \(B_{gt}\); \(\alpha \) is a tradeoff hyperparameter; and \(\gamma \) is used to measure aspect ratio consistency [44].
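As a concrete reference for Eq. (3c), the following numpy sketch computes the CIoU loss for two axis-aligned boxes in (x1, y1, x2, y2) format, following the definitions of \(\alpha \) and \(\gamma \) in [44]; all names are our own:

```python
import numpy as np

def ciou_loss(box_p, box_gt, eps=1e-7):
    """CIoU loss of Eq. (3c) for prediction and ground-truth boxes."""
    # Intersection over union
    ix1, iy1 = max(box_p[0], box_gt[0]), max(box_p[1], box_gt[1])
    ix2, iy2 = min(box_p[2], box_gt[2]), min(box_p[3], box_gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    iou = inter / (area_p + area_g - inter + eps)
    # Squared center distance E^2 and enclosing-box diagonal beta^2
    e2 = ((box_p[0] + box_p[2] - box_gt[0] - box_gt[2]) ** 2
          + (box_p[1] + box_p[3] - box_gt[1] - box_gt[3]) ** 2) / 4.0
    bw = max(box_p[2], box_gt[2]) - min(box_p[0], box_gt[0])
    bh = max(box_p[3], box_gt[3]) - min(box_p[1], box_gt[1])
    beta2 = bw ** 2 + bh ** 2 + eps
    # Aspect-ratio consistency gamma and tradeoff alpha [44]
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_gt[2] - box_gt[0], box_gt[3] - box_gt[1]
    gamma = (4.0 / np.pi ** 2) * (np.arctan(wg / (hg + eps))
                                  - np.arctan(wp / (hp + eps))) ** 2
    alpha = gamma / (1.0 - iou + gamma + eps)
    return 1.0 - iou + e2 / beta2 + alpha * gamma
```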

2.2 The CILQR algorithm

This section first briefly describes the concept behind CILQR and related approaches based on [20, 21, 46, 47]; it then presents the lateral/longitudinal CILQR control algorithm that takes the MTUNet inference and radar data as its inputs to yield driving decisions using linear dynamics.

2.2.1 Problem formulation

Given a sequence of states \(\textbf{X} \equiv \left\{ {\textbf{x}_0 ,\textbf{x}_1 ,...,\textbf{x}_N } \right\} \) and the corresponding control sequence \(\textbf{U} \equiv \left\{ {\textbf{u}_0 ,\textbf{u}_1 ,...,\textbf{u}_{N - 1} } \right\} \) within the preview horizon N, the system’s discrete-time dynamics \(\textbf{f}\) satisfy

$$\begin{aligned} \textbf{x}_{i + 1} = \textbf{f}\left( {\textbf{x}_i ,\textbf{u}_i } \right) \end{aligned}$$
(4)

from time i to \(i+1\). The total cost denoted by \(\mathcal {J}\), including running costs \(\mathcal {P}\) and the final cost \(\mathcal {P}_f\), is presented as follows:

$$\begin{aligned} \mathcal {J}\left( \textbf{x}_0 ,\textbf{U} \right) = \sum \limits _{i = 0}^{N - 1} \mathcal {P}\left( \textbf{x}_i ,\textbf{u}_i \right) + \mathcal {P}_f \left( \textbf{x}_N \right) . \end{aligned}$$
(5)

The optimal control sequence is then written as

$$\begin{aligned} \textbf{U}^* \left( {\textbf{x}^* } \right) \equiv \underset{\textbf{U}}{\arg \min }\ \mathcal {J}\left( {\textbf{x}_0 ,\textbf{U}} \right) \end{aligned}$$
(6)

with an optimal trajectory \({\textbf{x}^* }\). The partial sum of \(\mathcal {J}\) from any time step t to N is represented as

$$\begin{aligned} \mathcal {J}_t \left( \textbf{x},\textbf{U}_t \right) = \sum \limits _{i = t}^{N - 1} \mathcal {P}\left( \textbf{x}_i ,\textbf{u}_i \right) + \mathcal {P}_f \left( \textbf{x}_N \right) , \end{aligned}$$
(7)

and the optimal value function \(\mathcal {V}\) at time t starting at \(\textbf{x}\) takes the form

$$\begin{aligned} \mathcal {V}_t \left( \textbf{x} \right) \equiv \underset{\textbf{U}_t }{\min }\ \mathcal {J}_t \left( {\textbf{x},\textbf{U}_t } \right) \end{aligned}$$
(8)

with the final time step value function \(\mathcal {V}_N \left( \textbf{x} \right) \equiv \mathcal {P}_f \left( {\textbf{x}_N } \right) \).

In practice, the final step value function \(\mathcal {V}_N\left( \textbf{x} \right) \) is obtained by executing a forward pass using the current control sequence. Local control signal minimizations are then performed in the backward pass using the following Bellman equation:

$$\begin{aligned} \mathcal {V}_i \left( \textbf{x} \right) = \mathop {\min }\limits _\textbf{u} \left[ {\mathcal {P}\left( {\textbf{x},\textbf{u}} \right) + \mathcal {V}_{i + 1} \left( {\textbf{f}\left( {\textbf{x},\textbf{u}} \right) } \right) } \right] . \end{aligned}$$
(9)

To compute the optimal trajectory, the perturbed function around the i-th state-control pair in (9) is used; this function is written as follows:

$$\begin{aligned} \begin{aligned} \mathcal {O}\left( {\delta \textbf{x},\delta \textbf{u}} \right) =&\mathcal {P}_i \left( {\textbf{x} + \delta \textbf{x},\textbf{u} + \delta \textbf{u}} \right) - \mathcal {P}_i \left( {\textbf{x},\textbf{u}} \right) \\&+ \mathcal {V}_{i + 1} \left( {\textbf{f}\left( {\textbf{x} + \delta \textbf{x},\textbf{u} + \delta \textbf{u}} \right) } \right) - \mathcal {V}_{i + 1} \left( {\textbf{f}\left( {\textbf{x},\textbf{u}} \right) } \right) . \end{aligned} \end{aligned}$$
(10)

This equation can be approximated to a quadratic function by employing a second-order Taylor expansion with the following coefficients:

$$\begin{aligned} \mathcal {O}_\textbf{x} = \mathcal {P}_\textbf{x} + \textbf{f}_\textbf{x}^\textrm{T} \mathcal {V}_\textbf{x}, \end{aligned}$$
(11a)
$$\begin{aligned} \mathcal {O}_\textbf{u} = \mathcal {P}_\textbf{u} + \textbf{f}_\textbf{u}^\textrm{T} \mathcal {V}_\textbf{x}, \end{aligned}$$
(11b)
$$\begin{aligned} \mathcal {O}_{\textbf{xx}} = \mathcal {P}_{\textbf{xx}} + \textbf{f}_\textbf{x}^\textrm{T} \mathcal {V}_{\textbf{xx}} \textbf{f}_\textbf{x} + \mathcal {V}_\textbf{x} \textbf{f}_{\textbf{xx}}, \end{aligned}$$
(11c)
$$\begin{aligned} \mathcal {O}_{\textbf{ux}} = \mathcal {P}_{\textbf{ux}} + \textbf{f}_\textbf{u}^\textrm{T} \mathcal {V}_{\textbf{xx}} \textbf{f}_\textbf{x} + \mathcal {V}_\textbf{x} \textbf{f}_{\textbf{ux}}, \end{aligned}$$
(11d)
$$\begin{aligned} \mathcal {O}_{\textbf{uu}} = \mathcal {P}_{\textbf{uu}} + \textbf{f}_\textbf{u}^\textrm{T} \mathcal {V}_{\textbf{xx}} \textbf{f}_\textbf{u} + \mathcal {V}_\textbf{x} \textbf{f}_{\textbf{uu}}. \end{aligned}$$
(11e)

The second-order coefficients of the system dynamics (\(\textbf{f}_\textbf{xx}\), \(\textbf{f}_\textbf{ux}\), and \(\textbf{f}_\textbf{uu}\)) are omitted to reduce computational effort [24, 46]. The values of these coefficients are zero for linear systems [e.g., Eq. (19) and Eq. (25)], leading to fast convergence in trajectory optimization.

The optimal control signal modification can be obtained by minimizing the quadratic \(\mathcal {O}\left( {\delta \textbf{x},\delta \textbf{u}} \right) \):

$$\begin{aligned} \delta \textbf{u}^* = \underset{\delta \textbf{u}}{\arg \min } \mathcal {O}\left( {\delta \textbf{x},\delta \textbf{u}} \right) = \textbf{k} + \textbf{K}\delta \textbf{x}, \end{aligned}$$
(12)

where

$$\begin{aligned} \textbf{k} = - \mathcal {O}_{\textbf{uu}}^{ - 1} \mathcal {O}_\textbf{u}, \end{aligned}$$
(13a)
$$\begin{aligned} \textbf{K} = - \mathcal {O}_{\textbf{uu}}^{ - 1} \mathcal {O}_{\textbf{ux}} \end{aligned}$$
(13b)

are optimal control gains. If the optimal control indicated in (12) is plugged into the approximated \(\mathcal {O}\left( {\delta \textbf{x},\delta \textbf{u}} \right) \) to recover the quadratic value function, the corresponding coefficients can be obtained [48]:

$$\begin{aligned} \mathcal {V}_\textbf{x} = \mathcal {O}_\textbf{x} - \textbf{K}^\textrm{T} \mathcal {O}_{\textbf{uu}} \textbf{k}, \end{aligned}$$
(14a)
$$\begin{aligned} \mathcal {V}_{\textbf{xx}} = \mathcal {O}_{\textbf{xx}} - \textbf{K}^\textrm{T} \mathcal {O}_{\textbf{uu}} \textbf{K}. \end{aligned}$$
(14b)

Control gains at each state (\(\textbf{k}_i\), \(\textbf{K}_i\)) can then be estimated by recursively computing Eqs. (11), (13), and (14) in a backward process. Finally, the modified control and state sequences can be evaluated through a renewed forward pass:

$$\begin{aligned} \hat{\textbf{u}}_i = \textbf{u}_i +\lambda \textbf{k}_i + \textbf{K}_i \left( \hat{\textbf{x}}_i - \textbf{x}_i \right) , \end{aligned}$$
(15a)
$$\begin{aligned} \hat{\textbf{x}}_{i + 1} = \textbf{f}\left( \hat{\textbf{x}}_i , \hat{\textbf{u}}_i \right) , \end{aligned}$$
(15b)

where \({\hat{\textbf{x}}}_0 = \textbf{x}_0 \). Here, \(\lambda \) is the backtracking parameter for the line search; it is initially set to 1 and is gradually reduced over the forward-backward propagation loops until convergence is reached.
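To make the recursion in Eqs. (11)-(15) concrete, the following is a minimal numpy sketch of one backward-forward ILQR pass for linear dynamics \(\textbf{x}_{i+1} = \textbf{Ax}_i + \textbf{Bu}_i\) [as in Eq. (19) or Eq. (25)], for which \(\textbf{f}_\textbf{xx} = \textbf{f}_\textbf{ux} = \textbf{f}_\textbf{uu} = \textbf{0}\); the interfaces (`running_cost`, `final_cost`) are our assumptions, not the authors' implementation:

```python
import numpy as np

def ilqr_iteration(A, B, X, U, running_cost, final_cost, lam=1.0):
    """One backward-forward ILQR pass, Eqs. (11)-(15), for linear dynamics.

    X: list of N+1 state vectors; U: list of N control vectors.
    running_cost(x, u) returns (P_x, P_u, P_xx, P_ux, P_uu);
    final_cost(x) returns (Vf_x, Vf_xx). `lam` is the backtracking
    line-search parameter of Eq. (15a).
    """
    N = len(U)
    Vx, Vxx = final_cost(X[N])
    ks, Ks = [None] * N, [None] * N
    for i in reversed(range(N)):                 # backward pass
        Px, Pu, Pxx, Pux, Puu = running_cost(X[i], U[i])
        Ox = Px + A.T @ Vx                       # Eq. (11a), f_x = A
        Ou = Pu + B.T @ Vx                       # Eq. (11b), f_u = B
        Oxx = Pxx + A.T @ Vxx @ A                # Eq. (11c), f_xx = 0
        Oux = Pux + B.T @ Vxx @ A                # Eq. (11d), f_ux = 0
        Ouu = Puu + B.T @ Vxx @ B                # Eq. (11e), f_uu = 0
        ks[i] = -np.linalg.solve(Ouu, Ou)        # Eq. (13a)
        Ks[i] = -np.linalg.solve(Ouu, Oux)       # Eq. (13b)
        Vx = Ox - Ks[i].T @ Ouu @ ks[i]          # Eq. (14a)
        Vxx = Oxx - Ks[i].T @ Ouu @ Ks[i]        # Eq. (14b)
    Xn, Un = [X[0]], []
    for i in range(N):                           # forward pass, Eq. (15)
        u = U[i] + lam * ks[i] + Ks[i] @ (Xn[i] - X[i])
        Un.append(u)
        Xn.append(A @ Xn[i] + B @ u)
    return Xn, Un
```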

Suppose the system is subject to the constraint

$$\begin{aligned} \mathcal {C}\left( {x,u} \right) < 0, \end{aligned}$$
(16)

which can be shaped using an exponential barrier function [20, 23]

$$\begin{aligned} \mathcal {B}\left( {\mathcal {C}\left( {x,u} \right) } \right) = q_1 \exp \left( {q_2 \mathcal {C}\left( {x,u} \right) } \right) \end{aligned}$$
(17)

or a logarithmic barrier function [21]:

$$\begin{aligned} \mathcal {B}\left( {\mathcal {C}\left( {x,u} \right) } \right) = - \frac{1}{t}\log \left( { - \mathcal {C}\left( {x,u} \right) } \right) , \end{aligned}$$
(18)

where \(q_1\), \(q_2\), and \(t > 0\) are parameters. The barrier function can be added to the cost function as a penalty. Eq. (18) converges toward the ideal indicator function as t is increased over successive iterations.
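Both barrier shapes are one-liners; a minimal sketch follows, in which the parameter defaults are arbitrary assumptions rather than the values used in the experiments:

```python
import numpy as np

def exp_barrier(c, q1=1.0, q2=1.0):
    """Exponential barrier of Eq. (17) for a constraint value c = C(x, u)."""
    return q1 * np.exp(q2 * c)

def log_barrier(c, t=1.0):
    """Logarithmic barrier of Eq. (18); defined only while c = C(x, u) < 0.
    As t is increased over the iterations, the penalty approaches the
    ideal indicator function of the feasible set."""
    return -np.log(-c) / t
```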

2.2.2 Lateral CILQR controller

The lateral vehicle dynamic model [26] is employed for steering control. The state variable and control input are defined as \( \textbf{x} = \left[ {\begin{array}{*{20}c} \Delta &{} {\dot{\Delta }} &{} \theta &{} {\dot{\theta }} \\ \end{array}} \right] ^\textrm{T} \) and \(\textbf{u} = \left[ \delta \right] \), respectively, where \(\Delta \) is the lateral offset, \(\theta \) is the angle between the ego vehicle’s heading and the tangent of the road, and \(\delta \) is the steering angle. As described in our previous work [28, 29], \(\theta \) and \(\Delta \) can be obtained from MTUNets and related post-processing methods, and it is assumed that \(\dot{\Delta }= \dot{\theta }= 0\). The corresponding discrete-time dynamic model is written as follows:

$$\begin{aligned} \textbf{x}_{t + 1} \equiv \textbf{f}\left( {\textbf{x}_t ,\textbf{u}_t } \right) = \textbf{Ax}_t + \textbf{Bu}_t, \end{aligned}$$
(19)

where

$$ \textbf{A} = \left[ {\begin{array}{cccc} \alpha _{11} & \alpha _{12} & 0 & 0 \\ 0 & \alpha _{22} & \alpha _{23} & \alpha _{24} \\ 0 & 0 & \alpha _{33} & \alpha _{34} \\ 0 & \alpha _{42} & \alpha _{43} & \alpha _{44} \\ \end{array}} \right] ,\quad \textbf{B} = \left[ {\begin{array}{c} 0 \\ \beta _1 \\ 0 \\ \beta _2 \\ \end{array}} \right] , $$

with coefficients

$$ \begin{array}{l} \alpha _{11} = \alpha _{33} = 1, \quad \alpha _{12} = \alpha _{34} = dt, \\ \alpha _{22} = {1 - \frac{{2\left( {C_{\alpha f} + C_{\alpha r} } \right) dt}}{{mv }}},\quad \alpha _{23} = {\frac{{2\left( {C_{\alpha f} + C_{\alpha r} } \right) dt}}{m}}, \\ \alpha _{24} = {\frac{{2\left( { - C_{\alpha f} l_f + C_{\alpha r} l_r } \right) dt}}{{mv }}},\quad \alpha _{42} = {\frac{{2\left( {C_{\alpha f} l_f - C_{\alpha r} l_r } \right) dt}}{{I_z v }}} , \\ \alpha _{43} = {\frac{{2\left( {C_{\alpha f} l_f - C_{\alpha r} l_r } \right) dt}}{{I_z }}},\quad \alpha _{44} = {1 - \frac{{2\left( {C_{\alpha f} l_f^2 - C_{\alpha r} l_r^2 } \right) dt}}{{I_z v }}}, \\ \beta _1 = {\frac{{2C_{\alpha f} dt}}{m}} ,\quad \beta _2 = {\frac{{2C_{\alpha f} l_f dt}}{{I_z }}}. \\ \end{array} $$

Here, v is the ego vehicle’s current speed along the heading direction, and dt is the sampling time. The model parameters for the experiments are as follows: vehicle mass m = 1150 kg, front/rear cornering stiffness \({C_{\alpha f} }\) = 80 000 N/rad and \({C_{\alpha r} }\) = 80 000 N/rad, distances from the center of gravity to the front/rear axle \(l_f\) = 1.27 m and \(l_r\) = 1.37 m, and moment of inertia \(I_z\) = 2000 kg m\(^{2}\).
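The discrete-time matrices of Eq. (19) can be assembled directly from these parameters; a sketch follows (the function name `lateral_dynamics` is ours):

```python
import numpy as np

# Vehicle parameters from Section 2.2.2
m, Iz = 1150.0, 2000.0        # mass [kg], yaw moment of inertia [kg m^2]
Caf, Car = 80000.0, 80000.0   # front/rear cornering stiffness [N/rad]
lf, lr = 1.27, 1.37           # CG-to-front/rear-axle distances [m]

def lateral_dynamics(v, dt):
    """Matrices A, B of Eq. (19) for state [Delta, dDelta, theta, dtheta]."""
    A = np.array([
        [1.0, dt, 0.0, 0.0],
        [0.0, 1 - 2*(Caf + Car)*dt/(m*v), 2*(Caf + Car)*dt/m,
         2*(-Caf*lf + Car*lr)*dt/(m*v)],
        [0.0, 0.0, 1.0, dt],
        [0.0, 2*(Caf*lf - Car*lr)*dt/(Iz*v), 2*(Caf*lf - Car*lr)*dt/Iz,
         1 - 2*(Caf*lf**2 - Car*lr**2)*dt/(Iz*v)],
    ])
    B = np.array([[0.0], [2*Caf*dt/m], [0.0], [2*Caf*lf*dt/Iz]])
    return A, B
```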

The objective function (\(\mathcal {J}\)) containing the iterative linear quadratic regulator (\(\mathcal {J}_{ILQR}\)), barrier (\(\mathcal {J}_{b}\)), and end state cost (\(\mathcal {J}_{f}\)) terms can be represented as

$$\begin{aligned} \mathcal {J} = \mathcal {J}_{ILQR} + \mathcal {J}_{b} + \mathcal {J}_{f}, \end{aligned}$$
(20a)
$$\begin{aligned} \mathcal {J}_{ILQR} = \sum \limits _{i = 0}^{N - 1} \left( \textbf{x}_i - \textbf{x}_{r} \right) ^\textrm{T} \textbf{Q}\left( \textbf{x}_i - \textbf{x}_{r} \right) + \textbf{u}_i^\textrm{T} \textbf{R}\textbf{u}_i , \end{aligned}$$
(20b)
$$\begin{aligned} \mathcal {J}_{b} = \sum \limits _{i = 0}^{N - 1} \mathcal {B} \left( u_i \right) + \mathcal {B} \left( \Delta _i \right) , \end{aligned}$$
(20c)
$$\begin{aligned} \mathcal {J}_f = \left( \textbf{x}_N - \textbf{x}_r \right) ^\textrm{T} \textbf{Q}\left( \textbf{x}_N - \textbf{x}_r \right) + \mathcal {B}\left( \Delta _N \right) . \end{aligned}$$
(20d)

Here, the reference state \(\textbf{x}_{r}\) = \(\textbf{0}\), \(\textbf{Q}\)/\(\textbf{R}\) is the weighting matrix, and \(\mathcal {B} \left( u_i \right) \) and \(\mathcal {B} \left( \Delta _i \right) \) are the corresponding barrier functions:

$$\begin{aligned} \mathcal {B} \left( u_i \right) = - \frac{1}{t}\left( \log \left( u_i - \delta _{\min } \right) + \log \left( \delta _{\max } - u_i \right) \right) , \end{aligned}$$
(21a)
$$\begin{aligned} \mathcal {B}\left( \Delta _i \right) = \left\{ \begin{array}{l} \exp \left( \Delta _i - \Delta _{i - 1} \right) \quad \text {for}\quad \Delta _0 \ge 0, \\ \exp \left( \Delta _{i - 1} - \Delta _i \right) \quad \text {for}\quad \Delta _0 < 0, \\ \end{array} \right. \end{aligned}$$
(21b)

where \(\mathcal {B}\)(\(u_i\)) is used to limit the control inputs, with upper (lower) steering bound \(\delta _{\max } \left( {\delta _{\min } } \right) = \pi /6 \left( -\pi /6\right) \) rad. The objective of \(\mathcal {B} \left( \Delta _i \right) \) is to drive the ego vehicle toward the lane center.

The first element of the optimal steering sequence is then selected to define the normalized steering command at a given time as follows:

$$\begin{aligned} \textrm{SteerCmd} = \frac{\delta _{0}^{*}}{\pi /6}. \end{aligned}$$
(22)

2.2.3 Longitudinal CILQR controller

In the longitudinal direction, a proportional-integral (PI) controller [49]

$$\begin{aligned} PI(v) = k_P e + k_I \sum \limits _i {e_i } \end{aligned}$$
(23)

is first applied to the ego car for tracking the reference speed \(v_r\) under cruise conditions, where \(e=v-v_r\) is the tracking error and \(k_P\)/\(k_I\) is the proportional/integral gain. The normalized acceleration command is then given as follows:

$$\begin{aligned} \textrm{AcclCmd} = \tanh (PI(v)). \end{aligned}$$
(24)
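A minimal stateful sketch of Eqs. (23)-(24) follows; the gains `k_p` and `k_i` below are placeholders, not the tuned values used in the experiments:

```python
import numpy as np

class PISpeedController:
    """PI cruise controller of Eq. (23) with the tanh normalization of Eq. (24)."""

    def __init__(self, k_p=0.5, k_i=0.01):
        self.k_p, self.k_i = k_p, k_i
        self.err_sum = 0.0  # running sum of tracking errors

    def accel_cmd(self, v, v_ref):
        e = v - v_ref                           # tracking error e = v - v_r
        self.err_sum += e
        pi = self.k_p * e + self.k_i * self.err_sum
        return np.tanh(pi)                      # normalized AcclCmd
```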

When a slower preceding vehicle is encountered, AcclCmd must be updated so that a safe distance is maintained from that vehicle to avoid a collision; for this purpose, we use the following longitudinal CILQR algorithm.

The state variable and control input for longitudinal inter-vehicle dynamics are defined as \( \mathbf{x'} = \left[ {\begin{array}{*{20}c} D &{} v &{} a \\ \end{array}} \right] ^\textrm{T} \) and \( \mathbf{u'} = \left[ j \right] \), respectively, where a, \(j = \dot{a}\), and D are the ego vehicle’s acceleration, jerk, and distance to the preceding car, respectively. The corresponding discrete-time system model is written as

$$\begin{aligned} \mathbf{x'}_{t + 1} \equiv \mathbf{f'}\left( {\mathbf{x'}_t ,\mathbf{u'}_t } \right) = \mathbf{A'x'}_t + \mathbf{B'u'}_t + \mathbf{C'w'}, \end{aligned}$$
(25)

where

$$ \begin{array}{l} \mathbf{A'} = \left[ {\begin{array}{ccc} 1 & -dt & -\frac{1}{2}dt^2 \\ 0 & 1 & dt \\ 0 & 0 & 1 \\ \end{array}} \right] ,\quad \mathbf{B'} = \left[ {\begin{array}{c} 0 \\ 0 \\ dt \\ \end{array}} \right] , \\ \mathbf{C'} = \left[ {\begin{array}{ccc} 0 & dt & \frac{1}{2}dt^2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array}} \right] ,\quad \mathbf{w'} = \left[ {\begin{array}{c} 0 \\ v_l \\ a_l \\ \end{array}} \right] . \\ \end{array} $$

Here, \(v_l\)/\(a_l\) is the preceding car’s speed/acceleration, and \(\mathbf{w'}\) is the measurable disturbance input [50]. The values of D and \(v_l\) are measured by the radar; v is known; and \(a = a_l = 0\) is assumed. Here, MTUNets are used to recognize traffic objects, and the radar is responsible for providing precise distance measurements.
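A sketch assembling the matrices of Eq. (25) (the function name is ours):

```python
import numpy as np

def longitudinal_dynamics(dt):
    """Matrices of Eq. (25) for state x' = [D, v, a] and jerk input j."""
    A = np.array([[1.0, -dt, -0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    B = np.array([[0.0], [0.0], [dt]])
    C = np.array([[0.0, dt, 0.5 * dt**2],   # couples the disturbance w' = [0, v_l, a_l]
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    return A, B, C
```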

The objective function (\(\mathcal {J}'\)) for the longitudinal CILQR controller can be written as

$$\begin{aligned} \mathcal {J}' = \mathcal {J}'_{ILQR} + \mathcal {J}'_{b} + \mathcal {J}'_{f}, \end{aligned}$$
(26a)
$$\begin{aligned} \mathcal {J}'_{ILQR} = \sum \limits _{i = 0}^{N - 1} \left( \mathbf{x'}_i - \mathbf{x'}_r \right) ^\textrm{T} \mathbf{Q'}\left( \mathbf{x'}_i - \mathbf{x'}_r \right) + \mathbf{u'}_i^\textrm{T} \mathbf{R'}\mathbf{u'}_i , \end{aligned}$$
(26b)
$$\begin{aligned} \mathcal {J}'_{b} = \sum \limits _{i = 0}^{N - 1} \mathcal {B}' \left( u'_i \right) + \mathcal {B}' \left( D_i \right) + \mathcal {B}'\left( a_i \right) , \end{aligned}$$
(26c)
$$\begin{aligned} \mathcal {J}'_f = \left( \mathbf{x'}_N - \mathbf{x'}_r \right) ^\textrm{T} \mathbf{Q'}\left( \mathbf{x'}_N - \mathbf{x'}_r \right) + \mathcal {B'}\left( D_N \right) + \mathcal {B'}\left( a_N \right) . \end{aligned}$$
(26d)

Here, the reference state \(\mathbf{x'}_r = \left[ {\begin{array}{*{20}c} {D_r } &{} {v_l } &{} {a_l } \\ \end{array}} \right] \), and \(D_r\) is the reference distance for safety. \({\mathbf{Q'}}\)/\(\mathbf{R'}\) is the weighting matrix, and \(\mathcal {B}' \left( {u'_i} \right) \), \(\mathcal {B}' \left( D_i \right) ,\) and \(\mathcal {B}'\left( {a_i } \right) \) are related barrier functions:

$$\begin{aligned} \mathcal {B}' \left( u'_i \right) = - \frac{1}{t'}\left( \log \left( u'_i - j_{\min } \right) + \log \left( j_{\max } - u'_i \right) \right) , \end{aligned}$$
(27a)
$$\begin{aligned} \mathcal {B}' \left( D_i \right) = \exp \left( D_r - D_i \right) , \end{aligned}$$
(27b)
$$\begin{aligned} \mathcal {B}'\left( a_i \right) = \exp \left( a_{\min } - a_i \right) + \exp \left( a_i - a_{\max } \right) , \end{aligned}$$
(27c)

where \(\mathcal {B}' \left( D_i \right) \) is used for maintaining a safe distance, and \(\mathcal {B}'\)(\(u'_i\)) and \(\mathcal {B}'\)(\(a_i\)) are used to limit the ego vehicle’s jerk and acceleration to [−1, 1] m/s\(^3\) and [−5, 5] m/s\(^2\), respectively.

The first element of the optimal jerk sequence is then chosen to update AcclCmd in the car-following scenario as

$$\begin{aligned} \textrm{AcclCmd} = \tanh \left( {PI\left( v \right) } \right) + j_0^*. \end{aligned}$$
(28)

The brake command (BrakeCmd) gradually increases in value from 0 to 1 when D is smaller than a certain critical value during emergencies.

2.3 The VPC algorithm

The scenario addressed by the VPC algorithm is depicted in Fig. 3, which presents a top-down view of fitted lane lines produced using our previous method [29]. First, the detected line segments [Fig. 3(a)] were clustered using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. Second, the resulting semantic lanes were transformed into BEV space by using a perspective transformation. Third, the least-squares quadratic polynomial fitting method was employed to produce parallel ego-lane lines [Fig. 3(b)]; either of the two polynomials can be represented as \(y=f(x)\). Fourth, the road curvature \(\kappa \) was computed using the formula

$$\begin{aligned} \kappa = \frac{f''}{\left( {1 + f'^2 } \right) ^{3/2}}. \end{aligned}$$
(29)

Because the curvature estimate from a single map is noisy, an average map obtained from eight consecutive frames was used for curve fitting. The resulting curvature estimates were then used to determine the correction value for the steering command in this study.
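A sketch of the fit-and-evaluate step of Eq. (29) follows; the names are ours, and the eight-frame averaging is assumed to have been applied to the input points beforehand:

```python
import numpy as np

def road_curvature(x, y, x_eval):
    """Fit y = f(x) with a least-squares quadratic and evaluate Eq. (29)."""
    c2, c1, _ = np.polyfit(x, y, 2)   # f(x) = c2*x^2 + c1*x + c0
    fp = 2.0 * c2 * x_eval + c1       # f'(x_eval)
    fpp = 2.0 * c2                    # f''(x), constant for a quadratic
    return fpp / (1.0 + fp ** 2) ** 1.5
```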

Fig. 3

Scenario addressed by the VPC algorithm. (a) An example DNN-output lane-line binary map at a given time in the egocentric view. (b) Aerial view of the fitted lane lines. Here, o is the current position of the ego vehicle, \(o_p\) is the look-ahead point, and \(p_0\) and \(p_1\) represent the corresponding lane points at the same x coordinates as o and \(o_p\), respectively. \(\kappa \) and \(\mathrm \delta \) are the road curvature and steering angle of the ego vehicle, respectively. In this paper, a look-ahead distance of \(\overline{oo_p}\) = 10 m is used, which corresponds to a car speed of approximately 72 km/h [26]

As shown in Fig. 3(b), \(\delta \) at o is the current steering angle. The desired steering angles at \(p_0\) and \(p_1\) can be computed using the local lane curvature [21]:

$$\begin{aligned} \delta _0 = \tan ^{ - 1} \left( c\kappa _0 \right) , \end{aligned}$$
(30a)
$$\begin{aligned} \delta _1 = \tan ^{ - 1} \left( c\kappa _1 \right) , \end{aligned}$$
(30b)

where c is an adjustable parameter. Hence, the predicted steering angle at a look-ahead point \(o_p\) can be represented as

$$\begin{aligned} {\delta _p} = \delta + \left( {\delta _1 - \delta _0 } \right) \equiv \delta + \Delta \delta . \end{aligned}$$
(31)

Compared with existing LQR-based preview control methods [51, 52], the VPC algorithm requires fewer tuning parameters when included in the steering geometry model; moreover, it can be combined with other path-tracking models. For example, a VPC-CILQR controller can update the CILQR steering command [Eq. (22)] as follows:

$$\begin{aligned} \mathrm{VPC\_SteerCmd} = \left\{ \begin{array}{l} \textrm{SteerCmd} + \left| \Delta \delta \right| \quad \text {if}\quad \textrm{SteerCmd} \ge 0, \\ \textrm{SteerCmd} - \left| \Delta \delta \right| \quad \text {if}\quad \textrm{SteerCmd} < 0. \\ \end{array} \right. \end{aligned}$$
(32)
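Eqs. (30)-(32) reduce to a few lines of code; a sketch follows, in which the value of the adjustable parameter c is an assumption:

```python
import numpy as np

def vpc_steer_cmd(steer_cmd, kappa_0, kappa_1, c=1.0):
    """Apply the VPC correction of Eq. (32) to a CILQR steering command.

    kappa_0/kappa_1: curvatures at p0 (current) and p1 (look-ahead), Eq. (30).
    """
    d_delta = np.arctan(c * kappa_1) - np.arctan(c * kappa_0)  # Eq. (31)
    if steer_cmd >= 0:
        return steer_cmd + abs(d_delta)
    return steer_cmd - abs(d_delta)
```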

In summary, the proposed VPC algorithm uses the future road curvature at a look-ahead point 10 m in front of the ego car (Fig. 3) as input to generate the updated steering inputs. This algorithm is applied before the ego car enters a curvy road to improve tracking performance. Accurate and complete future road shape prediction is crucial for developing preview path-tracking control algorithms [52]. However, whether the necessary information can be obtained depends greatly on the maximum perception range of the lane detection modules. As demonstrated in Fig. 5, LLAMAS [57] data are more useful than the TORCS [28] or CULane [56] datasets for developing algorithms with such path-tracking functionality. A nonlinear MPC approach using high-quality predicted lane curvature data can achieve better control performance than the proposed method; however, if computational cost is a concern, such a nonlinear approach may not necessarily be preferred. The following sections describe validation experiments in which the proposed algorithm was compared against other control algorithms.

Table 4 Datasets used in the experiments

3 Experimental setup

The proposed MTUNets extract local and global contexts from input images to simultaneously perform the segmentation, detection, and pose tasks. Because these tasks have different learning rates [40, 53, 54], the proposed MTUNets were trained in a stepwise instead of end-to-end manner to help the backbone network learn common features. The training strategy, image data, and validation are described as follows.

3.1 Network training strategy

The MTUNets were trained in three stages. The pose subnet was first trained through stochastic gradient descent (SGD) with a batch size (bs) of 20, momentum (mo) of 0.9, and learning rate (lr) starting from \(10^{ - 2}\) and decreasing by a factor of 0.9 every 5 epochs for a total of 100 epochs. The detection and pose subnets were then trained jointly with the parameters obtained in the first training stage and using the SGD optimizer with bs = 4, mo = 0.9, and lr = \(10^{ - 3}\), \(10^{ - 4}\), and \(10^{ - 5}\) for the first 60 epochs, the 61st to 80th epochs, and the last 20 epochs, respectively. All subnets (detection, pose, and segmentation) were trained together in the last stage with the pretrained model obtained in the previous stage and using the Adam optimizer. Bs and mo were set to 1 and 0.9, respectively, and lr was set to \(10^{ - 4}\) for the first 75 epochs and \(10^{ - 5}\) for the last 25 epochs. The total loss in each stage was a weighted sum of the corresponding losses [55].
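For reference, the stage-1 schedule can be expressed in a few lines of PyTorch; the `pose_subnet` module below is a stand-in, and the sketch shows only the optimizer and learning-rate schedule, not the full training loop:

```python
import torch

pose_subnet = torch.nn.Linear(512, 4)  # stand-in for the real pose subnet
optimizer = torch.optim.SGD(pose_subnet.parameters(), lr=1e-2, momentum=0.9)
# lr starts at 1e-2 and is multiplied by 0.9 every 5 epochs (100 epochs total)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.9)
for epoch in range(100):
    ...  # one training epoch over the pose data (bs = 20)
    scheduler.step()
```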

3.2 Image datasets

We conducted experiments on the artificial TORCS [28] dataset and the real-world CULane [56] and LLAMAS [57] datasets. The summary statistics of the datasets are presented in Table 4. The customized TORCS dataset has joint labels for all tasks, whereas the original CULane/LLAMAS dataset contains only lane line labels. Thus, we annotated each CULane and LLAMAS image with traffic object bounding boxes to mimic the TORCS dataset. Correspondingly, the TORCS, CULane, and LLAMAS datasets had approximately 30 K, 80 K, and 29 K labeled traffic objects, respectively. To determine anchor boxes for the detection task, the k-means algorithm [58] was applied to partition the ground truth boxes. The CULane and LLAMAS datasets lack ego vehicle angle labels; therefore, these datasets could only be used for evaluations in the segmentation and detection tasks. The ratio of the number of images used in the training phase to that used in the test phase was approximately 10 for all datasets, as in our previous works [28, 29]. Recall/average precision (AP; with the IoU threshold set to 0.5), recall/F1 score, and accuracy/mean absolute error (MAE) were used to evaluate model performance in the detection, segmentation, and pose tasks, respectively.

Fig. 4

Tracks A (left) and B (right) for dynamically evaluating the proposed MTUNet and control models. The total length of Track A/B (Track 7/8 in [28]) was 2843/3919 m with a lane width of 4 m, and the maximum curvature was approximately 0.03/0.05 m\(^{-1}\), which is curvier than a typical road [60]. The self-driving car drove in a counterclockwise direction, and the starting locations are marked by green filled circle symbols. A self-driving vehicle [31] could not finish a lap on Track A using the direct perception approach [61]

Table 5 Performance of trained MTUNets on the test data

3.3 Autonomous driving simulation

The open-source driving environment TORCS provides sophisticated physics and graphics engines; it is therefore ideal for not only visual processing but also vehicle dynamics research [59]. The ego vehicle controlled by our self-driving framework was driven autonomously on unseen TORCS roads [e.g., Tracks A and B in Fig. 4] to validate the effectiveness of our approach. All experiments, including both MTUNet training and testing and driving simulations, were conducted on a PC equipped with an INTEL i9-9900K CPU, 64 GB of RAM, and an NVIDIA RTX 2080 Ti GPU with 4352 CUDA cores and 11 GB of GDDR memory. The control frequency for the ego vehicle in TORCS was approximately 150 Hz on this computer.

Table 6 Results for MTUNets in terms of parameters (Params), MACs, and FPS

4 Results and discussions

Fig. 5

Example traffic object and lane-line detection results for the MTUNet\(\_\)1\(\times \) network on CULane (first row), LLAMAS (second row), and TORCS (third row) images

Table 5 presents the performance results of the MTUNet models on the testing data for the various tasks. Table 6 lists the number of parameters, computational complexity, and inference speed of each scheme as a comparison of computational efficiency. As described in Section 2, although the input size of the MTUNet models was reduced through the use of padded 3 \(\times \) 3 Conv layers, model performance was not affected; MTUNet\(\_\)2\(\times \)/MTUNet\(\_\)1\(\times \) achieved results similar to those of our previous model in the segmentation and pose tasks on the TORCS and LLAMAS datasets [28]. For the complex CULane data, the MTUNet models performed worse than the SCNN [56], the original state-of-the-art method for this dataset; however, the SCNN has a lower inference speed because of its higher computational complexity [10]. The MTUNet models are designed for real-time control of self-driving vehicles; the SCNN model is not. Of the three considered MTUNet variants, MTUNet\(\_\)2\(\times \) and MTUNet\(\_\)1\(\times \) outperformed MTMResUNet on all datasets when each model jointly performed the detection and segmentation tasks (first, second, and fourth rows of Table 5). This result differs from that of a previous study on a single-segmentation task for biomedical images [37]. Task gradient interference can reduce the performance of an MTDNN [62, 63]; in this case, the MTUNet\(\_\)2\(\times \) and MTUNet\(\_\)1\(\times \) models outperformed the more complex MTMResUNet network because of their simpler architecture. When the pose task was included (last row of Table 5), MTUNet\(\_\)2\(\times \) and MTUNet\(\_\)1\(\times \) also outperformed MTMResUNet on all evaluation metrics; the decreasing AP scores for the detection task are attributable to an increase in false positive (FP) detections. However, for all models, the inclusion of the pose task decreased the recall scores for the detection task by only approximately 0.02 (last two rows of Table 5); nearly 95\(\%\) of the ground truth boxes were still detected when the models simultaneously performed all tasks. Following the efficiency analysis method used in [64] (Sec. V. B. in [64]), this study computed the densities of the detection AP and road type accuracy scores using the data in the last rows of Tables 5 and 6. MTUNet\(\_\)1\(\times \) had higher efficiency in terms of parameter utilization than did MTUNet\(\_\)2\(\times \): it was 3.26 times smaller than MTUNet\(\_\)2\(\times \) and achieved a 1.75 times faster inference speed (40.77 FPS); this speed is comparable to that of the YOLOP model [10]. These results indicate that MTUNet\(\_\)1\(\times \) is the most efficient model for collaborating with controllers to achieve automated driving. The MTUNet\(\_\)1\(\times \) model can also be run on a low-performance computer with only a few gigabytes of GDDR memory. For a computer with a GTX 1050 Max-Q GPU with 640 CUDA cores and 4 GB of GDDR memory, the MTUNet\(\_\)1\(\times \) model achieved an inference speed of 14.69 FPS for multi-task prediction. Figure 5 presents example MTUNet\(\_\)1\(\times \) network outputs for both traffic object and lane detection on all datasets.

Table 7 Dynamic system models and parameters for implementing the CILQR and SQP controllers
Fig. 6

Dynamic performance of lateral VPC-CILQR algorithm and MTUNet\(\_\)1\(\times \) model for an ego vehicle with heading \(\theta \) and lateral offset \(\Delta \) for lane-keeping maneuvers in the central lanes of Tracks A and B at 76 and 50 km/h, respectively. At the curviest section of Track A (near 1900 m), the maximal \(\Delta \) value was 0.52 m; the ego car controlled by this model outperformed the ego car controlled by the Stanley controller (Fig. 3 in [28]), MTL-RL [Fig. 11(a) in [31]], or CILQR (Fig. 7) algorithms

Fig. 7

Dynamic performance of the lateral CILQR algorithm and MTUNet\(\_\)1\(\times \) model for lane-keeping maneuvers in the central lanes of Tracks A and B at the same speeds as those in Fig. 6. At the curviest section of Track A (near 1900 m), the maximal \(\Delta \) value was 0.71 m, which is 1.36 times larger than that of the ego car controlled by the VPC-CILQR algorithm

Fig. 8

Dynamic performance of the lateral VPC-SQP algorithm and MTUNet\(\_\)1\(\times \) model for lane-keeping maneuvers in the central lanes of Tracks A and B at the same speeds as those in Fig. 6. For the curviest sections of Tracks A and B (near 1900 and 2750 m, respectively), the performance of the VPC-SQP algorithm was inferior to those of the VPC-CILQR and CILQR algorithms (Figs. 6 and 7). These algorithms also outperformed the SQP algorithm (Fig. 9), indicating the effectiveness of the VPC algorithm

Fig. 9

Dynamic performance of the lateral SQP algorithm and MTUNet\(\_\)1\(\times \) model for lane-keeping maneuvers in the central lanes of Tracks A and B at the same speeds as those in Fig. 6. The model performance at the curviest section of Tracks A and B (near 1900 and 2750 m, respectively) was inferior to those of all other tested methods

To objectively evaluate the dynamic performance of the autonomous driving algorithms, lane-keeping and car-following maneuvers were performed on the challenging Tracks A and B shown in Fig. 4. The SQP-based controllers were implemented using the ACADO toolkit [65] for comparison with the CILQR-based controllers. All settings for these algorithms were the same and are summarized in Table 7. For the lateral control experiments, the autonomous vehicles were designed to drive at various cruise speeds on Tracks A and B. The \(\theta \) and \(\Delta \) results for the VPC-CILQR, CILQR, VPC-SQP, and SQP algorithms are presented in Figs. 6, 7, 8, 9, 10, 11, 12, and 13. The results of the CILQR and SQP controllers for the longitudinal control experiments are presented in Fig. 14. The MAEs for \(\theta \), \(\Delta \), v, and D in Figs. 6-14 are listed in Table 8. Table 9 presents the average time to arrive at a solution for the VPC, CILQR, and SQP algorithms. The inference time was shorter for VPC than for MTUNet\(\_\)1\(\times \) (24.52 ms). Moreover, the CILQR solvers completed their computations well within the ego vehicle control period (6.66 ms); the SQP solvers were slower. Specifically, the computation times per cycle for the lane-keeping and car-following tasks for the SQP solvers were 16.7 and 21.5 times longer, respectively, than those of the CILQR solvers. A discussion of the results for all the tested controllers is presented as follows.

Fig. 10

Dynamic performance of the lateral VPC-CILQR algorithm and MTUNet\(\_\)1\(\times \) model for an ego vehicle with heading \(\theta \) and lateral offset \(\Delta \) for lane-keeping maneuvers in the central lanes of Tracks A and B at 80 and 60 km/h, respectively. At the curviest section of Track A, the maximal \(\Delta \) value was 1.34 m

Fig. 11

Dynamic performance of the lateral CILQR algorithm and MTUNet\(\_\)1\(\times \) model for lane-keeping maneuvers in the central lanes of Tracks A and B at the same speeds as those in Fig. 10. At the curviest section of Track A, the maximal \(\Delta \) value was 1.48 m

Fig. 12

Dynamic performance of the lateral VPC-SQP algorithm and MTUNet\(\_\)1\(\times \) model for lane-keeping maneuvers in the central lanes of Tracks A and B at the same speeds as those in Fig. 10

Fig. 13

Dynamic performance of the lateral SQP algorithm and MTUNet\(\_\)1\(\times \) model for lane-keeping maneuvers in the central lanes of Tracks A and B at the same speeds as those in Fig. 10

In Figs. 6-9, all methods, including the MTUNet\(\_\)1\(\times \) model, could effectively guide the ego car along the lane center to complete one lap at cruise speeds of 76 and 50 km/h on Tracks A and B, respectively. The discrepancy in \(\theta \) between the MTUNet\(\_\)1\(\times \) estimation and the ground truth trajectory was attributable to curvy or shadowy road segments, which may induce vehicle jittering [31]. Nevertheless, the \(\Delta \) values estimated from lane line segmentation were more robust in difficult scenarios than those obtained with the end-to-end method [61]. Therefore, these \(\Delta \) values can be used by the controllers to effectively correct \(\theta \) errors and return the ego car to the road’s center. The maximum \(\Delta \) deviations from the ideal zero value on Track A (g-track-3 in [31]) were smaller when the ego car was controlled by the VPC-CILQR controller (Fig. 6) than when it was controlled by the CILQR [21] (Fig. 7) or MTL-RL [31] algorithms. Note that the vehicle speed in the MTL-RL control framework on Track A in that study was 75 km/h, which is slower than that in this study. This finding indicates that for curvy roads, the VPC-CILQR algorithm minimized \(\Delta \) better than did the other investigated algorithms. Because of the lower computational efficiency of the standard SQP solver [21], the SQP-based controllers were less effective at maintaining the ego vehicle’s stability than the CILQR-based controllers on the curviest sections of Tracks A and B (Figs. 8 and 9). Moreover, the VPC-SQP algorithm outperformed the SQP algorithm alone, further demonstrating the effectiveness of the VPC algorithm. In terms of MAE, the VPC-CILQR controller outperformed the other methods in terms of \(\Delta \)-MAE on both tracks (data for 76 and 50 km/h in Table 8). However, \(\theta \)-MAE was 0.0003 and 0.0005 rad higher on Tracks A and B, respectively, for the VPC-CILQR controller than for the CILQR controller. This may be because the optimality of the CILQR solution is compromised when the external VPC correction is applied to it. This problem could be solved by applying standard MPC methods with more general lane-keeping dynamics, such as the lateral control model presented in [52], which uses road curvature to describe vehicle states. However, this nonlinear MPC design is computationally expensive and may not meet the requirements of real-time autonomous driving.

Table 8 Performance of the VPC-CILQR, CILQR, VPC-SQP, and SQP algorithms with MTUNet\(\_\)1\(\times \) in terms of the MAE for the tests in Figs. 6-14
Fig. 14

Results for the longitudinal CILQR and SQP algorithms in car-following scenario after ego car travels 1075 m on Track B; v and D are speed and intervehicle distance, respectively

In Figs. 10-13, the ego car was guided along the central lane by the MTUNet\(\_\)1\(\times \) model at higher cruise speeds (80 and 60 km/h on Tracks A and B, respectively) than those in Figs. 6-9. For the ego vehicles with the VPC-CILQR and CILQR controllers (Figs. 10 and 11), the maximum \(\Delta \) deviations were approximately half of the lane width (2 m). By contrast, the ego cars controlled by the SQP-based algorithms unintentionally left the ego-lane at the curviest section of Track A (Figs. 12 and 13). This was attributed to the slower reaction times of the SQP-based algorithms (9.70 ms) relative to the CILQR-based algorithms (0.58 ms). Therefore, higher controller latency may cause not only ego car instability but also unsafe driving, particularly when the vehicle enters a curvy road at high speed.

The car-following maneuver in Fig. 14 was performed on a section of Track B. The ego vehicle was initially cruising at 76 km/h and approached a slower preceding car with speed in the range of 63 to 64 km/h. For all ego vehicles with the CILQR or SQP controllers, the vehicle speed was regulated, the preceding vehicle was tracked, and the controller maintained a safe distance between the vehicles. However, the uncertainty in the optimal solution led to differences between the reference and response trajectories [18]. For the longitudinal CILQR and SQP controllers, respectively, v-MAE was 0.1971 and 0.2629 m/s, and D-MAE was 0.4201 and 0.4930 m (second row of Table 8). Hence, CILQR again outperformed SQP in this experiment. A supplementary video featuring the lane-keeping and car-following simulations can be found at https://youtu.be/Un-IJtCw83Q.

5 Conclusion

Table 9 Average computation time of VPC, CILQR, and SQP algorithms

In this study, a vision-based self-driving system that uses a monocular camera and radars to collect sensing data was proposed; the system comprises an MTUNet network for environment perception and VPC and CILQR modules for motion planning. The proposed MTUNet model improves on our previous model [28]; we added a YOLOv4 detector and increased the network’s efficiency by reducing the network input size for use with TORCS [28], CULane [56], and LLAMAS [57] data. The most efficient MTUNet model, namely MTUNet\(\_\)1\(\times \), achieved an inference speed of 40.77 FPS for simultaneous lane line segmentation, ego vehicle pose estimation, and traffic object detection tasks. For vehicular automation, a lateral VPC-CILQR controller was designed that can plan vehicle motion on the basis of the ego vehicle’s heading, lateral offset, and road curvature as determined by MTUNet\(\_\)1\(\times \) and the postprocessing methods. The longitudinal CILQR controller is activated when a slower preceding car is detected; the optimal jerk is then applied to regulate the ego vehicle’s speed and prevent a collision. The MTUNet\(\_\)1\(\times \) model and the VPC-CILQR controller can collaborate to operate the ego vehicle on challenging tracks in TORCS; this approach outperformed methods based on the CILQR [21] or MTL-RL [31] algorithms in the same path-tracking task on the same large-curvature roads. Moreover, the self-driving vehicle with the long-latency SQP-based controllers tended to leave the lane on some curvy routes, whereas the short-latency CILQR-based controllers drove stably and safely in the same scenarios. In conclusion, the experiments demonstrated the applicability and feasibility of the proposed system, which comprises perception, planning, and control algorithms, for real-time autonomous vehicle control without HD maps. A future study could apply the proposed autonomous driving system to a real vehicle operating on actual roads.