Introduction

In light of the previous studies and the recognized importance of logistics optimization, it becomes evident that tackling the intricacies of Multimodal Transportation Systems (MTS) is paramount. The global logistics landscape has seen a surge in the utilization of multiple transportation modes, emphasizing the need for advanced optimization techniques that can navigate the challenges inherent to MTS.

Complex Nature of Multimodal Transportation Systems (MTS)

  • MTS represents a pivotal aspect of today’s logistics, requiring efficient coordination among diverse transport modes like rail, road, air, and sea.

  • The central challenge lies in achieving a seamless, cost-effective, and timely integration of these modes. This often places decision-makers in the throes of dilemmas such as choosing optimal transitions, reducing transshipment costs, and circumventing mode-specific constraints.

To address these challenges, this study turns its gaze towards the Teaching–Learning-Based Optimization (TLBO) algorithm as a potential game-changer in the realm of MTS optimization.

Introduction and Rationale for the TLBO Algorithm

  • In the face of MTS complexities, the study introduces the TLBO algorithm.

  • Distinct from traditional methods, TLBO employs a teacher-learner pairing approach. Efficient solutions, dubbed “teachers”, guide and refine less efficient “learners” in an iterative push towards optimization.

  • This innovative methodology, as our findings suggest, refines the optimization process, making it more intuitive and effective.

Further evidence of the efficacy of the TLBO algorithm is furnished by our research. In-depth examinations and illustrative numerical examples vouch for its robustness and adaptability.

Comparative Efficacy of TLBO

A juxtaposition with the Genetic Algorithm elucidates the merits of TLBO. In our tests, the Genetic Algorithm consistently produced solutions with higher associated costs, highlighting TLBO's superior optimization capability in confronting MTS challenges.

This article unfolds over seven meticulously crafted sections. The journey commences with the introductory section that sets the stage. This is succeeded by a literature review in the second section, providing insights into existing works. In the third section, we delve into the mathematical modelling of MTS, laying down the foundational framework. The fourth section elucidates the proposed methodology, breaking down its intricacies. The fifth section brings this methodology to life with illustrative numerical examples. Subsequent to this, the sixth section offers a detailed analysis and interpretation of the numerical problem. The narrative culminates with the seventh section, drawing conclusive insights and encapsulating the essence of our exploration.

Review of Literature

The evolution of Multimodal Transportation Systems (MTS) signifies a transformative approach to the movement of goods, leveraging a cohesive integration of multiple transport modes, such as road, rail, air, and sea. The core attraction of MTS lies in its promise of heightened efficiency, enabling seamless transit while navigating diverse logistical landscapes. But beneath this promise lies the intricate challenge of optimizing these systems, a task that necessitates a meticulous balancing act between variables like cost, time, and the optimal selection of transport modes.

While the endeavours to refine MTS optimization trace back to the early 1970s, with pioneering works like Appa’s exploration in 1973 [1], the quest for perfection remains ongoing. Despite the passage of time and the cumulative wisdom of numerous research efforts, there persists a tangible gap. This lacuna underscores the need for crafting algorithms that can more adeptly navigate the multifaceted intricacies of these transport systems.

The annals of MTS optimization literature chronicle a fascinating journey spanning over half a century. It began with Appa’s seminal presentation of the transportation conundrum in 1973 [1]. This initial exploration paved the way for further inquiry, with scholars like Lin [2] introducing innovative dimensions to the discourse through the lens of multiple-objective optimization. Yet, for all the pioneering strides, the spectre of challenges in unearthing foolproof solutions remained, as underscored by Gass in 1990 [3].

The progression of research into the latter part of the century and beyond illuminated the multifarious aspects of MTS optimization. Pratt and Lomax [4] articulated the pressing need for holistic performance metrics that could gauge the efficacy of multimodal systems in their entirety. Venturing deeper into the twenty-first century, Pedregal’s work in 2004 [5] offered a panoramic view of the optimization techniques landscape, illuminating potential pathways to refine and resolve transportation challenges.

Cagnina et al. [6] delved into the potential of particle swarm optimization (PSO) as an effective tool for tackling engineering optimization challenges. Around the same period, research into optimization algorithms was gathering momentum. A few years later, Rao et al. [7] introduced the Teaching–Learning-Based Optimization (TLBO) algorithm, specifically tailored for constrained mechanical design optimization problems. The TLBO algorithm’s effectiveness was subsequently validated in various studies, from unconstrained real-parameter optimization [8], to heat exchangers [9], and multi-objective optimization [10]. Moreover, improvements to the original TLBO algorithm were suggested by Rao and Patel [11], further increasing its applicability in solving a wide range of optimization problems. In the context of MTS, Rouhieh and Alecsandru [12] explored optimizing route choice, with Shabanpour-Haghighi et al. [13] introducing a modified version of TLBO for the multi-objective optimal power flow problem. Kengpol et al. [14] proposed a framework for route selection, while Zhang et al. [15] developed an approach for discrete multimodal transportation network design. The applicability of TLBO in MTS was emphasized by Rao and Waghmare [16], with Zou et al. [16] enhancing the TLBO approach by incorporating the learning experience of other learners. Chen et al. [17] and others showcased its potential in solving optimization problems in MTS.

The Teaching–Learning-Based Optimization (TLBO) algorithm has witnessed burgeoning interest in recent years, a fact underscored by the myriad of survey papers that have emerged. Notably, Rao and Rao [18] and Rao [19] contributed seminal reviews, accentuating the algorithm’s foundational principles and its broad applicability. Concurrently, Yu et al. [20] delved deeper into the nuances, pioneering work in constrained optimization and presenting an improved version of TLBO. As the field matured, researchers extended TLBO’s application to diverse problems. For instance, Zheng et al. [21] ingeniously employed TLBO to tackle the intricate multi-skill resource-constrained project scheduling problem, setting a benchmark for subsequent studies. Additionally, Udomwannakhet et al. [22] offered a comprehensive review of the multimodal transportation optimization model, bridging the gaps between traditional models and modern optimization techniques. The burgeoning trend continued with works like Chen et al. [24], which manifested the versatility of TLBO in Multimodal Transportation Systems (MTS). In a significant contribution in 2022, Archetti et al. [30] presented a holistic survey on optimization techniques, with an emphasis on multimodal freight transportation. The exploration of this domain was further enriched by studies like Basciftci and Van Hentenryck [32], which proposed innovative designs for on-demand multimodal transit systems, and Tong [33], which investigated rail consignment path planning in MTS while factoring in time uncertainties.

In parallel, scholars like Ma et al. [27] ventured into refining the TLBO algorithm itself, revealing nuances in its modifications tailored for specific problem-solving scenarios. Kaewfak et al. [28] presented a compelling case for multi-objective optimization, particularly in the context of making freight route choices. An illustration of the algorithm’s progressive evolution was provided by Wu et al. [29]. Their work unveiled an enhanced TLBO algorithm which seamlessly integrated a reinforcement learning strategy, thereby solidifying the algorithm’s position in the contemporary optimization domain.

Of particular interest are recent innovative studies that have contributed valuable insights to the field. Owais et al. [26] delved into optimal metro design for transit networks, particularly focusing on existing square cities from a non-demand criterion perspective. This research carved a niche by concentrating on sustainability, aligning MTS with eco-friendly city design principles. Progressing into 2022, Owais and Shahin [31] unfolded precise algorithms for the Screen Line Problem in expansive networks, employing a shortest path-based column generation approach. A year later, Alshehri et al. [34] introduced the potential of Residual Neural Networks in estimating Origin–Destination trip matrices from traffic sensor information, augmenting the MTS optimization domain by merging it with advanced neural network techniques.

Mathematical Modelling

Mathematical Modelling of Multimodal Transportation problem

A multimodal transport problem is similar to a transportation problem, but it involves more than one mode of transport (see Fig. 1). Multimodal transport is also referred to as combined transport: the goods are carried under one contract, but by at least two modes of transport (e.g., road vehicles, rail, air, and sea freight). It is essential to plan and provide all modes of transportation in a systematic manner, just as other modern infrastructure, such as sanitation, power supply, and buildings, is managed. Beyond operating multiple modes, transportation professionals must also plan for and ensure the safe and intelligent movement of goods and personnel between different modes, commonly referred to as intermodal transfer. Some of the important perspectives in MMTP are as follows:

  • Minimize total transportation expenses by using each transportation method where it excels.

  • Boost financial productivity and efficacy, elevating the nation’s standing in the global market.

  • Diminish excess capacity and strain on seldom-used infrastructure assets.

  • Generate better outcomes from both public and private funding.

  • Enhance transportation accessibility for the elderly, remote individuals, differently abled people, and those facing economic challenges.

  • Decrease fuel usage and play a role in improving environmental health and air purity.

Fig. 1
figure 1

Graphical representation of MMTP

We now outline some definitions associated with our suggested Multimodal Transportation Problem (MMTP).

Ground origin (GO) In logistics, locations that are able to provide items but lack the ability to accumulate them are termed ground origins.

Final destination (FD) In logistics, points that can accumulate items but are not equipped to provide them are labeled final destinations.

Supplementary origin (SO) In logistics, locations with the ability to both collect and distribute items are recognized as supplementary origins.

We have already seen the mathematical representation of the transportation problem (TP). In the presence of SOs, a TP becomes an MMTP.

Notations

To understand the mathematical formulae of the MMTP, we once again refer to the following notations:

\(m_{1}\)::

Total count of base start points (GOs),

\(n_{1}\)::

Total count of terminal points (FDs),

\(m_{t}\)::

Count of auxiliary starts (SOs) at level \(\left( {t - 1} \right)\); \(t\) ranging from 2 to r,

\(r\)::

Total labels designated for starting points,

\(a_{i}^{1}\)::

Goods supply at the \(i{{\text{th}}}\) start point of GO,

\(a_{i}^{t}\)::

Goods supply at the \(i{{\text{th}}}\) start of the \(t{\text{th}}\) level SO, where \(t\) ranges from 2 to r,

\(b_{j}\)::

Requirement at the \(j{\text{th}}\) point of FD,

\(\alpha_{1}^{1}\)::

Individual vehicle's max load from base origins to terminal destinations,

\(\alpha_{s}^{t}\)::

Individual vehicle's max load from SO of the \(\left( {t - 1} \right){\text{th}}\) level to SO of \(\left( {s - 1} \right){\text{th}}\) level; \(t\) and \(s\) both ranging from 2 to r,

\(c_{ij1}^{1}\)::

Per unit goods movement charge from the \(i{\text{th}}\) starting point to \(j{\text{th}}\) terminal of GO to FD,

\(c_{ij1}^{t}\)::

Per unit goods movement charge moving from the \(i{\text{th}}\) start to \(j{\text{th}}\) end of SO at \(\left( {t - 1} \right){\text{th}}\) level to FD; \(t\) ranging from 2 to r,

\(c_{ijs}^{1}\)::

Per unit goods movement charge for transporting from the \(i{\text{th}}\) start to \(j{\text{th}}\) terminal from GO to SO at the \(\left( {r - s + 1} \right){\text{th}}\) level; s ranging from 2 to r,

\(c_{ijs}^{t}\)::

Per unit goods movement charge from the \(i{\text{th}}\) beginning to \(j{\text{th}}\) end from SO of \(\left( {t - 1} \right){\text{th}}\) level to SO of \(\left( {r - s + 1} \right){\text{th}}\) level; \(t\) and \(s\) both ranging from 2 to r and \(t\) being at least \(s\),

\(x_{ij1}^{1}\)::

Vehicle count needed to move from start point i to \(j{\text{th}}\) terminal of FD from GO,

\(x_{ij1}^{t}\)::

Vehicle count needed for movement from \(i{\text{th}}\) start to \(j{\text{th}}\) terminal from SO of \(\left( {t - 1} \right){\text{th}}\) level to FD, \(t\) ranging from 2 to r,

\(x_{ijs}^{1}\)::

Vehicle count needed for moving from \(i{\text{th}}\) start to \(j{\text{th}}\) terminal from GO to SO at \(\left( {r - s + 1} \right){\text{th}}\) level; \(s\) ranging from 2 to r,

\(x_{ijs}^{t}\)::

Vehicle count necessary for transport from \(i{\text{th}}\) beginning to \(j{\text{th}}\) terminal from SO of \(\left( {t - 1} \right){\text{th}}\) level to SO of \(\left( {r - s + 1} \right){\text{th}}\) level; \(t\) and \(s\) both ranging from 2 to r with \(t\) being equal to or greater than \(s\),

\(z^{1}\)::

Target function to reduce transport charges to terminal from base start and all auxiliary starts,

\(z^{i}\)::

Target function to cut down transport costs to the SOs of the \(\left( {r - i + 1} \right){\text{th}}\) label from GO and all SOs of lower labels; \(i\) ranging from 2 to \(r - 1\).

To build the MMTP mathematical framework, we consider the subsequent elements:

Construct the target function \(z^{1}\) for transit to FD from GO and SO across all designations:

The associated transportation network for \(z^{1}\) is depicted in Fig. 2. In this context, transportation pathways include: from GO to FD, with its related function being \(\mathop \sum \nolimits_{i = 1}^{{m_{1} }} \mathop \sum \nolimits_{j = 1}^{{n_{1} }} \alpha_{1}^{1} C_{ij1 }^{1} x_{ij1}^{1}\); from SO of the first designation to FD, its related function is \(\mathop \sum \nolimits_{i = 1}^{{m_{2} }} \mathop \sum \nolimits_{j = 1}^{{n_{1} }} \alpha_{1}^{2} C_{ij1 }^{2} x_{ij1}^{2}\); and the pattern continues.

Fig. 2
figure 2

Graphical representation of transportation for \( z^{1}\)

Concluding, for SO of designation \(r - 1\) leading to FD, the related target function becomes \(\mathop \sum \nolimits_{i = 1}^{{m_{r} }} \mathop \sum \nolimits_{j = 1}^{{n_{1} }} \alpha_{1}^{r} C_{ij1 }^{r} x_{ij1}^{r}\). Therefore,

$$ z^{1} = \mathop \sum \limits_{i = 1}^{{m_{1} }} \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{1} C_{ij1 }^{1} x_{ij1}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{2} C_{ij1 }^{2} x_{ij1}^{2} + \cdots + \mathop \sum \limits_{i = 1}^{{m_{r} }} \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{r} C_{ij1 }^{r} x_{ij1}^{r} . $$

Within the transport framework linked to \(z^{1}\), the requirements at the FD nodes must be met. Hence, the subsequent conditions must be fulfilled.

$$ \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{1}^{1} x_{ij1}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \alpha_{1}^{2} x_{ij1}^{2} + \cdots + \mathop \sum \limits_{i = 1}^{{m_{r} }} \alpha_{1}^{r} x_{ij1}^{r} \ge b_{j} \;\;\;\left( {j = 1,2, \ldots , n_{1} } \right). $$

Next, we delve into the formulation of the objective function \(z^{2}\) tailored for transport to the SO with a label of \(r - 1\) originating from GO and SOs labeled as \(t = 1, 2, 3, ..., r - 2\). Refer to Fig. 3 for a visual representation of the transport network associated with \(z^{2}\). In this context:

Fig. 3
figure 3

Graphical representation of transportation for \(z^{2}\)

For transport from GO to SO labeled \(r - 1\), the objective function becomes \(\mathop \sum \nolimits_{i = 1}^{{m_{1} }} \mathop \sum \nolimits_{j = 1}^{{m_{r} }} \alpha_{r}^{1} C_{ijr }^{1} x_{ijr}^{1}\).

When transporting from SO labeled 1 to SO labeled \(r - 1\), the objective function is described as \(\mathop \sum \nolimits_{i = 1}^{{m_{2} }} \mathop \sum \nolimits_{j = 1}^{{m_{r} }} \alpha_{r}^{2} C_{ijr }^{2} x_{ijr}^{2}\).

This pattern continues, culminating in the transport from SO labeled \(r - 2\) to SO labeled \(r - 1\), which is described by the objective function \(\mathop \sum \nolimits_{i = 1}^{{m_{r - 1} }} \mathop \sum \nolimits_{j = 1}^{{m_{r} }} \alpha_{r}^{r - 1} C_{ijr }^{r - 1} x_{ijr}^{r - 1}\).

Hence,

$$ z^{2} = \mathop \sum \limits_{i = 1}^{{m_{1} }} \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{1} C_{ijr }^{1} x_{ijr}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{2} C_{ijr }^{2} x_{ijr}^{2} + \cdots + \mathop \sum \limits_{i = 1}^{{m_{r - 1} }} \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{r - 1} C_{ijr }^{r - 1} x_{ijr}^{r - 1} $$

Within the transport network depicted as \( z^{2}\) in Fig. 3, the quantity of goods held at the SO nodes with a label of \(r - 1\) should be at least the quantity of goods being moved from SO with label \(r - 1\) to the FD nodes. Hence, the following conditions must be met.

$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{r} x_{tj1}^{r} \le \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{r}^{1} x_{itr}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \alpha_{r}^{2} x_{itr}^{2} + \cdots + \mathop \sum \limits_{i = 1}^{{m_{r - 1} }} \alpha_{r}^{r - 1} x_{itr}^{r - 1} \;\;\;\left( {t = 1,2, \ldots , m_{r} } \right) $$

Furthermore, the cumulative quantity of goods at the SO nodes labelled \(r - 1\) should not surpass their storage limits. As a result,

$$ \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{r}^{1} x_{itr}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \alpha_{r}^{2} x_{itr}^{2} + \cdots + \mathop \sum \limits_{i = 1}^{{m_{r - 1} }} \alpha_{r}^{r - 1} x_{itr}^{r - 1} \le a_{t}^{r} \;\;\;\;\left( {t = 1,2, \ldots , m_{r} } \right). $$

In a comparable manner, we establish \( z^{i}\) for \(\left( {i = 2, 3, \ldots , r - 1} \right)\), culminating in transportation from GO to the SO of the first label.

Setting up the objective function \( z^{r}\) pertains to the transportation towards the SO of the first label, originating from GO. Illustrated in Fig. 4 is the relevant transportation network associated with \( z^{r}\). In this context, when contemplating transportation from GO to the SO of the first label, the relevant objective function becomes:

$$ z^{r} = \mathop \sum \limits_{i = 1}^{{m_{1} }} \mathop \sum \limits_{j = 1}^{{m_{2} }} \alpha_{1}^{2} C_{ij2 }^{1} x_{ij2 .}^{1} $$
Fig. 4
figure 4

Graphical representation of transportation for \(z^{r}\)

Within the transportation framework represented by \(z^{r}\) as depicted in Fig. 4, the inventory at the nodes of SO with label 1 should surpass the volume of goods directed to SO with labels t (where \(t\) ranges from 2 to \(r - 1\)) and subsequently to FD. Consequently, we account for these conditions:

$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{2} x_{tj1}^{2} + \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{2} x_{tjr}^{2} + \cdots + \mathop \sum \limits_{j = 1}^{{m_{3} }} \alpha_{3}^{2} x_{tj3}^{2} \le \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{2}^{1} x_{it2}^{1} \;\;\;\left( {t = 1,2, \ldots , m_{2} } \right). $$

Furthermore, the cumulative volume of items held at the SO nodes with label 1 should not exceed their storage limit. Therefore, \(\mathop \sum \nolimits_{i = 1}^{{m_{1} }} \alpha_{2}^{1} x_{it2}^{1} \le a_{t}^{2} \;\;\;\left( {t = 1,2, \ldots , m_{2} } \right).\)

The comprehensive MMTP model, as depicted in Fig. 1, encompasses the combined networks through the use of objective functions \(z^{i }\), where \(i\) ranges from 1 to \(r\). Accompanied by the necessary constraints for shaping these objective functions, the mathematical representation of MMTP can be outlined as:

$$ {\text{Minimize}}\;\;\;z = { }z^{1 } + z^{2} + \cdots + z^{r } , $$

\(z^{1} = \mathop \sum \nolimits_{i = 1}^{{m_{1} }} \mathop \sum \nolimits_{j = 1}^{{n_{1} }} \alpha_{1}^{1} C_{ij1 }^{1} x_{ij1}^{1} + \mathop \sum \nolimits_{i = 1}^{{m_{2} }} \mathop \sum \nolimits_{j = 1}^{{n_{1} }} \alpha_{1}^{2} C_{ij1 }^{2} x_{ij1}^{2} + \cdots + \mathop \sum \nolimits_{i = 1}^{{m_{r} }} \mathop \sum \nolimits_{j = 1}^{{n_{1} }} \alpha_{1}^{r} C_{ij1 }^{r} x_{ij1}^{r} ,\)

$$ z^{2} = \mathop \sum \limits_{i = 1}^{{m_{1} }} \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{1} C_{ijr }^{1} x_{ijr}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{2} C_{ijr }^{2} x_{ijr}^{2} + \ldots + \mathop \sum \limits_{i = 1}^{{m_{r - 1} }} \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{r - 1} C_{ijr }^{r - 1} x_{ijr}^{r - 1} , $$
$$ z^{r} = \mathop \sum \limits_{i = 1}^{{m_{1} }} \mathop \sum \limits_{j = 1}^{{m_{2} }} \alpha_{1}^{2} C_{ij2 }^{1} x_{ij2 .}^{1} $$
(1)

The restrictions on availability at GO and the SOs of all labels

$$ {\text{s}}.{\text{t}}.\;\;\mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{1} x_{ij1}^{1} + \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{1} x_{ijr}^{1} + \cdots + \mathop \sum \limits_{j = 1}^{{m_{2} }} \alpha_{2}^{1} x_{ij2}^{1} \le a_{i}^{1} \;\;\;\left( { i = 1,2, \ldots , m_{1} } \right), $$
(2)
$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{2} x_{ij1}^{2} + \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{2} x_{ijr}^{2} + \cdots + \mathop \sum \limits_{j = 1}^{{m_{3} }} \alpha_{3}^{2} x_{ij3}^{2} \le a_{i}^{2} \;\;\;\left( { i = 1,2, \ldots , m_{2} } \right), $$
(3)
$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{3} x_{ij1}^{3} + \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{3} x_{ijr}^{3} + \cdots + \mathop \sum \limits_{j = 1}^{{m_{4} }} \alpha_{4}^{3} x_{ij4}^{3} \le a_{i}^{3} \;\;\;\left( { i = 1,2, \ldots , m_{3} } \right), $$
(4)
$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{r} x_{ij1}^{r} \le a_{i}^{r} \;\;\;\left( { i = 1,2, \ldots , m_{r} } \right), $$
(5)

The restrictions on minimum demands at the FD

$$ \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{1}^{1} x_{ij1}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \alpha_{1}^{2} x_{ij1}^{2} + \cdots + \mathop \sum \limits_{i = 1}^{{m_{r} }} \alpha_{1}^{r} x_{ij1}^{r} \ge b_{j} \;\;\;\left( { j = 1,2, \ldots , n_{1} } \right), $$
(6)

The restrictions on distributing and storing items at SO nodes for all labels

$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{2} x_{tj1}^{2} + \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{2} x_{tjr}^{2} + \cdots + \mathop \sum \limits_{j = 1}^{{m_{3} }} \alpha_{3}^{2} x_{tj3}^{2} \le \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{2}^{1} x_{it2}^{1} \;\;\;\left( { t = 1,2, \ldots , m_{2} } \right), $$
(7)
$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{3} x_{tj1}^{3} + \mathop \sum \limits_{j = 1}^{{m_{r} }} \alpha_{r}^{3} x_{tjr}^{3} + \cdots + \mathop \sum \limits_{j = 1}^{{m_{4} }} \alpha_{4}^{3} x_{tj4}^{3} \le \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{3}^{1} x_{it3}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \alpha_{3}^{2} x_{it3}^{2} \;\;\;\left( { t = 1,2, \ldots , m_{3} } \right), $$
(8)
$$ \mathop \sum \limits_{j = 1}^{{n_{1} }} \alpha_{1}^{r} x_{tj1}^{r} \le \mathop \sum \limits_{i = 1}^{{m_{1} }} \alpha_{r}^{1} x_{itr}^{1} + \mathop \sum \limits_{i = 1}^{{m_{2} }} \alpha_{r}^{2} x_{itr}^{2} + \cdots + \mathop \sum \limits_{i = 1}^{{m_{r - 1} }} \alpha_{r}^{r - 1} x_{itr}^{r - 1} \le a_{t}^{r} \;\;\;\left( { t = 1,2, \ldots , m_{r} } \right), $$
(9)
$$ x_{ijs}^{t} \ge 0\;\;\;\forall \;i, j, s\;{\text{ and }}\;t $$
(10)

Furthermore, to achieve a viable solution for this model, it is crucial to ensure that the quantity of goods needed at the FD nodes doesn't surpass the total available at the GO nodes. Consequently, the condition for the model's viability is defined as:

$$ \mathop \sum \limits_{i = 1}^{{m_{1} }} a_{i}^{1} \ge \mathop \sum \limits_{j = 1}^{{n_{1} }} b_{j} $$

In the MMTP framework, the number of decision variables amounts to the product \(\left( {m_{1} \times m_{2} \times \cdots \times m_{r} } \right) \times n_{1}\). The feasibility space for this model is built upon the ensuing premises:

There are \(m_{1}\) constraints, given in (2), on the capacity of the ground origins. The storage restrictions at the supplementary origins introduce a further \(\left( {m_{2} + m_{3} + \cdots + m_{r} } \right)\) inequalities, given in (3) to (5), while the \(n_{1}\) constraints in (6) correspond to the minimum requirements of the final destinations.

Moreover, it is imperative that the quantity of goods dispatched from the supplementary origins does not surpass what is provisioned to these very origins. To capture this, we include another set of \(\left( {m_{2} + m_{3} + \cdots + m_{r} } \right)\) inequalities, given in (7) to (9).

In total, this model encompasses \(\left( {m_{1} \times m_{2} \times \cdots \times m_{r} \times n_{1} } \right)\) variables and a combined \(\left[ {2\left( {m_{2} + m_{3} + \cdots + m_{r} } \right) + m_{1} + n_{1} } \right]\) constraints, in addition to the non-negativity conditions. This model stands as a fully fledged linear programming problem (LPP), solvable using techniques such as the Big-M method, the revised simplex method, and Vogel's approximation method, among others.
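To make the LPP formulation concrete, the sketch below solves a deliberately tiny transportation instance (two origins, two destinations; all cost, supply, and demand figures are hypothetical, not taken from this paper) with `scipy.optimize.linprog`. Demand constraints of the form \(\sum_i x_{ij} \ge b_j\) are negated so they fit linprog's \(A_{ub} x \le b_{ub}\) interface.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-unit costs for routes (1,1), (1,2), (2,1), (2,2)
costs = np.array([2.0, 3.0, 4.0, 1.0])

# Supply limits: x11 + x12 <= 20 and x21 + x22 <= 30
A_supply = [[1, 1, 0, 0], [0, 0, 1, 1]]
b_supply = [20, 30]

# Minimum demands x11 + x21 >= 25 and x12 + x22 >= 25,
# rewritten as <= constraints by negating both sides
A_demand = [[-1, 0, -1, 0], [0, -1, 0, -1]]
b_demand = [-25, -25]

res = linprog(costs, A_ub=A_supply + A_demand, b_ub=b_supply + b_demand,
              bounds=[(0, None)] * 4)
print(res.fun)  # minimal total transportation cost
```

The same pattern extends to the full MMTP: one cost coefficient per \(x_{ijs}^{t}\), one row per availability, demand, and storage inequality.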

TLBO

This optimization method views a set of learners as a collective and the different topics introduced to them as unique parameters in an optimization challenge. A learner’s result is comparable to the 'performance' metric in optimization, with the top solution deemed the ‘educator’. Parameters are linked to a specific goal function in an optimization scenario, and the ‘optimal solution’ corresponds to the ‘peak value’ of that function. The operation of this method is split into two stages: the ‘educator stage’ and the ‘student stage’. The mechanisms of both stages are detailed subsequently.

Teacher Phase

This is the preliminary phase of the method, where students gather knowledge from the educator. This stage embodies the students' learning process under the guidance of educators. Here, an educator endeavours to enhance the collective performance of a topic they teach by leveraging their expertise. During iteration \(i\), let’s assume there are ‘\(m\)’ different topics (i.e., parameters), and ‘\(n\)’ distinct students (i.e., sample size, \(k = 1,2,...,n\)), with \(x_{mean}\) representing the average performance of a student in a given topic (\(j = 1,2,...,m\)). The optimal collective performance, \(x_{best}\), encompassing all topics among all students, can be determined from the top-performing student, denoted as k-best. But since educators are conventionally perceived as experts aiming to guide students towards superior outcomes, in this method, the top-performing student is viewed as the educator. The discrepancy between the prevailing average score for each topic and the corresponding score of the educator for the same topic is represented by,

$$ x_{difference\_mean} = r_{i} \left( {x_{best} - T_{f} x_{mean} } \right) $$
(11)

In this context, \(x_{best}\) signifies the performance of the top student in topic \(j\). The factor \(T_{f}\) plays a pivotal role in determining the shift in the mean value, while \(r_{i}\) is a random number within the [0,1] interval. The factor \(T_{f}\) can assume a value of 1 or 2, chosen at random with equal likelihood, as follows:

$$ T_{f} = {\text{round }}\left[ {{1} + {\text{rand }}\left( {0,{1}} \right) \, \{ {2} - {1}\} } \right] $$
(12)

\(T_{f}\) is intrinsic to the TLBO method and is not an external parameter. While it's not directly fed into the algorithm, its value is deduced through Eq. (12). Testing on a range of benchmark functions showed optimal performance when \(T_{f}\) varied between 1 and 2. The most pronounced improvements were seen for values \(T_{f}\) = 1 or 2. Thus, for a more streamlined approach, the teaching factor is best set to either 1 or 2, adhering to the criteria in Eq. (12). The teacher phase then updates the present solution based on the Mean Difference.

$$ x_{new} = x_{current} + x_{difference\_mean} $$
(13)

The updated value is represented by \(x_{new}\), which is derived from \(x_{current}\). If \(x_{new}\) yields a more favorable function outcome, it is accepted. Once the teacher phase wraps up, all endorsed function values are preserved and transition to the learner phase, which builds upon the outcomes of the teacher phase.
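The teacher-phase update of Eqs. (11)–(13) can be sketched as follows. This is a minimal Python/NumPy illustration under our own conventions (the function name, the array layout of learners, and the greedy acceptance step are ours, not notation from the paper):

```python
import numpy as np

def teacher_phase(population, fitness, objective, rng):
    """One teacher-phase pass (minimization): Eqs. (11)-(13) plus greedy acceptance."""
    n, m = population.shape
    teacher = population[np.argmin(fitness)]   # top learner acts as the teacher
    x_mean = population.mean(axis=0)           # mean result per subject
    Tf = rng.integers(1, 3)                    # teaching factor, 1 or 2 (Eq. 12)
    r = rng.random(m)                          # random weights in [0, 1]
    diff_mean = r * (teacher - Tf * x_mean)    # mean difference (Eq. 11)
    candidates = population + diff_mean        # candidate update (Eq. 13)
    cand_fit = np.array([objective(x) for x in candidates])
    improved = cand_fit < fitness              # accept only improving moves
    population[improved] = candidates[improved]
    fitness[improved] = cand_fit[improved]
    return population, fitness
```

Because a learner is replaced only when its objective value improves, the best fitness in the population never worsens after this phase.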

Learner Phase

After the teacher phase, this stage emulates the learners' knowledge acquisition through mutual interactions. Here, a learner absorbs new insights when another learner possesses a more profound understanding. Given a population size '\(n\)', the subsequent learning process is depicted in this phase.

Choose two learners, P and Q, such that \(x_{total - P,i}{\prime}\) is not equal to \(x_{total - Q,i}{\prime}\), where \(x_{total - P,i}{\prime}\) and \(x_{total - Q,i}{\prime}\) denote the updated objective values of P and Q at the close of the teacher phase.

For minimization problems:

$$ x_{j,P,i}^{^{\prime\prime}} = x_{j,P,i}{\prime} + r_{i} \left( { x_{j,P,i }{\prime} - x_{j,Q,i}{\prime} } \right),\;\;{\text{if}}\;\; x_{total - P,i }{\prime} < x_{total - Q,i}{\prime} $$
(14)
$$ x_{j,P,i}^{^{\prime\prime}} = x_{j,P,i}{\prime} + r_{i} \left( { x_{j,Q,i }{\prime} - x_{j,P,i}{\prime} } \right),\;\;{\text{if}}\;\; x_{total - Q,i }{\prime} < x_{total - P,i}{\prime} $$
(15)

If \(x_{j,P,i}^{^{\prime\prime}}\) yields an improved function value, it's accepted.

For maximization problems:

$$ x_{j,P,i}^{^{\prime\prime}} = x_{j,P,i}{\prime} + r_{i} \left( { x_{j,P,i }{\prime} - x_{j,Q,i}{\prime} } \right),\;\;{\text{if}}\;\; x_{total - Q,i }{\prime} < x_{total - P,i}{\prime} $$
(16)
$$ x_{j,P,i}^{^{\prime\prime}} = x_{j,P,i}{\prime} + r_{i} \left( { x_{j,Q,i }{\prime} - x_{j,P,i}{\prime} } \right),\;\;{\text{if}}\;\; x_{total - P,i }{\prime} < x_{total - Q,i}{\prime} $$
(17)

Teaching–Learning-Based Optimization (TLBO) is an optimization method inspired by classroom learning dynamics. Instead of using unique control parameters, it utilizes common parameters like the size of the group and iteration count.

The flowchart of TLBO is shown in Fig. 5.

Fig. 5
figure 5

Flowchart of TLBO

Operational Procedure

Let us assume a shipping company wants to ship a batch of goods from a warehouse to a delivery centre. The available modes of transportation include road, rail, and air transport. Each mode of transport \(i\) has a cost \(c_{i}\) ($) and a required time \(t_{i}\) (h), where \(i = 1,2, \ldots , n\). The objective is to deliver the goods in less than \(t\) hours while minimizing the total cost \(C\).

Let’s now use TLBO to solve this problem:

Our decision variable is the mode of transport. The time constraint is that total transport time must be less than or equal to \(t\) hours. Each potential solution is represented as a vector of modes \(v\) of transport.

Initialization: We randomly initialize a population of \(n\) solutions, each a vector of transport modes; the initial population is shown in tabular form in Table 1.

Table 1 Initial Population of Transportation problem

Teacher Phase: We calculate the cost for each solution and check if it satisfies the time constraint. The solution with the minimum cost is considered as the teacher. Next, update the other solutions using insights from the teacher.

Learner Phase: Choose two solutions at random; if one outperforms the other, refine the worse solution using insights from the better one.

Convergence Check: The procedure persists until the optimal solution remains relatively unchanged across multiple cycles.

Solution Extraction: The optimal solution is derived from the most favourable outcome within the group.

Validation and Verification: Check that the solution meets the time constraint and is feasible in a real-world context.
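The operational procedure above can be sketched as a minimal discrete TLBO for transport-mode selection. The cost and time tables, population size, iteration count, and penalty constant below are illustrative assumptions, not the paper's data; in the discrete setting, "moving toward" a better solution is realized by copying some of its legs.

```python
import random

# Hypothetical per-leg cost ($) and time (hr) for each mode -- illustrative
# values only; the paper's actual data appear in Tables 1-5.
COST = {"Road": 500, "Rail": 400, "Air": 1500}
TIME = {"Road": 8, "Rail": 6, "Air": 2}

T_MAX, LEGS, POP, ITERS = 18, 3, 20, 50
BIG = 10**6  # penalty added to routes that violate the time budget

def total_cost(route):
    cost = sum(COST[m] for m in route)
    time = sum(TIME[m] for m in route)
    return cost if time <= T_MAX else cost + BIG

def tlbo(seed=0):
    random.seed(seed)
    modes = list(COST)
    pop = [[random.choice(modes) for _ in range(LEGS)] for _ in range(POP)]
    for _ in range(ITERS):
        # Teacher phase: mix each route with the cheapest route found so far.
        teacher = min(pop, key=total_cost)
        for i, route in enumerate(pop):
            cand = [t if random.random() < 0.5 else r
                    for r, t in zip(route, teacher)]
            if total_cost(cand) < total_cost(route):
                pop[i] = cand  # greedy acceptance
        # Learner phase: the worse of a random pair learns from the better.
        p, q = random.sample(range(POP), 2)
        if total_cost(pop[p]) > total_cost(pop[q]):
            p, q = q, p  # ensure p indexes the better route
        cand = [a if random.random() < 0.5 else b
                for a, b in zip(pop[q], pop[p])]
        if total_cost(cand) < total_cost(pop[q]):
            pop[q] = cand
    return min(pop, key=total_cost)

best = tlbo()
```

Because greedy acceptance never replaces a feasible route with a penalized one, the best route found respects the time budget whenever any feasible route appears in the population.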

The flowchart of the complete process is shown in Fig. 6.

Fig. 6 Flowchart of working procedure

Numerical Computations

Problem 1

Suppose we are a shipping company and want to ship a batch of goods from a warehouse to a delivery centre. We have three options: road, rail, and air transport (Table 2). Each mode has an associated cost and required time, as follows:

Table 2 Cost and time for different modes of transport

We must deliver the goods within 18 h while minimizing the cost.

Let’s now use TLBO to solve this problem:

Our decision variable is the mode of transport. The time constraint is that total transport time must be less than or equal to 18 h. Each potential solution can be represented as a vector of modes of transport. For example, [Road, Road, Air] means we first use road transport, then again road transport, and finally air transport.

Initialization: We randomly initialize a population of 21 solutions in Table 3. Each solution is a vector of modes of transport.

Table 3 Initial population of 21 solutions

Teacher Phase: We calculate the cost of each solution and check whether it satisfies the time constraint. The solution with the minimum cost is taken as the teacher, and the remaining solutions are updated based on it.

Learner Phase: Two solutions are selected at random, and the worse of the two is updated using the better one. The updated solutions are shown in Table 4.

Table 4 Updated solutions after teacher and learner phases

Utilizing the learner phase, we obtained the subsequent solution presented in Table 5.

Table 5 Updated solutions after teacher and learner phases

In Table 5, the solutions have converged.

Solution Extraction: The best solutions found are [Rail, Rail, Road], [Rail, Road, Rail], and [Road, Rail, Rail], each with a cost of $1900.

Validation and Verification: We confirm that this solution meets the 18 h time constraint and check that it is feasible in a real-world context.

Table 6 Cost and time for different modes of transport

Problem 2

Assuming we are dispatching a consignment of goods from our warehouse to a delivery centre, we are presented with four transportation alternatives: road, rail, air, and sea. Each mode comes with its own associated costs and delivery times (Table 6), detailed as follows:

We must deliver the goods within 36 h while minimizing the cost; in addition, the first shipment leg must be made by road transport.
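The extra side constraint (first leg by road) can be handled with a simple feasibility check or a repair step applied to every candidate route. The per-leg times below are illustrative assumptions, not the values in Table 6.

```python
# Hypothetical per-leg times (hr) for Problem 2's four modes -- illustrative
# values only; the paper's actual figures appear in Table 6.
TIME = {"Road": 10, "Rail": 7, "Air": 3, "Sea": 30}
T_MAX = 36

def is_feasible(route):
    """A route is feasible if its first leg is by road and it fits the budget."""
    return route[0] == "Road" and sum(TIME[m] for m in route) <= T_MAX

def repair(route):
    """Force the first leg to Road so every candidate satisfies the side rule."""
    return ["Road"] + list(route[1:])
```

Repairing candidates before cost evaluation keeps the search inside the feasible region, so neither the teacher nor the learner phase needs to know about the side constraint.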

Initialization: We randomly initialize a population of 21 solutions in Table 7. Each solution is a vector of modes of transport.

Table 7 Initialization Population of 21 solutions

Teacher Phase: We calculate the cost of each solution and check whether it satisfies the time constraint. The solution with the minimum cost is taken as the teacher, and the remaining solutions are updated based on it.

Learner Phase: Two solutions are selected at random, and the worse of the two is updated using the better one. The updated solutions are shown in Table 8.

Table 8 Updated solutions after teacher and learner phases

Repeating the learner phase yields the updated solutions in Table 9.

Table 9 Updated solutions after teacher and learner phases

Further iterations of the teacher and learner phases produce the solutions in Tables 10, 11 and 12.

Table 10 Updated solutions after teacher and learner phases

Table 11 Updated solutions after teacher and learner phases

Table 12 Updated solutions after teacher and learner phases

In Table 12, the solutions have converged.

Solution Extraction: The best solution we found is [Road, Road, Rail, Rail], with a cost of $3100.

Validation and Verification: We confirm that this solution meets the 36 h time constraint and check that it is feasible in a real-world context.

Results Interpretation

In our study, we first explored a transportation model involving three modes of transport and then expanded the analysis to incorporate a fourth mode. By applying the Teaching–Learning-Based Optimization (TLBO) algorithm to these multifaceted transportation problems, we identified the most efficient solution from a wide range of candidates. The optimal costs for both configurations are compared against a timeline in Figs. 7 and 8, with one axis showing cost and the other the time dimension.

Fig. 7 Cost and time comparison

Fig. 8 Cost and time comparison

In our analysis, both problems were also optimized using the genetic algorithm (GA). The costs obtained with the GA were notably higher than those achieved with the Teaching–Learning-Based Optimization (TLBO) algorithm: for Problem 1, the GA produced a cost of $2500, and for Problem 2, a cost of $6650.

Furthermore, applying the GA introduced additional complexity, particularly in determining an appropriate pairing method for the crossover operation. Choosing between one-point, two-point, or more intricate crossover mechanisms compounded the computational burden of the GA. By contrast, the TLBO algorithm, with its straightforward approach of substituting the higher-cost mode vector with the lower-cost one, proved more intuitive and less computationally demanding. This simplicity positions TLBO as the more user-friendly and efficient option for these specific problems. For a visual comparison of the costs associated with the TLBO and GA approaches, readers are directed to Figs. 9 and 10.
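To make the crossover-design choice discussed above concrete, a minimal one-point crossover on mode vectors might look as follows; the function and route values are illustrative, not taken from the paper's GA implementation.

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    """Splice two equal-length mode vectors at a random cut point.

    The cut index is drawn from 1..len-1, so each child keeps at least
    one leg from each parent.
    """
    assert len(parent1) == len(parent2) >= 2
    cut = rng.randrange(1, len(parent1))
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2
```

Even this simplest variant requires choosing how parents are paired and where cuts may fall, which is the kind of design burden TLBO avoids.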

Fig. 9 Costs comparison between TLBO and GA

Fig. 10 Costs comparison between TLBO and GA

Conclusion

In conclusion, this study has demonstrated the successful application of the Teaching–Learning-Based Optimization (TLBO) algorithm to the challenging field of Multimodal Transportation Systems (MTS). The algorithm determines the optimal mode of transport by iteratively comparing pairs of solutions, assigning the role of “teacher” to the lower-cost solution and “learner” to the higher-cost one. By guiding each learner toward its teacher, the algorithm steadily reduces the learner’s cost and converges to an optimal solution. The study showcases the capability of TLBO to address the complex problems associated with MTS by treating both cost and time as significant factors, and the two numerical examples presented in the article illustrate its versatility and effectiveness in transportation system optimization. The results are expected to contribute to the design and development of more efficient, cost-effective, and time-saving transportation solutions, aiding both industry professionals and policymakers in their efforts to optimize transportation systems. Future research may incorporate additional constraints and factors, such as environmental impact and passenger preferences, to broaden the scope of the model and provide a more comprehensive view of the MTS optimization process.