
1 Introduction

Food production, quality and the security of supply chains remain critical societal challenges, requiring increasingly advanced scientific and engineering methods to address the issues that arise. The benefits of modelling and control approaches in achieving better process understanding, higher yields and more consistent product quality have been widely recognised in other industrial sectors, such as chemical and biochemical processing [1, 2]. Similar benefits of modelling and control approaches have also been demonstrated in various sectors of the food industry [3,4,5,6,7].

The strict regulatory environment in which the food industry operates also necessitates the effective use of modelling and control strategies to ensure food safety, authenticity and quality. For example, Humphrey [8] provides a comprehensive review of current food safety and regulatory strategies, highlighting in particular the differences between the regulatory schemes in the USA and the EU. Dora et al. [4], on the other hand, review food quality management systems, concentrating specifically on assessment strategies and a feasibility study for small and medium-sized European food enterprises.

Perhaps the most widely recognised and internationally accepted system of effective food safety management is the Hazard Analysis and Critical Control Points (HACCP) approach [9,10,11]. Glassey [12] discusses the implementation steps of HACCP as well as their interlinkage with the Process Analytical Technologies (PAT) and the implications of this and other regulatory frameworks upon data management in the food industry.

The major emphasis of this chapter is on advanced modelling and control approaches currently being proposed for use in the food industry sector. Through case studies, the major challenges and approaches are described and the opportunities identified.

1.1 Modelling and Control Challenges in the Food Industry

The fundamental requirements of any control scheme include a reliable measurement of the relevant process variables and the ability to modify identified manipulated variables effectively to maintain the desired process state. These requirements pose specific challenges within the food processing sector, where the ability to obtain representative measurements throughout the processing chain, from the raw material through intermediate stages to the final product, is frequently affected by a range of external factors. As Hitzmann et al. [13] highlighted in their status report on PAT in the food industry, the properties of raw materials, the complex transformations during the processing chain [14] and the perishable nature of the products all contribute to the increased complexity of the challenge. Ropkins and Beck [15] and Hitzmann et al. [13] argue that traditional end-point food testing does not provide an effective assurance of food safety for a number of reasons. These include [8]:

  • The challenge of obtaining a representative sample, requiring substantial sub-sampling of food for analysis

  • A limited assurance of safety, as only the absence of those hazards specifically tested for can be confirmed

  • A range of difficulties associated with traditional testing procedures, such as time and resource demand, destructive nature and the difficulty of interpretation

  • Reactive nature of control

  • The most significant issue of product safety being assured only at the end-point rather than ‘building it into the product through prevention’

1.1.1 Advanced Measurement

Although this chapter deals predominantly with modelling and control application case studies, it is important to highlight representative measurement of the process state as a critical requirement for effective control. A wide range of scientific publications deals with detailed descriptions of traditional and more advanced analytical techniques used to assess the quality and authenticity of raw materials, intermediates and final products in the food processing chain. These range from simple physico-chemical sensors, visual inspection and image analysis of raw materials and food products (e.g. [16,17,18]) to more advanced, non-invasive fingerprinting techniques.

For example, Riedl et al. [19] and Nunes [20] review various applications of vibrational spectroscopy and chemometrics to assess authenticity, adulteration and intrinsic quality parameters of food and of edible oils and fats, respectively. These extensive reviews of the benefits of various spectroscopic approaches also provide useful reference material, highlighting several modelling and data analysis methods used to interpret the resulting measurements. Similarly, Gutierrez-Capitan et al. [21] review the electronic tongue approach to monitoring the quality of wines. Electronic tongues (similarly to electronic noses) are devices containing an array of sensors, typically based on ion-selective field effect transistors, providing a 'fingerprint' trace of the analysed food sample (e.g. [18]).

Hitzmann et al. [13] argue the need for optical analytical methods, such as various spectroscopic approaches, for a variety of reasons. These include, for example, oxidative changes of raw materials during storage and processing, as well as the critical importance of the visual impression of the final product and the strict hygiene requirements throughout the production process and storage. In such circumstances, non-invasive sensor systems are particularly useful.

1.1.2 Data Analysis and Modelling Approaches

The increasing use of fingerprinting analytical techniques, such as the optical methods mentioned above, has led to increases in the amount and frequency of data collected during processing, as discussed in Glassey [12]. Multivariate data analysis methods capable of dealing with large, often highly correlated data sets, reducing their dimensionality and enabling correlations to be built between the measured raw material characteristics, process data and the resulting product quality characteristics, have shown their benefits in a number of industries (e.g. [2, 19]). Principal component analysis (PCA) and its variants have been used extensively to identify underlying features in multidimensional data (e.g. [22]). Various regression methods, such as locally weighted regression, Partial Least Squares (PLS) and its variants, and nonlinear methods including artificial neural networks, have been used effectively to develop models capable of quantitatively predicting the desired process outputs (e.g. [21, 23]). Whilst this chapter does not aim to detail these data analysis methods, they form an essential part of the successful control of any process where process output measurements cannot be obtained directly using analytical techniques. Readers are therefore referred to various sources describing the fundamentals of these methods (e.g. [24, 25]).
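
As a hedged illustration of this workflow, the sketch below applies PCA and PLS from scikit-learn: PCA to inspect the latent structure of a spectral matrix, followed by a PLS regression relating the spectra to a quality attribute. The matrix X and response y are synthetic placeholders standing in for real spectra and laboratory measurements.

    # Minimal sketch: PCA for dimensionality reduction, PLS for prediction.
    # X (spectra) and y (quality attribute) are synthetic placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 500))     # 100 samples x 500 wavelengths
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=100)

    # PCA: how many latent components capture most of the variance?
    pca = PCA(n_components=10).fit(X)
    print(pca.explained_variance_ratio_.cumsum())

    # PLS: predictive model built from the highly correlated spectral variables
    pls = PLSRegression(n_components=5).fit(X, y)
    print(pls.score(X, y))              # R^2 on the training data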

1.1.3 Process Control Approaches

Constantly increasing consumer expectations, market competition and a strict regulatory environment necessitate the use of increasingly advanced control approaches in the food industry. Such control approaches not only contribute to increased product quality consistency and safety; they can also improve manufacturing process efficiency through reduced levels of finished product rejection and recall, which affect both the economics of the process and public trust in, and perception of, manufacturers.

Several studies have shown the lack of competitiveness of European food industries compared with North America and Australia [4], and the benefits of using advanced quality management systems and Statistical Process Control (SPC) approaches [26]. A comprehensive review of the use of SPC in the food industry is presented by Lim et al. [5]. They provide a detailed analysis of a number of food sector applications of SPC, with a very helpful timeline of SPC implementation in the food industry, highlighting its increasing integration into HACCP, ISO 9000:2000 and other quality management frameworks. The reviewed articles highlighted ‘reduced process variation, improved food safety control, improved knowledge about the process variation and cost savings’ as the most cited benefits [5]. The most cited challenges included ‘resistance to change, lack of sufficient statistical knowledge and lack of management support’ [5].

The case studies in this chapter indicate how gains can be made even with more established control approaches, which often present less of a challenge in terms of resistance to change or the need for detailed statistical knowledge, yet still deliver tangible quality and cost benefits.

2 Case Study 1: Potato Chips (French Fries) Production

The first case study considers how an existing operating strategy can be ascertained, and improvements in the control system derived and justified. The process considered is a French fry line to which lorries transport potatoes from various growers in loads of 20–30 tonnes, each load being a single variety from a single supplier. When the potatoes arrive at the factory they are subjected to a number of quality control tests and, if they pass, they are unloaded into a storage bin. When required for production, potatoes from the bin are fed via conveyor belt to the production line. The potatoes are first peeled and then fed to a cutter to produce chips of the required size and characteristics, the size being varied in response to customer requirements. A sophisticated vision analysis system then removes from the line any cut potatoes that contain defects. The cut potatoes are then partially cooked in a blancher and pass into the dryer, which regulates moisture to give the final product its correct texture. After the dryer, the chips are partially fried; the product is then frozen and packed ready for distribution to the customer. Quality control tests are carried out on the final packed product to confirm that it meets customers’ specifications. The paramount production objective is to manufacture French fries to the quality criteria specified by the various customers, which requires frequent changes to the processing equipment so that product whose quality falls within the target range is made routinely.

2.1 Knowledge Elicitation

The first stage in the study involved determining what information existed on process variations and on the current plant control policy. At the outset it was clear that there was a considerable degree of manual intervention in plant operation. Whilst control loops regulated variables such as the blancher, dryer and fryer temperatures, the set points of these controllers were specified by the plant supervisors based upon their process expertise. The first step was to check that these controllers were behaving acceptably: if the local loops were not functioning correctly, then controller set point specification would be pointless. Observations of loop behaviour confirmed that all local control loops were functioning correctly. Following this, it was necessary to gain an appreciation of how and why the operators modified the controller set points to regulate product quality. This information gathering involved a series of knowledge elicitation sessions with the plant technical manager and shift supervisors.

The Knowledge Acquisition Technique (KAT) used was developed by CK Design and has proved to be an efficient knowledge elicitation tool, resulting in a complete, correct and consistent knowledge base [27]. Starting from a core belief, the elicitation proceeds by successively seeking conditions that overturn the expert’s current state of belief; the line of questioning continues until the expert can identify no further condition that would overturn the belief under the preceding conditions. The knowledge base is structured in the form of exception graphs that capture the expert’s decision process. Using the KAT method, working from the core belief that the product quality was under control, exceptions were sought and the actions to be taken in the event of these exceptions occurring were obtained.

It is usually the case that no one person possesses all the knowledge pertaining to the problem domain. It is therefore necessary in the initial project stages to identify all those who may contribute to the knowledge base. A degree of overlap of knowledge between ‘experts’ is desirable, as inconsistencies can then be highlighted. In this project several process supervisors and quality control laboratory staff were interviewed, along with the past and present production manager. This resulted in a set of exception graphs from the various experts, and the next stage was to combine them into a single exception graph; this requires the project ‘owner’ to adjudicate if conflicts arise. If the degree of inconsistency between ‘expert’ views is significant, then little can be gained from the knowledge elicitation other than an indication that the whole process operational strategy requires reconsideration. This was not the case in this study: there were only minor inconsistencies, primarily in the severity of the action operators took in response to process problems. As a result, the current control strategy was captured in the form of an exception graph. The exact details of the current control strategy are confidential, as are the precise details of the CK Design technique, but the information shown in Fig. 1 is typical of the rules obtained and the level of detail produced.

Fig. 1

Example of control strategy information

Here it can be seen that State 1 indicates that the chip quality is acceptable unless State 2 or State 3 is true. To indicate the type of structure and rules that arise, consider the left-hand side of the tree and the situation when a measurement is received indicating that State 2 is true (i.e. the moisture is high). Action 1 is taken: the measurement is reconfirmed, since moisture is a measured value and subject to error from a variety of sources. If State 2 is still indicated after reconfirmation, then State 4 is considered. If the raw material moisture is reducing significantly, then product moisture will soon reduce as well, so no action is required. Otherwise Action 2 should be taken; in this scenario, Action 2 is likely to involve increased drying to reduce the product moisture.
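
To make the structure concrete, the fragment below sketches the left-hand branch of this exception graph in Python. The state and action names follow the description above; the numerical limit and the exact corrective action are hypothetical illustrations, not the confidential plant rules.

    # Illustrative sketch of the left-hand branch of the Fig. 1 exception graph.
    # The moisture limit and the corrective action are hypothetical.
    def moisture_exception_branch(moisture, reconfirmed_moisture,
                                  raw_moisture_falling, upper_limit=2.0):
        """Return the action suggested by the left-hand branch of the graph."""
        if moisture <= upper_limit:             # State 1: quality acceptable
            return "no action"
        # State 2 (moisture high) -> Action 1: reconfirm the reading, since
        # moisture measurements are subject to error from a variety of sources
        if reconfirmed_moisture <= upper_limit:
            return "no action (spurious measurement)"
        # State 4: raw material moisture falling -> product moisture will soon
        # fall as well, so no intervention is required
        if raw_moisture_falling:
            return "no action (raw material trend will correct)"
        # Otherwise Action 2: adjust drying to bring moisture back on target
        return "Action 2: increase drying"

    print(moisture_exception_branch(2.3, 2.2, raw_moisture_falling=False))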

2.2 Control Strategy Development

Moisture was identified as particularly important because product is sold by weight and the moisture targets set by customers are quite tight. At this stage the managing director of the company, not unreasonably, asked how much money would be saved by improving moisture control, to ascertain whether it was a worthwhile undertaking. Answering such a question requires cost benefit analysis techniques: the fundamental question is how much a control scheme will save, yet it must be answered before the scheme is implemented. To resolve this ‘Catch 22’, use was made of techniques proposed by Anderson [28] and verified in other industrial sectors (for example [29]). The underlying philosophy is that improved control translates into reduced product variance. By decreasing product quality variance it is possible to stay within the range of acceptable product while changing the mean value of operation. In this case, the mean value of product moisture could be increased while still satisfying the customers’ quality control demands. From this, a simple financial calculation reveals what a move in the product quality mean is worth, with current operational records providing the information needed to determine the existing variance. The fundamental assumption proposed by Anderson [28] is that implementing sophisticated control procedures on a process plant at least halves the variance of the product quality; indeed, in plants where significant manual intervention is the norm, this is quite pessimistic. The new product distribution can then be estimated and the new mean operating point determined to ensure that quality still remains within the target range. Clearly, the figures relating to this application are financially sensitive and cannot be revealed. However, the procedure outlined above was followed, and the potential savings indicated were significant and justified the continuation of the study.
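
The mechanics of this argument can be shown with a short illustrative calculation; every number below is hypothetical, since the real figures are confidential. Halving the variance shrinks the standard deviation by a factor of the square root of two, allowing the mean moisture to move towards the specification limit while preserving the same statistical margin.

    # Illustrative sketch of the variance-halving argument of [28].
    # All figures are hypothetical placeholders.
    import math

    spec_limit = 78.0      # upper moisture limit agreed with the customer (%)
    current_mean = 74.0    # current mean product moisture (%)
    current_sd = 2.0       # current standard deviation (%)

    # Current margin to the limit, expressed in standard deviations
    k = (spec_limit - current_mean) / current_sd   # = 2 sigma here

    # Improved control: variance at least halved -> sd divided by sqrt(2)
    new_sd = current_sd / math.sqrt(2)
    new_mean = spec_limit - k * new_sd             # mean can now move up

    print(f"mean moisture can rise by {new_mean - current_mean:.2f} %-points")
    # As product is sold by weight, the extra retained moisture translates
    # directly into saved raw material per tonne of product sold.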

Although product moisture is influenced by several operations on the line, the main influence, and therefore the control variables, lie within the dryer. Operating the dryer is not an insignificant task. Analysis of the existing control policy for moisture revealed two important issues:

  1. The severity of the control changes made in response to the same deviation varied from operator to operator

  2. The operators acted to correct process deviations using a feedback strategy based on information from the quality control laboratory

Whilst the first issue could easily be rectified, the second highlighted a fundamental control problem: feedback control is not a particularly effective means of controlling this process. Delays in the overall loop of 35 min at best are significant; this best case would occur only if a sample were taken from the line at the moment a change reached the sampling point. In the worst case, because samples to measure product moisture are taken only every hour, the delay could amount to 95 min. When the line is producing many tonnes of product, this could translate into a significant quantity of off-specification product. Of equal concern is that, with significant disturbances coming from raw material variation, a change in product moisture takes at least 55 min to be observed. Corrective action could then be taken, but by this time a new load of potatoes is being fed to the line, because it takes around 60 min to process a load. Such corrective action would therefore be completely inappropriate, and it is clear that this scheme is fundamentally flawed.

In analysing the existing control scheme it is apparent that the problems result from process and measurement delays and from the sampling rate of the quality variables. Even if the sampling rate could be increased significantly, which given the human resource requirements would be difficult, the fundamental problem of process delay remains. Overcoming delay requires a predictive control philosophy. If the answers to two fundamental questions can be obtained, control performance can be considerably improved. The two questions are:

  1. If a change is made to the dryer, how does the product quality respond? If the product is off-target or a change to the operating target is required, information on how to change the dryer to bring the product approximately within range avoids major reliance on delayed feedback. Although predictive information is never perfect, the predictive action moves the product quality close to the desired value, and feedback can then provide fine modifications. This avoids what is typically well over an hour’s worth of production being potentially out of specification.

  2. If the raw potato quality is known, can its effect on product quality be predicted? If so, by how much and when should the dryer be changed to compensate? If it can be anticipated how a raw material change will influence product quality, corrective action can be taken in a feedforward control sense to nullify the raw material change. Perfect process information is not available, but even approximate process information can provide effective feedforward control, with feedback control again providing fine modifications.

The modified control strategy is shown in Fig. 2.

Fig. 2

Modified control strategy for product moisture

Two key control strategy parameters had to be specified for the scheme to function acceptably. First, the feedforward controller gain was determined from analysis of data produced in some simple plant tests: observations of the independent effects of dryer temperature and raw material moisture on product moisture provided the necessary information. Second, inversion of the dryer temperature/product moisture information provided the predictive information needed to determine by how much to increase the temperature to correct a product moisture deviation.
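
The resulting scheme combines a static feedforward term on the measured raw material moisture with a feedback trim on the delayed laboratory result. The sketch below illustrates the idea; the gains and signal values are hypothetical placeholders, whereas in the study the corresponding gains were identified from the simple plant tests described above.

    # Sketch of the combined feedforward/feedback dryer set point calculation.
    # kff and kfb are hypothetical gains for illustration only.
    def dryer_setpoint(base_temp, raw_moisture, raw_moisture_nominal,
                       product_moisture_error, kff=5.0, kfb=2.0):
        """Return a dryer temperature set point (deg C)."""
        # Feedforward: anticipate raw material disturbances before they
        # propagate (roughly an hour later) into the product moisture
        ff = kff * (raw_moisture - raw_moisture_nominal)
        # Feedback: fine trim from the (delayed) laboratory measurement
        fb = kfb * product_moisture_error
        return base_temp + ff + fb

    # Example: a wetter-than-nominal load arrives; product currently on target
    print(dryer_setpoint(base_temp=85.0, raw_moisture=0.82,
                         raw_moisture_nominal=0.80, product_moisture_error=0.0))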

2.3 Control Strategy Implementation

Trials of the new control scheme took place over a number of days of operation. From a practical perspective it is important to note that no new instrumentation was required and few, if any, extra laboratory analyses were undertaken. The essence of the new control philosophy was to use the available information but to respond at appropriate times, using knowledge of the likely outcomes of process changes. The initial results were obtained in a series of process tests undertaken by the development team in collaboration with the process operational staff. During such tests, closer attention than normal is obviously paid to the operation of the plant. The worry is therefore that, although plant improvements are indicated, little additional benefit will be found in the longer term, when normal day-to-day operation resumes and the specific focus on the new policy is lost. Long-term performance compared with process behaviour prior to the introduction of the scheme is the best way to judge whether this is the case, and this information is shown in Fig. 3. Figure 3a shows the performance of the production line prior to the implementation of the control scheme: laboratory samples measuring moisture content are shown along with the tight bounds within which it is desirable to operate, and deviations outside the bounds were frequent (56% of the samples fall outside the bounds). Figure 3b shows the behaviour of the process following the introduction of the control scheme. Much tighter regulation of the moisture content is apparent (10% of the samples fall outside the bounds). Slight oscillatory behaviour is observed within the bounds of operation. One reason for this is that potato loads are not selected at random to go through the production line: the operators make an effort to follow each load with one of similar moisture content, hence introducing the observed perturbations.

Fig. 3

(a) Performance prior to control scheme implementation. (b) Performance subsequent to control scheme implementation

In interpreting these figures it must be remembered that the operational bounds are tighter than the customers’ requirements; nevertheless, for the reasons discussed previously, it is important to reduce variation as much as possible. Returning to the cost/benefit analysis carried out prior to implementation, it is interesting to observe that the process variation was almost exactly halved, in line with the prevailing wisdom on the benefits of improved control.

In summary, the case study set out to demonstrate that variations in product quality in a food processing line could be reduced by the application of advanced control methods. The KAT knowledge elicitation proved effective at obtaining an initial picture of the control strategy: it highlighted where problems existed, although it did not provide a total solution. Once the failings of the current control scheme were identified, cost benefit analysis revealed very clearly that improvements were possible and that the likely savings would more than justify the investment. The control strategy itself was fairly straightforward to devise from a theoretical viewpoint, with simple process trials revealing approximate process gains that were sufficient for control design purposes. Implementation on the production line to prove that the methods worked was remarkably trouble free. In the longer term, whilst the new control strategy is simple to implement, it relies upon manual changes being made at roughly the correct time. This is a fundamental problem, as staff in a small company tend to have many calls upon their time and this is seen as one more. However, failing to respond to raw material changes has serious financial consequences for the production line, and a general awareness of the scale of the potential loss may encourage adoption of the new strategy.

3 Case Study 2: Potato Crisps (Chips) Production

The amount of waste generated by food manufacturing processes represents a high financial cost, making cost reduction one of the priorities for a process analyst. The initial assessment of the process used for this case study indicated that a significant amount of waste was generated by unacceptable levels of moisture in the end product. In terms of moisture levels, the parameters within which the factory operates for ‘Product A’ are divided into three zones: the Green zone, 1.4–1.8 (the product has an optimal moisture level); the Amber zone, 1.1–1.3 (the product meets the process parameters but can be further improved); and the Red zone, below 1.1 and above 2.1 (see Fig. 4). When the moisture level in the end product falls within the Red zone, the product is rejected from the line and has to be dealt with as waste. The moisture level in the end product is measured online using an NDC online NIR gauge that also measures the fat content of the end product. In normal operation the NDC gauge gives a moisture reading every 30 s, but for the purposes of this project it was set to give moisture values every 10 s so that more data could be captured and the process dynamics better understood. After analysing the data generated over a 3-week period, it was estimated that, on average, the waste generated on one line by unacceptable moisture levels in the end product amounts to a weekly cost of approximately €1,250. Considering that there are multiple lines within the factory, this presents a significant opportunity for improvement.

Fig. 4

Mechanism of the negative closed loop system utilising the moisture levels in the end product to modify the fryer oil set point temperature to reduce the amount of waste generated by unacceptable levels of moisture in the end product

3.1 Developing a Solution

Once the current opportunity had been assessed, the next step was to identify possible solutions for reducing the waste and achieving better process control of moisture levels in the end product. The main mechanism for controlling moisture levels in the factory is the fryer oil temperature, so better control over how the fryer was operated was chosen as the main solution to this challenge. A closed loop negative feedback control system was developed that uses the moisture level in the end product to modify the fryer oil set point temperature, adapting it for the subsequent product stream. The mechanism through which this negative feedback system operates is presented in Fig. 4.

Figure 4 illustrates the modifications to be made to the fryer temperature set point according to the three moisture zones: Green, Amber and Red. The system also requires that, after a change is made to the fryer temperature, 4 min must pass before another change is made. This is partly because the time delay of the fryer temperature response is around 4 min, and partly to improve the robustness of the control system in the event of spurious measurements.
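
A minimal sketch of this logic is given below. The zone boundaries follow the text and the 4-min lockout mirrors the fryer time delay; the direction and size of the temperature steps are illustrative assumptions, not the factory’s actual settings.

    # Sketch of the zone-based fryer set point adjustment of Fig. 4,
    # with a 4-min lockout between consecutive changes.
    import time

    GREEN = (1.4, 1.8)        # optimal moisture band
    LOCKOUT_S = 4 * 60        # fryer time delay ~4 min; also guards against
                              # reacting to spurious single readings
    _last_change = 0.0

    def adjust_fryer(setpoint, moisture, step=1.0):
        """Return a new fryer oil set point given the latest moisture reading."""
        global _last_change
        now = time.monotonic()
        if now - _last_change < LOCKOUT_S:
            return setpoint                  # wait out the process delay
        if GREEN[0] <= moisture <= GREEN[1]:
            return setpoint                  # green zone: no action needed
        _last_change = now
        if moisture < GREEN[0]:              # product too dry
            return setpoint - step           # fry cooler to retain moisture
        return setpoint + step               # product too wet: fry hotter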

The system presented above was designed so that it could be developed as an automatic software solution and installed on the SCADA system used for controlling the fryers in the factory. Having this negative feedback closed loop system operating in an automated fashion offers many advantages, being more effective and more cost efficient in the long term. Nevertheless, before the software was developed, a series of trials was conducted to assess the efficiency of the system.

3.2 Trials Mimicking the Negative Closed Loop System

Two trials, each lasting 12 h, were carried out to assess the efficiency of the negative feedback closed loop system. For these trials, one process operator was assigned to monitor the fryer closely at the control panel of the SCADA system and to follow the instructions presented in Fig. 4. By following the scheme of Fig. 4 exactly, without being influenced by judgements about the effect of the changes on the process, the operator effectively mimicked the behaviour of the proposed automatic software, allowing its efficiency to be tested. Moisture levels in the end product were measured as usual using the online NDC gauge, and data were captured and exported every 10 s so that they could be further analysed and compared with previous data.

The results obtained through these two trials were positive, showing lower rejection rates of product for unacceptable moisture levels and also less time spent in the Amber zone of moisture content. During the first trial no product was rejected for poor moisture levels, although during the second trial 1 min of rejected product was recorded, amounting to around €30. Although only two trials have been carried out to date, these positive results give confidence for developing and utilising the closed loop system; an automatic software system is currently being developed and implemented.

4 Case Study 3: Food Mixing Consistency

Consistency of mixing of various dry food mixtures and pastes remains a significant challenge in food processing, despite years of development in this area. To date, mixing operations are predominantly run using standard operating procedures, with mixing times specified on the basis of empirically established values intended to ensure product homogeneity. This may lead to excessive mixing, and thus equipment underutilisation, or to insufficient mixing and product rejection, neither of which is desirable in food manufacturing.

This case study demonstrates how NIR spectroscopy may be used to improve the consistency of mixing processes in the food industry. Bread and confectionery powder mixtures aimed at the bakery market were analysed in this study. The main components of these mixtures were flour, sugar, gluten and salt. Four different products were considered:

  1. Product A: blend with a small particle size distribution and more than one main component

  2. Product B: blend with a small particle size distribution and one main component that accounts for more than 50%

  3. Product C: blend with a small particle size distribution and one main component that accounts for more than 90%

  4. Product D: blend with a large particle size distribution and more than one main component

The experiments were performed using two conical screw mixers with a nominal capacity of 4,000 L, each equipped with a diffuse reflectance fibre-optic probe connected to a Bruker Matrix-F FT-NIR spectrometer. Figure 5 shows the configuration of the conical screw mixer (Fig. 5a) and how the NIR probe is connected to the blender (Fig. 5b). Spectral data were collected using OPUS software version 7.0 provided by Bruker. Homogeneity studies were performed by analysing the spectral data with Matlab version R2014a, and calibration models were built using the OPUS software.

Fig. 5

Conical screw mixer configuration. (a) Configuration of the conical screw mixer. (b) Connection of the probe to the blender

4.1 Homogeneity Measurement

Spectra were collected continuously during the whole production time, from the loading of the first ingredient until the process was stopped. Because solid samples were being analysed, the collected data were strongly influenced by light scattering, and therefore different pretreatment algorithms were used to remove scattering effects from the data. Four types of pretreatment were considered, as detailed below.

4.2 Derivatives

Derivatives of spectra are calculated using the Savitzky–Golay algorithm. First and second order derivatives are the most common: first order derivatives remove the baseline from spectra, and second order derivatives also eliminate linear trends [30]. Derivatives are very good at enhancing differences between spectra and at resolving overlapping signatures, but they also amplify noise.
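
A minimal sketch of these derivative pretreatments, using the Savitzky–Golay implementation in SciPy, is given below; the window length and polynomial order are typical illustrative choices, not those used in the study.

    # Sketch: first- and second-order Savitzky-Golay derivatives of a spectrum.
    import numpy as np
    from scipy.signal import savgol_filter

    spectrum = np.random.default_rng(0).normal(size=400).cumsum()  # placeholder

    d1 = savgol_filter(spectrum, window_length=15, polyorder=2, deriv=1)
    d2 = savgol_filter(spectrum, window_length=15, polyorder=3, deriv=2)
    # d1 removes a constant baseline; d2 also removes linear trends,
    # at the cost of amplifying high-frequency noise.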

4.3 Detrending

Detrending subtracts a polynomial fit, here first order in the wavelength λ, from the original spectrum to correct the baseline [31]. The resulting spectrum is given by

$$ {X}_{Dt}=X-\left({a}_0+{a}_1\lambda \right) $$
(1)

4.4 Normalisation

The same weight is given to all the absorbances: each spectrum is normalised to a length of 1 by dividing it by its Euclidean norm [30]:

$$ {X}_{\mathrm{norm}}=\frac{X_{\mathrm{orig}}}{\sqrt{\sum \left({X}_{\mathrm{orig}}^2\right)}} $$
(2)

4.5 Standard Normal Variate (SNV)

SNV normalises each spectrum to zero mean and unit variance by subtracting the mean of each spectrum and dividing by its standard deviation [30]:

$$ {X}_{SNV}=\frac{X_{\mathrm{orig}}-{X}_{\mathrm{mean}}}{\sigma } $$
(3)
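
For reference, the pretreatments of Eqs. (1)–(3) reduce to a few lines of NumPy, sketched below for a matrix X of spectra (rows are scans, columns are wavelengths); the function names are illustrative.

    # Sketch of Eqs. (1)-(3) for a spectra matrix X.
    import numpy as np

    def detrend(X, wavelengths):
        """Eq. (1): subtract a first-order polynomial fit from each spectrum."""
        coeffs = np.polyfit(wavelengths, X.T, deg=1)      # a1, a0 per spectrum
        baseline = np.outer(coeffs[0], wavelengths) + coeffs[1][:, None]
        return X - baseline

    def normalise(X):
        """Eq. (2): scale each spectrum to unit Euclidean length."""
        return X / np.linalg.norm(X, axis=1, keepdims=True)

    def snv(X):
        """Eq. (3): centre each spectrum and scale it to unit variance."""
        return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)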

Deviation from the target spectrum was investigated to establish the mixing time; it was calculated as the Euclidean distance between each collected spectrum and the ideal spectrum of the homogeneous blend:

$$ {d}_i=\sqrt{\sum_j {\left({X}_{i,j}-{x}_j^{\mathrm{ideal}}\right)}^2} $$
(4)

where $X_{i,j}$ is the absorbance of the ith collected spectrum at wavelength index j and $x^{\mathrm{ideal}}$ is the spectrum of the homogeneous blend.
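
Under the same conventions, Eq. (4) is a one-line row-wise distance computation; applied to the full matrix of scans, it yields the blending profiles discussed below.

    # Sketch of Eq. (4): distance of each collected spectrum from the target.
    import numpy as np

    def deviation_from_target(X, ideal_spectrum):
        """Euclidean distance of each row of X from the ideal spectrum."""
        return np.sqrt(((X - ideal_spectrum) ** 2).sum(axis=1))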

In all the experiments the spectra were observed to change over time, eventually converging to the same steady-state spectrum (see the example in Fig. 6). The green spectra represent the beginning of production, when the blend is still below the level of the probe; their characteristic flat shape arises because only the air in the mixer is being scanned at this stage. As soon as the probe starts to be covered by the powder mixture, the spectra begin to show peaks; these are the blue spectra. They change over time, indicating that the composition is changing: during the process different ingredients are added and the blend is continuously mixed, so different powders are scanned by the NIR probe. After a certain time the spectra start to overlap, as illustrated by the red spectra. Because each sample of a given composition and concentration is uniquely identified by its spectrum, the overlap demonstrates that the powder inside the mixer has a uniform concentration, indicating that the blend is homogeneous.

Fig. 6

Example of spectra collected during the production phase. Green spectra are recorded when the powder is still under the level of the probe. Blue spectra show powder reaching the level of the probe. Red spectra represent the homogeneous mixture

Mixing time is therefore determined as the time it takes for the spectra to start overlapping with each other, when a steady-state, fully mixed spectrum is reached.

The effect of component distribution was evaluated by comparing the results obtained for Products A, B and C, and the effect of particle size distribution was studied by comparing Products A and D. The entire blend run was analysed, employing different combinations of pre-processing techniques. Figure 7 shows the blending profiles of deviation from the target spectrum for all the products using Normalisation+SNV+Detrending and Normalisation+second derivative. Variations in the profiles were observed when using different pretreatments; however, a common overall behaviour was observed in all the experiments, and the plots were generally divided into four parts:

  1. First stationary phase: the deviation is stable over time and its highest value is recorded. Powder is still below the level of the probe and the NIR is scanning only air. The green spectra shown in Fig. 6 represent this phase

  2. Decreasing phase: the deviation suddenly decreases as the powder approaches the probe level. Referring to Fig. 6, this phase corresponds to the passage from green to blue spectra

  3. Oscillations: the deviation changes over time as a consequence of the variation in composition during the production process. The blue spectra shifting over time in Fig. 6 illustrate the same phenomenon

  4. Second stationary phase: the deviation finally approaches zero and remains stable over time. The red spectra overlapping each other represent the second stationary phase

Fig. 7

Comparison of pretreatment combinations for Products A, B, C and D. Data were first pretreated using Normalisation+SNV+Detrending and Normalisation+second derivative. Subsequently, deviation from the target spectrum was calculated. The red vertical line represents the homogeneity starting point. Where the red line is missing it was not possible to determine the mixing time. (a) Product A – Normalisation+SNV+Detrending; (b) Product A – Normalisation+second derivative; (c) Product B – Normalisation+SNV+Detrending; (d) Product B – Normalisation+second derivative; (e) Product C – Normalisation+SNV+Detrending; (f) Product C – Normalisation+second derivative; (g) Product D – Normalisation+SNV+Detrending; (h) Product D – Normalisation+second derivative

Mixing time is thus identified by the starting point of the second stationary phase, which may change depending on the pretreatment chosen. The value of d (deviation from target) that determines the homogeneity starting point is set depending on the product analysed. There is no generally recommended value; rather, the threshold on d is based on experience, obtained by analysing previous batches of the same product and establishing the average minimum value once the profile becomes stationary.
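
A hedged sketch of this detection logic is shown below: the homogeneity starting point is taken as the first scan after which d stays below the product-specific threshold. The hold window guarding against momentary dips is an assumption for illustration, not a detail from the study.

    # Sketch: locate the start of the second stationary phase from the
    # deviation profile d (one value per scan).
    def homogeneity_start(d, threshold, hold=10):
        """Index of the first scan after which d stays below threshold
        for at least `hold` consecutive scans (hold is an assumption)."""
        for i in range(len(d) - hold + 1):
            if all(v < threshold for v in d[i:i + hold]):
                return i
        return None        # mixing time could not be determined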

Product A mixing time to reach homogeneity was identified with both combinations, as shown by the stationary phase reached at minute 35 in both cases (Fig. 7a, b). The homogeneity of Product B, in contrast, could only be identified using Normalisation+SNV+Detrending (Fig. 7c, d), because of the reduced variability of the system. This blend has one main component which accounts for more than 50%, whereas Product A has more than one main component. The smaller variation in the component distribution of Product B causes, in turn, a smaller variation in the spectra, which makes it more difficult to detect the changes during production and therefore to identify the point at which homogeneity begins. SNV and Detrending accentuate the spectral differences more than derivatives do, making the homogeneity starting point more evident. The Product C homogeneity point could not be identified properly by any of the combinations employed (Fig. 7e, f): the blend appears homogeneous as soon as the powder reaches the probe (minute 12), and no variations are shown when different ingredients are added. The main component of Product C accounts for more than 90%, making the quantities of the remaining ingredients very small; changes in composition are therefore minimal and the NIR is unable to detect such small variations. Product D homogeneity could be estimated accurately using Normalisation+derivative, but not using Normalisation+SNV+Detrending (Fig. 7g, h). The large particle size distribution of Product D is responsible for increased variability, and SNV and Detrending accentuate these differences too much, causing oscillations in the second stationary phase as well. Derivatives, on the other hand, do not suffer from this issue: they enhance these variations less and show the homogeneity starting point more clearly.

Normalisation+SNV+Detrending gives all the benefits provided by these three techniques: initial scattering is removed, the oscillation phase is emphasised and the homogeneity starting point is clearly detectable. This combination can generally be used for products with an average or small component distribution, but not for products with a single component concentration higher than 90%. For this kind of material, represented here by Product C, deviation from the target spectrum cannot identify the mixing time required to achieve homogeneity. With regard to particle size distribution, it is preferable to employ Normalisation+derivative, as differences would be accentuated too much by SNV+Detrending because of the high variability involved in these products.

4.6 Calibration

Following the homogeneity analyses described in Sect. 4.1, the process was stopped when the mixture was believed to be homogeneous, and the spectra collected inline were analysed to measure the composition of the blend inside the vessel. To evaluate the concentrations of the blend components and to check whether they are within specification, calibration models for the near-infrared probe installed inline were required.

An additional Bruker Matrix-F probe, identical to those installed inline in the two conical screw vessels, was connected to the spectrometer via a fibre-optic cable. The probe was placed under a bench in the laboratory in an upside-down position, so that samples could be placed on top of the probe and scanned (see Fig. 8). This allowed samples of known composition to be prepared over a wider range of concentrations, reducing the time needed to build a calibration model. Fifty samples of Product D were prepared with varying concentrations of each component and scanned with the spare probe; calibration models were built using a PLS algorithm, and the data were first pretreated and screened with PCA to eliminate any outliers.

Fig. 8

Additional probe of Bruker Matrix-F installed offline for sample calibration. (a) Scheme showing the same NIR spectrometer connected to two Bruker probes, one installed in the conical screw mixer and one installed offline under the bench. (b) Top view of the offline probe

Results of cross-validation showed a very good correlation for most of the organic components in the mixture; gluten is illustrated here as an example. Two metrics were used to assess the predictive capability of the calibration model: the Root Mean Square Error of Cross-Validation (RMSECV) and the coefficient of determination (R^2). The model for gluten prediction achieved an RMSECV of 0.476 and an R^2 of 94.74%, indicating a low prediction error and a high correlation. The plot of predicted vs actual values displayed in Fig. 9 shows the values lying on the parity line, indicating that the predictions are very close to the actual values over the whole studied range.
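
The sketch below illustrates how such a calibration can be assessed, combining PCA score screening for outliers with leave-one-out cross-validation of a PLS model in scikit-learn; the data, component counts and outlier cut-off are hypothetical placeholders.

    # Sketch: PCA screening, PLS calibration and RMSECV / R^2 assessment.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 300))                 # 50 calibration samples
    y = X[:, :10].mean(axis=1) * 20 + rng.normal(scale=0.3, size=50)

    # PCA screening: flag samples with extreme scores as potential outliers
    scores = PCA(n_components=2).fit_transform(X)
    dist = np.linalg.norm((scores - scores.mean(0)) / scores.std(0), axis=1)
    keep = dist < 3.0
    X, y = X[keep], y[keep]

    # Leave-one-out cross-validation of the PLS model
    y_cv = cross_val_predict(PLSRegression(n_components=5), X, y,
                             cv=LeaveOneOut())
    rmsecv = mean_squared_error(y, y_cv) ** 0.5
    print(f"RMSECV = {rmsecv:.3f}, R^2 = {100 * r2_score(y, y_cv):.2f}%")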

Fig. 9

Cross-validation results obtained for calibration model of gluten using the Matrix-F NIR spectrometer

4.7 Tumble Mixer

To evaluate the effectiveness of this methodology independently of the unit operation used, Product E, a bread roll powder mixture composed mainly of flour, salt and sugar, was considered. The blending process in a Matcon tumble blender (see Fig. 10a) with a nominal capacity of 2,000 L was monitored in this experiment. Because the blender itself is always in motion in this case, it would not be possible to apply a traditional NIR probe as in Sect. 4.1: the fibre and power cables would rotate together with the mixer and eventually snap. Moreover, the considerable size of the probe means it could not be fitted to the tumble blender, as it would inevitably strike either the floor or the ceiling as the mixer rotates. A MicroNIR PAT (shown in Fig. 10b) was chosen for this purpose because of its small dimensions and its cable-free nature, being Wi-Fi connected and battery powered.

Fig. 10

Matcon tumble blender (a) and MicroNIR PAT (b)

The MicroNIR PAT was applied to the lid of the tumble blender, and spectra were collected every time the lid was in the bottom position. Spectral data were collected using the MicroNIR PAT software and then analysed with Matlab version R2014a.

Deviation from the target spectrum, as described in Sect. 4.1, was monitored to establish the starting point at which the spectra overlap.

As with the convective mixer, the spectra were observed to change over time, eventually converging to the same steady-state spectrum. In this case, however, only two groups of spectra are seen: those varying over time, representing the change in composition (Fig. 11, coloured blue), and those overlapping each other, which represent the homogeneity phase (red).

Fig. 11

Spectra collected during production in a tumble mixer. Initial flat spectra are not present as the probe was always covered by powder. Blue spectra indicate composition changes over time; red spectra overlapping each other correspond to the homogeneity phase

The blending profile obtained by pretreating the data with Normalisation+second derivative is shown in Fig. 12. In contrast to the convective mixer, given the absence of the initial flat spectra, the first stationary phase is not observed here. The other phases are evident: the profile starts with the decreasing phase, continues with the oscillation phase and ends with the second stationary phase.

Fig. 12

Blending profiles analysed with “Deviation from target”

The homogeneity starting point can be clearly identified using “Deviation from target” and is indicated by a red line in Fig. 12.

4.8 Mixing and Cooking Vessel: Mixing of Pastes

For this part of the case study, paste products, in particular caramel and custard, were analysed. These are high-density, high-viscosity products, produced through high-temperature processes. The main ingredients in both products are water and sugar.

NIR spectroscopy was applied to a Giusti mixing and cooking vessel (see Fig. 13), which has a nominal capacity of 2,000 L and is surrounded by a jacket used for cooling and heating. Temperatures vary during the process, from room temperature to a maximum of 120°C.

Fig. 13

Giusti mixing and cooking vessel. (a) View from the bottom of the entire vessel. (b) View from the top showing the opening where raw materials are added

The spectrometer employed in this case was again the MicroNIR PAT (as in Sect. 4.7). Because of the high temperatures involved in the process, and given that the instrument’s operating temperature range is only 0–40°C, an extended probe was fitted to the MicroNIR PAT so that the product was not in direct contact with the spectrometer. The MicroNIR PAT was applied to the recirculation pipe rather than to the vessel itself, because the jacket surrounding the Giusti mixer does not allow a flange to be welded on. With high-density, high-viscosity materials it is very likely that product will stick to the probe surface. To avoid this problem, the spectrometer was placed in the pipe just above the recirculation pump, where there was enough pressure to remove the product layer, allowing the NIR to scan the product flowing through the pipe.

The production of caramel was monitored by taking samples every 15 min during the process and analysing them offline to measure different physical properties: colour (light +), refractive index, water activity and moisture. Meanwhile, spectra were collected inline and retrieved at the end of the production process. Four batches were monitored: data from the first three batches were used to build the model, and data from the remaining batch were used to test the model and predict the properties. The model was built on the full spectrum, pretreating the data with SNV+first derivative, screening with PCA to check for potential outliers and finally regressing using the PLS algorithm.
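
The batch-wise validation can be sketched as follows, with a model fitted on the first three batches and tested on the fourth; the arrays are random placeholders and the pretreatment steps are assumed to have been applied beforehand.

    # Sketch: train a PLS model on batches 1-3 and test it on batch 4.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    spectra = [rng.normal(size=(30, 200)) for _ in range(4)]   # 4 batches
    colour = [s[:, :5].sum(axis=1) for s in spectra]           # placeholder property

    X_train = np.vstack(spectra[:3])
    y_train = np.concatenate(colour[:3])
    X_test, y_test = spectra[3], colour[3]

    model = PLSRegression(n_components=4).fit(X_train, y_train)
    print("test R^2:", model.score(X_test, y_test))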

The results of the calibration model for caramel for the four physical properties considered – colour, refractive index, moisture and water activity – are shown in Fig. 14.

Fig. 14

Inline calibration model results for caramel for different properties: colour (a), refractive index (b), moisture (c), and water activity (d)

Figure 14a shows the predicted vs actual values of the model for colour prediction, where a high correlation is evident as the values lie along the parity line. This is supported by the high R^2 of 82.29% and the low RMSE of 1.72. Figure 14b shows the results of the calibration model of refractive index for caramel. A very low correlation is evident, with an R^2 of 50.79%. Values in the lower calibration range (around 28–29) are well distributed along the parity line, but at higher values of refractive index the points lie on a horizontal line, indicating a lack of correlation; the RMSE in this case was 1.50. Results of the calibration model of moisture for caramel are reported in Fig. 14c, where the predicted values are very close to the actual values; R^2 was high (95.92%) and RMSE low (0.77), indicating a very good correlation for moisture. Finally, the results for water activity are shown in Fig. 14d. A high correlation of predicted vs actual values can be observed, confirmed by an R^2 of 74.37% and an RMSE of 0.03.

The models built so far were subsequently tested to assess their potential for predicting the physical properties of the product inline. Data from the fourth batch were analysed using the previously built models, and predicted vs actual values were plotted for the different properties, as shown in Fig. 15.

Fig. 15

Validation of the inline calibration models for caramel for colour (a), refractive index (b), moisture (c), and water activity (d). Data from the fourth batch were analysed and predicted using the models previously built

Most of the colour values are overpredicted (Fig. 15a), whereas the refractive index is underestimated (Fig. 15b). Values of moisture (Fig. 15c) and water activity (Fig. 15d) are closer to the parity line, but a few outliers are present. Despite the calibration models showing high correlation, the predictions for an unseen batch are not very accurate. However, it should be noted that only three batches were used to build the model, so it is not surprising that the model is not yet sufficiently robust. More data need to be included and more batches monitored to improve the calibration.

5 Conclusions

This chapter discussed the drivers of, and some methods for, food process modelling and control. A range of case studies was used to demonstrate the benefits and challenges associated with the implementation of established control approaches as well as more advanced monitoring methodologies. Clearly, significant benefits can be gained, whether by reducing waste generation or by increasing product consistency and reducing unit operation times, although care needs to be taken when analysing multivariate spectral data and using it to predict product characteristics.