Introduction

Experimental investigations of shock wave reflection phenomena are often used to qualitatively validate results from numerical simulations through schlieren photography. However, as advancements in ultra high-speed photography make video recordings of shock dynamic events possible, quantitative information from large volumes of consecutive schlieren photographs becomes desirable. In this context, computer-aided image processing techniques that can efficiently and reliably track the positions of shock fronts become critical for avoiding repetitive manual work during post-processing. In particular, in the study of regular to irregular shock wave transition phenomena, experimental images often contain multiple intertwined, developing shock fronts, demanding more sophisticated image processing techniques for data extraction.

Various techniques for detecting the edges of shock fronts have been explored in previous studies. Shock fronts can be visually identified through their higher and lower light intensity relative to the background. Hence, many of the methods explored previously rely on finding the local extrema of light intensity through approximated partial derivatives. Methods that use local extrema to locate edges, such as the Canny and Sobel methods, focus on different ways to approximate partial derivatives while minimizing the impact of noise [1]. Although these approaches have proven effective at finding edges in images, they also have inherent disadvantages. As indicated in Fig. 1, in cases of complex shock geometries, such as Mach stem development, edge detection based on local extrema tends to highlight both primary and secondary shock fronts, adding to the challenge of extracting meaningful information from the photograph. Further, background subtraction alone is often insufficient for fully eliminating noise.

Fig. 1

Edge detection results from the Canny and Sobel edge detection methods in comparison to the original image. Shock waves are not obvious after edge detection, and background noise is still present. Secondary shocks are also highlighted by Canny edge detection. a Original image. b Image after Canny edge detection. c Image after Sobel edge detection. b and c were obtained using the MATLAB Image Processing Toolbox

Additionally, even after a relatively successful edge detection process has been completed, complex shock geometries may still contain highly entangled shock fronts that require individual identification, which undermines the effort to automate the data extraction process.

Therefore, in this study, we propose an alternative approach to edge detection for shock wave studies and a system of image processing algorithms to obtain quantitative results from shock wave experiments. Instead of using local extrema in light intensity to identify edges, morphological operations are used to highlight only the primary shock fronts, and algorithms were then developed to automatically track and identify individual shock fronts.

The experimental setup used to generate the shock waves for developing these image processing algorithms consists of an exploding wire system capable of producing small-scale shocks with characteristics similar to those of blast waves. The exploding wire system is described in detail in [2] and only briefly explained here. The experimental setup consists of three main parts: the driver unit, the experimental test section, and a high-speed schlieren imaging system. The driver contains four capacitor banks, each with 0.22 μF capacitance and 10 kV storage (General Atomics, Part No. 31160); two electromagnetic switches (Ross Engineering, Model No. E40-DT-60) that serve to ground the driver when not in use and complete the charging circuit when in use; a spark gap (10–65 kV, Hofstra Group, Item No. 3114) used as a switch that is controlled by a pneumatic trigger; and finally a Rogowski coil (Pearson Electronics) that probes the current output and is used as the trigger source for the high-speed camera (Shimadzu HPV-X2). Each test section contains the necessary number of exploding wires strung across two brass electrodes. The wires are held in place by small fishing weights. The electrodes are then connected to the exploding wire driver by flexible coaxial cables. All test sections are enclosed in optically clear flat acrylic panels.

The schlieren photographs were obtained through a z-type schlieren imaging system. Dynamic shock wave events are captured with an ultra high-speed camera, here set to record at 500,000 fps with a resolution of 400 × 250 pixel². Schlieren photographs from the experiments were first processed through an in-house developed distortion correction algorithm to eliminate optical biases. The distortion correction algorithm is elaborated on in “Distortion Correction”. The next step is to apply a series of morphological manipulations to the binary-converted photographs to preserve only the primary shock fronts. With the shock fronts isolated, edge detection reduces to collecting the locations of the remaining pixel objects. For regular and irregular shock reflection experiments, a discretization method was developed to simplify entangled patterns down to local linear segments, and different linear segments were cross-compared to reconstruct the individual shock fronts. This entire process has been tested and utilized in numerous applications in the study of regular to irregular shock wave transitions, and shock position and velocity propagation profiles for various scenarios have been successfully obtained.

Distortion Correction

Typically, two types of optical distortion are present in z-type schlieren systems: coma and astigmatism. Coma is a result of misalignment of the schlieren field mirrors away from their optical axes and can be minimized through careful calibration of mirrors and lenses [3]. Astigmatism, on the other hand, cannot be effectively eliminated optically, as it is a consequence of the geometry of the z-type schlieren system [3]. While careful alignment of mirrors can reduce the presence of coma, it is difficult to determine how effectively coma is eliminated, and since astigmatism is always present alongside distortions introduced by the camera lens, extension tubes, and other optical elements in the schlieren system, digital distortion correction during post-processing is necessary to yield reliable data.

Experimental Implementation

To allow digital distortion correction, careful steps must be taken during the experiments. Here, a clear acrylic plate scored with equally spaced grid points was machined for use as a reference for removing optical distortions. The spacing between the grid points on the plate was predetermined during manufacturing. After each experiment, the grid plate was placed in the same plane as where the dynamic event had taken place. The in-plane orientation of the grid plate was intentionally rotated by a small amount for programming purposes and also to reduce the alignment time for the grid plate. A photograph of the grid plate was then taken under the same optical conditions as the experiment. This way, any optical distortions that occurred in the experiment would also be present in the grid plate photograph, and the distortion could be corrected for all experimental photographs by applying the same transformation matrix that corrects the distortion in the grid plate photograph. Figure 2a shows an example of a grid photograph taken during one of the experiments.

Fig. 2

Image processing of a grid photograph to obtain a mathematical correction transformation matrix. a Original grid image. b Grid image after Canny edge detection has been performed. c Position of corrected grid points (red asterisks) obtained from (b) overlaid with the undistorted grid points (blue circles)

Image Mapping and Generation of Transformation

After the grid image was taken, all grid points were recognized using edge detection from the MATLAB Image Processing Toolbox. For the purpose of grid point mapping, Canny edge detection was used. The result of the edge detection is shown in Fig. 2b. Note that, due to interference from noise, some grid points are omitted in Fig. 2b to ensure accurate grid point mapping. Morphological boundary detection can then be used to identify the center location of each pixel object that matches the square pattern of the grid point arrangement. Noise pixel objects that deviate from the grid point pattern are automatically excluded.
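
As an illustration, a minimal MATLAB sketch of such a grid point extraction is given below. It assumes the grid photograph is stored in a grayscale array gridImg and that each grid point appears as a small closed contour after edge detection; the variable names and the median-area filter are illustrative choices, not the exact in-house implementation.

```matlab
% Sketch of grid point extraction (illustrative, not the exact in-house code).
edges = edge(gridImg, 'canny');          % Canny edge detection of the grid plate
edges = imfill(edges, 'holes');          % close each grid point outline
edges = bwareaopen(edges, 5);            % discard isolated noise pixels
stats = regionprops(edges, 'Centroid', 'Area');

% Keep only objects whose area is consistent with the expected grid point
% size, rejecting noise objects that deviate from the grid pattern.
areas = [stats.Area];
keep  = areas > 0.5*median(areas) & areas < 2*median(areas);
distortedPts = vertcat(stats(keep).Centroid);   % N-by-2 list of [x y] centers
```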

The grid points should be equally spaced, but due to distortion they are not equally spaced in the schlieren photograph. Hence, a set of numerically generated, equally spaced grid point locations is used as the reference for the distortion correction procedure. If a transformation can be generated that moves all the grid points from their distorted positions to the equally spaced, undistorted positions, then the distortion for the whole image can be corrected. To generate the image transformation matrix that corrects the distortion, each grid point must be paired with a corresponding grid point in the undistorted image. The extracted grid points are first numbered and arranged in order for pairing. The slightly rotated image makes numbering of points on the same horizontal line easier, as the rotation differentiates points on the same line through their different vertical positions. Since a truly undistorted image is impossible to obtain optically, it has to be generated digitally. An algorithm was developed to create the undistorted grid point locations that match the points obtained from the grid image. Since the angle of rotation of the grid plate is recorded during the experiments, and the spacing between the points can be estimated from the distorted photograph, equally spaced and undistorted grid points can be generated along the orientation of the rotation with the estimated spacing. Figure 2c shows an example of the original grid points overlaid with the corrected grid points.
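
A sketch of the reference grid generation is shown below. The grid dimensions nRows and nCols, the point spacing, the recorded rotation angle theta, and the offset origin are all assumed inputs taken from the experiment and the distorted photograph; the names are illustrative.

```matlab
% Generate the equally spaced, undistorted reference grid (sketch).
[cols, rows] = meshgrid(0:nCols-1, 0:nRows-1);   % ideal grid indices
pts = spacing * [cols(:), rows(:)];              % axis-aligned ideal grid points
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];  % in-plane rotation
undistortedPts = pts * R' + origin;              % rotate by the recorded angle, shift
```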

After the undistorted grid has been generated, the grid point locations from both the distorted and the undistorted image are used as input to a function in MATLAB’s Image Processing Toolbox to generate the mathematical transformation matrix that can correct the distortion in all experimental images from the same setup. The original photograph and the distortion-corrected result are shown in Fig. 3. Though the amount of correction may seem small to the naked eye, it nevertheless needs to be applied to allow accurate measurement of the shock front location.
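
A minimal sketch of this step is given below, assuming distortedPts and undistortedPts are matched N-by-2 point lists from the grid image and the generated reference grid, and that expImg denotes an experimental frame. fitgeotrans and imwarp are Image Processing Toolbox functions; the choice of a projective transformation here is an assumption, not necessarily the transformation model used in this study.

```matlab
% Generate the correcting transformation from matched grid point pairs and
% apply it to an experimental frame (sketch; the transformation type is an
% assumed choice).
tform   = fitgeotrans(distortedPts, undistortedPts, 'projective');
outView = imref2d(size(gridImg));                     % keep the original frame size
corrected = imwarp(expImg, tform, 'OutputView', outView);
```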

Fig. 3

Distortion correction after applying the transformation. a Image before distortion correction, and b distortion-corrected image

Edge Detection

After the optical distortion has been corrected, the next step is to identify the shock fronts in all the schlieren photographs obtained throughout the experiment and to extract the locations of the shock front pixels. To isolate the shock fronts from background noise, morphological operations are applied to the images.

Morphological Operation

Morphological operations target binary images, in which only black (designated as 0) and white (designated as 1) pixels are present. Morphological operations treat shapes, or clusters of white pixels, in binary images as objects, i.e., subsets of the image’s spatial domain. Manipulation of these pixel objects is done through matrix operations to highlight certain features [4]. With the help of morphological algorithms within the MATLAB Image Processing Toolbox, larger pixel objects can be made to stand out and small objects due to noise can be eliminated, making edge detection of shock fronts possible.
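
As a brief illustration of the kinds of operations involved, the sketch below cleans a binary image BW under the assumption that the shock fronts form the larger connected pixel objects; the structuring element sizes and area threshold are placeholders that depend on image resolution, not prescribed values.

```matlab
% Illustrative morphological cleanup of a binary image BW (parameter values
% are placeholders, to be tuned to the image resolution).
BW = bwareaopen(BW, 50);                 % remove small noise objects (< 50 px)
BW = imclose(BW, strel('disk', 2));      % bridge small gaps in broken segments
BW = imfill(BW, 'holes');                % fill enclosed holes
BW = imdilate(BW, strel('disk', 1));     % thicken thin pixel objects slightly
```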

Implementation in experimental images

Before initiating the morphological edge detection process illustrated in Fig. 4, which is applied to the distortion-corrected schlieren photographs, a background image is used for noise subtraction. This background image is usually taken before any of the shock fronts enter the experimental viewing area. Consistency in background light intensity is critical for effective noise subtraction, but consistent backgrounds cannot always be found in the image sequences. For example, in this study, light interference from the exploding wires can disturb the initial stages of the photograph sequence: the intense light from the wire explosion may cause frames captured before the shock wave enters the viewing area to be filled with white light. In such cases, a post-generated background can be pieced together using background regions from different frames. After an appropriate background image has been chosen (or created), noise subtraction is performed in the spatial domain as shown in Fig. 4c. Note that noise subtraction can also be performed in the frequency domain, as some studies have done [5], but for the present case, subtraction in the spatial domain was found to be more effective.

After the initial noise subtraction, the light intensity characteristics of the shock fronts are exploited. Close inspection of the shock front photographs obtained from the exploding wire setup shows brighter light immediately above the shock front and darker light immediately below it. These two regions of bright and dark light form the basis for highlighting the shock fronts. The average light intensity of the noise-subtracted image is first calculated, and a threshold is then set, relative to this average, for the level of light intensity to be preserved. Note that only one threshold is used for isolating the shock waves. The threshold is ideally set just slightly lower than the intensity of the brighter region of the shock wave, while also being higher than the average light intensity of the image. Hence, the threshold can be considered a “high threshold” relative to the average intensity of the entire image. Depending on the application, the user must set the threshold carefully to ensure no important information in the image is lost. For example, in the case of Fig. 7, the focus was on the incident shocks and the first reflections that create a Mach stem; thus, the threshold value was chosen to highlight those features while other features, such as secondary waves, are not shown. To preserve the brighter region of the shock fronts, all pixels with light intensity below the threshold are eliminated. The result of this logical operation is a binary image, which can be subjected to morphological operations. As shown in Fig. 4d, the shock front clearly stands out, but small regions of noise are still present in the image.

To preserve the darker region of the shock fronts, the light intensity of a copy of the original image is first inverted, so that the originally dark region becomes the light region. The exact same high threshold is then applied to this image to preserve the new lighter region, and the resulting binary pixels represent the darker region of the shock wave. One could argue that a different threshold should be introduced for this operation, but we have found that the same threshold produces reliable results; therefore, to save calibration time for the algorithm, a single threshold is used.
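
A minimal sketch of this single-threshold procedure is given below; expImg and bgImg denote the distortion-corrected experimental and background frames, and the factor of 1.5 applied to the mean intensity is an illustrative value that the user would tune for each application, not a value taken from this study.

```matlab
% Single-threshold extraction of the bright and dark shock regions (sketch).
sub = mat2gray(im2double(expImg) - im2double(bgImg));   % spatial background subtraction
thr = 1.5 * mean(sub(:));                % one "high" threshold relative to the mean
BWbright = sub > thr;                    % bright band just above the shock front
BWdark   = imcomplement(sub) > thr;      % dark band, from an inverted copy
```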

Fig. 4

Edge detection workflow for high light intensity regions of the shock front (edge detection for low light intensity regions of the shock front follows the same process). a Experimental image after distortion correction. b Background image after distortion correction. c Background-subtracted image. d Binary image after eliminating pixels below the threshold light intensity. e Cleaned binary image containing only the shock front after morphological operations

With only shock front segments preserved, additional morphological operations are applied to fill holes in broken segments and thicken thin pixel objects for ease of boundary detection. Figure 4e demonstrates the outcome of these morphological operations. The two final binary images containing the lighter and darker regions of the shock wave are overlapped to reproduce the full shock wave. Finally, morphological boundary detection from the MATLAB Image Processing Toolbox is used to extract the pixel locations of the shock fronts, completing the edge detection process.

Note that, while edge detection based on morphological processes is effective for large and continuous patterns of bright and dark features of the shock wave, it has limited ability to detect shock fronts with complex light intensity features. For example, in the encircled regions in Fig. 6a and b, edge detection of the shock front is not effective. This is due to the alternating light intensity pattern in that section of the shock front, where bright and dark regions appear as interwoven dashed lines. Since the current morphological edge detection processes dark and bright features separately using the original and an inverted image, the shock front appears, to the morphological operations, as scattered points of similar size to the noise present in both images, and is therefore eliminated. In the case of such complex light intensity features, the user may need to manually characterize the position of the shock front if that shock front is of interest.
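
For the standard case, the final combination of the two binary passes and the boundary extraction can be sketched as follows; BWbright and BWdark are assumed to have already been cleaned by morphological steps like those shown earlier, bwboundaries is the Image Processing Toolbox routine referred to in the text, and the variable names are illustrative.

```matlab
% Combine the bright and dark passes and extract shock front pixel locations.
BWshock     = BWbright | BWdark;                  % overlap the two binary images
boundaries  = bwboundaries(BWshock, 'noholes');   % cell array of [row col] traces
shockPixels = cell2mat(boundaries);               % all shock front pixel coordinates
```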

Based on the pixel locations of the shock wave, the physical position of the shock front can be calculated and the uncertainty can be determined. To estimate the uncertainty of the edge detection algorithms presented above, the algorithms were applied to five repeated experiments to obtain shock position data, and the results were first compared with shock positions measured manually. The average uncertainty in position from the algorithm compared to the manual result is calculated to be 1.4%. Details of the experiments used in the uncertainty calculation are elaborated on in the single expanding cylindrical shock wave example in “Single expanding cylindrical shock wave”. The uncertainty value of 1.4% shows the consistency and precision of the edge detection algorithm, but users also need to take into account the uncertainty between the real location of the shock and the location of the shock produced from post-processing of the schlieren images. The minimum uncertainty in the real location of the shock wave lies in the thickness of the shock. Depending on the quality and resolution of the schlieren image, the thickness of the shock wave may vary, and therefore the user needs to determine this uncertainty in shock position on a case-by-case basis. For example, in the case of the schlieren image presented in Fig. 4a, the average thickness of the shock is estimated to be 5 pixels through visual inspection; hence, the real location of the shock has an uncertainty of at most 5 pixels, which in this case corresponds to 2% of the vertical resolution of the image.

Shock Recognition

In the study of shock wave reflection, shock wave segments are often entangled, making the analysis of shock interactions difficult even after edge detection has been performed. Individual shock segments still need to be identified manually, adding to the post-processing workload. Therefore, an algorithm was developed here that recognizes different shock front segments when they are entangled, enabling individual shock front tracking.

Image Discretization Process

To manage the tracking of individual shock fronts in complex shock geometries, the whole experimental image is, after edge detection has been performed, discretized into cells, and all positions where the shock fronts cross the boundaries of the cells are recorded. To achieve successful shock recognition, it is critical to select an appropriate discretization cell size, and the process of cell size selection is discussed later in this section. Figure 5 shows the discretization into cells and the highlighting of boundary points. If a shock front object crosses the boundaries of a cell only twice, that cell is considered to contain a linear segment of a shock front. This process yields local segments of shock fronts that can be approximated by a linear function. Note that only the positions of these linear shock front segments are preserved for the next step in the algorithm. Nonlinear segments and intersections of shock fronts within a cell yield more than two crossing points with the cell boundaries and are therefore not included in the next step.

In the next step, the algorithm applies linear fits to the cell segments in a left-to-right loop. The leftmost cell segment is first picked as the current cell and is linearly fitted. The cell segment immediately to the right of the current cell is defined as the neighboring cell; if the shock segment in the neighboring cell can be closely approximated by the linear fit of the current shock segment, the neighboring cell segment and the current cell segment are considered to be on the same shock front, as shown in Fig. 6. Note that only the current cell is linearly fitted; the shock segment in the neighboring cell is simply evaluated against the linear fit of the current cell to test the goodness of fit. After one comparison is finished, the neighboring cell becomes the new current cell in the next step of the loop, and the cell segment immediately to its right becomes the new neighboring cell. The loop continues as long as the algorithm judges the two cell segments in question to be from the same shock front; once the algorithm determines the cell segments to be from different shock fronts, the loop breaks off and moves the shock segments from the previous cells in the loop to a new array representing a recognized shock front. A new loop then begins to examine the remaining unsorted cell segments until all segments are sorted. Figure 6 demonstrates two examples of shock separation. In both Fig. 6a and b, the edge detection and discretization procedures have already been applied; the blue dots represent a current local segment to which the linear approximation has been applied, and the red dots represent a shock segment from a neighboring cell.
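
A minimal sketch of the left-to-right grouping loop described above is given below. It assumes segs is a cell array, ordered from left to right, in which segs{i} holds the N-by-2 [x y] pixel coordinates of the approximately linear segment found in the i-th retained cell, and that the tolerance tol (in pixels) is chosen by the user; near-vertical segments would need the roles of x and y swapped, which is omitted here for brevity.

```matlab
% Group linear cell segments into individual shock fronts (sketch).
function fronts = groupSegments(segs, tol)
    fronts  = {};            % each entry collects the segments of one shock front
    current = segs(1);       % start with the leftmost cell segment
    for i = 1:numel(segs)-1
        p = polyfit(segs{i}(:,1), segs{i}(:,2), 1);  % linear fit of the current cell
        yTest = polyval(p, segs{i+1}(:,1));          % evaluate fit at the neighbor
        if mean(abs(yTest - segs{i+1}(:,2))) < tol   % neighbor lies on the same line?
            current{end+1} = segs{i+1};              % same shock front: keep looping
        else
            fronts{end+1} = vertcat(current{:});     % different front: close it off
            current = segs(i+1);                     % start a new shock front
        end
    end
    fronts{end+1} = vertcat(current{:});             % close the last front
end
```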

Fig. 5

Discretization of a binary image after edge detection. The image shows two shock waves that are interacting with each other

Fig. 6

Shock separation algorithm, depicted for the two steps of determining whether the shock wave of a neighboring cell is the same or a different shock wave. a Scenario where two discrete segments would be sorted as the same shock front. b Scenario where two discrete segments would be sorted as two different shock fronts

From Fig. 6a, it can be seen that the red segment lies relatively close to the linear approximation of the current local segment, whereas in Fig. 6b, the neighboring segment clearly deviates from the current local segment. The algorithm therefore sorts the two segments shown in Fig. 6a into one shock front, while the two segments shown in Fig. 6b are sorted into different shock fronts. Figure 7 shows the workflow from the schlieren image to the separated shock fronts. The current version of this shock recognition algorithm, in essence, recognizes geometric features in the image that have continuous slopes. Hence, if two different shock waves happen to form a single continuous feature with a continuous slope, the algorithm will designate the two shocks as a single shock front. Figure 8a shows shock recognition results from iterations over different cell sizes. In this case, the algorithm sorts the incident shock wave and the reflected shock wave, stretching continuously from the left edge of the image towards the right edge, onto the same shock front. In cases like this, the user may need to manually distinguish the incident and the reflected shock. Furthermore, for the case in Fig. 8a specifically, this misrecognition eventually disappears as the incident and reflected shocks cease to form a single continuous feature once the Mach stem develops.

Fig. 7

Shock front separation. a Original schlieren image. b Data obtained after the edge and boundary detection procedures. c Data obtained after different shock fronts are separated

Fig. 8

Shock recognition results using different cell sizes. a Original schlieren image overlaid with discrete shock fronts recognized using cell sizes varying from 16 × 10 to 22 × 14 pixel². b Zoomed-in view of the white dashed rectangle in (a), showing the two incident and reflected shock fronts as recognized by the algorithm using a cell size of 18 × 11 pixel². Cell boundaries are shown as white dashed lines

Cell size determination

When using the process of shock recognition shown in Figs. 5–8, it is also necessary to determine the cell size used for image discretization. The cell size depends on the curvature of the shock fronts as well as on the resolution of the schlieren image. For the case shown in Fig. 5, the cell size is 20 × 13 pixel² while the schlieren image resolution is 400 × 250 pixel². In general, if large curvatures are present, smaller cell sizes may be required to produce locally linear segments within the cells. The pixel size of the cell is also relative to the overall resolution of the schlieren image; for example, the 20 × 13 pixel² cell size is 1/20 of the image resolution (12.5 is rounded up to 13). However, to determine the exact cell size that produces the optimal result, an iterative procedure is required.

To guide the process of determining an appropriate cell size, the user first defines a viable range for the cell size. The lower bound of the cell size must be larger than the average thickness of the shock front; this avoids a single shock front segment filling an entire cell. For example, the shock waves shown in Fig. 5 have an average thickness of 3 pixels after edge detection, and hence the lower bound of the cell size is set to 3 pixels. The upper bound of the cell size is limited by the shortest shock front segment that the user wishes to recognize. For the shock front segment to be recognized, it needs to cross at least two cells and leave more than two crossing points for the second order polynomial fit to work. Therefore, the upper bound of the cell size is half the length of the shortest shock front segment that the user desires to track. For example, for the shock waves shown in Fig. 5, the shortest shock wave segment is roughly 100 pixels long, and hence the upper bound of the cell size for the image is 50 pixels. If the user plans to apply the shock recognition algorithm to an image sequence, the shortest shock front segment should be the shortest segment that exists in the entire sequence, as shock waves sometimes change shape during propagation.
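
For the example of Fig. 5, these bounds can be written compactly as follows; the numerical values are those quoted above, and the variable names are illustrative.

```matlab
% Cell size bounds for the example of Fig. 5 (values taken from the text).
shockThickness  = 3;                  % average front thickness after edge detection, px
shortestSegment = 100;                % shortest shock segment to be tracked, px
cellLowerBound  = shockThickness;     % cell must exceed the front thickness
cellUpperBound  = shortestSegment/2;  % segment must cross at least two cells -> 50 px
```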

Within the upper and lower bounds of the cell size, iterations over different cell sizes are performed to ideally achieve a converging result of recognized shock fronts. This convergence is reflected in the error between the second order polynomial functions defined over the domain of the cells that contain the shock front segment. If the error is below 5%, in this work, the cell size is deemed sufficient. In the example shown in Fig. 7, the limits of the cell size are determined to fall between 3 and 50 pixels, and hence cell sizes of 50 × 31, 40 × 25, 30 × 19, 25 × 16, 22 × 14 and 20 × 13 pixel² were tested. The aspect ratio of the cells was kept the same as that of the image for consistency. From these tests, cell sizes of 50 × 31, 40 × 25, 30 × 19 and 25 × 16 pixel² were unable to recognize the detected linear segments as belonging to the same shock front. This is because the shock segments contained within such cells can no longer be reasonably fitted with a linear function. Both the 22 × 14 and 20 × 13 pixel² cell sizes, as shown in Fig. 8a, successfully recognize the shock front, and the error between the second order polynomials obtained from the two cell sizes over the domain of the cells was 2.5%. Finer cell sizes of 18 × 11, 17 × 11 and 16 × 10 pixel² were also tested and yielded errors of 1.2%, 2.8% and 1.6% compared to the results from the 20 × 13 pixel² cells. The resulting recognized shock fronts are also shown in Fig. 8. The error, consistently lower than 5%, shows that the choice of a 20 × 13 pixel² cell size is sufficient.

Note that, as reflected in the fluctuating errors, the algorithm can successfully recognize different shock fronts for smaller cell sizes, but the accuracy of the shock location does not necessarily increase as the cell size decreases. When the cell size is small, it is more likely to be affected by noise left over from the edge detection process near the shock fronts. As stated previously, the sources of noise include physical dust on schlieren mirrors and field lenses, which are difficult to eliminate entirely through an automated process, especially when the noise is so close to the shock front that it appears to be part of it. When the cell size is sufficiently small, the noise can fill parts of a cell, resulting in the misdetection of multiple crossings within the cell boundaries, which causes the linear segment to be removed by the algorithm. This results in shock fronts having fewer segments and, subsequently, less accurate positions calculated from the second order polynomial fits. As the cell size becomes too small, it also becomes more likely for the cell boundaries to overlay regions of noise, further degrading shock front tracking. Moreover, cell sizes that are too small increase the chance that a cell boundary crosses an intersection point of shock fronts. This leads to small parts of different shock fronts, referred to as “lost” shock segments, being included in a cell, causing the algorithm to detect a single shock segment when, in fact, multiple shock fronts exist within the cell.
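
The convergence check between two candidate cell sizes described at the start of this section can be sketched as follows; frontA and frontB are assumed to hold the [x y] points of the same recognized shock front obtained with the two cell sizes, and the particular relative error measure is an assumed, illustrative choice (only the 5% criterion comes from the text).

```matlab
% Convergence check between two cell sizes via second order polynomial fits (sketch).
pA = polyfit(frontA(:,1), frontA(:,2), 2);
pB = polyfit(frontB(:,1), frontB(:,2), 2);
x  = linspace(min(frontA(:,1)), max(frontA(:,1)), 200);   % common evaluation domain
relErr = mean(abs(polyval(pA, x) - polyval(pB, x))) / mean(abs(polyval(pA, x)));
converged = relErr < 0.05;             % below 5%: the coarser cell size is sufficient
```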

An example is shown in Fig. 8b, where shock recognition is performed with a cell size of 18 × 11 pixel². The shared boundary of cell A and cell C slices through the intersection point of a shock wave, causing a “lost” shock segment, a small part of the shock front belonging to the shock in cell B, to be included in cell A. This should have resulted in three crossings of shock fronts with the boundaries of cell A, but because the “lost” shock segment is small, its crossing on cell A’s right boundary is close to the crossing of the main shock segment on cell A’s bottom boundary. The shock recognition algorithm therefore mistakenly considers the two crossing points to be the same point, resulting in a slight tilt of the final shock segment in cell A. A similar error also occurs in cell C. To resolve this issue, the user can change the cell size slightly to move the cell boundaries away from the intersection points of the shock fronts, as shown in Fig. 8a, where none of the results from the 17 × 11 or 20 × 13 pixel² cells displays the same type of error.

A flowchart summarizing the algorithms discussed in “Distortion Correction”–“Shock Recognition” is illustrated in Fig. 9. This flowchart shows the overall workflow from reading the experimental images to obtaining separately tracked shock front positions.

Fig. 9

Flowchart of the image processing algorithm from distortion correction to shock recognition

Applications to Shock Wave Dynamics Examples

Next, three illustrative examples are shown to demonstrate the efficiency and accuracy of the presented algorithms: (1) tracking of a two-dimensional single expanding shock wave; (2) tracking of Mach stem development due to the interaction of two two-dimensional cylindrical expanding shock waves; and (3) multi-shock interactions with several rigid structures in two dimensions.

Single expanding cylindrical shock wave

In this example, the proposed algorithm has been used to obtain radius versus time data for a single expanding cylindrical shock wave. Pixel positions of the detected shock front were fitted to a circular shape, as shown in Fig. 10a, and the radius of the fitted circle is recorded as a function of time. While the tracked shock front positions are constrained by the spatial resolution of the high-speed camera, the fitted function can reach sub-pixel precision as long as the edge detection is deemed reliable. Four sets of shock waves were generated using different voltage levels in the capacitors driving the exploding wire setup. These experiments were repeated, and the radius versus time results are shown in Fig. 10b. The edge detection of the shock front positions not only allowed efficient data processing, but also made sub-pixel precision tracking of the shock wave radius possible.
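
The circular fit can be realized, for example, with a standard algebraic least-squares (Kasa-type) circle fit as sketched below; pts is assumed to be the N-by-2 list of detected front positions, and this is an illustrative fitting routine rather than necessarily the exact one used to produce Fig. 10.

```matlab
% Algebraic least-squares circle fit to the detected shock front pixels (sketch).
x = pts(:,1);  y = pts(:,2);
% Circle model: x.^2 + y.^2 = 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2)
sol = [x, y, ones(size(x))] \ (x.^2 + y.^2);   % linear least-squares solution
xc = sol(1)/2;  yc = sol(2)/2;                 % fitted center
r  = sqrt(sol(3) + xc^2 + yc^2);               % fitted radius, sub-pixel precision
```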

Fig. 10

Tracking shock wave radius versus time for a single cylindrical expanding shock wave. a Detected shock front position (red x) with the superimposed fit of a circular curve (black solid line). b Radius versus time result obtained from four different sets of shock wave experiments in which the capacitors of the exploding wire setup were charged to different voltages. The uncertainty in shock wave radius is 0.25 cm for all experiments

Mach Stem Example

In the study of regular to irregular shock reflection transition, the development and propagation of a Mach stem have long been a point of interest to the shock dynamics community. Here, a rough outline of where the Mach stem occurred was first manually traced from the sequence of photographs, and the algorithm was then used to isolate the Mach stem from the other shock fronts. To more accurately obtain the positions of the triple points on each side of the Mach stem, the shock recognition algorithm was applied to the primary shock fronts, as shown in Fig. 11a. Then, as demonstrated in Fig. 11b, each shock front was fitted with an individual second order polynomial function, and the intersection of the two polynomial functions was classified as a triple point. If necessary, the triple points can also reenter the algorithm to refine the region where the Mach stem is located.
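
A sketch of this triple point estimate is given below; front1 and front2 are assumed to be the [x y] point lists of the two separated shock fronts, and the root-selection logic and variable names are illustrative.

```matlab
% Triple point as the intersection of two second order polynomial fits (sketch).
p1 = polyfit(front1(:,1), front1(:,2), 2);
p2 = polyfit(front2(:,1), front2(:,2), 2);
xRoots = roots(p1 - p2);                          % intersections of the two fits
xRoots = real(xRoots(abs(imag(xRoots)) < 1e-9));  % keep real solutions only
% keep the intersection lying within the horizontal extent of both fronts
inRange = xRoots >= max(min(front1(:,1)), min(front2(:,1))) & ...
          xRoots <= min(max(front1(:,1)), max(front2(:,1)));
xTriple = xRoots(inRange);
yTriple = polyval(p1, xTriple);                   % estimated triple point location
```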

Fig. 11

Estimation of the triple point location in irregular shock reflection. a Schlieren image after edge detection and shock separation. b Zoomed-in view of the polynomial intersection

The vertical and horizontal positions versus time for the Mach stem propagation are shown in Fig. 12. Two experiments are plotted for the 13 kV setting, and three experiments are plotted for the 21 kV setting.

Fig. 12

Propagation of the Mach stem in the vertical direction. a Position of the Mach stem, and b velocity of the Mach stem. The uncertainty in the vertical position of the Mach stem is 0.20 cm for both the 13 kV and 21 kV experiments, and the uncertainty in the velocity of the Mach stem is 7.11 m/s and 7.34 m/s for the 13 kV and 21 kV experiments, respectively

Complex shock geometries example

Experiments involving the interaction of shock waves and structures can produce complex shock reflection geometries that are time consuming to quantify. Here, we propose to use the edge detection and shock wave separation techniques presented above to recognize and quantify the positions of individual shock fronts in complex geometries.

Preliminary use of the algorithms in complex geometries has been applied to shock-structure interaction experiments. The experiments were conducted to study shock reflections resulting from a single blast wave impacting a set of obstacles mimicking a city environment. Here, 3D printed plastic obstacles were used in the experiments. Even though the shock wave produced during these experiments is weak, the plastic obstacles may or may not be treated as rigid obstacles in the subsequent analysis; however, this is not of concern for this particular study. Simply put, the goal of the experiment was to determine the appropriate materials for the obstacles (e.g., their acoustic impedance and surface roughness) and how the obstacles’ interaction with the shock waves should be modeled. The same background subtraction and morphological operations as described earlier were applied to detect the shock fronts. The whole image was then discretized and the shock waves were separated. The resulting tracking of multiple shock fronts is displayed in Fig. 13 and shows that the algorithm is successful in recognizing and tracking separate shocks. More details on this topic are presented in the study by Dela Cueva et al. [6].

Fig. 13

Tracking of individual shock front positions in complex reflection geometries as shock fronts progress through a set of obstacles. The algorithm displays different shock front segments in different colors, and the relative positions of the shock front segments are recorded. a Single blast front approaching a city scale model. b–d Individually tracked shock fronts shown using different colors

Conclusion

In conclusion, we have developed a set of image processing algorithms that are capable of:

  • automatically correcting image distortion using edge detection techniques;

  • reliably removing background noise and extracting shock front patterns with morphological operations;

  • efficiently tracking the positions of developing shock fronts with an uncertainty of 1.4% and an error within the thickness of the shock front, while obtaining sub-pixel shock front and triple point locations using interpolation techniques;

  • separating and tracking the positions of individual shock fronts in complex reflection geometries.

In summary, to determine the optimal cell size for shock recognition, the user must first examine the input image and define a rough range for cell size. Then, the user can perform iterative trials with decreasing cell sizes. When different shock fronts are successfully recognized and separately tracked and the error in position produced by different cell sizes converges, the user can then decide upon the cell size with relative confidence. Additionally, a small cell size does not necessarily produce the best result. The user must inspect recognized shock fronts to ensure no errors, such as the ones shown in Fig. 8b, are present, and when necessary, adjust the cell size by small increments until the error disappears.

Finally, it is also worth noting the impact of image resolution on the effectiveness of the algorithms presented in “Distortion Correction”–“Shock Recognition”. Due to the complex nature of schlieren optics, the resolution of the image as well as the quality of the shock wave in the image are determined by multiple factors, including but not limited to the camera sensor resolution, dust on schlieren mirrors and field lenses, and the quality of the schlieren photograph (e.g., shock thickness, noise, etc.). Hence, defining a specific pixel resolution for the presented algorithms is not very meaningful, as the resolution does not necessarily reflect the quality of the shock wave in the image. However, for the purpose of edge detection and shock recognition, as long as the shock wave can be visually identified in the schlieren photograph, meaning that it shows distinctive and continuous light intensity characteristics compared to the background and has a thickness larger than a single pixel, the morphological operations should be able to isolate the shock fronts.

These algorithms have already been applied to various problems in the study of shock wave reflection, ranging from the parameterization of blast propagation to Mach stem development. The algorithms yield position data with sub-pixel precision, overcoming the traditional constraint imposed by the high-speed camera resolution.

Although some noise still exists after the analysis has been performed, the proposed algorithm at this stage shows promising potential for tracking individual shocks in complex shock geometries.

For future work, machine learning techniques could be incorporated to further recognize shock fronts based on their geometric features, and the development of an individual shock front could be tracked over time.