1 Introduction

Industry 4.0 (I4.0) is the trend towards automation and data exchange in factory and manufacturing systems [1], and its base technologies include the Internet of Things (IoT), additive manufacturing (AM), cognitive computing, identification, and analytics. Major capabilities include letting machines and people connect and communicate with each other, providing information from all points in the system, assisting humans technically, and making decentralized decisions. I4.0 can create a production environment that runs the factory without human intervention and even provides predictive maintenance. During the COVID-19 pandemic especially, many factories were downsizing or closing, people needed to practice physical distancing, and there was high demand for personal protective equipment (PPE). This demonstrates clearly that we now need factory automation more than ever. Unfortunately, most of our factories cannot operate autonomously or even remotely, and none can function without considerable human input and oversight. This is because most machines today are closed source and cannot even communicate or join a network. This obstacle leaves the technology far behind our expectations and vision [2]. To operate our factories more autonomously and even remotely, we need to equip the current manufacturing system to see, understand, predict, and interact with the machines and the products. The system needs to monitor physical processes and communicate and cooperate with humans, so that it can adjust the processes to achieve the desired outputs. In addition, to manufacture parts correctly and with better quality, it is imperative that the system can inspect the work-in-progress (WIP) and detect failures, such as broken tools in cutting machines and work-holding issues in AM.

Motivated by this need and the technology gap, this paper aims to develop a workable solution to automate our existing factories and thus realize remote process control. Specifically, to create an IoT solution for current computer numerical control (CNC) machines, we developed the ORiON Production Interface (OPI). We installed the OPI in machines to enable remote wired/wireless network access for direct numerical control (DNC) and machine monitoring. In addition, this paper investigates the hardware and software modules needed to integrate computer vision (CV) tools into the solution. For this purpose, we connect cameras to the OPI and use them to “look” at the machines and WIP and to communicate with the machine control unit (MCU) directly. The camera can be a USB camera that plugs into the OPI or a mobile device that connects wirelessly. The solution can monitor the status of machines to ensure that the build is correct, such as monitoring the processes, identifying failures, detecting tool conditions, and finding part position and orientation. Since many situations and operations can occur, a smart system requires the WIP to self-identify what it is and what process it needs. This paper uses fiducials to distinguish parts and the detection mode. For example, a QR code can indicate that the target is a drill tool, so that the system can detect whether it is the right tool and whether it is still good to use. A barcode can identify the tool number and set up the datum. We implement the CV algorithms in the OPI unit to process the collected data. The OPI unit sends direct control commands to the MCU, and it also sends the other system information to the master system through the Intranet. The contributions of this paper include:

  1. A simple, yet practical solution that can connect legacy machines and communicate with the control unit directly to realize remote process control;

  2. Using fiducials to produce an identification system that enables interactions among the machines, the parts, and the vision system;

  3. Applying computer vision technologies to the decision-making process, which is non-destructive and fast enough to provide feedback within the tolerable time windows.

This research provides a solution to equip our factories to be more intelligent and allow more remote operations. It opens the possibility of developing emerging technologies to enhance factory productivity with customization capability.

The organization for the rest of this paper is as follows. Section 2 reviews the related works, and then Sect. 3 presents the methodology. After that, Sect. 4 discusses the applications. Finally, Sect. 5 concludes the paper.

2 Literature Review

This section reviews only a few recent related works on the Internet of Things (IoT) and vision systems for manufacturing. For more information on the advances in I4.0, we refer the readers to a thorough review [3].

IoT. This concept aims to connect machines and products in the manufacturing system over the Internet, so that they can communicate and exchange data with each other. It has many applications, such as in assembly lines [4]. The enabling technologies for IoT include sensors, actuators, and near-field communications. For example, Ali and Haseeb [5] used radio-frequency identification (RFID) technology to identify and track objects attached with tags in real time. Sharif et al. [6] applied machine learning to enhance the decoding rate of barcodes in challenging situations. However, IoT devices normally have limited computational resources [7], so solutions should be kept simple, yet effective.

Vision systems. Computer vision is widely used in manufacturing systems. Markerless motion capture using depth cameras can digitalize human activity to achieve human-machine interaction [8] and collaborative robots [9]. Similarly, during the execution of manufacturing and assembly, Bortolini et al. [10] presented a motion analysis system for ergonomics assessment, and Faccio et al. [11] developed a human factor analyzer for work measurement. Extensions of the technology include applications of virtual reality (VR) [12], augmented reality (AR) [13], and digital twins [14]. Integrating vision systems into manufacturing can also realize closed-loop control for product quality in various manufacturing processes [15]. Incorporating machine learning in these applications can further improve accuracy and quality.

3 Methodology

3.1 Machines and Setup

This paper primarily focuses on computer numerical control (CNC) machines, including drilling routers and 3D printers. To create an IoT solution for current machines, we have designed and developed the ORiON Production Interface (OPI), which is a small box \((5^{\prime \prime }\times 4^{\prime \prime }\times 1.5^{\prime \prime })\) to be embedded in the CNC machines (see Fig. 1). In terms of computational power and memory, the OPI has a Linux-based hardened quad-core 32-bit ARM processor, 512 MB DDR3 RAM, and a 16 GB internal SD flash drive. In terms of connections, it has a Gigabit Ethernet port and 802.11 B/G/N Wi-Fi, so it can connect to the Internet through an Ethernet cable or Wi-Fi, and users can talk to the OPI from any browser on a laptop, tablet, smartphone, or iOS device. It also has a dual RS-232 DB9 serial port, so it can connect to CNC machines over serial cables for direct numerical control (DNC). In addition, the OPI has a couple of universal serial bus (USB) ports, which this work uses for a camera. As a result, acting as file storage and an embedded DNC engine, the OPI is a simple, economical adapter that enables USB and serial DNC links to the CNC machines, making them compatible and online.

Fig. 1.

(a) Nexas ORiON Production Interface (OPI) installed in CNC machines. (b) Front and back view of the OPI. (c) Web interface of the OPI.

3.2 Fiducials

This paper uses two machine-readable codes to allow high-speed identification. First, one-dimensional (1D) barcodes encode data as a set of parallel lines. We apply them to cylindrical surfaces (e.g., drill bits), so that readers can read them from any direction, even when the surface is rotating. Second, quick response (QR) codes are two-dimensional (2D) barcodes that can store much richer information. We use them to let the parts identify themselves, so that the system can react accordingly. The finder pattern in a QR code reveals the location and orientation of whatever it tags. We can also calculate the scale from image size to physical size by including the physical length of the QR code as part of its encoded data. The OPI has some of these codes built into memory as part programs and controls the machines to produce them on demand, e.g., 3D printing the code or engraving the code in the material. These data-infused fiducials serve as a new tool in the operator's toolbox for on-the-fly identification, positioning, and sizing. This is a major improvement in the manufacturing process at little cost – a real step up with a lot of potential, allowing the machines to actively communicate their status to the system directly from the physical environment.
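The scale recovery described above can be sketched as follows. The "keyword plus side length" payload layout (e.g., `base 30`) and the function names are our illustrative assumptions; the paper only states that the physical length is part of the encoded data.

```cpp
#include <sstream>
#include <string>

// Illustrative fiducial payload: a keyword followed by the code's
// physical side length in millimetres, e.g. "base 30". This layout is
// an assumption for the sketch, not the OPI's actual encoding.
struct Fiducial {
    std::string keyword;   // e.g. "base", "part", or "tool"
    double side_mm = 0.0;  // physical side length encoded in the code
};

Fiducial parsePayload(const std::string& payload) {
    std::istringstream in(payload);
    Fiducial f;
    in >> f.keyword >> f.side_mm;
    return f;
}

// Image-to-physical scale: millimetres per pixel, given the measured
// pixel side length of the same code in the camera image.
double mmPerPixel(double side_mm, double side_px) {
    return side_mm / side_px;
}
```

With this scale in hand, any pixel distance measured in the image can be converted to a physical distance by a single multiplication.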

3.3 Computer Vision and Control

To equip the system with vision capability to “look” at the machines and the work-in-progress (WIP), we connect a 1080P HD USB digital camera produced by Jinye Tiancheng Tec., China, to the OPI. The camera is driver-free, has a resolution of 1600 \(\times \) 1200 at 30 fps, and costs less than $5 USD. Using low-cost commercial off-the-shelf technology shows that even low-end vision systems can yield a dramatic benefit – a high benefit-cost ratio. We have implemented the computer vision algorithms on the OPI using the C++ programming language. First, we employ the ZBar library for reading barcodes and QR codes. ZBar is an open-source software suite licensed under the GNU LGPL 2.1, and it can scan and decode codes in less than a second. Second, our program processes the decoded data and determines which subroutine to call. For example, when the keywords ‘base’ and ‘part’ are present, it activates the pose mode to determine the location and orientation of the part; when the keyword ‘tool’ is present, it switches on the tool mode to detect whether the tool is correct and in good condition. Last, the OPI sends the corresponding logical control from each subroutine to the machines and reports the status to the main system through the Intranet. This real-time monitoring and control capability enables process verification and potential customization functions. The entire framework is highly modular, and it is simple to add other functions and operations.
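The keyword-based dispatch can be sketched as below; the enum and function names are our illustrative assumptions rather than the OPI's internal API.

```cpp
#include <set>
#include <string>

// Operating modes triggered by decoded fiducial keywords.
enum class Mode { Idle, Pose, Tool };

// Select the subroutine from the set of keywords decoded in the frame:
// 'base' and 'part' together activate the pose mode, while 'tool'
// switches on the tool mode; anything else leaves the engine idle.
Mode selectMode(const std::set<std::string>& keywords) {
    if (keywords.count("base") && keywords.count("part"))
        return Mode::Pose;
    if (keywords.count("tool"))
        return Mode::Tool;
    return Mode::Idle;
}
```

Because the dispatch is a single pure function over the decoded keywords, adding a new operation amounts to adding one keyword and one branch, which is what makes the framework easy to extend.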

4 Applications

This paper has presented a vision engine by adding “eyesight” and smart-system logic to manufacturing systems, and it can address many manufacturing issues. This section shows a few of them.

Fig. 2.

Using fiducials in 3D printing to inspect adhesion.

4.1 Process Monitoring

The vision engine can detect whether there are any anomalies in production. For example, the first layer adhering properly to the build plate is often the key to a successful build in fused filament fabrication (FFF) 3D printing. Therefore, it is crucial to ensure that the adhesion is good. In this application, we let the machine print a QR code as the adhesive layer. Using fiducials made by the 3D printer allows the vision system to see what is happening. Figure 2a shows a \(2^{\prime \prime }\times 2^{\prime \prime }\) QR code printed on the build plate, and it takes only a few minutes to print the code. If it is in good shape (Fig. 2b), the vision engine can read it from over one meter away. Otherwise, if the vision engine does not recognize the code, there is a high probability that the layer is not adhering (Fig. 2c), and the system will pause the process and send out an alert. Making “structures with messages” on a 3D printer is important because it shows that the use of fiducials need not be an external, operator-driven event. The manufacturing process itself can create a feedback mechanism that the vision system can see and evaluate in real time. We can engineer quality into the production process, which also enables assessing production milestones, auditing production, and running unattended operations to enhance throughput.
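The pause-and-alert logic above can be sketched as a simple watchdog over consecutive failed reads; the miss threshold and class name are illustrative assumptions, not values from the paper.

```cpp
// Adhesion watchdog: if the printed QR code has not been decoded for a
// number of consecutive frames, assume the first layer is not adhering,
// so the build should be paused and an alert sent. A single successful
// read resets the counter.
class AdhesionMonitor {
public:
    explicit AdhesionMonitor(int maxMisses) : maxMisses_(maxMisses) {}

    // Feed one camera frame's decode result; returns true when the
    // build should be paused and an alert raised.
    bool update(bool codeDecoded) {
        misses_ = codeDecoded ? 0 : misses_ + 1;
        return misses_ >= maxMisses_;
    }

private:
    int maxMisses_;       // illustrative tunable threshold
    int misses_ = 0;      // consecutive frames without a decode
};
```

Requiring several consecutive misses, rather than a single one, keeps a momentary occlusion or glare from pausing a healthy build.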

Fig. 3.

Use of two QR codes for locating parts of different positions and orientations.

4.2 Part Location and Orientation

Shifting of stock or any form of misalignment will scrap a run and waste time, so knowing the location and orientation of the part is important. Here, we use two QR codes, namely ‘part’ and ‘base’, and when the vision engine detects both of them, it activates the pose mode (see Fig. 3). The code ‘part’ is on the part and moves with it, while the code ‘base’ is on the worktable and static. Since each QR code is a square, its ordered corners reveal both the position and the orientation of the code. Therefore, by comparing the two QR codes, we can find the pose of the part. In addition, the ‘base’ QR code also encodes the length of the physical code, i.e., ‘base 30’ in Fig. 3 for a code with 30 mm side length, which helps to compute the physical distance from the image distance between the codes. Figure 3 shows two scenarios where the x-axis points to the right, the y-axis points downward, and rotation is in the clockwise direction. In the first case, the system locates the part at a distance of (−64.3, −72.7) mm from the base and at an angle of \({-}2.6^\circ \) relative to the base. In the second case, the part is at (−59.5, −69.4) mm and an angle of \({-}28.9^\circ \) from the base. With this information, the system can reposition the tool according to the part's location and orientation. This proof-of-concept application shows the potential to realize adaptive machining, which can auto-correct errors, enhance throughput, and reduce waste. Other implications include a reduced need for fixtures and extending a cut beyond the machine's bed size.
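The pose computation from the two squares' ordered corners can be sketched as follows. In image coordinates with x pointing right and y pointing down, `atan2` yields clockwise-positive angles, matching the convention in Fig. 3; the corner ordering and struct layout are our assumptions about the detector's output.

```cpp
#include <array>
#include <cmath>

struct Pt { double x, y; };                     // pixel coordinates, y down
struct Pose { double dx_mm, dy_mm, angle_deg; };

const double kPi = 3.14159265358979323846;

// Center of a QR code from its four ordered corners.
static Pt centroid(const std::array<Pt, 4>& c) {
    return { (c[0].x + c[1].x + c[2].x + c[3].x) / 4.0,
             (c[0].y + c[1].y + c[2].y + c[3].y) / 4.0 };
}

// Orientation of the edge from corner 0 to corner 1; with y pointing
// down, the angle grows clockwise.
static double edgeAngleDeg(const std::array<Pt, 4>& c) {
    return std::atan2(c[1].y - c[0].y, c[1].x - c[0].x) * 180.0 / kPi;
}

// Relative pose of 'part' with respect to 'base'. The physical side
// length of the base code (taken from its payload, e.g. "base 30")
// fixes the millimetre-per-pixel scale.
Pose relativePose(const std::array<Pt, 4>& base,
                  const std::array<Pt, 4>& part,
                  double baseSideMm) {
    double sidePx = std::hypot(base[1].x - base[0].x,
                               base[1].y - base[0].y);
    double scale  = baseSideMm / sidePx;        // mm per pixel
    Pt cb = centroid(base), cp = centroid(part);
    return { (cp.x - cb.x) * scale,
             (cp.y - cb.y) * scale,
             edgeAngleDeg(part) - edgeAngleDeg(base) };
}
```

Measuring the part's angle relative to the base code, rather than to the image axes, makes the result independent of how the camera itself happens to be mounted.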

Fig. 4.

Using a QR code and a barcode for tool detection.

4.3 Tool Detection

In many machining processes, operators spend a lot of time ensuring the right tool is loaded for the job and measuring tool length to check whether the tool broke during machining. We use a QR code with the keyword ‘tool’ to activate the tool mode; similarly, it also encodes its side length to allow scaling from image size to physical size (see Fig. 4). We then wrap a barcode (UPC-E) around the tool, which stores the tool number. Most tools have space for one barcode just above the cutting bit. With a clear background, image processing can segment the regions of the tool and the drill chuck. After that, the system computes the tool length by measuring from the end of the tool to the drill chuck, where the thickness changes sharply. Figure 4 shows two tools that the vision engine has detected. It reads the barcodes 00000019 and 00000055, respectively. As the last digit is the check digit for accuracy, these are tools #1 and #5. Knowing that the QR code has a side length of 30 mm, the system finds the tools to be 109.5 and 82.5 mm long.

5 Conclusion

This paper presents a practical solution to equip existing factories with vision and communication capabilities. We developed the ORiON Production Interface (OPI) unit to enable remote process control of computer numerical control (CNC) routers. The OPI unit can connect to the CNC machines, the Internet, and a USB camera. Although we intentionally designed the OPI with modest computational power, it was enough to run the computer vision and image processing algorithms. With the help of fiducials, including QR codes and barcodes, we have shown that the solution can monitor processes, locate parts, and detect tools. We are only scratching the surface of its potential, and this solution allows a true realization of Industry 4.0 without the need to replace all equipment. It is also worth mentioning that both the hardware and software in this solution are open source – using low-end tools for high-end solutions.