
1 Introduction

As a result of the fourth industrial revolution, significant advances have been made in high technology, particularly artificial intelligence. Computer vision is an area of artificial intelligence (AI) that enables computers and systems to extract information from digital images, videos, and other visual inputs [1]. Image processing applications have proven highly effective in a range of fields, such as smart navigation [2, 3], biometric data management [4], transportation [5], quality monitoring systems, and smart systems [6].

Visual inspection refers to the examination of products for quality assurance. It has also been used to examine the interior and exterior of manufacturing plant storage tanks, pressure vessels, pipes, and other equipment. The vast majority of visual checks have been performed manually; consequently, the testing cycle is time-consuming and requires specialized personnel. In manual visual inspection the naked eye is the major factor, yet the error rate is between 20 and 30% [6], influenced by human variables such as temperament, health, and boredom, as well as environmental variables such as lighting, bonding distance, and tilt angle.

Visually examining very small objects becomes unrealistic and difficult. To overcome these shortcomings, automatic testing systems have incorporated camera vision. Image processing technology supports the creation of intelligent quality monitoring systems: machine vision contributes to several steps of image processing, including image acquisition, pre-processing, segmentation, classification, and alarming, and significantly improves the precision of automatic quality control. Image processing is an integral component of computing [7], and image-processing algorithms have been developed to monitor and inspect product quality. The histogram, Canny edge detection, the RGB color filter, and the HSV color filter are often the most essential algorithms [7,8,9,10].
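As a rough illustration, the sketch below applies two of these algorithms, Canny edge detection and an HSV color filter, using OpenCV; the file name and threshold values are illustrative assumptions, not values from the paper's implementation.

```python
import cv2

# A rough sketch of two of the pre-processing algorithms named above,
# applied to an illustrative panel image (file name is hypothetical).
image = cv2.imread("panel.png")

# Canny edge detection on the grayscale image; the (100, 200)
# hysteresis thresholds are illustrative and would be tuned in practice.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# HSV color filter: keep only pixels inside an assumed hue/saturation range.
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (90, 50, 50), (130, 255, 255))
```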

A color space is an algebraic representation of a mathematical model used to describe colors in the physical world [7]. Background subtraction [7, 8] is an image processing and computer vision approach that isolates the foreground for further processing (object identification, gesture recognition, motion detection, etc.). The image regions of interest are typically in the foreground, so good, precise background separation allows these systems to achieve stability and speed; this technique is applied in the processing steps that follow the pre-processing phase. Background subtraction is frequently employed in a variety of fields, including security cameras, object recognition, gesture recognition, and traffic vehicle counting [1, 3, 8]. Template matching [9, 10] is a digital image processing method that locates a small sub-image, nearly identical to a sample image, within a larger image.
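A minimal background-subtraction sketch, assuming an OpenCV workflow with a stored clean background image; the file names and the binarization threshold are illustrative:

```python
import cv2

# Background subtraction: difference the current frame against a stored
# clean background, then threshold to isolate foreground objects.
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)            # hypothetical file

diff = cv2.absdiff(frame, background)  # pixel-wise absolute difference
_, foreground = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # illustrative threshold
```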

This study describes an automatic quality monitoring system for smartphone panel manufacturing facilities, where manual visual inspection is becoming increasingly burdensome and unproductive. The vision approach combines background subtraction and template matching. The panel's technical specifications are as follows: the anomalies on the phone's display panel are smaller than 0.2 mm, the testing period is only 5 s, and the error rate is reduced by around 20% from an actual production peak of about 40%.

2 Mechanical System Design

2.1 System Requirements

First, the operation diagram is shown in Fig. 1. The technical staff then set up suitable camera positions. Finally, the principle diagram of vision for an ordinary panel gluing device is shown in Fig. 2.

Fig. 1 Set-up of the camera position. Labeled parts include the pick-up unit, pinnacle cutting, pinnacle knife, PET film, rolling, OCA, TSP, plasma, channels 1 and 2, window, and loading 1 and 2.

Fig. 2 Principle diagram of vision for the D-sub bending process. AS-IS: panel loader, plasma, vision, panel, unloader, and transfer unit. TO-BE: panel loader, plasma, vision, panel, vision panel stage, unloader, and transfer unit.

2.2 Automatic Vision Process

The automatic vision process is introduced in Fig. 3.

Fig. 3 Automatic vision process. The flowchart begins with panel loading, followed by plasma surface treatment, panel pasting, and vision inspection of the panel state leading to an alarm; virtual cases go to software processing, while real cases go to abnormal processing, a check of the panel condition, and saving of templates.

Step 1: Panel loading.

The robot transfers the panel from the tray to the panel stage (jig).

Step 2: Plasma surface treatment.

Plasma is used to clean the panel surface, and a thin film is then adhered to the panel. The robot returns the tray to prepare for the next round.

Step 3: After processing of the panel is complete, the robot loads a new panel onto the stage (jig).

Step 4: Vision processing comprises two cases (an illustrative sketch of this decision follows the list).

  • If an abnormality is detected, an alarm is raised so that the state of the panel can be checked. If the abnormality is real (Reality), the cleaning process is applied.

  • Otherwise, the abnormality is virtual (Virtuality), and software processing removes the error.
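The following is an illustrative sketch of this two-case decision; every helper here is a hypothetical stub standing in for the system's actual routines, not code from the paper.

```python
# Hypothetical stubs standing in for the system's actual routines.
def detect_abnormal(image):
    return None   # stub: return an abnormality record, or None if the panel is OK

def is_real_abnormal(abnormal):
    return True   # stub: distinguish Reality (physical particle) from Virtuality (artifact)

def vision_step(image):
    abnormal = detect_abnormal(image)
    if abnormal is None:
        return "OK"
    print("ALARM: check the panel state")             # alert the technicians
    if is_real_abnormal(abnormal):
        print("Reality: apply the cleaning process")
    else:
        print("Virtuality: remove the error in software")
    return "NG"
```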

3 Proposed Approach

3.1 Technical Requirements

Based on the mechanical system (see Fig. 4), the vision program is designed to detect abnormalities with the following technical requirements (see Fig. 5):

Fig. 4 The image-processing system, comprising a jig panel with vacuum holes, a controller, a vision camera, and an alarm.

Fig. 5 The vision algorithm diagram of the program that detects abnormalities, representing the stages: capture OK stage, contour vacuum holes, capture running stage, subtract background, analyze alien position, alarm, technician confirmation, and stage cleaning.

  • Size of abnormality to be detected (actual production requirement): > 0.25 mm (~250 µm).

  • Maximum processing time per image: < 5 s (product takt time ~5 s/product).

3.2 The Control Algorithm Diagram

From the original jig, an image with absolutely no defects is saved for comparison with subsequent images of the jig in production. Panels are continuously supplied into the jig, so the jig may acquire abnormalities during the production process. In the original jig image, exceptions are registered for traits inherent to the jig, such as vacuum holes whose positions are not fixed, with profiles of Ø 0.6 and Ø 0.8 mm, and existing scratches of fixed size.

Several suitable methods were used, as follows:

  • Template matching: a jig image without any defects becomes the original image. Each new image is compared with the original; if the comparison score is below a threshold, an abnormality is present (a minimal sketch follows this list).

  • Particle position check: save all positions of the air holes on the jig. In each new image, identify all objects, including abnormalities and air holes, and compare each object with the previously trained positions. Any object that does not match a known position is an abnormality.

  • Background subtraction: take the jig image without abnormalities as the original. Subtracting the original from each newly acquired image yields a black background plus any abnormalities; blob detection then identifies the abnormalities.
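As a rough illustration of the template-matching check, the sketch below compares the current jig image against the registered original using OpenCV's normalized correlation; the file names and the 0.95 threshold are illustrative assumptions, not values from the paper.

```python
import cv2

# Template-matching sketch: compare the current jig image against the
# registered defect-free original (both assumed grayscale, same size).
original = cv2.imread("jig_original.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
current = cv2.imread("jig_current.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file

# With equal-size images the result is a single normalized score.
result = cv2.matchTemplate(current, original, cv2.TM_CCOEFF_NORMED)
_, score, _, _ = cv2.minMaxLoc(result)

THRESHOLD = 0.95  # illustrative similarity threshold
if score < THRESHOLD:
    print("Abnormality suspected on the jig")
```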

To achieve high accuracy and minimize the time lost confirming suspected abnormalities, this paper presents a new method combining the particle position check and background subtraction, sketched below.
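A sketch of how the combined check might look, assuming OpenCV, grayscale images, and a pre-registered list of hole centers; all names and numeric values here are illustrative assumptions:

```python
import cv2

# Sketch of the combined check: background subtraction finds candidate
# blobs, then a position check rejects blobs at known vacuum-hole sites.
HOLE_POSITIONS = [(120, 340), (480, 340)]  # hypothetical trained hole centers (px)
TOLERANCE = 10                             # illustrative matching tolerance (px)

def find_abnormalities(frame, background):
    # Step 1: background subtraction isolates foreground objects.
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # illustrative threshold
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Step 2: particle position check against the trained hole positions.
    abnormal = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # blob centroid
        near_hole = any(abs(cx - hx) <= TOLERANCE and abs(cy - hy) <= TOLERANCE
                        for hx, hy in HOLE_POSITIONS)
        if not near_hole:
            abnormal.append((cx, cy))  # blob not at a known hole: abnormality
    return abnormal
```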

4 Practical Results

After installing the mechanical system and testing the proposed algorithm, the abnormality detection accuracy is nearly 90% with a short processing time (< 5 s per cycle). The automatic vision process does not affect the overall takt time of the entire production process. The average accuracy over 10 days and 19,054 products is nearly 90%, as shown in Fig. 6; real images of the experimental results are shown in Fig. 7.

Fig. 6 Detection of 0.2 mm abnormalities in automatic vision. A table of five columns and eleven rows with headers: day, input quantity, NG quantity, confirm result, and rate.

Fig. 7 Actual images showing experimental results of detecting a 0.2 mm anomaly in automatic vision. Circles and a zoomed view of the dashed rectangle mark the detected region.

5 Conclusions

The paper presents an automatic vision method combining template matching and background subtraction. The practical production system improved abnormality detection, with an accuracy of about 90%. Furthermore, the template database is enhanced over time by saving new abnormal samples. The automatic panel production line guarantees a takt time below 5 s/product and detection of abnormalities < 0.2 mm. The vision program is optimized for continuous production mode with a large database of samples. The mechanical system combined with the vision system was designed successfully, and the PLC program is applied and communicates correctly with all devices.