
1 Introduction

In recent years, with the growing popularity of autonomous driving, related accidents have also increased [1, 2]. According to a report [2], many of these accidents were caused by detection failures, such as the driving system failing to detect other vehicles or recognize surrounding objects.

1.1 Challenges of Vehicle Detection

Autonomous vehicles are required to identify and track other vehicles around them and to properly handle each detected vehicle. However, correctly recognizing vehicles on the road poses many challenges. The most significant challenges of vehicle detection are summarized as follows:

Vehicle detection is especially challenging in heterogeneous traffic or adverse road conditions, where the size and type of vehicles vary significantly. High vehicular traffic density leads to frequent occlusion, which increases the difficulty of learning the visual representations of vehicles. A tracker may fail to follow its target under occlusion because occlusion prevents it from learning the target's complete appearance representation [3]. Furthermore, complex backgrounds, weather conditions, and cast shadows make identifying and tracking a vehicle difficult [4].

1.2 Related Works

Although on-road vehicle detection is challenging, significant progress has been made on the general problem in recent years [5, 6]. Autonomous vehicles integrate multiple onboard sensors to acquire information about road conditions. These sensors can be classified into two main categories: active and passive [5].

The most common active-sensor approaches to vehicle detection are radar-based and laser-based. Millimeter-wave radar is widely used for vehicle detection: a frequency-modulated continuous-wave signal is emitted, its reflections are received and demodulated, and the frequency content is analyzed [6]. Radar sensing generally features a narrow angular field of view, and measurements are quite noisy, requiring extensive filtering and cleaning [6].

Lidar-based systems emit and receive laser light at wavelengths generally between 600 and 1000 nm. The distance to a detected object can be derived from how far the photons have traveled round trip [5]. Laser-based systems are accurate; however, they do not perform well in rain and snow [7]. Moreover, when a large number of vehicles move simultaneously in the same direction, interference among sensors of the same type poses a significant problem [7].

A passive vision-based system such as a camera can track approaching and preceding vehicles more effectively than active sensors, as visual information provides a description of the surrounding vehicles [5]. Optical sensors can also be used for lane detection, traffic sign recognition, and object identification [5].

Multi-sensor approaches are more likely than single-sensor ones to yield reliable and secure systems. In the fusion process, either two types of sensors perform detection simultaneously and validate each other's results, or one sensor detects while the other validates [5].

Imaging technology is the mainstream of vehicle detection methods [6] and can be divided into two broad categories: appearance-based and motion-based. Appearance-based methods recognize vehicles directly from single images, whereas motion-based approaches require a sequence of images to recognize vehicles [6]. Accordingly, monocular vehicle detection often relies on appearance features and machine learning, while stereo vehicle detection often relies on motion features, tracking, and filtering [6].

Several recent studies investigated the detection problem under special scenarios, such as nighttime and low light [8, 9]. These studies have shown that complex road and ambient lighting conditions, as well as camera configurations, can significantly impact detection effectiveness. When vehicles are occluded by nearby objects or subject to very bad weather, the detection problem becomes even more challenging. Current benchmarks indicate that recent detection algorithms can detect approximately 90% of partially occluded and 80% of heavily occluded vehicles [3]. One popular occlusion handling method is the analysis of motion cues, such as frame comparison reasoning, which analyzes continuous image data and identifies objects by comparing data between frames [3]. However, this method is limited in cases of static occlusion, where the variation of occlusion between frames is small [3, 10]. Other popular occlusion handling methods combine several occlusion cues or image characteristics to assess whether an object boundary can be recognized or recovered [3]. The weaknesses of the existing methods are obvious: (1) their success rates heavily depend on visual quality and road conditions; and (2) the occluded parts are very difficult to recover in cases of static occlusion because the available information is limited.

1.3 Proposed Solution for Detection of Invisible/Occluded Vehicles

This paper proposes a method to detect invisible/occluded vehicles by taking advantage of recent developments in radio frequency identification (RFID) technologies. The main idea is to attach passive RFID tags to a vehicle's surfaces to add electromagnetic visibility to the vehicle. Furthermore, each tag stores the vehicle's 3D model on its chip, so that an RFID reader can remotely retrieve the model even when the vehicle is invisible or occluded; in addition, the vehicle's boundary, location, and orientation can be derived from the tags' returned signals. Compared to optical systems, RFID is independent of weather conditions and the time of day [16].

The remaining sections of this paper are organized as follows. Section 2 analyzes the characteristics of RFIDs and explains how RFID technologies can make vehicles detectable in adverse road conditions. Because the storage space in a tag is very limited and varies across brands, Section 3 shows how to dynamically minimize the storage space required for a vehicle's 3D model. Section 4 designs a data structure to support effective detection and computation within the limited storage space of RFID tags. Section 5 proposes methods to estimate an occluded vehicle's direction, distance, and orientation. Section 6 evaluates the performance of the proposed method.

2 Make Vehicles Detectable by Using RFIDs

RFID is designed to be attached to equipment or objects for easier detection, location, and tracking. RFIDs are highly reliable yet have low implementation complexity [13, 24]. For instance, in inventory applications, multiple RFID tags are attached to an object to enhance availability and detection accuracy [25]. An RFID system usually contains one or more RFID tags and a reader. A tag consists of a silicon microchip attached to a small antenna, mounted on a substrate, and encapsulated in a plastic or glass casing. A reader consists of a scanner with antennas that transmit and receive signals; it is responsible for communicating with tags and receiving their stored information. A reader can scan multiple tags at a time (Fig. 1 illustrates the interactions between a reader and multiple tags), yet it can also address each tag individually. RFID tags do not require a line of sight to communicate, which is useful when a vehicle is partially occluded.

Fig. 1. An RFID reader can turn on multiple tags simultaneously over a long distance.

2.1 Durability and Detection Range of Passive RFIDs

There are two types of RFID systems in operation: active and passive. In an active system, the tag has its own power source, with a battery life of up to a few years. In a passive system, the tag has no internal power supply and can therefore be much smaller [15]. Passive tags contain circuitry that harvests power from the radio waves emitted by nearby readers and uses it to reply to the reader. Because passive tags have no moving parts or internal power sources, the chance of a breakdown within the tag itself is extremely low. Therefore, passive tags can last the entire lifespan of the vehicles to which they are mounted [13].

The communication distance of RFID depends on whether the tag is active or passive, the RF output power of the reader/writer, the antenna gains of the tag and the reader/writer, and the operating environment. In general, the communication distance of active tags can be up to 100 m [11]. For the passive type, although the reachable distance of radio waves depends on the antenna size and signal strength, the higher frequency bands (UHF) generally offer larger communication distances. For instance, in the mainstream market, some UHF RFID tags can reach 20 m [12] or 30 m [13]. Recent studies have shown that new passive RFID tags can be read at an unprecedented range of up to 64 m [14, 15].

2.2 User Memory on RFID Tags

An RFID tag contains four types of memory: (1) reserved memory; (2) TID (the tag ID, written by the manufacturer); (3) EPC (electronic product code), which can be written by users; and (4) user memory, also writable by users. Storing extra information (beyond the ID number) in an RFID tag allows users to access records in real time without connecting to a reference database. When a reader scans an RFID tag, it retrieves both the ID and the stored data.

Different RFID tags have varying amounts of storage. The capacity of RFID tags ranges from 60 bytes to 64 KB [19]. Typically, a tag carries about 2 KB of data (e.g., the Fujitsu chip MB89R118), although some industrial passive UHF tags can store 4 KB or 8 KB, and data retention can be up to 30 years. The Invengo RFID tag (model no. XC-TF8102-B-C43), a typical RFID tag, is used in this paper's experiment. Its specifications are: TID: 96 bits; EPC (electronic product code) memory: 256 bits; and user memory: 512 bits. Applications that need more memory than the EPC section provides can use the extended user memory to store additional information. In this case, the total usable memory is 256 + 512 = 768 bits, i.e., 96 bytes.

To summarize, RFIDs have the following characteristics: (1) their lifespan can be as long as 30 years; (2) they can be detected at a distance of more than 60 m; (3) users are allowed to store extra information on a tag for real-time access.

3 Overcome the Limitations of RFID's Storage Space

One way to detect invisible/occluded vehicles is to increase every vehicle's visibility to other vehicles. Attaching passive RFID tags to a vehicle's surfaces has several advantages. First, the tags can be detected reliably under different road conditions [16]. Second, the vehicle's identifier, 3D model, and related data are stored in each tag for real-time access, so other vehicles can easily detect and locate the invisible/occluded vehicle and recover its boundary. The storage requirement of a vehicle's 3D model should be minimized to overcome the limitation of RFID storage space and to achieve better computational efficiency. The following sub-sections develop an algorithm to simplify the 3D representation.

3.1 Vehicle Segmentation

In 3D modeling, a vehicle is scanned into a point cloud, which usually consumes a lot of storage space. To simplify the representation of the vehicle's boundary, the point cloud is divided into multiple parts, and a tight bounding box is generated for each part. As a result, the tags' positions align tightly with the virtual boundary. The resulting bounding boxes are then joined together to create a 3D vehicle model.

Edge-Based Segmentation

Several existing algorithms can divide a point cloud into logical parts [20, 21], including edge-based segmentation, region-growing segmentation, and segmentation by model fitting, each with its own advantages. Vehicles usually have simple shapes and are therefore easy to divide into parts. This paper uses edge-based segmentation because it is fast [21]. Edge-based segmentation algorithms have two main stages: (1) edge detection to outline the borders of different regions and (2) grouping of points inside the boundaries to deliver the final segments [21]. Edges in a given point cloud are defined by the points where changes in the local surface properties exceed a given threshold. Figure 2 shows two examples of dividing a car and a truck into segments.
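
To make the two stages concrete, the following is a minimal sketch of edge-based segmentation in Python; the neighborhood size k and the surface-variation threshold are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=20):
    """Stage 1 cue: lambda_min / (sum of eigenvalues) from a PCA of each
    point's k-neighborhood; large values indicate edges and creases."""
    _, idx = cKDTree(points).query(points, k=k)
    var = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nb = points[nbrs] - points[nbrs].mean(axis=0)
        eig = np.linalg.eigvalsh(nb.T @ nb)        # ascending eigenvalues
        var[i] = eig[0] / max(eig.sum(), 1e-12)
    return var

def edge_based_segments(points, k=20, edge_thresh=0.08):
    """Stage 2: flood-fill the non-edge points into connected segments."""
    is_edge = surface_variation(points, k) > edge_thresh
    _, idx = cKDTree(points).query(points, k=k)
    labels = np.full(len(points), -1)
    next_label = 0
    for seed in np.where(~is_edge)[0]:
        if labels[seed] != -1:
            continue
        stack = [seed]
        while stack:
            p = stack.pop()
            if labels[p] != -1 or is_edge[p]:
                continue
            labels[p] = next_label
            stack.extend(idx[p])
        next_label += 1
    return labels  # -1 marks edge points; other values are segment ids
```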

Fig. 2. Two examples: (a) a car is divided into two segments; (b) a truck is divided into three segments.

3.2 Shape Selection

After segmentation, each part of the vehicle is converted into a bounding box to simplify the 3D representation.

Building the Shape Database

First, a database of simple 3D geometric shapes is built to store a set of representative shape exemplars. The selection of these exemplars is straightforward: common shapes that appear in vehicles are chosen. It is important that the shapes have multiple flat surfaces, which makes it easier for the algorithm to estimate the vehicle's pose at a later stage. The database can be updated if vehicle shapes change. The following is an example (Fig. 3):

Fig. 3. Database of 3D geometric shapes with flat surfaces.

Initial Shape Selection

Instead of directly reconstructing shape representations, the proposed method operates indirectly by selecting shape exemplars. More precisely, after segmentation, the algorithm selects, for each segment, one shape exemplar from a set of K exemplars in the given shape database. The goal is to approximate the realistic shape of each segment while consuming minimum storage space. After an exemplar is selected, its parameters are adjusted so that the resulting bounding shape fits the segment as closely as possible. A loss function is developed to evaluate the fitness of the approximation, as sketched below. This polygonal modeling applies to different types of vehicles. By careful selection, all shapes consist of a limited number of flat surfaces, and each surface is a plane, i.e., a flat (not curved) two-dimensional surface.
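
As a minimal sketch of this selection step (with illustrative loss functions; the paper's database would hold flat-surfaced shapes such as the square frustums of Fig. 3, whereas only a toy "box" and a contrasting "sphere" entry are shown here), each exemplar is scored by a crude fit loss and the best one is kept:

```python
import numpy as np

def box_loss(pts):
    """Summed distance from the points to the surface of their tight box."""
    lo, hi = pts.min(0), pts.max(0)
    c, h = (lo + hi) / 2.0, (hi - lo) / 2.0
    q = np.abs(pts - c) - h
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
    inside = -np.minimum(q.max(axis=1), 0.0)       # depth to the nearest face
    return float((outside + inside).sum())

def sphere_loss(pts):
    """Summed radial deviation from a centroid-fit sphere (toy contrast case)."""
    r = np.linalg.norm(pts - pts.mean(0), axis=1)
    return float(np.abs(r - r.mean()).sum())

SHAPE_DB = {"box": box_loss, "sphere": sphere_loss}   # K = 2 exemplars here

def select_exemplar(segment_points):
    """Pick the exemplar that approximates the segment with the least loss."""
    losses = {name: loss(segment_points) for name, loss in SHAPE_DB.items()}
    return min(losses, key=losses.get)
```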

3.3 Distance Calculation and Parameters Fine-Tuning

A function is used to evaluate the quality of the resulting 3D geometric shape. The distance is measured between each point of the point cloud and the surfaces (planes) of the 3D shape. The following model is proposed: let \(P=\left\{{p}_{1},\cdots ,{p}_{i},\cdots ,{p}_{m}\right\}\) be the point cloud, and let \(S=\left\{{s}_{1},\cdots ,{s}_{j},\cdots ,{s}_{n}\right\}\) denote the flat surfaces (planes) of the resulting 3D model. Define \(dist\left({p}_{i}, S\right)\) to be the distance from point \({p}_{i}\) to S. The objective is to minimize the total distance from P to S:

$$d\left(P,S\right)=\sum \nolimits_{i=1}^{m}dist({p}_{i},S)$$
(1)

Then the problem can be decomposed into the following set of sub-problems:

  • Develop a function \(dist\left({p}_{i},{s}_{j}\right)\) that calculates the distance between a point \({p}_{i}\) and an arbitrary plane \({s}_{j}\).

  • Calculate the distances between point \({p}_{i}\) and each flat surface (each side is a plane) and take the shortest of the distances:

$$dist({p}_{i},S)=\underset{{s}_{j}\in S}{\mathrm{min}}\left\{dist\left({p}_{i},{s}_{j}\right)\right\}$$
(2)

The following paragraph explains how to calculate \(dist\left({p}_{i},{s}_{j}\right)\). Figure 4 shows the distance from point A to a plane determined by a normal vector N and a point B that lies in the plane. The distance from A to the plane is the length of the projection of the vector from B to A onto the normal vector. If C is the point where the projection meets the plane, then C is the point on the plane closest to A, and the distance from A to the plane is as follows:

$$ d = \left| {\overrightarrow {AB} } \right|\cos \theta = { }\left| {\overrightarrow {AB} \cdot \frac{{\vec{N}}}{\left| N \right|}} \right| $$
(3)
Fig. 4. Illustration of the distance from point A to a plane.
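
A small sketch of Eqs. (2) and (3), where each surface is given as an in-plane point B and a normal N (the two example planes are illustrative):

```python
import numpy as np

def point_plane_distance(a, b, n):
    """Eq. (3): |vec(BA) . N / |N||, the projection of B->A onto the normal."""
    a, b, n = (np.asarray(v, dtype=float) for v in (a, b, n))
    return abs(np.dot(a - b, n) / np.linalg.norm(n))

def dist_point_to_model(p, surfaces):
    """Eq. (2): minimum distance over all surfaces (in-plane point, normal)."""
    return min(point_plane_distance(p, b, n) for b, n in surfaces)

# Example: planes z = 0 and z = 1; the point (0.2, 0.2, 0.3) is 0.3 away.
faces = [((0, 0, 0), (0, 0, 1)), ((0, 0, 1), (0, 0, 1))]
print(dist_point_to_model((0.2, 0.2, 0.3), faces))  # 0.3
```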

Iterative Closest Point Fine-Tuning for Each Segment

For each segment, the parameters of the 3D geometric shape (such as its dimensions and orientation) are fine-tuned to match the corresponding point cloud using the Iterative Closest Point (ICP) method, which aligns two free-form shapes [22]. The problem is formulated as follows: given two corresponding free-form shapes (shape S and point cloud P), fine-tune the shape parameters to minimize the total distance \(d\left(P,S\right)\). After that, all segments are put together to form the final shape of the vehicle. Methods have been proposed to identify key points for more efficient computation [23]; the trade-off between accuracy and computational time depends on the number of key points selected.
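
The following is a minimal sketch of the fine-tuning step under stated assumptions: instead of a full ICP loop, it directly minimizes the summed point-to-surface distance d(P, S) of Eq. (1) over a box's center and half-extents with a generic optimizer; a complete implementation would also optimize orientation and alternate closest-point correspondences as in ICP [22].

```python
import numpy as np
from scipy.optimize import minimize

def box_surface_distance(points, center, half):
    """Distance from each point to the surface of an axis-aligned box."""
    q = np.abs(points - center) - half
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
    inside = -np.minimum(q.max(axis=1), 0.0)       # depth to the nearest face
    return outside + inside

def fit_box(points):
    """Fine-tune (center, half-extents) to minimize d(P, S) of Eq. (1)."""
    x0 = np.concatenate([points.mean(0), (points.max(0) - points.min(0)) / 2])
    loss = lambda x: box_surface_distance(points, x[:3], np.abs(x[3:])).sum()
    res = minimize(loss, x0, method="Nelder-Mead")
    return res.x[:3], np.abs(res.x[3:])            # fitted center, half-extents
```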

4 Design of Data Structure for RFID Tags

4.1 Data Structure in a Tag

Application developers can use the software development kits provided by reader manufacturers to write data into memory. The following information is stored in a tag to facilitate vehicle detection: the vehicle's 3D segment shapes; the tags' positions on the surfaces; the total number of tags on each surface (used later to weight the surface during pose estimation); and the coordinates of the polygons that form the vehicle's 3D model. This information allows us to achieve the following objectives: (1) recover the vehicle's 3D model; (2) calculate the portion of detected tags on each surface; (3) estimate the vehicle's orientation.

The following example gives a conceptual idea of the storage requirement for a typical passenger car. The car in Fig. 5 is divided into two segments, each a square frustum. A 3D square-frustum bounding box takes four parameters. Assuming a 2-byte floating-point number for each parameter, one box takes 8 bytes, so the two segments take a total of 16 bytes for the 3D representation. Each 3D object also takes 6 bytes (three 2-byte coordinates) to specify its position in 3D space; as a result, the two 3D objects consume another 12 bytes.

Fig. 5. All 3D objects share the same coordinate system.

There are many ways to represent a rotation: 3 × 3 matrices, Euler angles, rotation vectors (axis/angle), quaternions, etc. Take Euler angles as an example: a triple (x, y, z) specifies the rotations about the x-, y-, and z-axes, respectively (see Fig. 5). Each object thus takes 6 bytes for its orientation representation, adding another 12 bytes. Based on these rough calculations, the data storage requirement for the above example is about 50 bytes. This example also shows that the proposed algorithm can dynamically adjust the storage requirement for each vehicle.
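
The layout below is a minimal sketch of this payload under stated assumptions (2-byte floats, an invented field order, and made-up parameter values); it packs the two frustum segments into 40 bytes, within the 64-byte user memory of the tag used in Section 6.

```python
import struct

# Per segment: 4 shape parameters, 3 position coordinates, 3 Euler angles,
# all as IEEE 754 half-precision floats (struct format 'e', 2 bytes each).
SEGMENT = struct.Struct("<4e3e3e")                 # 10 * 2 = 20 bytes

def pack_segment(params, position, euler):
    return SEGMENT.pack(*params, *position, *euler)

payload = b"".join([
    pack_segment((1.8, 1.4, 0.9, 0.6), (0.0, 0.0, 0.4), (0.0, 0.0, 0.0)),
    pack_segment((1.6, 1.2, 0.7, 0.5), (0.0, 0.0, 1.1), (0.0, 0.0, 0.0)),
])
print(len(payload))  # 40 bytes for the two-segment car of Fig. 5
```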

4.2 Attach Multiple Tags to a Surface

Based on previous studies, attaching multiple tags to an object can significantly improve the reliability and accuracy of detection [25]. Ideally, the tags would be attached to a vehicle's surfaces uniformly; however, this is impractical in vehicle design. The strategy is to divide the usable area on each surface into grids, with one tag per grid, to maximize the vehicle's visibility from different angles. Depending on the design of a vehicle, the density of tags may differ from surface to surface. To overcome this heterogeneous density, a surface's exposure is measured as the percentage of its tags that have been detected.

A vehicle's 3D model is divided into separate surfaces, and the total number of tags on each surface is stored in the tag. During scanning, this number is compared with the number of detected tags. If a surface within detection range directly faces the detector, the detector should receive signals from all tags on that surface; otherwise, the surface is not fully facing the detector, or part of it is blocked. A minimal sketch of this exposure measure follows.
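
This sketch assumes surfaces are keyed by illustrative integer IDs; the totals come from the data stored on the tags.

```python
def surface_exposure(detected_counts, total_counts):
    """Fraction of each surface's tags that replied; keys are surface ids,
    values are tag counts."""
    return {sid: detected_counts.get(sid, 0) / total
            for sid, total in total_counts.items()}

# Example: surface 4 carries 12 tags and 9 replied -> 75% exposure.
print(surface_exposure({4: 9}, {1: 5, 4: 12}))     # {1: 0.0, 4: 0.75}
```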

5 Detection of Invisible/Occluded Vehicles

Detecting an invisible or occluded vehicle relies on the following information: (1) the vehicle's 3D model; (2) the distance and direction to the detector; and (3) the vehicle's orientation relative to the detector. For item (1), the model can be retrieved from an RFID tag. For items (2) and (3), the following sub-sections elaborate on the details of the proposed methods.

5.1 Direction and Distance Estimation of Passive RFID Tags

This section describes the time-of-flight (TOF) method for estimating the direction and distance of a group of passive RFID tags. Two readers are arranged horizontally at the two front ends of a vehicle to perform TOF-based localization; this arrangement serves to avoid collisions with vehicles ahead. The readers could instead be mounted at the rear of the vehicle to avoid collisions with vehicles behind. When multiple tags are present, the readers process one tag at a time.

For on-road vehicle detection, all vehicles travel on roads at the same or similar ground level. The proposed method requires only the direction and distance of a specific tag (not its position in 3D space); therefore, 2D TOF estimation is adopted in this paper (see Fig. 6). The RFID tag emits a signal, which propagates through the air toward the two readers, which are a distance k apart. Since the distance from the tag to each reader can be measured separately, the tag's direction relative to the vehicle can be estimated. Synchronization between the two sensors in TOF measurements is a challenging issue; Medina et al. [18] proposed a TDMA-based method that compensates for clock drift and the random variation of the start time.

Fig. 6. Measurement of the distance and direction.

In the above figure, the two readers and one tag are the three vertices of a triangle. \({\theta }_{1}\), \({\theta }_{2}\), and \({\theta }_{3}\) are the inner angles of the triangle, and \({r}_{1}\) and \({r}_{2}\) are the distances from the tag to readers 1 and 2, respectively. The goal is to find the distance (d) and direction (\({\theta }_{4}\)) between the vehicle and the tag. There are different ways [26] to measure the distance between two points, such as time-of-arrival (TOA), time-difference-of-arrival (TDOA), and received-signal-strength (RSS). Although more complicated positioning systems (such as GPS) can be used in a vehicle, they cannot fulfill all the requirements of this application; for instance, GPS signals are unavailable in tunnels. Time-of-flight (TOF) has been proposed to address the ranging issue in RFID systems [27, 28]. It measures the time of flight of a signal traveling from the transmitter to the measuring unit and back, performs ranging with a single antenna, and works with standard EPC Generation-2 tags. According to [27], 1-m ranging accuracy was achieved outdoors at a distance of 40 m. In another paper [28], ranging precision below 10 cm was achieved indoors for a MIMO system at a bandwidth of 100 MHz. In TOF, the one-way travel time between the reader and the tag is estimated by dividing the total traveling time by 2:

$$\tau =\frac{{t}_{1}-{t}_{0}}{2}$$
(4)

where \({t}_{0}\) and \({t}_{1}\) are the start and end times of the signal's travel. There is only one hop in this application, and the tag gives only a simple reply to the reader, so the delay spent on routing and processing can be ignored. The distance between the reader and the tag is then given by \(D=c\tau \), where c is the speed of light.

\({r}_{1}\) and \({r}_{2}\) are measured by TOF, and k is given. Then \({\theta }_{1}\) can be expressed by the law of cosines:

$${\theta }_{1}={\mathrm{cos}}^{-1}\left(\frac{{{r}_{1}}^{2}+{k}^{2}-{{r}_{2}}^{2}}{2k{r}_{1}}\right)$$
(5)

Similarly, the same method yields \({\theta }_{2}\) and \({\theta }_{3}\). The direction of arrival, \({\theta }_{4}\), measured at the midpoint of the baseline between the readers, can then be obtained as follows:

$${\theta }_{4}=\pi -\frac{1}{2}{\theta }_{2}-{\theta }_{3}$$
(6)

Moreover, the distance between the vehicle and the tag can be calculated by applying the law of cosines again:

$$\mathrm{cos}{\theta }_{4}=\frac{{d}^{2}+{\left(\frac{1}{2}k\right)}^{2}-{{r}_{2}}^{2}}{d\cdot k}$$
(7)

The above is a quadratic equation in the unknown d, whose solution is as follows:

$$d=\frac{\mathrm{cos}{\theta }_{4}\cdot k\pm \sqrt{{\left(\mathrm{cos}{\theta }_{4}\cdot k\right)}^{2}-{k}^{2}+4{{r}_{2}}^{2}}}{2}$$
(8)

Equation (8) has two solutions, one of which is discarded based on geometric constraints. A reader can read multiple tags (say, n) at a time, and \({\theta }_{4}\) and d can be calculated for each tag. Thus, the direction of arrival and the distance between the vehicle and the tag can be estimated as the averages over the detected tags:

$${\left[\theta ,d\right]}_{avg}=\left[\frac{1}{n}{\sum }_{i=1}^{n}{{\theta }_{4}}^{i},\frac{1}{n}{\sum }_{i=1}^{n}{d}^{i}\right]$$
(9)
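
Putting Eqs. (4)–(9) together, the following is a minimal sketch of the estimation pipeline; it assumes \({\theta }_{2}\) is the inner angle at the tag and \({\theta }_{3}\) the inner angle at reader 2 (the labeling implied by Eqs. (6) and (7)), and it keeps the root of Eq. (8) that yields a plausible positive distance.

```python
import math

C = 299_792_458.0                                  # speed of light, m/s

def tof_range(t0, t1):
    """Eq. (4) and D = c*tau: one-way range from round-trip timestamps."""
    return C * (t1 - t0) / 2.0

def tag_bearing_and_distance(r1, r2, k):
    """Eqs. (5)-(8): bearing theta4 and distance d from the baseline midpoint."""
    theta2 = math.acos((r1**2 + r2**2 - k**2) / (2 * r1 * r2))  # at the tag
    theta3 = math.acos((r2**2 + k**2 - r1**2) / (2 * k * r2))   # at reader 2
    theta4 = math.pi - theta2 / 2 - theta3                      # Eq. (6)
    disc = (k * math.cos(theta4)) ** 2 - k**2 + 4 * r2**2       # Eq. (8)
    roots = [(k * math.cos(theta4) + s * math.sqrt(disc)) / 2 for s in (1, -1)]
    return theta4, max(roots)     # the negative root is geometrically invalid

def average_pose(ranges, k):
    """Eq. (9): average bearing and distance over the n detected tags."""
    pairs = [tag_bearing_and_distance(r1, r2, k) for r1, r2 in ranges]
    n = len(pairs)
    return sum(t for t, _ in pairs) / n, sum(d for _, d in pairs) / n

# Example: a tag 3 m from reader 1 and 4 m from reader 2 on a 2-m baseline.
print(average_pose([(3.0, 4.0)], k=2.0))  # bearing ~2.08 rad, distance ~3.4 m
```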

5.2 Estimation of Vehicle’s Orientation

The relative positions of the detected tags can be used to estimate the vehicle's orientation. The tag IDs are organized hierarchically: the ID in each tag has the format {Vehicle ID, Polygon ID, Surface ID, Tag serial no.}. Based on this format, a tree structure is built for fast searching, as sketched below.
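
A minimal sketch of this hierarchy (the field values are illustrative): detected IDs are inserted into a nested tree keyed by vehicle, polygon, surface, and serial number.

```python
from collections import defaultdict

def build_tag_tree(tag_ids):
    """tag_ids: iterable of (vehicle_id, polygon_id, surface_id, serial)."""
    tree = lambda: defaultdict(tree)               # arbitrarily deep tree
    root = tree()
    for vehicle, polygon, surface, serial in tag_ids:
        root[vehicle][polygon][surface][serial] = True
    return root

scans = [(7, 1, 4, 0), (7, 1, 4, 1), (7, 2, 1, 0)]
t = build_tag_tree(scans)
print(sorted(t[7][1][4]))   # serials seen on vehicle 7, polygon 1, surface 4
```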

Localization registration determines the orientation of a set of detected tags relative to the pre-built global 3D map. The matching process is computationally expensive, and different approaches have been proposed to accelerate the search [29]. Ordered-tree comparison is suitable for localization registration [17]. The pre-built global 3D map can be organized as a tree (\({T}_{1}\)) consisting of several polygons, each consisting of several surfaces; each surface has an attribute recording the total number of tags attached. The detected tags can likewise be organized as an ordered tree (\({T}_{2}\)) of detected polygons, surfaces, and tags. The problem is thus transformed into an ordered-tree comparison. A recent study proposed an algorithm for comparing two similar ordered rooted trees with node labels, showing that an optimal mapping using at most k insertions or deletions can be constructed in \(O\left(n{k}^{3}\right)\) time, where n is the size of the trees (i.e., linear in n for fixed k) [17].
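
Before any tree comparison, a simple per-surface statistic already constrains the pose; the sketch below (with illustrative IDs) computes, for one vehicle, the fraction of each surface's tags that were detected, using the totals stored in \({T}_{1}\):

```python
def surface_ratios(t1_totals, detected_ids, vehicle):
    """t1_totals: {(polygon, surface): total tag count} from the stored map.
    detected_ids: iterable of (vehicle, polygon, surface, serial) tuples."""
    seen = {}
    for v, poly, surf, serial in detected_ids:
        if v == vehicle:
            seen.setdefault((poly, surf), set()).add(serial)
    return {key: len(seen.get(key, ())) / total
            for key, total in t1_totals.items()}

# Example: 9 of surface (1, 4)'s 12 tags replied, none of surface (2, 1)'s 5.
totals = {(1, 4): 12, (2, 1): 5}
ids = [(7, 1, 4, s) for s in range(9)]
print(surface_ratios(totals, ids, vehicle=7))      # {(1, 4): 0.75, (2, 1): 0.0}
```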

6 Performance Evaluation

6.1 Experiment Configurations

The experiment studies the effectiveness of the proposed detection method when the vehicle is invisible or occluded. Due to budget constraints, a small-scale experiment was implemented. The setup consists of two major components: a reader and a box simulating a car, separated by at most 5 m. The box was built to a car's proportions: 0.46 m long, 0.18 m wide, and 0.17 m high, with a total of 6 flat surfaces. Multiple tags are attached to each surface, 4 cm apart, and the inner surfaces of the box are covered with aluminum foil to simulate the metal body frame of a car. The numbers of tags on Surfaces {1, 3, 4, 5} are {5, 12, 12, 12}, respectively (see Fig. 7).

Fig. 7. Experiment setup.

Before the experiment, a product code was written to each tag for identification; the format of the product code is specified in the previous sections. The antenna is fixed in position while the box changes position and orientation. The following hardware was used:

  • Reader: CNIST-CN9400 (model no.). The query interval is 25 ms per antenna, and each unit has a total of 8 antennas. A software development kit is installed on a notebook.

  • Antenna: CNIST-CN09C (model no.)

  • RFID Tag: Invengo RFID Tag (XC-TF8102-B-C43, model no.): working frequency: 860–960 MHz, EPC memory: 256 bits, and user memory: 512 bits.

6.2 Effectiveness of RFID Detection

This sub-section studies the detection effectiveness while the vehicle moves. The first experiment detects the vehicle's front surface at different distances; the goal is to count how many tags are detected on each surface. Figure 8 shows the detectability at a specific angle (see Fig. 7) at different distances, with the percentage of detected tags counted at each distance.

Fig. 8. Distance sensitivity experiment.

The results show that the front surface (Surface ID 1) can be detected successfully at all tested distances. When the distance increases to 5 m, the percentage of detected tags decreases to 80%, but the success rate remains high. The experiment also shows that, at short distances, the reader can detect tags on surfaces that are not directly facing it.

The second experiment studies the sensitivity of tag detection when the vehicle changes its orientation. The box is fixed at a distance of 2 m and then slowly rotated from left to right through 90° (see Fig. 7).

Fig. 9. Sensitivity to rotation.

In Fig. 9, the box is rotated 10° at a time from left to right, so Surface 1 gradually rotates out of view while Surface 4 gradually rotates into view (see Fig. 7). The percentage for Surface 1 starts declining at 50°, and the reading drops to zero at 80°. For Surface 4, the percentage increases steadily as the vehicle rotates, peaking at 90°. The changes in the percentages of these two surfaces show that the vehicle is rotating.

6.3 Effectiveness of Occluded Object Detection

This experiment studies the effectiveness of vehicle detection when nearby objects occlude the vehicle. An object is slowly moved from left to right in front of the vehicle, and the percentage of detected tags is measured at each step as the occluded portion of the surface increases from 10% to 100% at a distance of 2 m. Surface 4 is used as the test case because it is the largest surface, so the most pronounced effect can be observed. Figure 10 shows the detection sensitivity of the simulated box.

Fig. 10. Sensitivity to occlusion.

Figure 10 shows that the percentage of detected tags decreases proportionally to the percentage of the occluded surface. This experiment demonstrates the importance of using multiple tags.

7 Conclusion

We identify the challenges of vehicle detection under adverse driving conditions. This paper takes advantage of RFID technology to improve vehicles' visibility and proposes a solution that overcomes the weaknesses of vision-based detection methods. An algorithm is developed to convert a point cloud into a simple 3D model, which is then stored in tags for recovery of the vehicle's boundary. The proposed method has the following advantages: vehicle detection is not sensitive to lighting, weather, or occlusion; vehicles can be detected at a relatively long distance; and the implementation is relatively simple. Finally, a small-scale experiment evaluates the performance of the proposed method. The results show that, using multiple passive RFID tags, the proposed method can detect a vehicle's orientation at various distances, distinguish whether a vehicle is rotating, and recover the boundary of an occluded vehicle.