
1 Introduction to Sensor Nodes

A wireless sensor network (WSN) consists of a large number of sensor nodes (SNs) that transmit data to a central station, commonly known as a base station (BS) or sink node, in a multi-hop fashion. From an Internet point of view, computation is primarily done at the BS, which is rich in computing and storage resources. So, the real questions are how to describe the deployment of SNs, how to discover neighboring SNs, and how to integrate the distributed processing done by SNs; together these considerations constitute a Global Sensor Network (GSN). For example, temperature can be measured by many SNs, and instead of sending each reading individually, it may be desirable to aggregate these values before they reach the BS. There is no well-accepted standard for transducers and sensors, and considerable resources are needed to develop them. Besides, they are not portable, and their compatibility ought to be examined carefully. Moreover, SNs need to be deployed as soon as there is a need for them in an ever-increasing range of application domains. With advances in technology, the price of SNs and transducers is decreasing day by day, and innovative schemes need to be devised to provide a flexible environment. To ease this situation, a wrapper can be employed to extract the desired parameters and create a combined representation that reflects them effectively. Typically, a wrapper transforms and translates information from SNs into an understandable form [1]. This may be in plain text or in HTML format, and the conversion could be time-consuming. Different types of wrappers, such as those for TinyOS, RFID, and UDP, can be used for different levels of abstraction; the lines of code required for different wrappers are shown in Table 8.1.

Table 8.1 Coding efforts for different types of wrappers [2]
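To make the wrapper idea concrete, the sketch below translates a raw TinyOS-style byte packet into a normalized record and aggregates temperature readings as they might be combined before reaching the BS. The packet layout, field names, and scaling are illustrative assumptions, not the GSN API:

```python
import struct

def tinyos_wrapper(packet: bytes) -> dict:
    """Translate a raw sensor packet into a common key/value representation."""
    # Assumed layout: node id (uint16), temperature in 0.1 C (int16), light (uint16).
    node_id, temperature, light = struct.unpack("<HhH", packet[:6])
    return {
        "node_id": node_id,
        "temperature_c": temperature / 10.0,
        "light_raw": light,
    }

def aggregate(records: list[dict]) -> dict:
    """In-network style aggregation: one averaged value instead of many readings."""
    temps = [r["temperature_c"] for r in records]
    return {"count": len(temps), "avg_temperature_c": sum(temps) / len(temps)}

packets = [struct.pack("<HhH", i, 215 + i, 430) for i in range(3)]
print(aggregate([tinyos_wrapper(p) for p in packets]))
```

A wrapper for RFID or UDP would follow the same pattern, differing only in how the raw input is parsed, which is why the coding effort in Table 8.1 varies mainly with the input format's complexity.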

To understand the effect of network size in a heterogeneous environment, an experiment was conducted [3] to determine the processing time for different types of transducers as the number of devices and queries is varied. The system shown in Fig. 8.1 consists of 5 desktops; 10 Mica2 motes with light and temperature transducers (message size 15 Bytes); 8 Mica2 motes each equipped with light, temperature, acceleration, and sound sensors (packet size 100 Bytes); 4 TinyOS motes with one light and two temperature sensors (packet size 29 Bytes); 15 wireless AXIS 206W cameras (which can capture 640 × 480 JPEG pictures at a rate of 30 frames per second with varying degrees of compression); and a Texas Instruments RFID reader with three different RFID tags (each tag can store up to 8 KB of binary data). Each mote generates a burst of size R with probability B > 0. Figure 8.2 shows the results for a stream element size (SES) of 30 Bytes; the average time to process the queries of 500 clients is less than 50 ms. The spikes in the graph are due to bursts, after which the system returns to normal behavior.

Fig. 8.1
figure 1

Multi-hop communication from a SN to the BS

Fig. 8.2
figure 2

Processing time per client

The spikes in the graphs are the bursts described above. Basically, this experiment measures the performance of the database server under various loads, which depends heavily on the database used. As expected, the database server's performance is directly related to the number of clients: with an increasing number of clients, more queries are sent to the database, and the cost of query compilation also increases. Nevertheless, the query processing time is reasonably low; the results show that the average time to process the queries of 500 clients is less than 50 ms, i.e., approximately 0.1 ms per client. If required, a cluster could be used to improve the query processing times, which is supported by most WSN platforms.
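A quick back-of-the-envelope check of the per-client figure, assuming the reported 50 ms is split linearly across the concurrently active clients:

```python
# Amortized per-client query cost: <50 ms spread over 500 active clients.
clients = 500
total_query_time_ms = 50.0
print(f"{total_query_time_ms / clients:.1f} ms per client")  # -> 0.1 ms per client
```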

2 Camera Sensor Nodes (C-SNs)

A camera acts as a sensor, since an image or picture is converted into a form that a computer can recognize: bits and bytes. A digital picture is just a long string of pixels, 1's and 0's, and all of these pixels together make up the image. A digital camera has lenses that focus the light to create an image of a scene, which is recorded electronically. A lens focuses light onto the sensor (or film); points of the scene that are out of focus are projected in the image as a "circle of confusion," as illustrated in Fig. 8.3 [5].

Fig. 8.3
figure 3

Projection of a scene by a C-SN lens

The operation of a digital camera is drastically enhanced by signal processing schemes [4], and one such scheme is shown in Fig. 8.3. A camera can record an analog or digital image, which can be printed, recorded optically on magnetic tape, or stored in RAM. A part of the scene satisfying the following thin-lens relation is focused correctly; points that do not satisfy it appear as a circle of confusion:

$$ \frac{1}{{d_{o} }} + \frac{1}{{d_{i} }} = \frac{1}{f}. $$
(8.1)
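As a worked example of Eq. (8.1), where d_o is the object distance, d_i the image distance, and f the focal length, the snippet below solves for the image distance at which a point is in sharp focus. The 50 mm lens and 2 m subject distance are illustrative values:

```python
def image_distance(f_mm: float, d_o_mm: float) -> float:
    """Thin-lens equation 1/d_o + 1/d_i = 1/f, solved for d_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_o_mm)

# A 50 mm lens focused on an object 2 m away:
print(image_distance(50.0, 2000.0))  # ~51.28 mm behind the lens
```

Objects nearer or farther than 2 m would come to focus at a different d_i, so on a fixed sensor plane they spread into circles of confusion.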

The data captured by a camera can be processed in many ways, and the different levels of representation are shown in Table 8.2 [6]. The "raw" information measured by the SNs is transmitted by radio signal and is called "Level 0" data. Level-1 data contain all the Level-0 data, with calibration and navigation information appended. Each pixel of Level-2 data contains geophysical values obtained by applying calibration and atmospheric corrections to the Level-1 data. Level-3 data contain geophysical parameters observed over a certain period, such as one day or eight days, interpolated onto a global grid with spatial resolution in degrees.

Table 8.2 Different levels of C-SN data and the corresponding representation
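As a minimal sketch of one Level-1 to Level-2 step, the snippet below applies the standard linear radiometric calibration that converts raw digital numbers (DN) to physical radiance. The gain and offset coefficients are illustrative, not those of any real instrument:

```python
def dn_to_radiance(dn: float, gain: float = 0.05, offset: float = 1.2) -> float:
    """Linear calibration: radiance = gain * DN + offset (W m^-2 sr^-1 um^-1)."""
    return gain * dn + offset

print(dn_to_radiance(200))  # -> 11.2
```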

Raw data received by a camera go through many steps of processing, which are summarized in Table 8.3 [7].

Table 8.3 Digital photography, computational steps and C-SNs [8]

Typically, a sensor converts light into electrical charges; digital cameras use either CCDs (Charge-Coupled Devices) or CMOS (Complementary Metal Oxide Semiconductor) sensors, as both convert light into electrons. The value of each pixel (picture intensity) is read out and converted into a digital, machine-readable form [9] (Table 8.4).

Table 8.4 Comparing CCD and CMOS technologies [9]
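The sketch below illustrates this light-to-digital conversion for a single photosite: accumulated electrons are clipped at the full-well capacity and quantized by an A/D converter. The full-well depth and bit depth are assumed values, not those of a specific sensor:

```python
def adc_quantize(electrons: float, full_well: int = 20000, bits: int = 12) -> int:
    """Map an electron count to a digital number (DN) on a linear scale."""
    levels = (1 << bits) - 1                      # 4095 for a 12-bit ADC
    return round(min(electrons, full_well) / full_well * levels)

for e in (0, 5000, 20000, 30000):                 # 30000 saturates the well
    print(e, "->", adc_quantize(e))
```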

Although numerous differences exist between these sensors, they play the same role in the camera: turning light into electricity. For all practical purposes, both types of digital cameras work nearly identically. An important characteristic of a camera is its resolution, which represents the level of detail the camera can capture and is measured in pixels. The more pixels a camera has, the more detail it can capture, and the less blur will appear when pictures are enlarged. Pixel counts are summarized in Table 8.5 [10].

Table 8.5 Pixel numbers and camera sensor node (C-SN) specifications
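To relate the pixel counts of Table 8.5 to perceived detail, the sketch below applies the common rule of thumb that roughly 300 pixels per inch yields a sharp print; the sensor dimensions and the 300 ppi threshold are assumptions for illustration:

```python
def max_print_size_in(width_px: int, height_px: int, ppi: int = 300):
    """Largest print (inches) that still meets the given pixel density."""
    return width_px / ppi, height_px / ppi

w, h = 2048, 1536                                  # a 3.1-megapixel sensor
pw, ph = max_print_size_in(w, h)
print(f"{w * h / 1e6:.1f} MP -> {pw:.1f} x {ph:.1f} inch print")
```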

As photosites can only keep track of the intensity of the striking light, most sensors use a filtering scheme to capture the three primary color components.

3 Digital Images Using CCDs and CMOS Sensors

A simple mechanism for producing a color photo in a still camera is to separate the three basic color components of red, blue, and green with separate sensors and combine them at the receiving end, which yields superior image quality through enhanced resolution and lower noise [12]. It is much cheaper, however, to detect only one-third of the color information for each pixel and interpolate the other two-thirds with a demosaicing algorithm that "fills in the gaps" from other pixels, although this results in a lower effective resolution. Such consideration of multiple pixels together is due to Dr. Bryce E. Bayer [11], who invented the arrangement of red, green, and blue color filters that captures color information from multiple adjacent pixels. An alternating red/green and blue/green arrangement of 4 pixels, shown in Fig. 8.4, is called an RGBG filter; it is bonded onto the silicon sensor substrate, whose tiny cavities act like wells (pixels). This bonded color filter records colors based on the light photons each well receives, with each pixel capturing only one of the three colors. As human eyes are more sensitive to green, there are twice as many green squares as either blue or red.

As illustrated in Fig. 8.5, each 2 × 2 square of four pixels contains a single color per pixel, either red, green, or blue; the pixels are labeled G1, B1, R1, and G2. Consider the single pixel G1: it finds out how many blue photons B1 received and adds that information to its own green value. In a similar way, G1 gets information from R1 and G2. G1 thus obtains a complete set of primary-color data that it can use at its place on the sensor. When G1 acquires data from nearby pixels, it also shares its own value with them. In a similar way, a 3 × 3 grid can be analyzed, as shown in Fig. 8.5b, where the center pixel determines the constituent neighbor colors: a green center pixel is surrounded by 2 red, 2 blue, and 4 green pixels; a red center pixel has 4 green and 4 blue neighbors; and a blue center pixel has 4 red and 4 green neighbors. In this way, every single pixel is used by the 8 other pixels in its neighborhood; this process is known as "demosaicing" and provides smoothness in pictures.

Fig. 8.4
figure 4

Bayer arrangement of color filters on the pixel array of an image sensor

Fig. 8.5
figure 5

a 2 × 2 pixel array in an image sensor. b 3 × 3 pixel array in an image sensor [13]
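A minimal demosaicing sketch is given below. It performs plain bilinear interpolation over an RGGB Bayer mosaic using normalized convolution, a simpler stand-in for the neighborhood sharing described above; production cameras use considerably more sophisticated algorithms:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Bilinear demosaicing of an RGGB Bayer mosaic (H x W) -> (H x W x 3)."""
    h, w = mosaic.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1                              # red sample sites
    masks[1, 0::2, 1::2] = masks[1, 1::2, 0::2] = 1       # green sample sites
    masks[2, 1::2, 1::2] = 1                              # blue sample sites
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    out = np.zeros((h, w, 3))
    for c, k in zip(range(3), (k_rb, k_g, k_rb)):
        num = convolve2d(mosaic * masks[c], k, mode="same", boundary="symm")
        den = convolve2d(masks[c], k, mode="same", boundary="symm")
        out[..., c] = num / den                           # weighted neighbor average
    return out

bayer = np.random.default_rng(0).random((6, 8))           # toy mosaic
print(demosaic_bilinear(bayer).shape)                     # (6, 8, 3)
```

The normalization by the convolved mask ensures that sampled pixels keep their measured value while missing colors become weighted averages of the nearest samples, exactly the sharing among neighbors sketched in Fig. 8.5.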

A CCD sensor and camera circuit board are shown in Fig. 8.6 [13]. After capturing the image, the sensor sends the built-up charges row by row to an output register, where they are amplified before being sent to an A/D converter. The corresponding digital file is then stored for display and further processing. A similar arrangement for CMOS is illustrated in Fig. 8.7 [13]; a CMOS sensor is cheaper to produce than a CCD and consumes less power, at the cost of more complex per-pixel circuitry. Each pixel is accessible independently, but the extra circuitry takes up space and introduces more noise than in a CCD (Fig. 8.8).

Fig. 8.6
figure 6

CCD sensors and camera circuit board [14]

Fig. 8.7
figure 7

CMOS sensors and camera circuit board [15]

Fig. 8.8
figure 8

Deployment of SNs in a parking garage
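The toy sketch below contrasts the two readout styles described above: CCD-like sequential row-by-row transfer through a single output register versus CMOS-like random access to an individual pixel. The array size and charge values are arbitrary:

```python
import numpy as np

charges = np.random.default_rng(1).integers(0, 4096, size=(4, 6))  # toy sensor

# CCD-style readout: every row is shifted out in order through one register.
ccd_stream = [int(px) for row in charges for px in row]

# CMOS-style readout: any pixel can be addressed and read directly.
pixel_2_3 = int(charges[2, 3])

print(len(ccd_stream), pixel_2_3)
```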

4 Application of Camera Sensor Nodes (C-SNs)

SNs have been deployed to detect open spaces in a parking lot, as illustrated in Fig. 8.8 [16, 17]. The system contains 5 infrared beam sensors that detect any passing vehicle and determine its speed and length. To confirm the presence of a vehicle, a magnetometer is placed at an appropriate location, and a camera takes photos of speeding vehicles. Figure 8.9a shows the systematic conversion of signals by the infrared sensors at a 256 Hz rate and the magnetometer at a 16 Hz rate, and the camera's role in detecting the presence of a vehicle, its length, its speed, and its assigned unique ID. The variation of the signals is shown in Fig. 8.9b (Figs. 8.10 and 8.11).

Fig. 8.9
figure 9

a Vehicle detection steps in a parking lot. b Signals for vehicle detection
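A sketch of the speed and length estimation step is shown below. It assumes a geometry not spelled out in the text: two infrared beams a known distance apart are broken by a passing vehicle, so the offset between the two break times gives speed, and the time a single beam stays broken gives length:

```python
def speed_and_length(beam_gap_m: float, t_break1: float,
                     t_break2: float, t_clear1: float):
    """Estimate vehicle speed (m/s) and length (m) from beam-break timestamps."""
    speed = beam_gap_m / (t_break2 - t_break1)   # time to travel between beams
    length = speed * (t_clear1 - t_break1)       # time the first beam stays broken
    return speed, length

# Beams 2 m apart; timestamps in seconds (sampled at 256 Hz in the deployment):
print(speed_and_length(2.0, t_break1=0.000, t_break2=0.150, t_clear1=0.350))
# -> (~13.33 m/s, ~4.67 m)
```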

Fig. 8.10
figure 10

Illustration of remote sensor nodes (R-SNs) scheme

Fig. 8.11
figure 11

Reflectance from water and vegetation

5 Remote Sensor Node (R-SNs) Applications

It is interesting to note that camera-based techniques can be used in many different areas, and remote sensing (RS) is a good example of extending their usefulness. In fact, RS can provide both qualitative and quantitative information about distant objects without coming into direct contact with them. RS has numerous applications [18], such as meteorology (weather forecasting), environmental studies (pollution effects), agricultural engineering, physical planning (scenario studies), hydrology (water and energy balance), soil science (vegetation mapping), nature conservation (vegetation mapping), forestry (such as fire detection), and land surveying (topography). A geographical information system includes topography, soils, geology, precipitation, land cover, vegetation, remote sensing data, surface temperature, hydrology, population, nature conservation, environment, a digital terrain model, and topological maps [18]. The selection of a remote sensing system depends on the primary sources of EM energy, the available atmospheric windows, the spectral characteristics of the surface being sensed, and the spectral sensitivity of the available sensors. This is due to the fact that reflection from the surrounding surface affects the reading.
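As a concrete example behind the vegetation-mapping applications listed above, the sketch below computes the NDVI index, which exploits the fact that healthy vegetation reflects strongly in the near-infrared and weakly in red, while water absorbs both (compare Fig. 8.11). The reflectance values are illustrative:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red + 1e-9)      # small epsilon avoids 0/0

red = np.array([0.05, 0.08, 0.04])               # vegetation, soil, water
nir = np.array([0.50, 0.30, 0.02])
print(ndvi(nir, red))  # high for vegetation, moderate for soil, negative for water
```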

R-SNs provide a regional view of large areas in all seasons, cover a broader portion of the spectrum than the human eye, can simultaneously focus on a number of bandwidths, and provide digital data. A generic remote sensing (RS) scheme is shown in Fig. 8.12 [19], where aerial data are received from both airplanes and satellites; such a geographical information system (GIS) provides useful information about the surface of the Earth, and possible signal processing operations are summarized in Table 8.6 [20].

Fig. 8.12
figure 12

Remote sensor nodes scenario

Table 8.6 GIS applications using RSNs and signal-processing operations [21]

The GIS provides a synopsis of the area, giving an overview of the region in terms of differences and coherence. Such flexibility is due to the variety of sensors available, the underlying techniques and processing algorithms used, and the reproducible analysis adopted. In practice, the actual results depend on the conventional mapping of the specified data for a given application and on the process of updating the information. The interactivity depends on the cooperation of human knowledge and machine operations. The dynamic monitoring capability depends on a time series of data that reveals changes. Moreover, the invisible can become visible, as RS relies on objective, quantitative data, extrapolation of point measurements, and the opening up of inaccessible regions. RS provides a regional view of large areas and enables repetitive views of the same area. It covers a broader share of the spectrum than the human eye; moreover, SNs can use a number of bandwidths simultaneously, and many remote sensors operate 24 × 7 (Table 8.7).

Table 8.7 Processing levels in R-SNs and detailed steps [22]
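A minimal sketch of the dynamic-monitoring idea mentioned above: two co-registered observations of the same area are compared, and pixels whose reflectance changed by more than a threshold are flagged. The images and threshold are toy values:

```python
import numpy as np

def change_mask(img_t1: np.ndarray, img_t2: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Flag pixels whose value changed by more than the threshold."""
    return np.abs(img_t2 - img_t1) > threshold

before = np.array([[0.2, 0.2], [0.3, 0.3]])
after  = np.array([[0.2, 0.5], [0.3, 0.3]])
print(change_mask(before, after))  # True only where the scene changed
```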

6 Conclusions

The area of SNs is much wider than nodes with just a few transducers measuring physical quantities. A camera lens is more or less equivalent to a large number of SNs in a single lens, measuring the pixel intensities of the whole area that can be focused by the lens; such nodes can be termed C-SNs. Thus, the camera has become an important SN component for providing useful information and monitoring larger areas. Digital color images can be obtained using CCD and CMOS devices. Remote sensor nodes (R-SNs) take the coverage area one step further by capturing information from airplanes and satellites and fusing the information together. It is challenging to consider all these different types of SNs in a given application, as these technologies are complementary to each other. Future integration of these devices appears very promising and could lead to a very powerful and versatile system.

7 Questions

  1. Q.8.1.

    How do you compare sensing area and power consumption in a SN versus a camera, versus an aerial camera unit, versus a satellite camera? Use both qualitative and quantitative measures to understand the associated implications.

  2. Q.8.2.

    A WSN consists of 160 randomly deployed SNs in a large parking lot of size 2000 × 2000 units. The parking lot allocates an area of 6 × 9 units for parking each car. If each SN can sense an object within a radius of 10 units, what is the probability that cars at two selected spots can be detected by the SNs?

  3. Q.8.3.

    If the lot is divided into 16 equal parts, with each part having 10 randomly deployed SNs, what is the probability that a car can be detected?

  4. Q.8.4.

    If one-fourth of the SNs in Q.8.2 are replaced by cameras that can each cover 4 times the area of a SN, what is the probability that a vehicle will be detected either by a camera or by a SN?

  5. Q.8.5.

    If one-half of the SNs in Q.8.2 are replaced by cameras that can each cover 8 times the area of a SN, what is the probability that a vehicle will be detected either by a camera or by a SN?

  6. Q.8.6.

    How can you place several cameras so that the sensing area is equivalent to: (a) a triangle, and (b) a hexagon?