The debate regarding whether or not photography is a form of art is long over. Photographs are found in art museums and private collections throughout the world. Many institutions, such as the George Eastman House in Rochester, NY, and the Museum of Contemporary Photography in Chicago, are dedicated solely to photography. Owing to his love of the art form, singer and composer Elton John has assembled a collection of more than 8000 vintage photographs. And like other works of fine visual art, photographs often bring millions of dollars at auction.

Modern technology has given the photographer a wide range of tools for creating photographic images. We will consider some of these tools and the physical principles that make them work; others go beyond the scope of this book. The photographic art provides us with an opportunity to apply many of the principles of light and color that you have already learned. Modern cameras, tablets, and smartphones are so easy to use that it is tempting to merely “point and shoot.” Nevertheless, understanding photography and careful consideration of both scientific and artistic principles will result in much more satisfying photographs.

10.1 A Brief History of Photography

The earliest device for projecting and copying images was the pinhole camera or camera obscura, which we discussed in Chap. 3 (see Fig. 3.4). Although a sharp image could be produced by such a camera by making the pinhole small, the image was so dim that only the brightest subjects could be traced. In the sixteenth century, Girolamo Cardano added a lens to the camera, which allowed it to gather much more light and greatly increased the image brightness.

In 1826, French inventor Nicéphore Niépce succeeded in recording images of bright objects on a varnished pewter plate coated with asphaltum, a natural tar-like substance. The asphaltum that received less light could be dissolved with a solvent such as oil of lavender, and the plate was etched and engraved to form a permanent record, which Niépce called a heliograph (see Fig. 10.1). About a century earlier, Johann Heinrich Schulze, a German scientist, had discovered that certain silver salts darken when exposed to sunlight.

Fig. 10.1

(Joseph Nicéphore Niépce [Public domain], via Wikimedia Commons)

View from the Window at Le Gras, the first successful permanent photograph created by Nicéphore Niépce in 1826 in Saint-Loup-de-Varennes

A major step forward was made when French painter Louis-Jacques-Mandé Daguerre coated a copper plate with silver and exposed it to iodine vapor so that the surface became covered with a thin layer of silver iodide. Light striking the plate converted some of the silver iodide to small particles of silver. The plate was then developed by exposing it to mercury vapor so that the silver particles united with mercury to form an amalgam of silver and mercury. Finally, the plate was washed in a solution of sodium thiosulfate to remove the unexposed silver iodide. Daguerre called the images on his plates daguerreotypes. An early daguerreotype is shown in Fig. 10.2. Daguerreotypes could show considerable detail, but exposure times were very long. Nevertheless, they were commercially successful. A number of daguerreotypes can be seen in museums around the world.

Fig. 10.2

(Louis Daguerre [Public domain] via Wikimedia Commons)

The first surviving photograph to include a living person, taken in 1838 by French painter Louis Daguerre. The image shows a busy Paris street, but because the exposure time was more than 10 min, the moving traffic was much too fast to register; only a man who stood still to have his boots polished remained in place long enough to appear

English amateur scientist William Henry Fox Talbot is given credit for inventing the negative–positive process that made it possible to produce multiple prints of a photographic image. He coated paper with silver chloride to make it light-sensitive. After exposure, the paper was washed in a solution of sodium chloride or potassium iodide. Although the tones were reversed, he could reverse them again by photographically copying the negative image on a second sensitized paper.

In 1851, another Englishman, Frederick Archer, invented the collodion wet plate negative. Glass plates were coated with a mixture of iodine and bromine salts dissolved in collodion. Some 20 years later, collodion wet plates were superseded by gelatin dry plates, which consisted of glass plates coated with gelatin mixed with potassium bromide and silver nitrate.

In 1884, George Eastman invented a new photographic system that used roll film with a paper base and a roll film holder. Then, in 1889, the Eastman Kodak Company developed a transparent flexible film that used cellulose nitrate as a base. A small box camera with a roll film holder was also developed. Box cameras remained popular for many years (see Fig. 10.3).

Fig. 10.3

(Eastman Kodak Co. [Public domain], via Wikimedia Commons)

Bulls-Eye Kodak box camera, circa 1898

Kodak is also responsible for the 35 mm film that remains in use today. The format, which Kodak introduced in 1913, was later adopted by Leica in the 1920s for use in a camera they developed for still photography. For reasons of fire safety and durability, the 1920s saw Kodak and other companies begin the transition from flammable cellulose nitrate film to cellulose acetate “safety” film. Since the 1960s, polyester-based film has been the norm.

Over the period from 1910 to the mid-1930s, color photography evolved from taking individual photos through red, green, and blue filters and then superimposing them, to the much simpler and more convenient three-emulsion “sandwich.” Referred to as a “tripack,” the new format, marketed by Agfa-Ansco, allowed picture taking with a snapshot camera. Available in rolls, the film offered convenience, if not the sharpest color images.

In 1935, Kodak revolutionized color picture taking with the introduction of Kodachrome. The film was the first to use dye-coupled colors, in which a chemical process links the three dye layers to create an apparent full-color image. Taking quality color photographs had finally become simple enough to live up to the slogan Kodak had coined for its first box camera: “You press the button, we do the rest.”

Everything changed in 1986, when the Japanese company Nikon introduced a commercially available digital camera. Fourteen years later, the world’s first camera phone was marketed by Sharp. Within a few years, digital camera sales exceeded those of film cameras. Now dedicated digital cameras have taken a backseat to smartphone cameras, which number in the billions.

10.2 Cameras

Cameras may be categorized as compact, or point and shoot, and single-lens reflex (SLR). Compact and SLR cameras come in both film and digital models. Digital SLRs are referred to as DSLRs. Today, the most widely used digital cameras are those found on smartphones.

The essential parts of a camera are the lens, the shutter, the diaphragm (or iris), and film or an electronic sensor. The purpose of the lens is to focus an image on the film or sensor. A single convex lens will do this (see Sect. 4.7), but most cameras have multi-element lenses in order to minimize distortion and produce the sharpest possible image. In “point-and-shoot” cameras, the lens is fixed in one place, but in more sophisticated cameras the lens can be moved closer to or farther from the film in order to sharply focus either distant or close subjects. With lenses of variable focal length, called zoom lenses, different elements are moved relative to each other to change the effective focal length. Such lenses are very convenient for photographers who prefer not to change lenses.
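To see how much lens travel focusing actually requires, consider a worked example based on the thin-lens equation of Chap. 4 (the numbers here are illustrative, not from the text). For a lens of focal length \( f \), an object at distance \( d_o \) forms a sharp image at distance \( d_i \) behind the lens, where \( 1/d_o + 1/d_i = 1/f \). With \( f = 50 \) mm, a very distant object (\( d_o \rightarrow \infty \)) focuses at \( d_i = 50 \) mm, while an object 1 m away focuses at \( d_i = (1/50 - 1/1000)^{-1} \approx 52.6 \) mm. Focusing from infinity down to 1 m therefore requires moving the lens only about 2.6 mm farther from the film or sensor.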

The shutter controls the length of the exposure in order to accommodate different lighting conditions. Shutter speed is the amount of time the shutter remains open. The slower the shutter speed, the longer the film or image sensor is exposed to light.

Shutter speeds are expressed in seconds or fractions of a second. Typical shutter speeds, measured in seconds, include 2, 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000, 1/4000, 1/8000. Each speed increment halves the amount of light. Fast shutter speeds have the effect of freezing motion in the scene you are photographing. Conversely, slow shutter speeds will blur motion in a scene.
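A rough calculation (an illustration, not from the text) shows why fast speeds freeze motion: during an exposure of time \( t \), a subject moving at speed \( v \) travels a distance \( vt \). A runner moving at 5 m/s covers about 8.3 cm during a 1/60 s exposure, producing obvious blur, but only 5 mm during a 1/1000 s exposure.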

Shutters may either be mechanical or electronic. Mechanical shutters are of two types: the leaf shutter at the lens (“between-the-lens” shutter) and the moving curtain near the film or sensor (“focal-plane” shutter).

A leaf shutter employs a small number of identical overlapping metal blades called leaves that open and close. The leaves are relatively light and only have to travel a short distance. This allows fast shutter speeds. Unlike most focal-plane shutters, a leaf shutter provides synchronization with flash at all shutter speeds.

A focal-plane shutter consists of a pair of curtains. One curtain, which initially covers the focal plane, moves away, exposing the light-sensitive medium. After the desired exposure time, the second curtain, moving in the same direction, closes the aperture. When the shutter is cocked, the shutter curtains move back to their starting positions. An advantage of the focal-plane shutter is that all parts of the frame receive light for the same amount of time.

Unlike the leaf shutter, which covers a round aperture, the focal-plane shutter typically covers a rectangular area immediately in front of the film or sensor. Cameras with interchangeable lenses tend to use a focal-plane shutter to avoid the expense of building a leaf shutter into each lens.

Smartphone cameras, and many digital cameras, use electronic shutters. With this type of shutter there’s no physical barrier. Instead, an electrical pulse tells the sensor when to record. The sensor does not capture the whole frame at once; rather, the image is read out over time, row by row. The advantages of this type of shutter are that it is silent and capable of much higher shutter speeds than mechanical shutters. Speeds of 1/32,000th of a second are not uncommon. A downside to this arrangement is that it can produce what are known as “rolling shutter” effects, distortions in the images of fast-moving objects and rapidly flashing lights. These distortions result from subject movement during the row-by-row readout.
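The row-by-row readout can be mimicked in a few lines of code. The sketch below is a toy simulation (the function names, frame size, and timing are invented for illustration; no real sensor works at this scale): because each row samples the scene slightly later than the one above it, a vertical bar sweeping across the frame comes out slanted.

```python
import numpy as np

def capture_rolling(scene_at, rows=8, cols=16, row_delay=1e-3):
    """Build a frame row by row; scene_at(t) returns the scene at time t.
    Row r is sampled r * row_delay seconds after row 0."""
    frame = np.zeros((rows, cols))
    for r in range(rows):
        frame[r, :] = scene_at(r * row_delay)[r, :]
    return frame

def moving_bar(t, rows=8, cols=16, speed=2000.0):
    """A vertical bar sweeping rightward at `speed` columns per second."""
    scene = np.zeros((rows, cols))
    scene[:, int(speed * t) % cols] = 1.0
    return scene

print(capture_rolling(moving_bar))  # the moving bar records as a diagonal
```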

Canon and Sony have each developed an electronic sensor that doesn’t suffer from the rolling shutter effect. This type of shutter, referred to as a global shutter, exposes all of the sensor’s pixels at the same time, thus eliminating distortions and blurring.

Most traditional DSLR cameras use a mechanical shutter, while the majority of mirrorless cameras have an electronic shutter. An increasing number of DSLRs have both.

In SLR cameras the exposure time can be determined automatically, a feature that has disadvantages as well as advantages. However, most cameras have an override so that the photographer can set the exposure time manually, an option preferred by most serious photographers. To aid in determining the proper length of exposure, SLR cameras have a built-in light meter and an electronic readout displayed on a backlit LCD screen at the bottom of the viewfinder. Other cameras display the readout on the back or top of the camera.

With compact and some digital cameras, framing a subject is accomplished with a direct-vision viewfinder. The viewfinder, which is situated to the side or on top of the camera’s lens, provides a view of the subject slightly different from that seen through the lens. This gives rise to what is known as parallax error, which can make it difficult to properly frame the subject. The discrepancy between the two views is most pronounced for short subject distances.

An SLR camera is shown in Fig. 10.4; a cutaway drawing of an SLR camera appears in Fig. 10.5. A noteworthy advantage of SLR cameras, as well as DSLR cameras, is that focusing and framing the picture can be done with great accuracy by looking directly through the lens. The mechanism that makes this possible is rather complicated. During focusing, the diaphragm aperture is usually fully opened, and a mirror drops down to direct the light from the lens onto a ground-glass screen where it is viewed through a doubly reflecting pentaprism, as shown. When the shutter release is pressed, the mirror flips out of the way, and the aperture stop closes down to its preset size; then the shutter opens, and the picture is recorded. All this must happen in a fraction of a second.

Fig. 10.4

(Sebastian Koppehel [CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)], from Wikimedia Commons)

Sony Alpha 700 35 mm digital SLR camera

Fig. 10.5

(en:User:Cburnett (https://commons.wikimedia.org/wiki/File:SLR_cross_section.svg), “SLR cross section”, https://creativecommons.org/licenses/by-sa/3.0/legalcode)

Cross-section view of SLR system: 1—Front-mount lenses. 2—Reflex mirror at 45-degree angle. 3—Focal-plane shutter. 4—Film or sensor. 5—Focusing screen. 6—Condenser lens. 7—Optical glass pentaprism (or pentamirror). 8—Eyepiece

Some digital cameras don’t employ a mirror. In these “mirrorless” cameras, light passes directly through the lens onto the image sensor. The captured image is displayed on screen on the rear of the camera, which takes the place of an optical viewfinder. These cameras have the advantage of lighter weight and simpler construction.

The focal length of a camera lens determines the field of view. The “standard” lens on most SLR cameras has a focal length of 50 or 55 mm. A wide-angle lens having a focal length of 28 or 35 mm allows a larger field of view, a feature that is useful for photographing tall buildings in a city or for taking photographs in a small room. On the other hand, a telephoto lens narrows the field of view so that distant subjects appear much larger than they would with the standard lens. One of the advantages of SLR cameras is the ease with which lenses can be interchanged. Because the same lens is used for both viewing and focusing, no adjustment is needed in the viewing and focusing mechanisms when lenses are interchanged.

Fixed-focus cameras have no mechanism that allows for focusing. Such cameras are not well suited for close-up photography and require that the photographer be some minimum distance from the subject for a sharp image. However, most cameras allow for manual or automatic focusing.

Some focusing systems employ a pair of prisms (a biprism) often mounted in the center of the SLR screen. The prisms, slanted in opposite directions, produce a split image. The two halves of the image appear to be displaced if the image is not in focus.

Today, most cameras have autofocus (AF) mechanisms that rely on one or more sensors to determine correct focus. Autofocus systems may be either active or passive. Active systems employ a method of ranging based on time of flight. With this approach, light from an LED or laser is directed toward the subject. The time required for the light to reach the subject and travel back to the camera is used to calculate distance. Once the distance is computed, the lens is adjusted to the corresponding focus.
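The distances involved make the timing demands clear (a worked example, not from the text): if the round-trip time is \( t \), the subject distance is \( d = ct/2 \), where \( c = 3 \times 10^8 \) m/s is the speed of light. A subject 3 m away returns the light pulse after only \( t = 2d/c = 20 \) ns, so active systems require very fast timing electronics.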

Passive autofocusing occurs electronically within the camera. Focusing is based on the difference between light and dark areas within an image. Since an image tends to be in focus when the contrast between these areas is the greatest, the camera uses input from sensors to adjust the focus until the contrast between adjacent areas is maximized.
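The idea can be captured in a short sketch (a simplified illustration, not any camera’s actual algorithm; `render` is a hypothetical function returning the sensor image for a given lens position): the lens is stepped through candidate positions, a contrast score is computed at each, and the position with the highest score is kept.

```python
import numpy as np

def sharpness(image):
    """Contrast metric: mean squared difference between neighboring
    pixels. In-focus images show the strongest local contrast."""
    gx = np.diff(image, axis=1)
    gy = np.diff(image, axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()

def contrast_autofocus(render, positions):
    """Step through lens positions and keep the one with maximum contrast."""
    return max(positions, key=lambda p: sharpness(render(p)))
```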

10.3 Focal Lengths, f-Numbers, and Field of View

The aperture stop in cameras is designated by the f-stop number, which is defined as the ratio of the focal length to the diameter of the aperture. Thus, the diameter of an f/4 lens is twice that of an f/8 lens of the same focal length, and so it lets in four times as much light (because its area is four times as large), other things being equal. An f/2 camera lens may typically be provided with the following aperture stops: f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16. Some of these apertures are shown in Fig. 10.6. Note that each of these numbers is \( \sqrt 2 \) times the preceding one, so that the area of the aperture is halved at each stop. If the exposure time is halved, the f-number should be reduced by one stop (divided by \( \sqrt 2 \)), provided the light conditions don’t change. For example, f/4 with 1/100 s would be used under the same conditions as f/2.8 with 1/200 s. Note that the diameter of an f/4 telephoto lens having a focal length of 100 mm is twice that of an f/4 standard lens having a focal length of 50 mm.
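These relationships are easy to compute directly. The short sketch below (an illustration built on the definition \( N = f/D \), not code from any camera) gives the aperture diameter of a lens and the shutter time needed to keep the exposure constant after a change of f-number, since exposure varies as \( t/N^2 \).

```python
def aperture_diameter(focal_length_mm, f_number):
    """D = f / N; e.g., a 100 mm f/4 lens has a 25 mm aperture."""
    return focal_length_mm / f_number

def equivalent_shutter(t_seconds, n_old, n_new):
    """Exposure ~ t / N^2, so holding exposure fixed requires
    t_new = t_old * (n_new / n_old)**2."""
    return t_seconds * (n_new / n_old) ** 2

print(aperture_diameter(100, 4))          # 25.0 (mm)
print(equivalent_shutter(1/200, 2.8, 4))  # ~1/100 s, as in the text
```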

Fig. 10.6

(KoeppiK [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons)

Lenses with apertures ranging from f/1.8 to f/11

The field of view, as we discussed in the previous section, is determined by the focal length of the lens and the size of the film or sensor. Table 10.1 gives the angle of view for various lenses used on a 35 mm camera. Doubling the focal length will halve the length and width of the field of view. Figure 10.7 shows a scene photographed with focal lengths ranging from a 17 mm wide-angle lens to a 200 mm telephoto lens.
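The angles in Table 10.1 follow from simple geometry (a supplementary derivation, not in the original): for film of width \( w \), the horizontal angle of view is \( \theta = 2\arctan (w/2f) \). For 35 mm film, whose frame is 36 mm wide, a 50 mm lens gives \( \theta = 2\arctan (36/100) \approx 40^{\circ} \), while a 17 mm wide-angle lens gives about \( 93^{\circ} \).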

Table 10.1 Angle of view for various lenses on a 35 mm camera
Fig. 10.7

(Courtesy of Jim Hamel Photography)

Scene photographed with lenses having focal lengths ranging from 17 to 200 mm

When a camera is focused for a particular distance, objects at a slightly greater and slightly smaller distance also will appear to be in focus. The distance range over which this is true is called the depth of field. The depth of field depends upon the aperture stop. The depth of field for a large f-number (small lens aperture) will be greater than for a small f-number. Thus, photographers often “stop down” a camera lens in order to obtain a large depth of field. On the other hand, it is sometimes desirable to have the background slightly out of focus (“soft” focus); this is accomplished by using a small f-number (large lens aperture). This is frequently the case in portrait photography. Photographs of the same scene with increasing apertures (decreasing f-numbers) are shown in Fig. 10.8. Note how close and distant objects fall out of focus as the depth of field decreases.
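A useful rule of thumb (a supplementary formula, not derived in the text) is the hyperfocal distance \( H \approx f^2/Nc \), where \( c \) is the diameter of the acceptable circle of confusion (about 0.03 mm for 35 mm film). Focusing at \( H \) renders everything from \( H/2 \) to infinity acceptably sharp. For a 50 mm lens at f/8, \( H \approx 50^2/(8 \times 0.03) \approx 10{,}000 \) mm, so focusing at about 10 m keeps everything beyond roughly 5 m in focus.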

Fig. 10.8

(Alex1ruff (https://commons.wikimedia.org/wiki/File:Dof_blocks_f1_4.jpg), https://creativecommons.org/licenses/by-sa/4.0/legalcode)

Photographs of the same scene with decreasing f-number (increasing apertures): (a) f/22; (b) f/4; (c) f/2.8

Most inexpensive point-and-shoot cameras do not make a provision for focusing the lens. At the same time, these cameras usually have fairly small lens openings (large f-numbers), so the depth of field is quite large. Nevertheless, they cannot be used for close-up photography. A typical 35 mm point-and-shoot camera has a lens with a 38 mm focal length, which means that it has a wide field of view. The short focal length also makes such cameras very compact, and the combination of size and convenience has made them very popular.

10.4 Digital Photography

In many respects, digital cameras are like traditional film cameras. In their simplest form, both consist of a light-tight enclosure that contains a lens for gathering light, a focusing device, and an opening and a shutter, which together determine how much light enters the enclosure. However, they differ in one very significant way: digital cameras use electronic photodetectors, referred to as image sensors, to capture images rather than photographic film.

Digital photography has revolutionized picture taking by making it possible to take virtually unlimited images inexpensively and view the results immediately. Already widely popular as stand-alone devices by 2005, digital cameras soared in number with the advent of the smartphone, a cellular phone that functions as both a computer and a high-resolution camera. It is estimated that 2.5 billion people currently have digital cameras in one form or another. At one time, using an entire 24-exposure roll of film to photograph an event might have been considered liberal; today taking hundreds of digital photos is not considered excessive.

It should be emphasized that the digital camera is far more than a consumer item. Digital cameras play an important role in astronomy, medicine, security, and television broadcasting. Improvements in digital imaging technology have led to high-definition television, the discovery of distant galaxies, and advanced approaches to medical diagnosis and treatment.

The Evolution of Digital Photography

In a period of a little over 50 years, digital imaging has evolved from humble beginnings to a process used by billions of people around the globe. There are few technologies that have affected so many in such a short time span.

The first digital image was produced in 1957 by Russell Kirsch and his colleagues at the United States National Bureau of Standards. Kirsch and his team used a scanning apparatus to transform a picture of his son into the binary language of computers (Fig. 10.9). Unbeknownst to the group, their work would lay the groundwork for digital imagery used in a broad range of applications such as those mentioned previously.

Fig. 10.9

(Russell A. Kirsch [Public domain], via Wikimedia Commons)

The first scanned image, produced in 1957, of Walden Kirsch, son of the leader of the team that developed the image scanner 

In 1969, the first digital image sensor was invented by scientists at Bell Telephone Laboratories. Three years later, Texas Instruments patented an electronic camera that did not require film. The year 1973 brought the release of the first large image-forming CCD chip by Fairchild Semiconductor.

A big leap in the development of digital photography occurred in 1975 when Steve Sasson, working at Eastman Kodak, created the first digital camera. Weighing eight pounds and recording only black and white images to a cassette tape, the camera took 23 seconds to capture an image. Needless to say, this prototype did not immediately lead to a commercially viable device.

Additional important advances in digital photography include the 1975 development of the Bayer filter mosaic pattern by Bryce Bayer at Kodak, which allows CCD sensors to capture color images. In 1981, Sony introduced the Mavica (magnetic video camera) still camera, which produced images that were stored on a two-inch floppy disc. The year 1986 brought the invention of the first megapixel sensor by scientists at Kodak.

In 1994, the Apple QuickTake 100 digital camera came into the market, followed a year later by Kodak’s DC40. The release of these cameras was important, for they were relatively inexpensive and simple to use. This was the boost that digital cameras needed. Within a decade of the introduction of these cameras, digital picture taking would surpass film photography.

By the early 2010s, almost all smartphones had an integrated digital camera. Many smartphone cameras provide performance that rivals, and in some instances, surpasses that of dedicated DSLRs. Smartphones are convenient, allow for instant image viewing and correction, and connect easily to the internet for picture sharing.

Smartphones also have the ability to take high-quality video. It is therefore not surprising that the use of smartphone cameras has soared in recent years.

How a Digital Camera Records an Image

All digital cameras have an imaging sensor, approximately the size of a postage stamp, that uses light to free electrons through the photoelectric effect. There are basically two types of image sensors in use today: the charge-coupled device (CCD) and the complementary metal oxide semiconductor (CMOS). Both types of sensors consist of millions of tiny light-sensitive silicon photodiodes called photosites. While both CCD and CMOS sensors rely on the photoelectric effect to generate electrons, they differ in how the electrons are transferred from the photosites to an electronic storage system and where the electrons are converted to a voltage. As a result, each type of sensor has unique characteristics that affect power consumption, processing speed, cost, and image quality.

The CCD Sensor

In a CCD, electrons are collected in each photosite’s electronic storage mechanism called a potential well. The number of electrons in each well corresponds directly to the amount of light incident upon the photosite. A magnified image of the surface of a CCD is shown in Fig. 10.10. The small colored dots mark the positions of the CCD’s photosites.

Fig. 10.10

(Serych at cs.wikipedia [Public domain], from Wikimedia Commons)

A greatly magnified surface of a CCD showing photosites

A CCD’s architecture allows charge to be moved from the array of potential wells to the camera’s memory system without using external wires. This is accomplished by transferring the electrons in each well to its neighbor, bucket-brigade style. This sequential, or serial, movement of charge in the CCD circuit is achieved by manipulating the voltages on the wells. The last well in the array deposits its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the entire contents of the semiconductor’s array of photosites is converted into a sequence of voltages. These voltages are then digitized and stored in memory.
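A toy model makes the bucket-brigade idea concrete (a sketch only: the gain value is invented, and real CCDs clock charge with multi-phase voltages rather than list operations).

```python
def ccd_readout(wells, gain=0.1):
    """Simulate serial CCD readout. `wells` holds the charge accumulated
    in each potential well; the well nearest the output is read first,
    and every remaining charge shifts one well toward the amplifier per
    cycle. `gain` is a hypothetical charge-to-voltage factor."""
    wells = list(wells)
    voltages = []
    while wells:
        voltages.append(gain * wells.pop())  # last well -> charge amplifier
    return voltages

print(ccd_readout([30, 10, 50]))  # -> [5.0, 1.0, 3.0]
```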

The CMOS Sensor

Like CCDs, CMOS sensors consist of myriad photosensitive elements. However, the CMOS sensor takes a different approach to processing the charges from the millions of photosites. Whereas charges in CCD photosites are sent to external electronics for processing, a CMOS sensor has circuitry at every photosite. This circuitry includes charge-to-voltage converters as well as amplifiers and digitization circuits. This means that all charges can be processed in parallel, that is, at the same time, clearing the photosites for the next exposure. This results in faster processing speeds.

In addition to faster processing speeds, CMOS sensors use less power and have lower manufacturing costs. For this reason, the CMOS sensor has become the choice of imager for use in smartphones and scanners. That said, CCD sensors remain a popular choice for applications where high efficiency is required.

Capturing Color

Both the CCD and CMOS sensors are only capable of detecting light intensity, that is, brightness. This is useful for recording black and white images, but not for capturing color. One method of designing a digital camera capable of recording color employs a combination of prisms or filters coupled with three separate sensors. Each of the three sensors receives light corresponding to one of the primary colors, red, green, or blue. A color image is created when the output from the three sensors is combined. While this approach produces high-quality color images, it tends to be costly. Fortunately, a more cost-efficient solution exists.

Instead of using three separate sensors, most digital cameras use something called the Bayer filter system in conjunction with a single sensor. Named after its inventor, the Bayer filter consists of a mosaic of red, green, and blue filters arranged on a grid of photosites, with each filter covering a single site (Fig. 10.11). Small lenses are used to concentrate the light at each site. A red filter only allows the passage of red light and blocks both green and blue light. Similarly, green and blue filters only allow green and blue light, respectively, to pass. Thus, the charge collected in each site is representative of the amount of red, green, or blue light striking that site.

Fig. 10.11

(Studio BKK/Shutterstock.com)

Bayer array on sensor

In a Bayer array, the color filters cover individual photosites in a 2 × 2 grid pattern. Green is used twice in the quartet of pixels to accommodate the human eye’s greater sensitivity to green. Since each photosite sits behind a single color filter, the output is an array of values, each indicating the intensity of the light passing its red, green, or blue filter. Thus, it takes four photosites to gather each pixel’s information regarding color and intensity.

The Image Signal Processor: The Core Component in the Image Processing Chain

At the heart of all digital cameras is the image signal processor (ISP). The primary function of an ISP is demosaicing, the process of translating the Bayer array of primary colors into a final image that contains full color information at each pixel, the smallest unit of brightness and color in a digital image. To do this, the ISP collects and averages the values from neighboring photosites to estimate the true color of each pixel.
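A bare-bones version of demosaicing can be sketched as follows (simple bilinear interpolation over an assumed RGGB tiling; production ISPs use far more sophisticated, edge-aware algorithms).

```python
import numpy as np

def demosaic_bilinear(raw):
    """raw: 2-D array of Bayer-filtered photosite values (RGGB assumed:
    red at even row/even column, blue at odd/odd, green elsewhere).
    Returns an h x w x 3 RGB image with the two missing colors at each
    pixel estimated from neighboring photosites."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3), dtype=bool)
    for y in range(h):
        for x in range(w):
            if y % 2 == 0 and x % 2 == 0:
                c = 0                      # red photosite
            elif y % 2 == 1 and x % 2 == 1:
                c = 2                      # blue photosite
            else:
                c = 1                      # green photosite
            rgb[y, x, c] = raw[y, x]
            known[y, x, c] = True
    for c in range(3):                     # fill gaps from known neighbors
        for y in range(h):
            for x in range(w):
                if not known[y, x, c]:
                    ys = slice(max(y - 1, 0), y + 2)
                    xs = slice(max(x - 1, 0), x + 2)
                    vals = rgb[ys, xs, c][known[ys, xs, c]]
                    rgb[y, x, c] = vals.mean() if vals.size else 0.0
    return rgb
```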

In addition, the ISP does a variety of other essential tasks. It controls autofocus, exposure, noise reduction, and, in some cases, face and object detection. It also manages image stabilization. When gyroscopic sensors detect shaking, the camera responds by using small motors to move either the lens, the image sensor, or both, to produce enhanced stability.

Digital Camera Resolution

A digital camera is often categorized based on its resolution, or how many pixels its sensor can capture. Camera resolution is usually given in megapixels. One megapixel equals one million pixels. Resolution is a measure of how large a photograph can be made without becoming unacceptably blurry or grainy. As a rule of thumb, larger prints require more megapixels. A two- or three-megapixel camera can produce some very good quality prints up to about 4 × 6 inches, whereas an 8 × 10 inch or larger print will generally require a resolution of four or five megapixels.
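The rule of thumb follows from simple arithmetic (an illustrative calculation assuming a print resolution of roughly 250 pixels per inch): an 8 × 10 inch print needs about \( (8 \times 250) \times (10 \times 250) = 2000 \times 2500 \), or five million pixels, whereas a 4 × 6 inch print needs only about 1.5 million.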

Today’s budget cameras’ sensors typically have resolutions of five megapixels or less; professional cameras may employ sensors having 20 megapixels or more. It should be noted that having more megapixels doesn’t guarantee that a camera will produce better quality images than a camera with fewer megapixels. The number of megapixels is only one factor to consider when choosing a digital camera. The quality of a camera’s lens system and image processing capabilities are of equal importance.

Image Storage

After an image is captured and broken down into pixels, the image data is transferred to the camera’s storage device. The method of storage is often a small, removable flash memory card. Data stored on such a device is not lost when disconnected from a power source. Flash memory may be erased and reused thousands of times. Smartphone cameras generally employ nonremovable internal data storage.

The most common format for data storage in flash or other types of solid-state storage is JPEG, an abbreviation of Joint Photographic Experts Group, the consortium of digital imaging authorities that defined it. Data in this format has been compressed to save storage space, the compression being accomplished by the removal of nonessential data. The reduction in data allows more efficient storage and transfer.

A second format that does not involve compression is the tagged image file format, or TIFF for short. Because no data is eliminated, this type of file tends to be much larger than a JPEG file.
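The practical difference between the two formats is easy to demonstrate with an image library. This sketch assumes the Python Pillow library and a hypothetical source file; the resulting JPEG will typically be many times smaller than the TIFF.

```python
from PIL import Image

img = Image.open("photo.png")       # hypothetical source image
img.save("photo.jpg", quality=85)   # JPEG: lossy, discards nonessential data
img.save("photo.tif")               # TIFF: nothing eliminated, much larger
```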

Photo Editing

Adobe Photoshop remains one of the best photo-editing applications available, but it is relatively expensive, and learning to use it can be challenging. Thankfully, there are many free photo-editing programs available that provide much of the same functionality as Photoshop but are much easier to use. Programs such as GIMP, Paint.NET, and Pixlr can perform basic functions such as cropping, correcting exposure, and image resizing, as well as carry out more advanced photo editing.

The Magic of Smartphone Cameras

At first blush, it would seem that producing high-quality photographs with a camera confined to a space not much bigger than a thin matchbox would be impossible. The major obstacle is sensor size. A bigger sensor will capture more detail with wider dynamic range (the detail in dark and light areas), offer superior low-light performance, and focus more sharply on moving objects. However, with few exceptions, smartphone and tablet cameras have tiny sensors. The magic is in the way this limitation is overcome.

Among the remedies used to deal with the constraints imposed by sensor size is backside illumination (BSI). This modification moves wiring to the back of the sensor, maximizing the surface area upon which light can strike the sensor’s photosites. Wider apertures, made possible by improved lens quality, also aid in light gathering. Working in concert with sensor and aperture improvements, the camera’s ISP not only massages data received from the sensor but also provides stabilization to compensate for camera shake.

Many new phones offer two or more rear-mounted cameras in addition to the front-facing selfie camera. The added cameras not only help in enhancing the depth of field through the use of different focal lengths but also allow the ability to shoot in low light. One camera delivers typical shots while the other may work as a zoom or wide-angle lens. Some phones use the two cameras together to produce a bokeh effect, which blurs the background while leaving the subject in sharp focus.

Some multiple camera phones come with a monochrome module. This feature is designed to take two shots simultaneously, with one of them being black and white. The photo processing software on the smartphone then combines both images to produce a sharper single image with enhanced color rendering.


As a result of the above, and other, innovations, today’s smartphone cameras have the ability to produce sharp enlarged prints, work in low light, often without the use of a flash, and take high-definition video, a feature made possible by stabilization.

10.5 Film

Modern black and white film is based on Talbot’s negative–positive process (Sect. 10.1), but the silver halide coating is applied to a plastic base. Sensitizers are added to the emulsion to make it sensitive to a wide range of colors (panchromatic film). An antihalation backing is generally applied to the reverse side of the plastic base to prevent the reflection of any light that isn’t completely absorbed by the emulsion. This light would otherwise be reflected at the back of the base to return to the emulsion and form a halo around the picture. This backing is removed during development.

Light striking the silver halide crystals generates one or more silver nuclei. The chemical developer converts (reduces) all of the silver halide, but the developer will work faster where there are already silver nuclei. Thus, the developer will first convert those crystals that have been exposed to light. The trick is to remove the film before the unexposed silver halide crystals are reduced (and the film becomes “overdeveloped” and totally dark). In order to stop the development before the unexposed crystals are reduced to metallic silver, the developer is washed away in the stop bath, generally an acid that makes the developer inactive. To prevent further conversion of silver halide to silver, the film is fixed with hypo (generally sodium thiosulfate Na2S2O3), which converts the insoluble silver halides into water soluble compounds that can be washed out of the emulsion.

To obtain a positive print, a sheet of photographic print paper is exposed, by contact or projection, to the negative image of a photographic negative. The print is developed in the same way as the negative. A large number of prints can be made from one negative, if desired. Contact printing results in prints of the same size as the negative, while enlargements are made by projecting an image of the negative through the lens of an enlarger. During the enlargement process, filters and masks can be used to change the photograph, correct areas of overexposure, and so on.

The film speed indicates the reaction rate of the emulsion, which determines the exposure necessary to create an image. Film speeds may be rated according to three speed indexes: ISO (International Organization for Standardization), ASA (American Standards Association), or DIN (Deutsche Industrie Norm). The ISO and ASA ratings, determined by different procedures, give almost the same film speed, and film packages often do not say which standard is used. Color films, for example, are usually rated 100 (slow), 200 (medium), or 400 (fast). ISO (or ASA) 400 film is twice as fast as 200; it requires half the exposure time under the same conditions. In the DIN system, an increase of three indicates a doubling of speed (DIN 21 film has twice the speed of DIN 18).
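The two scales are related (a supplementary detail, not in the original) by \( \mathrm{DIN} \approx 10\log_{10}(\mathrm{ISO}) + 1 \): ISO 100 corresponds to DIN 21, ISO 200 to DIN 24, and ISO 400 to DIN 27, consistent with the increase of three per doubling noted above.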

To expose the film properly, the photographer should consider three things: film speed, lens aperture (f-stop number), and exposure time. Many cameras do this automatically. With a given film speed, opening the lens by one stop (using the next lower f-stop number) allows one to decrease the exposure time by one-half. Doubling the film speed, on the other hand, allows one either to reduce the exposure time by one-half or to close the lens by one stop (use the next higher f-stop number).

Why, then, would one not always choose the fastest film available? There are two reasons: First, fast films generally cost more than slower films. Second, fast films usually have fairly large silver halide crystals or grains, while slow films have fine grains. If enlargements are to be made, it is desirable to use film with as small a grain size as possible. Some films have a variety of grain sizes and thus have a wide latitude for exposure.

10.6 Color Film

We have seen, in Chap. 8, how almost any color can be matched by combining three primary colors. It should be possible, then, to combine three images recorded on three emulsions sensitive to red, green, and blue to construct a color image. It would be nice if these separation negatives, as they are called, could be made with light-sensitive dyes, but unfortunately only silver compounds are sensitive enough to light to be practical. Thus, the three separation negatives and positives are really three black and white recordings of the red, green, and blue light that reached the film.

Color film consists of three emulsions plus a yellow filter on a common base, as shown in Fig. 10.12. The top emulsion has the basic blue sensitivity typical of all silver halide film. The next emulsion contains sensitizers that make it sensitive to green light; the bottom emulsion is sensitized for red. Because the green- and red-sensitive emulsions retain their sensitivity to blue light, a yellow filter is placed above them to prevent blue light from reaching them.

Fig. 10.12

a Cross-section of a tripack color film; b actual film includes a protective layer P, interlayers I, and an antihalation layer A

After the blue, green, and red records have been obtained, the black silver images must somehow be changed to color and combined into one color picture. This can be done either by additive or subtractive color mixing, although additive systems are rarely used today.

Additive mixing is conceptually the simplest method and was the first to be used. In principle, light could be projected through three black and white positives with color filters and combined on a screen, as done in projection color television, although this would lead to problems with registration. Historically, the first successful color films were mosaics; they combined black and white film with a mosaic of tiny red, green, and blue filters. The film behind each of them was exposed to just one color, and when projected or viewed in transmitted light the colored images combined additively.

In subtractive color mixing, used in nearly all present-day films, the developed silver is replaced by a dye of a color complementary to the color of the record. Then the three records can be combined subtractively. The dye destruction process, invented by Herbert Kalmus in 1932, places complementary dyes in the emulsion during manufacture. During development, the dye is chemically removed in the vicinity of exposed silver halide, thus forming a positive image. Unfortunately, the dyes absorb light during exposure, reducing the film’s sensitivity, so this process is practical only in color print paper, where high sensitivity is not important. The dye transfer process depends on the absorption of dyes by matrices of chemically hardened gelatin in the separate emulsions containing the red, green, and blue records.

The most common method for forming the dye image is the dye coupling process, in which the oxidized developer couples with a chemical to form a dye, thus producing a negative color image. The superposition of three color negatives transmits the colors complementary to those of the original image. To make color prints, the colors are reversed by exposing a three-layer emulsion coated on the paper to light through a color negative. To make positive transparencies, the color reversal is performed in a film consisting of three sensitive layers separated by gelatin; each layer contains crystals sensitized to one of the primary colors: red, green, or blue.

10.7 Instant Cameras

After several years of declining popularity, instant cameras have made something of a comeback. There seems to be something special about hearing the click of the shutter, watching a print emerge from the camera, and seeing a color image appear before your eyes.

Instant cameras use the same type of film described in the previous section, but the film carries a pod of viscous chemicals ahead of each frame that perform the function of both developer and fixer. After exposure, the film is pulled through rollers that squeeze it and break open the pod, spreading its contents between the film and support paper to develop it. The unexposed silver halide crystals converted into soluble silver by the fixer diffuse out of the emulsion toward the support paper. The originally unexposed regions become dark on the support paper, while in the exposed regions the metallic silver is not dissolved by the fixer, so no ions diffuse through and the support paper remains white. The result is a “positive” image created in just a few seconds.

The first instant color film was developed by the Polaroid Corporation and marketed under the name Polacolor. It was designed to be used in Polaroid cameras in place of black and white instant film. This film does not use couplers, but the dyes are incorporated into the film at the time of manufacture. Above each color-sensitive recording layer is an appropriate dye layer as well as a developer. The actual process of developing and printing the final image is quite complicated, and it will not be discussed here.

Due primarily to its failure to make the transition from analog to digital technology, the Polaroid Corporation filed for bankruptcy protection in 2001. However, Polaroid enthusiast Dr. Florian Kaps saved the day with his decision to fund The Impossible Project in 2008. In 2010, the company began selling film for use in the Polaroid SX-70, 600, and Spectra cameras, allowing instant photography aficionados to once again use their vintage cameras.

Today, fans of instant photography have several options. In addition to Polaroid, Fujifilm produces cameras in two film formats: Instax Wide and Instax Mini. Some companies, including Polaroid and Lomography, have produced cameras compatible with Fujifilm’s Instax film.

10.8 Lighting for Color Photography

Lighting is much more critical in color photography than in black and white photography. In addition to having the right intensity, the right color balance is needed. The eye adapts to changes in illumination (see Sect. 9.1), but the camera cannot do so. Hence, photographs of the same object taken with the same color film or digital camera setting in sunlight and in incandescent light will appear quite different.

Ideally, with film photography one would use a film that is color-balanced to the particular light being used. Practically, this is not always possible, and color-correcting filters must sometimes be used. Color film is generally available in two different types: daylight and indoor (or type A). Daylight film is color-balanced for sunlight, while indoor film is balanced for incandescent lighting. If daylight film is exposed indoors, the colors will be too reddish, because incandescent light is richer in reds (and deficient in blues) as compared to sunlight. Likewise, if indoor film is used outdoors, the colors will appear bluish. Most strobe flashes are color-balanced to resemble daylight, so daylight film can be used indoors with strobe flashes. Fluorescent lighting presents a problem, since it matches neither daylight nor incandescent light. Filters that correct for fluorescent light are available.

Some photographers use LEDs; however, their use is not yet widespread. Their favorable attributes include higher efficiency, which results in less heat production. They are usually fully dimmable and offer adjustable color temperature, making it easier to precisely match natural light. However, their power output is often below that of incandescent and fluorescent lights, making it difficult to shoot with smaller apertures or to capture stop action.

When using a digital camera, changing the white balance setting permits adjustment for differences in lighting. The human eye automatically takes into account different lighting situations, but a camera needs to be adjusted for accurate color reproduction. White balance is a camera setting that ensures that what appears white to the human eye will also appear white in the recorded image for any given type of light.

Digital cameras allow white balance to be controlled both automatically and manually. Automatic balance pre-settings include daylight, shade, cloudy, tungsten, fluorescent, and flash. In the manual mode a photo is taken of a white object such as a sheet of white paper under existing lighting conditions. Using that image as its white balance reference ensures that all photos taken under those conditions will come out correctly balanced.
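The manual method amounts to scaling the color channels so that the reference object comes out neutral. The sketch below (a simplified “white-patch” calculation, not any camera’s actual firmware) shows the idea.

```python
import numpy as np

def white_balance(image, white_patch):
    """image: h x w x 3 RGB array (values 0-255); white_patch: average RGB
    measured from a photo of a white sheet under the same lighting.
    Each channel is scaled so the white reference becomes neutral."""
    gains = white_patch.max() / white_patch   # per-channel gain factors
    return np.clip(image * gains, 0, 255)

# Under incandescent light a white sheet might read (240, 200, 150);
# the gains (1.0, 1.2, 1.6) then boost green and blue to rebalance.
```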

Color balance in digital photography can also be controlled using imaging software. The programs described in Sect. 10.4 all have color correction capabilities.

10.9 High-Speed Flash Photography

The Evolution of High-Speed Photography

Capturing events that occur too fleetingly for the unaided eye to observe remained out of reach until 1851. It was then that William Henry Fox Talbot used an electrical spark to “stop time.” Talbot attached a newspaper article to a rotating wheel. In a darkened room, he exposed the newspaper to light produced by an electrical discharge. The resulting image captured on a wet plate revealed readable text.

Stop-action photography was later put to use in the 1870s when Eadweard Muybridge was challenged to address a question posed by wealthy entrepreneur and horse racing enthusiast Leland Stanford. Stanford wondered if racehorses ever had all four hooves off the ground simultaneously. To learn the answer, Muybridge used a system consisting of 24 cameras and a set of tripwires. In order to capture the full range of a horse’s gallop, each tripwire was attached to trigger a different camera. As the photos in Fig. 10.13 indicate, Stanford’s question was answered in the affirmative.

Fig. 10.13

Eadweard Muybridge’s stop action photographs of a galloping horse (Eadweard Muybridge [Public domain] via Wikimedia Commons)

Most of us have admired photographs that capture events such as a bullet passing through an apple, the impact of a tennis ball with a racket or a foot with a football, and a bursting balloon. It was MIT professor Harold Edgerton who invented devices such as the stroboscope and the electronic flash tube that make it possible to witness such events. Edgerton said the strobe allows us “to chop up time into little bits and freeze it so that it suits our needs and wishes.”

Among other things, Edgerton’s research gave the world the electronic flash for cameras, slow motion photography, remote-controlled deep-sea cameras, and high-intensity flashlamps used in World War II to take nighttime aerial reconnaissance photographs. Modern extensions of his pioneering work continue to have applications in science, technology, industry, and national defense.

Low-Cost, Do-It-Yourself High-Speed Photography

You may not have realized that photographs such as those taken by Edgerton can be taken with ordinary electronic flash units. The trick is to synchronize the unit so that it flashes at exactly the right time. Techniques for doing this with very inexpensive equipment have been developed by a talented high school physics teacher, Loren Winters.

Many electronic flash units have durations as short as 30 μs, which is short enough to record most high-speed motion. Winters uses sound triggers, photogate triggers, or mechanical switches to trigger one or more electronic flash units. A sound trigger senses the sound of impact and triggers the flash. The simple sound trigger circuit, shown in Fig. 10.14, can be constructed for less than $10.
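A quick estimate (an illustration, not from the text) shows why 30 μs suffices: an object moving at speed \( v \) travels a distance \( vt \) during the flash, so a tennis ball served at 50 m/s moves only \( 50 \times 30 \times 10^{-6} \) m = 1.5 mm, little enough to appear frozen.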

Fig. 10.14

(Schematic courtesy of Loren Winters)

Simple sound trigger using a piezoelectric buzzer element as a microphone. A silicon-controlled rectifier (SCR) triggers the flash

When an electronic flash unit undergoes a full discharge, the flash of light lasts for several milliseconds, which is much too long for high-speed photographs. For the latter, the flash power must be reduced, as this also reduces the duration of the burst of light. Therefore, it is best to use a flash unit that has a feature to adjust the flash power. For most situations, one sets the power at the lowest setting.

Photographs are usually made in a darkened room. The camera need not have a high shutter speed, because the duration of the exposure is determined by the flash rather than the shutter. The shutter is typically set for one or two seconds so that it will be open in the dark before the flash. Examples of Winters’ high-speed photographs are shown in Fig. 10.15.

Fig. 10.15

(Photographs by Loren Winters)

High-speed flash photographs: a Milk droplet taken with multicolored sequential triggered flash; b plucked string using sixteen sequentially triggered flashes; c balloon burst by overpressure; d bullet bursting a balloon

10.10 Summary

The evolution of photography from heliographs and daguerreotypes to digital imaging is an interesting story. Cameras may be categorized as compact, or point and shoot, and single-lens reflex (SLR). Compact and SLR cameras come in both film and digital models. The essential parts of a camera are the lens, the shutter, the diaphragm (or iris), and film or an electronic sensor. Digital cameras use sophisticated optics and electronics to capture, edit, and store images. Color films create images in three different emulsions, and these are combined to produce color positives or negatives. High-speed flash photographs can be made by using simple trigger circuits with relatively inexpensive photoflash equipment.

◆ Review Questions

  1. To focus a camera, you move the lens back (closer to the film) or forward (farther from the film). When the lens is closer to the film, are you focused on a near or a far object?

  2. List the components in a simple camera and describe their functions.

  3. What is depth of field and how is it controlled in a camera?

  4. State the equation used to determine f-stop number.

  5. Verify that the following f-stops and shutter speeds all give the same exposure: f/2 at 1/200 s; f/2.8 at 1/100 s; f/4 at 1/50 s; and f/5.6 at 1/25 s.

  6. Why do professional photographers often carry more than one lens for their cameras?

  7. A student notices that 38 mm f/11 appears near the lens of an inexpensive camera. What is the focal length of the lens? What is the diameter of the diaphragm opening?

  8. If a camera is focused for a distance of 20 m, should the lens be moved closer to the film or farther from the film to focus on an object 5 m away? Explain why.

  9. What is the difference between a CCD and a CMOS detector?

  10. What is the function of the white balance setting on a DSLR?

  11. Explain how a digital camera records color.

  12. What are the relative advantages and disadvantages of fast and slow film?

▼ Questions for Thought and Discussion

  1. What are several advantages of a single-lens reflex camera over a “point-and-shoot” type?

  2. Why might a photographer wish to override the automatic exposure time feature?

  3. Under what conditions might you wish to decrease the depth of field?

  4. In what ways has digital photography revolutionized picture taking?

  5. Why are color negative films more often used to make color prints than color reversal films?

  6. Why should a flash unit be used in its automatic mode in taking high-speed flash photographs?

  7. Describe how a sound trigger works.

■ Exercises

  1. If f/8 and 1/125 s gives the proper exposure, what f-stop number should be used with 1/500 s under the same light conditions?

  2. Compare the diameter of an f/4 lens with a focal length of 50 mm to that of an f/8 telephoto lens having a focal length of 200 mm.

  3. A photograph is taken at f/4 and 1/250 s using ISO 100 film; what f-number should be used with ISO 400 film at 1/500 s in order to obtain the same results?

  4. If a bullet moves at 200 m/s, how far does it move during 30 μs? Could you “stop” it with an electronic flash?

  5. A zoom lens has a focal length that varies from 30 to 200 mm. Which setting will make the subject look largest? Which setting will include the widest angle of field?

  6. (a) What is the diameter of an f/4 lens with a focal length of 50 mm? (b) What is the diameter of an f/8 lens with a focal length of 200 mm?

  7. Which of the lenses in Exercise 6 might be considered a telephoto lens?

Experiments for Home, Laboratory, and Classroom Demonstration

Home and Classroom Demonstration

  1. Field of view of camera lenses. Open the camera back of an SLR camera and place a piece of ground glass (or plastic with a rough surface) at the location of the film. Determine how much of a meter stick can be seen at various distances. The field of view should fit the following relationship: field of view/object distance = width of film/image distance. (With a single-lens reflex camera, this experiment can be done by looking through the viewfinder.)

  2. Using a digital camera’s white balance setting. Take digital photographs of a particular scene using different white balance pre-settings. Notice how the settings affect each photograph’s color balance.

  3. Observing the rolling shutter effect. You can use your smartphone to observe what is known as the rolling shutter effect by taking a photo of a moving fan, a toy “spinner,” or an airplane propeller. Try taking photos with the smartphone held both vertically and horizontally.

  4. Measure shutter time using an oscilloscope. Connect a photodiode (a “solar cell” will work) to an oscilloscope and place the diode at approximately the film position in an SLR camera; shield it from room light with a black cloth. Aim the camera at a bright light and press the shutter. Note the width (duration) of the pulse on the oscilloscope.

Laboratory (See Appendix J)

  • 10.1 Exploring a Single-Use Camera with Built-In Flash.

  • 10.2 Field of View and Depth of Field of a Camera Lens.