
1.1 Introduction

The topic of electromagnetic metamaterials is a rich field that spans thousands of years and frequencies ranging from radio through the ultraviolet. These materials have a vast range of applications including art and jewelry, church decoration, frequency converters, electromagnetic cloaks, and sub-wavelength super lenses, just to name a few. Every day the field is expanding in new and different directions, and the range of new technologies being created seems limited only by the creativity of those involved. By combining our understanding of materials behavior with our unprecedented ability to model and fabricate structures at the nanoscale, researchers are bringing devices into the world that were previously only seen in the movies.

Here we define a metamaterial as “a man-made or otherwise artificially structured material with inclusions embedded in a host medium or patterned on a host surface, where the length scale of the inclusions is significantly smaller than the wavelength of interest.” The macroscopic optical properties of the composite material are a result of the sub-wavelength unit structure, rather than the constituent materials; and by tuning the design of those inclusions, one can tune the overall electromagnetic properties of the metamaterial.

1.2 Ancient Metamaterials

While there’s no way of knowing for sure when and where the first optical metamaterials were produced, a strong candidate for this distinction is glass, specifically stained glass. The first evidence of glass making was around 3000 B.C.E. in an area called the Canaanite–Phoenician coast near the Mediterranean (just north of present-day Haifa). The sand from this region contained the right concentrations of lime and silica, so that the traders needed only to mix in natron (a mix of soda ash, baking soda, salt, and sodium sulfate) while the melt was placed into a hot fire [63]. The earliest glasses developed in this manner were opaque rather than transparent due to scattering from small air bubbles or particles trapped within the glass during formation. Later, during the first millennium B.C.E., hotter kilns were developed and artisans began to introduce metal oxides into the glass to control the color. By forming the glass either with or without charcoal, the glassworkers were able to either reduce or oxidize added copper, and as a result, produce glass that was either red or blue, respectively.

Since it seems that no discussion of plasmonics or metamaterials would be complete without referencing the Lycurgus Cup, we’ll mention here that this artifact is one of the world’s most famous examples of metal being introduced into glass. The cup was produced in the fourth century A.D., during the Roman Empire (Fig. 1.1). It depicts the death of King Lycurgus in Thrace at the hands of Dionysus. As can be seen from Fig. 1.1, the glass appears green (a) when seen with light reflected off the surface of the cup, and red (b) when seen with light transmitted through the cup. This remarkable behavior results from the introduction of colloidal gold and silver into the glass during formation. The resulting gold and silver nanoparticles within the glass reflect the green portion of the visible spectrum while transmitting the red portion of the visible spectrum [4].

Fig. 1.1 The Lycurgus cup shown in both reflection (a) and transmission (b). Gold and silver nanoparticles are responsible for the strong reflection of green light and transmission of red light. © Trustees of the British Museum

It’s truly remarkable that from this period in history all the way to the present day, people have been studying how to produce new and different optical properties by simply combining nanoscale metallic inclusions within dielectrics, and that a theory for this type of scattering from metal spheres would not be formalized until 1908 by Gustav Mie [45], ∼1600 years after the cup was made. From the fourth century on, people began studying how to produce stained glass by annealing the material with the addition of metallic salts. One of the first books documenting these studies, and arguably the first book on metamaterials, was “The Book of the Hidden Pearl,” written in the eighth century A.D. by Jabir ibn Hayyan, which discusses the manufacture of colored glass as well as techniques for the coloring of gemstones [31].

Around the turn of the twentieth century, the theories required to explain the behavior of metamaterials began to take shape. The first attempt at developing modern metamaterials with sub-wavelength structures was by Jagadis Chunder Bose in 1898. Bose used pieces of twisted jute in an effort to develop an artificially structured chiral material [8]. In 1904, J.C.M. Garnett published his paper on “Colours in Metal Glasses and in Metallic Films” in the Philosophical Transactions of the Royal Society [24]. Here he used the Drude model for the optical properties of free electron metals to describe how colors arose and changed within glasses when gold or silver films were annealed into nanoparticles that dispersed throughout soda glass. This work was followed in 1908 by the paper mentioned earlier from Gustav Mie that discussed the scattering of electromagnetic radiation by a sphere [45]. This work enabled the calculation of electric and magnetic fields inside and outside of a sphere which, in turn, can be used to calculate the scattering profile of incident light.
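To give a flavor of how these early effective-medium ideas translate into a calculation, the sketch below evaluates the Maxwell Garnett mixing rule for dilute metal spheres in a glass host. The permittivity values and fill fractions are illustrative placeholders rather than fitted data, and the formula is the standard quasi-static mixing rule rather than the full Mie solution.

```python
import numpy as np

def maxwell_garnett(eps_inclusion, eps_host, fill_fraction):
    """Quasi-static Maxwell Garnett effective permittivity for dilute
    spherical inclusions embedded in a host medium."""
    contrast = eps_inclusion - eps_host
    num = eps_inclusion + 2 * eps_host + 2 * fill_fraction * contrast
    den = eps_inclusion + 2 * eps_host - fill_fraction * contrast
    return eps_host * num / den

# Illustrative (assumed) values: a gold-like permittivity near 530 nm
# inside an n = 1.5 glass host, at several volume fill fractions.
eps_gold = -4.0 + 2.5j
eps_glass = 2.25
for f in (0.001, 0.01, 0.05):
    print(f, maxwell_garnett(eps_gold, eps_glass, f))
```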

Taken together, these papers represent the first major effort using electromagnetic theory to analytically explain the behavior of nanoparticles, and more generally, optical metamaterials. This also represents a fundamental turning point in the history of metamaterials. Now, for the first time, the range of accessible metamaterials was not simply limited to those discovered by chance or passed down by word of mouth; rather, these theories could be applied in new and different ways to target specific applications and produce novel optical behavior.

1.3 Modern Metamaterials

In the late 1960s, the era of modern metamaterials began. Traditionally, modern developments in the field are attributed in large part to three seminal papers by Victor Veselago in 1967 [81], Sir John Pendry in 2000 [57], and David Smith in 2000 [71]. These papers certainly helped to inspire an entire generation of researchers in optical metamaterials and spark an enormous surge in related publications (Fig. 1.2(b)); however, these publications and others like them represent only one of three different communities that came together at the start of the twenty-first century to bring about the field of metamaterials as we know it today. Current progress in the field was brought about by a combination of key papers as well as advances in, and the commercialization of, both nanofabrication capabilities and full-field electromagnetic modeling tools. These three areas all represent key components in the development and understanding of modern metamaterials, and the intersection of all three has brought the field to where it is today.

Fig. 1.2 In (a), the metamaterial-based, two-dimensional microwave cloak from Schurig et al. [64]. The structure consists of ten concentric cylinders of split ring resonators mounted on printed circuit board. The plot in the foreground shows the relevant material parameters (μ_r, μ_θ, ε_z) as a function of distance from the center of the cloak. In (b), the number of papers published from 2000 to 2011 on the topics of: optical cloaking, nonlinear metamaterials, negative index metamaterials, and optical metamaterials

1.3.1 Publications

In 1967, Victor Veselago published his seminal paper on “The Electrodynamics of Substances with Simultaneously Negative Values of ε and μ” [81]. In this paper, Veselago describes “left-handed materials” that would support electromagnetic waves in which the phase velocity and Poynting vector point in opposite directions. This work was not realized experimentally until 2000, when David Smith and colleagues fabricated the first negative-index material by structuring an array of copper strips and split ring resonators on printed circuit boards [71]. Smith and Schurig later used these techniques and the recently emerging field of transformation optics to design an electromagnetic invisibility cloak that operated over a band of microwave frequencies, Fig. 1.2(a) [39, 64, 65, 82]. Concurrently in 2000, Sir John Pendry published his paper entitled “Negative Refraction Makes a Perfect Lens” [57]. Here Pendry took the concepts introduced in Veselago’s paper and explored the possibility of producing a negative-index “super lens.” In principle, this material could circumvent the normal diffraction limit of light and resolve structures “only a few nanometers across” at visible frequencies. The combination of these three efforts by Veselago, Smith, and Pendry helped kick off an enormous amount of research within the field of negative index materials and metamaterials in general, and the field would not be where it is today without their work. Between the years 2000 and 2011, many areas of research have received tremendous attention including: optical metamaterials (∼2800 publications), negative index metamaterials (∼2450 publications) [59, 67, 72, 80], optical cloaking (∼650 publications) [11, 58, 64, 79], and nonlinear metamaterials (∼450 publications) [36, 37, 60, 87]. Representative references for each topic are listed above, and a compilation of all publications from the four representative research topics in the field as a function of year is plotted in Fig. 1.2(b).Footnote 1 While it should be noted that papers within the four topics are not mutually exclusive, the general trends clearly show enormous growth within the field starting around 2000 for optical, nonlinear, and negative index metamaterials, and 2006 for optical cloaking.

While the contributions of these researchers have certainly shaped the development of the field of optical metamaterials over the last half century, they are by no means the only scientists to do so. From 1968 to 2009, Ben Munk was a pioneer within the field of frequency selective surfaces for radar and other military applications [48–50]. The military quickly realized the importance of being able to properly tune the design of antenna arrays for absorption and beam steering applications, and to this day, portions of his 1968 Ph.D. thesis are still classified. Research led independently by Vladimir Shalaev and Xiang Zhang has taken the microwave cloaking and negative index of refraction concepts demonstrated by Smith and Schurig, and extended them to visible frequencies [9–11, 60, 67, 79, 80]. And finally, in a similar vein to the work done by Veselago, modeling and design work led by Nader Engheta has predicted a wide range of exotic behavior, from phase shifters based on novel media, to electromagnetic tunneling through waveguides of “epsilon-near-zero” materials, to the introduction of circuit nanoelement models in the optical domain using plasmonic and non-plasmonic nanoparticles [17–19, 62, 69].

While this list by no means encompasses the full range of researchers who have made major contributions to the field, it does emphasize some of the major work that has leveraged metamaterial theory (including that of Veselago, Mie, and Drude) to tailor the design of metamaterials for specific applications. These researchers have done a superb job not only of studying the fundamental resonances involved in these nanostructures, but also of addressing the question that immediately follows: how should these structures be designed for specific applications?

1.3.2 Fabrication

As the field of metamaterials has moved to ever shorter wavelengths, the frequency ranges and resonances that we can study are limited by the modeling and fabrication capabilities at our disposal. During the 1960s, 1970s, and 1980s, the study of radio frequency (RF) metamaterials required structures with length scales on the order of centimeters, and these could easily be machined and assembled by hand. In the 1970s and 1980s, techniques for micro- and nano-lithography were developed to pattern structures with sizes spanning from tens of microns to tens of nanometers. For structures designed to operate at frequencies up to 1 THz, standard photolithography allowed large arrays of these structures to be patterned very quickly by exposing ultraviolet light through a patterned glass photomask to transfer the pattern into a photosensitive resist, and subsequently transferring that pattern into the resonator material through either etching or lift-off. As the operating wavelength of the metamaterial grew shorter, standard lithography techniques ran up against the diffraction limit of the exposure light, and new methods were employed. While a number of techniques have been developed over the past 20 years to fabricate structures at the nanoscale, including X-ray lithography, interference lithography, extreme ultraviolet and immersion lithography, direct laser writing, and imprint lithography, two techniques have played key roles in the fabrication work done in the metamaterials community: electron beam lithography and focused ion beam patterning.

While both techniques are key components within modern academic nanofabrication facilities, it is interesting to note that the two tools evolved along very different paths. Electron beam lithography was initially developed as far back as the 1960s and 1970s; however, for most of its history, this tool was largely used within the microelectronics industry, and its price made it prohibitively expensive for academic use. Even to this day, electron beam lithography tools in academia are mainly located within shared user facilities and have distributed ownership. In contrast, the focused ion beam has been more of a research and development instrument, with many key advances coming from its users. Only over the past few decades has the tool become mainstream enough to be commercialized by such companies as FEI Co. and Micrion Corp.

Scanning Electron Beam Lithography (SEBL) was first introduced as a commercial Gaussian beam system in 1962 by Philips, Eindhoven, and in 1974 as a commercially available shaped electron beam lithography system by Carl Zeiss, Jena. SEBL uses a set of electromagnetic lenses to focus a column of high-energy electrons (usually at 30, 50, or 100 keV) onto a focal plane with typical spot sizes between 2 and 10 nm, Fig. 1.3(a). The beam is then rastered across the sample at pre-determined positions to expose the resist where it should either remain or dissolve away during subsequent processing steps. Because the de Broglie wavelength [40] of these electrons is so much smaller than that of the ultraviolet light used in standard photolithography, the minimum feature size is orders of magnitude smaller. Another benefit of electron beam lithography is the flexibility of the tool when compared with photolithography. Beyond its resolution limits, photolithography requires the fabrication of a photomask which, once produced, is difficult to modify. In comparison, electron beam lithography is fed an electronic beam map before each run, which can be easily modified. While there are a number of benefits to using this technique, the primary limitation is throughput. Because the beam can only expose one spot at a time, the process is inherently serial, and as a result, patterning on 8″ or 12″ wafers would take orders of magnitude longer than with photolithography.
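To make the throughput limitation concrete, a back-of-the-envelope estimate of exposure time is sketched below. The dose and beam current are assumed, representative values only, and the estimate ignores stage moves and other overhead, so it understates real write times.

```python
# Idealized exposure-time estimate for a serial, single-beam writer.
dose = 300e-6          # resist sensitivity in C/cm^2 (assumed value)
beam_current = 2e-9    # beam current in A (assumed value)

def write_time_hours(area_cm2):
    # time = dose * area / current, ignoring stage moves and settling
    return dose * area_cm2 / beam_current / 3600.0

print(write_time_hours(0.01))   # ~0.4 h for a 1 mm x 1 mm metamaterial array
print(write_time_hours(324.0))  # ~13,500 h (over a year) for a full 8-inch wafer
```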

Fig. 1.3 An example of Split Ring Resonators fabricated on silicon substrates using Scanning Electron Beam Lithography (a). The scale bar corresponds to 5 μm. In (b), fishnet metamaterials fabricated using Focused Ion Beam milling were studied as negative index materials [80]. The scale bar corresponds to 1 μm

In comparison, the Focused Ion Beam (FIB) is a direct milling process and, after patterning, does not require further etching or lift-off steps to fabricate nanostructures [38, 80], Fig. 1.3(b). Early developments in FIB milling came about in the 1970s: Levi-Setti [22] and Orloff and Swanson [56] independently introduced the first field emission Focused Ion Beam systems in 1975, and Seliger introduced the first liquid metal ion source in 1979 [66]; however, it wasn’t until 1998 that FEI commercially produced the tool in its current form as a dual-beam Focused Ion Beam / Scanning Electron Microscope system.

Instead of transferring a pattern into a resist, which then requires further etching or lift-off steps to produce structures, the focused ion beam is a direct milling process. The FIB extracts gallium ions from a liquid metal source and accelerates these ions onto the sample in a focused beam with a radius of ∼10 nm. This allows the FIB to produce structures with critical dimensions equivalent to those with electron beam lithography; however, it has been shown that, for certain material sets and device designs, this process can be significantly quicker than electron beam lithography [20]. As with electron beam lithography, FIB is an inherently serial process, and as a result, is very time consuming for larger samples. Also, the materials selectivity of the etch process may be significantly reduced when compared with the variety of etches used in standard lithography. The etching must be timed properly or else significant erosion into the underlying substrate will occur. At the same time the sample is being milled, it’s also being implanted with gallium ions. This effect will typically reduce the quality of the materials being etched, and depending on the material and structures being fabricated, can result in significantly modified material properties [29].

One method that addresses the throughput issue of electron beam lithography is nanoimprint lithography [12, 27]. Using this technique, electron beam lithography is used to pattern a “master stamp” with the desired nanostructures. A sample is coated with a heat- or ultraviolet-curable resist and the stamp is pressed directly into the resist. The speed and effectiveness of this technique is then determined by the rate and extent to which the polymer conforms to the mask. Control of the adhesion between the stamp and the resist allows removal of the stamp while retaining the imprinted pattern. Once the stamp is fabricated, an entire 8″ wafer can be patterned in a matter of minutes. The resolution of this process is ∼10 nm, and there are no diffraction effects. The mold can be re-used many times and is limited only by the rate at which the stamping process erodes the stamp features. Not only does this process reduce patterning time by many orders of magnitude compared with electron beam lithography, but by stamping patterns onto the same substrate multiple times, three-dimensional structures can be fabricated layer by layer.

The last two benefits of imprint lithography cannot be emphasized enough. All of the methods discussed so far are inherently planar fabrication techniques, and fabricating fully three-dimensional metamaterials with these methods has two major problems. First, the accuracy with which subsequent layers can be positioned with respect to those already fabricated can be on the order of the resonator critical dimensions, which can significantly degrade the performance of the metamaterial. Second, the time required to fabricate a single, two-dimensional layer of metamaterials can be prohibitively long, and this essentially rules out repeating the process tens or hundreds of times to extend structures into the third dimension.

While there are promising alternative approaches to address these issues such as self-assembly techniques [25, 68] and direct laser writing [15, 21], these methods are still under development and are not yet integrated in standard fabrication facilities. Examples of both methods are shown in Fig. 1.4(a)–(b).

Fig. 1.4 In (a), an example of chiral metamaterials fabricated using Direct Laser Writing [78]. The cubic lattice is written in SU-8 negative tone polymer on a glass substrate. In (b), an example of metamaterial arrays fabricated using self assembly [86]. Nickel is patterned in an inverse-opal structure. The four rows correspond to different filling fractions of nickel, and the three columns correspond to observed far-field colors of the structure as a result of differing surface topographies

1.3.3 Modeling

Before the 1960s, electromagnetic modeling was mainly limited to closed-form and infinite series analytical solutions to the problems of interest. During the 1960s, both the Finite Element Method (FEM) and the Finite-Difference Time-Domain method (FDTD) were reported for the first time in the field of computational electromagnetics, and since then have become two of the main methods for analyzing complex optical metamaterials. The following sections give a brief overview and comparison of the two methods. For a rigorous treatment of these methods, the reader is referred to [33, 34, 76].

1.3.3.1 Finite-Difference Time-Domain Method

The Finite-Difference Time-Domain (FDTD) method is a time-stepping approach that models how electromagnetic waves actually move through a structure. The FDTD method has a number of different implementations; however, the best known is the highly accurate algorithm introduced by Kane Yee in 1966 [85], which was first made commercially available by Panoramic Technology in 1999. Using the Yee algorithm, the structure to be simulated is first broken up into a rectangular grid, with the corresponding ε and μ calculated at each spatial position. The method starts with the time-dependent, differential form of Maxwell’s equations:

$$\begin{aligned} \nabla\cdot \mathbf {D} =& 0, \end{aligned}$$
(1.1a)
$$\begin{aligned} \nabla\cdot \mathbf {B} =& 0, \end{aligned}$$
(1.1b)
$$\begin{aligned} \frac{\partial \mathbf {B}}{\partial t} =& -\mathbf {\nabla} \times \mathbf {E} - \mathbf {M}, \end{aligned}$$
(1.1c)
$$\begin{aligned} \frac{\partial \mathbf {D}}{\partial t} =& \mathbf {\nabla} \times \mathbf {H} - \mathbf {J}. \end{aligned}$$
(1.1d)

Here, Faraday’s law (Eq. (1.1c)) relates the magnetic flux density B, the electric field E, and the magnetic current density M, while Ampere’s law (Eq. (1.1d)) relates the electric flux density D, the magnetic field H, and the electric current density J. Equations (1.1a)–(1.1d) are then discretized using a central-difference approximation, which is accurate to second order. The Yee algorithm solves for the electric and magnetic fields in space and time by utilizing Maxwell’s curl equations. By combining Eqs. (1.1c) and (1.1d) with the constitutive equations for the electric flux density (Eq. (1.2)) and the magnetic flux density (Eq. (1.3)):

$$\begin{aligned} \mathbf {D} =& \varepsilon _{0}\varepsilon \mathbf {E}, \end{aligned}$$
(1.2)
$$\begin{aligned} \mathbf {B} =& \mu_{0}\mu \mathbf {H}, \end{aligned}$$
(1.3)

along with the fact that the electric and magnetic current densities can serve as additional sources of electric (J source) and magnetic (M source) energy:

$$\begin{aligned} \mathbf {J} =& \mathbf {J}_{\mathrm{source}} + \sigma_{E}\mathbf {E}, \end{aligned}$$
(1.4)
$$\begin{aligned} \mathbf {M} =& \mathbf {M}_{\mathrm{source}} + \sigma_{H}\mathbf {H}, \end{aligned}$$
(1.5)

where σ E is the electrical conductivity and σ H is the magnetic loss, we arrive at Maxwell’s curl equations for linear, isotropic, non-dispersive materials:

$$\begin{aligned} \frac{\partial \mathbf {H}}{\partial t} =& -\frac{1}{\mu} \mathbf {\nabla} \times \mathbf {E} - \frac{1}{\mu} (\mathbf {M}_{\mathrm{source}} + \sigma_{H}\mathbf {H} ), \end{aligned}$$
(1.6)
$$\begin{aligned} \frac{\partial \mathbf {E}}{\partial t} =& \frac{1}{\varepsilon }\mathbf {\nabla} \times \mathbf {H} - \frac{1}{\varepsilon } (\mathbf {J}_{\mathrm{source}} + \sigma _{E}\mathbf {E} ). \end{aligned}$$
(1.7)

This produces a set of six coupled scalar equations for \(\frac{\partial H_{x,y,z}}{\partial t}\) and \(\frac{\partial E_{x,y,z}}{\partial t}\) that represent a “Yee cell,” Fig. 1.5(a), where every electric-field component is surrounded by four circulating magnetic-field components and every magnetic-field component is surrounded by four circulating electric-field components. The simulation volume is then spanned by an array of Faraday’s law and Ampere’s law contours. Thus, the method accurately simulates both the differential and integral forms of Maxwell’s equations at every point in the simulation volume.

Fig. 1.5 The rectangular Yee cell is shown in (a). This visualization shows the distribution of electric and magnetic field vector components. Using Finite-Difference Time-Domain simulations, the three-dimensional volume of structures and spaces to be simulated consists of an array of these cells. An example of a bow-tie antenna discretized using the triangular, conformal meshing used in Finite Element Methods is shown in (b). The benefits of a conformal mesh over the rectangular grid used in FDTD can be seen near the corners of the antenna

To obtain each E and H component at time t and position (x,y,z), the curl equations are discretized in time, and the electromagnetic pulse propagates through the simulation volume by leapfrogging from \(E_{x,y,z}(x,y,z,t=0)\) to \(H_{x,y,z}(x+\tfrac{1}{2},y+\tfrac{1}{2},z+\tfrac{1}{2},t=\Delta t/2)\) to \(E_{x,y,z}(x+1,y+1,z+1,t=\Delta t)\), and so on. Finally, a Fourier transform of these results yields the field magnitudes and phases at every point and every frequency.
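A minimal one-dimensional sketch of this leapfrog update is shown below. It uses normalized units, free space, a soft Gaussian source, and no absorbing boundaries, so it only illustrates the staggered E/H time stepping of the Yee scheme rather than a production FDTD code.

```python
import numpy as np

# Minimal 1D FDTD leapfrog (free space, normalized units, Courant number 0.5).
# E is sampled at integer grid points, H at the half-integer points between them.
nx, nt = 400, 800
courant = 0.5
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)

for n in range(nt):
    # Half-step update of H from the spatial difference (curl) of E.
    Hy += courant * (Ez[1:] - Ez[:-1])
    # Full-step update of E from the spatial difference (curl) of H.
    Ez[1:-1] += courant * (Hy[1:] - Hy[:-1])
    # Soft Gaussian source injected at the center of the grid.
    Ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)

# Recording Ez at a monitor point over all time steps and Fourier transforming
# that trace would give the broadband spectral response described above.
print(Ez.max())
```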

1.3.3.2 Finite Element Method

Compared with the FDTD method, the Finite Element Method (FEM) is an inherently more complex and more general method. FEM is a numerical procedure for finding approximate solutions to boundary-value problems governed by partial differential equations. This approach was first reported in 1943 by Richard Courant in his study of elasticity and structural analysis [14], where the concept of mesh discretization of a simulation domain was introduced. It was not until 1969 that the method was introduced to the field of electromagnetic engineering, when Silvester applied this approach to microwave engineering [70].

The Finite Element Method for electromagnetics is a frequency-domain method, which is again based on Maxwell’s equations (1.1a)–(1.1d). When these are reformulated to include the anisotropic material permittivity (ε_ij) and permeability (μ_ij) of the structure under consideration, we start with:

$$\begin{aligned} \nabla\cdot(\varepsilon _{ij} \cdot \mathbf {E}) =& -\frac{1}{i\omega}\nabla \cdot \mathbf {J}, \end{aligned}$$
(1.8a)
$$\begin{aligned} \nabla\cdot(\mu_{ij} \cdot \mathbf {H}) =& -\frac{1}{i\omega}\nabla\cdot \mathbf {M}, \end{aligned}$$
(1.8b)
$$\begin{aligned} i\omega\mu_{ij} \cdot \mathbf {H} =& -\mathbf {\nabla} \times \mathbf {E} - \mathbf {M}, \end{aligned}$$
(1.8c)
$$\begin{aligned} -i\omega \varepsilon _{ij} \cdot \mathbf {E} =& -\mathbf {\nabla} \times \mathbf {H} - \mathbf {J.} \end{aligned}$$
(1.8d)

In this context, FEM assumes that the resonant structure under consideration is surrounded by an artificial absorbing boundary condition which approximates the electric and magnetic fields approaching zero at infinity:

$$\begin{aligned} \hat{n} \times \mathbf {\nabla} \times \mathbf {E} + ik_{0}\hat{n} \times\hat {n} \times \mathbf {E} \approx&0, \end{aligned}$$
(1.9a)
$$\begin{aligned} \hat{n} \times \mathbf {\nabla} \times \mathbf {H} + ik_{0}\hat{n} \times\hat {n} \times \mathbf {H} \approx&0. \end{aligned}$$
(1.9b)

Here, \(\hat{n}\) represents the unit vector normal to the boundary surface and \(k_{0}\) is the free-space wave number of the incident light. Following the treatment by Jin and Riley [34], when the boundary conditions in Eqs. (1.9a) and (1.9b) are combined with the vector wave equation for the electric field of Maxwell’s equations:

$$ \nabla\times(\mu_{0}/\mu_{ij} \cdot\nabla\times \mathbf {E}) - k_{0}^{2}\varepsilon _{ij} \cdot \mathbf {E} = -ik_{0}\sqrt{\mu_{0}/\varepsilon _{0}} \mathbf {J} - \mathbf {\nabla} \times(\mu_{0}/\mu_{ij} \cdot \mathbf {M}), $$
(1.10)

we arrive at

where V represents the entire volume of integration, S B is the artificial absorbing boundary surface, S surf is the surface of the structure being simulated, and T is an appropriate test function used for integration. Combining this with the artificial absorbing boundary condition in Eqs. (1.9a) and (1.9b) gives

(1.11)

The volume of integration V is then meshed into subregions using triangular, tetrahedral, or other types of meshing schemes. One example of this is shown in Fig. 1.5(b), where the conformal nature of these meshes is especially useful with curved surfaces such as the corners of a bow-tie antenna.Footnote 2 To find a solution to the electromagnetics problem posed in Eq. (1.11), the E-field tangent to each edge of an individual meshing cell is calculated, and a set of basis vectors is used to extrapolate the resulting fields throughout the remaining simulation volume. The E-field within the entire structure is then given by

$$ \mathbf {E} = \sum_{k=1}^{R_{\max}} \mathbf {R_{k}}E_{k}, $$
(1.12)

where R_k is the vector basis function associated with a given meshing cell edge, R_max is the total number of edges within the simulation with the exception of those on S_surf, and E_k is the tangential electric field component along the same edge. When the basis vectors, R, are the same as those in the test function, T, we can combine Eqs. (1.11) and (1.12) to obtain the discretized, Galerkin formulation of the electromagnetics problem for a single frequency:

$$ \sum_{h,k=1}^{R_{\max}} \mathcal{M}_{hk} E_{k} = -\iiint_{V} \mathbf {R_{h}} \cdot \bigl[ik_{0}\sqrt{\mu_{0}/\varepsilon _{0}} \mathbf {J} + \nabla\times (\mu_{0}/\mu_{ij} \cdot \mathbf {M} ) \bigr]\, dV, $$
(1.13)

where \(\mathcal{M}_{hk}\) is given by

(1.14)
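The full three-dimensional, vector formulation above is too long for a short example, but the essential FEM workflow (mesh the domain, assemble element stiffness and mass matrices, add the absorbing-boundary term, and solve one linear system per frequency) can be sketched in one dimension. The script below does this for the scalar Helmholtz equation E'' + k0² ε(x) E = 0 with a simple dielectric slab; all dimensions and material values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# 1D scalar FEM sketch:  E'' + k0^2 eps(x) E = 0  on [0, L]  (illustrative only).
# Left boundary: driven, E(0) = 1.  Right boundary: first-order absorbing
# condition E'(L) = i k0 E(L), entering through the boundary term of the weak form.
L, n_el = 2.0, 400
k0 = 2 * np.pi / 0.5                    # free-space wave number (lambda0 = 0.5, arbitrary units)
h = L / n_el
x = np.linspace(0.0, L, n_el + 1)

eps = np.ones(n_el, dtype=complex)            # assumed permittivity profile:
eps[(x[:-1] > 0.8) & (x[:-1] < 1.2)] = 4.0    # a dielectric slab in the middle

Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h        # element stiffness matrix
Me = np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6.0    # element mass matrix

A = np.zeros((n_el + 1, n_el + 1), dtype=complex)
for e in range(n_el):                    # assemble the global system matrix
    idx = [e, e + 1]
    A[np.ix_(idx, idx)] += Ke - k0**2 * eps[e] * Me

A[-1, -1] -= 1j * k0                     # absorbing-boundary term at x = L

b = np.zeros(n_el + 1, dtype=complex)
A[0, :] = 0.0                            # impose the Dirichlet drive E(0) = 1
A[0, 0] = 1.0
b[0] = 1.0

E = np.linalg.solve(A, b)                # one linear solve per frequency
print(abs(E[-1]))                        # field amplitude transmitted past the slab
```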

1.3.3.3 FDTD and FEM

While both FDTD and FEM accurately model the three-dimensional response of structures to incident electromagnetic radiation, a number of similarities and differences can be seen. FDTD is based on the relatively straightforward implementation of the Yee algorithm. The structure is excited using a broad-band pulse, so Fourier transforming the fields provides broad-band information from a single simulation. As a result, the hardware limitations of FDTD are based more on the speed and number of processors than on the amount of RAM available. The amount of RAM required is determined by the size of the simulation, the density of the mesh, and the amount of data being stored throughout the simulation. The method is highly efficient, and there is no large matrix to invert, as with FEM. The method is highly parallelizable, and can easily handle both anisotropic and inhomogeneous structures, including nonlinear and dispersive media. However, the method is naturally based on a rectangular grid. As a result, accommodating curved or highly dynamic surfaces is a limitation, even with recent advances in conformal meshing techniques. Further, a new broadband simulation is required every time the excitation conditions are changed.

In comparison, FEM is a more complicated approach in terms of formulation, meshing, and computation. Proper mesh generation and boundary truncation can be a significant challenge. Also, to arrive at a solution for a given frequency, a system of linear equations needs to be solved. The large matrix inversions required can result in substantial computational requirements; however, recent advances in sparse matrix solvers can be incorporated to significantly reduce these issues. As a result, the hardware limitations of FEM are based more on the amount of RAM needed to complete the matrix inversion. While techniques exist to mitigate this problem, the matrices involved scale with the number of degrees of freedom and the meshing density, and can quickly grow to the point where sizable amounts of RAM are required to obtain any solution. The resulting solution is independent of the excitation, and once the matrix is inverted, it is fairly easy to find solutions for other excitations.

Like FDTD, this technique can be highly parallelizable by simply running simulations at different frequencies on different processors. Also, while it is more complex to generate the triangular or tetrahedral meshing, the resulting mesh is conformal. Hence, the method excels with curved or highly dynamic surfaces. From a design and optimization standpoint, when the simulated structure requires only one illumination condition over a wide range of frequencies, FDTD is usually the stronger method. When the frequency range is narrow, or even limited to a single frequency, but a wide range of illumination angles and conditions are required, FEM is usually the stronger method.

Finally, it should also be mentioned that while FEM and FDTD are two of the main methods used for this type of electromagnetic analysis, they are by no means the only methods. In specific situations, other techniques such as Rigorous Coupled Wave Analysis, the Method of Moments, the Boundary Element Method, and others are also utilized to solve design problems, and when studying radio frequency or radar designs, they may actually be more applicable than FDTD or FEM. In the end, the design optimization methods discussed throughout the rest of this book should be applicable to any of these design problems and can be combined with any of these simulation methods.

1.3.4 The Union of Fields

The key developments within the fields of Simulation (FDTD and FEM), Fabrication (SEBL and FIB), and Publications that have been discussed throughout Sect. 1.3 of this chapter are listed in Table 1.1, along with other relevant developments within these fields. While developments within all three areas date back to the 1960s, it wasn’t until the turn of the century that the field started growing into what we know today. This coincides with the first commercial dual beam SEM/FIB, the first releases of commercial FDTD and method-of-moments-based FEM solvers, and the publications by Pendry and Smith et al. [57, 71]. The union of these fields allowed a rigorous study of the electromagnetic resonances that occur within a wide range of today’s metamaterials operating at infrared and visible frequencies.

Table 1.1 Key developments within the areas of simulation, fabrication, and publication as they relate to the study of optical metamaterials

Just as the focus of metamaterials/Frequency Selective Surfaces that operate at microwave and radio frequencies has largely been on device applications, efforts within the field of metamaterials that operate at terahertz, infrared, and visible frequencies will become increasingly applications driven. To that end, advances in the field of metamaterial design will come about by manipulating fabrication and simulation capabilities in new and different ways.

1.4 Design

As with any design problem, to understand the observable extrinsic properties, you must first understand the intrinsic properties of the constituent components of a system. In the case of optical metamaterials, we will combine dielectrics and metals. For the designs studied in this book, the dielectric acts as the host medium that supports either structured or unstructured arrays of metallic resonators. In the case of bulk metamaterials, where the resonant arrays are distributed throughout the volume of the structure, the dielectric needs to be transparent to the wavelengths of interest. Any significant amount of absorption would result in higher metamaterial losses and decreased device performance. In contrast, this requirement is relaxed when working with frequency selective surfaces, where the resonant array is patterned above the dielectric substrate. Here, substrates such as semiconductors are sometimes utilized to tune the local dielectric environment in unique ways that would otherwise not be available to bulk metamaterials.

As we will see in the following sections, the behavior of these materials at optical frequencies is dominated by free electrons. To provide strong optical contrast between the metals and dielectrics, and to minimize the contribution of the dielectric to the overall metamaterial performance, dielectrics are chosen whose electrons are tightly bound to the atomic lattice. Additionally, we will see that the surface plasmon resonances that play a major role in device performance are only supported at interfaces between materials with negative (metals) and positive (dielectrics) dielectric constants.

1.4.1 Optical Properties of Metals

As the operation range of metamaterials has moved from radio and microwave to infrared, visible, and ultraviolet frequencies, the materials used to fabricate these structures have also changed. Structures such as frequency selective surfaces that operate at radio and microwave frequencies are oftentimes constructed using printed copper wires on top of printed circuit board material. At these frequencies, the metal can be treated as a perfect electrical conductor: the material can be thought of as having infinite electrical conductivity and as being lossless. The dielectric properties of the metal are fixed, and the penetration of the electric field below the surface of the metal is assumed to be negligible.

As we move to infrared and visible frequencies, this assumption breaks down and things become more complex. Here, the electromagnetic fields penetrate tens of nanometers into the metallic resonators, and the material response is dominated by the behavior of the free electrons within the metal. As a result, gold, silver, copper, and aluminum become the dominant materials used, because their electron density and configuration are such that bulk plasma oscillations and surface plasmons can be supported. These material resonances result in significant dispersion throughout the frequency range of interest.

The corresponding size of the individual resonant structures shrinks to tens to hundreds of nanometers. At these dimensions, the field penetration depth into the metal resonators becomes a significant fraction of the overall thickness. Additionally, when the geometry of the meta-atoms is tuned to have resonances that coincide with the natural resonances of the metals, we observe significant electromagnetic field enhancement around the resonators, and striking bulk optical properties.

As with any material, we can describe the optical properties with a frequency-dependent, complex dielectric function. For the plasmonic materials mentioned above, we express the dielectric function in terms of both free-electron effects (ε_D), using the Drude–Sommerfeld model, and interband transitions (ε_IB). Each of these contributions is discussed in turn, and we finish the section by discussing the surface plasmon effects that arise within these metals.

1.4.1.1 Drude Metals

Under illumination by a time-harmonic external electric field \(\mathbf{E}_{0}\mathrm{e}^{-i\omega t}\), the equation of motion for free electrons in a metal is given by

$$ m^{\ast}_{D}\frac{\partial^{2} \mathbf {r}(t)}{\partial t^{2}} + m^{\ast }_{D} \frac{1}{\tau}\frac{\partial \mathbf {r}(t)}{\partial t} = e\mathbf {E}_{0} \mathrm{e}^{-i\omega t}, $$
(1.15)

where e and \(m^{\ast}_{D}\) are the charge and effective mass of the free electrons, and r is the displacement of an electron under the external field. τ is the average relaxation time of the free electrons and is given by \(\tau= \frac{\ell}{\nu _{F}}\), where ν_F is the Fermi velocity and ℓ is the electron mean free path. These values for aluminum, copper, silver, and gold are listed in Table 1.2. Solving for r gives

$$ \mathbf {r} = \frac{e}{m^{\ast}_{D}}\frac{\mathbf {E}_{0}\mathrm{e}^{-i\omega t}}{ (\omega^{2} + i\omega/\tau )}. $$
(1.16)

Combining this result with Eq. (1.2), we obtain the complex Drude model for frequency-dependent permittivity:

$$ \varepsilon _{D}(\omega) = 1 - \frac{\omega_{p}^{2}}{\omega^{2} + i\omega /\tau}. $$
(1.17)

Here, the term ω p is the bulk plasma frequency given by \(\omega_{p} = \sqrt{(ne^{2})/(m^{\ast}_{D}\varepsilon _{0})}\). Finally, we can separate Eq. (1.17) into its real and imaginary components:

$$ \varepsilon _{D}(\omega) = 1 - \frac{\omega_{p}^{2}}{\omega^{2} + 1/\tau^2} + i\frac{\omega_{p}^{2}}{\omega\tau(\omega^{2} + 1/\tau^2)}. $$
(1.18)
Table 1.2 Drude and Drude–Sommerfeld values for plasmonic metals, including the plasma frequency ω_p [10], Fermi velocity ν_F (cm/s) [2], Drude relaxation time τ_D [2], frequency of interband transitions ω_IB [13], and the electron configuration

A plot of the real and imaginary Drude components of the dielectric function is shown in Fig. 1.6 for silver.Footnote 3 In this figure, the real part of the dielectric constant is shown as the solid blue line and the imaginary part is shown as the dashed green line. Also, to plot both on the same y-axis, the imaginary part of the dielectric constant has been plotted at ten times its actual value. Here we see that the real part of the dielectric constant is negative across visible and infrared frequencies. This indicates that under external illumination, the electrons are driven 180° out of phase with the incident light. This results in the high reflectivity that is typically associated with metals. We also see a significant contribution from the imaginary part of the dielectric constant. The optical losses associated with these metals are an inherent limitation for certain types of metamaterial designs.
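As a concrete illustration, the short script below evaluates Eq. (1.17) across the visible and near-infrared range. The plasma frequency and relaxation time are assumed, order-of-magnitude values for silver rather than fitted experimental parameters (see Table 1.2 for tabulated values).

```python
import numpy as np

# Drude dielectric function of Eq. (1.17) for silver.  The plasma frequency
# and relaxation time below are assumed, order-of-magnitude values.
omega_p = 1.37e16      # rad/s (~9 eV bulk plasma frequency)
tau = 3.1e-14          # s (Drude relaxation time)
c = 2.998e8            # m/s

wavelength = np.linspace(300e-9, 1500e-9, 500)
omega = 2 * np.pi * c / wavelength

eps_drude = 1 - omega_p**2 / (omega**2 + 1j * omega / tau)

# The real part is large and negative across the visible, while the imaginary
# (loss) part grows toward longer wavelengths, as in Fig. 1.6.
print(eps_drude[np.argmin(abs(wavelength - 500e-9))])
```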

Fig. 1.6 Real (solid blue) and imaginary (dashed green) Drude components of the dielectric function of silver. In this plot, the imaginary permittivity has been scaled up by a factor of ten

1.4.1.2 Interband Transitions

While the Drude–Sommerfeld model for metals provides a nice starting point for their understanding, it is by no means a complete explanation of their optical behavior. The fact that gold, silver, and copper refer to colors as well as metals clearly indicates that there’s more going on than the model in the previous section can explain. The explanation for such effects lies with interband transitions.

Gold, silver, and copper are all monovalent, face-centered cubic metals. For these noble metals, the Fermi surface strongly resembles a free electron sphere with the exception of the 〈111〉 direction, where the surface intersects the Brillouin zone face. From Table 1.2 we see that the electron configuration of all three has 10 electrons occupying the d-bands and 1 electron occupying the s-band. Additionally, all three metals have fully occupied d-bands lying 2–4 eV below the Fermi level. As a result, absorption can occur when light above this interband transition energy is incident upon the surface of the metal. This explains why copper has a somewhat reddish appearance, gold appears yellow, and silver strongly reflects across the entire visible spectrum.

To model the contribution of interband transitions to the overall dielectric function, we modify Eq. (1.15) to include damping from bound electrons γ, and the electron restoring force α:

$$ m_{B}\frac{\partial^{2} \mathbf {r}(t)}{\partial t^{2}} + m_{B}\gamma\frac {\partial \mathbf {r}(t)}{\partial t} + \alpha \mathbf {r}= e\mathbf {E}_{0}\mathrm{e}^{-i\omega t}, $$
(1.19)

where m_B is the mass of the bound electrons. Solving Eq. (1.19) following the same method as in Sect. 1.4.1.1, we arrive at

$$ \varepsilon _{\mathrm{IB}}(\omega) = 1 - \frac{\tilde{\omega}_{p}^{2}}{ (\omega ^{2}_{0} - \omega^{2} ) - i\gamma\omega}. $$
(1.20)

Here, the term \(\tilde{\omega}_{p}\) is the Drude–Sommerfeld plasma frequency given by \(\tilde{\omega}_{p} = \sqrt{(\tilde {n}e^{2})/(m_{B}\varepsilon _{0})}\), \(\tilde{n}\) is the concentration of bound electrons, and, \(\omega_{0} = \sqrt{\alpha/m_{B}}\). In a similar manner to Eq. (1.18), we can separate Eq. (1.20) into its real and imaginary components:

$$ \varepsilon _{\mathrm{IB}}(\omega) = 1 - \frac{\tilde{\omega}_{p}^{2} (\omega ^{2}_{0} - \omega^{2} )}{ (\omega^{2}_{0} - \omega^{2} )^{2} + \gamma^{2}\omega^{2}} + i\frac{\tilde{\omega}_{p}^{2}\omega \gamma}{ (\omega^{2}_{0} - \omega^{2} )^{2} + \gamma^{2}\omega^{2}}. $$
(1.21)

Plots of the real and imaginary contributions to the dielectric constant of gold are shown in Fig. 1.7. In this figure, the real part of the dielectric constant is shown as the solid blue line and the imaginary part is shown as the dashed green line.Footnote 4 Here the interband transitions can clearly be seen as spikes in ε_2. Finally, even at frequencies far from where interband transitions occur, these effects continue to have an influence on the overall dielectric function of the material. This manifests itself as a constant offset term, ε_∞, in the overall dielectric function. Typical values of this offset for gold are between 6.5 and 9, and for silver are between 4.5 and 5.
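Far from the interband resonances themselves, this constant offset is often the only interband correction retained, leading to the commonly used “Drude plus ε_∞” approximation sketched below; the parameter values are assumed placeholders within the ranges quoted above, not fitted data.

```python
import numpy as np

# "Drude + eps_inf" approximation: far from resonance, the interband
# contribution is folded into a constant offset (placeholder values for gold).
eps_inf = 9.0          # assumed high-frequency offset
omega_p = 1.37e16      # rad/s, assumed plasma frequency
tau = 9.3e-15          # s, assumed relaxation time
c = 2.998e8            # m/s

def eps_metal(omega):
    return eps_inf - omega_p**2 / (omega**2 + 1j * omega / tau)

print(eps_metal(2 * np.pi * c / 800e-9))   # permittivity near 800 nm
```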

Fig. 1.7 Real (solid blue) and imaginary (dashed green) interband components of the dielectric function of gold

1.4.1.3 Dispersion and Surface Plasmons

Separate from the bulk plasmons within the metals mentioned above is a type of electron density oscillation that occurs at the interface between a metal and a dielectric. These resonances are known as surface plasmons, and they play a significant role in the overall behavior of optical metamaterials that operate at infrared, visible, and ultraviolet frequencies. In addition, when these oscillations propagate along the metal surface in the form of a guided wave, they are referred to as surface plasmon polaritons (SPPs).

Even though the permittivity ε and the refractive index n are referred to as constants, we know that at optical frequencies these properties can vary significantly, depending on the configuration in which the materials are used as well as the frequency of the light involved. This property is known as dispersion. To calculate the dispersion of these structures, we start with an incident electromagnetic wave of the form [16]:

$$ \mathbf {E}(x,y,z) = E_{0}\text{e}^{i(k_{x}x - k_{z}\vert z\vert - \omega t)} $$
(1.22)

whose electric field has a component perpendicular to the interface (transverse-magnetic polarization). Here the components of the electric field within the metal are given by:

$$\begin{aligned} E_{x}^{\mathrm{metal}} =& E_{0}\text{e}^{i(k_{x}x - k_{z1}\vert z\vert - \omega t)}, \end{aligned}$$
(1.23a)
$$\begin{aligned} E_{y}^{\mathrm{metal}} =& 0, \end{aligned}$$
(1.23b)
$$\begin{aligned} E_{z}^{\mathrm{metal}} =& E_{0} \biggl(\frac{-k_{x}}{k_{z1}} \biggr)\text {e}^{i(k_{x}x - k_{z1}\vert z\vert - \omega t)}, \end{aligned}$$
(1.23c)

and the components of the electric field within the dielectric are given by:

$$\begin{aligned} E_{x}^{\mathrm{dielectric}} =& E_{0}\text{e}^{i(k_{x}x - k_{z2}\vert z\vert - \omega t)}, \end{aligned}$$
(1.24a)
$$\begin{aligned} E_{y}^{\mathrm{dielectric}} =& 0, \end{aligned}$$
(1.24b)
$$\begin{aligned} E_{z}^{\mathrm{dielectric}} =& E_{0} \biggl(\frac{-\varepsilon _{1}k_{x}}{\varepsilon _{2}k_{z1}} \biggr)\text{e}^{i(k_{x}x - k_{z2}\vert z\vert - \omega t)} , \end{aligned}$$
(1.24c)

where k_z1 and ε_1 represent the wave vector and dielectric constant within the metal layer, and k_z2 and ε_2 represent the wave vector and dielectric constant within the dielectric layer, respectively. For both sets of equations, k_x represents the component of the wave vector in the direction of propagation along the metal–dielectric interface. Similarly, k_z represents the component of the wave vector perpendicular to the metal–dielectric interface, and from this we obtain the decay length of the electromagnetic field into the layers, or the “skin depth”:

$$ \hat{z} = \frac{1}{\vert k_{z}\vert }. $$
(1.25)

Note that for metamaterial structures with thicknesses on the order of twice the skin depth, interactions between the two surfaces can occur and further modify the behavior of the individual resonant structure. By requiring continuity of the tangential E and H fields at the interface between the two layers, we obtain the dispersion relation for a single metal–dielectric interface [42, 74]:

$$\begin{aligned} k_{x} =& \frac{\omega}{c}n_{\mathrm{spp}}, \end{aligned}$$
(1.26a)
$$\begin{aligned} k_{z1,2}^{2} =& \varepsilon _{1,2} \biggl( \frac{\omega}{c} \biggr)^{2} - k_{x}^{2}, \end{aligned}$$
(1.26b)

where the effective surface plasmon index is given by

$$ n_{\mathrm{spp}} = \sqrt{\frac{\varepsilon _{1}\varepsilon _{2}}{\varepsilon _{1} + \varepsilon _{2}}}. $$
(1.27)

These relations show an exponential decay of the fields into both the metal and the dielectric, although the decay length is much shorter in the metal. Additionally, these relations are for ideal metals with no defects. As the size of the individual resonant element within the metamaterial is decreased, grain boundary and surface roughness scattering will play an increasing role in the performance of the device. This effect, along with the decreased size of the total structure, manifests itself in the form of a modified scattering time [10].
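The dispersion relation above translates directly into practically useful quantities such as the SPP effective index, the SPP wavelength, and the penetration depths into each medium. The sketch below evaluates Eqs. (1.26a)–(1.27) at a single wavelength using assumed, silver-like permittivity values on a glass substrate.

```python
import numpy as np

# SPP quantities from Eqs. (1.26a)-(1.27) for a single metal-dielectric
# interface.  Permittivities are assumed, silver-like placeholder values.
wavelength = 600e-9
eps_metal = -16.0 + 0.5j      # assumed metal permittivity at 600 nm
eps_diel = 2.25               # glass, n = 1.5

k0 = 2 * np.pi / wavelength
n_spp = np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))   # Eq. (1.27)
kx = k0 * n_spp                                                  # Eq. (1.26a)

# Out-of-plane wave-vector components, Eq. (1.26b), and the corresponding
# 1/e field penetration ("skin") depths of Eq. (1.25).
kz_metal = np.sqrt(eps_metal * k0**2 - kx**2 + 0j)
kz_diel = np.sqrt(eps_diel * k0**2 - kx**2 + 0j)
print("SPP wavelength (nm):", 2 * np.pi / kx.real * 1e9)
print("penetration into metal (nm):", 1e9 / abs(kz_metal))
print("penetration into dielectric (nm):", 1e9 / abs(kz_diel))
```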

1.4.2 Current Designs

While advances in fabrication and simulation capabilities have allowed the operating frequency of optical metamaterials to increase over the past few decades, it is interesting to note that many of the most prominently studied designs within the field continue to be variations on structures adopted from radio and microwave frequency antenna design. Such structures include bow tie antennas [38, 44], dipole antennas [41, 47, 55], fishnet structures [43, 80], and, perhaps the best example of this, the split-ring resonator (SRR).Footnote 5 These four structures are shown in Fig. 1.8. With structures such as the dipole antenna, the individual resonators are simple enough that the resonance can be calculated using either full-field electromagnetic simulations, or obtained analytically using a basic LC circuit model; however, we see from the literature that variations in the constituent materials, geometrical parameters, host medium, and three-dimensional array layout quickly increase the complexity of the design to the point where full-field electromagnetic simulations are required.

Fig. 1.8 Some of the most common individual metamaterial resonators including a bow tie antenna (a), a dipole antenna (b), a split ring resonator (c), and fishnet metamaterials (d)

In a typical study, a combination of fabricated samples and full-field electromagnetic simulations is used to sweep through a few of the critical design parameters and analyze how the resulting resonator response is affected. This may then be followed by highlighting an optimized structure that best takes advantage of the resonance under consideration. When the number of parameters under consideration is small, and the question is “how does each design variant change the overall metamaterial response,” this is certainly a valuable and viable approach; however, as the primary focus shifts to optimizing resonances for a given application and the number of parameters increases, it quickly becomes apparent that, from a time standpoint, this exhaustive approach is no longer feasible. At this point in the design process, we arrive at the central question of this book:

What is the most accurate and efficient way to tailor the broadband optical properties of a metamaterial to have predetermined responses at predetermined wavelengths?

Throughout the rest of the book, we address one answer to this question. By combining numerical optimization methods with full-field electromagnetic simulations, we are able to explore high-dimensional design spaces, orders of magnitude faster than performing traditional parameter sweeps. Using this approach, the researcher determines the design parameters to be varied, along with the range of interest for each parameter. The optimization routine then steps through a simplex of test points. For each point, the program executes a function call by sending the metamaterial design to an electromagnetic solver, and then extracts the relevant figure(s) of merit. The figure(s) of merit are then combined based on a user defined “cost function” or “objective function” to rank the metamaterial design with respect to all other designs.
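In outline, the loop described above is quite compact. The sketch below shows its general structure, with a placeholder function standing in for the full-field electromagnetic solver and a simple figure-of-merit-based cost function; the parameter names, bounds, and surrogate response are illustrative assumptions and are not tied to any particular commercial tool or to the specific optimizers discussed in later chapters.

```python
import numpy as np
from scipy.optimize import minimize

def run_em_solver(design):
    """Placeholder for the full-field EM solver.  In practice this would
    launch an FDTD or FEM simulation of the candidate metamaterial and
    post-process the fields; here an analytic stand-in is used instead."""
    width, gap, thickness = design                  # illustrative parameters (nm)
    reflectance = 0.5 + 0.4 * np.cos(width / 40.0) * np.exp(-gap / 200.0)
    return {"R_at_target": reflectance, "thickness": thickness}

def cost_function(design):
    """User-defined objective: minimize reflectance at the target wavelength
    while lightly penalizing thick structures."""
    fom = run_em_solver(design)
    return fom["R_at_target"] + 1e-4 * fom["thickness"]

x0 = np.array([120.0, 60.0, 40.0])                  # initial guess (nm)
bounds = [(50, 300), (20, 200), (20, 100)]          # designer-chosen ranges
# Nelder-Mead is a simple gradient-free simplex method; bounds support
# requires a recent SciPy release.
result = minimize(cost_function, x0, method="Nelder-Mead", bounds=bounds)
print(result.x, result.fun)
```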

The range of optimization routines that can be used for this approach spans the entire spectrum. Surrogate optimization methods, such as Curiosity Driven Optimization, choose test points in an effort to generate a maximally predictive, minimally complex model of the response of every possible geometrical variation within the specified design space (see Chap. 3). Gradient-free optimization techniques, such as Mesh Adaptive Direct Search algorithms, are extremely robust in terms of their ability to survey non-smooth parameter spaces and, based on specified convergence criteria, can do a remarkable job of finding globally optimal designs (see Chap. 4). Evolutionary algorithms, such as Particle Swarm Optimization and the Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES), rely on evaluating sets of test points and, based on the results, permuting the sets to generate different, and hopefully better, sets of geometrical solutions (see Chap. 5). In much the same way that genetic algorithms mutate their solution sets to develop new and better ones, new optimization methods are always being developed. Conjugate gradient methods are being combined with objective-first designs, which start with the desired electromagnetic fields and work backwards to calculate the required dielectric distribution (see Chap. 6). Level Set Methods, computational techniques traditionally applied to fluid dynamics, have already shown promise for designing photonic crystals and are now being explored for metamaterial applications (see Chap. 7). Finally, when all else fails, the Black Box Optimization Benchmarking community has established a yearly workshop to assess the performance of newly developed optimizers and to understand their strengths and weaknesses, and it is a constant source of new and different ideas [1].

Finally, while the techniques mentioned in the previous paragraph summarize the avenues of metamaterial design optimization covered in this text, everything here, as well as most work in the literature, has focused on selecting a specific material for the resonator design and then using geometrical permutations to obtain optimized or novel device performance. While this is certainly a rich field of study, one can imagine other avenues by which new metamaterial designs can be achieved. One such avenue, which is receiving increased attention, is described in Sect. 1.4.3.

1.4.3 Future Designs

Throughout the history of optical metamaterials, gold, silver, and copper have been the dominant materials used. This is in large part because in these metals, the free electrons necessary to support plasmon resonances are present in high enough concentrations to resonate at near-infrared, visible, and ultraviolet frequencies. Unfortunately, the same resonances that give rise to these exotic optical properties also introduce high losses and limit the overall performance of devices. This limitation of traditional plasmonic materials has created an opportunity for alternative plasmonic materials, which also provide additional design degrees of freedom through the tuning of their resonant frequencies [7].

In recent years, a variety of material sets have been proposed as alternative plasmonic materials, including doped semiconductors [30, 52, 77], intermetallics [6], transparent conducting oxides [23, 53, 83], transition metal nitrides [53], and graphene [32]. One material set in particular, Transparent Conducting Oxides (TCOs), has shown significant tunability across the near-infrared spectrum by varying the concentration of oxygen vacancies and interstitial metal dopants introduced into the films during deposition. These materials, including aluminum zinc oxide, indium zinc oxide, and indium tin oxide, have primarily been used as components in touch screen displays; however, their low losses (roughly five times lower than silver) [51, 54], tunability, and compatibility with standard fabrication processes have resulted in increasing attention from the plasmonics and metamaterials communities. From a design and optimization standpoint, they offer another interesting benefit. From Sect. 1.4.1.1 we know that the Drude dielectric constant is given by:

$$\begin{aligned} \varepsilon =& 1 - \frac{\omega^{2}_{p}}{\omega^{2} + i\omega/\tau}, \\ \omega^{2}_{p} =& \frac{ne^{2}}{\varepsilon _{\infty}m^{*}}. \end{aligned}$$

TCOs, such as indium tin oxide or indium zinc oxide, can typically be doped to have carrier concentrations between \(10^{19}\) and \(10^{21}\ \mathrm{cm}^{-3}\). Based on this model, Fig. 1.9 shows that by adjusting the carrier concentration within the material during deposition, we can tune the plasma frequency (ε = 0) across the near-infrared spectrum.
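Given the Drude expressions above, the crossover wavelength where ε passes through zero follows directly from the carrier concentration. The short calculation below sweeps the quoted doping range using assumed, ITO-like effective-mass and background-permittivity values; it neglects losses and is meant only to show the trend plotted in Fig. 1.9.

```python
import numpy as np

# Crossover wavelength (where Re[eps] ~ 0) versus carrier concentration,
# using the screened Drude plasma frequency.  The effective mass and
# background permittivity are assumed, ITO-like literature values.
e = 1.602e-19              # C
eps0 = 8.854e-12           # F/m
m_eff = 0.35 * 9.109e-31   # kg (assumed effective mass)
eps_inf = 3.9              # assumed background permittivity
c = 2.998e8                # m/s

for n_cm3 in (1e19, 1e20, 1e21):                  # doping range quoted above
    n = n_cm3 * 1e6                               # cm^-3  ->  m^-3
    omega_c = np.sqrt(n * e**2 / (eps0 * eps_inf * m_eff))
    print(f"n = {n_cm3:.0e} cm^-3 -> crossover near {2 * np.pi * c / omega_c * 1e6:.1f} um")
```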

Fig. 1.9 Permittivity dispersion modified by a change in the carrier concentration. As the carrier density (per cubic centimeter) increases, the plasma frequency (ε = 0) shifts toward visible frequencies, and the dispersion becomes substantially different in that regime. Reprinted with permission from E. Feigenbaum et al., “Unity-Order Index Change in Transparent Conducting Oxides at Visible Frequencies,” Nano Letters 10, 2111–2116 (2010). Copyright 2010 American Chemical Society

To date, virtually all optimized metamaterial design has focused on parametrically tuning the topology of the metamaterial unit cell, with a given material and its preset electronic and optical properties chosen in a binary manner. With the introduction of TCOs as alternative plasmonic materials for metamaterial design, we can now include the resonant frequencies of the material itself as another design parameter to be optimized. This can be taken one step further by considering metamaterial designs where the doping concentration, and the resulting plasma frequency, are shifted as a function of resonator thickness. These additional design degrees of freedom present an interesting opportunity for future metamaterial designs, and are left as an exercise for the reader.