
1 Introduction

A fractal antenna is one whose geometry has been shaped in a fractal fashion, either by bending or shaping a volume or by introducing holes. Such antennas are based on fractal shapes such as the Sierpinski triangle, Mandelbrot tree, Koch curve, and Koch island [1]. There has been considerable recent interest in the possibility of developing new types of antennas that employ fractal rather than Euclidean geometric concepts in their design. The Sierpinski gasket is the most widely explored fractal antenna geometry. Other geometries used for fractal antennas include the Sierpinski carpet, Hilbert curve, Koch curve, and crown square antenna [2].

A fractal antenna based on a rectangular base shape is proposed in this work. The base geometry, or zeroth iteration, is a rectangle with side lengths of 39.3 and 48.4 mm, as shown in Fig. 1a. To obtain the first iteration geometry shown in Fig. 1b, an ellipse is cut from the base shape of Fig. 1a. The primary axis radius of the ellipse is 16 mm, and the secondary axis radius is 22.62 mm. A rectangle is then inserted into the area from which the ellipse was cut, such that all four corners of the inserted rectangle touch the boundary of the elliptical cut. The side lengths of the inserted rectangle are 24.18 and 29.64 mm. The dimensions of the ellipse and rectangle are chosen so that the inserted rectangle is 60 % of the size of the base rectangle. To obtain the second iteration geometry shown in Fig. 1c, the same procedure is applied to the first iteration geometry, i.e., an ellipse is cut and a rectangle is inserted. The primary axis radius of this ellipse is 10 mm, and the secondary axis radius is 12.92 mm. The side lengths of the inserted rectangle are 14.5 and 17.78 mm, again chosen so that the inserted rectangle is 60 % of the size of the first iteration rectangle. The substrate has a height of 3.175 mm, a dielectric constant of 2.2, and a loss tangent of 0.0009.

Fig. 1 a Base geometry (zeroth iteration), b first iteration geometry, c second iteration geometry
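As a quick check on the construction described above, the following Python sketch verifies that the stated dimensions satisfy the corners-on-the-ellipse condition and reports the linear scale factor of each inserted rectangle relative to its parent (roughly 0.6 per iteration). All values are taken from the text; the centred-rectangle corner condition is an assumption drawn from the statement that all four corners meet the ellipse boundary.

    # Ellipse semi-axes (a, b) and inserted rectangle sides (w, h) for each
    # iteration, together with the parent rectangle the 60 % rule refers to.
    # All values in mm, copied from the text above.
    iterations = [
        {"a": 16.0, "b": 22.62, "w": 24.18, "h": 29.64, "parent": (39.3, 48.4)},    # first iteration
        {"a": 10.0, "b": 12.92, "w": 14.5,  "h": 17.78, "parent": (24.18, 29.64)},  # second iteration
    ]

    for k, it in enumerate(iterations, start=1):
        # Corners on the ellipse: (w/2a)^2 + (h/2b)^2 should be ~1 for a centred rectangle
        corner = (it["w"] / (2 * it["a"])) ** 2 + (it["h"] / (2 * it["b"])) ** 2
        # Linear scale relative to the parent rectangle (text: roughly 60 %)
        scale_w = it["w"] / it["parent"][0]
        scale_h = it["h"] / it["parent"][1]
        print(f"iteration {k}: corner condition = {corner:.3f}, "
              f"scale = {scale_w:.2f} x {scale_h:.2f}")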

In antenna design, the feeding technique is very important, as it ensures that the antenna structure operates at full transmission power. Designing a feed becomes more difficult at high frequencies. One of the most common feeding techniques is the coaxial probe feed. The location of the feed (i.e., the feed point) strongly affects antenna performance: the feed point must be placed at the point on the patch where the input impedance is 50 ohms at the resonant frequency. This is not easy to achieve, especially for small antennas, and the problem is further complicated for fractal antennas because of the complex geometry of the different iterations.

In this chapter, a novel method of finding the feed point using a hybrid GRNN-GA model is proposed. The following section describes GRNN, GA, and the proposed hybrid algorithm. The results are given in Sect. 3, and the work is concluded in Sect. 4.

2 Hybrid GRNN-GA Model for Feed Point Calculation

2.1 Generalized Regression Neural Networks

The back-propagation neural network (BPNN) is a widely used neural network model and has been applied successfully in a broad range of areas. However, BPNN generally has slow convergence, and there is no guarantee that the absolute minimum will be reached. This disadvantage can be overcome by using generalized regression neural networks (GRNN).

Fig. 2 Generalized regression neural network model

The GRNN was first proposed by Specht in 1991; it is a feed-forward neural network model based on nonlinear regression theory. The network structure is shown in Fig. 2. A GRNN consists of a radial basis function layer and a linear layer. The transfer function of the hidden layer is a radial basis function: each hidden node uses a Gaussian function, a non-negative, nonlinear, locally distributed function with radially symmetric attenuation about its center point, so each node responds locally to the input signal. GRNN employs a smoothing factor as the parameter of the learning phase; a single smoothing factor is selected to optimize the transfer function for all nodes. To reduce computational time, GRNN performs one-pass training through the network [3].
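For concreteness, the minimal Python sketch below (not the original work, which used MATLAB) shows the GRNN estimator in its simplest form: the training samples themselves act as the pattern nodes, each prediction is a Gaussian-kernel-weighted average of the stored training targets, and the smoothing factor sigma is the only free parameter. The feed coordinates and return loss values in the usage lines are made-up placeholders.

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.1):
        """GRNN estimate: Gaussian-weighted average of stored training targets;
        sigma is the smoothing factor (spread)."""
        X_train = np.asarray(X_train, dtype=float)
        y_train = np.asarray(y_train, dtype=float)
        preds = []
        for x in np.atleast_2d(X_query):
            d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to pattern nodes
            w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
            preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
        return np.array(preds)

    # Toy usage: feed coordinates (x, y) in mm -> return loss in dB (placeholder values)
    X = [[2.0, 3.0], [4.0, 1.0], [6.0, 5.0]]
    s11 = [-8.0, -15.0, -11.0]
    print(grnn_predict(X, s11, [[4.5, 1.5]], sigma=1.0))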

2.2 Genetic Algorithms

Genetic algorithms belong to a class of probabilistic methods called evolutionary algorithms, based on the principles of selection and mutation. The GA was introduced by J. Holland and is based on Darwin's theory of natural evolution. It is a population-based algorithm and finds application in various engineering problems. A simple GA usually consists of the following operations: selection, crossover, mutation, and replacement. First, an initial population composed of a group of chromosomes is generated randomly; these chromosomes represent the problem's variables. The fitness values of all the chromosomes are evaluated by calculating the objective function on their decoded form. A particular group of chromosomes is selected from the population to generate offspring through the defined genetic operations of crossover and mutation. The fitness of the offspring is evaluated in the same fashion as that of their parents. The chromosomes in the current population are then replaced by their offspring according to a chosen replacement strategy. This GA cycle is repeated until a desired termination criterion is reached (for example, a predefined number of generations has been produced). If all goes well during this process of simulated evolution, the best chromosomes in the final population represent a highly evolved solution to the problem. The mutation operation is employed to reduce the chance of becoming trapped in local minima. GA has been applied to a large number of optimization problems in several domains, such as telecommunication, routing, and scheduling, and has proved efficient at obtaining good solutions. It has also been used extensively for a variety of antenna design problems during the last decade [4-6].
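The following bare-bones Python sketch illustrates the GA cycle described above (selection, crossover, mutation, replacement) for a real-valued chromosome. It is an illustrative sketch, not the authors' MATLAB implementation; the particular operator choices (tournament selection, blend crossover, Gaussian mutation, generational replacement) are assumptions.

    import random

    def run_ga(fitness, bounds, pop_size=50, generations=100,
               crossover_prob=0.8, mutation_prob=0.1):
        """Minimise `fitness` over a box defined by `bounds` = [(lo, hi), ...]."""
        lo, hi = zip(*bounds)
        dim = len(bounds)
        # Random initial population of real-coded chromosomes
        pop = [[random.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(pop_size)]

        def tournament():
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b

        for _ in range(generations):
            offspring = []
            while len(offspring) < pop_size:
                p1, p2 = tournament(), tournament()           # selection
                if random.random() < crossover_prob:          # blend crossover
                    alpha = random.random()
                    child = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
                else:
                    child = list(p1)
                for d in range(dim):                          # Gaussian mutation
                    if random.random() < mutation_prob:
                        child[d] += random.gauss(0, 0.1 * (hi[d] - lo[d]))
                        child[d] = min(max(child[d], lo[d]), hi[d])
                offspring.append(child)
            pop = offspring                                   # generational replacement
        return min(pop, key=fitness)

    # Toy usage: minimise a simple bowl-shaped function over [-5, 5] x [-5, 5]
    print(run_ga(lambda v: v[0] ** 2 + v[1] ** 2, [(-5, 5), (-5, 5)]))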

2.3 Proposed Hybrid Algorithm

The genetic algorithm (GA) requires an objective function; without one, the optimization has no meaning. For fractal antennas, however, no closed-form mathematical formulation is available for finding the optimum feed location. Thus, a novel method of objective function formulation is presented, in which a generalized regression neural network is used as the fitness function. This technique can be applied wherever the objective function is difficult to formulate or is otherwise unavailable in a usable form. The procedure adopted to find the optimum feed location of the proposed fractal antenna is given below.

Fig. 3 Training model of GRNN

  • Data Set Generation: The data set for training the GRNN has been prepared using IE3D software. The antenna is simulated for different feed locations \((x_{i}, y_{i})\), and the return loss at each feed location is taken as the output. The center of the antenna is taken as (0, 0) in all cases.

  • Training the ANN: The above data set has been used to train the GRNN in MATLAB. A sufficient number of training samples has been used to train the network. Fig. 3 shows the training model of the GRNN.

  • The genetic algorithm optimization technique has been implemented in MATLAB, and the trained GRNN has been used as the objective function for the GA. The GA minimizes this objective function and gives as output the feed location at which the return loss is minimum; a minimal sketch of this step is given after the list.

  • The optimum feed points given by the GA have been simulated using IE3D software, and the return loss \((S_{11})\) is found. These simulated values are compared with the hybrid model results in order to check the accuracy of the method.
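A minimal sketch of the hybrid step, reusing the grnn_predict and run_ga sketches given in Sects. 2.1 and 2.2: the trained GRNN serves as the GA objective, and the GA searches the patch area for the feed point with the minimum predicted return loss. The (feed point, return loss) training pairs and the search bounds (half the base rectangle side lengths, centred at (0, 0)) are placeholder assumptions, not data from the paper.

    # Assumes grnn_predict (Sect. 2.1 sketch) and run_ga (Sect. 2.2 sketch) are defined.
    # Placeholder training data; in the actual work these pairs come from IE3D simulations.
    feed_points = [[-10.0, -5.0], [0.0, 0.0], [5.0, 8.0], [12.0, -3.0]]   # (x, y) in mm
    return_loss = [-9.5, -6.0, -14.0, -11.0]                              # S11 in dB

    # Trained GRNN acts as the GA objective: predicted S11 for a candidate feed point
    def objective(xy):
        return grnn_predict(feed_points, return_loss, [xy], sigma=0.1)[0]

    # GA searches the patch area (bounds assumed to span the base rectangle, centred at (0, 0))
    best_feed = run_ga(objective, bounds=[(-19.65, 19.65), (-24.2, 24.2)],
                       pop_size=50, generations=500)
    print("optimum feed point:", best_feed, "predicted S11:", objective(best_feed))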

3 Results and Discussion

Three different models have been trained, one for each geometry. Training a GRNN requires setting the spread constant, which is taken as 0.1 for all three models. These trained GRNN models have then been used as objective functions for the genetic algorithm. The GA parameters are as follows: the population size is 50 and the crossover probability is 0.8. The GA is run for 500 iterations for each model. The optimum feed locations provided by this hybrid GRNN-GA model, along with the minimized return loss, are given in Table 1.

Table 1 Results of hybrid GRNN-GA model
Table 2 Comparison of hybrid model and simulation results

The feed locations obtained from the hybrid algorithm are simulated using IE3D software, and the return loss \((S_{11})\) for all geometries is found. The simulated return loss values are almost the same as those given by the hybrid algorithm for the base geometry and the second iteration geometry, and better than the hybrid model value for the first iteration geometry. The comparison of the hybrid model results and the simulation results is given in Table 2, which shows a reasonable match.

4 Conclusion

The optimum feed points of the three geometries are different. As the iteration order of the fractal antenna increases, the optimum feed location changes, thereby affecting the performance of the antenna. A novel approach based on a GRNN-GA hybrid algorithm has been implemented successfully to locate the optimum feed point for each iteration of the fractal geometry. The results obtained using the GRNN-GA hybrid algorithm are in good agreement with the simulation results obtained using IE3D software. This approach can be applied to any other geometry.