1 Introduction

The preservation of image quality, identity, and integrity has become a major concern in the field of medical information systems. For instance, on December 23, 2010, the electronic medical record system of the North Bronx Healthcare Network in New York failed, severely affecting nearly 1,700,000 patients. In a medical environment, the maintenance of patient data is crucial, and the data must be kept secure against unwanted and malicious attacks [1]. To overcome such shortcomings, it is of prime importance to protect the data with medical image encryption that incurs minimal or no loss of information [2,3,4].

Cognitive Radio (CR) enables utilization of the licensed spectrum when it is not in use. Secondary Users (SUs), also referred to as CR users, access spectrum bands that have been allocated to licensed users [5]. Before accessing a band, CR users sense and detect the status of the primary user [6,7,8].

In addition, DICOM images are chosen as the cover media and termed cover images [9,10,11]. The key messages are embedded in these cover images, and the result is called a watermarked or embedded image [12]; an image that conceals a secret message inside a cover image in this way is known as a stego-image. The hiding itself is a Least Significant Bit (LSB) technique that substitutes the LSBs of the original image with the patient information (text) [13, 14]. The idea of LSB embedding is straightforward and exploits the fact that the precision of most image formats is much higher than what average human vision can perceive [15]. Hence, an altered image with minor variations in its colors is visually indistinguishable from the original. Using a highly capable fragile steganographic technique [16], namely the Discrete Gould Transform, even smaller changes to the pixels can be generated by tuning the coefficients [17]. Because the embedded message is easily destroyed by any manipulation, tampering visibly alters the image; this fragility is precisely what makes the technique valuable for image authentication and tamper proofing [18].

The name Latin square was coined by the mathematician Leonhard Euler, who used Latin characters as its symbols [19]. The Latin Square Image Cipher provides a method of integrating probabilistic encryption into image encoding by embedding random noise within the LSB plane [20]. It exhibits a large key space, high key sensitivity, excellent confusion and diffusion properties, and robustness against channel noise [21].

For transmission, we consider the Additive White Gaussian Noise (AWGN) channel model, which linearly adds broadband (white) noise with a constant spectral density and Gaussian-distributed amplitude to the signal [22]. The merit of this model is its simplicity: it excludes fading, frequency selectivity, interference, and nonlinear distortion. Besides the AWGN model, the Rayleigh fading model is also effective. It is specialized for random fading when there is no line-of-sight path and can be viewed as a special case of Rician fading [23]. Here, the amplitude gain follows a Rayleigh distribution, since many scatterers can be expected in the vicinity of the channel. It is worth mentioning that the Rayleigh fading model is an effective candidate for heavily built-up areas with no line of sight between the transmitter and the receiver.

The objective of this paper is to verify the quality and robustness of the patient information and the DICOM image when they are passed over a medium by sensing the available spectrum bands through CR [24]. The patient data is secured using the MD5 hash function [16, 25, 26], and the DICOM images are encrypted with the DGT by selecting the Region of Interest (ROI) to maintain secrecy [27]. The result is then processed by the LSIC algorithm [28] and, after modulation, transferred over different channels, namely AWGN, Rayleigh, and Rician [29]. The encrypted secure image is transmitted to those who need it: the concerned doctor and the DMS. Future wireless communications will require increasingly efficient use of the licensed frequency bands. Section 2 provides the methodology of the proposed system, Sect. 3 the results and discussion, Sect. 4 a performance comparison with existing schemes, and Sect. 5 the conclusion.

2 Methodology

2.1 Cognitive Radio Technology

The simplest and most efficient detector, an energy detector with a threshold-based spectrum analyzer, is used. A band-pass filter constrains the received signal, which is then digitized; the samples are squared and accumulated, and the result is compared with a threshold to decide on the presence of the primary user. The resulting test statistic follows a Chi-square distribution [30,31,32], which for large sample sizes is approximated by a Gaussian distribution:

$$Y \sim \left\{ {\begin{array}{*{20}l} {N \left( {n\sigma_{n}^{2} ,\; 2n\sigma_{n}^{4} } \right)} \hfill & {H0} \hfill \\ {N \left( {n(\sigma_{n}^{2} + \sigma_{s}^{2} ),\; 2n(\sigma_{n}^{2} + \sigma_{s}^{2} )^{2} } \right)} \hfill & {H1} \hfill \\ \end{array} } \right.$$

where n is the number of samples, σn² is the variance of the noise, and σs² is the variance of the received signal s(t). The threshold λ is obtained as

$$\lambda = \sqrt {4t_{s} W\sigma_{n}^{4} } Q^{ - 1} \left( {P_{f} } \right) + 2t_{s} W\sigma_{n}^{2}$$

where ts is the observation time and W is the bandwidth of the spectrum. By the Nyquist sampling theorem the minimum sampling rate is 2W, so n can be represented as 2tsW.
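As an illustrative sketch (not the authors' implementation), the threshold formula above can be computed directly, using the standard-library normal distribution for the inverse Q-function, `Q⁻¹(p) = Φ⁻¹(1 − p)`; the parameter values below are arbitrary examples:

```python
from statistics import NormalDist

def energy_threshold(ts, W, sigma_n2, pf):
    """Energy-detector threshold lambda = sqrt(4*ts*W*sigma_n^4) * Q^{-1}(Pf) + 2*ts*W*sigma_n^2.

    ts: observation time (s), W: bandwidth (Hz),
    sigma_n2: noise variance, pf: target false-alarm probability.
    """
    q_inv = NormalDist().inv_cdf(1 - pf)          # inverse Gaussian Q-function
    return (4 * ts * W * sigma_n2 ** 2) ** 0.5 * q_inv + 2 * ts * W * sigma_n2

# Example: 1 ms observation over a 1 MHz band, unit noise variance, Pf = 0.05
lam = energy_threshold(1e-3, 1e6, 1.0, 0.05)
```

With these numbers n = 2tsW = 2000 samples, so the threshold lands slightly above the noise-only mean of nσn² = 2000.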

Upon analyzing the signal spectrum using the RX1 antenna of the NI USRP, with a bandwidth restriction of 88 to 100 MHz and a gain of 20 dB, the presence of the primary user is identified at 98 MHz. Apart from the primary user signal at 98 MHz, the rest of the spectrum is considered spectrum holes. To avoid interference with the primary user signal, a suitable free frequency is chosen, say 90 MHz.

2.2 Data Hiding Technique

In order to maintain the security and privacy of the medical image and patient information, a data hiding technique and an encryption algorithm are used [33]. The data hiding method used here is the traditional LSB substitution. When patient information is to be hidden in the cover image, the text data is converted to binary bits, which replace the LSB of each pixel of the cover image, producing an embedded image as output. Sacrificing the LSB of each pixel does not perceptibly change the image. This is an efficient technique for inserting data into a cover image, yielding a good privacy pattern for exchanging data through the medium [34,35,36,37]. LSB embedding offers good security, and using distinct carriers and keys for encryption and decryption opens a path for further improvement.
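The LSB substitution described above can be sketched as follows; this is a minimal illustration over a flat list of 8-bit pixels, not the authors' exact code, and the sample payload "Age:45" is hypothetical:

```python
def embed_lsb(pixels, text):
    """Hide `text` in the least significant bits of `pixels`, one bit per pixel."""
    # MSB-first bit stream of the ASCII payload
    bits = [(byte >> i) & 1 for byte in text.encode("ascii") for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the payload")
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b      # replace only the LSB
    return stego

def extract_lsb(pixels, n_chars):
    """Recover n_chars ASCII characters from the pixel LSBs."""
    bits = [p & 1 for p in pixels[:8 * n_chars]]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode("ascii")

cover = [120, 121, 122, 123] * 16             # 64 dummy grayscale pixels
stego = embed_lsb(cover, "Age:45")
```

Note that each stego pixel differs from its cover pixel by at most 1 gray level, which is why the alteration is imperceptible.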

2.3 Encryption Concatenation Unit

2.3.1 Discrete Gould Transform

The transform has Gould coefficients defined by the differences between neighboring pixels. A small change in the Gould coefficients produces a large scrambling of pixel values [38]. This provides authentication and tamper proofing of the embedded image even if the message is hacked or altered by an unauthorized user, which makes the transform suitable for fragile watermarking. Any attempt to tamper with the secret message destroys it, and the loss of this secure data reveals that the message has been altered. The transform can therefore be used wherever authentication and tamper proofing of information are needed, and it plays a vital role in present-day technology [39].

2.3.2 Latin Square Image Cipher

LSIC is a permutation-substitution network: an encryption technique in which a key is used to encrypt an image, the result being the cipher image of the Latin Square Image Cipher algorithm [40]. It has many desirable characteristics, such as a large key space, resistance to attacks, high key sensitivity, and good diffusion and confusion properties. Analyses and simulations have shown the encryption algorithm to be both secure and efficient; LSIC is highly resistant to various types of attacks and yields good metric values [41]. All results confirm that LSIC is a good encryption algorithm for digital images [42,43,44,45].
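As a simplified stand-in for the full LSIC of [40], the sketch below derives a Latin square from a key and applies the two XOR substitution layers used later in Steps 10–12 of the encryption algorithm; the key-to-square construction (a keyed permutation cyclically shifted per row) is an assumption for illustration, not the generator of the original cipher:

```python
import hashlib

def latin_square(key: bytes, n: int = 256):
    """Build an n x n Latin square: a key-dependent permutation of 0..n-1,
    cyclically shifted in each row (simplified stand-in for the LSIC generator)."""
    seed = hashlib.sha256(key).digest()
    perm = sorted(range(n), key=lambda i: seed[i % len(seed)] * n + i)
    return [[perm[(r + c) % n] for c in range(n)] for r in range(n)]

def xor_layer(img, square):
    """XOR every pixel with the corresponding Latin-square entry."""
    return [[p ^ square[r][c] for c, p in enumerate(row)]
            for r, row in enumerate(img)]

n = 8                                              # tiny image for illustration
img = [[(r * n + c) % 256 for c in range(n)] for r in range(n)]
L1 = latin_square(b"key1", n)
L2 = latin_square(b"key2", n)
cipher = xor_layer(xor_layer(img, L1), L2)         # Steps 11-12: XOR by key1, key2
plain = xor_layer(xor_layer(cipher, L2), L1)       # XOR is self-inverse
```

Because each row and column of the square contains every symbol exactly once, the XOR mask is balanced across the image.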

2.4 Modulation and Transmission Over Different Channels

When a signal is unsuitable for direct transmission through the channel, or does not fit within the available frequency spectrum, modulation must take place. The signal is then transmitted over the channel, and the received signal is demodulated to recover the original. The technique used here is Quadrature Amplitude Modulation (QAM), which transmits two carrier signals in quadrature, each carrying part of the (possibly encrypted) symbol stream. This modulation technique is widely used in telecommunications; it offers high spectral efficiency and low noise susceptibility, at the cost of stringent linearity requirements [46,47,48,49]. During demodulation, the received symbols are mapped back to the originally transmitted bits.
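The symbol mapping at the heart of QAM can be illustrated with the smallest useful constellation, Gray-coded 4-QAM (two bits per symbol, one on each quadrature axis); this is a generic sketch, not the exact constellation or order used in the paper:

```python
# Gray-coded 4-QAM: adjacent constellation points differ in exactly one bit
QAM4 = {(0, 0): (1, 1), (0, 1): (-1, 1), (1, 1): (-1, -1), (1, 0): (1, -1)}

def qam4_modulate(bits):
    """Map an even-length bit stream to a list of (I, Q) symbols."""
    return [QAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def qam4_demodulate(symbols):
    """Nearest-neighbour decision: recover the bit stream from (I, Q) samples."""
    inv = {v: k for k, v in QAM4.items()}
    bits = []
    for i, q in symbols:
        point = (1 if i > 0 else -1, 1 if q > 0 else -1)  # slice by quadrant
        bits.extend(inv[point])
    return bits

tx = [1, 0, 0, 1, 1, 1, 0, 0]
syms = qam4_modulate(tx)
```

Gray coding ensures that the most likely symbol error (a decision into an adjacent quadrant) corrupts only a single bit.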

The modulated signal is passed to the transmission channel, where channel noise is added [50]. The channel should be modeled so that the link withstands varying weather conditions and the relative motion between source and destination is well accounted for.

2.4.1 AWGN Channel

The AWGN channel is widely used in telecommunications; it exhibits no interference, fading, or dispersion. The channel is modeled as additive, with noise samples drawn from a Gaussian distribution: it adds white Gaussian noise sample functions with zero mean and constant power spectral density.

2.4.2 Rayleigh Channel

The Rayleigh channel models a transmission medium in which the magnitude of the signal fades according to a Rayleigh distribution. Multipath reception of the signal is the main cause of Rayleigh fading.

2.4.3 Rician Channel

The Rician channel is a statistical model for radio communication. Unlike the Rayleigh channel, it models propagation with a dominant single path, i.e., a line-of-sight component, in addition to the scattered paths. The Rician distribution is formulated over the amplitude gain.

2.5 Spectrum Sensing

Step 1::

To access the spectrum bands, CR users need to sense the band to check for the presence of the primary user.

Step 2::

Then perform the Fast Fourier Transform and integration, and compare the result with the threshold to decide on the presence of the primary user:

$${\text{q}}\left( {\text{t}} \right) = \left\{ {\begin{array}{*{20}l} {n\left( {\text{t}} \right)} \hfill & {H0} \hfill \\ {r\left( {\text{t}} \right) + n\left( t \right)} \hfill & {H1} \hfill \\ \end{array} } \right.$$

where H0 refers to the absence of the primary user, H1 refers to its presence, r(t) denotes the signal waveform, and n(t) denotes zero-mean AWGN.

Step 3::

The detection probability Pd and the false alarm probability Pf can be expressed as

$$\left\{ {\begin{array}{*{20}l} {p_{d} \left( \lambda \right) = p_{r} [Y > \lambda \,|\, H1]} \hfill \\ {p_{f} \left( \lambda \right) = p_{r} [Y > \lambda \,|\, H0]} \hfill \\ \end{array} } \right.$$

where λ stands for the threshold. Pf should be kept as small as possible to avoid underutilization of transmission opportunities, while Pd should be kept as large as possible to protect the primary user from interference.
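These two probabilities can be checked empirically; the Monte Carlo sketch below models the signal under H1 as zero-mean Gaussian with variance σs² (a common simplifying assumption, not stated in the paper) and counts how often the accumulated energy Y exceeds λ under each hypothesis:

```python
import random

def detect_prob(lam, n, sigma_n2, sigma_s2, trials=400, seed=1):
    """Monte Carlo estimate of (Pd, Pf) for an n-sample energy detector."""
    rng = random.Random(seed)

    def energy(var):
        # Y = sum of n squared samples drawn with the given variance
        return sum(rng.gauss(0, var ** 0.5) ** 2 for _ in range(n))

    pf = sum(energy(sigma_n2) > lam for _ in range(trials)) / trials
    pd = sum(energy(sigma_n2 + sigma_s2) > lam for _ in range(trials)) / trials
    return pd, pf

# Threshold at the H0 mean plus two standard deviations (illustrative values)
n, s_n2, s_s2 = 200, 1.0, 1.0
lam = n * s_n2 + 2 * (2 * n * s_n2 ** 2) ** 0.5
pd, pf = detect_prob(lam, n, s_n2, s_s2)
```

With the threshold two standard deviations above the noise-only mean, Pf stays small while Pd remains near one, matching the trade-off described above.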

2.6 System Design

The implementation of the proposed system is fragmented into encryption and decryption phase along with wireless spectrum sensing environment. The proposed block diagram is given in Fig. 1.

Fig. 1
figure 1

System architecture of proposed system

2.6.1 Encryption Algorithm

Step 1::

Read the DICOM images of size 256 × 256.

Step 2::

Extract the ROI part from the input image.

Step 3::

Patient information such as sex, age, etc. is embedded into the ROI.

Step 4::

The patient data is converted into ASCII codes and then into binary.

Step 5::

Replace LSB of every pixel by the text binary bits until all the bits are embedded.

Step 6::

Encrypt the ROI part by DGT algorithm.

Step 7::

To perform the DGT, the matrix is obtained from the general equation

$${\text{g}}_{{{\text{p}},{\text{kn}}}} = \left( { - 1} \right)^{{{\text{k}} + {\text{n}}}} \left( {\begin{array}{*{20}c} {\text{p}} \\ {{\text{k}} - {\text{n}}} \\ \end{array} } \right)$$

where k, n = 0, 1, …, N − 1.

Step 8::

Substitute N = 2 and p = 2 to obtain the matrix, which is multiplied with each 2 × 2 block of the image.

Step 9::

Concatenate the encrypted ROI with the original image to form the embedded image.

Step 10::

To obtain the Latin Square Image Cipher of the resultant image, two keys of the size of the original image are used.

Step 11::

XOR all the values in the image by key1.

Step 12::

XOR all the values in the resultant matrix by key2.

Step 13::

The encrypted image thus obtained is modulated by QAM and transmitted via the free spectrum identified by the spectrum sensing technique.

Step 14::

The modulated signal is passed through AWGN channel, Rayleigh channel and Rician channel to prove the robustness of the proposed scheme.

Step 15::

The PDF of the Rayleigh channel is \(P_{R} \left( r \right) = \frac{2r}{\varOmega }e^{{ - r^{2} /\varOmega }}\), r ≥ 0.

Step 16::

The AWGN channel noise is defined by the Gaussian PDF \(f\left( n \right) = \frac{1}{{\sqrt {2\pi \sigma^{2} } }}e^{{ - \frac{{n^{2} }}{{2\sigma^{2} }}}}\).

Step 17::

The PDF of the Rician channel is \(f\left( x \right) = \frac{{2\left( {k + 1} \right)x}}{\varOmega }\exp \left( { - k - \frac{{\left( {k + 1} \right)x^{2} }}{\varOmega }} \right)I_{0} \left( {2\sqrt {\frac{{k\left( {k + 1} \right)}}{\varOmega }} x} \right)\).
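The matrix of Step 7 can be computed directly from the general DGT equation. The sketch below builds it for p = 2, N = 2 and applies it to one 2 × 2 block; the explicit inverse matrix and the sample block are illustrative additions, not taken from the paper:

```python
from math import comb

def gould_matrix(p, N):
    """DGT matrix g_{p,kn} = (-1)^{k+n} C(p, k-n), with C(p, m) = 0 for m < 0."""
    return [[(-1) ** (k + n) * (comb(p, k - n) if k >= n else 0)
             for n in range(N)] for k in range(N)]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

G = gould_matrix(2, 2)                 # for p = N = 2 this is [[1, 0], [-2, 1]]
block = [[10, 20], [30, 40]]           # one 2x2 image block (dummy values)
coeffs = matmul2(G, block)             # forward transform of the block
G_inv = [[1, 0], [2, 1]]               # inverse of G (determinant is 1)
restored = matmul2(G_inv, coeffs)
```

The second row of G takes differences of neighbouring rows (scaled by the binomial coefficient), which is why small coefficient changes scatter widely when inverted.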

3 Results and Discussion

In a wireless network, the CR users check for the presence of the primary user; if the user is absent, the encrypted bits are transmitted through the free spectrum hole. A watermarking technique has been employed to interleave the patient's information with the medical image. Further, to ensure the security of the text information, the MD5 hash function is used, and the Discrete Gould Transform (DGT) addresses tamper proofing and authentication. In addition, to secure the DICOM images, the Latin Square Image Cipher (LSIC) is applied, exhibiting excellent confusion and diffusion, high key sensitivity, and robustness.

The spectrum analyzed using the RX1 antenna of the USRP, with a gain of 15 dB and a frequency span of 88 to 100 MHz, is shown in Fig. 2. The primary user is identified at 94 MHz, and the other frequencies are considered spectrum holes. A frequency of 98 MHz is selected in order to avoid interference with the primary user.

Fig. 2
figure 2

Spectrum analyzer a block diagram of spectrum analyzer and b front panel of spectrum analyzer

Figure 3 shows the cipher-image pixel elements transformed into 16-bit binary information. This data is modulated by QAM and transmitted via the TX1 antenna of the NI USRP at a carrier frequency of 98 MHz.

Fig. 3
figure 3

Transmission of binary bits a block diagram of transmitted information bits and b front panel of transmitted information bits

At the receiver, the RX1 antenna of the NI USRP is tuned to 98 MHz, the frequency at which the information was transmitted. The signal is demodulated by QAM and the 16-bit binary information is retrieved, as shown in Fig. 4.

Fig. 4
figure 4

Reception of binary bits a block diagram of received information bits and b front panel of received information bits

For experimental analysis, CT and MRI images of size 256 × 256 and 512 × 512 are considered as shown in Fig. 5.

Fig. 5
figure 5

Test images a proposed image of size 256 × 256, b MR-1 of size 256 × 256, c CT-1 of size 256 × 256, d MR-2 of size 512 × 512, e CT-2 of size 512 × 512

3.1 Statistical Analysis

Statistical analysis is one way to prove the robustness of the proposed algorithm against numerous statistical attacks. It is carried out by computing the histogram of the encrypted image, Chi-square tests, and the correlation between adjacent pixels of the encrypted image in various directions.

3.1.1 Histogram Analysis

Histogram analysis is the graphical representation of the pixel distribution of an image. The randomness of the proposed algorithm is evidenced by a flat, uniform distribution of pixels in the encrypted image. In the obtained results, the histogram of the encrypted image is flat and differs completely from the histogram of the original image. Figure 6(a–c) shows the proposed image, its histogram, and the histogram of the encrypted image, respectively.

Fig. 6
figure 6

Histogram analysis a proposed image, b histogram of (a) and c histogram of cipher image of (a)

3.1.2 Chi Square Test

This statistical method tests the strength of the proposed algorithm by comparing a set of observed values against expected values. When the calculated Chi-square value is lower than the theoretical value for the given degrees of freedom, the encryption performs well and the pixel values are uniformly distributed. The Chi-square statistic is measured as \(\chi^{2} = \mathop \sum \nolimits_{i = 0}^{255} \frac{{\left( {x_{i} - y_{i} } \right)^{2} }}{{y_{i} }}\), where xi and yi are the observed and expected values and i indexes the gray levels. The test confirms that the redundancy of the plain image is removed and the histogram is uniform, as shown in Table 1.

Table 1 Histogram assessment based on chi square test analysis

The theoretical Chi-square value for 255 degrees of freedom is 293.25 at the 0.95 probability level. From Table 1, the measured Chi-square value for all the images lies below the theoretical value, which shows that the pixels in the encrypted images are uniformly distributed.
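The statistic can be computed from the cipher-image histogram as sketched below; the synthetic test images are illustrative, since the expected count per gray level for a uniform histogram is simply (number of pixels)/256:

```python
import random

def chi_square_hist(pixels, levels=256):
    """Chi-square uniformity statistic of an image histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    expected = len(pixels) / levels        # uniform expectation per gray level
    return sum((h - expected) ** 2 / expected for h in hist)

# A perfectly uniform 256x256 "cipher image" gives a statistic of 0;
# a random one should stay below the critical value 293.25 (df = 255, alpha = 0.05)
uniform = [v for v in range(256) for _ in range(256)]
stat_uniform = chi_square_hist(uniform)

rng = random.Random(0)
noisy = [rng.randrange(256) for _ in range(256 * 256)]
stat_noisy = chi_square_hist(noisy)
```

For a genuinely random image the statistic fluctuates around the degrees of freedom (255), well under the critical value.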

3.1.3 Histogram Deviation

The measure of deviation between the original and encrypted image can be evaluated by histogram deviation and it is given by the equation

$$B_{H} = \left( {\frac{{\frac{{a_{0} + a_{255} }}{2} + \mathop \sum \nolimits_{i = 1}^{254} a_{i} }}{M \times N}} \right)$$

where ai is the amplitude of the absolute-difference histogram at level i, and M × N is the dimension of the image considered.

3.1.4 Irregular Deviation

It measures how much deviation the encryption algorithm causes in the encrypted image relative to the original, and it is calculated as

$$D_{I} = \frac{{ \mathop \sum \nolimits_{i = 0}^{255} B_{D} \left( i \right)}}{M \times N}$$

where BD(i) = |B(i) − GB|, with B(i) the histogram of the encrypted image and GB its average (uniform) level.

3.1.5 Deviation from Ideality

It measures the quality of the encryption algorithm in terms of how far the histogram of the encrypted image deviates from an ideal (uniform) histogram, and it is evaluated as

$$I = \frac{{\mathop \sum \nolimits_{{c_{0} = 0}}^{255} \left| {B\left( {C_{I} } \right) - B\left( C \right)} \right|}}{M \times N}$$

where B(CI) is the ideal (uniform) histogram and B(C) is the histogram of the encrypted image.

The metrics defined above were calculated and tabulated in Table 2. From the table, the histogram deviation values are high, while the irregular deviation and deviation-from-ideality values are small, which demonstrates the robustness of the proposed encryption algorithm.

Table 2 Encryption evaluation metric analysis

3.1.6 Correlation Analysis

The correlation coefficient measures the quality of a least-squares fit to the original data. Correlation values of the cipher image should be very small to withstand various statistical attacks. The correlation analysis is tabulated in Table 3.

Table 3 Correlation analysis of various test images

The correlation analysis of the plain image and the encrypted image in the vertical, diagonal, and horizontal directions is shown in Fig. 7. Figure 7a–c shows the pixel distribution of the original image in the horizontal, vertical, and diagonal directions; Fig. 7d–f shows the corresponding distributions for the encrypted image. The pixels are concentrated in a narrow band for the original image and uniformly distributed for the encrypted image. The correlation between adjacent pixels is given by the equation

$$C_{ab} = \frac{{E\left[ {\left( {a - E\left( a \right)} \right)\left( {b - E\left( b \right)} \right)} \right]}}{{\sigma_{a} \sigma_{b} }}$$

where E(i) is the expected value of i and \(\sigma \left( i \right)\) is the standard deviation of i.

Fig. 7
figure 7

Pixel distribution a diagonal correlation of proposed image, b horizontal correlation of proposed image, c vertical correlation of proposed image, d diagonal correlation of encrypted image of (a), e horizontal correlation of encrypted image of (b) and f vertical correlation of the encrypted image of (c)
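The adjacent-pixel correlation defined above can be sketched directly; the smooth gradient image below is a hypothetical stand-in for a plain image, chosen because its neighbouring pixels are almost perfectly correlated:

```python
def correlation(a, b):
    """Pearson correlation C_ab = E[(a-E(a))(b-E(b))] / (sigma_a * sigma_b)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = (sum((x - ma) ** 2 for x in a) / n) ** 0.5
    sb = (sum((y - mb) ** 2 for y in b) / n) ** 0.5
    return cov / (sa * sb)

def adjacent_pairs_horizontal(img):
    """All pairs of horizontally adjacent pixels of a 2-D image."""
    a = [p for row in img for p in row[:-1]]
    b = [p for row in img for p in row[1:]]
    return a, b

# Smooth gradient image: each pixel's right neighbour is exactly one level higher
img = [[r + c for c in range(32)] for r in range(32)]
a, b = adjacent_pairs_horizontal(img)
r_plain = correlation(a, b)
```

Vertical and diagonal pairs follow the same pattern with different index offsets; for a good cipher image the same computation should yield a value near zero.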

3.1.7 Information Entropy

Global Shannon entropy is the metric to measure the randomness of information and it is given by the equation

$$S\left( m \right) = - \mathop \sum \limits_{i = 1}^{N} E\left( {m_{i} } \right)\log_{2} E\left( {m_{i} } \right)$$

where E(\(m_{i}\)) is the probability of occurrence of the symbol \(m_{i}\). For a random 8-bit image the maximum entropy is log2 N = 8 bits; if the entropy is appreciably lower, the image is predictable, which undermines the security of the proposed algorithm. Global entropy, however, fails to measure the true randomness of the encrypted image; local Shannon entropy overcomes this weakness by averaging the entropy of non-overlapping blocks. Table 4 shows the local and global Shannon entropies of the original and encrypted images. From the table, the local and global entropies of the cipher images are close to 8, which confirms the randomness of the proposed encryption scheme.

Table 4 Entropy analysis
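Both entropies can be sketched as below; the block size 1936 and the 30-block average follow common practice for local Shannon entropy and are an assumption, as the paper does not state its parameters:

```python
import random
from math import log2

def shannon_entropy(pixels, levels=256):
    """Global Shannon entropy S(m) = -sum E(m_i) * log2 E(m_i), in bits."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((h / n) * log2(h / n) for h in hist if h)

def local_entropy(pixels, block=1936, k=30, seed=0):
    """Local Shannon entropy: mean entropy of k non-overlapping blocks."""
    rng = random.Random(seed)
    starts = rng.sample(range(0, len(pixels) - block, block), k)
    return sum(shannon_entropy(pixels[s:s + block]) for s in starts) / k

# A 256x256 image with every gray level appearing equally often: exactly 8 bits
uniform = [v % 256 for v in range(256 * 256)]
h_global = shannon_entropy(uniform)
h_local = local_entropy(uniform)
```

The local value is slightly below 8 because a 1936-pixel block cannot hold every gray level an equal number of times, which is exactly the finer-grained behaviour the local measure is meant to expose.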

3.2 Differential Analysis

The most important metrics for the resistance of the proposed system to differential attacks are the Number of Pixel Changing Rate (NPCR) and the Unified Average Changing Intensity (UACI). They are measured between two encrypted images, one obtained from the original image and the other from the plain image with a single bit changed. The NPCR and UACI are calculated as

$$NPCR = \mathop \sum \limits_{x,y} \frac{{C\left( {x,y} \right)}}{M\times N} \times 100\%$$
$$UACI = \frac{1}{M\times N}\left[ {\mathop \sum \limits_{x,y} \frac{{\left| {D_{1} \left( {x,y} \right) - D_{2} \left( {x,y} \right)} \right|}}{255}} \right] \times 100\%$$

where C(x, y) is an array of the same size as the images D1 and D2, defined by

$$C\left( {x,y} \right) = \left\{ {\begin{array}{*{20}l} 0 \hfill & { {\text{if}}\,\,D_{1} \left( {x,y} \right) = D_{2} \left( {x,y} \right)} \hfill \\ 1 \hfill & { {\text{if}}\,\,D_{1} \left( {x,y} \right) \ne D_{2} \left( {x,y} \right)} \hfill \\ \end{array} } \right.$$
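A direct computation of both metrics can be sketched as follows; the two random matrices stand in for the pair of cipher images, since for a good cipher they should be statistically independent:

```python
import random

def npcr_uaci(d1, d2):
    """NPCR and UACI (both in percent) between two equal-size cipher images."""
    flat1 = [p for row in d1 for p in row]
    flat2 = [p for row in d2 for p in row]
    mn = len(flat1)
    npcr = 100 * sum(a != b for a, b in zip(flat1, flat2)) / mn
    uaci = 100 * sum(abs(a - b) for a, b in zip(flat1, flat2)) / (255 * mn)
    return npcr, uaci

rng = random.Random(42)
c1 = [[rng.randrange(256) for _ in range(64)] for _ in range(64)]
c2 = [[rng.randrange(256) for _ in range(64)] for _ in range(64)]
npcr, uaci = npcr_uaci(c1, c2)
```

For independent uniform images the expected values are about 99.61% (NPCR) and 33.46% (UACI), which is why measured values near these figures indicate a strong cipher.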

The measured values of NPCR and UACI for the various test images are listed in Tables 5 and 6. In Table 6 the measured UACI values are compared with the theoretical critical values; all the test images except CT-2 pass the UACI critical-value test.

Table 5 NPCR analysis of various test images
Table 6 UACI analysis of various test images

3.3 Channel Metric Analysis

To assess the quality of the recovered text and image, the Bit Error Rate (BER) is evaluated against the Signal-to-Noise Ratio (SNR). BER is defined as the ratio of the number of bits received in error to the total number of bits transmitted. Table 7 tabulates BER versus SNR for the encrypted images transmitted through the AWGN, Rayleigh, and Rician channels. From the table, the proposed encrypted image yields a better BER over the AWGN channel at various SNRs than over the Rayleigh and Rician channels.

Table 7 Calculation of BER versus SNR while transmitting encrypted image over various channels
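The BER-versus-SNR behaviour over an AWGN channel can be reproduced in miniature; the sketch below uses BPSK as a simpler stand-in for the paper's QAM chain, which is an assumption made purely for brevity:

```python
import random

def ber(tx_bits, rx_bits):
    """Bit Error Rate: errored bits divided by total bits."""
    return sum(a != b for a, b in zip(tx_bits, rx_bits)) / len(tx_bits)

def bpsk_awgn(bits, snr_db, seed=0):
    """Send bits as +/-1 symbols through an AWGN channel and hard-decide."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    sigma = (1 / (2 * snr)) ** 0.5           # noise std for unit symbol energy
    rx = [(1 if b else -1) + rng.gauss(0, sigma) for b in bits]
    return [1 if r > 0 else 0 for r in rx]

rng1 = random.Random(1)
tx = [rng1.randrange(2) for _ in range(10000)]
rx_low = bpsk_awgn(tx, 0)     # 0 dB SNR: theoretical BER ~ 7.9e-2
rx_high = bpsk_awgn(tx, 10)   # 10 dB SNR: errors become very rare
```

As expected, the error rate drops sharply as SNR increases, matching the trend reported in Table 7.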

3.4 Key Sensitivity Analysis

The key is the most critical component of any cryptosystem. It must be kept safe, and decryption should fail even when the key is changed only slightly; key sensitivity is therefore a measure of the strength of the proposed algorithm. The key variations and their corresponding results are given in Fig. 8a–e.

Fig. 8
figure 8

Key sensitivity analysis a encrypted image with original key, b decrypted image with key 1, c decrypted image with key 2, d decrypted image with key 3 and e decrypted image with original key

K = 9E1B4DCFD126928C957BF4B713F949A0CBB24B7B97D203C0927990AC4BAEFDA9,

K1 = 8E1B4DCFD126928C957BF4B713F949A0CBB24B7B97D203C0927990AC4BAEFDA9,

K2 = 7E1B4DCFD126928C957BF4B713F949A0CBB24B7B97D203C0927990AC4BAEFDA9,

K3 = 6E1B4DCFD126928C957BF4B713F949A0CBB24B7B97D203C0927990AC4BAEFDA9,

From the analysis, it is evident that even a single-bit change in the original key yields a completely random image compared with the original. This shows that the keys used in the proposed scheme are highly sensitive: even a slight variation results in a completely obscured image.

3.5 Cropping Attack Analysis

In real-world applications of image encryption, attacks are performed either intentionally or inadvertently. An intruder may deliberately crop part of an encrypted image to probe the strength of the algorithm. In the cropping-attack analysis, blocks of pixels are cropped from the cipher image, which is then decrypted to validate the robustness of the proposed algorithm. Figure 9a–c shows the encrypted image cropped by 1, 5, and 10%, respectively, and Fig. 9d–f shows the corresponding decrypted images. Even after the cropping attacks, almost 85% of the image can be recovered, which demonstrates the resistance of the proposed scheme.

Fig. 9
figure 9

Cropping attack analysis a encrypted image with 1% data loss, b encrypted image with 5% data loss, c encrypted image with 10% data loss, d decrypted image of (a), e decrypted image of (b) and f decrypted image of (c)

3.6 Noise Attack Analysis

Attacks degrade the output of an encryption algorithm, so a strong and efficient algorithm should be immune to various attacks, especially noise, which can be added while the data passes over the channel. Figure 10a–d shows the encrypted images with Gaussian, Poisson, salt-and-pepper, and speckle noise added, respectively; Fig. 10e–h shows the corresponding decrypted images.

Fig. 10
figure 10

Noise attack analysis a encrypted image with Gaussian noise of density 0.02, b encrypted image with Poison noise of density 0.04, c encrypted image with Salt and Pepper noise with density 0.02, d encrypted image with Speckle noise of density 0.04, e decrypted image of (a), f decrypted image of (b), g decrypted image of (c) and h decrypted image of (d)

In addition, colored noises (pink, blue, red, and violet) are added to the encrypted images. The noisy cipher images and their decrypted counterparts are depicted in Fig. 11a–h.

Fig. 11
figure 11

Colored noise attack analysis a encrypted image with Pink noise, b encrypted image with Blue noise, c encrypted image with Red noise, d encrypted image with Violet noise, e decrypted image of (a), f decrypted image of (b), g decrypted image of (c), and h decrypted image of (d)

The figures reveal that the images can still be recovered after noise addition, which shows the stability of the system's performance.

3.7 Complexity Analysis

The computational complexity of the encryption algorithm depends on the DGT, MD5 hash, and LSIC techniques. The MD5 algorithm involves 4 rounds of iteration, the DGT is applied over 128 chosen 2 × 2 blocks, and the LSIC performs encryption with two 256-bit keys, key1 and key2, over eight iterations. The overall complexity of the proposed system is therefore given by 2 × 24 × 4 × 10 × 4 × 128 × 2 × 256 × 2 × 256 × 8, which is low enough for real-time applications.

4 Performance Analysis with Existing Paper

This section compares the performance of the proposed scheme with the available literature, as tabulated in Table 8: the correlation analysis (vertical, diagonal, and horizontal coefficients) and the NPCR and UACI metrics.

Table 8 Performance comparison with proposed scheme

From Table 8, it is evident that the vertical, horizontal, and diagonal correlation coefficients are better than those of the existing methods [2, 3]. The NPCR values are comparable with [4, 9], and the UACI values are better than those of [4, 9,10,11]. The computational complexity of the developed cryptosystem is also found to be favorable.

5 Conclusion

Protecting personal details propagated over public and vulnerable channels has become essential in this digitized world. In this paper, a system for transmitting biomedical data in the available free spectrum based on cognitive radio technology has been proposed. The medium that carries the information is obviously open to many intruders; amid these risks, the present work pursues a well-secured scheme by combining a double encryption technique with watermarking, and the output remains recoverable in the presence of noise. This application is well suited for transmitting and exchanging biomedical signals among the DMS, doctors, and patients in rural areas.