Signal multiple encodings by using autoencoder deep learning

ABSTRACT


INTRODUCTION
Encryption is a form of transforming information so that the generated data cannot be read by unauthorized persons. Decryption is a way to restore encrypted data into its original structure [1], [2]. The proper decryption key is required to reconstruct the contents of the encrypted information. The function of the key is to reverse the work of the encryption process. The more complex the encryption technology is, the more demanding it becomes [3]. Encryption and decryption are both parts of a larger field called cryptography [4]-[7], which is also the focus of this paper.
Encryption was studied in many papers [7]-[11]. Qin et al. [12] studied deep learning applications in physical layer communications for systems with and without block structures. The authors used the block structure to show how deep learning can be applied for signal compression and identification in deep learning-based communication systems. They also discussed ongoing efforts to build end-to-end communication networks based on deep learning [12]. In the same year, Fadhil et al. proposed a forward artificial neural network encoder whose main structure was built by the self-organizing feature map (SOM). The forward neural network dimension is initially chosen based on the number of bits in a code word and the source bits. The code word uniqueness was checked to ensure that it matched existing data. For decoding, a spiking neural network was used; its performance depended on the size of the source dimension or the number of bits in the code word [13]. Ye et al. [14] suggested using a conditional generative adversarial network to represent channel effects and to bridge the transmitter deep neural network (DNN) and the receiver DNN. The results showed that the suggested method was effective in additive white Gaussian noise (AWGN) channels, frequency-selective channels and Rayleigh fading channels [14]. Sassi et al. [15] modified the posterior decoding algorithm to solve hidden Markov models with machine learning algorithms that deal with big data tasks. The authors succeeded in improving the algorithm to reduce its time complexity and yield effective outcomes. The work in [16] proposed a system that prevented unauthorized access and spying by using a dual adapted Hebbian neural network for encrypting data. Machine learning and deoxyribonucleic acid (DNA) techniques were used to compress the data and increase its confusion. The method succeeded in transferring data in a secure manner and in a short time [16]. In the same year, Maolood et al.
[17] suggested a lightweight stream cipher method. It was then tested on numerous video samples to check its suitability and authentication in encryption and decryption procedures. After testing many characteristics such as differential analysis, correlation analysis, information entropy and histogram analysis, their method showed higher security and lower calculation time compared with state-of-the-art encoding methods [17]. From the literature, it can be noticed that the strength of encryption and the time of encryption and decryption are very important factors in cryptography. Here, both factors are considered.
Our work contributes to the field of security by proposing an intelligent algorithm for encrypting multiple signals, called the deep autoencoder network (DAN). The generated codes are difficult to predict by unauthorized people. The goal here is to build a highly secured encryption system called the signal multi-encryption system (SMES). The remaining sections are organized as follows: section 2 illustrates the theory of the overall cryptography algorithm; section 3 states the results and discussion; and section 4 concludes the paper.

METHOD
The main SMES concept is encrypting signal information multiple times, and it is based on a DAN. Firstly, the input signal is converted into a one-dimensional (1D) matrix of size 1×n elements, where n is the number of elements in any original signal, here equal to 172,890 values. This matrix is then entered into the 1st hidden layer (or first encryption layer), and the weights between the input signal and the first hidden layer can be used as the first encryption key. The encrypted data is then sent to the 2nd hidden layer (or second encryption layer) using the weights between the first and second hidden layers, which can be utilized as a second key. This process can be extended to the Nth hidden layer (N is here equal to 3 hidden layers). To find the best number of hidden nodes (or neurons) in the encryption layers, the number of hidden nodes is changed equally for all the used hidden layers to 4, 8, 16, 32, 64, and 128 nodes. Consequently, different measurements of error ratios and training times are considered, as will be declared later. On the other hand, the output layer (last layer) is employed for the decryption. It decrypts the multi-encrypted information and reconstructs the original signal by exploiting the weights between the last hidden layer and the output layer, which represent the final key. The general structure of the suggested SMES is illustrated in Figure 1. All experiments in this study are executed under the matrix laboratory (MATLAB) software, version R2020b (9.9.0.1467703), 64-bit.
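The layer-by-layer encryption flow described above can be sketched as follows. This is an illustrative Python/numpy sketch, not the authors' MATLAB implementation: the layer sizes, the sigmoid activation, and random weight initialization are assumptions, and in the real SMES the weights (keys) are learned by training so that the final layer actually reconstructs the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: n input values, three hidden (encryption) layers of h nodes.
n, h = 16, 64
W = [rng.standard_normal((n, h)),    # key 1: input -> 1st encryption layer
     rng.standard_normal((h, h)),    # key 2: 1st -> 2nd encryption layer
     rng.standard_normal((h, h))]    # key 3: 2nd -> 3rd encryption layer
W_out = rng.standard_normal((h, n))  # final key: last hidden layer -> output

def encrypt(signal):
    """Pass the 1-D signal through the N encryption layers in sequence."""
    a = signal
    for Wk in W:
        a = sigmoid(a @ Wk)
    return a

def decrypt(code):
    """Apply the final key; with trained weights this reconstructs the signal."""
    return code @ W_out

signal = rng.random(n)   # stand-in for the 1 x n signal matrix
code = encrypt(signal)
restored = decrypt(code)
```

Without all N keys (weight matrices) an eavesdropper cannot invert the chain, which is the multi-encryption idea the SMES relies on.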

Deep autoencoder network
The DAN is a type of deep learning, which is an advanced form of machine learning. Deep learning is widely employed in artificial intelligence domains such as medicine [18], image processing [19], information and sound applications [20], computer vision [21], and many more applications [15], [22]-[28]. In this paper, the DAN is suggested, implemented and evaluated. It has three types of layers: an input layer, multiple hidden layers and an output layer. The backpropagation training algorithm is utilized by exploiting the input values as targets.

Decryption evaluation
To evaluate the decryption's reliability, several measurement equations have been used to calculate the errors between original signals and restored (or decrypted) signals. Low error values indicate high performance, whereas high error values indicate low performance. These equations include:

Mean square error
Mean square error (MSE) measures the mean of the squared differences between the original and restored signals. It is defined as follows [30]:

MSE = (1/n) Σ_{i=1}^{n} (I_i − O_i)²

where n is the number of elements in any original or restored signal, I is the original signal and O is the restored signal.

Peak signal to noise ratio
Peak signal to noise ratio (PSNR) is the ratio between the maximum possible signal power and the power of the corrupting noise. It is usually measured in decibels (dB) and is computed as follows [30]:

PSNR = 10 log₁₀(l² / MSE)

where l is the maximum peak value of the signal.

Average difference
Average difference (AD) considers the mean difference between the original and restored signals. AD is defined as follows [30]:

AD = (1/n) Σ_{i=1}^{n} (I_i − O_i)

Maximum difference
Maximum difference (MD) measures the maximum absolute difference between the original and restored signals. MD is defined as follows [30]:

MD = max_i |I_i − O_i|

Normalized absolute error
Normalized absolute error (NAE) measures the total absolute difference between the restored signal and the original signal, normalized by the original signal. NAE is defined as follows [30]:

NAE = Σ_{i=1}^{n} |I_i − O_i| / Σ_{i=1}^{n} |I_i|
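The five metrics above follow the standard definitions cited in [30] and can be implemented directly. A short Python/numpy sketch (the sample signals and the default peak value l = 255 are illustrative assumptions):

```python
import numpy as np

def mse(I, O):
    """Mean of the squared differences between original and restored signals."""
    return float(np.mean((I - O) ** 2))

def psnr(I, O, l=255.0):
    """Peak signal to noise ratio in dB, with l the maximum peak value."""
    return float(10.0 * np.log10(l ** 2 / mse(I, O)))

def ad(I, O):
    """Mean (signed) difference between original and restored signals."""
    return float(np.mean(I - O))

def md(I, O):
    """Maximum absolute difference between original and restored signals."""
    return float(np.max(np.abs(I - O)))

def nae(I, O):
    """Total absolute error normalized by the original signal."""
    return float(np.sum(np.abs(I - O)) / np.sum(np.abs(I)))

I = np.array([10.0, 20.0, 30.0, 40.0])  # original signal (toy example)
O = np.array([10.0, 22.0, 30.0, 38.0])  # restored signal (toy example)
print(mse(I, O))  # 2.0
```

Lower MSE, AD, MD, and NAE indicate better reconstruction, while a higher PSNR indicates better reconstruction.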

RESULTS AND DISCUSSION
The database of DIVerse 2K (DIV2K) is used; it can be considered as a set of signals [31]. In this study, we encoded signals using the suggested deep learning network and compared the outputs with the inputs. The system achieved good results for complex coding with more than one stage. Three hidden layers are used in our experiments. The proposed DAN is checked for the appropriate number of nodes in the hidden layers in order to achieve the most acceptable performance, i.e., the lowest error rates. The number of hidden neurons in all three hidden layers is varied over the values of 4, 8, 16, 32, 64 and 128 nodes. Each of these sizes is evaluated by applying the error measurement equations of MSE, PSNR, AD, MD, and NAE. Table 1 shows the DAN error performances for the different numbers of hidden neurons, while Table 2 measures the training time for each case of changing the number of hidden neurons. Table 1 shows that the PSNR attained a good value for 64 neurons in the hidden layers, because a higher PSNR value means a higher quality of output reconstruction. The AD error values show that the lowest value is also obtained for 64 neurons in the hidden layers. The NAE error values likewise show that 64 neurons in the hidden layers give the minimum value. Only the MD error obtained its minimum value for 8 neurons. After comparing the error values and considering that a more complex code yields greater strength, 64 neurons in the hidden layers seems the best choice for our multistage code generation.
The training time increases with the number of hidden neurons in the hidden layers, see Table 2. The fundamental reason for a large number of hidden neurons is that each point in the signal has its own neurons on the processing line from the input layer to the output layer in the DAN, which allows for a complicated encryption scheme. From Table 2, it is clear that increasing the number of neurons leads to an increase in the training time, assuming a constant data size; this must be taken into account if this system is implemented in real-time systems.

CONCLUSION
An SMES based on a DAN was proposed in this study. Several encryption layers make up the proposed system. To begin, the input layer is established to read signal data and perform the initial key generation and encryption procedure on the input signal. From the encoded input signal, each encryption layer generates its own keys. The last layer decrypts the encoded signal and reconstructs the original signal using its key and the signal inputted to it. Signals are successfully tested using the Nth encryption procedure once the system is constructed. Many error measurement formulas were used to determine the optimal system performance after varying the hidden layer sizes; the best results were obtained with 64 hidden neurons. This reliable technology may be used to deliver a critical notification while maintaining a high level of security.

Figure 1. The general architecture of the suggested SMES


ISSN: 2302-9285. Bulletin of Electr Eng & Inf, Vol. 12, No. 1, February 2023: 435-440

Encryption is a method of encoding data [29]. It transforms original signals, which are the original representations of data, into coded signals. Only authorized humans are allowed to decode coded signals and view the original data.

Table 1. DAN error performances for the different numbers of hidden neurons

Table 2. Training time of each DAN according to its number of hidden neurons