The authors believe that hybridizing two different approaches results in more complex encryption outcomes. The proposed method combines a symbolic approach, namely a table-substitution method, with a paradigm that models real-life neurons (a connectionist approach). The resulting hybrid model is compact, nonlinear, and parallel. The neural-network component focuses on generating keys (weights) using a feedforward architecture that works as a mirror. These weights then serve as input to the substitution method. The hybrid model is verified and validated as a successful encryption method.
Introduction
Securing a network for digitally stored or transmitted data is a fundamental need for every organization in the world, and there is an everlasting need to keep enhancing encryption and protection. Many encryption techniques have been proposed in the literature (Al-Muhammed et al, 2017; Bogdanov et al, 2014; Kunden et al, 2015; Mathur et al, 2016; Daemen et al, 2002; Nie et al, 2009; Al-Muhammed, Abuzitar et al, 2017; Patil et al, 2016; “TDEA”, 2012; Bogdanov, Mendel et al, 2014; Stallings et al, 2016; Anderson et al, 2018; Burwick et al, 1999; Isenburg et al, 2003; Soto et al, 2018; Rukhin et al, 2001; Soto et al, 1999). They rely on different models that implement different processing techniques, such as stream-based or block-based schemes, and various key-based diffusion and confusion operations have been proposed. A mesh-based technique and directive operators have also been used (Al-Muhammed, 2017). However, these methods lack the needed confusion because of their quasi-linear nature and might be exploited by hackers; the mapping would be more deceptive if it were nonlinear. The diffusion proposed in this paper, by contrast, has a connectionist, nonlinear, and complex nature that makes the behavior of the encryption difficult to predict even if the keys are known. Additional information must be shared by both ends regarding the architecture of the neural network and the nature of the activation functions used, to complete the process. All parts of the key (i.e., the weights) work collectively and in parallel in generating the encrypted/decrypted data. Neural networks with the Jordan training method have been used to generate keys for encryption (Komal et al, 2015), where a neural network was trained as a sequential machine for the encoding process. However, sequential machines suffer from delay, especially for long input sequences. Other researchers used a neural network to generate encryption keys and repeated the training process at the receiving end to generate the decryption keys (Zitar et al, 2005).
In that sense, they did not take advantage of the compactness and the mirroring capabilities of the neural networks.
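As an illustration of the mirroring idea, the sketch below trains a small one-hidden-layer autoencoder to reproduce its input blocks; the hidden-layer weights would then play the role of the encryption key and the output-layer weights the role of the decryption key. The layer sizes, tanh activation, learning rate, and training loop here are assumptions made for the sketch, not the chapter's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mirror(blocks, hidden=8, epochs=5000, lr=0.5):
    """Train a one-hidden-layer 'mirror' (autoencoder) network.

    W1 (input -> hidden) would act as the encryption key and
    W2 (hidden -> output) as the decryption key.  Illustrative only.
    """
    m, n = blocks.shape
    W1 = rng.normal(scale=0.5, size=(n, hidden))   # candidate encryption key
    W2 = rng.normal(scale=0.5, size=(hidden, n))   # candidate decryption key
    for _ in range(epochs):
        h = np.tanh(blocks @ W1)                   # hidden layer: cipher values
        out = h @ W2                               # output layer: reconstruction
        err = out - blocks
        delta = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
        W2 -= lr * (h.T @ err) / m
        W1 -= lr * (blocks.T @ delta) / m
    return W1, W2

# Plaintext bytes, scaled to [0, 1] and split into 4-byte blocks.
data = np.frombuffer(b"attack at dawn!!", dtype=np.uint8)
blocks = data.reshape(-1, 4) / 255.0
W1, W2 = train_mirror(blocks)
recovered = np.tanh(blocks @ W1) @ W2              # mirror reproduces the input
```

Because the network is trained to mirror its input, passing a block through the hidden layer alone yields a nonlinear, real-valued cipher block, and the output layer undoes it.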
To apply our hybrid method, the weights are used in a substitution heuristic that employs a look-up table to provide a new mapping framework. The outcome of this substitution is based on a pre-set table known at both the transmitter and the receiver sides; this table should be updated occasionally, with coordination between the two ends. At the receiving end, using the weights sent by the transmitter, the substitution is reversed, and the “neurally” encrypted data is then recovered by processing it through the output layer with the sent weights.
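The substitution step might be sketched as follows, assuming a pre-shared 256-entry permutation table and a hypothetical `weight_offset` rule that derives a byte offset from the key weights; both are illustrative choices, not the chapter's actual table or rule:

```python
import numpy as np

rng = np.random.default_rng(42)
SBOX = rng.permutation(256)        # pre-shared look-up table (both ends)
INV_SBOX = np.argsort(SBOX)        # its inverse, held at the receiver

def weight_offset(weights):
    # Hypothetical rule: derive a byte offset from the key weights.
    return int(np.abs(weights).sum() * 1000) % 256

def substitute(byte_vals, weights):
    """Transmitter side: rotate by the weight offset, then look up."""
    return SBOX[(byte_vals + weight_offset(weights)) % 256]

def unsubstitute(byte_vals, weights):
    """Receiver side: inverse look-up, then undo the rotation."""
    return (INV_SBOX[byte_vals] - weight_offset(weights)) % 256

msg = np.frombuffer(b"neurally encrypted", dtype=np.uint8)
key = rng.normal(size=8)           # stand-in for the transmitted weights
cipher = substitute(msg, key)
plain = unsubstitute(cipher, key)
```

Since the table is a permutation, the receiver's inverse table plus the same weight-derived offset recovers the bytes exactly.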
In summary, the chapter offers the following contributions:

1. A more effective hybrid nonlinear mapping technique overcomes any exploitable linearity embedded in the mappings; it is followed by a traditional look-up table substitution that can be updated occasionally.
2. The parallel computational method generates additional complexity.
3. The cipher text exhibits more confusion, since the processing operates efficiently on “real-type” data (such as multiplication and addition of real numbers); with neural networks, no direct manipulation of the text itself is needed.
4. A block-based cipher is used rather than a stream-based one, and keys are regenerated for every block.
5. The keys used for encoding differ from the keys used for decryption, which adds another security feature.
6. The relatively time-consuming training is done only once; during transmission, the created keys are used to generate the encrypted signal and to recover the original data at the receiving end at very low computational cost (a single loop per neuron, using a single layer at each end).
7. The added substitution layer (table look-up) introduces further complexity, yet it does not manipulate the original plain text itself; it operates on data already encrypted by the hidden layer of the neural network.
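The distinct encode/decode keys and the low per-block cost mentioned in the contributions (one pass per layer at each end) can be illustrated with the sketch below. For simplicity, the decoding key is taken here as the pseudo-inverse of the encoding key rather than as trained output-layer weights, and `arctanh` undoes the activation; both are assumptions made only for this illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

n, hidden = 4, 8
W1 = rng.normal(scale=0.5, size=(n, hidden))   # encoding key (transmitter)
# Stand-in for the trained output-layer key: the pseudo-inverse of W1.
W2 = np.linalg.pinv(W1)                        # decoding key (receiver)

def encode_block(block, W1):
    """Transmitter: one forward pass through the nonlinear hidden layer."""
    return np.tanh(block @ W1)

def decode_block(cipher, W2):
    """Receiver: undo the activation, then one pass through the output layer."""
    return np.arctanh(cipher) @ W2

block = rng.uniform(0.0, 1.0, size=n)          # a plaintext block in [0, 1]
cipher = encode_block(block, W1)               # real-valued cipher block
recovered = decode_block(cipher, W2)
```

Each end performs only a single matrix-vector pass per block, and the two ends hold different key matrices, matching the low-cost, asymmetric-key behavior described above.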