
Neural Espionage: Can Adversarial Neural Networks Learn to Apply Encryption to Images?

A project summary from the 2017 California State Science Fair. The project aimed to determine whether two neural networks can successfully encrypt and decrypt an image while preventing an intercepting neural network from deciphering the original image. The summary covers the objectives, methods, materials, results, conclusions, and discussion of the project, which was designed by Alexander T. McDowell with help from cybersecurity expert Chris K. Williams.

Typology: Thesis

2022/2023

Uploaded on 05/11/2023 by selvam_0p3

CALIFORNIA STATE SCIENCE FAIR
2017 PROJECT SUMMARY

Name(s): Alexander T. McDowell
Project Number: J0805
Project Title: Neural Espionage: Can Adversarial Neural Networks Learn to Apply Encryption to Images?

Abstract
Objectives/Goals
To determine whether two neural networks, a Provider and a Receiver, can successfully encrypt and decrypt an image while preventing an intercepting neural network from deciphering the original image.

Methods/Materials
I started with three adversarial convolutional neural networks as a framework and developed them so that they could apply encryption to images instead of plain data matrices. I then trained the neural networks across 13 different tests. The variables I changed in my experiments were the number of training iterations, the image and key sizes, the types of images and keys, the network's learning rate, and the message and key lengths. A minimal sketch of this kind of setup follows.
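
The summary does not include the project's code, and the scientific paper it was based on is not named. The sketch below is a hypothetical reconstruction in PyTorch of the three-network setup the methods describe; it follows the adversarial neural cryptography scheme of Abadi and Andersen (2016), which uses the same Provider/Receiver/Interceptor roles (there called Alice, Bob, and Eve). Every name, size, and hyperparameter here is an illustrative assumption, not a value from the project.

import torch
import torch.nn as nn

# Hypothetical sketch: three adversarial networks for image encryption.
# Images and keys are treated as flat vectors in [-1, 1]; all sizes,
# layer shapes, and hyperparameters are assumptions for illustration.
IMG_SIZE = 16 * 16   # assumed flattened image length
KEY_SIZE = 16 * 16   # assumed key length (the project varied both)

def make_net(in_size, out_size):
    # Small fully connected stand-in for the convolutional nets described.
    return nn.Sequential(
        nn.Linear(in_size, 2 * in_size),
        nn.ReLU(),
        nn.Linear(2 * in_size, out_size),
        nn.Tanh(),
    )

provider = make_net(IMG_SIZE + KEY_SIZE, IMG_SIZE)   # encrypts image with key
receiver = make_net(IMG_SIZE + KEY_SIZE, IMG_SIZE)   # decrypts using the key
interceptor = make_net(IMG_SIZE, IMG_SIZE)           # attacks without the key

opt_pr = torch.optim.Adam(
    list(provider.parameters()) + list(receiver.parameters()), lr=1e-3)
opt_i = torch.optim.Adam(interceptor.parameters(), lr=1e-3)

TARGET_ERR = 1.0  # assumed "chance-level" interceptor error to aim for

for step in range(5000):                      # iteration count varied per test
    image = torch.rand(32, IMG_SIZE) * 2 - 1  # stand-in batch of images
    key = torch.rand(32, KEY_SIZE) * 2 - 1    # fresh random key per batch

    # Provider/Receiver step: reconstruct the image well while pushing the
    # interceptor's reconstruction error toward chance level.
    cipher = provider(torch.cat([image, key], dim=1))
    decrypted = receiver(torch.cat([cipher, key], dim=1))
    intercepted = interceptor(cipher)
    recon_err = (decrypted - image).abs().mean()
    intercept_err = (intercepted - image).abs().mean()
    loss_pr = recon_err + (TARGET_ERR - intercept_err) ** 2
    opt_pr.zero_grad()
    loss_pr.backward()
    opt_pr.step()

    # Interceptor step: minimize its own reconstruction error on the
    # ciphertext alone; detach so this step does not update the Provider.
    cipher = provider(torch.cat([image, key], dim=1)).detach()
    loss_i = (interceptor(cipher) - image).abs().mean()
    opt_i.zero_grad()
    loss_i.backward()
    opt_i.step()

Alternating the two optimization steps is what makes the setup adversarial: the interceptor keeps improving its attack, and the Provider/Receiver pair must stay ahead of it. Whether the interceptor ends up recovering a readable outline, as the results below report, depends on how this balance plays out.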

Results
The neural networks achieved their training goal of minimizing the guess error between the Provider and the Receiver. However, the intercepting neural network always managed to decrypt the image into a faint outline that was decipherable to a human observer. In addition, the image encrypted by the Provider was not cryptographically secure, and its contents were easy for a human to make out. In 12 out of the 13 tests the Receiver successfully decrypted the message, while in 8 out of the 13 tests the interceptor recovered an accurate outline of the original image.

Conclusions/Discussion
Neural networks can learn to apply encryption to images. However, the encryption the networks applied was not cryptographically strong. The data suggested that changing the loss function I was using would significantly improve the neural networks' ability to encrypt and decrypt images. Changing the architecture of the networks could also improve that ability.
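
The summary does not say which loss function the project used. For context on what changing it might involve: in Abadi and Andersen's 2016 adversarial neural cryptography paper, whose three-network structure matches the one described here, the training objectives for plaintext $P$, the Receiver's reconstruction $P_R$, the Interceptor's reconstruction $P_I$, a distance $d$ (L1 in that paper), and message length $N$ are

\[
L_{\text{Interceptor}} = d(P, P_I), \qquad
L_{\text{Provider/Receiver}} = d(P, P_R) + \left(\frac{N/2 - d(P, P_I)}{N/2}\right)^{2}
\]

The quadratic term rewards driving the Interceptor's error toward $N/2$, the expected error of random bit guessing, rather than simply maximizing it. Replacing $d$ or this penalty with a distance better suited to images, where neighboring pixels are correlated and a per-pixel error can be low even while a recognizable outline survives, is one concrete way to change the loss function.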

Summary Statement
I tested whether adversarial neural networks could learn to apply encryption to images.

Help Received
I designed my experiments myself. Cybersecurity expert Chris K. Williams helped me understand the scientific paper I used as the basis for my project.
