Using Computer Vision to See Cyber Threats

David Campbell | May 16, 2017 | 4 Min Read

The difficulty in defending against cyber attacks lies in accurately identifying them and responding quickly, before a complete breach occurs. There are often markers that indicate an attack is about to occur, but these hints typically aren’t identified until a post-mortem is performed. What if we could train the computer to ‘see’ these markers in real-time?

We as humans can very quickly visually identify threats in complex environments. We notice other people’s eye movements and body language. We can tell if something doesn’t seem right. Is it possible to teach a computer to identify a threat in an almost instinctual fashion?

I began exploring this concept by studying algorithms that allow computers to ‘see’ images. My idea is to investigate whether computers could ‘see’ network traffic from a visual perspective, not just as literal protocols. Would they be able to see through the noise and pick up visual cues to issue threat warning levels?

To solve this problem, I began building a Neural Network (NN) framework. A NN seemed like a natural choice because it is very good at recognizing patterns, and it is a foundational building block of modern artificial intelligence.

The specific algorithm I used was a modified convolutional neural network. The convolutional neural network (or ConvNet) is loosely modeled on the visual cortex and allows the computer to assign classifications such as object type, number of objects, object color, etc. It can analyze a picture and describe it using predefined classifications; for example, it may look at a picture and declare that there are two red cars and one green bicycle. This classification is achieved by feeding the network millions of labelled pictures and teaching it what each one contains. The NN eventually configures itself to see patterns in an almost magical way.
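As a rough illustration of what a single convolutional layer computes, here is a minimal NumPy sketch (not the actual framework described in this post). A hand-written edge-detector kernel is slid over a toy image, producing a feature map that responds where the pattern occurs; a trained ConvNet learns filters like this one automatically from its training data.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a 2-D image and return the feature map (no padding)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge detector: the kind of low-level pattern a ConvNet's
# early layers learn on their own during training.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Toy 5x5 image: bright on the left, dark on the right.
image = np.array([[1., 1., 0., 0., 0.]] * 5)

# The feature map is large where the bright-to-dark edge sits,
# and zero over the uniform dark region.
feature_map = conv2d(image, edge_kernel)
```

Stacking many such learned filters, interleaved with pooling and nonlinearities, is what lets a ConvNet build up from edges to whole-object classifications.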

Using this concept, I created a ConvNet that takes network frames as input and outputs a threat classification and a risk level. It essentially identifies the type of attack and rates its danger. My goal was to simulate our brain’s ability to see danger and to classify it in real-time. When we are driving and we see something dart out in front of us, we very quickly analyze the risk. We first classify it, which in this case is the fact that we’re going to hit something. We then quantify it. What are the consequences of hitting it? If it’s a paper bag that floated onto the highway, we’ll assess the risk as very low. On the other hand, if a moose walks in front of us, we’d have a very different assessment.
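To make the idea concrete, here is a hedged sketch of the two pieces involved: reshaping a raw network frame into a small grayscale ‘image’ a ConvNet could consume, and decoding a class-plus-risk output head. All names, labels, and logit values here are hypothetical stand-ins, not taken from the actual system.

```python
import numpy as np

def frame_to_image(frame_bytes, side=16):
    """Reshape a raw network frame into a square grayscale 'image'
    (truncated or zero-padded), so a ConvNet can treat traffic visually."""
    buf = np.frombuffer(frame_bytes[:side * side], dtype=np.uint8)
    buf = np.pad(buf, (0, side * side - buf.size))
    return buf.reshape(side, side) / 255.0  # normalize bytes to [0, 1]

# Hypothetical output head: a trained network would map the image to an
# attack class plus a continuous risk level, mirroring "classify the
# risk, then quantify it".
CLASSES = ["benign", "port_scan", "dos", "exfiltration"]  # illustrative labels

def interpret_output(class_logits, risk_logit):
    """Turn raw network outputs into a (class label, risk in [0, 1]) pair."""
    probs = np.exp(class_logits) / np.exp(class_logits).sum()  # softmax
    risk = 1.0 / (1.0 + np.exp(-risk_logit))                   # sigmoid
    return CLASSES[int(np.argmax(probs))], float(risk)

image = frame_to_image(b"\x00\x45\x00\x3c" * 40)  # toy frame bytes
label, risk = interpret_output(np.array([0.1, 2.3, 0.2, 0.4]), 1.5)
```

The continuous risk output is what would drive the “threat warning level”: a paper-bag-grade alert can be logged quietly, while a moose-grade one pages an operator.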

A more compelling analogy is how we identify subtle signs of danger. When we are traveling, for example, and walking through an area that feels dangerous, our brain processes visual data in ways that can help protect us. We see the movements of people’s eyes, their gait, their hand positions. We can spot people who might be a threat. My goal is to achieve this level of sophistication in my ConvNet.

The challenge I’m currently facing is finding or creating enough learning material to properly teach my ConvNet. To teach it how to identify threats, it must be shown both what’s considered a threat and what’s legitimate. If there isn’t enough teaching material, the ConvNet interprets things far too literally; it cannot deal with missing information and cannot make educated guesses when things aren’t easily classifiable. The beauty of well-trained neural networks is that they can make leaps of faith and extrapolate where information is missing.
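For illustration, a supervised setup needs each captured frame paired with a ground-truth label; assembling enough of these pairs is exactly the hard part. The frames and labels below are synthetic stand-ins, not real capture data.

```python
import random

random.seed(0)  # reproducible toy example

# A labeled set pairs each frame with ground truth: benign traffic
# plus known-attack captures. Real labeling usually comes from
# post-mortems, honeypots, or simulated attacks.
labeled_frames = (
    [(bytes(random.randrange(256) for _ in range(64)), "benign") for _ in range(80)]
    + [(bytes(64), "dos") for _ in range(20)]  # all-zero frames as an attack stand-in
)

# Shuffle so the validation split sees both labels, then hold out 20%
# to measure whether the network generalizes beyond its teaching material.
random.shuffle(labeled_frames)
split = int(0.8 * len(labeled_frames))
train_set, val_set = labeled_frames[:split], labeled_frames[split:]
```

With too few such pairs (or too few attack types represented), the network memorizes the examples rather than learning the underlying patterns, which is the over-literal behavior described above.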

Companies like Google, Microsoft, and Facebook have artificial intelligence (AI) networks that are capable of amazing things. Google has even created a custom chip optimized for neural network processing called the Tensor Processing Unit (TPU). Unfortunately, it’s not available to the public at this time.

I foresee a future where companies like Intel and ARM create Neural Network Processing Units (NNPUs) that are generally available to the public, ideally cheap and efficient enough to embed in IoT (Internet of Things) devices. Part of the difficulty with securing IoT devices is the scale of their proliferation: by 2020, it is estimated that IoT devices will outnumber humans five to one. These devices cannot be easily monitored for security breaches, so a little embedded intelligence will go a long way.

At this point, the concept of Embedded Artificial Intelligence is theoretical, but I believe predicting and detecting danger is a very real and practical application of AI. This sort of intelligent security could be implemented in web servers, network switches, and many other internet-facing devices and technologies. I believe AI is the future of cybersecurity.

This post originally appeared on LinkedIn
