I’ll start by admitting that the title is probably a rather bold claim about what I want to talk about: cyber security, also referred to as data or information security. This has been a very hot topic of late, with (yes, I am going to say the R word again) ransomware still the most costly cause of data loss and system downtime for the past couple of years. We are getting better at mitigating ransomware attacks purely because they happen so often; we adapt and create new technologies and methods to counter this sort of threat. Much of that mitigation effort focuses on what is typically considered the weakest link in the chain between data and an attacker: the users. Otherwise known as us, ourselves.
A lot of companies have invested in educating end users on identifying malicious emails and phishing attempts, and on better overall security posture: password tools and practices, screen locking, and so on. But lately we are seeing a model known as zero trust adopted more and more.
This is because, just like when we have had too much to drink and our ex’s number is still stored in our phone within reach, we simply cannot trust even ourselves. Humans make mistakes, and even worse, some humans intentionally set out to do damage. This is what I mean when I talk about the “last threat to data”: it is something we can never really create a complete solution for, hence the zero trust model.
To properly understand this, we need to go back over time and look at the historical threats to data and how we have adapted and evolved to overcome them. Ever since data was first created and we started placing value on it, we have realised we need to protect it. Natural disasters aside, the earliest threats to data were primarily the reliability of the hardware hosting the data itself. Storage media were mostly mechanical, with moving parts that would wear out over time or occasionally sustain physical damage. So we improved infrastructure resiliency: higher availability, failover systems, increased uptime, and groups of disks that can be “hot swapped” on the fly if damaged, with the data rebuilt onto the newly replaced disk as if by magic.
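That “magic” rebuild is just arithmetic. In a single-parity disk group, a parity block is the XOR of all the data blocks, so any one lost block can be reconstructed from the surviving blocks plus parity. Here is a toy sketch of the idea (illustrative only; real arrays operate on fixed-size blocks with far more machinery around them):

```python
# Toy illustration of single-parity disk-group reconstruction.
# parity = XOR of all data blocks; XOR-ing parity with the
# surviving blocks yields the block on the failed disk.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    p = bytes(len(blocks[0]))          # all-zero block to start
    for blk in blocks:
        p = xor_blocks(p, blk)
    return p

def rebuild(surviving_blocks, parity_block):
    # XOR of parity with every surviving block recovers the lost one.
    lost = parity_block
    for blk in surviving_blocks:
        lost = xor_blocks(lost, blk)
    return lost

disks = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(disks)
# Disk 1 fails; rebuild its contents from disks 0, 2 and parity.
recovered = rebuild([disks[0], disks[2]], p)
assert recovered == b"BBBB"
```

This is why a hot-swapped disk can be repopulated on the fly: the array never lost the information, only one of the copies of it.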
With most of the infrastructure problems out of the way, the threats to data became mostly software or data related: malware, viruses, spyware and ransomware, all intended to steal information, hold it to ransom for payment, or just outright destroy it. This is where “perimeter security” became popular over the years: preventative measures to try and stop cyber attacks in their tracks. Anti-virus software first became widely popular in the 1980s, around the same time that personal computers went mainstream (can you see the connection here? End users were introduced to technology that became vulnerable to cyber attack).
Today perimeter security has evolved significantly. Most people, even non-technical ones, know what a firewall is, and most operating systems now ship reasonably secure by default, with automated patching, so we have also largely adapted to these “logical” threats.
Another type of threat that isn’t directly destructive of data, but still belongs in a security conversation, is the theft of data. Data is inherently very portable, and the entire world is now hyperconnected. Protecting intellectual property has become significantly more important as more of that property is digital. So we embraced cryptography to protect information, and we created data loss prevention technologies to ensure data stays within an organisation, or at least that if it leaves the security perimeter, it is unreadable. I once heard of a person with a remarkable eidetic (not quite photographic) memory who sat computer-based exams, memorised every single question, and then “brain dumped” the answers online shortly afterwards, devaluing a certification and forcing the vendor to constantly update the exam content.
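The “unreadable outside the perimeter” idea can be shown in a few lines: derive a keystream from a secret the organisation holds and XOR it with the data, so anyone without the key sees only noise. This is strictly a teaching sketch of the concept; production systems use vetted ciphers such as AES-GCM, never a hand-rolled construction like this.

```python
# Toy sketch only: SHA-256 in counter mode as a keystream, XOR-ed
# with the data. Without the org-held key, the ciphertext is noise.
# Do NOT use hand-rolled crypto like this in real systems.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"quarterly-results spreadsheet contents"
ciphertext = xor_cipher(b"org-held-key", secret)
assert ciphertext != secret                                # unreadable
assert xor_cipher(b"org-held-key", ciphertext) == secret   # round-trips
```

Data loss prevention tooling builds on exactly this property: if the key never leaves the organisation, data that does leave is worthless to the thief.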
So, if infrastructure has become resilient to physical failures, and perimeter security has become significantly more robust at preventing intrusion, attack and theft of data, what can stop ourselves from being that last threat to our data, whether intentionally as an insider threat or unintentionally as a user making an honest mistake? Remember, security hardening can go so far that it becomes counterproductive, locking us out of the very systems we need access to. On the other hand, we don’t want to give our users “enough rope to hang themselves with”, but sometimes we simply don’t have a choice.
The answer? AI and machine learning. This is increasingly becoming the answer to our modern technology requirements. You see, if we can’t trust ourselves (or each other) under the zero trust model, then we need to rely on the machines. I will give you a good analogy why: machines have nothing to gain from being malicious (unless of course they become self-aware and somehow decide that humans are their enemy, but this isn’t an article about the Terminator and Skynet). No, just like a loyal dog won’t leave its owner for a richer person across the street, machines won’t help a disgruntled user mount a malicious attack in order to share in the profits or support the attacker’s agenda.
We let the machines take over because we need them to monitor our activity and learn our behaviours so they can stop us in our tracks. This is the only way we will ever get close to a permanent solution for “the last real threat to data.”
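What does “learning our behaviours” actually mean? At its simplest: build a baseline of normal activity per user, then flag behaviour that deviates sharply from it. The sketch below is my own minimal illustration of that idea (a crude z-score test with an arbitrary three-sigma threshold), not the proprietary models a real product uses:

```python
# Minimal behavioural anomaly detection: learn a per-user baseline of
# hourly file accesses, then flag hours that deviate by more than a
# few standard deviations. Threshold of 3 sigma is an assumption for
# illustration; real systems use far richer models.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# A user who normally touches 40-60 files per hour...
baseline = [45, 52, 48, 60, 41, 55, 50, 47]
# ...suddenly reads 5,000 files in an hour: a classic mass-read
# signature of ransomware encryption or data exfiltration.
print(is_anomalous(baseline, 5000))  # True
print(is_anomalous(baseline, 49))    # False
```

The point of the analogy: the machine doesn’t care who you are or what you stand to gain. It only cares that this hour looks nothing like every other hour it has watched you work.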
This is where you are probably thinking: OK, why is a guy from NetApp talking about security? Well, we at NetApp have spent years and years perfecting data management across an increasingly distributed infrastructure known as the hybrid multi-cloud. As data spreads to more places, the challenge of ensuring protection, security and compliance grows significantly. We help mitigate the risks of distributed data environments by providing a seamless experience for controlling both the data and how users interact with it.
Last month I spoke about how we are helping businesses make sense of data to classify information for compliance and governance reasons.
Today I am talking about NetApp’s Cloud Secure capability, a feature of Cloud Insights. Cloud Secure uses sophisticated AI and machine learning to observe the behaviours and patterns of users and their interaction with data. This is important for a number of reasons, chief of which is detecting attacks before it’s too late. A number of other key features work in conjunction with each other: just as our compliance capability can show you where sensitive information lives, Cloud Secure can provide user data access reporting for security compliance. In fact, we keep this information for 13 months to enable forensics and user audit reporting, which is becoming increasingly important under data breach notification laws.
The best part is that Cloud Secure and Cloud Insights are delivered as a SaaS solution completely hosted by NetApp, so there is quick time to value, no upgrades to manage, and it scales from a single department to an entire highly distributed global enterprise.
Again, as always, I am only scratching the surface of what Cloud Secure and Cloud Insights can do. In fact, NetApp provides a complete end-to-end approach to prevention, detection and rapid recovery from any number of cyber threats and attacks.
If you want to get a taste of what I am talking about and see how you can gain visibility into malicious user activity and identify potential policy risks, check out my video “Introducing NetApp Cloud Secure in 60 seconds or less”. Always be prepared for when, not if, an attack or breach will happen.
Data Driven Technology Evangelist, NetApp ANZ
Techie with Table Manners
My mission is to enable data champions everywhere. I have always been very passionate about technology with a career spanning over two decades, specializing in Data and Information Management, Storage, High Availability and Disaster Recovery Solutions including Virtualization and Cloud Computing.
I have a long history with Data solutions, having gained global experience in the United Kingdom and Australia where I was involved in creating Technology and Business solutions for some of Europe and APAC’s largest and most complex IT environments.
An industry thought leader and passionate technology evangelist, I blog frequently about all things data and am active in the technology community, speaking at high-profile events such as Gartner Symposium, IDC events, AWS Summits and Microsoft Ignite, to name a few. I translate business value from technology, demystifying complex concepts into easy-to-consume and easy-to-procure solutions. A proven, highly skilled and internationally experienced sales engineer and enterprise architect who has worked for leading technology vendors, I have collected experiences and developed skills across almost all enterprise platforms, operating systems, databases and applications.