Darktrace expert on the risks of collaboration on the cloud and the role of AI

Justin Fier, Director of Threat Intelligence and Analytics, Darktrace, discusses the major vulnerabilities seen in Software as a Service today and looks at real-life examples of attacks where AI cyberdefences have been able to prevent a breach.

It’s no secret that collaboration is the bedrock of business. In fact, a Stanford University study demonstrated that merely priming employees to act in a collaborative fashion – without changing their environment or workflow – makes them more engaged, more persistent, more successful and less fatigued.

To digitally optimise this biologically ingrained capacity for teamwork, businesses the world over have adopted Software-as-a-Service (SaaS) applications that facilitate the sharing of information between multiple users.

Run via centralised, cloud-hosted data centres rather than on local hardware, such applications offer financial and technical benefits to companies of all sizes, from storage savings to reliable connectivity and speed. Yet it is their collaborative nature that has positioned SaaS software at the heart of the modern enterprise.

At the same time, the interactivity of cloud services renders them an attractive target for advanced cybercriminals, who can often leverage a single user’s SaaS credentials to compromise dozens of other accounts.

And while leading SaaS vendors conform to high security standards, the cyberdefences they employ nonetheless have a common weakness: human error on the customer end.

By launching sophisticated attacks, today’s threat actors are increasingly gaining access to cloud services through the front door, necessitating a fundamentally different security approach that can detect when credentialed users behave – ever so slightly – out of character.

Sensitive file access

Among the key challenges of SaaS security is balancing the convenience of open access to information with the imperative of protecting privileged assets.

Indeed, with hundreds or even thousands of employees sharing a welter of files and databases at all times, safeguarding SaaS applications against insider threat is extraordinarily difficult with traditional security tools, which use fixed rules and signatures to catch only known, external cyberattacks.

Rather, detecting when credentialed users enter parts of these applications where they don’t belong requires AI security systems that understand their typical online behaviour well enough to spot subtle anomalies. And as employees’ responsibilities and privileges inevitably change, such systems must be able to adapt while ‘on the job’.
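The adaptive baselining described above can be illustrated with a minimal sketch. This is not Darktrace's method, just a hypothetical per-user model: an exponentially weighted average of daily file-access counts that updates online, so the baseline shifts as an employee's role changes. The smoothing factor and threshold are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical sketch of per-user behavioural baselining (not a real product's
# algorithm). ALPHA controls how quickly the baseline adapts; THRESHOLD is
# how far above baseline an observation must be to count as anomalous.
ALPHA = 0.1
THRESHOLD = 3.0

class AccessBaseline:
    def __init__(self):
        # Per-user running average of daily file-access counts.
        self.avg = defaultdict(lambda: None)

    def observe(self, user, count):
        """Return True if `count` is anomalous, then fold it into the baseline."""
        prev = self.avg[user]
        anomalous = prev is not None and count > THRESHOLD * prev
        # Online update: the model keeps learning 'on the job'.
        self.avg[user] = count if prev is None else (1 - ALPHA) * prev + ALPHA * count
        return anomalous
```

Because the average is updated after every observation, a gradual, legitimate increase in a user's activity raises the baseline rather than triggering alerts, while a sudden spike still stands out.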

The necessity of this AI-driven approach to cyberdefence recently came to light when AI detected a serious threat on the network of a European bank.

After stealing credentials or otherwise gaining access to a SaaS service, the cybercriminals frequently ran scripts that searched for files containing keywords like ‘password’, in the hope of locating unencrypted credentials.
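The kind of keyword sweep described here is straightforward to script, which is part of why it is so common. The following is a hypothetical illustration, not the attackers' actual tooling; the keyword list and directory layout are assumptions.

```python
import os

# Illustrative only: recursively scan a directory tree for files whose names
# or contents mention credential-related keywords. This mirrors the keyword
# sweep described in the article; the keywords are example assumptions.
KEYWORDS = ("password", "passwd", "credentials")

def find_candidate_files(root):
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # errors="ignore" skips undecodable bytes in binary files.
                with open(path, "r", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue
            if any(k in name.lower() or k in text for k in KEYWORDS):
                hits.append(path)
    return hits
```

From a defender's perspective, the telling signal is not the script itself but the burst of file reads it generates, which is exactly the behaviour an anomaly model can flag.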

As they had already breached the network, the attackers could have reasonably expected to be in the clear – having already successfully bypassed any conventional security controls.

However, while these attackers would likely have exploited the cleartext passwords to escalate their privileges and further infiltrate the organisation, Artificial Intelligence was able to flag the activity as anomalous for the bank’s particular network because it breached the following model: ‘SaaS/Unusual SaaS Sensitive File Access’.

Ultimately, the AI’s nuanced and evolving understanding of what constitutes ‘unusual’ behaviour for each of the bank’s users and devices proved critical, given that the suspicious file access may well have been benign in other circumstances.

Social engineering

Perhaps the most difficult cloud-based attacks to counter are those that rely on social engineering, since they involve deceiving employees into handing over their credentials and other lucrative information voluntarily.

In these cases, AI anomaly detection is the optimal security strategy, as thwarting a social engineering threat before it’s too late means protecting employees from their own mistakes.

In 2018, AI detected a device on the network of a UK property development company attempting to connect to a rare external domain, just two seconds after the device landed on office365.com.

The domain had a suspicious name and offered HTTP connections to a form containing sensitive data transmitted in plain text, which would be vulnerable to a man-in-the-middle (MITM) attack.

Further investigation indicated that an employee at the property development company had been tricked into visiting the suspicious domain by a shortened URL in a phishing email.

Despite the user actively clicking on the URL to visit the page, Artificial Intelligence flagged the event as threatening due to the rarity of the destination domain in comparison to the company’s normal network activity.

AI has consistently demonstrated this ability to provide a safety net for human error – flagging anomalous connections and rare domains regardless of how well they may be disguised to the unsuspecting user.

From social engineering attacks to insider threats to stolen credentials, the risks inherent to SaaS are largely user-dependent.

As a consequence, any security tool up to the task of defending SaaS applications must understand how these users work, evolve and collaborate.

Indeed, it is precisely the sought-after interconnectedness and collaborative nature of SaaS platforms which makes the potential reward for attackers so great, as a single breach could allow them to compromise an entire company.

Yet the efficiencies promised by SaaS need not come at the cost of security, since the latest AI cyberdefences shine a light on even the most nebulous traffic in the cloud.
