Generative Artificial Intelligence Changes the Cyber Risks

Major leaps in the effectiveness of Generative AI and Large Language Models have dominated the discussion around artificial intelligence. Given its growing availability and sophistication, the technology will inevitably reshape the cyber risk landscape.

Cyber campaigns tend to be designed with specific objectives and aim to maximise returns for the perpetrators, so most threat actors have a strong incentive to keep their actions concealed and their attacks contained.

Due to the rapidity of advances in AI research and the nature of the highly dynamic cyber environment, analysis of the consequences these tools may have for cyber perils has been limited, according to Lloyd's Generative AI in Cyber Insurance Report.

Beinsure Media summarises the key highlights from the report: the Large Language Model landscape, the transformation of cyber risk, considerations for business and insurance, and the ways in which Lloyd’s will take action to develop solutions that build greater cyber resilience.

Lloyd’s has been exploring the complex and varied risks associated with AI since developing the world’s first autonomous vehicle insurance in 2016.

Lloyd’s is committed to working with insurers, startups, governments and others to develop innovative products and intelligent policy guiderails that can support the development of this important technology and create a more resilient society (see Artificial Intelligence Becomes an Unexpected Risk for Insurance).

Artificial Intelligence and Large Language Models

Approximately six years ago, a seminal paper published by Google Research (‘Attention Is All You Need’, 2017) introduced a novel architecture for encoding, representing, and accessing sequential data with complex structure. This architecture, dubbed the ‘Transformer’, would come to underpin almost all language, vision, and audio based generative machine learning approaches by 2023.
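
The core operation underpinning the Transformer is scaled dot-product attention, in which each position in a sequence forms a weighted mix of every other position. Below is a minimal NumPy sketch of that single operation; the toy matrices and function name are illustrative only and not taken from the report.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of every position to every other
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # each output row is a weighted mix of values

# Toy example: a sequence of 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```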

Early models (circa 2018) were limited in their capability, but progressively scaling up computing power and dataset size resulted in rapid advances, culminating in the release of ChatGPT, a web portal interface to GPT-3.5, in November 2022 to considerable public interest and concern. See How AI Technology Changed Insurance Claims.

Since then, notable events include the release of GPT-4 (March ’23), exhibiting capability comparable to humans across a battery of benchmarked tasks; Google’s Bard (March ’23); and openly released equivalents from Meta (March and July ’23).

AI model governance, financial barriers, guard-rails

The rise of powerful generative models brings with it tremendous opportunity for innovation, but also introduces significant risks of harm and misuse.

It is fair to ask why, despite the claimed capabilities of these tools, few material impacts on the cyber threat landscape seem to have occurred. The answer so far is that the industry’s focus on safety, together with economic considerations, has prevented widespread misuse.

Generative AI models and tooling

An important consequence of controlling the training process of LLMs and restricting public access to them through custom interfaces is that their usage can be strictly controlled in accordance with the governance and safety principles of the hosting organisation.

To date, access to all large commercial models except those released by Meta has been closely safeguarded and monitored through specialised interfaces such as ChatGPT or Bing Chat.

Without full access to the models and their internal components, users cannot circumvent these restrictions in any meaningful way; some ‘jailbreak prompts’ may allow a soft bypass, but they are ineffective for very harmful requests. Likewise, users cannot bypass screening mechanisms when they are forced to interact with the models through online portals.
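
To illustrate the kind of usage control this enables, the sketch below shows a purely hypothetical prompt-screening layer sitting between a user-facing portal and the model; the names BLOCKED_TOPICS, screen_request, forward_to_model and handle_user_prompt are illustrative and do not correspond to any real provider’s API.

```python
# Hypothetical sketch of a hosted portal's screening layer; the keyword list and
# function names are illustrative only, not any real provider's implementation.
BLOCKED_TOPICS = ("write ransomware", "phishing kit", "exploit for cve")

def screen_request(prompt: str) -> bool:
    """Return True if the prompt passes a (deliberately crude) keyword policy check."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def forward_to_model(prompt: str) -> str:
    # Stand-in for the call to the hosted model behind the portal.
    return f"[model response to: {prompt!r}]"

def handle_user_prompt(prompt: str) -> str:
    # Because users can only reach the model through the portal,
    # a refused request never reaches the model at all.
    if not screen_request(prompt):
        return "This request conflicts with the provider's usage policy."
    return forward_to_model(prompt)

print(handle_user_prompt("Summarise the Lloyd's report on generative AI."))
```

In hosted deployments the screening is far more sophisticated (classifier-based moderation, rate limiting, account-level monitoring), but the control point is the same: the provider sits between the user and the model.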

Several advanced techniques have been developed which have dramatically driven down the computational requirements for training, fine-tuning, and running inference for these models. Hundreds of LLMs now exist in the wild for a variety of tasks, many of which can be run locally on commodity hardware.

As of September 2023, it is possible to run an LLM with capability equivalent to GPT-3.5 on consumer-grade hardware such as a MacBook M2, completely locally (without an internet connection).
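
As a rough illustration of how low the barrier has become, the sketch below assumes the open-source llama-cpp-python bindings and an already downloaded quantised model file; the file path and prompt are placeholders rather than details from the report.

```python
# Minimal sketch: running a quantised open-weights LLM entirely on a local machine.
# Assumes `pip install llama-cpp-python` and a downloaded GGUF model file; the
# path below is a placeholder, not a specific release.
from llama_cpp import Llama

llm = Llama(model_path="./models/open-llm-7b.Q4_K_M.gguf")  # loads and runs locally

result = llm(
    "List three ways generative AI could change the cyber threat landscape.",
    max_tokens=128,
)
print(result["choices"][0]["text"])  # no internet connection or hosted API involved
```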

This means that all safeguards detailed above can be completely circumvented: models can be adjusted to answer all requests regardless of harm, and this can be done in completely sealed, cheap local computing environments, without any oversight.

A new cyber threat landscape

Overall, AI has the potential to act as an augmentation of threat actor capability, enhancing the effectiveness of skilled actors, improving the attractiveness of the unit cost economics, and lowering the barrier to entry.

It is likely to mean that more vulnerabilities will be available for threat actors to exploit, and that it will be easier for them to scout targets, construct campaigns, fine-tune elements of their attacks, obscure their methods and fingerprints, exfiltrate funds or data, and avoid attribution.

All these factors point to an increase in lower-level cyber losses, mitigated only by the degree to which the security industry can act as a counterbalance.

  • Initial access vectors which rely on human targets making errors of judgement (spear phishing, executive impersonation, poisoned watering holes, etc.) are likely to become significantly more effective as attacks become more targeted and fine-tuned for their recipients

  • Attacks are likely to reach broader audiences due to lower cost of target selection and campaign design, meaning the absolute number of losses, and the potential severity of each loss could grow

  • Industrial or operational technology attacks are likely to become more common as automation uncovers vulnerabilities

  • Embedding AI into software could create entirely new initial access vectors for threat actors to exploit, resulting in a larger attack surface and, consequently, more claims

  • The industrialised production of synthetic media content (deepfakes) poses significant challenges for executive impersonation, extortion, and liability risks

Though more companies will be vulnerable to cyber attacks and there will be more security flaws that threat actors can exploit, it is uncertain if this will lead to an increase in highly targeted attacks on specific companies, an increase in broad attacks aimed at many companies, or some other mixed outcome.

The increased number of potential targets and vulnerabilities creates the potential for growth in both focused and widespread cyber campaigns.

Overall, it is likely that the frequency, severity, and diversity of smaller scale cyber losses will grow over the next 12-24 months, followed by a plateauing as security and defensive technologies catch up to counterbalance.
