AI is here. How are innovators helping to structure its use in the workplace?

The recent explosion in generative AI is bringing plenty of business opportunities, but also fresh challenges. Employees are experimenting with chatbots from the bottom up: nearly three in five business workers now use generative AI on a weekly basis. This use is not always under employers’ control, with one in ten employees admitting that they use the technology behind their employer’s back.

This creates new risks. For example, workers might leak sensitive information via publicly hosted tools, and ‘hallucination’ – where models return incorrect information – could affect output quality. 

These emerging risks add to existing challenges, such as algorithmic bias and unexplainable ‘black box’ models. There is also the fear that productivity gains from AI will not be fully realised if it is implemented in an unstructured way.

Generative AI and human behaviour

How humans use generative AI is a key determinant of risk, so effective training is essential. 5mins, which offers TikTok-style video micro-lessons, has developed a generative AI training solution with content tailored to specific job functions. These micro-lessons raise awareness of AI-related risks and teach skills such as how to craft the most effective prompts.

There are also software tools for de-risking employee use of generative AI. Calypso AI, for example, tracks organisational use of public large language models (LLMs), blocking risky prompts before they leave the organisation and screening responses for harmful content. A prompt containing the organisation’s proprietary source code would be blocked, as would a response containing malicious code.
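
To make the pattern concrete, here is a minimal sketch of how a gateway of this kind might screen traffic in both directions. Everything in it – the regex rules and the call_llm placeholder – is invented for illustration and is not Calypso AI’s actual product.

```python
import re

# Hypothetical policy rules, for illustration only -- real products use far
# more sophisticated detection than simple regexes.
BLOCKED_PROMPT_PATTERNS = [
    re.compile(r"BEGIN (RSA|OPENSSH) PRIVATE KEY"),  # leaked credentials
    re.compile(r"\bclass \w+\(|\bdef \w+\("),        # crude proprietary-code marker
]
BLOCKED_RESPONSE_PATTERNS = [
    re.compile(r"rm -rf /"),                         # destructive shell command
]

def prompt_is_safe(prompt: str) -> bool:
    """True if the prompt may be sent outside the organisation."""
    return not any(p.search(prompt) for p in BLOCKED_PROMPT_PATTERNS)

def response_is_safe(response: str) -> bool:
    """True if the model's response is safe to return to the employee."""
    return not any(p.search(response) for p in BLOCKED_RESPONSE_PATTERNS)

def gateway(prompt: str, call_llm) -> str:
    # call_llm stands in for the request to a public LLM API.
    if not prompt_is_safe(prompt):
        return "[blocked: prompt contains restricted content]"
    response = call_llm(prompt)
    if not response_is_safe(response):
        return "[blocked: response flagged as unsafe]"
    return response
```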

Protecto is also working to prevent the use of public LLMs from resulting in data leaks. The startup’s solution masks sensitive information while preserving its context, so the LLM can still function effectively, with the masked data restored in responses. The company has also developed a product that enables employees to securely interact with in-house generative AI apps informed by enterprise data. Crucially, access to data is controlled based on the employee’s role, so the AI will not return information they are not meant to see.

Tackling the hallucination problem

While many of the risks associated with generative AI tools derive from human behaviour, there are also challenges inherent to the technology itself, most notably hallucination.

Companies in this space are working hard to tackle the problem. Vectara AI is one of many startups in the field using a technique called retrieval augmented generation (RAG). Its solution enables companies to create chatbots with reduced hallucination risk by grounding AI responses in facts retrieved from the organisation’s indexed data. The startup has also created an open-source tool that compares the hallucination rates of leading public LLMs.

Quantifying and managing AI risks

Quantifying business exposure to AI risk is vital, and Calvin Risk has built a platform that maintains an ‘inventory’ of all a company’s algorithms. Adaptive assessment tasks – completed at each stage in an algorithm’s development – quantify the regulatory, technical, and ethical risks of each model. This enables a company to see how risk levels change over time and access calculations of the probability and potential costs of adverse incidents. Any incidents that do occur are logged.
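
As a simplified illustration of the idea – not Calvin Risk’s actual scoring – an inventory entry might combine an estimated incident probability with an estimated cost to give an expected loss per model. The record structure and figures below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; the fields and figures are illustrative.
@dataclass
class ModelRecord:
    name: str
    incident_probability: float  # estimated annual probability of an adverse incident
    incident_cost: float         # estimated cost of one incident, in GBP
    incidents: list = field(default_factory=list)  # incidents logged so far

    @property
    def expected_annual_loss(self) -> float:
        return self.incident_probability * self.incident_cost

inventory = [
    ModelRecord("credit-scoring-v3", 0.05, 250_000),
    ModelRecord("support-chatbot", 0.20, 40_000),
]
for record in sorted(inventory, key=lambda m: m.expected_annual_loss, reverse=True):
    print(f"{record.name}: expected annual loss £{record.expected_annual_loss:,.0f}")
```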

As models become more advanced, it is increasingly difficult for humans to comprehend the techniques that power them. This means that many applications are effectively ‘black boxes’ – we struggle to understand why they have reached a certain answer. 

German startup QuantPi has developed ‘PiCrystal’, a computational framework for testing and evaluating the behaviour of black box models that have been trained on a particular dataset. With a few lines of code, data scientists can access advanced analytics that make a model’s behaviour more transparent, so teams can act on the findings and improve it.
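
QuantPi’s method is proprietary, but one generic way to probe a black box is permutation importance: shuffle a single input feature and measure how much the model’s accuracy degrades. A sketch, assuming the model is simply a callable that scores batches of rows:

```python
import random

def permutation_importance(model, rows, labels, feature_idx, metric):
    """How much does accuracy drop when one feature is shuffled?

    model: callable taking a list of rows and returning predictions.
    metric: callable scoring predictions against labels (higher = better).
    """
    baseline = metric(model(rows), labels)
    shuffled = [list(row) for row in rows]
    column = [row[feature_idx] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature_idx] = value
    # A large drop means the model leans heavily on this feature.
    return baseline - metric(model(shuffled), labels)
```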

For many companies, AI adoption will mean buying products from third-party vendors. With more than half of all AI failures coming from third-party tools, Armilla offers product verification and warranties for AI products, backed by major insurers. The startup first assesses the quality of an AI model, considering threats like algorithmic bias. If the model then fails to perform as promised, the purchaser is refunded the licence fee.

Preventing algorithmic bias

AI models can sometimes return unfair or unbalanced results because of skewed or limited input data. This is a particular problem for businesses, such as banks, whose decisions shape major life events like mortgage applications.

FairPlay has developed a product that enables models to be tuned to enhance fairness while preserving, or even bolstering, their performance. The startup also uses AI to determine whether declined applicants resemble ‘good’ borrowers in ways that are not considered by primary algorithms. 
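
One widely used first check in this area – again purely illustrative, not FairPlay’s own method – is the ‘four-fifths’ adverse impact ratio, which compares approval rates between applicant groups:

```python
def approval_rate(decisions: list) -> float:
    """decisions: 1 = approved, 0 = declined."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of approval rates; values below roughly 0.8 often flag bias."""
    return approval_rate(group_a) / approval_rate(group_b)

ratio = adverse_impact_ratio(group_a=[1, 0, 0, 1], group_b=[1, 1, 1, 0])
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.67 here -- worth investigating
```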

As organisations around the world look to implement AI in a structured way, using the right tools is going to be essential. The good news is that there’s a growing number of innovative solutions to help businesses make the transition to an AI-enabled workplace. 

However, deploying the tools alone is not enough. To ensure a successful implementation, organisations should embed them alongside robust training and education programmes, empowering employees to capture the benefits that AI tools can bring while avoiding the pitfalls.

That’s why at Edelman, we’ve developed our own generative AI training, as well as principles of responsible AI usage, to ensure that our teams understand and are equipped to harness the possibilities, while also navigating the risks. We’re also working closely with clients to deliver AI solutions specific to their needs today, while advising on longer-term opportunities and incorporating insights from our experience into the development of future roadmaps.


By Abigail Lloyd-Prescott, Senior Director, London Technology