We stand with Ukraine

LLM Hype and Concern: Benefits Versus Harm

Expert.ai Team - 5 April 2023

Since ChatGPT was released in November 2022, it has created a lot of buzz in tech circles, but also in the mainstream. It has set companies racing to get their latest (and usually largest, although that is starting to change) LLMs to market, from Microsoft, to AWS, Google to Facebook and a number of open-source models. Those experimenting with ChatGPT range from middle school teachers to company CEOs, and the combined momentum has caused a lot of hype and concern around the benefits versus its potential for harm. 

The criticism ramped up last week over a series of events. Last Wednesday, AI experts, industry leaders, researchers and others signed an open letter calling for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4. The next day, AI policy group CAIDP (Center for AI and Digital Policy) launched a complaint with the FTC, arguing that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” and that OpenAI’s GPT-4 “satisfies none of these requirements” and is “biased, deceptive, and a risk to privacy and public safety.” Finally, on Friday, the first major chip to fall was the call by Italian regulators for OpenAI to address certain privacy and access concerns around ChatGPT.  

As an AI company that provides natural language (NL) solutions, founded in Italy, this development was of particular interest. At expert.ai, our approach is not LLM (large language model) dependent. In fact, we offer combined approaches via hybrid NL, which enables transparency. This is one of our core principles and one of the four aspects of our “green glass” approach to responsible AI. 

Why Did Italy “Ban” ChatGPT?  

The order issued by the  Italian Data Protection Authority (Garante per la protezione dei dati personali) cites concerns over personal data protection related to GDPR, namely that ChatGPT has unlawfully processed personal data and that there is no system in place to prevent minors from accessing it. OpenAI has taken ChatGPT offline in Italy. It has 20 days to respond to the order, and if it fails to comply, could face significant financial penalties — up to 4% of annual turnover or €20M.The specific issues cited are within the list of main considerations that enterprises need to know that we highlighted back in February:  

  • ChatGPT has been trained on data scraped from the internet, including articles, social media content, Wikipedia and Reddit forums. As a TechCrunch article highlighted, “if you’ve been reasonably online, chances are the bot knows your name.” 
  • ChatGPT is generative AI, which means it generates human language content based on a prediction of the words and sentences that might come next, based on its training. This makes it vulnerable to “prompt injections” that can be used to deliberately manipulate the content it produces, with potentially dangerous effect. 
  • ChatGPT is not based on an understanding of relationships and context within language. While it can generate human language content that looks and sounds correct, it can also make things up quite convincingly—referred to as “hallucinating.” 
  • The LLM that ChatGPT is based on is considered a “black box” transformer that is not explainable, meaning that you cannot track how it provided results (whether accurate or inaccurate). 

How does this impact the future of ChatGPT and LLMs generally? What do businesses need to know about their own LLM deployments? 

Considerations for AI Applications Going Forward 

The concerns being raised are a reminder of the risks of some types of AI, but it’s also a time to reflect on the proven AI capabilities already at work. At expert.ai, we have delivered over 300 natural language AI solutions over the past 25 years, working with many Fortune 2000 companies to optimize processes and improve the work humans do. We don’t just insist on having a human in the loop, we work to humanize the work that is done – to make it more engaging and have humans add value to the solutions.  

In that vein, we want to share some general considerations around using any AI to solve your real-world business problems. 

1. Transparency and explainability should be built into any AI solution 

Large language models like GPT-3 and GPT-4 are so large and complex that it is the ultimate “black box AI” approach. If you cannot determine how an AI model arrived at a particular decision, this could eventually be a business problem and, as we are seeing play out now, a regulatory problem. It’s absolutely critical that the AI you choose is able to provide outcomes that are easily explainable and accountable. 

The path to AI that solves real-world problems with the highest degree of accuracy is through a hybrid approach that combines different techniques to take advantage of the best of all worlds.  

Symbolic techniques leverage rules and, in the case of expert.ai, a rich knowledge graph — all elements that are fully auditable and understandable by humans. When paired with machine learning or LLMs, these combined techniques introduce much needed transparency into the model, offering a clear view on how the system behaves in a certain way to identify potential performance issues, safety concerns, bias, etc.  

2. The data you use matters 

The data you choose to train any AI model is important, whether you’re working with an LLM, machine learning algorithms or any other model. 

Public domain data such as the data used to train ChatGPT is not enterprise-grade data. Even if the content ChatGPT has been trained on covers many domains, it is not representative of what is used in most complex enterprise use cases, whether vertical domains (Financial Services, Insurance, LifeSciences and Healthcare) or highly specific use cases (contract review, medical claims, risk assessment and cyber policy review). So, even for chat/search use cases—the ones that work similarly to ChatGPT—it will be quite difficult to have quality and consistent performance within highly specific domains.   

As we mentioned in our previous post, the very nature of the data that ChatGPT has been trained on also creates concerns for copyright infringement, data privacy and the use and sharing of personally identifiable information (PII). This is where it comes up against the European Union’s GDPR and other consumer protection laws.  

Natural language AI is most useful when it is built on, augments and captures domain knowledge in a repeatable way. This requires engineering guardrails (like the combination of AI approaches we use in Hybrid NL), embedded knowledge and humans in the loop. We built our platform on all three of these pillars for that reason and because it allows businesses to build accretive and durable competitive advantage with their AI tools. 

3. A human-centered approach is critical 

Having humans at only the beginning or only the end of an AI process is not enough to ensure accuracy, transparency or accountability. Enterprises need a human-centered approach, where data and inputs can be monitored and refined by users throughout the process.  

Only explainable-by-design and interpretable-by-design AI models offer humans full control during the development and training phases. Because it includes an open, interpretable set of symbolic rules, hybrid AI can offer a simpler way to correct insufficient performance. So, if an outcome is misleading, biased or wrong, users can intervene to prevent future mistakes and achieve the success metrics most valuable for each use case and improve accuracy, all by keeping a human subject matter expert in the loop. 

Whereas black box machine learning models only offer the opportunity to add more data to the training set without an opportunity to interpret the results, a hybrid approach can include linguistic rules to tune the data in the model. Hybrid AI makes it possible for people to be in control of a model. 

The end result should also include an analysis of not just the ROI provided but the human benefit. Did menial or redundant tasks get automated and is there value in the solution for the humans receiving the benefits? While ROI, consistency and automation are typical results in any AI project, those including natural language solutions often provide additional upside. The work that humans do is more engaging, critical and rewarding. In combination with humans in the loop, this is human-centered AI. 

Looking Ahead 

As the use of AI generally is growing in the mainstream, organizations are looking for solutions that can keep them competitive while also ensuring business benefits, regulatory compliance and internal accountability.   

As with any new technology, businesses need to be able to understand how to apply it to solve real problems in ways that doesn’t put their enterprises at risk. 

GPT and other LLM’s represent real capabilities but also require careful focus on governance, accuracy, bias and cost. As companies experiment with how to commercialize these technologies, we believe that inclusion of knowledge-based approaches provides a critical tool to ensure the accurate outcomes a business relies upon, in a way that is practical and responsible.  

Our enterprise AI platform is built with this approach: provide the tools, workflows and hybrid AI approaches to solve real problems in the most responsible, cost-effective way possible. 

We offer a governance framework to find the best solutions that deliver value from the data of language. We have always promoted our “green glass” definition of Responsible AI which ensures solutions are: transparent, sustainable, practical and human-centered.