Getting Business Value from Generative AI and LLMs 

Expert.ai Team - 26 October 2023

Deploying LLMs, GPT and Hybrid AI

Over the last 12 months, generative AI and large language models (LLMs) have captured headlines, corporate agendas and financial investment.  

Earlier this year, we conducted a survey to see how this development was impacting the enterprise, from their implementation plans to the use cases and challenges that were front and center.  

Now, almost a year later, we know that companies are experimenting with the capabilities of generative AI and LLMs and still grappling with some of the original concerns and challenges raised early on.  

Fundamentally, we know that companies are looking for safe and cost-effective ways to leverage these technologies. This was the focus of our recent webinar, “Deploying LLMs, GPT and Hybrid AI: Mitigating Risk and Increasing Business Value,” where expert.ai CEO Walt Mayo and Alan Packer, expert.ai board member and former Director of Engineering for Amazon’s Alexa Natural Language Understanding team, explored how organizations can adopt them in a safe, meaningful way.

Here are some of the key takeaways. 

When and Where Should Companies Consider Using Generative AI and LLMs? 

In the ten plus months since the launch of ChatGPT, there has been much broader recognition that language as a data form is tractable with software technology. This ability to replicate the way humans engage in language exchange has been immediately compelling, and companies around the world have been discussing what to do with these new capabilities.

So, how can companies understand where they should be considering generative AI and LLMs in their organization? Our panelists suggested three starting points:

  • Understand these technologies. Before implementing anything new, do the due diligence to understand the opportunities and the risks they present.
  • Start with the problem you need to solve. Generally speaking, framing a problem in terms of “I have a capability, now where can I deploy it?” is not the most effective way to think about your business challenges. Avoid the temptation to lead with the need to deploy generative AI, and pivot to a broader question: where is language as a data form critical to your business in general?
  • Use the simplest tool that works. Going back to the problem you want to solve, it’s always best to use the simplest tool for the job. Remember, these are powerful, complex technologies that can be very expensive, while an enterprise values predictability and explainability. When you can solve a problem predictably and deterministically, you should. If you don’t need this complex and relatively expensive technology, then don’t use it (see the sketch just below).
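
To make that last point concrete, here is a minimal sketch, entirely our own illustration rather than anything from the webinar: when the task is to pull well-formed identifiers such as policy numbers out of incoming text, a deterministic pattern is cheap, predictable and fully explainable, and no LLM is needed. The policy-number format and helper function are made up for the example.

```python
import re

# Hypothetical policy-number format used only for this illustration.
POLICY_NUMBER = re.compile(r"\bPOL-\d{4}-\d{6}\b")

def extract_policy_numbers(text: str) -> list[str]:
    """Return every policy number found in the text, in order of appearance."""
    return POLICY_NUMBER.findall(text)

if __name__ == "__main__":
    sample = "Claims received today reference POL-2023-001482 and POL-2023-001490."
    print(extract_policy_numbers(sample))  # ['POL-2023-001482', 'POL-2023-001490']
```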

Do LLMs Require Large Repositories of Proprietary Data for Training? 

Assuming you have the right skills and can access the data, is there enough data to use? 

Because of the complexity of LLMs, we are still understanding how to obtain the best results from them. Companies like OpenAI and others have invested billions of dollars into the kinds of capabilities that are now emerging. An organization has to think carefully about whether mastering the complexity of LLMs is something that they want to take on as a core capability. 

One of the really powerful aspects of LLMs is that they have been trained on a vast portion of what humans have written. As a result, they carry a large amount of semantic knowledge and understanding of the world.

This is why the concept of fine-tuning works. You can bring a relatively small amount of your own data, and fine-tuning draws out the knowledge that’s already inside the LLM. In some cases, a little clever prompting is all you need to get the results you want. So no, you typically do not need huge amounts of your own data to make effective use of these models.
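
As one illustration of the “a little clever prompting” idea, here is a minimal few-shot prompting sketch using the OpenAI Python client. The model name, system prompt and example claims are placeholders of ours, not a configuration discussed in the webinar; it assumes the openai package is installed and an API key is set in the environment.

```python
# Few-shot prompting sketch: a handful of labeled examples in the prompt is often
# enough to steer a general-purpose LLM toward a domain task, without handing
# over large volumes of proprietary data for training.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "Claim: Water damage from a burst pipe in the kitchen."},
    {"role": "assistant", "content": "Category: Property - Plumbing"},
    {"role": "user", "content": "Claim: Rear-end collision at a traffic light, no injuries."},
    {"role": "assistant", "content": "Category: Auto - Collision"},
]

def classify_claim(description: str) -> str:
    """Classify a claim description using only a few in-prompt examples."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "You label insurance claims with a category."},
            *FEW_SHOT_EXAMPLES,
            {"role": "user", "content": f"Claim: {description}"},
        ],
    )
    return response.choices[0].message.content

print(classify_claim("Hail dented the roof of the insured vehicle."))
```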

However, when you get into the business applications of these technologies, data quality trumps data quantity. Your ability to generate insights that are relevant to the business challenges you face depends on the kind and quality of the data you’re working with.

Do the Same Governance Principles Still Apply?

Any technology you use should be subject to the same set of practices for responsible technology and risk management—this has not changed. However, given the nature of how LLMs consume data and how they are trained, there is a whole new world of governance principles that companies must consider.

On the back end, these models are voracious, and they want to train on everything they see. For most publicly available models, the default policy is that, if you use them, they’re going to use your data. On the front end, you have to bring a lot of context and data to the prompt in order to get the kinds of answers you want out of it. Once it’s in the model, it’s available for other people to use as well. As a result, you can end up with the unintended consequence of your proprietary data, data about your customers, in these models.

Today, a data privacy concern can quickly escalate into a potential liability issue. Therefore, companies must have governance guardrails, as well as strict policies, for whether, how and where such technologies may (or may not) be used.
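
As one illustration of what a guardrail might look like in practice, here is a minimal sketch, our own example rather than a recommendation from the webinar, that masks obvious personal identifiers before text leaves your environment in a prompt. The patterns are simplified examples, not a complete privacy solution.

```python
import re

# Illustrative patterns only; real deployments would use a vetted PII-detection tool.
REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders before prompting."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer Jane Roe (jane.roe@example.com, 555-867-5309) disputes the charge."
print(redact(prompt))
# Customer Jane Roe ([EMAIL REDACTED], [PHONE REDACTED]) disputes the charge.
```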

What are the Best Approaches to Successfully Access Generative AI and LLM Capabilities? 

Remember, these technologies are just a subset of all the tools available to drive value in your business.  

The capabilities they offer are compelling, but we have to put them in the context of the problems that are best solved by summarizing large volumes of language data. Think about processes that involve a mass of information that someone has to review in order to determine its value; life sciences and claims management, for example, are areas where generative AI and LLMs can provide value in the short term.

Also, it’s important to remember what’s necessary to be able to work with these technologies. You need: 

  • Technology expertise: This includes both knowledge and practical experience.
  • Domain expertise: These technologies are very general, so you will need to be able to tailor them to your domain and your business challenges.
  • Finally, remember that generative AI and LLMs are a component, not an entire solution. Alone, they will not solve your business problems; they need to be integrated into a broader solution, one that incorporates them or part of them, to work.

It’s worthwhile to keep in mind that LLMs are an input to an additional workflow. If you focus all of your efforts on training and achieving high accuracy, you still have to solve the integration concerns and make the output usable for knowledge workers or downstream systems.
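
To illustrate the “input to an additional workflow” idea, here is a small, hypothetical sketch of packaging an LLM-generated summary into a structured record that a downstream system or reviewer could consume. The field names and helper are ours, chosen for the example, not a prescribed design.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# The LLM's output is just one field; the surrounding metadata is what makes it
# usable for knowledge workers and downstream systems.
@dataclass
class ReviewItem:
    document_id: str
    summary: str            # produced by the LLM
    model_name: str         # which model generated it, for auditability
    generated_at: str
    needs_human_review: bool

def to_review_item(document_id: str, summary: str, model_name: str) -> ReviewItem:
    """Package an LLM summary for a downstream review queue."""
    return ReviewItem(
        document_id=document_id,
        summary=summary,
        model_name=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
        needs_human_review=len(summary.strip()) == 0,  # flag empty output for a person
    )

item = to_review_item("claim-001482", "Burst pipe; covered under section 4.2.", "gpt-4")
print(json.dumps(asdict(item), indent=2))
```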

 

Thinking about Generative AI and LLMs for your business?

Watch the recap of “Deploying LLMs, GPT, and Hybrid AI: Mitigating Risk and Increasing Business Value” to hear about the approaches and models that are both safe and cost-effective, and the real-world use cases that drive competitive advantage.

Watch Now