
Adding GPT and LLMs to your Enterprise Hybrid Approach

Watch this session featuring real-world examples and the practical considerations organizations need to know to optimize results in the enterprise.

Transcript:

Luca Scagliarini:

Good morning and good afternoon, everybody. Thank you for being here for this new episode of our NLP Stream. This is the first episode of a series we are starting today, in which we will progressively cover what we can call the areas of innovation in the era of NLP. Today specifically we are going to cover ChatGPT, its integration with our platform, and the advantages of a hybrid approach.

With me today is Marco Varone. Marco is the CTO and founder of expert.ai. He has been involved in the fantastic world of NLP for more than 20 years, and over those 20 years he has watched this technology go by many different names.

So, in order not to lose any more time, I'll let Marco start his presentation. You will have the opportunity to ask questions, so make sure you put them in the comments. I will manage them and make sure we leave a few minutes at the end in case there are questions we want to address immediately. If not, we will answer all the questions over the next few days. Marco, your cue.

Marco Varone:

Thank you, Luca, for the introduction. Hi everybody, and thanks for participating. Let me start with the usual technical step, which is sharing the screen, hoping it works well. Let me see if it goes as it should. So, sharing it now. I hope you can see the screen. Luca, can you confirm, just to be sure?

Luca Scagliarini:

Yes, yes, we can see it.

Marco Varone:

Okay, very good. So, as Luca said, today I will try to share what I hope are useful considerations, observations, comments and analysis about the impact, the value and the use of what has become very famous as GPT, or ChatGPT, but which has actually existed in some form for a few years, because this is only the latest incarnation of the large language models.

I will try to be very practical and down to earth. As Luca said, I have been in this field forever, more than 25 years by now, so I have a ton of experience implementing NLP solutions to solve real problems for enterprises. I will try to say a few things that have not already been said or that you have not seen everywhere; there has been a tremendous amount of coverage of GPT, ChatGPT and these models in the last few months. So we will try to give you some additional angles and, I hope, something interesting that has not already been discussed at length.

We have already done the introduction. My presentation is split into four parts. I will try to give a bit of background for both people who are more expert in the field and people who are a bit less expert. It is clearly very difficult to summarize something as complex as NLP in a few minutes, but I will do my best.

With the background settled for everybody, I will share considerations about what is happening, what is new, what ChatGPT is, how it can be used, and what it is good for and not so good for, linking this directly to our platform, which has been designed from the start to implement a hybrid approach to NLP. I will try to show where the starting point is, what something like GPT can add to the platform in terms of value and capabilities, and some possibilities for mixing different approaches in the best way across a set of different scenarios.

And I will finish by giving you some very concrete and solid takeaways that you can, hopefully, use in your day-to-day activities. So let me start with the background.

I know that these days it can come as a bit of a surprise, but NLP, natural language processing, the capability of software to understand, process and do something valuable with language and text, existed well before ChatGPT. As I said, I have been in the field for more than 25 years, and there were people working on this before me, so it is really something that has existed for many, many years. What is different now is the huge attention and interest in NLP, which, by the way, is a very important and useful thing for us.

ChatGPT was released only a few months ago, in November. It got a lot of attention because it is implemented as a chatbot: something you can interact with naturally, ask questions of, get answers from, assign activities to, and that can generate text. NLP is a super wide field, and ChatGPT and the majority of language models do not really address the full set of NLP problems but rather, even if they can be used for other parts as well, the subset called natural language generation: creating new text based on input from the user, from a very short input up to a very long one.

ChatGPT is on everybody's lips, but behind ChatGPT there is a very solid language model, the GPT model, which went from version 1 to 2 to 3; the current version is 3.5, and, like ChatGPT, it was built by OpenAI.

Why all this attention now, considering that GPT versions have been available since 2018, and honestly they were not even the first language models, because the first work in the large language model space was done by Google? I know that Google is now considered a bit behind compared to OpenAI and Microsoft. I don't think that is really the case. They were among the first companies to build these types of models, and for sure they have something as good as or even better than GPT. For now they are simply not giving the open access that was the real, fantastic marketing idea of Microsoft and OpenAI. For now they may be using it internally. They have announced that they will make it available to a much larger public; we will see in the coming weeks and months what they can do.

ChatGPT is focused on generation, as I said, and, as probably all or at least the majority of you have tested, it can generate very human-like, very convincing language based on what you ask, with the length and style that you ask for. It is really impressive. I must admit that compared to what was available before, the quality is significantly better, and it has a broad spectrum of uses and applications.

It can also perform other NLP tasks. It is not as good at them and they are not its focus, but translation is one example; summarization, which will be at the center of my demonstration of the integration between our platform and GPT, is another; and so is the extraction of specific information from text.

Everybody has seen, or has read somewhere, or has been told, that all these language models, and ChatGPT is no different, generate text but cannot guarantee anything about whether that text is correct; it can be a mix of invented information and real information in any percentage.

So everybody has seen that it has these sorts of hallucinations: the output seems correct, but then you cross-check and find it is wrong, either in a specific piece of information, or in part of the text, or the full text is more or less invented. And even when you try to be very factual and to retrieve a specific piece of information, it can only produce a sort of approximation of that information. So, for sure, generation is better than in the past, but the amount of invented or not-quite-correct content is also much bigger than in the past.

In our platform we decided to integrate the GPT model that sits behind ChatGPT, because in the end ChatGPT is a set of additional components on top of the core model. And for a platform that is used to implement NLP solutions, what makes sense to integrate is the core model.

So this is where we start from: a big [inaudible 00:10:39], a big step, big improvements. Let's see where we come from, because, as I said, large language models were not invented by OpenAI and there is a history before that.

What are these large language models? In the end they are machine learning models: something you train and create from a massive amount of textual data, and the starting point is an unsupervised process. That means there is no person directing the training process. Yes, there are some guidelines, some rules, some logic implemented as a starting point, but it is mainly a matter of reading millions and millions of documents and building the model step by step through a pretty long and expensive process.
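
The unsupervised process Marco describes can be illustrated in extreme miniature with a bigram model: counting which word tends to follow which in raw text, with no human labeling at all. This is a hypothetical toy for intuition only; real large language models use transformer networks trained on billions of documents, but the "learn from raw text, no supervisor" idea is the same.

```python
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """'Unsupervised' training in miniature: count which word follows which.
    No person labels anything; the statistics come from the raw text alone."""
    model = defaultdict(lambda: defaultdict(int))
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def most_likely_next(model: dict, word: str) -> str:
    """Predict the continuation seen most often during training."""
    followers = model[word]
    return max(followers, key=followers.get)

# A tiny 'corpus'; a real model would read millions of documents instead.
corpus = "the spears will return to australia and the spears will be displayed"
model = train_bigram_model(corpus)
print(most_likely_next(model, "spears"))
```

Scaled up by many orders of magnitude, with far richer architectures, this next-word statistical machinery is the core of what the training process builds.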

Where does the content used to create these huge language models come from? It is taken mainly from public data sources. And honestly, public does not mean that all the content, as we will see later, is free, because a lot of the content used to create these models belongs to somebody, and there was no explicit authorization to use it.

There are many language models available; some of them are open source and free, some of them are only hosted in the cloud. GPT is a very good example. We have seen a combination of approaches, tuning and verticalization. I am listing here some of the most famous models, and in general, as you will see, they started small and grew to the sizes we have reached now.

One point I wanted to highlight before moving to the next slide is the final bullet that you can see. It will be mentioned again and again as a key element in explaining why GPT is getting much more attention than similar language models did in the past, and why we see an improvement compared to what was available up to a few months ago.

What they did was add a lot of human supervision to create ChatGPT. So instead of leaning only on unsupervised learning, they mixed the two. And by the way, that makes a lot of sense, because in the end the only intelligent beings on Earth are humans. The supervision of people, of experts, can really add a lot of value and a lot of knowledge to these models, even if in a somewhat implicit way.

Very quickly, just to give an idea of the timeline of the language models: it started in 2018, and BERT is the most famous of the initial language models. As you can see, it had 110 million parameters. I will not discuss what that means, but it gives an idea of how much more information has been crunched between the initial BERT model and the GPT used in ChatGPT, which, as you can see, has 175 billion parameters. That is an enormous increase in size, and it is reflected in the capability of the system, the kind of text that can be generated, and the kind of NLP tasks that can be executed by this model.

I don't want to say too much more about ChatGPT itself because, as I said, it is everywhere, okay? The noise is dying down a little, but it is still everywhere: in the news, on websites, in blogs, even on television shows, and people are expressing every kind of opinion about ChatGPT. "It's a revolution." "It's a small innovation." "It's useless." "No, it's fantastic." "It's the most intelligent piece of software I have seen in my life." "Yes, but you can't trust it." We have read and heard everything about ChatGPT.

What is important for me today is to try to cut through the noise and the hype and say: okay, but in the enterprise space, where you need to solve real language problems, text generation is frankly not the most common problem. Yes, there are companies that generate text and sell it, or use text to sell better, and they can find very interesting value, up to a point, in the generation capabilities of ChatGPT and other language models. But the majority of use cases and problems in enterprises are linked to natural language understanding: trying to understand what is written and extracting the relevant information.

So, as I said, I could use a trivial example, but I don't want to pile up examples; I will try to do it live, which is much more interesting. Why is there so much attention? Because it is true, it is a fact: its generation capabilities are genuinely better than its predecessors'. We saw that the language model history started in 2018, and thanks to the big investment, as everybody has seen, Microsoft gave OpenAI the possibility of investing hundreds of millions, probably a billion or even more, in implementing this kind of model, ChatGPT and everything else, plus an enormous amount of computing power and computing time. And the result is there: the generation capabilities are really better and more flexible. You can generate any type of text, with different levels of quality, which is normal. This is the main reason. As I said, it is probably not so relevant for the majority of enterprises, but I wanted to explain why it is getting so much attention.

At the same time, what do we need to be cautious about? ChatGPT, like every other language model, large or small, is not intelligent at all. Here is one example of the millions you can find on the web: a question that every person older than six can probably answer easily and always correctly, and ChatGPT gives the wrong answer.

I wanted to highlight a very important thing. The fact that ChatGPT and other language models are not intelligent, at least not under the normal definition of intelligence, is not, I believe, a problem in itself. For many, many years, as I told you, we have been in the field for more than 25 years, I have always tried to stay away from the discussion of "is it intelligent, and therefore useful, or not?" My position is: if an artificial intelligence tool can solve a problem in a cost-effective way, then for me it is fine, it has value, I will use it and I will suggest that my customers use it. If it is not that intelligent, that is not a problem. 99% of software is not intelligent, and we use a ton of software every single day, from the smartphone to the computer, the tablet, even the television these days. We don't stop using the smartphone because the software in it is not intelligent.

So I really wanted to separate these two dimensions. Is ChatGPT intelligent? No, not at all. Can it add value, like any other piece of software that solves a problem, or some problems, in a cost-effective way? Yes. So all the discussion along the lines of "it's not intelligent, therefore it has no value", which could equally be applied to many other pieces of software, I don't think is that important or relevant.

Luca Scagliarini:

Marco, let me add one comment here.

Marco Varone:

Sure.

Luca Scagliarini:

It's something that is not really linked to being intelligent or not, but one aspect that for sure requires our attention in general is that we have created a machine that can generate very plausible, very real-sounding content but that can make things up. So we have created a machine that can generate a lot of partially true information, let's put it that way. This is something that goes beyond any single application or its value in a single application: it is the potential impact of generating even more disinformation or misinformation, which is something I think we should keep under control.

Marco Varone:

Absolutely. Thanks, Luca, for the addition. I will address this a little at the end of the demo part.

Okay, so how can we leverage large language models, including GPT, in the enterprise? Does it even make sense? Our answer, my answer, is yes, but... We always want to share as much information and be as transparent as possible with our customers and the people we interact with, to explain the possible problems as well as the value of any new technique, technology, component or piece of software that you can apply to solve a problem.

So let me start with the problems, and then I will go to the value that large language models can create, and are already creating, in some enterprise use cases. You could also go the other way around, value first, then problems. I start with the problems because the technology is so recent that I prefer to focus on them first and then move immediately to the other side of the coin.

First of all, and these points are in no particular order, all of them are important: as I said, large language models are trained on public data with a limited amount of supervision. For the earlier GPT versions there was really no supervision, or very, very little. So the model is learning knowledge that carries every kind of risk: bias, discrimination, wrong information and so on. This is something you cannot avoid, at least not completely, and it is important that this is clear. In many cases, nobody knows exactly how good what has been learned actually is.

The second point I wanted to stress, which I mentioned in one of the previous slides, is that the content used to train these language models is in many cases owned by somebody else. For the systems that generate images, the other hot area of content generation these days, the discussions, the lawsuits and the problems around using images that are someone else's property, and that, as you can see, create value without the owner being paid, are already pretty intense. But the same applies to text. If you are analyzing the content of our website and learning something about me or our products from it, you are using my property, because my content can be used to generate something that somebody will pay for. I think this will be a growing issue and will become more and more relevant in the coming months and years.

The next points are, I believe, the most important ones. We know that artificial intelligence is creating value and will continue to get better and more widely used, but the majority of approaches, unlike, for example, what we use, which is fully explainable in the majority of situations, are based on a sort of black-box approach. You get an answer, a result, a text; in many cases it may be right, but you don't know how the result was obtained and you can't explain why. If you have tested ChatGPT, you have probably been surprised: one question gets a very good answer, a second, similar one does not, and you ask, "But why?" And it is impossible to really explain why.

Second, if you apply this kind of system in the enterprise, you need somebody who is accountable for the results and the answers. In a company, when you are interacting with your customers and your users, you can't just share a piece of information or give an answer and, if it is wrong, say, "Ah, but it was GPT that made it up or got it wrong." No, that can't be accepted. If you are a customer, you are paying a company for something, so somebody should be accountable. And this is still an open problem.

On the responsible-AI side, we should remember that these large language models require an immense amount of computing power. In a period when we are trying to move more and more toward green technologies, reducing the carbon footprint, reducing energy use and so on, it seems to me that we are moving exactly in the wrong direction. Many of the problems you can approach or solve with a large language model can be solved with the same quality using a fraction of the energy, maybe a thousandth or even less. So it is a somewhat contradictory situation: we say, "Okay, we should save energy and spend less," and yet we are spending more and more.

Replicable AI: I think this is probably the most important point for the enterprise. With a large language model, and again, I believe many of you have tested this with GPT, you can submit the same content, the same prompt, the same input, and get a different result. This is by design; it will always happen, at least with the current generation of large language models. I don't believe this is something you can accept in the enterprise, unless there is human supervision every time to make sure these differences are somehow fixed. We believe this can create big obstacles to a wide implementation or use of these capabilities inside enterprises.
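
The non-replicability Marco points to comes from how these models generate text: the next token is sampled from a probability distribution, and a "temperature" setting controls how spread out that distribution is. The sketch below is a hypothetical toy (the token names and logit values are invented for illustration), not OpenAI's implementation, but it shows the mechanism: at high temperature the same input can yield different outputs, while driving the temperature toward zero collapses sampling onto the single most likely token.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Scale logits by temperature, then normalize into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Draw one token from the temperature-adjusted distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Invented example vocabulary and scores for one generation step.
tokens = ["spears", "muskets", "artifacts"]
logits = [2.0, 1.0, 0.5]

# High temperature: repeated runs of the same input can pick different tokens.
varied = {sample_token(tokens, logits, 1.5, random.Random(seed)) for seed in range(20)}

# Near-zero temperature: the distribution collapses onto the argmax,
# so the output becomes effectively deterministic.
greedy = {sample_token(tokens, logits, 0.01, random.Random(seed)) for seed in range(20)}
print(varied, greedy)
```

Hosted APIs typically expose this as a `temperature` parameter; setting it low reduces, but does not by itself fully eliminate, run-to-run variation in a hosted model.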

Another relevant element: when there is a new model, in particular when you are using a cloud-hosted model, the provider will change the model, and you will get results different from before. If you sent a prompt six months ago, you got one result; then there is a new version of the model and the result will be different. It will probably be better on average, but it will be different. And again, you can't replicate it: who is accountable for this difference compared to the answer from six months earlier?

Domain knowledge is an important point; it is key in the enterprise. You are not chatting or generating content on general topics: you need to work with your content, your domain, your specific use cases. And training and tuning these models for vertical domains remains a long, expensive and difficult process.

The final point I wanted to highlight is that all these big new language models, and GPT is a perfect example, are hosted in the cloud and offered by cloud providers. You have seen how hard Microsoft is pushing GPT. These models are so big and so expensive to run that this is, right now, more or less the only possible solution. It means you have a kind of lock-in that is even stronger than in other situations, and the bigger the model and the more time it takes, the more money the cloud provider makes. So the cost of using such a solution can become very high very quickly.

With the COVID pandemic we saw a big migration from on-prem to cloud, and that has a lot of advantages. But now companies and enterprises are seeing the bill for moving to the cloud, and we all know that those costs can be very, very high.
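
The cost concern can be made concrete with a back-of-the-envelope calculation. Per-token billing is how hosted LLM APIs are typically priced; the volumes and the price below are assumptions chosen purely for illustration, not actual OpenAI or expert.ai pricing.

```python
def monthly_api_cost(docs_per_day: int, tokens_per_doc: int,
                     price_per_1k_tokens: float, days: int = 30) -> float:
    """Rough cost model for a cloud-hosted LLM billed per token.
    All inputs are assumptions; real pricing varies by model and provider."""
    total_tokens = docs_per_day * tokens_per_doc * days
    return total_tokens / 1000 * price_per_1k_tokens

# Hypothetical enterprise workload: 50,000 documents/day, ~1,500 tokens each,
# at an assumed $0.02 per 1,000 tokens.
cost = monthly_api_cost(50_000, 1_500, 0.02)
print(f"${cost:,.0f} per month")
```

Even at modest per-token prices, document volumes typical of enterprise pipelines push the bill into tens of thousands of dollars per month, which is the "very high very quickly" effect Marco describes.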

Before we move to the demo, let me show the other side of the coin. I started with the problems of language models, GPT being one of them; let me now show where they can help, where they should help. NLP is the most complex problem in AI, and everything that can help us do better on NLP problems is more than welcome. I can tell you that after 25 years in this space.

Complex problems require a combination of approaches and the best, richest, deepest toolbox you can use. In NLP, a one-size-fits-all approach doesn't work. There are too many different problems and too many different use cases for anyone to say, "Okay, with ChatGPT, or GPT, or whatever Google will offer in a few weeks, I can solve all my problems." No, that is simply impossible.

Large language models are already used in NLP for a subset of use cases, and they can already reduce the amount of work or simplify the steps. Even before GPT we were already seeing, and in some cases using, value coming from language models. And considering that GPT is the richest, deepest and most flexible language model, it can reduce and simplify even more.

So, can GPT create value in solving some NLP problems? Yes. This is the reason. Clearly, you always need to keep in mind the problems I highlighted. If you have a small budget, it may help solve your problem but the cost may be too high. If you have a very specific problem, the tuning may be difficult. If you have a more generic problem, I am sure it can create some value.

The real point that I will try to address and explain in the demo is that to solve real-world NLP problems you don't need only a tool or a model. You need human intelligence, as I said before, domain and process knowledge, and general knowledge, and language models excel at general knowledge, to build a solution that is successful.

We believe, and I think it is demonstrated by the large number of solutions and implementations we have delivered to our customers, that the best approach is the one that combines everything. It is a hybrid approach because it combines different tools and techniques, symbolic AI, document understanding, machine learning, large language models, under one roof, so that you have the possibility of selecting the best and most efficient way to implement something cost-effective. As I said before, even if it is not super intelligent, if it is cost-effective and manageable, go for it: you are solving a problem and getting value in exchange.

Okay, so let me move to the demo part. Let me move here. What I will try to show you is what we can do, what you can do, in our platform in terms of understanding the content of text and making sense of it for any kind of content enrichment, process automation, better search experience, many, many types of use cases, and how something like GPT can add value on top of this.

What I did is simply go to the BBC site and select this article, published four hours ago, so something fresh. I am doing the whole demo live, so if there is some small technical issue, please forgive me, but I like to do things live and real. It is not a canned demo prepared 10 days ago. I will show what our platform can do, and the integration with GPT, sorry, GPT, not ChatGPT, is done live.

So I will just take this URL and move to our platform. This is the viewer for the full language understanding. Let me click here. What is this article about? It is about some spears that James Cook took 300 years ago from a local population in Australia, and that will now be returned to Australia. So it is something linked to the past, the present and the future.

If you analyze this with our platform, what you will see here is what we call the cognitive map. What is the cognitive map? It is a representation of all the relevant knowledge, information and relations inside the document, trying to understand things at a level similar to what we do when we read the text. Okay, so-

Luca Scagliarini:

Marco?

Marco Varone:

Yes?

Luca Scagliarini:

Marco, sorry, maybe you can expand it so that it's a little bit bigger on the screen.

Marco Varone:

Better?

Luca Scagliarini:

Yeah.

Marco Varone:

Okay. Sorry, sorry. So this is the cognitive map. For example, we recognize the man-made objects in this document, the people, the organizations, the places, the professions, the buildings, things about history, Australia, the mass media. So this is not text as you normally see it, or a set of keywords; these are really concepts and elements. You can also recognize the relations between the different elements. For example, if I double-click on James Cook here, you see all the relevant information.

This is knowledge the system extracts from the text by recognizing the entities, the concepts and the different relations. This information can be learned, so that the next time you analyze a text, you know a bit more than before. This part transforms the text into structured knowledge: it is an explicit transformation into knowledge. And it is based on a rich, explicit knowledge representation that lives inside our knowledge graph. I will show a couple of examples very soon.

In this way we can understand a lot of the content of a text. But one element that is more difficult to produce with our approach is a summary of the document. If I move to this view, sorry if the text is a bit small, what we can do, before showing how we can achieve this with GPT, is recognize the most relevant sentences of the text, and typically it works pretty well; they are these three. But that is not a nice summary, a nice abstract, like the ones you have seen generated by GPT.

So, given that our platform is an open platform, what we have done is integrate GPT into the platform. You can analyze any text, and for any text you can recognize and extract all this high-value, super-relevant information out of the box, and then use GPT to produce a better summary.

What I will show here is a workflow where there is the platform's core linguistic analysis, its linguistic understanding, and there is GPT, which generates a summary from the same text. Okay?

So let me do the same thing I did before. I copy the URL, the same one we used before, and then I analyze the text. What GPT has done is produce this summary. Take ten seconds to read it; I think it is pretty good. It reads as if a person wrote it. I instructed it to create a summary of three sentences, and in this case I believe it has done so. It is not clear that you can summarize everything in three sentences, but it is good.
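
The instruction step in a workflow like this can be sketched as a small prompt template. The function name and the exact wording below are assumptions for illustration, not the platform's actual API; the point is that the "summarize in three sentences" instruction is just a parameterized string sent to the model alongside the article text.

```python
def build_summary_prompt(article_text: str, max_sentences: int = 3) -> str:
    """Build the instruction sent to a completion endpoint.
    The wording is illustrative; a real workflow may phrase it differently."""
    return (
        f"Summarize the following news article in at most "
        f"{max_sentences} sentences:\n\n{article_text}"
    )

# The actual call would go to the hosted model, e.g. (illustrative, not run here):
#   response = client.completions.create(model=..., prompt=build_summary_prompt(text))
prompt = build_summary_prompt("Spears taken by James Cook will be returned to Australia.")
print(prompt)
```

Changing the task later, as the demo does when it switches from summarization to listing weapons, amounts to swapping in a different template while the rest of the workflow stays the same.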

So this is a very good, simple and easy way to integrate what is already in the platform, the capability to understand information at a very deep level, with an external capability. In this case the summarization is pretty good; right now it is probably the best system on average at summarization, particularly of news. So you can have a workflow that has all the value of understanding the text plus a very good summary.

Clearly, the issues and problems I mentioned before are still there. Before the start of the presentation I did the same operation: I analyzed the same text and asked GPT to generate a summary. As you can see, these are three different summaries it generated, and this one is the fourth, okay? No, actually, this one is the same as the third.

All of them, if you look at them quickly, make sense. Some are better than others: for example, the second one is not so good because it refers to "their country" without saying which country that is. But this is a very clear demonstration of the fact that you can't replicate results. Every time you ask for a summary, three sentences for the same document, you will get different results, and some summaries are better than others. So, value? Yes. But there is the issue that you can't replicate the same thing every time.

Let me try to do something more with GPT. Sorry. It can be useful and it creates very good summaries; now let's try to do a bit of extraction from the document using the same approach. So I will go back to the workflow and change my request. Oops, sorry.

What I asked GPT before was to summarize the news content in a maximum of three sentences. Let me do something different and ask which weapons are listed in the text. So we ask something different, we save, and we update the model. I am doing it live in our platform.

Then I execute the analysis again, test the workflow, and click. "Weapons listed in the text are: original spears." Yes, this is the core, the most important weapon mentioned. But if you go back to our understanding, you will see that we know more, and better, than GPT: together with the spear, which, as you see, is shown big because it is the most relevant weapon, we know that there is a musket in the text. And we know that a musket is a weapon because we have our knowledge graph, which holds millions and millions of items of knowledge, and it is explicit knowledge, not implicit knowledge.

So we can recognize and extract the full information, with better coverage, I would say, than GPT. And as you see, this is live: if I move to the text here, and search and select the weapon category, you can see that there are the spears and also the muskets. So even when we are talking about pretty general knowledge, you can see the limits of something like GPT, because there is no explicit representation of knowledge. It understood the spears, but it missed the musket.
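
The explicit-knowledge idea can be sketched with a tiny stand-in for a knowledge graph: a lookup table that maps surface forms to concepts and categories. This toy is a drastic simplification invented for illustration (real knowledge graphs hold millions of interlinked entries and handle morphology, ambiguity and relations), but it shows why "musket" cannot be missed: the category membership is stored explicitly rather than implied by training statistics.

```python
# Toy stand-in for a knowledge graph: surface form -> (concept, category).
KNOWLEDGE_GRAPH = {
    "spear":   ("spear", "weapon"),
    "spears":  ("spear", "weapon"),
    "musket":  ("musket", "weapon"),
    "muskets": ("musket", "weapon"),
    "champagne": ("champagne", "beverage"),
}

def extract_by_category(text: str, category: str) -> set:
    """Return every concept of the given category explicitly found in the text."""
    found = set()
    # Crude tokenization; a real pipeline would use proper linguistic analysis.
    for token in text.lower().replace(",", " ").replace(".", " ").split():
        entry = KNOWLEDGE_GRAPH.get(token)
        if entry and entry[1] == category:
            found.add(entry[0])
    return found

text = "The original spears, taken alongside muskets, will return to Australia."
print(extract_by_category(text, "weapon"))
```

Because the weapon category is declared rather than inferred, the output is also fully explainable: for each extracted concept you can point to the exact graph entry that justified it.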

And I believe that when you talk about enterprise implementations, you can't accept something that is implicit, a black box, beyond a certain point. Even the kind of understanding that our technology is doing is not perfect: we can take another document and there can be errors, just as there are errors in GPT. But the point I want to make is that with explicit knowledge you have the possibility of being very precise and very rich, and you control what is recognized and extracted from the text.

Clearly, if you want a good summary, integrating GPT is a very good thing to do, and this is what we have done. And there are other possibilities: for example, GPT also does translation, even if not at the quality of a dedicated translation system, and so on.

Let me make another quick example of the value of generating summaries, and of how the problems can create issues when using something like GPT in enterprises. From this article I'm moving to this one, something that was published today. It is the 2nd of March today: "the little known history of champagne". A completely different topic compared with the rest. So let me do the same: I take the URL, I go to our platform, I put it in, and I analyze it.

What you'll see is what we have briefly seen also for the previous documents. So what is at the center? Beverage, wine, champagne. All this information is recognized thanks to the knowledge graph. This is an example: for instance, all the foods that are known by the knowledge graph, and everything you can find here, and then you see which are the sweets and so on. All this information is available and is used to understand the text.

Let me do the same thing we did before to get the summary from GPT. Let me enter the URL; again, everything is live. Testing the workflow... and let's look at the summary. Ah: "this model supports a maximum of 4,000 tokens". So this is another problem. As you can see, the article I'm trying to summarize is not that long, probably 3,000 words, something like that. And a summary is more useful when the document is long; if you have a short document, you can easily read it yourself.

So another element that should be considered when you look at something like GPT is that, given that a bigger context size in the analysis requires even more computing power, there is a pretty tight limitation: a document that is not even that long, in the end probably 3,000 words, cannot be summarized with GPT or ChatGPT. Okay?

I believe this is a big limitation, because you need summarization precisely when the document is a bit long. So let's see what we can summarize on our side. It is probably not as good, and I'm sure that if GPT could manage longer documents it would have generated a better summary, but at least we can recognize and show the most relevant sentences here.
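The context limit Marco runs into is often worked around by splitting a long document, summarizing each piece, and then summarizing the partial summaries, a common "map-reduce" pattern. A rough word-based sketch, assuming a `summarize` callable that stands in for the model; real systems count tokens with the model's tokenizer, not words:

```python
def chunk_words(text, max_words=500):
    """Split text into chunks of at most max_words words, preserving order."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_long(text, summarize, max_words=500):
    """Summarize each chunk, then summarize the concatenated chunk summaries.
    `summarize` stands in for a model call; it is hypothetical here."""
    chunks = chunk_words(text, max_words)
    if len(chunks) == 1:
        return summarize(chunks[0])
    partial = " ".join(summarize(c) for c in chunks)
    return summarize(partial)

# Demo with a trivial stand-in "model" that keeps the first five words.
fake_model = lambda t: " ".join(t.split()[:5])
long_text = " ".join(f"word{i}" for i in range(1200))
```

The trade-off is that a summary of summaries can lose cross-chunk context, which is one reason the limit Marco describes still matters in practice.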

Let me finish here. I wanted to leave a bit of time for questions, if there are any.

Luca Scagliarini :

Yeah, there are a few questions, Marco, so we are running-

Marco Varone:

Okay. So maybe I can stop here, as you suggest, Luca.

Luca Scagliarini :

Okay, so one question is around how GPT does with a task like subject classification. Is there insufficient accuracy and consistency, or is there a real opportunity to simplify setup and maintenance, provide more flexibility when needed, and ultimately also improve accuracy?

Marco Varone:

Well, it depends, honestly. I must say that in some scenarios we have seen that something like GPT can reduce the effort needed to tune a topic categorization implementation, in particular when you are dealing with content like news, which is, let me say, the sweet spot of a generic language model. But it is not something you can take for granted. It really depends on the problem you're solving and the specific type of content.

So yes, it can help in some situations, but it is not guaranteed. Our suggestion in this case is to try it first and see if it helps. We have seen that in some cases you can save time and get better results.

Luca Scagliarini :

Marco, we have other questions, but maybe we should finish the last demo within the hour, and then we'll provide answers directly to all the people who were so nice to follow along and ask very, very good questions. Okay?

Marco Varone:

Okay. Okay, very good.

Luca Scagliarini :

We want to keep it in an hour.

Marco Varone:

Okay. For sure. Absolutely. So my final demo is something that is super complex, but we believe it is super important. What we are trying to understand is whether, with our understanding capability, the one we have seen in analyzing the two documents, we can reduce the problem of hallucination: the generation of wrong or partially wrong answers.

So this is a really hybrid approach, where we apply our understanding capability to crosscheck what something like GPT is generating against known and trustable sources. Again, a super complex problem, but as Luca mentioned before, this could be really huge.

What we are trying to do is experiment in specific fields, because covering the full spectrum of content is simply impossible for the time being. For example, let me use this example about the sad pandemic that we had in the last two years.

What we are doing here: this is a text that has been generated, and we want to see whether we can use our understanding to crosscheck it and find trustable sources that agree with what is written here, with what has been generated. So we are experimenting with a sort of extended analytics where we analyze the text, try to understand the claims, the things the text says or affirms, and, again, these are very, very early days, see whether we can crosscheck them against trusted sources. This is the initial result we are observing.

So in this case we recognize two strong claims and ask, "Okay, can we confirm them?" And yes. This one, for example, is about when the global pandemic was officially declared by the World Health Organization. We found this information: here it is written March 2020, and in the two sources we found it is March 11th. So perfect. You can click here to move to the specific source. Clearly we are using trustable sources that contain a subset of the information, and you can see that this is the piece we have extracted.

We believe this is a very interesting, very promising, and very complex hybrid integration, where generated text is at least partially validated by a deeper language understanding capability. This is another example; it's taking a bit longer. And you see, this is what we are using to validate the claim.
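The validation step Marco demonstrates, checking an extracted claim against trusted sources, can be caricatured as matching claim facts against source snippets. Everything below (the claim format, the sources, the substring matcher) is a hypothetical simplification of the real pipeline, which relies on much deeper language understanding:

```python
def crosscheck(claim_facts, trusted_sources):
    """For each fact in the claim, list the trusted sources that contain it."""
    support = {}
    for fact in claim_facts:
        support[fact] = [name for name, text in trusted_sources.items()
                         if fact.lower() in text.lower()]
    return support

# Hypothetical trusted-source snippets.
sources = {
    "who-timeline": "WHO declared COVID-19 a pandemic on 11 March 2020.",
    "news-archive": "The pandemic declaration came in March 2020.",
}
result = crosscheck(["March 2020"], sources)
# A fact backed by at least one trusted source counts as (partially) confirmed.
```

A fact with no supporting source would get an empty list, flagging it as a candidate hallucination for review rather than silently accepting the generated text.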

Luca Scagliarini :

In the meantime, there are a couple of other questions.

Marco Varone:

Sure.

Luca Scagliarini :

One is about the availability of a recording of the session. Yes, of course, this will be available as a recording. And then there's another specific question about any experience of using large language models in pharma, for example extracting entities and their relations from a biomedical article. Here I think your opinion could be valuable.

Marco Varone:

Yes, yes. Again, it is an area where we are active. We have been working in this space for a good number of years and have gotten, I think, good results and good value. And I can say yes: as I said before, adding large language models to a hybrid approach that is not based only on the language models can improve results, in particular adding, let me say, a bit more robustness when you have more complex content.

So the short answer is yes, but alone they tend to be limited. For me this is a very good example where you can get the full possible value only by having the two elements together. And again, there are some types of content that benefit more, and other types where we have seen very, very few improvements. But in general I would say it is a yes.

Luca Scagliarini :

For some reason we don’t see you anymore with the camera.

Marco Varone:

I have not done-

Luca Scagliarini :

Okay, maybe now you're back. There are other questions, some around the use of the recently published ChatGPT API. They're a little bit technical and complex; maybe we can take them separately.

And one last question is about the fact that, let me find it, there are now some, let's call them tricks in generation, in image or text generation, that ensure that once you enter a prompt, the system always returns the same answer. I think there are some parameters you can fix that help with this. Any comment on this?

Marco Varone:

Yes, and this is normal. I can understand it, it makes a lot of sense. But again, in my view it should be something that works out of the box, though it can be done: you can cache, you can force the parameters. There are possibilities, and I'm sure it will become one of the standard options later on. But you should consider that when you change the model, when you update the model, you still have the problem, because with a different model the output for the same prompt will be different from the previous version. So you still have to manage this. It's a problem we know very well, and it requires a set of considerations and elements to be taken into-
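The "tricks" in the question usually come down to reducing the sampling temperature toward 0 (greedy decoding) and caching responses per (model, prompt) pair. A sketch of the caching side, with a stand-in generator since the real model call is not shown; note that, as Marco says, keying on the model version means the cache naturally invalidates when the model is updated:

```python
cache = {}

def generate_cached(model, prompt, generate):
    """Return a cached answer for (model, prompt); call the model only on a miss.
    `generate` stands in for the real model call, which is hypothetical here."""
    key = (model, prompt)
    if key not in cache:
        cache[key] = generate(model, prompt)
    return cache[key]

calls = []
def fake_generate(model, prompt):
    calls.append(prompt)  # track how often the "model" is actually invoked
    return f"{model}: answer #{len(calls)}"

a = generate_cached("gpt-3.5", "Summarize X", fake_generate)
b = generate_cached("gpt-3.5", "Summarize X", fake_generate)  # cache hit: same answer
c = generate_cached("gpt-4", "Summarize X", fake_generate)    # new model key: new answer
```

Caching gives repeatability for identical prompts, but it does not make the underlying generation deterministic; a new model version, or any change to the prompt, produces a fresh (and possibly different) answer.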

Luca Scagliarini :

And that's exactly one of the answers that was already in our list of comments. One question is around the release, I'm assuming the release of the integration of GPT in our platform. It's something that is available already now, correct, Marco?

Marco Varone:

Yes, it's already available now. Yes, absolutely.

Luca Scagliarini :

Okay. We are at the end of the hour. As I mentioned, this is the first of a series of NLP streams that will be dedicated to commenting on and taking a look at the innovation that seems to be happening very fast in the field of NLP, but always with an eye to what could be useful in practical, pragmatic enterprise implementations today. Because I think one of the ways of really improving the capability to create efficiencies and build effective solutions is to take this innovation and apply it in the real world. That's going to be the theme of all these streams.

So thank you very much, Marco. Thank you very much to the people who attended. We have other questions lined up, so we will either comment directly in the YouTube comment section or reach out to people and provide specific answers. Okay?

Marco Varone:

Thank you everybody.

Luca Scagliarini :

Thank you very much.

Marco Varone:

Bye-bye.
