
NLP Stream: From Black Box to Green Glass, The Responsible AI Imperative

The days of bringing an unexplainable algorithm to production are over. Between waning consumer trust and increasing artificial intelligence (AI) regulation, the need to be responsible in your AI pursuits has never been more important. But what does responsible AI look like, and how can you achieve it?

Watch CEO Walt Mayo as he breaks down the concept of responsible AI and shares how we can move away from the all-too-common black box approach. All it takes is a renewed focus on five key AI components:

  • Carbon footprint/energy intensity
  • Compute cost
  • Explainability
  • Human in the loop
  • Toxicity (LLMs)

Transcript:

Brian Munz:

Hi, everybody. Welcome once again to the NLP stream, which is our weekly, every Thursday at 11:00. It is our-

Walt:

11:00, right. Yeah. That’s what time it is.

Brian Munz:

EST, I know. That-

Walt:

EDT.

Brian Munz:

I think actually last week, as I said that, I was thinking I may have said we have it weekly at 10:00, but hopefully not. So yeah, it’s our weekly live stream where we talk about all things related to NLP. As usual, I’m Brian Munz, a product manager at expert.ai. This week is going to be a little bit different, but I think we’re going to be doing more of these in the future because it’s always fun to have more conversations, and we wanted to start out with a bang. So you heard him chime in earlier, but that is Walt Mayo, the CEO of expert.ai. Say hello.

Walt:

Hey, everybody.

Brian Munz:

Yeah. So today we wanted to have a conversation, and we asked Walt about one of the things he wanted to touch on in the world of NLP and AI, and what he came up with was “from black box to green glass.” So I think the natural first question would be: what do you mean by from black box to green glass?

Walt:

So that was our effort to try to address what a lot of people describe as the black box inherent in the complexity of more advanced AI approaches. So you’ll hear about black box AI, and in particular, as you get into the era of deep learning and then large-scale, large language models, the complexity is extraordinary, right? So you’re talking about GPT-3, which is a large language model put forward by OpenAI, which has received about $1 billion in funding from Microsoft. And I believe it has 175 billion parameters in the model, right? So needless to say, that’s a lot of moving pieces, and that very complexity, which produces some pretty astonishing results, also makes those results almost uninterpretable in terms of how they are produced. The complexity is so extraordinary, the amount of math that’s taking place, it really is a black box.
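To give a rough sense of that scale, here is a back-of-the-envelope sketch. The 175 billion figure is from the conversation; the 16-bit storage precision is our assumption:

```python
# Back-of-the-envelope memory footprint for a 175B-parameter model,
# assuming 16-bit (2-byte) weights -- the precision is an assumption.
params = 175e9
bytes_per_param = 2               # fp16; fp32 storage would double this
total_gb = params * bytes_per_param / 1e9
print(f"~{total_gb:,.0f} GB just to store the weights")   # ~350 GB
```

Hundreds of gigabytes just to hold the weights, before any training or inference compute enters the picture.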

Walt:

Now, there’s another element that I want to put front and center around black box, and that is the amount of energy consumption associated with it, because the two go hand in hand. These large language models and the very complex deep learning AI approaches that have emerged recently are extraordinarily compute-intensive. So: black box in the sense that you don’t really understand what’s going on inside of it, and to be clear, nobody does. Let’s be absolutely clear about that. There are dynamics that occur with these very, very complex AI approaches that are recognized, but not understood. So people will say, it does that, we don’t know why. Okay. Now, green glass was our attempt to offer an alternative, and the idea behind it is: can you find a less compute-intensive, more energy-efficient, and more transparent way of producing reliably useful outcomes? Okay. So that was from black box to green glass.

Brian Munz:

Right. Well, it seems like part of the compute issue inherent in the black box approach is that, given it is a black box, what comes out the other side comes out the other side, and then you have to try to retrain and do things like that. So there’s this loop: if it’s highly energy-inefficient, the inefficiency just continues. It’s not just the usage, it’s the development and training of the model itself too, right?

Walt:

Yeah. No, there’s definitely that dynamic. And then there’s also the element around the amount of data that’s required. So when you look at kind of the inputs that are going into some of the approaches for AI, what you’re seeing is this explosion in input requirements. So energy consumption, the amount of data that’s required, a tremendous amount of complexity. And then as you’re going back to try to obtain results that are more relevant for the problem you’re trying to solve, you’re repeating the cycle and you’re repeating it in a kind of inefficient way, because typically what you’re having to do is either add more data to it or label a lot more data.

Walt:

So, as I told you early on, since we jumped right into compute and energy efficiency: put to the side any concerns that any company might have around global climate change. Just pretend companies don’t care about it. Just about every company, outside of, say, the fossil fuels industry, is representing pretty clearly that they are committed to mitigating their climate impact. All right. So that’s their stated policy, you can see it in just about every company, and most companies are taking pretty serious steps toward that end.

Walt:

The compute cost alone, though, is another real consideration, because the energy intensity is driven by the type of specialized processors required, in particular for deep learning: graphics processing units. That’s part of the reason why Amazon, Microsoft, Google, the hyperscale cloud providers, have welcomed, shall we say, the trend toward very compute-intense approaches to AI, because obviously in their cloud architecture they’re able to charge a premium. It’s typically three to five times more expensive to use GPUs. So as you consider the running cost of any AI solution that you’re bringing to bear, that alone is a pretty powerful reason to look at that element.
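To make that concrete, here is a toy running-cost comparison. The hourly rates are invented placeholders; only the three-to-five-times premium comes from the conversation:

```python
# Toy running-cost comparison. Hourly rates are invented placeholders;
# only the 3-5x GPU premium comes from the conversation.
cpu_rate = 1.00                        # hypothetical $/hour, CPU instance
gpu_rate = cpu_rate * 4                # mid-point of the 3-5x premium
hours_per_month = 24 * 30

print(f"CPU-based solution: ${cpu_rate * hours_per_month:,.0f}/month")
print(f"GPU-based solution: ${gpu_rate * hours_per_month:,.0f}/month")
```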

Walt:

Let me just flash up, if I could, Brian, a pretty powerful illustration of the dynamic that’s been occurring in terms of the type of models that have emerged and their compute intensity. So if you look at this chart, this is actually the number of floating point operations. You can see it’s on a logarithmic scale and still climbing steeply, which means the underlying growth is exponential. If you follow this line forward for very long, you end up with things like it would consume all of the energy in the universe. I’m not being too hyperbolic, but this is, in more technical terms, in Spanish, you would say, “No me bueno.” It’s probably not something that you can continue to do for very long. So that’s one element of black box to green glass. And then the other one is around the transparency, explainability and toxicity elements, and that’s something else that is attracting an awful lot of attention.
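A minimal sketch of why that curve can’t be followed forever. The roughly 3.4-month doubling time is the figure from OpenAI’s 2018 “AI and Compute” analysis; the ten-year extrapolation is purely illustrative:

```python
# Why an exponential compute trend can't run forever. The ~3.4-month
# doubling time is from OpenAI's 2018 "AI and Compute" analysis;
# the ten-year extrapolation is illustrative only.
doubling_months = 3.4
years = 10
doublings = years * 12 / doubling_months
print(f"After {years} years: ~{2 ** doublings:.1e}x today's compute")
# ~4.2e10x -- a straight line on a log-scale chart is still explosive growth.
```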

Brian Munz:

What do you mean by the toxicity aspect of things?

Walt:

Well, that goes right to the tail end of some of the challenges that you see, so I’ll use it as an example, though it’s not the way in which it’s most generally recognized. The dynamic with large language models, which are really restricted to relatively few very large-scale companies, typically associated with some of the largest technology companies in the world, OpenAI with Microsoft, for example, or DeepMind with Google, Alphabet. There are some efforts to open source some of these models, and there’s some work taking place. It’s good, solid work. They tend to focus on what’s called natural language generation. What expert.ai does is natural language understanding.

Walt:

Natural language generation is essentially predictive text generation. You can provide a prompt, say, “Brian and Walt were on a webinar and they were discussing AI. Walt turned to Brian and said, ‘A big concern,’” and then you stop, and it will provide a string of syntactically and grammatically well-structured prose. Okay? So it sounds like somebody writing about Walt and Brian’s webinar, but in fact, it’s the large language model taking the information you gave it and trying to predict what would be a cogent continuation. Here’s where the toxicity comes in. The capability comes from the massive training set, which is essentially Common Crawl, which is, for all intents and purposes, the content of the internet, along with all of the vile, toxic, racist, misogynistic sludge on the internet. So it has a tendency, in a fairly stochastic, random way, to go very, very badly off the rails and introduce some really inappropriate, violent, jarring, racist language.
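Here is a minimal sketch of that predictive-text dynamic, using Hugging Face’s transformers library and the small open GPT-2 model as a stand-in (GPT-3 itself is only available through OpenAI’s API):

```python
# Predictive text generation: give the model a prompt, it continues it.
# Small open GPT-2 model used as a stand-in for a large language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ('Brian and Walt were on a webinar discussing AI. '
          'Walt turned to Brian and said, "A big concern')
out = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(out[0]["generated_text"])
# Fluent, well-structured prose -- but it is prediction over training data
# (largely web text), not understanding, which is where toxicity creeps in.
```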

Walt:

Now, Meta just released BlenderBot, I think is what they’re calling it. Right. This was like two, three days ago. And they went out and said, oh, by the way, this very well could produce some really awful, offensive things, and you’re kind of like, [inaudible 00:11:36] yeah. Okay. So then why exactly are you releasing it, right? And this gets back to the whole idea of responsible AI. There was another example showing that it does not take much at all for large language model text generation to go horribly awry. One person, who is kind of a provocateur, created GPT-4chan. He took the worst 4chan content, fine-tuned a GPT model on it, and it started spouting just awful, awful stuff.

Walt:

Well, now there’s a company that actually hosts language models as part of its business model, and this thing went up and they took it down. But nonetheless, the idea of responsible AI, that’s the big rubric for it. I was thinking about this earlier: generally, you don’t have to put “responsible” in front of what you do. You don’t say, we’re a responsible airline. It’s implicit that what you’re going to do is reliably and safely carry people in an aircraft from point A to point B. Reliably, this [inaudible 00:13:01] maybe not, but you’ll take off and land with a pretty good track record. And you don’t generally have to say responsible medicine. But one of the things that I think has happened in the AI world is there’s been so much excitement to push the boundaries of what AI is capable of that people really aren’t looking at the guardrails in the thoughtful way that the technology merits. So let me offer two examples.

Walt:

One, if you’re going to put forward prose that could convince somebody that a responsible human being is actually authoring it with some level of authority, then you should be reasonably careful about the kind of prose that it produces. And then the other thing, in the business setting, which is where we operate: a lot of what businesses are looking for is AI that helps them make decisions. That’s what we do, right? So fundamentally, what we say is we help you understand the language data in your enterprise, natural language understanding, so that your humans can make better decisions. If you have a black box algorithm and you’re saying, I want it to really help drive decision making, then you should be very, very thoughtful about how it’s driving those decisions, and I can talk more about that later. So what attracts the press and so forth is the toxicity, where it’s really just so awful that you have to comment on it. But if you pull back, it gets into this broader responsibility to ensure that your technology reliably does things that are useful, and doesn’t do things that have meaningful unintended consequences.

Brian Munz:

Right. Well, in a way, in both cases, it’s aligning with your goals and belief systems, right? Whether it’s in your personal life or the business world, you want to make sure it’s representing what you want it to represent, which shouldn’t be as difficult as it is, I guess. But going back to the green glass side of things, you mentioned that you’re trying to pull away from the black box, so how exactly do you see that happening?

Walt:

So we’re working on our own framework for responsible AI. And I think in general, it’s important to approach it with a fair degree of humility, which the technology world is not super well known for because obviously we’re all out there. We’re trying to talk about how-

Brian Munz:

Big ideas.

Walt:

… this… Yeah, big ideas and the enormous value that we create and how we make people’s lives better and easier and so on and so forth. So of course, right. But the idea of humility comes into the way we’re approaching it in a couple of ways: we want to make sure that we’re not saying we’re saving the planet, and that we’re not doing some big virtue signaling around our approach. What we want to say is we’re being as thoughtful as we can be, in a way that’s consistent with our values. And one of our values is be a good person, which is fairly straightforward. A company is just a group of people working together in a common purpose, and so we think we should be a good company. So the overarching framework we have is accountable, efficient technology that reliably does useful things. Accountable, efficient, and reliable. And implicit in “accountable” is that if somebody asks you how your technology produced a given outcome, you could reasonably explain it.

Walt:

Now, that doesn’t require you to understand the intricacies of the software that we’re writing. But what we want to build into it is the notion that you are making conscious choices around how our technology understands language. We call it human in the loop, but that’s kind of artificial, really, right? At the end of the day, there’s a human being who is using your technology. They should understand how your technology is adding value to their insights, in our world, to make better decisions, and then they should be able to accept that, reject it, or modify it. And if there are some outcomes emerging that are not expected, they should be able to reasonably quickly go back and change that. So that’s the ethic behind the responsible AI approach that we’re trying to bring to bear.
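As a sketch of that accept/reject/modify ethic, here is a minimal review loop. The function names and labels are hypothetical, not expert.ai’s actual API:

```python
# Sketch of a human-in-the-loop review: the model proposes, a person
# disposes. Names and labels are hypothetical illustration only.

def model_suggestion(document: str) -> str:
    """Stand-in for whatever label or insight the AI proposes."""
    return "category: INSURANCE_CLAIM"

def review(document: str) -> str:
    suggestion = model_suggestion(document)
    print(f"Model suggests: {suggestion}")
    choice = input("[a]ccept, [r]eject, or [m]odify? ").strip().lower()
    if choice == "a":
        return suggestion                    # human accepts as-is
    if choice == "m":
        return input("Corrected label: ")    # human overrides the model
    return "NEEDS_REVIEW"                    # rejected; route back for rework
```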

Walt:

Let me give you an example, and you can come up with exaggerated examples. Say you were renovating your house and they said, “We’re going to put in some beautiful hardwood flooring,” and you say, fantastic. And they said, “And in fact, it’s from the last standing redwood tree on the planet.” You would probably go, “No, don’t do that. That’s not a good idea.” But say you’re in charge of HR and one of your jobs is trying to bring in the most talented people with the right fit for your organization, who can succeed and will stay. And so you’re considering technology that will help you better assess candidates. If you’re getting into the domain of making judgments about individuals for the purposes of hiring, you should absolutely understand how that technology is arriving at the assessments that assist your work.

Walt:

Let me give you another example. Say you’re a bank and you’re using technology to accelerate your home loan application process. And in the AI approach that’s brought to bear, there’s pattern recognition. It detects a pattern of delinquencies associated with a particular attribute of the applicants, and then, unbeknownst to you, it’s acting on that. And what you find out under the covers is that it is basically using zip codes: it has noticed that there’s a higher default rate in a given zip code, so it is scoring that zip code lower and those applicants are not getting home loans. That’s against the law.
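Here is a minimal sketch of the kind of audit that catches this. The data and the 50% threshold are made up, and a real fair-lending review would be far more rigorous:

```python
# Audit sketch: does the loan model's approval rate swing sharply by zip
# code? Data and threshold are invented; a real fair-lending review is
# far more involved than this.
from collections import defaultdict

decisions = [
    {"zip": "10001", "approved": True},
    {"zip": "10001", "approved": True},
    {"zip": "60629", "approved": False},
    {"zip": "60629", "approved": True},
    {"zip": "60629", "approved": False},
]

totals, approved = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["zip"]] += 1
    approved[d["zip"]] += d["approved"]

for z in sorted(totals):
    rate = approved[z] / totals[z]
    flag = "  <-- investigate for proxy discrimination" if rate < 0.5 else ""
    print(f"zip {z}: {rate:.0%} approved{flag}")
```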

Brian Munz:

Right. It’s redlining.

Walt:

It’s called redlining. Okay. You can’t do that. So as a business owner, responsible AI means that if the technology is helping you make a better decision, you should understand how it’s helping you. Right?

Brian Munz:

Yeah. Well, and that goes to the idea, because I’ve heard you say before that you need to have a human in the loop, right? In a way, it’s just having a person involved in the process who will, like you said, watch these things, check on these things, and ensure that what the models are doing is, again, aligning with the laws and aligning with your goals and all that kind of stuff. Right?

Walt:

That’s right. And again, there are variations. So let’s say you have IoT sensors on a jet engine, and they’re monitoring millions of data inputs per minute. What they’re looking for is fluctuations above known upper and lower control limits that would suggest a material failure. Amen. Have at it. Now, that probably doesn’t mean immediately shutting off the engine unless it’s something really catastrophic, and if that’s the case, I can assure you that in aviation, the degree of rigor and scrutiny to which that would be subjected by a whole bunch of different government agencies and aircraft manufacturers, and the amount of redundancy it would have, would be very, very high. Okay. So that’s one.
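That sensor case is the easy one to sketch; the limits and readings below are invented for illustration:

```python
# Flagging sensor readings outside known upper/lower control limits.
# Limits and readings are invented for illustration.
LOWER, UPPER = 300.0, 900.0      # hypothetical engine-parameter bounds

def out_of_control(readings):
    for i, value in enumerate(readings):
        if not (LOWER <= value <= UPPER):
            yield i, value       # flag for review, don't shut the engine off

stream = [612.0, 640.1, 951.7, 618.9, 288.2]
for index, value in out_of_control(stream):
    print(f"reading {index}: {value} outside [{LOWER}, {UPPER}]")
```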

Walt:

When you’re in the domain where most of us operate, where the technology is helping humans make better decisions, then, for whatever reason, the standards seem to get really relaxed. So you’ll see, even on their face, some questionable representations around AI. I’ve seen some, for example, that purported to track muscle movements in someone’s face to determine emotional state during an interview. Now, there was a pseudoscientific practice back in the 1800s called phrenology, where you could supposedly determine someone’s personality by the shape of their skull. My guess is the science associated with muscle movements in people’s faces is pretty modest, I would say in general. There’s not a super, super robust body of knowledge that supports making important decisions. And then you can get into something like, well, how good is the contrast recognition for skin color? Is this going to produce the same results if the person has a darker complexion? Simple things like that. But on its face, you have to ask those hard questions, right?

Brian Munz:

Right.

Walt:

Yeah, so-

Brian Munz:

Yeah, and then take into account cultural things, take into account neurodivergence. A lot of times models and AI are made for a particular group, which may be the majority, but it certainly shouldn’t come at the cost of rejecting the rest, right? So that’s where it can kind of go off the rails.

Walt:

Yeah, for sure. And also, you don’t want to set up a false comparison. We encounter it sometimes in our world, as you well know, in natural language understanding, where people will say, well, we want 99% accuracy on the recognition of important data elements in language, because that’s how good our people are. And you say, not really, they’re not that good. The point is not to throw away the benefit that you can get from some thoughtful applications of artificial intelligence by holding it to a standard people don’t actually meet.

Walt:

Another example, beyond the IoT example, is if you think about what recommendation engines or pricing engines do. At the end of the day, if Netflix produces a recommendation for a movie that’s not really appropriate for you, who cares? It’s just not that big a deal, right? If you have, say, a dynamic pricing algorithm, it’s basically predicting maximum margin under given supply and demand conditions. Take what Uber or Lyft or any of the other ride-share businesses do: they can do a good job of taking massive amounts of data and reasonably doing a curve-fitting exercise that says, here’s margin maximization. People probably couldn’t do that. But you also probably need to think about, well, gosh, at some point, is this perceived as price gouging? So we should put a cap on it and just say, you’ll never get charged more than X amount in an Uber or Lyft. And again, that’s the kind of conscious, responsible accountability that you need to have in the application of any technology.
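The cap Walt describes amounts to one line of code. The surge formula below is a toy stand-in for a real demand model; the cap is the policy choice:

```python
# Dynamic pricing with an explicit cap. The surge formula is a toy
# stand-in for a real demand model; the cap is the policy decision.
BASE_FARE = 10.00
SURGE_CAP = 2.5                  # "you'll never get charged more than X"

def fare(demand: float, supply: float) -> float:
    surge = demand / max(supply, 1.0)         # naive supply/demand ratio
    return BASE_FARE * min(surge, SURGE_CAP)  # cap enforces the policy

print(fare(demand=120, supply=100))   # 12.0 -- mild surge
print(fare(demand=500, supply=100))   # 25.0 -- capped, not 50.0
```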

Walt:

You still there, Brian? Brian, you still got me? Are we having technical difficulties?

Brian Munz:

I think so. Sorry. Yeah. You were cutting in and out for me, but my internet seems to be fast, but-

Walt:

Yeah, you were kind of freezing up on me, so I apologize for that. I’m not sure whether it was on my end, but can you hear me all right?

Brian Munz:

Yeah. Yep. I think we may have [inaudible 00:27:29].

Walt:

I think I overloaded it with the word stream I produced over the last five minutes. I broke the internet.

Brian Munz:

Yeah, yeah, yeah, exactly. No, I mean, so we did actually receive a question if you wanted to grab that. We can put it on the screen. So what’s your opinion on the giants releasing, I’m assuming state of the art state of-

Walt:

State of the art, yeah.

Brian Munz:

… almost every other month by increasing the learned parameters exponentially?

Walt:

Yeah. I mean, when all you have is a hammer, everything starts to look like a nail, right? So there’s no question some of this is being done, I think, just to push the boundaries of an approach to natural language, essentially the connectionist or deep learning approach, which says if you increase the number of parameters and you increase the amount of data, then you’re going to get speech or language generation that is increasingly similar to how humans might generate it. I feel that point has been proven reasonably well. I think the broader point, which is how exactly that is going to produce a useful outcome, is still kind of hanging out there. There are also some pretty unsettling implications: the compute costs associated with these large language models, and you saw the chart that I showed earlier, mean that this particular field is really going to be restricted to a very, very few extremely deep-pocketed technology providers. And some folks are calling them foundation models, and I think they want them to be foundation models.

Walt:

But when the inputs are extraordinarily compute-intensive, energy-inefficient, very, very costly, and trained on enormous amounts of language, and there is no real path to having anything other than stochastic, which means kind of random, outcomes, I’m seeing more and more being done in the “gee, wouldn’t it be cool?” category than in how we really try to make this useful in a broader sense. So one of the things that came out was text-to-image generation with DALL-E. You could write, and this is a real example, “a hedgehog in a red smoking jacket, reclining on a chaise lounge in the forest, reading a book,” and it would produce an image of that. And you think, okay, well, maybe in the world of graphic design that might be useful at some point. But it also feels a little bit like, gosh, nobody can really say whether this is right or wrong, and what they’re not showing you is the 58 other things that came out, which in some cases didn’t look anything at all like a hedgehog.

Walt:

So in the business world, problems tend to be fairly specific. They tend to be fairly domain-intensive, and in general, you want to have a reasonable degree of certainty around the outcomes. So with the movement to state-of-the-art, increasingly larger language models, I suspect that at some point people are going to realize, gosh, we’ve played this string out as far as it needs to go. Let’s start thinking about how to be more thoughtful around what exactly it’s going to do in terms of making people’s lives better and helping businesses make or save money. They’re going to continue to do it, though, for sure. I can guarantee you there’s nobody at OpenAI right now listening to this webcast going, gosh, well, you got it. You’re right.

Brian Munz:

Yeah. Yeah, exactly. Well, that’s-

Walt:

I hope that answers your question, Raisul, and thank you for the question by the way.

Brian Munz:

Yeah. Yeah. And I mean, just to kind of go back as we wrap things up, I wanted to give you a minute to address the green glass aspect of things: basically, what some of the other AI and NL technologies offer that is a bit more green. We know that symbolic AI is, of course, less of a black box and more explainable, but what else does it offer us in terms of being green and efficient?

Walt:

Well, let me start with the Occam’s razor principle, right, which is that you should always find the simplest solution that suffices, that solves the problem. There was an interesting company that came out a while ago doing predictive decision-making technology, and they said, “Oh, and to be clear, this doesn’t run on machine learning at all.” There’s math, and the math does include some of the techniques used by machine learning, like Bayesian probability and the like, but what they found was a more robust, less expensive, more explainable approach that you can use in a pretty practical way. So I would start there. We’re fairly straightforward in the sense that we say we want to solve meaningful problems in your business, and if it’s not meaningful, then we don’t want to represent that we’re going to add a whole lot of value.
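In that Occam’s-razor spirit, here is a sketch of starting with the simplest sufficient model: a logistic regression whose weights can be read off directly. The data is synthetic and scikit-learn is assumed; the point is the inspectability, not the specific model:

```python
# Occam's-razor baseline: a linear model you can actually explain.
# Synthetic data; in practice you'd start here before reaching for deep nets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # three interpretable features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # outcome driven by two of them

model = LogisticRegression().fit(X, y)
for name, w in zip(["feature_a", "feature_b", "feature_c"], model.coef_[0]):
    print(f"{name}: weight {w:+.2f}")           # every weight is inspectable
```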

Walt:

And there, there are two basic dynamics. One is where the language is very complex, accuracy is really important, and mistakes are costly. We can help because we can embed the knowledge that your experts apply to that language, and then we can raise the bar so that there are new upper and lower control limits. Above that, the folks in your enterprise who are making decisions and are accountable for them can more consistently and more rapidly arrive at solid outcomes. So that’s one. The other is where the volume is simply too high. There’s important information out there and finding the signal through the noise is worthwhile, but it’s just not practical to have people trying to read it all. That’s where we offer language understanding capability.

Walt:

The symbolic element that we bring to bear makes it much more efficient, because we essentially parse language the way people do, with embedded structures of knowledge about context and relationships. So I would put that forward. And there was a really good article that came out, gosh, I think it was around 2017, titled “You Are Not Google.” You can look it up; I forget the name of the fellow who wrote it, but he is a brilliant guy. He said: resist the temptation to adopt the most complex technology that you see the hyperscale providers adopting themselves and offering. Think about the most thoughtful way to achieve your business objective, and from there, generally speaking, you’re likely to find something that is more efficient and elegant. I think often the two go together: the simple, elegant solution that works well. Right?
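To gesture at what a transparent, rule-based pass over language looks like, here is a sketch using the open-source spaCy Matcher. This is a generic illustration, not expert.ai’s actual technology:

```python
# Rule-based (symbolic-style) matching: every hit traces back to an
# explicit rule. Generic illustration with spaCy, not expert.ai's platform.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# Rule: a currency symbol, a number, then a loan-related noun.
matcher.add("LOAN_AMOUNT", [[
    {"IS_CURRENCY": True},
    {"LIKE_NUM": True},
    {"LEMMA": {"IN": ["loan", "mortgage"]}},
]])

doc = nlp("The applicant requested a $500,000 mortgage last week.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)   # "$500,000 mortgage" -- and you know why
```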

Brian Munz:

Yeah, exactly.

Walt:

I saw a really cool picture of a self-driving lawnmower setup. So there are these Roomba vacuum cleaners that go around using computer vision and sensors and all this other stuff. But this was a standard lawnmower, the kind where you hold down the handle to keep it running. The guy had tied down the handle so it would keep moving, put a stake in the ground, and tied a rope from the stake to the mower. It just went around in a big circle, and as the rope wound around the stake, the circle got smaller and smaller. Done. He repurposed something he already had, got some rope, and yeah, okay, he’s going to have to mow the edges himself. Fair enough. But he didn’t have to buy a $4,000 robotic lawnmower.

Brian Munz:

Exactly.

Walt:

Or even better yet, give the kid down the street 10 bucks and ask him to do it.

Brian Munz:

Exactly. And-

Walt:

So that’s not my official response to energy-efficient AI, but it’s a frame that you should apply, I think, in general.

Brian Munz:

Yeah. No, it makes sense. And it has been interesting to see all this stuff come to light. It seems recent, but I guess it hasn’t been; it just seems like computing power has become a major concern lately, probably because of the rise of crypto and things like that. People started to take notice of how much processing is happening.

Walt:

Yeah. You just heard my big sigh, right? So I was kind of present at the creation in the early, early days, right when crypto monetized, and I’m one of the guys who is not retired off of it. At the time, the dynamic behind it was really well intentioned, and then you’ve seen the side effects of it. So again, a couple of everyday examples. I have a refrigerator. I don’t know exactly how it works; I kind of do. But it’s never once done anything other than keep my food cold, and it doesn’t appear to have the ability to do anything other than that. The worst it seems able to do is stop keeping my food cold. So it’s a well-defined, well-bounded system.

Walt:

And if you think about the range of ways new technology gets introduced: pharmaceutical companies, when they’re putting a medicine out, have to go through an extraordinarily rigorous process, for obvious reasons, because it could affect people’s wellbeing. We all hear the television commercials where they list the 97 things, all of which sound horrible, that could happen if you take it, but they have to do it. And then you can go all the way to the aluminum ladder down in my garage, which has all these labels on it saying, make sure you’re using it appropriately. Somewhere in between those two things, and this is on the technology vendors, like expert.ai and others: before you roll out any technology, make sure that you can responsibly account for what it will do and how it will do it, and ensure it won’t do other things that you don’t believe are particularly helpful.

Brian Munz:

Right. Exactly. I mean-

Walt:

It’s pretty straightforward.

Brian Munz:

Yeah, exactly. That is the thing, it’s stuff that most of us were taught as kids, but a lot of money and fame get involved, and it gets tough to wrangle all of the different intentions. Right?

Walt:

Yeah. But I mean, look, where it all comes to a head is in the marketplace. And for folks who are considering adopting AI, my rough rule of thumb for a business owner is: if the technology vendor and the technology team in your organization cannot explain to you, in terms you understand, how the AI is going to produce the useful results that you seek and not do anything else, then stop until they can. That simple. You hear a lot about, what algorithm do you use? I can assure you, there’s no algorithm market out there. I’ve never had a business say to me, “Gee, we want to buy an algorithm.” They’ve got a problem that they need solved. Find the most efficient way to solve that problem that is explainable, and that won’t do other things that are not part of that bounded problem set.

Brian Munz:

Right. Right. Because that’s not part of the scope of what they’re trying to do.

Walt:

Well, and you don’t want… I mean, look, even if you don’t care, there are other folks who do, like people in, say, the European Union, or the State of California, or the Federal Trade Commission. The regulations are coming, and they should; I welcome it. So there you go.

Brian Munz:

Yeah. Great. No, I mean, that makes total sense, and I agree. I think it’s one of the larger problems facing a lot of, especially, ML-based AI: like you said, when people knock on your door and start saying, “Well, how did this happen? Who made this decision?” and you say, it’s an algorithm, it’s AI. That’s been a gray area for a little while, but I don’t think it will be for very long, if it even still is, before they just say that’s not good enough. I mean, it hasn’t been in the past.

Walt:

No. No, for sure it’s not good enough. And there are some weird dynamics where people are asking, do algorithms have personhood for the purposes of intellectual property creation? And you’re kind of like, I don’t know. I don’t think so. Right. But I can assure you that when it comes time to sue, they’re not going to be suing the algorithm, because algorithms don’t have any money. The algorithm, I guarantee you, does not have a bank account. The company that is using the algorithm does. So that’s where the physical and digital worlds will meet: in the courtroom, right?

Brian Munz:

Yeah. Yeah. Same way that if I have a pet tiger and it gets out and eats my neighbor, they’re not going to go after the tiger.

Walt:

You can’t say, “Hey, it’s a huge jungle cat. What do you expect?”

Brian Munz:

The tiger’s doing what the tiger does. I’m the one who let it out.

Walt:

Yeah. Anyways, so this has fully turned into a mid-August NLP livestream because we’re talking about tigers and lawnmowers.

Brian Munz:

Tigers eating our neighbors and stuff; people are probably getting concerned hearing it. But yeah, I guess we’re pretty much at our time. Actually, this is the longest one ever, I think, so you-

Walt:

I told you it was going to be the longest one.

Brian Munz:

I’ll send you a T-shirt or something. But yeah, this was fun, and I definitely appreciate you coming on. It was a nice kind of meandering conversation, like most times that we talk, but hopefully it was insightful too.

Walt:

That’s what you’re supposed to say about the CEO. Yeah, this is the normal kind of meandering conversation we have.

Brian Munz:

Well, I didn’t say we were talking about work stuff, it’s just, you also have to… it’s a team of teams. We have to all hang out and stuff. So yeah, I’ll retract that. It’s unlike our normal conversations, which are bulleted lists of things that we need to do. But like I said, thanks for joining. It was a really fun conversation, and hopefully this is one of many. Next week we have “No Fear Stylometry with expert.ai,” which should be very interesting. And so-

Walt:

And one other thing, Brian, too, I wanted to throw out there, we are going to have our responsible AI framework online sometime in the next month or so. So we’re working on it right now, and what we want to do is also put information around where else people can look, the kinds of organizations that are trying to be thoughtful about it, and so we’ll have that up online. I don’t want to promise anything. We’re building it out right now, but sometime in the next month or so. So stay tuned for that and we’ll publicize it when we do.

Brian Munz:

Right. Yep. Yeah, I’ll make sure to mention that on here in the future.

Walt:

Awesome.

Brian Munz:

And so keep an eye out for that. But yeah, so again, thanks for joining. Hopefully you’ll join us again in the future and we can-

Walt:

I don’t think you’re going to let me back on after today.

Brian Munz:

I certainly will. We’ll have to see what the powers that be say.

Walt:

Katie’s the one who’s in charge, so-

Brian Munz:

That’s true.

Walt:

… we’ll let her make the call.

Brian Munz:

Yeah. She’ll make the call.

Walt:

All right. Super.

Brian Munz:

Yeah, again-

Walt:

All right. Thanks everybody who did join and-

 
