Responsible AI Gets Real

Expert.ai Team - 26 October 2023

It’s not an overstatement to say that AI is driving transformational change. According to the World Economic Forum, AI is on track to increase global GDP by more than $15 trillion by 2030. Overall AI use has been rising for years, but the introduction of ChatGPT last fall brought the technology to a far wider population of users and made the conversation around ethics and responsible AI more urgent than ever.

Last week, we joined hundreds of business leaders at “Leading with AI Responsibly,” the second annual conference organized by The Institute for Experiential AI at Northeastern University. The event brought together top leaders from corporate America and academia to explore some of today’s most pressing AI topics through the lens of Responsible AI.  

Here are some key takeaways from the conference. 

Humans (and Nature) in the Loop 

Artificial intelligence needs humans in the loop. Full stop. In his keynote, Peter Norvig, Researcher at Google and Education Fellow at the Stanford Institute for Human-Centered AI (HAI), offered a definition of human-centered AI: “building systems that do the right thing for everyone fairly across the board, not just for one user.”

When it comes to practical use, speakers in multiple sessions cited a human-in-the-loop approach as the key differentiator in successful AI use cases.

A human-in-the-loop approach must:

  • Be integrated, not an add-on: humans must be in the loop from the outset, not brought in later in the process.
  • Be part of the workflow: humans and ethics shouldn’t just be a checklist item; they should be built into design requirements and guidelines.
  • Include a multidisciplinary point of view: considering different users, stakeholders and society at large not only helps reduce bias, it can also help you identify more use cases and expose more threat surfaces early on.

For more on how to design humans into the loop of your AI solution, see our Roadmap to NLP Success.

In addition to humans in the loop, John Havens, Director of Emerging Technologies & Strategic Development at the IEEE Standards Association, raised another aspect of responsible AI: sustainability. Beyond the human stakeholders already mentioned, nature itself is a stakeholder, and we must take into consideration the environmental impact of large language models (LLMs), which are notoriously resource intensive.

Start with the Problem You Want to Solve 

It’s easy to get caught up in the wave of excitement around the capabilities of LLMs and generative AI, but several panelists stressed the importance of starting with the problem that you want to solve and then choosing the technology: 

  • “I sometimes get the feeling of a backwards mentality with generative AI, where people wonder how they can use LLMs, rather than finding out what the actual problems are. You can get these models to do lots of things but they’re not always the right tool.” – Byron Wallace, Associate Professor, Khoury College of Computer Sciences
  • Using AI must give you an edge for the problem you’re solving – otherwise it’s AI for the sake of AI. Does it solve a big problem and whose problem does it solve? Can you explain the output, errors and measures? – Rudina Seseri, Founder and Managing Partner at Glasswing Ventures 
  • “Make sure you’ve got the problem it solves, not just somewhere for the solution to go.” – Steve Johnson, CEO and Co-Founder of Notable Systems

As our CEO Walt Mayo suggests: 1) AI needs to be applied realistically, and 2) successful adoption should not start with the most complex problems, but should first focus on where AI provides the most value.

Get Corporate Boards Involved 

Support from corporate boards is essential for making real organizational progress on AI, and today’s boards need the expertise to help guide organizations through such a transformational period. 

In the session “AI as a Business Disruptor: What Your Board Needs to Know (and Do) on AI,” moderator Michael Nieset, Partner at Heidrick & Struggles, said: “80% of the Fortune 1000 boards I’m dealing with today aren’t dealing with this [Generative AI] technology at all. There’s a lot of work to do at the corporate board level.”  

He highlighted results from a recent Deloitte survey of members of the Society for Corporate Governance that describe the current state of AI engagement on corporate boards:

  • 19% of large-caps and 22% of mid-caps reported that AI is not being discussed at the board level
  • 35% of large-caps and 53% of mid-caps said that AI-related topics have not been on a corporate meeting agenda
  • 17% of large-caps and 46% of mid-caps reported that they do not have an AI policy or AI code of conduct in place

Bridge the Trust Gap with Responsible AI  

For companies using or developing AI, it’s imperative that we act as stewards of responsible technology, with responsible AI strategies and policies to match.

The final session of the day presented some of the results from a recent poll on AI and trust by the AI Literacy Lab at Northeastern University:  

  • 77% consume news about AI on at least a weekly basis 
  • 55% don’t feel confident that AI will be developed responsibly 
  • 64% say the government should regulate AI 
  • 52% say they can’t distinguish between human- and AI-generated text or images  
  • 83% worry about AI-driven misinformation during the presidential campaign 
  • 68% say they haven’t used a large language model such as ChatGPT 
  • 52% think it’s possible that AI will take away their jobs 

These results show that people are paying attention to AI, and that there is a sense of caution and skepticism around the development and evolution of these technologies.   

Conclusion 

It was thrilling to see so many business leaders and AI experts together in thoughtful dialog around this critical topic. As Ashok Srivastava, Senior VP and Chief Data Officer at Intuit, said in his keynote: “We shouldn’t lose sight that this is a transformative period in human history. This isn’t just about technology, it’s also about humans, and how humans and technology evolve together.”  

You can still sign up to see videos and recaps from the event here, on the “Leading with AI Responsibly” event page. You can learn about expert.ai’s definition of and approach to Responsible AI here.

The Power of AI and NLP for ESG and Responsible AI

Learn about the value of AI for ESG and how to use the expert.ai Green Glass approach to evaluate any AI solution for Algorithm Integrity, Privacy and Legality, Energy Consumption and more.

Download the White Paper