Why responsible AI should disrupt your organization from the bottom up

Presented by Dataiku


White box AI is getting a lot of attention right now. But what does it mean in practice? And how can companies start moving away from black box systems toward more explainable AI? Find out why white box AI delivers business value and why it's a necessary evolution when you join this VB Live event.

Register here for free.


Black box AI has been getting media attention for producing undesirable, even unethical, results. But the conversation is more nuanced than that, says Rumman Chowdhury, managing director of Accenture AI. When technologists or data scientists talk about black box algorithms, they're referring to a specific class of algorithms whose outputs we can't always trace back to how they were produced: in other words, systems we don't fully understand.

“Something being a black box algorithm doesn't necessarily mean it's irresponsible,” says Chowdhury. “There are all kinds of interesting models one can apply to make output explainable; the human brain, for example.”

That's why black box AI systems actually have an important relationship with responsible AI, she explains. Responsible AI practices can be used to understand and unpack a black box system, even if the algorithm inside remains a black box.

“When people on the receiving end of a model's output talk about explainability, what they really want is understanding,” says Chowdhury. “Understanding means explaining the result in a way that is useful, at a level that benefits the user.”

Take, for example, the Apple Card controversy, in which the algorithm was accused of sexism after offering a woman far less credit than her husband; customer service agents said they simply didn't know why that happened. The algorithm just produced that result. So explainability isn't only about the data scientist's own understanding. It's about a customer service representative being able to explain to a customer why the algorithm reached a result and how it affects them, rather than a high-level discussion of how to unpack a neural network, Chowdhury explains.

“Understandable, well-explained artificial intelligence, done well, is about enabling people to make the right decisions and the best decisions for themselves,” she says.

To capture the benefits of innovation while navigating the possible negative consequences, the most important thing companies can do is establish cross-functional governance. Responsible thinking has to be infused at every step of the process, from the moment a project is first conceived, through ideation, development, deployment, and use.

“When we develop and implement AI responsibly, we're thinking not only about what we deliver to our customers, but about what we do for ourselves,” says Chowdhury. “We recognize that there's no one-size-fits-all approach.”

The biggest challenge in implementing responsible or ethical AI is usually that it feels like a broad, daunting task. At the start, the worry is media attention. But then the harder questions arise: What does it actually mean to be responsible or ethical? Does it mean legal compliance? A change in company culture? Something else?

When establishing ethical AI, it helps to break the work into four pillars: technical, operational, organizational, and reputational.

Companies often understand the technical component best: How do you unpack the black box? What is the algorithm actually doing?
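As a concrete illustration of that technical pillar, here's a minimal sketch (not from the event) of one common, model-agnostic way to start unpacking a black box: permutation importance in scikit-learn. The dataset and model below are placeholder choices.

# A minimal sketch of "unpacking" a black box, assuming a scikit-learn setup.
# The dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose individual predictions are hard to trace.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance asks: how much does the test score drop when each
# feature is shuffled? The answer is a human-readable, model-agnostic ranking.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

In practice, teams often reach for richer tooling such as SHAP or LIME, but the principle is the same: translate model behavior into terms a person can act on.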

The operational pillar is perhaps the most essential: it governs the overall structure of your initiative. It's about creating the right kind of organizational and business structure.

That flows into the third pillar, the organizational one, which covers how you hire the right people and how you build cross-functional governance. Finally, the reputational pillar requires being thoughtful and strategic about how you talk about your AI systems, and about how you earn customers' trust so they'll share their information and engage with your AI.

“The need for responsible, explainable AI changes the field of data science in an important way,” says Chowdhury. “To create models that are understandable and explainable, data scientists and client-facing teams will have to get deeply involved with each other. Customer-facing people should be involved from the earliest stages of development. I believe data science as a field will grow and evolve to need people who specialize in algorithmic critique. I'm very excited to see that happen.”

To learn more about how companies can build a culture of responsible and ethical AI, the challenges of unpacking a black box, from the organizational to the technical, and how to launch your own initiative, don't miss this VB Live event.


Don't miss it!

Register here for free.


Key takeaways:

  • How to make the data science process collaborative throughout the organization
  • How to establish trust from data to model
  • How to move your business towards data democratization

Speakers:

  • Rumman Chowdhury, Managing Director, Accenture AI
  • David Fagnan, Director, Applied Sciences, Zillow Offers
  • Triveni Gandhi, Data Scientist, Dataiku
  • Seth Colaner, AI Editor, VentureBeat
