Is responsible artificial intelligence the new business imperative? Credo AI’s Head of Product, Susannah Shattuck, and Responsible AI Researcher Shlomi Hod explored this question, sharing their insights on the recent growth in artificial intelligence adoption and its role in business at the Partners in Business Ethics Symposium, hosted by Boston University’s Questrom School of Business this October.
During the panel discussion, Susannah highlighted upcoming regulations that are pushing organizations to confront AI’s ethical implications. She also pointed out the risks that come with AI adoption, including brand, compliance, and financial risk. Incorporating AI into business is also challenging because the reliability of the decisions the technology makes is often in question. But when AI is developed responsibly, Susannah argued, users can manage and audit models to gain a deeper understanding of how and why a decision is being made. This gives organizations greater transparency after an AI system is deployed, ensures it behaves as expected in production, and makes its outcomes more predictable and fair.
The panel also discussed the potential for algorithms to deliver remarkable business results, but only if they are continuously monitored. Adopting responsible AI practices will become increasingly important as organizations find more ways to leverage AI. The discussion concluded with companies’ AI journeys: how far they go will depend largely on their ability to understand how and why their models reach certain conclusions, and on their confidence in determining whether their AI models are effective and unbiased.