AI Cloud leader DataRobot today released its State of AI Bias Report, conducted in collaboration with the World Economic Forum and global academic leaders. The report offers a deeper understanding of how the risk of AI bias can impact today’s organizations, and how business leaders can manage and mitigate this risk. Based on a survey of more than 350 organizations across industries, the research findings reveal that many leaders share deep concerns about the risk of bias in AI (54%) and a growing desire for government regulation to prevent it (81%).
AI is an essential technology for accelerating business growth and driving operational efficiency, yet many organizations struggle to implement AI effectively and fairly at scale. More than one in three (36%) organizations surveyed have experienced challenges or direct business impact from AI bias in their algorithms, including:
Lost revenue (62%)
Lost customers (61%)
Lost employees (43%)
Incurred legal fees due to a lawsuit or legal action (35%)
Damaged brand reputation/media backlash (6%)
“DataRobot’s research shows what many in the artificial intelligence field have long known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long. The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”
Kay Firth-Butterfield, Head of AI and Machine Learning, World Economic Forum
While organizations want to eliminate bias from their algorithms, many are struggling to do so effectively. The research found that 77% of organizations had an AI bias or algorithm test in place prior to discovering bias. Despite significant focus and investment in removing AI bias across the industry, organizations still face clear challenges in eliminating it:
Understanding the reasons for a specific AI decision
Understanding the patterns between input values and AI decisions
Developing trustworthy algorithms
Determining what data is used to train AI
“The market for responsible AI solutions will double in 2022,” wrote Forrester VP and Principal Analyst Brandon Purcell in his report Predictions 2022: Artificial Intelligence. Purcell continues, “Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.”
DataRobot’s Trusted AI Team of subject-matter experts, data scientists, and ethicists is pioneering efforts to build transparent and explainable AI with businesses and industries across the globe. Led by Ted Kwartler, VP of Trusted AI at DataRobot, the team’s mission is to deliver ethical AI systems and actionable guidance for a customer base that includes some of the largest banks in the world, top U.S. health insurers, and defense, intelligence, and civilian agencies within the U.S. federal government.
"The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place,” said Kwartler. “Organizations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there's still more to be done to ensure models are fair, trusted and explainable.”
DataRobot surveyed more than 350 U.S. and U.K.-based CIOs, IT directors, IT managers, data scientists, and development leads who use or plan to use AI. The research was fielded in 2021.
DataRobot AI Cloud is the next generation of AI. DataRobot’s AI Cloud vision is to bring together all data types, all users, and all environments to deliver critical business insights for every organization. DataRobot is trusted by global customers across industries and verticals, including a third of the Fortune 50.