How Is IBM Thinking About AI Governance? It Aims For High Transparency And Low Bias


Google, Facebook, and Microsoft are probably the first companies that come to mind as the most recognizable names in artificial intelligence. But according to a new publication in the Harvard Business Review (HBR), IBM is also looking to blaze a path in AI. Written by Francesca Rossi, IBM’s AI Ethics Global Leader, the op-ed centers on the issue of governance. Specifically, Rossi states that the company is committed to advancing fairer, more transparent, and more accurate advanced technology, and is looking to build trust among consumers and the business community. Rossi, who will soon take the reins of the Association for the Advancement of Artificial Intelligence as its next President, advises that companies using AI should consider implementing the following governance measures:

  1. Create an effective AI ethics board. IBM notes in HBR that to make true and lasting changes in ethics, tech companies must support holistic organizational and cultural change. It provides the following example of what it is doing to achieve this goal: “IBM has put in place a centralized and multi-dimensional AI governance framework, centered around the IBM internal AI ethics board. It supports both technical and non-technical initiatives to operationalize the IBM principles of trust and transparency. We also advance efforts internally under the umbrella of Trusted AI that seeks to tackle multiple dimensions of this concept, including fairness, explainability, robustness, privacy, and transparency.” 
  2. Clearly define the company’s policies on AI. “In 2018, IBM released its Principles for Trust and Transparency to guide policy approaches to AI in ways that promote responsibility, including our view on Precision Regulation of AI, released in early 2020. These principles outline our commitment to using AI to augment human intelligence, our commitment to a data policy that protects clients’ data and insights gleaned from their data, and a commitment to a focus on transparency and explainability to build a system of trust in AI. Our precision regulation policy recommends that policymakers only regulate high-risk AI applications, after a careful analysis of the technology used and its impact on people.” 
  3. Work with trusted partners. “We have also established multiple multi-stakeholder relationships with external partners over the years to advance ethics in AI, including earlier this year when IBM became one of the first signatories on the Vatican’s “Rome Call for AI Ethics.” Released in February 2020, this initiative in partnership with the Vatican focuses on advancing more human-centric AI that aligns with core human values, such as focusing more attention on vulnerable parts of the population. Another recent initiative IBM joined is the European Commission’s (EC) High-Level Expert Group on AI, designed to deliver ethical guidelines for trustworthy AI in Europe. They’re now being used extensively in Europe and beyond to guide possible future regulations and standards for AI.”
  4. Contribute open-source toolkits to the pillars of AI trust. “Beyond defining principles, policy, governance, and collaboration, we at IBM also prioritize the research and release of tangible tools that can move the needle on AI trust. In 2018, IBM Research released an open-source toolkit called AI Fairness 360 (AIF360) that allows developers to share and receive state-of-the-art codes and datasets related to AI bias detection and mitigation. This toolkit also allows the developer community to collaborate with one another and discuss various notions of bias, so they can collectively understand best practices for detecting and mitigating AI bias. Since AIF360, IBM Research has released additional tools designed to define, measure, and advance trust in AI, including AI Explainability 360 (AIX360), which supports understanding and innovation in AI explainability, the Adversarial Robustness Toolbox, which provides useful tools to make AI more robust, and AI FactSheets, which focus on increasing the levels of transparency in the end-to-end development of an AI’s lifecycle.” (A minimal illustration of the kind of bias measurement AIF360 supports follows this list.)
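
AIF360 is distributed as the open-source `aif360` Python package. As a rough sketch of the bias detection Rossi describes, and not an IBM-provided example, the snippet below builds a toy dataset with a protected attribute and computes two standard group-fairness metrics; the column names, group encodings, and data values are illustrative assumptions.

```python
# A toy example, not IBM's data: 'sex' is the protected attribute, 'label' the outcome.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],                # 0 = unprivileged group, 1 = privileged group
    "score": [0.2, 0.4, 0.6, 0.8, 0.3, 0.5, 0.7, 0.9],
    "label": [0, 0, 0, 1, 0, 1, 1, 1],                 # 1 = favorable outcome
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Ratio of favorable-outcome rates between groups (1.0 means parity;
# the common "four-fifths" rule of thumb flags values below 0.8).
print("Disparate impact:             ", metric.disparate_impact())
# Difference in favorable-outcome rates (0.0 means parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A disparate impact well below 0.8, or a parity difference far from zero, would flag the data or model for the kind of mitigation work IBM describes in its enterprise recommendations below.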

On the issue of bias being built into AI, IBM says it actively works to mitigate it. It points to its Watson OpenScale product, launched in 2018 for enterprises building AI-based solutions. According to IBM, Watson OpenScale detects, manages, and minimizes bias to help ensure AI remains fair, explainable, and compliant. Additionally, IBM recommends that enterprises using AI implement the following: 

  • Devote resources to education and awareness initiatives for designers, developers, and managers; 
  • Ensure diverse team composition; 
  • Include consultations with relevant social organizations and the impacted communities to identify the most appropriate definition of fairness for the scenarios where the AI system will be deployed, as well as the best way to resolve intersectionality issues, in which overlapping notions of bias (such as gender, age, and racial bias) affect the same parts of the population and mitigating one can exacerbate another; 
  • Define methodology, adoption, and governance frameworks to help developers revise their AI pipelines in a sustainable way. New steps (for example, to detect and mitigate bias) need to be added to the usual AI development process; a clear methodology needs to be defined to integrate those steps, and adopting that methodology should be made as easy as possible. A governance framework is also needed to evaluate, facilitate, enforce, and scale adoption (a minimal sketch of one such pipeline step appears after this list); and 
  • Build transparency and explainability tools to recognize the presence of bias and its impact on the AI system’s decisions. 
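
As a companion to the pipeline point above, the snippet below sketches what one added detect-and-mitigate step might look like using AIF360’s Reweighing pre-processor: measure a fairness gap, apply the mitigation, and re-check before training. The dataset, column names, and group definitions are illustrative assumptions, not an IBM-prescribed workflow.

```python
# A toy detect -> mitigate -> re-check step, using AIF360's Reweighing pre-processor.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged, 1 = privileged (illustrative)
    "score": [0.2, 0.4, 0.6, 0.8, 0.3, 0.5, 0.7, 0.9],
    "label": [0, 0, 0, 1, 0, 1, 1, 1],
})
train = BinaryLabelDataset(df=df, label_names=["label"],
                           protected_attribute_names=["sex"])

groups = dict(unprivileged_groups=[{"sex": 0}], privileged_groups=[{"sex": 1}])

# Detection: gap in favorable-outcome rates between groups (0.0 means parity).
print("mean difference before:", BinaryLabelDatasetMetric(train, **groups).mean_difference())

# Mitigation: Reweighing adjusts per-instance weights so both groups see
# the favorable outcome at comparable (weighted) rates.
reweighted = Reweighing(**groups).fit_transform(train)
print("mean difference after: ", BinaryLabelDatasetMetric(reweighted, **groups).mean_difference())

# The adjusted weights would then feed the normal training step, e.g.
#   model.fit(reweighted.features, reweighted.labels.ravel(),
#             sample_weight=reweighted.instance_weights)
```

Reweighing is only one of several mitigation algorithms shipped with AIF360; the broader point is that detection and mitigation become explicit, repeatable pipeline steps rather than ad hoc checks.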

IBM notes that bias is part of the human condition, pointing to confirmation bias, anchoring bias, and gender bias as influences that skew our everyday decisions in detrimental ways. At this juncture, when AI is becoming ubiquitous in our everyday lives, acknowledging our biases is the first step. The second fundamental step is to develop governance systems that keep those biases out of the technology that will shape the centuries ahead, and IBM should be commended for its efforts to do so. 

Artificial intelligence is being implemented in industries all over the world and is a central theme of the research undertaken at UCIPT. Our work in the HOPE study uses data to assess and shift behavioral outcomes among people living with HIV and other populations.
