New Research Makes Recommendations For AI Governance. Surprisingly, Google Is One Of The Report’s Collaborators


It was a cool fall day in September when the 2030 Sustainable Development Agenda was adopted at the United Nations headquarters in New York. The year was 2015, and governments around the world signed on to prioritize people, planet, prosperity, peace, and partnership – the central tenets that shape the 17 Sustainable Development Goals (SDGs). While advanced technology can be used to further most, if not all, of the SDGs, technological innovation is particularly relevant to Goal 17 – strengthening the means of implementation and revitalizing the global partnership for sustainable development.

Specifically, Goal 17 calls on governments to "enhance international cooperation on and access to science, technology, and innovation and enhance knowledge sharing on mutually agreed terms" and to "fully operationalize the technology bank and science, technology and innovation capacity-building mechanism for the least developed countries." In a nutshell, the Sustainable Development Goals call for technology to be internationally cooperative and equitably distributed. Now, the Association of Pacific Rim Universities (APRU) and the United Nations Economic and Social Commission for Asia and the Pacific (UN ESCAP) have taken that mandate and run the ball further down the field.

The two organizations also enlisted the help of a hugely influential tech firm in their endeavors. Five years after the initial SDG announcement, Google, APRU, and ESCAP released a new research-based report titled "AI for Social Good―A United Nations ESCAP-APRU-Google Collaborative Network and Project." One of the report's takeaways is the need to reduce the potential of AI to commit "human rights abuses, while not suffocating the beneficial uses." Christopher Tremewan of APRU asks whether governments are authentically accountable to their citizens or whether they are aligned with high-tech monopolies. "As with all technologies, we face the questions of ownership and of their use for concentrating political power and wealth rather than ensuring the benefits are shared with those most in need of them," Tremewan states.

The paper makes the following recommendations: 

1. Theory and Practice: Governments should pursue closer alignment and integration between theory and policy in formulating their AI strategies. Only by breaking down the wall between academic research and policy discussion can governments formulate effective policies that are well supported by research and grounded in knowledge and theory. For example, governments should discuss how to prepare their labor forces to rise with AI by equipping workers with the skills and capacities to work with technologies that enable their labor rather than replace it. Education and training, in schools and across the labor force, should put more emphasis on social intelligence and creative intelligence, which AI is unlikely to replace in the future of work.

2. International Organizations and the Developing World: AI affects developed and developing countries alike. That said, many developing countries are ill-prepared due to limits on resources, technological know-how, and policy capacity. National AI strategies have so far been released only by developed countries and global powers; no developing country has set up a comprehensive AI strategy. Context and institutions also matter in determining a nation's ability to embrace AI and survive the job disruption it causes. Unlike the welfare states of Western countries, the social protection systems of many developing countries are feeble, depending far more on self-reliance, the vitality of the economic system, and family support. This means that individuals' capacity to withstand the economic instability and downturns caused by AI-driven job disruption would be weak and unsustainable. Given the limited capacities and resource constraints of developing countries, the report recommends that global and international organizations such as the World Bank, the UN, and the World Economic Forum take the lead in offering advice and support to developing countries crafting their own AI strategies.

3. AI for All: A good AI policy should ensure that all members of society benefit from this powerful technology. Building on the major theme of "AI for Social Good", there should also be "AI for All" – benefiting and empowering every member of society. Inevitably, some people, especially the older population, will find it difficult to retrain for the AI era. As society grows wealthier with AI, how this vulnerable population should be protected and funded will require tough decisions, which can be delayed but never avoided. Equity, social security, and fair redistribution (e.g., introducing a universal basic income to protect vulnerable populations) should therefore be essential elements of all future AI policy responses.

Targeted at policymakers, the AI for Social Good report highlights how COVID-19 and other recent crises have illustrated the need for governments to act swiftly in the public's interest. The publication advises that a governance framework is needed and cautions that it must meet the needs of multiple nation-states and peoples. "AI-driven solutions are never 'one-size-fits-all' and exist in symbiosis with the socio-economic context in which they are devised and implemented," the paper reads. "As such, it is difficult to create a single overarching regulatory framework for the development and use of AI in any country, especially in countries with diverse socioeconomic demographics."

Particular focus is also given to ensuring that gender equity is built into the development of AI. Improving the safety and security of women in public spaces through facial recognition technology and increasing women's mobility through AI-enabled transportation systems are highlighted as positive outcomes that the proliferation of AI could bring about. The 3A Framework's guiding questions were used to systematize the research:

  • Agency: How much agency do we give technology? 
  • Autonomy: How do we design for an autonomous world? 
  • Assurance: How do we preserve our safety and values? 
  • Indicators: How do we measure performance and success? 
  • Interfaces: How will technologies, systems, and humans work together? 
  • Intent: Why, by whom, and for what purposes has the system been constructed?  

The 2020 AI for Social Good report was authored by academics from India, Australia, the U.K., Singapore, Thailand, Japan, South Korea, and Hong Kong. The group hopes to hold a policy forum to discuss the project next year. In the interim, the Secretary-General of APRU advises that further work should be conducted to look at how “social movements can assist formal regulatory processes in shaping AI policies in societies marked by inequalities of wealth, income and political participation, and a biosphere at risk of collapse.” 

For its part, Google says that it believes that AI is a "powerful tool to explore and address difficult challenges such as better predicting natural disasters, or improving the accuracy of medical diagnoses" and that it is working with the AI for Social Good team to "meaningfully contribute to these solutions, drawing on the scale of our products and services, investment in AI research, and our commitment to empowering the social sector with AI resources and funding."

Artificial intelligence is being implemented in industries all over the world and is a central theme of the research undertaken at UCIPT. Our work in the HOPE study uses data to assess and shift behavioral outcomes among people living with HIV and other populations.
