As the impact of artificial intelligence grows more evident, we must ask: how can "responsible AI" move from theoretical concept to practical application? In a webinar hosted by Access Partnership's AI Policy Lab, experts from various fields gathered to examine how to turn responsible AI theory into practical, real-world frameworks.
Senior Policy Manager Jonathan Gonzalez led the discussion, which delved into aligning AI's potential with human needs, considering regional factors, and formulating governance models. Offering distinct perspectives on critical issues at the crossroads of AI and policy, the conversation proved a valuable resource for industry professionals and policymakers alike.
Gonzalez introduced the concept of "responsible AI," suggesting that a consensus on the term rests on two major facets. The first is responsibility by design, which emphasizes embedding responsibility into AI development from the start. The second is responsible deployment: AI should be used to solve problems, enhance lives, and promote inclusivity. This point highlighted the crucial role that laws, policies and regulations play in preventing AI from causing harm.
Caitlin Corrigan, executive director at the Institute for Ethics in Artificial Intelligence (IEAI), discussed fundamental rights, sustainability, societal impact and technical trustworthiness. Colin Christie, chairman of the Advisory Committee at the Analytics Association of the Philippines (AAP), called for addressing the problem of AI alignment. Christie is also the chairman of Navix Health Inc., a company that develops integrated health care software using AI.

Mojca Cargo, senior manager at Artificial Intelligence for Impact (AI4I) at the GSMA, introduced significant initiatives such as the "AI Ethics Playbook," which sets guidelines for developing sustainable data-driven products and services that comply with privacy and ethical standards. Giulia Ajmone Marsan of the Economic Research Institute for Asean and East Asia (ERIA) emphasized the need for inclusive conversations involving diverse stakeholders, acknowledging that solutions aren't always straightforward. Gonzalez noted that defining responsible AI can be challenging, but "we can agree on what it should not be."
Christie's insights into the role of AI in the Philippines were particularly interesting. He discussed the reliability problem of AI, detailing the consequences when AI hallucinates or misinterprets data, which can pose dangers in areas such as medical diagnosis. Christie expressed optimism about the long-term outlook for responsible AI and the potential for establishing governance mechanisms. Citing existing examples of global cooperation on various issues, he argued that we aren't starting from square one. Still, he stressed the need to prioritize short-term challenges, asserting that AI's impact on societies and economies isn't a future possibility but a present reality.
"The impact of AI on societies and economies isn't some theoretical occurrence set to happen in the future — it's already happening," Christie added. Citing an alarming statistic from a Silicon Valley podcast, Christie said AI could impact up to 30 percent of jobs in the tech sector this year alone.
This warning has struck a particular chord in the Philippines, Christie explained, given the economy's heavy reliance on business process outsourcing, an industry that employs more than 1.2 million people.
Christie acknowledged the US view of AI as an economic boon but warned that it might exacerbate global unemployment and deepen the digital divide, creating an "AI divide" that benefits the US through its greater access to the technology. To tackle these challenges, Christie suggested a strategy akin to medical triage, outlining a threefold plan:
1. Upskilling and reskilling workers worldwide to minimize job disruption and ease the transition to an AI-driven economy.
2. Assessing AI's short-term impact honestly, focusing on the next six to 24 months.
3. Guiding policymakers and lawmakers on effective and sensible AI regulation to prevent harmful knee-jerk reactions.
I agree with Christie's view that AI has the potential to amplify global inequality and unemployment. Through meaningful dialogues and strategic planning, we can align AI with ethical principles. Christie's approach underscores this point, providing practical steps to capitalize on AI's benefits while mitigating its adverse effects. As we move into an AI-driven future, we must act now, focusing on upskilling our workforce, assessing AI's immediate impacts, and guiding effective regulations. Our goal is a world where technology and ethics coexist for humanity's benefit.