Spotlight on Ona Oshen (NY, ’09, Revisited ’17)

Ph.D. candidate at Osgoode Hall Law School, York University

In anticipation of our Alumni in AI Career Panel on March 7, 2024, we spoke with Ona Oshen (NY, ’09, Revisited ’17). Ona is a Ph.D. candidate at Osgoode Hall Law School, York University, where she is researching the intersection of artificial intelligence (AI) and international law. She has also served as a member of the volunteer expert group convened by Nigeria’s National Information Technology Development Agency to develop a national AI policy for Nigeria, where she led the security-focused team.

Read on to learn about Ona’s policy work and her advice to those interested in working in the emerging field of AI.

How did you become interested and involved in policy work around AI?

My journey to AI and policy work was not planned, but one thing that was certain from early on was that I would undertake doctoral study at some point. When the opportunity finally came to start during the COVID-19 pandemic, my research was going to be in the area of migration and development. But then I received from a mentor what has turned out to be very valuable advice: to consider instead the emerging area of AI. As for my interest in policy work, I have always been drawn to work that has the potential to deliver impact in society. A core challenge in regulating AI is the fast pace at which the technology is developing, whereas it can take time to achieve legislation or to conclude regional or global agreements of a binding nature. Policy work provides a path for advocacy and other actions that support the identification of priorities and the formulation of direction in the lead-up to the development of law. When Nigeria’s National Information Technology Development Agency (NITDA) issued a public call for experts to participate in the development of a national AI policy for Nigeria, I got my first opportunity to engage in AI policy work. I went on to lead the security-focused team in the fulfillment of this mandate.

You are writing your dissertation on the intersection of AI and international law. Can you give us the elevator pitch of your thesis?

My dissertation has two main objectives. The first is to explore the extent to which norms of international law are evolving in response to the advent of AI. The second is to investigate how, in reverse, the design, development and deployment of AI are shaped by international law. Because the applications of AI often transcend geographical borders, the risks it portends are not merely of domestic concern. Therefore, while governments around the world are taking steps toward regulating AI domestically, there is a compelling need to understand the role of, and gaps in, international law. This would add to an understanding of the problems and prospects relating to AI, not only helping to resolve issues of interoperability of standards but also informing broader discussions on what shape the global governance of AI should take. My research seeks to add to this conversation as a timely contribution to scholarly and policy discourse around this revolutionary technology.

Please tell us about your work with the volunteer expert group convened by Nigeria’s National Information Technology Development Agency (NITDA) to develop a national AI policy.

NITDA’s work on Nigeria’s AI policy builds on the country’s National Digital Economy Policy and Strategy (2020-2030) document, which identifies AI as a focus area under the pillar for advancing digital society and emerging technologies in Nigeria. The volunteer expert group selected by NITDA included people from academia, industry, and civil society, bringing together a broad range of perspectives and varied expertise. We were allocated into subgroups based on identified policy impact areas, and our task was primarily to provide strategic advice and to undertake research and drafting in connection with the policy. I was appointed to lead the security-focused team. For a country beset with insurgencies and other security challenges, the focus on security in Nigeria’s AI policy aims to generate important recommendations for deploying AI to bolster national defense while safeguarding human rights. In terms of the drafting process, each subgroup held multiple meetings and reported periodically to the general group over several months. The work of the various subgroups was consolidated, and an initial draft of the policy was finalized. When the policy is published by the government of Nigeria, it is expected that NITDA will play a lead role in its implementation.

What advice would you give current Davis Polk associates and our alumni who are interested in your area of work?

At the start of my foray into the field, I sometimes wondered what business I had with AI given my non-science background. I soon recognized that the safe absorption of this technology into society requires all hands on deck. Expertise from various domains is crucial and, especially in the area of regulation and governance of AI, a legal background is extremely valuable. Apart from embracing the adventure of channeling one’s skills toward a new endeavor on the leading edge, I would advise three things that have been pivotal for me: mentorship, news consumption, and volunteering. I cannot overstate the value of mentorship in both directions. As I have mentioned, my journey to AI was spurred by a mentor but, in addition, the opportunity to work on a national AI policy was brought to my attention by a mentee working in the technology field. I would also advise staying plugged into the news cycle, as knowledge really does convert to a competitive advantage, from gaining an understanding of the issues to speaking competently about them and becoming equipped to co-create solutions with other stakeholders. Finally, volunteering is a great way to build capacity in a new area, and this is exactly what the role with NITDA represented for me.

How would you recommend that associates and alums stay current on AI-related questions and topics?

I have found newsletters to be a great way to access information on AI developments. These include technology and AI-focused newsletters from reputable news organizations, universities, think tanks and civil society organizations. At the risk of information overload, following key individuals and organizations on platforms such as LinkedIn and X (formerly Twitter) can also highlight regulatory and technological developments, as well as events, seminars, trainings, jobs and, indeed, all things AI-related. This even includes entertainment for kids! It was on LinkedIn that I first came across news of a project that could reportedly animate users’ uploaded drawings (literally bringing art to life).

Do you see any new or growing windows of career opportunity/contribution within the AI space over the next several years?

I definitely see a huge window of career opportunity unfolding in the AI space within both the public and private sectors. This wave is reminiscent of the emergence of privacy officer and sustainability officer roles over the last several years. AI officer roles will be necessitated by regulation and also driven by business needs. We are already seeing the beginnings of these trends, for instance, with U.S. federal agencies needing to designate chief AI officers pursuant to the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. For corporations, efforts to embrace digital transformation are leading to an increase in AI officer roles. There is also a proliferation of AI-related opportunities in international organizations and governments, including contributing to expert groups and sitting on advisory committees.

Do you have a memory of your time at Davis Polk that you would like to share?

I have a medley of good memories, but a couple stand out. It can be quite a juggling act to balance motherhood with a job as a lawyer and, for months before I had my first child while at the firm, I was worried about how it would turn out. Who knew I would be treated to a surprise baby shower! It took place at one of our fortnightly Environmental practice group meetings, and Loyti Cheng and Betty Huber, co-heads of the group at the time, made sure that my friends from other groups were identified and invited to the party. Fast forward to my first day back at work, when I was greeted by a printout of my baby’s photo on my office pinboard. I still have that printout. I also still have a Mets blanket from the game I attended with Gail Flesher (now a senior counsel) in 2010.