The rise of AI – and AI policies

This year brought us a new normal. Covid-19 and shelter-in-place mandates have prompted remote environments and more virtual activity across the workplace. As a result, private and public sector organisations have increasingly deployed automation and AI tools to efficiently address new demands and challenges stemming from the pandemic. For example, airlines have used automation to clear back office queues for flight cancellations and amendments, public finance departments have used AI to process financial support claims, and utilities have used AI to anticipate and adapt to increased electricity demand.

Not only are organisations using automation and AI solutions to more effectively manage new demands that have emerged as a result of this crisis, but they’re also doubling down on these tech investments to scale digital transformation initiatives and ensure they have the appropriate structure in place to support operational agility and continuity, no matter what comes their way next. In fact, Forrester projects the pandemic will give automation and AI new urgency for enterprise risk and resilience, with organisations increasingly using these technologies to scrutinise and predict risk, adapt to demand fluctuations, and enhance digital labour.

Amidst the rise of AI and fast-tracked digital transformation, guidelines about how to use AI appropriately are poised to take centre stage in the months and years ahead. Policymakers in the EU have been discussing ethical and trustworthy AI for a while now, and we’re beginning to see similar initiatives from the US and other countries.

As the entire world navigates this new normal, what should policymakers consider to ensure the proper development and use of AI? We explore below.

Establishing AI leadership bodies

National AI leadership bodies are needed to develop a harmonised approach to AI development and deployment. These bodies would be responsible for creating or advising on guidelines, codes of conduct, and standards that guarantee AI trust and safety.

To establish these guidelines and standards, it’s critical that the AI leadership bodies secure perspective from various industry and civil society stakeholders, including researchers, academics, and representatives from the private sector. By involving a diverse group of people and perspectives, the leadership bodies can secure a comprehensive view of known and potential AI risks and opportunities, and work together to develop guidelines that ensure appropriate and safe applications of AI.

International cooperation

We’re already seeing the development of various types of AI groups, and conversations around establishing and enforcing AI guidelines have started to take hold. The EU has formed alliances to discuss AI developments and impact, as well as proper implementation and strategy. The US has also called for the promotion and protection of national AI technology and innovation, while ensuring its safety and security. At the same time, international organisations like UNESCO and the OECD are examining the complex issues raised by AI technologies.

In their efforts to establish AI guidelines within their own jurisdictions, it’s critical that AI leadership bodies collaborate with their counterparts abroad and with international forums to work towards a harmonised approach to AI development and deployment. Through global collaboration and discussions, nations can apply lessons learned from other countries and consider more varied components of establishing AI guidelines. For instance, some countries have struck contrasting tones on AI policy, debating whether government-, technology-, or ethos-first approaches are the best way forward. While these different perspectives create challenges, they also open doors for more dialogue and a path to increased international cooperation.

Building a clearer definition around AI 

Core to the discussions held by AI leadership bodies should be how to ensure AI safety without hindering innovation. Too many guidelines could stifle the benefits afforded by AI, while too few could pose potential risks and uncertainty. To mitigate risk without hindering innovation and progress, AI leadership bodies must begin by clearly defining AI. Today, many technologies and systems fall under the AI umbrella; we need a narrower and more precise definition of AI, which will allow for more effective policies on how to use it appropriately.

Once a clear definition of AI is established, AI leadership bodies must next assess and evaluate possible risks associated with the use of AI and corresponding mitigation techniques. There are a number of factors to consider here, including quantifiable and predictable risks, probability and severity of outcome (which should be assessed based on AI use cases), and the ability to prevent risks.

It’s also important to remember that AI systems are oftentimes industry-agnostic, and that the same AI algorithm can pose little to no risk in one use case and higher risk in another. For example, the same algorithm could be used to create personalised ads for clothing and for medicine. The latter could pose greater risks than the former, as advertising medicine to the wrong audience could lead to improper usage and even health issues. In these instances, the onus is on the deployer to assess the risks and apply industry- and use case-specific mitigation techniques. However, the leadership body should also offer parameters and guidance for how AI stakeholders can evaluate potential risks for different applications.

Developing and nurturing AI competences and digital skills

In addition to establishing AI guidelines, leadership bodies must also support digital upskilling and reskilling efforts to enhance AI awareness and acceptance. The impact of AI on the future of work has never been more pressing, and evolving digital skills are crucial for remaining competitive in today’s world.

One aspect of this is continuing to educate people on the value of AI, as well as how best to use it. UiPath research recently found that 58 per cent of respondents feel intimidated when tasked with learning how to use new technology, yet 80 per cent feel they would be more productive at work if they learned new skills. This underscores that AI and automation are welcomed in the workplace, but that there is a great need for upskilling and reskilling initiatives.

One way to enhance people’s digital and AI skills is to offer education and training programs, including open online courses that are scalable and easy to implement. These programs can increase awareness of new technologies, improve skills and competences in AI, and open doors to new career opportunities. Partnerships and close collaboration between academic institutions and the private sector should also be encouraged and incentivised accordingly to equip the next labour force with future-ready skills.

Looking Forward

As more enterprises and government organisations accelerate AI adoption plans to increase their competitiveness, there is a growing need for guidelines and common requirements for creating and deploying AI. This comes in the form of ongoing dialogue and consultations between governments, the private sector, and academia to achieve a common understanding of AI ethics, trust, and data privacy; these specialised, in-depth discussions will pave the way for AI standards and specific rules.

Looking forward, we’ll continue to see more efficiencies and productivity driven by AI, as well as more industry-specific use cases that will further prove the technology’s value. Any regulatory attempt should therefore balance potential risks with the social and economic benefits created by AI. Now is the time to establish a clear definition of AI and to avoid unnecessary barriers to development and adoption, while maintaining safety above all else. Furthermore, as AI technology becomes ubiquitous, a global consensus on the fundamentals of its ethical implications is more important than ever.
