Opinion

Promoting responsible AI adoption – a key step in CEE’s AI journey

Making sure AI is used responsibly requires us to build new and stronger collaborations with civil society, academia, governments and industry.

Europe is on a journey to become a digital-first economy, which will boost GDP and help the region compete on a global stage.

Artificial Intelligence (AI) is helping accelerate this shift, unlocking innovation and greater productivity in every country and every industry. These capabilities can be leveraged by startups and legacy companies alike, with McKinsey predicting that AI could contribute up to 2.7 trillion euros to Europe’s economic output by 2030. 

With its reputation as a hotbed for technology start-ups, Central and Eastern Europe (CEE) is well placed to maximise this potential. We're already seeing the region get ahead of the game – in Czechia, a new investment fund is raising 20 million euros for AI start-ups, and companies across the region have long been adopting the technology at pace to enhance efficiency and solve critical problems.

The state-of-the-art AI we see today is a powerful tool for enhancing critical thinking and stimulating creative expression. It enables us not only to search for information but to seek answers to our questions. It can help people discover insights amid complex data and processes. And it helps us express what we learn more quickly.

AI has already been beneficial in many areas of human life, from preserving vision to advancing cancer treatments, discovering new properties of proteins, and even forecasting dangerous weather events. Foundation models like GPT-4 can act as assistants, making search a more effective tool for research and enhancing people's productivity and creativity at work.

In short, the possibilities for people are enormous. And the obligations for those of us who create this technology are even greater. We are at a critical inflection point for AI, and we need to use this landmark year not only to launch new AI innovations, but to responsibly and effectively address both the opportunities and challenges that lie ahead. It is vital to empower organisations across CEE to leverage AI responsibly.

Acknowledging this critical need, Microsoft has long been committed to building a responsible AI infrastructure. Our approach to Responsible AI is grounded in our AI principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Since 2017, we have invested in a cross-company program to ensure that AI systems are designed with responsibility in mind. The establishment of the Office of Responsible AI in 2019 and the development of the Microsoft Responsible AI Standard further underscore our dedication to mitigating potential harms associated with AI technologies. 

The learnings we have taken from this approach centre on three broad areas.  

First, we must ensure that AI is built and used responsibly and ethically. Proactive, self-regulatory efforts by responsible companies will help pave the way, but they will have to be complemented by effective AI regulations that are flexible and adaptable enough to grow with the technology. Government and private sector partnerships will have an important role to play in defining the guardrails together.

Second, we need to ensure that AI is aligned with human values and serves the common good, not just the interests of a few. To do this, we need to foster a culture of responsible innovation and governance of AI, involving multiple stakeholders and perspectives. We need to empower workers with the skills and opportunities to thrive in an AI-enabled economy, and support students to achieve better learning outcomes with AI tools. We need to protect the rights and dignity of the most vulnerable groups, especially children, and ensure that AI does not exacerbate existing inequalities or create new ones.

Finally, we need to leverage AI for environmental sustainability and social justice. AI can help us tackle the climate crisis, by monitoring environmental impacts and developing clean energy solutions, while also ensuring a fair and inclusive transition to a low-carbon society. 

Fundamentally though, making sure AI is used responsibly requires us to build new and stronger collaborations with civil society, academia, governments and industry. We all have to work together to better comprehend the challenges that we face and the solutions that are most likely to work. This is the moment to cooperate on the principles and guidelines for AI. 


This article is part of Digital Future of CEE, a regional discussion series, powered by Emerging Europe, Microsoft and PwC.


