

As we speak, Artificial Intelligence (AI) is being implemented in health systems, sentencing, and educational services. It is already impacting citizens and institutions. An informed social dialogue with civil society on what is ethical and responsible AI is necessary to enable societies to enjoy the beneficial potential of AI and mitigate its risks.

Indeed, in an effort to mitigate the risks associated with AI and to build trust in a socially disruptive technology, a significant number of working groups have published frameworks and guidelines¹ on what constitutes ethical and responsible AI. These initiatives required considerable work and collaboration, and they have laid the groundwork for industry and governments to develop and deploy AI-assisted products and services. Unfortunately, the overrepresentation of the tech industry and the near-absence of civil society organizations (CSOs) in these groups deprive them of the appearance of impartiality necessary to ensure trust.

The absence of CSOs from these working groups is sometimes justified by their lack of AI expertise and low levels of digital literacy. It is shocking that we would use the digital gap as an excuse to exclude a sector in which 80% of the workforce happens to be female, at a time when women are notably underrepresented in AI. Worse, this exclusion undervalues the sector's essential contribution to an unbiased and sustainable deployment of AI in society.

Our laws are written representations of social values, and on that assumption, democratic processes were put in place to guarantee equal representation of everyone's voice. CSOs' role is fundamental in ensuring the legitimacy, credibility, and acceptability of AI and of the normative frameworks that will govern its development and use. As citizens' trusted allies, they are best suited to ensure that those frameworks represent citizens' values and needs. To achieve this, these organizations need to be better equipped with an understanding of AI's potential and of its political, legal, and societal implications. Building this capacity will lead to inclusive regulatory innovation and growth in a knowledge-based, data-driven economy.

Furthermore, Fairness, Accountability, and Transparency (FAT) are three widely accepted concepts that define the basis of ethical AI. Transparency, in particular, includes the concept of communication, or social dialogue. To be functional, communication must take place between parties capable of understanding each other through shared language and knowledge.

In short, an obstructive digital gap is limiting the capacity of important segments of civil society to understand AI's implications, and until that gap is closed, ethical and responsible AI is not yet possible.

What does art have to do with democratic and inclusive governance of AI?

To achieve legitimate AI policies, research² has shown that we need 1) a large number of citizens, 2) a diversity of perspectives, and 3) an understanding of the implications of the science or technology at stake. One of the best ways to achieve all three is through interactive forms of art. Art has time and again proven to be a powerful tool to raise public awareness of difficult issues, engage civil society in an informed social dialogue, revitalize democratic processes, and give qualitative substance to inclusive AI governance. For these reasons, I aspire to explore the role of art, artists, and creators in civic engagement, regulatory innovation, and the development of informed policies around data and AI.

[1] The Global Landscape of AI Ethics Guidelines, A. Jobin, M. Ienca, E. Vayena, 2019.
[2] Theatre as a public engagement tool for health-policy development, J. Nisker, R. Bluhm, D. Martin, A. Daar, 2006.