Power/Knowledge/Responsibility & AI

Written in reaction to a series of conversations.

Conversations about Power & AI can produce a unique kind of chaos that we tend to avoid. For example:

Should AI engineers be uniquely responsible for irresponsible AI?

I think this is the most divisive question in Trustworthy AI, but I care about it because it forces us to discuss the role of power in AI/ethics.

Full-Stops and Question Marks

Often someone will say that AI/ethics is the collective responsibility of everyone who touches it, so:

(a) no-one is really responsible because everyone is responsible.
(b) it's all relative, so no-one can really say.
(c) what is the responsibility of each person in that chain?

We need to rebel against (a)/(b) because they diffuse responsibility amongst an amorphous group; yet, in my experience, these are the options that most of us take.

I believe that the pressure to reject the idea of unique responsibility is a reaction to the looming argument about the unique position of an AI engineer as a potential 'crumple zone'.

Toby Walsh (Machines Behaving Badly), for example, writes that:

power does not trump ethics… focusing on power rather than ethical concerns brings risks [like] alienating friendly voices within those power structures.

That sense of alienation is a full-stop in a conversation, and such a position on power/ethics immediately yields problems.

Walsh, for instance, goes on to write that it is important to recognise that “there is no universal set of values with which we need to align our system [and that] it is not a simple dichotomy”.

Clearly, to navigate that kind of complexity, we need to think of AI ethics as a kind of discourse rather than a judgement about a system frozen at a point in time.

... we should think of [AI ethics] as an activity: a type of reasoning... about what the world ought to look like. ... some ethical arguments about AI may ultimately be more persuasive than others... [it is] indeed constructive for AI ethics to welcome value pluralism

Annette Zimmermann, Bendert Zevenbergen, Freedom to Tinker (Princeton)

Yet such discourse is itself an exercise of power, and persuasion is its outcome!

When we retreat from a conversation about power in AI ethics, we find ourselves caught in paradoxes and confusion. We desperately need to engage with an understanding of discursive power in AI ethics, and yet we face a fundamental problem: these discussions alienate and divide.

So long as we choose full-stops, we will trace out the outer limits of power structures in AI, inadvertently amplifying their significance through our aversion to them.

To progress Trustworthy AI, we need to re-invite a discussion of power back into the conversation.

Layers of Power

A Brief Note on 'Power'

I see power as the totality of externality that coerces, denies and imposes. To be clear, I do not find compelling the construction of power as an attribute that a person possesses.

A Brief Note on 'AI'

Rather than as a technology cluster, I interpret 'AI' as a cultural practice of ceding our decision-making capacity to autonomous systems that co-shape us.

Power in AI

Challenging conversations in AI ethics begin to make sense when we recognise that, unlike other highly-available commercial technologies, AI uniquely projects and encodes power:

Power is exercised when [we] participate in the making of a decision that affects [someone else]… but power is also exercised [to the extent that someone else] is prevented… from bringing to the fore any issues… detrimental to [our] set of preferences.

Bachrach and Baratz (1962), edited

Historically, our exercise of power through decision-making has been tempered by physical constraints. AI is fearsome because it can impose our decisions at massive scale and micro-granularity, in unforeseen and chaotic contexts.

So long as we are actively encoding our capacity to coerce and deny into such a technology, we have a unique responsibility to understand how power refracts through our work.

Power in AI Ethics

When we talk about AI ethics we are exercising our own discursive power to coerce the development of an AI system towards our own preferences.

If we deny the role of power in AI, then we freeze the ethical discourse at the point where power becomes an unavoidable part of the conversation, and we exclude others from participating in the management of power in AI.

...focusing on power rather than ethical concerns brings risks [like] alienating friendly voices within those power structures

The only people who can afford not to talk about power are those who benefit from power structures. By forcing any discussion of power to become a point of alienation, we are reinforcing existing power structures that exacerbate historical inequities.

The Unique Role of the AI Engineer

Knowledge holds an intrinsic power-effect within the formative struggles of discussions about AI ethics, and those with knowledge are more likely to shape outcomes towards their own preferences.

... to understand the algorithms that go into developing the AI technology... we felt that we needed to work closely with... the technical experts that [are] actually developing the AI project... so that we can have them interpret for us... whether the algorithms are meeting the outcomes that were expected

Participant in Comptroller-General Forum on AI Oversight, page 76, GAO-21-519SP, Artificial Intelligence: An Accountability Framework

If AI ethics is a formative struggle by which we arrive at more responsible AI, then the prototypical 'AI Engineer' is characterised as a person who holds unique knowledge about an AI system.

In practice, their knowledge is persuasive and accords them a special role as a kind of arbiter of truth about the system.

Should AI engineers be responsible for irresponsible AI?

Conversations about accountability in AI introduce an uncomfortable tension because we recognise that the power embedded in the unique knowledge of the 'AI Engineer' accords them a unique responsibility.

The 'AI Engineer', then, has a role not only in the mechanical process of encoding decision-making power into an 'AI' system but also in the discursive process of ethical reasoning.

Decoded.AI & Systems of Knowledge

So long as knowledge of AI systems remains predominantly the domain of AI engineers, they are likely to be burdened with an enormous responsibility.

Our goal is to make that knowledge about a system accessible to more participants at scale, so that both the AI engineer and the other participants in the discursive process of ethical reasoning are unburdened.