Decoded.AI is about bending AI/ML pipelines to business and policy analysis.

We're on a journey to a world integrated with AI

The story that we hear over and over again is that most AI projects fail because they are disconnected from business and policy metrics. In practice, that could mean that the right risk controls aren't in place, that the limitations of the model are unclear (so it's prone to misuse), or that the product somehow isn't suited to the task. We've found that these conversations quickly become hard to reason about as different worlds collide and abstract business/policy risks are applied to the code that we write and the metrics that we capture.

When it comes down to it, we're trying to decide whether a block of code is 'ethical' or 'cost-effective', and that's a significant challenge. It would be faster and easier if we could express and perform that analysis directly from the code, so that we can meaningfully engage in these conversations earlier in our implementations. At Decoded.AI, we set out to do that using something called frames.



How do we get to an AI-native world?

In our view, there are two broad problems that need to be worked on to get to a truly AI-driven future:

  1. Tech: build better pipelines so that we can find more robust weights for more meaningful problems.
  2. Adoption: make more people more comfortable with relying on AI products for core activities.

As an industry, we've spent a lot of time building tools that help us find better-performing weights, so most AI products excel in technical achievement but fall short on adoption and integration. Right now, we're going through a transition period where our weights and the patterns that we use to build them are evaluated in complex social contexts against increasingly abstract metrics.

Progress from the collision of worlds

To get to the next stage of AI, we need easier ways to reason about the abstract requirements that matter in the social adoption of a new technology. The best way to do that is to bring together many different people with different ways of framing the barriers to adoption so that we can collaboratively explore how to overcome them.

But this is also where the struggle of colliding worlds begins, and often there's an intense collaborative tension between the team (or team member) that owns the abstract risk and the one charged with fixing the problem. In Agile/DevOps terms, our feedback loops and lead times on those metrics are long and high-risk, so we need to tighten them. One way to do that is to 'shift left' on our analysis of these abstract metrics so that we have a sense of them before we invest the training time and set out on the long journey to approval.

Shifting left on abstract objectives

Shifting left is taxing, as teams struggle to project the performance of early-stage AI products against abstract metrics. When our code and tooling depend on reducing high-dimensional, 'real-world' data to computable statistical symbols, re-injecting that context back into the analysis becomes confusing.

Something like bias is highly contextual: what matters most is why the bias exists, its likely consequences, and what we're doing about it, rather than which way it falls. At a high level, we're well-equipped to have that discussion, but it falls apart as we approach the task of finding the lines of code or design patterns that drive the problem.

Two underlying problems

Looking for a way forward, we found ourselves asking two big questions:

  1. How can teams interpret and make sense of a thing that no one person can fully understand?
  2. When there are a number of ways of looking at a problem, how do we balance those perspectives?

The inspiration for frames

I often find myself thinking of an imagined conversation between three characters, William James, Walter Lippmann and Erving Goffman, that goes something like this:

[At a table in an ambiguous time]

James contemplates: "under what circumstances do we think things are real?"

Lippmann muses: "well, the real environment is altogether too big, too complex and too fleeting for direct acquaintance!"

Goffman proposes: "yes, so we must frame reality in order to negotiate it, manage it, comprehend it, and choose appropriate repertories of cognition and action."

Lippmann chortles: "yes, and when that world view is challenged, then comes the sensation of butting one's head against a stone wall."

What we find compelling is that frames appear across domains, from Einstein's General Relativity to Picasso's Cubism, and we started to ask whether we could capture how they work to help us build more robust AI. If we could master frames, then we could use them to help teams interpret and make sense of complex AI systems, as well as to meaningfully balance perspectives by making them (and their assumptions) obvious and accessible to others, at scale and independently of the AI's creators.

Codifying frames

Inspired by these ideas, we set out to codify them so that we could begin to shine a different light on our code, shifting left in our consideration and projection of abstract metrics. In The Whole World Is Watching, Todd Gitlin writes that "frames are principles of selection, emphasis and exclusion" and an interaction of presentation and interpretation. Using those same actions, we've built a platform that helps us to analyse AI pipelines from different perspectives by selecting, emphasising, excluding and arranging information in useful ways.
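To make those actions a little more concrete, here is a minimal sketch of how a frame could be expressed in code. This is not Decoded.AI's actual platform or API: the Frame class, its fields, and the example pipeline metadata are all hypothetical, and only illustrate the idea of selecting, emphasising and excluding information recorded by a pipeline.

```python
# A hypothetical sketch of a "frame" as principles of selection,
# emphasis and exclusion applied to a pipeline's recorded metadata.
# All names and fields are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Frame:
    """A perspective that selects, emphasises and excludes pipeline facts."""
    name: str
    select: set                                     # facts this frame considers relevant
    emphasise: dict = field(default_factory=dict)   # fact -> weight (higher = more prominent)
    exclude: set = field(default_factory=set)       # facts deliberately left out

    def apply(self, pipeline_facts: dict) -> dict:
        """Re-present pipeline metadata from this frame's point of view."""
        view = {
            k: v for k, v in pipeline_facts.items()
            if k in self.select and k not in self.exclude
        }
        # Arrange the view so the most emphasised facts appear first.
        return dict(
            sorted(view.items(), key=lambda kv: -self.emphasise.get(kv[0], 0))
        )


# Illustrative metadata that a pipeline might record during training.
pipeline_facts = {
    "accuracy": 0.91,
    "false_negative_rate_by_group": {"A": 0.04, "B": 0.11},
    "training_cost_usd": 1800,
    "data_provenance": "internal CRM export, 2021-2023",
    "random_seed": 42,
}

fairness_frame = Frame(
    name="fairness review",
    select={"false_negative_rate_by_group", "data_provenance", "accuracy"},
    emphasise={"false_negative_rate_by_group": 2, "data_provenance": 1},
    exclude={"random_seed"},
)

print(fairness_frame.apply(pipeline_facts))
# {'false_negative_rate_by_group': {'A': 0.04, 'B': 0.11},
#  'data_provenance': 'internal CRM export, 2021-2023',
#  'accuracy': 0.91}
```

A cost or robustness review could define its own frame over the same recorded facts, which hints at how balancing perspectives becomes something a team can inspect rather than debate purely in the abstract.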

Using frames to drive AI adoption

Frames are a central idea at Decoded.AI, and we use them to express all the different ways of understanding and analysing AI systems. They work by re-framing information in patterns that select, emphasise and exclude ways of thinking. With them, we can create simpler models of AI systems that are easier to analyse from different perspectives and amenable to the kind of abstract analysis that adoption requires. By building frames directly from the code, we aim to take the industry forward by helping us all project the performance of our models beyond accuracy and towards contextual problems like robustness and reliability.