
Defining, Deriving & Quantifying AI Risk

We’re an award-winning Australian research lab developing a novel theory of AI to create greater economic impact.


We’re pursuing paradigm shifts in AI:

Redefining AI: Changing Paradigms for Better Economic Outcomes

Definitions originate & delimit our conceptual understanding of AI. Before we can move forward, we need to shake up old paradigms and work with universal principles that can serve us long after we discover unforeseen technical methods.

First Principles of AI Security Risk Identification & Quantification

What lies at the root of AI security, and how can we understand it well enough to universally identify, quantify and meaningfully redress its risks?

Fundamental Paradigms of Trust in AI: The Role of Negotiation

What role does ‘trust’ play in realising the economic transformations of AI, and how does this influence how we think about our role in engineering AI systems?

Verifying & Validating AI Risk Controls: From Theory to Tests

How do we build trust in systems through meaningful testing practices that translate directly into universal negotiations about the suitability of an AI system?

Building Trust Efficiently: Three pillars of ‘Trustworthiness’ in AI

If trust can accelerate the economic gains of AI, how do we approach building & measuring the ‘trustworthiness’ of an AI system to build better AI systems as well as save time and money during development?

Capital & AI: Understanding Economic Structures & Incentives in AI

What are the processes, structures and incentives by which we are building systems that are capable of taking on a decision and actively co-shaping us for increased economic gains?

Practicalities of Persuasion: Reviving Frames of Analysis

How do we make the process of identifying, substantiating and negotiating about the claims of an AI system as efficient, enriched and dynamic as possible?

... and more being made public all of the time.

Shedding Old Paradigms to Embrace New Ideas:


A definition is both the source and the limitation of an idea; a conceptual boundary tracing and substantiating outer limits by proclaiming:

“This is where AI begins, and this is where it ends. The remainder lies outside our interest.”

They are paradigmatic roots that form a conceptual structure within the confines of which we negotiate meaning and comprehension. Our definitions deeply shape pathways of linguistic friction in our discourse, grinding some conversations to a halt whilst letting others glide freely.

By defining Artificial Intelligence as a technique, or as a discipline pursuing a vague idea of intelligence, we make techniques the limits of our discursive possibilities, creating odd moments like the tacking-on of ‘ethics’ or ‘responsibility.’

Our reliance on a techno-centric, ad-hoc definition of AI has engendered a comfort with opaque systems that are deceptively sensitive to their context of application, and has empowered existing hegemonies to decry hesitation to adopt AI as unscientific.

When thinking about AI, we prefer to focus our attention on the Point of Economic Impact - the moment where AI makes a difference:

“AI meaningfully impacts our economy when we cede a decision so that it can be exercised at macro-scale with micro-granularity”

Power is uniquely and inextricably entangled within AI because we encode in it our most human expression of power to make fine-grained decisions that coerce and deny at scale:

“Artificial Intelligence is a practice of ceding decisions to autonomous systems that actively co-shapes us”

If we localise our interest at the moment of economic impact then we can be confident that activities that further our pursuit or achievement of the components of this definition will also increase the scale and significance of our impact.

We can also do away with confusing ideas, like conceptualising learning as a simulation of biological intelligence, and instead work with it as a technical mechanism for shifting our capital investment towards a system that can exercise decisions at macro-scale with micro-granularity.

Our main body of work relates to proving and exploring how this alternative construction of AI can accelerate the development of more impactful AI as well as make important conversations universally accessible.

What we mean by Artificial Intelligence:

[Diagram: the decision that we cede to an autonomous system]

Trust is a Negotiation at the heart of AI:


What is the mechanism by which we cede a decision to an autonomous system?

Under our construction of AI, trust is conceptualised as a kind of highway of varying degrees of friction and support that empowers us to place a reliance on an AI system in order to yield meaningful economic gains.

Trust in an AI system is intrinsically built by persuading a person to cede a decision to a greater extent than before. Mechanically, we achieve this by leveraging claims about our system as credibility in the negotiated comprehension and attribution of the system as sufficiently reliable.

As an AI engineer, on the other side of our negotiation is a group of people trying to figure out how to think about our system so that they can decide whether or not they would be in a better economic position should they place a reliance upon it.

The main function of an AI engineer therefore becomes a process of substantiating claims related to a definition of a system that can be leveraged to persuade people to cede a decision to an autonomous system that actively co-shapes them.

We lose out on the efficiency of our AI investment if we spend time substantiating claims that do not meaningfully impact their deliberations.

A pseudo-formalism of these ideas as an economic process involves us conceptualising people H as a bundle of decisions D ⊆ { D¹ ... Dˣ } which they can cede to a system S for an economic gain ΔY contingent on the satisfaction of claims criteria C:

AI can be thought of as a function ƒ: (Dⁿ, S, C) → ΔY

The satisfaction of the claims criteria creates a kind of binding reliance on a definition of an AI system which has consequences upon breaking.

Our job is to build S in such a way that the claims are satisfied, C(S) → ⊤, so that we can maximise ΔY. Our main challenge is that C is typically unknown until the moment a system is being evaluated for its suitability to take on a decision.

To meaningfully and efficiently substantiate meaningful claims, we can derive a measurable approximation of the claims to which we believe a system must attest based on what we know about the decision Dⁿ so that we can build S to improve the chances of getting it right.
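The pseudo-formalism above can be sketched in code. This is an illustrative toy, not an implementation from the text: the names (`Decision`, `System`, `economic_impact`) and the example claim about an error rate are assumptions introduced purely to make the structure of ƒ: (Dⁿ, S, C) → ΔY concrete.

```python
from dataclasses import dataclass
from typing import Callable

# A claim criterion C_i: a predicate evaluated against a system S.
Claim = Callable[["System"], bool]

@dataclass
class Decision:
    """A decision D^n a person may cede, with its economic stakes (hypothetical)."""
    name: str
    gain_if_ceded: float  # the ΔY realised when reliance is placed on S

@dataclass
class System:
    """An AI system S described by measurable properties (hypothetical)."""
    properties: dict

def claims_satisfied(system: System, criteria: list) -> bool:
    """C(S) -> ⊤ only when every claim criterion holds."""
    return all(claim(system) for claim in criteria)

def economic_impact(decision: Decision, system: System, criteria: list) -> float:
    """f: (D^n, S, C) -> ΔY — the gain is realised only when the claims
    criteria are satisfied and the decision can therefore be ceded."""
    return decision.gain_if_ceded if claims_satisfied(system, criteria) else 0.0

# Example: a single assumed claim that the system's measured error rate is low enough.
criteria = [lambda s: s.properties.get("error_rate", 1.0) <= 0.01]
loan_triage = Decision(name="loan triage", gain_if_ceded=120_000.0)

reliable = System(properties={"error_rate": 0.005})
unreliable = System(properties={"error_rate": 0.2})

print(economic_impact(loan_triage, reliable, criteria))    # ΔY realised
print(economic_impact(loan_triage, unreliable, criteria))  # claims unmet, no ΔY
```

Note how the sketch mirrors the main challenge stated above: if the criteria list is unknown when S is built, we can only construct S against a measurable approximation of it.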

Trust has a clear role in AI and can be made into a measurable process by understanding how we leverage substantiated claims of credibility, or dis-credibility, to negotiate a definition of a system that supports a determination of whether or not to cede a decision.


More to come!

Cite as: Lengyel, S.R. and Fourie, J.M. (2023) Decoded.AI. Available at:

All rights reserved © 2023 Decoded.AI

The Content is for informational purposes only; you should not construe any such information or other material as legal, tax, investment, financial, or other advice. Nothing contained on our Site constitutes a solicitation, recommendation, endorsement, or offer by Decoded.AI or any third party service provider to buy or sell any securities or other financial instruments in this or in any other jurisdiction in which such solicitation or offer would be unlawful under the securities laws of such jurisdiction.

All Content on this site is information of a general nature and does not address the circumstances of any particular individual or entity. Nothing in the Site constitutes professional and/or financial advice, nor does any information on the Site constitute a comprehensive or complete statement of the matters discussed or the law relating thereto. Decoded.AI is not a fiduciary by virtue of any person’s use of or access to the Site or Content. You alone assume the sole responsibility of evaluating the merits and risks associated with the use of any information or other Content on the Site before making any decisions based on such information or other Content. In exchange for using the Site, you agree not to hold Decoded.AI, its affiliates or any third party service provider liable for any possible claim for damages arising from any decision you make based on information or other Content made available to you through the Site.