As AI engineers, our algorithms have the power to shape the social structures in which our users exist. By making some processes easier, we encourage patterns of behaviour that come to influence how people interact and interpret their worlds. All algorithms are part of the social context in which they act, but the most complex are those that feed information back from the environment in an active process of co-shaping. We believe that, because of that impact, we have a deep responsibility to incorporate socio-technical analysis into how we think about our work.
If algorithms make meaningful decisions, then their impacts are consequential. So how do we best understand those consequences?
One of the challenges of socio-technical analysis is that it requires a kind of 'throughline' between abstract sociology and the mechanics of the technology. To build that throughline, we are using and building our product so that we can talk more deeply about these problems. The basic idea is that we can use information already available during testing, training and development to extract the data needed for sociological analysis. Because it is generated from our code, our analysis is inherently connected to (i.e. generated by) the technology of our learning pipeline.
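To make the idea of extracting analysis data directly from the pipeline concrete, here is a minimal sketch of what instrumenting an evaluation loop with social-context metadata might look like. This is purely illustrative and not decoded.ai's actual code; every name in it (`AuditRecord`, `audit_decisions`, the toy model) is hypothetical:

```python
# Hypothetical sketch: the pipeline itself emits the data needed for
# sociological analysis, so the analysis stays connected to the code.
from dataclasses import dataclass, field
from collections import defaultdict


@dataclass
class AuditRecord:
    """Per-group decision statistics captured during evaluation."""
    # group -> [total decisions, positive decisions]
    decisions: dict = field(default_factory=lambda: defaultdict(lambda: [0, 0]))

    def log(self, group: str, approved: bool) -> None:
        total, positives = self.decisions[group]
        self.decisions[group] = [total + 1, positives + int(approved)]

    def approval_rates(self) -> dict:
        return {g: pos / total for g, (total, pos) in self.decisions.items()}


def audit_decisions(model, examples, record: AuditRecord) -> AuditRecord:
    # Each example carries social-context metadata alongside its features,
    # so the record is a by-product of ordinary evaluation.
    for features, group in examples:
        record.log(group, approved=model(features))
    return record


# Toy stand-in model: approve when the score exceeds a threshold.
model = lambda score: score > 0.5
examples = [(0.9, "a"), (0.2, "a"), (0.7, "b"), (0.8, "b")]
rates = audit_decisions(model, examples, AuditRecord()).approval_rates()
print(rates)  # approval rate per group: {'a': 0.5, 'b': 1.0}
```

The design point is that the audit record is produced by the same loop that exercises the model, rather than by a separate after-the-fact study, which is what keeps the sociological analysis "generated by" the technology.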
You can think of our work as playing the role of the hyphen in socio---decoded.ai---technical.
Many of the pages here embed live 'decodings' that help us show you what we're talking about.
Last modified: 2022-11-07