We’ve asked what we can gain, but have we asked what we may lose?

It's AI month at Luminary as we continue to explore what role generative AI will play in the business and the wider tech industry as a whole. As a company, we are working through how we can ethically incorporate generative AI into our workflows: what we can and cannot use it for, what our responsibilities are, and how we engage with it.

Matthew Dalla Rosa

3-minute read

In society in general, but especially in digital and tech, there is a tendency to hail a new technology’s benefits, to imagine the utopias that will spring forth with its implementation and to praise the new world that awaits. The futurist movement, kept alive and well. Yet this often comes at the cost of stopping to think about what happens when we use it.

How will it change us at the margins? How will it change the way we relate to each other? What ideologies are embedded in the technology, and how could they influence the way we behave? We’ve seen this time and again with the rise of the internet, smartphones and social media, to name just a few. With the benefit of hindsight, what might we be asking when it comes to generative AI?

Atrophy of language

We are learning to talk and prompt like a machine in order to elicit human-like responses. As we get better at crafting prompts, refining and massaging responses, do we surrender our minds to machine-like thought patterns? We ask AI to write responses with a specific tone instead of applying it ourselves. We ask AI to summarise copy without fully considering what we truly want to summarise. Our ability to use language to bridge the gap between people, a notoriously difficult task, is facing the possibility of collective atrophy.

The ability to communicate is being outsourced to the machine. In our efforts to save time and expedite research and communication, we speed up until we obliterate the meaning behind anything we’re trying to communicate. The Italian philosopher Franco "Bifo" Berardi describes the replacement of human language with artificial intelligence in his book The Third Unconscious as “The generation of automatic signs whose meaning is established by the code. Automated signification is nothing”.

Machines talking to machines, empty symbols met by empty symbols. What does it look like when the work we create and emails we send are heavily influenced by AI language? Will our ability to communicate degrade and stagnate without the aid of our virtual assistants?

The process matters

When we start looking to expedite understanding and meaning, using AI to summarise source notes, complete a first pass at synthesising data, or write a first version of a report, we continually degrade the quality of our source material and our ability to form deeper thoughts. This sentiment is explored at length by the short fiction writer Ted Chiang in his article ChatGPT Is a Blurry JPEG of the Web: “Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.”

For example, in UX research, when synthesising interview notes, the process of reading through, sorting and theming notes is as much about grouping them together as it is about building an intricate understanding of what has been said, what hasn’t been said and how it starts to connect in novel and interesting ways. Walking into someone else’s drafts or work, even when you’ve ‘briefed’ them, is often a discombobulating experience and comes with a shallower understanding that requires us to rework and backtrack.

Yet for many, the use of AI in these spaces is seen as a time-saving tool you’d be unwise not to take advantage of. Not all work needs to be inherently creative or original; much of what we do can feel rote or expected, even in research. Yet by using tools that devalue the creative process, we will only ever find the cursory, the superficial meaning skimmed from the top of a pool, never knowing the hidden depths below the surface that might uncover issues, illuminate problems in a different way or change our perceptions of what we thought we understood.
