I was walking through the suburbs the other day when I saw an advert for an estate agent on the side of a bus stop. Even from afar, I understood the gist of it. Above the agent’s details was a princess castle surrounded by candy floss clouds. Fair enough, I thought. Message received! The slogan came into focus soon after and confirmed its intention. “Make your dream home come true,” or words to that effect. But it wasn’t until I was within a few metres of the picture that I realised what was wrong with it. The edges of the clouds were gnarled at the intersections and ironically grotesque. The castle’s ramparts disobeyed the architecture of the scene, like stacked geometry inside a heaving game engine.
You couldn’t look past these mistakes if you were an artist working on a commission, your hawk-like eyes darting across the finished canvas. I don’t think any living artist could make such mistakes, to be honest, even if they tried. It’s a very particular kind of artifice, a fakeness that we’ve all been growing eerily acquainted with since the days of Google’s DeepDream doggies.
But what I hated most about this scenario was that I understood the advertisement from a distance. The media-to-brain transaction had been completed, and if I had turned a corner instead of approaching the bus stop, I would never have given it a second thought. For all I knew, the estate agent might have hired an artist to draw the fantasy diorama instead of feeding a machine prompts until it farted one out.
Streamlining apathy
Based on some of the real-world applications I’ve encountered so far, AI has quickly become shorthand for human laziness. Its heaviest, most ham-fisted proponents have adopted an ‘ultimate efficiency’ mindset, a predictable consequence of our endless neoliberal nightmare. It’s a weird fork of silicon accelerationism that declares anything slightly inconvenient the enemy. You can imagine the rested eyes of Bateman-esque businessmen lighting up at the suggestion of a new financial streamlining, provided, at no extra cost, by a generative artificial intelligence program.
Elsewhere, I can’t seem to watch a YouTube video without being interrupted by a plastic voice advertising the latest nefariously dropshipped product, a glasses cleaner or a garage remote, which is allegedly “Taking The World By Storm.” Google’s supposedly smart search algorithm can’t help but direct me towards AI articles, either, hammering another nail into the barely-wood coffin that is the post-GPT media landscape. Check out this article by ‘Jarvis the NPC’, who “loves to browse Reddit and explore YouTube for the latest gaming content and sharing his learnings with his gaming buddies.” Poor sap, he couldn’t even get a human to write his bio. It would be funny if it weren’t so sad.
Every week, we get a new, worse example of what can be done with AI. Most recently, Queensland’s state library launched an AI-powered WW1 veteran chatbot, which went about as well as anyone could have expected. Then Drake decided to wade into the swamp, generating the voice of Tupac Shakur for a diss track – only for Shakur’s estate to threaten legal action until it was removed from streaming shortly after. What sucks about this is that by making headlines, institutions and influencers are legitimising the technology in its basest and most reckless form.
Dream theatre
And it’s a shame because, in the right hands, AI does have some imaginative potential, especially if it is leveraged as part of the maker’s toolset to further an (often satirical) creative vision. Take this Adam Curtis CoreCore piece by Silvia Dal Dosso, which turns the technology on itself to capture the nausea of the cultural moment it is creating. Similarly, Jon Rafman’s Instagram feed attempts to lay bare the terrifying boundaries of our weird obsession with neural network imagery.
Alongside heaps of traditional computer-generated and stop-motion animation, Hugh Mulhern’s music video for Porter Robinson’s Cheerleader also uses a few artificial hallucinations to deepen the tongue-in-cheek mixed-media spectacle. Scores of talented people still worked on the video.
When I’m feeling charitable about AI, I think about how image-based and text-to-video neural networks share many of the same faults our brains do when we’re dreaming: an inability to manifest proper fingers and hands, unexpected visitors in agoraphobic environments, and an inimitable haze that keeps our imagination juuust out of reach. Who knows what we might find if we could study that in a vacuum? But AI has already broken out of the facility and become our collective problem. It’s not a research project in the realm of the boffins anymore – anyone with an iPhone can turn Heath Ledger into Sean Dyche.
Hotel Breakfast by Joe Biden
Through flashpoints like the Taylor Made Freestyle, the average person may interpret AI’s potential as a new toy to fool around with, which is great PR for the entire concept as it squirrels away in the background, quietly making administrative jobs obsolete. I won’t pretend I didn’t laugh when someone made Joe Biden rap a Bladee song – I guess that makes me part of the problem. Even so, I’m holding out some small hope that this massive, corporately-legitimised-and-seemingly-inevitable technological sea change can also have some savvy medical applications, at least as a consolatory sidecar.
I have severe hearing loss and, consequently, a free battery-powered hearing aid from the NHS. I’m grateful for it, even if it doesn’t help me much, but I often daydream about a projector chip that I could attach or integrate into my glasses, which would overlay captions as people speak to me IRL. Perhaps it could label speakers and throw up a little awareness arrow pointing to wherever the sound is coming from – extra helpful if I’m in immediate danger, or if there’s a smoke alarm going off, or something. This technology would be life-changing for me and presumably many other HOH/Deaf people. It would significantly mitigate my hearing-related social anxiety and help me hold onto conversations without overclocking my superior temporal sulcus. I’d undoubtedly regain a lot of the confidence I’ve lost.
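For a sense of scale, the captioning half of that daydream is already almost trivially small in software terms. Here’s a minimal sketch using the open-source SpeechRecognition Python library, with cloud transcription and a laptop microphone standing in for the glasses; the speaker labels, the direction arrow, and the on-device hardware would need diarization models and a mic array well beyond a toy like this, so treat it as an illustration rather than a blueprint:

```python
# A toy sketch of live captioning, assuming the SpeechRecognition
# library (pip install SpeechRecognition PyAudio). Everything fancy –
# speaker labels, direction arrows, an AR overlay – is out of scope.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Calibrate the energy threshold to the room's ambient noise.
    recognizer.adjust_for_ambient_noise(source)
    print("Listening…")
    while True:
        # Grab up to five seconds of speech at a time.
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            # Cloud speech-to-text; real glasses would run on-device.
            print("[caption]", recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            pass  # silence or mumbling – skip this chunk
        except sr.RequestError:
            print("[caption] (transcription service unreachable)")
```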
It would also rely on artificial intelligence, that pandoran gambit I’ve been digging at so far. As I found out in a fit of Google desperation, a team is working on part of this concept, producing an app called XRAI Glass that can cast live subtitles onto digital screens. Genius! It’s compatible with some AR glasses, too. The only thing is, the specs are all prohibitively expensive – just like the private hearing aids I depressingly window-shopped for earlier this year. I could be using cutting-edge technology to improve my life… if it weren’t for some very clear and insurmountable financial barriers! So it goes.
Quelle surprise, but the silicon superpowers remain relatively quiet on this ‘AI as a social enterprise’ front unless the innovation integrates with one of the many metaverse platforms with which they will vie for control over the digital Arrakis. As such, a bleak fog overwhelms the notion of progress in this direction. Besides some extremely rare authored ingenuity, it seems like AI will become just another culture industry plaything, an intrinsically commercial content vehicle that does little but extend our screen time. All the while, it keeps destroying (or, at the very least, consuming) more industries than it can ever hope to create. Let’s see how this one plays out!