The Merriam-Webster dictionary’s word of the year for 2025 was ‘slop’. No, not the food scraps thrown to a pig. But not far off. The modern-day slop is defined as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.”
In other words, it’s the tidal wave of images and videos, music, advertising, news and other written content, often fake, that has flooded the internet since the popularization of generative artificial intelligence (AI).
A lot of it, seemingly harmless, can be entertaining, or pull at our heartstrings – the talking cats; a man rescuing a baby fox stuck under ice on a frozen lake; a deer, or heck, a cougar, crashing through somebody’s living room window, jumping over a couch behind which humans have taken shelter, and then bouncing right back out again.
But there are also the deepfakes – the celebrities and political figures saying things the real versions of those people have never said. In one instance, CBC host Rosemary Barton and Prime Minister Mark Carney (both generated by AI) spoke about cryptocurrency investment opportunities supposedly encouraged by the federal government. Some Canadians were persuaded to put money behind it and lost thousands. It was a scam.
AI can now make music and tell you what kind of music you should listen to; it can make movies, write a news article, even write a love letter. More and more, it claims to know and represent what is real and, beyond this, to do what has until now been a uniquely human ability: to process and react to that reality through art, through writing, through talking to each other.
Every time we invite the technology to do one of these things – to use what it already knows about the world to mash together something supposedly new – we write, or prompt, ourselves out of our most important job as humans: using our own faculties to think and feel our way through this world. It’s a watering down of the human experience.
Even the lighthearted slop demands our attention, asking us to do a double-take to figure out whether it’s real. What will this constant double-taking do to our relationship with what is in fact real? Will we still trust it?
A common rebuttal argues that concerns around the popularization of AI are similar to those that arose when the internet was introduced, or television, or even earlier technologies that revolutionized how we relate to each other and understand our world. But those technologies did not replace the act of thinking and processing the world, and they did not claim to represent reality with artificial, computer-generated reconstructions of it.
It’s in reaction to this increasingly sloppy online world that THE EQUITY will soon be sharing its first AI policy. We’ve written it to offer full transparency on how and where we will be using AI going forward, both in our news and advertising content.
When it comes to our journalism, our use of AI will be next to none, limited more or less to the AI that exists in word processing and translation software. We may use AI to help us with research or data analysis, but humans will always be brainstorming, interviewing, writing, editing and fact-checking the news content we publish, so readers will never have to wonder whether the photos or words we share are the product of a human brain. They’ll never have to do a double-take.
This may take us longer, and we may make mistakes. We are humans. But when we do report something incorrectly, readers will have a human to hold accountable.
While AI may promise all sorts of efficiencies in information processing and storytelling (the business we are in), we feel the act of understanding and writing about this community, this world, is a privilege – not one we’re willing to hand over to the bots.
There’s an interesting conversation to be had about whether AI is anti-human or an extension and enhancement of humanity, since it is, after all, humans who created the technology, and human thinking patterns and creativity that we’re teaching it to mimic.
But regardless of the answer to this abstract question, every human will have to decide for themselves how much of their own experience of the world they are willing to give over to an algorithm.
If we allow artificial intelligence to do the heavy lifting of exercising our own intellectual and emotional intelligence, we demote ourselves to mere control operators – technicians of the human experience, guiding it with prompts but no longer at its centre.