Big Tech’s MO has long been to ask for forgiveness rather than permission.
Last week OpenAI CEO Sam Altman sent a letter to the community of Tumbler Ridge, B.C., apologizing for his company’s failure to flag a user’s account with the RCMP. In February that person went on to shoot and kill eight people, six of whom were children.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” he wrote. “I reaffirm the commitment I made [ . . . ] to find ways to prevent tragedies like this in the future. Going forward, our focus will be on working with all levels of government to ensure something like this never happens again.”
We have his word, folks. The timing of the apology is awkward, though. Also last week, Florida launched a criminal investigation into the company over accusations that the alleged gunman in a mass shooting at Florida State University last year, in which two people were killed, had been advised by ChatGPT on how to carry out the attack.
Chatbots are designed to tell users what they want to hear, because they want users to keep coming back. The problem is not the technology itself; it is the underlying business model. These platforms are built to keep users hooked, and that is how their designers make their billions: tracking our online behaviours and selling hyper-targeted audiences to advertisers. We learned this lesson with social media. Now we have AI to contend with.
Strategically rooted in a business ethos of disruption, Silicon Valley companies frame their work as upending the status quo, bulldozing the traditional structures that shape how we interact with the world, and offering new, supposedly democratizing platforms they promise will give us all more power.
The utopian promises of these so-called disruptive technologies are relentless. And for the most part, we’ve bought in. We welcomed social media platforms with open arms, inviting them into our homes, into the palms of our hands and our children’s hands, where they’ve perhaps been most destructive. Now governments are realizing this may have gone too far, and are trying to put the toothpaste back in the tube, an impossible task.
Some provinces and school boards have banned cellphones in classrooms. Australia has barred children under 16 from accessing major social media platforms, and Canada is considering something similar. Over the weekend, Manitoba Premier Wab Kinew announced his province would ban youth from using social media as well as AI chatbots, a first in Canada. The catch is that any age-based restriction will depend on users uploading some form of government photo ID to prove they are who they say they are, and that they are indeed of age. The 24-7 surveillance we already accept through our use of these platforms will reach new levels.
What effective regulation that doesn’t further compromise users should look like is unclear. And even if the solution were known, implementation is a whole other beast, especially given that Canada is about to begin renegotiating its trade agreement with the U.S. and Mexico. We already saw how quickly Canada did away with its Digital Services Tax, which was meant to collect revenue from the American tech giants for the profits they make from Canadian eyeballs.
But the current approach of merely urging these companies to make their platforms safer, and demanding apologies when harm is caused, is a joke in the face of the race underway to dominate the AI market. These companies are determined to embed themselves as thoroughly as possible in our lives and beat out their competitors, seemingly at any human cost.