When the soon-to-be-former president of the United States fired up a mob whose leaders had planned, in open sight, an attack on his nation's Capitol, neither he nor they were exercising a right to free expression. Conspiracy to commit a crime is not protected speech—anywhere.
But when Twitter and Facebook suspended Donald Trump's and thousands of conspirators' feeds, was that an attack on their free speech? Not literally. Any publisher has its own free-speech right—to decide what to publish or not. (Pre-internet, doing so was called editorial responsibility, and the world is still figuring out where publishing power ends and digital conversation begins.)
In any case, I won't miss the utterances of Crazy Donald Trump (sorry, not sorry) on my own screens. The muting of his voice in daily feeds will hopefully bring some relief from abuse for BIPOC and trans people, among others. And maybe all Canadians will begin paying more attention to Canadian issues.
Maybe, even, normally non-crazed people will gradually begin dialing outrage down and spending more energy on listening.
But meanwhile, the events of January 6th, 2021, have opened a window on complex questions around social-media censorship. An American Civil Liberties Union lawyer was right to warn that while powerful people can work around the lack of a Twitter or Facebook account, "others—like the many Black, Brown, and LGBTQ activists who have been censored by social media companies—will not have that luxury."
To my eyes, anyway, there was something scary about the speed with which a near-monopolistic cloud-hosting provider (Amazon Web Services) and the big two mobile app stores (Apple's and Google's) moved to shut down a social-media platform, however dreadful the conversations that Parler refused to moderate.
In effect, the first weeks of 2021 saw the big five tech giants shrug off their long-standing claim of political neutrality and assume (to liberals' widespread acclaim) a content-policing function. This doesn't exactly take the internet to terra incognita; rather, the dragons are now in plain sight. Tech providers know their terms of service (duh), and even Zoom has used these to censor academic conversations it doesn't like.
This idea of for-profit censorship seems about as healthy as if Bell Canada began screening phone conversations for whatever it considers appropriate sexual content.
Or as if letter carriers were free (oh, wait…) to deliver their personal choice of pro-choice or pro-life materials.
Or as if local police were free to choose which legal protests to protect and which to shut down. (Whose hands would be on the first placards to be confiscated, do you think?)
Which is why free expression still needs guarding.
It's true that right-wingers have "weaponized" the idea of free speech in recent years—it's activism 101 to grab the most convenient slogan available, and the left lost its grip on this one a while back.
But no one owns free speech. In any functioning democracy, protecting expression is less a political ideal than a necessary, yet fragile, reality.
Yes—fragile, and increasingly so in this time of information disorder. Thought-leaders and governments alike have taken increasingly urgent notice of the current crisis in truth-recognition and harm-avoidance. Social-media distribution systems are chronically and easily manipulated to maximize profit for fabulists and draw audiences to conspiracy-peddlers.
And, because digital advertising money is finite, the sheer amount of traffic flowing to junk-purveyors actively disincentivizes the publication of plain facts and the discussion of nuance.
Meanwhile, the continuing hunger of social platforms and cable news for "content," combined with a finite amount of fresh information, produces a near-constant, confusing stew of reported facts, reporters' speculation, paid-pundits' perspectives and hosts' reframing.
And that, in turn, forces audiences into segments that pick their poison rather than expecting plainly reported facts. Which further undermines ordinary people's ability to distinguish actual journalism from, well, the "alternative."
For practical ideas on combating the truth-determining power of Big Tech and freelance lie-peddlers, I'll continue to look, as you may, to experts on tech issues. But in December, I raised a parallel question: when and how should journalists offer attention to "ugly voices"—those who promote hatred and misinformation?
That column drew thought-provoking responses from several friends and colleagues. To some, I seemed to be implying that journalists should cover today's haters in much the same way as the Chicago Sun-Times covered activities planned by Nazis in Jewish neighbourhoods in the pre-digital 1970s.
I didn't mean to belittle the altered challenges of 2020s media work. That old Plus ça change axiom has long lost its resonance—today, the more things change, the faster they keep changing. Witness, for instance, the controversy sparked by a once-routine news feature about European far-right extremists in Sunday's New York Times.
The radical changes in publication and distribution pathways make it naïve to rely solely on media literacy and the innate power of robust argument to support the readiness of citizens to make informed democratic choices. The exact opposite may have become true, and some research suggests it's not technology (algorithms or robots) but individual humans who preferentially accelerate the spread of fake, versus factual, news.
That leaves the peculiar job of journalists—to shine the light of verified truth into every corner of public interest—more essential than ever.
Sometimes, people we like look bad under that spotlight, and people we despise look better. But that's for the observer to decide, because if journalists collectively believe in anything, it is that truth exists, can be ascertained, and needs to replace both ignorance and lies.
But if the journalist's core job is to deliver truth, it is conversely, and obviously, to mitigate the flow of disinformation. Especially the kind that brings predictable harm to those least able to defend themselves.
But how? For a decade or so, my Ryerson colleague, Dr Nicole Blanchett, has been studying the interaction between journalists and digital audiences. It's a vital research field, and one to which some journalists could helpfully pay more attention.
As a news consumer, Nicole is often frustrated by journalists’ failure to minimize the harm done by disinformation that targets people’s race, gender, or background.
So, as a scholar and as a teacher, Nicole has set out to find, and teach, effective ways to "deamplify" bigotry.
To really "get" this concept, I found myself musing on a thought-experiment. It starts with a recognition that social media are to today's demagogues what the loudspeaker was to European fascists a century ago, but exponentially louder.
Imagine you and I had access to a time machine. Let’s step in (masks on, middle seat empty) and set course for Munich, Bavaria, on November 9, 1938.
We arrive and immediately hear a voice, in urgent crescendo, echoing through the streets. Jews have conspired to assassinate a diplomat, the voice shrieks. We must act. The Führer has decided that demonstrations against the Jews should erupt spontaneously…
If you studied history, you might recognize the words of Nazi propaganda minister Joseph Goebbels. Can you hear screams yet? Tonight, mobs will respond to this speech by smashing homes, businesses, and synagogues; within days, 30,000 Jewish men will be transported to camps.
We keep our trusty time-ship firmly in sight as we peer into an alley and notice a steel desk; on it, a dirty beige box with a single knob and a few wires attached.
You fiddle a bit with the knob, and notice that as you twirl, Goebbels's voice gets a little louder, or softer.
So, of course, you give the damned knob a hard twist to port, and the demagogue's shriek descends to a whisper. Then, you get the hell out of there.
You don't know if your adjustment did anything to diminish the coming murderous orgy, but you had to try.
Back in 2021, it's even harder to deamplify the lies that wreck "othered" lives. Undergraduates in groups targeted by social-media abuse report significantly higher stress levels; white nationalists, misogynists and conspiracy theorists alike "exploit young men’s rebellion and dislike of 'political correctness' to spread white supremacist thought, Islamophobia, and misogyny through irony and knowledge of internet culture."
So, what's a journalist to do?
Common-sense consensus practices have emerged for more thoughtful reporting on known bigots and fake-news creators. A starting point: vigilance over context.
We live, as Nicole says, in "an environment where all content can be atomized." Any news story, or any column (such as the one you're reading now) can be pulled apart by someone else who wants to make my words mean something else. As in, for example: Jewish professor @ivorshap admits: "Jews have conspired to assassinate a diplomat." Please RT urgently. #Protect #Canada.
The "quote" could be literally accurate (see time-machine story, above), but with the context so dramatically distorted, the retweets could multiply across successive echo-chamber boundaries, traveling faster and further than truth could follow. (Oh man, I hope I'm not tempting fate here.)
There's no sure-fire method for preventing malevolent repackaging of this kind, but today's distorted information marketplace makes traditional news-media efforts to correct errors insufficient. Those efforts are as quaintly analogue-bound as are some simplistic arguments offered by those dubbed free speech "maximalists" by L. M. Sacasas.
By the time stated "facts" have been corrected by the source (say, a wire service) or fact-checked by outsiders, non-facts have raced to the ends of the earth. In live coverage and breaking news, fact-checking either gets done in real time or is wasted time (almost).
That's why myriad journalists used, near-religiously (and often rather lazily), phrases like "baseless claim" or "lie" when reporting the then-US president's utterances on election results, pandemic transmission, and more. They felt obliged to report the fact that the head of state was saying these things, so they wrapped the lie in what US media critic Jay Rosen dubbed a "truth sandwich."
His uncomplicated recipe is useful as far as it goes, especially on rapid-fire platforms that count every word. But it's not a substitute for thoughtful news choices and reporting disciplines.
As in, first, asking whether a speech or tweet or protest is fresh enough to earn live coverage at all.
Nicole teaches her first-year news-reporting students that the primary job of news reporting is to seek evidence and place it in as much context as time allows.
To find less predictable facts and place them in richer context, they are expected to go further afield than the most likely sources.
"Talk to the people who are impacted the most by a development; get their perspective," Nicole tells her students. "Look for a variety of reliable sources—sources that don't share your ideology.
"Evaluate the information," Nicole continues. That's not the same as "balancing" a reliably true idea against one that's plainly false. Rather: "You're looking for good sources that give you different viewpoints."
If the rising generation of journalists takes this advice to heart, they will earn renewed respect from news audiences.
News reporting that rests on gathering evidence, finding new perspectives, and providing context isn't just less likely to amplify false and destructive world-views. It’s more likely to offer an engaging, enriching portrait of the world as it can be.
Rising to such an ideal isn't easy, or popular. The sensitivity to others' reactions that comes with being human pulls a decent person to avoid some facts and opinions and relay others.
But to be a professional journalist (an idea to which I'll return next month) means something different.
It means knowing the difference between facts and opinions—and preferring facts.