Living for Truth in the Age of AI


In 1999’s The Matrix, Morpheus (Laurence Fishburne) brings the newly freed Neo (Keanu Reeves) up to speed with a history lesson. At some point in the early 21st century, Morpheus explains, “all of mankind was united in celebration” as it “gave birth” to artificial intelligence. This “singular consciousness” spawns an entire machine race that soon comes into conflict with humanity. The machines are ultimately victorious and convert humans into a renewable source of energy that’s kept compliant and servile by the illusory Matrix.

It’s a brilliantly rendered dystopian nightmare, hence The Matrix’s ongoing prominence in pop culture even 25 years after its release. What’s more, the film’s story about AI’s emergence in the early 21st century has turned out to be somewhat prophetic, as tools like ChatGPT, DALL-E, Perplexity, Copilot, and Gemini are currently bringing artificial intelligence to the masses at an increasingly fast pace.

Of course, the current AI landscape is nowhere near as flashy as what’s depicted in cyberpunk classics like The Matrix, Neuromancer, and Ghost in the Shell. AI’s most popular incarnations currently take the rather mundane forms of chatbots and image generators. Still, AI is the new gold rush, with countless companies racing to incorporate it into their offerings. Shortly before I began writing this piece, for example, Apple announced its own version of AI, which will soon be added to its product line. Meanwhile, Lionsgate, the movie studio behind the Hunger Games and John Wick franchises, announced an AI partnership with the goal of developing “cutting-edge, capital-efficient content creation opportunities.” (Now that sounds dystopian.)

Despite its increasing ubiquity, however, AI raises numerous concerns, including its environmental impact, energy requirements, and potential privacy violations. The biggest debate, though, currently surrounds the massive amounts of data required to train AI tools. To meet this need, AI companies like OpenAI and Anthropic have been accused of essentially stealing content with little regard for ethics or copyright. To date, AI companies are facing lawsuits from authors, newspapers, artists, music publishers, and image marketplaces, all of whom claim that their intellectual property has been stolen for training purposes.

But AI poses a more fundamental threat to society than energy consumption and copyright infringement, bad as those things are. We’re still quite a ways from being enslaved by a machine empire that harvests our bioelectric power, just as we’re still quite a ways from unknowingly living in a “neural interactive simulation.” And yet, to that latter point—and at the risk of sounding hyperbolic—even our current “mundane” forms of AI threaten to impose a form of false reality on us.

Put another way, AI’s ultimate legacy may not be environmental waste and out-of-work artists but rather the damage it does to our individual and collective ability to understand, determine, and agree upon what is real.

This past August, The Verge’s Sarah Jeong published one of the more disconcerting and dystopian articles that I’ve read in quite some time. Ostensibly a review of the AI-powered photo editing capabilities in Google’s new Pixel 9 smartphones, Jeong’s article explores the philosophical and even moral ramifications of being able to edit photos so easily and thoroughly. She writes:

If I say Tiananmen Square, you will, most likely, envision the same photograph I do. This also goes for Abu Ghraib or napalm girl. These images have defined wars and revolutions; they have encapsulated truth to a degree that is impossible to fully express. There was no reason to express why these photos matter, why they are so pivotal, why we put so much value in them. Our trust in photography was so deep that when we spent time discussing veracity in images, it was more important to belabor the point that it was possible for photographs to be fake, sometimes.

This is all about to flip—the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.

Jeong’s words may seem over-the-top, but she backs them up with disturbing examples, including AI-generated photos of car accidents and subway bombs that possess an alarming degree of verisimilitude. Jeong continues (emphasis mine),

For the most part, the average image created by these AI tools will, in and of itself, be pretty harmless—an extra tree in a backdrop, an alligator in a pizzeria, a silly costume interposed over a cat. In aggregate, the deluge upends how we treat the concept of the photo entirely, and that in itself has tremendous repercussions. Consider, for instance, that the last decade has seen extraordinary social upheaval in the United States sparked by grainy videos of police brutality. Where the authorities obscured or concealed reality, these videos told the truth.

[ . . . ]

Even before AI, those of us in the media had been working in a defensive crouch, scrutinizing the details and provenance of every image, vetting for misleading context or photo manipulation. After all, every major news event comes with an onslaught of misinformation. But the incoming paradigm shift implicates something much more fundamental than the constant grind of suspicion that is sometimes called digital literacy.

Google understands perfectly well what it is doing to the photograph as an institution—in an interview with Wired, the group product manager for the Pixel camera described the editing tool as “help[ing] you create the moment that is the way you remember it, that’s authentic to your memory and to the greater context, but maybe isn’t authentic to a particular millisecond.” A photo, in this world, stops being a supplement to fallible human recollection, but instead a mirror of it. And as photographs become little more than hallucinations made manifest, the dumbest shit will devolve into a courtroom battle over the reputation of the witnesses and the existence of corroborating evidence.

Setting aside the solipsism inherent in creating images that are “authentic to your memory,” Jeong’s article makes a convincing case that we’re on the cusp of a fundamental change in our assumptions about what is trustworthy, a change that threatens to wash away those assumptions altogether. As she puts it, “the impact of the truth will be deadened by the firehose of lies.”

Adding to the sense of alarm is that those developing this technology seem to care precious little about the potential ramifications of their work. To trot out that hoary old Jurassic Park reference, they seem far more concerned with whether they can build features like AI-powered photo editing than with whether they should. AI executives seem perfectly fine with theft and ignoring copyright altogether, and more bothered by people raising questions about AI safety than by whether AI is actually safe. As a result of this rose-colored view of technology, we now have situations like Grok—X/Twitter’s AI tool—ignoring its own guidelines to generate offensive and even illegal images, and Google’s Gemini generating images of Black and Asian Nazis.

Pundits and AI supporters may push back here, arguing that this sort of thing has long been possible with tools like Adobe Photoshop. Indeed, Photoshop has been used by countless designers, artists, and photographers to tweak and airbrush reality. I, myself, have often used it to improve photos by touching up and/or swapping out faces and backdrops, or even just adjusting the colors to be more “authentic” to my memory of the scene.

However, a “traditional” tool like Photoshop—which has received its own set of AI features in recent years—requires non-trivial amounts of time and skill to be useful. You have to know what you’re doing in order to create Photoshopped images that look realistic or even just halfway decent, something that requires lots of practice. Contrast that with AI tools that rely primarily on well-worded prompts to generate believable images. The issue isn’t what’s possible but rather the scale of what’s possible. AI tools can produce believable images at a rate and scale that far exceeds what even the most proficient Photoshop experts can manage, leading to the deluge that Jeong describes in her article.

The 2024 election cycle was already a fraught proposition before AI entered the fray. But on September 19, CNN published a bombshell report about North Carolina gubernatorial candidate Mark Robinson, alleging that he posted a number of racist and explicit comments on a porn site’s message board, including support for reinstating slavery, derogatory statements directed at Martin Luther King Jr., and a preference for transgender pornography.

Needless to say, such behavior would be in direct opposition to his conservative platform and image. When interviewed by CNN, Robinson quickly switched to “damage control” mode, denying that he’d made those comments and calling the allegations “tabloid trash.” He then went one step further: chalking it all up to AI. Robinson tried to redirect, referencing an AI-generated political commercial that parodies him before saying, “The things that people can do with the Internet now is incredible.”

Robinson isn’t the only one who’s used AI to cast doubt on negative reporting. Former President Donald Trump has claimed that photos of Kamala Harris’s campaign crowds are AI-generated, as is a nearly 40-year-old photo of him with E. Jean Carroll, the woman he raped and sexually abused in the mid-’90s. Both Robinson and Trump have taken advantage of what researchers Danielle K. Citron and Robert Chesney call the “liar’s dividend.” That is, AI-generated images “make it easier for liars to avoid accountability for things that are in fact true.” Moreover,

Deep fakes will make it easier for liars to deny the truth in distinct ways. A person accused of having said or done something might create doubt about the accusation by using altered video or audio evidence that appears to contradict the claim. This would be a high-risk strategy, though less so in situations where the media is not involved and where no one else seems likely to have the technical capacity to expose the fraud. In situations of resource-inequality, we may see deep fakes used to escape accountability for the truth.

Deep fakes will prove useful in escaping the truth in another equally pernicious way. Ironically, liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes. Imagine a situation in which an accusation is supported by genuine video or audio evidence. As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes. Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence. This skepticism can be invoked just as well against authentic as against adulterated content.

Their conclusion? “As deep fakes become widespread, the public may have difficulty believing what their eyes or ears are telling them—even when the information is real. In turn, the spread of deep fakes threatens to erode the trust necessary for democracy to function effectively.” Although Citron and Chesney were specifically referencing deep-faked video and audio, it requires little to no stretch of the imagination to see how their concerns apply to AI more broadly, even to images created on a smartphone.

It’s easy to sound like a Luddite when raising any AI-related concerns, especially given the technology’s growing popularity and ease of use. (I can’t tell you how many times I’ve had to tell my high schooler that querying ChatGPT is not a replacement for doing actual research.) The simple reality is that AI isn’t going anywhere, especially as it becomes increasingly profitable for everyone involved. (OpenAI, arguably the biggest player in the AI field, is currently valued at $157 billion, which represents a $70 billion increase this year alone.)

We live in a society awash in “fake news” and “alternative facts.” Those who seek to lead us, who seek the highest positions of power and responsibility, have proven themselves perfectly willing to spread lies, evidence to the contrary be damned. As people who claim to worship “the way, and the truth, and the life,” it is therefore incumbent upon Christians to place the highest premium on the truth, even—and perhaps especially—when the truth does not seem to benefit us. This doesn’t simply mean not lying, but rather something far more holistic. We ought to care about how truth is determined and ascertained, and whether or not we are unwittingly spreading false information under the guise of something seemingly innocuous, like a social media post.

Everyone loves to share pictures on social media, be it cute kid photos, funny memes, or shots from their latest vacation. But I’ve seen a recent rise in people resharing AI-generated images from anonymous accounts. These images run the gamut—blood-speckled veterans, brave-looking police officers, stunning landscapes, gorgeous shots of flora and fauna—but they all share one thing in common: they’re unreal. Those veterans never defended our country, those cops neither protect nor serve any community, and those landscapes will never be found anywhere on Earth.

These may seem like trivial distinctions, especially since I wouldn’t necessarily call out a painting of a veteran or a landscape in the same way. Because they look so real, however, these AI images can pass unscathed through the “uncanny valley.” They slip past the defenses our brains possess for interpreting the world around us, and in the process, slowly diminish our ability to determine and accept what is true and real.

This may seem like alarmist “Chicken Little” thinking, as if we’re on the verge of an AI-pocalypse. But given that a candidate for our country’s highest office has already used AI to plant seeds of doubt concerning a verifiably decades-old photo of him and his victim, it’s not at all difficult to envision AI being used to fake war crimes, delegitimize images of police brutality, or put fake words in a politician’s mouth. (In fact, that last one has already occurred thanks to Democratic political consultant Steve Kramer, who created a robocall that mimicked President Biden’s voice. Kramer was subsequently fined $6 million by the FCC, underscoring the grave threat that such technology poses to our political processes.)

Unless we remain vigilant, we’ll just blindly accept or dismiss such things regardless of their authenticity and provenance because we’ve been trained to do so. Either that, or—as Lars Daniel notes concerning the AI-generated disaster imagery that has appeared on social media in the aftermath of Hurricane Helene—we’ll just be too tired to care anymore. He writes, “As people grow weary of trying to discern truth from falsehood, they could become less inclined to care, act, or believe at all.”

Some government officials and political leaders have apparently already grown tired of separating truth from falsehood. (Or perhaps more accurately, they’ve determined that such falsehoods can help further their own aims, no matter the harm.) As AI continues to grow in power and popularity, though, we must be wiser and more responsible lest we find ourselves lost in the sort of unreliable and illusory reality that, until now, has only been the province of dystopian sci-fi. The truth demands nothing less.




