Misdirection is common in our tech universe, whether it’s an AI giant telling us it cannot possibly figure out a way to reimburse the content creators whose work helped train a generative AI model or a policy debate over who is at fault when a robot car runs over a human being. Today, technology sells by hiding such issues. The disappearing acts vary from the role of human operators in the functions of many AI applications to the precise training data behind generative AI outputs. These black boxes undermine efforts to understand technology or use it wisely. They prevent us from assigning responsibility for faults or credit for contributions.
Hiding information enables tragic outcomes. A few examples: (1) Tech companies can draw on global labor that, if not outright slavery, is so close it makes little difference. (2) Consumers remain ignorant of the poisonous conditions endured by the workers who mine the metals and handle the toxic chemicals needed to assemble our smartphones. Just buy the latest version: so clean! so shiny! (3) No one worries about the legion of traumatized workers doing content moderation for social media sites and training AI models, employed in developing nations and sacrificing their lives for a bare subsistence wage so that the rest of us avoid seeing the terrors of rape, torture, and child abuse online. (4) We can be fooled by propaganda on WhatsApp, Twitter/X, and TikTok, where attention-getting takes priority over truth-sharing. (5) We have no idea how rapidly we’re escalating the climate crisis with every GPT query. (6) We are susceptible to racist and sexist bias in hiring, banking, medical, and legal algorithms. I could go on. You’ve probably already thought of several other calamities enabled when tech companies obscure the truth behind their processes or products.
An industry built on hiding things claims to be about communication. Fortunately, it’s not too late to be optimistic; it’s not too late to put our technology to good use. But to make the best of what’s happening in the world of digital technology, we’ve got to understand the unspoken rules of that world.
Unexpectedly, that often means uncovering religious ideas—specifically, Christian ideas—camouflaged in AI, robotics, and virtual reality. I’ve argued that, in fact, the cheerleaders of digital technologies are fundamentally religious practitioners. From AI to video games, digital tech can replace the communities, ethics, beliefs, and experiences of transcendence once claimed by religion. I’ve tracked how the dreams of Apocalyptic AI run rampant: godlike machines, “resurrection” of the dead through computer simulation, and uploaded human immortality have become commonplace. Such beliefs are truly “apocalyptic”: they aren’t about the end of everything; they anticipate a glorious world to come, a world where machine life replaces biology and the cosmos finally becomes meaningful.
Usually, I’ve presented the apocalyptic dream of AI as a collection of facts on the ground rather than as a worldview to favor or oppose. Recently, however, I’ve grown concerned by the idea that future machines could somehow matter more than existing human beings. I expect most people find it uncomfortable to witness the headlong, unregulated race toward artificial general intelligence (AGI), best exemplified by OpenAI’s complete reversal from a nonprofit that wanted to democratize AI into a profit-pursuing religious cult.
Fortunately, new allies want to wrestle with our technologies. Greg Epstein’s Tech Agnostic is a prophetic call for a better relationship with tech, for a society and economy where people matter more than profits. Laura Marks’s The Fold aspires to open our imagination to a bigger and better world through interconnectedness and infinite possibility. Unfortunately, industrial and governmental forces oppose these goals, preferring to frustrate and confine us for control and profit. Epstein and Marks help us resist the marketing hype and find healthier relationships with technology.
The relationships we build using digital tech are social, romantic, economic, political, artistic, and more. Some include black boxes of uncertainty, which permit the use and abuse of human beings (and possibly intelligent machines). In the face of that uncertainty, we must pursue relationships that rely on communication that is as clear as possible; such relationships give us leverage to make the world better and to share our best ideas. The kinds of relationships we bake into digital technologies will shape our lives, so we must pursue clarity as a moral, political, social, and technological goal.
Epstein explores how contemporary technology follows religious logic, and for the most part he finds this distressing. He notes the consolidation of power in the hands of would-be messiahs and the emergence of problematic values such as effective altruism. Like the prophet Micah accusing ancient leaders of eating the flesh of the common people, Epstein critiques the wealthy and powerful (presumably at some professional risk). He describes technochauvinism and its trailing ideologies, noting how these lead to exclusion and oppression. Acknowledging his own time spent with computers, phones, and watches, he wonders when the companies that control our daily tech will move away from addictive interfaces toward safer products.
I don’t always agree with how Epstein frames the contours of tech religion. On my reading, he avoids offering a good definition of religion, despite his appreciation for the historian of religion Jonathan Z. Smith (for whom such definitions, and the examples used to build them, are our foremost responsibility). As a result, in Epstein’s analysis, mindless compulsion gets conflated with ritual, and apocalypticism with catastrophic disaster. I wish he had spent more time on the narratives of salvation and purpose that flow through tech culture, which I believe prevent many people from challenging the economic and political agenda of Silicon Valley.
But Epstein is right to believe we won’t figure out what we’re doing with tech until we recognize the ways it draws on religious inclinations. Like Epstein, I see people turning to a digital religion in the absence of another compelling narrative.
In The Future of Religion (1985), sociologists Rodney Stark and William Sims Bainbridge proposed an exchange model for the origin of religion. Supposedly, people looked for trading partners who could offer goods like perfect happiness and immortality. In the absence of human beings who could do so, we invented spirits and gods.
I don’t know whether that’s accurate to the earliest stages of religion, but, as I’ve written elsewhere, it predicts our digital religions perfectly. The average person looking for an escape from the mundane, the transhumanist looking to upload their mind into a machine and live forever: to both of these users, Silicon Valley offers a potent story of religious-level redemption.
Laura Marks wants to open up our mental doorways to new kinds of faith and practice, to new ways of seeing, understanding, and doing. I sympathize with her goal, but The Fold: From Your Body to the Cosmos is, unfortunately, an academic book in the worst ways. It is riddled with references to monads, unfolding-enfolding aesthetics, semiosis, vincula, affective analysis, collapse informatics, and more. At one point, Marks suggests conducting empathic research to understand the experience of an electron.
The book’s purpose, however, could be expressed with reasonable clarity: Marks argues that our visions of what could be are too narrow and that we can create new futures. For example, she returns regularly to the idea of a healthy climate and ecosystem. If one can get past her all-encompassing (and thus empty) use of words like “soul” and “image” and the near-limitless academic jargon, there is a compelling analysis of how economic forces tell us who we should be and what we should like. Marks presents the entire universe as interconnected in order to show that individuals can assemble and connect to ideas, people, and places that are far away in space, time, or theory.
Such interconnection rejects corporate and government strategies of control, which Marks rightly opposes. Like the panopticon, where the mere possibility of being watched compels prisoners to behave, corporate and government surveillance directs consumer and citizen choice. By putting blinders on us, technological and economic elites constrain our ability to imagine alternative arrangements. Tech control takes hold because the citizen-consumer understands neither how the systems work nor why they are in place. The preference algorithm at Amazon doesn’t find what you need; it preys on your inability to ignore what you want … or invents desires you didn’t previously have.
We need to reclaim the optimistic vision of tech as sharing, and Marks points us in a valuable direction. But to reveal the ways tech companies hide the truth from us and prevent us from seeing new possibilities, we cannot ourselves hide behind academic jargon. We must strive to be as clear as we can be. In fact, in one of the most bizarre moments in her book, Marks endorses the idea that clarity, as a concept, is colonizing and domineering. I could not disagree more. Clarity is about sharing. When a student hands me a paper, I want to know what they’re thinking. I will never completely know what they know, but the clearer they are, the better I understand. I want them to share their insights with me. That is what communication is about. So I tell them that every word they write is an opportunity for me to misunderstand them: write fewer words, and use simple ones.
The social implications of digital tech deserve that kind of simplicity. Complex ideas and complex machines must be rendered clear in their usage and outcomes. To celebrate black boxes and inexpressible ideas is to align with the very forces that Marks opposes.
One of the great books for producing tech clarity is Kate Crawford’s Atlas of AI. Her map reveals the social and industrial machinery that makes AI possible, from mining operations to databases of facial images harvested from nonconsenting people. Crawford reveals what is too often hidden. If nothing else, that gives us a sense of responsibility. We must know the actual costs of our choices, and we should stop shifting those costs onto people in future times and other lands. Our governments can require that AI companies open a few of those black boxes, and they can use corporate taxes to benefit the world. For example, if generative AI companies can’t figure out how to reimburse the artists whose work they used, then they must fund the support of future artists.
We cannot keep the good side of tech-mediated relationships and minimize the bad unless we are clear about what we want and what is happening. Sometimes that clarity reveals ugly realities. In 2024, former Google CEO Eric Schmidt announced that he wasn’t worried about the climate impact of large language models, even though they drive skyrocketing energy and water consumption. He is content because he believes we’re not going to solve climate change anyway; better, in his view, to double down on AI and hope it produces a solution. That’s a terrifying perspective, grounded in individual and corporate disregard and combined with faith in technological salvation.
And it’s not just Google. As Epstein and the chorus he joins have described, the collective world of digital tech promises us a bright future … but only through the horrors it has hidden.
And yet, technologies are not inextricably bound to confusion, opacity, and oppression. They can carry other meanings. Marks wants us to find such meanings for them; Epstein hopes it’s possible. Appropriately, neither is certain. But we can arrive at new meanings when we bind digital tech into worthwhile networks of people and ideas.
I don’t think there’s good reason to be hopeful unless we can get clarity, and we will likely need government regulation to get us there. Both Epstein and Marks call for a better world—the one by rejecting dogmatic religious practices and beliefs, the other by rejecting the closure of vision that favors capitalist profit. Both want a world in which technology advances the human condition and expands the opportunities for human creativity. None of us can be certain what outcomes the future will bring. But we might have reason for optimism if we reveal what lies hidden and lean into clear communication and transparent technologies.
This article was commissioned by Mona Sloane.