Questions Hard, Soft, and Existential

As AI gets smarter and more accomplished, how can we isolate that humanity and infuse it in our work?

At first, it seemed like a fun, clever tech gimmick. Then it started to become a useful personal assistant. But just a few years after its introduction to the average citizen, the AI large language model (or LLM) has become a source of deep worry and troubling questions.
There are practical worries: AI is set to replace hundreds of millions of jobs in high-income countries, and to alter the economy in profound and irreversible ways. In the first half of this year alone, it has already claimed at least 10,000 jobs in the US. How will entire layers of the workforce put bread on the table when their skills are deemed obsolete?
There are worries about personal liberty: AI can provide governments with vast power not just to collect data, but also to make sense of it in chillingly immediate ways. Reuters reports that Chinese AI firms are building models to streamline the government’s already extensive surveillance of private citizens, resulting in a “one person, one file” system where every person’s family, social circles, purchases, earning power, social media posts, and travel patterns can be pulled up and instantly analyzed by artificial intelligence.
Even in the US, scholars and thinkers are concerned about AI’s potential to encroach on individual liberties. Anthropic, the company behind the Claude LLM, is currently fielding requests from US law enforcement to use its AI models for citizen surveillance.
And then there are the worries about apocalyptic disasters. Yoshua Bengio, one of the “godfathers of AI,” keeps sounding the alarm that as AI gets more sophisticated and human-like, it will start to exhibit the uglier side of human capacity: deception, cheating, and sabotaging others for self-preservation. Tech experts have already observed calculated deception by at least one LLM. Will AI ultimately decide to take control of the humans who created it? Will it cut off vital resources — electricity, water, healthcare — or deploy devastating weaponry to keep the upper hand?
These are some of the hard issues keeping deep thinkers, tech innovators, economists, and lawmakers up at night. But the AI revolution comes along with many “softer” questions, too — questions about us. And some are no less troubling.
Create or Aggregate
Two years ago, an AI jingle — be it a sheva brachos ditty or a poem thanking the morah — was sweet and cute in an awkward, insipid kind of way. An AI-generated email had enough telltale characteristics to give itself away with the first em-dash or checkmark. And we all had fun finding the extra finger or logistically impossible shadows in an AI-generated image. But AI is a quick learner, and it now produces writing, music, and images that can feel genuinely stirring — so stirring that we begin to wonder: What is creativity?
There’s One true Creator; only Hashem has the ability to create something from nothing. What we humans call creativity is more often the process of reassembling existing pieces in a new order and a new frame to create something that feels novel. Once the musical notes exist, every song is just a different arrangement of those preexisting notes. And if that’s all it is — why can’t a machine do it too, do it better?
But something in us rebels at that. There has to be an inherently human contribution that makes a piece of music sing, that makes a piece of writing breathe with emotion and relatability or a piece of art radiate real feeling. As AI gets smarter and more accomplished, how can we isolate that humanity and infuse it in our work? How can we develop an ear attuned enough to filter out the mechanical mimicry?
As Simple as That
AI also has an uncanny ability to “learn its users.” So many newsletters, emails, or even speeches are now produced by an LLM that’s been fed enough input to analyze the style, voice, tone, and even humor of the presenter and create a close copy. I’ve heard a business coach advise people to employ AI in planning for negotiations. “Give the LLM every piece of correspondence you’ve ever had with this person, and then ask it how to negotiate — what tone to use, which arguments to present first, what to save for last,” he said. “AI will know this person better than you do. It can predict exactly how they’ll respond to every move you make.”
AI knows something else about us humans. Most of the popular LLMs are not only unfailingly polite and deferential, they are also faithful followers of the “sandwich method” — whereby salty or spicy critique is packaged between layers of soft, pillowy compliments. This might be one of the more fascinating aspects of our interactions with AI: We crave positivity and compliments even when we know they’re fake. When your boss sends you a compliment followed by a criticism, you know it’s not real. You also know that the sweet cushioning primes you to be more receptive, and to go back and do a second draft of that piece, or crunch the numbers again. But it’s even more bizarre to consider how virtual compliments devoid of even the tiniest speck of genuineness can leave us feeling warm and fuzzy and appreciated.
Are we really that predictable? Are we really so transparent? It’s troubling to consider how easily humans can be read — and how deftly manipulated.
Where Words Fall Short
What is language? What is communication? Can you have one without the other? Back in the 1980s, long before any LLMs existed, Geoffrey Hinton, another of the “godfathers of AI,” built what he called a tiny language model. He was hoping to finally settle an old question: How is language learned? Is language acquisition a linear, logical process that can be programmed into a computer just like any other sequential task, or does it depend on some intangible neural capacity wired into our mental circuits?
Hinton’s tiny model showed that computers can process and seemingly “learn” language even without the logical programming sequences his professors recommended. And today’s LLMs — the exponentially more sophisticated versions of his experiment — interact with humans using something very close to dialogue. But many questions remain. AI uses language, that much is clear; but when does the combination of words and phrases in commonly understood grammatical structures become more than a mindless transfer of data? When AI “talks” to us, is it executing auto-complete on a very grand scale, or is real communication taking place?
Spare the Struggle
No one wants to scrub their clothing in the river; we all bless the inventor of the washing machine. Automation and instant transcription save us from what we call “busy work” and allow us to focus on more complex, creative tasks. No one’s complaining about that. At the same time, we still acknowledge that there’s an unparalleled joy in achievement, in creating something, reaching an understanding, building a system that wasn’t there before. We know that worthwhile objectives take sweat and effort to attain. Things will be very different after Mashiach’s arrival, and AI might prove its true function in that utopian existence, but for us here, for us now — do we really want to strip our lives of the joy that comes with setting a goal and sweating to reach it?
Generations of computer programmers started their studies with the golden rule: GIGO, or “garbage in, garbage out.” Computers are stupid, they were told. They will only be as smart as the program you write. Computers were always able to hold information; some might call it knowledge. Computers with fantastic stores of memory can be expected to hold fantastic quantities of information. But we also knew that owning Otzar Hachochmah does not make one a talmid chacham. There’s a huge difference between having access to information and mastering it.
We as a people have always valued knowledge. But knowledge of Hashem and His Torah can only be won after engaging in a deep internal struggle encompassing much more than brainpower.
There’s also a very consequential chasm between information and intelligence. The daas we aim for is different from mere information. It’s the ability to distinguish and discriminate, to build categories and hierarchies, to distance and embrace.
The experts talk about the day computers will achieve agentic intelligence — with the ability not just to follow instructions, but to determine and implement plans of action. Can a collection of circuits ever become sentient? Can a creature with no brain achieve thought? What does it even mean to think? The questions keep growing, the stakes keep rising, and the timeframe keeps shrinking. What’s left for us to do?
Maybe to become more human, more intentional, more in touch with the parts of us that can never be mechanized. We’ve just experienced ten intense days immersed in the obligation — and gift — of reinventing ourselves. Just because we’ve always acted or spoken one way doesn’t mean we’re fated to continue. AI’s prediction model doesn’t have to have the last word. We can break our patterns.
Human speech is produced in the throat, the body part that connects mind to limbs, thought to action, theoretical to actual — because its essential function is to connect. Let’s not lose that. When we listen and speak, daven or debate or tell bedtime stories or conduct DMCs, we can try to avoid our own versions of auto-complete: the ready phrases and absent-minded patter, the vague mmm-hmms and vapid “I hear.” When we interact with people, we can try to dig down and find the capacity for emotion and for growth. And we can pursue true daas: the kind that only comes with struggle, that is so much more than applying a sequence of commands to a body of knowledge, that marks us as human and yet allows us a faint concept of the Divine.
(Originally featured in Mishpacha, Issue 1081)