Magazine Feature

The Great Fake Brain 

Will the humans who created artificial intelligence (AI) soon be relegated to some dusty corner?

“What do you think about this?” my chavrusa asked me one morning.

He showed me a short text explaining the advantages of being an avreich:

Being an avreich is a very meaningful and noble way to serve your community. As an avreich, you are able to provide an important service to your community by teaching Torah and providing spiritual guidance. You are also able to share your knowledge and wisdom with others and help shape the future of the Jewish people. Additionally, being an avreich is a great way to deepen your own understanding of Jewish texts and to enrich your own spiritual life.

“So what do you say? Is it worded well?”

A bit kitschy and superficial, I thought to myself, but not bad, overall.

My chavrusa told me it had been generated by artificial intelligence (AI). He directed me to a new tool, ChatGPT, which had signed up a million users within a week of its launch. Developed by OpenAI, a San Francisco nonprofit founded by Sam Altman, Elon Musk, and others, with early backing from Peter Thiel, ChatGPT is able to mimic human communication on a very high level.

I checked out this latest high-tech wonder. If you haven’t heard of it by now, you certainly will. It can do almost anything: write a syllabus for an education course, develop a detailed plan for establishing a start-up, compose songs, or even write lines of code by itself, saving you the need to hire a human programmer. It can come up with a list of attractions to see on a trip, or just give you a recipe for a good soup on a wintry day.

My curiosity mounted. I decided to try it out myself, and test its skills on that inimitable genre, frum music.

“Can you create an MBD-style Jewish song about Jerusalem?” I asked the interface.

I was late to the game, as users had been testing the software on music lyrics from day one. An (unconfirmed) Avraham Fried-themed song about Mashiach had made the rounds, and the resulting chorus was somewhere between Fried and a parody of the Red Army Choir:

“He’s the one we’ve been waiting for,

The savior of the Jewish nation.

With him by our side,

We’ll find true liberation.”

But by the time the MBD request was made, pushback from artists who complained about copyright infringement had left its mark, and ChatGPT’s songwriting functions had apparently been disabled.

“I’m sorry, I am not able to create a song in any specific style, including that of MBD,” the software responded. “However, I can provide you with some lyrics in Yiddish about Jerusalem that you could potentially use for a song:

“Jerusalem, di goldene medina,

Di shtot fun unzere vayse vaytn,

Di shtot fun unzere nakhes un freyd,

Di shtot fun unzere tratsn un raytn.”

Actually, I was touched by the app’s honesty. It had burst onto the scene with great fanfare just a few weeks earlier, and already its wings were being clipped. Who says an algorithm can’t feel? And besides, who decided we should prevent it from actualizing its unique abilities?

Giving up on ChatGPT’s artistic abilities, I switched to another subject — albeit a closely related one. We began to chat about the history of Jerusalem. And here I first began to see the app’s significant faults. The very articulate and organized text it presented me contained a claim that I knew to be groundless — if not outright conspiratorial.

I ran the results by a researcher, who confirmed that the app occasionally cranks out nuggets of nonsense. He told me about an entire scientific article the program fabricated, complete with headings and abstracts that never existed. It does this because, as impressive as it seems, it doesn’t “understand” what it is saying. It simply guesses, word by word, what is statistically most likely to come next — and if the result fits those statistical patterns, the app considers it reasonable, even if it is completely false.
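For the curious, here is what “guessing the next word” looks like in miniature: a toy Python sketch with an invented vocabulary and made-up probabilities. (A real model like ChatGPT computes such probabilities with an enormous neural network trained on vast swaths of text; nothing here is OpenAI’s actual code.)

```python
import random

# Toy "language model": for each word, a hand-made probability table of
# possible next words. The numbers are invented for illustration; a real
# model learns billions of such statistical patterns from its training text.
NEXT_WORD_PROBS = {
    "Jerusalem": {"is": 0.5, "was": 0.3, "welcomes": 0.2},
    "is": {"a": 0.6, "the": 0.3, "holy": 0.1},
    "a": {"city": 0.7, "capital": 0.2, "mountain": 0.1},
    "city": {"of": 0.8, "forever": 0.2},
    "of": {"gold": 0.5, "peace": 0.5},
}

def generate(start_word, max_words=5):
    """Repeatedly pick a plausible next word. No understanding involved."""
    words = [start_word]
    for _ in range(max_words):
        table = NEXT_WORD_PROBS.get(words[-1])
        if table is None:  # no known continuation: stop
            break
        choices, weights = zip(*table.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("Jerusalem"))  # e.g., "Jerusalem is a city of gold"
```

Notice that the table records only which words tend to follow which, never whether a sentence is true. That is exactly why fluent output can still be pure fiction.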

Okay, so the unbelievably polite program that I was introduced to turned out to be a pathological liar, and it lies in this authoritative, irritating, know-it-all way. But that is not really the disturbing issue. When I tried to ask the program where it got its fabricated information, it could not direct me to the sources. And putting aside the hype about AI’s current capabilities and the very real fears of its effect on jobs, that aspect is one of the biggest problems with the new technology.

OpenAI, the company that developed ChatGPT — as well as Dall-E 2, a sister app for drawing that works in a similar fashion — began in 2015 in a Silicon Valley hotel room.

The meeting was a pantheon of geeks: Sam Altman, the CEO of OpenAI; Elon Musk; Peter Thiel, a co-founder of PayPal; Reid Hoffman, a founding partner of LinkedIn; and Jessica Livingston, a founding partner of Y Combinator. The group established a new nonprofit to develop artificial intelligence for the benefit of mankind. Together with foundations, they emerged that evening with $1 billion in pledged start-up capital for the project. Since then, OpenAI has become one of the hottest names in the rapidly developing AI industry that is changing the world.

Truth be told, it’s hard to precisely define what “artificial intelligence” is. Wikipedia gives the broadest definition: intelligence demonstrated by machines. But that turns out to be a rather fluid concept. Google search results, Amazon product recommendations, and social media clickbait are all examples of machine intelligence. The more urgent question is what the expectations for this technology are. And they turn out to be pretty surprising.

For example, Deep Blue, the IBM computer that defeated chess grandmaster Garry Kasparov in a 1997 match, was then considered to be a type of AI. But compared to IBM’s later achievements, Deep Blue was more like a really sophisticated Chinese abacus. In 2011, IBM’s question-answering system Watson defeated two human champions on the quiz show Jeopardy!, proving adept even at pop-culture trivia.

After Deep Blue and Watson dispatched chess and TV game shows, the next hurdle was the ancient Chinese strategy game Go. The consensus was that no computer could defeat a master of Go. Then in 2016, AlphaGo, from the AI company DeepMind, did just that, defeating world champion Lee Sedol. And there’s more to come.

But AI’s ability to play games is not the reason we’re hearing so much about it lately. AI is gaining traction today through two overlapping approaches: machine learning and, in particular, its subfield of deep learning. Over the past few years, significant breakthroughs have turned these into the hottest subjects in the high-tech world, promising what seems to be endless economic potential. Some are saying that AI is mankind’s greatest innovation since the harnessing of electricity.

About ten years ago, a researcher named Alex Krizhevsky entered his AlexNet, which employed a new AI deep-learning architecture, in the ImageNet Large Scale Visual Recognition Challenge, a contest in which AI programs compete to identify images. AlexNet set a new standard of precision — much higher than that of the competitors.

Krizhevsky’s success in implementing deep learning ignited fresh enthusiasm for the subject. Krizhevsky was a student of the cognitive psychologist and computer scientist Geoffrey Hinton, who pioneered the field of artificial neural networks. Incidentally, Ilya Sutskever, the head scientist of OpenAI (who started out at the Open University in Israel), did his doctorate under Hinton as well, and collaborated with Krizhevsky on AlexNet.

To understand the excitement over artificial neural networks, one must consider the basic differences between the human mind and a regular computer. Like most readers, you’ll probably struggle to come up with an answer for 153,857 times 423,784. A simple calculator, however, can do it in less than a second. (The answer is 65,202,134,888 — thanks for asking.)

On the other hand, you can easily read this text, even though you don’t necessarily recognize the font. For a regular computer, this is no simple feat. And taking it a step further, you can decipher your little daughter’s handwriting; for a computer, that might be harder than landing on the moon. And that’s nothing compared to the really big challenges, like distinguishing between a dog and a cat.

Human brains are complex enough to analyze differing inputs from every neuron and ultimately conclude that both a pit bull and a Chihuahua are dogs, but that the agile creature that rummages in your garbage dumpster is a cat (or a raccoon, depending on where you live).

Neurons in the brain work together in ways we don’t fully understand. Ultimately, the brain transforms the information its neurons receive through non-linear functions, each of which is built out of other non-linear functions. If we build a computer network that imitates this structure, we might be able to discover — through a very lengthy process of trial and error — the functions that lead to the concept of “dog.”
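To make “functions built from other non-linear functions” concrete, here is a toy sketch in Python of a tiny artificial neural network’s forward pass. The features and weights are invented stand-ins; in a real network, the weights are tuned by the trial-and-error process just described.

```python
import numpy as np

def relu(x):
    """A simple non-linear function: negative inputs become zero."""
    return np.maximum(0, x)

# A tiny two-layer network. Each layer mixes its inputs linearly and then
# applies a non-linearity; stacking layers builds non-linear functions
# out of other non-linear functions.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 10))  # 4 input features -> 10 hidden "neurons"
W2 = rng.normal(size=(10, 1))  # 10 hidden neurons -> 1 output score

def network(features):
    hidden = relu(features @ W1)  # first non-linear transformation
    return (hidden @ W2).item()   # combined into a single "dog-ness" score

# Invented features (say: ear shape, snout length, tail wag, size).
print(network(np.array([0.9, 0.4, 1.0, 0.2])))
```

With random weights the score is meaningless, of course; the entire art lies in adjusting those weights until pictures of dogs reliably score high.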

This is also where the difference between machine learning and deep learning comes in. With classic machine learning, the program needs steady human assistance — a large supply of labeled “correct” answers, repeated until it learns the tens of thousands of criteria for making the right identification. With deep learning, the program succeeds in learning and improving largely on its own, without the constant presence of a human guide.
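Mechanically, that learning is a loop of guess, measure, and nudge. Here is a toy illustration in Python, fitting a single weight by gradient descent to learn the made-up rule y = 3x. Real systems run the same loop over billions of weights.

```python
# Toy gradient descent: learn the rule y = 3 * x from labeled examples.
examples = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]

w = 0.0              # the single "weight" being learned
learning_rate = 0.02

for _ in range(200):                     # many passes over the examples
    for x, y_true in examples:
        y_pred = w * x                   # the program's guess
        error = y_pred - y_true         # how wrong was it?
        w -= learning_rate * error * x   # nudge w to shrink the error

print(round(w, 3))  # converges to roughly 3.0
```

Deep-learning systems differ mainly in scale, and in learning their own intermediate features from raw data, but the guess-and-correct loop is the same.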

Deep-learning systems can find rules even in places where our minds see absolute chaos. Scientists have already used AI to discover new types of antibiotics and new configurations of proteins. These networks can also help with early detection of diseases, by connecting pieces of information gathered early on with symptoms that appear only later. They can identify when a certain machine is going to go on the blink, and then help with what is called “predictive maintenance,” reducing production costs.

OpenAI has now introduced the latest entrant to the AI race, and it has generated a lot of interest. Bill Gates, who has been a voice of gloom and doom on subjects like the Covid pandemic and global warming, is among those who are excited. The company he founded, Microsoft, has invested heavily in OpenAI and hopes to incorporate ChatGPT into its Bing search engine.

But he also sounds a note of caution. “The long-term issue, which is not yet upon us, is what people worry about: the control issue. What if the humans who are controlling it take it in the wrong direction? If humans lose control, what does that mean? I believe those are valid debates.”

Microsoft has purchased a significant share of OpenAI (a shift some say contributed to Elon Musk’s earlier departure from the company), after OpenAI converted some operations from a nonprofit into a “capped-profit” partnership — a business model that sets a ceiling on the return that investors can earn from the company. Now, after announcing that ChatGPT will integrate with Bing, Microsoft has announced that it will incorporate the app into its cloud products as well.

But the concerns that Bill Gates raised already loom large. ChatGPT was launched only a short time after another OpenAI app that works along similar principles opened to the wider public: Dall-E 2. This app is essentially an artificial artist that produces custom pictures to order. Putting aside some of the totally unrealistic results, such as people with too many hands, some of the works are quite impressive, so much so that some artists have become uneasy.

Moreover, just recently, a large law firm filed a class action suit in California against some of Dall-E 2’s competitors: the start-ups Midjourney and Stability AI, and the art platform DeviantArt. The central claim is that these programs were trained on huge scrapes of existing pictures, learning to identify artists and imitate them, and that they do this without consulting or compensating the original creators.

“If these products, and others like them, continue to operate in the way they do today, then in the foreseeable future, we will see them replacing those artists whose stolen works drive these AI products,” the lawyers wrote. They claim that the creators of the AI-based pictures not only violate the artists’ creative rights, but they will also “liquidate art as a future career.”

It’s not artists’ financial woes that concern Silicon Valley, however. Another tech giant is closely following the latest developments with no small amount of worry. Its name is Google, and right after the introduction of ChatGPT, it began a massive effort to recruit some 10,000 AI experts. Google knows very well that if Microsoft integrates ChatGPT into the Bing search engine, it will give a big boost to the foremost — heretofore not very successful — competitor to Google’s own search engine.

ChatGPT was already being mentioned as a threat to Google, as evidenced by its explosive user growth. But OpenAI will have to improve ChatGPT’s services to make that threat credible. First of all, its training data extends only to 2021, while Google is constantly updated. In addition, ChatGPT, unlike Google, does not show searchers its sources — which creates a significant issue of reliability.

As noted, the app’s advanced language skills allow it to present rather persuasive responses, even if it sometimes produces absolute nonsense. As happened to me, some researchers searched for scientific articles on specific subjects, and got counterfeit results fabricated to look like they were published in important periodicals. The presentation was all very professional, aside from the inconvenience of it not being true.

And there are bigger problems. Not long ago, FBI director Christopher Wray remarked that these “new technologies pose new security challenges,” mainly referring to the recent developments in AI.

Indeed, ChatGPT is not meant only to amuse the masses with mediocre haiku, or even to serve as an effective search engine. One of its more interesting functions is the ability to write application code on its own; from now on, you don’t have to be a computer engineer to do these things. The problem is that this also includes writing code for malware.

But what is worrying Wray, who spoke at the World Economic Forum in Davos, is not the expansion of the pool of potential hackers. It’s actually the apps being developed by the biggest rival from the East.

“The attack zone is expanding, and the variety of assault tools has grown — both in quality and in quantity,” he said. “This technology provides many opportunities, but also dangers. When I see the functions of AI, I say to myself, ‘Wow, we can do this.’ And then, ‘Oh, G-d help us, they can do this,’” he concluded.

By “they,” he means China, of course, whose government has the greatest cyberwarfare program in the world. “They don’t have the restrictions and limitations that the West has,” Wray says.

The FBI director was referring mostly to the privacy laws in democratic countries, which place significant restrictions on data mining from users. Strong AI needs, first and foremost, a huge store of data for learning purposes, and China’s negligible sensitivity to privacy rights and copyrights gives it a significant advantage.

These hypothetical questions are not new, and neither is the attention being given to AI, which is about as old as computing itself. Given that we’ve been hearing about AI’s imminent turning point for decades, it’s easy to see why warnings of this type have been, until recently, the domain of computer geeks. (Some will say this is still the case.)

After all, as early as 1965, Herbert Simon announced that “machines will be capable, within twenty years, of doing any work a man can do.” Nearly forty years after his target date, voice assistants like Apple’s Siri and Amazon’s Alexa score IQ results similar to those of toddlers. Facial identification programs are becoming more sophisticated, and yet, chances are that a four-year-old child can identify his mother on a busy street faster than any of them can. But as AI advances in leaps and bounds, these questions are becoming less hypothetical and more realistic.

Of course, it’s not there yet. Sam Altman, the CEO of OpenAI, estimates that these big questions will pop up only when we develop programs with artificial general intelligence (AGI), which would essentially endow computers with humanlike, across-the-board intelligence. Current AI capabilities are nowhere near that threshold.

“But it will happen in about another ten years,” Altman added.

“What worries me is that the regulation is much slower than the technology, and it won’t be able to keep up with the pace of AI development,” Elon Musk cautioned on one occasion.

But those are questions for a science fiction dystopia. Until we reach that point, there are more prosaic problems to deal with in the AI industry. Despite the understandable enthusiasm it is generating, firms entering the field face some pretty daunting challenges.

“If you have a start-up and you are considering basing it on AI, you should have a very good answer for your investors who will ask you why specifically AI,” one entrepreneur in the field told Mishpacha. “AI is not always the right solution.”

For example, deep learning requires extensive resources, including massive investments in computing power and top-dollar wages for developers. AI is not going to replace conventional computing, just as planes didn’t replace cars. But that is not the only problem.

For many organizations, AI is becoming the ultimate authority. Large companies like Amazon and Unilever already use HireVue systems to scan video segments sent in by potential job candidates, to vet the ones most suitable for interviews. These programs rely on countless criteria. Courts in the United States use AI-based models such as COMPAS to estimate the danger posed by criminal defendants.

On the face of it, these are excellent solutions. Professor Daniel Kahneman, who received a Nobel Prize for his groundbreaking work in applying psychological insights to economic theory, particularly in the areas of judgment and decision-making under uncertainty, thinks that “in the race between humans and algorithms, algorithms will prevail. Already today, algorithms are more precise than radiologists.”

The logic behind this is understandable. After all, human judgment is subject to the influences of distractions and background noise. Human judgment also relies on stereotypes and prejudices. Why wouldn’t AI be free of all this?

And that’s just the issue — it isn’t, not completely. The AI will not make bad decisions because it skipped a meal, but it definitely can make mistakes, or absorb negative influences. The COMPAS system is based on historical American court data, and analyses have found that it flags black defendants as future dangers at roughly twice the rate of white defendants. Likewise, HireVue can misinterpret people’s limitations and disabilities in determining their eligibility for hire.

And when the system arrives at a mistaken response, it’s nearly impossible to determine what underlying factors fed into it. Remember how ChatGPT couldn’t say where it got its mistaken information? The learning process of AI is too complex to track, even for the robot itself. So neutralizing unwanted influences on its decision-making process is harder than it sounds. It’s impossible to know if the criteria it uses are really sound; it could be that the moment you are tagged, your chances have already been damaged. For example, the system could filter out a job candidate with a facial muscle disorder because it interprets his failure to smile as a sign of an antisocial personality, even though a human interviewer with common sense understands that this is an incorrect conclusion.

Some will say that it is also impossible to track human thought processes. But that’s exactly the point. Humans examine various factors and draw on life experiences when evaluating facts to arrive at a judgment. Most of these tools are unavailable to an AI like ChatGPT.

AI can only employ statistics and probabilities to figure out what the most successful answer will be. When ChatGPT writes a sentence, it does not understand what it is saying, because it does not have any self-awareness. It certainly does not “create” its prose, whatever the definition of creativity you might apply. It simply guesses very proficiently.

This is the point that touches on the question of human relations. If I trust the judgment of a person whom I admire, I rely on that person’s understanding. When I’m facing a decision and I seek his advice, his wisdom is added into the equation. That makes my decision not only wiser, but it also adds a moral element.

AI can fill some very significant assistance roles, as it already does today. To the extent that it filters out noise and distractions, and automates tedious repetitive tasks to free our minds to accomplish greater things, it will benefit mankind. But total reliance on it would mean giving up the most basic premise in the creation of Man: his judgment, intuition, and morality.

 

Do Those Kichels Taste Artificial?
The Kichels’ creator outsources the frum world’s funniest comic to AI. How does it go?

Bracha Stein

Over the past couple of months, I’ve heard unsettled murmurs rising from circles of writers and artists, their whispers alternately bemoaning the advent of AI and reassuring themselves that no conglomeration of circuitry and switches could ever match their creativity, that no android could ever rival the output of the artist.

Personally? I am 100 percent ready to sell out. You have a robot I can outsource my work to while still collecting a salary? BRING IT.

And so it was with a fair amount of excitement that I accessed the ChatGPT site and attempted to set up my account. The first hurdle isn’t long in coming: a CAPTCHA form where I need to check off a box verifying that “I’m not a robot.” Excuse me, ChatGPT? I need to prove that I’m not a robot? The sheer hypocrisy rankles. Talk about the pot calling the kettle black.

The stress intensifies with the next page, a grid of those photos where you have to select all the bicycles, only nothing looks like a bicycle. (Loyal Kichel readers will recall that I believe this is one of the world’s most stressful life tasks.) I’m pretty sure that AI is just teasing me, honestly — maybe there’s a secret backroom where robots grade my bicycle-selection skills and mock me.

Eventually, though, I am able to establish human superiority. Artificial intelligence is no match for my wit, and after just four attempts and six texts to my sister, I manage to set up an account.

I quickly navigate to the chat page and then — fingers poised over the keyboard — take a moment to contemplate just how my life is about to change.

Bracha: Tell me about the Kichels comic.

A pause, then three dots. Finally:

ChatGPT: I am not familiar with a specific comic called “The Kichels.” Can you please provide more information or context?

Dagger. Through. My. Heart.

Pathetically, I keep trying.

Bracha: It’s a popular comic strip that appears weekly.

ChatGPT: I’m sorry, but I am still not familiar with a comic strip called “Kichels.” It is possible that this comic strip is not well-known or widely distributed….

Twist the blade, why don’t you?

I attempt to feed the obtuse android by suggesting it check out our website, or any of our books (“bestselling,” the pathetic part of me explains), providing links for all of the above.

ChatGPT: As an AI language model, I do not have the ability to actively explore websites or access new information in real time.

Bracha: The strip appears weekly in Mishpacha magazine.

ChatGPT: Ah, I see. “Mishpacha” is a popular Orthodox Jewish magazine, and it is likely that the “Kichels” comic strip appears in its pages. If you are a fan of “Kichels,” I would recommend checking out other Jewish humor and satire publications, as they might offer similar content that you would enjoy.

So not only does ChatGPT refuse to recognize the Kichels, it’s directing users elsewhere? I give up in disgust.

The next day, though, I marshal all my resources of perseverance and determination and also extreme deadline pressure and resolve to try again. I wonder whether my attempts have taught AI anything; presumably all text entered into its fields is added to the great aggregate of information that comprises AI’s brain, right? (This is presumably the sort of knowledge I could confirm with a few minutes of reading, but as a professional humor writer, I am prohibited by law from engaging in activities such as “fact-checking” or “research.”)

But alas, it seems that ChatGPT is conspiring to leave the Kichels out of the great sum of all human knowledge.

ChatGPT: I am not familiar with a specific comic called “The Kichels.” Can you please provide more information or context?

I give up on ChatGPT providing any sort of balm for my ego. But maybe AI can still help me, with just a little bit more coaching. Like, maybe I need to feed it my strip ideas — and then let AI work its magic.

Bracha: You are a camper in a summer sleepaway camp for Orthodox Jewish teens. The camp has been divided into two teams for Color War. Each team has a different theme: One team is Night, the other is Day. You are on team Night, and you need to compose the team song. The song must rhyme, thematically relate to the concept of night, contain Jewish references, and include themes of persecution, genocide, and hope.

 

ChatGPT:

Here’s a possible rhyming song for team Night:

 

We’re the team of endless night,

With courage strong, and hearts so bright.

Though shadows loom, we stand so tall,

Our spirit shines, through the darkest hall.

 

(Chorus)

We’re team Night, hear us sing,

With hope and love, our hearts take wing.

We’ll stand together, we’ll rise above,

With courage and strength, we’ll find our love.

 

(Verse 2)

Like the stars that twinkle high,

We stand for justice, in the midnight sky.

We remember all those who suffered pain,

And keep their memories, alive in our refrain.

 

(Chorus)

We’re team Night, hear us sing,

With hope and love, our hearts take wing.

We’ll stand together, we’ll rise above,

With courage and strength, we’ll find our love.

 

(Bridge)

In the darkness, we find our way,

With the teachings of Torah, to guide us each day.

We honor our ancestors, who suffered so,

And with hope and love, we’ll let their memory grow.

 

(Chorus)

We’re team Night, hear us sing,

With hope and love, our hearts take wing.

We’ll stand together, we’ll rise above,

With courage and strength, we’ll find our love.

 

Well. Maybe ChatGPT can write passable color war songs, but I’m not quite sure it’s ready for an actual Kichels song. What about an entire strip?

ChatGPT:

Sure! Here’s an idea for a joke that could be used in a “The Kichels” comic strip:

Scene: Mrs. Kichel is making breakfast for her family.

Mrs. Kichel: (holding up a carton of eggs) “Who wants scrambled eggs for breakfast?”

Mr. Kichel: (excitedly) “I do! I do!”

Mrs. Kichel: (skeptically) “Really? You never eat eggs.”

Mr. Kichel: (smiling) “Well, today I’m feeling a little chassidish.”

(Mrs. Kichel raises an eyebrow, unsure of what he means.)

Mr. Kichel: (explaining) “You know, egg-zalted.”

(Mrs. Kichel laughs and shakes her head.)

 

This joke plays on the double meaning of the word “chassidish,” which can refer both to the chassidic Jewish movement and to being “excited” or “overjoyed.” By combining these two meanings, the joke creates a pun that is both humorous and relatable to readers who are familiar with Jewish culture and tradition.

Bracha: …

Bracha: Do you realize that puns are the lowest form of humor?

ChatGPT: As an AI language model, I don’t have personal beliefs or opinions. But, humor is subjective and what may be funny to one person may not be to another. Puns can be enjoyed by some people as they play with words and their multiple meanings. On the other hand, others may find them cheesy or unoriginal. It ultimately depends on personal taste.

For the record, ChatGPT is wrong: Humor is not subjective, and puns are a moral offense. But whatever: I have a deadline to make, and it’s become clear that artificial intelligence is not going to help me.

Even if my little experiment has proven a failure, though, at least creative geniuses and literary greats around the world can take comfort and rejoice in their mental superiority, can revel in the confidence that even the most sophisticated cyborg cannot compete with their cerebellum.

As for me? Well, I’ll be right here, waiting for ChatGPT 2.0.

 

(Originally featured in Mishpacha, Issue 949)
