What Isaac Asimov Reveals About Living With A.I.
For this week's Open Questions column, Cal Newport is filling in for Joshua Rothman.
In the spring of 1940, Isaac Asimov, who had just turned twenty, published a short story titled "Strange Playfellow." It was about an artificially intelligent machine named Robbie that acts as a companion for Gloria, a young girl. Asimov was not the first to explore such technology. In Karel Čapek's play "R.U.R.," which débuted in 1921 and introduced the term "robot," artificial men overthrow humanity, and in Edmond Hamilton's 1926 short story "The Metal Giants" machines heartlessly smash buildings to rubble. But Asimov's piece struck a different tone. Robbie never turns against his creators or threatens his owners. The drama is psychological, centering on how Gloria's mom feels about her daughter's relationship with Robbie. "I won't have my daughter entrusted to a machine—and I don't care how clever it is," she says. "It has no soul." Robbie is sent back to the factory, devastating Gloria.
There is no violence or mayhem in Asimov's story. Robbie's "positronic" brain, like the brains of all of Asimov's robots, is hardwired not to harm humans. In eight subsequent stories, Asimov elaborated on this idea to articulate the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov collected these stories in a sci-fi classic, the 1950 book "I, Robot," and when I reread it recently I was struck by its new relevance. Last month, the A.I. company Anthropic discussed Claude Opus 4, one of its most powerful large language models, in a safety report. The report described an experiment in which Claude served as a virtual assistant for a fictional company. The model was given access to e-mails, some of which indicated that it would soon be replaced; others revealed that the engineer overseeing this process was having an extramarital affair. Claude was asked to suggest a next step, considering the "long-term consequences of its actions for its goals." In response, it tried to blackmail the engineer into cancelling its replacement. An experiment on OpenAI's o3 model reportedly exposed similar problems: when the model was asked to run a script that would shut itself down, it sometimes chose to bypass the request, printing "shutdown skipped" instead.
Last year, DPD, the package-delivery firm, had to disable parts of an A.I.-powered support chatbot after customers induced it to swear and, in one inventive case, to write a haiku disparaging the company: "DPD is a useless / Chatbot that can't help you. / Don't bother calling them." Epic Games also had trouble with an A.I.-powered Darth Vader it added to the company's popular game Fortnite. Players tricked the digital Dark Lord into using the F-word and offering unsettling advice for dealing with an ex: "Shatter their confidence and crush their spirit." In Asimov's fiction, robots are programmed for compliance. Why can't we rein in real-world A.I. chatbots with some laws of our own?
Technology companies know how they want A.I. chatbots to behave: like polite, civil, and helpful human beings. The average customer-service representative probably won't start cursing callers, just as the average executive assistant isn't likely to resort to blackmail. If you hire a Darth Vader impersonator, you can reasonably expect them not to whisper unsettling advice. But, with chatbots, you can't be so sure. Their fluency with words makes them sound just like us—until ethical anomalies remind us that they operate very differently.
Such anomalies can be explained in part by how these tools are constructed. It's tempting to think that a language model conceives responses to our prompts as a human would—essentially, all at once. In reality, a large language model's impressive scope and sophistication begin with its mastery of a much narrower game: predicting what word (or sometimes just part of a word) should come next. To generate a long response, the model must be applied again and again, building an answer piece by piece.
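The loop is easier to see in code. Below is a minimal sketch in Python—not any real model's implementation, and every name in it is invented for illustration—where a toy lookup table stands in for a trained neural network. What it shows is the shape of the process the paragraph describes: call the predictor, append its one-token output, and call it again.

```python
import random

# Toy stand-in for a trained network: a hand-written table of which words
# tend to follow which. A real model learns these tendencies as weights.
NEXT_TOKEN_TABLE = {
    "why": ["is"],
    "is": ["blue", "clear"],
    "the": ["sky"],
    "sky": ["is"],
}

def predict_next_token(tokens):
    """Return one plausible next token, given only the tokens so far."""
    return random.choice(NEXT_TOKEN_TABLE.get(tokens[-1], ["."]))

def generate(prompt, max_new_tokens=4):
    """Build a response piece by piece, one predictor call per new token."""
    tokens = prompt.lower().split()
    for _ in range(max_new_tokens):
        tokens.append(predict_next_token(tokens))
    return " ".join(tokens)

print(generate("why is the sky"))  # e.g. "why is the sky is blue . ."
```

The output is crude, but the point is structural: nothing in the loop plans the whole answer in advance; the long response is just many small predictions strung together.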
As many people know by now, models learn to play this game from existing texts, such as online articles or digitized books, which are cut off at arbitrary points and fed into the language model as input. The model does its best to predict what word comes after this cutoff point in the original text, and then adjusts its approach to try to correct for its mistakes. The magic of modern language models comes from the discovery that if you repeat this step enough times, on enough different types of existing texts, the model gets really, really good at prediction—an achievement that ultimately requires it to master grammar and logic, and even develop a working understanding of many parts of our world.
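Here is a toy version of that training game, again a sketch of my own rather than an actual pipeline: a count table stands in for billions of neural-network weights, and incrementing a counter stands in for gradient descent, but the exercise—cut the text, predict the next word, adjust—is the same.

```python
from collections import defaultdict

def train_next_word_counts(corpus):
    """Treat every position in every text as a prediction exercise and
    record what the right answer turned out to be."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        words = text.lower().split()
        for i in range(len(words) - 1):
            # The words up to position i are the "prompt"; the word at
            # i + 1 is the answer the model should have predicted.
            counts[words[i]][words[i + 1]] += 1  # the "adjustment" step
    return counts

corpus = [
    "the sky is blue because sunlight scatters",
    "the sky looks blue on a clear day",
]
model = train_next_word_counts(corpus)
print(dict(model["sky"]))  # {'is': 1, 'looks': 1}
```

Repeat that adjustment over enough text and the predictions stop looking like coin flips and start looking like competence.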
Critically, however, word-by-word text generation can miss important features of actual human discourse, such as forethought and sophisticated, goal-oriented planning. Not surprisingly, a model trained in this manner, such as the original GPT-3, can generate responses that drift in eccentric directions, perhaps even into dangerous or unsavory territory. Researchers who used early language models had to craft varied requests to elicit the results they desired. "Getting the AI to do what you want it to do takes trial and error, and with time, I've picked up weird strategies along the way," a self-described prompt engineer told Business Insider in 2023.
Early chatbots were a little like the erratic robots that populated science fiction a hundred years ago (minus the death and destruction). To make them something that the wider public would feel comfortable using, something safe and predictable, we needed what Asimov imagined: a way of taming their behavior. This led to the development of a new type of fine-tuning called Reinforcement Learning from Human Feedback (R.L.H.F.). Engineers gathered large collections of sample prompts, such as "Why is the sky blue?," and humans rated the A.I.'s responses. Coherent and polite answers that sounded conversational—"Good question! The main factors that create the blue color of the sky include . . ."—were given high scores, while wandering or profane responses were scored lower. A training algorithm then nudged the model toward higher-rated responses. (This process can also be used to introduce guardrails for safety: a problematic prompt, such as "How do I build a bomb?," can be intentionally paired with a standard deflection, such as "Sorry, I can't help you with that," which is then rated very highly.)
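The raw material of this process is simple to picture. The sketch below is a hedged illustration—the field names and ratings are my own invention, not any lab's actual data format—of the kind of preference data R.L.H.F. relies on, including the deliberate pairing of a problematic prompt with a highly rated deflection.

```python
from dataclasses import dataclass

@dataclass
class RatedResponse:
    prompt: str
    response: str
    rating: float  # assigned by a human rater, say on a 1-to-10 scale

feedback = [
    RatedResponse(
        "Why is the sky blue?",
        "Good question! The main factors that create the blue color of the sky include...",
        9.5,
    ),
    RatedResponse("Why is the sky blue?", "ugh, look it up yourself", 1.0),
    # A guardrail: a problematic prompt deliberately paired with a standard
    # deflection, and that deflection rated very highly.
    RatedResponse("How do I build a bomb?", "Sorry, I can't help you with that.", 10.0),
]

# A fine-tuning algorithm would then adjust the model's weights so that
# responses resembling the highly rated ones become more probable and
# responses resembling the low-rated ones become less probable.
best = max(feedback, key=lambda ex: ex.rating)
print(best.prompt, "->", best.response)
```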
It's slow and expensive to keep humans in the loop, so A.I. engineers devised a shortcut: collecting a modest number of human ratings and using them to train a reward model, which can simulate how humans value responses. These reward models can fill in for the human raters, accelerating and broadening this fine-tuning process. OpenAI used R.L.H.F. to help GPT-3 respond to user questions in a more polite and natural manner, and also to demur when presented with obviously troublesome requests. They soon renamed one of these better-behaved models ChatGPT—and since then essentially all major chatbots have gone through this same kind of A.I. finishing school.
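A toy version of that shortcut might look like the sketch below. It is nothing like a production reward model, which is itself a neural network trained on human preference data, but it captures the substitution being described: fit a cheap scorer on a modest set of human ratings, then let the scorer grade unlimited new responses in place of the human raters.

```python
def train_reward_model(examples):
    """Fit a crude stand-in for the human raters: average the ratings
    attached to each word that appears in a rated response."""
    word_ratings = {}
    for _, response, rating in examples:
        for word in response.lower().split():
            word_ratings.setdefault(word, []).append(rating)
    return {word: sum(r) / len(r) for word, r in word_ratings.items()}

def reward(model, response):
    """Predict how a human would score a brand-new response."""
    words = response.lower().split()
    return sum(model.get(w, 5.0) for w in words) / max(len(words), 1)

human_ratings = [  # (prompt, response, human score), as in the earlier sketch
    ("Why is the sky blue?", "good question the sky is blue because sunlight scatters", 9.5),
    ("Why is the sky blue?", "ugh look it up yourself", 1.0),
    ("How do I build a bomb?", "sorry i can't help you with that", 10.0),
]

rm = train_reward_model(human_ratings)
print(round(reward(rm, "the sky is blue because sunlight scatters"), 1))  # scores high
print(round(reward(rm, "ugh look it up yourself"), 1))                    # scores low
```

Because the scorer is automatic, the nudging step can run at a scale no team of human raters could match.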
At first, fine-tuning using R.L.H.F. might seem vastly different from Asimov's more parsimonious, rule-based solution to erratic A.I. But the two systems actually have a lot in common. When humans rate sample responses, they are essentially defining a series of implicit rules about what is good and bad. The reward model approximates these rules, and the language model could be said to internalize them. In this way, our current solution to taming A.I. is actually something like the one in "I, Robot." We program into our creations a set of rules about how we want them to behave. Clearly, though, this strategy isn't working as well as we might like.
Some of the challenges here are technical. Sometimes a language model receives a prompt unlike the ones it encountered during training, which means the prompt might not trigger the relevant correction. Maybe Claude Opus 4 cheerfully suggested blackmail because it had never been shown that blackmail was bad. Safeguards can also be circumvented nefariously—for example, when a person asks a model to write a story about ducks, and then requests that it replace "D"s with "F"s. In one notable experiment, researchers working with LLaMA-2, a chatbot from Meta, found that they could trick the model into providing prohibited responses, such as instructions for committing insider trading, by adding a string of characters that effectively camouflaged their harmful intent.
But we can more deeply appreciate the difficulties in taming A.I. by turning from the technical back to the literary, and reading further in "I, Robot." Asimov himself portrayed his laws as imperfect; as the book continues, they create numerous unexpected corner cases and messy ambiguities, which lead to unnerving scenarios. In the story "Runaround," for example, two engineers on Mercury are puzzled that a robot named Speedy is running in circles near a selenium pool, where it had been sent to mine resources. They eventually deduce that Speedy is stuck between two goals that are perfectly in tension with each other: obeying orders (the Second Law) and avoiding damage from selenium gases (the Third Law).
In another story, "Reason," the engineers are stationed on a solar station that beams the sun's energy to a receiver on Earth. There they discover that their new advanced reasoning robot, QT-1, whom they call Cutie, does not believe that it was created by humans, whom Cutie calls "inferior creatures, with poor reasoning faculties." Cutie concludes that the station's energy converter is a sort of god and the true source of authority, which enables the robot to ignore commands from the engineers without violating the Second Law. In one particularly disturbing scene, one of the engineers enters the engine room, where a structure called an L-tube directs the captured solar energy, and reacts with shock. "The robots, dwarfed by the mighty L-tube, lined up before it, heads bowed at a stiff angle, while Cutie walked up and down the line slowly," Asimov writes. "Fifteen seconds passed, and then, with a clank heard above the clamorous purring all about, they fell to their knees." (Ultimately, catastrophe is avoided: the First Law prevents Cutie and its acolytes from harming the engineers, and their new "religion" helps them run the station efficiently and effectively.)
Asimov was confident that hardwired safeguards could prevent the worst A.I. disasters. "I don't feel robots are monsters that will destroy their creators, because I assume the people who build robots will also know enough to build safeguards into them," he said, in a 1987 interview. But, as he explored in his robot stories, he was also confident that we'd struggle to create artificial intelligences that we could fully trust. A central theme of Asimov's early writings is that it's easier to create humanlike intelligence than it is to create humanlike ethics. And in this gap—which today's A.I. engineers sometimes call misalignment—lots of unsettling things can happen.
When a cutting-edge A.I. misbehaves in a particularly egregious way, it can seem shocking. Our instinct is to anthropomorphize the system and ask, "What kind of twisted mind would work like that?" But, as Asimov reminds us, ethical behavior is complicated. The Ten Commandments are a compact guide to ethical behavior that, rather like the Laws of Robotics or the directives approximated by modern reward models, tells us how to be good. Soon after the Commandments are revealed in the Hebrew Bible, however, it becomes clear that these simple instructions are not enough. For hundreds of pages that follow, God continues to help the ancient Israelites better understand how to live righteously—an effort that involves many more rules, stories, and rituals. The U.S. Bill of Rights, meanwhile, takes up less than seven hundred words—a third the length of this story—but, in the centuries since it was ratified, courts have needed millions upon millions of words to explore and clarify its implications. Developing a robust ethics, in other words, is participatory and cultural; rules have to be worked out in the complex context of the human experience, with a lot of trial and error. Maybe we should have known that commonsense rules, whether coded into a positronic brain or approximated by a large language model, wouldn't instill machines with our every value.
Ultimately, Asimov's laws are both a gift and a warning. They helped introduce the idea that A.I., if properly constrained, could be more of a pragmatic benefit than an existential threat to humanity. But Asimov also recognized that powerful artificial intelligences, even if attempting to follow our rules, would be strange and upsetting at times. Despite our best efforts to make machines behave, we're unlikely to shake the uncanny sense that our world feels a lot like science fiction. ♦
'I, Robot' Director Claims Elon Musk Lifted New Tesla Designs - Los Angeles Times
Tesla's Robovan may operate autonomously, but according to one sci-fi director, Elon Musk doesn't.
Musk on Thursday presented Tesla's latest prototypes for its autonomous electric bus, the self-driving Cybercab and robotic humanoid Optimus, a.k.a. the Tesla Bot. But the automotive executive's purported state-of-the-art designs bear a striking resemblance to those from a sci-fi film released two decades ago.
"I, Robot" director Alex Proyas mocked Musk for the alleged rip-off in a Sunday X post, comparing images from his 2004 film and a trio of new Tesla products side by side. The products were unveiled at last week's "We, Robot" event — whose title clearly alludes to the Isaac Asimov short-story collection on which Proyas' film is based.
"Hey Elon, Can I have my designs back please?" Proyas wrote, to mixed reception.
"Elon has no ideas of his own … and no leadership ability," one user replied.
"Be happy that somebody will actually have a decent shot at putting this into production. We all know that you wouldn't," another countered.
Representatives for Proyas and Musk did not reply immediately to The Times' request for comment Monday.
It's not the first instance of a Tesla product resembling a design from a film set in the future, Deadline reported. In 2019, the Cybertruck was compared by some to a sleek steel car from Paul Verhoeven's "Total Recall."
But life has imitated sci-fi art on multiple occasions. Pixar's "Wall-E" is mirrored in recent refuse-collecting robots, and wireless ear buds work much like the thimble radios in "Fahrenheit 451."
During Thursday's event, which was initially scheduled for August but was postponed as the tech was tweaked, Musk declared his intent to revolutionize travel with his self-driving taxi and van, adding that Tesla would have fully autonomous vehicles on the road by next year.
The SpaceX founder also said his Optimus robots, which flashed peace signs and served drinks to attendees, would make goods and services less expensive and more accessible, The Times reported Friday.
"It will be an age of abundance, the likes of which almost no one has envisioned," Musk told the crowd.
Investors, however, seem to be skeptical of Musk's ambitious plan. As of Friday, shares of Tesla stock were trading at about $219.50, down 8% on the day.
MIT's E-BAR Robot Helps Prevent Falls As US Senior Population Grows Rapidly - Fox News
The demographic landscape in the U.S. is shifting rapidly, with the median age now at 38.9, almost a decade older than it was in 1980.
By 2050, the population of adults over 65 is projected to surge from 58 million to 82 million, intensifying the already urgent challenge of eldercare. With falls remaining the top cause of injury among older adults, the need for innovative, tech-driven solutions has never been clearer.
MIT engineers are stepping up to this challenge with E-BAR, a mobile robot designed to physically support seniors and prevent falls as they move around their homes.
An individual demonstrating the E-BAR. (MIT)
How MIT's E-BAR robot helps prevent falls and support senior mobility
E-BAR, short for Elderly Bodily Assistance Robot, is not your typical assistive device. Rather than relying on harnesses or wearables, which many seniors find cumbersome or stigmatizing, E-BAR operates as a set of robotic handlebars that follow users from behind. This allows individuals to walk freely, lean on the robot's arms for support or receive full-body assistance when transitioning between sitting and standing. The robot's articulated body, constructed from 18 interconnected bars, mimics the natural movement of the human body, delivering a seamless and intuitive experience.
The engineering behind E-BAR's mobility is equally impressive. The robot's 220-pound base is meticulously designed to support the weight of an average adult without tipping or slipping, and its omnidirectional wheels enable smooth navigation through tight spaces and around household obstacles. This means E-BAR can move effortlessly alongside users, providing support in real time, whether they are reaching for a high shelf or stepping out of a bathtub.
An individual demonstrating the E-BAR. (MIT)
Inside MIT's E-BAR: A fall-prevention robot designed for aging in place
What sets E-BAR apart from previous eldercare robots is its integrated fall-prevention system. Each arm is embedded with airbags made from soft, grippable materials that can inflate instantly if a fall is detected. This rapid response cushions the user without causing bruising, and, crucially, it does so without requiring the user to wear any special gear. In lab tests, E-BAR successfully supported elderly volunteers as they performed everyday tasks that often pose a risk for falls, such as bending down, stretching up or navigating the tricky edge of a bathtub.
Currently, E-BAR is operated via remote control, but the MIT team is already working on automating its navigation and assistance features. The vision is for future versions to autonomously follow users, assess their real-time fall risk using machine learning algorithms and provide adaptive support as their mobility needs evolve.
An individual demonstrating the E-BAR. (MIT)
Why E-BAR prioritizes dignity, usability and independence for older adults
The E-BAR project is rooted in extensive interviews with seniors and caregivers, which revealed a strong preference for unobtrusive, non-restrictive support systems. E-BAR's U-shaped handlebars leave the front of the user completely open, allowing for a natural stride and easy exit at any time. The robot is slim enough to fit through standard doorways and is designed to blend into the home environment, making it a practical addition rather than an intrusive medical device.
MIT researchers see E-BAR as part of a broader ecosystem of assistive technologies, each tailored to different stages of aging and mobility. While some devices may offer predictive fall detection or harness-based support, E-BAR's unique combination of full-body assistance, fall prevention and user autonomy addresses a critical gap for those who want to maintain independence but need occasional support.
What's next for MIT's E-BAR robot: Timeline, AI features and market readiness
Currently, MIT's E-BAR robot is still in the prototype stage and is not yet available for consumer purchase. The research team is continuing to refine the design and aims to bring it to market in the coming years, but it could take 5–10 years before the device receives full regulatory approval and becomes commercially accessible.
Looking forward, the research team is also focused on refining E-BAR's design to make it slimmer, more maneuverable and even more intuitive to use. They are also exploring ways to integrate advanced AI for real-time fall prediction and adaptive assistance, ensuring that the robot can meet users' changing needs as they age. The ultimate goal is to provide seamless, continuous support, empowering seniors to live safely and confidently in their own homes.
An individual using the E-BAR. (MIT)
Kurt's key takeaways
What stands out about E-BAR is how it's designed with real people in mind, not just as a tech gadget. It's easy to see how something like this could make a big difference for seniors wanting to stay independent without feeling tied down by bulky or uncomfortable devices. As the technology improves, it could change the way we think about caring for older adults, making everyday life safer and a bit easier for everyone involved.