The History of Artificial Intelligence: Complete AI Timeline




MIT Technology Review

This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

Is robotics about to have its own ChatGPT moment?

Henry and Jane Evans are used to awkward houseguests. For more than a decade, the couple, who live in Los Altos Hills, California, have hosted a slew of robots in their home.

In 2002, at age 40, Henry had a massive stroke, which left him with quadriplegia and an inability to speak. The couple have experimented with many advanced robotic prototypes in a bid to give Henry more autonomy, but it's one recent robot, working in tandem with AI models, that has made the biggest difference—helping to brush his hair and opening up his relationship with his granddaughter.

A new generation of scientists and inventors believes that AI, the previously missing ingredient, can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, could finally bring robots out of the factory and into our homes. Read the full story.

—Melissa Heikkilä

Melissa's story is from the next magazine issue of MIT Technology Review, set to go live on April 24, on the theme of Build. If you don't subscribe already, sign up now to get a copy when it lands.

The inadvertent geoengineering experiment that the world is now shutting off

The news: When we talk about climate change, the focus is usually on the role that greenhouse-gas emissions play in driving up global temperatures, and rightly so. But another important, less-known phenomenon is also heating up the planet: reductions in other types of pollution.

In a nutshell: In particular, the world's power plants, factories, and ships are pumping much less sulfur dioxide into the air, thanks to an increasingly strict set of global pollution regulations. Sulfur dioxide creates aerosol particles in the atmosphere that can directly reflect sunlight back into space or act as the "condensation nuclei" around which cloud droplets form. More or thicker clouds, in turn, also cast away more sunlight. So when we clean up pollution, we also ease this cooling effect.  

Why it matters: Cutting air pollution has unequivocally saved lives. But as the world rapidly warms, it's critical to understand the impact of pollution-fighting regulations on the global thermostat as well. Read the full story.

—James Temple

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Election workers are worried about AI
Generative models could make it easier for election deniers to spam offices. (Wired $)
+ Eric Schmidt has a 6-point plan for fighting election misinformation. (MIT Technology Review)

2 Apple has warned users in 92 countries of mercenary spyware attacks
It said it had high confidence that the targets were at genuine risk. (TechCrunch)

3 The US is in desperate need of chip engineers
Without them, it can't meet its lofty semiconductor production goals. (WSJ $)
+ Taiwanese chipmakers are looking to expand overseas. (FT $)
+ How ASML took over the chipmaking chessboard. (MIT Technology Review)

4 Meet the chatbot tutors
Tens of thousands of gig economy workers are training tomorrow's models. (NYT $)
+ Adobe is paying photographers $120 per video to train its generator. (Bloomberg $)
+ The next wave of AI coding tools is emerging. (IEEE Spectrum)
+ The people paid to train AI are outsourcing their work… to AI. (MIT Technology Review)

5 The Middle East is rushing to build AI infrastructure
Both Saudi Arabia and the UAE see sprawling data centers as key to becoming the region's AI superpower. (Bloomberg $)

6 Political content creators and activists are lobbying Meta
They claim the company's decision to limit the reach of 'political' content is threatening their livelihoods. (WP $)

7 The European Space Agency is planning an artificial solar eclipse
The mission, due to launch later this year, should provide essential insight into the sun's atmosphere. (IEEE Spectrum)

8 How AI is helping to recover Ireland's marginalized voices
Starting with the dung queen of Dublin. (The Guardian)
+ How AI is helping historians better understand our past. (MIT Technology Review)

9 Video game history is vanishing before our eyes
As consoles fall out of use, their games are consigned to history too. (FT $)

10 Dating apps are struggling to make looking for love fun
Charging users seems counterintuitive, then. (The Atlantic $)
+ Here's how the net's newest matchmakers help you find love. (MIT Technology Review)

Quote of the day

"We're women sharing cool things with each other directly. You want it to go back to men running QVC?"

—Micah Enriquez, a successful 'cleanfluencer' who shares cleaning tips and routines with her followers, tells New York Magazine that criticism leveled at such content creators has a sexist element.

The big story

Is it possible to really understand someone else's mind?

November 2023

Technically speaking, neuroscientists have been able to read your mind for decades. It's not easy, mind you. First, you must lie motionless within a hulking fMRI scanner, perhaps for hours, while you watch films or listen to audiobooks.

None of this, of course, can be done without your consent; for the foreseeable future, your thoughts will remain your own, if you so choose. But if you do elect to endure claustrophobic hours in the scanner, the software will learn to generate a bespoke reconstruction of what you were seeing or listening to, just by analyzing how blood moves through your brain.

More recently, researchers have deployed generative AI tools, like Stable Diffusion and GPT, to create far more realistic, if not entirely accurate, reconstructions of films and podcasts based on neural activity.
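At its core, the decoding pipeline these studies describe is a learned mapping from brain activity to a representation of the stimulus. Below is a minimal, hypothetical sketch of that idea using synthetic data and a simple ridge-regression decoder; the actual studies use far larger datasets and condition generative models such as Stable Diffusion on the predicted representation, but the broad structure is similar: fit a voxels-to-embedding map, then reconstruct or retrieve the stimulus.

```python
# A minimal, hypothetical sketch of fMRI decoding: learn a linear map from voxel
# activity to a stimulus embedding, then identify which candidate stimulus the
# brain response best matches. Real studies condition a generative model on the
# predicted embedding instead of doing nearest-neighbor retrieval.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, embed_dim = 200, 5000, 64

# Synthetic stand-ins: stimulus embeddings and noisy voxel responses to them.
stim_embeddings = rng.normal(size=(n_trials, embed_dim))
true_weights = rng.normal(size=(embed_dim, n_voxels))
voxel_responses = stim_embeddings @ true_weights + rng.normal(scale=5.0, size=(n_trials, n_voxels))

# Stage 1: fit a regularized linear decoder from voxels to embeddings.
decoder = Ridge(alpha=10.0).fit(voxel_responses[:150], stim_embeddings[:150])

# Stage 2: for a held-out brain response, predict its embedding and pick the
# closest candidate stimulus.
pred = decoder.predict(voxel_responses[150:151])
candidates = stim_embeddings[150:]
best = int(np.argmax(candidates @ pred.T))
print("decoded stimulus index:", best)  # index 0 is the correct held-out stimulus
```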

But as exciting as the idea of extracting a movie from someone's brain activity may be, it is a highly limited form of "mind reading." To really experience the world through your eyes, scientists would have to be able to infer not just what film you are watching but also what you think about it, and how it makes you feel. And these interior thoughts and feelings are far more difficult to access. Read the full story.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet 'em at me.)

+ Intrepid archaeologists have uncovered beautiful new frescos in the ruins of Pompeii.
+ This doughy jellyfish sure looks tasty.
+ A short rumination on literary muses, from Zelda Fitzgerald to Neal Cassady.
+ Grammar rules are made to be broken.


European Car Manufacturer Will Pilot Sanctuary AI's Humanoid Robot

Sanctuary AI announced that it will be delivering its humanoid robot to a Magna manufacturing facility. Based in Canada, with auto manufacturing facilities in Austria, Magna manufactures and assembles cars for a number of Europe's top automakers, including Mercedes, Jaguar and BMW. As is often the nature of these deals, the parties have not disclosed how many of Sanctuary AI's robots will be deployed.

The news follows similar deals announced by Figure and Apptronik, which are piloting their own humanoid systems with BMW and Mercedes, respectively. Agility also announced a deal with Ford at CES in January 2020, though that agreement found the American carmaker exploring the use of Digit units for last-mile deliveries. Agility has since put that functionality on the back burner, focusing on warehouse deployments through partners like Amazon.

For its part, Magna invested in Sanctuary AI back in 2021 -- right around the time Elon Musk announced plans to build a humanoid robot to work in Tesla factories. The company would later dub the system "Optimus." Vancouver-based Sanctuary unveiled its own system, Phoenix, back in May of last year. The system stands 5'7" (a pretty standard height for these machines) and weighs 155 pounds.

Phoenix isn't Sanctuary's first humanoid (an early model had been deployed at a Canadian retailer), but it is the first to walk on legs -- this is in spite of the fact that most available videos only highlight the system's torso. The company has also focused some of its efforts on creating dexterous hands -- an important addition if the system is expected to expand functionality beyond moving around totes.

Sanctuary calls the pilot, "a multi-disciplinary assessment of improving cost and scalability of robots using Magna's automotive product portfolio, engineering and manufacturing capabilities; and a strategic equity investment by Magna."

As ever, these agreements should be taken as what they are: pilots. They're not exactly validation of the form factor and systems -- that comes later, if Magna gets what it's looking for with the deal. That comes down to three big letters: ROI.

The company isn't disclosing specifics with regard to the number of robots, the length of the pilot or even the specific factory where they will be deployed.



Watch Two Tiny, AI-powered Robots Play Soccer

Google DeepMind is now able to train tiny, off-the-shelf robots to square off on the soccer field. In a new paper published today in Science Robotics, researchers detail their recent efforts to adapt a machine learning subset known as deep reinforcement learning (deep RL) to teach bipedal bots a simplified version of the sport. The team notes that while similar experiments created extremely agile quadrupedal robots (see: Boston Dynamics Spot) in the past, much less work has been conducted for two-legged, humanoid machines. But new footage of the bots dribbling, defending, and shooting goals shows off just how good a coach deep reinforcement learning could be for humanoid machines.

While its systems are ultimately meant for massive tasks like climate forecasting and materials engineering, Google DeepMind can also absolutely obliterate human competitors in games like chess, Go, and even StarCraft II. But all those strategic maneuvers don't require complex physical movement and coordination. So while DeepMind has long studied soccer in simulation, that work hasn't translated to a physical playing field—but that's quickly changing.


To make the miniature Messis, engineers first developed and trained two deep RL skill sets in computer simulations—the ability to get up from the ground and the ability to score goals against an untrained opponent. From there, they virtually trained their system to play a full one-on-one soccer match by combining these skill sets, then randomly pairing the agents against partially trained copies of themselves.

[Related: Google DeepMind's AI forecasting is outperforming the 'gold standard' model.]

"Thus, in the second stage, the agent learned to combine previously learned skills, refine them to the full soccer task, and predict and anticipate the opponent's behavior," researchers wrote in their paper introduction, later noting that, "During play, the agents transitioned between all of these behaviors fluidly."


Thanks to the deep RL framework, DeepMind-powered agents soon learned to improve on existing abilities, including how to kick and shoot the soccer ball, block shots, and even defend their own goal against an attacking opponent by using their bodies as a shield.

During a series of one-on-one matches between robots trained with deep RL, the two mechanical athletes walked, turned, kicked, and righted themselves faster than if engineers had simply supplied them with a scripted baseline of skills. These weren't minuscule improvements, either—compared with a non-adaptable scripted baseline, the robots walked 181 percent faster, turned 302 percent faster, kicked 34 percent faster, and took 63 percent less time to get up after falling. What's more, the deep RL-trained robots also showed new, emergent behaviors like pivoting on their feet and spinning. Such actions would be extremely challenging to pre-script otherwise.
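For readers parsing those figures, here is one small, assumed interpretation of how "X percent faster" translates into multiples of the scripted baseline; the baseline values below are hypothetical and simply normalized to 1.0.

```python
# Assumed reading of the reported gains: "181 percent faster" is taken here to
# mean learned speed = baseline * (1 + 1.81). Baseline values are hypothetical.
baseline = {"walk": 1.0, "turn": 1.0, "kick": 1.0}
gain_pct = {"walk": 181, "turn": 302, "kick": 34}
learned = {k: baseline[k] * (1 + gain_pct[k] / 100) for k in baseline}
print(learned)  # roughly {'walk': 2.81, 'turn': 4.02, 'kick': 1.34} x the baseline
```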

Screenshots of robots playing soccer. Credit: Google DeepMind

There's still some work to do before DeepMind-powered robots make it to the RoboCup. For these initial tests, researchers completely relied on simulation-based deep RL training before transferring that information to physical robots. In the future, engineers want to combine both virtual and real-time reinforcement training for their bots. They also hope to scale up their robots, but that will require much more experimentation and fine-tuning.

The team believes that applying similar deep RL approaches to soccer, as well as many other tasks, could further improve bipedal robots' movements and real-time adaptation capabilities. Still, it's unlikely you'll need to worry about DeepMind humanoid robots on full-sized soccer fields—or in the labor market—just yet. At the same time, given their continuous improvements, it's probably not a bad idea to get ready to blow the whistle on them.





