Best AI tools of 2024




Training Programme On Artificial Intelligence And Robotics Organised

Under the esteemed leadership of Prof. (Dr) Jaspal Singh Sandhu, Vice Chancellor, an "Artificial Intelligence and Robotics" laboratory has been established in the Department of Mechanical Engineering, funded by a RUSA grant from the Govt. of India, New Delhi.

A five-day training programme on "Factory Automation and Robotics Programming using FluidSim and CIROS Software from the Germany-based company FESTO" was conducted at the Artificial Intelligence and Robotics laboratory with the support of Prof. Dr P.K. Pati, coordinator, Golden Jubilee Centre for Entrepreneurship and Innovation, GNDU.

During the five-day programme, students received hands-on training in AI-based factory automation and robotic programming techniques.

Dr Pati emphasised that this kind of training can bridge the gap between academia and the latest technologies used in industry.

Dr Harminder Singh, Head, Department of Mechanical Engineering, welcomed the resource person, Er. Chandrashekhar V. Varerkar, Manager Didactic at Festo India Pvt. Ltd. Varerkar completed Mechatronics Trainer Level 1 and Level 2 certification at the Siemens Technik Academy, Berlin, Germany. He has 15 years of core manufacturing-industry experience at the TATA Steel Wire Division in the Electrical Maintenance Department, and 9 years of core training experience at an Indo-German JV company, Christiani Sharpline Technical Training Pvt. Ltd, as AGM (Training), with a specialisation in Mechatronics.


First Trust Nasdaq Artificial Intelligence And Robotics ETF (NASDAQ:ROBT), Quotes And News Summary

First Trust Nasdaq Artificial Intelligence and Robotics ETF (NASDAQ: ROBT) stock price, news, charts, stock research, profile.

Open: $42.930
High: $43.269
Low: $42.930
52 Wk High: $47.710
52 Wk Low: $36.380
Previous Close: -
Volume: 56.705K
VWAP: $42.930
Bid: $42.500 (size 16)
Ask: $47.700 (size 100)
Exchange: XNAS
Beta: -
P/E Ratio: -
P/B Ratio: -
Market Cap: -
Shares Out: -
Dividend: -
Yield: -
Div. Freq: Quarterly
Ex-Div Date: Dec 22, 2023

Recent News


Tesla Surprisingly Becomes Luminar's Largest Lidar Customer, Contradicting Musk's Past Criticism

Tesla CEO Elon Musk has long criticized lidar technology, yet Tesla is now Luminar's largest customer, contributing 10% of Luminar's Q1 revenue.

Luminar Technologies Stock Is Trading Lower Monday - What's Going On?

Luminar Technologies, Inc. (NASDAQ: LAZR) is reducing its workforce by 20% and relying on a contract manufacturer as it restructures toward an asset-light model.

Cloud Computing Firm Appian Stock Nosedives After Q1 Print, What's Going On?

Appian Corp reported 11% year-over-year revenue growth in Q1 2024, slightly beating analyst consensus, but its EPS loss of $(0.24) missed expectations. The stock price dropped 9%.


Reinforcement Learning AI Might Bring Humanoid Robots To The Real World

ChatGPT and other AI tools are upending our digital lives, but our AI interactions are about to get physical. Humanoid robots trained with a particular type of AI to sense and react to their world could lend a hand in factories, space stations, nursing homes and beyond. Two recent papers in Science Robotics highlight how that type of AI — called reinforcement learning — could make such robots a reality.

"We've seen really wonderful progress in AI in the digital world with tools like GPT," says Ilija Radosavovic, a computer scientist at the University of California, Berkeley. "But I think that AI in the physical world has the potential to be even more transformational."

The state-of-the-art software that controls the movements of bipedal bots often uses what's called model-based predictive control. It's led to very sophisticated systems, such as the parkour-performing Atlas robot from Boston Dynamics. But these robot brains require a fair amount of human expertise to program, and they don't adapt well to unfamiliar situations. Reinforcement learning, or RL, in which AI learns through trial and error to perform sequences of actions, may prove a better approach.
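The "trial and error" loop at the heart of RL can be made concrete with a toy example. The sketch below (our illustration, not code from either paper) uses tabular Q-learning, one of the simplest RL algorithms, to learn a sequence of actions in a five-state corridor:

```python
import random

# Toy illustration: tabular Q-learning on a 5-state corridor. The agent
# starts at state 0 and earns a reward of 1.0 only by reaching state 4,
# discovering the right action sequence purely through trial and error.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration
for _ in range(200):                     # episodes of trial and error
    s, done = 0, False
    while not done:
        # Explore randomly sometimes; otherwise take the best-known action.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        # Nudge the value estimate toward reward plus discounted future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy steps right (+1) from every state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Real robot controllers replace the lookup table with a neural network and the five states with high-dimensional sensor readings, but the learn-from-reward loop is the same idea.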

"We wanted to see how far we can push reinforcement learning in real robots," says Tuomas Haarnoja, a computer scientist at Google DeepMind and coauthor of one of the Science Robotics papers. Haarnoja and colleagues chose to develop software for a 20-inch-tall toy robot called OP3, made by the company Robotis. The team not only wanted to teach OP3 to walk but also to play one-on-one soccer.

"Soccer is a nice environment to study general reinforcement learning," says Guy Lever of Google DeepMind, a coauthor of the paper. It requires planning, agility, exploration, cooperation and competition.

The robots were more responsive when they learned to move on their own than when they were manually programmed.

The toy size of the robots "allowed us to iterate fast," Haarnoja says, because larger robots are harder to operate and repair. And before deploying the machine learning software in the real robots — which can break when they fall over — the researchers trained it on virtual robots, a technique known as sim-to-real transfer.

Training of the virtual bots came in two stages. In the first stage, the team trained one AI using RL merely to get the virtual robot up from the ground, and another to score goals without falling over. As input, the AIs received data including the positions and movements of the robot's joints and, from external cameras, the positions of everything else in the game. (In a recently posted preprint, the team created a version of the system that relies on the robot's own vision.) The AIs had to output new joint positions. If they performed well, their internal parameters were updated to encourage more of the same behavior. In the second stage, the researchers trained an AI to imitate each of the first two AIs and to score against closely matched opponents (versions of itself).
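The step "if they performed well, their internal parameters were updated to encourage more of the same behavior" is the core of policy-gradient RL. A minimal sketch of that update rule (our illustration on a two-armed bandit, not the paper's training code) looks like this:

```python
import math, random

# Illustrative REINFORCE update: a softmax policy over two actions, where
# actions that earn higher reward are made more likely on the next trial.
random.seed(0)
theta = [0.0, 0.0]           # the policy's "internal parameters"
REWARD = [1.0, 0.2]          # action 0 performs well, action 1 poorly

def probs(theta):
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [x / s for x in z]

lr = 0.1
for _ in range(500):                     # episodes
    p = probs(theta)
    a = 0 if random.random() < p[0] else 1
    r = REWARD[a]
    # Gradient of log pi(a) w.r.t. theta[i] is (1 if i == a else 0) - p[i];
    # scaling it by the reward reinforces well-rewarded behavior.
    for i in range(2):
        theta[i] += lr * r * ((1.0 if i == a else 0.0) - p[i])

print(probs(theta))          # the better action ends up strongly preferred
```

The DeepMind system applies this reinforce-what-worked principle to whole soccer behaviors rather than two discrete arms, and adds the imitation and self-play stages described above.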

To prepare the control software, called a controller, for the real-world robots, the researchers varied aspects of the simulation, including friction, sensor delays and body-mass distribution. They also rewarded the AI not just for scoring goals but also for other things, like minimizing knee torque to avoid injury.
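The two tricks in that paragraph, randomizing the simulator's physics and shaping the reward with a torque penalty, can be sketched as follows (the parameter names and ranges are hypothetical, chosen for illustration, not taken from the paper):

```python
import random

# Hypothetical sketch of domain randomization plus reward shaping.
random.seed(0)

def sample_sim_params():
    # Each training episode sees a slightly different "world", so the
    # learned controller cannot overfit to one exact simulation.
    return {
        "friction":        random.uniform(0.4, 1.2),   # ground friction coeff.
        "sensor_delay_ms": random.uniform(0.0, 40.0),  # observation latency
        "mass_scale":      random.uniform(0.9, 1.1),   # body-mass perturbation
    }

def shaped_reward(scored_goal, knee_torque, torque_weight=0.01):
    # Reward goals, but subtract a penalty that grows with squared knee
    # torque to discourage damaging, high-torque motions.
    return (1.0 if scored_goal else 0.0) - torque_weight * knee_torque ** 2

params = sample_sim_params()
print(params)
print(shaped_reward(True, knee_torque=5.0))
```

Because the controller only ever sees randomized worlds, the real robot's physics looks like just one more sample from the training distribution, which is what makes sim-to-real transfer work.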

Real robots tested with the RL control software walked nearly twice as fast, turned three times as quickly and took less than half the time to get up compared with robots using the scripted controller made by the manufacturer. But more advanced skills also emerged, like fluidly stringing together actions. "It was really nice to see more complex motor skills being learned by robots," says Radosavovic, who was not a part of the research. And the controller learned not just single moves, but also the planning required to play the game, like knowing to stand in the way of an opponent's shot.

"In my eyes, the soccer paper is amazing," says Joonho Lee, a roboticist at ETH Zurich. "We've never seen such resilience from humanoids."

But what about human-sized humanoids? In the other recent paper, Radosavovic worked with colleagues to train a controller for a larger humanoid robot. This one, Digit from Agility Robotics, stands about five feet tall and has knees that bend backward like an ostrich. The team's approach was similar to Google DeepMind's. Both teams used computer brains known as neural networks, but Radosavovic used a specialized type called a transformer, the kind common in large language models like those powering ChatGPT.

Instead of taking in words and outputting more words, the model took in 16 observation-action pairs — what the robot had sensed and done for the previous 16 snapshots of time, covering roughly a third of a second — and output its next action. To make learning easier, it first learned based on observations of its actual joint positions and velocity, before using observations with added noise, a more realistic task. To further enable sim-to-real transfer, the researchers slightly randomized aspects of the virtual robot's body and created a variety of virtual terrain, including slopes, trip-inducing cables and bubble wrap.
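The rolling 16-pair context window and the noisy-observation stage can be sketched in a few lines (our illustration, not the Berkeley code; the shapes and noise level are stand-ins):

```python
import random
from collections import deque

# Illustrative sketch: keep a rolling window of the last 16 observation-
# action pairs, which is what the transformer controller consumes each
# step, and add noise to observations to mimic the harder training stage.
CONTEXT = 16
random.seed(0)
history = deque(maxlen=CONTEXT)        # oldest pair is dropped automatically

def noisy(obs, sigma=0.05):
    # Gaussian noise on each reading, as in the noisy training stage.
    return [x + random.gauss(0.0, sigma) for x in obs]

def record(obs, action):
    history.append((noisy(obs), action))

# Simulate 20 control ticks; only the most recent 16 pairs are kept.
for t in range(20):
    obs = [0.1 * t] * 3                # stand-in for joint positions/velocities
    action = [0.0] * 3                 # stand-in for commanded joint targets
    record(obs, action)

print(len(history))                    # 16
```

At each tick the transformer would attend over this whole window, about a third of a second of sensorimotor history, and emit the next action.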

This bipedal robot learned to handle a variety of physical challenges, including walking on different terrains and being bumped off balance by an exercise ball.

After training in the digital world, the controller operated a real robot through a full week of tests outside without the robot falling over even once. In the lab, the robot resisted external forces, such as having an inflatable exercise ball thrown at it. The controller also outperformed the manufacturer's non-machine-learning controller, easily traversing an array of planks on the ground. And whereas the default controller got stuck attempting to climb a step, the RL one managed to figure it out, even though it hadn't seen steps during training.

Reinforcement learning for four-legged locomotion has become popular in the last few years, and these studies show the same techniques now working for two-legged robots. "These papers are either at-par or have pushed beyond manually defined controllers — a tipping point," says Pulkit Agrawal, a computer scientist at MIT. "With the power of data, it will be possible to unlock many more capabilities in a relatively short period of time." 

And the papers' approaches are likely complementary. Future AI robots may need the robustness of Berkeley's system and the dexterity of Google DeepMind's. Real-world soccer incorporates both. According to Lever, soccer "has been a grand challenge for robotics and AI for quite some time."





