firsttruck
Maybe TeslaBot Optimus sub-Prime could teach itself?
In the beginning it might need to be kept in a big padded room.
How many days before it takes over the world? :-(
-----------------------------
At the University of California (Berkeley, CA), a robot dog just taught itself to walk
AI could help robots learn new skills and adapt to the real world quickly.
By Melissa Heikkilä - MIT Technology Review
July 18, 2022
https://www.technologyreview.com/2022/07/18/1056059/robot-dog-ai-reinforcement/
.....
The robot dog is waving its legs in the air like an exasperated beetle. After 10 minutes of struggling, it manages to roll over to its front. Half an hour in, the robot is taking its first clumsy steps, like a newborn calf. But after one hour, the robot is strutting around the lab with confidence. What makes this four-legged robot special is that it learned to do all this by itself, without being shown what to do in a computer simulation.
Danijar Hafner and colleagues at the University of California, Berkeley, used an AI technique called reinforcement learning, which trains algorithms by rewarding them for desired actions, to train the robot to walk from scratch in the real world. The team used the same algorithm to successfully train three other robots, such as one that was able to pick up balls and move them from one tray to another.

Traditionally, robots are trained in a computer simulator before they attempt to do anything in the real world. For example, a pair of robot legs called Cassie taught itself to walk using reinforcement learning, but only after it had done so in a simulation.

"The problem is your simulator will never be as accurate as the real world. There'll always be aspects of the world you're missing," says Hafner, who worked with colleagues Alejandro Escontrela and Philipp Wu on the project and is now an intern at DeepMind. Adapting lessons from the simulator to the real world also requires extra engineering, he says.
The team's algorithm, called Dreamer, uses past experiences to build up a model of the surrounding world. Dreamer also allows the robot to conduct trial-and-error calculations in a computer program as opposed to the real world, by predicting potential future outcomes of its potential actions. This allows it to learn faster than it could purely by doing. Once the robot had learned to walk, it kept learning to adapt to unexpected situations, such as resisting being toppled by a stick. [...]

Jonathan Hurst, a professor of robotics at Oregon State University, says the findings, which have not yet been peer-reviewed, make it clear that "reinforcement learning will be a cornerstone tool in the future of robot control."
-----------------------------
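Side note on "rewarding them for desired actions": here is a toy sketch of plain reinforcement learning (my own illustration, not the Berkeley team's code). It is tabular Q-learning on a tiny 1-D walk; the reward for reaching the goal nudges the agent's value estimates toward the actions that paid off.

# Toy reinforcement learning sketch (illustrative only): the agent starts
# at position 0, can step left or right, and earns reward 1 for reaching
# position 5. Rewards pull the value estimates toward "step right".
import random
from collections import defaultdict

q = defaultdict(float)                   # q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != 5:
        # Mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice([-1, 1])
        else:
            action = max([-1, 1], key=lambda a: q[(state, a)])
        next_state = max(0, min(5, state + action))
        reward = 1.0 if next_state == 5 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        best_next = max(q[(next_state, -1)], q[(next_state, 1)])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print(max([-1, 1], key=lambda a: q[(0, a)]))   # prints 1: it learned to step right

That is the whole idea: no one shows the agent what to do, the reward signal does the teaching.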
Learning to Walk in the Real World in 1 Hour
We trained a quadruped robot to learn how to walk directly in the physical world without simulators. Learning from scratch in only 1 hour was possible by using the Dreamer algorithm to continuously learn a world model and plan inside of it.
Jul 6, 2022
Danijar Hafner
------
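And the world-model trick those Dreamer quotes describe: fit a model that predicts the next state and reward from past experience, then do the trial and error inside that model instead of on the hardware. Rough sketch below (illustrative names, not the authors' code; Dreamer actually trains an actor and critic inside the imagined rollouts, while this swaps in a much simpler random-shooting planner):

# World-model planning sketch (illustrative only): predict outcomes of
# candidate actions in "imagination" and pick the best-looking one.
import random

class WorldModel:
    """Stand-in for a learned dynamics model: (state, action) -> (next state, reward)."""
    def predict(self, state, action):
        # A real world model is a neural net fit to past experience; we fake one.
        next_state = [s + 0.1 * a for s, a in zip(state, action)]
        reward = -sum(abs(s) for s in next_state)   # pretend goal: drive the state to zero
        return next_state, reward

def imagined_return(model, state, actions):
    """Roll an action sequence forward inside the model; total the predicted reward."""
    total = 0.0
    for action in actions:
        state, reward = model.predict(state, action)
        total += reward
    return total

def plan(model, state, horizon=5, candidates=64):
    """Try random action sequences in imagination; return the first action of the best one."""
    best_seq, best_ret = None, float("-inf")
    for _ in range(candidates):
        seq = [[random.uniform(-1, 1) for _ in state] for _ in range(horizon)]
        ret = imagined_return(model, state, seq)
        if ret > best_ret:
            best_seq, best_ret = seq, ret
    return best_seq[0]

model = WorldModel()
print(plan(model, [1.0, -0.5]))   # action chosen purely from imagined outcomes

Either way the point is the same: the expensive guessing happens in imagination, so the real robot only executes actions that already looked promising. That is how it gets to walking in an hour instead of weeks.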