Physical AI (for Robots … in Space)
Some of the most exciting recent accomplishments of AI in physical space have come from robotics.
Spatial AI isn’t synonymous with “physical AI,” although the two are related. Physical AI takes into account more of the material and physical properties of everyday space: models of solid objects, gravity, and ordinary Newtonian physics, along with the practical problems those models are meant to solve. Can we build robots that perform the tasks humans can?
Roboticists build and program machines that move and navigate through space, often transporting goods in the process. From consumer vacuum cleaners to industrial robotic arms, robot dogs, autonomous vehicles (self-driving cars), drones, and, today, humanoid robots, each machine has its own abilities and jobs to do, and so each “understands” the space around it differently.
In addition to wayfinding techniques, roboticists may use classes of algorithms like “path planning” to determine a robot’s route through space and “motion planning” to compute the movements it must make to (1) move a package from cart to conveyor and (2) avoid bumping into anything solid.
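To make “path planning” concrete, here is a minimal sketch of the classic A* search over a 2D occupancy grid, the kind of simplified map a warehouse robot might plan against. Everything here is illustrative: the `astar` function, the grid encoding (0 for free, 1 for blocked), and the example floor layout are assumptions for the sketch, not any particular robot’s API.

```python
import heapq

def astar(grid, start, goal):
    """A* path planning on a 2D occupancy grid.

    grid: list of lists, 0 = free cell, 1 = obstacle
    start, goal: (row, col) tuples
    Returns a list of cells from start to goal, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def heuristic(cell):
        # Manhattan distance: admissible for 4-connected, unit-cost moves.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(heuristic(start), 0, start)]  # entries: (f-score, g-score, cell)
    came_from = {}                             # parent links for path reconstruction
    g_score = {start: 0}                       # best known cost to reach each cell

    while open_set:
        _, g, current = heapq.heappop(open_set)
        if current == goal:
            # Walk the parent links backward to recover the route.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if g > g_score.get(current, float("inf")):
            continue  # stale queue entry; a cheaper route was found already
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (r + dr, c + dc)
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and grid[nbr[0]][nbr[1]] == 0:
                tentative = g + 1
                if tentative < g_score.get(nbr, float("inf")):
                    g_score[nbr] = tentative
                    came_from[nbr] = current
                    heapq.heappush(open_set, (tentative + heuristic(nbr), tentative, nbr))
    return None  # goal unreachable

# Hypothetical example: a 4x4 warehouse floor with a wall of shelving in the middle.
floor = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
]
print(astar(floor, start=(0, 0), goal=(2, 0)))  # route detours around the shelving
```

Motion planning then takes a route like this one step further, turning the sequence of cells into wheel or joint trajectories that respect the robot’s physical dynamics, a harder, continuous problem.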
If we were to design a robot to deliver packages along the sidewalks of New York City and another for the streets and casinos of Las Vegas, what capabilities would we endow them with? Flight, perhaps? What navigation rules would we teach them? Would we distinguish such rules by city, or would we write more general rules? Does the spatial reasoning needed for a self-driving car differ significantly from that of an autonomous flying drone?
Roboticists have been wrestling with these questions for decades, and achieving remarkable results. And with companies like Boston Dynamics, Agility Robotics, Unitree, Honda, and Figure announcing breakthroughs almost every day, most of the AI being developed for spatial purposes is exactly this: AI for robots, of varying degrees of autonomy, navigating physical environments.