
Gemini Robotics-ER: Embodied Reasoning

Google DeepMind's Gemini Robotics-ER 1.6 is not a new robot you can buy. It is more interesting than that: it is a model for the part of a robot that has to look at the world, understand the task, check physical constraints, and decide whether the next action is safe or finished.


That layer is called embodied reasoning. For a home robot buyer, the plain-English version is: can the robot think about the room before it moves, and can it tell when its own work failed?

[Chart: the embodied reasoning loop for home-robot autonomy]

That matters because the next wave of home robots is already moving beyond voice assistants on wheels. 1X NEO, Samsung Ballie, Unitree G1, and future humanoids from Figure, Apptronik, and others all need some version of this loop before they can be trusted with chores. A robot that can start a task is useful. A robot that can notice the cup is about to tip, the drawer is stuck, or the room changed since yesterday is a different category.

What does embodied reasoning mean for home robots?

Most buyer-facing robot AI terms are overloaded. A VLA model connects vision, language, and action: it turns "pick up that mug" into motor behavior. A world model predicts what might happen next. Embodied reasoning sits above and around those systems. It asks practical questions about the physical scene:

  • Which object is the user referring to?
  • Is the requested object reachable from this body position?
  • Is the item small, light, dry, and safe enough for this gripper?
  • Did the robot actually complete the task, or did it only move in the right direction?
  • Should the robot retry, ask for help, or refuse?

Google describes Gemini Robotics-ER 1.6 as a reasoning-first model for visual and spatial understanding, task planning, and success detection. It can call tools, including VLA models, rather than acting as the low-level motor controller itself. That distinction is important. In a home robot, you do not want one big model blindly issuing movement commands. You want a stack: perception, reasoning, action, verification, and safety limits.

The most useful way to think about embodied reasoning is as a chore supervisor. It may not be the hand that grasps the cup, but it should be the system that says, "the cup is too close to the edge," "the handle moved out of view," or "that request violates the robot's payload limit."
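To make the supervisor role concrete, here is a minimal sketch of that check-before-act loop in Python. Everything in it is a hypothetical illustration, not any vendor's API: the scene dictionary stands in for a perception system, the policy callback stands in for a VLA model, and the checks mirror the questions listed earlier.

```python
# Hypothetical sketch of an embodied-reasoning "chore supervisor".
# It gates a low-level manipulation policy behind physical checks:
# find the object, respect limits, act, then verify the outcome.

def supervise(task, scene, policy, payload_limit_kg=2.0):
    """Run one supervised attempt at a chore. Returns a status string."""
    obj = scene.get(task["object"])
    if obj is None:
        return "ask_user: cannot find the object"    # ambiguity -> ask for help
    if obj["weight_kg"] > payload_limit_kg:
        return "refuse: over payload limit"          # hard safety boundary
    if not obj["reachable"]:
        return "reposition: object out of reach"     # spatial constraint check
    policy(task)                                     # call the manipulation policy
    if scene.get(task["object"])["location"] == task["target"]:
        return "done"                                # verified success
    return "retry: task not verified"                # success detection failed
```

In a real stack the scene would come from cameras and the verification step from multi-view success detection; here both are plain dictionaries so the control flow is visible.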

Gemini Robotics-ER 1.6: what actually changed

Google's announcement highlights four capabilities that map directly to home-robot problems.

  • Pointing and counting. What Google showed: identifying tools, quantities, and absent objects. Home-robot translation: picking the right mug, toy, cable, pill bottle, or laundry item.
  • Multi-view success detection. What Google showed: combining overhead and wrist camera views. Home-robot translation: checking whether an item really went into a bin, sink, or drawer.
  • Instrument reading. What Google showed: reading gauges, sight glasses, and digital displays. Home-robot translation: reading appliances, thermostats, meters, labels, and status lights.
  • Safety constraint following. What Google showed: respecting object, material, and weight limits. Home-robot translation: not grabbing liquids, sharp objects, pets' toys in use, or heavy items.

Google says the 1.6 model improves spatial and physical reasoning over earlier Gemini Robotics-ER models and general Gemini models, especially on pointing, counting, success detection, and safety instruction following. The examples are industrial, but the failure modes are familiar in a kitchen or living room. Poor lighting, occlusion, ambiguous commands, and awkward camera angles are not factory-only problems.

The most buyer-relevant feature is success detection. ui44 has covered how robots know chores are done, because this is where many demos quietly cheat. Moving toward a target is easy to film. Knowing that the object landed in the right place, stayed upright, and did not create a new mess is the hard part.

[Image: Boston Dynamics Spot using embodied reasoning AI for autonomous inspection]

Boston Dynamics is already using this commercially with Spot. Spot is not a home robot, but it is a useful preview because it works in messy real facilities rather than staged demo rooms. In the ui44 database, Spot is a 33.8 kg quadruped with roughly 90 minutes of battery life, autonomous navigation, self-charging, IP54 weather resistance, an optional arm, and a 14 kg payload capacity. Boston Dynamics says more than 1,500 units are deployed worldwide.

That is why the Google partnership is worth watching. Boston Dynamics says Gemini-powered AIVI-Learning can read gauges, measure sight-glass fullness, count pallets, detect puddles, and show transparent reasoning for inspection prompts. Those are not glamorous humanoid chores. They are the kind of boring verification tasks that make autonomy real.

Why homes are harder than Google’s examples

A pressure gauge is not a sock drawer. In some ways, it is easier: it is fixed to a wall, it has one purpose, and the success condition is a number. A home robot has to deal with soft objects, moving people, pets, clutter, sentimental items, and commands that are often incomplete.

That does not make Gemini Robotics-ER irrelevant to homes. It shows the missing middle layer. A home robot needs to reason about physical context before it calls the manipulation policy. For example:

  • "Put the blue cup in the dishwasher" requires identifying the cup, locating the dishwasher rack, checking whether the cup is dishwasher-safe, avoiding a nearby knife, and confirming the cup did not fall sideways.
  • "Tidy the living room" requires sorting toys, trash, remotes, clothes, cables, and personal items into different categories, then knowing when to stop.
  • "Bring me my medication" requires reading labels, resolving ambiguity, refusing unsafe substitutions, and possibly asking for confirmation.

This is why flashy locomotion specs do not automatically translate into household usefulness. Unitree G1 is available from $13,500, stands 132 cm tall, weighs 35 kg, runs for about two hours, and can be configured with dexterous three-finger hands. Those are impressive specs for a compact humanoid research platform. They do not tell you whether the robot can judge a messy bedside table safely.

[Image: Unitree G1 humanoid robot, showing why embodied reasoning matters for home chores]

The same applies to more home-focused robots. 1X NEO is listed in the ui44 database at $20,000 for early adopters, with a 167 cm soft humanoid body, 30 kg weight, roughly four hours of battery life, RGB and depth cameras, tactile skin, and a home-chore mission. Those physical choices make sense for homes. But the buyer question is still not "does it have cameras?" It is "can it use those cameras to avoid unsafe or pointless actions?"

The home-robot stack buyers should ask about

When a company says its robot has advanced AI, ask which layer it is talking about. A credible home robot needs at least five layers working together.

  • Perception. Buyer question: what can the robot see and localize? Example signals: cameras, depth, tactile sensors, LiDAR, microphones.
  • Reasoning. Buyer question: can it understand spatial constraints and task goals? Example signals: embodied reasoning, tool use, transparent explanations.
  • Action. Buyer question: can it move and manipulate safely? Example signals: VLA model, gripper design, force control, payload limits.
  • Verification. Buyer question: can it tell whether the task succeeded? Example signals: multi-view success detection, retries, outcome labels.
  • Governance. Buyer question: can it refuse or ask for help? Example signals: safety policies, human override, remote assist boundaries.

This is where Gemini Robotics-ER is more useful as a benchmark than as a brand name. If a robot maker cannot explain how its system checks success and handles constraints, it is probably still closer to a demo pipeline than a reliable home assistant.
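As a buyer's aid, the five-layer stack above can be collapsed into a trivial checklist. This is an illustrative sketch only: the layer names come from the table, but the spec dictionary and its fields are invented.

```python
# Illustrative buyer checklist over the five-layer home-robot stack.
# The demo_bot spec and its keys are hypothetical examples, not real data.

LAYERS = ["perception", "reasoning", "action", "verification", "governance"]

def missing_layers(spec):
    """Return the layers a robot maker has not documented."""
    return [layer for layer in LAYERS if not spec.get(layer)]

demo_bot = {
    "perception": ["RGB cameras", "depth"],
    "action": ["VLA model", "force control"],
    # no documented reasoning, verification, or governance story
}
```

Applied to this invented spec, the checklist flags reasoning, verification, and governance as undocumented, which is exactly the "demo pipeline" pattern the section warns about.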

Figure 03 is a good example of why the distinction matters. ui44's database lists Figure 03 as a 173 cm, 61 kg humanoid with a roughly five-hour battery, stereo vision, depth cameras, force sensors, tactile arrays, and a 20 kg payload. Figure's current public site is explicitly home-oriented: it calls Figure 03 "the future of home help," says it is built for everyday household tasks, and says Helix helps it navigate changing home environments. The buyer caveat is availability, not intent: Figure 03 still has no public consumer price or order path, and Figure's BMW and production-evaluation work is separate evidence that the company is learning from controlled industrial deployments first.

[Image: Figure humanoid robot's VLA model and embodied reasoning for household manipulation]

Apptronik Apollo points in a similar direction. Apollo is a 173 cm, 73 kg enterprise humanoid with roughly four hours of battery life, vision, force/torque sensing, proprioception, and a reported payload of around 25 kg. Apptronik has a strategic partnership with Google DeepMind, and its official Apollo page says the robot will operate in warehouses and manufacturing plants in the near term before eventually extending into construction, oil and gas, electronics production, retail, home delivery, elder care, and more.

That production-learning path may be frustrating if you want a robot at home now, but it is rational. Factories and warehouses make it easier to define tasks, instrument outcomes, collect failure data, and roll out updates. Homes will inherit the useful parts only after the models prove they can reason outside the marketing video.

What embodied reasoning does not solve

Embodied reasoning is not magic. It will not make weak hardware strong, turn a toy arm into a dishwasher loader, or remove the need for safety certification. It also raises privacy questions because better reasoning often wants more camera views, more logs, and more data sharing.

Boston Dynamics is explicit that AIVI-Learning requires data sharing: customer data goes back to Boston Dynamics to improve facility-specific models. That may be acceptable in an industrial inspection deployment. In a home, the bar is higher. Buyers should ask whether camera images leave the device, how long logs are stored, who can review failures, and whether the robot can keep useful functions when cloud AI is unavailable.

[Image: 1X NEO home humanoid robot, which needs embodied reasoning for safe chores]

There is also a latency problem. Some reasoning can happen in the cloud. Some must happen locally. If a robot is holding a glass, stepping around a child, or deciding whether a drawer is jammed, it cannot wait politely for a remote model every time. The long-term answer is probably a split stack: local safety reflexes and motion control, plus larger off-board reasoning for slower planning and learning.
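The split-stack idea reduces to a simple routing rule: anything with a hard deadline runs on the robot, and only slower planning goes off-board. The sketch below is a hypothetical illustration; the 50 ms budget and the task names are invented, not measured numbers.

```python
# Hypothetical local/cloud routing for a split reasoning stack.
# Requests with tight deadlines must run on-board, even if a cloud
# model would reason better; slow planning can go off-board.

LOCAL_BUDGET_MS = 50  # assumed on-board control-loop budget (invented)

def route(task_name, deadline_ms, cloud_available=True):
    """Decide where a reasoning request should run."""
    if deadline_ms <= LOCAL_BUDGET_MS:
        return "local"             # safety reflexes never wait on the network
    if not cloud_available:
        return "local_degraded"    # keep basic function when offline
    return "cloud"                 # slower planning and learning off-board

# e.g. route("stop_before_child", 20) -> "local"
#      route("plan_tidy_living_room", 5000) -> "cloud"
```

The important design point is the middle branch: a home robot should degrade gracefully when the cloud is unreachable rather than freeze mid-chore.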

Samsung Ballie shows the consumer version of this tension. The ui44 database lists Ballie as a development-stage AI companion with Gemini plus Samsung language models, SmartThings integration, a camera, spatial and environmental sensors, projection features, and no confirmed price or release date. A rolling companion robot may not need humanoid manipulation, but it still needs embodied reasoning to know where it is, what it is looking at, whether the user’s request makes sense, and when not to disturb the household.

How to evaluate embodied reasoning claims before buying

For the next few years, most embodied-reasoning claims will sound better than the products feel. Use this checklist before putting down a deposit or joining a beta.

1. Ask for failure handling, not only success clips

A credible demo should show what happens when the robot misses a grasp, sees two similar objects, finds a blocked path, or fails to complete the task. Does it retry intelligently? Does it ask for clarification? Does it stop safely?

2. Look for multi-view verification

A wrist camera alone can lose the scene. A room camera alone can miss the hand. Multi-view reasoning matters because chores often fail in small, hidden ways: a cable drags behind a basket, a cup tips inside a rack, or a folded shirt slips off the pile.
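A minimal version of multi-view verification is an agreement rule: count the chore as done only when every view that can actually see the outcome says it succeeded, and escalate when no view can see it at all. The function below is a hypothetical sketch; the view names and verdicts are invented.

```python
# Hypothetical multi-view success check: a chore counts as done only
# if all camera views that can see the outcome agree it succeeded.

def task_succeeded(view_verdicts):
    """view_verdicts maps view name -> True, False, or None (occluded)."""
    usable = [v for v in view_verdicts.values() if v is not None]
    if not usable:
        return None        # no view saw the outcome: escalate, don't assume
    return all(usable)     # any dissenting view vetoes success

# e.g. {"overhead": True, "wrist": False} -> False
#      (the overhead camera saw the cup enter the rack, but the wrist
#       camera saw it tip sideways)
```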

3. Separate navigation autonomy from chore autonomy

A robot that maps your house is not automatically a robot that can perform chores. Navigation is necessary. Manipulation plus success detection is the leap.

4. Check local safety limits

Ask what the robot refuses to handle: liquids, knives, pets, medicines, glass, hot objects, heavy items, or stairs. A useful home robot should have clear boundaries. ui44's home robot safety guide covers why saying no is a feature, not a defect.

5. Ask what improves after purchase

If the company says the robot learns, ask from what data. Does it learn only from your home? From fleet failures? From teleoperation corrections? From simulation? From app updates? And can you opt out?

The bottom line: embodied reasoning is the boring breakthrough

Gemini Robotics-ER 1.6 will not put a humanoid in your kitchen tomorrow. But it clarifies what the credible companies are racing to build: not just robot bodies, and not just chatty AI, but a reasoning layer that can connect perception, action, success detection, and safety.

For buyers, that changes the question. Do not ask only whether a robot has a VLA model, a humanoid body, or a famous AI partner. Ask whether it can understand the physical consequences of its own actions.

The first genuinely useful home robots may not be the ones with the most human-looking hands or the fastest walking speed. They may be the ones that can pause, check the scene, notice the chore is not done, and choose the safer next move.

Database context

Use this article as a privacy verification workflow

Turn the article into a real verification pass

Gemini Robotics-ER: Embodied Reasoning already points you toward 6 linked robots, 6 manufacturers, and 4 countries inside the ui44 database. That matters because strong buyer guidance is easier to apply when you can move straight from a claim or warning into concrete product pages, manufacturer directories, component explainers, and country-level context, rather than treating the article as an isolated opinion piece. The fastest next step is to turn the article into a shortlist workflow: open the linked robot pages, verify which specs are actually published for those models, then compare the surrounding manufacturer and component context before deciding whether the underlying claim changes your buying plan.

For this topic, the useful discipline is to separate the editorial lesson from the catalog evidence. The article gives you the framing, but the robot pages tell you what each product actually ships with today: sensor stack, connectivity methods, listed price, release timing, category, and support-relevant compatibility notes. The manufacturer pages then show whether you are looking at a one-off launch, a broader lineup pattern, or a company that spans multiple categories. That layered workflow reduces the risk of buying on a single marketing phrase or a single support FAQ.

Use the robot pages to confirm which products actually expose cameras, microphones, Wi-Fi, or voice systems, then use the manufacturer pages to decide how much of the privacy question seems product-specific versus brand-wide. On this route cluster, NEO, Ballie, and G1 form the fastest reality check. If you want a quick working shortlist, open Compare NEO, Ballie, and G1 next, then keep this article open as the reasoning layer while you compare structured data side by side.

Practical Takeaway

Every robot, manufacturer, category, component, and country reference below resolves to a real ui44 page, keeping the follow-up path grounded in database records rather than generic advice.

Suggested next steps in ui44

  1. Open NEO and note the listed sensors, connectivity methods, and voice stack before you interpret any policy claim.
  2. Cross-check the wider brand context on 1X Technologies so you can see whether the privacy question touches one model or a broader lineup.
  3. Use the linked component pages to confirm how common the relevant sensors and connectivity layers are across the database.
  4. Keep a short note of which policy layers you checked, which device features are actually present on the robot page, and which items still depend on region- or app-level confirmation.
  5. Finish with Compare NEO, Ballie, and G1 so the policy reading sits next to structured product data.


Robot profiles worth opening next

Use the linked product pages as the evidence layer

The linked robot pages are where this article becomes operational. Instead of asking whether the headline is interesting, use the robot entries to inspect the actual mix of sensors, connectivity options, batteries, pricing, release timing, and stated capabilities attached to the products mentioned in the article. That is the easiest way to see whether the warning or opportunity described here affects one product family, a specific design pattern, or an entire buying lane.

NEO

1X Technologies · Humanoid · Pre-order

$20,000

NEO is tracked on ui44 as a pre-order humanoid robot from 1X Technologies. The database currently records a listed price of $20,000, a release date of 2025-10-28, roughly 4 hours of battery life, an undisclosed charging time, and a published stack that includes RGB Cameras, Depth Sensors, and Tactile Skin plus Wi-Fi and Bluetooth.

For privacy-focused reading, this page matters because it shows the concrete device surface behind the policy discussion. Use it to verify whether NEO combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Household Chores, Tidying Up, and Safe Human Interaction with any cloud, app, or voice layers.

Ballie

Samsung · Companions · Development

Price TBA

Ballie is tracked on ui44 as a development-stage companion robot from Samsung. The database currently lists the price as TBA, the release date as TBD, battery life and charging time as not officially disclosed, and a published stack that includes Camera, Spatial Sensors, and Environmental Sensors plus Wi-Fi and SmartThings.

Use this page to verify whether Ballie combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Autonomous Home Navigation, Built-in Projector (Wall & Floor), and Smart Home Control via SmartThings with any cloud, app, or voice layers, including Bixby.

G1

Unitree · Humanoid · Available

$13,500

G1 is tracked on ui44 as an available humanoid robot from Unitree. The database currently records a listed price of $13,500, a release date of 2024, roughly 2 hours of battery life, an undisclosed charging time, and a published stack that includes Depth Camera, 3D LiDAR, and a 4-microphone array plus Wi-Fi 6 and Bluetooth 5.2.

Use this page to verify whether G1 combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Bipedal Walking, Object Manipulation, and Dexterous Hands (optional Dex3-1) with any cloud, app, or voice layers.

Spot

Boston Dynamics · Commercial · Active

Price TBA

Spot is tracked on ui44 as an active commercial robot from Boston Dynamics. The database currently lists the price as TBA, a release date of 2020, roughly 90 minutes of battery life, a 60-minute charging time, and a published stack that includes 360° Stereo Cameras, Time-of-Flight Sensor, and Ultrasonic Sensors (front + rear) plus Wi-Fi 2.4GHz/5GHz and Ethernet.

Use this page to verify whether Spot combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Autonomous Industrial Inspection, Stair Climbing (±30° slopes), and Dynamic Obstacle Avoidance with any cloud, app, or voice layers.

Figure 03

Figure AI · Humanoid · Active

Price TBA

Figure 03 is tracked on ui44 as an active humanoid robot from Figure AI. The database currently lists the price as TBA, a release date of 2025-10-09, roughly 5 hours of battery life, an undisclosed charging time, and a published stack that includes Stereo Vision, Depth Cameras, and Force Sensors plus Wi-Fi and Bluetooth.

Use this page to verify whether Figure 03 combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Complex Manipulation, Warehouse Work, and Manufacturing Tasks with any cloud, app, or voice layers.


Manufacturer context behind the article

Check whether this is one product story or a broader company pattern

Manufacturer pages add the privacy context that individual product pages cannot show on their own. They help you check whether cameras, microphones, cloud accounts, app controls, and policy assumptions appear across a broader lineup or stay tied to one specific product story.

1X Technologies

ui44 currently tracks 2 robots from 1X Technologies in one category. The company is grouped under Norway, and its current catalog footprint on ui44 includes NEO and EVE.

That wider brand context matters because privacy questions rarely stop at one FAQ page. A manufacturer route helps you see whether the article is centered on one premium model or on a company that has several relevant products and therefore more than one place where the same policy or app assumptions might matter. The category mix here currently points toward Humanoid as the most useful next route if you want to see whether this article reflects a wider pattern inside the brand.

Samsung

ui44 currently tracks 2 robots from Samsung across two categories. The company is grouped under South Korea, and its current catalog footprint on ui44 includes Ballie and the Bespoke AI Jet Bot Steam Ultra.

The category mix here currently points toward Companions and Cleaning as the most useful next routes if you want to see whether this article reflects a wider pattern inside the brand.

Unitree

ui44 currently tracks 2 robots from Unitree in one category. The company is grouped under China, and its current catalog footprint on ui44 includes H1 and G1.

The category mix here currently points toward Humanoid as the most useful next route if you want to see whether this article reflects a wider pattern inside the brand.

Boston Dynamics

ui44 currently tracks 3 robots from Boston Dynamics across two categories. The company is grouped under USA, and its current catalog footprint on ui44 includes Atlas (Electric), Spot, and Stretch.

The category mix here currently points toward Humanoid and Commercial as the most useful next routes if you want to see whether this article reflects a wider pattern inside the brand.


Broaden the scan without leaving the database

Categories, components, and countries add the wider context

Category framing

Category pages are useful when the article touches a buying pattern that shows up across brands. A category route helps you confirm whether the linked products sit in a narrow niche or whether the same question should be tested across a larger field of alternatives.

Humanoid

The Humanoid category page currently groups 78 tracked robots from 55 manufacturers. ui44 describes this lane as: Full-size bipedal humanoid robots designed to work alongside humans. From factory floors to household tasks, these machines represent the cutting edge of robotics.

That makes the category route a practical follow-up when you want to check whether the products linked in this article are typical for the lane or whether they sit at one edge of the market. Useful starting examples currently include NEO, EVE, Mornine M1.

Companions

The Companions category page currently groups 35 tracked robots from 32 manufacturers. ui44 describes this lane as: Social robots, robot pets, and elderly care companions designed for emotional connection and daily support.

Useful starting examples currently include PARO, Abi, and Moflin.

Country and ecosystem context

Country pages give extra context when support practices, launch sequencing, regulatory posture, or manufacturer mix matter. They are not a substitute for model-level verification, but they do help you see which ecosystems cluster together and which manufacturers sit in the same regional field when you broaden the search beyond the article headline.

Norway

The Norway route currently groups 2 tracked robots from one manufacturer in ui44. That gives you a useful regional lens when the article points toward support practices, launch sequencing, or brand clusters that may share similar ecosystem assumptions.

On the current route, manufacturers like 1X Technologies make the page a good way to broaden the scan without losing the regional context that often shapes availability, documentation style, and adjacent alternatives.

South Korea

The South Korea route currently groups 2 tracked robots from one manufacturer, Samsung, in ui44.

China

The China route currently groups 52 tracked robots from 15 manufacturers in ui44, including AGIBOT, Unitree Robotics, and Roborock.


Questions to answer before you move from reading to buying

A follow-up FAQ built from the entities already linked in this article

Frequently Asked Questions

Which page should I open first after reading “Gemini Robotics-ER: Embodied Reasoning”?

Start with NEO. That gives you a concrete product anchor for the article’s main claim. From there, branch into the manufacturer and component pages so you can tell whether the article is describing one specific model, a repeated brand pattern, or a wider technology issue that affects multiple shortlist options.

How do the manufacturer pages change the buying decision?

The 1X Technologies manufacturer page helps you zoom out from one article and one product. On ui44 it shows lineup breadth, category spread, and the neighboring robots tied to the same company. That context is useful when you are deciding whether a risk belongs to a single model, whether it shows up across a brand's portfolio, and whether you should keep looking at alternatives before committing.

When should I switch from reading to side-by-side comparison?

Move into Compare NEO, Ballie, and G1 as soon as you understand the article’s main warning or promise. The article explains what to watch for, but the compare view is where you can check whether price, status, battery life, connectivity, sensors, and category fit still make the robot a good match for your own home and budget.


Where to go next in ui44

Keep the research chain inside the database

If you want to keep going, these follow-on pages give you the cleanest expansion path from article to research session. Open the comparison route first if you are deciding between products today. Open the manufacturer, category, and component routes if you still need to understand the broader pattern behind the claim.


Written by

ui44 Team

Published May 7, 2026

