That matters because the new generation of home-adjacent humanoids is being sold on VLA models: vision-language-action systems that turn camera views and spoken instructions into robot actions. VLA is real progress, but a bigger model does not automatically make a smoother arm. The robot still has to see, infer, move, measure contact, and correct itself quickly enough that the physical world has not changed by the time the command reaches the motors.
A May 2026 technical post from Chef Robotics is useful because it puts numbers on a problem buyers usually only see as "jerky robot movement." Chef is an industrial food-robotics company, not a consumer home-robot maker, but the timing lesson translates directly to household manipulation.
Why do home robots move jerkily?
The short answer: the robot is often acting on old information.
Chef Robotics describes modern VLA systems that predict actions in chunks. The model looks at camera images, joint state, and a language instruction, then outputs a short sequence of future motor commands. Chunking is necessary because a large model cannot always produce a fresh action every control tick.
Chef measured a single forward pass through a billion-parameter model at 50-135 ms on an RTX 5090 GPU. A 30 Hz robot controller wants a new command every 33 ms. If the robot waited for a full VLA inference every step, it would freeze. So the model predicts a horizon of future actions while the robot executes the current chunk.
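A back-of-the-envelope sketch (using the article's numbers; the helper name is ours) shows how many control steps pass while a single forward pass is still running:

```python
import math

CONTROL_HZ = 30
TICK_MS = 1000 / CONTROL_HZ  # ~33.3 ms per control step at 30 Hz

def stale_ticks(inference_ms: float) -> int:
    """Control steps that elapse before a fresh model output arrives."""
    return math.ceil(inference_ms / TICK_MS)

# Chef's measured range for a billion-parameter model on an RTX 5090:
for latency_ms in (50, 71, 135):
    print(f"{latency_ms} ms inference -> {stale_ticks(latency_ms)} control step(s)")
```

Even the best case misses at least one tick, which is why the model has to predict a chunk of future actions instead of one fresh action per tick.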
That solves one bottleneck and creates another. When the next chunk arrives, it was predicted from a slightly different, slightly stale observation. The new trajectory may not line up perfectly with the old one. The hand does not glide; it twitches at the seam.
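A toy example (made-up velocities, not Chef's data) makes the seam visible: step-to-step changes inside one chunk are small, but the jump between chunks can be several times larger:

```python
# Commanded wrist velocities in m/s (illustrative numbers only).
chunk_a = [0.10, 0.14, 0.18, 0.20]   # end of the current action chunk
chunk_b = [0.05, 0.09, 0.13, 0.17]   # next chunk, predicted from staler pixels

# Largest step-to-step change inside a chunk vs. the jump at the boundary.
within_chunk = max(abs(b - a) for a, b in zip(chunk_a, chunk_a[1:]))
seam_jump = abs(chunk_b[0] - chunk_a[-1])

print(f"largest step inside a chunk: {within_chunk:.2f} m/s")
print(f"jump at the chunk seam:      {seam_jump:.2f} m/s")
```

That boundary jump is the velocity discontinuity Chef's paper targets; the buyer sees it as a twitch.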
Chef decomposes the delay into components that home-robot buyers should care about:
| Delay source | Chef Robotics measurement | Buyer translation |
|---|---|---|
| Model inference | 50-135 ms overall; about 55 ms with 3 images and 71 ms with 4 images in the measured setup | The AI model may be slower than the motor-control loop. |
| Mechanical lag | About 67 ms in leader-follower data collection | Teleoperated training data can record the robot body behind the commanded action. |
| Camera asynchrony | 5-30 ms across USB camera views | Multi-camera perception is not automatically simultaneous. |
| 30 Hz control loop | 33 ms per control step | A "small" delay can become several control steps. |
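To see why a "small" delay becomes several control steps, the table's numbers can be added up (a deliberately naive serial sum; real pipelines overlap some of these stages):

```python
TICK_MS = 1000 / 30  # one 30 Hz control step in milliseconds

delays_ms = {
    "model inference (3 images)": 55,
    "mechanical lag": 67,
    "camera asynchrony (worst view)": 30,
}

total_ms = sum(delays_ms.values())
steps = total_ms / TICK_MS
print(f"composite delay: {total_ms} ms ~= {steps:.1f} control steps")
```

Four to five control steps can separate what the cameras saw from what the motors do, which is exactly the gap Chef's delay compensation is aimed at.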
Chef's fix was not a bigger model. It shifted the training target forward by the measured composite delay and used delay-aware augmentation, then requested the next action chunk early through asynchronous prefetch. On a real bimanual robot running at 30 Hz, Chef reported a 64.9% reduction in velocity discontinuity and a 30.8% reduction in acceleration jerk, with no additional inference cost.
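A minimal sketch of those two ideas, with our own names and simplified logic (Chef's actual training pipeline is more involved than this): shift the supervised target forward by the measured delay, and request the next chunk before the current one runs out:

```python
import math

TICK_MS = 1000 / 30  # one 30 Hz control step in milliseconds

def shift_targets(actions: list, delay_ms: float) -> list:
    """Train against the actions the robot will need *after* the measured
    delay, not the actions aligned with the already-stale observation."""
    shift = round(delay_ms / TICK_MS)  # composite delay in control steps
    return actions[shift:]

def prefetch_step(chunk_len: int, inference_ms: float) -> int:
    """Step index at which to request the next chunk so it arrives
    before the current chunk is exhausted."""
    lead = math.ceil(inference_ms / TICK_MS)  # inference cost in steps
    return max(0, chunk_len - lead)

chunk = list(range(16))                # a 16-step action chunk
print(shift_targets(chunk, 135)[:3])   # first targets after a 4-step shift
print(prefetch_step(16, 135))          # fire the next inference at step 11
```

Neither helper adds inference cost; both only change *when* training targets and chunk requests happen, which matches Chef's "no additional inference cost" claim.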
The consumer translation is important: smoother motion can come from better system identification, timing, cameras, local compute, and training alignment — not only from more parameters or more impressive demo language.
VLA latency is a buyer issue, not just an engineering issue
A jerky robot arm is not automatically unsafe. Slow, conservative motion can be the right choice around people. But unpredictable jerks are different from deliberate slow movement. They suggest the robot is stitching together actions that do not quite agree.
In a home, that affects ordinary tasks:
- picking up a cup without nudging it sideways;
- pulling a drawer without bouncing the handle;
- placing a plate without scraping the surface;
- folding fabric without losing the edge;
- reacting when a pet, child, or human hand enters the workspace;
- recovering after the first grasp fails.
This is why a polished one-take demo is not enough. Buyers should watch for what happens at the boundaries: the start of a grasp, the transition from reaching to lifting, the moment the robot touches an object, and the recovery after an error. Those are where stale observations and mismatched chunks tend to show up.
1X NEO is a good example of why the timing question matters in a real home. ui44 tracks NEO as a $20,000 pre-order humanoid with a 167 cm body, 30 kg weight, about 4 hours of battery life, RGB cameras, depth sensors, tactile skin, and a soft, tendon-driven design. 1X says its Redwood AI model can jointly control navigation and manipulation, learn from successes and failures, and use off-board language intent while the robot performs chores.
That is the right kind of architectural direction for home work. It also leaves the buyer question open: how much of the fast contact-and-correction loop is local, measured, and repeatable when the robot is not in a curated demo?
Which robots expose useful clues today?
Robot companies rarely publish a clean "latency" spec. The ui44 database has to read around the edges: sensors, compute, payload, autonomy claims, control-stack transparency, and whether the robot is actually aimed at manipulation or mostly locomotion.
| Robot | ui44 database signal | What it says about smooth manipulation |
|---|---|---|
| Figure 03 | 173 cm, 61 kg, Helix VLA, stereo/depth vision, force sensors, tactile arrays, 20 kg payload; no public price | Figure's Helix write-up is unusually specific: a slower semantic system at 7-9 Hz and a fast reactive system at 200 Hz. That split is exactly what buyers should want to see, though Figure 03 is not a consumer purchase. |
| 1X NEO | $20,000, 167 cm, 30 kg, RGB/depth sensing, tactile skin, soft body, home-first claims | NEO is one of the most relevant home humanoids, but buyers still need public proof of repeatable unscripted chores and clear autonomy versus Expert Mode boundaries. |
| Unitree G1 | $13,500, 132 cm, 35 kg, depth camera, 3D LiDAR, Wi-Fi 6, optional Jetson Orin on EDU, optional Dex3-1 hands | G1 exposes more hardware detail than most low-cost humanoids, but the public pitch is research/development, not a finished household chore worker. |
| Unitree R1 | From $4,900, 123 cm, about 29 kg, binocular cameras, IMU, optional Jetson Orin on EDU, UnifoLM voice/image interaction | R1 is impressive as an affordable biped, but the database description rightly treats it as locomotion-first rather than heavy manipulation-first. |
| Hello Robot Stretch 3 | $24,950, 141 cm, 24.5 kg, RGB-D cameras, LiDAR, ROS 2/Python SDK, 2 kg payload | Stretch 3 is not a humanoid, but its open stack and scoped mobile manipulator design make it easier to reason about timing, control, and research reproducibility. |
| RobotEra STAR1 | 171 cm, 63 kg, 55 DoF, 12-DoF dexterous hands, ERA-42 AI model, 20 kg payload | Strong manipulation hardware claims, but buyers should separate those from published evidence about control timing and home reliability. |
The pattern is clear: the more a robot wants to touch household objects, the more its maker should explain the fast loop. "AI-powered" is too vague. Better evidence includes control frequency, onboard inference, tactile sensing, force feedback, sensor synchronization, recovery tests, and what happens when a grasp fails.
Unitree's public G1 spec sheet is a useful contrast. It lists an 8-core CPU, depth camera, 3D LiDAR, dual encoders, optional Jetson Orin module on the EDU version, optional tactile Dex3-1 hands, and an explicit caution that humanoid robots are early-stage products that individual users should understand before buying. That warning is not anti-robot. It is honest context: affordable hardware is arriving faster than finished home autonomy.
How to judge a home robot arm demo
The practical test is not "does it have VLA?" It is "does the VLA fit into a real-time robot system?"
When you watch a home robot demo, look for these signals:
- Continuous uncut attempts. A ten-second clip can hide ten failed starts. A longer run with small corrections is more valuable than a perfect montage.
- Contact moments. Watch the exact instant the gripper touches an object. Does the wrist settle smoothly, or does it bounce and re-aim?
- Recovery behavior. The best home robots will not be perfect. They will notice failure, retry safely, slow down, or ask for help.
- Disclosed fast-loop details. Figure's Helix 7-9 Hz / 200 Hz split is the type of disclosure buyers should reward. Many companies reveal far less.
- Local safety and control. Language planning can be hybrid or cloud-based. Balance, collision avoidance, grasp correction, and stop behavior should be local enough to survive network lag.
- Sensor diversity. Cameras alone are not the whole story. Force sensors, tactile arrays, compliant joints, and depth sensing help a robot understand physical contact.
- Human-in-the-loop boundaries. Teleoperation and Expert Mode can be useful, but they are not the same as autonomous manipulation. Ask when a remote human is involved.
If a company only shows fast cuts, musical overlays, and slogans about a robot "understanding" the world, stay skeptical. Understanding is not the same as stable closed-loop control.
Can software updates fix jerky robot movement?
Sometimes, yes — but not always.
Chef's result is encouraging because it reduced jerk by changing how the model was trained and scheduled, not by replacing the whole robot. That suggests some rough VLA motion can improve through better timing models, asynchronous inference, camera synchronization, data augmentation, and action-chunk training. A robot that ships with enough local compute and good sensors may get smoother over time.
But software cannot erase every physical limitation. Cheap cameras with unsynced capture, weak actuators, loose joints, low-rate control loops, poor grippers, overloaded processors, and missing tactile feedback all limit how much a future model update can help. The hardware has to give the software timely, reliable state information.
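One concrete check a well-instrumented robot can run is comparing capture timestamps across its camera views (the view names and timestamps here are hypothetical; real drivers expose timestamps differently):

```python
def max_skew_ms(stamps_ms) -> float:
    """Worst-case capture-time gap across views fetched 'simultaneously'."""
    stamps = list(stamps_ms)
    return max(stamps) - min(stamps)

# Hypothetical capture timestamps for one synchronized grab, in milliseconds:
views = {"head": 1000.0, "left_wrist": 1012.0, "right_wrist": 1027.0}
skew = max_skew_ms(views.values())
print(f"max camera skew: {skew:.0f} ms")  # inside the 5-30 ms range Chef measured
```

Hardware without synchronized capture cannot drive this number to zero in software; the model can only be taught to tolerate it.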
That is why ui44 treats latency as part of the broader buyer picture alongside on-device AI, VLA architecture, and actual robot specs in the database. Smoothness is not a luxury feature. For a robot that touches your things, it is evidence that perception, planning, and control are working together.
The bottom line
Jerky home robot movement is not just a demo aesthetic. It is a clue. The robot may be waiting on a model, switching between mismatched action chunks, seeing with cameras that are a few frames apart, or executing commands through a body that lags behind the training data.
For buyers, the safest reading is simple: do not rank home robots by AI slogans alone. Compare the full system. Use the ui44 robot database and /compare to check sensors, compute, weight, payload, price, and availability. Then watch the demo again and ask whether the robot moves smoothly because it is genuinely closing the loop — or because the video was edited around the hard parts.
Database context
Use this article as a spec-verification workflow
Turn the article into a real verification pass
Why Home Robots Move Jerkily: VLA Latency Explained already points you toward 6 linked robots, 6 manufacturers, and 3 countries inside the ui44 database. That matters because strong buyer guidance is easier to apply when you can move immediately from a claim or warning into concrete product pages, manufacturer directories, component explainers, and country-level context instead of treating the article as an isolated opinion piece. The fastest next step is to turn the article into a shortlist workflow: open the linked robot pages, verify which specs are actually published for those models, then compare the surrounding manufacturer and component context before you decide whether the underlying claim changes your buying plan.
For this topic, the useful discipline is to separate the editorial lesson from the catalog evidence. The article gives you the framing, but the robot pages tell you what each product actually ships with today: sensor stack, connectivity methods, listed price, release timing, category, and support-relevant compatibility notes. The manufacturer pages then show whether you are looking at a one-off launch, a broader lineup pattern, or a company that spans multiple categories. That layered workflow reduces the risk of buying on a single marketing phrase or a single support FAQ.
Use the robot pages to confirm which products actually publish cameras, onboard compute, tactile sensing, and connectivity, then use the manufacturer pages to decide how much of the timing question seems product-specific versus brand-wide. On this route cluster, NEO, Figure 03, and G1 form the fastest reality check. If you want a quick working shortlist, open Compare NEO, Figure 03, and G1 next, then keep this article open as the reasoning layer while you compare structured data side by side.
Practical Takeaway
Every robot, manufacturer, category, component, and country reference below resolves to a real ui44 page, keeping the follow-up path grounded in database records rather than generic advice.
Suggested next steps in ui44
- Open NEO and note the listed sensors, connectivity methods, and voice stack before you interpret any smoothness claim.
- Cross-check the wider brand context on 1X Technologies so you can see whether the timing question touches one model or a broader lineup.
- Use the linked component pages to confirm how common the relevant sensors and connectivity layers are across the database.
- Keep a short note of which claims you checked, which device features are actually present on the robot page, and which items still depend on region- or app-level confirmation.
- Finish with Compare NEO, Figure 03, and G1 so the latency reading sits next to structured product data.
Database context
Robot profiles worth opening next
Use the linked product pages as the evidence layer
The linked robot pages are where this article becomes operational. Instead of asking whether the headline is interesting, use the robot entries to inspect the actual mix of sensors, connectivity options, batteries, pricing, release timing, and stated capabilities attached to the products mentioned in the article. That is the easiest way to see whether the warning or opportunity described here affects one product family, a specific design pattern, or an entire buying lane.
NEO
1X Technologies · Humanoid · Pre-order
NEO is tracked on ui44 as a pre-order humanoid robot from 1X Technologies. The database currently records a listed price of $20,000, a release date of 2025-10-28, about 4 hours of battery life, an undisclosed charging time, and a published stack that includes RGB Cameras, Depth Sensors, and Tactile Skin plus Wi-Fi and Bluetooth.
For latency-focused reading, this page matters because it shows the concrete hardware behind the timing discussion. Use it to check whether NEO's sensor and connectivity stack supports a fast local control loop, and compare the listed capabilities such as Household Chores, Tidying Up, and Safe Human Interaction with any cloud, app, or voice layers.
Figure 03 is tracked on ui44 as an active humanoid robot from Figure AI. The database currently records a price of TBA, a release date of 2025-10-09, about 5 hours of battery life, an undisclosed charging time, and a published stack that includes Stereo Vision, Depth Cameras, and Force Sensors plus Wi-Fi and Bluetooth.
For latency-focused reading, this page matters because it shows the concrete hardware behind the timing discussion. Use it to check whether Figure 03's sensor and connectivity stack supports a fast local control loop, and compare the listed capabilities such as Complex Manipulation, Warehouse Work, and Manufacturing Tasks with any cloud, app, or voice layers.
G1 is tracked on ui44 as an available humanoid robot from Unitree. The database currently records a listed price of $13,500, a release date of 2024, about 2 hours of battery life, an undisclosed charging time, and a published stack that includes a Depth Camera, 3D LiDAR, and a 4-Microphone Array plus Wi-Fi 6 and Bluetooth 5.2.
For latency-focused reading, this page matters because it shows the concrete hardware behind the timing discussion. Use it to check whether G1's sensor and connectivity stack supports a fast local control loop, and compare the listed capabilities such as Bipedal Walking, Object Manipulation, and Dexterous Hands (optional Dex3-1) with any cloud, app, or voice layers.
R1
Unitree Robotics · Humanoid · Pre-order
R1 is tracked on ui44 as a pre-order humanoid robot from Unitree Robotics. The database currently records a listed price of $4,900, a release date of 2025, about 1 hour of battery life in mixed activity, an officially undisclosed charging time, and a published stack that includes Binocular Cameras, a 4-Mic Array, and a Dual 6-Axis IMU plus Wi-Fi and Bluetooth 5.2.
For latency-focused reading, this page matters because it shows the concrete hardware behind the timing discussion. Use it to check whether R1's sensor and connectivity stack supports a fast local control loop, and compare the listed capabilities such as Bipedal Walking & Running, Cartwheels & Handstands, and Push Recovery with any cloud, app, or voice layers, including UnifoLM (voice + image commands).
Stretch 3
Hello Robot · Home Assistants · Active
Stretch 3 is tracked on ui44 as an active home-assistant robot from Hello Robot. The database currently records a listed price of $24,950, a release date of 2024, 2-5 hours of battery life, an undisclosed charging time, and a published stack that includes an Intel D405 RGBD Camera (gripper), an Intel D435if RGBD Camera (head), and a Wide-Angle RGB Camera (head) plus Wi-Fi and Ethernet.
For latency-focused reading, this page matters because it shows the concrete hardware behind the timing discussion. Use it to check whether Stretch 3's sensor and connectivity stack supports a fast local control loop, and compare the listed capabilities such as Mobile Manipulation, Autonomous Navigation, and Teleoperation (Web / Gamepad / Dexterous) with any cloud, app, or voice layers.
Database context
Manufacturer context behind the article
Check whether this is one product story or a broader company pattern
Manufacturer pages add the context that individual product pages cannot show on their own. They help you check whether sensor choices, compute options, cloud accounts, app controls, and design assumptions appear across a broader lineup or stay tied to one specific product story.
1X Technologies
ui44 currently tracks 2 robots from 1X Technologies across 1 category. The company is grouped under Norway, and the current catalog footprint on ui44 includes NEO, EVE.
That wider brand context matters because control-stack questions rarely stop at one spec page. A manufacturer route helps you see whether the article is centered on one premium model or on a company with several relevant products, and therefore more than one place where the same design assumptions might matter. The category mix here currently points toward Humanoid as the most useful next route if you want to see whether this article reflects a wider pattern inside the brand.
Figure AI
ui44 currently tracks 2 robots from Figure AI across 1 category. The company is grouped under USA, and the current catalog footprint on ui44 includes Figure 03, Figure 02.
The same brand-context check applies; Figure AI's category mix also points toward Humanoid as the most useful next route.
Unitree
ui44 currently tracks 2 robots from Unitree across 1 category. The company is grouped under China, and the current catalog footprint on ui44 includes H1, G1.
The same brand-context check applies; Unitree's category mix also points toward Humanoid as the most useful next route.
Unitree Robotics
ui44 currently tracks 7 robots from Unitree Robotics across 2 categories. The company is grouped under China, and the current catalog footprint on ui44 includes B2, B1, Go2.
The same brand-context check applies; Unitree Robotics' category mix points toward Quadruped and Humanoid as the most useful next routes.
Database context
Broaden the scan without leaving the database
Categories, components, and countries add the wider context
Category framing
Category pages are useful when the article touches a buying pattern that shows up across brands. A category route helps you confirm whether the linked products sit in a narrow niche or whether the same question should be tested across a larger field of alternatives.
Humanoid
The Humanoid category page currently groups 78 tracked robots from 55 manufacturers. ui44 describes this lane as: Full-size bipedal humanoid robots designed to work alongside humans. From factory floors to household tasks, these machines represent the cutting edge of robotics.
That makes the category route a practical follow-up when you want to check whether the products linked in this article are typical for the lane or whether they sit at one edge of the market. Useful starting examples currently include NEO, EVE, Mornine M1.
Home Assistants
The Home Assistants category page currently groups 12 tracked robots from 12 manufacturers. ui44 describes this lane as: Arm-based household helpers — laundry folders, kitchen robots, and mobile manipulators that handle physical tasks at home.
That makes the category route a practical follow-up when you want to check whether the products linked in this article are typical for the lane or whether they sit at one edge of the market. Useful starting examples currently include Robody, Futuring 2 (F2), Stretch 3.
Country and ecosystem context
Country pages give extra context when support practices, launch sequencing, regulatory posture, or manufacturer mix matter. They are not a substitute for model-level verification, but they do help you see which ecosystems cluster together and which manufacturers sit in the same regional field when you broaden the search beyond the article headline.
Norway
The Norway route currently groups 2 tracked robots from 1 manufacturer in ui44. That gives you a useful regional lens when the article points toward support practices, launch sequencing, or brand clusters that may share similar ecosystem assumptions.
On the current route, manufacturers like 1X Technologies make the page a good way to broaden the scan without losing the regional context that often shapes availability, documentation style, and adjacent alternatives.
USA
The USA route currently groups 17 tracked robots from 12 manufacturers in ui44. That gives you a useful regional lens when the article points toward support practices, launch sequencing, or brand clusters that may share similar ecosystem assumptions.
On the current route, manufacturers like Boston Dynamics, Figure AI, Richtech Robotics make the page a good way to broaden the scan without losing the regional context that often shapes availability, documentation style, and adjacent alternatives.
China
The China route currently groups 52 tracked robots from 15 manufacturers in ui44. That gives you a useful regional lens when the article points toward support practices, launch sequencing, or brand clusters that may share similar ecosystem assumptions.
On the current route, manufacturers like AGIBOT, Unitree Robotics, Roborock make the page a good way to broaden the scan without losing the regional context that often shapes availability, documentation style, and adjacent alternatives.
Database context
Questions to answer before you move from reading to buying
A follow-up FAQ built from the entities already linked in this article
Frequently Asked Questions
Which page should I open first after reading “Why Home Robots Move Jerkily: VLA Latency Explained”?
Start with NEO. That gives you a concrete product anchor for the article’s main claim. From there, branch into the manufacturer and component pages so you can tell whether the article is describing one specific model, a repeated brand pattern, or a wider technology issue that affects multiple shortlist options.
How do the manufacturer pages change the buying decision?
The 1X Technologies page helps you zoom out from one article and one product. On ui44 it shows lineup breadth, category spread, and the neighboring robots tied to the same company. That context is useful when you are deciding whether a risk belongs to a single model, whether it shows up across a brand's portfolio, and whether you should keep looking at alternatives before committing.
When should I switch from reading to side-by-side comparison?
Move into Compare NEO, Figure 03, and G1 as soon as you understand the article’s main warning or promise. The article explains what to watch for, but the compare view is where you can check whether price, status, battery life, connectivity, sensors, and category fit still make the robot a good match for your own home and budget.
Database context
Where to go next in ui44
Keep the research chain inside the database
If you want to keep going, these follow-on pages give you the cleanest expansion path from article to research session. Open the comparison route first if you are deciding between products today. Open the manufacturer, category, and component routes if you still need to understand the broader pattern behind the claim.
Written by
ui44 Team
Published May 9, 2026
Share this article
Open a plain share link on X or Bluesky. No embeds, no widgets, no cookie baggage.