That does not make it a finished home robot. It makes KAI a useful test case for a more practical question: when a company announces extreme dexterity, what should buyers actually believe?
The ui44 database currently tracks Kinetix AI KAI as a development-stage humanoid with no official public price, no consumer order path, and a manufacturer page that mainly says KaiBot is coming in early 2026. Independent launch coverage fills in many of the headline numbers, but the distinction matters: an official teaser page plus third-party spec reporting is not the same thing as a purchasable robot with warranty terms, service coverage, and repeatable home-task evidence.
So this guide reads KAI in two ways at once: as a genuinely interesting dexterous humanoid, and as a reminder that home readiness is not measured in degrees of freedom alone.
Is Kinetix AI KAI a real home robot or a prototype?
KAI is real enough to belong in a home-robot database, but buyers should treat it as a prototype/development platform until Kinetix AI publishes commercial terms.
The strongest public claims around KAI are hardware claims. ui44's record lists KAI as a 173 cm, 70 kg humanoid with a reported 115 total degrees of freedom, 72 hand DoF across both hands, a 1.7 kWh semi-solid-state battery, Ethernet and Wi-Fi, and a reported carrying capacity of up to 20 kg. Independent reports also cite a top speed around 5 km/h, a runtime range around 3-4 hours, and synthetic tactile skin with 18,000 sensing points capable of detecting forces down to 0.1 newtons.
That is a serious spec sheet. The home-relevance is obvious: dishes, clothes, handles, tools, and pet-safe contact are hand-and-touch problems, not just walking problems.
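One quick way to sanity-check that spec sheet is to ask what average power draw the reported battery and runtime imply. A back-of-envelope sketch; both input figures are reported, not verified:

```python
# Average power draw implied by KAI's reported 1.7 kWh battery
# and 3-4 hour runtime range. Both figures are third-party reports.
battery_wh = 1.7 * 1000  # 1.7 kWh expressed in watt-hours

for runtime_h in (3.0, 4.0):
    avg_power_w = battery_wh / runtime_h
    print(f"{runtime_h:.0f} h runtime -> ~{avg_power_w:.0f} W average draw")
```

That puts KAI's implied average draw in the 425-567 W range, a plausible band for a full-size humanoid doing continuous dual-arm work, which makes the runtime claim at least internally consistent.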
But the buyer caveats are just as important:
- Kinetix AI has not published official pricing for KAI.
- Third-party price reports conflict: some cite a target below $40,000, while one product database lists $80,000.
- There is no public consumer checkout, reservation contract, warranty policy, or service network.
- Most task claims are demonstration claims, not published reliability rates.
- The official KaiBot page is a teaser, not a full technical datasheet.
That is why ui44 marks the price as null rather than pretending the market has settled on a number. If you are comparing home humanoids, an unavailable $80,000 prototype and a promised sub-$40,000 mass-production target behave very differently.
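Storing the price as null rather than a guessed number is a small data-modeling decision worth making explicit. A minimal sketch; the field names are hypothetical, not ui44's actual schema:

```python
# Representing "no official price" honestly: None, never a guessed number.
# RobotRecord and its fields are illustrative, not ui44's real schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotRecord:
    name: str
    price_usd: Optional[int]  # None means no official price published

    def price_label(self) -> str:
        if self.price_usd is None:
            return "Price TBA"
        return f"${self.price_usd:,}"

kai = RobotRecord("KAI (KaiBot)", None)
neo = RobotRecord("NEO", 20_000)
print(kai.price_label())  # Price TBA
print(neo.price_label())  # $20,000
```

The design point: a null forces every comparison view to handle "unknown" explicitly, instead of letting a conflicting third-party figure masquerade as settled market data.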
Should buyers care about 115 degrees of freedom?
Yes — but not in the way launch headlines imply.
Degrees of freedom are valuable because household work is full of awkward contact. A gripper can pick up a box; a hand has to adjust around a mug handle, cloth edge, spoon, door pull, charging cable, or crumpled shirt. Passive joints can also help a hand conform to objects without demanding instant computation for every tiny collision. That is why KAI's reported hand design is interesting: 36 DoF per hand, with active and passive joints, suggests Kinetix AI is thinking beyond simple claw gripping.
The problem is that every extra joint is also a reliability, calibration, cost, and repair question. More joints mean more actuators or mechanisms, more sensors, more wiring, more failure modes, and more software coordination. In a lab, that complexity can produce beautiful demos. In a kitchen, it has to survive grease, dust, dropped utensils, curious children, pets, cable snags, and thousands of repetitive grasps.
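The reliability cost of joint count compounds multiplicatively, which is why the difference between a 6-DoF gripper arm and a 115-DoF humanoid is larger than it looks. A hypothetical sketch; the per-joint reliability figure is an illustrative assumption, not a KAI spec:

```python
# If each joint independently completes a task cycle with probability p,
# the chance that ALL joints behave on a given cycle is p ** n.
# The 0.999 per-cycle figure is an assumption for illustration only.
def all_joints_ok(per_joint_reliability: float, joint_count: int) -> float:
    return per_joint_reliability ** joint_count

for n in (6, 72, 115):  # simple gripper arm vs KAI's reported hand and body DoF
    clean = all_joints_ok(0.999, n)
    print(f"{n:>3} joints: {clean:.1%} of cycles fully clean")
```

Even an optimistic 99.9% per-joint figure leaves a 115-joint system below 90% fully clean cycles, which is why ambition shifts the burden of proof toward reliability data.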
This is the trade-off buyers should watch:
| Signal | Why it helps | Why it can backfire |
|---|---|---|
| 115 total DoF | More human-like reach, posture, and hand motion | More places for mechanical slop, calibration drift, and failures |
| 72 hand DoF | Better object adaptation and in-hand manipulation | Harder repair and harder software control |
| 18,000 tactile points | Potentially safer grip and contact feedback | Needs robust sensor fusion, cleaning tolerance, and validation |
| 20 kg reported load | Useful for carrying bags or household objects | Payload is not the same as precise manipulation at arm's length |
| 3-4 hour runtime | Fits some assisted-task sessions | Runtime under active dual-arm work may differ from headline runtime |
The right conclusion is not "KAI is hype." The right conclusion is "KAI is ambitious, and ambition moves the burden of proof from walking demos to reliability data."
How KAI compares with other home-adjacent robots
The closest comparisons are not ordinary robot vacuums or lawn mowers. KAI should be compared with humanoids and mobile manipulators that make a real claim on household objects.
1X NEO is the cleanest home-first contrast. ui44 tracks NEO at $20,000, 167 cm, 30 kg, about 4 hours of battery life, RGB and depth sensing, tactile skin, and a soft body designed for human coexistence. NEO has a clearer consumer pitch than KAI, but buyers still need to understand autonomy boundaries, remote Expert Mode involvement, and what tasks are truly repeatable in an ordinary home.
Figure 03 sits at the other end. It is not a consumer purchase, and Figure has no public price, but the database records useful hardware signals: 173 cm, 61 kg, roughly 5 hours of battery life, 20 kg payload, stereo/depth vision, force sensors, tactile arrays, and Helix VLA. Figure's value as a comparison is architectural. It shows how much a serious manipulation robot depends on the full loop: perception, tactile sensing, force feedback, action generation, and fast reactive control.
Unitree G1 is a price anchor. At $13,500 starting price, 132 cm, 35 kg, optional dexterous hands, depth camera, 3D LiDAR, Wi-Fi 6, and optional NVIDIA Jetson Orin on the EDU version, G1 is more attainable than most humanoids. But ui44 still treats it as research/development hardware, not a polished home helper.
Unitree R1-A7-D is even more useful for separating price headlines from configuration reality. Unitree says the R1 dual-arm line starts from $4,290, but the exact R1-A7-D mobile-base configuration price is not public. Its 7-DoF arms, posture-dependent arm payload of 2-4 kg, 1.5-hour battery-powered runtime, optional grippers or dexterous hands, and open interfaces make it compelling for labs and developers. That is different from being ready to clear a dinner table.
Hello Robot Stretch 3 looks less humanoid, but it is the boring comparison KAI has to beat. Stretch 3 is a $24,950 mobile manipulator with a compact base, RGB-D cameras, LiDAR, ROS 2/Python support, and a scoped 2 kg payload. It will not sell itself with 115 DoF, but its narrower embodiment makes it easier to evaluate, repair, and reproduce in research homes.
The pattern is clear: the robots closest to home usefulness are not always the ones with the largest spec numbers. They are the ones where the maker can explain what the robot can do repeatedly, what it cannot do, and who fixes it when the hand stops behaving.
What tactile skin changes — and what it does not
Tactile skin is one of the most important KAI claims because homes are contact-rich environments. A robot vacuum can avoid most human contact. A laundry-folding or dishwasher-loading robot cannot. It has to touch objects, surfaces, and sometimes people.
In theory, KAI's reported 18,000 sensing points and 0.1 N touch sensitivity could help with three buyer-relevant problems:
- Grip force: holding a glass firmly enough to lift it but gently enough not to crack it.
- Contact awareness: knowing when an elbow, wrist, or torso has brushed a cabinet, wall, chair, pet, or person.
- Failure detection: noticing that a grasp slipped, a cloth edge folded, or a tool is misaligned.
That is exactly the direction home manipulation needs. It also aligns with the broader industry move toward hands, force sensing, and tactile data systems. ui44 has already argued in its robot-hand coverage that five-finger hands are not automatically better than grippers; the better question is whether the hand, sensors, control loop, and task model are matched to the job.
For KAI, the missing evidence is public validation. Does the tactile skin survive repeated cleaning? How is it calibrated? What happens when a sensor region fails? Are force thresholds local and fast, or routed through slower planning software? Can users replace damaged skin modules? Does the robot slow or stop safely when tactile readings conflict with the visual model?
Those are not niche engineering questions. They are the difference between a robot that can impress at a launch event and a robot a family would trust near plates, pets, and hands.
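One of those questions, whether force thresholds are local and fast, can be made concrete. A minimal sketch of a local tactile reflex that acts before the planner is consulted; every name and threshold here is an illustrative assumption, not Kinetix AI's actual control architecture:

```python
# Local "tactile reflex" sketch: react to crush or slip in the fast loop,
# and let the slower planner see only the outcome. All thresholds and
# function names are hypothetical assumptions for illustration.

SLIP_DROP_N = 0.3    # sudden force drop suggesting the object is slipping
CRUSH_LIMIT_N = 8.0  # never exceed this grip force on fragile items

def tactile_reflex(prev_force_n: float, force_n: float) -> str:
    """Return a fast local action from two consecutive force readings."""
    if force_n > CRUSH_LIMIT_N:
        return "loosen"   # protect the object immediately, no planning round-trip
    if prev_force_n - force_n > SLIP_DROP_N:
        return "regrip"   # grasp is slipping; correct locally
    return "hold"

print(tactile_reflex(2.0, 9.5))  # loosen
print(tactile_reflex(2.0, 1.5))  # regrip
print(tactile_reflex(2.0, 2.1))  # hold
```

The buyer-relevant point is latency: a reflex like this runs in milliseconds near the sensor, while a decision routed through planning software may arrive after the glass has already cracked.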
The home-assistant claim: dishwasher, clothes, and the messy middle
KAI's reported demonstrations include dishwasher loading/unloading, folding clothes, sorting goods, light assembly, and even threading a needle. Those examples are smartly chosen because they sound domestic and dexterous at the same time.
They are also some of the hardest chores to generalize.
A dishwasher task involves object recognition, wet or reflective surfaces, thin handles, varied rack geometry, reach constraints, collision avoidance, and fragile items. Clothes folding adds deformable material, hidden edges, and ambiguous success criteria. Sorting goods is simpler if the set of objects is controlled; it becomes harder when packaging is crushed, occluded, or mixed with household clutter. Needle threading is an impressive dexterity demo, but it is not evidence that the robot can safely handle a month's worth of ordinary chores.
That does not diminish KAI. It gives buyers a better way to watch future demos. Ask whether the task is:
- uncut rather than edited;
- repeated across different homes or object sets;
- performed after small disturbances;
- recovered safely after a failed grasp;
- measured with a success rate;
- run with the same hardware configuration being sold;
- performed without a hidden operator choosing every next action.
A home assistant humanoid is not judged by its best attempt. It is judged by the boring tenth attempt.
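That "boring tenth attempt" standard is easy to quantify. A sketch of the reliability reporting buyers should ask for; the trial outcomes are invented for illustration:

```python
# Success rate with a crude 95% confidence interval (normal approximation).
# The trial data is invented; the point is the reporting format.
import math

def report(task: str, outcomes: list) -> str:
    n = len(outcomes)
    k = sum(1 for ok in outcomes if ok)
    p = k / n
    # Normal-approximation interval: rough, but honest about small samples.
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return f"{task}: {k}/{n} = {p:.0%} (±{half:.0%} at 95%)"

trials = [True] * 8 + [False] * 2  # 8 clean runs, 2 failures out of 10
print(report("load dishwasher", trials))
```

Note how wide the interval is at ten trials: a single launch demo, let alone an edited one, tells a buyer almost nothing about household reliability.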
Data strategy may matter as much as the robot body
Kinetix AI's KAI Halo training device may be as important as KAI itself. The idea, according to launch coverage, is to capture first-person video, body movement, and spatial data from people performing ordinary tasks, then use that data to train the robot's world model.
That fits a wider physical-AI trend. 1X, Figure, Genesis AI-style hand systems, ROBOTIS, and other robot companies are all wrestling with the same bottleneck: the internet has enormous text and video data, but not enough high-quality robot action data. A humanoid has to learn how physical actions actually change the world.
The buyer translation is simple. If KAI's body is the product, KAI Halo is part of the supply chain. Better task data could help Kinetix AI reduce the gap between impressive dexterity and useful autonomy. Poorly covered task data could leave the robot with a beautiful hand and no robust everyday skill library.
The privacy question belongs here too. A device designed to collect egocentric household task data may be useful for training, but buyers should ask where the data goes, how it is consented, how faces and private spaces are handled, and whether future home users are expected to contribute data after purchase.
What would make KAI feel buyer-ready?
KAI does not need to be perfect to be interesting. It does need clearer evidence before a buyer should treat it like a home-assistant option. The most useful next disclosures would be:
- Official price and purchase model: list price, deposit terms, leasing or service plan, refund rules, and shipping regions.
- Task reliability: success rates for dishwasher handling, clothes folding, object sorting, and failed-grasp recovery.
- Autonomy boundaries: which tasks are autonomous, which use remote assistance, and which require developer setup.
- Safety validation: force limits, stop behavior, tactile-skin failure modes, child/pet assumptions, and certifications if applicable.
- Repair plan: expected service intervals, replaceable skin or hand modules, battery replacement, and field repair coverage.
- Data policy: KAI Halo training-data handling, home user data retention, opt-outs, and whether footage leaves the home.
Until those pieces are public, KAI is best understood as a high-ceiling robot, not a near-term household purchase.
Bottom line: KAI is exciting because it raises the bar
Kinetix AI KAI is worth watching because it puts three home-relevant bets in one body: dexterous hands, tactile skin, and world-model-based action evaluation. Those are exactly the areas that matter if humanoids are ever going to move from walking demos to useful household manipulation.
But a high-DoF robot is not automatically a home robot. In ui44's database, KAI sits beside other home-adjacent machines that each expose a different trade-off. 1X NEO has the clearer home preorder story. Unitree G1 has a lower research-platform price. Figure 03 has a stronger industrial autonomy narrative. Stretch 3 has a narrower but more explainable mobile-manipulation stack.
KAI's job is to prove that its complexity buys more than a launch headline. If Kinetix AI can turn 115 DoF and tactile skin into reliable, repairable, privacy-respecting home tasks, it will deserve serious attention. Until then, buyers should admire the engineering and keep the checklist open.
For a broader look at the category, compare KAI with other humanoids in the ui44 robot database or use ui44 Compare to line up price, status, sensors, and manipulation claims side by side.
Database context
Use this article as a privacy verification workflow
Turn the article into a real verification pass
Kinetix AI KAI: 115 DoF Home Robot Reality Check already points you toward 6 linked robots, 6 manufacturers, and 3 countries inside the ui44 database. That matters because strong buyer guidance is easier to apply when you can move immediately from a claim or warning into concrete product pages, manufacturer directories, component explainers, and country-level context instead of treating the article as an isolated opinion piece. The fastest next step is to turn the article into a shortlist workflow: open the linked robot pages, verify which specs are actually published for those models, then compare the surrounding manufacturer and component context before you decide whether the underlying claim changes your buying plan.
For this topic, the useful discipline is to separate the editorial lesson from the catalog evidence. The article gives you the framing, but the robot pages tell you what each product actually ships with today: sensor stack, connectivity methods, listed price, release timing, category, and support-relevant compatibility notes. The manufacturer pages then show whether you are looking at a one-off launch, a broader lineup pattern, or a company that spans multiple categories. That layered workflow reduces the risk of buying on a single marketing phrase or a single support FAQ.
Use the robot pages to confirm which products actually expose cameras, microphones, Wi-Fi, or voice systems, then use the manufacturer pages to decide how much of the privacy question seems product-specific versus brand-wide. On this route cluster, KAI (KaiBot), NEO, and Figure 03 form the fastest reality check. If you want a quick working shortlist, open Compare KAI (KaiBot), NEO, and Figure 03 next, then keep this article open as the reasoning layer while you compare structured data side by side.
Practical Takeaway
Every robot, manufacturer, category, component, and country reference below resolves to a real ui44 page, keeping the follow-up path grounded in database records rather than generic advice.
Suggested next steps in ui44
- Open KAI (KaiBot) and note the listed sensors, connectivity methods, and voice stack before you interpret any policy claim.
- Cross-check the wider brand context on Kinetix AI so you can see whether the privacy question touches one model or a broader lineup.
- Use the linked component pages to confirm how common the relevant sensors and connectivity layers are across the database.
- Keep a short note of which policy layers you checked, which device features are actually present on the robot page, and which items still depend on region- or app-level confirmation.
- Finish with Compare KAI (KaiBot), NEO, and Figure 03 so the policy reading sits next to structured product data.
Robot profiles worth opening next
Use the linked product pages as the evidence layer
The linked robot pages are where this article becomes operational. Instead of asking whether the headline is interesting, use the robot entries to inspect the actual mix of sensors, connectivity options, batteries, pricing, release timing, and stated capabilities attached to the products mentioned in the article. That is the easiest way to see whether the warning or opportunity described here affects one product family, a specific design pattern, or an entire buying lane.
KAI (KaiBot)
Kinetix AI · Humanoid · Development
KAI (KaiBot) is tracked on ui44 as a development humanoid robot from Kinetix AI. The database currently records a listed price of "Price TBA", a release date of 2026-04, a 1.7 kWh semi-solid-state battery with reported runtime ranging from 3 hours of continuous dual-arm operation to about 4 hours per charge, an officially undisclosed charging time, and a published stack that includes full-body tactile skin with 18,000 sensing points, reported touch detection down to 0.1 N, and vision and spatial data captured for training through the KAI Halo wearable system, plus Ethernet and Wi-Fi.
For privacy-focused reading, this page matters because it shows the concrete device surface behind the policy discussion. Use it to verify whether KAI (KaiBot) combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Bipedal Humanoid Locomotion, 115 Degrees of Freedom, and 72 Degrees of Freedom Across Both Hands with any cloud, app, or voice layers.
NEO
1X Technologies · Humanoid · Pre-order
NEO is tracked on ui44 as a pre-order humanoid robot from 1X Technologies. The database currently records a listed price of $20,000, a release date of 2025-10-28, about 4 hours of battery life, an undisclosed charging time, and a published stack that includes RGB Cameras, Depth Sensors, and Tactile Skin plus Wi-Fi and Bluetooth.
For privacy-focused reading, this page matters because it shows the concrete device surface behind the policy discussion. Use it to verify whether NEO combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Household Chores, Tidying Up, and Safe Human Interaction with any cloud, app, or voice layers.
Figure 03
Figure AI · Humanoid · Active
Figure 03 is tracked on ui44 as an active humanoid robot from Figure AI. The database currently records a listed price of "Price TBA", a release date of 2025-10-09, about 5 hours of battery life, an undisclosed charging time, and a published stack that includes Stereo Vision, Depth Cameras, and Force Sensors plus Wi-Fi and Bluetooth.
For privacy-focused reading, this page matters because it shows the concrete device surface behind the policy discussion. Use it to verify whether Figure 03 combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Complex Manipulation, Warehouse Work, and Manufacturing Tasks with any cloud, app, or voice layers.
G1
Unitree · Humanoid · Available
G1 is tracked on ui44 as an available humanoid robot from Unitree. The database currently records a listed price of $13,500, a release date of 2024, about 2 hours of battery life, an undisclosed charging time, and a published stack that includes a Depth Camera, 3D LiDAR, and a 4-Microphone Array plus Wi-Fi 6 and Bluetooth 5.2.
For privacy-focused reading, this page matters because it shows the concrete device surface behind the policy discussion. Use it to verify whether G1 combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Bipedal Walking, Object Manipulation, and Dexterous Hands (optional Dex3-1) with any cloud, app, or voice layers.
R1-A7-D
Unitree Robotics · Humanoid · Development
R1-A7-D is tracked on ui44 as a development humanoid robot from Unitree Robotics. The database currently records a listed price of "Price TBA", a release date of 2026-04-30, a battery life of approximately 1.5 hours on battery power (external power is also supported), an officially undisclosed charging time, and a published stack that includes Chassis LiDAR, a binocular camera / depth module, and an optional wrist camera plus Wi-Fi 6 and Bluetooth 5.2.
For privacy-focused reading, this page matters because it shows the concrete device surface behind the policy discussion. Use it to verify whether R1-A7-D combines sensors and connectivity in a way that could change the in-home data footprint, and compare the listed capabilities such as Mobile Dual-Arm Manipulation, 7-DOF Arms, and Wheeled Mobile Base with any cloud, app, or voice layers, including Voice interaction via 4-mic array and dual speakers.
Manufacturer context behind the article
Check whether this is one product story or a broader company pattern
Manufacturer pages add the privacy context that individual product pages cannot show on their own. They help you check whether cameras, microphones, cloud accounts, app controls, and policy assumptions appear across a broader lineup or stay tied to one specific product story.
Kinetix AI
ui44 currently tracks 1 robot from Kinetix AI across 1 category. The company is grouped under China, and the current catalog footprint on ui44 includes KAI (KaiBot).
That wider brand context matters because privacy questions rarely stop at one FAQ page. A manufacturer route helps you see whether the article is centered on one premium model or on a company that has several relevant products and therefore more than one place where the same policy or app assumptions might matter. The category mix here currently points toward Humanoid as the most useful next route if you want to see whether this article reflects a wider pattern inside the brand.
1X Technologies
ui44 currently tracks 2 robots from 1X Technologies across 1 category. The company is grouped under Norway, and the current catalog footprint on ui44 includes NEO, EVE.
Figure AI
ui44 currently tracks 2 robots from Figure AI across 1 category. The company is grouped under USA, and the current catalog footprint on ui44 includes Figure 03, Figure 02.
Unitree
ui44 currently tracks 2 robots from Unitree across 1 category. The company is grouped under China, and the current catalog footprint on ui44 includes H1, G1.
Broaden the scan without leaving the database
Categories, components, and countries add the wider context
Category framing
Category pages are useful when the article touches a buying pattern that shows up across brands. A category route helps you confirm whether the linked products sit in a narrow niche or whether the same question should be tested across a larger field of alternatives.
Humanoid
The Humanoid category page currently groups 80 tracked robots from 57 manufacturers. ui44 describes this lane as: Full-size bipedal humanoid robots designed to work alongside humans. From factory floors to household tasks, these machines represent the cutting edge of robotics.
That makes the category route a practical follow-up when you want to check whether the products linked in this article are typical for the lane or whether they sit at one edge of the market. Useful starting examples currently include NEO, EVE, Mornine M1.
Home Assistants
The Home Assistants category page currently groups 12 tracked robots from 12 manufacturers. ui44 describes this lane as: Arm-based household helpers — laundry folders, kitchen robots, and mobile manipulators that handle physical tasks at home.
That makes the category route a practical follow-up when you want to check whether the products linked in this article are typical for the lane or whether they sit at one edge of the market. Useful starting examples currently include Robody, Futuring 2 (F2), Stretch 3.
Country and ecosystem context
Country pages give extra context when support practices, launch sequencing, regulatory posture, or manufacturer mix matter. They are not a substitute for model-level verification, but they do help you see which ecosystems cluster together and which manufacturers sit in the same regional field when you broaden the search beyond the article headline.
China
The China route currently groups 52 tracked robots from 15 manufacturers in ui44. That gives you a useful regional lens when the article points toward support practices, launch sequencing, or brand clusters that may share similar ecosystem assumptions.
On the current route, manufacturers like AGIBOT, Unitree Robotics, and Roborock make the page a good way to broaden the scan without losing the regional context that often shapes availability, documentation style, and adjacent alternatives.
Norway
The Norway route currently groups 2 tracked robots from 1 manufacturer in ui44. That gives you a useful regional lens when the article points toward support practices, launch sequencing, or brand clusters that may share similar ecosystem assumptions.
On the current route, a manufacturer like 1X Technologies makes the page a good way to broaden the scan without losing the regional context that often shapes availability, documentation style, and adjacent alternatives.
USA
The USA route currently groups 17 tracked robots from 12 manufacturers in ui44. That gives you a useful regional lens when the article points toward support practices, launch sequencing, or brand clusters that may share similar ecosystem assumptions.
On the current route, manufacturers like Boston Dynamics, Figure AI, and Richtech Robotics make the page a good way to broaden the scan without losing the regional context that often shapes availability, documentation style, and adjacent alternatives.
Questions to answer before you move from reading to buying
A follow-up FAQ built from the entities already linked in this article
Frequently Asked Questions
Which page should I open first after reading “Kinetix AI KAI: 115 DoF Home Robot Reality Check”?
Start with KAI (KaiBot). That gives you a concrete product anchor for the article’s main claim. From there, branch into the manufacturer and component pages so you can tell whether the article is describing one specific model, a repeated brand pattern, or a wider technology issue that affects multiple shortlist options.
How do the manufacturer pages change the buying decision?
Manufacturer pages like the one for Kinetix AI help you zoom out from one article and one product. On ui44 they show lineup breadth, category spread, and the neighboring robots tied to the same company. That context is useful when you are deciding whether a risk belongs to a single model, whether it shows up across a brand’s portfolio, and whether you should keep looking at alternatives before committing.
When should I switch from reading to side-by-side comparison?
Move into Compare KAI (KaiBot), NEO, and Figure 03 as soon as you understand the article’s main warning or promise. The article explains what to watch for, but the compare view is where you can check whether price, status, battery life, connectivity, sensors, and category fit still make the robot a good match for your own home and budget.
Where to go next in ui44
Keep the research chain inside the database
If you want to keep going, these follow-on pages give you the cleanest expansion path from article to research session. Open the comparison route first if you are deciding between products today. Open the manufacturer, category, and component routes if you still need to understand the broader pattern behind the claim.
Written by
ui44 Team
Published May 9, 2026