Components atlas

Robot component layers

Start with the layer that will change the shortlist fastest, then open the workbench with the clearest repeated signals.

Sensors and connectivity answer overlap and compatibility. AI and voice explain behavior, autonomy nuance, and interface fit.

Quick routes

Start where the shortlist changes fastest

Research sequence

Use the atlas to orient fast, then leave it once the next click is obvious.

1. Choose the right question

Shared-first handles overlap and compatibility. Signature-heavy handles behavior, autonomy, and interface fit.

2. Confirm the signal quickly

Use the preview rows and counts before opening the denser workbench and the long tail of one-off entries.

3. Leave once the route has done its job

Switch to robot pages or Compare as soon as this route has narrowed the next move.

Fastest shortlist lane

Start with layers that repeat across many robots

These workbenches surface the fastest comparison anchors: perception hardware and network control layers that recur across categories, brands, and buyer journeys.

Shared-first

Sensor

562 tracked

High-volume hardware signals for scanning perception stacks fast - cameras, LiDAR, IMUs, touch, and other physical sensing layers.

Best used for

Fast scan

Hardware-heavy shortlist work where you need to spot repeated perception modules quickly before drilling into edge-case sensors.

Shared: 80 · 30d active: 411 · Tracked: 562

Why this lane matters

80 entries repeat across multiple robots, which keeps the first pass fast. The remaining 482 stay in the long tail for exact-match research.

Connectivity

143 tracked

Protocols, apps, radios, and control surfaces that determine how robots connect to homes, operators, and cloud services.

Best used for

Fast scan

Comparing which radios, apps, and protocols will actually fit an existing smart-home or operator workflow.

Shared: 36 · 30d active: 101 · Tracked: 143

Why this lane matters

36 entries repeat across multiple robots, which keeps the first pass fast. The remaining 107 stay in the long tail for exact-match research.

Deeper research lane

Use signature-heavy layers when labels explain behavior

AI and voice matter when you are testing product personality, autonomy claims, or interface fit. Treat them like signature catalogs, not popularity tables.

Signature-heavy

AI

204 tracked

Reasoning, navigation, autonomy, and compute layers - from embedded inference to full multimodal software stacks.

Best used for

Deep dive

Deep stack research where vendor-specific autonomy, compute, and branded intelligence systems matter more than broad reuse counts.

Shared: 2 · 30d active: 133 · Tracked: 204

Why this lane matters

Only 2 entries repeat across multiple robots, so treat this layer as a signature catalog rather than a fast first pass. The remaining 202 stay in the long tail for exact-match research.

Voice Assistant

51 tracked

Speech interfaces, assistant integrations, and conversational layers that shape command, response, and accessibility behavior.

Best used for

Fast scan

Checking whether a robot will fit a household's preferred assistant, accessibility flow, or hands-free control style.

Shared: 10 · 30d active: 32 · Tracked: 51

Why this lane matters

10 entries repeat across multiple robots, which keeps the first pass fast. The remaining 41 stay in the long tail for exact-match research.

Field guide

How to use the components atlas without getting lost in labels

Use the atlas as a routing tool: orient fast, open the layer that changes the shortlist fastest, then hand off to product-level pages before the route turns into trivia.

The atlas tracks 960 components across sensors, connectivity, AI, and voice - only 128 repeat across multiple robots. Shared-first lanes surface comparison anchors fast; signature-heavy layers preserve branded stacks for deeper inspection once the shortlist is tighter.

Component research usually breaks in two predictable ways: over-scrolling (every label gets the same visual weight, so recurring clues disappear into a wall of names) and over-interpretation (a familiar label like LiDAR or Alexa gets treated as a verdict instead of a clue). This atlas counters both by separating the layers that speed up comparison from the layers that explain behavioral differences. The field guide below shows you how to read each layer, how to triage the long tail, and when to leave the atlas for product-level pages.

Layer map

Open the layer that changes the shortlist fastest

Sensor

How does this robot perceive the real world?

LiDAR · RGB camera · Depth · Radar

Best when: Navigation confidence, obstacle handling, manipulation feedback, and environment awareness.

Connectivity

How will this robot fit the ecosystem around it?

Wi-Fi · Bluetooth · App control · API

Best when: Smart-home compatibility, operator workflow, remote control, and deployment friction.

AI

Why might two similar robots behave differently?

Autonomy stack · Navigation brain · Jetson · LLM

Best when: Autonomy nuance, branded intelligence claims, planning behavior, and compute differences.

Voice Assistant

Will the interface feel natural in daily use?

Alexa · Google Assistant · Siri · Speech control

Best when: Accessibility, assistant fit, hands-free control, and household usability.

Reading the numbers

Robot count separates anchors from signatures

Higher counts signal reusable comparison anchors - parts, protocols, or assistants that appear across many robots. Lower counts signal proprietary or niche layers that need more product context before they become meaningful.

Use it for

Fast scanning - decide which labels deserve the first click when you need to know whether a pattern is shared or isolated.

Do not treat it as

A quality ranking. Repetition tells you what is reusable, not what is superior. A high-count Wi-Fi module might be table stakes; a low-count autonomy stack might define the product.
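
This reading-order heuristic can be sketched in a few lines. Everything here is hypothetical - the component records and the "appears on 2+ robots" threshold are illustrative, not data or an API from the atlas itself:

```python
# Hypothetical component records: (label, robot_count).
components = [
    ("LiDAR", 48),
    ("Wi-Fi", 120),
    ("Branded autonomy stack", 1),
    ("Depth camera", 33),
    ("Proprietary gripper", 1),
]

# Illustrative rule: a label listed on 2+ robots is a comparison anchor
# worth reading first; a one-off label is a signature that needs
# product context before it means anything.
anchors = sorted(
    (c for c in components if c[1] >= 2),
    key=lambda c: c[1],
    reverse=True,
)
signatures = [c for c in components if c[1] < 2]

print([label for label, _ in anchors])     # ['Wi-Fi', 'LiDAR', 'Depth camera']
print([label for label, _ in signatures])  # ['Branded autonomy stack', 'Proprietary gripper']
```

Note that the sort only decides reading order; it says nothing about which robot is better, which is exactly the "browse signal, not quality score" distinction above.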

Freshness signal

30-day activity triages current reading

677 components tie to recently touched robot records - use freshness to prioritize the most current layer before going deeper. This is especially valuable when AI, apps, or assistant integrations are changing quickly.

Use it for

Fast-moving categories where the newest maintained records are more likely to reflect the current marketed stack.

Do not treat it as

A quality score. A recent review means recently checked, not best-in-class.

Real-world use

Map the layer to the risk you are reducing

Buyer shortlist

Sensors first for navigation confidence

For vacuums, mowers, and mobile robots, sensors usually tell you more than marketing language. Open this layer first when the risk is navigation quality, obstacle handling, or real-world perception.

Ecosystem fit

Connectivity when setup friction could kill adoption

A robot can look impressive and still fail if it cannot join the network, app stack, automation flow, or operator workflow around it. Open connectivity first when compatibility and deployment friction matter as much as hardware.

Behavior nuance

AI when similar robots act differently

Two robots can share visible hardware and still diverge once autonomy, mapping, planning, or branded intelligence enters the picture. Open AI when you need to explain behavior or compute claims.

Household usability

Voice when interface comfort matters

Voice matters when the adoption question is how naturally people can use the robot every day, not just what it can do. Open this layer when accessibility, assistant fit, or hands-free control could change daily usability.

Move to robot pages for product context

Component labels tell you what is in the stack. Robot pages tell you whether that stack sits inside a product that fits the category, room, workload, budget, or household environment you care about.

Use Compare for the final call

Once you have real candidates from the component atlas, move into Compare for side-by-side evaluation against price, weight, battery, and deployment fit.

Avoid false confidence

Three ways component pages get misread

1. A label is not a product verdict

Two robots can both mention LiDAR or Alexa and still diverge sharply on navigation quality, maintenance burden, safety behavior, or buyer fit. A single label does not capture sensor placement, software maturity, or integration quality.

2. Novelty is not value

Rare components sometimes signal genuine innovation, but they can also be thin branding or isolated features with little downstream impact. Do not assume rarity equals superiority.

3. Every layer only matters in context

Sensors matter for perception risk, connectivity for ecosystem fit, AI for behavior, and voice for interface comfort. Match the layer to the decision you are actually making, not to whichever label sounds most interesting.

Interpretation help

Use the lower half of the atlas only when the next move is still unclear

The FAQ now sits beside a sticky route rail so large screens keep the exits, framing, and research sequence visible while you skim the deeper guidance.

Frequently Asked Questions

Using the hub
What is the /components route optimized for?

This route is a routing tool, not a reference dump. It answers: which layer should I inspect first? Shared-first lanes surface comparison anchors (sensors, connectivity). Signature-heavy lanes preserve branded stacks (AI, voice) for deeper research once the shortlist is tighter. Workbench cards carry preview rows and counts before you open denser directories.

The route is also optimized for research sequencing. Shared layers help establish overlap and comparison anchors. Signature-heavy layers help explain why apparently similar robots still diverge once autonomy stacks or assistant ecosystems enter the picture. The hub exists to restore a reading order so the first ten minutes of research produce a clear next move.

When should I start with shared-first versus signature-heavy?

If you are asking "what overlaps?" - start shared-first (sensors, connectivity). If you are asking "what explains the difference?" - start signature-heavy (AI, voice). Shared-first reduces orientation risk; signature-heavy reduces interpretation risk. Start broad, then go deep.

The distinction matters because not every research session starts with the same uncertainty. A buyer comparing robot vacuums may care first about navigation (sensors). An integrator evaluating delivery robots may care first about APIs (connectivity). The lane split gives each session a cleaner first move.

Reading the signals
Why collapse the single-robot long tail?

Only 128 of 960 components repeat across multiple robots. Those carry most early comparison value. One-off entries still matter — but after you know which layer deserves deeper inspection. Collapsing them keeps the reusable signal visible instead of burying it under noise from items that only appear once in the entire database.

What does robot count actually tell me?

Robot count is a browse signal, not a quality score. High counts = comparison anchors (shared building blocks). Low counts = differentiators (proprietary stacks, branded features). Use count to choose reading order, not final judgment.

Pair count with the question you are answering. If you are asking "what is common across the shortlist?" look for higher-count anchors. If you are asking "what makes this manufacturer unusual?" lower-count entries can be the right clue.

Why does AI look less shared than sensors or connectivity?

AI labels are often vendor-specific and branded — 204 entries, many behaving like autonomy signatures rather than standardized parts. The AI workbench is framed as a signature catalog, not a shared-parts table. Read it like editorial evidence, not a popularity ranking.

That does not make the AI workbench weaker; it makes it different. AI is often where a manufacturer expresses product distinctiveness — one robot may emphasize on-device compute, another whole-home mapping, another OpenAI compatibility. Those labels still matter for research, but they behave less like a reusable table of shared parts and more like a catalog of signature stacks.

What do the 30-day activity signals mean?

A freshness cue: 677 components tie to recently verified robot records. Use it to triage the most current slice of the catalog — especially useful in fast-moving areas like AI, apps, and assistant integrations where the newest maintained records may answer a question faster than an older untouched slice. Freshness is not the same as quality, but recency helps prioritize attention when you are short on time.

If you are scanning quickly, pair the freshness signal with robot count: repeated components that also show recent activity give you the most current comparison anchors. That combination is faster than reading every label equally.
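
If you were triaging a component list programmatically, that pairing could look like the sketch below. The records and field names are hypothetical - the atlas does not expose this as an API:

```python
from dataclasses import dataclass

@dataclass
class Component:
    label: str
    robot_count: int       # how many robots list this label
    active_last_30d: bool  # tied to a recently maintained robot record

# Hypothetical catalog slice.
catalog = [
    Component("LiDAR", 48, True),
    Component("Alexa", 25, False),
    Component("Wi-Fi", 120, True),
    Component("Custom SLAM stack", 1, True),
]

# Repeated AND recently active: the most current comparison anchors,
# read in descending order of reuse.
current_anchors = sorted(
    (c for c in catalog if c.robot_count >= 2 and c.active_last_30d),
    key=lambda c: c.robot_count,
    reverse=True,
)
print([c.label for c in current_anchors])  # ['Wi-Fi', 'LiDAR']
```

Alexa drops out despite a high count (stale record), and the one-off SLAM stack drops out despite being fresh - the combined filter is what makes the first pass fast.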

Can the same component label mean different real-world outcomes?

Yes. Two robots can both list LiDAR, ROS 2, Alexa, or Wi-Fi and still produce very different results. The label tells you a technology appears in the stack — it does not tell you how well it is integrated, how recent the implementation is, or whether the rest of the robot benefits from it.

Use Components to discover the clue, robot pages to inspect the full product context, and Compare when the decision is concrete. Component names are excellent for pattern-finding, but weak on their own as final verdicts. Treat them as evidence that narrows the search space.

When to leave this route
When should I use components, robots, compare, or the glossary?

Glossary for definitions → Components for pattern-finding → Robot pages for product context → Compare for decisions. If the current page stops changing the decision, leave it.

Each route answers a different class of question. Components tells you whether a technology clue is shared, rare, or current. Robot pages tell you whether that clue sits inside a product that fits your needs. Compare is where those clues get tested side by side against price, battery, and deployment fit.

Ask yourself: am I still narrowing the evidence, or am I already judging products? If you are still narrowing, stay in Components. If you are judging products, move to robot pages or Compare. That distinction keeps research fast and prevents the atlas from becoming a dead-end scroll.

How do the four layers affect real-world buyer fit differently?

Sensors are about perception quality — navigation reliability, obstacle handling, environmental awareness. If that is the risk, open sensors first. Connectivity is about integration and control — smart-home compatibility, remote operation, deployment friction. AI is about behavior and decision-making — why similar robots act differently once autonomy or branded intelligence enters the picture. Voice affects accessibility, family adoption, and interface comfort.

Match the layer to the risk you are reducing. Worried about navigation quality? Start with sensors. Worried about compatibility? Start with connectivity. Worried about autonomy claims? Start with AI. Worried about household usability? Start with voice. The page is organized around that logic so technical browsing stays tied to real-world decisions.

How should I use the shared-first and signature-heavy lanes together?

The fastest way to get value from this route is to match the workbench to the kind of uncertainty you are trying to reduce. Start broad with shared layers (sensors, connectivity) to establish what is actually common across the shortlist. Then go deep with signature layers (AI, voice) to test whether the remaining candidates are still similar once you examine autonomy stacks, assistant ecosystems, or hands-free interaction.

That sequence prevents two common mistakes: reacting to flashy one-off labels too early, and treating the AI layer as a simple popularity contest when it is actually a signature catalog. Start with the anchors, then move to the differentiators.