An independent bilingual guide to ShengShu Technology's Motubrain launch, benchmark claims, architecture, and current access status.
Independent resource. Motubrain.org is not affiliated with ShengShu Technology.
Key figures reported by official or benchmark sources as of April 29, 2026.
WorldArena EWM Score reported by ShengShu for Motubrain
RoboTwin 2.0 clean and randomized scores reported on the official page
Public launch date in the ShengShu PRNewswire announcement
Motubrain is presented by ShengShu Technology as a World Action Model: a unified embodied AI model that connects what a robot sees with the actions it should take.
The official framing moves beyond video-only world modeling by joining perception, prediction, and robot action in one system.
The launch describes robots acting across homes, industrial spaces, and commercial environments rather than a consumer chatbot or memory app.
As of April 29, 2026, official pages explain the model and its benchmark claims, but do not show a public self-serve API or download.
Searches may use Motubrain or MotuBrain. This guide treats both as the same ShengShu World Action Model unless a source says otherwise.
The core idea is to learn video, language, and action together so a robot can reason about what changes next and what to do next.
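As an illustration of that unified framing, the loop can be thought of as one function from an observation plus a language instruction to a predicted next state and an action. The sketch below is hypothetical: ShengShu has published no API, and every class and method name here is invented to show the interface shape only.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    # Placeholder for what the robot sees; a real system would hold video frames.
    scene: str


@dataclass
class Prediction:
    next_state: str  # what the model expects to change next
    action: str      # what the robot should do next


class WorldActionModel:
    """Hypothetical interface: perception, prediction, and action in one model,
    rather than a video-only world model plus a separate controller."""

    def step(self, obs: Observation, instruction: str) -> Prediction:
        # A real model would run learned video/language/action networks here;
        # this stub only echoes its inputs to make the single-call shape concrete.
        return Prediction(
            next_state=f"expected change to '{obs.scene}' after: {instruction}",
            action=f"motor plan for: {instruction}",
        )


model = WorldActionModel()
pred = model.step(Observation(scene="mug on table"), "pick up the mug")
print(pred.action)
```

The point of the sketch is the single `step` call: prediction ("what changes next") and action ("what to do next") come out of one model, which is the contrast the launch draws with video-only world modeling.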
Follow the source trail before treating any model score as independently verified.
Use ShengShu's Motubrain page for the stated capabilities, partner context, and benchmark figures.
WorldArena explains EWM Score, while RoboTwin 2.0 documents the dual-arm manipulation benchmark context.
The current public materials explain Motubrain, but this site found no official self-serve API, downloadable model, or public demo.
The launch frames Motubrain as a shift from task-specific robot systems toward scalable embodied intelligence.
Official materials say that training on a broad variety of tasks improves multi-task performance, rather than requiring isolated skill training for each behavior.
Motubrain is positioned as cross-embodiment, designed to adapt across robot types rather than being tied to one hardware platform.
The model is described as learning full task sequences directly, including complex multi-step work beyond short atomic actions.
WorldArena evaluates embodied world models across perceptual and functional utility, including action-planning roles.
RoboTwin 2.0 is a large-scale dual-arm manipulation benchmark with 50 tasks and domain randomization.
Benchmark numbers on this site are reported as source claims, not independent certification by Motubrain.org.