Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Open X-Embodiment Collaboration
Authors listed in alphabetical order.

For technical questions, please file a bug at the GitHub repo. For any other inquiries, please email

Contributing datasets: if you are interested in contributing datasets to the Open X-Embodiment dataset, please fill out the Dataset Enrollment Form.

RT-2-X API Interest Form

  • Abhishek Padalkar
  • Acorn Pooley
  • Ajay Mandlekar
  • Ajinkya Jain
  • Albert Tung
  • Alex Bewley
  • Alex Herzog

  • Alex Irpan
  • Alexander Khazatsky
  • Anant Rai
  • Anikait Singh
  • Animesh Garg
  • Anthony Brohan
  • Antonin Raffin

  • Ayzaan Wahid
  • Ben Burgess-Limerick
  • Beomjoon Kim
  • Bernhard Schölkopf
  • Brian Ichter
  • Cewu Lu
  • Charles Xu

  • Chelsea Finn
  • Chenfeng Xu
  • Cheng Chi
  • Chenguang Huang
  • Christine Chan
  • Chuer Pan
  • Chuyuan Fu

  • Coline Devin
  • Danny Driess
  • Deepak Pathak
  • Dhruv Shah
  • Dieter Büchler
  • Dmitry Kalashnikov
  • Dorsa Sadigh

  • Edward Johns
  • Federico Ceola
  • Fei Xia
  • Freek Stulp
  • Gaoyue Zhou
  • Gaurav S. Sukhatme
  • Gautam Salhotra

  • Ge Yan
  • Giulio Schiavi
  • Gregory Kahn
  • Hao Su
  • Hao-Shu Fang
  • Haochen Shi
  • Heni Ben Amor
  • Henrik I Christensen

  • Hiroki Furuta
  • Homer Walke
  • Hongjie Fang
  • Igor Mordatch
  • Ilija Radosavovic
  • Isabel Leal
  • Jacky Liang

  • Jad Abou-Chakra
  • Jaehyung Kim
  • Jan Peters
  • Jan Schneider
  • Jasmine Hsu
  • Jeannette Bohg
  • Jeffrey Bingham

  • Jiajun Wu
  • Jialin Wu
  • Jianlan Luo
  • Jiayuan Gu
  • Jie Tan
  • Jihoon Oh
  • Jitendra Malik
  • Jonathan Booher

  • Jonathan Tompson
  • Jonathan Yang
  • Joseph J. Lim
  • João Silvério
  • Junhyek Han
  • Kanishka Rao
  • Karl Pertsch

  • Karol Hausman
  • Keegan Go
  • Keerthana Gopalakrishnan
  • Ken Goldberg
  • Kendra Byrne
  • Kenneth Oslund
  • Kento Kawaharazuka

  • Kevin Zhang
  • Krishan Rana
  • Krishnan Srinivasan
  • Lawrence Yunliang Chen
  • Lerrel Pinto
  • Li Fei-Fei

  • Liam Tan
  • Lionel Ott
  • Lisa Lee
  • Masayoshi Tomizuka
  • Max Spero
  • Maximilian Du
  • Michael Ahn
  • Mingtong Zhang

  • Mingyu Ding
  • Mohan Kumar Srirama
  • Mohit Sharma
  • Moo Jin Kim
  • Naoaki Kanazawa
  • Nicklas Hansen
  • Nicolas Heess

  • Nikhil J Joshi
  • Niko Suenderhauf
  • Norman Di Palo
  • Nur Muhammad Mahi Shafiullah
  • Oier Mees
  • Oliver Kroemer

  • Pannag R Sanketi
  • Paul Wohlhart
  • Peng Xu
  • Pierre Sermanet
  • Priya Sundaresan
  • Quan Vuong
  • Rafael Rafailov

  • Ran Tian
  • Ria Doshi
  • Roberto Martín-Martín
  • Russell Mendonca
  • Rutav Shah
  • Ryan Hoque
  • Ryan Julian

  • Samuel Bustamante
  • Sean Kirmani
  • Sergey Levine
  • Sherry Moore
  • Shikhar Bahl
  • Shivin Dass
  • Shubham Sonawani

  • Shuran Song
  • Sichun Xu
  • Siddhant Haldar
  • Simeon Adebola
  • Simon Guist
  • Soroush Nasiriany
  • Stefan Schaal

  • Stefan Welker
  • Stephen Tian
  • Sudeep Dasari
  • Suneel Belkhale
  • Takayuki Osa
  • Tatsuya Harada
  • Tatsuya Matsushima

  • Ted Xiao
  • Tianhe Yu
  • Tianli Ding
  • Todor Davchev
  • Tony Z. Zhao
  • Travis Armstrong
  • Trevor Darrell

  • Vidhi Jain
  • Vincent Vanhoucke
  • Wei Zhan
  • Wenxuan Zhou
  • Wolfram Burgard
  • Xi Chen
  • Xiaolong Wang

  • Xinghao Zhu
  • Xuanlin Li
  • Yansong Pang
  • Yao Lu
  • Yevgen Chebotar
  • Yifan Zhou
  • Yifeng Zhu
  • Ying Xu

  • Yixuan Wang
  • Yonatan Bisk
  • Yoonyoung Cho
  • Youngwoon Lee
  • Yuchen Cui
  • Yueh-Hua Wu
  • Yujin Tang

  • Yuke Zhu
  • Yunzhu Li
  • Yusuke Iwasawa
  • Yutaka Matsuo
  • Zhuo Xu
  • Zichen Jeff Cui


Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains from NLP to computer vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a “generalist” X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots, collected through a collaboration between 21 institutions, demonstrating 527 skills (160,266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms.

Example tasks: “move red pepper to tray”, “pick ice cream”, “move red pepper to A”.

RT-2-X (55B): one of the biggest models to date performing unseen tasks in academic labs.

Dataset Overview

We introduce the Open X-Embodiment Dataset, the largest open-source real robot dataset to date. It contains 1M+ real robot trajectories spanning 22 robot embodiments, from single robot arms to bi-manual robots and quadrupeds.

The dataset was constructed by pooling 60 existing robot datasets from 34 robotic research labs around the world. Our analysis shows that the number of visually distinct scenes is well-distributed across different robot embodiments and that the dataset includes a wide range of common behaviors and household objects. For a detailed listing of all included datasets, see this Google Sheet.
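Training on the pooled data amounts to sampling trajectories from a weighted mixture over the constituent datasets. The sketch below illustrates the idea; the dataset names and weights here are placeholders for illustration, not the actual mixture used for training:

```python
import random

# Hypothetical per-dataset sampling weights (placeholders, not the real mixture).
MIXTURE = {
    "dataset_a": 0.5,
    "dataset_b": 0.3,
    "dataset_c": 0.2,
}

def sample_dataset(rng: random.Random) -> str:
    """Pick which constituent dataset the next trajectory comes from."""
    names = list(MIXTURE)
    weights = [MIXTURE[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Draw 1000 samples and tally how often each dataset is chosen.
rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(1000):
    counts[sample_dataset(rng)] += 1
```

Over many draws, the empirical frequencies track the mixture weights, so heavily weighted datasets contribute proportionally more trajectories to each training batch.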

Model Overview

We train two models on the robotics data mixture: (1) RT-1, an efficient Transformer-based architecture designed for robotic control, and (2) RT-2, a large vision-language model co-fine-tuned to output robot actions as natural language tokens.

Both models output robot actions represented with respect to the robot gripper frame. The robot action is a 7-dimensional vector consisting of x, y, z, roll, pitch, yaw, and gripper opening, or the rates of these quantities. For datasets where some of these dimensions are not exercised by the robot, we set the corresponding dimensions to zero during training.
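This shared action convention can be sketched as follows. This is a minimal, hypothetical encoding for illustration only; the field names and the `encode_action` helper are assumptions, not the released training code:

```python
import numpy as np

# Canonical 7-D action layout shared across embodiments:
# (x, y, z, roll, pitch, yaw, gripper), expressed in the gripper frame.
ACTION_DIMS = ("x", "y", "z", "roll", "pitch", "yaw", "gripper")

def encode_action(partial_action: dict) -> np.ndarray:
    """Pack a robot-specific action dict into the shared 7-D vector.

    Dimensions the robot does not exercise are set to zero, mirroring
    the zero-padding described above.
    """
    return np.array(
        [partial_action.get(dim, 0.0) for dim in ACTION_DIMS],
        dtype=np.float32,
    )

# A planar robot that only controls x, y, and the gripper: the unused
# z and rotation dimensions are zero-filled.
action = encode_action({"x": 0.05, "y": -0.02, "gripper": 1.0})
```

Because every dataset is mapped into this one vector layout, a single policy head can be trained across embodiments without per-robot output spaces.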

We refer to the RT-1 model trained using the robotic data mixture as RT-1-X, and the RT-2 model trained using the robotic data mixture as RT-2-X.


RT-1-X evaluation on in-distribution skills

At UC Berkeley (RAIL)

At University of Freiburg (AiS)


At UC Berkeley (AUTOLab)

At Stanford (IRIS)


RT-1-X performing diverse tasks in 6 academic labs
RT-1-X models outperform both RT-1 and the Original Methods trained on individual datasets by 50% in the small-data domain

Original Method refers to the model developed by the creators of a dataset and trained only on that dataset. The Original Method constitutes a reasonable baseline, since each such model can be expected to have been optimized to work well with its associated data. The lab logos indicate the physical location of the real-robot evaluation, and the robot pictures indicate the embodiment used for the evaluation.

RT-2-X evaluation on emergent skills

move apple near cloth

move apple on cloth

move apple between can & orange

RT-2-X modulates low-level behaviors based on small changes in prepositions (see “on” vs “near” above) and demonstrates understanding of spatial relationships between objects
RT-2-X outperforms RT-2 by 3x in emergent skill evaluations

RT-2-X demonstrates skills that the RT-2 model was not capable of previously, including better spatial understanding in both the absolute and relative sense. Small changes in preposition in the task string can also modulate low-level robot behavior. The skills used for evaluation are illustrated in the figure above.


If you're using the Open X-Embodiment dataset and RT-X in your research, please cite our paper. If you're specifically using datasets that were contributed to the joint effort, please cite those as well. For convenience, we provide a dataset spreadsheet with a citation for each dataset.


We would like to thank John Guilyard for the amazing animations used for this website. The authors would like to acknowledge Yuheng Kuang, Ning Hou, Utsav Malla, Sarah Nguyen, Rochelle Dela Cruz, Justice Carbajal, Brianna Zitkovich, Emily Perez, Elio Prado, Jodilyn Peralta, Tran Pham, Deeksha Manjunath, Samuel Wan, Jaspiar Singh, and the greater Google DeepMind team for their feedback and contributions. The authors would like to thank Sanah Choudhry, Michael Griessel, and Jon Small for their legal advice.

The website template was borrowed from Jon Barron.