Open Roles
We move fast. And we want you to move with us! See our list of open positions below.
Ready to get started? Fill out the application form and we'll be in touch.
-
Develop and refine locomotion policies for a next-generation humanoid robot platform, enabling robust walking, stair climbing, and terrain adaptation in unstructured industrial environments. You will work at the intersection of reinforcement learning in simulation and real-world deployment on physical hardware.
RESPONSIBILITIES
• Train locomotion policies in simulation (Isaac Sim / MuJoCo) using reinforcement learning, with a focus on robustness to terrain variation, payload changes, and external disturbances
• Design and implement sim-to-real transfer pipelines, including domain randomization and system identification, to deploy policies on the physical humanoid platform
• Develop and benchmark locomotion primitives (flat walking, incline traversal, step climbing) against performance targets for speed, stability, and energy efficiency
• Instrument hardware tests and build evaluation frameworks to quantify policy performance on the physical robot
• Collaborate with the dynamic motion and perception teams to integrate locomotion with higher-level task planners and agile maneuver capabilities
QUALIFICATIONS
• Graduate student (MSc or PhD) in Robotics, CS, or related field with coursework in reinforcement learning and robot control
• Strong Python proficiency; experience with PyTorch and at least one robotics simulator (Isaac Sim, MuJoCo, PyBullet)
• Familiarity with legged locomotion literature (e.g., policies from Hybrid RL, AMP, or similar frameworks)
• Experience deploying learned policies on physical hardware is a strong plus
• Comfortable working in a fast-paced startup environment with ambiguous problem definitions
-
Push the boundaries of humanoid agility by developing learning-based controllers for highly dynamic motions: running, jumping, climbing over obstacles, and coordinated locomanipulation. You will train policies that enable a humanoid robot to perform parkour-inspired maneuvers and interact with its environment at speed, targeting deployment in industrial settings where navigating cluttered, multi-level structures is critical.
RESPONSIBILITIES
• Train end-to-end reinforcement learning policies for dynamic humanoid behaviors including running, jumping, vaulting, and climbing over obstacles in simulation
• Develop locomanipulation skills that coordinate whole-body motion with arm interactions (e.g., grabbing rails while climbing, bracing against surfaces, opening hatches while balancing)
• Design reward functions and curriculum learning strategies that progressively build from basic dynamic gaits to complex parkour-style motion sequences
• Implement sim-to-real transfer techniques (domain randomization, dynamics augmentation) to bridge the gap for high-impact dynamic motions on physical hardware
• Build evaluation benchmarks for agile locomotion covering success rate, robustness to perturbation, and generalization across obstacle configurations
QUALIFICATIONS
• Graduate student (MSc or PhD) in Robotics, CS, or related field with strong experience in reinforcement learning for locomotion or manipulation
• Deep familiarity with physics simulators (Isaac Sim, MuJoCo) and training frameworks for contact-rich, dynamic tasks
• Experience with whole-body control, motion imitation learning (e.g., AMP, DeepMimic), or agile locomotion research
• Strong Python and PyTorch proficiency; C++ experience for real-time deployment is a plus
• Comfort with high-risk hardware experiments and iterating rapidly between simulation and physical testing
-
Build and optimize a multi-camera visual SLAM pipeline for a new humanoid robot platform equipped with several onboard cameras. The goal is to enable reliable real-time localization and dense mapping in GPS-denied industrial facilities such as refineries, offshore platforms, and processing plants.
RESPONSIBILITIES
• Develop and integrate a multi-camera visual SLAM system that fuses inputs from the robot's camera array for robust 6-DOF pose estimation
• Implement loop closure, relocalization, and map management strategies tailored to repetitive industrial environments (pipes, corridors, symmetric structures)
• Optimize the pipeline for real-time performance on the robot's onboard compute, profiling and reducing latency across the perception stack
• Build evaluation tools and benchmark datasets using data collected on the physical platform in lab and field environments
• Collaborate with the navigation and locomotion teams to feed accurate localization into path planning and gait adaptation modules
QUALIFICATIONS
• Graduate student (MSc or PhD) in Computer Vision, Robotics, or related field with strong foundations in multi-view geometry and SLAM
• Experience with visual or visual-inertial SLAM systems (ORB-SLAM, VINS-Mono, Kimera, or similar)
• Proficiency in C++ and Python; experience with ROS2 and OpenCV
• Familiarity with camera calibration, multi-camera extrinsic estimation, and sensor fusion
• Experience working with real sensor data on physical robot platforms is a strong plus
-
Develop semantic perception capabilities that enable a humanoid robot to understand and reason about its surroundings in complex industrial environments. This role focuses on turning raw multi-camera imagery into actionable scene representations (traversability maps, hazard detection, and object-level understanding) that feed directly into the robot's autonomous navigation stack.
RESPONSIBILITIES
• Build and fine-tune semantic segmentation and object detection models for industrial scene understanding (walkable surfaces, obstacles, stairs, hazards, equipment)
• Develop a multi-camera fusion pipeline that combines per-camera semantic outputs into a unified 3D semantic map around the robot
• Design traversability estimation modules that classify terrain and predict safe footholds using both geometric and semantic cues
• Create data collection and annotation workflows for industrial environments, including synthetic data generation from simulation
• Integrate semantic outputs with the SLAM and path planning systems to enable context-aware autonomous navigation
QUALIFICATIONS
• Graduate student (MSc or PhD) in Computer Vision, Machine Learning, or Robotics with experience in semantic segmentation or 3D scene understanding
• Strong proficiency with PyTorch and modern vision architectures (transformers, foundation models for segmentation)
• Experience with 3D point cloud processing, depth estimation, or multi-view 3D reconstruction
• Familiarity with ROS2 and deploying perception models on edge hardware (NVIDIA Jetson or similar)
• Interest in bridging perception and planning; prior exposure to navigation stacks is a plus