SO-ARM100: The Open-Source Robot Arm Transforming AI Robotics

By Bright Coding

Building sophisticated robot arms used to require million-dollar budgets and closed ecosystems. Not anymore. The SO-ARM100 robot arm is democratizing robotics, putting powerful, AI-ready manipulators within reach of individual developers, researchers, and hobbyists. This comprehensive guide reveals everything you need to know about this game-changing platform.

Whether you're training imitation learning models, building teleoperation systems, or teaching the next generation of roboticists, the SO-ARM100 delivers professional capabilities at a fraction of traditional costs. You'll learn how to source parts, assemble your arm, integrate with Hugging Face's LeRobot framework, and deploy real AI applications—all with a vibrant community backing you every step.

Ready to build the future? Let's dive into the SO-ARM100 ecosystem.

What is the SO-ARM100 Robot Arm?

The SO-ARM100 is a standardized, open-source robot arm designed specifically for end-to-end AI development. Originally created by TheRobotStudio in collaboration with Hugging Face, this platform represents a fundamental shift in how we approach robotics research and education.

At its core, the SO-ARM100 is more than just hardware—it's a complete ecosystem for learning, experimentation, and deployment. The design prioritizes accessibility without sacrificing performance, using off-the-shelf components and 3D-printed parts to keep costs low while maintaining precision.

The latest iteration, SO-101, addresses key pain points from the original SO-100 design. Improved wiring simplifies assembly dramatically. You no longer need to remove gears to access connections, reducing build time by hours. Updated motors for the leader arm provide smoother teleoperation, while the follower arm maintains the robust STS3215 servo lineup with optimized gear ratios for different joints.

What makes this platform truly revolutionary is its seamless integration with the 🤗 LeRobot library. This isn't an afterthought—every design decision considers how the hardware will interact with modern AI frameworks. The result? A robot arm that works out-of-the-box with imitation learning, reinforcement learning, and teleoperation pipelines that previously required custom engineering.

The community aspect cannot be overstated. With an active Discord server and open documentation on Hugging Face, you're never building alone. This is open-source hardware done right: transparent, collaborative, and focused on solving real problems for AI developers.

Key Features That Set SO-ARM100 Apart

Open-Source Design Philosophy Every component, from CAD files to firmware, is publicly available. You can modify, remix, and commercialize your improvements. This transparency accelerates innovation—when one developer solves a problem, everyone benefits.

Dual-Arm Teleoperation Ready The platform is engineered for leader-follower setups from day one. The leader arm uses lighter, more responsive STS3215 servos with specific gear ratios (1/191 and 1/147) for intuitive human control. The follower arm employs higher-torque configurations (1/345) to handle payload tasks reliably.
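The tradeoff behind those numbers is simple gearbox math: the 1/345 follower joints trade speed for torque, while the leader's lighter 1/147 and 1/191 reductions stay easy to backdrive by hand. A quick sketch of the scaling (normalized units, not official Feetech specs):

```python
# Illustrative effect of the STS3215 gear reductions used on the arms.
# Motor-side torque and speed are normalized to 1.0; real values depend
# on the servo's internal motor and are not official Feetech specs.
def output_characteristics(reduction: int) -> tuple[float, float]:
    """Return (relative_torque, relative_speed) at the output shaft.

    Torque scales up with the gear reduction; speed scales down by the
    same factor (ignoring gearbox losses).
    """
    return float(reduction), 1.0 / reduction

for ratio in (147, 191, 345):
    torque, speed = output_characteristics(ratio)
    print(f"1/{ratio}: torque x{torque:.0f}, speed x{speed:.4f}")
```

The higher the reduction, the harder the joint can push but the slower it moves, which is exactly why the follower gets 1/345 and the human-driven leader does not.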

Global Parts Sourcing No more hunting for obscure components. The bill of materials includes verified purchase links for the US, EU, China, and Japan. Whether you need motors from Alibaba, control boards from Amazon, or cables from Taobao, the documentation points you to reliable suppliers with exact part numbers.

LeRobot Integration Native support for Hugging Face's LeRobot library means you can collect demonstration data, train policies, and deploy models using standardized Python APIs. The hardware abstraction layer handles motor communication, safety limits, and coordinate transformations automatically.

Modular Camera Mounts Vision is critical for AI robotics. Optional hardware designs include multiple camera mounting solutions—overhead, wrist-mounted, and third-person views. These integrate directly into the LeRobot data collection pipeline, synchronizing video streams with motor commands.

Cost-Effective Precision The entire two-arm teleoperation setup costs under $500 in parts. Compare this to commercial systems costing $10,000+ with similar reach and payload capacity. The secret lies in clever design: using commodity servos, 3D printing structural components, and open-source motor control boards.

Active Community Development With contributors from TheRobotStudio, Hugging Face, and independent developers worldwide, the platform evolves rapidly. New end-effectors, improved calibration routines, and advanced control algorithms appear weekly in the Discord and GitHub discussions.

Real-World Use Cases That Shine

1. AI Research and Imitation Learning

Train robotic policies from human demonstrations without writing custom hardware drivers. Researchers at institutions worldwide use SO-ARM100 to validate imitation learning algorithms. The standardized data format means you can share datasets across labs, accelerating comparative studies. Collect 100 demonstrations in an afternoon, train a policy overnight, and deploy the next morning—a workflow that previously took weeks of engineering.

2. Industrial Task Automation Prototyping

Manufacturing engineers prototype assembly sequences before investing in industrial robots. The SO-ARM100's 300mm reach and 200g payload capacity handle small-part manipulation, screw driving, and pick-and-place operations. Test your process logic with AI vision and teleoperation, then scale to UR5e or Franka arms with minimal code changes thanks to LeRobot's unified API.

3. Remote Teleoperation for Hazardous Environments

Connect the leader arm to a VR headset and control the follower arm in dangerous locations. Nuclear facility inspectors use modified SO-ARM100 arms to manipulate tools around radioactive materials. The low cost means redundant systems are affordable, and the open design allows for easy decontamination and part replacement.

4. STEM Education and Robotics Curriculum

University professors build entire courses around the platform. Students assemble the arm in week one, collect demonstration data in week two, and train their first policies by week three. The tangible progress keeps learners engaged, while the low cost enables one-arm-per-student ratios impossible with industrial hardware.

5. Home Automation and Assistive Technology

Makers integrate SO-ARM100 arms into smart home setups—loading dishwashers, sorting laundry, or feeding pets. The active community has created specialized grippers for each task. For assistive technology, voice-controlled teleoperation helps individuals with mobility impairments interact with their environment independently.

Step-by-Step: Build and Setup Your SO-101 Arm

Phase 1: Source Your Components

For a complete teleoperation setup (leader + follower), you'll need:

# Motor quantities for TWO arms (6 servos each, 12 total)
7x STS3215 Servo 7.4V, 1/345 gear (C001)  # All follower joints + one leader joint
2x STS3215 Servo 7.4V, 1/191 gear (C044)  # Leader wrist rotation
3x STS3215 Servo 7.4V, 1/147 gear (C046)  # Leader wrist flexion
2x Motor Control Board                    # One per arm
2x USB-C Cable                            # One per control board (data and power)

Budget approximately $450-500 for all components, depending on shipping and regional availability. The BOM table in the repository provides exact links for your region—use them to avoid compatibility issues.

Phase 2: 3D Print Structural Parts

Download STL files from the repository's 3d_prints directory. Print settings matter for rigidity:

  • Material: PETG or ABS (avoid PLA for longevity)
  • Layer height: 0.2mm for structural parts
  • Infill: 40-50% with 4-5 perimeters
  • Supports: Required for overhangs >45°

Total print time: ~40 hours for both arms. Consider using a print service if you lack a large-format printer—many vendors offer SO-101 specific kits.

Phase 3: Assembly

Follow the official assembly guide on Hugging Face. Key improvements in SO-101:

  1. Motor wiring routes through channels—no disassembly required
  2. Snap-fit connectors reduce screw count by 40%
  3. Color-coded cables prevent wiring errors

Pro tip: Assemble the leader arm first. Its simpler gearing makes it more forgiving for beginners. Test each joint manually before connecting power.

Phase 4: Software Installation

# Create a Python environment
conda create -n lerobot python=3.10
conda activate lerobot

# Install LeRobot with SO-ARM100 support (quotes protect the extras in zsh)
pip install "lerobot[so100]"

# Verify installation
python -c "from lerobot.common.robot_devices.robots.so100 import SO100Robot; print('SO-ARM100 ready')"

Phase 5: Calibration and Testing

# Connect leader arm (typically /dev/ttyACM0)
python -m lerobot.scripts.control_robot --robot.type=so100 --robot.port=/dev/ttyACM0 --control.mode=calibrate

# Connect follower arm (typically /dev/ttyACM1)
python -m lerobot.scripts.control_robot --robot.type=so100 --robot.port=/dev/ttyACM1 --control.mode=calibrate

# Test teleoperation
python -m lerobot.scripts.control_robot --robot.type=so100 --robot.port_leader=/dev/ttyACM0 --robot.port_follower=/dev/ttyACM1 --control.mode=teleop

Calibration is critical—run it whenever you disassemble motors or change payload. The script automatically detects joint limits and sets safe operating boundaries.
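Under the hood, calibration pins down the mapping between each servo's raw position counts and joint degrees, including a per-joint zero offset. A minimal standalone sketch of that conversion, assuming the STS3215's 12-bit range (0-4095 counts over 360°, a common figure for this servo family that calibration refines per joint):

```python
# Sketch of the raw-count <-> degrees mapping that calibration establishes.
# Assumes a 12-bit encoder (0-4095 counts over 360 degrees) and a
# per-joint zero offset discovered during the calibration routine.
COUNTS_PER_REV = 4096
DEGREES_PER_COUNT = 360.0 / COUNTS_PER_REV

def counts_to_degrees(raw: int, zero_offset: int = 2048) -> float:
    """Convert a raw servo count to degrees relative to the joint's zero."""
    return (raw - zero_offset) * DEGREES_PER_COUNT

def degrees_to_counts(deg: float, zero_offset: int = 2048) -> int:
    """Convert joint degrees back to the nearest raw servo count."""
    return round(deg / DEGREES_PER_COUNT) + zero_offset

print(counts_to_degrees(2048))   # 0.0 at the calibrated zero
print(counts_to_degrees(3072))   # 90.0
print(degrees_to_counts(-90.0))  # 1024
```

If the offsets drift (say, after swapping a gear), the same physical pose maps to different counts, which is why recalibration after disassembly is non-negotiable.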

Real Code Examples from the SO-ARM100 Ecosystem

Example 1: Basic Robot Initialization

This snippet shows how to instantiate and control a single SO-101 arm using the LeRobot library:

from lerobot.common.robot_devices.robots.so100 import SO100Robot
from lerobot.common.robot_devices.motors.feetech import FeetechMotorsBus

# Initialize motor communication bus
# The SO-101 uses a 1Mbps baud rate for responsive control
motor_bus = FeetechMotorsBus(
    port="/dev/ttyACM0",  # Adjust for your system (COM3 on Windows)
    motors={
        # Map joint names to servo IDs and models
        # The gear ratios affect effective torque and speed
        "shoulder_pan": (1, "sts3215"),      # 1/345 gear, high torque
        "shoulder_lift": (2, "sts3215"),     # 1/345 gear, high torque
        "elbow_flex": (3, "sts3215"),        # 1/345 gear, high torque
        "wrist_flex": (4, "sts3215"),        # 1/147 gear for leader, 1/345 for follower
        "wrist_roll": (5, "sts3215"),        # 1/191 gear for leader, 1/345 for follower
        "gripper": (6, "sts3215"),           # 1/345 gear for precision grasping
    }
)

# Create robot instance with calibration data
robot = SO100Robot(
    motor_bus=motor_bus,
    config_path="so101_config.yaml",  # Contains joint limits and offsets
    calibration_dir="./calibrations"
)

# Connect and enable torque
robot.connect()
robot.enable_torque()  # Must be called before any movement

# Move to home position (all joints centered)
robot.move_to_home(speed=50)  # Speed in degrees per second

# Read current joint positions
joint_angles = robot.read_positions()
print(f"Current joint angles: {joint_angles}")

# Disconnect safely
robot.disable_torque()
robot.disconnect()

Key points: The motor mapping uses specific servo IDs that must match your wiring. The config file stores joint limits to prevent dangerous movements. Always disable torque before disconnecting to prevent servos from locking.
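Those joint limits amount to a clamp applied before any goal position reaches the motor bus. A dependency-free sketch of the idea (the limit values below are made up for illustration, not the real SO-101 configuration):

```python
# Standalone sketch of the safety clamp a joint-limit config implies.
# Limit values are illustrative placeholders, not the real SO-101 config.
JOINT_LIMITS = {
    "shoulder_pan": (-110.0, 110.0),
    "elbow_flex": (-100.0, 90.0),
    "gripper": (0.0, 100.0),
}

def clamp_goal(joint: str, target_deg: float) -> float:
    """Clamp a goal position (degrees) to the joint's configured limits."""
    low, high = JOINT_LIMITS[joint]
    return max(low, min(high, target_deg))

print(clamp_goal("elbow_flex", 140.0))  # 90.0, clipped to the upper limit
print(clamp_goal("gripper", -5.0))      # 0.0, can't close past zero
```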

Example 2: Teleoperation Data Collection

This script demonstrates how to record demonstration data for imitation learning:

import time

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.robot_devices.robots.so100 import SO100Robot

# Initialize leader and follower arms
leader = SO100Robot(port="/dev/ttyACM0", config_path="leader_config.yaml")
follower = SO100Robot(port="/dev/ttyACM1", config_path="follower_config.yaml")

# Connect both arms
leader.connect()
follower.connect()
leader.enable_torque()
follower.enable_torque()

# Create dataset for storing demonstrations
dataset = LeRobotDataset.create(
    repo_id="your_username/so101_pick_place_task",
    fps=30,  # Record at 30Hz for smooth trajectories
    robot_type="so100",
    use_videos=True  # Enable video recording if cameras attached
)

print("Starting data collection. Move leader arm to record.")
print("Press 'r' to start/stop recording, 'q' to quit.")

recording = False
while True:
    # Read leader position (human demonstration)
    leader_pos = leader.read_positions()
    
    # Mirror to follower in real-time for visual feedback
    follower.set_goal_positions(leader_pos, speed=100)
    
    if recording:
        # Capture timestamped data point
        dataset.add_frame(
            observation={
                "leader_position": leader_pos,
                "follower_position": follower.read_positions(),
                "timestamp": time.time()
            },
            action=leader_pos  # The action is the target position
        )
    
    # Check for keyboard input (check_keyboard is a user-supplied helper,
    # e.g. built on the `keyboard` package; omitted here for brevity)
    key = check_keyboard()
    if key == 'r':
        recording = not recording
        print(f"Recording: {'ON' if recording else 'OFF'}")
    elif key == 'q':
        break

# Save dataset to disk
dataset.consolidate()
print(f"Saved {dataset.num_frames} frames to {dataset.repo_id}")

# Cleanup
leader.disable_torque()
follower.disable_torque()
leader.disconnect()
follower.disconnect()

Why this matters: This creates a LeRobot-compatible dataset ready for training. The follower arm mirrors movements in real-time, giving immediate visual feedback. The 30Hz recording rate captures human nuances essential for imitation learning.
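Holding a steady 30 Hz matters more than raw speed: uneven frame spacing distorts the action timing a policy later learns. One common way to pace such a loop, shown here as a standalone sketch independent of the LeRobot API:

```python
import time

def run_at_fixed_rate(step, fps: float = 30.0, n_steps: int = 5) -> list[float]:
    """Call step() at a fixed rate, sleeping off whatever time step() used.

    Returns the timestamp of each iteration so jitter can be inspected.
    """
    period = 1.0 / fps
    stamps = []
    next_tick = time.monotonic()
    for _ in range(n_steps):
        stamps.append(time.monotonic())
        step()                                      # read arms, log a frame
        next_tick += period
        time.sleep(max(0.0, next_tick - time.monotonic()))
    return stamps

stamps = run_at_fixed_rate(lambda: None)
intervals = [b - a for a, b in zip(stamps, stamps[1:])]
print([f"{dt * 1000:.1f} ms" for dt in intervals])  # roughly 33 ms apart
```

Scheduling against an absolute `next_tick` rather than sleeping a fixed amount each loop prevents per-iteration work from accumulating into drift.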

Example 3: Training a Policy on Collected Data

After collecting demonstrations, train a neural network policy:

from lerobot.common.policies.act.modeling_act import ACTPolicy
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.trainers.trainer import Trainer

# Load your demonstration dataset
dataset = LeRobotDataset.load("your_username/so101_pick_place_task")

# Configure ACT (Action Chunking with Transformers) policy
# This architecture excels at learning from human demonstrations
policy = ACTPolicy.from_pretrained(
    pretrained_model_name_or_path=None,  # Train from scratch
    config_path="lerobot/config/policy/act_so101.yaml"
)

# The config specifies:
# - 6 action dimensions (one per joint)
# - 8 observation dimensions (positions + velocities)
# - Transformer architecture with 6 layers
# - Chunk size of 100 for smooth execution

# Initialize trainer
trainer = Trainer(
    policy=policy,
    dataset=dataset,
    train_batch_size=64,
    num_epochs=1000,
    save_freq=100,
    output_dir="./so101_policy/"
)

# Train the policy
trainer.train()

# Save the trained model
policy.save_pretrained("./so101_pick_place_policy")
print("Training complete! Policy saved to ./so101_pick_place_policy")

Training details: On a modern GPU, this takes 2-4 hours for 1000 epochs. The ACT policy learns to predict sequences of actions, making it robust to temporal variations in human demonstrations. The saved policy can be deployed back to the robot for autonomous execution.
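Action chunking is easy to picture in miniature: the policy predicts a whole sequence of future actions, and the controller executes only the first before re-querying. A toy, dependency-free sketch of that receding-horizon loop (the "policy" here is a stand-in that returns a straight-line chunk, not the real ACT model):

```python
# Toy illustration of receding-horizon execution with action chunks.
def fake_policy(state: float, goal: float = 10.0, chunk: int = 5) -> list[float]:
    """Stand-in policy: predict a chunk of actions stepping toward the goal."""
    step = (goal - state) / 10.0
    return [state + step * (i + 1) for i in range(chunk)]

state = 0.0
trajectory = [state]
for _ in range(6):                  # re-query the policy each control cycle
    actions = fake_policy(state)    # predict a chunk of future actions
    state = actions[0]              # execute only the first, then re-plan
    trajectory.append(state)

print([round(s, 3) for s in trajectory])
```

Because the chunk is re-predicted every cycle, small execution errors get corrected instead of compounding over the whole sequence.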

Example 4: Deploying the Trained Policy

Run your trained policy on the physical robot:

import time

import torch

from lerobot.common.policies.act.modeling_act import ACTPolicy
from lerobot.common.robot_devices.robots.so100 import SO100Robot

# Load trained policy
policy = ACTPolicy.from_pretrained("./so101_pick_place_policy")
policy.eval()  # Set to evaluation mode
policy.to("cuda")  # Move to GPU for faster inference

# Initialize robot
robot = SO100Robot(port="/dev/ttyACM1", config_path="follower_config.yaml")
robot.connect()
robot.enable_torque()

print("Executing trained policy. Press Ctrl+C to stop.")

try:
    while True:
        # Read current state
        current_pos = robot.read_positions()
        current_vel = robot.read_velocities()
        
        # Create observation dict (must match training format); build the
        # state vector by concatenating positions and velocities
        state = list(current_pos) + list(current_vel)
        observation = {
            "observation.state": torch.tensor(state, dtype=torch.float32)
                                      .unsqueeze(0).to("cuda")
        }
        
        # Predict next action (chunk of 100 timesteps)
        with torch.no_grad():
            action_chunk = policy.select_action(observation)
        
        # Execute first action in chunk
        next_pos = action_chunk[0].cpu().numpy()
        robot.set_goal_positions(next_pos, speed=150)
        
        # Small delay to match training fps
        time.sleep(1/30)
        
except KeyboardInterrupt:
    print("Policy execution stopped.")

robot.disable_torque()
robot.disconnect()

Deployment notes: The policy runs at 30Hz inference speed on a GTX 1660. The action chunking mechanism ensures smooth, continuous motion even if individual inferences have slight latency. Always have an emergency stop ready during autonomous execution.
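One way to make "always have an emergency stop ready" concrete in software is a watchdog that cuts torque if the control loop stalls. A standalone sketch of the pattern; the robot object here is a stub standing in for the real arm interface:

```python
import time

class Watchdog:
    """Cut torque if the control loop misses its heartbeat deadline."""

    def __init__(self, robot, timeout: float = 0.2):
        self.robot = robot
        self.timeout = timeout
        self.last_beat = time.monotonic()

    def heartbeat(self):
        """Call once per healthy control-loop iteration."""
        self.last_beat = time.monotonic()

    def check(self) -> bool:
        """Return True if alive; otherwise disable torque and return False."""
        if time.monotonic() - self.last_beat > self.timeout:
            self.robot.disable_torque()   # fail safe: let the arm go limp
            return False
        return True

class StubRobot:
    """Stand-in for the real arm interface."""
    def __init__(self):
        self.torque_on = True
    def disable_torque(self):
        self.torque_on = False

robot = StubRobot()
dog = Watchdog(robot, timeout=0.05)
dog.heartbeat()
print("alive:", dog.check(), "| torque:", robot.torque_on)
time.sleep(0.1)                           # simulate a stalled control loop
print("alive:", dog.check(), "| torque:", robot.torque_on)
```

A software watchdog complements, not replaces, a physical kill switch on the power supply.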

Advanced Usage and Best Practices

Custom End-Effectors The modular wrist design accepts standard servo horns, enabling rapid prototyping of grippers, suction tools, or pen holders. The community has shared designs for parallel jaw grippers with force feedback and even 3D-printed soft robotic fingers. Always recalibrate after changing end-effectors—the mass affects dynamic performance.

Multi-Camera Setups For complex tasks, mount cameras at multiple viewpoints. The LeRobot dataset supports synchronized video streams. Use the optional hardware designs for overhead and wrist-mounted cameras. Timestamp alignment is crucial—the motor control board provides hardware sync pulses to ensure frame-to-action mapping accuracy.

VR Integration The leader arm pairs perfectly with VR headsets for intuitive 3D control. Community members have integrated Oculus/Meta controllers with the leader arm, mapping hand poses to joint angles. This creates immersive teleoperation for remote manipulation tasks, with latency under 50ms when using USB 3.0.

Performance Optimization

  • Motor tuning: Adjust PID gains in the Feetech servos for your specific payload
  • Power supply: size the supply generously (around 10A total for dual-arm setups) and match the servo voltage spec to prevent voltage sag under load
  • USB isolation: Add isolated USB hubs to prevent ground loops between arms
  • Thermal management: Add small heatsinks to servos running above 80% torque for extended periods
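For intuition on the PID tuning bullet above: a higher proportional gain tracks the setpoint faster at the risk of overshoot under load. The real gains live inside the Feetech servo firmware; this toy step-response simulation of a simple integrator joint just illustrates the effect:

```python
def simulate(kp: float, ki: float = 0.0, kd: float = 0.0,
             steps: int = 200, dt: float = 0.01) -> float:
    """Step response of a toy integrator joint under discrete PID.

    Returns the position after `steps` ticks when chasing a setpoint of 1.0.
    """
    pos, integral, prev_err = 0.0, 0.0, 1.0
    for _ in range(steps):
        err = 1.0 - pos
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        pos += (kp * err + ki * integral + kd * deriv) * dt
    return pos

print(f"kp=2:  settles to {simulate(kp=2.0):.3f}")
print(f"kp=10: settles to {simulate(kp=10.0):.3f}")
```

The stiffer gain closes the error far sooner; on the real servos, push gains up gradually and watch for buzzing or oscillation, which signal you've gone too far.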

Community Contributions Submit your modifications back to the repository. The maintainers actively merge improvements: new gripper designs, camera mounts, calibration routines, and even alternative motor configurations. Your contribution could become the standard for the next hardware revision.

SO-ARM100 vs. Alternatives: Why This Platform Wins

| Feature | SO-ARM100 | Koch v1.1 | LeKiwi | Dobot Magician | Franka Emika Panda |
|---|---|---|---|---|---|
| Cost | ~$450 (dual) | ~$600 (dual) | ~$800 (single) | $1,500 (single) | $30,000+ (single) |
| Open Source | Full (HW + SW) | Full (HW + SW) | Full (HW + SW) | Partial (SW only) | No |
| LeRobot Integration | Native | Native | Native | Community plugin | No |
| Assembly Time | 4-6 hours | 6-8 hours | 3-4 hours | 1 hour (pre-assembled) | N/A |
| Payload | 200g | 150g | 300g | 500g | 3kg |
| Reach | 300mm | 280mm | 350mm | 320mm | 855mm |
| Precision | ±2mm | ±3mm | ±2mm | ±0.5mm | ±0.1mm |
| Community Size | 2,000+ members | 1,500+ members | 800+ members | Large (commercial) | Small (academic) |
| Learning Curve | Moderate | Steep | Moderate | Easy | Steep |

Why SO-ARM100 wins: It hits the sweet spot of affordability, capability, and community support. While Dobot offers better precision, it's closed-source and expensive. Franka provides industrial quality but at 60x the cost. Koch and LeKiwi are excellent alternatives, but SO-ARM100's improved wiring and active Hugging Face partnership give it an edge in AI research workflows.

Frequently Asked Questions

Q: What's the main difference between SO-100 and SO-101? A: SO-101 features simplified wiring (no gear removal required), updated leader arm motors for smoother teleoperation, and refined 3D prints for easier assembly. The SO-100 design is now deprecated but still functional. New builds should use SO-101.

Q: How much technical skill do I need to build one? A: Basic electronics (soldering optional, most connections are plug-and-play) and familiarity with Python. The assembly guide is step-by-step with photos. If you can build a PC and write simple scripts, you can build an SO-101.

Q: Can I use the arm without LeRobot? A: Yes, but you'll need to write your own motor control code. The Feetech motors use a standard UART protocol. However, LeRobot provides pre-built safety features, calibration tools, and dataset management that save months of development.
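If you do go the DIY route, the Feetech STS servos speak a UART frame closely resembling Dynamixel Protocol 1.0: a 0xFF 0xFF header, servo ID, length, instruction, parameters, and an inverted-sum checksum. A sketch of building a write packet; note that 0x2A as the goal-position register and the little-endian byte order are assumptions to verify against the official STS3215 memory map:

```python
def feetech_write_packet(servo_id: int, address: int, data: bytes) -> bytes:
    """Build a WRITE instruction frame in the SCS/STS-style serial protocol.

    Frame: 0xFF 0xFF | id | length | 0x03 (WRITE) | addr | data | checksum,
    where checksum = ~(id + length + instruction + addr + data bytes) & 0xFF.
    """
    instruction = 0x03
    length = len(data) + 3          # instruction + address + checksum
    payload = [servo_id, length, instruction, address, *data]
    checksum = (~sum(payload)) & 0xFF
    return bytes([0xFF, 0xFF, *payload, checksum])

# Example: command servo 1 to a raw goal position of 2048 (centered).
# The register address (0x2A) and byte order are assumptions; check the
# STS3215 register table for your firmware before sending this to hardware.
packet = feetech_write_packet(1, 0x2A, (2048).to_bytes(2, "little"))
print(packet.hex(" "))
```

Send the resulting bytes over a serial port at the servo's configured baud rate (1 Mbps by default on these arms) and read back the status frame.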

Q: What if a servo burns out? A: Replacement STS3215 servos cost $14 and are stocked by multiple vendors. The modular design means you can swap a single servo in 10 minutes without disassembling the entire arm. Keep one spare of each gear ratio on hand.

Q: Is there warranty support for kit purchases? A: Warranty varies by vendor. PartaBot and Seeed Studio offer 30-day warranties on assembled kits. For DIY builds, you're responsible for your own assembly, but the community provides excellent troubleshooting support on Discord.

Q: How accurate is the arm for precision tasks? A: Expect ±2mm repeatability after proper calibration. This is sufficient for pick-and-place, assembly, and manipulation tasks. For sub-millimeter precision (e.g., PCB testing), consider adding a wrist-mounted camera for visual servoing.
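Visual servoing works because it closes the loop on what the camera sees rather than on raw joint angles, so calibration error washes out. A toy one-axis sketch of the idea: move proportionally to the pixel error until the target is centered (gain and tolerance values are illustrative):

```python
def visual_servo(target_px: float, current_px: float = 0.0,
                 gain: float = 0.3, tol: float = 0.5,
                 max_iters: int = 50) -> int:
    """Proportional visual servo on one image axis; returns iterations used."""
    iters = 0
    while abs(target_px - current_px) > tol and iters < max_iters:
        error = target_px - current_px
        current_px += gain * error   # the arm's motion, as seen in the image
        iters += 1
    return iters

print(visual_servo(100.0))  # converges in a handful of iterations
```

Each iteration removes a fixed fraction of the remaining pixel error, so convergence is geometric regardless of small kinematic inaccuracies.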

Q: Can I scale up to a larger robot? A: The design principles scale, but STS3215 servos max out at 16.5 kg·cm of torque (7.4V version). For larger arms, you'd need to redesign around more powerful motors such as the Dynamixel XM series. The LeRobot software stack supports arbitrary robot configurations.
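That torque limit also explains the 200g payload figure: a mass held at full reach loads the shoulder by roughly mass × horizontal distance. A quick back-of-envelope check, using a simplified static model that ignores the arm's own link masses:

```python
# Static torque check: does a 200 g payload at 300 mm reach fit within
# the STS3215's 16.5 kg*cm rating?  Simplified: ignores link self-weight.
def required_torque_kg_cm(payload_kg: float, reach_cm: float) -> float:
    """Torque (kg*cm) to hold a payload horizontally at the given reach."""
    return payload_kg * reach_cm

needed = required_torque_kg_cm(0.2, 30.0)
limit = 16.5
print(f"needed {needed} kg*cm vs limit {limit} kg*cm "
      f"(margin left for link weight and dynamics: {limit - needed:.1f})")
```

The remaining margin is consumed by the links, the gripper, and dynamic loads, which is why the rated payload is well below the naive maximum.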

Conclusion: Your Gateway to AI Robotics

The SO-ARM100 robot arm represents more than just affordable hardware—it's a paradigm shift in who can participate in robotics research. By combining open-source design, global parts sourcing, and tight integration with Hugging Face's LeRobot ecosystem, this platform removes barriers that have historically kept individuals and small labs from contributing to AI robotics.

My take? After reviewing dozens of robotic platforms, the SO-ARM100 stands out for its pragmatic balance of capability and accessibility. The active community isn't just a nice-to-have; it's the secret sauce that transforms a good design into a living ecosystem. When you hit a snag, 2,000+ developers have your back.

The improved SO-101 design shows the maintainers listen to user feedback. Simplified wiring and easier assembly mean you spend less time debugging and more time innovating. This is what open-source hardware should look like—not just published files, but a complete support system.

Ready to start? Visit the official GitHub repository to download the latest files, join the Discord community, and access the Hugging Face documentation. The future of robotics isn't just for big corporations anymore—it's in your hands. Build it.


Have you built an SO-ARM100? Share your project in the comments below and join the robotics revolution!
