Coming Soon

Rush

The Universal Shell

Intent-Driven Syntax (AI Win) • Clean Syntax (Human Win) • Cross-Platform (We All Win)
macOS Linux Windows x64 + ARM64
rush
# Clean, expressive syntax
$ name = "world"
$ puts "hello #{name}"
hello world
# Objectify pipeline
$ netstat | objectify | where State == "LISTEN" | select LocalAddress, PID
LocalAddress PID
127.0.0.1:8080 1234
0.0.0.0:443 5678
# AI integration
$ ai "find large files in /tmp"
Found 3 files over 100MB:
/tmp/download.iso (2.1GB)
/tmp/backup.tar (512MB)
# Pipe data through AI
$ cat error.log | ai "summarize these errors"
3 categories of errors found:
- Connection timeouts (47)
- Auth failures (12)
# AI agent mode
$ cat servers.txt | ai --agent "update the OS on these servers"
Thinking: I'll SSH to each server and run updates...
▸ ssh server1 "apt update && apt upgrade -y"
✓ server1 updated (3 packages)

Why Intent-Driven Syntax Matters

The same clean syntax that's readable for humans is also easier for LLMs to generate correctly

Spaces That Make Sense

Bash name="Mark" # no spaces allowed!
Rush name = "Mark" # readable

In bash, name = "Mark" isn't an assignment at all: bash tries to run a command named name with = and "Mark" as arguments. LLMs constantly add spaces around =. Rush simply allows them.

No Sigil Confusion

Bash $name vs ${name} vs "$name"
Rush puts "hello #{name}"

One interpolation syntax. No $var vs ${var} vs quoting confusion.

Explicit Block Structure

Bash if [ ... ]; then ... fi
Rush if condition ... end

Clear if/end blocks instead of if/then/fi, case/esac, or bracket gymnastics.

Natural File Handling

Bash IFS=$'\n'; for f in *.txt
Rush for f in Dir.files("*.txt")

Files with spaces, quotes, and special characters just work. No IFS hackery required.

Objects, Not Text Parsing

Bash awk '{print $4}' | grep -E
Rush | where Status == "Up"

Query object properties directly. No regex column extraction or brittle text parsing.

Comparison Operators

Bash [ "$x" -gt 5 ] vs (( x > 5 ))
Rush if x > 5

Use >, <, == naturally. No -gt, -eq, or double-bracket rules.

Designed for LLM Context Windows

Rush isn't just easier syntax — it's a language designed from the ground up to fit how LLMs actually work.

📦

Context-Friendly

The entire language spec fits in a single context window. No scattered documentation across dozens of man pages, no "see also" rabbit holes. One file, complete knowledge.

🎯

Single-Shot Docs

rush-lang-spec.yaml is machine-readable documentation designed for LLM ingestion. Read once, generate correctly. No web searches, no StackOverflow, no "let me look that up."

⚠️

Explicit Mistakes Section

The spec includes common mistakes LLMs make — patterns to avoid, with corrections. We're training the LLM on what not to do, not just what to do.

🔄

Consistent Patterns

No special cases. No "except when..." rules. The same pattern works everywhere. LLMs thrive on consistency; Rush delivers it.

Compare this to Python: 400+ pages of docs, spread across tutorials, library references, PEPs, and community guides. An LLM either needs massive pre-training or live web search to be competent. Rush fits in 50KB.

LLM-Ready Documentation

Any LLM becomes an instant Rush expert with a single file read

📄

rush-lang-spec.yaml

A machine-readable language specification designed for single-shot LLM ingestion. Includes:

  • Complete syntax rules with transpilation examples
  • Common mistakes section β€” LLMs are explicitly told what NOT to do
  • Unix β†’ Rush translation guide
  • Mental model for code generation
From the spec:
# LLMs: avoid these patterns
common_mistakes:
  - wrong: 'foreach ($x in $list) { }'
    right: 'for x in list ... end'

  - wrong: 'if (x > 5) { puts x }'
    right: 'if x > 5; puts x; end'

  - wrong: '[System.IO.File]::ReadAllText("path")'
    right: 'File.read("path")'
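One way to put a common_mistakes list like this to work (a hypothetical Python sketch, not official Rush tooling) is to fold it into an LLM system prompt as an explicit "avoid these" preamble:

```python
# Hypothetical sketch: render a common_mistakes list (as in the spec
# excerpt above) into a "do not do this" preamble for an LLM prompt.
# The entries below are copied from this page's spec excerpt.
mistakes = [
    {"wrong": "foreach ($x in $list) { }", "right": "for x in list ... end"},
    {"wrong": "if (x > 5) { puts x }", "right": "if x > 5; puts x; end"},
]

def mistakes_preamble(items: list) -> str:
    """Build a plain-text avoid-list from wrong/right pairs."""
    lines = ["Avoid these patterns:"]
    for m in items:
        lines.append(f"  WRONG: {m['wrong']}")
        lines.append(f"  RIGHT: {m['right']}")
    return "\n".join(lines)

print(mistakes_preamble(mistakes))
```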

AI Agent Mode

When AI needs to run autonomously, Rush provides the infrastructure

💬

Interactive AI

ai "prompt"

Ask questions, get answers. Pipe data through AI for analysis.

cat error.log | ai "what went wrong?"
git diff | ai "write a commit message"
ai "how do I find large files?"
🔌

MCP Server

rush --mcp

Integrates with Claude Code. Persistent sessions, tool exposure, JSON-RPC protocol.

rush install mcp --claude
# Now Claude Code has rush_execute,
# rush_read_file, rush_context tools

Under the Hood: LLM Mode Wire Protocol

When AI agents drive Rush autonomously (rush --llm), all I/O is structured JSON. No ANSI colors, no decorations — just machine-readable context and results. This is the protocol ai --agent uses internally, and it's available for any LLM framework to use.
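A minimal Python sketch of consuming a JSON-lines protocol like this, assuming one JSON object per line and the field names shown on this page (ready, status, exit_code); the framing details are an assumption, not Rush's documented API:

```python
import json

def parse_event(line: str) -> dict:
    """Each protocol message is assumed to be one JSON object per line."""
    return json.loads(line)

def is_ready(event: dict) -> bool:
    """Banner messages announce a ready session."""
    return event.get("ready") is True

def summarize_result(event: dict) -> str:
    """Condense a result message to 'status (exit N)'."""
    status = event.get("status", "unknown")
    code = event.get("exit_code", -1)
    return f"{status} (exit {code})"

# Simulated exchange using messages shaped like the examples on this page
banner = parse_event('{"ready": true, "host": "prod-web-01", "cwd": "/root"}')
result = parse_event('{"status": "success", "exit_code": 0, "stdout": "..."}')
print(is_ready(banner), summarize_result(result))  # True success (exit 0)
```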

Features

🤖

MCP Server Mode

rush --mcp integrates with Claude Code. Persistent sessions, tool exposure, JSON-RPC protocol support.

🔗

SSH Gateway

rush --mcp-ssh for multi-host management. Target any SSH host from Claude with seamless authentication.

📊

Objectify Pipelines

Auto-convert text streams to objects. netstat | objectify | where State == LISTEN just works.
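Conceptually, an objectify step splits columnar output on its header row and turns each data line into an object with named fields; here is a rough Python approximation (the sample text is made up, and Rush's real parser is certainly more robust):

```python
def objectify(text: str) -> list:
    """Naive columnar-text-to-objects conversion: first line is the
    header row; each later line becomes a dict keyed by header names."""
    lines = [l for l in text.strip().splitlines() if l.strip()]
    headers = lines[0].split()
    return [dict(zip(headers, line.split(None, len(headers) - 1)))
            for line in lines[1:]]

# Invented sample resembling netstat-style output
sample = """Proto LocalAddress State
tcp   127.0.0.1:8080 LISTEN
tcp   10.0.0.5:51234 ESTABLISHED"""

# The Rush pipeline `| where State == "LISTEN"` maps to a field filter
listening = [row for row in objectify(sample) if row["State"] == "LISTEN"]
print(listening)
```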

🗄️

Built-in SQL

Query SQLite, PostgreSQL, ODBC directly. sql @db "SELECT * FROM users"

🌍

Cross-Platform

macOS, Linux, Windows. Platform blocks for conditional code execution based on your OS.

⌨️

Vi Mode

Full readline vi mode. W/B/E motions, dot-repeat, undo-all for efficient editing.

Developer Experience

Small details that make a big difference in daily use

🔄

Config Sync

Sync your Rush config across machines via GitHub, SSH, or local path. Your secrets file is never synced.

sync init github
sync init ssh server:~/.config/rush
sync push / sync pull
📁

UNC SSH Paths

Access remote files directly — no scp or rsync needed. Uses your SSH config.

ls //ssh:server/var/log
cat //ssh:server/etc/nginx/nginx.conf
cp //ssh:server/data/backup.sql .
🎨

Auto-Theming

Detects your terminal background at startup. Configures LS_COLORS and GREP_COLORS automatically.

# Dark terminals → bold/bright
# Light terminals → non-bold/darker
set theme auto # default
🤖

Multi-LLM Support

Use the ai command with Anthropic, OpenAI, Gemini, or Ollama. Switch providers instantly.

set --save aiProvider anthropic
set --save aiProvider openai
set --save aiProvider ollama
🛤️

PATH Management

Visual PATH editor with existence checks. Add, remove, reorder — changes persist automatically.

path # show with existence check
path add --save /opt/bin
path edit --save # opens $EDITOR
⚡

Hot Reload

Edit your config and reload without restarting. --hard restarts the binary while preserving session state.

init # edit init.rush, auto-reload
reload # reload config
reload --hard # full restart

Shell Scripting, Without the Pain

Rush solves the problems that make shell scripting frustrating

Files with Spaces

Before (Bash)
IFS=$'\n'
for f in $(find . -name "*.txt"); do
  echo "$f"  # Still breaks on some names
done
After (Rush)
for f in Dir.files("*.txt", recursive: true)
  puts f.Name
end

Exit Code Handling

Before (Bash)
if ! git pull; then
  echo "failed"
fi
# Or: git pull || echo "failed"
After (Rush)
git pull
if $?.failed?
  warn "pull failed: #{$?.code}"
end

Cross-Platform Scripts

Before (Bash)
if [[ "$OSTYPE" == "darwin"* ]]; then
  brew install rg
elif [[ "$OSTYPE" == "linux-gnu"* ]]; then
  apt install ripgrep
fi
After (Rush)
macos { brew install rg }
linux { apt install ripgrep }
win64 { winget install ripgrep }

Parsing Command Output

Before (Bash)
docker ps | tail -n +2 | \
  awk '{print $NF}' | \
  grep -v "^$"
After (Rush)
docker ps | objectify | select Names

Two MCP Servers

Local shell access and persistent SSH sessions — both with structured JSON protocol

💻

Local MCP

rush --mcp

Direct shell access on the local machine. Claude Code gets rush_execute, rush_read_file, and rush_context tools.

Install & Configure
rush install mcp --claude
# Adds to claude_desktop_config.json
Claude sends
{"tool": "rush_execute",
 "command": "ls ~/Documents"}
Rush returns
{"status": "success",
 "exit_code": 0,
 "stdout": [...],
 "cwd": "/Users/mark"}
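MCP rides on JSON-RPC 2.0, so a tool call like the one above travels in an envelope roughly like this (a sketch of the transport framing; the rush_execute tool name and command come from this page):

```python
import json

# Assumed JSON-RPC 2.0 envelope for an MCP tool call; the "tools/call"
# method is standard MCP, the tool name and arguments mirror the
# example above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "rush_execute",
               "arguments": {"command": "ls ~/Documents"}},
}

# Serialize for the wire, then decode as a server would
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["params"]["name"])  # rush_execute
```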

Multi-Host Agent Mode

User runs
cat ~/servers.txt | ai --agent "update the OS on these servers"

The agent opens parallel SSH sessions to each host, runs commands, observes results, and iterates until done.

On each server (JSON wire protocol)
# prod-web-01
→ {"ready":true,"host":"prod-web-01","cwd":"/root"...}
← apt update && apt upgrade -y
→ {"status":"success","exit_code":0,"stdout":"..."}

# prod-web-02
→ {"ready":true,"host":"prod-web-02","cwd":"/root"...}
← apt update && apt upgrade -y
→ {"status":"success","exit_code":0,"stdout":"..."}

More Examples

# Variables — no sigils
name = "rush"
puts "Welcome to #{name.upcase}!"

# Method chaining
"hello world".split.map { |w| w.capitalize }.join(" ")

# Readable loops
for f in Dir.files("*.txt")
  puts "Processing #{f.Name}"
end
# Docker containers as objects
docker ps | objectify | where Status =~ "Up" | select Names, Image

# Network connections
netstat | objectify | where State == "LISTEN" | sort LocalAddress

# Process management
ps | objectify | where CPU > 10 | select ProcessName, CPU, Memory

# Output as JSON
ls /var/log | objectify | as json
#!/usr/bin/env rush
# incident-response.rush — AI-assisted log analysis

# Gather context from multiple sources
errors = $(grep -i error /var/log/app/*.log | tail -100)
metrics = $(curl -s localhost:9090/metrics)
recent_deploys = $(git log --oneline -5)

# AI analyzes the full picture
analysis = ai <<PROMPT
  Recent errors: #{errors}
  Current metrics: #{metrics}
  Recent deploys: #{recent_deploys}

  Identify the root cause and suggest fixes.
PROMPT

puts analysis

# Or let the agent handle it autonomously
ai --agent <<TASK
  1. Check application health endpoints
  2. Review error patterns in the last hour
  3. If errors correlate with a deploy, prepare rollback
  4. Summarize findings and recommended action
TASK

# Pipe structured data through AI
docker ps | ai "which containers are using the most memory?"
git diff HEAD~5 | ai "summarize changes and flag any security concerns"
cat ~/.ssh/config | ai "list all hosts I can connect to"
#!/usr/bin/env rush
# update-system.rush — One script, every platform

macos
  puts "Updating macOS..."
  brew update && brew upgrade
  softwareupdate --install --all
end

macos.arch == "arm64"
  # Apple Silicon-specific setup
  eval "$(/opt/homebrew/bin/brew shellenv)"
end

linux.version >= "6.8.0"
  puts "Modern kernel detected"
end

ubuntu
  sudo apt update && sudo apt upgrade -y
end

redhat
  sudo dnf update -y
end

win64
# PowerShell 7 — Rush syntax
  winget upgrade --all --accept-source-agreements
end

win32
# PowerShell 5.1 — raw PS syntax, sent verbatim
  Get-WindowsUpdate -Install -AcceptAll
end

puts "✓ System updated."
# Add a named connection
sql add @logs --driver sqlite --path ~/data/app.db

# Query directly
sql @logs "SELECT * FROM events WHERE level = 'ERROR' LIMIT 10"

# Pipeline integration
sql @logs "SELECT path, count FROM requests" | where count > 100

# Multiple output formats
sql @logs "SELECT * FROM users" --json
sql @logs "SELECT * FROM users" --csv
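Under the hood these are ordinary SQL queries; for comparison, here is the equivalent of the ERROR query via Python's standard sqlite3 module (the table and columns are invented for the example):

```python
import sqlite3

# In-memory database standing in for the hypothetical ~/data/app.db
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (level TEXT, msg TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("ERROR", "timeout"), ("INFO", "ok")])

# Same shape as: sql @logs "SELECT * FROM events WHERE level = 'ERROR' LIMIT 10"
rows = conn.execute(
    "SELECT * FROM events WHERE level = 'ERROR' LIMIT 10").fetchall()
print(rows)  # [('ERROR', 'timeout')]
```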
#!/usr/bin/env rush
# Deploy script showing shell + scripting integration

target = ARGV[0] || die("usage: deploy.rush <env>")
branch = $(git rev-parse --abbrev-ref HEAD).strip

unless branch == "main" || target == "staging"
  die "can only deploy to production from main"
end

puts "deploying #{branch} to #{target}..."

git pull origin #{branch}
npm run build

if target == "production"
  rsync -avz --delete dist/ prod.example.com:/var/www/
else
  rsync -avz --delete dist/ staging.example.com:/var/www/
end

puts "done."

Installation

Coming Soon

Rush is currently in alpha, being tested in select production environments. Join the waitlist for early access.

🍎 macOS arm64, x64
🐧 Linux x64, arm64
🪟 Windows x64, ARM64