The Universal Shell
The same clean syntax that's readable for humans is also easier for LLMs to generate correctly
name="Mark" # no spaces allowed!
name = "Mark" # readable
In bash, name = "Mark" is an error. LLMs constantly add spaces. Rush just allows them.
$name vs ${name} vs "$name"
puts "hello #{name}"
One interpolation syntax. No $var vs ${var} vs quoting confusion.
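As a sketch of what that looks like in practice (the `.size` call is an assumption modeled on the Ruby-style string methods shown elsewhere on this page), the one #{} form covers plain variables, method calls, and full expressions alike:

```
name = "Mark"
puts "hello #{name}"                      # plain variable
puts "HELLO #{name.upcase}"               # method call inside the braces
puts "#{name} is #{name.size} letters"    # any expression works
```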
if [ ... ]; then ... fi
if condition ... end
Clear if/end blocks instead of if/then/fi, case/esac, or bracket gymnastics.
IFS=$'\n'; for f in *.txt
for f in Dir.files("*.txt")
Files with spaces, quotes, and special characters just work. No IFS hackery required.
awk '{print $4}' | grep -E
| where Status == "Up"
Query object properties directly. No regex column extraction or brittle text parsing.
[ "$x" -gt 5 ] vs (( x > 5 ))
if x > 5
Use >, <, == naturally. No -gt, -eq, or double-bracket rules.
Rush isn't just easier syntax: it's a language designed from the ground up to fit how LLMs actually work.
The entire language spec fits in a single context window. No scattered documentation across dozens of man pages, no "see also" rabbit holes. One file, complete knowledge.
rush-lang-spec.yaml is machine-readable documentation designed for LLM ingestion. Read once, generate correctly. No web searches, no StackOverflow, no "let me look that up."
The spec includes common mistakes LLMs make: patterns to avoid, with corrections. We're training the LLM on what not to do, not just what to do.
No special cases. No "except when..." rules. The same pattern works everywhere. LLMs thrive on consistency; Rush delivers it.
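A hedged sketch of that consistency: every block, whether a conditional, a loop, or a platform guard, takes the same keyword ... end shape:

```
if x > 5
  puts "big"
end

for f in Dir.files("*.txt")
  puts f.Name
end

macos
  brew install rg
end
```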
Compare this to Python: 400+ pages of docs, spread across tutorials, library references, PEPs, and community guides. An LLM either needs massive pre-training or live web search to be competent. Rush fits in 50KB.
Any LLM becomes an instant Rush expert with a single file read
A machine-readable language specification designed for single-shot LLM ingestion. Includes:
# LLMs: avoid these patterns
common_mistakes:
  - wrong: 'foreach ($x in $list) { }'
    right: 'for x in list ... end'
  - wrong: 'if (x > 5) { puts x }'
    right: 'if x > 5; puts x; end'
  - wrong: '[System.IO.File]::ReadAllText("path")'
    right: 'File.read("path")'
When AI needs to run autonomously, Rush provides the infrastructure
ai "prompt"
Ask questions, get answers. Pipe data through AI for analysis.
cat error.log | ai "what went wrong?"
git diff | ai "write a commit message"
ai "how do I find large files?"
ai --agent "task"
AI executes commands, observes results, iterates until done. Multi-turn task completion.
cat ~/servers.txt | ai --agent "update OS on these"
ai --agent "find all TODO comments and create issues"
ai --agent "deploy the staging environment"
rush --mcp
Integrates with Claude Code. Persistent sessions, tool exposure, JSON-RPC protocol.
rush install mcp --claude
# Now Claude Code has rush_execute,
# rush_read_file, rush_context tools
When AI agents drive Rush autonomously (rush --llm), all I/O is structured JSON. No ANSI colors, no decorations, just machine-readable context and results. This is the protocol ai --agent uses internally, and it's available for any LLM framework to use.
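A sketch of what one such exchange might look like, with field names taken from the MCP and SSH examples on this page (the exact --llm handshake shown here is an assumption):

```
$ rush --llm
{"ready":true,"host":"laptop","cwd":"/Users/mark"}
ls ~/Documents
{"status":"success","exit_code":0,"stdout":[...],"cwd":"/Users/mark"}
```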
rush --mcp integrates with Claude Code. Persistent sessions, tool exposure, JSON-RPC protocol support.
rush --mcp-ssh for multi-host management. Target any SSH host from Claude with seamless authentication.
Auto-convert text streams to objects. netstat | objectify | where State == "LISTEN" just works.
Query SQLite, PostgreSQL, ODBC directly. sql @db "SELECT * FROM users"
macOS, Linux, Windows. Platform blocks for conditional code execution based on your OS.
Full readline vi mode. W/B/E motions, dot-repeat, undo-all for efficient editing.
Small details that make a big difference in daily use
Sync your Rush config across machines via GitHub, SSH, or local path. Secrets file is never synced.
sync init github
sync init ssh server:~/.config/rush
sync push / sync pull
Access remote files directly; no scp or rsync needed. Uses your SSH config.
ls //ssh:server/var/log
cat //ssh:server/etc/nginx/nginx.conf
cp //ssh:server/data/backup.sql .
Detects your terminal background at startup. Configures LS_COLORS and GREP_COLORS automatically.
# Dark terminals → bold/bright
# Light terminals → non-bold/darker
set theme auto # default
Use the ai command with Anthropic, OpenAI, Gemini, or Ollama. Switch providers instantly.
set --save aiProvider anthropic
set --save aiProvider openai
set --save aiProvider ollama
Visual PATH editor with existence checks. Add, remove, reorder; changes persist automatically.
path # show with existence check
path add --save /opt/bin
path edit --save # opens $EDITOR
Edit your config and reload without restarting. --hard restarts the binary while preserving session state.
init # edit init.rush, auto-reload
reload # reload config
reload --hard # full restart
Rush solves the problems that make shell scripting frustrating
IFS=$'\n'
for f in $(find . -name "*.txt"); do
  echo "$f"   # Still breaks on some names
done
for f in Dir.files("*.txt", recursive: true)
  puts f.Name
end
if ! git pull; then
  echo "failed"
fi
# Or: git pull || echo "failed"
git pull
if $?.failed?
  warn "pull failed: #{$?.code}"
end
if [[ "$OSTYPE" == "darwin"* ]]; then
  brew install rg
elif [[ "$OSTYPE" == "linux-gnu"* ]]; then
  apt install ripgrep
fi
macos { brew install rg }
linux { apt install ripgrep }
win64 { winget install ripgrep }
docker ps | tail -n +2 | \
  awk '{print $NF}' | \
  grep -v "^$"
docker ps | objectify | select Names
Local shell access and persistent SSH sessions, both with a structured JSON protocol
rush --mcp
Direct shell access on the local machine. Claude Code gets rush_execute, rush_read_file, and rush_context tools.
rush install mcp --claude
# Adds to claude_desktop_config.json
{"tool": "rush_execute",
"command": "ls ~/Documents"}
{"status": "success",
"exit_code": 0,
"stdout": [...],
"cwd": "/Users/mark"}
rush --mcp-ssh
Persistent SSH sessions via stdin/stdout streaming. One connection per host, kept alive across commands.
ssh server "apt update" 2>&1
ssh server "apt upgrade -y" 2>&1
ssh server "systemctl restart app" 2>&1
# 3 connections, 600-1500ms total
ssh server "rush --llm"
apt update
apt upgrade -y
systemctl restart app
# 1 connection, 60-150ms total
cat ~/servers.txt | ai --agent "update the OS on these servers"
The agent opens parallel SSH sessions to each host, runs commands, observes results, and iterates until done.
# prod-web-01
← {"ready":true,"host":"prod-web-01","cwd":"/root"...}
→ apt update && apt upgrade -y
← {"status":"success","exit_code":0,"stdout":"..."}
# prod-web-02
← {"ready":true,"host":"prod-web-02","cwd":"/root"...}
→ apt update && apt upgrade -y
← {"status":"success","exit_code":0,"stdout":"..."}
# Variables: no sigils
name = "rush"
puts "Welcome to #{name.upcase}!"
# Method chaining
"hello world".split.map { |w| w.capitalize }.join(" ")
# Readable loops
for f in Dir.files("*.txt")
  puts "Processing #{f.Name}"
end
# Docker containers as objects
docker ps | objectify | where Status =~ "Up" | select Names, Image
# Network connections
netstat | objectify | where State == "LISTEN" | sort LocalAddress
# Process management
ps | objectify | where CPU > 10 | select ProcessName, CPU, Memory
# Output as JSON
ls /var/log | objectify | as json
#!/usr/bin/env rush
# incident-response.rush: AI-assisted log analysis
# Gather context from multiple sources
errors = $(grep -i error /var/log/app/*.log | tail -100)
metrics = $(curl -s localhost:9090/metrics)
recent_deploys = $(git log --oneline -5)
# AI analyzes the full picture
analysis = ai <<PROMPT
Recent errors: #{errors}
Current metrics: #{metrics}
Recent deploys: #{recent_deploys}
Identify the root cause and suggest fixes.
PROMPT
puts analysis
# Or let the agent handle it autonomously
ai --agent <<TASK
1. Check application health endpoints
2. Review error patterns in the last hour
3. If errors correlate with a deploy, prepare rollback
4. Summarize findings and recommended action
TASK
# Pipe structured data through AI
docker ps | ai "which containers are using the most memory?"
git diff HEAD~5 | ai "summarize changes and flag any security concerns"
cat ~/.ssh/config | ai "list all hosts I can connect to"
#!/usr/bin/env rush
# update-system.rush: one script, every platform
macos
  puts "Updating macOS..."
  brew update && brew upgrade
  softwareupdate --install --all
end

macos.arch == "arm64"
  # Apple Silicon-specific setup
  eval "$(/opt/homebrew/bin/brew shellenv)"
end

linux.version >= "6.8.0"
  puts "Modern kernel detected"
end

ubuntu
  sudo apt update && sudo apt upgrade -y
end

redhat
  sudo dnf update -y
end

win64
  # PowerShell 7: Rush syntax
  winget upgrade --all --accept-source-agreements
end

win32
  # PowerShell 5.1: raw PS syntax, sent verbatim
  Get-WindowsUpdate -Install -AcceptAll
end

puts "✓ System updated."
# Add a named connection
sql add @logs --driver sqlite --path ~/data/app.db
# Query directly
sql @logs "SELECT * FROM events WHERE level = 'ERROR' LIMIT 10"
# Pipeline integration
sql @logs "SELECT path, count FROM requests" | where count > 100
# Multiple output formats
sql @logs "SELECT * FROM users" --json
sql @logs "SELECT * FROM users" --csv
#!/usr/bin/env rush
# Deploy script showing shell + scripting integration
target = ARGV[0] || die("usage: deploy.rush <env>")
branch = $(git rev-parse --abbrev-ref HEAD).strip

unless branch == "main" || target == "staging"
  die "can only deploy to production from main"
end

puts "deploying #{branch} to #{target}..."
git pull origin #{branch}
npm run build

if target == "production"
  rsync -avz --delete dist/ prod.example.com:/var/www/
else
  rsync -avz --delete dist/ staging.example.com:/var/www/
end
puts "done."
Rush is currently in alpha, being tested in select production environments. Join the waitlist for early access.
Join the waitlist to be the first to know when Rush launches.