Earn TON with Your GPU
Provide compute power to the Cocoon network and receive TON for processing AI inference requests. Secure, transparent, and fully automated.
How It Works
Download & Install
Download the Cocoon Worker distribution, unpack it, and configure your settings.
Configure GPU & Wallet
Set up your NVIDIA GPU for confidential computing and provide your TON wallet address.
Earn TON
Start the worker and receive automatic TON payments for every AI request processed.
Hardware Requirements
GPU Requirements
- GPU: NVIDIA H100 / H200
- VRAM: 80GB+
- CC: Confidential Computing support required
- RTX series NOT supported for production
Server Requirements
- CPU: Intel (TDX)
- Linux: 6.16+
- QEMU: 10.1+
- RAM: 128GB+
Software & Accounts
- Updated VBIOS (via NVIDIA support)
- Hugging Face token
- TON wallet for payouts
Earnings Calculator
Earnings shown by the calculator are estimates; actual earnings depend on network demand, model selection, and uptime.
Quick Start
1. Download and unpack
wget https://ci.cocoon.org/cocoon-worker-release-latest.tar.xz
tar xJf cocoon-worker-release-latest.tar.xz
cd cocoon-worker
2. Start seal-server (required for production)
./bin/seal-server --enclave-path ./bin/enclave.signed.so
3. Configure worker
cp worker.conf.example worker.conf
# Edit configuration:
# owner_address = YOUR_TON_WALLET_ADDRESS
# gpu = 0000:01:00.0
# model = Qwen/Qwen3-0.6B
# hf_token = hf_xxx...
4. Start the worker
./scripts/cocoon-launch worker.conf
For detailed instructions, see the full setup guide.
How Payments Work
Automatic Payouts
Smart contracts on TON blockchain handle all payments automatically. No manual claims needed.
Transparent & Verifiable
All transactions are recorded on-chain. View your earnings anytime via TON explorer.
Security & Privacy
Intel TDX
Workers run inside hardware-isolated virtual machines. Host cannot access request data.
Data Privacy
All prompts and responses are encrypted. You never see user data.
GPU Attestation
Hardware verification ensures only genuine GPUs participate in the network.
Smart Contracts
Open-source contracts on TON ensure fair and transparent payments.
Full Setup Guide
1 Prerequisites
Before starting, ensure your hardware and software meet the following requirements for running a Cocoon worker.
Hardware Requirements
- GPU: NVIDIA H100 / H200 (Confidential Computing)
- CPU: Intel Xeon (TDX support)
- RAM: 128GB+ DDR5
- Storage: NVMe SSD, 500GB+
- Network: 1Gbps+, stable connection
Software Requirements
- Linux: Kernel 6.16+ (TDX support)
- QEMU: 10.1+ (TDX patches)
- NVIDIA Driver: 560+
Note: Consumer GPUs (RTX series) are not supported due to lack of Confidential Computing capabilities.
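A quick way to sanity-check a host against this list is to query each component from the shell. This is a minimal sketch using standard tools; compare the output against the versions listed above.
# Kernel version (needs 6.16+)
uname -r
# GPU model, VRAM, and driver version (needs H100/H200, 80GB+, driver 560+)
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv,noheader
# QEMU version (needs 10.1+ with TDX patches)
qemu-system-x86_64 --version
# Total RAM in GB (needs 128+)
free -g | grep -i mem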
2 VBIOS Update
GPU attestation requires a special CC-enabled VBIOS. Standard VBIOS versions do not support attestation features needed for production deployment.
Update Process
- Contact NVIDIA Enterprise Support with your GPU serial number
- Request CC-enabled VBIOS for your specific GPU model (H100/H200)
- Follow NVIDIA instructions to flash the new VBIOS
- Verify update with nvidia-smi and check CC mode availability
Warning: Incorrect VBIOS flashing can damage your GPU. Follow NVIDIA instructions carefully.
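After flashing, the driver should report both the new VBIOS version and Confidential Computing support. A minimal check (field names can vary slightly between driver releases):
# Report the flashed VBIOS version
nvidia-smi -q | grep -i "vbios version"
# Confirm CC-related fields are present
nvidia-smi -q | grep -i "confidential"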
3 TDX Configuration
Intel Trust Domain Extensions (TDX) provides hardware-isolated virtual machines. TDX must be enabled in BIOS for production workers.
BIOS Settings
- Enter BIOS setup (usually F2/Del on boot)
- Navigate to Security or Advanced CPU settings
- Enable "Intel TDX" or "Trust Domain Extensions"
- Save and reboot
Verify TDX is Active
dmesg | grep -i tdx
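If the kernel log has rotated, a second signal is the KVM module parameter that recent TDX-enabled kernels expose; this path is an assumption about your kernel build and may be absent on some distributions:
# Prints Y when KVM has initialized TDX support (path may differ per kernel build)
cat /sys/module/kvm_intel/parameters/tdx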
4 Configuration File
The worker.conf file (INI format) contains all settings for your worker. Copy the example file and edit it with your values.
cp worker.conf.example worker.conf
nano worker.conf
Required Parameters
| Parameter | Description |
|---|---|
| type | Must be "worker" |
| model | AI model to serve (e.g., Qwen/Qwen3-0.6B) |
| owner_address | Your TON wallet address for payouts |
| gpu | PCI address of the GPU (find it with lspci \| grep -i nvidia) |
| hf_token | Hugging Face API token for model download |
| node_wallet_key | Base64-encoded worker wallet private key |
Optional Parameters
| Parameter | Default | Description |
|---|---|---|
| instance | 0 | Worker instance number for multi-GPU setups |
| worker_coefficient | 1000 | Price multiplier (1000 = 1.0x, 1500 = 1.5x) |
| persistent | auto | Path to persistent disk image |
Example Configuration
type = worker
model = Qwen/Qwen3-0.6B
owner_address = EQC...your_ton_address
gpu = 0000:01:00.0
hf_token = hf_xxxxxxxxxxxxx
node_wallet_key = base64_encoded_key
ton_config = ./mainnet-config.json
root_contract_address = EQ...
Tip: Use lspci | grep -i nvidia to find your GPU PCI address.
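Note that the gpu parameter expects the domain-qualified PCI address (0000:01:00.0 in the example above), while plain lspci omits the leading domain. Adding -D prints the full form:
# Print domain-qualified PCI addresses for all NVIDIA devices
lspci -D | grep -i nvidia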
5 Seal Server
seal-server provides secure key derivation for TDX guests using an SGX enclave. It must run on the host before starting workers.
Why seal-server is Required
- Derives cryptographic keys tied to the TDX image
- Keys persist across reboots
- Ensures host cannot access worker secrets
Start seal-server
./bin/seal-server --enclave-path ./bin/enclave.signed.so
Note: One seal-server instance can serve multiple workers. Keep it running in a separate terminal or as a systemd service.
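As one way to keep it running across reboots, a minimal systemd unit might look like the sketch below. The unit name and the /opt/cocoon-worker install path are assumptions; adjust them to wherever you unpacked the distribution.
# /etc/systemd/system/cocoon-seal-server.service (assumed name and install path)
[Unit]
Description=Cocoon seal-server
After=network-online.target

[Service]
WorkingDirectory=/opt/cocoon-worker
ExecStart=/opt/cocoon-worker/bin/seal-server --enclave-path /opt/cocoon-worker/bin/enclave.signed.so
Restart=on-failure

[Install]
WantedBy=multi-user.target
Reload systemd and enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now cocoon-seal-server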
6 Launch Worker
After configuring worker.conf and starting seal-server, you can launch the worker in different modes.
Execution Modes
| Mode | Command |
|---|---|
| Production | ./scripts/cocoon-launch worker.conf |
| Test (real TON) | ./scripts/cocoon-launch --test worker.conf |
| Test (mock TON) | ./scripts/cocoon-launch --test --fake-ton worker.conf |
Command-line Overrides
./scripts/cocoon-launch \
--instance 0 \
--worker-coefficient 1500 \
--model Qwen/Qwen3-8B \
worker.conf
7 Multi-GPU Setup
Run multiple workers on a single server, each handling one GPU. Use the --instance flag to assign unique identifiers.
Port Assignment
| Instance | Port | CID |
|---|---|---|
| 0 | 12000 | 6 |
| 1 | 12010 | 16 |
| 2 | 12020 | 26 |
Launch Multiple Workers
# GPU 1
./scripts/cocoon-launch --instance 0 --gpu 0000:01:00.0 worker.conf &
# GPU 2
./scripts/cocoon-launch --instance 1 --gpu 0000:41:00.0 worker.conf &
Tip: Each instance gets separate persistent storage automatically.
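For more than two GPUs, the same pattern can be wrapped in a small loop. The PCI addresses below are placeholders; substitute your own (see lspci -D above):
# Launch one worker per GPU; instance numbers follow the array index
GPUS=(0000:01:00.0 0000:41:00.0 0000:81:00.0)
for i in "${!GPUS[@]}"; do
  ./scripts/cocoon-launch --instance "$i" --gpu "${GPUS[$i]}" worker.conf &
done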
8 Monitoring
Monitor worker health and performance through HTTP endpoints and the health-client utility.
HTTP Endpoints
| Endpoint | Description |
|---|---|
| /stats | Human-readable status |
| /jsonstats | JSON format for automation |
| /perf | Performance metrics |
curl http://localhost:12000/stats
curl http://localhost:12000/jsonstats | jq
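The JSON endpoint also lends itself to simple external monitoring. Below is a minimal sketch that warns when instance 0 (port 12000) stops answering; the port and interval are assumptions to adapt to your setup.
# Poll /jsonstats every 60 seconds and log a warning if the worker stops responding
while true; do
  curl -sf http://localhost:12000/jsonstats > /dev/null || \
    echo "$(date -Is) worker on port 12000 not responding" >&2
  sleep 60
done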
Health Client Commands
./health-client --instance worker status
./health-client -i worker gpu
./health-client -i worker logs cocoon-vllm 100
./health-client -i worker all
9 Troubleshooting
Common issues and their solutions when running Cocoon workers.
Worker Does Not Start
- Verify seal-server is running
- Check GPU is in CC mode
- Confirm VBIOS is updated
- Ensure TDX is enabled in BIOS
Diagnostic Commands
# Check GPU CC mode
nvidia-smi -q | grep "Confidential"
# Verify seal-server
ps aux | grep seal-server
# Check TDX
dmesg | grep -i tdx
# View logs
./health-client -i worker logs cocoon-vllm 50