Earn TON with Your GPU

Provide compute power to the Cocoon network and receive TON for processing AI inference requests. Secure, transparent, and fully automated.

How It Works

1. Download & Install

Download the Cocoon Worker distribution, unpack it, and configure your settings.

2. Configure GPU & Wallet

Set up your NVIDIA GPU for confidential computing and provide your TON wallet address.

3. Earn TON

Start the worker and receive automatic TON payments for every AI request processed.

Hardware Requirements

GPU Requirements

  • GPU: NVIDIA H100 / H200
  • VRAM: 80GB+
  • CC: Confidential Computing support required
  • Consumer RTX-series GPUs NOT supported for production

Server Requirements

  • CPU: Intel Xeon with TDX support
  • Linux kernel: 6.16+
  • QEMU: 10.1+
  • RAM: 128GB+

Software & Accounts

  • Updated VBIOS (via NVIDIA support)
  • Hugging Face token
  • TON wallet for payouts

Earnings Calculator

  • GPU options: NVIDIA H100 (80GB), NVIDIA H200 NVL (141GB)
  • Model options: Qwen3-0.6B (~8000 tok/s), Qwen3-8B (~3000 tok/s), Qwen3-32B (~1000 tok/s)
  • Example estimate: ~125 TON per month

These are estimated values. Actual earnings depend on network demand, model selection, and uptime.
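To illustrate the throughput side of the estimate, the sketch below computes tokens processed per month at a given rate and uptime. The 95% uptime figure is an assumption, and the per-token payout rate is set by the network, so this does not model TON earnings directly.

```shell
# Rough token-throughput arithmetic behind the estimates above.
# TOK_PER_SEC is the Qwen3-0.6B rate from the calculator; UPTIME is an
# assumed 95%. The per-token payout rate itself is set by the network.
TOK_PER_SEC=8000
UPTIME=0.95
SECONDS_PER_MONTH=$((60 * 60 * 24 * 30))
TOKENS_PER_MONTH=$(awk -v r="$TOK_PER_SEC" -v u="$UPTIME" -v s="$SECONDS_PER_MONTH" \
  'BEGIN { printf "%.0f", r * u * s }')
echo "Estimated tokens/month: $TOKENS_PER_MONTH"
```

Swap in the rate for your model from the list above to compare configurations.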

Quick Start

1. Download and unpack

wget https://ci.cocoon.org/cocoon-worker-release-latest.tar.xz
tar xJf cocoon-worker-release-latest.tar.xz
cd cocoon-worker

2. Start seal-server (required for production)

./bin/seal-server --enclave-path ./bin/enclave.signed.so

3. Configure worker

cp worker.conf.example worker.conf
# Edit configuration:
# owner_address = YOUR_TON_WALLET_ADDRESS
# gpu = 0000:01:00.0
# model = Qwen/Qwen3-0.6B
# hf_token = hf_xxx...

4. Start the worker

./scripts/cocoon-launch worker.conf

For detailed instructions, see the full setup guide.

How Payments Work

👤 Client Wallet → 🔄 Proxy Contract → ⚙️ Worker Contract → 💰 Your Wallet

Automatic Payouts

Smart contracts on TON blockchain handle all payments automatically. No manual claims needed.

Transparent & Verifiable

All transactions are recorded on-chain. View your earnings anytime via TON explorer.

Security & Privacy

Intel TDX

Workers run inside hardware-isolated virtual machines. Host cannot access request data.

Data Privacy

All prompts and responses are encrypted. You never see user data.

GPU Attestation

Hardware verification ensures only genuine GPUs participate in the network.

Smart Contracts

Open-source contracts on TON ensure fair and transparent payments.

Full Setup Guide

1 Prerequisites

Before starting, ensure your hardware and software meet the following requirements for running a Cocoon worker.

Hardware Requirements

  • GPU: NVIDIA H100 / H200 (Confidential Computing)
  • CPU: Intel Xeon (TDX support)
  • RAM: 128GB+ DDR5
  • Storage: NVMe SSD, 500GB+
  • Network: 1Gbps+, stable connection

Software Requirements

  • Linux: Kernel 6.16+ (TDX support)
  • QEMU: 10.1+ (TDX patches)
  • NVIDIA Driver: 560+

Note: Consumer GPUs (RTX series) are not supported due to lack of Confidential Computing capabilities.

2 VBIOS Update

GPU attestation requires a special CC-enabled VBIOS. Standard VBIOS versions do not support attestation features needed for production deployment.

Update Process

  • Contact NVIDIA Enterprise Support with your GPU serial number
  • Request CC-enabled VBIOS for your specific GPU model (H100/H200)
  • Follow NVIDIA instructions to flash the new VBIOS
  • Verify update with nvidia-smi and check CC mode availability

Warning: Incorrect VBIOS flashing can damage your GPU. Follow NVIDIA instructions carefully.

3 TDX Configuration

Intel Trust Domain Extensions (TDX) provides hardware-isolated virtual machines. TDX must be enabled in BIOS for production workers.

BIOS Settings

  • Enter BIOS setup (usually F2/Del on boot)
  • Navigate to Security or Advanced CPU settings
  • Enable "Intel TDX" or "Trust Domain Extensions"
  • Save and reboot

Verify TDX is Active

dmesg | grep -i tdx

4 Configuration File

The worker.conf file (INI format) contains all settings for your worker. Copy the example file and edit it with your values.

cp worker.conf.example worker.conf
nano worker.conf

Required Parameters

Parameter        Description
---------------  -----------------------------------------------------
type             Must be "worker"
model            AI model to serve (e.g., Qwen/Qwen3-0.6B)
owner_address    Your TON wallet address for payouts
gpu              PCI address of GPU (find with lspci | grep -i nvidia)
hf_token         Hugging Face API token for model download
node_wallet_key  Base64-encoded worker wallet private key

Optional Parameters

Parameter           Default  Description
------------------  -------  --------------------------------------------
instance            0        Worker instance number for multi-GPU setups
worker_coefficient  1000     Price multiplier (1000 = 1.0x, 1500 = 1.5x)
persistent          auto     Path to persistent disk image

Example Configuration

type = worker
model = Qwen/Qwen3-0.6B
owner_address = EQC...your_ton_address
gpu = 0000:01:00.0
hf_token = hf_xxxxxxxxxxxxx
node_wallet_key = base64_encoded_key
ton_config = ./mainnet-config.json
root_contract_address = EQ...

Tip: Use lspci | grep -i nvidia to find your GPU PCI address.
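As a quick pre-flight check, a small shell function (illustrative, not part of the distribution) can confirm that every required parameter from the table above is present in your config file:

```shell
# check_conf: warn about any required worker.conf parameter that is missing.
# Illustrative helper, not shipped with the release; the parameter names
# match the Required Parameters table above.
check_conf() {
  conf="$1"
  missing=0
  for key in type model owner_address gpu hf_token node_wallet_key; do
    if ! grep -qE "^[[:space:]]*${key}[[:space:]]*=" "$conf"; then
      echo "missing required parameter: $key"
      missing=1
    fi
  done
  return "$missing"
}
```

Run `check_conf worker.conf` before launching; it prints one line per missing key and returns non-zero if anything is absent.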

5 Seal Server

seal-server provides secure key derivation for TDX guests using an SGX enclave. It must run on the host before starting workers.

Why seal-server is Required

  • Derives cryptographic keys tied to the TDX image
  • Keys persist across reboots
  • Ensures host cannot access worker secrets

Start seal-server

./bin/seal-server --enclave-path ./bin/enclave.signed.so

Note: One seal-server instance can serve multiple workers. Keep it running in a separate terminal or as a systemd service.
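For unattended operation, a minimal systemd unit along these lines can keep seal-server running. The unit name and the /opt/cocoon-worker paths are illustrative assumptions; adjust them to wherever you unpacked the release.

```ini
# /etc/systemd/system/cocoon-seal-server.service (illustrative; paths assumed)
[Unit]
Description=Cocoon seal-server (SGX-backed key derivation for TDX workers)
After=network.target

[Service]
WorkingDirectory=/opt/cocoon-worker
ExecStart=/opt/cocoon-worker/bin/seal-server --enclave-path /opt/cocoon-worker/bin/enclave.signed.so
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now cocoon-seal-server, then start workers as usual.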

6 Launch Worker

After configuring worker.conf and starting seal-server, you can launch the worker in different modes.

Execution Modes

Mode             Command
---------------  -----------------------------------------------------
Production       ./scripts/cocoon-launch worker.conf
Test (real TON)  ./scripts/cocoon-launch --test worker.conf
Test (mock TON)  ./scripts/cocoon-launch --test --fake-ton worker.conf

Command-line Overrides

./scripts/cocoon-launch \
  --instance 0 \
  --worker-coefficient 1500 \
  --model Qwen/Qwen3-8B \
  worker.conf

7 Multi-GPU Setup

Run multiple workers on a single server, each handling one GPU. Use the --instance flag to assign unique identifiers.

Port Assignment

Instance  Port   CID
--------  -----  ---
0         12000  6
1         12010  16
2         12020  26
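The assignments above follow a simple arithmetic pattern: each instance is offset by 10 from base port 12000 and base CID 6. Expressed as shell functions for scripting around multiple instances:

```shell
# Port and CID for a given worker instance, matching the table above:
# each instance is offset by 10 from base port 12000 and base CID 6.
instance_port() { echo $((12000 + 10 * $1)); }
instance_cid()  { echo $((6 + 10 * $1)); }

instance_port 2   # prints 12020
instance_cid 2    # prints 26
```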

Launch Multiple Workers

# GPU 1
./scripts/cocoon-launch --instance 0 --gpu 0000:01:00.0 worker.conf &

# GPU 2
./scripts/cocoon-launch --instance 1 --gpu 0000:41:00.0 worker.conf &

Tip: Each instance gets separate persistent storage automatically.

8 Monitoring

Monitor worker health and performance through HTTP endpoints and the health-client utility.

HTTP Endpoints

Endpoint    Description
----------  --------------------------
/stats      Human-readable status
/jsonstats  JSON format for automation
/perf       Performance metrics

curl http://localhost:12000/stats
curl http://localhost:12000/jsonstats | jq

Health Client Commands

./health-client --instance worker status
./health-client -i worker gpu
./health-client -i worker logs cocoon-vllm 100
./health-client -i worker all

9 Troubleshooting

Common issues and their solutions when running Cocoon workers.

Worker Does Not Start

  • Verify seal-server is running
  • Check GPU is in CC mode
  • Confirm VBIOS is updated
  • Ensure TDX is enabled in BIOS

Diagnostic Commands

# Check GPU CC mode
nvidia-smi -q | grep "Confidential"

# Verify seal-server
ps aux | grep seal-server

# Check TDX
dmesg | grep -i tdx

# View logs
./health-client -i worker logs cocoon-vllm 50

Frequently Asked Questions

How much can I earn?
Earnings depend on your GPU model, the AI models you serve, network utilization, and uptime. Use our calculator above for estimates. As the network grows, demand and earnings potential are expected to increase.
Which GPUs are supported?
Currently, only NVIDIA H100 and newer GPUs with Confidential Computing support are accepted for production. Consumer GPUs (RTX series) are not supported due to lack of CC capabilities.
Is it safe for my hardware?
Yes. Cocoon runs AI inference workloads, not traditional mining. There's no excessive wear on your GPU. The TEE environment ensures secure, isolated execution.
How do I set up multiple GPUs?
Run separate worker instances for each GPU using the --instance flag. Each instance gets unique ports and handles one GPU. Example: ./scripts/cocoon-launch --instance 0 --gpu 0000:01:00.0 worker.conf
What is seal-server and why is it required?
seal-server provides secure key derivation for TDX guests using an SGX enclave. It ensures your worker's cryptographic keys survive reboots and are tied to the verified image. Without it, production workers won't start.
How do I update VBIOS for GPU attestation?
Contact NVIDIA support to request a CC-enabled VBIOS for your GPU model. Standard VBIOS versions may not support attestation. This is required for production deployment.
What if my worker doesn't start?
Check that: 1) seal-server is running, 2) GPU is in CC mode (use setup-gpu-vfio script), 3) VBIOS is updated, 4) TDX is enabled in BIOS. See the deployment docs for troubleshooting.