A decentralized network for secure, private AI inference. App developers reward GPU owners with TON for processing inference requests. Users get privacy; operators earn; everyone wins.

How it works:

1. Developer: submits a confidential inference request via the Cocoon SDK or API (a client-side sketch appears after this list).
2. Dispatcher: assigns the job to the best secure node, considering latency and capacity (see the scheduling sketch below).
3. Node: runs the model in a confidential environment; input and output stay private.
4. Settlement: encrypted results are returned to the app, and the TON payout is settled on-chain.
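
To make steps 1 and 4 concrete from the app's point of view, here is a minimal client-side sketch in TypeScript. The `CocoonClient` class, the `/v1/inference` endpoint, and the field names are assumptions for illustration, not the actual Cocoon SDK or API surface; the prompt is assumed to be encrypted before it leaves the app, and the result comes back encrypted.

```typescript
// Hypothetical client-side flow: submit an encrypted prompt, receive an encrypted result.
// Endpoint, headers, and types are illustrative assumptions, not the real SDK.

interface InferenceRequest {
  model: string;          // identifier of the model to run
  ciphertext: string;     // prompt encrypted client-side, base64-encoded
  paymentAddress: string; // TON wallet that funds the job
}

interface InferenceResult {
  jobId: string;
  ciphertext: string;     // encrypted output; only the requesting app can decrypt it
}

class CocoonClient {
  constructor(private baseUrl: string, private apiKey: string) {}

  // Submit a confidential inference request and return the encrypted result.
  async submit(req: InferenceRequest): Promise<InferenceResult> {
    const res = await fetch(`${this.baseUrl}/v1/inference`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify(req),
    });
    if (!res.ok) throw new Error(`inference request failed: ${res.status}`);
    return (await res.json()) as InferenceResult;
  }
}

// Usage (illustrative): const result = await new CocoonClient(url, key).submit(request);
```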
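
For step 2, a plausible scheduling heuristic is to score attested nodes by recent latency and free GPU capacity and pick the best one. The `NodeInfo` fields and the weighting below are assumptions, not the actual dispatcher policy.

```typescript
// Illustrative node selection: prefer low latency, reward free capacity,
// and only consider nodes with valid attestation on record.

interface NodeInfo {
  id: string;
  attested: boolean;    // hardware/config attestation verified
  latencyMs: number;    // recent round-trip latency to the node
  freeCapacity: number; // available GPU capacity as a fraction (0..1)
}

function pickNode(nodes: NodeInfo[]): NodeInfo | undefined {
  return nodes
    .filter((n) => n.attested && n.freeCapacity > 0)
    // Lower score is better: latency is penalized, free capacity discounts it.
    .map((n) => ({ node: n, score: n.latencyMs * (1 - 0.5 * n.freeCapacity) }))
    .sort((a, b) => a.score - b.score)[0]?.node;
}
```

A real dispatcher presumably weighs more signals, but the latency and capacity trade-off above is the part this step describes.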
Inputs, prompts, and outputs are processed in secure enclaves and never exposed to operators.
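
One way to picture that guarantee from the node's side: ciphertext comes in, ciphertext goes out, and the plaintext prompt exists only inside the enclave boundary. The sketch below assumes an AES-256-GCM session key already shared with the client (for example, wrapped to the enclave's attested key) and a `runModel` callback standing in for the inference engine; none of this is the actual Cocoon protocol.

```typescript
// Sketch of in-enclave handling: decrypt, run the model, re-encrypt.
// The key exchange and message framing are simplified assumptions.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function handleInsideEnclave(
  sessionKey: Buffer, // 32-byte AES-256-GCM key shared with the client
  iv: Buffer,
  ciphertext: Buffer,
  authTag: Buffer,
  runModel: (prompt: string) => string,
): { iv: Buffer; ciphertext: Buffer; authTag: Buffer } {
  // Plaintext exists only inside this function, i.e. inside the enclave.
  const decipher = createDecipheriv("aes-256-gcm", sessionKey, iv);
  decipher.setAuthTag(authTag);
  const prompt = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");

  const output = runModel(prompt);

  // Re-encrypt the output before anything leaves the enclave.
  const outIv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", sessionKey, outIv);
  const outCiphertext = Buffer.concat([cipher.update(output, "utf8"), cipher.final()]);
  return { iv: outIv, ciphertext: outCiphertext, authTag: cipher.getAuthTag() };
}
```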
Nodes register attested hardware/config; dispatchers enforce policies and record proofs.
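
A simplified sketch of that registration path: the node submits an attestation, the dispatcher checks it against an allow-list of approved enclave measurements, and a hash of the attestation is kept as a proof record. The data shapes, the allow-list policy, and the in-memory log are assumptions; in the real system, quote verification and on-chain anchoring would be more involved.

```typescript
// Illustrative registration and policy check for attested nodes.
import { createHash } from "node:crypto";

interface Attestation {
  nodeId: string;
  gpuModel: string;           // reported hardware
  enclaveMeasurement: string; // hash of the enclave image/config
  quote: string;              // hardware attestation quote (verification omitted here)
}

// Policy: only enclave builds whose measurement is on the allow-list may join.
const ALLOWED_MEASUREMENTS = new Set<string>([
  // measurements of approved enclave builds would be listed here
]);

const proofLog: { nodeId: string; proofHash: string; registeredAt: number }[] = [];

function registerNode(att: Attestation): boolean {
  if (!ALLOWED_MEASUREMENTS.has(att.enclaveMeasurement)) return false;

  // Record a compact proof of what was attested (in practice, anchored on-chain).
  const proofHash = createHash("sha256").update(JSON.stringify(att)).digest("hex");
  proofLog.push({ nodeId: att.nodeId, proofHash, registeredAt: Date.now() });
  return true;
}
```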