DEMO

Verifiable AI inference

This is a live demo of verifiable AI inference built on Caution, running inside a secure enclave. The conversation is end-to-end encrypted and decryptable only by you and the enclave, and you can cryptographically prove exactly what source code is running, down to the kernel.

Problem

Most confidential compute solutions can prove that a workload is running on genuine confidential compute hardware, and can attest to the hash of the deployed code, but they cannot tie that deployed code back to its originating source code. Many also handle encryption improperly, exposing data to untrusted parts of the system.

Solution

Caution combines verifiability with true end-to-end encryption: it proves what code runs inside the enclave and keeps conversations private.

How to verify

1. Install the Caution CLI

2. Run caution verify --attestation-url https://chat.caution.dev/attestation
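The core idea behind the verification step can be illustrated with a small conceptual sketch (this is not Caution's actual implementation or API): an attestation carries a measurement (hash) of the deployed code, and a verifier reproduces the build locally and checks that the measurements match.

```python
# Conceptual sketch only: ties a deployed-code measurement back to a
# local reproducible build. Names and logic here are illustrative
# assumptions, not Caution's real verification code.
import hashlib

def measure(artifact: bytes) -> str:
    # Stand-in for an enclave code measurement: a hash of the artifact.
    return hashlib.sha256(artifact).hexdigest()

def verify_attestation(attested_measurement: str, local_build: bytes) -> bool:
    # Rebuild from source locally and compare measurements.
    return measure(local_build) == attested_measurement

build = b"enclave image built reproducibly from source"
attested = measure(build)
print(verify_attestation(attested, build))            # True: build matches
print(verify_attestation(attested, b"tampered image"))  # False: mismatch
```

In practice the attested measurement is signed by the hardware vendor's key, which is what makes the comparison trustworthy; the sketch above shows only the hash-matching step.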

Verifiable AI Chat

Ask a question to see verifiable AI inference in action.

Powered by llama.cpp, running Phi-3.1-mini (3.8B). Runs on CPU today; GPU support is coming.