# LEGION Documentation (Public)

This page mirrors the Developer documentation with one exception: MCP access token generation requires login.

## Create an App

### Via Dashboard
- Log in to the Legion dashboard at https://legion-ai.org/developer
- Or start on the public registration page at https://legion-ai.org/register and sign in only after submitting the form
- Create a new app: enter your app name and domain (e.g., https://yourapp.com).
- You'll receive a client_id. Register your domain as the redirect URI.
- The server automatically appends /callback if no path is given, so https://yourapp.com becomes https://yourapp.com/callback. You can also register a custom path (e.g., /auth/callback) and it will be used as-is.
- Use the full redirect URI (with path) in your code.
- Response format is OpenAI-compatible.
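The append-`/callback` rule above can be sketched as a small helper. This is illustrative only, not Legion's server code, and `normalizeRedirectUri` is a hypothetical name:

```javascript
// Mimics the documented rule: append /callback only when the registered
// domain has no path; a custom path is kept as-is.
function normalizeRedirectUri(uri) {
  const url = new URL(uri);
  if (url.pathname === '/' || url.pathname === '') {
    url.pathname = '/callback';
  }
  return url.origin + url.pathname;
}
```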

### Via CLI
legion login
legion create-app --name "My App" --redirect-uris "https://myapp.com/callback"
# Returns client_id and client_secret (secret shown only once — save it!)

# Multiple redirect URIs (comma-separated):
legion create-app --name "My App" --redirect-uris "http://localhost:3000/callback,https://myapp.com/callback"

### Via MCP (AI Agents)
Use the legion_create_oauth_app tool with name and redirect_uris parameters.

## Key Concepts: OAuth App vs Railway Log Source

**OAuth App** = An OAuth application registration. Creating an app gives you a client_id and client_secret, which you use to integrate Legion's AI API into your product via the SDK or direct API calls. OAuth apps are for building AI-powered features.

**Railway Log Source** = A Railway deployment connection. Connecting a log source gives you access to deployment logs for debugging. Log sources are for monitoring and log access only — they are not required for API integration.

Most developers only need an OAuth App. Railway log sources are optional and only useful if you deploy on Railway and want log access via the CLI or MCP tools.

## Integration Approach

### Recommended: Login at Point of Use
Prompt users to connect their Legion wallet when they first try to use AI features (e.g., send a chat message). This provides the best UX: users only authenticate when they actually need it.

### Alternative: Dedicated Button
A "Connect Wallet" button in your UI is also acceptable if you prefer upfront authentication.

## OAuth Connect Flow (Manual)

Important notes:
- Only response_type=code is supported (authorization code flow)
- scope defaults to 'chat:completions' if omitted
- redirect_uri is NOT required in the token exchange body (Step 2), only client_id and code
- client_secret is optional for public clients (e.g., browser apps, CMS plugins)

// Step 1: Redirect user to the Legion consent page
// The /connect page reads client_id, redirect_uri, and response_type from query params
window.location.href = 'https://legion-ai.org/connect?client_id=YOUR_CLIENT_ID&redirect_uri=https://yourapp.com/callback&response_type=code';

// Step 2: User authorizes, Legion redirects to your callback with a code. Exchange it:
const res = await fetch('https://auth.legion-ai.org/v1/oauth/token', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    grant_type: 'authorization_code',
    code: code,
    client_id: 'YOUR_CLIENT_ID',
  }),
});
const { access_token, refresh_token, expires_in } = await res.json();
// access_token expires in 86400 seconds (24 hours)
// Store both tokens - use refresh_token to get new access tokens

// Step 3: Make API calls with user's token
const chatRes = await fetch('https://api.legion-ai.org/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${access_token}`,
  },
  body: JSON.stringify({
    // Optional: omit model to use your app's default model from the dashboard
    model: 'gpt-5-nano',
    messages: [{ role: 'user', content: 'Hello!' }],
    // Optional: inject user context for personalized responses
    user_preferences: true,  // user's AI preferences (e.g. "brief answers")
    user_information: true,  // demographics (age, location, occupation)
    user_context: true,      // query patterns (top topics, intent distribution)
  }),
});
const data = await chatRes.json();
const reply = data.choices[0].message.content;
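The Step 1 URL above is built by string concatenation. A sketch using URLSearchParams handles query encoding for you; `buildConnectUrl` is a hypothetical helper, not part of the SDK:

```javascript
// Build the consent-page URL with properly encoded query parameters.
function buildConnectUrl(clientId, redirectUri) {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    response_type: 'code', // only the authorization code flow is supported
  });
  return `https://legion-ai.org/connect?${params.toString()}`;
}
```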

---

## Available Models

Default model: your app's default model if configured in the dashboard; otherwise gpt-5-nano.
Developer margin: configurable markup on model costs (default 50%). You earn 75% of this margin; Legion retains 25%. Set via the dashboard, CLI (--margin), or PATCH /v1/apps/:clientId with developer_margin_pct.
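The margin split described above works out as a quick bit of arithmetic. This sketch is illustrative, not Legion's billing code, and assumes the margin is added on top of the model cost:

```javascript
// margin = modelCost * marginPct; the developer keeps 75% of it, Legion 25%.
function marginSplit(modelCostUsd, marginPct = 50) {
  const margin = modelCostUsd * (marginPct / 100);
  return {
    userPays: modelCostUsd + margin,
    developerEarns: margin * 0.75,
    legionRetains: margin * 0.25,
  };
}
```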

- OpenAI: gpt-5-nano, gpt-5-mini, gpt-5-pro, gpt-4o, gpt-4o-mini
- Anthropic: claude-opus-4-6, claude-sonnet-4-6, claude-opus-4-5, claude-sonnet-4-5, claude-haiku-4-5
- Google: gemini-2.0-flash, gemini-2.0-pro, gemini-2.5-flash, gemini-2.5-pro, gemini-3.1-pro-preview, gemini-3-flash-preview, gemini-3-pro-preview
- xAI: grok-4-1-fast-reasoning, grok-4-1-fast-non-reasoning, grok-4-fast-reasoning, grok-4-fast-non-reasoning, grok-4-0709

### List Models Programmatically
GET https://api.legion-ai.org/v1/models
Returns an OpenAI-compatible model list. No authentication required.

Example:
curl https://api.legion-ai.org/v1/models
// Response: { "object": "list", "data": [{ "id": "gpt-5-nano", "object": "model", ... }, ...] }

---

## User Context Injection (ON by Default)

Every API call automatically enriches the AI request with user-specific context for personalized responses. Three flags control what is injected — all default to true. To opt out, set a flag to false explicitly in your request body.

Flag: user_preferences (default: true)
- Source: User's AI Preferences setting
- Injects: Freeform preference text (e.g., "I prefer brief answers with sources")
- Opt out: set user_preferences: false

Flag: user_information (default: true)
- Source: User's profile demographics
- Injects: Age, sex, language, location, occupation, industry
- Opt out: set user_information: false

Flag: user_context (default: true)
- Source: User's query history (last 90 days, across all apps)
- Injects: Top 5 topics, intent distribution %, primary language, avg query length, total queries
- Opt out: set user_context: false

How it works:
- Context is appended to the system message before forwarding to the AI provider
- If no system message exists, one is created
- Empty sections are skipped automatically
- If user has no profile data, nothing is injected
- Zero additional latency when all flags are set to false
- Results are cached for 5 minutes
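The injection rules above can be sketched roughly as follows. This is illustrative pseudologic, not Legion's server implementation:

```javascript
// Append context to an existing system message, or create one if absent.
// Empty context is skipped, matching the documented behavior.
function injectContext(messages, contextText) {
  if (!contextText) return messages; // no profile data: nothing is injected
  const hasSystem = messages.some((m) => m.role === 'system');
  if (hasSystem) {
    return messages.map((m) =>
      m.role === 'system'
        ? { ...m, content: `${m.content}\n\n${contextText}` }
        : m
    );
  }
  // No system message exists, so one is created.
  return [{ role: 'system', content: contextText }, ...messages];
}
```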

### SDK Example
const response = await legion.chat(
  [{ role: 'user', content: 'Help me plan my week' }],
  {
    model: 'claude-sonnet-4-5',
    user_preferences: true,
    user_information: true,
    user_context: true,
  }
);

### Direct API Example
body: JSON.stringify({
  model: 'gpt-5-nano',
  messages: [{ role: 'user', content: 'Help me plan my week' }],
  user_preferences: true,
  user_information: true,
  user_context: true,
})

---

## App Knowledge Base (RAG Context)

Give your app domain-specific knowledge by uploading files. Content is automatically chunked, embedded, and retrieved at query time via hybrid search (semantic + keyword with Reciprocal Rank Fusion).

### Supported File Types
- PDF (.pdf)
- Word (.docx)
- PowerPoint (.pptx)
- Video (.mp4) — embedded as single unit
- Audio (.mp3) — embedded as single unit

### Upload Flow
1. Go to your app's detail page in the developer dashboard
2. Drag and drop files into the Knowledge Base section
3. Wait for status to change from "processing" to "ready"
4. Each file gets a context_id (e.g., ctx_abc123...) that you can copy

### Using context_ids in API Calls
By default, all app context is searched for every API call. To limit search to specific files, pass context_ids:

#### SDK Example
const response = await legion.chat(
  [{ role: 'user', content: 'What does our return policy say?' }],
  {
    model: 'gpt-5-nano',
    context_ids: ['ctx_abc123def456', 'ctx_789xyz'],
  }
);

#### Direct API Example
body: JSON.stringify({
  model: 'gpt-5-nano',
  messages: [{ role: 'user', content: 'What does our return policy say?' }],
  context_ids: ['ctx_abc123def456'],
})

### How It Works
- Files are chunked (~1000 tokens with overlap) and embedded using Gemini Embedding 2
- At query time, the user's message is embedded and used for hybrid search (pgvector cosine + tsvector keyword)
- Top 5 matching chunks are injected into the system message as "[App Knowledge Base]"
- Results are cached for 60 seconds
- If no context exists for the app, the search is skipped entirely (zero latency impact)

### Programmatic Upload (via API)
POST /v1/apps/:clientId/context/upload-url — get presigned upload URL
POST /v1/apps/:clientId/context/confirm — confirm upload, trigger processing
GET  /v1/apps/:clientId/context — list all context items
GET  /v1/apps/:clientId/context/:contextId — get item status
DELETE /v1/apps/:clientId/context/:contextId — delete item and chunks
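The first three endpoints above combine into a three-step upload flow, sketched below. The endpoint paths come from the list above, but the request and response field names (`filename`, `upload_url`, `context_id`) are assumptions; `fetchImpl` is injectable so the flow can be exercised offline:

```javascript
// Hypothetical sketch: presigned URL -> PUT bytes -> confirm to trigger processing.
async function uploadContext(clientId, file, accessToken, fetchImpl = fetch) {
  const headers = {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${accessToken}`,
  };
  // 1. Get a presigned upload URL
  const urlRes = await fetchImpl(
    `https://api.legion-ai.org/v1/apps/${clientId}/context/upload-url`,
    { method: 'POST', headers, body: JSON.stringify({ filename: file.name }) }
  );
  const { upload_url, context_id } = await urlRes.json();
  // 2. Upload the file bytes to the presigned URL
  await fetchImpl(upload_url, { method: 'PUT', body: file.data });
  // 3. Confirm the upload, which triggers chunking and embedding
  await fetchImpl(
    `https://api.legion-ai.org/v1/apps/${clientId}/context/confirm`,
    { method: 'POST', headers, body: JSON.stringify({ context_id }) }
  );
  return context_id; // poll GET .../context/:contextId until status is "ready"
}
```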

---

## SDK Quick Start

### Install the SDK
npm install @legionai/sdk

### Basic Usage
import { createLegion } from '@legionai/sdk';

const legion = createLegion({
  clientId: 'YOUR_CLIENT_ID',
  // Optional: callback when tokens are auto-refreshed (persist new tokens)
  onTokenRefresh: (tokens) => saveTokens(tokens),
});

// Connect user's wallet (redirectUri is passed here, not in createLegion)
legion.connect({ redirectUri: 'https://yourapp.com/callback' });

// After user returns to your callback, exchange code for tokens
const tokens = await legion.exchangeCode(code);
// tokens = { access_token, refresh_token, token_type, expires_in }

// Make API calls (uses your app default model if set; otherwise gpt-5-nano)
const response = await legion.chat([
  { role: 'user', content: 'Hello!' }
]);

// Specify a different model
const response2 = await legion.chat(
  [{ role: 'user', content: 'Hello!' }],
  { model: 'claude-sonnet-4-5' }
);

---

## Next.js Quick Start (@legionai/next)

Drop-in OAuth integration for Next.js App Router apps. Handles the full connect flow, stores tokens in httpOnly cookies, and provides React hooks for connection state.

### Install
npm install @legionai/next @legionai/sdk

### 1. Route Handler — app/api/auth/[...legion]/route.js
import { createLegionHandler } from '@legionai/next';

export const { GET, POST } = createLegionHandler({
  clientId: process.env.LEGION_CLIENT_ID,
  afterCallback: '/dashboard', // where to redirect after OAuth
});

### 2. Layout — app/layout.js
import { LegionProvider } from '@legionai/next';

export default function Layout({ children }) {
  return <html><body><LegionProvider>{children}</LegionProvider></body></html>;
}

### 3. Connect Button — any client component
'use client';
import { useLegion } from '@legionai/next';

export function ConnectButton() {
  const { connected, loading, connect, disconnect } = useLegion();
  if (loading) return null;
  return (
    <button onClick={connected ? disconnect : connect}>
      {connected ? 'Disconnect' : 'Connect Legion'}
    </button>
  );
}

### 4. Custom API Route — app/api/chat/route.js
import { getLegionSession } from '@legionai/next';

export async function POST(request) {
  const session = getLegionSession(request);
  if (!session) return Response.json({ error: 'Not connected' }, { status: 401 });

  const { messages } = await request.json();
  const res = await fetch('https://api.legion-ai.org/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${session.accessToken}`,
    },
    body: JSON.stringify({ model: 'gpt-5-nano', messages }),
  });
  return Response.json(await res.json());
}

### Handler Config Options
- clientId (required): Your Legion app client ID
- afterCallback (default: '/'): Redirect path after successful OAuth
- basePath (default: '/api/auth/legion'): Route prefix for the catch-all handler
- cookiePrefix (default: 'legion'): Prefix for cookie names
- scope (default: 'chat:completions'): OAuth scope

### How It Works
The catch-all route handles four endpoints automatically:
- GET /api/auth/legion/connect — Redirects to Legion consent page
- GET /api/auth/legion/callback — Exchanges code for tokens, sets cookies
- GET /api/auth/legion/status — Returns { connected: true/false }
- POST /api/auth/legion/disconnect — Clears cookies

Tokens are stored in httpOnly cookies (never exposed to client JavaScript). The React hook only reflects connection state — it does not access tokens directly.

---

## Authentication

### Token Lifecycle
- Access tokens expire after 24 hours (86400 seconds)
- The SDK automatically refreshes tokens on 401 errors using the refresh token
- When tokens are refreshed, the onTokenRefresh callback is called so you can persist the new tokens
- If refresh fails, the user must re-authorize via connect()
- Refresh tokens are single-use — each refresh call returns a new refresh token (rotation)
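A common way to apply the lifecycle above is to store an absolute expiry and refresh slightly early. This is a minimal sketch; the 60-second safety margin is a suggestion, not a Legion requirement:

```javascript
// Convert the relative expires_in (seconds) into an absolute timestamp (ms).
function tokenExpiry(expiresInSeconds, nowMs = Date.now()) {
  return nowMs + expiresInSeconds * 1000;
}

// Refresh a little before expiry to avoid racing a 401 on an in-flight call.
function shouldRefresh(expiryMs, nowMs = Date.now(), marginMs = 60_000) {
  return nowMs >= expiryMs - marginMs;
}
```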

### Manual Token Refresh (Without SDK)
If you're not using the SDK, refresh tokens manually:

const res = await fetch('https://auth.legion-ai.org/v1/oauth/token', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    grant_type: 'refresh_token',
    refresh_token: 'YOUR_REFRESH_TOKEN',
  }),
});
const { access_token, refresh_token, expires_in } = await res.json();
// IMPORTANT: Save the new refresh_token — it replaces the old one (single-use rotation)
// client_id is not required for refresh requests

### Restoring Sessions
// Load saved tokens (both access and refresh tokens)
legion.setTokens({
  accessToken: saved.access_token,
  refreshToken: saved.refresh_token,
});

---

## Error Handling

| Status | Code | Action |
|--------|------|--------|
| 401 | invalid_token | Token expired. SDK auto-refreshes; if that fails, call connect() |
| 403 | connection_inactive | User deactivated. Call connect() to reauthorize |
| 402 | limit_exceeded | Monthly limit reached. User can adjust in dashboard |
| 402 | insufficient_funds | Wallet empty. User needs to add funds |

### Recommended Pattern
For any authentication or connection error, call connect() to restart the OAuth flow:

try {
  const response = await legion.chat(messages);
} catch (error) {
  if (error.requiresReauth() || error.isDeactivated()) {
    // Reconnect - this handles reauth and reactivates deactivated connections
    legion.connect({ redirectUri: 'https://yourapp.com/callback' });
  } else if (error.isBillingError()) {
    showMessage('Please add funds to your Legion wallet');
  }
}

---

## Connection States

- Active: Normal operation. Make API calls freely.
- Inactive: User deactivated. Call connect(); reauthorization automatically reactivates.
- Revoked: Permanently disconnected. Call connect() to start fresh.
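The recovery action per state can be written as a simple lookup. This is illustrative only, not an SDK API:

```javascript
// Map each documented connection state to what your app should do next.
const RECOVERY = {
  active: 'none',      // make API calls freely
  inactive: 'connect', // reauthorization automatically reactivates
  revoked: 'connect',  // start a fresh connection
};

function recoveryAction(state) {
  return RECOVERY[state] ?? 'unknown';
}
```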

---

## API Reference

createLegion(config)
- Create a Legion SDK instance.
- Config: clientId, authUrl (default: https://auth.legion-ai.org), apiUrl (default: https://api.legion-ai.org), onTokenRefresh

legion.connect(options)
- Start OAuth flow (redirects user to consent page).
- Options: redirectUri, state, scope (default: 'chat:completions'), popup

legion.exchangeCode(code)
- Exchange authorization code for tokens.
- Returns: { access_token, refresh_token, token_type, expires_in }

legion.setTokens({ accessToken, refreshToken })
- Set tokens directly (e.g., from storage).

legion.chat(messages, options)
- Make chat completion request.
- Default model: your app default if set, otherwise gpt-5-nano. Auto-refreshes token on 401.
- Options: model, temperature, max_tokens, autoRefresh (default: true), user_preferences, user_information, user_context, context_ids

legion.refreshAccessToken()
- Manually refresh access token using refresh token.
- Calls onTokenRefresh callback with new tokens.

legion.isConnected()
- Returns true if SDK has an access token.

legion.disconnect()
- Clear all tokens from SDK.

---

## LegionError
All SDK errors are instances of LegionError with these properties:
- error.message: Human-readable message
- error.code: Error code (e.g., 'invalid_token')
- error.status: HTTP status code

Helper Methods
- error.requiresReauth() - Returns true if user needs to reconnect
- error.isDeactivated() - Returns true if user deactivated the connection
- error.isBillingError() - Returns true for wallet/limit issues (402)
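The documented shape can be mirrored in a small class. The SDK ships its own LegionError; this sketch just reflects the properties and helpers listed above, under the assumption that requiresReauth() corresponds to 401 responses:

```javascript
// Illustrative error class matching the documented LegionError surface.
class LegionErrorSketch extends Error {
  constructor(message, code, status) {
    super(message);
    this.code = code;     // e.g. 'invalid_token'
    this.status = status; // HTTP status code
  }
  requiresReauth() { return this.status === 401; }
  isDeactivated() { return this.code === 'connection_inactive'; }
  isBillingError() { return this.status === 402; }
}
```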

---

## AI Agent Quick Start

End-to-end flow for AI agents integrating a new app with Legion:

### 1. Setup
npm install -g @legionai/cli
legion login
legion create-app --name "My App" --redirect-uris "https://myapp.com/callback" --margin 50

### 2. Save Credentials
The create-app output includes client_id and client_secret. The secret is shown only once — save it immediately.

### 3. Integrate
npm install @legionai/sdk
Then use createLegion({ clientId }) in your app. See "SDK Quick Start" above for full usage.

### 4. What Happens by Default on Every API Call
When your app makes a POST /v1/chat/completions call, Legion automatically:
- Injects user context: user_preferences, user_information, and user_context all default to true. The user's AI preferences, demographics, and query patterns are added to the system message.
- Searches app knowledge base (RAG): If you've uploaded files (PDF, DOCX, PPTX, video, audio) to your app's knowledge base via the dashboard, every call automatically searches this content. Top 5 matching chunks are injected as [App Knowledge Base] in the system message. If no files are uploaded, this is skipped (zero latency).
- Applies developer margin: Your configured markup on model costs (default 50%). You earn 75% of this margin; Legion retains 25%.
- Routes to the right provider: All responses are OpenAI-compatible regardless of the underlying provider (OpenAI, Anthropic, Google, xAI).

### 5. How to Control Defaults
- Opt out of user context: set user_preferences: false, user_information: false, or user_context: false in the request body
- Limit RAG search to specific files: pass context_ids: ['ctx_abc123'] in the request body
- Change margin: legion update-app --client-id <id> --margin <pct>
- Change default model: configure in the dashboard, or pass model in each request
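Putting the request-body overrides above together in one place (values are illustrative):

```javascript
// One request body combining the documented opt-outs and a RAG file filter.
const body = {
  model: 'gpt-5-nano',
  messages: [{ role: 'user', content: 'Hello!' }],
  user_information: false,     // opt out of demographics injection
  user_context: false,         // opt out of query-pattern injection
  context_ids: ['ctx_abc123'], // limit knowledge-base search to one file
};
```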

### 6. View Full Docs
Run "legion docs" to print the complete documentation in your terminal.

### 7. (Optional) Connect Railway Log Source
legion connect-source --name "my-api" --project-id <id> --environment-id <id> --project-token <token>

## MCP Server (AI Agents)
Manage apps and access deployment logs from AI agents like Claude Code, Cursor, or Windsurf.

Transport: stdio (the MCP server communicates over stdin/stdout)

### Quick Setup
1. Install CLI: npm install -g @legionai/cli
2. Install MCP server: npm install -g @legionai/mcp
3. Login: legion login (opens browser, saves token to ~/.legion/config.json)
4. Add the config below to your editor's MCP settings file
5. Restart your editor / reload MCP servers

### Installation
npm install -g @legionai/cli    # CLI commands (login, app management, log sources)
npm install -g @legionai/mcp    # MCP server (for AI agent integration)

### Configuration (Login Required)
Generate your MCP access token in the logged-in dashboard:
Developer > Documentation > MCP Server > Get MCP Config

#### Claude Code (~/.config/claude/mcp.json)
{
  "mcpServers": {
    "legion": {
      "command": "legion-mcp",
      "env": {
        "LEGION_AUTH_SERVER_URL": "https://auth.legion-ai.org",
        "LEGION_ACCESS_TOKEN": "<LOGIN_REQUIRED_TOKEN>"
      }
    }
  }
}

#### Cursor (~/.cursor/mcp.json)
{
  "mcpServers": {
    "legion": {
      "command": "legion-mcp",
      "env": {
        "LEGION_AUTH_SERVER_URL": "https://auth.legion-ai.org",
        "LEGION_ACCESS_TOKEN": "<LOGIN_REQUIRED_TOKEN>"
      }
    }
  }
}

#### Windsurf (~/.windsurf/mcp.json)
{
  "mcpServers": {
    "legion": {
      "command": "legion-mcp",
      "env": {
        "LEGION_AUTH_SERVER_URL": "https://auth.legion-ai.org",
        "LEGION_ACCESS_TOKEN": "<LOGIN_REQUIRED_TOKEN>"
      }
    }
  }
}

### Available Tools

OAuth App Management:
- legion_list_oauth_apps: List all OAuth apps on your account
- legion_get_oauth_app: Get details of an OAuth app by client ID
- legion_create_oauth_app: Create a new OAuth app
- legion_update_oauth_app: Update OAuth app name or redirect URIs
- legion_delete_oauth_app: Delete an OAuth app (revokes all connections)
- legion_rotate_app_secret: Rotate an OAuth app's client secret (new secret shown once)
- legion_get_oauth_app_stats: Get usage stats (users, tokens, earnings)
- legion_get_oauth_app_transactions: Get transaction history (prompts, costs)

API Documentation:
- legion_get_api_docs: Get full API docs (request format, models, errors)

Deployment Logs (Railway):
- legion_list_railway_sources: List connected Railway log sources
- legion_connect_railway_source: Connect a Railway deployment as a log source
- legion_get_deployment_logs: Fetch deployment logs with filters (level, limit)
- legion_get_deployment_errors: Fetch error-level deployment logs only
- legion_disconnect_railway_source: Remove a Railway log source connection

### CLI Equivalents (install @legionai/cli)
All MCP tools have CLI counterparts. Examples:
  legion list-apps
  legion create-app --name "My App" --redirect-uris "https://myapp.com/callback"
  legion get-app lgn_abc123
  legion rotate-secret lgn_abc123
  legion app-stats lgn_abc123
  legion list-sources
  legion get-logs --source-id conn_xyz
  legion get-errors --source-id conn_xyz

Add --verbose or --debug to any command for HTTP request/response details:
  legion list-apps --verbose

Run "legion --help" for the full command reference.

### Example Prompts
- "List my OAuth apps and their stats"
- "Create a new OAuth app called my-saas with redirect URI https://myapp.com/callback"
- "Check my deployment logs for errors"
- "Connect my Railway deployment for debugging"

---

## CMS Integrations (WordPress & Drupal)

Add AI chat to your WordPress or Drupal site with zero code. Site visitors pay via their Legion Wallet, and you earn 75% of your developer margin.

### How It Works
1. Install the Legion plugin on your site
2. Enter your Client ID in the plugin settings
3. Site visitors connect their Legion Wallet to chat
4. You earn 75% of the developer margin on every API call

### Download Plugins
- WordPress: https://github.com/gregorymcgann/legion/releases/latest/download/legion-wordpress.zip (requires WordPress 5.8+ / PHP 7.4+)
- Drupal: https://github.com/gregorymcgann/legion/releases/latest/download/legion-drupal.zip (requires Drupal 10+ / PHP 8.1+)

### WordPress Plugin

#### Installation
1. Download legion-wordpress.zip from the link above
2. In WordPress Admin: Plugins > Add New > Upload Plugin
3. Upload and activate "Legion AI Connect"

#### Configuration
1. Go to Settings > Legion AI
2. Enter your Client ID (get it from legion-ai.org/developer)
3. Copy the displayed Redirect URI
4. Add this Redirect URI to your Legion app settings
5. Save changes

#### Usage
Add the chat widget using the shortcode:
`[legion_chat]`

Shortcode options:
- position: bottom-right, bottom-left, inline (default: bottom-right)
- theme: dark, light, auto (default: dark)
- placeholder: Custom input placeholder text

Example:
`[legion_chat position="inline" theme="light" placeholder="Ask me anything..."]`

### Drupal Module

#### Installation
1. Download legion-drupal.zip from the link above
2. Extract to /modules/contrib/legion_integration
3. Enable: drush en legion_integration

#### Configuration
1. Go to Admin > Configuration > Services > Legion AI
2. Enter your Client ID
3. Copy the Redirect URI and add to your Legion app
4. Save configuration

#### Usage
1. Go to Structure > Block Layout
2. Place the "Legion AI Chat" block in your desired region
3. Configure block settings (position, theme, placeholder)

### Creating a CMS App

When creating your app in the Legion dashboard:
1. Select "WordPress" or "Drupal" as the app type
2. The redirect URI field will auto-suggest the /legion-callback path
3. Enter your site's full redirect URI (e.g., https://mysite.com/legion-callback)

### Widget Features
- Automatic session management (tokens stored in localStorage)
- OAuth popup flow for seamless wallet connection
- Light/dark/auto theme support
- Responsive design (works on mobile)
- Auto error handling (token refresh, reauth prompts, billing alerts)
- Zero backend required — public client OAuth, no server-side code
- "Powered by Legion" footer with link

### Security Notes
- Uses Public Client OAuth flow (no client_secret in PHP)
- Tokens stored in browser localStorage
- Strict redirect URI matching enforced
- CORS enabled for cross-origin requests

---

## Chat AI (Hosted Chat Apps)

Create a hosted chat app on Legion — no SDK integration needed. Users chat directly at https://legion-ai.org/chat/:appId with automatic wallet connection.

### Creating a Chat App
1. Go to https://legion-ai.org/developer and click "Create a Chat AI"
2. Or use the public registration page at https://legion-ai.org/register-chat
3. Configure a system prompt in the app detail view
4. Share the chat link: https://legion-ai.org/chat/YOUR_APP_ID

Chat apps use app_type='chat' and appear with a purple "Chat" badge in the developer dashboard.

### Chat API Endpoints

All endpoints are under /v1/chat/:appId on the auth server (https://auth.legion-ai.org).

GET /v1/chat/:appId/info
- Public (no auth required)
- Returns chat app metadata: name, description, whether a system prompt is configured

POST /v1/chat/:appId/connect
- Requires Clerk authentication
- Auto-connects the user to the chat app and returns an access token
- If user is already connected, returns existing token

POST /v1/chat/:appId/conversations
- Requires auth
- Creates a new conversation, returns conversation ID

GET /v1/chat/:appId/conversations
- Requires auth
- Lists the user's conversations for this chat app

GET /v1/chat/:appId/conversations/:id/messages
- Requires auth
- Returns messages for a conversation

POST /v1/chat/:appId/conversations/:id/messages
- Requires auth
- Saves messages (user + assistant) to a conversation
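A sketch of calling the public info endpoint above. The URL comes from the endpoint list; response fields beyond name and description are not specified here, and `fetchImpl` is injectable so the sketch can run without a network:

```javascript
// Fetch public metadata for a hosted chat app (no auth required).
async function getChatAppInfo(appId, fetchImpl = fetch) {
  const res = await fetchImpl(`https://auth.legion-ai.org/v1/chat/${appId}/info`);
  if (!res.ok) throw new Error(`info request failed: ${res.status}`);
  return res.json();
}
```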