Overview

Logic MCP servers support asynchronous task execution, allowing AI platforms to start long-running tasks without blocking until they complete. Instead of waiting, the AI platform receives an execution ID and can check back later for results.

How It Works

When an AI platform executes a Logic document through MCP:
  1. Task starts immediately - Logic begins processing
  2. Execution ID returned - AI platform receives a unique identifier
  3. Status checking - Use check_execution_status tool with the execution ID
  4. Results retrieved - When complete, the tool returns the output

Example Flow

AI Platform → Start Logic document execution
Logic MCP → Returns execution_id: "exec_abc123"
AI Platform → check_execution_status("exec_abc123")
Logic MCP → Status: "running"
[wait a bit]
AI Platform → check_execution_status("exec_abc123")
Logic MCP → Status: "completed" + results
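
In code, this flow is a start-then-poll loop. The sketch below is a minimal Python example, not the definitive client: the call_tool helper stands in for whatever your MCP client session uses to invoke tools, and the execute_logic_document tool name and the execution_id / status / results fields in the responses are assumptions for illustration. Only the check_execution_status tool name comes from this page.

import time

def call_tool(name, arguments):
    # Hypothetical helper: send an MCP tool call through your client session
    # and return the parsed result. Replace with your client's real call.
    raise NotImplementedError

def run_and_wait(document_id, poll_interval=5.0, timeout=300.0):
    # Start the Logic document; the server returns an execution ID immediately.
    start = call_tool("execute_logic_document", {"document_id": document_id})
    execution_id = start["execution_id"]              # e.g. "exec_abc123"

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("check_execution_status", {"execution_id": execution_id})
        if status["status"] == "completed":
            return status["results"]                  # output available once done
        time.sleep(poll_interval)                     # still "running"; wait and retry
    raise TimeoutError(f"{execution_id} did not complete within {timeout}s")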

Running Multiple Tasks

The async model enables parallel execution. AI platforms can:
  • Start multiple Logic documents simultaneously
  • Poll each execution ID separately
  • Retrieve results as each task completes
Example: Process 10 images in parallel instead of sequentially
  • Without async: 10 × 30 seconds = 5 minutes total
  • With async: ~30 seconds total (parallelized)
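
The same pattern extends naturally to parallel runs. The sketch below uses Python's asyncio to start several executions at once and poll each execution ID independently, so total wall time is roughly the slowest single task rather than the sum of all tasks. The resize_image document name, image_url parameter, and response field names are hypothetical; only check_execution_status is from this page.

import asyncio

async def call_tool(name, arguments):
    # Hypothetical async MCP tool call; replace with your client's session call.
    raise NotImplementedError

async def run_one(document_id, arguments):
    # Start one execution and poll it until it completes.
    start = await call_tool("execute_logic_document",
                            {"document_id": document_id, **arguments})
    execution_id = start["execution_id"]
    while True:
        status = await call_tool("check_execution_status",
                                 {"execution_id": execution_id})
        if status["status"] == "completed":
            return status["results"]
        await asyncio.sleep(5)                        # poll each execution separately

async def process_images(image_urls):
    # Start all executions at once and gather results as they complete.
    tasks = [run_one("resize_image", {"image_url": url}) for url in image_urls]
    return await asyncio.gather(*tasks)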

Platform Support

Different AI platforms handle async tasks differently:
  • Claude platforms: Automatically poll and manage multiple executions
  • ChatGPT: Check platform documentation for async behavior
  • Custom clients: Implement check_execution_status tool and polling logic

Next Steps