Overview
Logic MCP servers support asynchronous task execution, allowing AI platforms to start long-running tasks without blocking until they complete. Instead of waiting for completion, the AI platform receives an execution ID and can check back later for results.

How It Works
When an AI platform executes a Logic document through MCP:

- Task starts immediately - Logic begins processing
- Execution ID returned - AI platform receives a unique identifier
- Status checking - Use the `check_execution_status` tool with the execution ID
- Results retrieved - When complete, the tool returns the output
Example Flow
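Below is a minimal TypeScript sketch of the flow above. The `callTool` parameter stands in for whatever tool-call method your MCP client exposes, and the `run_logic_document` tool name and the response field names (`executionId`, `state`, `output`, `error`) are illustrative assumptions; only `check_execution_status` is named on this page.

```typescript
// Sketch of the start-then-poll flow. Tool and field names other than
// check_execution_status are placeholders, not a confirmed API.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

export async function runLogicDocument(
  callTool: CallTool,
  documentId: string,
): Promise<unknown> {
  // Task starts immediately: the server begins processing and returns an
  // execution ID instead of blocking until the document finishes.
  const { executionId } = await callTool("run_logic_document", { documentId });

  // Poll check_execution_status with the execution ID until the task is done.
  while (true) {
    const status = await callTool("check_execution_status", { executionId });
    if (status.state === "completed") {
      return status.output; // results retrieved once the task completes
    }
    if (status.state === "failed") {
      throw new Error(`Execution ${executionId} failed: ${status.error}`);
    }
    await sleep(2000); // wait a couple of seconds between checks
  }
}
```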
Running Multiple Tasks
The async model enables parallel execution (see the sketch after this list). AI platforms can:

- Start multiple Logic documents simultaneously
- Poll each execution ID separately
- Retrieve results as each task completes

For example, with 10 documents that each take about 30 seconds:

- Without async: 10 × 30 seconds = 5 minutes total
- With async: ~30 seconds total (parallelized)
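A sketch of the parallel pattern, reusing the `CallTool` type and the `runLogicDocument` helper from the Example Flow sketch above: every document is started at once and polled independently, so total wall-clock time is roughly that of the slowest task.

```typescript
export async function runAllDocuments(
  callTool: CallTool,
  documentIds: string[],
): Promise<unknown[]> {
  // Start every document at once; each call polls its own execution ID, so
  // ten 30-second tasks finish in roughly 30 seconds rather than 5 minutes.
  return Promise.all(documentIds.map((id) => runLogicDocument(callTool, id)));
}
```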
Platform Support
Different AI platforms handle async tasks differently:

- Claude platforms: Automatically poll and manage multiple executions
- ChatGPT: Check platform documentation for async behavior
- Custom clients: Implement the `check_execution_status` tool call and polling logic (see the sketch below)
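For custom clients, the polling loop is the main piece to build. Here is a sketch with exponential backoff and an overall timeout, reusing the `CallTool` type from the earlier sketches; the status field names remain assumptions.

```typescript
export async function waitForExecution(
  callTool: CallTool,
  executionId: string,
  timeoutMs = 5 * 60 * 1000, // give up after five minutes by default
): Promise<unknown> {
  const deadline = Date.now() + timeoutMs;
  let delay = 1_000; // start at 1 second, backing off to a 15-second ceiling

  while (Date.now() < deadline) {
    const status = await callTool("check_execution_status", { executionId });
    if (status.state === "completed") return status.output;
    if (status.state === "failed") {
      throw new Error(`Execution ${executionId} failed: ${status.error}`);
    }
    await new Promise((resolve) => setTimeout(resolve, delay));
    delay = Math.min(delay * 2, 15_000);
  }
  throw new Error(`Timed out waiting for execution ${executionId}`);
}
```

Backing off between checks keeps long-running executions from generating a steady stream of status calls while still returning results promptly for short tasks.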
Next Steps
- Review document scope - Control which documents support async execution
- Explore integrations - See how platforms handle async tasks

