API Services
Learn to run Crew AI agents as API services using FastAPI or Flask. Expose your AI agents for real-time tasks like summarization, research, and analysis in your applications.
Running Crew AI Agents as API Services with FastAPI/Flask
Exposing Crew AI agents through an API service allows external systems, applications, or users to interact with them in real-time. By leveraging lightweight frameworks like FastAPI or Flask, you can deploy your agents as RESTful endpoints to process tasks such as summarization, research, analysis, or validation on demand.
This setup is ideal for SaaS platforms, dashboards, automation tools, and cloud-based workflows that require dynamic interaction with Crew AI-powered agents.
Why Use API Services for Crew AI?
Real-time Interaction: Enable immediate communication and task execution with your AI agents.
Application Integration: Seamlessly integrate AI capabilities into existing or new applications.
HTTP Accessibility: Make your agents accessible via standard HTTP endpoints, facilitating broad compatibility.
Scalable Deployment: Deploy agents on cloud servers or containers and scale them using load balancers and gateways.
Multi-tenancy & Frontend Support: Ideal for multi-tenant systems and providing interactive AI features for frontend applications.
Basic Requirements
Python: Version 3.10+ (recent Crew AI releases require at least Python 3.10)
Crew AI: Installed via
pip install crewai
Web Framework: FastAPI or Flask installed
Production Serving (Optional but Recommended): Uvicorn (for FastAPI) or Gunicorn (for Flask)
FastAPI Setup for Crew AI
Project Structure
A typical project structure for a FastAPI application with Crew AI might look like this:
crewai-api/
├── main.py # FastAPI application logic
├── agents.py # Agent definitions
├── requirements.txt # Project dependencies
main.py
Example with FastAPI
This example demonstrates a simple FastAPI application that exposes a summarization agent.
from fastapi import FastAPI, HTTPException, Request
from crewai import Agent, Crew, Task

app = FastAPI()

# Define agents
summarizer = Agent(
    role="Summarizer",
    goal="Summarize any given text clearly and concisely",
    backstory="You are an expert AI assistant skilled in extracting key insights and presenting them in a summarized format.",
    verbose=True,
    allow_delegation=False,
)

@app.post("/summarize/")
async def summarize_task(request: Request):
    """
    Summarizes the provided text using a Crew AI agent.
    Expects a JSON body with a 'text' field.
    Example: {"text": "Your text to summarize here..."}
    """
    try:
        body = await request.json()
        text_to_summarize = body.get("text", "")
        if not text_to_summarize:
            # In FastAPI, raise HTTPException to set a status code
            # (returning a (dict, status) tuple is Flask style and won't work here)
            raise HTTPException(status_code=400, detail="Please provide 'text' in the request body.")
        # Define the task for the agent
        summarize_task = Task(
            description=f"Summarize the following text:\n\n{text_to_summarize}",
            expected_output="A concise summary of the provided text.",
            agent=summarizer,
        )
        # Create and run the crew
        crew = Crew(
            agents=[summarizer],
            tasks=[summarize_task],
            verbose=True,  # recent Crew AI versions expect a boolean (older releases accepted 1 or 2)
        )
        result = crew.kickoff()
        # kickoff() may return a CrewOutput object, so cast it to str for JSON serialization
        return {"summary": str(result)}
    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

# To run this application locally for development:
# Save this as main.py and run: uvicorn main:app --reload
Running Locally (FastAPI)
To run this FastAPI application locally, save the code above as main.py
and execute the following command in your terminal within the project directory:
uvicorn main:app --host 0.0.0.0 --port 8000
This will start a development server accessible at http://localhost:8000.
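Once the server is up, any HTTP client can call the endpoint. Below is a minimal client sketch using only Python's standard library; the URL assumes the local development server started above, and the helper name is illustrative:

```python
import json
import urllib.request

def build_summarize_request(text, url="http://localhost:8000/summarize/"):
    """Build a POST request carrying a JSON body with a 'text' field."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_summarize_request("Artificial intelligence is transforming industries.")
# With the server running, urllib.request.urlopen(req) would send the request
# and return the JSON response containing the "summary" field.
```

The same request can of course be issued with curl or any HTTP library; only the JSON body shape matters to the endpoint.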
Flask Setup for Crew AI
Flask Version of API
Here's an equivalent implementation using Flask:
from flask import Flask, request, jsonify
from crewai import Agent, Crew, Task

app = Flask(__name__)

# Define agents
summarizer = Agent(
    role="Summarizer",
    goal="Summarize any given text clearly and concisely",
    backstory="You are an expert AI assistant skilled in extracting key insights and presenting them in a summarized format.",
    verbose=True,
    allow_delegation=False,
)

@app.route('/summarize', methods=['POST'])
def summarize():
    """
    Summarizes the provided text using a Crew AI agent via Flask.
    Expects a JSON body with a 'text' field.
    Example: {"text": "Your text to summarize here..."}
    """
    try:
        # silent=True returns None instead of raising on a malformed body
        data = request.get_json(silent=True) or {}
        text_to_summarize = data.get("text", "")
        if not text_to_summarize:
            return jsonify({"error": "Please provide 'text' in the request body."}), 400
        # Define the task for the agent
        summarize_task = Task(
            description=f"Summarize the following text:\n\n{text_to_summarize}",
            expected_output="A concise summary of the provided text.",
            agent=summarizer,
        )
        # Create and run the crew
        crew = Crew(
            agents=[summarizer],
            tasks=[summarize_task],
            verbose=True,  # recent Crew AI versions expect a boolean (older releases accepted 1 or 2)
        )
        result = crew.kickoff()
        # kickoff() may return a CrewOutput object, so cast it to str for JSON serialization
        return jsonify({"summary": str(result)})
    except Exception as e:
        return jsonify({"error": str(e)}), 500

if __name__ == "__main__":
    # For development, use Flask's built-in server.
    # For production, use Gunicorn or another WSGI server.
    app.run(host="0.0.0.0", port=8000, debug=True)
Running Locally (Flask)
To run the Flask application locally, save the code as app.py (or any other name) and run:
python app.py
Because the example calls app.run(host="0.0.0.0", port=8000), the server will be accessible at http://localhost:8000. (Without those arguments, Flask's development server defaults to http://127.0.0.1:5000/.)
requirements.txt
Ensure your requirements.txt file includes the necessary libraries:
crewai
fastapi
uvicorn  # For running FastAPI
flask  # If using Flask
# You might also need:
# python-dotenv for environment variables
# pydantic for FastAPI input validation
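As noted above, python-dotenv is handy for loading a .env file, but the underlying pattern is simply reading the process environment at startup. A minimal stdlib-only sketch (the helper name and the OPENAI_API_KEY variable are illustrative; use whichever key your LLM provider requires):

```python
import os

def require_env(name, env=None):
    """Fetch a required setting, failing fast at startup if it is missing."""
    env = os.environ if env is None else env
    value = env.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before starting the API service.")
    return value

# At startup, e.g. near the top of main.py:
# api_key = require_env("OPENAI_API_KEY")
```

Failing fast like this surfaces misconfiguration at deploy time rather than as a confusing 500 on the first request.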
API Request and Response Examples
Request Example
This is a generic example; the exact endpoint may vary based on your implementation (e.g., /summarize/ for FastAPI, /summarize for Flask).
Method: POST
Endpoint: /summarize (or /summarize/)
Body (JSON):
{
  "text": "Artificial intelligence is transforming industries with automation and predictive insights. Machine learning algorithms are at the core of many AI applications, enabling systems to learn from data and improve performance over time."
}
Response Example
Status Code: 200 OK
Body (JSON):
{
  "summary": "AI, driven by machine learning, is revolutionizing industries through automation and predictive insights, allowing systems to learn from data and enhance performance."
}
Deployment Options
Containerization (Docker): Package your application with its dependencies into a Docker image. This ensures consistency across environments.
Create a Dockerfile specifying the base image, dependencies, and how to run your application (e.g., using uvicorn or gunicorn).
Deploy the Docker container to cloud platforms like AWS (ECS, EKS), Google Cloud (GKE, Cloud Run), or Azure (AKS, App Service).
API Gateway / Reverse Proxy: Use services like AWS API Gateway, Google Cloud API Gateway, or self-hosted NGINX/Traefik. These can handle routing, authentication, rate limiting, and SSL termination.
Process Managers: For production environments, run your web server (Uvicorn/Gunicorn) using process managers like supervisor or PM2 to ensure it stays running, restarts on failure, and manages worker processes.
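A minimal Dockerfile for the FastAPI variant might look like this; the base image, port, and file layout are illustrative choices matching the project structure shown earlier:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Copying requirements.txt before the rest of the source lets Docker cache the dependency layer, so code-only changes rebuild quickly.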
Best Practices
Input Validation: Use libraries like Pydantic (with FastAPI) or Marshmallow (with Flask) to validate incoming request data, ensuring it conforms to the expected format and types.
Error Handling & Logging: Implement robust error handling to catch exceptions gracefully and log errors effectively for debugging and monitoring.
Rate Limiting: Protect your API and control LLM costs by implementing rate limiting on your endpoints. This prevents abuse and manages resource consumption.
Environment Variables: Store sensitive information (API keys, database credentials) and configuration settings in environment variables using libraries like python-dotenv.
Security: Secure your API endpoints using authentication mechanisms such as API keys, JWT (JSON Web Tokens), or OAuth.
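With FastAPI, Pydantic models give you input validation for free; the core idea, framework-agnostic, can be sketched in plain Python (the function name and character limit are illustrative):

```python
def validate_summarize_body(data, max_chars=20000):
    """Validate a /summarize request body; return (text, error).

    error is None when the body is valid, otherwise a message
    suitable for a 400 response.
    """
    if not isinstance(data, dict):
        return None, "Request body must be a JSON object."
    text = data.get("text")
    if not isinstance(text, str) or not text.strip():
        return None, "Please provide a non-empty 'text' string."
    if len(text) > max_chars:
        return None, f"'text' exceeds the {max_chars}-character limit."
    return text.strip(), None
```

Either endpoint above could call this before building the Task and map a non-None error to a 400 response; the length cap also doubles as a crude guard on LLM token costs.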
SEO Keywords
Crew AI FastAPI integration
Deploy Crew AI as an API
Build REST API for multi-agent AI
Flask AI API example
Crew AI summarizer endpoint
Expose Crew AI with HTTP REST
Real-time AI summarization with FastAPI
Crew AI and FastAPI production deployment
Potential Interview Questions
What is Crew AI, and how does it differ from other multi-agent frameworks like LangGraph or AutoGen?
Follow-up: Can you explain Crew AI’s role-based architecture?
How would you design an API to expose multiple Crew AI agents with different tasks like summarization, validation, and analysis?
Explain how you can handle real-time user interaction with a Crew AI agent through a RESTful API using FastAPI or Flask. What are the key components?
Describe a use case where Crew AI would be a better fit than LangGraph or AutoGen. Why?
How would you handle memory and context management in a Crew AI-powered agent system exposed via an API?
What are the pros and cons of using FastAPI over Flask for deploying LLM-based agents as web services?
How do you ensure input validation and secure API access when deploying a Crew AI-powered FastAPI service?
Explain how you would deploy a Crew AI app on the cloud (AWS/GCP/Azure). What production-grade setups would you recommend?
In a multi-agent setup exposed via API, how do you manage scalability and load balancing for high-volume traffic?
Describe a method to implement rate limiting in your API to prevent overuse of expensive LLM-based agents.
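For the last question, one common answer is a per-client token bucket kept in memory, which is sufficient for a single process (a shared store like Redis is needed once you scale to multiple workers). A sketch, with the class name and defaults being illustrative:

```python
import time

class TokenBucket:
    """Per-client token bucket: `rate` tokens refill per second,
    up to `capacity` stored. One token is spent per allowed request."""

    def __init__(self, rate=1.0, capacity=5):
        self.rate = rate
        self.capacity = capacity
        self.buckets = {}  # client_id -> (tokens, last_seen_timestamp)

    def allow(self, client_id, now=None):
        """Return True if the client may proceed, False if rate-limited."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(client_id, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[client_id] = (tokens - 1, now)
            return True
        self.buckets[client_id] = (tokens, now)
        return False
```

In the API layer, a denied `allow()` call would map to an HTTP 429 response; keying on an API key rather than an IP address lets you give each tenant its own budget for expensive LLM calls.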