Frequently Asked Questions¶
Quick answers to common questions
Find solutions and learn best practices for 400+ modules and 237 enterprise features
Enterprise Framework
Part of the most comprehensive AI agent framework. See Enterprise Documentation for advanced features.
Quick Navigation¶
- General: About the framework
- Getting Started: Installation and setup
- Troubleshooting: Common issues and fixes
- Usage: How-to questions
General Questions¶
What is AgenticAI Framework?¶
AgenticAI Framework is an enterprise-grade Python SDK with 400+ modules for building intelligent agentic applications. It provides a comprehensive toolkit including 237 enterprise modules for creating, managing, and orchestrating AI agents with advanced features like memory management, multi-agent collaboration, guardrails, and 12-tier evaluation systems.
Who should use AgenticAI Framework?¶
- AI/ML Engineers building production agent systems
- Software Developers creating AI-powered applications
- Data Scientists implementing agentic workflows
- Enterprise Teams requiring scalable agent orchestration
- Researchers experimenting with multi-agent systems
What are the system requirements?¶
- Python: 3.10 or higher (3.13+ recommended)
- Memory: Minimum 4GB RAM (8GB+ recommended)
- OS: Linux, macOS, or Windows
- Dependencies: See `requirements.txt`
How does it compare to other frameworks?¶
| Feature | AgenticAI | CrewAI | LangChain | AutoGPT |
|---|---|---|---|---|
| Total Modules | 400+ | ~20 | ~50 | ~30 |
| Enterprise Modules | 237 | None | Limited | None |
| Multi-Agent Orchestration | Yes | — | — | — |
| Memory Managers | 7 | Basic | — | — |
| State Managers | 7 | — | — | — |
| 12-Tier Evaluation | Yes | — | — | — |
| Guardrails & Safety | Yes | — | — | — |
| Production Monitoring | 16 Modules | — | — | — |
| MCP Tools Support | Yes | — | — | — |
| Agent Hub | Yes | — | — | — |
Getting Started¶
How do I install AgenticAI Framework?¶
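The install command below assumes the package is published under the same name as the project repository; adjust the package name if your distribution differs.

```bash
pip install agenticaiframework
```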
Do I need API keys?¶
Yes, for LLM providers:
- OpenAI:
OPENAI_API_KEY(Get key) - Anthropic:
ANTHROPIC_API_KEY(Get key) - Azure OpenAI:
AZURE_OPENAI_KEYandAZURE_OPENAI_ENDPOINT
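Export the keys in your shell before starting the framework. The variable names come from the list above; the values shown are placeholders.

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export AZURE_OPENAI_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
```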
What's the quickest way to create an agent?¶
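A minimal sketch of the idea: construct an agent with a name and role, then hand it a task. The `Agent` class and `run` method below are illustrative stand-ins, not the framework's actual API; see the Quick Start Guide for the real calls.

```python
from dataclasses import dataclass

# Illustrative stand-in for the framework's agent class.
@dataclass
class Agent:
    name: str
    role: str
    llm_model: str = "gpt-4"

    def run(self, task: str) -> str:
        # A real agent would call the configured LLM here.
        return f"[{self.name}] completed: {task}"

agent = Agent(name="researcher", role="Research assistant")
print(agent.run("Summarize the latest AI news"))
```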
Where can I find examples?¶
- Documentation: Examples Section
- GitHub: examples/ directory
- Tutorials: Quick Start Guide
Agent Questions¶
How many agents can I run simultaneously?¶
Default: 50 agents. Configurable via:
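The limit is raised through the orchestrator's configuration. The setting name `max_concurrent_agents` below is an assumption for illustration; check the configuration reference for the exact key.

```python
# Illustrative configuration object; the real setting name may differ.
class OrchestratorConfig:
    def __init__(self, max_concurrent_agents: int = 50):
        self.max_concurrent_agents = max_concurrent_agents

config = OrchestratorConfig(max_concurrent_agents=100)
print(config.max_concurrent_agents)  # 100
```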
Actual limit depends on: - Available system resources - Task complexity - LLM API rate limits
Can agents communicate with each other?¶
Yes! Multiple patterns:
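One common pattern is message passing through a shared bus: each agent has an inbox, and any agent can address another by name. The sketch below is a self-contained toy, not the framework's built-in bus.

```python
from collections import defaultdict, deque

# Minimal message bus illustrating direct agent-to-agent messaging.
class MessageBus:
    def __init__(self):
        self.inboxes = defaultdict(deque)

    def send(self, sender: str, recipient: str, content: str) -> None:
        self.inboxes[recipient].append((sender, content))

    def receive(self, agent: str):
        # Returns the oldest pending message, or None when the inbox is empty.
        return self.inboxes[agent].popleft() if self.inboxes[agent] else None

bus = MessageBus()
bus.send("planner", "coder", "Implement the parser")
print(bus.receive("coder"))  # ('planner', 'Implement the parser')
```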
How do I handle agent failures?¶
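The usual pattern is to wrap agent tasks in a retry loop with exponential backoff, so transient failures recover automatically while persistent failures still surface. A generic sketch, independent of the framework's own retry helpers:

```python
import time

def run_with_retry(task_fn, max_retries=3, base_delay=0.01):
    """Retry a failing agent task with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return task_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: propagate the failure
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retry(flaky))  # "ok" after two failed attempts
```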
Can I use custom LLMs?¶
Yes! Implement the LLM interface:
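The general shape is to subclass the framework's LLM base class and implement its generation method against your own model endpoint. The base-class and method names below (`BaseLLM`, `generate`) are illustrative assumptions; match them to the framework's actual interface.

```python
from abc import ABC, abstractmethod

# Illustrative interface; the framework's actual base class may differ.
class BaseLLM(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class MyCustomLLM(BaseLLM):
    def generate(self, prompt: str) -> str:
        # Call your own model endpoint here; echoed for demonstration.
        return f"echo: {prompt}"

llm = MyCustomLLM()
print(llm.generate("hello"))  # echo: hello
```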
Memory Questions¶
What memory backends are supported?¶
- In-Memory: Fast, volatile (default)
- Redis: Fast, persistent, distributed
- SQLite: Lightweight, file-based
- PostgreSQL: Production-grade, scalable
- MongoDB: Document-based
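All backends share the same get/set interface; you pick one at construction time. The sketch below implements only the in-memory case and is a stand-in for the framework's real memory classes.

```python
class MemoryStore:
    """Toy in-memory backend; swap the dict for Redis/SQLite/etc. in production."""
    def __init__(self, backend: str = "in_memory"):
        if backend != "in_memory":
            raise NotImplementedError(f"sketch only implements in_memory, not {backend}")
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

mem = MemoryStore()
mem.set("user_name", "Ada")
print(mem.get("user_name"))  # Ada
```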
How long is memory retained?¶
Configurable via TTL (Time-To-Live):
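A self-contained sketch of TTL-based expiry: each entry records an expiry time at write, and reads past that time return nothing. The class is illustrative, not the framework's memory API.

```python
import time

class TTLMemory:
    """Entries expire after ttl_seconds (Time-To-Live)."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        if key in self._data:
            value, expires = self._data[key]
            if time.monotonic() < expires:
                return value
            del self._data[key]  # lazily drop the expired entry
        return default

mem = TTLMemory(ttl_seconds=0.05)
mem.set("k", "v")
print(mem.get("k"))   # v
time.sleep(0.06)
print(mem.get("k"))   # None (expired)
```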
Can I search through memory?¶
Yes! Multiple search options:
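The simplest option is keyword matching over stored entries, sketched below; a production backend might add semantic (embedding-based) search on top. The function is illustrative, not the framework's search API.

```python
# Toy keyword search over stored memories.
memories = [
    "User prefers concise answers",
    "Project deadline is Friday",
    "User's favorite language is Python",
]

def search(query: str, items):
    q = query.lower()
    return [m for m in items if q in m.lower()]

print(search("user", memories))
```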
LLM Questions¶
Which LLM providers are supported?¶
- OpenAI: GPT-4, GPT-3.5
- Anthropic: Claude 3 (Opus, Sonnet, Haiku)
- Azure OpenAI: GPT-4, GPT-3.5
- Google: PaLM 2, Gemini (via API)
- Local Models: Via Ollama, LM Studio
How do I reduce LLM costs?¶
1. Use Cheaper Models
2. Enable Caching
3. Reduce Token Usage
4. Batch Requests
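Caching is often the quickest win: identical prompts are answered from memory instead of triggering a second paid API call. A minimal sketch using the standard library:

```python
from functools import lru_cache

# Counts stand-in "API calls" so the cache's effect is visible.
calls = {"n": 0}

@lru_cache(maxsize=1024)
def cached_completion(prompt: str) -> str:
    calls["n"] += 1          # stands in for a paid LLM call
    return f"answer to: {prompt}"

cached_completion("What is 2+2?")
cached_completion("What is 2+2?")   # served from cache, no second call
print(calls["n"])                    # 1
```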
Do you support streaming responses?¶
Yes!
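Streaming delivers the response as incremental chunks you consume in a loop, so users see output immediately. The generator below fakes a token stream to show the consumption pattern; a real provider yields tokens from the API.

```python
# Illustrative streaming loop; real providers yield tokens as they arrive.
def stream_response(prompt: str):
    for token in ["Stream", "ing ", "works", "!"]:
        yield token

chunks = []
for chunk in stream_response("hi"):
    chunks.append(chunk)      # a real app would print(chunk, end="")
print("".join(chunks))        # Streaming works!
```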
Can I use local models?¶
Yes, via Ollama:
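Ollama exposes a local HTTP API (port 11434 by default). The sketch below only builds the request payload so it runs without a local server; to actually generate, POST it as JSON to the URL shown. The model name `llama3` is just an example of a model you might have pulled.

```python
import json

# Ollama serves a local HTTP API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_request("llama3", "Say hello")
print(json.dumps(payload))
# To send: POST this JSON body to OLLAMA_URL with requests or urllib.
```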
Performance Questions¶
What's the typical latency?¶
| Operation | Latency (P50) | Latency (P95) |
|---|---|---|
| Agent Creation | < 10ms | < 50ms |
| Task Execution | 100-500ms | 1-2s |
| Memory Retrieval | < 5ms | < 20ms |
| LLM Call | 500ms-2s | 2-5s |
Latencies depend on LLM provider, task complexity, and system resources.
How do I optimize performance?¶
See Performance Guide for detailed strategies:
- Enable Caching
- Use Connection Pooling
- Batch Operations
- Async I/O
- Optimize Prompts
Can it handle production load?¶
Yes! Designed for production:
- Horizontal Scaling: Run multiple instances
- Load Balancing: Built-in support
- Circuit Breakers: Prevent cascade failures
- Rate Limiting: Protect APIs
- Monitoring: Prometheus/Grafana integration
Security Questions¶
Is it secure for production use?¶
Yes! Security features:
- Input Validation: Prevent injection attacks
- API Key Management: Secure secret handling
- Content Moderation: Guardrails for safe outputs
- Rate Limiting: Prevent abuse
- Audit Logging: Track all operations
How are API keys stored?¶
Never hardcode keys! Use:
1. Environment Variables (e.g., `export OPENAI_API_KEY=...` in your shell)
2. Secret Managers (e.g., AWS Secrets Manager, HashiCorp Vault)
Does it filter harmful content?¶
Yes, via Guardrails:
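A guardrail sits between the model and the user and blocks outputs that match denied patterns. The toy filter below illustrates the idea; the framework's Guardrails module offers richer policies than simple term matching.

```python
# Toy content filter: withhold outputs containing denied terms.
BLOCKED_TERMS = {"ssn", "password"}

def guard_output(text: str) -> str:
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld by guardrail]"
    return text

print(guard_output("Here is the weather report"))
print(guard_output("The password is hunter2"))   # withheld
```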
Testing Questions¶
How do I test my agents?¶
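The key technique is substituting a deterministic fake LLM so agent behavior is testable without API calls or network flakiness. A pytest-style sketch with illustrative class names:

```python
# A deterministic fake LLM makes agent behavior testable offline.
class FakeLLM:
    def __init__(self, reply: str):
        self.reply = reply
    def generate(self, prompt: str) -> str:
        return self.reply

class Agent:
    def __init__(self, llm):
        self.llm = llm
    def run(self, task: str) -> str:
        return self.llm.generate(task)

def test_agent_returns_llm_reply():
    agent = Agent(FakeLLM("42"))
    assert agent.run("meaning of life?") == "42"

test_agent_returns_llm_reply()  # pytest would collect and run this
```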
Are there built-in test utilities?¶
Yes!
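One such utility pattern is a recording agent that captures every task it receives so tests can assert on the exact sequence. The class below is a sketch in that spirit; the framework's real utility names are not shown here.

```python
class RecordingAgent:
    """Records every task it receives, for assertions in tests."""
    def __init__(self):
        self.tasks = []
    def run(self, task: str) -> str:
        self.tasks.append(task)
        return "done"

agent = RecordingAgent()
agent.run("step 1")
agent.run("step 2")
assert agent.tasks == ["step 1", "step 2"]
```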
What's the test coverage?¶
Current coverage: 66%
Run tests:
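Assuming the standard pytest layout with the `pytest-cov` plugin installed, coverage is reported with:

```bash
pytest tests/ --cov
```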
Troubleshooting¶
Agent not starting?¶
Common causes:
1. Missing API Key
2. Invalid Configuration
3. Resource Limits
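The first cause is the easiest to rule out with a startup check that fails fast when a required key is unset. A minimal sketch; extend `REQUIRED_VARS` with whichever provider keys your deployment uses.

```python
import os

# Quick startup diagnostic: report any missing required environment variables.
REQUIRED_VARS = ["OPENAI_API_KEY"]

def check_environment() -> list:
    return [v for v in REQUIRED_VARS if not os.environ.get(v)]

missing = check_environment()
if missing:
    print(f"Missing required environment variables: {missing}")
```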
Memory issues?¶
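Unbounded growth is the usual culprit; capping the store and evicting the oldest entries keeps usage flat. A self-contained LRU-style sketch (the framework's tiered storage and cleanup settings go further):

```python
from collections import OrderedDict

class BoundedMemory:
    """Evicts the oldest entry once max_entries is reached (LRU-style cap)."""
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)          # mark as most recently used
        while len(self._data) > self.max_entries:
            self._data.popitem(last=False)   # drop the oldest entry

    def get(self, key, default=None):
        return self._data.get(key, default)

mem = BoundedMemory(max_entries=2)
mem.set("a", 1); mem.set("b", 2); mem.set("c", 3)
print(mem.get("a"))  # None (evicted)
```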
LLM rate limiting?¶
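The standard remedy is exponential backoff on rate-limit errors: wait, then retry with a doubling delay. The sketch uses `RuntimeError` as a stand-in for a provider's 429 exception type.

```python
import time

def call_with_backoff(fn, max_retries=5, base_delay=0.01):
    """Retry on rate-limit errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError as err:          # stand-in for a 429 error
            if "rate limit" not in str(err) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

attempts = {"n": 0}
def rate_limited():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limit exceeded")
    return "success"

print(call_with_backoff(rate_limited))  # success
```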
Where can I get help?¶
- Documentation: https://isathish.github.io/agenticaiframework/
- GitHub Issues: Report bugs
- Discussions: Ask questions
- Discord: Coming soon!
Advanced Topics¶
Can I deploy to production?¶
Yes! See Deployment Guide:
- Docker: Containerized deployment
- Kubernetes: Scalable orchestration
- AWS/Azure/GCP: Cloud deployment
- Serverless: Lambda/Functions
How do I monitor in production?¶
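At minimum you instrument each task with a call counter and a timer, then export those metrics to Prometheus/Grafana. A self-contained sketch of that instrumentation pattern (not the framework's monitoring modules):

```python
import time
from collections import Counter

# Minimal metrics collection; production code would export these
# to a Prometheus endpoint for Grafana dashboards.
metrics = Counter()

def timed_task(name, fn):
    start = time.perf_counter()
    try:
        return fn()
    finally:
        metrics[f"{name}_calls"] += 1
        metrics[f"{name}_seconds"] += time.perf_counter() - start

timed_task("summarize", lambda: "done")
print(metrics["summarize_calls"])  # 1
```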
Can I create custom evaluators?¶
Yes!
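An evaluator takes a response and returns a score; custom ones plug into the evaluation tiers. The class below is an illustrative example scoring on length; the framework's real evaluator base class is not shown here.

```python
# Illustrative custom evaluator scoring response length.
class LengthEvaluator:
    """Scores 1.0 if within the target length, scaled down otherwise."""
    def __init__(self, max_words: int = 50):
        self.max_words = max_words

    def evaluate(self, response: str) -> float:
        words = len(response.split())
        return 1.0 if words <= self.max_words else self.max_words / words

ev = LengthEvaluator(max_words=5)
print(ev.evaluate("short and sweet"))  # 1.0
print(ev.evaluate("this response is far too long for the limit"))
```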
Is there enterprise support?¶
Currently open-source. Enterprise support coming soon:

- Priority support
- Custom features
- SLA guarantees
- Training & onboarding
Best Practices¶
Agent Design¶
- Single responsibility per agent
- Clear role definition
- Comprehensive error handling
- Appropriate timeout values
- Don't create too many agents
- Avoid circular dependencies
Memory Management¶
- Set appropriate TTLs
- Use tiered storage (hot/warm/cold)
- Regular cleanup of stale data
- Monitor memory usage
- Don't store large objects
- Avoid memory leaks
LLM Usage¶
- Cache responses
- Use appropriate models
- Optimize prompts
- Handle rate limits
- Don't use GPT-4 for simple tasks
- Avoid redundant API calls
Additional Resources¶
- Quick Start: Get started in 5 minutes
- API Reference: Complete API docs
- Examples: Real-world examples
- Best Practices: Production guidelines
- Architecture: System design
- Contributing: Contribution guide
Still Have Questions?¶
Can't find your answer?
- Search the documentation
- Check existing issues
- Ask in discussions
- Open a new issue