About
We build multi-agent systems that actually ship
Neul Labs is an AI engineering consultancy focused on production-grade multi-agent systems. We got tired of watching beautiful CrewAI prototypes crumble under real-world load — so we wrote the acceleration layer ourselves.
What we believe
Frameworks should stay out of your way. The right answer to "my CrewAI is slow" is almost never "rewrite everything in LangGraph." It is usually: fix the hot paths. That's why Fast-CrewAI is a drop-in rather than a fork.
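To make "drop-in" concrete: the accelerated paths replace a framework's hot methods in place, so calling code never changes. A minimal sketch of that pattern with stand-in names (this is an illustration of the technique, not the actual Fast-CrewAI mechanism or API):

```python
import json

class Framework:
    """Stand-in for a framework class with a slow hot path."""
    def serialize(self, obj):
        return json.dumps(obj)

def fast_serialize(self, obj):
    # Stand-in replacement; in Fast-CrewAI the real hot path is Rust via PyO3
    return json.dumps(obj, separators=(",", ":"))

# Drop-in: patch the hot path on the existing class; callers change nothing
Framework.serialize = fast_serialize

out = Framework().serialize({"a": 1})
```

Because the patch targets only the hot path, everything else keeps the framework's original behavior, which is what lets a drop-in stay compatible where a fork drifts.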
Benchmarks that aren't reproducible don't count. Every number we publish runs against a pinned CrewAI version, in a public benchmark suite, with 101 compatibility tests gating each release.
Rust where it earns its keep. We use PyO3, serde, Tokio, and SQLite FTS5 only for the paths where Python overhead dominates — serialization, search, and tool execution caching. The rest stays idiomatic Python.
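The memory-search path mentioned above builds on SQLite's FTS5 extension with BM25 ranking, which is available directly from Python's standard `sqlite3` module on most builds. A small sketch of that kind of search (table and row contents are illustrative):

```python
import sqlite3

# In-memory DB; requires an SQLite build with FTS5 (standard in most CPython distributions)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memory USING fts5(content)")
conn.executemany(
    "INSERT INTO memory(content) VALUES (?)",
    [
        ("agent retried the API call twice",),
        ("user asked for a quarterly revenue summary",),
        ("tool output cached for the weather lookup",),
    ],
)
# bm25() returns a rank where lower means a better match,
# so ordering by it ascending surfaces the best hits first
rows = conn.execute(
    "SELECT content, bm25(memory) FROM memory WHERE memory MATCH ? ORDER BY bm25(memory)",
    ("revenue",),
).fetchall()
best_match = rows[0][0]
```

FTS5 maintains the inverted index inside SQLite itself, which is why it can beat a pure-Python scan over agent memory by a wide margin.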
Fast-CrewAI in numbers
- 34.5× faster JSON serialization via serde (80,525 ops/s)
- 17.3× faster tool execution with result caching
- 11.2× faster memory search via SQLite FTS5 + BM25
- 101 compatibility tests passing against crewai==1.7.2
- 99% less memory on tool execution hot paths
- MIT licensed and open source on GitHub
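The tool-execution speedup in the list above comes from result caching: a tool call with the same arguments is executed once and replayed from cache afterward. A minimal Python sketch of the idea (function and variable names are illustrative, not the Fast-CrewAI API):

```python
import hashlib
import json

_cache: dict[str, object] = {}

def cached_tool(tool_name, args, run):
    """Run `run(args)` once per unique (tool_name, args) pair; replay from cache after."""
    # Stable cache key: tool name plus canonical JSON of the arguments
    key = hashlib.sha256(
        (tool_name + json.dumps(args, sort_keys=True)).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = run(args)
    return _cache[key]

calls = 0
def slow_lookup(args):
    global calls
    calls += 1  # counts how often the underlying tool actually runs
    return f"weather in {args['city']}: sunny"

first = cached_tool("weather", {"city": "Oslo"}, slow_lookup)
second = cached_tool("weather", {"city": "Oslo"}, slow_lookup)
```

Hashing a canonical serialization of the arguments is what makes the key stable across dict orderings; any real implementation also needs a policy for tools whose results must not be cached.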
How to work with us
Most engagements start with a one-week performance audit. From there, we usually move into an implementation sprint to ship the fix list. For teams running CrewAI in revenue-critical production, we also offer retained engineering.
Open source
Fast-CrewAI is MIT-licensed and lives at github.com/neul-labs/fast-crewai. Issues, discussions, and pull requests are welcome. The goal is for the whole CrewAI ecosystem to be faster — not just our clients'.
Ready to make CrewAI faster?
Talk to the team that wrote the acceleration layer. We take on performance audits, full system builds, and retained engineering.