Migrating to Fast-CrewAI: the zero-code-changes playbook
What the one-line import actually does, how to verify it's active, how to toggle individual components, and how to roll back if something breaks. The honest migration guide.
“Zero code changes” is a claim that invites skepticism, and it should. This guide walks through exactly what happens when you add import fast_crewai.shim to a CrewAI project — what gets patched, what doesn’t, how to verify it worked, and how to back out if it didn’t.
The one-liner
import fast_crewai.shim # must run before `from crewai import ...`
from crewai import Agent, Task, Crew
agent = Agent(role="Analyst", goal="Summarize quarterly metrics")
task = Task(description="Extract KPIs from the report", agent=agent)
crew = Crew(agents=[agent], tasks=[task], memory=True)
crew.kickoff()
That’s the whole migration. No configuration files, no decorators on your tools, no subclasses to swap. The rest of this guide is about understanding why that works and how to verify it.
What the shim actually does
When fast_crewai.shim is imported, it runs a small bootstrap that replaces these CrewAI classes, inside their already-loaded modules in sys.modules, with accelerated versions:
- crewai.memory.storage.rag_storage.RAGStorage
- crewai.memory.short_term.ShortTermMemory
- crewai.memory.long_term.LongTermMemory
- crewai.memory.entity.EntityMemory
- crewai.memory.storage.ltm_sqlite_storage.LTMSQLiteStorage
- crewai.tools.base_tool.BaseTool
- crewai.task.Task
- crewai.crew.Crew
The replacement happens via dynamic inheritance — the Fast-CrewAI classes subclass their CrewAI counterparts at runtime, override the hot-path methods with Rust-backed implementations, and put themselves back into the module namespace. When your code later writes from crewai import Agent, Task, Crew, Python’s import machinery finds the patched classes.
That is why your code doesn’t change. It’s also why import fast_crewai.shim has to run before any from crewai import ... statement — if CrewAI has already been imported, your references are already bound to the original classes, and patching won’t reach them.
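The mechanism can be sketched with a toy example. This is generic Python illustrating runtime subclassing and module-namespace patching, not Fast-CrewAI's actual internals; the module and class names here are stand-ins:

```python
import types

# Stand-in for a crewai module: a module object holding the original class.
crewai_task = types.ModuleType("crewai_task")

class Task:  # the original, slow implementation
    def run(self):
        return "slow"

crewai_task.Task = Task

def install_shim(module):
    """Subclass the original at runtime, override the hot path,
    and write the subclass back into the module namespace."""
    class FastTask(module.Task):
        def run(self):  # Rust-backed in the real library
            return "fast"
    FastTask.__name__ = module.Task.__name__  # keep the public name
    module.Task = FastTask

install_shim(crewai_task)

# A name bound AFTER the shim resolves to the patched class...
PatchedTask = crewai_task.Task
print(PatchedTask().run())            # the overridden hot path runs
print(issubclass(PatchedTask, Task))  # still a Task, so isinstance checks hold
```

A name bound before install_shim ran would still point at the original class, which is exactly why the shim import has to come first.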
Verifying it worked
Add a quick check at startup:
import fast_crewai.shim
from crewai import Task
print("Patched class:", Task.__mro__[1].__name__ if "fast_crewai" in str(Task.__mro__) else "NOT patched")
Or, more directly:
import fast_crewai
print("Fast-CrewAI version:", fast_crewai.__version__)
print("Rust extensions:", fast_crewai.rust_available())
If rust_available() returns False, you’re running on the pure-Python fallback and the speedups won’t apply. The most common cause is missing a pre-built wheel for your platform — usually fixed by pip install --upgrade fast-crewai or by building from source with maturin develop --release.
Toggling individual components
Sometimes you want serialization acceleration but not tool caching, or you want to A/B test a single component in production. Fast-CrewAI honors environment variables at import time:
# Master switch
export FAST_CREWAI_ACCELERATION=1 # default
# Per-component switches
export FAST_CREWAI_MEMORY=true
export FAST_CREWAI_TOOLS=false # disable tool caching
export FAST_CREWAI_DATABASE=true
Setting FAST_CREWAI_ACCELERATION=0 globally is the correct way to temporarily disable the shim for debugging — it’s the closest thing to “undoing the import” without actually removing the line.
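The toggle semantics can be sketched like this. The helper below is hypothetical and only mirrors the documented variables; the real parsing lives inside Fast-CrewAI and may differ:

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def component_enabled(name: str, default: bool = True) -> bool:
    """Check a per-component switch, respecting the master switch.

    FAST_CREWAI_ACCELERATION gates everything; FAST_CREWAI_<NAME>
    gates one component. Unset per-component variables fall back
    to the default (enabled).
    """
    master = os.environ.get("FAST_CREWAI_ACCELERATION", "1")
    if master.strip().lower() not in TRUTHY:
        return False  # master switch off disables all components
    value = os.environ.get(f"FAST_CREWAI_{name.upper()}")
    if value is None:
        return default
    return value.strip().lower() in TRUTHY

os.environ["FAST_CREWAI_TOOLS"] = "false"
print(component_enabled("tools"))   # explicitly disabled
print(component_enabled("memory"))  # defaults on
```

Because the variables are read at import time, changing them in a running process has no effect; restart the worker after flipping a switch.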
Pre-migration checklist
Before flipping the shim on in production, walk through this list:
- Pin your CrewAI version. Fast-CrewAI is tested against specific CrewAI versions. Pin to the same one the release notes mention.
- Run your test suite once with FAST_CREWAI_ACCELERATION=0. Make sure it passes first — you want a clean baseline.
- Run your test suite again with acceleration on. All tests should still pass. The 101 compatibility tests in the Fast-CrewAI repo give us confidence, but your tests are the ones that matter.
- Check your tool caching assumptions. If your tools have side effects or return time-sensitive data, make sure you’re not caching them accidentally. Cache is opt-in, but double-check.
- Run a single workflow under a profiler (cProfile, py-spy, austin) to confirm the speedup is real on your workload. Nothing beats seeing the gain in your own traces.
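A minimal cProfile run over one workflow might look like this. run_one_workflow is a placeholder for your own entry point (for example, a crew.kickoff() call on a representative input):

```python
import cProfile
import io
import pstats

def run_one_workflow():
    # Placeholder workload; substitute your actual crew run here.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
run_one_workflow()
profiler.disable()

# Report the 10 most expensive functions by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Run it once with FAST_CREWAI_ACCELERATION=0 and once with it on, and compare where the serialization and memory frames land in the two reports.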
What will not change
Your code:
- Agent definitions are untouched.
- Task definitions are untouched.
- Tool definitions are untouched (unless you opt into caching).
- Your custom memory backends, if you have them, are untouched unless they inherit from the patched classes.
- Your LLM provider, prompts, and prompt templates are untouched.
Your observability:
- Logs still look the same.
- Metrics still come from the same places.
- Tracing spans still attribute to CrewAI class names.
The only thing that changes is how fast the hot paths run.
What might surprise you
A few things we’ve seen catch people out during migrations:
- Import ordering. If you have a module that imports CrewAI at the top and fast_crewai.shim somewhere deeper, patching won’t land. Put the shim import at the very top of your entry point.
- Long-running workers. If you have a long-lived worker process that imported CrewAI before Fast-CrewAI was installed, restart the worker. The patch only applies to modules loaded after the shim.
- Monkeypatching collisions. If you’re also using a library that monkeypatches CrewAI, the order matters. Put fast_crewai.shim first, then the other patches.
- Tool caching + mutable arguments. If your tool takes a list or dict that gets mutated between calls, caching won’t behave the way you want. Use immutable arguments or explicitly disable caching for those tools.
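The mutable-argument pitfall is the same one you hit with any cache keyed on call arguments. Here it is with the standard library's functools.lru_cache, not Fast-CrewAI's cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def summarize(items):
    return f"{len(items)} items, max={max(items)}"

# Lists aren't hashable, so they can't be cache keys at all:
try:
    summarize([3, 1, 4])
except TypeError as exc:
    print("uncacheable:", exc)

# Freeze mutable arguments into tuples instead. Each distinct value
# gets its own cache entry, so later mutations can't serve stale results.
data = [3, 1, 4]
first = summarize(tuple(data))
data.append(9)                  # mutation between calls
second = summarize(tuple(data)) # new tuple, new key, recomputed
print(first, "|", second)
```

The same reasoning applies to tool caching: freeze inputs at the call boundary, or turn caching off for tools whose inputs mutate.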
Rolling back
If something breaks, rolling back is easy:
- Set FAST_CREWAI_ACCELERATION=0 in the environment. Your next deploy will run on pure CrewAI.
- If that’s not enough, remove the import fast_crewai.shim line. Your code is already CrewAI code — there’s nothing else to undo.
- Open an issue. We care about compatibility, and the 101-test suite exists specifically to catch regressions.
After the migration
- Watch your P99 latency, not just the mean. Serialization and memory gains show up most dramatically at the tail.
- Watch memory usage. The tool execution memory savings are the biggest practical win for most teams.
- Run make benchmark in the repo periodically to see how your gains compare to the published numbers.
Going deeper
- Why CrewAI gets slow — root-cause analysis of the bottlenecks Fast-CrewAI targets.
- Benchmarks explained — methodology behind the 34×/17×/11× numbers.
- Production architecture patterns — structural choices that amplify the gains.