Fast-CrewAI
Issue commentary · tools · severity: low

Tool argument validation is slower than the tools themselves

CrewAI validates tool arguments with Python's jsonschema on every call. For simple tools that return instantly, validation can be 70% of the call cost.

Neul Labs

The symptom

You profile a tool that reads a value from an in-memory dict and notice it takes 400 microseconds per call. The actual dict lookup takes 200 nanoseconds. You dig into the flamegraph and the rest of the time is in jsonschema validation.

Why this happens

CrewAI uses Pydantic/jsonschema to validate tool arguments against the tool’s declared schema. For complex schemas with nested objects this is cheap relative to the tool’s own work. For trivial schemas with flat primitives, the validation overhead is a large multiple of the tool body’s cost. You pay it on every call.

This is a death-by-a-thousand-papercuts problem. Each call is only 400 microseconds, but a long-running crew can make tens of thousands of tool calls, and the cumulative validation cost shows up as “the framework is slow” in your P95 metrics.

Why this persists upstream

Validation is load-bearing. If CrewAI skipped argument validation to save time, malformed LLM tool calls would propagate deeper into your code and cause hard-to-debug failures later. The safe default is to validate everything, always. The optimization opportunity is to validate faster, not to skip validation.

How Fast-CrewAI addresses it

Fast-CrewAI’s tool executor handles argument validation via serde_json on the Rust side. serde’s validation is schema-driven but compiles to straight-line code at build time — no runtime schema interpretation, no dict walking per call, no per-field type checks in Python. On typical tool argument payloads, validation is roughly 10× faster than jsonschema and uses much less memory.

Combined with result caching and execution stats, the tool execution path as a whole is 17.3× faster on synthetic benchmarks. A meaningful slice of that comes from the validation speedup alone, even for tools that don’t hit the cache.

Workaround you can ship today

If you want to reduce validation overhead without Fast-CrewAI, the cleanest approach is to use msgspec instead of pydantic for your tool argument models. msgspec.Struct validates much faster than BaseModel and integrates cleanly with CrewAI’s tool interface:

import msgspec

class WeatherArgs(msgspec.Struct):
    city: str
    units: str = "metric"

You give up some of Pydantic’s ergonomics (.model_dump(), validators, etc.) and in exchange you get validation that’s often 5–10× faster.

When it matters

High-throughput tool-heavy workloads. If you’re calling dozens of tools per task and running thousands of tasks per hour, validation overhead becomes visible in aggregate. Low-throughput use cases can safely ignore this issue.
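To make “visible in aggregate” concrete, here is a back-of-envelope estimate using the 400 µs figure from the profile above; the workload numbers are illustrative assumptions, not measurements:

```python
# Hypothetical workload sizing — adjust to your own traffic.
validation_per_call_s = 400e-6   # 400 µs of validation per tool call
tools_per_task = 50              # "dozens of tools per task"
tasks_per_hour = 5_000           # "thousands of tasks per hour"

calls_per_hour = tools_per_task * tasks_per_hour
overhead_s = calls_per_hour * validation_per_call_s
print(f"{calls_per_hour:,} calls/hour -> {overhead_s:.0f} s/hour spent validating")
# -> 250,000 calls/hour -> 100 s/hour spent validating
```

At that scale, validation alone burns over a minute and a half of CPU per hour; at a tenth of it, the overhead rounds to nothing.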

Need help applying this to your codebase?

Neul Labs offers audits, full implementation, and retained CrewAI engineering. We built fast-crewai — we can build yours.