// RES
T077
v1.0
Agent Performance Review
Assess success rate, latency, cost, and failure patterns in an AI workflow.
ABOUT
About this task
A scoped performance review for internal AI workflows or customer-facing agents. It focuses on the metrics and failure modes that matter most, then turns them into a measurement framework and optimization priorities your team can act on.
SPEC
Input / output spec
INPUT_REQUIRED
- Agent purpose and workflow scope
- Logs, transcripts, or metrics
- Current KPIs or targets
- Known failure cases
OUTPUT_DELIVERED
- Performance findings summary
- Metric and KPI recommendations
- Failure pattern analysis
- Optimization next steps
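To make the core metrics concrete, here is a minimal sketch of how success rate, tail latency, and cost could be summarized from exported logs. The log schema (`success`, `latency_ms`, `cost_usd` fields) is hypothetical, for illustration only; real exports will differ.

```python
import math

# Hypothetical log schema: each record has "success" (bool),
# "latency_ms" (float), and "cost_usd" (float).
def summarize(records):
    n = len(records)
    successes = sum(1 for r in records if r["success"])
    latencies = sorted(r["latency_ms"] for r in records)
    # p95 via the nearest-rank method: ceil(0.95 * n)-th smallest value.
    p95 = latencies[math.ceil(0.95 * n) - 1]
    return {
        "success_rate": successes / n,
        "p95_latency_ms": p95,
        "total_cost_usd": sum(r["cost_usd"] for r in records),
    }

logs = [
    {"success": True,  "latency_ms": 900.0,  "cost_usd": 0.02},
    {"success": True,  "latency_ms": 1200.0, "cost_usd": 0.03},
    {"success": False, "latency_ms": 4000.0, "cost_usd": 0.05},
    {"success": True,  "latency_ms": 1100.0, "cost_usd": 0.02},
]
print(summarize(logs))
```

A real review would segment these numbers by failure pattern (timeouts, tool errors, bad outputs) rather than reporting a single aggregate.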
PROCESS
Execution flow
01 → Share the relevant assets, links, transcripts, exports, or samples.
02 → Receive a scope-specific quote and ETA in under 5 minutes.
03 → We analyze the workflow, draft the deliverable, and rank the highest-leverage next moves.
04 → A human reviewer tightens the output and removes noise.
05 → Get a ready-to-use report or workflow spec your team can act on next.
TARGET
Who it is for
Best for teams that need clearer visibility into how an AI workflow is performing.
DESCRIPTION
Suggested task description
The public API needs only a plain-language description. Copy this, then replace the team context, export link, and output language as needed.
Copy this description into the task description field
Review our AI agent or workflow using the logs, metrics, and examples we provide, then create a performance review covering success rate, latency, cost, failure patterns, and recommended optimization priorities.