Most teams review cloud and AI spend too late. They look at the invoice after the month closes, explain what happened, and move on. That is useful for accounting, but not very useful for operating the product.
If you want spend to influence product and engineering decisions in time to matter, review it weekly. The goal is not to create finance theater. The goal is to surface the handful of cost changes that need action while they are still small enough to fix.
Quick answer: what should a weekly cost review include?
Every weekly review should answer five questions:
- What did we spend last week?
- What changed versus the previous week?
- Which provider, model, service, or feature drove the change?
- Are we still on track for the month?
- Does anything need action this week?
If the meeting cannot answer those five questions, it is missing the right data or the right structure.
Who should be in the weekly review?
Keep it small:
- one engineering owner,
- one product owner,
- and optionally finance or operations if spend is already material.
This is an operating review, not a status meeting. Keep it focused on decisions.
What metrics should you review every week?
| Metric | Why it matters | Recommended weekly view |
|---|---|---|
| Total cloud spend | Shows infrastructure direction of travel | Last 7 days vs prior 7 days |
| Total AI spend | Shows model and API cost movement | Last 7 days vs prior 7 days |
| Top provider deltas | Identifies where the change came from | Top 3 increases and decreases |
| Model or service mix | Explains cost-per-request changes | Share of spend by model or service |
| Forecast vs plan | Shows whether the month is drifting | Current pace vs monthly target |
| Incidents or anomalies | Flags things that need immediate action | Any material alerts since the last review |
This is enough for most teams. You can add more later, but start with the metrics that drive decisions.
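Most of the views in that table reduce to one aggregation: spend in the last 7 days versus the prior 7 days. Here is a minimal sketch of that aggregation, assuming you can export daily costs to a CSV with date, provider, category, and cost columns (the file name and schema are illustrative, not any specific tool's format):

```python
# Minimal sketch: last-7-days vs prior-7-days totals from a daily cost export.
# Assumed (illustrative) schema: date,provider,category,cost
# where category is "cloud" or "ai".
import pandas as pd

df = pd.read_csv("daily_costs.csv", parse_dates=["date"])
end = df["date"].max()

last_7 = df[df["date"] > end - pd.Timedelta(days=7)]
prior_7 = df[(df["date"] > end - pd.Timedelta(days=14))
             & (df["date"] <= end - pd.Timedelta(days=7))]

for category in ("cloud", "ai"):
    cur = last_7.loc[last_7["category"] == category, "cost"].sum()
    prev = prior_7.loc[prior_7["category"] == category, "cost"].sum()
    pct = (cur - prev) / prev * 100 if prev else float("nan")
    print(f"{category}: ${cur:,.0f} last 7 days vs ${prev:,.0f} prior ({pct:+.1f}%)")
```

Once this runs on a schedule, the weekly review starts from numbers nobody has to assemble by hand.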
What should the actual agenda look like?
Use this 20-minute format:
1. Start with total spend (2–3 minutes)
Look at:
- total cloud spend for the last 7 days,
- total AI spend for the last 7 days,
- and change versus the previous week.
Why this section exists: This is the frame for everything else. If total spend is flat week-over-week, the rest of the review is confirmation. If total spend is up 30%, every other conversation happens against that backdrop. Starting here prevents the team from spending 15 minutes on a small delta while missing a larger pattern.
What a good result looks like: Total spend is within 10–15% of the prior week, with a known reason if it is higher (a launch, more active users, a new background job).
What needs investigation: Total spend jumped more than 20% week-over-week with no known cause. This should dominate the rest of the review.
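Those thresholds are easy to encode so the flag is mechanical rather than a judgment call made live in the meeting. A sketch, using the bands above (the cutoffs are this article's rules of thumb, not a standard):

```python
# Sketch: classify the week-over-week move using the bands above.
def classify_wow(current: float, prior: float) -> str:
    pct = (current - prior) / prior * 100
    if abs(pct) <= 15:
        return f"{pct:+.1f}%: normal range, confirm the known driver"
    if abs(pct) <= 20:
        return f"{pct:+.1f}%: elevated, name the cause before moving on"
    return f"{pct:+.1f}%: investigate now; this should dominate the review"

print(classify_wow(5600, 4200))  # "+33.3%: investigate now; ..."
```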
2. Review the biggest deltas (8–10 minutes)
Ask:
- Which provider increased the most?
- Which model or service increased the most?
- Was the change expected?
This is where the review becomes useful. The purpose is to explain the delta, not just observe it.
Why this section exists: A total spend number tells you something moved. Deltas tell you what and by how much. The engineering owner's job in this block is not to report that OpenAI went up — the dashboard already shows that. Their job is to explain why: "OpenAI was up 22% because we enabled the summarization feature for a second customer tier on Wednesday and it is running at expected volume." That is a complete answer. The review moves on.
What a good result looks like: Every material delta has a named explanation. If a change was unexpected but small, it gets assigned to someone for follow-up before the next review.
What needs investigation: A delta is unexplained. "We are not sure why Bedrock spend doubled this week" is an action item, not a closing statement. Assign it with a due date.
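Producing the top increases and decreases is one grouped diff over the same two windows. A sketch, reusing the illustrative daily export from earlier:

```python
# Sketch: top provider increases and decreases, week over week.
# Reuses the illustrative daily_costs.csv schema from the earlier sketch.
import pandas as pd

df = pd.read_csv("daily_costs.csv", parse_dates=["date"])
end = df["date"].max()

cur = df[df["date"] > end - pd.Timedelta(days=7)].groupby("provider")["cost"].sum()
prev = df[(df["date"] > end - pd.Timedelta(days=14))
          & (df["date"] <= end - pd.Timedelta(days=7))].groupby("provider")["cost"].sum()

delta = cur.sub(prev, fill_value=0).sort_values(ascending=False)
print("Top increases:", delta.head(3).to_dict())
print("Top decreases:", delta.tail(3).to_dict())
```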
3. Check month-end forecast (3–4 minutes)
Weekly reviews should still look forward.
You want to know:
- are we on track for the month,
- are we ahead of plan,
- and did the forecast change materially this week?
This is especially important for AI workloads, where weekly growth can meaningfully change the monthly outcome.
Why this section exists: A weekly delta review tells you what happened in the past 7 days. The forecast block tells you what the month is likely to close at if the current pace continues. These are different questions. A team that reviews deltas but never checks forecast can be consistently 15% over budget month after month without the right conversation ever happening.
What a good result looks like: Forecast is within 10% of plan. If it is moving upward, the team can explain why and has a decision about whether to accept it or correct it.
What needs investigation: Forecast is more than 15% above plan and the team cannot point to a specific cause that is temporary or intentional.
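If you have no forecasting tool, a straight run-rate projection is enough to start: month-to-date spend divided by days elapsed, times days in the month. A sketch with illustrative numbers (run-rate ignores seasonality and known one-offs, which is why the "named cause" question still matters):

```python
# Sketch: straight run-rate forecast vs plan. All numbers are illustrative.
import calendar
from datetime import date

today = date(2025, 6, 18)     # day 18 of a 30-day month
month_to_date = 13_440.0      # spend so far this month
plan = 19_000.0               # monthly target

days_in_month = calendar.monthrange(today.year, today.month)[1]
forecast = month_to_date / today.day * days_in_month
gap_pct = (forecast - plan) / plan * 100

print(f"Forecast ${forecast:,.0f} vs plan ${plan:,.0f} ({gap_pct:+.1f}%)")
if gap_pct > 15:
    print("Over the 15% threshold: needs a cause that is temporary or intentional")
```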
4. Identify actions (3–5 minutes)
A useful review ends with action owners.
Examples:
- investigate a Bedrock increase,
- reduce prompt length in one workflow,
- check whether a fallback is overfiring,
- or move a background workload to a cheaper model.
If there is no action, record that the change was expected and move on.
Why this section exists: A review that ends without named actions is an update, not a review. The operational value of the weekly cadence comes entirely from this block. Without it, the team has spent 15 minutes observing data together and nothing changes as a result. Requiring at least one named action — even if that action is "no change needed, forecast confirmed" — creates the habit of closing the loop.
What should PMs look for?
Product managers should care about:
- spend tied to launches,
- spend per active feature,
- and whether cost growth matches product value.
If spend is rising because a successful feature is growing, the action may be pricing, packaging, or usage limits rather than engineering optimization.
What should engineers look for?
Engineering should focus on:
- model changes,
- prompt size increases,
- retries or background jobs,
- and infra or provider-specific increases.
This is where weekly review prevents small drift from becoming a big monthly surprise.
What should you prepare before the meeting?
Prepare a one-page view with:
- total cloud spend
- total AI spend
- top provider deltas
- top model or service deltas
- forecast vs target
- open anomalies
If your team needs a longer deck to review weekly spend, the system is too complicated.
What decisions should come out of the review?
A weekly cost review is only useful if it changes behavior. Typical decisions include:
- leave things alone because the increase is expected,
- investigate a specific spike,
- change routing or model choice,
- tighten limits,
- or revise forecast and communicate it.
The review should create clarity, not theater.
How often should you review different workloads?
The AWS Well-Architected cost optimization guidance suggests that review cadence should reflect workload importance and spend significance. That is a useful principle here too.
A practical rule:
- high-growth or high-cost workloads: weekly
- important but stable workloads: biweekly
- small or low-risk workloads: monthly
For most AI products, weekly is the right default because usage can change quickly.
A copyable template
Use this structure each week:
Weekly AI + Cloud Cost Review
- Period: [date range]
- Total cloud spend: [$X] vs prior week [+/-%]
- Total AI spend: [$X] vs prior week [+/-%]
- Top increases: [provider / model / feature]
- Top decreases: [provider / model / feature]
- Forecast for month: [$X] vs target [$Y]
- Incidents or anomalies: [none / list]
- Actions this week: [owner + due date]
That template is simple on purpose: the simpler the format, the more likely the team is to keep filling it in every week.
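If the underlying numbers already come out of a script like the sketches above, the template can be filled in automatically so preparation stays cheap. A minimal sketch; every value passed in below is an illustrative placeholder:

```python
# Sketch: fill the weekly template from computed values.
# Every argument value below is an illustrative placeholder.
def render_review(period, cloud, cloud_wow, ai, ai_wow, increases,
                  decreases, forecast, target, anomalies, actions):
    return (
        f"Weekly AI + Cloud Cost Review\n"
        f"- Period: {period}\n"
        f"- Total cloud spend: ${cloud:,.0f} vs prior week {cloud_wow:+.0f}%\n"
        f"- Total AI spend: ${ai:,.0f} vs prior week {ai_wow:+.0f}%\n"
        f"- Top increases: {increases}\n"
        f"- Top decreases: {decreases}\n"
        f"- Forecast for month: ${forecast:,.0f} vs target ${target:,.0f}\n"
        f"- Incidents or anomalies: {anomalies}\n"
        f"- Actions this week: {actions}\n"
    )

print(render_review("2025-06-09 to 2025-06-15", 2_400, 21, 3_200, 44,
                    "AWS Bedrock / re-classification job", "none material",
                    22_400, 19_000, "none",
                    "engineering lead: update forecast (today)"))
```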
A worked review scenario
Here is what a 20-minute review looks like when something needs attention.
Context: A product and engineering team building an AI content platform. Weekly spend is usually around $4,200. This week it came in at $5,600.
Total spend (2 min): Total cloud + AI: $5,600, up $1,400 from $4,200. The engineering lead flags this immediately. The frame for the rest of the review is: "We are up 33% this week and we need an explanation."
Biggest deltas (9 min): Provider breakdown shows AWS Bedrock up $980, OpenAI flat, AWS infrastructure up $420. The engineering lead explains: a new batch re-classification job was deployed Tuesday. It processes all content created in the past 7 days and re-runs classification against an updated model. The first run was Tuesday, which is why Bedrock jumped. The AWS infrastructure jump is the EC2 workers running the job.
The product lead asks: "Is the re-classification job supposed to run every week?" The answer is yes, weekly. The team calculates: if Bedrock adds ~$980/week ongoing, the monthly run rate increases by ~$3,900. That is a meaningful shift, and it was not in the budget assumptions.
Forecast check (3 min): Current pace puts month-end at $22,400 against a $19,000 plan. The gap is almost entirely the new job. The team flags it as a forecast update needed.
Actions (4 min):
- Update the month-end forecast to $22,400 (engineering lead, today)
- Check whether re-classification frequency can be reduced to biweekly without quality impact (engineering lead, by Thursday)
- Communicate the updated forecast to the CTO (product lead, by end of day)
Total time: 18 minutes. The team leaves with a clear explanation of the increase, a forecast update, and one optimization to investigate.
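The arithmetic in that scenario is the same calculation the review runs every week, so it is worth making explicit. A quick check of the numbers above:

```python
# Quick check of the scenario's arithmetic.
prior_week, this_week = 4_200, 5_600
delta = this_week - prior_week              # $1,400
wow_pct = delta / prior_week * 100          # 33.3%, the "up 33%" flag

bedrock_weekly = 980
monthly_increase = bedrock_weekly * 4       # ~$3,900/month of new run rate

plan = 19_000
forecast = this_week * 4                    # $22,400 at the new weekly pace
gap = forecast - plan                       # $3,400 over plan

print(f"WoW: +${delta:,} ({wow_pct:.0f}%); Bedrock adds ~${monthly_increase:,}/month")
print(f"Month-end at current pace: ${forecast:,} vs ${plan:,} plan (${gap:,} over)")
```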
What should you avoid?
- reviewing too many dimensions at once,
- spending the whole meeting on one spike,
- confusing expected growth with overspend,
- and reviewing only total cost without drivers.
The review should be short, repeatable, and decision-oriented.
Bottom line
The best weekly cost review is not a finance meeting. It is a product and engineering operating review that answers:
- what changed,
- why it changed,
- whether it matters,
- and what to do next.
If your team adopts one habit from this article, make it a 20-minute weekly review with clear actions and owners.
FAQ
How long should the weekly review be?
Usually 15 to 20 minutes. If it regularly runs longer, either the data is unclear or the agenda is too broad.
Should finance attend?
Only if spend is large enough that forecast changes need immediate financial context. For many teams, engineering and product are enough.
What is the minimum dataset I need?
Total spend, provider deltas, model or service deltas, and monthly forecast. That is enough to start.
How is this different from monthly budget review?
Monthly reviews explain what happened. Weekly reviews give you time to change what will happen.
Should I review cloud and AI together?
Yes, if both are meaningful parts of your product cost. Splitting them often hides the real total cost of serving the product.
What if there are no actions?
That is fine. The value is in having a reliable operating rhythm and noticing meaningful change early.
References
- AWS Well-Architected Cost Optimization Pillar
- Develop a Workload Review Process
- Configure Billing and Cost Management Tools
- OpenAI Usage API Reference
- Anthropic Cost Report
- Cloud cost monitoring
- AI cost monitoring
- How to forecast cloud and AI spend without a FinOps team
- Weekly AI FinOps operating rhythm