Guides
March 6, 2026
By Andrew Day

How to Attribute AI Costs by Feature, Team, and Customer

A practical guide to making AI costs explainable. How developers and product teams should structure projects, workspaces, API keys, tags, and metadata to track spend by feature, team, and customer.


Use this when you can see your total AI bill but cannot explain which feature, team, or customer drove it — or when you are building a new AI product and want to instrument attribution before usage scales.

The fast answer: attribution is an application-level responsibility, not a provider one. You need to attach stable metadata — feature name, environment, team owner, customer ID — to every meaningful AI request at the point your code makes it. Provider grouping (OpenAI projects, Anthropic workspaces, AWS tags) gives you coarse buckets. Your application metadata gives you the explanation. You need both.

Most teams can tell you their total AI spend. Far fewer can tell you which feature, team, or customer caused it. That gap makes optimization harder, forecasting weaker, and product decisions slower.

If you want to track AI cost per feature or track AI costs by customer, you need an attribution model before you need a dashboard. An AI cost attribution platform joins provider billing with your product metadata so spend becomes explainable. This guide walks through a practical structure that developers and product teams can implement without turning the system into accounting software.

Quick answer: what is the best way to attribute AI spend?

Use a layered approach:

  1. Provider-native grouping where available, such as OpenAI projects or Anthropic workspaces.
  2. Application metadata for feature, team, workflow, and customer identifiers.
  3. Cloud tags or labels for Bedrock, Vertex AI, or Azure OpenAI workloads.
  4. A reporting layer that joins provider usage with your own product metadata.

Do not rely on one layer alone. Provider grouping is helpful but rarely enough by itself.

Why do most attribution systems fail?

They usually fail for one of three reasons:

  • the provider bill is grouped differently from the application,
  • metadata is inconsistent or optional,
  • or the team waited until the bill got large before deciding how to track it.

The fix is to define attribution fields early and make them hard to skip.

What dimensions should you track?

At minimum, track these five dimensions for every meaningful AI request:

| Dimension | Why it matters | Example |
| --- | --- | --- |
| Provider | Lets you compare OpenAI, Anthropic, Bedrock, Vertex AI, Azure OpenAI, and others | openai, anthropic, bedrock |
| Model | Explains cost-per-request changes | gpt-5-mini, claude-sonnet, gemini-flash |
| Feature or workflow | Shows which product surface is expensive | support-chat, code-review, nightly-summary |
| Team or owner | Creates cost ownership and review paths | search, product-ops, platform |
| Customer or account | Enables unit economics and pricing decisions | org_12345, enterprise-plan |

If you are not capturing all five, you can still start. But the long-term goal should be to make every expensive workflow explainable across those dimensions. For model-level cost per project and customer, see AI cost monitoring.
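As a sketch, the five dimensions can live in one small schema that also enforces normalized values. The provider allow-list and the slug rule below are illustrative assumptions, not a standard; substitute your own vocabularies.

```python
from dataclasses import dataclass

# Hypothetical allow-list; replace with the providers you actually use.
KNOWN_PROVIDERS = {"openai", "anthropic", "bedrock", "vertex", "azure-openai"}


@dataclass(frozen=True)
class AttributionTags:
    """The five minimum dimensions, attached to every meaningful AI request."""
    provider: str      # e.g. "openai"
    model: str         # e.g. "gpt-5-mini"
    feature: str       # e.g. "support-chat"
    owner: str         # e.g. "product-ops"
    customer_id: str   # stable ID, never a display name, e.g. "org_12345"

    def validate(self) -> None:
        if self.provider not in KNOWN_PROVIDERS:
            raise ValueError(f"unknown provider: {self.provider!r}")
        for field_name in ("model", "feature", "owner", "customer_id"):
            value = getattr(self, field_name)
            # Reject empty and free-text values: lowercase slugs only.
            if not value or value != value.strip().lower().replace(" ", "-"):
                raise ValueError(f"{field_name} must be a normalized slug, got {value!r}")


tags = AttributionTags("openai", "gpt-5-mini", "support-chat", "product-ops", "org_12345")
tags.validate()  # raises if any dimension is missing or free-text
```

Validating at construction time is what turns "please tag your requests" into a rule the codebase enforces.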

How should developers structure attribution in direct AI APIs?

OpenAI

OpenAI exposes organization-level usage and cost APIs and supports project-level structure. If your application spans multiple internal products or environments, projects are a strong first boundary.

Use OpenAI projects for:

  • environment separation,
  • major product areas,
  • or internal platform ownership.

Then add your own metadata for feature, team, and customer inside the app. Projects are useful, but they are usually too coarse for product decisions on their own.

Anthropic

Anthropic's Admin API supports usage and cost reporting grouped by dimensions such as workspace and API key. Workspaces are a strong primitive if you want organizational separation without building all grouping yourself.

Use Anthropic workspaces for:

  • team boundaries,
  • environment isolation,
  • or internal business-unit separation.

Then attach your own application-level fields for feature and customer.
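As a sketch of that split: Anthropic's Messages API accepts a metadata object carrying an end-user identifier, while feature and owner are not provider-side concepts and belong in your own request log. The model name below is a placeholder, and you should verify the metadata shape against the current API reference.

```python
def build_anthropic_request(prompt: str, customer_id: str,
                            feature: str, owner: str) -> dict:
    """Assemble kwargs for anthropic.Anthropic().messages.create(),
    plus the app-side log record that travels with the request."""
    request_kwargs = {
        "model": "claude-sonnet",  # placeholder model name
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
        # Provider-side: ties usage to a customer in Anthropic's records.
        "metadata": {"user_id": customer_id},
    }
    # App-side: the fields the provider cannot know about.
    log_record = {"feature": feature, "owner": owner, "customer_id": customer_id}
    return {"request": request_kwargs, "log": log_record}


call = build_anthropic_request("Summarize this ticket", "org_12345",
                               "support-chat", "product-ops")
```

Keeping the provider kwargs and the log record in one builder means the two can never drift apart.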

How should teams attribute costs on managed cloud AI platforms?

AWS / Bedrock

If you run AI workloads through AWS, cost allocation tags are essential. AWS requires you to activate user-defined cost allocation tags before they appear in cost reporting tools. Without that step, the tags exist operationally but do not help billing analysis.

Use AWS tags for:

  • product or feature name,
  • team owner,
  • environment,
  • customer tier if relevant.
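Activation can be scripted. A minimal sketch: build the payload that boto3's Cost Explorer client expects for marking user-defined tags active, with the live call left commented since it needs credentials. Verify the call name and payload shape against your SDK version before relying on it.

```python
def activation_payload(tag_keys: list[str]) -> list[dict]:
    """Build the CostAllocationTagsStatus payload that marks tags Active."""
    return [{"TagKey": key, "Status": "Active"} for key in tag_keys]


payload = activation_payload(["feature", "team", "environment", "customer-tier"])

# With credentials configured, activation would look roughly like:
#
# import boto3
# ce = boto3.client("ce")
# ce.update_cost_allocation_tags_status(CostAllocationTagsStatus=payload)
```

Scripting this step makes tag activation repeatable across accounts instead of a one-time console click someone forgets.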

GCP / Vertex AI

Google Cloud Billing export to BigQuery includes labels in the billing data. That makes labels one of the cleanest paths to cost attribution if you are willing to query or process billing export data.

Use GCP labels for:

  • service or workload,
  • team,
  • environment,
  • customer segment or tenant grouping.
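Once the export lands in BigQuery, grouping cost by a label is a short query. In the standard billing export schema, labels are an array of key/value structs, so the join goes through UNNEST. The table name below is a placeholder; substitute your own export table.

```python
def cost_by_label(table: str, label_key: str) -> str:
    """Build a BigQuery query grouping exported cost by one label key."""
    return f"""
        SELECT l.value AS {label_key}, SUM(cost) AS total_cost
        FROM `{table}`, UNNEST(labels) AS l
        WHERE l.key = '{label_key}'
        GROUP BY 1
        ORDER BY total_cost DESC
    """


# Placeholder table name; use your own billing export table.
sql = cost_by_label("my-project.billing.gcp_billing_export_v1", "team")
```

The same query builder works for any label key you activate, which keeps team, environment, and tenant reporting on one code path.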

Azure / Azure OpenAI

Azure Cost Management supports grouping and filtering by tags, and Azure exports include cost data that can be analyzed outside the portal. Microsoft also documents billing tags and cost exports, which helps if you need more formal accounting or cross-team reporting.

Use Azure tags for:

  • deployment grouping,
  • environment,
  • feature ownership,
  • or department-level reporting.

What is the best hierarchy for AI cost attribution?

Here is a practical hierarchy that works well:

  1. Billing account / cloud subscription
  2. Provider project / workspace / tagged workload
  3. Application feature
  4. Customer or tenant
  5. Request-level diagnostics when needed

That hierarchy is simple enough to maintain and detailed enough to explain most surprises.

What should product managers ask engineering to instrument?

Ask for these fields in every significant AI request log or event:

  • provider
  • model
  • feature or endpoint
  • team owner
  • environment
  • customer or org id
  • input tokens
  • output tokens
  • request outcome

If those are present, you can answer most PM questions without a separate data project.
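As a sketch, those fields can travel as one log event with a rough cost estimate derived from token counts. The per-million-token prices below are placeholder numbers, not real rates; substitute your provider's current pricing.

```python
# Placeholder per-million-token prices; substitute real rates.
PRICE_PER_M = {"gpt-5-mini": {"input": 0.25, "output": 2.00}}


def request_event(provider, model, feature, owner, env, customer_id,
                  input_tokens, output_tokens, outcome):
    """One log event carrying every field a PM needs, plus an estimated cost."""
    rates = PRICE_PER_M.get(model, {"input": 0.0, "output": 0.0})
    est_cost = (input_tokens * rates["input"]
                + output_tokens * rates["output"]) / 1_000_000
    return {
        "provider": provider, "model": model, "feature": feature,
        "owner": owner, "environment": env, "customer_id": customer_id,
        "input_tokens": input_tokens, "output_tokens": output_tokens,
        "outcome": outcome, "estimated_cost_usd": round(est_cost, 6),
    }


event = request_event("openai", "gpt-5-mini", "support-chat", "product-ops",
                      "prod", "org_12345", 1200, 300, "success")
```

The estimate will not match the invoice to the cent, but it answers per-feature and per-customer questions days before the bill arrives.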

What is the most important implementation rule?

Do not make metadata optional for expensive workflows.

If engineers can skip feature or owner fields, they eventually will, especially on internal tools, migrations, and new product experiments. That leads to the worst category in every report: "unknown."

A falsifiable recommendation: if a workflow can spend more than a few hundred dollars per month, require attribution fields before launch.
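One way to make that rule executable is a guard in the request path. The $300 threshold below is an illustrative stand-in for "a few hundred dollars per month"; tune it to your own spend profile.

```python
REQUIRED_FIELDS = ("feature", "owner")


def require_attribution(tags: dict, monthly_budget_usd: float,
                        threshold_usd: float = 300.0) -> dict:
    """Reject requests on expensive workflows that lack attribution fields.

    Cheap experiments pass through; anything over the threshold must
    carry feature and owner or the request fails fast.
    """
    if monthly_budget_usd >= threshold_usd:
        missing = [f for f in REQUIRED_FIELDS if not tags.get(f)]
        if missing:
            raise ValueError(f"attribution fields missing on expensive workflow: {missing}")
    return tags


require_attribution({"feature": "support-chat", "owner": "product-ops"}, 5000)
```

Failing at request time is deliberate: a loud error in staging is far cheaper than a month of "unknown" in the cost report.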

How should you handle shared infrastructure?

Shared services are normal. The mistake is forcing false precision too early.

Use one of these approaches:

  • attribute shared platform costs to a platform owner bucket,
  • split them by request volume across downstream features,
  • or separate them from direct model spend in reporting.
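The volume-based option above can be sketched as a proportional split; the numbers here are illustrative.

```python
def split_by_volume(shared_cost: float,
                    request_counts: dict[str, int]) -> dict[str, float]:
    """Allocate a shared platform cost across features by request volume."""
    total = sum(request_counts.values())
    if total == 0:
        return {feature: 0.0 for feature in request_counts}
    return {feature: round(shared_cost * count / total, 2)
            for feature, count in request_counts.items()}


# Example: $1,000 of shared gateway cost, split across three features.
allocation = split_by_volume(1000.0, {"assistant": 60_000,
                                      "summarization": 30_000,
                                      "search": 10_000})
```

Request volume is a crude proxy when request sizes vary widely; splitting by token counts instead uses the same function with different inputs.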

The right answer depends on how the data will be used. For pricing decisions, customer allocation matters more. For internal ownership, feature and team allocation matters more.

What should you avoid?

  • Tagging everything with free-text values — normalize your dimensions.
  • Using customer names instead of IDs — names change; IDs do not.
  • Depending only on provider dashboards — they rarely know your feature model.
  • Trying to get perfect attribution immediately — start useful, then improve.

Good attribution is iterative. But it needs a consistent schema from the beginning.

A practical rollout plan

  1. Pick a stable set of attribution fields.
  2. Enforce them in application code for expensive workflows.
  3. Use provider-native structure where it exists.
  4. Turn on cloud tags, labels, and billing exports.
  5. Build reporting around provider + feature + team + customer.

That is enough to move from "we have a big bill" to "we know exactly what drove it."

A worked example: setting up attribution from scratch in one sprint

Here is what attribution setup looks like for a team starting from zero.

Starting state: A 10-person startup has one OpenAI organization API key shared across three products: a customer-facing chat assistant, a background summarization pipeline, and an internal search tool. All three hit the same key. The monthly bill is $6,800 and nobody can tell you how much each product costs.

Sprint goal: Get to a state where the team can answer "what did the assistant cost last week?" within 30 seconds.

Day 1 — Provider structure. The team creates two OpenAI projects: production-customer-facing and production-internal. They move the chat assistant and summarization pipeline to one project, the internal search tool to the other. They generate project-specific API keys. Cost: 2 hours.

Day 2–3 — Application metadata. The team agrees on four stable fields: feature (assistant, summarization, search), environment (prod, staging), owner (product-team, platform-team), and customer_id (enterprise tier only). They add these as log metadata on every API call in the two most expensive workflows. Cost: 4 hours.

Day 4 — Reporting layer. They pull daily cost totals from the OpenAI Cost API, reconcile those totals against per-request cost estimates computed from the token counts in their request logs, and write the enriched data to a shared table. A simple view by feature shows: assistant $3,200, summarization $2,900, search $700. They immediately learn that summarization is almost as expensive as the customer-facing product.
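Under the hood, the Day 4 roll-up is a simple group-by. A minimal sketch, with illustrative records matching the worked example's totals:

```python
from collections import defaultdict


def cost_by_feature(request_log: list[dict]) -> dict[str, float]:
    """Roll per-request estimated costs up to one number per feature."""
    totals: dict[str, float] = defaultdict(float)
    for record in request_log:
        totals[record["feature"]] += record["estimated_cost_usd"]
    return dict(totals)


# Illustrative monthly log, pre-aggregated to match the example's numbers.
log = [
    {"feature": "assistant", "estimated_cost_usd": 3200.0},
    {"feature": "summarization", "estimated_cost_usd": 2900.0},
    {"feature": "search", "estimated_cost_usd": 700.0},
]
report = cost_by_feature(log)
assert sum(report.values()) == 6800.0  # reconciles with the monthly bill
```

The reconciliation assert is the important line: if per-feature estimates stop summing to roughly the provider's total, some workflow has escaped attribution.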

Sprint result: The team can now see cost by feature with a daily lag. They identify that summarization is running two passes per document (a bug from a recent refactor) and fix it, reducing that line by $900/month. The attribution setup paid for itself in two weeks.

The point is not to build a perfect system before shipping. It is to get one useful dimension — feature, at minimum — before usage grows to the point where the investigation takes weeks.

How StackSpend helps

StackSpend provides cross-provider daily visibility by provider, model, service, and category — the foundation layer that your attribution model sits on top of. When you can see that Bedrock inference is up or OpenAI cost per request changed, you have the provider-side signal. Your application metadata layer provides the product-side explanation. See AI cost monitoring for the StackSpend side of this workflow.

Bottom line

The best AI cost attribution model combines provider structure with application metadata:

  • provider/project/workspace for coarse grouping,
  • feature/team/customer for decision-making,
  • and cloud tags or labels for managed AI workloads.

If you do only one thing this quarter, make feature and owner metadata mandatory on expensive AI paths.


FAQ

Is provider-native grouping enough?
Usually no. Projects, workspaces, and tags help, but they do not fully reflect feature ownership or customer usage inside your app.

Should I attribute by feature or by customer first?
If you are optimizing product spend, start with feature so you can track AI cost per feature. If you are working on pricing or margin, start with customer so you can track AI costs by customer. Most mature teams track both. For the full framework on when and how to use cost per customer, see AI unit economics for startups.

Do I need request-level logging?
Not always, but you do need request-level diagnostics for the workflows that drive material spend or routinely spike.

What if one feature serves many customers?
Track both the feature and the customer. One explains internal ownership; the other explains unit economics.

What if my AI workload runs through Bedrock, Vertex AI, or Azure OpenAI?
Use cloud tags, labels, or exports in addition to your application metadata. Managed AI bills still need an application-level explanation layer.

How detailed should I get at first?
Start with provider, model, feature, owner, and customer. That is detailed enough to be useful without becoming fragile.


