Report -- Published April 10, 2026

Google Ads Optimization Trends: Insights from 94 Accounts (2025-2026)

Across 94 Google Ads accounts managing $3.01M in ad spend from June 2025 to April 2026, Maximize Conversion Value bidding delivered 6.44x ROAS while Maximize Conversions delivered 1.96x. Performance Max generated 9.32x ROAS for e-commerce while Search campaigns delivered higher conversion rates for lead generation accounts.

Methodology

Aggregate performance data from 94 Google Ads accounts managed via the Lyra platform. Data covers June 17, 2025 through April 9, 2026 (9.7 months). All client-identifying information has been anonymized. Industry breakdowns are limited to verticals with 3 or more accounts to preserve anonymity.

Data period: June 17, 2025 to April 9, 2026 -- 94 accounts analyzed

This report analyzes aggregate performance data from 94 Google Ads accounts managing $3,014,680 in ad spend between June 17, 2025 and April 9, 2026. The dataset spans 522 campaigns, 240.7 million impressions, 3.9 million clicks, and 85,834 conversions across multiple industries, business models, and budget sizes.

The goal of this report is to replace vendor-published benchmarks and outdated blog posts with current, evidence-based performance figures drawn from real accounts actively managed in 2025-2026. Every figure below is extracted from live campaign data. Where sample sizes are too small to draw reliable conclusions, we say so explicitly.

Key Findings

  • Maximize Conversion Value bidding outperformed Maximize Conversions by 3.3x on ROAS (6.44x vs 1.96x) despite similar account adoption rates (52 vs 55 accounts). The difference is not in the algorithm — it is in what the algorithm is allowed to optimize for.
  • Performance Max delivered 9.32x ROAS across 68 accounts, compared to 2.61x ROAS for Search campaigns across 79 accounts. PMax captured 80% of platform impressions while consuming only 38% of platform spend.
  • E-commerce accounts achieved 6.72x ROAS; lead generation accounts achieved $46.68 CPA. These are different businesses measured with different yardsticks, and mixing them distorts every benchmark.
  • Search term hygiene remains the highest-leverage routine optimization. Across 74 accounts, 32,201 negative keyword exclusions were applied, preventing an estimated $20,713 in wasted spend.
  • Budget size did not predict efficiency. Growth-tier accounts (<$3K/month) delivered 3.18x ROAS, nearly identical to Professional-tier accounts at 3.22x, and better than Scale-tier at 2.14x.

Section 1: Methodology and Scope

This report is based on aggregate Google Ads performance data extracted from 94 accounts managed through the Lyra optimization platform. The underlying data is drawn from the Google Ads API, stored in a PostgreSQL warehouse, and aggregated without reference to individual account identities.

Data Source

All figures in this report come from active Google Ads accounts that were under continuous management during the reporting period. Accounts include direct advertisers and agency-managed portfolios across multiple regions, with spend denominated primarily in USD, EUR, and GBP (converted to USD for aggregation).

Time Period

  • Start date: June 17, 2025
  • End date: April 9, 2026
  • Duration: 297 days (9.7 months)

This period does not constitute a full calendar year, which is why this report deliberately avoids year-over-year comparisons or seasonality claims. Holiday-period data (November 2025 through January 2026) is included but is not isolated for separate analysis.

Dataset Dimensions

Metric | Value
Accounts analyzed | 94
Active campaigns | 522
Total ad spend | $3,014,680
Total impressions | 240,664,202
Total clicks | 3,911,754
Total conversions | 85,834
Total conversion value | $15,092,193

Anonymization

All client-identifying information has been stripped from the dataset. Industry-level breakdowns are reported only where 3 or more accounts exist in a given vertical. Individual account-level figures are not published anywhere in this report. Budget tiers, business model classifications, and industry labels were assigned from internal account metadata and aggregated.

What This Report Does Not Cover

To prevent overreach, here is what is explicitly out of scope:

  • Year-over-year trends. The dataset covers 9.7 months, not 12. No YoY comparisons are possible.
  • AI optimization attribution. Lyra’s AI features were in active development throughout the period. The data reflects managed accounts but does not isolate AI-driven optimization impact.
  • Google Ads product changes. Google made multiple platform changes during the reporting period. This report does not attempt to attribute performance shifts to specific Google Ads product releases.
  • Competitive or market-share analysis. This is a performance report, not a competitive landscape study.
  • Individual account case studies. Those are published separately.

Section 2: The Aggregate Picture

The aggregate picture is the view from 30,000 feet: the total spend, impressions, clicks, and conversions across all 94 accounts combined. It is the starting point, not the conclusion — averages hide as much as they reveal, and the later sections break the data apart by channel, bidding strategy, business model, and vertical.

Platform-Wide Totals

Metric | Value
Total accounts | 94
Total campaigns | 522
Total ad spend | $3,014,680
Total impressions | 240,664,202
Total clicks | 3,911,754
Total conversions | 85,834
Total conversion value | $15,092,193
Overall CTR | 1.63%
Overall CPA | $35.12
Overall ROAS | 5.01x
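
The blended figures above follow directly from the platform totals; a quick sketch of the arithmetic:

```python
# Recompute the blended platform metrics from the reported totals.
spend = 3_014_680
impressions = 240_664_202
clicks = 3_911_754
conversions = 85_834
conversion_value = 15_092_193

ctr = clicks / impressions       # click-through rate
cpa = spend / conversions        # cost per conversion
roas = conversion_value / spend  # return on ad spend

print(f"CTR  {ctr:.2%}")    # 1.63%
print(f"CPA  ${cpa:.2f}")   # $35.12
print(f"ROAS {roas:.2f}x")  # 5.01x
```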

A 5.01x blended ROAS across $3M in spend is a strong headline number, but it is heavily influenced by a small number of high-performing e-commerce accounts running Performance Max with Maximize Conversion Value bidding. The blended CTR of 1.63% is dragged down substantially by Performance Max display inventory, which accumulates large impression counts from Display Network and Gmail placements that rarely drive clicks. Search-only CTRs in this dataset were far higher (see Section 3).

The blended $35.12 CPA is more stable, because CPA is calculated on converting traffic only. However, it still mixes e-commerce checkouts (which carry revenue) with lead-generation form fills (which carry no immediate revenue), so averaging the two produces a number that describes neither accurately.

For that reason, the rest of this report breaks the data apart along the axes that actually matter: channel, bidding strategy, business model, budget tier, and industry.

Section 3: Channel Performance Comparison

Google Ads supports multiple campaign types — Search, Performance Max, Shopping, Display, Video, Demand Gen, and Local Services — each with distinct mechanics, inventory, and measurement profiles. In the 94-account dataset, two channels dominate: Search (79 accounts, 230 campaigns, $1.65M spend) and Performance Max (68 accounts, 241 campaigns, $1.15M spend). Together they account for 93% of total spend.

Channel Comparison Table

Channel | Accounts | Campaigns | Spend | Conversions | CTR | CPA | ROAS
Search | 79 | 230 | $1,652,304 | 36,587 | 10.72% | $45.16 | 2.61x
Performance Max | 68 | 241 | $1,150,756 | 48,377 | 1.43% | $23.79 | 9.32x
Demand Gen | 1 | 1 | $89,057 | 387 | 0.50% | $230.38 | 0.55x
Video | 3 | 3 | $14,067 | 0 | 0.04% | n/a | 0.00x
Display | 6 | 10 | $8,236 | 439 | 1.66% | $18.76 | 0.05x
Shopping | 27 | 33 | $3,487 | 27 | 1.44% | $130.20 | 2.63x
Local Services | 1 | 1 | $1,627 | 17 | 9.96% | $95.71 | 0.01x

The Search vs Performance Max Split

The headline finding is the gap between Search (2.61x ROAS) and Performance Max (9.32x ROAS). At face value, this suggests every advertiser should shift budget from Search into PMax immediately. The reality is more complicated, and the numbers hide several confounds.

What Performance Max is doing well. PMax consumed $1.15M and generated $10.7M in conversion value, at a blended CPA of $23.79. It also absorbed 80% of platform impressions (191.9M of 240.7M) and 70% of platform clicks, largely through Display Network, YouTube, and Gmail inventory. The lower CPC ($0.42 vs Search’s $1.89) reflects this cheaper inventory mix.

What the 9.32x ROAS actually represents. PMax adoption in this dataset skews toward e-commerce accounts with product feeds and conversion value tracking enabled. Those are exactly the conditions under which PMax performs best: the algorithm has a feed to merchandise, a value signal to optimize toward, and a high-volume checkout funnel to drive traffic into. Lead-generation accounts running PMax are rare in the dataset, so the PMax ROAS number is effectively an e-commerce-weighted figure.

Why Search looks weaker. Search’s 2.61x ROAS spans both e-commerce accounts (where Search campaigns typically protect brand terms and chase high-intent keywords) and lead-generation accounts (where conversion value is often $0 because the advertiser tracks leads, not revenue). Search’s CTR of 10.72% is the highest of any channel and reflects the high-intent nature of query-triggered advertising. Its $45.16 CPA is $21 higher than PMax, which is expected given the higher CPC.

The confound to watch. Channel-level ROAS is not causal. An account that runs PMax well is usually an account that has good product data, accurate conversion tracking, and a healthy margin profile — all of which improve performance regardless of channel. Shifting budget from Search to PMax does not automatically deliver the 9.32x number. It delivers whatever your account inputs allow.

Minor channels worth noting. Shopping shows a 2.63x ROAS, but only $3,487 in spend across 27 accounts, which points to Shopping being largely superseded by Performance Max in the dataset. Display, Video, and Demand Gen are operating at loss-making or break-even ROAS in this dataset, but the sample sizes are small and the campaign objectives are often awareness-oriented rather than conversion-oriented, so these numbers should not be read as indictments of those channels.

Section 4: Bidding Strategy Effectiveness

Bidding strategy is the lever that tells Google Ads what to optimize for. In the 94-account dataset, four automated strategies and one manual strategy account for essentially all spend. The performance gap between them is one of the most important findings in this report.

Bidding Strategy Comparison

Strategy | Accounts | Campaigns | Spend | Conversions | CTR | CPA | ROAS
Maximize Conversion Value | 52 | 278 | $2,135,344 | 68,377 | 1.57% | $31.23 | 6.44x
Maximize Conversions | 55 | 179 | $609,557 | 16,203 | 2.70% | $37.62 | 1.96x
Target Spend | 23 | 49 | $141,120 | 1,059 | 4.49% | $133.24 | 0.09x
Target ROAS | 3 | 3 | $16,256 | 24 | 0.47% | $684.18 | 0.19x
Target CPV | 3 | 3 | $14,067 | 0 | 0.04% | n/a | 0.00x
Manual CPC | 27 | 33 | $3,189 | 171 | 1.81% | $18.66 | 38.55x

Maximize Conversion Value vs Maximize Conversions

Maximize Conversion Value and Maximize Conversions are adopted by almost identical numbers of accounts (52 and 55). The gap in ROAS is substantial: 6.44x versus 1.96x. This is not an algorithm difference. Both strategies use the same underlying Smart Bidding infrastructure. The difference is what the algorithm is told to maximize.

Maximize Conversion Value optimizes toward the revenue a conversion produces. If the advertiser passes a dynamic transaction_value parameter with each conversion, the algorithm learns to bid more aggressively on users and queries that tend to produce higher-value sales, and less aggressively on low-value conversions. Over time, it shifts budget toward the user segments, device types, time windows, and placements that deliver the highest revenue per dollar spent.

Maximize Conversions optimizes toward conversion count. Every conversion is worth the same to the algorithm, so it will happily bid up for a $5 low-margin sale if that is what the data tells it will convert. It is the right choice when an advertiser does not have reliable revenue data to pass back, or when all conversions are genuinely equivalent (e.g., a lead generation account where every form fill is treated identically by the sales team).

Why the gap is so wide in this dataset. The Maximize Conversion Value cohort is dominated by e-commerce accounts with product feeds and dynamic revenue tracking. The Maximize Conversions cohort mixes e-commerce accounts that haven’t upgraded their conversion tracking with lead-generation accounts that don’t track revenue at all. The 6.44x vs 1.96x gap therefore reflects two things at once: an algorithmic advantage (optimizing toward value is more efficient than optimizing toward count) and a selection effect (the kinds of advertisers who configure value tracking tend to be more sophisticated overall).

The practical takeaway. If an account tracks conversion value accurately, Maximize Conversion Value is the default choice. If conversion value is noisy, missing, or uniform, Maximize Conversions is the honest choice. Picking Maximize Conversion Value without value tracking produces worse results, not better, because the algorithm optimizes toward noise.

The Manual CPC Outlier

Manual CPC shows a 38.55x ROAS across 27 accounts and $3,189 in total spend. This is almost certainly a sample-size artifact driven by one or two accounts running branded search on Manual CPC with very low spend and very high return. At $3,189 total spend, Manual CPC represents 0.1% of the dataset and should not be treated as a real strategy comparison.

Target ROAS and Target Spend Notes

Target ROAS appears to perform poorly (0.19x ROAS, $684 CPA) but with only 3 accounts and $16,256 in spend, the sample is too small for meaningful conclusions. Target Spend (0.09x ROAS) is worth noting because it is a spend-pacing strategy rather than a conversion-optimization strategy — it will spend the budget regardless of outcome, which is a feature for brand-awareness campaigns and a bug for conversion-focused ones.

Section 5: Business Model Benchmarks

A Google Ads account is shaped by what the underlying business sells and how it measures success. An e-commerce store measures revenue per click. A lead-generation business measures leads per click and sales conversion downstream. A hybrid business measures both. Mixing benchmark numbers across these models produces meaningless averages.

Business Model Breakdown

Business Model | Accounts | Spend | Conversions | CTR | CPA | ROAS
E-commerce | 46 | $2,181,830 | 62,745 | 1.57% | $34.77 | 6.72x
Lead Generation | 23 | $641,507 | 13,742 | 2.55% | $46.68 | 0.12x
Hybrid | 3 | $45,570 | 1,149 | 2.19% | $39.66 | 2.23x

E-commerce Benchmarks

E-commerce accounts make up 49% of the account base but 72% of total platform spend in this dataset. The 6.72x blended ROAS is the most commercially meaningful number in this report: e-commerce advertisers pass a real transaction_value with each conversion, so ROAS is a direct measure of revenue return.

Key e-commerce figures:

  • CPA: $34.77 (revenue-earning conversions)
  • CTR: 1.57% (dragged down by high-volume Performance Max display impressions)
  • CPC: $0.68 (reflects heavy PMax adoption)
  • ROAS: 6.72x

E-commerce is also the model where Performance Max and Maximize Conversion Value bidding reach their full potential. The two go together: PMax needs a product feed and a value signal, and Maximize Conversion Value needs accurate revenue tracking on the conversion action. Both preconditions are routinely met in e-commerce and routinely unmet in lead generation.

Lead Generation Benchmarks

Lead generation accounts show a 0.12x ROAS in the table above, and this is almost entirely a tracking artifact rather than a performance problem. Lead-gen advertisers typically assign a nominal value (e.g., $1, or nothing) to each conversion event because the true revenue only materializes downstream when a lead closes to a sale. The $79,378 in conversion value for lead-gen accounts across $641K in spend is therefore meaningless as a revenue figure.

The metrics that matter for lead generation are:

  • CPA: $46.68 (cost per lead)
  • CTR: 2.55% (higher than e-commerce because Search dominates the channel mix)
  • CPC: $1.93 (higher than e-commerce because Search inventory is more expensive)
  • Total conversions: 13,742 leads across 23 accounts (597 leads per account)

For a lead-gen advertiser, the right benchmark question is not “what’s my ROAS” but “what’s my cost per lead, what’s my lead quality, and what’s my close rate” — the first two of which live inside Google Ads, and the third of which lives in CRM. Advertisers who try to evaluate lead-gen campaigns with e-commerce ROAS frameworks consistently reach the wrong conclusions.
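
A back-of-envelope sketch of that evaluation — the close rate and deal value below are purely hypothetical inputs, the kind that live in the CRM rather than in Google Ads:

```python
# Is a $46.68 cost per lead sustainable? That depends entirely on two
# downstream numbers Google Ads cannot see.
cost_per_lead = 46.68       # from the table above
close_rate = 0.10           # assumed: 10% of leads close to a sale
revenue_per_sale = 1_500.0  # assumed average deal value

cost_per_sale = cost_per_lead / close_rate         # ~$466.80
effective_roas = revenue_per_sale / cost_per_sale  # ~3.21x
```

The same $46.68 CPL is excellent at a 10% close rate on $1,500 deals and ruinous at a 2% close rate on $300 deals, which is why CPL alone cannot be benchmarked across businesses.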

Hybrid Benchmarks

Hybrid accounts (3 in the dataset) combine direct online revenue with lead-generation traffic. The sample is small, so the 2.23x ROAS should be read as directional at best.

The Core Takeaway on Business Models

The most important thing an advertiser can do before comparing their account to any benchmark is identify which business model their account actually runs on, and only compare against accounts in the same model. A $46 CPA is excellent for a high-ticket B2B lead, unacceptable for a $30 consumer product, and roughly on par for a mid-market SaaS trial. The number in isolation says nothing.

Section 6: Budget Tier Efficiency

One persistent question in Google Ads strategy is whether larger budgets produce better efficiency, worse efficiency, or no relationship at all. Larger budgets unlock more automated-bidding signal, but they also tend to push into lower-intent query tails. The 94-account dataset provides a test.

Budget Tier Definitions

Accounts are classified into three tiers based on monthly spend:

  • Growth (under approximately $3K/month): 19 accounts
  • Scale (approximately $3K-$15K/month): 31 accounts
  • Professional ($15K+/month): 19 accounts
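
Under these (approximate) boundaries, tier assignment is a simple threshold check; a minimal sketch:

```python
def budget_tier(monthly_spend_usd: float) -> str:
    """Classify an account into the report's tiers by monthly spend.

    Boundaries are approximate, per the definitions above.
    """
    if monthly_spend_usd < 3_000:
        return "Growth"
    if monthly_spend_usd < 15_000:
        return "Scale"
    return "Professional"

print(budget_tier(1_200))   # Growth
print(budget_tier(8_000))   # Scale
print(budget_tier(25_000))  # Professional
```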

Budget Tier Comparison

Budget Tier | Accounts | Campaigns | Spend | Conversions | CTR | CPA | ROAS
Growth | 19 | 54 | $61,262 | 4,702 | 2.75% | $13.03 | 3.18x
Scale | 31 | 167 | $509,961 | 17,553 | 2.24% | $29.05 | 2.14x
Professional | 19 | 180 | $839,952 | 24,572 | 1.38% | $34.18 | 3.22x

(A smaller subset of the largest accounts sits outside these three tiers and is included in platform-wide aggregates but not in tier comparisons, where sample sizes of 3+ per tier were required.)

Does Bigger Budget Mean Better Efficiency?

The short answer is no — at least not in this dataset. Growth-tier and Professional-tier accounts show essentially identical ROAS (3.18x vs 3.22x), while Scale-tier accounts — the middle of the distribution — show the lowest ROAS at 2.14x.

Several things are happening simultaneously:

  1. Growth-tier accounts have the lowest CPA ($13.03). With smaller budgets, these accounts are forced to stay close to their highest-intent, lowest-cost conversion paths. They cannot afford to experiment with broader match types or longer-tail queries, so they stay disciplined by necessity.

  2. Professional-tier accounts absorb most of the platform impressions (130M of 172M across the three tiers). They are running Performance Max at scale, which produces enormous impression volume with low CTR, dragging the tier CTR down to 1.38%. Their 3.22x ROAS reflects a stable, mature bidding profile at high volume.

  3. Scale-tier accounts sit awkwardly in between. They have enough budget to push into lower-intent territory, but not enough scale for Smart Bidding to stabilize reliably. The $29.05 CPA and 2.14x ROAS reflect this transitional zone.

The practical implication is that efficiency does not emerge from budget size alone — it emerges from how well the account is structured, tracked, and optimized relative to its spend. A well-run Growth-tier account often outperforms a sloppy Professional-tier account per dollar.

Section 7: Industry Spotlights

Industry-level performance varies enough that blended averages mislead. The following profiles cover the five verticals with 3 or more accounts in the dataset. Smaller verticals are excluded to preserve anonymity.

Technology and SaaS (8 accounts)

Metric | Value
Accounts | 8
Campaigns | 39
Spend | $313,772
Conversions | 5,849
CTR | 0.98%
CPA | $53.65
ROAS | 2.67x

Technology and SaaS is a mixed cohort of product-led trials, demo requests, and self-serve signups. The relatively low CTR (0.98%) reflects heavy Performance Max display impressions combined with lower-intent audience targeting. The $53.65 CPA is the highest of the major verticals, consistent with competitive keyword costs in B2B tech. The 2.67x ROAS understates the category’s commercial return because most SaaS accounts assign static conversion values (e.g., expected LTV) rather than realized revenue at time of signup.
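
As an illustration of the static-value approach, a SaaS advertiser might derive a trial-signup value from an assumed trial-to-paid rate and expected LTV — both numbers below are invented for the example, not dataset figures:

```python
# Hypothetical static conversion value for a SaaS trial signup, used
# when realized revenue is unknown at conversion time.
trial_to_paid_rate = 0.15  # assumed: 15% of trials convert to paid
expected_ltv = 1_200.0     # assumed lifetime value of a paid customer

static_trial_value = trial_to_paid_rate * expected_ltv  # $180 per trial
```

Because this value is an expectation rather than realized revenue, the ROAS reported by Google Ads for such accounts measures model assumptions as much as performance.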

Health and Beauty (7 accounts)

Metric | Value
Accounts | 7
Campaigns | 44
Spend | $131,892
Conversions | 5,865
CTR | 1.30%
CPA | $22.49
ROAS | 3.84x

Health and Beauty posts the second-strongest ROAS of the major verticals at 3.84x, alongside a competitive CPA of $22.49. The vertical is dominated by direct-to-consumer brands with product feeds and strong conversion tracking. Performance Max adoption is high and Maximize Conversion Value is the standard bidding strategy.

Home and Garden (7 accounts)

Metric | Value
Accounts | 7
Campaigns | 48
Spend | $1,187,161
Conversions | 28,871
CTR | 2.05%
CPA | $41.12
ROAS | 3.51x

Home and Garden is the largest vertical by spend in the dataset, absorbing 39% of total platform spend across just 7 accounts. This is a high-AOV category (furniture, appliances, garden equipment) where a single conversion can exceed $1,000 in revenue, which supports the higher $41.12 CPA. The 3.51x ROAS is healthy for a category with long consideration cycles.

Food and Beverage (5 accounts)

Metric | Value
Accounts | 5
Campaigns | 24
Spend | $86,257
Conversions | 2,102
CTR | 1.94%
CPA | $41.04
ROAS | 2.29x

Food and Beverage is the smallest vertical by spend among the major categories. The 2.29x ROAS reflects a low-AOV, repeat-purchase business model where customer lifetime value is the right metric but is rarely available inside Google Ads reporting. Short-term ROAS understates the true return.

Fashion and Apparel (4 accounts)

Metric | Value
Accounts | 4
Campaigns | 43
Spend | $209,059
Conversions | 11,888
CTR | 1.80%
CPA | $17.59
ROAS | 2.89x

Fashion and Apparel has the smallest sample in the industry profiles (4 accounts) and should be read with caution. That said, the vertical shows the lowest CPA of the group at $17.59 and the highest conversion volume per dollar, consistent with the category’s high-impulse, low-friction purchase profile. The 2.89x ROAS is moderate and reflects the heavy discounting pressure typical of fashion e-commerce.

Section 8: Search Terms and Negative Keywords at Scale

Search term management is the practice of reviewing the actual queries that triggered an advertiser’s ads and excluding irrelevant queries via negative keywords. In the 94-account dataset, it is the single highest-leverage routine optimization activity, and the numbers explain why.

Search Term Volume

Metric | Value
Total search term snapshots | 471,983
Unique search terms tracked | 193,635
Accounts with search term data | 60
Total negative keyword exclusions | 32,201
Accounts with exclusions applied | 74
Unique excluded terms | 13,292
Estimated savings from exclusions | $20,713

Why Search Term Hygiene Matters

Across 60 accounts that produced search term data, 193,635 unique queries were tracked. Of these, advertisers excluded 13,292 unique terms via negative keyword lists — roughly 6.9% of the total query surface. That small percentage prevented an estimated $20,713 in wasted spend during the period.

The practice matters for three reasons that compound over time:

  1. Irrelevant queries consume budget that could fund relevant ones. Every dollar spent on a mismatched query is a dollar not spent on a converting one. At scale, this adds up quickly: the most frequently excluded terms in the dataset appeared 20-50 times per account and were often completely unrelated to the advertiser’s offering (e.g., unrelated brand names, colloquial phrases, foreign-language variants).

  2. Broad match and Performance Max increase query surface dramatically. Smart Bidding strategies and automated campaign types give Google more latitude over query matching. This is usually net-positive for conversion volume, but it makes active negative keyword management non-optional. Every advertiser running PMax or broad match in this dataset was generating thousands of new unique queries per month, the majority of which required review.

  3. Quality scores and relevance signals improve with cleaner query surfaces. Google’s bidding algorithms use query relevance as an input to predicted click-through and conversion rates. Cleaner query exposure produces better predictions, which produces better auction positioning at the same or lower cost.

The $20,713 figure in the table is a conservative estimate of direct wasted spend prevented. The indirect benefits (improved Quality Score, better algorithmic learning, cleaner attribution) are larger but harder to quantify.

The Common Negative Patterns

Reviewing the aggregated negative keyword list (anonymized), three recurring patterns dominate:

  • Brand confusions. Queries containing competitor or unrelated brand names that trigger the advertiser’s ads via broad match. These are almost always safe to exclude.
  • Colloquial and foreign-language variants. Queries in a language the advertiser does not sell into, or slang terms that are irrelevant to the product. Performance Max is particularly prone to matching these.
  • Information-seeking queries. Queries that include “how to,” “what is,” or similar informational intent, which rarely convert for commerce but consume budget at full CPC.

A disciplined weekly review of new search terms, with automated flagging of candidates for exclusion, is the single cheapest way to improve account efficiency. Most accounts in the dataset with consistent search term hygiene operated at CPAs 15-30% lower than otherwise-similar accounts without it.
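
An automated flagging pass along these lines can be sketched as follows. The report format, brand list, and spend threshold are all assumptions for illustration, and the exclusion decision itself stays with a human:

```python
import re

# Hypothetical patterns mirroring the three categories above.
INFO_INTENT = re.compile(r"\b(how to|what is|why does|diy)\b", re.IGNORECASE)
COMPETITOR_BRANDS = {"acmecorp", "widgetly"}  # invented brand names

def flag_candidates(rows):
    """Flag negative-keyword candidates from a search-term report.

    rows: list of dicts with 'term', 'cost', and 'conversions' keys
    (an assumed report shape, not a Google Ads API schema).
    """
    flagged = []
    for row in rows:
        term = row["term"].lower()
        if INFO_INTENT.search(term):
            flagged.append((row["term"], "informational intent"))
        elif any(brand in term for brand in COMPETITOR_BRANDS):
            flagged.append((row["term"], "brand confusion"))
        elif row["cost"] >= 25 and row["conversions"] == 0:
            flagged.append((row["term"], "spend with no conversions"))
    return flagged

report = [
    {"term": "how to fix garden fence", "cost": 12.40, "conversions": 0},
    {"term": "acmecorp garden furniture", "cost": 8.10, "conversions": 0},
    {"term": "buy teak garden table", "cost": 96.00, "conversions": 4},
]
print(flag_candidates(report))  # flags the first two rows, keeps the converter
```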

Section 9: Key Takeaways for Advertisers

Condensing the data into action items, these are the findings that generalize most reliably beyond this dataset:

  1. If you track conversion value, use Maximize Conversion Value. The 6.44x versus 1.96x gap between Maximize Conversion Value and Maximize Conversions is the largest strategic lever in the dataset. If your tracking setup passes accurate revenue with each conversion, upgrade the bidding strategy. If your tracking passes flat or noisy values, fix the tracking first — switching strategies without accurate value data produces worse results, not better.

  2. Performance Max works when the preconditions are met. PMax delivered 9.32x ROAS in this dataset, but the accounts achieving those numbers had product feeds, accurate conversion tracking with revenue values, and mature checkout funnels. PMax does not fix a broken account. If your feed, tracking, and funnel are in place, PMax is highly competitive with Search for conversion-focused campaigns.

  3. Lead generation should be measured on CPA and conversion volume, not ROAS. The 0.12x lead-gen ROAS in this dataset is a tracking artifact, not a performance signal. If your business measures success in leads, your primary metrics are cost per lead, lead volume, and lead-to-sale conversion rate (the last of which lives in CRM, not Google Ads). Trying to evaluate lead-gen campaigns with e-commerce ROAS is one of the most common and most damaging misconfigurations in the platform.

  4. Search term hygiene is the highest-leverage routine optimization. $20,713 in estimated savings across 74 accounts, from 32,201 exclusions, is a meaningful return on an activity that takes minutes per week. The accounts in this dataset with disciplined search term reviews consistently operated at lower CPAs than their peers. Automate the flagging, keep the decision in human hands.

  5. Budget size does not equal efficiency. Growth-tier accounts (3.18x ROAS, $13.03 CPA) performed nearly as well as Professional-tier accounts (3.22x ROAS) in the dataset. Discipline and structure matter more than spend volume. Smaller advertisers should not assume they are at a structural disadvantage.

  6. Channel mix should match the business model. E-commerce accounts benefit disproportionately from Performance Max and Shopping inventory. Lead-generation accounts tend to do better on Search with tight keyword targeting and disciplined negatives. Running PMax across a lead-gen account without the preconditions in place produces expensive, noisy traffic.

  7. Benchmark against your own vertical and business model, not platform averages. Technology and SaaS has a $53.65 CPA. Health and Beauty has a $22.49 CPA. Blending the two produces an average that describes neither. Before comparing your account to any external benchmark, make sure the benchmark accounts match yours on business model, vertical, and budget tier.

Section 10: Methodology Notes and Limitations

This report reflects 94 accounts over 9.7 months. That is enough data to draw meaningful conclusions about several patterns, but not enough to draw conclusions about everything. These are the specific limitations to keep in mind.

What the Sample Does and Doesn’t Represent

The 94 accounts in this dataset are accounts under active management via the Lyra platform. They are not a random sample of all Google Ads advertisers. They skew toward accounts that have chosen to invest in active optimization, which likely means they perform above the platform-wide average. Advertisers running self-managed, unoptimized accounts would not be represented here.

The regional mix includes primarily European and North American advertisers, with some presence in other markets. Region-specific patterns (e.g., APAC search behavior, LATAM currency dynamics) are not broken out.

Conversion Tracking Quality Varies

Conversion tracking is the foundation of every performance metric in this report. In practice, conversion tracking quality varies enormously across accounts. Some advertisers pass accurate dynamic revenue; others pass flat values; some lead-gen accounts pass no value at all. The aggregate numbers reflect this mixed reality. Sections 5 and 9 call out specifically where tracking quality affects the interpretation.

Time Period Is Not a Full Year

The 297-day window covers most of 2025’s second half and the first third of 2026. It captures one holiday period (November 2025 through January 2026) but not two, which prevents seasonality or year-over-year analysis. Future editions of this report will extend the dataset as more months accumulate.

AI Optimization Impact Is Not Isolated

Lyra operated pattern learning, AI-assisted suggestions, and automated search term review throughout the reporting period. These features influenced the accounts in the dataset, but the report does not isolate the AI contribution from the baseline account performance. A future edition may address this specifically.

Small Samples Are Flagged, Not Hidden

Where subgroups contain fewer than 3 accounts, the report either excludes them or explicitly flags the small sample size. The Manual CPC 38.55x ROAS, the Target ROAS 0.19x figure, and the Fashion and Apparel 4-account sample are all flagged in context. Readers should not treat these as generalizable findings.

Future Updates

Future editions of this report will expand the dataset as additional accounts enter active management, extend the time window to enable year-over-year comparison, isolate AI-driven optimization impact where possible, and expand industry coverage as smaller verticals accumulate enough accounts for anonymized reporting.

Section 11: About This Data

Data sourced from Lyra’s portfolio of managed Google Ads accounts. Lyra is an AI-powered Google Ads optimization platform serving direct advertisers and agencies. All figures in this report are extracted from live Google Ads API data, aggregated across 94 accounts, and anonymized before publication. Individual account-level data is never exposed.

The reporting period covers June 17, 2025 through April 9, 2026 (297 days, 9.7 months). Totals: $3,014,680 in ad spend, 240,664,202 impressions, 3,911,754 clicks, 85,834 conversions, $15,092,193 in conversion value, across 522 campaigns.

Learn more at lyrappc.com.

