tapebrief

ANET · Q1 2026 Earnings

Bullish

Arista Networks

Reported May 5, 2026

30-second summary

Arista printed $2.71B in Q1 FY2026 (+35.1% YoY, +8.9% QoQ), beating its own $2.6B guide by $109M (4.2%) and delivering non-GAAP operating margin of 47.8% — 180bps above the ~46% guide that management had pre-warned would compress on memory costs. More importantly, management raised the FY2026 outlook to 27.7% revenue growth / ~$11.5B and lifted the AI Fabrics target from $3.25B to $3.5B. Q2 FY2026 is guided to $2.8B (+26.7% YoY off the $2.21B Q2 FY2025 base) with non-GAAP gross margin of 62-63% and operating margin at 46-47%, signaling the operating leverage held through the supply-cost step-up the company spent all of Q4 flagging. The watch-list question of whether Q1 margin would crack 45% is decisively answered — it didn't, and it isn't being guided to crack in Q2 FY2026 either.
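The headline arithmetic in the summary can be sanity-checked in a few lines. This is a sketch using only figures quoted above; the dollar bases are from the brief, and the percentages are recomputed from them.

```python
# Sanity-check of the headline arithmetic quoted in the summary.
# All inputs are figures from the brief itself.

rev_q1 = 2.709      # $B, Q1 FY2026 reported revenue
guide_q1 = 2.600    # $B, management's Q1 revenue guide
beat = rev_q1 - guide_q1
beat_pct = beat / guide_q1 * 100

q2_guide = 2.800    # $B, Q2 FY2026 revenue guide
q2_base = 2.210     # $B, Q2 FY2025 revenue base
q2_yoy = (q2_guide / q2_base - 1) * 100

print(f"Q1 beat: +${beat:.3f}B ({beat_pct:.1f}% above guide)")
print(f"Q2 guide implies {q2_yoy:.1f}% YoY growth")
```

Both recomputed figures match the brief: a $109M beat (4.2% above guide) and 26.7% implied YoY growth for Q2.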

Headline numbers

EPS

Q1 FY2026

$0.87

Revenue

Q1 FY2026

$2.71B

+35.1% YoY

Gross margin

Q1 FY2026

61.9%

Free cash flow

Q1 FY2026

$1.64B

Operating margin

Q1 FY2026

42.7%

Key financials

Q1 FY2026
Metric | Q1 FY2026 | YoY | Q4 FY2025 | QoQ
Revenue | $2.71B | +35.1% | $2.49B | +8.9%
EPS | $0.87 | | $0.82 | +6.1%
Gross margin | 61.9% | | 62.9% | -100bps
Operating margin | 42.7% | | 41.5% | +120bps
Free cash flow | $1.64B | | |

Guidance

Q1 FY2026 beat both revenue and operating margin guidance; Q2 FY2026 guidance reflects strong momentum with 26.7% YoY revenue growth expected.

Arista issues guidance for both the next quarter and the full year; both are summarized below.

Actuals vs prior guidance

Metric | Period | Prior guide | Actual | Δ | Result
Revenue | Q1 FY2026 | $2.6 billion | $2.709 billion | +$0.109B above guide | Beat
Non-GAAP Gross Margin | Q1 FY2026 | 62-63% | 62.4% | within guided range | In line
Non-GAAP Operating Margin | Q1 FY2026 | 46% | 47.8% | +1.8pts above guide | Beat

New guidance

Metric | Period | Guide | YoY
Revenue | Q2 FY2026 | $2.8 billion | +26.7% YoY
Non-GAAP EPS | Q2 FY2026 | $0.88 |
Non-GAAP Gross Margin | Q2 FY2026 | 62-63% |
Non-GAAP Operating Margin | Q2 FY2026 | 46-47% |

Product revenue

Q1 FY2026
Segment | Q1 FY2026 | YoY
Product | $2.311B | +36.5%
Service | $0.398B | +27.3%

Management tone

Q2 FY2025 anchor: "Once in a lifetime opportunity" → Q3 FY2025 anchor: FY2026 disaggregation → Q4 FY2025 anchor: FY2026 raise paired with supply discipline → Q1 FY2026 anchor: Supply-limited execution with three-fabric architecture maturing and FY2026 raised again.

From memory cost as a margin headwind to be absorbed, to supply constraints as a multi-year structural cap on what can ship. At Q4 FY2025, supply pressure was framed primarily through the margin lens — memory prices "order of magnitude exponentially higher" forcing the FY2026 operating margin guide to ~46%. This quarter, the framing shifted from cost-absorption to volume-rationing across the entire BOM. From the New Street Research Q&A: management acknowledged supply constraints are "a 1-2 year phenomenon affecting all chip types, not just memory," that growth guidance started at 20-25% and was raised to 27.7%, and that "decommits don't look positive." Management is now explicit that the FY framework is what can be shipped, not what is demanded — a structural reframe that means upside comes from supply unlock, not demand surprise. Even within that supply-limited frame, the FY2026 outlook was raised to ~$11.5B with the AI Fabrics target lifted from $3.25B to $3.5B.
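As a quick cross-check on the raised framework, the guided growth rate and dollar outlook together imply a FY2025 revenue base of roughly $9.0B. The base is derived here, not disclosed in the brief.

```python
# Implied FY2025 revenue base behind the raised FY2026 framework.
# Both inputs are from the brief; the base itself is derived, not disclosed.
fy26_outlook = 11.5      # $B, raised FY2026 revenue outlook
fy26_growth = 0.277      # 27.7% guided growth
fy25_implied = fy26_outlook / (1 + fy26_growth)
print(f"Implied FY2025 base: ~${fy25_implied:.2f}B")
```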

From scale-up as a 2027 event vaguely defined, to a maturing pipeline with named architectural milestones. Q4 FY2025 introduced scale-up as gated on the eSUN spec and tied to 1.6T speeds in 2027. This quarter, in the Wolf Research exchange, management confirmed "5-7 scale-up rack opportunities still active, some with multiple racks per customer" now in "active engineering phase," with "majority targeting 1.6T deployment in 2027" and "a handful potentially trying experimental 800G." The pipeline is no longer aspirational — it is in qualification. Simon Leopold's question on the $3.5B AI revenue composition extracted a further commitment: scale-across is "expected to contribute at least a third of the $3.5 billion AI revenue in 2026," disaggregating the AI center number for the first time across the three-fabric architecture.

Deferred revenue continued to build, with CFO commentary tying the increase to customer-specific acceptance clauses on new products and AI deployments. Deferred revenue grew to $6.2B from $5.37B last quarter (+$830M QoQ), with product deferred revenue alone up ~$643M sequentially per CFO Chantelle Breithaupt. She noted the company remains "in a period of ramping our new products, winning new customers, and expanding new use cases, including AI," which has "resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances." The implication is multi-year forward revenue visibility, but with diffuse calendarization.
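The CFO's figures pin down the composition of the build. A minimal check, using only the numbers quoted above; the non-product residual is implied, not disclosed:

```python
# Deferred revenue build cited by the CFO (figures from the brief, $B).
deferred_now = 6.20
deferred_prior = 5.37
build = deferred_now - deferred_prior    # total QoQ build (~$830M)
product_build = 0.643                    # product deferred portion, per CFO
other_build = build - product_build      # implied services/other residual

print(f"Total build: +${build:.2f}B; product ~${product_build:.3f}B, "
      f"implied services/other ~${other_build:.3f}B")
```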

From "1-2 new 10%+ customers in 2026" as a hedged forecast, to two specific candidates exhibiting all three AI fabric use cases. In response to Aaron Rakers, management did not retreat from the "1-2 new 10%+ customers" framing but added texture: the candidates "exhibit all three use cases (scale-up, scale-out, scale-across)," with "power constraints driving distributed, multi-tenant scale-across demand" and "EOS platform superiority vs. white-box" as the cited win factor. The shift is from possibility to characterization — and the architectural concentration of the candidates around scale-across specifically is a signal that the 10%+ customer wins will be tied to deployment timing of a feature that didn't exist 12 months ago.

Q&A highlights

Simon Leopold · Raymond James

How much revenue did scale-across contribute last year, and how material is it to the $3.5 billion AI forecast for 2026? How should it trend longer-term?

Scale-across was small last year, with the majority from scale-out. Scale-up is virtually zero and expected only post-2027 when the eSUN spec launches. Scale-across is expected to contribute at least a third of the $3.5 billion AI revenue in 2026, with the balance from scale-out.

- $3.5 billion AI revenue target for 2026
- Scale-across expected to be at least one-third of AI revenue
- Scale-up expected in 2027-2028 timeframe, post-eSUN spec
- Scale-out remains the heritage business
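The "at least one-third" commitment pins a concrete floor on FY2026 scale-across revenue. The floor below is derived from the brief's figures, not separately disclosed:

```python
# Floor on FY2026 scale-across revenue implied by the "at least one-third"
# commitment against the $3.5B AI Fabrics target (derived, not disclosed).
ai_target = 3.5                        # $B, FY2026 AI revenue target
scale_across_floor = ai_target / 3     # "at least a third"
print(f"Scale-across floor: ~${scale_across_floor:.2f}B")
```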

George Notter · Wolfe Research

What is the status of scale-up rack design wins with customers? Previously mentioned 5-7 opportunities; can you provide an update on progress and timeline expectations?

Confirmed 5-7 scale-up rack opportunities still active, some with multiple racks per customer. Currently in an active engineering phase, with the majority targeting 1.6T deployment in 2027 and only a handful potentially trying experimental 800G deployments. Arista holds itself to a higher bar than ODMs for production-worthiness and eSUN spec adherence.

- 5-7 active scale-up rack design opportunities maintained
- Majority targeting 1.6T deployment in 2027
- Handful attempting experimental 800G
- Active engineering phase underway

Antoine Gabin · New Street Research

Does current supply capacity support the 27.7% growth guidance this year and next? Does the growth guide reflect what is achievable under supply constraints?

Supply constraints are a 1-2 year phenomenon affecting all chip types, not just memory. The 27.7% guidance represents the best attempt given constraints; it started at 20-25%, now sits at 27.7%, and could improve, but decommits don't look positive. Multi-year demand exists, but the company will sacrifice gross margins to meet customer needs and prevent GPU/AI infrastructure underutilization.

- Supply constraints expected 1-2 year duration
- Started guidance at 20-25%, raised to 27.7%
- Willing to sacrifice gross margins for supply continuity
- Multi-year purchase commitments in place

Aaron Rakers · Wells Fargo

What is the status on new 10% customers? Are you still expecting 1-2 new 10%+ customers in 2026? What are engagement characteristics with hyperscale customers beyond Microsoft and Meta?

Still expects at least 1-2 new 10%+ customers on a demand basis, but achieving that depends on shipments/supply. Two candidates exhibit all three use cases (scale-up, scale-out, scale-across); a lack of power at sites is driving multi-tenant scale-across demand, and both show strong appreciation for EOS reliability, observability, and the Layer 2/3 stack versus white-box alternatives.

- 1-2 new 10%+ customers expected (demand visibility exists)
- New candidates utilize all three AI fabric use cases
- Power constraints driving distributed, multi-tenant scale-across
- EOS platform superiority vs. white-box cited as differentiator

Tal Liani · Bank of America

Deferred revenue doubled year-over-year and increased $826M in the last quarter. What conditions trigger recognition? Is it facility buildout, GPU arrival, installation?

Qualification cycles have extended from 2-4 quarters historically to 6-8 quarters currently. Requirements include facility readiness, GPU/accelerator availability, physical cable installation (thousands of people over months), and product testing/ecosystem validation. Both new customer designs and new product families (EtherLink AI products) require extended qualification periods.

- Qualification cycles extended from 2-4 quarters to 6-8 quarters
- Deferred revenue $6.2 billion (up from $5.37 billion prior quarter)
- Product deferred revenue increased ~$643 million sequentially
- Multiple determinants: facility prep, accelerators, cabling, product testing

Answers to last quarter's watch list

Whether Q1 FY2026 non-GAAP operating margin lands at the 46% guide or compresses further — Q1 FY2026 printed 47.8%, 180bps above the 46% guide. The supply-cost absorption modeled by management ran below their own conservatism, and Q2 FY2026 is guided to 46-47%, not below 46%. The re-rate risk on the 62-64% FY gross margin band does not materialize on this print.
Resolved positively
Whether the newly raised FY2026 framework holds against worsening memory and silicon costs — Management raised the FY2026 outlook to 27.7% growth / ~$11.5B and lifted AI Fabrics from $3.25B to $3.5B, with campus held at $1.25B and operating margin maintained at ~46%. Framework not just held but raised.
Resolved positively
Whether the AI center target tracks ratably or back-half-loads — Not specifically quantified for the quarter, but the Newstreet exchange anchored the FY at 27.7% growth, and the Raymond James exchange disaggregated the AI revenue line at the newly raised $3.5B with at least one-third from scale-across.
Continue monitoring
Concrete FY2025 AI center revenue dollar disclosure — The company didn't quantify FY2025 AI center revenue on this print; the discussion moved forward to FY2026's $3.5B composition rather than backward to FY2025's overshoot.
Not resolved
Named fifth AI customer or a fourth tier-one cloud customer GPU milestone — Management reiterated "1-2 new 10%+ customers expected" without naming, but for the first time characterized the candidates as exhibiting all three AI fabric use cases with multi-tenant scale-across as the win driver. Progress on characterization, not on naming.
Continue monitoring
Purchase commitments trajectory from the $6.8B Q4 FY2025 print — CFO disclosed purchase commitments rose to $8.9B from $6.8B at end of Q4 FY2025, consistent with multi-year supply agreements and elevated chip procurement for new products and AI deployments.
Resolved
Whether management quantifies the non-AI, non-campus core business growth rate — Wells Fargo's question on hyperscale dynamics was answered with architectural texture but the non-AI/non-campus core line item remains unquantified for a third consecutive quarter.
Continue monitoring

What to watch into next quarter

Whether the newly raised $11.5B / 27.7% FY2026 framework is raised again on the Q2 FY2026 print — Q1 FY2026 ran at 35.1% YoY against an FY framework of 27.7%; a Q2 FY2026 print above guide would put further pressure on management to refresh the framework.

Whether Q2 FY2026 non-GAAP gross margin holds the 62-63% band or steps down further — Q1 FY2026 landed at 62.4%, at the low end of the range. The New Street exchange flagged management's willingness to "hurt our gross margins" to maintain shipments; a sub-62% print would convert that willingness into evidence.

Whether scale-across reaches the at-least-one-third-of-$3.5B AI revenue contribution implied by Simon Leopold's exchange — would validate the three-fabric architecture as commercially material in FY2026 rather than 2027, and is the most falsifiable element of the AI revenue framework.

Deferred revenue trajectory from $6.2B — CFO flagged increased volatility tied to customer-specific acceptance clauses on new products and AI deployments; continued build would extend forward visibility, while flattening would suggest acceptance is clearing faster, with implications for H2 FY2026 revenue calendarization.

Naming of a new 10%+ customer or progress on the two scale-across candidates — would convert the multi-quarter hedge into a falsifiable commitment.

Whether one of the 5-7 scale-up rack engagements moves from "active engineering" to a named 800G experimental deployment — would pull scale-up revenue contribution into 2026 rather than 2027 and represents the largest unmodeled upside in the framework.

Purchase commitments trajectory from the $8.9B Q1 FY2026 print — direction speaks directly to whether the 1-2 year multi-chip supply shortage is still inflating working capital or whether procurement is normalizing.

Sources

  1. Arista Networks Q1 FY2026 Earnings Release, filed with SEC: https://www.sec.gov/Archives/edgar/data/1596532/000159653226000074/ex991q126-earningsrelease.htm
  2. Arista Networks Q1 FY2026 earnings call prepared remarks and Q&A.
  3. Arista Networks Q4 FY2025 brief (Tapebrief), for cross-quarter comparison.
  4. Arista Networks Q3 FY2025 brief (Tapebrief), for cross-quarter comparison.
  5. Arista Networks Q2 FY2025 brief (Tapebrief), for cross-quarter comparison.

Get the next brief, free.

We publish analyst-grade earnings briefs the same day or morning after every call — headline numbers, segment KPIs, Q&A highlights, and tone analysis. Free during beta.

This is not investment advice.