Saudi Arabia Hyperscale Data Center Market Size and Share

Saudi Arabia Hyperscale Data Center Market Analysis by Mordor Intelligence
The Saudi Arabia hyperscale data center market size is valued at USD 1.65 billion in 2025 and is forecast to reach USD 4.99 billion in 2031, expanding at a 20.28% CAGR over the period. Rising AI workload density, Vision 2030 digitalization priorities and sovereign-cloud mandates are accelerating capital expenditure on liquid-cooled, GPU-rich campuses. Sovereign regions launched by AWS, Microsoft and Google support data-localization compliance, while strategic Red Sea cable landings cut round-trip latency to Europe, Asia and Africa to below 25 ms. Grid-tied solar-plus-battery power-purchase agreements (PPAs) priced under USD 0.05/kWh improve total cost of ownership for energy-intensive AI factories. Capacity additions concentrate in Riyadh, Dammam and NEOM, yet secondary metros face execution delays due to skilled-labour shortages and immature liquid-cooling supply chains.
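The headline figures can be cross-checked with the standard CAGR formula; a minimal sketch using only the 2025 and 2031 values quoted above:

```python
# Sketch: verifying the implied CAGR from the report's headline figures.
# Inputs come from the report; the formula is the standard CAGR definition.

start_value = 1.65   # USD billion, 2025
end_value = 4.99     # USD billion, 2031
years = 2031 - 2025  # 6-year forecast window

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # matches the reported 20.28% within rounding
```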
Key Report Takeaways
- By data center type, hyperscaler self-builds held 62% of Saudi Arabia hyperscale data center market share in 2024 while recording a 21.60% CAGR through 2030.
- By component, IT infrastructure commanded 43% of 2024 spending, whereas mechanical infrastructure is advancing at a 20.60% CAGR to 2030.
- By tier standard, Tier III facilities accounted for 75% of the Saudi Arabia hyperscale data center market size in 2024, while Tier IV deployments register the fastest 21.80% CAGR to 2030.
- By end-user industry, cloud and IT led with 41% revenue in 2024 but the government segment is expanding at a 22.40% CAGR to 2030.
- By data center size, massive-scale (25–60 MW) sites controlled 57% share in 2024, yet mega-scale (>60 MW) campuses show a 22.10% CAGR through 2030.
Saudi Arabia Hyperscale Data Center Market Trends and Insights
Drivers Impact Analysis
| Driver | % Impact on CAGR Forecast | Geographic Relevance | Impact Timeline |
|---|---|---|---|
| Exploding AI/ML GPU racks (greater than 50 kW) accelerate Riyadh hyperscale builds | 4.5% | Riyadh core, expanding to NEOM and Jeddah | Medium term (2-4 years) |
| Sovereign-cloud mandates for public sector and finance workloads | 3.2% | National, with concentration in Riyadh financial district | Short term (≤ 2 years) |
| Vision 2030 smart-city projects (NEOM, The Line) anchoring multi-campus demand | 2.8% | NEOM, The Line, Diriyah with spillover to Eastern Province | Long term (≥ 4 years) |
| Liquid-cooling readiness drives 80 MW AI-ready campus design | 2.1% | Riyadh, NEOM, Dammam industrial zones | Medium term (2-4 years) |
| Grid-tied solar-plus-battery PPAs less than USD 0.05/kWh slash TCO | 1.8% | National, with advantages in Northern regions | Medium term (2-4 years) |
| Red-Sea cable landing stations enable sub-25 ms tri-continent latency | 1.4% | Jeddah, Yanbu coastal regions with national connectivity | Long term (≥ 4 years) |
| Source: Mordor Intelligence | | | |
Exploding AI/ML GPU racks (greater than 50 kW) accelerate Riyadh hyperscale builds
HUMAIN’s procurement of 18,000 NVIDIA GB300 units underscores surging GPU density that traditional facilities cannot sustain. Direct-to-chip and immersion cooling architectures therefore dominate new Riyadh builds targeting greater than 80 MW IT loads. Aramco Digital’s Groq-powered inference cluster in Dammam reflects energy-sector digitalization that extends hyperscale demand beyond standard IT workloads [1] (Data Center Dynamics, “Saudi Arabian AI venture Humain buys 18,000 Nvidia GB300 chips,” datacenterdynamics.com). GPU-rich deployments carry premium design-build costs yet drive higher revenue per rack, lifting overall Saudi Arabia hyperscale data center market value. Sovereign ambitions to develop Arabic large-language models further concentrate AI-ready capacity in core metros. Resulting power and cooling requirements position liquid-cooling vendors for sustained growth.
Sovereign-cloud mandates for public sector and finance workloads
The Personal Data Protection Law and Cloud Computing Services Regulations require sensitive workloads to reside within the Kingdom, triggering a wave of sovereign region launches by global providers [2] (Baker McKenzie, “Data Localization and Regulation of Non-Personal Data,” bakermckenzie.com). The Cloud First Policy commits 80% of public services to cloud platforms by 2030, representing USD 4.7 billion of addressable spending. Banks such as Riyad Bank deploy active-active architectures with Huawei storage arrays to assure compliance and uptime. Demand for multi-zone resiliency fosters twin-region footprints across Riyadh and secondary metros. Domestic operator SCCC expands beyond the capital to capture workloads that must remain under local operational oversight. Collectively, these mandates anchor long-term hyperscale utilization rates.
Vision 2030 smart-city projects (NEOM, The Line) anchoring multi-campus demand
DataVolt’s USD 5 billion contract to build a 1.5 GW net-zero AI factory in NEOM marks the region’s largest single data-center investment [3] (NEOM, “DataVolt and NEOM to develop region’s first net-zero AI factory,” neom.com). The Line’s linear topology requires distributed nodes for 9 million residents, embedding hyperscale backbones along the 170 km corridor. Oxagon’s industrial zone combines manufacturing and logistics, demanding edge analytics and massive centralized processing. Renewable energy co-location with desalination and hydrogen plants lowers power costs and aligns with sustainability targets. Spillover demand reaches Eastern Province as energy players digitalize, emphasizing the Saudi Arabia hyperscale data center market’s geographic diversification.
Liquid-cooling readiness drives 80 MW AI-ready campus design
Saudi Arabia’s desert climate renders air-cooling inefficient for racks exceeding 50 kW. Operators pivot to rear-door heat exchangers, direct-to-chip loops and immersion tanks, driving mechanical infrastructure budgets up at a 20.60% CAGR. DataVolt’s USD 20 billion Supermicro procurement accelerates local supply of liquid-cooled server platforms. Indigenous integrators such as Midis Energy develop customized coolant distribution units, although component localization lags mature markets. Liquid-cooling enables higher compute density, cutting land and shell costs per MW and boosting energy-efficiency metrics crucial for AI workload profitability.
Restraint Analysis
| Restraint | % Impact on CAGR Forecast | Geographic Relevance | Impact Timeline |
|---|---|---|---|
| Scarce data-center skills inflating project timelines | -2.3% | National, with acute shortages in secondary metros | Short term (≤ 2 years) |
| Immature liquid-cooling supply chain in GCC | -1.7% | Regional, affecting specialized cooling deployments | Medium term (2-4 years) |
| Potential water-usage caps for desert locations | -1.2% | National, with higher impact in inland regions | Medium term (2-4 years) |
| Local-grid curtailment greater than 30 MW in secondary metros | -0.8% | Secondary cities excluding Riyadh, Jeddah, Dammam | Short term (≤ 2 years) |
| Source: Mordor Intelligence | | | |
Scarce data-center skills inflating project timelines
Saudi Arabia records a shortfall of certified electrical, mechanical and cybersecurity technicians, inflating wage premiums and lengthening build schedules. AWS’s Women’s Skills Initiative aims to train 4,000 professionals but scale remains limited relative to forecast capacity. The Uptime Institute has launched a Data Center Academy locally, yet graduation pipelines lag near-term deployment peaks. Operators rely on expatriate expertise, incurring mobility costs and visa lead times. Talent scarcity is more acute in Dammam and NEOM than in Riyadh, complicating multi-site expansion plans for secondary metros that already face grid-integration hurdles.
Immature liquid-cooling supply chain in GCC
Direct-to-chip cold plates and immersion tanks are largely imported from the United States and East Asia, creating 6–12-month lead times and price premiums that squeeze project budgets. Local integrators such as Ctelecoms offer traditional chilled-water solutions but lack engineering depth for high-density AI clusters. Government industrial-localization incentives under Vision 2030 intend to seed domestic manufacturing, yet economies of scale will take years to materialize. Smaller operators without hyperscaler purchasing power struggle to secure priority allocations, widening the technology gap between Tier I and Tier II providers. Resultant timetable elongation tempers the otherwise rapid Saudi Arabia hyperscale data center market growth.
Segment Analysis
By Data Center Type: Self-Build Dominance Drives Sovereign Infrastructure
Self-built hyperscale facilities captured 62% of Saudi Arabia hyperscale data center market share in 2024, buoyed by AWS, Microsoft and Google securing direct operational control of sovereign regions. Their 21.60% CAGR through 2030 keeps capital formation elevated as regulatory certainty and workload predictability justify ownership economics. Colocation suppliers retain niche roles providing swing capacity and risk diversification, yet face tightening margins as anchor tenants migrate to owned campuses. The self-build model optimizes network topologies and supports AI-specific designs, an imperative as rack densities exceed 50 kW.
AWS’s USD 5.3 billion Saudi region exemplifies this self-build momentum, integrating proprietary switch-fabric and custom silicon accelerators. Google’s Blue-Raman fibre system complements its data-center footprint, lowering transport costs and latency. Domestic joint ventures such as SCCC combine local regulatory familiarity with Alibaba Cloud technology stacks, offering an alternative where sovereignty or language localization is paramount. Together, these dynamics reinforce self-build primacy within the Saudi Arabia hyperscale data center market size expansion.

Note: Segment shares of all individual segments available upon report purchase
By Component: Mechanical Infrastructure Accelerates with Cooling Innovation
IT infrastructure accounted for 43% of total 2024 investment, yet mechanical infrastructure is forecast to rise at a 20.60% CAGR, propelled by liquid-cooling retrofits and high-capacity chillers. Spending on mechanical plant thus outpaces server refresh cycles as operators target power-usage-effectiveness (PUE) benchmarks of 1.2–1.3. Electrical infrastructure, including UPS and switchgear, remains critical for Tier IV ambitions but grows more steadily as modular power skids mature.
Construction contractors diversify into prefabricated modules, trimming onsite installation timelines amidst labour constraints. DCIM adoption heightens asset-efficiency, and Saudi Tabreed’s district-cooling expertise informs innovations such as centralised chilled-water farms serving clusters of halls. Collectively, component spending profiles reflect a shift from compute hardware dominance to balanced capital allocation across cooling, power and automation.
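The PUE targets discussed above translate directly into megawatts of non-IT overhead; a minimal sketch, using an illustrative 80 MW IT load rather than figures from the report:

```python
# Sketch: power-usage-effectiveness (PUE) and the overhead a lower target saves.
# PUE = total facility power / IT power; the 80 MW load is illustrative only.

def pue_overhead_mw(it_load_mw: float, pue: float) -> float:
    """Non-IT load (cooling, power distribution) implied by a PUE figure."""
    return it_load_mw * (pue - 1)

it_load = 80.0  # MW, an AI-ready campus scale mentioned in the report
for pue in (1.5, 1.3, 1.2):
    print(f"PUE {pue}: {pue_overhead_mw(it_load, pue):.0f} MW of overhead")
```

Moving from PUE 1.5 to 1.2 on such a campus would cut roughly 24 MW of non-IT load, which is why mechanical infrastructure spending is growing faster than server refresh cycles.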
By Tier Standard: Tier IV Adoption Reflects Mission-Critical Requirements
Tier III facilities constituted 75% of live capacity in 2024, satisfying most enterprise and government service-level objectives. However, Tier IV footprints are expanding at a 21.80% CAGR as AI model training, financial transactions and national security workloads demand concurrent maintainability and fault tolerance. Najm Insurance’s Tier III certification showcases baseline resiliency, while forthcoming Tier IV campuses from hyperscalers integrate dual power feeds, 2N+1 cooling, and distributed-UPS topologies.
The Saudi Arabia hyperscale data center market size allocated to Tier IV sites is expected to triple by 2031, reflecting risk-mitigation priorities. Uptime Institute audits drive design discipline, and insurers reward higher tiers with premium discounts that partly offset the higher capex. Long term, Tier IV is set to become the de facto standard for mega-scale AI factories tied to critical national objectives.
By End-User Industry: Government Sector Leads Digital Transformation
Cloud and IT tenants held 41% revenue in 2024, yet government workloads deliver the fastest 22.40% CAGR, propelled by Vision 2030’s Cloud First mandate. Ministries migrate ERP, national ID and public-health platforms into domestically operated zones, requiring compliance with data-sovereignty and cyber-resilience standards. Telecom operators extend 5G core functions into hyperscale halls, while BFSI institutions implement active-active topologies for zero-downtime settlement systems.
Industrial diversification makes manufacturing a rising consumer of edge-enhanced analytics. Aramco’s digital-oilfield initiative alone deploys petabyte-scale storage and real-time AI inference across upstream assets. E-commerce, media and gaming workloads, though smaller, benefit from low-latency Riyadh nodes and widened international transit via Red Sea cables. Overall utilisation diversity stabilises the Saudi Arabia hyperscale data center industry revenue streams against sector-specific downturns.

Note: Segment shares of all individual segments available upon report purchase
By Data Center Size: Mega-Scale Facilities Drive Capacity Expansion
Massive-scale (25–60 MW) plants represented 57% of live megawatt inventory in 2024, balancing quick-start construction with economies of scale. The Saudi Arabia hyperscale data center market size allocated to mega-scale (>60 MW) campuses, however, is rising at 22.10% CAGR as DataVolt and HUMAIN unveil 1.5 GW and 500 MW projects respectively. Projects surpassing 100 MW cluster where 380 kV grid interconnects and abundant land converge, notably NEOM, Dammam and Riyadh’s industrial belt.
Mega-campuses achieve sub-USD 7 million/MW build costs via modular blocks and shared central-utility plants. AI factories profit from contiguous compute pools that reduce training-cluster latency. Large-scale investors benefit from preferential tariffs and green-power agreements, underscoring a virtuous cycle whereby bigger sites achieve lower marginal energy and land costs.
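The scale economics above can be made concrete with a minimal capex sketch; only the sub-USD 7 million/MW mega-scale figure comes from the report, while the smaller site's USD 8 million/MW cost is a hypothetical for comparison:

```python
# Sketch: total campus capex at different per-MW build costs. The mega-scale
# cost is the sub-USD 7M/MW figure cited above; the 40 MW site's USD 8M/MW
# is an assumed value for illustration only.

def campus_capex_musd(it_load_mw: float, cost_per_mw_musd: float) -> float:
    """Total build cost in USD millions for a given IT load."""
    return it_load_mw * cost_per_mw_musd

massive = campus_capex_musd(40, 8.0)   # assumed USD 8M/MW at 40 MW
mega = campus_capex_musd(120, 7.0)     # cited sub-USD 7M/MW at mega scale
print(f"40 MW site:  USD {massive:,.0f}M total")
print(f"120 MW site: USD {mega:,.0f}M total, at a lower cost per MW")
```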
Geography Analysis
Riyadh anchors the Saudi Arabia hyperscale data center market with 273 MW IT load, underpinned by proximity to ministries and banks adopting sovereign-cloud solutions. The King Abdullah Financial District’s district-cooling capacity demonstrates integrated infrastructure planning that attracts hyperscaler zones, including AWS’s forthcoming region. Planned expansions target northern and western suburbs, balancing grid capacity and land availability.
The Eastern Province, led by Dammam and Al-Khobar, is the fastest-growing cluster. Groq’s 19,000-LPU inference centre and Aramco-backed industrial cloud demand pull 123 MW live capacity toward specialised AI inference and seismic-data analytics workloads. Submarine cables via the Persian Gulf enhance Asian latency, appealing to multi-national energy and manufacturing firms with dual-hemisphere operations.
NEOM represents a greenfield mega-hub. DataVolt’s 1.5 GW net-zero campus, operational by 2028, integrates 100% renewable power, seawater desalination and hydrogen co-generation, pioneering sustainable digital infrastructure. The Line’s distributed topology mandates edge nodes every 20 km, creating incremental capacity adjacent to residential modules. Oxagon couples logistics automation with HPC clusters, solidifying NEOM’s position as a global lighthouse project.
Collectively, these three corridors—central, eastern and north-western—frame a tri-polar topology that supports national latency requirements and disaster-recovery separation standards. Secondary metros like Jeddah and Medina emerge as edge complements, especially for content delivery and gaming workloads utilizing new Red Sea cable landings. Geographical spread ensures workload resiliency while maximizing coverage across the Kingdom’s 2.1 million km² landmass.
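The sub-25 ms tri-continent latency claim quoted earlier can be sanity-checked against pure fibre propagation delay; this sketch assumes light travels at roughly 200,000 km/s in fibre and ignores switching and queuing delay, so real routes support somewhat shorter distances:

```python
# Sketch: maximum one-way fibre route length that fits a round-trip latency
# budget, assuming ~200,000 km/s propagation (c divided by fibre's ~1.5
# refractive index) and zero equipment delay.

FIBRE_KM_PER_S = 200_000

def max_path_km(rtt_budget_ms: float) -> float:
    """One-way route length a pure-propagation RTT budget allows."""
    return (rtt_budget_ms / 1000) * FIBRE_KM_PER_S / 2

print(f"25 ms RTT budget -> up to {max_path_km(25):,.0f} km one-way")  # 2,500 km
```

A 25 ms round-trip budget therefore implies routes of roughly 2,500 km or less, consistent with Red Sea landings serving nearby European, Asian and African hubs rather than arbitrary endpoints.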
Competitive Landscape
Competition is intensifying among global hyperscalers, regional telecoms and state-backed newcomers. AWS allocated USD 5.3 billion to its Saudi region, integrating Graviton processors to optimize energy efficiency and cost per core. Microsoft invests in sovereign cloud and developer training, while Google pairs its region with the Blue-Raman cable to secure routing advantages. These firms prioritize direct ownership rather than wholesale co-location, reinforcing self-build market domination.
stc leverages existing fibre and mobile networks to up-sell colocation and managed services, allocating SAR 1 billion (USD 266 million) into three mega data centers. DataVolt, backed by Vision 2030 funds, partners with Supermicro to fast-track liquid-cooled capacity and challenge incumbent share. HUMAIN differentiates via sovereign AI factories, sourcing both NVIDIA and AMD chips to diversify supply risk and cultivate a domestic AI-model ecosystem.
Smaller players such as Pure Data Centres, Dune Vaults and Khazna enter through joint ventures, focusing on niche enterprise and modular edge nodes. Barriers to entry include land permits, 380 kV grid access and specialised labour. Strategic collaboration around renewable energy and connectivity (e.g., Center3 with DataVolt) further reshapes alliances. Overall, the Saudi Arabia hyperscale data center market exhibits moderate fragmentation, creating scope for consolidation once construction waves stabilise.
Saudi Arabia Hyperscale Data Center Industry Leaders
STC (Saudi Telecom Company)
Amazon Web Services
Microsoft Corporation
Google LLC
Oracle Corporation
*Disclaimer: Major Players sorted in no particular order*

Recent Industry Developments
- May 2025: DataVolt signed a USD 20 billion deal with Supermicro to deploy liquid-cooled hyperscale infrastructure across Saudi Arabia.
- May 2025: HUMAIN and NVIDIA agreed to build AI factories with up to 500 MW capacity, deploying 18,000 NVIDIA GB300 systems.
- May 2025: AMD and HUMAIN entered a USD 10 billion collaboration for 500 MW AI compute over five years.
- February 2025: DataVolt and NEOM sealed a USD 5 billion pact for a 1.5 GW net-zero AI factory in Oxagon.
Research Methodology Framework and Report Scope
Market Definitions and Key Coverage
Our study defines the Saudi Arabian hyperscale data center market as all new or expanded facilities located in the Kingdom that deliver at least four megawatts of IT load per campus building and are primarily owned, leased, or dedicated to cloud, AI, and large-scale digital platform operators. These values capture self-builds commissioned by hyperscalers as well as landlord-run halls that are contractually block leased to a single cloud tenant.
Scope exclusion: Modular edge sites below four megawatts, metro carrier hotels serving multiple small tenants, and traditional enterprise server rooms are excluded.
Segmentation Overview
- By Data Center Type
- Hyperscale Self-build
- Hyperscale Colocation
- By Component
- IT Infrastructure
- Server Infrastructure
- Storage Infrastructure
- Network Infrastructure
- Electrical Infrastructure
- Power Distribution Units
- Transfer Switches and Switchgears
- UPS Systems
- Generators
- Other Electrical Infrastructure
- Mechanical Infrastructure
- Cooling Systems
- Racks
- Other Mechanical Infrastructure
- General Construction
- Core and Shell Development
- Installation and Commissioning
- Design Engineering
- Fire Detection, Suppression and Physical Security
- DCIM/BMS Solutions
- By Tier Standard
- Tier III
- Tier IV
- By End-User Industry
- Cloud and IT
- Telecom
- Media and Entertainment
- Government
- BFSI
- Manufacturing
- E-Commerce
- Other End Users
- By Data Center Size
- Large (≤ 25 MW)
- Massive (> 25 MW and ≤ 60 MW)
- Mega (> 60 MW)
Detailed Research Methodology and Data Validation
Primary Research
Mordor analysts interviewed hyperscaler real estate leads, local utilities, colo developers, and cooling equipment OEMs across Riyadh, Jeddah, Dammam, and NEOM. These discussions validated live capacity tallies, cross-checked average rack densities, and refined construction cycle assumptions that desk work alone could not pinpoint.
Desk Research
We began with public-domain pillars: Vision 2030 program filings, Communications, Space & Technology Commission (CST) capacity statistics, and Saudi Customs import codes for servers and chillers, which together ground the demand pool. Trade associations such as the Uptime Institute, GCC Interconnection Authority, and the Data Centre Alliance provided power-density norms and regional PUE baselines. Company 10-Ks, investor decks, and national press releases revealed disclosed megawatt tranches and construction timelines.
Next, paid databases helped us stitch gaps: D&B Hoovers for developer financials, Dow Jones Factiva for deal chronology, and Questel for immersion cooling patent activity. Numerous additional secondary sources were reviewed; the list above is illustrative, not exhaustive.
Market-Sizing & Forecasting
A top-down model converts national server imports, disclosed build-outs, and grid-connection approvals into commissioned megawatts, which are then multiplied by blended fit-out costs to derive market value. Select bottom-up checks (sampled campus ASP multiplied by IT load, plus supplier shipment rolls) test the totals before adjustments. Key drivers include AI GPU rack penetration, sovereign-cloud mandates, solar PPA pricing, average PUE drift, and liquid-cooling adoption rates. Forecasts apply multivariate regression on these variables, supplemented by scenario analysis for utility-scale renewable delays. Where bottom-up estimates lag disclosures, we interpolate using tier-level capacity factors agreed during expert calls.
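The top-down step described above (commissioned megawatts multiplied by a blended fit-out cost, cross-checked against a bottom-up sample) can be sketched as follows; every number here is a hypothetical placeholder, not a figure from the report:

```python
# Sketch of the top-down sizing step: commissioned MW x blended fit-out cost,
# tested against a bottom-up sum of sampled campuses. All inputs below are
# hypothetical placeholders, not report data.

def top_down_value_busd(commissioned_mw: float, fitout_musd_per_mw: float) -> float:
    """Market value in USD billions from the top-down model."""
    return commissioned_mw * fitout_musd_per_mw / 1000

def bottom_up_value_busd(campuses: list[tuple[float, float]]) -> float:
    """Sum over sampled campuses of (IT load in MW, ASP in USD M/MW)."""
    return sum(mw * asp for mw, asp in campuses) / 1000

top_down = top_down_value_busd(commissioned_mw=230, fitout_musd_per_mw=7.5)
bottom_up = bottom_up_value_busd([(80, 8.0), (60, 7.0), (90, 7.2)])
variance = abs(top_down - bottom_up) / top_down
print(f"top-down USD {top_down:.2f}B vs bottom-up USD {bottom_up:.2f}B "
      f"({variance:.1%} variance)")
```

When the variance between the two estimates exceeds an agreed threshold, the methodology above falls back to tier-level capacity factors from expert calls before finalizing the figure.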
Data Validation & Update Cycle
Outputs undergo variance flags against historical CST data, peer investment signals, and currency parity trends, followed by a two-step analyst review. Reports refresh yearly, and material project announcements trigger interim updates; a final pre-publication sweep ensures clients see the latest view.
Why Mordor's Saudi Arabia Hyperscale Data Center Baseline Commands Reliability
Published estimates often diverge because firms select different facility thresholds, bundle colocation revenue, or assume static rack densities.
Key gap drivers include Mordor's stricter ≥ 4 MW cutoff, our use of PUE-adjusted fit-out costs, and our annual refresh cadence, whereas other publishers may pool enterprise halls, apply global ASPs, or roll forward older capacity maps.
Benchmark comparison
| Market Size | Anonymized source | Primary gap driver |
|---|---|---|
| USD 1.65 B (2025) | Mordor Intelligence | - |
| USD 1.33 B (2024) | Regional Consultancy A | includes enterprise refurbishments; uses static $/MW |
| USD 1.50 B (2024) | Trade Journal B | omits self-build AI factories announced post-2024 |
| USD 4.51 B (2024) | Global Consultancy C | measures full data center market, not hyperscale only |
Taken together, the comparison shows that Mordor's numbers sit between narrower enterprise counts and broad all facility tallies, giving decision makers a balanced, transparent baseline anchored to clear capacity thresholds and reproducible steps.
Key Questions Answered in the Report
What is the current size and projected growth of the Saudi Arabia hyperscale data center market?
The market is valued at USD 1.65 billion in 2025 and is forecast to reach USD 4.99 billion in 2031, reflecting a 20.28% CAGR.
Which tier standard holds the largest share of current capacity?
Tier III facilities accounted for 75% of live capacity in 2024, while Tier IV deployments register the fastest 21.80% CAGR to 2030.
Which data-center type holds the largest share today?
Hyperscaler self-build facilities account for 62% of installed capacity, growing at a 21.60% CAGR through 2030.
Why are liquid-cooled systems gaining traction in Saudi data centers?
AI workloads now exceed 50 kW per rack, and the desert climate limits air-cooling efficiency, so operators adopt direct-to-chip and immersion solutions that cut energy use and enable higher rack densities.
How is Vision 2030 influencing hyperscale investments?
Smart-city projects such as NEOM and The Line require multi-campus compute backbones, driving mega-scale sites (>60 MW) to expand at a 22.10% CAGR.
What restraints could slow near-term capacity additions?
Shortages of certified data-center engineers and an immature regional liquid-cooling supply chain are lengthening project timelines and adding cost pressures.
Which end-user segment is expanding the fastest?
Government workloads are advancing at a 22.40% CAGR as the Cloud First Policy moves 80% of public services to domestic cloud platforms by 2030.




