China Data Center Market Analysis by Mordor Intelligence
The China Data Center Market size is estimated at USD 29.23 billion in 2025 and is expected to reach USD 56.71 billion by 2030, a CAGR of 14.17% over the forecast period (2025-2030). In terms of IT load capacity, the market is expected to grow from 7.05 thousand megawatts in 2025 to 9.37 thousand megawatts by 2030, a CAGR of 5.85% over the same period. Segment shares and estimates are calculated and reported in MW. This capacity expansion reflects the country’s shift toward AI-optimized infrastructure, where hyperscale operators are responding to sovereign cloud mandates and GPU-driven rack densities that often exceed 100 kW. Rising state investment in digital sovereignty, aggressive 5G roll-outs, and a maturing liquid-cooling supply chain jointly sustain demand, while Western China incentives unlock renewable-powered sites that ease coastal power-grid stress. Competitive intensity remains moderate: strict security certifications and export curbs on advanced GPUs limit new entrants, yet no single provider dominates. Operators able to combine sovereign-grade compliance with AI-ready capacity capture premium pricing, especially where high rack densities command 30-50% higher rates than legacy colocation.
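The headline growth rates can be sanity-checked against the 2025 and 2030 endpoints with the standard CAGR formula. The short Python sketch below is illustrative only (it is not part of the report's forecasting model):

```python
# Illustrative check of the quoted growth rates, not the report's model.
# CAGR = (end_value / start_value) ** (1 / years) - 1

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# Market value: USD 29.23 billion (2025) -> USD 56.71 billion (2030)
value_cagr = cagr(29.23, 56.71, 5)

# IT load capacity: 7.05 thousand MW (2025) -> 9.37 thousand MW (2030)
capacity_cagr = cagr(7.05, 9.37, 5)

print(f"Value CAGR:    {value_cagr:.2%}")    # ~14.17%
print(f"Capacity CAGR: {capacity_cagr:.2%}") # ~5.85%
```

Both computed rates match the figures quoted above to two decimal places.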
Key Report Takeaways
- By data center size, large facilities held 42.99% of the China data center market share in 2024, while medium facilities are forecast to register the fastest 7.22% CAGR through 2030.
- By tier standard, Tier 3 sites accounted for 60.57% share of the China data center market size in 2024, whereas Tier 4 sites are projected to grow at a 4.98% CAGR.
- By data center type, colocation services led with 69.77% market share in 2024, yet hyperscale self-built facilities are poised for a 7.01% CAGR under sovereign cloud mandates.
- By end-user industry, IT and telecom captured a 49.74% market share in 2024; BFSI is set to accelerate at an 8.22% CAGR on the back of digital yuan infrastructure investments.
- By hotspot, Beijing retained a 28.70% share in 2024, but Rest of China is expected to expand at the highest 7.30% CAGR as Western clusters gain policy support.
China Data Center Market Trends and Insights
Drivers Impact Analysis
| DRIVER | (~) % IMPACT ON CAGR FORECAST | GEOGRAPHIC RELEVANCE | IMPACT TIMELINE |
|---|---|---|---|
| Excess demand for high-density racks from AI training workloads | 1.2% | National, concentrated in Beijing and Shenzhen tech corridors | Medium term (2-4 years) |
| Surge in sovereign cloud spending by Chinese state-owned enterprises | 0.8% | National, with priority in strategic sectors | Long term (≥ 4 years) |
| Rapid roll-out of 5G and edge nodes boosting micro-DC demand | 0.6% | National, accelerated in Tier-1 and Tier-2 cities | Short term (≤ 2 years) |
| Growing availability of green power trading quotas for DC operators | 0.5% | Western China clusters, expanding to coastal regions | Long term (≥ 4 years) |
| Mainstream adoption of liquid cooling in new hyperscale builds | 0.4% | National, led by hyperscale operators | Medium term (2-4 years) |
| Government incentives for Western China DC clusters to offload coastal loads | 0.3% | Western provinces, particularly Inner Mongolia and Xinjiang | Long term (≥ 4 years) |
| Source: Mordor Intelligence | | | |
Excess Demand for High-Density Racks from AI Training Workloads
AI training clusters now draw more than 100 kW per rack, a tenfold leap over legacy deployments, forcing operators to redesign power and cooling at greenfield sites and retrofit older halls to avoid stranded capacity.[1] Direct-to-chip liquid cooling eliminates thermal bottlenecks and achieves PUE below 1.15, well inside China’s sub-1.3 mandate, yet it introduces higher upfront capex and calls for specialized maintenance skills. Operators meeting these specifications charge 30-50% premiums for GPU-ready racks, creating a differentiated revenue stream. Compliance layers under the Network Data Security Management Regulations further constrain site options, so facilities that combine AI density with certified data-sovereign walls gain decisive pricing power.

[1] Supermicro Computer, “GPU Server Shipment Milestones and AI Infrastructure Trends,” supermicro.com
Surge in Sovereign Cloud Spending by Chinese State-Owned Enterprises
The National Data Infrastructure Construction Guidelines require state-owned enterprises to migrate 80% of non-sensitive workloads to domestic clouds by 2027, locking in multi-year capacity reservations that improve visibility for builders.[2] As SOEs place long-term orders, developers enjoy lower vacancy risk and more favorable financing, although they must prove end-to-end infrastructure ownership and stringent encryption protocols that foreign hyperscalers cannot match. Capital intensity rises because purpose-built halls integrate trusted modules, enhanced physical security, and real-time audit gateways, but the guaranteed revenue tail offsets cost pressures. In parallel, SOE demand for AI model training shifts spend from generalized compute into GPU-dense nodes, intertwining the sovereign cloud push with the AI hardware cycle.

[2] State-owned Assets Supervision and Administration Commission, “SOE Digital Transformation and Cloud.”
Rapid Roll-out of 5G and Edge Nodes Boosting Micro-DC Demand
China Mobile alone had installed 3.7 million 5G base stations by 2024, each requiring low-latency processing within 10 milliseconds for autonomous vehicles, industrial robots, and AR services.[3] The dense radio network spurs thousands of micro data centers between 50 kW and 500 kW, an architecture that favors local real-estate holders and specialized integrators over traditional hyperscale firms. Although capacity per site is modest, aggregate demand scales quickly because roll-outs cover hundreds of cities. Unmanned operations, AI-assisted maintenance, and modular prefabrication reduce opex, yet introduce cybersecurity and orchestration complexities that create a services upsell for managed-edge providers.

[3] China Mobile Limited, “5G Base Station Deployment and Edge Computing Strategy,” chinamobile.com
Growing Availability of Green Power Trading Quotas for DC Operators
Renewable-rich provinces such as Inner Mongolia, Xinjiang, and Gansu now offer long-term green-power quotas that allow data center operators to execute virtual power purchase agreements without grid-curtailment risk. The arrangement supplies competitively priced wind and solar electricity, enabling facilities to achieve carbon-neutral operations in line with China’s dual-carbon goals. Early adopters differentiate via green-facility labels that attract multinational customers requiring verifiable renewable energy. However, transmission constraints mean that quota allocations often hinge on ultra-high-voltage line availability, pressing operators to assess grid-tie reliability as part of site selection.
Restraints Impact Analysis
| RESTRAINTS | (~) % IMPACT ON CAGR FORECAST | GEOGRAPHIC RELEVANCE | IMPACT TIMELINE |
|---|---|---|---|
| Inter-provincial power transfer congestion limits site selection | -0.7% | National, particularly affecting Western China clusters | Medium term (2-4 years) |
| Stricter PUE caps (<1.3) raising capex for legacy facilities | -0.4% | National, concentrated in Tier-1 cities | Short term (≤ 2 years) |
| Rising land-use taxes around Tier-1 cities | -0.3% | Beijing, Shanghai, Shenzhen metropolitan areas | Long term (≥ 4 years) |
| Export restrictions on advanced GPUs slowing AI cluster expansions | -0.2% | National, affecting hyperscale and enterprise segments | Medium term (2-4 years) |
| Source: Mordor Intelligence | | | |
Inter-Provincial Power Transfer Congestion Limits Site Selection
Ultra-high-voltage corridors already run at up to 95% utilization during peak periods, constraining the ability of wind-rich Western provinces to serve coastal compute loads. Grid operators may prioritize residential and industrial consumption over data center draw, forcing companies to accept redundancy schemes or on-site storage to mitigate curtailment risk. These safeguards elevate capex and complicate power purchase agreements, diminishing the cost advantage of inland sites unless transmission infrastructure expands in tandem with new capacity.
Stricter PUE Caps Raising Capex for Legacy Facilities
Regulators now compel existing halls to retrofit toward a PUE below 1.3, a threshold that many pre-2020 sites exceed by up to 50%. Achieving compliance requires liquid-cooling retrofits, airflow containment, and next-generation power modules, often adding USD 500-800 per kW in incremental cost. Smaller operators lacking scale face disproportionate financial strain, and non-compliance can trigger forced shutdowns, creating churn in the secondary-capacity market as tenants migrate to modern, efficient sites.
Segment Analysis
By Data Center Size: Medium Sites Bridge the Cloud-to-Edge Divide
Medium facilities, typically 1-10 MW, registered the fastest 7.22% CAGR and are expected to solidify their role as the connective tissue between hyperscale clouds and edge devices. In 2024, large installations still accounted for 42.99% of total capacity within the China data center market, reflecting the massive compute footprints that hyperscale AI training requires. The medium-size cohort benefits from proximity to end users, lower latency, and regulatory flexibility, making it ideal for regional SaaS clusters and industrial IoT gateways. Developers leverage modular construction to shorten build cycles to under 12 months, capturing demand spikes from software launches and regional 5G densification. As liquid-cooling costs fall, medium halls can accommodate GPU-dense racks without the extensive power spine needed at megascale, preserving margin while meeting performance targets.
From a cost-of-capital perspective, banks view mid-range projects as lower-risk than greenfield hyperscale builds because tenant concentration is lower and lease tenures are shorter. Operators blend wholesale contracts with retail colocation to maintain utilization above 80%, a threshold that drives attractive EBITDA yields. The balance between density and scale explains why the medium category acts as an early proving ground for innovations such as rack-level direct liquid cooling or on-site hydrogen backup. Over the next five years, medium sites are poised to capture incremental provincial incentives that favor edge compute proliferation, sustaining their outperformance relative to the broader China data center market.
Note: Segment shares of all individual segments available upon report purchase
By Tier Standard: Tier 4 Gains Momentum for Mission-Critical Loads
Tier 3 halls, with 60.57% share in 2024, remain the workhorse of the China data center market, offering 99.982% availability at an efficient cost point. Yet Tier 4 capacity grows at 4.98% CAGR as BFSI and algorithmic trading workloads demand 99.995% uptime. The upgraded tier integrates 2N+1 redundancy across power, cooling, and network layers, translating to higher leasing rates that BFSI tenants accept to safeguard millisecond-sensitive transactions. Government e-services deploying digital yuan frameworks also gravitate to Tier 4 designs to meet Data Security Law specifications.
Operators pursuing Tier 4 certifications face capex premiums of 10-15%, primarily in duplicate chillers, switchgear, and network fabrics. The payback period compresses where power densities exceed 50 kW per rack because higher rack revenue mitigates capital costs. Moreover, investors see Tier 4 assets as inflation-protected due to sticky contracts extending five years or longer. Consequently, the tier upgrade trend aligns with the strategic shift toward high-value, AI-centric workloads, reinforcing the upward-quality migration within the China data center market.
By Data Center Type: Hyperscale Self-Builds Accelerate under Sovereign Mandates
Colocation still dominates with 69.77% share, reflecting legacy enterprise outsourcing and multitenant economics. However, hyperscale self-builds post the fastest 7.01% CAGR as cloud majors internalize risk, meet sovereignty requirements, and customize infrastructure for AI clusters. Self-builds allow Alibaba, Tencent, and Baidu to deploy immersion-cooled tanks or GPU-optimized power trains without negotiating facility retrofits, thus accelerating time-to-market for new AI services. They also secure full lifecycle control over hardware supply chains, crucial when U.S. export regulations tighten GPU availability.
Wholesale colocation adapts by shifting toward fit-out-ready shells, enabling hyperscalers to lease raw space and install bespoke gear. For enterprise customers, retail colocation remains attractive for compliance-light workloads. The hybrid landscape pushes operators to maintain both wholesale and retail capability, often within the same campus. As sovereign cloud directives mature, the China data center market size attributed to self-builds is poised to rise further, even while colocation continues to serve a diversified tenant base.
By End User Industry: BFSI Demand Surges on Digital Yuan Roll-Out
IT and telecom retained 49.74% share in 2024, yet BFSI is on track for an 8.22% CAGR through 2030, outpacing all other verticals. Central bank digital currency pilots require distributed-ledger nodes hosted in compliant domestic facilities, shifting substantial loads from bank premises into Tier 4-certified halls. Simultaneously, algorithmic trading engines and risk analytics clusters migrate toward GPU-accelerated platforms that consume rack densities above 40 kW, further fueling the segment’s appetite for high-spec capacity.
Regulatory scrutiny from the China Banking and Insurance Regulatory Commission stipulates data residency, pushing foreign-hosted fintechs to repatriate applications into the China data center market. Beyond BFSI, e-commerce sites scaling livestream transactions, and manufacturing firms rolling out Industry 4.0 robotics, maintain steady growth, yet none match the velocity of financial workloads. As digital payments and wealth-tech platforms proliferate, BFSI is set to become the bellwether of premium pricing trends across the China data center industry.
By Hotspot: Western Provinces Capture the Next Wave of Growth
Beijing continues to anchor the China data center market with 28.70% share thanks to dense demand from AI research labs, internet majors, and public-sector clouds. However, power and real-estate scarcity lead operators to fringe districts such as Yanqing, where land costs undercut the urban core by 30% and connectivity is bolstered by new fiber corridors. Rest of China, encompassing Inner Mongolia, Gansu, and Xinjiang, expands at 7.30% CAGR as national incentives and renewable abundance combine to draw hyperscale investments.
Cities like Weinan leverage proximity to Xi’an’s talent pool and multimodal logistics to position themselves as secondary growth poles offering 20% lower PUE targets with wind-solar PPAs. Huai’an rekindles Jiangsu’s manufacturing might with edge-optimized mini-camps linked via low-latency fiber to Shanghai’s financial district, attracting fintech disaster-recovery nodes. This geographic diversification mitigates regional outage risk and broadens the footprint of the China data center market beyond its traditional coastal nucleus.
Geography Analysis
Beijing remains the flagship hub, yet land-use taxes and grid-capacity quotas escalate operating costs, prompting operators to migrate expansion phases to suburban parcels where zoning approvals are swifter and land prices are 40% lower. With 5G radios saturating the capital, edge micro-centers dot commercial districts, enabling sub-5 millisecond service for autonomous shuttles and urban IoT. Beijing’s adoption of liquid cooling leads the national averages, cementing its role as a proving ground for next-generation thermal technologies.
The rest of China consolidates its position as the most dynamic sub-region in the China data center market, driven by national policy that earmarks Western clusters as the primary vehicle for absorbing data-sovereignty spend while alleviating coastal power stress. Inner Mongolia boasts an average site PUE of below 1.25 by leveraging sub-zero winters for free cooling, while Xinjiang pairs 24-hour solar-plus-wind hybrid plants with redundant looped feeders to ensure a high-availability supply. Transmission congestion still curbs unrestrained build-out, so operators hedge by adding on-site battery energy storage sized at 15% of IT load, cushioning against curtailment.
Weinan and Huai’an typify the emergence of cost-optimized, connectivity-rich secondary markets. Weinan’s municipal authorities reduce business-tax surcharges for data centers by 50% for three years in exchange for local hiring commitments, lowering operators’ payback periods on 20 MW campuses. Huai’an entices edge operators with dark-fiber packages bundled into land grants, slashing network opex and enabling ultra-low latency routing to Shanghai’s Stock Exchange trading engines. Collectively, these trends illustrate how the China data center market is evolving into a multi-node ecosystem where each geography plays a specialized role in supporting the country’s AI ambitions.
Competitive Landscape
China’s data center arena displays moderate concentration as telecom-affiliated giants such as China Telecom, China Mobile, and China Unicom leverage nationwide fiber backbones and favorable spectrum allocations to retain large contract volume. GDS Holdings and VNET dominate premium wholesale colocation, courting hyperscale cloud providers with campus-style footprints that integrate green power and submarine-cable gateways. Foreign ownership caps and data-localization laws restrict the direct presence of international hyperscalers, channeling them into minority-stake structures with domestic partners that already hold coveted compliance certificates.
Strategic differentiation centers on technology adoption and compliance depth. Operators investing early in liquid-cooling know-how, direct-current power buses, and AI-driven facility management secure double-digit pricing premiums and lower churn. For example, GDS’s Shanghai complex achieves a PUE of 1.12, cutting power costs by 20% and allowing the operator to share savings with tenants while protecting margins. Meanwhile, Tencent and Alibaba’s self-build programs emphasize vertical integration, controlling everything from fiber trenching to GPU inventory, thus mitigating export-restriction risk and time-to-deploy bottlenecks.
Smaller regional players pivot toward edge-specialized offers, bundling managed services such as unattended operations and predictive maintenance. Although export restrictions on advanced GPUs impose procurement delays, operators with multivendor pipelines cushion the impact, whereas greenfield entrants lacking such relationships face nine-month lead times. Overall, competition coalesces around the ability to secure green power, comply with tightening PUE thresholds, and offer AI-ready density, factors that collectively define success in the China data center market.
China Data Center Industry Leaders
- Chindata Group Holdings Ltd
- Alibaba Cloud
- Global Data Solutions Co., Ltd. (GDS)
- Huawei Cloud Computing Technologies Co., Ltd
- Space DC Pte Ltd
- *Disclaimer: Major Players sorted in no particular order
Recent Industry Developments
- December 2022: EdgeConneX entered a strategic partnership with Chayora Ltd to provide its services in China.
- September 2022: Chindata Group Holdings Ltd announced that it had acquired green energy of 100 million kWh by participating in China’s nationwide green energy transaction. This move will help the company reduce its carbon emissions by 94,000 tons.
- June 2022: Keppel Data Centers Pte Ltd acquired two data centers in Jiangmen, Guangdong, from Guangdong BlueSea Development Co. Ltd.
China Data Center Market Report Scope
Beijing, Guangdong, Hebei, Jiangsu, and Shanghai are covered as segments by Hotspot. Large, Massive, Medium, Mega, and Small are covered as segments by Data Center Size. Tier 1 and 2, Tier 3, and Tier 4 are covered as segments by Tier Type. Non-Utilized and Utilized are covered as segments by Absorption.
| Segmentation | Sub-segments |
|---|---|
| By Data Center Size | Large; Massive; Medium; Mega; Small |
| By Tier Standard | Tier 1 and 2; Tier 3; Tier 4 |
| By Data Center Type | Hyperscale/Self-built; Enterprise/Edge; Colocation (Non-Utilized; Utilized: Retail Colocation, Wholesale Colocation) |
| By End User Industry | BFSI; IT and ITES; E-Commerce; Government; Manufacturing; Media and Entertainment; Telecom; Other End Users |
| By Hotspot | Beijing; Weinan; Huai'an city; Rest of China |
Market Definition
- IT LOAD CAPACITY - The IT load capacity, or installed capacity, refers to the amount of energy consumed by the servers and network equipment installed in racks. It is measured in megawatts (MW).
- ABSORPTION RATE - It denotes the extent to which data center capacity has been leased out. For instance, if a 100 MW DC has leased out 75 MW, the absorption rate is 75%. It is also referred to as the utilization rate or leased-out capacity.
- RAISED FLOOR SPACE - It is an elevated floor built above the original floor. The gap between the two is used to accommodate wiring, cooling, and other data center equipment, supporting proper wiring and cooling infrastructure. It is measured in square feet (ft^2).
- DATA CENTER SIZE - Data center size is segmented based on the raised floor space (RFS) allocated to the facility or the number of racks installed. Mega DC - more than 9,000 racks or RFS above 225,000 sq. ft; Massive DC - 3,001-9,000 racks or RFS of 75,001-225,000 sq. ft; Large DC - 801-3,000 racks or RFS of 20,001-75,000 sq. ft; Medium DC - 201-800 racks or RFS of 5,001-20,000 sq. ft; Small DC - 200 racks or fewer, or RFS of 5,000 sq. ft or less.
- TIER TYPE - According to the Uptime Institute, data centers are classified into four tiers based on the redundancy of their infrastructure. In this segment, data centers are classified as Tier 1, Tier 2, Tier 3, and Tier 4.
- COLOCATION TYPE - The segment is split into three categories: Retail, Wholesale, and Hyperscale colocation, based on the amount of IT load leased to customers. Retail colocation covers leased capacity below 250 kW; wholesale colocation covers 251 kW to 4 MW; and hyperscale colocation covers more than 4 MW.
- END CONSUMERS - The data center market operates on a B2B basis. BFSI, Government, Cloud Operators, Media and Entertainment, E-Commerce, Telecom, and Manufacturing are the major end consumers in the market studied. The scope includes only colocation service operators catering to the increasing digitalization of end-user industries.
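The size thresholds above amount to a simple lookup. The following Python sketch is illustrative (the function name is an assumption, and it applies only the rack-count criterion from the definition):

```python
# Illustrative classifier using the rack-count thresholds from the report's
# data center size definition (Small <= 200, Medium 201-800, Large 801-3,000,
# Massive 3,001-9,000, Mega > 9,000).

def classify_dc_size(racks: int) -> str:
    """Classify a data center by its installed rack count."""
    if racks <= 200:
        return "Small"
    elif racks <= 800:
        return "Medium"
    elif racks <= 3000:
        return "Large"
    elif racks <= 9000:
        return "Massive"
    else:
        return "Mega"

print(classify_dc_size(150))   # Small
print(classify_dc_size(2500))  # Large
print(classify_dc_size(9500))  # Mega
```

The raised-floor-space criterion could be added as a parallel check in the same way, with either criterion sufficing to place a facility in a category.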
| Keyword | Definition |
|---|---|
| Rack Unit | Generally referred to as U or RU, it is the unit of measurement for the server units housed in data center racks. 1U is equal to 1.75 inches. |
| Rack Density | It defines the amount of power consumed by the equipment and servers housed in a rack. It is measured in kilowatts (kW). This factor plays a critical role in data center design, cooling, and power planning. |
| IT Load Capacity | The IT load capacity or installed capacity, refers to the amount of energy consumed by servers and network equipment placed in a rack installed. It is measured in megawatt (MW). |
| Absorption Rate | It denotes how much of the data center capacity has been leased out. For instance, if a 100 MW DC has leased out 75 MW, then the absorption rate would be 75%. It is also referred to as utilization rate and leased-out capacity. |
| Raised Floor Space | It is an elevated space built over the floor. This gap between the original floor and the elevated floor is used to accommodate wiring, cooling, and other data center equipment. This arrangement assists in having proper wiring and cooling infrastructure. It is measured in square feet/meter. |
| Computer Room Air Conditioner (CRAC) | It is a device used to monitor and maintain the temperature, air circulation, and humidity inside the server room in the data center. |
| Aisle | It is the open space between the rows of racks. This open space is critical for maintaining the optimal temperature (20-25 °C) in the server room. There are primarily two aisles inside the server room, a hot aisle and a cold aisle. |
| Cold Aisle | It is the aisle wherein the front of the rack faces the aisle. Here, chilled air is directed into the aisle so that it can enter the front of the racks and maintain the temperature. |
| Hot Aisle | It is the aisle where the backs of the racks face the aisle. Here, the heat dissipated from the equipment in the racks is directed to the outlet vent of the CRAC. |
| Critical Load | It includes the servers and other computer equipment whose uptime is critical for data center operation. |
| Power Usage Effectiveness (PUE) | It is a metric that defines the efficiency of a data center. It is calculated as PUE = Total Data Center Energy Consumption / Total IT Equipment Energy Consumption. A data center with a PUE of 1.2-1.5 is considered highly efficient, whereas a data center with a PUE above 2 is considered highly inefficient. |
| Redundancy | It is a system design wherein additional components (UPS, generators, CRAC units) are added so that IT equipment is not affected in the event of a power outage or equipment failure. |
| Uninterruptible Power Supply (UPS) | It is a device connected in series with the utility power supply that stores energy in batteries, so that the supply to IT equipment continues even when utility power is interrupted. The UPS primarily supports the IT equipment only. |
| Generators | Like the UPS, generators are placed in the data center to ensure an uninterrupted power supply and avoid downtime. Data center facilities typically use diesel generators, and 48 hours' worth of diesel is commonly stored on site to prevent disruption. |
| N | It denotes the tools and equipment required for a data center to function at full load. "N" alone indicates that there is no backup for the equipment in the event of a failure. |
| N+1 | Referred to as "Need plus one," it denotes additional equipment kept available to avoid downtime in case of failure. A data center is considered N+1 when there is one additional unit for every four components. For instance, if a data center has 4 UPS systems, an additional UPS system is required to achieve N+1. |
| 2N | It refers to a fully redundant design wherein two independent power distribution systems are deployed. In the event of a complete failure of one distribution system, the other continues to supply power to the data center. |
| In-Row Cooling | It is the cooling design system installed between racks in a row where it draws warm air from the hot aisle and supplies cool air to the cold aisle, thereby maintaining the temperature. |
| Tier 1 | Tier classification determines the preparedness of a data center facility to sustain operations. A data center is classified as Tier 1 when it has non-redundant (N) power components (UPS, generators), cooling components, and a power distribution system (from utility power grids). A Tier 1 data center has an uptime of 99.67% and annual downtime of <28.8 hours. |
| Tier 2 | A data center is classified as Tier 2 when it has redundant power and cooling components (N+1) and a single non-redundant distribution system. Redundant components include extra generators, UPS systems, chillers, heat-rejection equipment, and fuel tanks. A Tier 2 data center has an uptime of 99.74% and annual downtime of <22 hours. |
| Tier 3 | A data center having redundant power and cooling components and multiple power distribution systems is referred to as Tier 3. The facility is resistant to planned (facility maintenance) and unplanned (power outage, cooling failure) disruptions. A Tier 3 data center has an uptime of 99.982% and annual downtime of <1.6 hours. |
| Tier 4 | It is the most fault-tolerant type of data center. A Tier 4 data center has multiple, independent, redundant power and cooling components and multiple power distribution paths. All IT equipment is dual-powered, making it fault-tolerant in case of any single disruption and ensuring uninterrupted operation. A Tier 4 data center has an uptime of 99.995% and annual downtime of <26.3 minutes. |
| Small Data Center | Data center that has floor space area of ≤ 5,000 Sq. ft or the number of racks that can be installed is ≤ 200 is classified as a small data center. |
| Medium Data Center | Data center which has floor space area between 5,001-20,000 Sq. ft, or the number of racks that can be installed is between 201-800, is classified as a medium data center. |
| Large Data Center | Data center which has floor space area between 20,001-75,000 Sq. ft, or the number of racks that can be installed is between 801-3,000, is classified as a large data center. |
| Massive Data Center | Data center which has floor space area between 75,001-225,000 Sq. ft, or the number of racks that can be installed is between 3,001-9,000, is classified as a massive data center. |
| Mega Data Center | Data center that has a floor space area of ≥ 225,001 Sq. ft or the number of racks that can be installed is ≥ 9001 is classified as a mega data center. |
| Retail Colocation | It refers to customers with a capacity requirement of 250 kW or less. These services are mainly used by small and medium enterprises (SMEs). |
| Wholesale Colocation | It refers to customers with a capacity requirement between 250 kW and 4 MW. These services are mainly used by medium to large enterprises. |
| Hyperscale Colocation | It refers to customers with a capacity requirement greater than 4 MW. Hyperscale demand primarily originates from large-scale cloud players, IT companies, BFSI, and OTT players (such as Netflix, Hulu, and HBO+). |
| Mobile Data Speed | It is the mobile internet speed a user experiences via their smartphones. This speed is primarily dependent on the carrier technology being used in the smartphone. The carrier technologies available in the market are 2G, 3G, 4G, and 5G, where 2G provides the slowest speed while 5G is the fastest. |
| Fiber Connectivity Network | It is a network of optical fiber cables deployed across the country, connecting rural and urban regions with high-speed internet connections. It is measured in kilometers (km). |
| Data Traffic per Smartphone | It is a measure of average data consumption by a smartphone user in a month. It is measured in gigabyte (GB). |
| Broadband Data Speed | It is the internet speed supplied over a fixed cable connection. Copper and optical fiber cables are commonly used in both residential and commercial settings, with optical fiber providing faster internet speeds than copper. |
| Submarine Cable | A submarine cable is a fiber optic cable laid on the seabed between two or more landing points. Through these cables, communication and internet connectivity is established between countries across the globe. These cables can transmit 100-200 terabits per second (Tbps) from one point to another. |
| Carbon Footprint | It is the measure of carbon dioxide generated during the regular operation of a data center. Since coal, oil, and gas remain primary sources of power generation, consuming this power contributes to carbon emissions. Data center operators are incorporating renewable energy sources to curb the carbon footprint of their facilities. |
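Two of the glossary metrics, PUE and absorption rate, are simple ratios. The Python sketch below works both through with hypothetical figures (the function names and numbers are illustrative, not taken from the report):

```python
# Worked example of the PUE and absorption-rate definitions in the glossary.
# All figures below are hypothetical.

def pue(total_facility_mw: float, it_load_mw: float) -> float:
    """PUE = total data center energy consumption / IT equipment energy consumption."""
    return total_facility_mw / it_load_mw

def absorption_rate(leased_mw: float, installed_mw: float) -> float:
    """Share of installed IT load capacity that has been leased out."""
    return leased_mw / installed_mw

# A hypothetical facility with 100 MW of IT load drawing 125 MW in total,
# of which 75 MW of capacity is leased out:
print(f"PUE:        {pue(125, 100):.2f}")             # 1.25
print(f"Absorption: {absorption_rate(75, 100):.0%}")  # 75%
```

The 75% absorption figure matches the worked example given in the glossary; the 1.25 PUE would fall inside the "highly efficient" band the glossary describes.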
Research Methodology
Mordor Intelligence follows a four-step methodology in all our reports.
- Step-1: Identify Key Variables: The key variables and factors influencing the market are identified and tested against available historical market numbers. Through an iterative process, the variables required for the market forecast are set, and the model is built on the basis of these variables.
- Step-2: Build a Market Model: Market-size estimations for the forecast years are in nominal terms. Inflation is not a part of the pricing, and the average selling price (ASP) is kept constant throughout the forecast period for each country.
- Step-3: Validate and Finalize: In this important step, all market numbers, variables and analyst calls are validated through an extensive network of primary research experts from the market studied. The respondents are selected across levels and functions to generate a holistic picture of the market studied.
- Step-4: Research Outputs: Syndicated Reports, Custom Consulting Assignments, Databases & Subscription Platforms