In-memory Computing Market Size and Share
In-memory Computing Market Analysis by Mordor Intelligence
The global in-memory computing market size is estimated at USD 14.40 billion in 2025 and is forecast to reach USD 31.72 billion by 2030, reflecting a 17.11% CAGR over 2025-2030. A steep rise in AI-driven workloads, plummeting persistent-memory pricing, and mounting expectations for sub-millisecond response times are pushing enterprises to redesign data architectures around memory-resident processing. A declining cost per gigabyte of storage-class memory lets larger data sets remain in memory, while CXL-enabled disaggregated clusters make capacity additions almost frictionless. Cloud hyperscalers now expose serverless in-memory services that scale instantly, letting even mid-market firms match the speed once reserved for the largest banks. Edge deployments are accelerating as sovereign AI regulations steer latency-sensitive inference inside national boundaries. Together, these factors elevate data velocity to a strategic differentiator across every major industry vertical.
[1] Christine Donato, "Mercedes-AMG Intensifies Speed with Real-Time Analytics," SAP Community, community.sap.com
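As a quick consistency check (our arithmetic from the two endpoint figures above, not an additional report data point), the stated growth rate follows directly from the 2025 and 2030 values:

$$\text{CAGR} = \left(\frac{31.72}{14.40}\right)^{1/5} - 1 \approx 0.1711 = 17.11\%$$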
Key Report Takeaways
- By component, in-memory data management platforms held 62% of the in-memory computing market share in 2024, while in-memory application platforms are forecast to achieve a 22.4% CAGR through 2030.
- By deployment mode, cloud/SaaS led with 71.5% revenue share in 2024 and is advancing at a 27.6% CAGR to 2030.
- By application, real-time analytics captured 48.3% of revenue in 2024; IoT/edge stream processing is projected to expand at a 31% CAGR through 2030.
- By end-user vertical, BFSI commanded 29.4% of spending in 2024, whereas healthcare and life sciences are growing fastest at a 23.8% CAGR.
- By memory technology, DRAM accounted for 66.7% of revenue in 2024, while storage-class memory is poised for a 29.5% CAGR over the forecast period.
- By geography, North America contributed 37.8% of 2024 revenue; Asia-Pacific is the fastest-growing region with a 20.9% CAGR through 2030.
Global In-memory Computing Market Trends and Insights
Drivers Impact Analysis
| Driver | Approx. Impact on CAGR Forecast (percentage points) | Geographic Relevance | Impact Timeline |
|---|---|---|---|
| Explosion of Big Data | +4.20% | Global | Medium term (2-4 years) |
| Growing Need for Rapid Data Processing | +3.80% | North America and EU | Short term (≤ 2 years) |
| Proliferation of AI-centric Workloads (LLMs, vector search) | +5.10% | Global, concentrated in US and China | Short term (≤ 2 years) |
| Declining Cost/GB of Persistent Memory | +2.30% | Global | Long term (≥ 4 years) |
| Rising Adoption of Real-time Fraud Detection in BFSI | +1.20% | North America and EU | Medium term (2-4 years) |
| Edge-side In-Memory Analytics for 5G Telco Clouds | +0.50% | APAC core, spill-over to global | Long term (≥ 4 years) |
Source: Mordor Intelligence
Explosion of Big Data
Organizations now generate multi-quintillion-byte data streams that must be queried in real time, forcing a shift from batch processing to streaming architectures anchored in memory-centric platforms. Healthcare providers run continuous patient-monitoring pipelines that flag clinical anomalies within seconds, while high-frequency traders move billions of dollars on microsecond calculations.
[2] Jieyi Li, "High-Performance Computing in Healthcare: An Automatic Literature Analysis Perspective," Journal of Big Data, journalofbigdata.springeropen.com
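To make the pattern concrete, here is a minimal, illustrative sketch (not drawn from the report or any cited vendor) of the kind of memory-resident sliding-window check a streaming monitoring pipeline might run. All names and thresholds are hypothetical:

```python
# Hypothetical sketch: a memory-resident sliding-window z-score check,
# the style of logic a streaming patient-monitoring pipeline might use
# to flag anomalies within seconds of ingest. Thresholds are illustrative.
from collections import deque
from statistics import mean, stdev

class InMemoryAnomalyDetector:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.window = deque(maxlen=window)  # recent readings stay in RAM
        self.threshold = threshold          # z-score cut-off for an alert

    def observe(self, value: float) -> bool:
        """Return True if the new reading is anomalous vs. the window."""
        anomalous = False
        if len(self.window) >= 30:  # wait for enough history for stable stats
            mu, sigma = mean(self.window), stdev(self.window)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.window.append(value)
        return anomalous

detector = InMemoryAnomalyDetector()
for reading in [72, 71, 73, 70, 72] * 10 + [140]:  # heart-rate-like stream
    if detector.observe(reading):
        print(f"anomaly flagged: {reading}")
```

Because the window lives entirely in memory, each reading is scored in microseconds; a disk-backed design would spend orders of magnitude longer on I/O alone.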
Growing Need for Rapid Data Processing
Customer interactions, factory automation, and connected vehicles demand latencies measured in microseconds. Mercedes-AMG cut engine test-cycle times by 94% after adopting a real-time in-memory analytics layer, effectively gaining an extra production day each week.
Proliferation of AI-centric Workloads
Large language models, vector search, and embedding stores saturate traditional memory bandwidth. Processing-in-memory architectures show up to 6.94× lower TCO per unit of query throughput (queries per second) than GPU-only baselines, making specialized in-memory fabrics integral to future inference clusters.
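The memory-bandwidth pressure is visible in the core retrieval operation itself. The sketch below (illustrative only, not any specific vendor's API; corpus size and dimensionality are invented) performs brute-force cosine-similarity search over an in-memory embedding store; every query streams the whole matrix through memory, which is exactly the traffic processing-in-memory designs aim to keep close to the data:

```python
# Illustrative brute-force vector search over an in-memory embedding store.
# Real deployments add ANN indexes, but the hot loop remains a
# memory-bandwidth-bound matrix scan.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((100_000, 384)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit-norm rows

def top_k(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k nearest vectors by cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = embeddings @ q                    # one pass over ~150 MB in RAM
    idx = np.argpartition(scores, -k)[-k:]     # best k, unordered
    return idx[np.argsort(scores[idx])[::-1]]  # order those k descending

hits = top_k(rng.standard_normal(384).astype(np.float32))
```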
Declining Cost/GB of Persistent Memory
Next-generation ferroelectric HfO2 DRAM+ promises near-DRAM speed, non-volatility, and node scalability below 10 nm, narrowing the cost gap with NAND and spurring broader enterprise trials.
[3] Skye Jacobs, "Next-Gen DRAM+ Could Transform AI and Edge Computing," TechSpot, techspot.com
Restraints Impact Analysis
| Restraint | Approx. Impact on CAGR Forecast (percentage points) | Geographic Relevance | Impact Timeline |
|---|---|---|---|
| High Cost of DRAM at Hyperscale | -2.80% | Global | Short term (≤ 2 years) |
| Data Gravity and Inter-cluster Latency | -1.50% | Global | Medium term (2-4 years) |
| Vendor Lock-in Concerns for Proprietary IMC Appliances | -1.20% | North America and EU | Long term (≥ 4 years) |
| Shortage of Skilled IMC Architects & Developers | -0.80% | Global | Long term (≥ 4 years) |
Source: Mordor Intelligence
High Cost of DRAM at Hyperscale
DRAM price spikes of 50% in early 2025 raised the total cost of ownership for large clusters, delaying refresh cycles for memory-intensive workloads.
Shortage of Skilled IMC Architects and Developers
Limited pools of distributed-systems specialists lengthen project timelines and push enterprises toward managed cloud services that mask complexity.
Segment Analysis
By Component: Platforms Drive Enterprise Adoption
In-memory data management platforms held 62% of revenue in 2024, underscoring demand for ACID-compliant, drop-in replacements for entrenched databases. Many banks migrated core analytics workloads without rewriting applications, capturing latency cuts of 20-40 ms per query. In contrast, in-memory application platforms are forecast to grow at a 22.4% CAGR as digital-native firms design real-time microservices from the ground up. The component landscape is also converging: vendors weave SQL, streaming, and vector search into unified fabrics that house operational and analytical workloads side by side, shrinking data-movement overheads and easing DevOps.
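The "drop-in" appeal is easiest to demonstrate with SQLite's built-in in-memory mode: a fully ACID SQL engine with no disk I/O on the query path. This is a teaching sketch, not one of the enterprise platforms the report covers, which add distribution, replication, and durability on top of the same idea:

```python
# Minimal illustration of in-memory data management: standard SQL semantics
# (tables, transactions, aggregates) served entirely from RAM.
import sqlite3

conn = sqlite3.connect(":memory:")  # ACID engine, no disk on the query path
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, px REAL)")
conn.executemany(
    "INSERT INTO trades (symbol, px) VALUES (?, ?)",
    [("AAPL", 189.5), ("MSFT", 402.1), ("AAPL", 190.2)],
)
conn.commit()

# Analytical query answered without touching storage.
for symbol, avg_px in conn.execute("SELECT symbol, AVG(px) FROM trades GROUP BY symbol"):
    print(symbol, round(avg_px, 2))
```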
By Deployment Mode: Cloud Dominance Accelerates
Cloud models accounted for 71.5% of revenue in 2024 and will outpace the overall in-memory computing market through 2030. Hyperscalers bundle high-memory instances, CXL-attached pools, and serverless scaling under pay-as-you-go terms, lowering the barrier for midsize adopters. AWS's Valkey-based ElastiCache tier costs 33% less than equivalent Redis clusters while boosting throughput more than 2×, a price-performance gain attractive to cost-sensitive SaaS providers.
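For a sense of how such managed in-memory tiers are consumed, the sketch below uses the open-source redis-py client, which also speaks the Valkey wire protocol, in a standard cache-aside pattern. The endpoint name and the `load_from_database` helper are placeholders, not real AWS resources or APIs:

```python
# Hedged sketch of a cache-aside read against a Redis/Valkey-compatible
# in-memory tier. Endpoint and slow-path loader are hypothetical.
import json
import redis

r = redis.Redis(host="my-valkey-endpoint.example.com", port=6379,
                decode_responses=True)

def get_profile(user_id: str) -> dict:
    """Serve from memory when possible; fall back to the system of record."""
    cached = r.get(f"profile:{user_id}")
    if cached is not None:
        return json.loads(cached)              # sub-millisecond hit path
    profile = load_from_database(user_id)      # hypothetical slow-path loader
    r.set(f"profile:{user_id}", json.dumps(profile), ex=300)  # 5-min TTL
    return profile
```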
By Application: Real-time Analytics Leads Growth
Real-time analytics commanded 48.3% of 2024 spend, as enterprises monetize instant insights from transactional and sensor data. PayPal leverages an in-memory fraud engine to inspect transactions in flight, curbing loss events before authorization completes. IoT and edge stream processing will grow fastest at 31% CAGR, propelled by 5G rollouts and federated learning scenarios that pre-process data near the source to cut backhaul traffic.
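A simplified velocity rule shows why such checks must live in memory: state is consulted and updated on every authorization, inside the payment's latency budget. This sketch is illustrative only and is not PayPal's actual engine; the limits and names are invented:

```python
# Illustrative in-flight fraud check: count each card's transactions in a
# rolling 60-second window held in RAM and decline bursts before the
# authorization completes. Thresholds are hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_TXNS_PER_WINDOW = 5
recent: dict[str, deque] = defaultdict(deque)  # card_id -> timestamps

def authorize(card_id: str, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    window = recent[card_id]
    while window and now - window[0] > WINDOW_SECONDS:  # expire old entries
        window.popleft()
    if len(window) >= MAX_TXNS_PER_WINDOW:
        return False          # velocity breach: decline in flight
    window.append(now)
    return True
```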
Note: Segment shares of all individual segments available upon report purchase
By End-user Vertical: BFSI Leads, Healthcare Accelerates
BFSI retained a 29.4% share in 2024, driven by high-frequency trading, real-time risk scoring, and compliance queries. Healthcare will grow at a 23.8% CAGR on the back of new data-sharing mandates such as the European Health Data Space, which entrusts life-critical analytics to low-latency platforms. Manufacturers are also broadening usage, embedding memory-resident digital twins on production floors to slash downtime.
By Memory Technology: DRAM Dominance Faces Disruption
DRAM accounted for 66.7% of spending in 2024, anchoring latency-critical workloads. However, storage-class memory is on a 29.5% CAGR trajectory as enterprises adopt byte-addressable persistence that eliminates cache-warm-up windows after failover events. China’s push for domestic HBM3 supply by 2026 signals rising regional self-sufficiency and additional competitive pressure on global suppliers.
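The operational appeal of byte-addressable persistence can be approximated today with a memory-mapped file: state is read and written with plain loads and stores and survives a restart, so there is no cache-rebuild window. This is a rough analogy on ordinary storage; true storage-class memory exposes the same load/store model in hardware at far lower latency:

```python
# Sketch of byte-addressable persistence via mmap: plain byte-level writes,
# no serializer, and the state survives a process restart.
import mmap

with open("state.bin", "wb") as f:       # pre-size a small persistent region
    f.write(b"\x00" * 4096)

with open("state.bin", "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)
    buf[0:5] = b"hello"                  # direct store into the mapped region
    buf.flush()                          # ensure bytes reach the backing store
    buf.close()

with open("state.bin", "rb") as f:
    print(f.read(5))                     # b'hello' is back without warm-up
```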
Note: Segment shares of all individual segments available upon report purchase
Geography Analysis
North America generated 37.8% of 2024 revenue, underpinned by deep capital markets, a robust talent ecosystem, and hyperscaler appetite for AI acceleration. Real-time payment rails, autonomous-vehicle pilots, and precision-medicine platforms keep memory footprints climbing across the region.
Asia-Pacific is scaling fastest at a 20.9% CAGR. China’s state-backed semiconductor programs and India’s Digital India cloud corridors are spawning megawatt-class data centers, many pre-wired for CXL fabric expansion. Regional 5G densification plus data-locality mandates pull inference tasks to country-level edges, favouring in-memory fabrics tuned for micro-services.
Europe is wrestling with capacity constraints yet funnelling record capital into new builds. Vantage Data Centers' EUR 720 million securitization, the first of its kind on the continent, signals growing investor confidence that AI workloads will soak up new racks quickly. The EU AI Act and sustainability rules are nudging enterprises toward energy-efficient in-memory architectures that balance throughput with power caps.
[4] Ganesh Rao, "Vantage Raises USD 820 Million in a Cloud and AI Data-Center Deal in Europe," CNBC, cnbc.com
Competitive Landscape
The in-memory computing market shows moderate concentration. SAP, Oracle, and Microsoft expand bundled offerings that let customers unlock memory-resident performance within familiar ERP and database environments, reinforcing renewal stickiness. Redis and Aerospike pursue low-latency use cases such as fraud prevention and ad-tech bidding, carving high-growth adjacencies. GridGain marries compute and storage into a single in-memory layer to support AI pipelines that mix streaming events, SQL queries, and vector similarity search.
Vector-database start-ups attracted more than USD 350 million in 2024 funding rounds, highlighting investor conviction in memory-optimized retrieval for generative AI. IBM's acquisition of DataStax further tightens links between in-memory key-value stores and model-training frameworks, reflecting a strategy to own the full AI lifecycle from data ingest to inference. Hardware-adjacent players are also entering: Samsung and Micron outline CXL-aware DIMMs that promise multi-socket sharing without NUMA penalties, aimed squarely at cloud builders that need elastic memory footprints.
Price volatility on DRAM and HBM remains a wild card. Vendors with multi-sourcing contracts hedge exposure, whereas smaller ISVs ride cloud-provider roadmaps to buffer raw silicon risk. Talent scarcity in distributed memory systems gives an edge to service providers that wrap turnkey design, deployment, and managed operations in monthly subscriptions.
In-memory Computing Industry Leaders
- SAP SE
- Oracle Corporation
- Microsoft Corporation
- International Business Machines Corporation
- Amazon Web Services, Inc.
- *Disclaimer: Major Players sorted in no particular order
Recent Industry Developments
- June 2025: Vantage Data Centers secured EUR 720 million via the first European asset-backed securitization of data-center assets.
- June 2025: Oracle reported USD 57.4 billion FY 2025 revenue, with multicloud database growth of 115% quarter over quarter.
- May 2025: Amazon ElastiCache and MemoryDB added Valkey 7.2 support, cutting costs by up to 33% and boosting throughput by 230%.
- January 2025: Fluidstack signed an MoU with the French government to build a 1 GW decarbonized AI supercomputer, backed by EUR 10 billion in funding.
Global In-memory Computing Market Report Scope
In-memory computing stores information in the main random-access memory (RAM) of dedicated servers rather than in relational databases operating on comparatively slow disk drives. Two component types, In-memory Data Management and In-memory Applications, are considered under the scope of the report; In-memory Applications span in-memory analytics and in-memory application servers.
| Segmentation | Segment | Sub-segment |
|---|---|---|
| By Component | In-memory Data Management Platforms | |
| | In-memory Application Platforms | |
| By Deployment Mode | On-premises | |
| | Cloud / SaaS | |
| By Application | Real-time Analytics & BI | |
| | High-frequency Trading | |
| | Fraud & Risk Management | |
| | IoT/Edge Stream Processing | |
| By End-user Vertical | BFSI | |
| | Healthcare & Life Sciences | |
| | IT & Telecom | |
| | Government & Public Sector | |
| | Manufacturing & Automotive | |
| By Memory Technology | DRAM-based IMC | |
| | NAND-based IMC (Redis on-flash, etc.) | |
| | Persistent / Storage-class Memory (SCM) | |
| By Organization Size | Large Enterprises | |
| | Small & Medium Enterprises (SME) | |
| By Geography | North America | United States |
| | | Canada |
| | | Mexico |
| | Europe | United Kingdom |
| | | Germany |
| | | France |
| | | Italy |
| | | Rest of Europe |
| | Asia-Pacific | China |
| | | Japan |
| | | India |
| | | South Korea |
| | | Rest of Asia |
| | Middle East | Israel |
| | | Saudi Arabia |
| | | United Arab Emirates |
| | | Turkey |
| | | Rest of Middle East |
| | Africa | South Africa |
| | | Egypt |
| | | Rest of Africa |
| | South America | Brazil |
| | | Argentina |
| | | Rest of South America |
Key Questions Answered in the Report
What is the current value of the in-memory computing market?
The market stands at USD 14.40 billion in 2025.
How fast is the in-memory computing market growing?
It is projected to post a 17.11% CAGR, more than doubling to USD 31.72 billion by 2030.
Which deployment model is growing quickest?
Cloud/SaaS deployments, already 71.5% of revenue, are expanding at 27.6% CAGR.
Why are AI workloads important to in-memory computing adoption?
Large language models and vector search saturate traditional memory bandwidth, making specialized in-memory fabrics essential for low-latency inference.
Which region is the fastest growing?
Asia-Pacific is forecast to expand at a 20.9% CAGR due to aggressive datacentre builds and 5G proliferation.
What is the biggest restraint to wider adoption?
Volatile DRAM pricing can raise total cost of ownership and delay large-scale refreshes, especially for hyperscale operators.