AI Servers Are Draining the Power Chip Supply: PMIC and BMC Shortages Now Threaten “Regular” Servers
If you thought GPUs were the only hot commodity in AI, think again. The quiet heroes that feed and control those accelerators—power management integrated circuits (PMICs) and server management chips—have become the latest choke point. Lead times are stretching to nearly a year. General-purpose servers are getting bumped in the queue. Even hard drives and CPUs are feeling the pinch.
Here’s the twist: none of this is happening because core compute is scarce. It’s happening because the world suddenly needs far more of the unglamorous, high-current-density power and control silicon that keeps AI servers alive.
In short: AI has made power delivery the new battleground—and it’s reshaping the entire server supply chain.
According to a report in The Register, PMICs and server management silicon (think BMCs) are now in widespread shortage as manufacturers prioritize higher-margin AI servers over conventional systems. TrendForce pegs lead times at 35–40 weeks and forecasts 28% AI server shipment growth in 2026, driven by hyperscalers like AWS, Microsoft, and Google—fuel on an already blazing fire. To make matters worse, Samsung’s decision to shutter an 8-inch wafer fab in Korea removes capacity from exactly the kind of line on which PMICs are most commonly made. Designers like uPI Semi expect shortages to persist through 2026 as AI servers proliferate. Source: The Register.
Let’s unpack what’s going on, what it means for your roadmap, and how to keep your data center plans from stalling out.
The short version: What’s happening now
- AI servers eat power—and then some. Modern AI racks pack dozens of accelerators, pushing per-rack loads from 30 kW into the 60–120 kW range.
- That power has to be precisely converted, monitored, and managed—enter PMICs and server management chips (BMCs).
- Foundry lines that make these analog/mixed-signal chips (often 8-inch, mature nodes) can’t expand overnight. Some capacity is even coming offline.
- Suppliers are allocating scarce parts to the most profitable builds: AI servers. “Vanilla” compute and storage boxes are waiting their turn.
- Result: 35–40 week lead times on PMICs and knock-on delays for general-purpose servers, HDD-based storage, and even CPU-filled platforms that can’t ship without power and management silicon.
Why AI servers guzzle power (and PMICs)
GPU TDPs have gone vertical
AI accelerators have ramped from a few hundred watts to north of 700 watts per device—and some configurations push toward the kilowatt class. Multiply by 8–16 GPUs per server and you’re in multi-kilowatt chassis territory.
- NVIDIA’s latest AI systems illustrate the trend, with entire racks like the GB200 NVL72 rated at data center-scale power densities that were unthinkable a few years ago. See NVIDIA’s data center overview for context: NVIDIA Data Center.
- AMD’s MI300X class accelerators are likewise driving high per-node power demand. See AMD’s accelerator family: AMD Instinct.
High-current, low-voltage rails that feed GPUs require sophisticated, multiphase VRM designs, digital control loops, telemetry, and fine-grained power sequencing. That’s all PMIC territory—and you need more of them as currents soar.
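To put rough numbers on why currents soar, here is a back-of-the-envelope sketch. The TDP, overhead, rail voltage, and efficiency figures are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope chassis power and rail-current math.
# All figures are illustrative assumptions, not vendor specs.

def server_power_watts(gpu_tdp_w: float, gpu_count: int, overhead_w: float) -> float:
    """Accelerator load plus CPU/memory/fan/NIC overhead for one chassis."""
    return gpu_tdp_w * gpu_count + overhead_w

def rail_current_amps(power_w: float, rail_voltage_v: float, efficiency: float) -> float:
    """Current a point-of-load VRM must deliver at the GPU core rail."""
    return power_w / (rail_voltage_v * efficiency)

chassis_w = server_power_watts(gpu_tdp_w=700, gpu_count=8, overhead_w=2000)
print(f"Chassis load: {chassis_w / 1000:.1f} kW")   # 7.6 kW per server

# One 700 W GPU at an assumed 0.8 V core rail, 90% conversion efficiency:
amps = rail_current_amps(700, rail_voltage_v=0.8, efficiency=0.90)
print(f"Core-rail current: {amps:.0f} A per GPU")   # ~972 A
```

Nearly a kiloamp per device is why a single GPU rail needs a multiphase VRM with many power stages, each one a piece of analog silicon competing for the same fab capacity.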
From 12V to 48V and back again
As per-rack power rises, distributing power at 48V (instead of 12V) reduces copper losses. Many AI platforms convert from 48V down to GPU voltages using staged or direct conversion topologies. This creates strong demand for:
- High-efficiency front-end converters (AC-to-48V or 12V-to-48V).
- 48V-to-point-of-load PMICs and power stages with extreme current density.
- Digital controllers (PMBus) for telemetry and coordinated power orchestration across accelerators.
The Open Compute Project has championed 48V rack designs for years; the AI wave is pushing broader adoption. Learn more at the Open Compute Project.
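The physics behind the 48V shift is plain I²R loss: at four times the voltage, the same power flows at a quarter of the current, cutting resistive loss sixteenfold. A quick comparison, using an illustrative 20 kW load and busbar resistance:

```python
# Why 48V distribution wins: I^2·R copper-loss comparison.
# The 20 kW load and 2 milliohm bus resistance are illustrative assumptions.

def copper_loss_watts(power_w: float, bus_voltage_v: float, resistance_ohm: float) -> float:
    current = power_w / bus_voltage_v      # I = P / V
    return current ** 2 * resistance_ohm   # P_loss = I^2 * R

load_w, r_bus = 20_000, 0.002              # 20 kW of rack load, 2 milliohm busbar

loss_12v = copper_loss_watts(load_w, 12, r_bus)
loss_48v = copper_loss_watts(load_w, 48, r_bus)
print(f"12V bus loss: {loss_12v:.0f} W")   # ~5556 W -- clearly untenable
print(f"48V bus loss: {loss_48v:.0f} W")   # ~347 W, 16x lower
```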
BMCs are the brains behind the boards
Server management chips—best known as Baseboard Management Controllers (BMCs)—run out-of-band control, health monitoring, fan curves, power capping, remote KVM, and firmware management. They’re essential in AI servers, where thermal and power envelopes must be tightly controlled across dozens of hot components. Market leaders like ASPEED dominate this category, with alternatives from Nuvoton and Renesas. See ASPEED’s portfolio: ASPEED Technology.
With AI systems selling at premium prices, every BMC and PMIC inevitably gets allocated where margins are highest.
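To make "power capping" concrete, here is a minimal sketch of the kind of budgeting logic a BMC coordinates: splitting a chassis power budget across accelerators in proportion to their demand. This is a hypothetical illustration, not any vendor's firmware:

```python
# Hypothetical sketch of BMC-style power capping: cap each accelerator at
# its demand if the chassis budget allows; otherwise scale all demands
# down proportionally so the total fits the budget.

def allocate_power_caps(budget_w: float, demands_w: list[float]) -> list[float]:
    total = sum(demands_w)
    if total <= budget_w:
        return demands_w[:]                # everyone gets what it asked for
    scale = budget_w / total
    return [d * scale for d in demands_w]  # proportional throttle

demands = [700, 700, 650, 400]             # per-GPU demand in watts
caps = allocate_power_caps(budget_w=2000, demands_w=demands)
print([round(c, 1) for c in caps])         # scaled so the sum is 2000 W
```

Real firmware layers in thermal feedback, priorities, and hysteresis, but the core job is the same: keep dozens of hot components inside a shared envelope.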
The 8-inch bottleneck: Why analog capacity is hard to grow
Many PMICs, power stages, and mixed-signal controllers are built on mature “BCD” or analog-friendly processes that historically live on 8-inch (200mm) wafers. While 12-inch (300mm) capacity is growing for analog, migration takes time—design porting, mask sets, qual cycles, and customer revalidation aren’t overnight affairs.
Two hard truths complicate relief:
- Demand curves shifted faster than fabs can. Analog/PMIC developers can’t simply “move to 5nm.” Power devices value voltage resilience, isolation, and analog performance more than dense logic.
- Some 8-inch capacity is exiting. Per The Register, Samsung is shuttering an 8-inch fab in Korea, removing precious capacity from a market that actually needs more mature-node output right now.
Yes, several analog giants are pouring capital into 300mm analog (more on that below), but newly built 300mm fabs take years to fully ramp, and tools for analog front-end and back-end are not as fungible as logic fabs.
Who’s getting squeezed: Not just general-purpose servers
When PMICs and BMCs are short, everyone downstream feels it:
- Enterprise and cloud general-purpose servers: OEMs shift scarce components to AI SKUs first. That means conventional CPU servers—your VM hosts, databases, and web tiers—wait longer.
- Storage systems: HDD- and SATA/NVMe-heavy arrays can be delayed if server motherboards, backplanes, or controllers can’t get their power parts. Even if disks and controllers are in stock, the system can’t ship without PMICs and BMCs.
- CPUs: Not truly “short,” but CPU-based server shipments slip if chassis and motherboards lack essential power and management devices.
The ripple effect shows how tightly coupled the server ecosystem is: a shortage in a $3–$10 PMIC can hold up a $7,000 server—or a $250,000 AI node.
Lead times and allocation games: 35–40 weeks becomes the new normal
TrendForce reports PMIC lead times now stretching to 35–40 weeks as hyperscalers plan for 28% AI server shipment growth in 2026. That’s a procurement planning nightmare for anyone trying to execute a refresh or expansion in the next two to three quarters. See TrendForce’s research hub: TrendForce.
And it’s not just time—it’s priority. Suppliers allocate based on:
- Margin per component (AI wins).
- Strategic accounts (hyperscalers get first call).
- Forecast accuracy (those with stronger, earlier POs secure supply).
- Design lock-in (if your board needs a specific controller without pin-compatible alternates, you’re at the back of the line).
uPI Semiconductor, among others, is signaling tightness throughout 2026 as AI platforms proliferate. See uPI’s corporate info: uPI Semiconductor.
What it means for buyers: Rethink procurement, platforming, and design
The old playbook—single-source the “best” PMIC, place JIT orders, assume 8–12 week lead times—no longer works. Here’s a pragmatic mix of near-term tactics and medium-term design moves.
Near-term survival tactics (0–6 months)
- Pull in POs and build buffers: If your planning horizon was 12 weeks, stretch it to 40–50 where feasible. Lock in quarterly allocations with suppliers now.
- Dual-source aggressively: Identify pin-compatible alternates for PMICs, controllers, power stages, and BMCs. If none exist, evaluate minor PCB ECOs to support a second footprint.
- Bundle demand across SKUs: Aggregate your buys to a handful of common PMICs across platforms to improve allocation priority.
- Ask for vendor-managed inventory (VMI): Some analog vendors will stage inventory against your rolling forecast—especially if you’re willing to sign volume commitments.
- Loosen “golden” specs prudently: Evaluate whether a 92% vs 93% efficiency PMIC is acceptable if it ships six months sooner. Revisit derating assumptions with thermal validation data.
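The buffer math in the first tactic can be made concrete. A simple coverage check under assumed demand, lead time, and pipeline figures (all illustrative):

```python
# Simple procurement coverage check: does on-hand stock plus open POs
# cover demand over the quoted lead time? All numbers are illustrative.

def weeks_of_cover(on_hand: int, on_order: int, weekly_demand: int) -> float:
    return (on_hand + on_order) / weekly_demand

def reorder_qty(weekly_demand: int, lead_time_weeks: int,
                safety_weeks: int, pipeline: int) -> int:
    """Order enough to cover lead time plus a safety buffer,
    net of what is already in the pipeline."""
    target = weekly_demand * (lead_time_weeks + safety_weeks)
    return max(0, target - pipeline)

demand = 5_000  # PMICs consumed per week across SKUs
cover = weeks_of_cover(on_hand=20_000, on_order=40_000, weekly_demand=demand)
print(f"Coverage: {cover:.0f} weeks vs a 40-week lead time")  # 12 weeks: exposed

qty = reorder_qty(demand, lead_time_weeks=40, safety_weeks=8, pipeline=60_000)
print(f"Place POs for {qty:,} units now")                     # 180,000 units
```

The point of running numbers like these: at 40-week lead times, a 12-week planning horizon leaves a 28-week gap that no amount of expediting will close.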
Medium-term resilience (6–18 months)
- Standardize on 48V and modular VR designs: Commonize power stages and digital controllers across platforms to reduce unique BOMs.
- Migrate to 300mm-friendly PMIC families where possible: Many suppliers are offering next-gen PMICs fabbed on 300mm nodes; prioritize those in new designs to tap more scalable capacity.
- Expand BMC options: Validate at least two BMC vendors across your board families, including firmware readiness and security features (Secure Boot, attestation).
- Design for telemetry and flexibility: Use PMBus-capable devices that can be tuned in software for different loads, enabling SKU reuse.
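The PMBus telemetry these devices expose is worth understanding at the byte level. Commands such as READ_IOUT and READ_TEMPERATURE_1 return values in the LINEAR11 format: a 16-bit word with a 5-bit two's complement exponent in bits 15:11 and an 11-bit two's complement mantissa in bits 10:0, where value = mantissa × 2^exponent. (Output voltage typically uses the separate LINEAR16 format governed by VOUT_MODE.) A decoder:

```python
# Decode the PMBus LINEAR11 format used by telemetry commands such as
# READ_IOUT and READ_TEMPERATURE_1. A 16-bit word holds a 5-bit two's
# complement exponent (bits 15:11) and an 11-bit two's complement
# mantissa (bits 10:0); the value is mantissa * 2**exponent.

def decode_linear11(word: int) -> float:
    exponent = (word >> 11) & 0x1F
    if exponent > 0x0F:              # sign-extend 5-bit two's complement
        exponent -= 0x20
    mantissa = word & 0x7FF
    if mantissa > 0x3FF:             # sign-extend 11-bit two's complement
        mantissa -= 0x800
    return mantissa * 2.0 ** exponent

# Example: exponent -2 (0b11110), mantissa 40 -> 40 * 0.25 = 10.0 amps
print(decode_linear11(0xF028))       # 10.0
```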
Strategic moves (18–36 months)
- Co-plan with suppliers: Treat PMICs and BMCs like CPUs and GPUs—share roadmaps, align on qual schedules, and negotiate guaranteed capacity.
- Consider ODM partnerships: Tier-1 ODMs sometimes secure better allocations. Co-develop reference designs that use their preferred PMIC stacks.
- Invest in board redesigns that reduce unique parts: A small NRE today can prevent a major supply interruption later.
Suppliers that can help in this transition include Texas Instruments, Monolithic Power Systems, Renesas, Analog Devices, Infineon, onsemi, and Richtek—each with deep data center power portfolios. See examples:
- TI’s 300mm analog expansion news: Texas Instruments 300mm manufacturing
- MPS data center solutions: Monolithic Power Systems
- Infineon GaN for power conversion: Infineon GaN
Alternative power paths: GaN, SiC, and smarter topologies
With demand outpacing capacity, innovation isn’t just nice—it’s necessary.
- Gallium nitride (GaN) and silicon carbide (SiC): These wide-bandgap semiconductors improve efficiency at higher voltages and frequencies. They’re gaining traction in front-end converters and 48V rails, reducing thermal load and BOM counts per watt delivered.
- Direct-to-load conversion: Some architectures cut intermediate stages, using high-ratio converters to step 48V down closer to GPU rails. Fewer stages can mean fewer total components and improved efficiency.
- Digital control everywhere: PMBus and proprietary digital loops allow dynamic current sharing, telemetry, and adaptive responses—valuable in AI thermals where workloads vary sharply.
- Liquid cooling synergy: Better thermal headroom can let engineers trade a touch of electrical efficiency for fewer unique PMICs—if cooling handles the delta. That’s not a free lunch, but it widens the viable parts set.
Expect more vendors to pitch “platformized” power stacks—pre-qualified combinations of controllers, power stages, and magnetics tuned for AI current densities.
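The "fewer stages" argument is simple multiplication: end-to-end efficiency is the product of per-stage efficiencies, so removing an intermediate bus can win even if the single remaining stage is individually less efficient. The stage figures below are illustrative assumptions:

```python
# End-to-end conversion efficiency is the product of stage efficiencies.
# Per-stage figures below are illustrative, not measured values.
from math import prod

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    return prod(stage_efficiencies)

# Conventional path: 48V -> 12V intermediate bus -> point of load
staged = chain_efficiency([0.97, 0.92])
# Direct conversion: one high-ratio 48V -> point-of-load stage
direct = chain_efficiency([0.93])

print(f"Staged: {staged:.1%}")   # 89.2%
print(f"Direct: {direct:.1%}")   # 93.0%
```

At rack scale, a few points of efficiency is kilowatts of heat the cooling system no longer has to remove, and a shorter bill of unique power parts.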
Pricing and margin dynamics: Why AI gets first dibs
AI servers command eye-watering ASPs and margins. When a $10 PMIC helps ship a system with tens of thousands in gross margin, the allocation math is simple.
- Component pricing upward pressure: Expect continued price firmness for PMICs, power stages, and BMCs through 2026.
- System price creep for non-AI gear: If your standard 1U/2U servers slip in allocation priority or require redesigns, BOM costs and list prices will rise.
- Services and financing fill the gaps: OEMs may cushion delays with consumption models and extended support on older platforms that stay in service longer.
In other words, non-AI infrastructure will subsidize AI’s growth for a while.
When does relief arrive?
Relief will come, but not overnight—and not evenly across components.
New 300mm analog capacity
Multiple vendors are bringing 300mm analog fabs online or expanding them, which improves die-per-wafer economics and capacity per tool set.
- Texas Instruments has been vocal about 300mm analog manufacturing ramps in the U.S. (e.g., Sherman, Lehi), a structural shift for PMIC supply. See TI news: Texas Instruments Newsroom.
- Infineon, Renesas, and onsemi are also investing in higher-voltage and wide-bandgap lines to support data center power.
These fabs still need time to qualify specific PMIC families for hyperscaler-grade reliability and long-tail availability.
Migrating key PMICs to scalable processes
Expect a wave of “drop-in successor” PMICs on 300mm-friendly BCD flows. The catch: requalification in your platform. If you’re planning 2027 platforms, lock these parts in now so you can ride the capacity wave when it crests.
Geopolitics and raw materials
Export controls, regional subsidies, and materials volatility (copper, substrates, specialty chemicals) will shape timelines. Don’t assume a straight-line recovery. Build scenario plans for:
- Slower-than-expected 8-inch output.
- Packaging bottlenecks (QFN, LGA) at OSATs.
- Local-content or “friend-shoring” requirements impacting your AVL.
What to watch in 2026
- Lead time trend lines: If PMIC lead times stabilize below 30 weeks, the logjam is beginning to clear. If they breach 45 weeks, plan for 2027 spillover.
- Allocation comments in earnings calls: Listen to OEMs (Dell, HPE, Supermicro), ODMs (Wiwynn, Quanta, Inventec), and analog vendors for color on allocations.
- Hyperscaler capex mix: A tilt toward AI factories implies continued PMIC prioritization; a rebalancing into general compute/storage suggests easing.
- 48V ecosystem maturity: More turnkey 48V solutions equals faster designs, fewer unique parts, and better availability.
- BMC diversification: Signs that second-source BMCs are gaining wins will reduce fragility in server management supply.
For background on AI factory trends and power envelopes, see NVIDIA’s AI factory overview: NVIDIA: The AI Factory.
Action checklist for data center operators and OEMs
- Place multi-quarter orders now for critical PMICs, BMCs, VR stages, and controllers—treat them as strategic parts.
- Approve at least one alternate per critical power rail in each platform.
- Standardize on 48V distribution and a small set of digital controllers across SKUs.
- Work with suppliers on VMI and consignment where volume justifies it.
- Validate thermal margins that allow slightly broader PMIC selection without compromising reliability.
- Communicate candidly with stakeholders: AI SKUs will get priority; align expectations for general-purpose fleet refresh timelines.
Sources and further reading
- The Register: AI servers driving PMIC/server-management chip shortages (2026-04-23): AI now gobbling up power and management chips for servers
- TrendForce industry insights: TrendForce
- Open Compute Project – Rack & Power: Open Compute Project
- NVIDIA data center platforms and AI factory context: NVIDIA Data Center
- AMD Instinct accelerators: AMD Instinct
- Texas Instruments manufacturing news (300mm analog): TI Newsroom
- Monolithic Power Systems data center solutions: MPS Solutions
- Infineon GaN for power conversion: Infineon GaN
- ASPEED Technology (BMCs): ASPEED Technology
- uPI Semiconductor: uPI Semi
FAQs
Q: What exactly are PMICs, and why are they critical for AI servers?
A: PMICs (power management integrated circuits) convert and regulate electrical power from rack inputs down to the low-voltage, high-current rails that GPUs and CPUs require. AI servers use many more—and higher-current—rails than conventional servers, so they need more PMICs with tighter control and telemetry.
Q: What are server management chips (BMCs), and why are they short?
A: BMCs handle out-of-band server management—power cycling, health monitoring, firmware updates, and remote access. AI servers depend heavily on BMCs to manage thermals and power budgets across accelerators. Because AI systems are prioritized for allocation, BMCs for general-purpose servers are getting crowded out.
Q: Why can’t manufacturers just make more PMICs quickly?
A: Many PMICs are built on 8-inch (200mm) mature-node processes optimized for analog/power. Adding capacity requires new fabs or retooling, which takes years. Porting a PMIC to a different process also requires design changes and full requalification.
Q: How long will the PMIC shortage last?
A: TrendForce indicates lead times of 35–40 weeks with ongoing strain through 2026. Relief depends on how fast 300mm analog capacity ramps, how quickly designs migrate to those nodes, and whether AI demand moderates.
Q: Is 48V distribution mandatory for AI servers?
A: Not mandatory, but increasingly common. 48V reduces distribution losses at high rack power and enables more efficient conversion stages. It’s becoming the de facto standard for high-density AI deployments.
Q: Will non-AI server prices rise because of this?
A: Likely yes. Scarcity of PMICs/BMCs, redesigns to second-source parts, and lower allocation priority for non-AI systems tend to raise BOM and logistics costs, which flow into system pricing.
Q: Can GaN or SiC solve the shortage?
A: GaN and SiC can improve efficiency and reduce stage counts, but they don’t eliminate the need for controllers, telemetry, and management silicon. They’re part of a longer-term shift to higher-efficiency power architectures.
Q: What can buyers do today to protect roadmaps?
A: Place longer-horizon POs, qualify alternates for critical rails, standardize on a small set of digital power controllers, pursue VMI with suppliers, and align platform designs with 300mm-friendly PMIC families.
The bottom line
AI didn’t just make GPUs scarce—it made power and management silicon strategic. With PMIC and BMC lead times pushing 35–40 weeks and 8-inch capacity under strain, the industry is being forced to reprioritize, redesign, and rethink. AI servers will keep getting first call on scarce components because they deliver the margins. To keep your broader infrastructure plans on track, treat power and management chips like first-class citizens in your roadmap: diversify, standardize, and secure supply early. The data center winners of 2026–2027 will be the teams that pair AI ambition with power-savvy, supply-aware engineering.
Discover more at InnoVirtuoso.com
I would love some feedback on my writing, so if you have any, please don’t hesitate to leave a comment here or on any platform that’s convenient for you.
For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring!
Stay updated with the latest news—subscribe to our newsletter today!
Thank you all—wishing you an amazing day ahead!
Read more related Articles at InnoVirtuoso
- How to Completely Turn Off Google AI on Your Android Phone
- The Best AI Jokes of the Month: February Edition
- Introducing SpoofDPI: Bypassing Deep Packet Inspection
- Getting Started with shadps4: Your Guide to the PlayStation 4 Emulator
- Sophos Pricing in 2025: A Guide to Intercept X Endpoint Protection
- The Essential Requirements for Augmented Reality: A Comprehensive Guide
- Harvard: A Legacy of Achievements and a Path Towards the Future
- Unlocking the Secrets of Prompt Engineering: 5 Must-Read Books That Will Revolutionize You
