Phison Electronics, a global leader in NAND flash controllers and advanced storage solutions, has announced a major expansion of its aiDAPTIV+ technology, extending high-performance AI processing to integrated GPU (iGPU) architectures. Built on more than 25 years of expertise in flash memory innovation, the enhanced aiDAPTIV+ architecture now accelerates AI inference, significantly increases usable memory capacity, and simplifies deployment. As a result, it enables large-model AI capabilities on notebook PCs, desktop PCs, and compact mini-PCs, making advanced AI more accessible than ever.
As organizations continue to face explosive data growth and increasingly complex AI training and inference workloads, the demand for solutions that are both affordable and easy to deploy on standard devices is rapidly increasing. Against this backdrop, aiDAPTIV+ directly addresses persistent memory shortages and performance bottlenecks. By leveraging NAND flash as an extension of system memory, the technology removes traditional compute constraints and enables on-premises inference and fine-tuning of large AI models on widely available hardware platforms.
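The aiDAPTIV+ middleware itself is proprietary, but the underlying idea of extending DRAM with flash can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about Phison's actual implementation: it memory-maps a large weight file from disk so the operating system pages data in from flash on demand, keeping only the working set resident in RAM.

```python
# Illustrative sketch only; aiDAPTIV+'s actual mechanism is proprietary.
# The general idea: keep large tensors on flash and memory-map them,
# so DRAM holds only the pages currently being computed on.
import os
import tempfile

import numpy as np

# Hypothetical "model layer" too large to keep fully resident in DRAM.
path = os.path.join(tempfile.gettempdir(), "layer0.weights")

# Write the weights once to storage (a temp file stands in for the SSD).
weights = np.arange(1_000_000, dtype=np.float32)
weights.tofile(path)

# Memory-map the file: slices are faulted in from flash on demand,
# rather than loading the whole tensor into system memory up front.
mapped = np.memmap(path, dtype=np.float32, mode="r")

# Compute with a small slice; only those pages enter RAM.
partial_sum = float(mapped[:1000].sum())
print(partial_sum)  # sum of 0..999 = 499500.0
```

In practice, solutions in this space pair such tiering with scheduling and caching logic so that hot tensors stay in DRAM or VRAM while cold ones live on flash; this sketch shows only the memory-mapping primitive.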
Moreover, Phison’s latest announcement highlights deepening collaborations with strategic partners, including demonstrations of training larger large language models (LLMs) on Acer laptops while using significantly less DRAM. Consequently, users can now run sophisticated AI workloads on smaller systems without compromising data privacy, scalability, affordability, or ease of use. For OEMs, system integrators, and resellers, this approach delivers complete, end-to-end solutions that overcome long-standing GPU VRAM limitations.
“As AI models grow into tens and hundreds of billions of parameters, the industry keeps hitting the same wall with GPU memory limitations,” said Michael Wu, President and GM, Phison US. “By expanding GPU memory with high-capacity, flash-based architecture in aiDAPTIV+, we offer everyone, from consumers and SMBs to large enterprises, the ability to train and run large-scale models on affordable hardware. In effect, we are turning everyday devices into supercomputers.”
In parallel, Phison’s engineering collaboration with Acer demonstrates how aiDAPTIV+ can handle massive models on modest hardware configurations. “Our engineering collaboration enables Phison’s aiDAPTIV+ technology to accommodate and accelerate large models such as gpt-oss-120b on an Acer laptop with just 32GB of memory,” said Mark Yang, AVP, Compute Software Technology at Acer. “This can significantly enhance the user experience interacting with on-device Agentic AI, for actions ranging from simple search to intelligent inquiries that support productivity and creativity.”
Looking ahead, Phison will showcase aiDAPTIV+ and partner solutions at CES 2026, from January 6–8, in the Phison Bellagio Suite and partner booths. Key demonstrations will highlight reduced total cost of ownership (TCO) and memory consumption, faster inference performance, and the ability to handle larger datasets on smaller devices. For example, Phison testing shows that a 120B-parameter Mixture of Experts (MoE) model can now run with just 32GB of DRAM instead of the traditional 96GB. Additionally, inference response times improve by up to ten times at lower power consumption, delivering noticeable gains in Time to First Token (TTFT) on notebook PCs.
