ZETIC × Qualcomm: Unlocking the NPU Era for Every Developer

Simplifying NPU-powered AI deployment across mobile, compute, and edge devices

The future of AI isn't in the cloud. It is in the palm of your hand, in your car, and embedded in the world around you. But getting high-performance AI to run locally on edge devices has historically been a fragmented, painful process for developers.

Today, we are thrilled to announce that ZETIC is collaborating with Qualcomm to change that. We are bringing seamless, zero-friction on-device AI deployment to Qualcomm's industry-leading platforms.

From Qualcomm to ZETIC: Building What We Needed

Our founder, Yeonseok Kim, didn't just watch the on-device AI revolution happen; he spent years engineering it from the inside as a Senior ML Engineer with the Qualcomm AI Research team.

From developing real-time embedded neural network frameworks for resource-constrained NPUs (later commercialized in products like Amazon Alexa, Meta Oculus, and Samsung Bixby) to eventually building Qualcomm's company-wide AI development platform, his mandate was clear: build the high-level abstraction layers that let developers deliver machine learning solutions without getting bogged down in low-level memory optimizations, fully quantized networks, or hardware-specific APIs.

Through this hands-on work, he saw the industry's biggest disconnect firsthand. While mobile and edge silicon was advancing at breakneck speed, the software tooling for the broader developer community to actually use that hardware was painfully far behind. Building for the edge still meant wrestling with complex integrations and weeks of manual fine-tuning.

ZETIC was born from this exact frustration. We built a general AI infrastructure platform that strips away the complexity, allowing developers to deploy optimized AI models directly to edge devices with the simplicity of a standard API call.
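To make that idea concrete, here is a minimal sketch of what an API-call-style deployment flow can look like. Every endpoint, field, and target name below is hypothetical, invented purely for illustration; it is not ZETIC's actual interface.

```python
# Hypothetical deployment flow: upload a trained model once, request a
# build for a specific device target, then download the device-ready
# artifact. All names and endpoints here are invented for illustration.
import requests

API = "https://api.example-edge-platform.dev/v1"  # placeholder endpoint

# 1) Upload the trained model.
with open("detector.onnx", "rb") as f:
    model_id = requests.post(f"{API}/models", files={"file": f}).json()["id"]

# 2) Request a build optimized for a chosen device target.
job = requests.post(
    f"{API}/models/{model_id}/builds",
    json={"target": "snapdragon-8-elite-gen5"},  # hypothetical target label
).json()

# 3) Fetch the optimized artifact to bundle into the app.
artifact = requests.get(f"{API}/builds/{job['id']}/artifact").content
with open("detector.npu.bin", "wb") as out:
    out.write(artifact)
```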

Yeonseok Kim (CEO & Tech Lead), Manoj Khilnani (Director, Global Partner Marketing), Seongjun Kim (Co-founder & Business Head)

The NPU-First Future

While CPUs and GPUs paved the way for early AI and still deliver superior performance for certain workloads, the frontier for high-efficiency on-device inference is the Neural Processing Unit (NPU).

Through this collaboration, ZETIC is integrating its deployment technology seamlessly with Qualcomm's state-of-the-art Hexagon™ NPUs. We are enabling deployment across a wide range of environments, from mobile to compute to automotive, including platforms like the Snapdragon® 8 Elite Gen 5 (mobile) and the Snapdragon® X2 Elite (compute).

By routing appropriate workloads directly to the NPU, developers can bypass traditional compute bottlenecks, achieving substantial gains in inference speed and significant reductions in energy consumption for suitably optimized models.
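As one generic illustration of what "routing a workload to the NPU" means in practice, the sketch below targets the Hexagon NPU through ONNX Runtime's QNN execution provider, with a CPU fallback for unsupported operators. This is a general-purpose example of NPU targeting, not ZETIC's own implementation.

```python
# Generic illustration of NPU routing: run an ONNX model on the Hexagon NPU
# via ONNX Runtime's QNN execution provider, falling back to CPU for any
# operators the NPU backend cannot handle. Not ZETIC's implementation.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=[
        # HTP is the Hexagon Tensor Processor (the NPU) backend of QNN;
        # on Android the library is libQnnHtp.so, on Windows QnnHtp.dll.
        ("QNNExecutionProvider", {"backend_path": "libQnnHtp.so"}),
        "CPUExecutionProvider",
    ],
)

# Build a dummy input matching the model's declared shape (dynamic dims -> 1).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
```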

We aren't just exploring edge AI. We are actively enabling developers to tap into this dedicated silicon. What used to take an ML engineering team weeks of low-level optimization can now be executed by a single developer in minutes. Migrating that deployment to an entirely different hardware platform requires only a few additional minutes.

Inside the Workshop: On-Device AI in Action

Recently, ZETIC hosted an On-Device AI workshop in collaboration with Qualcomm at the Edge AI San Diego summit, with participants from across the Edge AI ecosystem.

This wasn't a presentation. It was a hands-on session.

Using Galaxy S25 Ultra devices powered by Snapdragon, participants built and ran AI applications directly on-device using Melange. No simulation. No cloud dependency. Real deployment on real hardware.

What stood out: engineers were focused on deploying models and evaluating performance across chipsets, formats, and quantization strategies. The demand for simpler, more reliable on-device deployment workflows was clear.
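For readers curious what "evaluating performance across formats and quantization strategies" boils down to, the sketch below shows the kind of simple latency comparison run during sessions like this. The runners are placeholders standing in for real model variants, not the workshop's actual code.

```python
# Rough latency-comparison harness for comparing model variants (e.g. fp32
# vs. int8). The runners below are placeholders; in practice each would wrap
# the same model exported with a different format or quantization scheme.
import time

def avg_latency_ms(run_inference, warmup=10, iters=100):
    for _ in range(warmup):              # let clocks and caches settle
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    return (time.perf_counter() - start) / iters * 1000.0

variants = {
    "fp32": lambda: sum(x * x for x in range(20_000)),  # placeholder workload
    "int8": lambda: sum(x * x for x in range(10_000)),  # placeholder workload
}
for name, run in variants.items():
    print(f"{name}: {avg_latency_ms(run):.2f} ms / inference")
```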

This reinforces what we are seeing across the industry. The bottleneck is no longer models. It is how quickly developers can deploy and run them efficiently on real devices.

We will continue supporting participants from the workshop as they push their projects toward real-world applications. If you would like to participate in a future workshop, stay tuned for announcements.

What Comes Next

This collaboration is a starting point.

ZETIC and Qualcomm will continue expanding model support, improving benchmarking workflows, and making on-device deployment more predictable for developers. We are also working to bring support to more devices and platforms, so developers can reach users wherever they are.

On-device AI is no longer limited by hardware capability. The real constraint is how quickly developers can benchmark, evaluate, and deploy new models on real devices.

ZETIC × Qualcomm is focused on closing that gap, starting from day zero.