Barcelona, Spain – Mar. 2, 2026 – Lanner Electronics, a global leader in edge AI and network computing platforms, today announced its collaboration with Ecrio and Qualcomm Technologies, Inc. at MWC 2026 to showcase a next-generation Edge AI solution designed to enable on-premise telecom (e.g., fraud detection, predictive QoS) and generative AI (e.g., large language model, or LLM) applications at telecom cell sites.
The joint solution combines Lanner’s ECA-5555 Edge AI Server with the Qualcomm Cloud AI 100 Ultra card, delivering a high-performance, energy-efficient, and scalable edge AI platform purpose-built for telecom environments. This new Qualcomm Dragonwing AI on-prem solution enables telecom service providers to deploy AI processing services directly at the network edge, transforming cell sites into AI-ready infrastructure for a wide range of telecom and generative AI workloads. Ecrio’s platform unifies real-time communications with AI, enabling operators to detect, analyze and respond to network anomalies for faster resolution.
By bringing Gen AI capabilities closer to where data is generated, telecom operators can support use cases such as secure on-premise intelligence, AI-driven network monitoring, and predictive maintenance, while reducing latency and bandwidth consumption and addressing data sovereignty concerns.
Edge AI Platform for Telco-Grade Gen AI Deployment
At the core of the solution is the ECA-5555 Edge AI Server, Lanner’s compact and rugged edge server powered by the Intel® Xeon® 6 SoC. Designed for high-performance, scalable AI acceleration at the edge, the ECA-5555 features:
- Support for full-height, full-length (FHFL) PCIe Gen5 x16 accelerators
- Two 100GbE QSFP28 and eight 25GbE SFP28 ports for high-throughput networking
- Wide operating temperature range of -40°C to 55°C for harsh edge deployments
- IEEE 1588 precision time synchronization, ideal for telecom and other time-sensitive applications
Paired with the server is the Qualcomm Cloud AI 100 Ultra, a performance- and cost-optimized AI inference accelerator designed specifically for Generative AI and LLM workloads. The accelerator delivers:
- Up to 870 TOPS at 150W
- 64 AI cores per card
- 128GB LPDDR4x supporting AI models of up to 120B parameters per card
- Up to 576 MB of on-die SRAM for low-latency inference
- Flexible programmability supporting a wide range of AI models and acceleration techniques
“As telecom networks evolve toward AI-native architectures, deploying Generative AI at the edge is becoming a strategic priority,” said Jeans Tseng, CTO of Lanner Electronics. “By collaborating with Qualcomm and integrating the Cloud AI 100 Ultra accelerator into our ECA-5555 Edge AI Server, we are enabling telecom operators to deliver secure, low-latency, and on-premise AI processing services directly at cell sites—unlocking new revenue opportunities and accelerating the transition to intelligent networks.”
“Deploying Generative AI at the cell site demands an architecture that delivers strong AI performance without increasing power and cooling requirements,” stated Evgeni Gousev, VP, Technology, Qualcomm Technologies, Inc. “The Qualcomm Cloud AI 100 Ultra is designed to provide energy‑efficient AI inference for Generative AI workloads, enabling telecom operators to run advanced AI services on‑prem at the edge with lower power consumption, reduced infrastructure overhead, and predictable operating costs.”
Lanner, Ecrio, and Qualcomm will showcase the joint Edge AI solution at MWC 2026 (booth no. 5C86), demonstrating how telco cell sites can be transformed into AI processing hubs capable of supporting next-generation Generative AI applications.
About Lanner
Lanner Electronics Inc. is a world-leading provider of design, engineering, and manufacturing services for advanced network appliances and rugged applied computing platforms for system integrators, service providers, and application developers.