Edge Talk – Lanner Vlog


Lanner project manager Eloisa talks about the product development of NCA-5230, including key features, CPU, expansion, NPI challenges, and product customization.

Lanner project manager Chole talks about the product development of NCA-1040, including optional accessories, NPI cycles, the validation process, and materials preparation.

All innovative fleet services generate massive volumes of data, driving fleet management companies' data centers to become more agile by integrating compute, storage, and networking into one hyper-converged infrastructure that simplifies and consolidates all the virtualization components through software. With its software-defined nature, the hyper-converged infrastructure leverages existing hardware storage while adopting a virtual controller to manage the physical devices. Lanner developed a hyper-converged MEC server that seamlessly integrates high-performance computing, massive storage, and networking functions into a single appliance. Powered by the NVIDIA T4 Tensor Core GPU, the MEC server consolidates taxi management tasks such as emergency call services, video surveillance systems, and location-based services. The high storage density of the FX-3420 can record all driving and service data for customer analysis and demand forecasting.

The accelerating deployment of powerful AI solutions in competitive markets has pushed hardware requirements down to the very edge of the network amid an eruption of AI-based products and services. For edge AI workloads, efficient, high-throughput inference depends on a well-curated compute platform. Advanced AI applications now face fundamental deep learning inference challenges in latency, reliability, multi-precision neural network support, and solution delivery. NGC software runs on a wide variety of edge-to-cloud GPU servers, and Lanner's edge AI appliance, the LEC-2290E, optimized for the NVIDIA® T4, has passed an extensive suite of tests validating its ability to deliver high-volume, low-latency inference using NVIDIA GPUs and NGC software components such as TensorRT, TensorRT Inference Server, DeepStream, the CUDA toolkit, and various NGC-supported deep learning frameworks.

Description: Edge computing requires multitasking workloads at the edge compute site in order to reduce communication latency, power, and real estate. While some workloads on customer-premises Internet of Things devices can leverage GPU functions for video processing, further analytics requires an open, scalable network platform for accelerated AI workloads at the service provider edge, with still deeper analysis at a centralized data center platform. In this session, Lanner will partner with Tensor Network to discuss how NVIDIA AI can be structured in a networked approach where AI workloads are distributed within the edge networks. We will start from NVIDIA AI-accelerated customer premises equipment, move through the aggregated network edge, and arrive at the hyper-converged platform deployed at the centralized data center.
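The tiered distribution described above can be sketched as a simple placement rule. This is purely illustrative: the function name, tier labels, and thresholds below are hypothetical assumptions for the sketch, not any NVIDIA or Lanner API.

```python
# Illustrative sketch of tiered AI-workload placement: latency-sensitive
# inference stays on customer-premises equipment, heavier near-real-time
# analytics moves to the network edge, and large batch jobs run in the
# centralized data center. All names and thresholds are hypothetical.

def place_workload(latency_budget_ms: float, gpu_hours: float) -> str:
    """Pick a deployment tier for an AI workload."""
    if latency_budget_ms < 20:        # real-time video inference
        return "customer-premises CPE"
    if gpu_hours < 1.0:               # aggregated, near-real-time analytics
        return "network edge (MEC)"
    return "centralized data center"  # large batch / training jobs

if __name__ == "__main__":
    print(place_workload(10, 0.1))    # → customer-premises CPE
    print(place_workload(100, 0.5))   # → network edge (MEC)
    print(place_workload(500, 48))    # → centralized data center
```

In a real deployment the placement decision would also weigh bandwidth, data governance, and accelerator availability at each tier; the two-threshold rule here only mirrors the CPE → edge → data center hierarchy of the session.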

Large-scale enterprises with multiple national and international branches are replacing traditional IT infrastructure with virtual network services and new business models. The increasing demand for virtual network services during the pandemic requires a solid, secure network infrastructure with zero-touch provisioning for managed WAN services. Successfully deploying multiple Virtual Network Functions (VNFs) across multi-branch networks requires hardware and software disaggregation that leverages commodity hardware at scale, together with security elements to protect enterprise hardware and applications.

SD-WAN deployments in 2020 that leverage an open, disaggregated architecture bring new platform requirements for zero-trust security and zero-touch provisioning. In this keynote, Lanner will cover the latest SD-WAN use cases powered by whitebox uCPE, as well as converged edge cloud applications empowered by P4-programmable MEC servers.
