11:15 Special Update: AI in Telecoms, NVIDIA Platform and Ecosystem (H4 Hotel)
Join us for an overview of NVIDIA initiatives in the telecommunications industry, with a special update on AI at the edge and its value proposition for 5G and autonomous machines. Learn how to leverage NVIDIA developer assets and tools for the telecoms ecosystem.
12:00 Working Lunch (H4 Hotel)
12:25 AI and ML Inference in the 5G Era (H4 Hotel)
Join us for an interactive panel discussion on AI inference in the era of 5G. Learn about NVIDIA GPUs, NVIDIA software stacks, inference use cases, challenges in 5G, and the innovation required.
13:30 Machine Learning Enabled 5G Wireless Networks: GPU Convex Feasibility Solvers (Room 14a, ICM)
In current wireless networks, most algorithms are iterative and may not be able to meet the requirements of some 5G technologies, such as ultra-reliable low-latency communication, within a very tight latency budget. For instance, with end-to-end latency requirements below 1 ms, many signal processing tasks must be completed within microseconds. Only a limited number of iterations can therefore be performed, which may lead to uncontrollable errors. We propose formulating the underlying optimization problems as convex feasibility problems in order to enable massively parallel processing on GPUs for online learning and robust tracking. Moreover, convex feasibility solvers allow for the efficient incorporation of context information and expert knowledge, and can provide robust results based on relatively small data sets. Our approach has numerous applications, including channel estimation, peak-to-average power ratio (PAPR) reduction in orthogonal frequency division multiplexing (OFDM) systems, radio map reconstruction, beamforming, localization, and interference reduction. We'll demonstrate how these applications can benefit greatly from the parallel architecture of GPUs.
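To give a flavor of the convex feasibility approach mentioned in this abstract, the classic method of projections onto convex sets (POCS) cycles through projections onto each constraint set until a feasible point is found. The sketch below is illustrative only, not the speakers' implementation: it uses hypothetical halfspace constraints and plain NumPy; a GPU version would batch many such problem instances in parallel.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {x : a @ x <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x  # already feasible for this constraint
    return x - viol * a / (a @ a)

def pocs(constraints, x0, iters=50):
    """Alternating projections onto convex sets (POCS).

    constraints: list of (a, b) halfspace pairs; x0: starting point.
    Each projection is a cheap, data-parallel operation, which is why
    this style of solver maps well onto GPUs.
    """
    x = x0
    for _ in range(iters):
        for a, b in constraints:
            x = project_halfspace(x, a, b)
    return x

# Toy feasibility problem: 0 <= x1 <= 1 and x2 <= 2.
constraints = [
    (np.array([1.0, 0.0]), 1.0),
    (np.array([-1.0, 0.0]), 0.0),
    (np.array([0.0, 1.0]), 2.0),
]
x = pocs(constraints, np.array([5.0, 3.0]))
```

In practice, each projection here is a closed-form, embarrassingly parallel update, so thousands of independent feasibility problems (e.g. one per subcarrier or per user) can be solved simultaneously on a GPU.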
14:30 An Efficient CUDA-Accelerated Machine Learning Inference for 4G and 5G Radio Networks (Room 14a, ICM)
In telecom data centres, ML models are increasingly deployed for use cases such as automation, analytics, and anomaly detection. Handling diverse data types and request intervals ranging from hours to milliseconds can be challenging in a legacy, CPU-dominated cloud environment. We describe the design of a scalable CUDA-based service framework that efficiently distributes ML inference workloads across a cluster of dedicated GPU-based servers, which can also be easily integrated with existing telecom cloud infrastructure.
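The core idea of distributing inference requests across dedicated GPU servers can be sketched with a simple round-robin dispatcher. This is a hypothetical minimal sketch, not the framework described in the talk: the `workers` here stand in for GPU-backed model servers, and the scheduling policy is deliberately simplistic.

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

class InferenceCluster:
    """Round-robin dispatch of inference requests across workers.

    Illustrative only: each worker is any callable that takes a request
    and returns a result; in a real deployment it would wrap an RPC to a
    GPU-based inference server.
    """

    def __init__(self, workers):
        self._workers = list(workers)
        self._rr = itertools.cycle(range(len(self._workers)))
        self._pool = ThreadPoolExecutor(max_workers=len(self._workers))

    def submit(self, request):
        """Dispatch one request to the next worker; returns a Future."""
        worker = self._workers[next(self._rr)]
        return self._pool.submit(worker, request)

# Two stand-in "GPU servers" that tag their output with a worker id.
cluster = InferenceCluster([
    lambda r: ("w0", r * 2),
    lambda r: ("w1", r * 2),
])
results = [cluster.submit(i).result() for i in range(4)]
```

A production framework would add request batching (to keep GPU utilization high at millisecond rates) and load-aware scheduling rather than strict round-robin, but the dispatch structure is the same.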
16:00 Connect with Experts (H4 Hotel)
Network with AI and Telecommunications experts.