Distributed Computational Workloads with Nx and Nerves

Speaker:
Elijah Scheele

Abstract:

Deciding where your computational models will run has become more complicated with the rise of low-cost, low-power ML accelerators. In this talk, we’ll explore how to distribute your computational workloads using distributed Erlang, how to benchmark these systems, the new opportunities unlocked by edge-device inference, and how patterns from Nx and Nerves can be extended to new classes of devices.
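
To give a flavor of the pattern the talk refers to, here is a minimal sketch of dispatching an Nx workload to a Nerves device over distributed Erlang. The node name, the workload, and the assumption that Nx is compiled into the device firmware are all illustrative, not code from the talk; it is written as if typed into an iex session on the host, with both nodes sharing an Erlang cookie.

    # Illustrative sketch: the node name and workload below are assumptions.
    edge = :"nerves@edge-device.local"
    true = Node.connect(edge)          # both nodes must share an Erlang cookie

    # Ship a small Nx workload to the edge node over distributed Erlang;
    # the closure runs remotely and only the plain result travels back.
    result =
      :erpc.call(edge, fn ->
        Nx.tensor([1.0, 2.0, 3.0])
        |> Nx.multiply(2.0)
        |> Nx.sum()
        |> Nx.to_number()
      end)

    IO.inspect(result)                 # => 12.0

The same call pattern extends from a single device to a pool of heterogeneous accelerators, which is where the benchmarking discussed in the talk comes in.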

Talk objectives:

  • Learn how to apply Elixir tools and techniques for distributing your computational workloads across a wide variety of devices.

Target audience:

  • Data Scientists
  • Infrastructure Engineers
  • Edge/IoT Device Makers

Level: