‹Programming› 2022
Mon 11 - Thu 14 April 2022
Tue 22 Mar 2022 13:55 - 14:50 at Workshop II - Session 3

Machine learning (ML) models keep getting larger and more complex. Whereas models were previously represented as static data-flow graphs, they are now implemented via arbitrary Python code. The so-called eager-mode frameworks, such as PyTorch, are now the standard for developing new ML models. The semantics of eager-mode frameworks is that operations are executed straight away, and thus one can inspect the result of any operation at any point. This simplifies the development process and enables more dynamic ML models.
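To illustrate the immediate-execution semantics described above, a minimal PyTorch sketch (the tensor values are arbitrary, chosen only for illustration):

```python
import torch

# Eager mode: each operation runs immediately, so intermediate
# results can be inspected or printed at any point.
x = torch.tensor([1.0, -2.0, 3.0])
y = x * 2          # computed straight away
print(y)           # concrete values are already available here
z = torch.relu(y)
if z.sum() > 0:    # data-dependent control flow is plain Python,
    z = z + 1      # which is what enables more dynamic models
print(z)
```

Because every intermediate tensor is concrete, ordinary Python tools (debuggers, `print`, conditionals) work on model code directly.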

Although eager-mode frameworks are more convenient, they are currently less efficient, as operations are dispatched to the hardware one at a time. This execution model precludes, for example, operation fusion, which is essential for the performance of ML workloads.

In this paper we present Torchy, a tracing JIT compiler for PyTorch, one of the mainstream eager-mode frameworks. Torchy achieves performance similar to that of data-flow frameworks, while providing the same straight-away execution semantics. Moreover, Torchy works with unmodified PyTorch programs. Torchy outperforms PyTorch by up to 12x in microbenchmarks, and PyTorch's static compiler (TorchScript) by up to 5x.
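A toy sketch of the general tracing idea (this is an illustration, not Torchy's actual implementation or API): operations are recorded into a trace instead of being dispatched immediately, and the trace is flushed, at which point it could be compiled and fused, only when a concrete value is observed, so the user still sees eager-mode behavior.

```python
# Hypothetical LazyTensor: records operations, executes them in a
# batch only when the value is actually needed.
class LazyTensor:
    def __init__(self, value, trace=None):
        self.value = value        # concrete data once materialized
        self.trace = trace or []  # pending operations

    def _defer(self, fn):
        return LazyTensor(self.value, self.trace + [fn])

    def mul(self, k): return self._defer(lambda v: [x * k for x in v])
    def add(self, k): return self._defer(lambda v: [x + k for x in v])
    def relu(self):   return self._defer(lambda v: [max(x, 0.0) for x in v])

    def materialize(self):
        # Flush point: a real JIT would compile/fuse the recorded
        # trace into one kernel instead of running ops one by one.
        v = self.value
        for fn in self.trace:
            v = fn(v)
        self.value, self.trace = v, []
        return v

t = LazyTensor([1.0, -2.0, 3.0]).mul(2).add(1).relu()
print(t.materialize())
```

Deferring until an observation point is what lets such a design keep the semantics of eager execution while regaining a whole-trace view for optimization.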

Tue 22 Mar

Displayed time zone: Lisbon

13:30 - 15:00
Session 3: MoreVMs at Workshop II
Who You Gonna Call? A Case Study about the Call-Site Behaviour in Ruby-on-Rails Applications
Sophie Kaleba University of Kent, Octave Larose University of Kent, Stefan Marr University of Kent, Richard Jones University of Kent
Torchy: A Tracing JIT Compiler for PyTorch
Nuno P. Lopes Universidade de Lisboa
Day closing
Chairs: Rodrigo Bruno (INESC-ID / Técnico, ULisboa), Michael Engel (Norwegian University of Science and Technology (NTNU))