Taming a Beast: Using ONNX Runtime in AAA Games by Jean-Simon Lapointe

Jean-Simon talks about Ubisoft’s journey in incorporating ML into game engines, mainly for tasks that have traditionally been dominated by classical AI methods such as navigation. The main advantage of ML over the traditional methods is that the resulting behavior is more “human” out of the box, without requiring any behavior “patching”.

The typical workflow is as follows.

  • A team of data scientists trains an ML model using their favorite framework.
  • The trained model is exported to the ONNX format; such a model is essentially a graph with input nodes, operator nodes, and output nodes. For example, the input nodes could be the player’s position and the destination, the operator nodes could be matrix multiplications or various mathematical functions, and the output node is the action the bot should take in the game.
  • In C++, ONNX Runtime loads the exported model and runs it to produce outputs (a.k.a. “inference”); a minimal sketch of this step follows below.
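To make this step concrete, here is a minimal sketch of what the C++ inference side might look like. The model file name (bot_policy.onnx) and the input/output node names (observation and action) are made up for illustration; the real names come from whatever the data scientists exported.

#include <onnxruntime_cxx_api.h>

#include <array>
#include <cstdint>

int main() {
    // One environment per process, one session per loaded model.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "bot");
    Ort::SessionOptions options;
    // On Windows the model path would be a wide string.
    Ort::Session session(env, "bot_policy.onnx", options);

    // Example input: player position (x, y, z) followed by the destination (x, y, z).
    std::array<float, 6> input{0.f, 0.f, 0.f, 10.f, 0.f, 5.f};
    std::array<int64_t, 2> shape{1, 6};  // batch of 1, 6 features

    Ort::MemoryInfo memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
        memInfo, input.data(), input.size(), shape.data(), shape.size());

    const char* inputNames[] = {"observation"};
    const char* outputNames[] = {"action"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               inputNames, &inputTensor, 1,
                               outputNames, 1);

    // The bot controller would consume this to pick the next move.
    float* action = outputs[0].GetTensorMutableData<float>();
    (void)action;
}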

I liked this talk because it really gave me a good picture of the infrastructure for using ML in “production”, where training and deployment of AI are decoupled.

Challenges:

  • integrating ONNX Runtime with the game engine’s custom memory management
  • for the shipped version, ONNX Runtime and every library it depends on need to be compiled in-house, and their licenses checked with legal
  • exceptions are not okay in game dev, but ONNX Runtime throws exceptions; exceptions can be disabled at build time, but then the runtime simply aborts where it would have thrown; the workaround is to add prechecks for all the conditions that would trigger exceptions (sketched below).
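To illustrate the precheck idea, here is a hypothetical helper (my own sketch, not Ubisoft’s code): it verifies up front a condition that would otherwise make ONNX Runtime throw, or abort when exceptions are compiled out.

#include <onnxruntime_cxx_api.h>

#include <filesystem>
#include <optional>

// Hypothetical helper: only reach the Session constructor when it is
// expected to succeed, instead of letting it throw (or abort).
std::optional<Ort::Session> tryLoadModel(Ort::Env& env, const char* path) {
    if (!std::filesystem::exists(path))
        return std::nullopt;  // constructing the session here would have thrown
    Ort::SessionOptions options;
    return Ort::Session(env, path, options);
}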

Advanced Ranges: Writing Modular, Clean, and Efficient code with Custom Views by Steve Sorkin

I remember the talk being quite technically advanced, so instead of summarizing it I’ll write about what I learned about the subject afterwards.

In programming, we often need to iterate over sequences. Ranges in C++20 are an abstraction for sequences, providing a way to operate on them in a declarative and functional manner.

Ranges are usually represented as a [begin, end) pair of iterators (more generally, an iterator and a sentinel), and the standard library also treats things like C-style arrays and strings as ranges out of the box.

The standard library provides range algorithms for operating on ranges. For example, for_each is a basic range algorithm that applies a function (possibly with side effects) to each element of the range.
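For example (a small self-contained snippet of my own, not from the talk):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> numbers{1, 2, 3, 4, 5};
    int cArray[]{10, 20, 30};  // C-style arrays are ranges too

    auto print = [](int x) { std::cout << x << ' '; };

    // Range algorithms take the whole range instead of a begin/end iterator pair.
    std::ranges::for_each(numbers, print);
    std::ranges::for_each(cArray, print);
    std::cout << '\n';
}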

In my opinion, what really makes C++20 ranges stand out are views. Views are ways to “look” at ranges; alternatively, think of views as ranges that do not own the data underneath. They are easily composable and performant. Here are some examples.
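(The snippet below is my own illustration rather than code from the talk; arr and isEven are the same names reused in the next paragraph.)

#include <array>
#include <iostream>
#include <ranges>

int main() {
    std::array arr{1, 2, 3, 4, 5, 6, 7, 8};
    auto isEven = [](int x) { return x % 2 == 0; };

    // A view of only the even elements; nothing is copied out of arr.
    std::ranges::filter_view evens(arr, isEven);

    // A view of at most the first three elements of another view.
    std::ranges::take_view firstThreeEvens(evens, 3);

    for (int x : firstThreeEvens)
        std::cout << x << ' ';  // prints: 2 4 6
    std::cout << '\n';
}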

How would this allow us to write composable and performant code, you may ask? Well, you may write std::ranges::take_view(std::ranges::filter_view(arr, isEven), 3) to compose the two views, giving the first three even elements in the range. Still not convinced? Then consider the pipe syntax.

A reason this code is performant is that views are lazily evaluated. This means that when you write std::ranges::for_each(arr | std::views::filter(isEven) | std::views::take(3), print), essentially nothing is done up front. The result is computed on the fly as you iterate over it! Moreover, no space is needed to store any intermediate results.

When using the pipe syntax for views, the pieces between the pipes are called range adaptors (for example, filter(isEven) and take(3)); applying range adaptors to views results in views, so pipelines can be chained freely. A complete list of the range adaptors provided by the standard library can be found on cppreference.
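Putting the pieces together (again a sketch of my own, not code from the talk):

#include <algorithm>
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> arr{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    auto isEven = [](int x) { return x % 2 == 0; };
    auto print = [](int x) { std::cout << x << ' '; };

    // Building the pipeline does essentially no work; elements are pulled
    // lazily, one at a time, as for_each iterates over the composed view.
    auto firstThreeEvens = arr | std::views::filter(isEven) | std::views::take(3);
    std::ranges::for_each(firstThreeEvens, print);  // prints: 2 4 6
    std::cout << '\n';
}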

C++ Contracts: A Meaningfully Viable Product by Andrei Zissu and Should I Check for Null Here? by Tony Van Eerd

These talks really resonated with me because I am a big fan of contracts in programming! When I program, I think about invariants, contracts, and guarantees… When I use an API, I’d like to know what a piece of code can guarantee if I use it correctly. Good news: in C++26, we get contracts!

Without going into too many details, here’s a minimal demo of what contracts in C++26 can achieve, taken from cppreference.

#include <array>
#include <cmath>
#include <limits>
float tolerance = 10 * std::numeric_limits<float>::epsilon();

bool isNormalized(std::array<float, 3> vector) {
    return std::abs(std::hypot(vector[0], vector[1], vector[2]) - 1.f) <= tolerance;
}

bool isNormalizable(std::array<float, 3> vector) {
    return std::hypot(vector[0], vector[1], vector[2]) > 0.f;
}

// function contract specifiers
std::array<float, 3> normalize(std::array<float, 3> vector)
    pre(isNormalizable(vector))
    post(output: isNormalized(output))
{
    auto norm = std::hypot(vector[0], vector[1], vector[2]);
    return std::array<float, 3> {vector[0] / norm, vector[1] / norm, vector[2] / norm};
}

In addition to the function contract specifiers shown above, C++26 also supports contract_assert statements, which are syntactically very similar to C-style asserts (see the sketch below).
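For instance (a hypothetical function of my own, not taken from the talk or cppreference):

int divide(int numerator, int denominator) {
    // Checked, ignored, or enforced depending on the evaluation
    // semantic chosen for this build (see below).
    contract_assert(denominator != 0);
    return numerator / denominator;
}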

In the snippet, the part enclosed by pre(...) is the precondition; only the parameters of the function are in scope for the precondition. The part enclosed by post(...) is the postcondition; it names the return value output and asserts that it must be a normalized vector. Note how we can specify postconditions on unnamed return values this way: we did not have to assign the return value to a variable named output and then assert before returning inside the function body.

The exact semantics of these assertions (i.e., whether an assertion is checked, and what happens when it fails) can vary per build, and even per evaluation of an assertion, and are implementation-defined. A basic reason is so that different builds can have different behaviors. For example, a debug build could use a semantic that checks the assertions and terminates on violation, while a release build can simply ignore them all. But perhaps a more important reason is customizability: C++26 provides the ability to specify your own contract-violation handler, which is a function that takes an object of type std::contracts::contract_violation. This object contains diagnostic information so that you can decide what to do on a contract violation.
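Here is a sketch of what such a handler could look like, based on my reading of P2900 and assuming an implementation that ships the <contracts> header and lets the program replace the default ::handle_contract_violation:

#include <contracts>
#include <cstdio>
#include <exception>
#include <source_location>

// Replacement contract-violation handler; invoked when a checked contract
// assertion fails under a semantic that calls the handler.
void handle_contract_violation(const std::contracts::contract_violation& violation) {
    std::printf("contract violated: %s (%s:%u)\n",
                violation.comment(),
                violation.location().file_name(),
                static_cast<unsigned>(violation.location().line()));
    std::terminate();  // or log and continue, depending on the project's policy
}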

But what about C asserts?

I think the advantage of contracts is precisely that they are a language feature rather than a macro! This leaves the door open for C++ compilers to use static analysis to optimize code (for example, a compiler could treat a checked assertion as an “assume”). It is also a step towards “code as proof”, where we can obtain safety guarantees by chaining together pre- and post-conditions.

I think the proposal paper for contracts is very well-written and provides further context: P2900R14.

General Sentiment on the Advantage of C++ in the AI Age

At work, C is still the main language used, so a natural question to ask is: is the convenience provided by C++ made redundant by AI? While at the conference, I briefly chatted with a few speakers and participants about this topic. The general sentiment is: no, it is not. The two main points I gathered are as follows.

  1. AI still sucks at generating C (and C++) code. The code produced needs to be vetted by human experts, which often saves no time compared to writing it yourself.

My opinion is that when using AI for software, we ought to make coding a fill-in-the-blank task. Imagine a coding assignment where the program is broken down into functions, each function is declared, and pre- and post-conditions of each function are provided. Even then, we still need to thoroughly check the AI-generated code.

  2. C++ is a language whose central tenet is “zero-cost abstraction”. This means it aims to provide engineers with helpful abstractions (templates, objects, contracts, …) with zero runtime overhead.

The benefit C++ brings in this respect holds regardless of whether you are using human engineers or AI. In other words, AI and C++ help in two different dimensions, so one does not make the other redundant.
