Once upon a time there was a driver. You. Fifth place after three laps of clean racing, braking at exactly the right point, nursing your tires like they were hand-blown Murano glass. And then he appears. Someone who has spent two straights building up a run, who flies past you as if brakes were optional, who dives to the inside of the corner with the elegance of a supermarket trolley rolling downhill, and who takes out your front end, your patience, and three laps of impeccable work in one move.
The crash is not even the worst part. The worst part is what comes next.
The system deducts points from you too.
Four incident points. Him, four. You, four. As if you had both mutually agreed to smash into each other out of sheer boredom. Classic simracing justice worked in that particular way: blind, mathematical, and equal to the point of absurdity. A system designed in 2008 that starts from a philosophical premise that sounds reasonable in theory and is maddening in practice: responsibility, no matter who is at fault, gets shared among everyone involved.
The System That Invented Everything, and Also Broke Everything
The Safety Rating, as popularised by iRacing, was a brilliant idea in its time. Before it existed, online chaos was ungovernable. With it, for the first time there was a numerical metric that statistically separated clean drivers from kamikazes. The more corners you completed without an incident, the better your score. A cold, objective cumulative average that, over the long run, worked reasonably well at reflecting a driver’s track record.

The problem is that “over the long run” is a phrase that offers zero comfort when you have just been sent off track at Turn 1 by someone who braked two hundred meters too late. The system does not understand context. It does not know who had the racing line. It does not know who was in control and who was praying. It only knows that two cars collided and that, therefore, both must pay. It is the logic of the school headmaster who punishes the entire class because someone threw a piece of paper.
And so, for years, competitive simracing lived with a fascinating structural paradox: the cleaner you tried to race, the more damage a single incident caused you, even one that was entirely someone else’s fault. Because if your CPI (Corners Per Incident, meaning your cleanliness average) was very high, a single contact you had no part in would collapse your statistic disproportionately. The reckless driver with a mediocre SR barely felt the hit. The impeccable driver lost everything in one blow. The victim paid more than the aggressor. Welcome to the system.
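The asymmetry is easy to see with the actual arithmetic. A minimal sketch of the CPI logic described above (the corner counts and incident totals are invented for illustration, not iRacing's real bookkeeping):

```python
def cpi(corners: int, incidents: int) -> float:
    """Corners Per Incident: the cleanliness average described above."""
    return corners / incidents if incidents else float("inf")

# Two hypothetical drivers with identical corner counts, different histories.
corners = 12_000
clean_before = cpi(corners, 2)    # 6000.0 -- near-spotless record
messy_before = cpi(corners, 12)   # 1000.0 -- habitual offender

# Both are involved in the same single incident, fault irrelevant.
clean_after = cpi(corners, 3)     # 4000.0
messy_after = cpi(corners, 13)    # ~923.1

clean_drop = 1 - clean_after / clean_before   # ~33% of the stat gone
messy_drop = 1 - messy_after / messy_before   # ~8% of the stat gone
print(f"clean driver loses {clean_drop:.0%}, messy driver loses {messy_drop:.0%}")
```

One contact, and the impeccable driver's average falls four times harder than the offender's. That is the paradox in two lines of division.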
Great Idea, Logistical Nightmare
The obvious alternative was human arbitration. Real stewards, like in Formula 1, reviewing incidents and applying sanctions with genuine judgment. And in private leagues, in organised championships, this worked. It actually worked well. But it came at an enormous cost that nobody mentions in the official announcements.
To adequately monitor a forty-car race you need roughly six stewards working simultaneously. Six volunteers, focused, staring at screens, listening to radios, processing spatial information in real time, and then absorbing the complaints of everyone who disagrees with the decision afterwards. League administrators themselves came to describe the steward’s role as a “monkey job”: exhausting, thankless, impossible to scale, and with a guaranteed bonus of harassment on forums and Discord.
The problem was not willingness. It was mathematics. Millions of laps driven daily on centralised platforms, and an army of volunteers that could never be large enough. Human arbitration was the bottleneck preventing competitive simracing from truly growing.
Something was needed that would not get tired. That had no biases. That did not need rest. That did not care about being insulted in Discord. What was needed, in short, was a machine.
What the AI Sees That You Cannot
This is where things get interesting, and also slightly unsettling in that particular way that good technologies have of making you mildly uncomfortable before proving you right.
The new AI Steward does not see the incident the way you do. It has no replay camera. It does not analyse pixels or compare frames. What it does is read the telemetry, and read it at a level of detail no human steward could process in real time no matter how hard they tried.
When two cars collide, the algorithm rewinds the tape millisecond by millisecond and starts asking very specific questions. When did the attacking driver begin braking? What exact pressure did he apply to the pedal? Did he hit the ABS all at once, like someone panicking, or did he modulate the brake progressively, like someone who has the car under control? At what precise moment did the defending driver begin turning in, and was there genuine physical overlap between the two cars at that instant?
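One of those questions, whether there was genuine overlap at turn-in, reduces to a simple geometric test once you have per-car telemetry. A minimal one-dimensional sketch (the positions, car length, and single-axis simplification are all illustrative; a real steward reasons in two dimensions with heading and yaw):

```python
def overlap_at_turn_in(attacker_s: float, defender_s: float,
                       car_length: float = 4.5) -> float:
    """Longitudinal overlap between two cars as a fraction of car length.
    Positions are distances along the track centreline in metres, sampled
    at the instant the defender begins turning in."""
    gap = abs(attacker_s - defender_s)
    return max(0.0, 1.0 - gap / car_length)

# Hypothetical telemetry samples at the defender's turn-in point:
print(overlap_at_turn_in(1504.2, 1506.0))  # ~0.6: attacker had earned real overlap
print(overlap_at_turn_in(1501.0, 1506.0))  # 0.0: no overlap, the corner belongs to the defender
```

A human steward estimates that overlap by eye from a replay angle. The algorithm reads it off the data, at the exact millisecond that matters.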

A clean divebomb and a dirty divebomb leave completely different footprints in the data. The driver attempting an ambitious but legitimate overtake maintains control of the car, brakes progressively, and the vector of his inertia points toward the apex. The driver executing a textbook dirty divebomb stamps the brake pedal to one hundred percent at the last possible moment, locks up or fires the ABS, and when he tries to turn the wheel the car no longer responds because physics has its own opinions about what is possible. The machine distinguishes one from the other with a precision no human can match by watching a replay.
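That footprint can be summarised in a handful of numbers. A sketch of the kind of features the text describes, extracted from a brake-pressure trace (the sample traces, thresholds, and sampling rate are invented for illustration):

```python
def brake_footprint(brake_trace: list[float], dt: float = 0.01) -> dict:
    """Summarise a brake-pressure trace (samples in 0.0-1.0 every dt seconds)
    into three features: when braking started, how abruptly pressure was
    applied, and how hard the peak was."""
    onset = next(i for i, b in enumerate(brake_trace) if b > 0.05)
    max_slope = max(brake_trace[i + 1] - brake_trace[i]
                    for i in range(len(brake_trace) - 1)) / dt  # pressure per second
    return {"onset_s": onset * dt, "max_slope": max_slope, "peak": max(brake_trace)}

# Progressive, modulated braking vs a last-moment stamp to one hundred percent:
clean = [0.0, 0.2, 0.4, 0.6, 0.7, 0.7, 0.6]
dirty = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0]
print(brake_footprint(clean)["max_slope"])  # modulated ramp
print(brake_footprint(dirty)["max_slope"])  # panic stamp, five times steeper
```

The clean trace climbs in steps and eases off toward the apex; the dirty one goes from nothing to everything in a single sample. No replay camera needed to tell them apart.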
The algorithms that make this possible have names that sound like a university module you failed once: Support Vector Machines, XGBoost, Deep Neural Networks. But what they do, in essence, is the same thing a very, very, very fast data engineer would do without needing coffee: analyse thousands of simultaneous variables, compare them against tens of thousands of previously labelled historical cases, and deliver a verdict. No fatigue. No pressure. No fear of the comments section.
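The real models learn their decision boundaries from those labelled cases; a toy stand-in makes the shape of the verdict step visible. Every threshold and weight below is invented for illustration, and the feature names are hypothetical:

```python
def verdict(features: dict) -> str:
    """Toy stand-in for the trained classifiers named above (SVM, XGBoost,
    deep nets). Real systems learn these cut-offs from tens of thousands of
    labelled incidents; the ones here are hand-picked for illustration."""
    score = 0.0
    if features["max_slope"] > 60:    # pedal stamped, not modulated
        score += 0.4
    if features["abs_locked"]:        # ABS fired all at once, car out of control
        score += 0.3
    if features["overlap"] < 0.25:    # no earned overlap at turn-in
        score += 0.3
    return "at fault" if score >= 0.5 else "racing incident"

print(verdict({"max_slope": 100, "abs_locked": True, "overlap": 0.0}))
# -> at fault
print(verdict({"max_slope": 20, "abs_locked": False, "overlap": 0.6}))
# -> racing incident
```

The trained versions do the same thing across thousands of dimensions at once, which is precisely what no human watching a replay can do.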
Low Fuel Motorsport & Le Mans Ultimate
The most public implementation of all this came from Low Fuel Motorsport, the largest independent competitive simracing platform in the world, at the 2024 ADAC SimRacing Expo. LFM unveiled its AI Racing Steward in partnership with the technology company Rennwelten, and they did so with a refreshing honesty that is rare in a sector that usually announces things as if they were already perfect.
Because the first figure they shared was this: in its earliest evaluations, before the large-scale model training, the AI was correct in 24% of cases when compared against previous human verdicts. Twenty-four percent. If you flip a coin you get fifty. It was not exactly a glorious start.
But that is precisely the point. The engineers did not hide that number. They published it. And they published it because it is part of the process: the AI needed to learn, and learning required failing first with real data. LFM fed the model with its colossal database from nearly two hundred thousand active members, let the system silently observe for months without applying real sanctions, and progressively refined its decision trees until the precision started to become operationally acceptable.
The transition was gradual and deliberate. First the AI only watched. Then it began suggesting verdicts to the human stewards, who could approve or reject them. The ultimate goal, planned for 2025 and beyond, is for the human steward to step back from being the front-line judge and become the court of appeal, intervening only when someone contests an automated decision or when the case is so unusual that the algorithm does not know what to do with it. Which also happens. Edge cases will always exist.
While LFM was building its AI as an external layer on top of the simulator, Studio 397 did something different and, in a certain sense, more elegant: they embedded the system directly into the core of Le Mans Ultimate, the official simulator of the FIA World Endurance Championship.
Their system, named LiveSteward, is not a post-race analysis tool. It is a steward that lives inside the game’s physics engine, processes data in real time, and has on its roadmap the ability to alter a race’s classification while it is still running, without waiting for the chequered flag.

In its first public phase, LiveSteward operated in ghost mode: analysing everything, sanctioning nothing, sending terabytes of data to central servers to validate its predictions. The caution is not cowardice; it is engineering. A single false positive, an incorrect sanction applied to an innocent driver in the middle of an important race, and the community’s trust would evaporate instantly. With systems of justice, whether human or algorithmic, legitimacy is everything.
What makes the Le Mans Ultimate ecosystem particularly interesting is that LiveSteward does not work alone. It operates alongside a system of dynamic badges that reflect each driver’s contact ratio across their last ten races, adjusted for the physical severity of each impact. It is not an abstract number accumulated over years of history. It is a recent snapshot of who you are as a driver right now. And that badge is visible to the entire grid before the start. Social stigma, digitised and automated.
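The badge metric described above is, at heart, a windowed weighted average. A minimal sketch, assuming a per-race contact count and a 0-to-1 severity score derived from impact energy (the window size, weighting scheme, and field names are illustrative, not Studio 397's actual formula):

```python
def contact_badge(races: list[dict]) -> float:
    """Severity-weighted contact ratio over the last ten races, in the
    spirit of the dynamic badge described above. Severity is assumed to be
    a 0-1 proxy for impact energy; weights and window are illustrative."""
    window = races[-10:]
    weighted = sum(r["contacts"] * r["severity"] for r in window)
    return weighted / len(window)

recent = [{"contacts": 0, "severity": 0.0}] * 8 + [
    {"contacts": 2, "severity": 0.9},   # two heavy hits
    {"contacts": 1, "severity": 0.2},   # one light brush
]
print(f"{contact_badge(recent):.2f}")
```

Because only the last ten races count, the number recovers as fast as it degrades: race cleanly and the heavy hits roll out of the window. That is what makes it a snapshot rather than a life sentence.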
If on top of that you keep repeating harmful driving patterns, the system does not wait. It activates progressive automated bans, and when you return, you do so wearing a special badge that essentially tells everyone you are on probation. It is the first time in simracing history that the offender has to sit down and genuinely reflect on their life choices in an operationally meaningful way.
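Progressive bans are typically implemented as geometric escalation: each repeat offence multiplies the previous penalty. A sketch of that pattern, with a base duration and growth factor invented for illustration (the source does not specify Le Mans Ultimate's actual schedule):

```python
def ban_length(prior_bans: int, base_days: int = 1, factor: int = 2) -> int:
    """Progressive ban escalation: each repeat doubles the previous length.
    Base duration and growth factor are illustrative assumptions."""
    return base_days * factor ** prior_bans

for offence in range(4):
    print(f"offence {offence + 1}: {ban_length(offence)} day(s), probation badge on return")
```

The point of the geometric curve is behavioural: a first slip costs almost nothing, but a pattern becomes expensive fast enough that reflection stops being optional.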
What Changes for You, the Clean Driver
If until now you have been racing cleanly and protecting your Safety Rating by avoiding fights you knew you could win, you may be about to recover something the old system took from you without asking: the confidence to defend your position.
Because that is what algorithmic arbitration gives back to the honest driver. Not just retroactive protection. Anticipatory confidence. If you keep the car under control, if your inputs to the vehicle demonstrate that you are within the limits of physics, and someone hits you anyway, the machine will see it. And it will see it with a precision that no human steward reviewing a replay on the side while drinking their second coffee could ever match.
Survival driving is dying. That style of racing in which you surrendered positions at the first sign of aggression from anyone nearby because the cost of a minor contact hurt you more than it hurt the aggressor. The system that conditioned you to be conservative not out of conviction but out of mathematical fear. The AI is redrawing the boundaries of what is reasonable to attempt on track, and it is doing so without morality, without opinions, and without mercy for whoever crosses the wrong line.
There is something poetic about that. After nearly twenty years of simracing dominated by the ballistic impunity of the braking zones, the steward that is finally going to restore order has no eyes, no name, and does not care what you post on the forums. It only reads millivolts on the brake pedal potentiometer. And that, it turns out, is enough.









