I’ve held the stance for several years now that if iRacing fail to change how they approach sim racing, the developer could face much bigger problems than “Austin Ogonoski is mad at them on his blog again.”
This came to fruition Tuesday night, as the eNASCAR Coca-Cola Series race at Charlotte proved to be such a disaster that the incident is now being covered by real-world motorsports outlets such as RACER Magazine.
For those who might not have the time to read the excellent piece by Ryan Kish, the rundown is fairly simple.
In a hotfix deployed a few weeks ago, iRacing accidentally included beta aerodynamic values for only the Chevrolet Camaro. This made the car highly unstable in traffic, whereas the Toyota Camry and Ford Mustang were unaffected. All cars are intended to be equal for the sake of fair play in a virtual environment.
iRacing were made aware of the issue a week in advance as teams began to test for the Charlotte round of the eSports tour, but botched their response.
Typically, when there are imminent technical issues with their servers or a specific car, iRacing will flash a warning across the top of their members’ site with specific instructions on how to avoid said technical issues.
No such action was taken. The average iRacing member had no idea this was even a problem.
Furthermore, the developer opted not to fix the issue in time for their flagship eSports Series race, even as a group of drivers spanning several teams attempted to explain the severity of the situation to iRacing staff.
As some teams are locked into using a specific brand of car due to personal sponsorship contracts with either Chevrolet, Ford, or Toyota, this led to a situation akin to the most recent attempt by Lotus to qualify for the Indianapolis 500 in 2012. Multiple teams were entering an event with a car they knew was fundamentally broken and undrivable, in a competition that is intended to provide near-total parity.
Chevrolet was completely locked out of the top 10, with Nick Ottinger holding on for dear life in 12th place as the highest finishing Camaro of the bunch. Many other Camaro drivers were destroyed in incidents throughout the race, including a spin on lap 3 of 200 caused by, as expected, a Chevrolet driver losing control all by himself.
iRacing’s response to the incident was to add a drop week to the 2021 Coca-Cola Series season, and to issue a private statement to members of the Coca-Cola Series. The car would be reverted to its previous aero values in an upcoming hotfix.
However, everyday iRacing members were still not told about this issue, and instead expected to infer from intentionally vague hotfix notes that there may have been a problem with the Chevrolet Camaro. iRacing made finding information about this incident, which may have affected the competitive playing field of several other third party eSports championships not directly affiliated with iRacing, intentionally convoluted.
Not to mention, if you’re just a regular paying customer who had finally attained your Class A license and was in the market to buy a new piece of DLC, you had no way of knowing that, for multiple weeks, the Chevrolet Camaro was not in a state to be sold to the public.
In a statement provided to RACER Magazine, iRacing claimed the following:
In the future, if something appears incorrect with the software or service, we need to be brought up to speed with it as soon as humanly possible. If errors are caught in time, we can do more to ensure that nothing escapes us or at least allows time to rectify (especially prior to these high-profile events).
I was a quality assurance guy in the sim racing industry for four years. I cannot believe they supplied this as an official response, and it’s making me question whether iRacing is properly testing their game, or what their testing process even is, prior to pushing updates out to the general public.
At no point was I ever instructed to rely on paying customers to discover issues. While my employer did have avenues to report edge case scenarios about our games via Discord, this was not our primary avenue for discovering issues. It was expected that 90% to 95% of bugs would be found internally.
The majority of game development studios, whatever genre you’re in, will hire both a small internal squad of QA testers, as well as larger external firms for mass scale tests. They do not outsource this operation to random community members.
Bug and exploit tracking, when done properly, is a lot more systematic than I think people realize, which is why I’ve gotten my panties into a twist here.
Developers make use of bug tracking software such as Jira, which is essentially a giant online database to track tasks and assign bugs to the correct people. There are several drop-down menus to categorize a bug by type (physics, sound, AI, whatever), an area to explain the reproduction steps, the ability to attach screenshots or small video clips, and even a comments area to converse with the staff member you’ve assigned it to. They might have questions, they might ask for another video clip, that sort of thing.
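To make that concrete, here is a rough sketch of what a ticket boils down to as structured data. This is not Jira’s actual schema; every field name, value, and the example ticket itself are hypothetical, invented purely to illustrate the shape of the process.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the fields a bug ticket carries.
# Real trackers like Jira have their own schemas, but the idea is the same.
@dataclass
class BugTicket:
    title: str
    category: str                  # e.g. "physics", "sound", "AI"
    repro_steps: list[str]         # numbered steps to reproduce the bug
    attachments: list[str] = field(default_factory=list)  # clip/screenshot paths
    assignee: str = "unassigned"
    comments: list[str] = field(default_factory=list)

# Invented example ticket, loosely modeled on the Camaro incident.
ticket = BugTicket(
    title="Camaro unstable in traffic at Charlotte",
    category="physics",
    repro_steps=[
        "Load the Chevrolet Camaro at Charlotte",
        "Run within a car length of another car at speed",
        "Observe sudden loss of rear grip",
    ],
    attachments=["camaro_charlotte_clip.mp4"],
    assignee="physics_dev",
)

# The back-and-forth lives in the comments thread.
ticket.comments.append("Can you attach a second clip from the in-car camera?")
```

The point isn’t the code, it’s that every bug becomes a categorized, reproducible, assignable record rather than a vague forum complaint.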
Testing itself is extremely formulaic. For a racing game, not only might there be a spreadsheet featuring core functionality checks like “does the car load into the game world without crashing”, “do the decals or livery patterns appear on a car without any distortion” or “does the driver model appear in the cockpit without clipping through other objects”, but the genre also centers on subjective traits requiring an additional round of testing.
A second round of checks may consist of things like “is drafting effective like it should be”, “is the handbrake user-friendly”, “what happens when you induce a moderate slip angle”, and “are the AI in this car capable of speeds higher than what the player can attain.” Each check will also have steps associated with it, so you can complete them as efficiently as possible.
I know this, because I was the bastard writing these checks, then carrying them out.
You eventually get into a pretty good rhythm and while these checks seem rather intensive and time consuming, it quickly becomes routine and you simply know what to spot. You get a changelog that says the Dallara IR-18 IndyCar physics have been updated, you take it for a few laps at Indy behind an AI car to see if it sucks up and slingshots properly, then you bust out the wheel and do a few laps at Sonoma.
If it does weird shit in the draft or tries to kill you in a low speed corner, you boot up Jira, assign the bug to the physics guy, describe what the problem is, and attach a Shadowplay clip. This whole process is maybe a ten minute thing.
You will also have a third spreadsheet for lap times at a variety of circuits.
I’ll give you an example; for Project CARS 3, I had benchmark laps for every single car in the game at Long Beach, Sugo, and the shorter Donington layout. When possible, you cross-reference with real-world lap times using something like IMSA’s online results portal to see if you’re either too fast or too slow. I then had a second page that indicated, approximately, how much time you’d shave off per lap with each level of each performance upgrade. It goes that deep. Nothing is left to chance or the community.
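The benchmark comparison itself is trivial to express. Here is a sketch, with a tolerance and the lap times pulled out of thin air; the actual acceptable deviation would be whatever the studio decides is reasonable for that car and track.

```python
def laptime_verdict(sim_seconds: float, real_seconds: float,
                    tolerance_pct: float = 2.0) -> str:
    """Flag a sim lap that deviates from the real-world benchmark
    by more than tolerance_pct percent in either direction."""
    delta_pct = (sim_seconds - real_seconds) / real_seconds * 100
    if delta_pct < -tolerance_pct:
        return "too fast"
    if delta_pct > tolerance_pct:
        return "too slow"
    return "ok"

# Invented numbers: real-world benchmark of 75.0s.
print(laptime_verdict(72.0, 75.0))  # 4% quick → too fast
print(laptime_verdict(75.5, 75.0))  # within tolerance → ok
```

Either of the out-of-tolerance verdicts is a question for the physics guy, logged as a ticket like any other bug.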
Setup exploits are admittedly trickier to spot and slightly more time consuming due to having to run several short stints with only minor setup changes, but veteran sim racers will have a library of “tricks” they will immediately try. Minimum wing on street courses in an open-wheel car shouldn’t work. If it does, you’ve got a problem. Minimum tire pressures shouldn’t always warrant the quickest lap time. If they do, time to ask the physics guy why. Minimum ride height shouldn’t always work; again, if it does, log it. If your sim offers multiple differential options, make sure you can actually use all of them on a given car without breaking the physics engine and causing it to explode.
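That library of tricks amounts to sweeping a setup value across its range and flagging any case where the extreme minimum is also the fastest. A sketch, with every wing angle and lap time invented for illustration:

```python
def exploit_suspect(stints: dict[float, float]) -> bool:
    """stints maps a setup value (e.g. wing angle) to the best lap time
    from a short stint at that value. Flag a suspected exploit when the
    minimum setting also produced the fastest lap."""
    min_setting = min(stints)
    fastest_setting = min(stints, key=stints.get)
    return min_setting == fastest_setting

# Wing angle -> best lap in seconds. All numbers invented.
healthy = {0: 92.1, 5: 90.4, 10: 89.8, 15: 90.2}  # mid wing fastest: fine
broken  = {0: 88.9, 5: 90.4, 10: 89.8, 15: 90.2}  # zero wing fastest: log it
print(exploit_suspect(healthy))  # → False
print(exploit_suspect(broken))   # → True
```

A real sweep would cover pressures, ride height, differential settings, and so on, but the shape of the check is the same: extremes shouldn’t dominate, and when they do, someone has to ask why.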
Yes it’s a lot of driving and tinkering, but that’s the can of worms you open when making a racing sim.
This process sounds insanely time consuming, but remember that cars aren’t all immediately added to a new sim, or updated in an old sim, all at once. For example, with Project CARS 2, SMS released DLC in packs of I think eight cars, and the pack would release every month – maybe a little longer than that. You could usually dedicate two or three days to a single car. iRacing by comparison updates quarterly, with only a handful of cars receiving updates at a time, theoretically resulting in more time to test fewer cars.
For iRacing to ship multiple cars in states that frequently fail one or more of the checks I’ve outlined, whether it be drafting/aero oddities such as the Camaro issue, traction rolling, lap times being too fast or slow, certain setup adjustments being overpowered or broken, or stuff like the Dallara IR-18 flipping uncontrollably after crashes, makes me concerned they don’t have anyone thoroughly testing their sim.
Again, in my own personal opinion, I don’t think the leading developer in sim racing is bothering to properly test their product in ways that are standard across the rest of the video game industry. There are simply too many bugs that should have been caught during standard checks, but for whatever reason haven’t been.
If I had to speculate, I would say that they potentially recruit community members to test bits and pieces here and there, but there is no rigid testing schedule or process as outlined above, and most of the feedback consists of saying “it’s great”, followed by asking how to buy them pizza for a job well done. I also believe that, not wanting to taint the eSports playing field and give competitors an unfair advantage over one another, iRacing does not seek the help of the top 1% of drivers on the service, the exact people they need to spot critical issues, but rather that of rabid fanboys who will give them biased feedback.
This is not testing. This is not… anything, really.
This opinion of mine is reinforced by iRacing’s comments in the RACER Magazine article. A simple statement saying it slipped by their testers and they are extremely embarrassed to not have caught it would have sufficed, because in most normal development studios, it’s likely that’s exactly what would have happened. Spanky forgot to take all three brands out for a dirty air test at multiple tracks after seeing they all received updates to their aero model.
They did not mention their testers at all.
They instead imply that “people need to tell us these things”, as if they don’t actually have a department dedicated to this like any video game company should.
It’s also an opinion shared by Kligerman Sport’s David Schildhouse, who writes:
“It’s really hard to take [the statement] as anything other than them saying: ‘this is your problem. You’re supposed to tell us that our product doesn’t work correctly so that we can try and fix it,’” he says. “The problem is a little more complex than that, obviously. We rely on them as a business to provide us with the products that they say is what it is.
“We’re not their quality assurance testers, we’re not the beta testers, we’re their consumers,” Schildhouse says. “We expect a fully polished product that works as advertised. So I was a bit surprised and disappointed to see their perspective and approach to this.
I don’t know how else to look at this situation, other than to believe the most important sim developer in the industry might not be thoroughly testing their own product, instead gaslighting their customers into believing it’s totally normal for paying customers to double as a QA team.
I did this shit for a living.