TL;DR: Some studio apparently spent $660 million to make a game, then several hundred million more to market it, putting the total north of a billion dollars. That shit ain't sustainable, and now everything is on fire.
I still don't understand how marketing budgets on games get that big anymore.
Sure, it made sense when every AAA game needed TV commercials and a billion physical displays in every game store in the world, but that's basically over. All that cardboard and those window stickers added up bit by bit.
But now, marketing is literally making a trailer, putting it on your own YouTube channel, and tweeting about it. The only major cost is however many influencers and game journos you pay to directly shill for you, which is still a fraction of the investment compared to the physical costs of old.
Marketing now is paying a streamer $100k to play your game for a couple of hours, hoping that some people will buy it and continue the word-of-mouth marketing.
Problem is that this only works when your game is good or has a very specific niche.
Streamers are absolutely RAKING it in, and the advertising agencies have no way of measuring the effectiveness.
Every streamer and their mother had a paid sponsorship for XDefiant when it came out, and now the game is shutting down. What are 200k eyes on your game worth when it looks like shit?
We are in the era of the informed consumer gamer, where most people will seek out actual gameplay footage or watch reviews from their trusted YouTubers. You can't just make flashy trailers and post ads everywhere, because people have become immune to that shit, or simply use an adblocker. There have been so many games in the past 5 years I had NO IDEA about because I just never saw the marketing for them.
Even then, paying 10 huge-name streamers $100k each is still only a million, while these companies are reaching hundreds of millions for their marketing. And I know most streamers do not cost anywhere near $100k. The Markiplier and Jacksepticeye types might, but the rest get a small four-figure sum and a free copy of the game.
That's my question: how are they reaching the sheer size of some of these budgets? Because they clearly aren't paying tens of millions to each streamer.
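A back-of-envelope sketch of that arithmetic, in Python. Every figure here is an invented assumption (the tier sizes and rates are illustrative, not real sponsorship data), but even a deliberately generous influencer campaign struggles to account for a nine-figure marketing budget:

```python
# Back-of-envelope version of the point above. Every figure is an invented
# assumption for illustration, not real sponsorship data.
top_tier = 10 * 100_000      # ten big-name streamers at ~$100k each
mid_tier = 500 * 5_000       # five hundred mid-size channels at ~$5k each
small_tier = 5_000 * 1_000   # thousands of small channels at low four figures

influencer_total = top_tier + mid_tier + small_tier
print(f"influencer spend: ${influencer_total:,}")  # $8,500,000

marketing_budget = 300_000_000  # "several hundred million" from the TL;DR
print(f"share of budget: {influencer_total / marketing_budget:.1%}")  # ~2.8%
```

Even padding every tier, influencer spend lands in the single-digit percent range of a budget that size, which is the poster's point.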
...even then, they play as many if not more indie games than AAA (Markiplier in particular goes out of his way to play indie games to support smaller devs), so people have a cornucopia of options to compare it to.
...and while I'm not crazy about certain trends in Indie Gaming (low-poly horror for example), I know there's always something out there worth playing that sparks my interest...
You've also got to account for the fact that old games still get market share. One major advantage monolithic studios had back in the day (particularly on consoles) was that eventually your Sega Genesis would die, and you'd have to buy their new game on a new console, or else watch for the nostalgia-boner-inducing assramming of a collection to come out on the current-gen console.
Nowadays, your old favorites are only a click away in an emulation scene that's been working overtime to keep those games alive for the better part of two decades.
...Besides, I'll usually play one game most of the time anyway, then switch to another title. (Working on Oxygen Not Included right now; having a blast.)
Right, I just picked the only names I knew that might be big enough to cost 100k to shill.
I figured, lol.
Just contributing to your argument. The big youtubers/content creators can't really afford to be pinned down to only AAA titles, though. It's not enough content to satiate the algorithm, and besides, one of the reasons they find an audience is digging up obscure gems with little to no marketing budget (Slender: The Eight Pages, FNAF in the early days, and a more modern example, Sprunki).
So even if a publisher convinces a big-time CC to play their game, they're still inevitably competing with indie games that are often more affordable, if a bit less polished.
Got to be money laundering.
What game? That's insanity.
Good summary, but it's not just one studio on one game. It's basically all of the AAA developers doing that.
The industry over-compensating for Covid and failing to adapt afterwards is a pattern I've witnessed personally multiple times, so that's definitely part of it.
I think the bloat of the industry, and hedge-fund investment desperately seeking to make a bajillion dollars off no real investment, is a terrible supply-side monetization strategy that should lead to the downfall of most of the AAA industry, even before you count the DIE incompetence, rampant corruption, and the worst quality control anywhere in the consumer marketplace.
I think the real threat to AAA gaming is clearly from 3 sources:
AA Developers
Indie Developers
Backlog
For AA development, it's very clear that games costing 50% of a AAA budget can make at least as much money, and still be significantly better.
For indie developers, they'll never make billions of dollars, but if you acquire enough of them under a publisher, you can get a good, continuous revenue stream, which is really what investors are looking for. An indie franchise could be minimally funded and still knock out several solid games that sell reliably well and have a strong community.
For the backlog: before ShortFatOtaku went off the deep end, he made a very good point that if AAA gaming ceased to exist today, most gamers wouldn't feel it, because the backlog of games is so immeasurably massive that we'd probably see older games surge back in popularity simply because a whole slew of gamers finally got around to playing them. If your new games are so bad that they work and look as bad as games from 5 years ago, how are you going to compete with those exact games now that they're on sale again, or already sitting in my library because I always wanted to play them but never got the chance? I don't think there's an answer to that.
Hopefully we can start to see some of these shithole developers, publishers, and corporations go under. "Nature is healing", just as soon as we start seeing Game Journos and DIE directors get made homeless and lose their careers.
> If your new games are so bad that they work and look as bad as games from 5 years ago
I don't even think this is a fair metric. It's hard to beat the point graphics have already reached in the last ~5-7 years, short of going full body scans on everything. In fact, trying to constantly one-up your previous iteration is what has driven the cost of AAA gaming so high for these companies, as they keep paying millions of dollars more so that the handful of people with MEGA ULTRA displays can see 5 additional pores on a character's face.
The problem continues to be that AAA companies are obsessed with making sure to kill off their previous games to keep the newest one the only one being played. Rather than let people choose which is their favorite in the series to play based on its specific gimmick or iteration, they must always be chasing a new carrot to move everyone over with. Which means a shit ton of money spent trying to one up themselves over and over, even if they creatively don't have anything to do it with.
There are a lot more factors at play, but I think everyone, consumer and company alike, spends so long thinking about "how is this one better than what came before?" and not enough time on "what makes this one worth playing *as well*?" Being better every time is a losing game, because eventually you run out of bugs to fix and optimizations to make, whereas being also worth playing is always possible.
The “diminishing returns” narrative is 100% bullshit. Modern game devs are simply incompetent, lazy, cheap, and dishonest. They use temporal rendering to hide shit design and skip out on optimization. They model everything with way too many polygons and then just drop it all directly into games with no consideration for rendering budgets. They slap on DLSS, producing horrific motion blur and artifacts and input latency, and then they blame performance issues on you not having a $2000 GPU. They flock to UE5 for convenience and ease of use and saving money, using crap systems like Lumen and Nanite, and then they refuse to tailor the engine to their needs. Worst of all, they say you’re crazy for noticing any of this. Meanwhile, Source engine games from a decade ago have superior image clarity versus anything made today. But you wouldn’t know it from game trailers, which are always played with a controller to hide motion blurring during faster movement.
> The “diminishing returns” narrative is 100% bullshit
It is 100% valid because I 100% do not have the eyesight level necessary to even see the difference between 1080p and 4k. That's if I even owned a system capable of running and displaying such a high end picture. I don't imagine I'm alone in not spending thousands every other year to maintain a top of the line rig/monitor set up.
If a majority of people run your game on Medium settings, then the money you spent milking out the Ultra end is usually wasted. Doubly so if the system requirements end up so high they scare off potential customers.
It doesn't matter if it happens because they are lazy and bad at their job; what matters is that it's wasted money/time on something that will not show anywhere near the return in sales.
I'm still using my 1080p 144hz TN panel from 2013; the extra frames were worth it over my previous monitor, a basic 21" 60hz screen.
The next worthwhile upgrade would have to be OLED, and I can't justify the price right now.
And I'd wager you are not missing much in the slightest using that compared to a brand new 8k Giga System out in 2024. Whereas the jump from 2003 to 2013 would have been a massive difference.
There are absolute diminishing returns of graphical power whether from price, human limits, or technological breakthroughs.
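The acuity claim can actually be put to numbers. A minimal sketch, assuming a 55" 16:9 TV (~1.22 m wide) viewed from a 2.5 m couch and the usual rule of thumb that 20/20 vision resolves about 60 pixels per degree (the sizes and distances are my assumptions, not the poster's):

```python
import math

def pixels_per_degree(h_pixels: int, screen_width_m: float, distance_m: float) -> float:
    """Horizontal pixels per degree of visual angle at a given viewing distance."""
    h_fov_deg = 2 * math.degrees(math.atan((screen_width_m / 2) / distance_m))
    return h_pixels / h_fov_deg

WIDTH_M, DISTANCE_M = 1.22, 2.5  # assumed 55" 16:9 TV at couch distance
for name, h_res in [("1080p", 1920), ("4K", 3840)]:
    ppd = pixels_per_degree(h_res, WIDTH_M, DISTANCE_M)
    print(f"{name}: {ppd:.0f} PPD")  # 1080p ~70 PPD, 4K ~140 PPD

# 20/20 vision resolves roughly 60 PPD, so at this distance 1080p already
# sits at the acuity limit and most of the extra 4K pixels are invisible.
```

Move the viewer to a desk half a metre from a monitor and the numbers shift, but the diminishing-returns shape of the curve stays the same.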
> They flock to UE5 for convenience and ease of use and saving money, using crap systems like Lumen and Nanite, and then they refuse to tailor the engine to their needs.
Which is funny, since Epic says studios should tailor it to their needs, and that it's made for their own Fortnite. Even there we can see Lumen and Nanite not really delivering what you want on a stylized game; they work best for the movie/architecture stuff. But we all know that once a company goes to UE5, they fire their engine devs, haha.
> Worst of all, they say you’re crazy for noticing any of this. Meanwhile, Source engine games from a decade ago have superior image clarity versus anything made today.
But the Source engine games and mods require you to do the hard work yourself in order to get that optimization, while apparently we need 9 years of dev time just to teach idiots to use UE5, haha.
Still it would not surprise me if part of the marketing budget went into shilling and gaslighting regarding this.
I wouldn't say the diminishing returns argument is bullshit. The work required for marginally more fidelity, at an asset level, has been increasing exponentially with each generation. As scene density increases, the asset count also increases. Couple the two and either budgets balloon or significant compromises must be made. The compromises often result in more generic-looking assets (owing to extensive outsourcing with minimal art direction, the use of photogrammetry at some stage of the pipeline, or parametric tools drawing on a common data set). The pursuit of realism is a matter of diminishing returns, and it reduces creative freedom.
Same with the computational cost of rendering techniques. Compare early stencil shadows with PCSS. Now compare PCSS to RT shadows. The former is a significant leap, the latter not so much. The same can be said for AO, reflections, and GI. The problem with a lot of approximations is that they usually proved to be unstable or heavily constrained. CryEngine's SVOGI, for example, can impose substantial constraints on how you build environments, especially interiors, and still suffers from ghosting and artifacts.
Rasterization based approximations had decades to mature and still weren't great. Developer laziness is inexcusable, but the biggest sin was selling real-time raytracing, which is desirable for any number of reasons, as viable multiple hardware generations too early and doing so before the associated tech had matured. And it was done to sell GPUs, not games.
> the use of photogrammetry at some stage of the pipeline or parametric tools drawing on a common data set
Unless your game is a stylised Nintendo game, it's always cheaper to use photogrammetry, which will give you 1:1 realistic topology, and with today's pipelines you can use your smartphone for asset capture. It's also how Capcom managed to cut millions out of the budget of the newer Resident Evil games -- they talked about it in an interview a while back, using LiDAR for a bunch of the baked-in scene decoration to avoid having to model the assets manually.
The real cost in "realistic" art assets is in retopology and baking a usable mesh into your runtime, which, funnily enough, requires a lot of time downscaling the mesh to work well within the engine pipeline (ergo, creating various LODs). But it's still faster/cheaper than manually making the asset, if you're actually a good modeler.
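For a concrete picture of that LOD step, here's a minimal sketch using Open3D's quadric decimation as a stand-in for the commercial middleware (e.g. Simplygon) mentioned later in the thread; the input file name and the per-level ratios are placeholder assumptions:

```python
# Minimal sketch of automated LOD generation from a dense photogrammetry
# scan. "scan.obj" and the halving ratios are placeholder assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan.obj")  # dense photogrammetry output
base = len(mesh.triangles)

# Halve the triangle budget per LOD level, the usual parametric approach.
for lod, ratio in enumerate([1.0, 0.5, 0.25, 0.125]):
    target = max(int(base * ratio), 1)
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    simplified.compute_vertex_normals()  # re-derive normals after edge collapses
    o3d.io.write_triangle_mesh(f"scan_lod{lod}.obj", simplified)
    print(f"LOD{lod}: {len(simplified.triangles)} tris")
```

The point being: the decimation itself is a one-liner; the expensive part is the cleanup and retopology either side of it.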
But as Current Horror pointed out, a lot of studios are now bypassing the optimisation phase by using Nanite to do the work for them, even though it results in horrible performance. They then attempt to paper over that with frame generation, motion blur, and horrible temporal anti-aliasing to hide the lack of optimisation, resulting in smeary ghosting that is really apparent in a ton of games.
The thing is, a lot of indie studios make a ton of photorealistic walking sims using photogrammetry or LiDAR on next-to-nothing budgets. It's not the "realism" that is costly; it's actually making a functional, optimised game out of those realistic assets that is costly, and most studios do not put in the time to do so.
Yes. It's cheaper to build film sets and fly scouting teams to multiple exotic locations for months on end to capture approximations of your art direction than it is to hand-author assets. But realism isn't costly... Look past the criticisms of Dunning-Kruger-infused YouTube videos, man, and you'll see that asset creation is incomparable to the basic textures on primitive geometry of a couple of generations ago. Production has changed in a way that not only stifles creativity, but lends itself to poorly polished and poorly performing games. All in the pursuit of "realism", where many prefer the aesthetics of last generation. That's the definition of diminishing returns.
> which will give you 1:1 realistic topology
Photogrammetry doesn't produce "realistic topology" - there isn't such a thing. Topology refers to the mathematical structure of geometry, primarily in regards to how it deforms and renders. Outside of the handful of deforming meshes, the only concerns are 1) Density, 2) Correctness (manifold geometry, excessive concavity, micro-triangles introducing overdraw, incompatibility with the rest of your pipeline, etc.) By necessity, retopology sees heavy automation. LOD generation, with very few exceptions, is done parametrically, with engines providing the functionality in engine. Seen that Simplygon logo at startup? That's one middleware that provides such functionality. The entire retopology/unwrap/baking process can, and often is, entirely automated by the likes of Houdini TOPs for certain classes of photogrammetry assets. Again, by necessity given the sheer number of assets modern games require.
The time spent on photogrammetry is primarily clean up - whether it be removing objects from environments, filling areas not available for imaging or fixing the myriad artifacts that the process produces - it's far from perfect. That is IF you can find a 1:1 analogue for what you want in your game, which brings me to this notion:
> Unless your game is a stylised Nintendo game
Because even half the assets released in a third of games in a year are viable candidates for photogrammetry? Sure. Overlooking that games are already criticized for looking generic, a by-product of excessive photogrammetry, art direction is still a thing, and is more important than fidelity in regards to appeal. One key concern is consistency - that the style and fidelity of assets is consistent throughout a scene. Introduce photogrammetry, and every hand authored asset now has to target that level of realism lest it stick out like a sore thumb. Even when utilising scanned materials as bases, maintaining that level of realism is time consuming and limiting.
Now consider how environments are actually constructed. The majority of game worlds rely on modularity - utilizing instanceable geometry along with trim textures/geometry, topped off with a small number of versatile tiling textures. This isn't just to speed up environment creation, it's to reduce required GPU bandwidth. Real world objects seldom conform to this approach, outside of surface scans used as tileables, and photogrammetry results in unique texture data per asset. Wonder why games have ballooned in size and constantly suffer from streaming hitches? Look no further.
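To see why unique texture data per asset blows up so fast, here's a rough sketch of the arithmetic; the compression format, map count, and asset counts are all invented for illustration:

```python
# BC7 block compression stores 1 byte per texel, so a 4096x4096 map is 16 MB.
MB_PER_4K_MAP = 4096 * 4096 / 1024**2
MAPS_PER_MATERIAL = 3  # assume albedo + normal + roughness

def texture_mb(unique_materials: int) -> float:
    """Total texture memory for a given number of unique material sets."""
    return unique_materials * MAPS_PER_MATERIAL * MB_PER_4K_MAP

# Photogrammetry: every scanned asset ships its own unique texture set.
print(f"1,000 scanned assets: {texture_mb(1000) / 1024:.1f} GB")  # ~46.9 GB

# Modular workflow: thousands of instances share a small tiling library.
print(f"40 shared tileables:  {texture_mb(40) / 1024:.1f} GB")    # ~1.9 GB
```

Same scene complexity, an order of magnitude and a half more texture data to store and stream. Hence the install sizes and the hitching.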
> a lot of studios are now bypassing the optimisation phase by using Nanite to do the work for them
Nanite is an alternative to traditional triangle rasterization that allows more complex geometry than traditional LOD systems can practically provide. Better handling of unoptimized scenes is a side-effect, not its purpose or a recommendation. Outside of Unreal, mesh shaders are being used for the same reason, with similar results - additional overdraw. See Alan Wake 2. It's a new approach, with the associated growing pains, but "realism" demanded more geometry, so here we are.
Long term, it'll be resolved. That's not to say it's a substitute for optimisation, or was billed as such. It's just a convenient scapegoat. Ironically, Unreal does have some major architectural issues. The entire streaming system is built with Fortnite in mind - with the idea of a persistent server-side world. The actor system/tick handling is poor for complex non-linear worlds, resulting in game thread congestion, and the actual streaming is far too coarse for large, dense worlds. The collaboration with CDPR is at least seeing some progress there - here's to hoping more games benefit from it moving forward.
> Yes. It's cheaper to build film sets and fly scouting teams to multiple exotic locations for months on end to capture approximations of your art direction than it is to hand-author assets.
We just use an iPhone and Gaussian splatting. You don't even have to go outside; you can use AR-captured imagery as well:
https://youtu.be/UdCKeO4c_xM
EDIT: Just wanted to address this part because I forgot something...
> Production has changed in a way that not only stifles creativity, but lends itself to poorly polished and poorly performing games. All in the pursuit of "realism", where many prefer the aesthetics of last generation. That's the definition of diminishing returns
This is true, and part of the point, but on the flip side we have games like Bodycam, made by two guys, one of whom was 17 when they started. It looks more realistic than any AAA shooter, and most people are none the wiser to how it was made: mostly UE5 Blueprints and asset packs made from laser-scanned entities.
https://www.youtube.com/shorts/_Gh0x9mtIuQ?feature=share
In this regard, they managed to make a top-selling game that fools a lot of people into thinking it looks "real" without having spent an arm and a leg to do so.
It's possible to get creative, push boundaries, and make use of these tools and techniques to build fascinating, unique, or groundbreaking games, but as you stated, most studios do not do this.
Fidelity and design are fundamentally different concepts. Films with a memorable visual identity that are remembered decades after their release aren't a product of fidelity. The same is largely true for games or any other visual medium. Architectural styles, furniture, clothing, weapons, foliage, and entire biomes were designed, with much thought by talented people, to maximize interest and appeal. Even in the case of realistic settings, set designers curate and build specialized props to maximize appeal. Scanning your immediate vicinity is the antithesis of this, which is why, at great expense, AAA studios combine externally sourced photogrammetry assets and scouting operations with a large number of custom assets.
Simply put, photogrammetry may provide a shortcut to fidelity. But fixation on fidelity over design is a shortcut to churning out a generic and visually uninteresting product. There are settings that get away with this. Most do not. And I'd rather play an interesting game than a realistic one.
The generative AI/photogrammetry approach is really interesting. I'd wager it's a poor substitute for coherent, deliberate design throughout a project though. Good design involves intent, understanding and consistency. AI, for the time being, lacks all of the above.
> I don't even think this is a fair metric. It's hard to beat the point graphics have already reached in the last ~5-7 years, short of going full body scans on everything.
Pretty much. Doing so also comes at the cost of design freedom. When pursuing realism at all costs, one deviates as little as possible from the source scan, limiting the potential appeal and variety of character designs. The same is true for environments, made worse by the use of outsourced photogrammetry. The whole modern content pipeline needs re-evaluation, with an emphasis on design, not realism.
"how is this one better than what came before?" and not enough time on "what make's this one worth playing *as well?"
Very much this, made worse by the culture of modern studios. Ultimately, ideas and execution dictate player experience. Studios being answerable to multiple external authorities while walking on eggshells internally precludes the spontaneity that produced some of the industry's best ideas. It's not just a matter of priorities, but of the ideas themselves not materialising because the creative process has been stripped of its creativity.
Yeah "graphical fidelity" is a luxury status signal within games.
I agree with your complaint on their monetization strategy. Games, as creative works, should be understood to have loyal followings, and publishers should be making money off of their diverse array of products that hit lots of little niches. Not the "it does everything" single game model. To be honest, publishers should be trying to build passive income by releasing different games, not trying to get people addicted to one and only one game.
What's weird is that you would think the "Games as a Service" model would allow for exactly the type of development you're talking about: making a 10-year game playable continuously on all modern platforms, constantly updated, and always working, with minor feature improvements here and there. Instead they just bounce from one gimmick to the next and never fix anything, assuming they even released the game.
I think it's just because the people running these companies are shit.
Video games went the way of Disney and Netflix. Really makes you question whether minority investors in those studios can yank control from ESG funders through lawsuits; it's abundantly clear the majority investors want the companies to bomb financially.
The majority investors are all hedge funds that have government-sponsored objectives and exist to protect the oligarchs at the top of each industry, so yes, they want to bankrupt the industry.
I say: fuck it. Games shouldn't be investor-driven organizations anyway. They are the essence of a private business with a sole proprietor or partnership at the top.
It does feel like A LOT of western studios fell for the chicom covid DEI bucks and were either too stupid or too shortsightedly greedy to realize that taking them would kill their studios and let China swoop in afterwards.
The only upside is that there are a lot of indie studios in the west ready to take over the market, and Nintendo isn't falling, so maybe China's own internal issues will prevent it from becoming the new media hegemon.
Yeah, they're trying, but thankfully Communists are still terrible at everything.