it's always cheaper to use photogrammetry
Yes. It's cheaper to build film sets and fly scouting teams to multiple exotic locations for months on end to capture approximations of your art direction than it is to hand author assets. But sure, realism isn't costly... Look past the criticisms of Dunning-Kruger-infused YouTube videos, man, and you'll see that asset creation today is incomparable to the basic textures on primitive geometry of a couple of generations ago. Production has changed in a way that not only stifles creativity, but lends itself to poorly polished, poorly performing games. All in pursuit of "realism", when many players prefer the aesthetics of last generation. That's the definition of diminishing returns.
which will give you 1:1 realistic topology
Photogrammetry doesn't produce "realistic topology" - there's no such thing. Topology refers to the structure of the geometry - how vertices and faces are connected - primarily with regard to how it deforms and renders. Outside of the handful of deforming meshes, the only concerns are 1) density and 2) correctness (manifold geometry, excessive concavity, micro-triangles introducing overdraw, incompatibility with the rest of your pipeline, etc.). By necessity, retopology is heavily automated. LOD generation, with very few exceptions, is done parametrically, with engines providing the functionality out of the box. Seen that Simplygon logo at startup? That's one middleware providing exactly this. For certain classes of photogrammetry assets, the entire retopology/unwrap/baking process can be, and often is, fully automated by the likes of Houdini TOPs. Again, out of necessity, given the sheer number of assets modern games require.
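To make the "parametric LOD generation" point concrete, here's a minimal sketch of the kind of decimation pass that middleware like Simplygon or a Houdini TOPs graph wraps. It uses open3d purely as a stand-in, and the file path and reduction ratios are illustrative assumptions, not anyone's actual pipeline settings:

```python
# Minimal sketch: generate a LOD chain by quadric decimation.
# open3d stands in for dedicated middleware (Simplygon, Houdini TOPs);
# the file path and reduction ratios are illustrative, not from a real pipeline.
import open3d as o3d

def build_lod_chain(src_path, ratios=(0.5, 0.25, 0.1, 0.02)):
    """Decimate a scanned mesh to progressively lower triangle counts."""
    mesh = o3d.io.read_triangle_mesh(src_path)
    mesh.remove_duplicated_vertices()      # basic hygiene before decimation
    mesh.remove_degenerate_triangles()

    base_tris = len(mesh.triangles)
    lods = []
    for i, ratio in enumerate(ratios, start=1):
        target = max(int(base_tris * ratio), 16)
        lod = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
        out_path = src_path.replace(".obj", f"_lod{i}.obj")
        o3d.io.write_triangle_mesh(out_path, lod)
        lods.append((out_path, len(lod.triangles)))
    return lods

if __name__ == "__main__":
    for path, tris in build_lod_chain("scanned_rock.obj"):   # hypothetical asset
        print(f"{path}: {tris} triangles")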
The time spent on photogrammetry is primarily clean-up - whether that's removing objects from environments, filling areas that couldn't be imaged, or fixing the myriad artifacts the process produces - it's far from perfect. And that's IF you can find a 1:1 analogue for what you want in your game, which brings me to this notion:
Unless your game is a stylised Nintendo game
As if even half the assets in a third of the games released in a year are viable candidates for photogrammetry. Sure. Setting aside that games are already criticized for looking generic - a by-product of excessive photogrammetry - art direction is still a thing, and it matters more to a game's appeal than raw fidelity. One key concern is consistency: that the style and fidelity of assets stays consistent throughout a scene. Introduce photogrammetry, and every hand-authored asset now has to target that level of realism lest it stick out like a sore thumb. Even when using scanned materials as bases, maintaining that level of realism is time consuming and limiting.
Now consider how environments are actually constructed. The majority of game worlds rely on modularity - instanceable geometry plus trim textures/geometry, topped off with a small number of versatile tiling textures. This isn't just to speed up environment creation; it's to keep GPU memory and streaming bandwidth requirements down. Real-world objects seldom conform to this approach outside of surface scans used as tileables, and photogrammetry produces unique texture data per asset. Wonder why games have ballooned in size and constantly suffer from streaming hitches? Look no further.
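To put rough numbers on "unique texture data per asset", here's a back-of-the-envelope comparison. The asset counts, map counts, and compression figures are illustrative assumptions (4K maps, BC-class compression at roughly 1 byte per texel, three maps per material, mips ignored), not measurements from any shipped game:

```python
# Back-of-the-envelope texture memory: unique per-asset scans vs. a modular set.
# All numbers are illustrative assumptions, not figures from a real title.

BYTES_PER_TEXEL = 1          # BC7/BC5-class block compression, roughly
MAPS_PER_MATERIAL = 3        # albedo, normal, ORM (occlusion/roughness/metal)
RES = 4096                   # 4K maps, mip chains ignored for simplicity

def material_mb(resolution=RES):
    return resolution * resolution * BYTES_PER_TEXEL * MAPS_PER_MATERIAL / (1024 ** 2)

unique_assets = 2000         # every prop carries its own scanned texture set
photogrammetry_gb = unique_assets * material_mb() / 1024

tiling_materials = 60        # shared tileables + trim sheets for a modular world
modular_gb = tiling_materials * material_mb() / 1024

print(f"One 4K material: {material_mb():.0f} MB")
print(f"{unique_assets} uniquely textured scans: ~{photogrammetry_gb:.0f} GB")
print(f"{tiling_materials} shared tiling/trim materials: ~{modular_gb:.1f} GB")
```

Even with generous error bars on those assumptions, the gap is orders of magnitude, and that gap is exactly what shows up as install size and streaming pressure.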
a lot of studios are now bypassing the optimisation phase by using Nanite to do the work for them
Nanite is an alternative to traditional triangle rasterization, designed to allow far more complex geometry than conventional LOD systems can practically provide. Handling unoptimized scenes more gracefully is a side-effect, not its purpose, and not a recommendation. Outside of Unreal, mesh shaders are being used for the same reason, with similar results - additional overdraw. See Alan Wake 2. It's a new approach, with the associated growing pains, but "realism" demanded more geometry, so here we are.
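For a sense of why dense geometry brings that overdraw cost with it: hardware rasterizers shade pixels in 2x2 quads, so a triangle covering only a pixel or two still pays for whole quads of shader work. A rough sketch of that waste, with purely illustrative coverage numbers:

```python
# Rough estimate of quad-shading waste from tiny triangles.
# GPUs shade in 2x2 pixel quads; a triangle covering few pixels still launches
# whole quads, so helper lanes do throwaway work. Coverage figures are illustrative.

def quad_shading_efficiency(pixels_covered, quads_touched):
    """Fraction of shaded lanes that actually contribute visible pixels."""
    shaded_lanes = quads_touched * 4      # every touched quad shades 4 lanes
    return pixels_covered / shaded_lanes

cases = {
    "large triangle (10,000 px, ~2,600 quads)": (10_000, 2_600),
    "small triangle (16 px, ~9 quads)":          (16, 9),
    "micro triangle (1 px, 1 quad)":             (1, 1),
}

for name, (pixels, quads) in cases.items():
    eff = quad_shading_efficiency(pixels, quads)
    print(f"{name}: {eff:.0%} of shader invocations produce a visible pixel")
```

That efficiency cliff for micro-triangles exists regardless of engine, which is why pushing triangle counts up always demands new rasterization strategies rather than just more of the same.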
Long term, that will be resolved. That's not to say Nanite is a substitute for optimisation, or that it was ever billed as such. It's just a convenient scapegoat. Ironically, Unreal does have some major architectural issues. The entire streaming system is built with Fortnite in mind - the idea of a persistent, server-side world. The actor system and tick handling are a poor fit for complex non-linear worlds, resulting in game thread congestion, and the streaming itself is far too coarse for large, dense worlds. The collaboration with CDPR is at least seeing some progress there - here's hoping more games benefit from it going forward.
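On the tick congestion point, the usual mitigation, in Unreal and elsewhere, is to stop letting thousands of individual objects tick themselves and have one manager batch-update them instead. A minimal, engine-agnostic sketch of that pattern; the class and method names are invented for illustration and are not Unreal API:

```python
# Engine-agnostic sketch of the "central manager" pattern used to cut per-object
# tick overhead: one registered updater iterates a flat list each frame instead of
# the engine dispatching thousands of individual tick callbacks.
# Names are invented for illustration; this is not Unreal's actual API.

class Prop:
    """A lightweight world object that no longer ticks itself."""
    def __init__(self, name):
        self.name = name
        self.age = 0.0

    def update(self, dt):
        self.age += dt   # whatever cheap per-frame work the object needs


class PropTickManager:
    """Single tick entry point that batch-updates every registered prop."""
    def __init__(self):
        self.props = []

    def register(self, prop):
        self.props.append(prop)

    def tick(self, dt):
        # One dispatch from the game loop and one tight loop, instead of
        # N separate engine-level tick calls with their per-call bookkeeping.
        for prop in self.props:
            prop.update(dt)


manager = PropTickManager()
for i in range(10_000):
    manager.register(Prop(f"prop_{i}"))

manager.tick(1 / 60)   # called once per frame by the game loop
```

The win comes from eliminating per-object dispatch overhead and keeping the update loop cache-friendly, which is exactly where dense, non-linear worlds start choking the game thread.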