Guerrilla Sauce
So, for some time now, games have rendered parts of the frame at lower resolutions to ease the workload on the system. Once the whole rendering process completes, the finished frames are sent over a video signal to our displays.
"Games often employ different resolutions in different parts of their rendering pipeline. Most games render particles and ambient occlusion at a lower resolution, while some games even do all lighting at a lower resolution. This is generally still called native 1080p"
This means that ALL OF US have pointed to a game's "native" resolution without knowing, or seeing, that low-res assets are incorporated in the process of rendering it. Yet we still call it native; we still call it 720p, 1080p, etc.
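To make that concrete, here is a minimal, hypothetical sketch of the idea: an effect like ambient occlusion computed in a half-resolution buffer, then stretched back up before compositing into the full-resolution frame. The function names and values are made up for illustration, not anyone's actual pipeline:

```python
# Hypothetical illustration: compute an effect (e.g. ambient occlusion)
# in a half-resolution buffer, then upsample it back to full resolution
# before compositing. The final frame is still "native" sized.

def downsample_half(image):
    """Keep every other pixel in both axes (half resolution)."""
    return [row[::2] for row in image[::2]]

def upsample_nearest(image):
    """Double each pixel in both axes (nearest-neighbor)."""
    out = []
    for row in image:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def composite(color, ao):
    """Multiply the full-res color frame by the upsampled AO term."""
    return [[c * a for c, a in zip(crow, arow)]
            for crow, arow in zip(color, ao)]

# 4x4 "full-res" color frame; AO is computed at 2x2 and stretched back:
color = [[1.0] * 4 for _ in range(4)]
ao_half = downsample_half([[0.5] * 4 for _ in range(4)])  # 2x2 buffer
frame = composite(color, upsample_nearest(ao_half))       # 4x4 output
```

The output frame is full resolution even though one of its inputs never was, which is exactly why the result still gets called "native".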
Now to Killzone Shadow Fall MP... First we must understand that in the three to four months since the game's launch, not a single pixel-counter came out saying Killzone's MP was running at 960x1080. Why is that? Why did it take Eurogamer mentioning GG's temporal reprojection technique for people to even know this?
Answer: because all of that is done internally, as part of the rendering process, just like the games that "do all lighting at a lower resolution". When counting pixels, you see a "native" 1080p resolution of 1920x1080. If the game were rendered at 960x1080 and upscaled in the conventional sense, pixel-counters would have seen it... just like the sub-HD games last gen or the 900p games this gen.
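For contrast, here is a toy sketch of what a *conventional* 960-to-1920 horizontal upscale does, assuming nearest-neighbor scaling: every source pixel appears twice, and those duplicated columns on hard edges are exactly the tell pixel-counters look for. The tiny row widths stand in for 960 and 1920:

```python
# Toy sketch: a conventional horizontal 2x upscale duplicates columns,
# which is the artifact pixel-counters spot on edges. A 4-wide row
# stands in for a 960-wide one.

def upscale_row_2x(row):
    """Nearest-neighbor horizontal 2x: each source pixel appears twice."""
    return [p for p in row for _ in (0, 1)]

def has_duplicated_columns(row):
    """True if every even/odd pixel pair is identical - the upscale tell."""
    return all(row[i] == row[i + 1] for i in range(0, len(row), 2))

half_res_row = [0, 1, 2, 3]                 # stands in for 960 wide
shipped_row = upscale_row_2x(half_res_row)  # stands in for 1920 wide
```

A full-res row rendered normally would generally fail the `has_duplicated_columns` check, while the upscaled one always passes it, so a frame built this way gets caught. Reprojection fills the missing columns with *different* values pulled from history, which is why it doesn't leave this fingerprint.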
I am no game-development guru and would never claim anything close to that. I know there are some very knowledgeable people here in SW... some. I can GUARANTEE you that some of the best Xbox 360 and Xbox One games use similar techniques with similar assets, and this will be proven over the next few weeks as people look into how Xbone games are rendered, or how last-gen games were rendered on PS3/360.
PA-LEASE, don't come in with "bu, bu, but it's rendering at 960x1080" or "it's 1080i".
This isn't a video, it's a game, and the complexity of what it's doing is beyond most of you. For example:
- We keep track of three images of “history pixels” sized 960x1080
- The current frame
- The past frame
- And the past-past frame
- For each pixel we store its color and its motion vector – i.e. the direction the pixel moved on-screen
- We also store a full 1080p, “previous frame” which we use to improve anti-aliasing
Then we have to reconstruct every odd pixel in the frame:
- We track every pixel back to the previous frame and two frames ago, by using its motion vectors
- By looking at how this pixel moved in the past, we determine its “predictability”
- Most pixels are very predictable, so we use the value reconstructed from a past frame as the odd pixel
- If the pixel is not very predictable, we pick the best value from neighbors in the current frame
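The steps above can be sketched roughly like this. This is a heavy simplification of Guerrilla's description: the predictability metric, the motion-vector tracking, and the neighbor fallback here are made-up stand-ins, not their actual implementation. It only shows the shape of the idea: rendered "even" columns interleaved with "odd" columns reconstructed from history or, failing that, from current-frame neighbors:

```python
# Simplified sketch of the reconstruction step described above.
# We rebuild the missing "odd" columns of a full-res row from history
# values, falling back to current-frame neighbors when a pixel's
# history is unstable. The predictability test is a made-up stand-in.

def predictable(prev, prev_prev, tolerance=0.1):
    """Stand-in metric: a pixel counts as predictable if its value
    barely changed between the two history frames."""
    return abs(prev - prev_prev) <= tolerance

def reconstruct_row(current_even, prev, prev_prev):
    """current_even: the 'even' columns rendered this frame (half width).
    prev / prev_prev: history values for the missing 'odd' columns.
    Returns a full-width row: rendered evens interleaved with
    reconstructed odds."""
    full = []
    for i, rendered in enumerate(current_even):
        full.append(rendered)  # even column: rendered this frame
        if predictable(prev[i], prev_prev[i]):
            # predictable: reuse the reprojected history value
            full.append(prev[i])
        else:
            # unpredictable: fall back to current-frame neighbors
            # (here, just average with the adjacent rendered pixel)
            right = current_even[i + 1] if i + 1 < len(current_even) else rendered
            full.append((rendered + right) / 2)
    return full

# Static scene: history agrees with itself, so odds come from history.
row = reconstruct_row([1.0, 1.0], prev=[1.0, 1.0], prev_prev=[1.0, 1.0])
# Fast-changing pixels: history disagrees, so neighbors fill in instead.
row2 = reconstruct_row([0.0, 1.0], prev=[0.9, 0.9], prev_prev=[0.0, 0.0])
```

The point of the sketch: the output row is full width every frame, and the "odd" pixels are real reconstructed values, not stretched copies, which is why a straight pixel count reads 1920x1080.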
If using low-res assets at any point in the "pipeline" means the resolution isn't native... lems are prepping for some self-ownage.