ttboy's forum posts

#1 Posted by ttboy (268 posts) -

@ttboy:

Problem is, even after they use DX12, it will just be like Horizon 2: it looks nice and plays well, but, just like Horizon 2, they won't provide any proof that the new SDK made things easier to achieve.

If they don't say it, then I lose the bet. It's fairly simple, I think.

#2 Posted by ttboy (268 posts) -

Guys, you know tomato will win because he can spin it. There is no way you can prove DX12 helps on Xbox One, because no dev would be stupid enough to make two versions of the same game on Xbox One just to prove it is faster on DX12. And PC results won't carry over to Xbox One, since DX11 on PC and on Xbox One are different; MS already said that at the DX12 announcement.

The only thing you'll get from devs is statements that they don't have to do extra work when developing both PC and Xbox One versions, and you can't guarantee such statements translate to better graphics.

The only claim that may help is that the CPU is better at multi-threading now. But that is such a minor improvement when most of the rendering is on the GPU. Hell, Forward+ does its light culling as GPGPU work instead of on the CPU.

And most of all, it will take at least a year after DX12 releases to see how the development process improves. Most devs will keep the old ways and slowly transition to the new ones; it takes much longer to see the effect.

Anyway, I hope Fable Legends looks as great as this experiment. As long as I get great games like Horizon 2, I am happy.

DX12 is in certain devs' hands right now. When they announce it, they will want software to show. I don't think it will take more than a year before we see some tangible evidence. If it does, then I'll leave this place and you will have Tomato to yourselves :) ..

#3 Posted by ttboy (268 posts) -

@ttboy said:

@GrenadeLauncher said:

@ttboy said:

@GrenadeLauncher said:

@ttboy said:

@gamersjustgame said:

Can we please get this bet official?

@tormentos @GrenadeLauncher Let's make the bet. Since both of you are so confident that DX12 will do nothing for the Xbox One, let's bet your accounts on it. Put up or shut up time, gentlemen.

And how do you intend to objectively measure the difference in graphical output between exclusives released at launch and by the end of 2016?

What's the cut off for DX12 justifying itself as the sekrit sauce that saves the Xbone?

I propose the following:

  1. A developer quote that says, implicitly or explicitly, that DX12 has increased the fps, the amount of objects on screen, or the resolution.

If we use screen shots then people will not be objective.

Time duration: no more than 1 year after the DX12 SDK has been in developers' hands.

Any developer? What if that developer is working on a game that has a marketing deal with MS like The Division? What if the source is a click bait article on a website no one's heard of?

Any developer who legitimately has access to the SDK. There are already 2 devs who have stated that it's a big deal for the Xbox One (hence my confidence). Let's say it has to be a dev, not an anonymous source :).

How will we know if they legit have access to the SDK? Their word on it?

Out of interest, which devs are they so far?

Can you think of any dev who said they had access to an SDK and was lying? It's rare for a dev to lie outright, because it really destroys their reputation and is easily found out. The Killzone 1080p thing could be argued as a lie, since they're being sued over it, but that's rare.

Hehe, I've posted one dev many times, but some do not want to believe him (Brad Wardell). He has been questioned many times on his statements and has defended them. You can research his background and who he currently works with. The other dev group was asked at E3 directly by a fan who took pics with them, and they stated that it was a big deal. I'm not posting links since Google is your friend. If you don't believe me, then take the bet :)....

#4 Edited by ttboy (268 posts) -

@ttboy said:

@GrenadeLauncher said:

@ttboy said:

@gamersjustgame said:

Can we please get this bet official?

@tormentos @GrenadeLauncher Let's make the bet. Since both of you are so confident that DX12 will do nothing for the Xbox One, let's bet your accounts on it. Put up or shut up time, gentlemen.

And how do you intend to objectively measure the difference in graphical output between exclusives released at launch and by the end of 2016?

What's the cut off for DX12 justifying itself as the sekrit sauce that saves the Xbone?

I propose the following:

  1. A developer quote that says, implicitly or explicitly, that DX12 has increased the fps, the amount of objects on screen, or the resolution.

If we use screen shots then people will not be objective.

Time duration: no more than 1 year after the DX12 SDK has been in developers' hands.

Any developer? What if that developer is working on a game that has a marketing deal with MS like The Division? What if the source is a click bait article on a website no one's heard of?

Any developer who legitimately has access to the SDK. There are already 2 devs who have stated that it's a big deal for the Xbox One (hence my confidence). Let's say it has to be a dev, not an anonymous source :).

#5 Posted by ttboy (268 posts) -

@ttboy said:

@gamersjustgame said:

Can we please get this bet official?

@tormentos @GrenadeLauncher Let's make the bet. Since both of you are so confident that DX12 will do nothing for the Xbox One, let's bet your accounts on it. Put up or shut up time, gentlemen.

And how do you intend to objectively measure the difference in graphical output between exclusives released at launch and by the end of 2016?

What's the cut off for DX12 justifying itself as the sekrit sauce that saves the Xbone?

I propose the following:

  1. A developer quote that says, implicitly or explicitly, that DX12 has increased the fps, the amount of objects on screen, or the resolution.

If we use screen shots then people will not be objective.

Time duration: no more than 1 year after the DX12 SDK has been in developers' hands.

#6 Posted by ttboy (268 posts) -

Can we please get this bet official?

@tormentos @GrenadeLauncher Let's make the bet. Since both of you are so confident that DX12 will do nothing for the Xbox One, let's bet your accounts on it. Put up or shut up time, gentlemen.

#7 Posted by ttboy (268 posts) -

Anyone else feel cheated? Neither of the fanboys will take the bet.

Yes .. I think they're more interested in arguing than putting their accounts on the line. I've seen enough to feel confident.

#8 Edited by ttboy (268 posts) -

I hate to be that guy.....

But is the bet on or off?

I'm down to do it but none of the usual Sony guys want to. I wonder why.

#9 Edited by ttboy (268 posts) -

This was posted last year, and it's one of the better explanations of why MS chose such a strange architecture.

so-could-the-x1s-secret-sauce-be-voxel-cone-ray-tracing

I did the legwork; I'll just present verifiable evidence and you decide. I'm going to try to keep it much simpler this time. I did it in a question-and-answer format so that members who might not be up to speed can also follow along; if you already know what these things are, just skip ahead to the next question.

What is Ray Tracing?

Some would say it's the holy grail of computer graphics lighting. It gives you realistic dynamic reflections, lights, shadows, and materials, plus huge increases in geometry. It's the future. If you don't know what it is, you should probably stop right here and go look it up.

Why is ray tracing awesome?

(Image: ray tracing done with the 3D graphics program POV-Ray.)

A true ray tracing engine gives you true global illumination, shadows, lights, reflections, specular maps, and realistic material surfaces, all as standard. Energy conservation is easy to do. Lighting looks amazing. Materials can finally look realistic without a whole bunch of texture tricks. You get refractions and transparencies (glass looks like glass; water looks like water and bends and distorts light as it should). It removes from development all the hacks that never quite look as good.

Developers don't have to create a whole bunch of reflection maps to do reflections, water, and many other materials; those maps are a pain in the ass and never quite look good enough anyway. It also offers a huge increase in geometry. Rasterized graphics are easy to get up and running, but past a certain point, the more polygons you have, the slower they get. Ray tracing is the complete opposite: its cost depends mainly on resolution. It takes a lot of power, more than is currently available, to get it up and running, but once you cross that point you can have lots and lots of geometry at very little additional cost. That means more actual 3D detail in objects, especially organic ones like trees. So there's an expected switch in the industry coming. Well-known figureheads like John Carmack are already gearing their studios up in preparation for it.
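To make "shooting rays" concrete, here's a minimal, hypothetical sketch (plain Python, not any engine's actual code) of the ray-sphere intersection test that sits at the heart of a ray tracer and runs millions of times per frame:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses.
    origin/direction/center are (x, y, z) tuples; direction is normalized."""
    # Vector from ray origin to sphere center
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a == 1 for a unit direction)
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray fired straight down -z from the origin at a unit sphere 5 units away
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # hits at t = 4.0
```

A real tracer fires one of these per pixel, then spawns more rays at each hit for shadows, reflections, and bounced light, which is where the cost explodes.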

Can we have it?

Not yet. We're actually really close on the PC, but still nowhere near being able to run it on next-gen consoles. Real ray tracing is around the corner on the PC, though; see the Brigade ray tracing engine. Keep an eye on it. It's going to be awesome. The only thing confirmed for consoles is screen-space reflections, which are NOT to be confused with the real thing.

What's the difference between ray tracing and screen-space reflections?

SSR is a standard feature of Crytek's CryEngine, Frostbite, Unreal Engine 4, Guerrilla's KZ:SF engine, and a lot of others, but it's not the same thing. They are not even remotely similar. SSR is a hack that gives you dynamic reflections only; it's not a lighting engine. It's just used for getting dynamic reflections in rasterized graphics, but that's all: no transparency, no lighting and shadows, no global illumination, no refraction benefits, none of that. And the reflections themselves only take into consideration objects on screen, not objects that are out of your view but whose reflection you should still see if you are at an angle. It just won't be there.
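A toy sketch of that limitation (the frame buffer and names are made up for illustration): SSR can only sample pixels already in the frame, so a reflected ray that lands off-screen simply returns nothing:

```python
def ssr_sample(screen_buffer, u, v):
    """Screen-space reflections can only sample what's already on screen:
    if the reflected ray lands outside the [0, 1) viewport, there is no data."""
    h = len(screen_buffer)
    w = len(screen_buffer[0])
    if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
        return None  # off-screen: the reflection simply isn't there
    return screen_buffer[int(v * h)][int(u * w)]

# A tiny 2x2 "frame" standing in for the rendered image
frame = [["sky", "sky"],
         ["car", "road"]]

print(ssr_sample(frame, 0.25, 0.75))  # on-screen: "car"
print(ssr_sample(frame, 1.5, 0.5))    # off-screen: None
```

A true ray tracer has the whole scene available, so the second lookup would still find whatever geometry the ray hits.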

Screen-space reflections:

Brigade 2 ray tracing lighting engine:

Note how close it gets to images you typically only see from 3D animation programs or in Pixar movies. Except it's in real time: it runs at roughly 30fps, but it currently requires roughly a Titan, and it's still very noisy in motion.

The entire lighting engine (shadows, material creation) is driven by the ray tracing engine. There isn't a lot of texturing going on here: the look of a material isn't dependent so much on the original texture as on how light bounces off the defined material surface. Procedural textures work fantastically with ray tracing for creating realistic materials. Notice the reflections at the top, which reflect objects not on screen, something beyond the capabilities of screen-space reflections.

Most developers still use reflection maps for their reflections, which are just textures taken from the point of view of the reflective object. It's as fake as it gets, and next-gen racing games like Forza 5 and Driveclub are both still using this age-old hack.

What is Voxel Cone Ray Tracing?

Ray tracing done cheap. It works in a similar way, except that instead of individual rays (lines) being shot from each pixel, it uses cones and voxels. Each cone covers a larger area, so only a few are needed per pixel; prior implementations used along the lines of 9-12 cones, which obviously saves a lot of power compared to the hundreds of diffuse sample rays per pixel (millions per frame at 1080p) a pure ray tracer would need. It makes ray tracing almost practical on current GPU power. The catch is that the scene's triangle/polygon data has to be converted and stored as voxels.
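A back-of-the-envelope comparison of that saving (the 256-rays-per-pixel figure is an assumed Monte Carlo sample count for diffuse lighting, not a quote from any engine; the 9 cones matches the "9-12" above):

```python
# Illustrative arithmetic only, not a benchmark.
width, height = 1920, 1080
pixels = width * height                 # pixel count at 1080p

monte_carlo_rays_per_pixel = 256        # assumed diffuse GI sample count
cones_per_pixel = 9                     # prior cone-tracing implementations

ray_samples = pixels * monte_carlo_rays_per_pixel
cone_samples = pixels * cones_per_pixel

print(pixels)                           # 2073600 pixels per frame
print(ray_samples // cone_samples)      # ~28x fewer directional samples
```

The factor scales with however many rays you would have needed, which is why cones make the approach "almost practical" rather than merely slightly cheaper.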

It's an approximation, but you can't argue with the results:

What are voxels?


Unlike polygons, where the basic unit is the triangle, a voxel is a 3D cube. They're volume-based. They were never really that popular in videogames other than the well-known 90s game Outcast, but now they're making a comeback: Project Spark and EverQuest Next are using them for their graphics engines. They're really great at creating organic-looking graphics, and they have superior scalability (zooming in and out to great draw distances) and performance compared to polygons.

Ever notice how you can't typically make out the individual polygons in Project Spark?

That's because what you're looking at isn't your typical polygon graphics made out of triangles; they're voxels. That's why those objects scale up and down so nicely. Kodu, Project Spark's father, uses a voxel engine as well.
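To illustrate what "volume-based" means, here's a hypothetical toy voxelizer: it snaps 3D points into a coarse grid of cubic cells, which is the essence of how a scene gets converted into voxels (real voxelizers rasterize triangles rather than points, so treat this purely as a sketch):

```python
def voxelize(points, grid_size, bounds):
    """Map 3D points into a cubic voxel grid; return the set of occupied cells.
    bounds is (min, max) of the cubic region; grid_size is cells per axis."""
    lo, hi = bounds
    cell = (hi - lo) / grid_size
    occupied = set()
    for x, y, z in points:
        ix = min(int((x - lo) / cell), grid_size - 1)
        iy = min(int((y - lo) / cell), grid_size - 1)
        iz = min(int((z - lo) / cell), grid_size - 1)
        occupied.add((ix, iy, iz))
    return occupied

# Two nearby points fall into the same voxel; a distant one gets its own cell
cells = voxelize([(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (7.5, 7.5, 7.5)], 8, (0.0, 8.0))
print(sorted(cells))  # [(0, 0, 0), (7, 7, 7)]
```

That collapsing of fine detail into fixed-size cells is also why voxel scenes scale so smoothly with distance: a coarser grid is just a cheaper version of the same volume.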

So those voxels are somehow now being used to do ray tracing?

Yes.

Who came up with Voxel Cone Ray Tracing?

Cyril Crassin is typically credited with it. Here's a paper outlining a course on it, which was the big clue in this research. You can try to decipher it, or you can take my word for it and save yourself a headache and some time. Up to you.

Why didn't it make it big? Why didn't I hear about it?

It did. It was a big hit at SIGGRAPH, and Unreal Engine 4 was initially based on it. Turns out, some are saying they eventually had to strip it out (quietly) because they couldn't get it up to speed on next-generation consoles and mid-range PCs. However, there's hope: there's also a plugin for Unity, and it runs quite well.

What was the problem with it?

The data was being stored in a Sparse Voxel Octree. Don't worry if you don't understand what that means; let's just say it's a 3D, layered voxel grid. What's important is that traversing this structure is very slow.
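A rough sketch of why the traversal is slow (toy code, not the real GPU implementation): reaching a single voxel means descending one tree level per bit of grid resolution, and on real hardware each step is a dependent memory read that must finish before the next can start:

```python
def octree_lookup(depth, x, y, z):
    """Return the child indices visited to reach voxel (x, y, z) in an
    octree of the given depth (a 2^depth-per-axis grid). Each entry in the
    returned path stands for one dependent memory indirection."""
    path = []
    for level in range(depth - 1, -1, -1):
        # Pick one of 8 children from the relevant bit of each coordinate
        child = ((x >> level & 1) << 2) | ((y >> level & 1) << 1) | (z >> level & 1)
        path.append(child)
    return path

# A 512^3 grid (depth 9) needs 9 dependent steps to reach one voxel
print(len(octree_lookup(9, 100, 200, 300)))  # 9
```

Multiply those chained reads by every step of every cone and the cost adds up fast, which is what motivated the switch described next.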

How did they fix it?

Instead of storing the data in an octree, they're using a 3D texture: like a cube of voxels, but stored as an array of 2D texture slices. Now it's fast, but this had some problems of its own.
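By contrast with the octree walk, a 3D texture is addressable in one step. A toy sketch (sizes are hypothetical) of the flat-array indexing that replaces the tree traversal:

```python
def tex3d_index(w, h, d, x, y, z):
    """A 3D texture is just w*h*d voxels laid out flat as d slices of h rows:
    one multiply-add chain yields the voxel's offset, a single fetch,
    with no tree walk and no dependent reads."""
    assert 0 <= x < w and 0 <= y < h and 0 <= z < d
    return (z * h + y) * w + x

# 512^3 volume: any voxel is one computed offset away
print(tex3d_index(512, 512, 512, 100, 200, 300))  # 78745700
```

The trade, as the quoted write-up below explains, is memory: the flat layout stores empty space too, so it can't scale to big scenes the way a sparse octree can.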

What was the problem with using 3D textures for cone ray tracing?

The 3D textures were big and required a lot of memory.

This demo served both as a means to familiarize myself with voxel cone tracing and as a testbed for performance experiments with the voxel storage: plain 3D textures, real-time compressed 3D textures, and 3D textures aligned with the diffuse sample rays were tested. Sparse voxel octrees were not implemented due to time constraints, but would have been nice to have as a baseline reference. Compared to SVO in the context of voxel cone tracing (as opposed to ray casting, where SVO is a clear winner), 3D textures allow for easier filtering, direct lookups without evaluating the octree structure, and potentially better cache and memory bandwidth utilization (depending on cone size and scene density). The clear downside is the space requirement: 3D textures can’t scale to larger scenes or smaller, more detailed voxels. There may be ways to work around this deficiency: sparse textures (GL_AMD_sparse_texture), compression, or hybrid schemes that mix tree structures with 3D textures.

http://www.geeks3d.com/20121214/voxel-cone-tracing-global-illumination-in-opengl-4-3/

How did they fix that?

Using partial resident textures.

What the hell are partially resident textures?

It chops up an enormous texture into tiny little tiles, and streams only what is needed, saving both RAM and bandwidth.

Left: Original texture.

Middle: It gets stored in a tile pool.

Right: How it's stored in memory.

The original texture on the left is big. The combined 64KB tiles on the right take up very little space in RAM. It's so powerful you can address textures as big as 3GB with only 16MB of RAM (or eSRAM?) resident.
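The arithmetic behind that claim can be sketched like this (the 3GB/16MB figures come from the post above; 64KB is the standard tiled-resource tile size):

```python
# Rough arithmetic for partially resident textures (illustrative only):
# the full texture exists in virtual address space, but only the tiles
# actually sampled need physical memory behind them.
TILE_BYTES = 64 * 1024                  # standard 64KB tile

full_texture_bytes = 3 * 1024 ** 3      # a hypothetical 3GB texture
resident_pool_bytes = 16 * 1024 ** 2    # a 16MB physical tile pool

total_tiles = full_texture_bytes // TILE_BYTES
resident_tiles = resident_pool_bytes // TILE_BYTES

print(total_tiles)     # 49152 tiles in the full texture
print(resident_tiles)  # only 256 of them physically resident at once
```

The streaming system's job is then just deciding which 256 of those 49,152 tiles the current frame actually needs.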

Why is this important in reference to the X1?

Because DirectX 11.2 and the X1 chip architecture are built for doing partially resident resources in hardware. That removes the limitations of earlier software implementations, which held back some engines, such as John Carmack's Rage.

The X1's architecture and data move engines have tile and untile features natively, in hardware:

Doesn't the PS4 have this too?

Both consoles' AMD GPUs support partially resident textures, but we do know for a fact that Microsoft added additional dedicated hardware to the X1 architecture to focus on this area beyond AMD's standard implementation.

Did Sony?

Ask Sony.

How come no one's talked about this?

They have. MS talked about partially resident resources at their DirectX Build conference, and they explained the move to partially resident resources as a solution in this paper. It just might end up being even more important than originally believed. And more recently, an unnamed third-party developer is touting better ray tracing capabilities on the X1:

Xbox One does, however, boast superior performance to PS4 in other ways. “Let’s say you are using procedural generation or raytracing via parametric surfaces – that is, using a lot of memory writes and not much texturing or ALU – Xbox One will likely be faster,” said one developer.

http://www.edge-online.com/news/pow...erences-between-ps4-and-xbox-one-performance/

When might we hear something about it?

DirectX 11.2 was only unveiled earlier this year, so no launch games would have been designed for this. Partially resident textures are still a fairly new technique, only now getting hardware support. Voxel cone ray tracing is also a fairly new technique, and the alternative of using 3D textures along with partially resident textures is even newer; not many have attempted it. Developers will certainly need time to start messing around with both.

So what now?

We don't know to what extent partially resident resources will be used on the X1, but if it's enough to pull off cone ray tracing using partially resident 3D textures, it's going to be a pretty big deal. It was a big blow when Epic had to yank it out of UE4, but the Unity plugin and 3D texture implementations still give hope, and even Epic is considering re-introducing it at some point. Perhaps second- and third-generation games will attempt to use this. If so, I'd expect to hear more about it soon.

#10 Posted by ttboy (268 posts) -

@daious said:

@ttboy said:

@daious said:

@ttboy said:

Again, the main difference will be on PC and not on consoles. It will only help console games that are bottlenecked by the CPU.

Do you have a link for that assertion? Console games are bottlenecked by the CPU. We will find out in less than a year. Brad Wardell was already asked about the same thing you mentioned; you can Google his response.

I would guess the biggest bottleneck of the Xbox One is the GPU, not the CPU. People don't understand the existing API that consoles have; it's the reason a console with the exact same specs as a PC will outperform the PC.

The best things about DX12 on consoles are that more PC games will be ported to consoles and the ease of developing a game for both Xbox One and PC. Xbox One will get more games out of this.

The DX11.X that the Xbox One uses is primarily single-core bound, which is a similar situation to the PC. 11.X is a lower-level API but does not bring the DX12 feature set.
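As a rough illustration of the single-core-bound point (a pure Python stand-in, no actual D3D calls): a DX11-style driver records every draw call on one thread, while DX12-style command lists let each core record its own chunk and submit them together:

```python
from concurrent.futures import ThreadPoolExecutor

def record_draw(call_id):
    """Stand-in for recording one draw call into a command list."""
    return f"cmd:{call_id}"

draw_calls = list(range(8))

# DX11-style: one thread records every call in order
serial = [record_draw(c) for c in draw_calls]

# DX12-style: workers record their own command lists in parallel,
# and order is preserved when the lists are submitted together
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(record_draw, draw_calls))

print(serial == parallel)  # True: same commands, but recording can use all cores
```

The GPU work is identical either way; what changes is that the CPU cost of building the commands no longer piles up on a single core, which is exactly the bottleneck described above.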