In a new interview published on the Spanish site Xataka, Jason Ronald, Director of Program Management for Xbox Series X, said that Microsoft could easily have used variable clock rates to reach a higher theoretical TFLOPS figure, but that doing so would have made it harder for developers to optimize their games.
We focus on optimizing the developer experience to deliver the best possible experience for players, rather than chasing record numbers. We've always talked about consistent and sustained performance.
We could have used boosted clocks, we could have used variable clock rates; the reality is that this makes it harder for developers to optimize their games, even though it would have allowed us to boast higher TFLOPS than we already have. But you know, that's not the important thing. The important thing is the gaming experiences that developers can build.
The Microsoft executive also suggested that the raw I/O speed of the Xbox Series X (which is lower than that of the PlayStation 5, according to the official specifications) doesn't tell the full story.
Things go beyond the numbers that we may or may not share. Sampler Feedback Streaming (SFS) allows us to load only the texture data actually needed, making the SSD act as a multiplier of physical memory on top of the memory the machine itself has.
We also have a new API called DirectStorage that gives us low-level direct access to the NVMe controller so that we can be much more efficient in managing those I/O operations.
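The streaming idea Ronald describes can be sketched conceptually. This is an illustrative simplification in Python, not the actual SFS or DirectStorage APIs (the class, tile size, and method names here are all hypothetical): the GPU reports which texture tiles were actually sampled, and only those tiles are kept resident in RAM, with the rest staying on the SSD until needed.

```python
# Illustrative sketch (NOT the real SFS/DirectStorage APIs): sampler feedback
# tells the engine which texture tiles the GPU actually sampled, so only those
# tiles need to occupy physical memory; everything else stays on the SSD.

TILE_BYTES = 64 * 1024  # assumed 64 KiB tiles, a common GPU page size


class StreamedTexture:
    """A texture whose tiles are loaded from SSD only on demand."""

    def __init__(self, name, total_tiles):
        self.name = name
        self.total_tiles = total_tiles
        self.resident = set()  # tiles currently in physical memory

    def record_feedback(self, sampled_tiles):
        """Process the tiles the GPU reported sampling this frame."""
        wanted = set(sampled_tiles)
        for tile in wanted - self.resident:
            self.load_tile_from_ssd(tile)
        # Evict tiles that were not sampled, freeing physical memory.
        self.resident = wanted

    def load_tile_from_ssd(self, tile):
        # Stand-in for a batched, low-level read request to the SSD.
        self.resident.add(tile)

    def resident_bytes(self):
        return len(self.resident) * TILE_BYTES

    def full_bytes(self):
        return self.total_tiles * TILE_BYTES


# A large texture might have thousands of tiles, but a single frame
# typically samples only a small visible slice of them.
tex = StreamedTexture("rock_albedo", total_tiles=4096)
tex.record_feedback(sampled_tiles=range(64))
print(tex.resident_bytes(), "of", tex.full_bytes(), "bytes resident")
```

The point of the sketch is the "memory multiplier" claim: the full texture costs 256 MiB here, but only 4 MiB is ever resident, so RAM effectively stretches far beyond its physical size as long as the SSD can refill tiles fast enough.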
So even MS says TFLOPS isn't important! However, they're making an interesting claim here: that variable clock rates are difficult to program around, and that their fixed rates address that. Jason also seems to echo what Mark Cerny said about capping the PS5's frequencies where they are. They could go higher but chose not to, presumably for system stability.
What do you guys think about his claim? Will we see this extra development difficulty manifest in games in the future?