I'm not going to post my resume. I've worked with virtualization since its inception; it's my job. MS is not talking much about how the "three OSes" run on the X1. The hypervisor layer, or virtual host, has direct access to the hardware layer of the system and runs virtual machines on top of itself. The reason I think this is important to understand is that anything running as a guest on the hypervisor is not going to get 100% access to the hardware. The X1 has a hypervisor-integrated OS that hosts two guest OSes: one is the X1 gaming OS and the other is the apps OS.
The hypervisor manages what resources go to each virtual machine. Microsoft is almost certainly running a modified version of the newest Hyper-V available in Windows Server. This means the host running the hypervisor is dynamically moving resources around as applications request them. While this is pretty cool, it's not something that takes 0% resources to do. Dynamic resource management burns CPU cycles, and the host OS always needs to reserve a good chunk of CPU, memory, and disk resources for itself.
To put it simply, Microsoft's virtual architecture on the X1 is stealing much-needed resources from a console that is already underpowered. If the X1 were a hardcore gaming machine, games would get bare-metal access. Instead we're stuck with a middle man here. And we all know the middle man always gets his cut.
I'm pasting some information from Microsoft's documentation on Hyper-V (MS's virtualization technology) below. I think it's pretty clear. It will give you an idea of the performance penalties the guests pay: CPU, memory, and disk access all take a hit. Even the GPU will take a small one. It won't be a huge hit, maybe 2-3%, but still... things just start adding up. This isn't a system that is 100% efficient. MS really did go with the jack-of-all-trades, master-of-none approach.
I guess I'm just a little jaded about what MS has decided is next gen. Virtualization is a really cool technology, but when you pair it with an overwhelmingly underpowered system, you have to wonder how much MS really cares about gamers. I guess we know why MS went with DDR3 memory: this kind of architecture would not work well with GDDR5, and I doubt GDDR5 was even an option unless they were going to do split memory pools. This was on purpose and was probably set in stone much further back than people realize.
The CPU overhead associated with running a guest operating system in a Hyper-V virtual machine was found to range between 9 and 12%. For example, a guest operating system running on a Hyper-V virtual machine typically had available 88-91% of the CPU resources available to an equivalent operating system running on physical hardware.
The memory cost associated with running a guest operating system on a Hyper-V virtual machine was observed to be approximately 300 MB for the hypervisor, plus 32 MB for the first GB of RAM allocated to each virtual machine, plus another 8 MB for every additional GB of RAM allocated to each virtual machine. For more information about allocating memory to guest operating systems running on a Hyper-V virtual machine, see the Optimizing Memory Performance section in Optimizing Performance on Hyper-V.
Network latency directly attributable to running a guest operating system in a Hyper-V virtual machine was observed to be less than 1 ms and the guest operating system typically maintained a network output queue length of less than one. For more information about measuring the network output queue length, see the Measuring Network Performance section in Measuring Performance on Hyper-V.
When using the pass-through disk feature in Hyper-V, disk I/O overhead associated with running a guest operating system in a Hyper-V virtual machine was found to range between 6 and 8%. For example, a guest operating system running on Hyper-V typically had available 92-94% of the disk I/O available to an equivalent operating system running on physical hardware as measured by the open source disk performance benchmarking tool IOMeter.
For information about measuring disk latency on a Hyper-V host or guest operating system using Performance Monitor, see the Measuring Disk I/O Performance section in Measuring Performance on Hyper-V.
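To put the quoted numbers together, here's a quick back-of-the-envelope sketch. The 9-12% CPU figure, 6-8% disk figure, and memory formula all come from the documentation above; the 3% GPU figure is my own guess from earlier in the post, and the 5 GB / 2 GB guest split is a made-up example, not the X1's actual allocation:

```python
def hyperv_memory_overhead_mb(vm_allocations_gb):
    """Memory overhead per the quoted rule of thumb: 300 MB for the
    hypervisor itself, plus 32 MB for the first GB and 8 MB for each
    additional GB allocated to every virtual machine."""
    overhead = 300  # hypervisor baseline
    for gb in vm_allocations_gb:
        overhead += 32 + 8 * (gb - 1)  # per-VM cost
    return overhead

# Worst-case penalties from the quoted docs (the GPU number is my guess,
# not from the documentation)
penalties = {"CPU": 0.12, "Disk I/O": 0.08, "GPU": 0.03}
for resource, penalty in penalties.items():
    print(f"{resource}: guest keeps {1 - penalty:.0%} of bare metal")

# Example: a 5 GB game OS plus a 2 GB apps OS (hypothetical split)
print(f"Memory overhead: {hyperv_memory_overhead_mb([5, 2])} MB")
# -> Memory overhead: 404 MB
```

So even before the game gets a single frame rendered, the middle man has already taken roughly a tenth of the CPU, a twelfth of the disk, and a few hundred MB of RAM off the top.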