Google Announces Stadia, A Powerful New Game Streaming Service

#101 deactivated-5d78760d7d740
Member since 2009 • 16386 Posts

@lundy86_4: Lol the housing market is a mess right now. It inspired so many memes of the worst places going for obscene prices.

Imo internet speeds past 100 Mbps are useless (if you're using it for everyday things). I got 3 months of 1 Gbps free as a promotion and didn't even notice the difference between that and my usual 300 Mbps, since 100 Mbps is enough for fast downloads + 4K streaming.
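For a rough sanity check of that claim, here's a minimal sketch; the 50 GB game size and the 25 Mbps 4K stream rate are illustrative assumptions, not figures from this thread:

```python
# Rough illustration of how much link speed matters for everyday use.
# Assumed figures (not from the thread): a 50 GB game download running
# alongside a ~25 Mbps 4K video stream.

GAME_SIZE_GB = 50
STREAM_MBPS = 25

def download_minutes(link_mbps, game_gb=GAME_SIZE_GB, stream_mbps=STREAM_MBPS):
    """Minutes to pull the game while a 4K stream uses part of the link."""
    spare_mbps = link_mbps - stream_mbps        # bandwidth left for the download
    game_megabits = game_gb * 8 * 1000          # GB -> megabits (decimal units)
    return game_megabits / spare_mbps / 60

for link in (100, 300, 1000):
    print(f"{link:>4} Mbps link: ~{download_minutes(link):.0f} min "
          f"for a {GAME_SIZE_GB} GB game while streaming 4K")
```

Even at 100 Mbps there is roughly 4x headroom over the stream itself; the faster tiers mainly shorten very large downloads.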

I imagine tourists would get pretty annoying for people who live there lol. Do you see them very frequently around your neighbourhood?

#102  Edited By ronvalencia
Member since 2008 • 29612 Posts

@michaelmikado said:

@ronvalencia:

No, the video is wrong; what he is suggesting is literally impossible. The CPU and GPU cannot be on two separate packages while the CPU utilizes VRAM from the GPU. This configuration is only feasible when the CPU and GPU are on a single package and bus, with access to the same memory controller.

Edit: to be fair, later in the article they claim that the HBM2 is shared, which contradicts what they claimed earlier, but they still state that they are separate packages. It sounds like there's a mix of marketing and technical information, so I'll wait for more info, but whether the HBM2 is just VRAM or not is a basic spec to be contradicting themselves on.

FALSE.

The original Xbox 360 had separate CPU and GPU/NB/MCH packages with a unified GDDR3 memory architecture.

The separate CPU package is connected to the GPU package, which is in turn connected to the unified GDDR3 memory.

A PC's CPU can access a GPU's VRAM via PCI-E links. The 1990s PCI protocol even supported server RAM expansion cards in PCI expansion slots, and PCI-E still runs the PCI protocol.

Windows NT's HAL wasn't designed to register memory pools in a GPU's VRAM as system memory; Linux's flexibility doesn't have Windows NT's rigidity.

#103  Edited By lundy86_4
Member since 2003 • 61534 Posts

@XVision84: Yeah, 100+ can be largely unnecessary... I guess it depends on the servers on the other end as to what speeds you actually get. I download games pretty damn quick on major suppliers like Steam.

Technically I'm in Virgil, which is the throughway to the old town (it's also the armpit of NOTL as it's newer lol)... Fewer tourists here, but I used to hit the LCBO in the old town and it was a nightmare... We just had a new LCBO open in Virgil and I live a 1-minute drive away lol.

#104 michaelmikado
Member since 2019 • 406 Posts

@ronvalencia said:

@michaelmikado said:

@ronvalencia:

No, the video is wrong; what he is suggesting is literally impossible. The CPU and GPU cannot be on two separate packages while the CPU utilizes VRAM from the GPU. This configuration is only feasible when the CPU and GPU are on a single package and bus, with access to the same memory controller.

Edit: to be fair, later in the article they claim that the HBM2 is shared, which contradicts what they claimed earlier, but they still state that they are separate packages. It sounds like there's a mix of marketing and technical information, so I'll wait for more info, but whether the HBM2 is just VRAM or not is a basic spec to be contradicting themselves on.

FALSE.

The original Xbox 360 had separate CPU and GPU/NB/MCH packages with a unified GDDR3 memory architecture.

The separate CPU package is connected to the GPU package, which is in turn connected to the unified GDDR3 memory.

A PC's CPU can access a GPU's VRAM via PCI-E links. The 1990s PCI protocol even supported server RAM expansion cards in PCI expansion slots, and PCI-E still runs the PCI protocol.

Windows NT's HAL wasn't designed to register memory pools in a GPU's VRAM as system memory; Linux's flexibility doesn't have Windows NT's rigidity.

The 360 having separate packages was only possible because the CPU and GPU had direct access to each other's caches and weren't required to go through a memory bus. While I can't speak for Linux, this only works if they've replicated the Xbox 360 cache scheme at the server level, which would arguably be a larger accomplishment. It makes no sense from a performance standpoint, and even if they did, the cost would be prohibitive for that kind of customization when there are many off-the-shelf options. That's ignoring the fact that the cache differences don't mean they are the same chips; they could have made any number of changes to the cache, as stated.

#105  Edited By ronvalencia
Member since 2008 • 29612 Posts

@michaelmikado said:
@ronvalencia said:

@michaelmikado said:

@ronvalencia:

No, the video is wrong; what he is suggesting is literally impossible. The CPU and GPU cannot be on two separate packages while the CPU utilizes VRAM from the GPU. This configuration is only feasible when the CPU and GPU are on a single package and bus, with access to the same memory controller.

Edit: to be fair, later in the article they claim that the HBM2 is shared, which contradicts what they claimed earlier, but they still state that they are separate packages. It sounds like there's a mix of marketing and technical information, so I'll wait for more info, but whether the HBM2 is just VRAM or not is a basic spec to be contradicting themselves on.

FALSE.

The original Xbox 360 had separate CPU and GPU/NB/MCH packages with a unified GDDR3 memory architecture.

The separate CPU package is connected to the GPU package, which is in turn connected to the unified GDDR3 memory.

A PC's CPU can access a GPU's VRAM via PCI-E links. The 1990s PCI protocol even supported server RAM expansion cards in PCI expansion slots, and PCI-E still runs the PCI protocol.

Windows NT's HAL wasn't designed to register memory pools in a GPU's VRAM as system memory; Linux's flexibility doesn't have Windows NT's rigidity.

The 360 having separate packages was only possible because the CPU and GPU had direct access to each other's caches and weren't required to go through a memory bus. While I can't speak for Linux, this only works if they've replicated the Xbox 360 cache scheme at the server level, which would arguably be a larger accomplishment. It makes no sense from a performance standpoint, and even if they did, the cost would be prohibitive for that kind of customization when there are many off-the-shelf options. That's ignoring the fact that the cache differences don't mean they are the same chips; they could have made any number of changes to the cache, as stated.

Did you forget GCN's x86-64 pointer compatibility?

GCN is designed for fusion with an x86 CPU, regardless of whether it's a discrete CPU + discrete GPU combo or an APU.

An x86 CPU can pass its x86-64 pointers to a GCN IGP or a GCN dGPU.

Both x86 and GCN use little-endian data formats.

PS5 dev kits are Ryzen machines with Vega 56/64 or Radeon VII, which can run PS4 games, e.g. GTS at 8K resolution. Sony could release a high-end AMD desktop PC that can run PS4 games.

PS4's fusion link is faster than PCI-E x16 version 2.0, slightly slower than PCI-E x16 version 3.0, and substantially slower than the incoming PCI-E x16 version 4.0.

---

MI60/MI50, Zen 2, and the X570 chipset support PCI-E version 4.0.
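For reference, a minimal sketch of the peak x16 bandwidth per PCI-E generation. The per-lane rates and line encodings are standard PCI-E figures; where the PS4 fusion link sits relative to them is the post's own claim, not something computed here:

```python
# Peak one-directional bandwidth of a PCI-E x16 link per generation.

LANES = 16

# (raw GT/s per lane, encoding efficiency)
GENERATIONS = {
    "PCI-E 2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "PCI-E 3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "PCI-E 4.0": (16.0, 128 / 130),
}

for name, (gt_per_lane, efficiency) in GENERATIONS.items():
    gb_per_s = gt_per_lane * efficiency * LANES / 8  # GT/s -> GB/s across 16 lanes
    print(f"{name} x16: ~{gb_per_s:.1f} GB/s per direction")

# Per the post above, PS4's CPU<->GPU fusion link falls between the
# 2.0 (~8 GB/s) and 3.0 (~15.8 GB/s) figures.
```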

#106 Heil68
Member since 2004 • 60721 Posts

I would be interested if the games are good.

#107  Edited By ronvalencia
Member since 2008 • 29612 Posts

@qx0d said:

https://www.youtube.com/watch?v=wi9DtYSOMks

https://www.gameinformer.com/gdc-2019/2019/03/19/google-announces-stadia-a-powerful-new-game-streaming-service

Google CEO Sundar Pichai admits he's not a big gamer, but he's got big plans for gamers around the world. Today Google revealed its new game streaming platform, Stadia, with a stated goal to bring the best games to everyone in the world. "When we say for everyone, we really mean it. It's one of our most cherished values at the company," Pichai says.

Here's everything we know about the platform right now.

Reasons for NVIDIA skipping HBM v2, and the argument for GDDR6:

From http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/

Vega 64's HBM v2 has 484 GB/s theoretical and 303 GB/s practical memory bandwidth before memory compression is applied. Efficiency is 62.60 percent.

RX-580's GDDR5 has 256 GB/s theoretical and 193 GB/s practical memory bandwidth before memory compression is applied. Efficiency is 75.39 percent.

-----------

For Xbox Anaconda and PS5, assume GDDR6 has a similar practical memory bandwidth efficiency to GDDR5, i.e. 75.39 percent.

Candidates

PS4-like, with a 256-bit PCB/motherboard:

GDDR6-13000 x 256-bit yields an estimated 313.62 GB/s practical memory bandwidth. <----- No-brainer as to why Navi's RX-580 replacement lands in the Vega 56/64 range.

GDDR6-14000 x 256-bit yields an estimated 337.75 GB/s practical memory bandwidth. <----- No-brainer as to why Navi's RX-580 replacement lands in the Vega 56/64 range.

X1X-like, with a 384-bit PCB/motherboard:

GDDR6-13000 x 384-bit yields an estimated 470.43 GB/s practical memory bandwidth. <----- This configuration will crush Google's Stadia specs.

GDDR6-14000 x 384-bit yields an estimated 506.62 GB/s practical memory bandwidth. <----- This configuration will crush Google's Stadia specs.

MS's X1X has GDDR5-6800, which uses GDDR5-7000 chips, one step below GDDR5-8000. Pattern: MS selects one step below.

Sony's PS4 Pro has GDDR5-7000, which is one step below GDDR5-8000. Pattern: Sony selects one step below.

Both Sony and MS will crush Google's Stadia box.

The GDDR6-13000 x 384-bit config's 470.43 GB/s has a larger gap over Vega 64's HBM v2 at 303 GB/s than X1X has over PS4 Pro.

A GDDR6-13000 x 384-bit config would be my candidate for Xbox Anaconda to replace Xbox One X, i.e. a drop-in replacement of GDDR5 with GDDR6.
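For reference, the practical-bandwidth estimates above can be reproduced with simple arithmetic; a minimal sketch, where the only assumption (as the post itself states) is carrying GDDR5's ~75.39 percent efficiency over to GDDR6:

```python
# Reproduces the practical-bandwidth estimates above: theoretical bandwidth
# is data rate x bus width, scaled by GDDR5's measured efficiency.
# The efficiency figure comes from the RX-580 numbers quoted in the post;
# applying it to GDDR6 is the post's assumption.

EFFICIENCY = 193 / 256  # RX-580: 193 GB/s practical / 256 GB/s theoretical ~= 0.7539

def practical_bw(data_rate_mtps, bus_width_bits, efficiency=EFFICIENCY):
    """Estimated practical bandwidth in GB/s for a GDDR6 configuration."""
    theoretical_gbps = data_rate_mtps * bus_width_bits / 8 / 1000  # MT/s x bits -> GB/s
    return theoretical_gbps * efficiency

for rate in (13000, 14000):
    for bus in (256, 384):
        print(f"GDDR6-{rate} x {bus}-bit: ~{practical_bw(rate, bus):.2f} GB/s practical")
```

This prints ~313.6, ~470.4, ~337.7, and ~506.6 GB/s, matching the four candidates listed above.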

#108 dzimm
Member since 2006 • 6615 Posts

If streaming is the future of gaming, then gaming has no future. Games will become disposable entertainment with no longevity. As soon as a game fails to turn an acceptable profit, *POOF*, it'll be scrubbed from the server to preserve bandwidth and make room for the next flavor of the month.

#109  Edited By michaelmikado
Member since 2019 • 406 Posts

@ronvalencia said:
@michaelmikado said:
@ronvalencia said:

@michaelmikado said:

@ronvalencia:

No, the video is wrong; what he is suggesting is literally impossible. The CPU and GPU cannot be on two separate packages while the CPU utilizes VRAM from the GPU. This configuration is only feasible when the CPU and GPU are on a single package and bus, with access to the same memory controller.

Edit: to be fair, later in the article they claim that the HBM2 is shared, which contradicts what they claimed earlier, but they still state that they are separate packages. It sounds like there's a mix of marketing and technical information, so I'll wait for more info, but whether the HBM2 is just VRAM or not is a basic spec to be contradicting themselves on.

FALSE.

The original Xbox 360 had separate CPU and GPU/NB/MCH packages with a unified GDDR3 memory architecture.

The separate CPU package is connected to the GPU package, which is in turn connected to the unified GDDR3 memory.

A PC's CPU can access a GPU's VRAM via PCI-E links. The 1990s PCI protocol even supported server RAM expansion cards in PCI expansion slots, and PCI-E still runs the PCI protocol.

Windows NT's HAL wasn't designed to register memory pools in a GPU's VRAM as system memory; Linux's flexibility doesn't have Windows NT's rigidity.

The 360 having separate packages was only possible because the CPU and GPU had direct access to each other's caches and weren't required to go through a memory bus. While I can't speak for Linux, this only works if they've replicated the Xbox 360 cache scheme at the server level, which would arguably be a larger accomplishment. It makes no sense from a performance standpoint, and even if they did, the cost would be prohibitive for that kind of customization when there are many off-the-shelf options. That's ignoring the fact that the cache differences don't mean they are the same chips; they could have made any number of changes to the cache, as stated.

Did you forget GCN's x86-64 pointer compatibility?

GCN is designed for fusion with an x86 CPU, regardless of whether it's a discrete CPU + discrete GPU combo or an APU.

An x86 CPU can pass its x86-64 pointers to a GCN IGP or a GCN dGPU.

Both x86 and GCN use little-endian data formats.

PS5 dev kits are Ryzen machines with Vega 56/64 or Radeon VII, which can run PS4 games, e.g. GTS at 8K resolution. Sony could release a high-end AMD desktop PC that can run PS4 games.

PS4's fusion link is faster than PCI-E x16 version 2.0, slightly slower than PCI-E x16 version 3.0, and substantially slower than the incoming PCI-E x16 version 4.0.

---

MI60/MI50, Zen 2, and the X570 chipset support PCI-E version 4.0.

This is exactly what I stated, though. GCN is able to use either traditional VRAM or system RAM by design, specifically in an APU. The slide you show literally explains the exact problem I described: trying to have the GPU and CPU both access the RAM simultaneously through different buses, without any direct communication between the two, is the issue. AMD's solution is the same as the 360's, where the GPU and CPU communicate directly via cache instead of having two different access points to the memory controller. The speed of the bus doesn't matter if the flow of data cannot be managed correctly between two computing units competing for memory resources. Again, I'm not saying Google didn't do that, only that for this to work they would need to give the GPU and CPU direct access to each other. It also doesn't mean they aren't using 7601s, as they could just customize the caches to be shared with the GPU cores and attach only stacks of VRAM.

The problem with this is that it would require a very custom setup when they basically already have premade, equivalent off-the-shelf parts. There would not be a lot of benefit in designing a custom spec in this manner besides the heat and space savings.

#110 michaelmikado
Member since 2019 • 406 Posts

@ronvalencia said:
@qx0d said:

https://www.youtube.com/watch?v=wi9DtYSOMks

https://www.gameinformer.com/gdc-2019/2019/03/19/google-announces-stadia-a-powerful-new-game-streaming-service

Google CEO Sundar Pichai admits he's not a big gamer, but he's got big plans for gamers around the world. Today Google revealed its new game streaming platform, Stadia, with a stated goal to bring the best games to everyone in the world. "When we say for everyone, we really mean it. It's one of our most cherished values at the company," Pichai says.

Here's everything we know about the platform right now.

Reasons for NVIDIA skipping HBM v2, and the argument for GDDR6:

From http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/

Vega 64's HBM v2 has 484 GB/s theoretical and 303 GB/s practical memory bandwidth before memory compression is applied. Efficiency is 62.60 percent.

RX-580's GDDR5 has 256 GB/s theoretical and 193 GB/s practical memory bandwidth before memory compression is applied. Efficiency is 75.39 percent.

-----------

For Xbox Anaconda and PS5, assume GDDR6 has a similar practical memory bandwidth efficiency to GDDR5, i.e. 75.39 percent.

Candidates

PS4-like, with a 256-bit PCB/motherboard:

GDDR6-13000 x 256-bit yields an estimated 313.62 GB/s practical memory bandwidth. <----- No-brainer as to why Navi's RX-580 replacement lands in the Vega 56/64 range.

GDDR6-14000 x 256-bit yields an estimated 337.75 GB/s practical memory bandwidth. <----- No-brainer as to why Navi's RX-580 replacement lands in the Vega 56/64 range.

X1X-like, with a 384-bit PCB/motherboard:

GDDR6-13000 x 384-bit yields an estimated 470.43 GB/s practical memory bandwidth. <----- This configuration will crush Google's Stadia specs.

GDDR6-14000 x 384-bit yields an estimated 506.62 GB/s practical memory bandwidth. <----- This configuration will crush Google's Stadia specs.

MS's X1X has GDDR5-6800, which uses GDDR5-7000 chips, one step below GDDR5-8000. Pattern: MS selects one step below.

Sony's PS4 Pro has GDDR5-7000, which is one step below GDDR5-8000. Pattern: Sony selects one step below.

Both Sony and MS will crush Google's Stadia box.

The GDDR6-13000 x 384-bit config's 470.43 GB/s has a larger gap over Vega 64's HBM v2 at 303 GB/s than X1X has over PS4 Pro.

A GDDR6-13000 x 384-bit config would be my candidate for Xbox Anaconda to replace Xbox One X, i.e. a drop-in replacement of GDDR5 with GDDR6.

You can't hyperfocus on the bandwidth as the main decider.

A 384-bit bus is going to increase cost, heat, and board space, all of which factor into the end price. HBM2 is a fraction of the size, space, and heat of GDDR6 chips. In a blade server or a console these factors are high priorities too, and it's not beyond reason that consoles could use small pools of HBM2 the way they used to use eSRAM or other high-bandwidth memory pools in the past. Remember, the PS4's unified memory without a smaller high-bandwidth pool was the exception, not the rule.
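A minimal sketch of the device counts behind that trade-off; the interface widths are the standard ones (32 bits per GDDR6 device, 1024 bits per HBM2 stack), while board cost and heat are not modeled:

```python
# Rough device-count comparison for wide GDDR6 buses vs HBM2 stacks.

GDDR6_BITS_PER_DEVICE = 32
HBM2_BITS_PER_STACK = 1024

for bus_bits in (256, 384):
    gddr6_devices = bus_bits // GDDR6_BITS_PER_DEVICE
    print(f"{bus_bits}-bit GDDR6 bus: {gddr6_devices} discrete devices routed across the PCB")

# A single HBM2 stack already exposes a 1024-bit interface, but it sits on an
# interposer next to the GPU rather than as chips spread over the board.
print(f"HBM2: {HBM2_BITS_PER_STACK}-bit interface per stack (Vega 64 uses two stacks)")
```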

#111 Guy_Brohski
Member since 2013 • 2221 Posts

OnLive ver 2.0 looks cool..

#112  Edited By ronvalencia
Member since 2008 • 29612 Posts

@michaelmikado said:
@ronvalencia said:
@qx0d said:

https://www.youtube.com/watch?v=wi9DtYSOMks

https://www.gameinformer.com/gdc-2019/2019/03/19/google-announces-stadia-a-powerful-new-game-streaming-service

Google CEO Sundar Pichai admits he's not a big gamer, but he's got big plans for gamers around the world. Today Google revealed its new game streaming platform, Stadia, with a stated goal to bring the best games to everyone in the world. "When we say for everyone, we really mean it. It's one of our most cherished values at the company," Pichai says.

Here's everything we know about the platform right now.

Reasons for NVIDIA skipping HBM v2, and the argument for GDDR6:

From http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/

Vega 64's HBM v2 has 484 GB/s theoretical and 303 GB/s practical memory bandwidth before memory compression is applied. Efficiency is 62.60 percent.

RX-580's GDDR5 has 256 GB/s theoretical and 193 GB/s practical memory bandwidth before memory compression is applied. Efficiency is 75.39 percent.

-----------

For Xbox Anaconda and PS5, assume GDDR6 has a similar practical memory bandwidth efficiency to GDDR5, i.e. 75.39 percent.

Candidates

PS4-like, with a 256-bit PCB/motherboard:

GDDR6-13000 x 256-bit yields an estimated 313.62 GB/s practical memory bandwidth. <----- No-brainer as to why Navi's RX-580 replacement lands in the Vega 56/64 range.

GDDR6-14000 x 256-bit yields an estimated 337.75 GB/s practical memory bandwidth. <----- No-brainer as to why Navi's RX-580 replacement lands in the Vega 56/64 range.

X1X-like, with a 384-bit PCB/motherboard:

GDDR6-13000 x 384-bit yields an estimated 470.43 GB/s practical memory bandwidth. <----- This configuration will crush Google's Stadia specs.

GDDR6-14000 x 384-bit yields an estimated 506.62 GB/s practical memory bandwidth. <----- This configuration will crush Google's Stadia specs.

MS's X1X has GDDR5-6800, which uses GDDR5-7000 chips, one step below GDDR5-8000. Pattern: MS selects one step below.

Sony's PS4 Pro has GDDR5-7000, which is one step below GDDR5-8000. Pattern: Sony selects one step below.

Both Sony and MS will crush Google's Stadia box.

The GDDR6-13000 x 384-bit config's 470.43 GB/s has a larger gap over Vega 64's HBM v2 at 303 GB/s than X1X has over PS4 Pro.

A GDDR6-13000 x 384-bit config would be my candidate for Xbox Anaconda to replace Xbox One X, i.e. a drop-in replacement of GDDR5 with GDDR6.

You can't hyperfocus on the bandwidth as the main decider.

A 384-bit bus is going to increase cost, heat, and board space, all of which factor into the end price. HBM2 is a fraction of the size, space, and heat of GDDR6 chips. In a blade server or a console these factors are high priorities too, and it's not beyond reason that consoles could use small pools of HBM2 the way they used to use eSRAM or other high-bandwidth memory pools in the past. Remember, the PS4's unified memory without a smaller high-bandwidth pool was the exception, not the rule.

PS4 has a direct CPU-to-GPU link, and the PS4 programmer's CPU optimization guide recommends keeping game code within the L2 cache boundary, which is similar to AMD's K7 optimization guide.

PS4 is not an exception, since PS4 Pro and X1X followed PS4's basic design. The PS4 GPU was optimized for async compute and TMU read/write, with 512 KB of L2 cache for compute micro-tile rendering.

The X1X GPU's difference is that its ROPs include a 2 MB render cache.

Both PS4 Pro and X1X GPUs have Polaris's 2 MB L2 cache.

Maxwell introduced micro-tile cache rendering on its L2 cache with its TMU and ROP read/write units.

Vega 56/64 introduced micro-tile cache rendering on its 4 MB L2 cache with its TMU and ROP read/write units.

The X1X GPU's TMUs have a 2 MB L2 cache and its ROPs have a 2 MB render cache, hence X1X's GPU is a halfway house to Vega 56's 4 MB L2 cache design.

XBO's 32 MB eSRAM bandwidth is NOT substantially higher than PS4's GDDR5 bandwidth. Microsoft didn't repeat XBO's eSRAM design!

NVIDIA is winning the absolute GPU performance argument, and AMD does NOT have design authority.

------

https://www.reddit.com/r/Amd/comments/ao43xl/radeon_vii_insanely_overvolted_undervolting/

Radeon VII is overvolted; a console GPU would have little provision for PC overclocking.

RTX 2080 has a 215-watt TDP. Undervolting enables the VII to rival RTX 2080's performance per watt.

The VII's overvolting shows AMD's lazy craftsmanship. End users should NOT have to do AMD's quality assurance!

Microsoft designed its own tight voltage regulators and power management circuits to reduce Xbox One X's power consumption.

This is a repeated pattern of working around AMD's half-assed power management circuitry.
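A minimal perf-per-watt sketch of the undervolting point; the 215 W RTX 2080 TDP is from the post, while the Radeon VII stock and undervolted board-power figures, and the assumption of roughly equal delivered performance, are illustrative assumptions only:

```python
# Illustrative performance-per-watt comparison for the undervolting argument.
# Assumed figures: Radeon VII ~300 W stock and ~230 W undervolted; treating
# frame rates as roughly equal between the cards is also an assumption.

RELATIVE_PERFORMANCE = 1.0  # assume similar frame rates for this sketch

cards = {
    "RTX 2080 (stock)": 215,                  # TDP cited in the post
    "Radeon VII (stock, assumed)": 300,
    "Radeon VII (undervolted, assumed)": 230,
}

baseline = RELATIVE_PERFORMANCE / cards["RTX 2080 (stock)"]
for name, watts in cards.items():
    perf_per_watt = RELATIVE_PERFORMANCE / watts
    print(f"{name}: {perf_per_watt / baseline:.2f}x the RTX 2080's perf/W")
```

Under these assumptions the VII goes from roughly 0.7x to roughly 0.9x of the RTX 2080's perf/W after undervolting, which is the shape of the claim above.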

#113 General_Solo76
Member since 2013 • 578 Posts

This thing is going to fade into oblivion just like the Nokia N-Gage and Ouya did

#114 Son-Goku7523
Member since 2019 • 955 Posts
@evilross said:

I really hate to see this moving forward. Streaming is bad for consumers, and bad for developers. It’s only good for the service provider and publishers.

You want to see a company like Google come in and put a stranglehold on the games industry?

You're going to be renting perpetually. You'll have no ownership, no rights, and the hardware you do have is going to be worth nothing. You're going to be paying your ISP, then paying your different streaming providers, and have no ownership whatsoever. On top of that, I guarantee you AAA games are NOT all going to be included under a one-price umbrella. You'll get access to thousands of mobile-quality games filled with microtransactions, and then you will pay on top of all that for other software streams from big-name publishers. Want to play Madden? Then subscribe to the EA channel for an additional 9.99 a month....

Developers are going to be shafted as well, getting paid a flat rate for their work, like hourly employees of the publishing houses. Quality is going to suffer, creativity is going to suffer, and the consumer is going to be put on the back burner and seen as nothing but a revenue source for the providers.

I'm ashamed any of my fellow gamers out there welcome this at all.

And let's not even talk about loss of internet service, or bandwidth restrictions. Not very long ago people were screaming about DRM, and now you're going to tell me you want to accept games as a service???

What has happened to you guys? This is bad, bad, bad.

I agree with you. This is a very bad turn for the industry.