Is there any desktop PC that can load a fast travel in 0.8 seconds in a modern open-world game?

#1  Edited By AcePilot
Member since 2017 • 98 Posts

I think Sony somehow managed to make something that no current-gen PC can match.

Should the PC Master Race freak out over this? What do you guys think?

#2  Edited By lundy86_4
Member since 2003 • 61478 Posts

We have zero idea what is going on in the background, or whether anything is pre-loaded in advance to give the impression that it's loading faster. At the end of the day, let's all just wait and see what happens.

#3 Gatygun
Member since 2010 • 2709 Posts

I don't think you understand the point of the demonstration even remotely.

#4 JasonOfA36
Member since 2016 • 3725 Posts

@Gatygun said:

I don't think you understand the point of the demonstration even remotely.

#5  Edited By AcePilot
Member since 2017 • 98 Posts

@Gatygun: Yes, it was a demonstration on a low-speed devkit according to the Wired article, which means it should perform even better on the retail hardware.

#6 npiet1
Member since 2018 • 3576 Posts

@acepilot: It's a bit harder to compare it to a PC. Just like with GPUs, you can say it's similar to this or that, but it's still different. The answer is probably no, but there are reasons for that, like the console not running a lot in the background. I've got a feeling the PS5 just stores everything in RAM, and that's why you're seeing such insane speeds in the demo. The SSD helps with the initial load, which is slower and not shown for that reason.

#7  Edited By Gatygun
Member since 2010 • 2709 Posts

@acepilot said:

@Gatygun: Yes, it was a demonstration on a low-speed devkit according to the Wired article, which means it should perform even better on the retail hardware.

You don't understand what was demonstrated. The number itself means nothing. It's about the performance increase over a 5400 rpm disk, and about how developers can now push far more data through without the endless load times; that moves things forward drastically. This generation was particularly bad with load times, because games kept shipping more and more data without a faster hard drive to match.

With the PS5, devs needed a better storage solution, as a mechanical drive wasn't going to cope with larger data files. Sony fixed this. The Spider-Man demonstration just means that devs have more room to work with and can push load times down while pushing more data through at the same time.

For example, PS3/Xbox 360 games could run entirely in the PS4's memory pool and have no loading as a result; yet we now sit with even more data than before, because data keeps getting pushed forward.

This generation Spidey is 5 GB that needs to be read in at load time; next generation Spidey is 40 GB. It loads the same, and quality goes up.

But if you want to compare one useless number against another useless number that means absolutely nothing, here you go.

The only question is whether Microsoft will use the same solution, or whether they'll basically be the crippling factor for next-gen multiplatform games. PC will adopt whatever is best and build on it, if the tech really is ahead of PC right now.

Personally, I'm not a fan of SSDs in consoles. This means SSDs could become the requirement for next-generation games, which will result in longer load times across the board for everybody involved.

Sadly it's needed, as consoles are hitting a limit on this front without it.

#8 Son-Goku7523
Member since 2019 • 955 Posts

No, and that's why it's Sony's proprietary technology that they patented. It will be interesting to see if other manufacturers come out with their own alternatives soon for PC.

#9 lundy86_4
Member since 2003 • 61478 Posts

@son-goku7523 said:

No, and that's why it's Sony's proprietary technology that they patented. It will be interesting to see if other manufacturers come out with their own alternatives soon for PC.

What was patented?

#10 Kali-B1rd
Member since 2018 • 2241 Posts

@lundy86_4 said:
@son-goku7523 said:

No, and that's why it's Sony's proprietary technology that they patented. It will be interesting to see if other manufacturers come out with their own alternatives soon for PC.

What was patented?

Magic, Sony invented better hardware than the actual hardware devs.

Absolute magic tech.

The Cell. v2.0

#11 npiet1
Member since 2018 • 3576 Posts

@lundy86_4 said:
@son-goku7523 said:

No, and that's why it's Sony's proprietary technology that they patented. It will be interesting to see if other manufacturers come out with their own alternatives soon for PC.

What was patented?

The custom SSD and the PSVR 2.0.

#12  Edited By lundy86_4
Member since 2003 • 61478 Posts

@kali-b1rd said:

Magic, Sony invented better hardware than the actual hardware devs.

Absolute magic tech.

The Cell. v2.0

If @son-goku7523 comes forth with an actual patent, I'd be shocked...

#13  Edited By lundy86_4
Member since 2003 • 61478 Posts

@npiet1 said:

The custom SSD and The PSVR 2.0

Could you show the patent? The PSVR 2.0 isn't relevant to the topic.

#14  Edited By Son-Goku7523
Member since 2019 • 955 Posts

@lundy86_4 said:
@kali-b1rd said:

Magic, Sony invented better hardware than the actual hardware devs.

Absolute magic tech.

The Cell. v2.0

If @son-goku7523 comes forth with an actual patent, I'd be shocked...

Here you go, an article linking to the Custom SSD and Patent discovered by a ResetEra user.

Unlike clowns on SW I don't post anything without confirming it first. People here like talking out of their butts and don't post any links to back up their claim. I don't...

As demonstrated in the meeting, PlayStation 5 takes full advantage of a new SSD that allows it to load entire levels in Spider-Man in 0.83 seconds.

According to a new patent discovered by ResetEra user gofreak, Sony’s SSD for the PlayStation 5 isn’t a casual one. The patent shows just how much Sony has customized PS5’s SSD.

Gofreak detailed some of the interesting points from Sony’s patent in his ResetEra thread:

– SRAM instead of DRAM inside the SSD for lower latency and higher throughput access between the flash memory controller and the address lookup data. The patent proposes using a coarser granularity of data access for data that is written once, and not re-written – e.g. game install data. This larger block size can allow for address lookup tables as small as 32KB, instead of 1GB. Data read by the memory controller can also be buffered in SRAM for ECC checks instead of DRAM (because of changes made further up the stack, described later).

– The SSD’s read unit is ‘expanded and unified’ for efficient read operations.

– A secondary CPU, a DMAC, and a hardware accelerator for decoding, tamper checking and decompression.

Of course, the main difference between SRAM and DRAM is their integrated-circuit design. SRAM is comparatively faster than DRAM; hence SRAM is used for cache memory while DRAM is used for main memory.

The whole patent...

This will be one for people interested in some potentially more technical speculation. I posted in the next-gen speculation thread, but was encouraged to spin it off into its own thread.

I did some patent diving to see if I could dig up any likely candidates for what Sony's SSD solution might be.

I found several Japanese SIE patents from Saito Hideyuki along with a single (combined?) US application that appear to be relevant.

The patents were filed across 2015 and 2016.

Caveat: This is an illustrative embodiment in a patent application. i.e. Maybe parts of it will make it into a product, maybe all of it, maybe none of it. Approach it speculatively.

That said, it perhaps gives an idea of what Sony has been researching. And does seem in line with what Cerny talked about in terms of customisations across the stack to optimise performance.

http://www.freepatentsonline.com/y2017/0097897.html

There's quite a lot going on, but to try and break it down:

It talks about the limitations of simply using an SSD 'as is' in a games system, and a set of hardware and software stack changes to improve performance.

Basically, 'as is', an OS uses a virtual file system, designed to virtualise a host of different I/O devices with different characteristics. Various tasks of this file system typically run on the CPU - e.g. traversing file metadata, data tamper checks, data decryption, data decompression. This processing, and interruptions on the CPU, can become a bottleneck to data transfer rates from an SSD, particularly in certain contexts e.g. opening a large number of small files.

At a lower level, SSDs typically employ a data block size aimed at generic use. They distribute blocks of data around the NAND memory to distribute wear. In order to find a file, the memory controller in the SSD has to translate a request to the physical addresses of the data blocks using a look-up table. In a regular SSD, the typical data block size might require a look-up table 1GB in size for a 1TB SSD. An SSD might typically use DRAM to cache that lookup table - so the memory controller consults DRAM before being able to retrieve the data. The patent describes this as another potential bottleneck.
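To put rough numbers on the lookup-table point above: a flat logical-to-physical map has one entry per block, so the fine granularity of a generic SSD needs on the order of a gigabyte of table per terabyte of flash, while a coarse write-once granularity shrinks it to tens of kilobytes. A back-of-envelope sketch (the 4-byte entry size and the 128 MiB coarse block are illustrative assumptions, not figures from the patent):

```python
def mapping_table_size(capacity_bytes: int, block_bytes: int, entry_bytes: int = 4) -> int:
    """Size of a flat logical-to-physical lookup table: one entry per block."""
    return (capacity_bytes // block_bytes) * entry_bytes

TiB = 2 ** 40

# Typical SSD: 4 KiB mapping granularity -> about 1 GiB of table per 1 TiB of flash.
fine = mapping_table_size(TiB, 4 * 1024)

# Patent-style coarse granularity for write-once install data:
# 128 MiB blocks -> a table small enough to pin entirely in on-controller SRAM.
coarse = mapping_table_size(TiB, 128 * 2 ** 20)

print(fine)    # 1073741824 bytes, i.e. 1 GiB
print(coarse)  # 32768 bytes, i.e. 32 KiB
```

The numbers line up with the 1GB-vs-32KB figures the patent summary quotes, which is the whole argument for dropping the DRAM cache.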

Here are the hardware changes the patent proposes vs a 'typical' SSD system:

- SRAM instead of DRAM inside the SSD for lower latency and higher throughput access between the flash memory controller and the address lookup data. The patent proposes using a coarser granularity of data access for data that is written once, and not re-written - e.g. game install data. This larger block size can allow for address lookup tables as small as 32KB, instead of 1GB. Data read by the memory controller can also be buffered in SRAM for ECC checks instead of DRAM (because of changes made further up the stack, described later). The patent also notes that by ditching DRAM, reduced complexity and cost may be possible, and cost will scale better with larger SSDs that would otherwise need e.g. 2GB of DRAM for 2TB of storage, and so on.

- The SSD's read unit is 'expanded and unified' for efficient read operations.

- A secondary CPU, a DMAC, and a hardware accelerator for decoding, tamper checking and decompression.

- The main CPU, the secondary CPU, the system memory controller and the IO bus are connected by a coherent bus. The patent notes that the secondary CPU can be different in instruction set etc. from the main CPU, as long as they use the same page size and are connected by a coherent bus.

- The hardware accelerator and the IO controller are connected to the IO bus.

An illustrative diagram of the system:

At a software level, the system adds a new file system, the 'File Archive API', designed primarily for write-once data like game installs. Unlike a more generic virtual file system, it's optimised for NAND data access. It sits at the interface between the application and the NAND drivers, and the hardware accelerator drivers.

The secondary CPU handles a priority on access to the SSD. When read requests are made through the File Archive API, all other read and write requests can be prohibited to maximise read throughput.
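That prioritisation can be pictured as a toy scheduler: while a bulk File Archive read is in flight, every other request is held back instead of dispatched. This is purely my illustration of the idea; the class and method names are made up, not anything from the patent:

```python
from collections import deque

class FileArchiveScheduler:
    """Toy model of the read-priority idea: while a bulk File Archive
    read is active, all other I/O requests are queued rather than
    dispatched, keeping the NAND channels free for the read."""

    def __init__(self):
        self.archive_read_active = False
        self.deferred = deque()
        self.dispatched = []

    def begin_archive_read(self):
        self.archive_read_active = True

    def end_archive_read(self):
        self.archive_read_active = False
        while self.deferred:  # release everything that was held back
            self.dispatched.append(self.deferred.popleft())

    def submit(self, request: str):
        if self.archive_read_active:
            self.deferred.append(request)
        else:
            self.dispatched.append(request)

sched = FileArchiveScheduler()
sched.begin_archive_read()
sched.submit("save-game write")  # held back during the bulk read
sched.end_archive_read()
print(sched.dispatched)          # ['save-game write']
```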

When a read request is made by the main CPU, it sends it to the secondary CPU, which splits the request into a larger number of small data accesses. It does this for two reasons - to maximise parallel use of the NAND devices and channels (the 'expanded read unit'), and to make blocks small enough to be buffered and checked inside the SSD SRAM. The metadata the secondary CPU needs to traverse is much simpler (and thus faster to process) than under a typical virtual file system.
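The splitting step above amounts to chopping one large read into buffer-sized chunks and striping them round-robin across the NAND channels. A sketch of that, where the 64 KiB chunk size and 8-channel count are made-up illustrative values (the patent doesn't publish either figure):

```python
def split_read(offset: int, length: int, chunk: int = 64 * 1024, channels: int = 8):
    """Split one large read into chunk-sized accesses, assigned
    round-robin to NAND channels so they can proceed in parallel."""
    accesses = []
    for i, pos in enumerate(range(offset, offset + length, chunk)):
        size = min(chunk, offset + length - pos)
        accesses.append({"channel": i % channels, "offset": pos, "size": size})
    return accesses

ops = split_read(0, 1 * 2 ** 20)           # one 1 MiB read
print(len(ops))                             # 16 chunks of 64 KiB
print(sorted({o["channel"] for o in ops}))  # spread across channels 0..7
```

Each small chunk also fits in the controller's SRAM for the ECC check, which is the second reason the patent gives for splitting.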

The NAND memory controller can be flexible about what granularity of data it uses - for data requests sent through the File Archive API, it uses granularities that allow the address lookup table to be stored entirely in SRAM for minimal bottlenecking. Other granularities can be used for data that needs to be rewritten more often - user save data for example. In these cases, the SRAM partially caches the lookup tables.

When the SSD has checked its retrieved data, it's sent from SSD SRAM to kernel memory in the system RAM. The hardware accelerator then uses a DMAC to read that data, do its processing, and then write it back to user memory in system RAM. The coordination of this happens with signals between the components, and not involving the main CPU. The main CPU is then finally signalled when data is ready, but is uninvolved until that point.

A diagram illustrating data flow:

Interestingly, for a patent, it describes in some detail the processing targets required of these various components in order to meet certain data transfer rates - what you would need in terms of timings from each of the secondary CPU, the memory controller and the hardware accelerator in order for them not to be a bottleneck on the NAND data speeds:

Though I wouldn't read too much into this, in most examples it talks about what you would need to support an end-to-end transfer rate of 10GB/s.

The patent is also silent on what exactly the IO bus would be - that would obviously be a key bottleneck itself on transfer rates out of the NAND devices. Until we know what that is, it's hard to know what the upper end on the transfer rates could be, but it seems a host of customisations are possible to try to maximise whatever that bus will support.

Once again, this is one described embodiment, not necessarily exactly what the PS5 solution will look like. But it is an idea of what Sony's been researching in how to customise an SSD and software stack for faster read throughput for installed game data.

#15 Xabiss
Member since 2012 • 4749 Posts

Well, consoles have an advantage over PCs in one regard: the hardware is standard across the platform, so developers can focus on aspects that take advantage of that. If a game developer focused on game loading just for SSDs on PC, I am pretty sure they could do the same thing Sony demonstrated. Unfortunately, that is much harder since the PC is an open hardware platform.

#16  Edited By lundy86_4
Member since 2003 • 61478 Posts

@son-goku7523: Colour me shocked, my man. As noted in the ResetEra post, there's no guarantee that any of it will be included. Is there any guarantee they're running off their patented SSD derivative?

#17 Son-Goku7523
Member since 2019 • 955 Posts

@lundy86_4 said:

@son-goku7523: Colour me shocked, my man. As noted in the ResetEra post, there's no guarantee that any of it will be included. Is there any guarantee they're running off their patented SSD derivative?

Well Mark Cerny did say it's considerably faster than any SSD solution available for PCs today so I guess from that bit of info it's safe to say it's using something similar to this patent or maybe something newer we aren't aware of. We'll just have to wait and see when the PS5 comes out.

It's impossible to say for certain whether it's using that particular patent or not this early in the game.

#18 lundy86_4
Member since 2003 • 61478 Posts

@son-goku7523 said:

Well Mark Cerny did say it's considerably faster than any SSD solution available for PCs today so I guess from that bit of info it's safe to say it's using something similar to this patent or maybe something newer we aren't aware of. We'll just have to wait and see when the PS5 comes out.

It's impossible to say for certain whether it's using that particular patent or not this early in the game.

True that. No way to say for certain. I'm excited to find out... But this thread's a dud.

#19 deactivated-5f4e2292197f1
Member since 2015 • 1374 Posts

In every game I play, I can't read what they say in cutscenes because they always go by so fast. In 2-5 years it'll be even faster.

#20 mrbojangles25
Member since 2005 • 58300 Posts

It's pretty incredible. I really haven't had issues with loading times since swapping to SSDs a few years ago--if loads are slow, that's more an issue with the game than the hardware--but if anything can reduce loading times, then I am all for it. What is good for Sony and the PS5 is, eventually, good for the rest of the community once the tech gets out there, and vice versa.

But, referring to my earlier point, will games take advantage of it? Will developers code their games to specifically take advantage of it?

#21  Edited By ronvalencia
Member since 2008 • 29612 Posts

@son-goku7523 said:
@lundy86_4 said:
@kali-b1rd said:

Magic, Sony invented better hardware than the actual hardware devs.

Absolute magic tech.

The Cell. v2.0

If @son-goku7523 comes forth with an actual patent, I'd be shocked...

Here you go, an article linking to the Custom SSD and Patent discovered by a ResetEra user.

As demonstrated in the meeting, PlayStation 5 takes full advantage of a new SSD that allows it to load entire levels in Spider-Man in 0.83 seconds.

[rest of the quoted patent breakdown snipped; it appears in full above]

I copied Reflections_Demo.zip (1.37 GB) in less than 4 seconds with a Samsung 840 EVO (750 GB, SATA III) on an Intel X299 chipset.

If a 1 GB file takes 4.1 seconds to copy, then there's something wrong with the storage stack.
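The effective rates behind those two numbers are easy to check; this is pure arithmetic, with the ~0.55 GB/s SATA III ceiling mentioned in the comments as the only outside figure, and it makes no claims about either platform's real storage stack:

```python
def rate_gb_per_s(size_gb: float, seconds: float) -> float:
    """Average transfer rate for a copy of size_gb gigabytes over seconds."""
    return size_gb / seconds

# 1.37 GB in ~4 s on a SATA III 840 EVO:
print(round(rate_gb_per_s(1.37, 4.0), 2))  # ~0.34 GB/s, within SATA III's ~0.55 GB/s cap

# 1 GB in 4.1 s:
print(round(rate_gb_per_s(1.0, 4.1), 2))   # ~0.24 GB/s, below what even SATA SSDs sustain
```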

#22 rmpumper
Member since 2016 • 2133 Posts

I'm sure that the different file compression on consoles has something to do with the results. Anyway, PCIe 4.0 SSDs are coming to PC this year, so we will be able to see how that improves things for the master race (not that a couple of seconds makes a difference).

#24 Howmakewood
Member since 2015 • 7702 Posts
@rmpumper said:

I'm sure that the different file compression on consoles has something to do with the results. Anyway, PCIe 4.0 SSDs are coming to PC this year, so we will be able to see how that improves things for the master race (not that a couple of seconds makes a difference).

There would have to be some other improvements were that the case, as current PCIe SSDs don't even saturate a PCIe 3.0 x4 link.
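The saturation point checks out with the standard published link rates: after 128b/130b encoding, a PCIe 3.0 x4 link tops out just under 4 GB/s per direction, and the fastest NVMe drives of the time read around 3.5 GB/s, so the link itself isn't the limiter (the link math is from the spec; the drive figure is a ballpark):

```python
def pcie_bandwidth_gb_s(gt_per_s: float, lanes: int, enc_num: int, enc_den: int) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe link,
    accounting for line-encoding overhead."""
    bits_per_s = gt_per_s * 1e9 * lanes * (enc_num / enc_den)
    return bits_per_s / 8 / 1e9

gen3_x4 = pcie_bandwidth_gb_s(8.0, 4, 128, 130)   # PCIe 3.0: 8 GT/s, 128b/130b encoding
gen4_x4 = pcie_bandwidth_gb_s(16.0, 4, 128, 130)  # PCIe 4.0: 16 GT/s, same encoding

print(round(gen3_x4, 2))  # ~3.94 GB/s
print(round(gen4_x4, 2))  # ~7.88 GB/s
```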

#25  Edited By tormentos
Member since 2003 • 33784 Posts

@acepilot:

Final hardware is never better than dev kits; dev kits tend to have more memory, and sometimes even faster speeds or higher GPU performance.

#26 R4gn4r0k
Member since 2004 • 46260 Posts

Both AC: Odyssey and Origins load fast travel pretty quickly for me, and I have the games installed on an HDD.

#27 tormentos
Member since 2003 • 33784 Posts

@Gatygun said:
@acepilot said:

@Gatygun: Yes, it was a demonstration on a low-speed devkit according to the Wired article, which means it should perform even better on the retail hardware.

You don't understand what was demonstrated. The number itself means nothing. It's about the performance increase over a 5400 rpm disk, and about how developers can now push far more data through without the endless load times; that moves things forward drastically. This generation was particularly bad with load times, because games kept shipping more and more data without a faster hard drive to match.

With the PS5, devs needed a better storage solution, as a mechanical drive wasn't going to cope with larger data files. Sony fixed this. The Spider-Man demonstration just means that devs have more room to work with and can push load times down while pushing more data through at the same time.

For example, PS3/Xbox 360 games could run entirely in the PS4's memory pool and have no loading as a result; yet we now sit with even more data than before, because data keeps getting pushed forward.

This generation Spidey is 5 GB that needs to be read in at load time; next generation Spidey is 40 GB. It loads the same, and quality goes up.

But if you want to compare one useless number against another useless number that means absolutely nothing, here you go.

The only question is whether Microsoft will use the same solution, or whether they'll basically be the crippling factor for next-gen multiplatform games. PC will adopt whatever is best and build on it, if the tech really is ahead of PC right now.

Personally, I'm not a fan of SSDs in consoles. This means SSDs could become the requirement for next-generation games, which will result in longer load times across the board for everybody involved.

Sadly it's needed, as consoles are hitting a limit on this front without it.

PS3 games were much bigger in size than 360 games; I don't know why you lump them together. PS3 games could be much larger, in fact 30 GB or more, while 360 games were on DVD9 discs of 7.2 GB, I think.

Spider-Man is 45 GB without patches. Of the RAM on the PS4, you can only use about 5 GB for video memory, and not all of that is for textures and assets, as effects and other processes require memory as well, so your argument is quite wrong.

I don't know how Sony loaded this game so fast, but it's not the way you're describing.

How can SSDs being a requirement increase loading times across the board? Are you high?

@son-goku7523 said:

Here you go, an article linking to the Custom SSD and Patent discovered by a Resetera user.

Unlike clowns on SW I don't post anything without confirming it first. People here like talking out of their butts and don't post any links to back up their claim. I don't...

As demonstrated in the meeting, PlayStation 5 takes full advantage of a new SSD that allows it to load entire levels in Spider-Man 0.83 seconds.

According to a new patent discovered by ResetEra user gofreak, Sony’s SSD for the PlayStation 5 isn’t a casual one. The patent shows just how much Sony has customized PS5’s SSD.

Gofreak detailed some of the interesting points from Sony’s patent in his ResetEra thread:

– SRAM instead of DRAM inside the SSD for lower latency and higher throughput access between the flash memory controller and the address lookup data. The patent proposes using a coarser granularity of data access for data that is written once, and not re-written – e.g. game install data. This larger block size can allow for address lookup tables as small as 32KB, instead of 1GB. Data read by the memory controller can also be buffered in SRAM for ECC checks instead of DRAM (because of changes made further up the stack, described later).

– The SSD’s read unit is ‘expanded and unified’ for efficient read operations.

– A secondary CPU, a DMAC, and a hardware accelerator for decoding, tamper checking and decompression.

Of course, the main differences in SRAM and DRAM are the modes of integrated-circuit. SRAM is comparatively faster than DRAM; hence SRAM is used for cache memory while DRAM is used for main memory.

The whole patent...

This will be one for people interested in some potentially more technical speculation. I posted in the next-gen speculation thread, but was encouraged to spin it off into its own thread.

I did some patent diving to see if I could dig up any likely candidates for what Sony's SSD solution might be.

I found several Japanese SIE patents from Saito Hideyuki along with a single (combined?) US application that appear to be relevant.

The patents were filed across 2015 and 2016.

Caveat: This is an illustrative embodiment in a patent application. i.e. Maybe parts of it will make it into a product, maybe all of it, maybe none of it. Approach it speculatively.

That said, it perhaps gives an idea of what Sony has been researching. And does seem in line with what Cerny talked about in terms of customisations across the stack to optimise performance.

http://www.freepatentsonline.com/y2017/0097897.html

There's quite a lot going on, but to try and break it down:

It talks about the limitations of simply using a SSD 'as is' in a games system, and a set of hardware and software stack changes to improve performance.

Basically, 'as is', an OS uses a virtual file system, designed to virtualise a host of different I/O devices with different characteristics. Various tasks of this file system typically run on the CPU - e.g. traversing file metadata, data tamper checks, data decryption, data decompression. This processing, and interruptions on the CPU, can become a bottleneck to data transfer rates from an SSD, particularly in certain contexts e.g. opening a large number of small files.

At a lower level, SSDs typically employ a data block size aimed at generic use. They distribute blocks of data around the NAND memory to distribute wear. In order to find a file, the memory controller in the SSD has to translate a request to the physical addresses of the data blocks using a look-up table. In a regular SSD, the typical data block size might require a look-up table 1GB in size for a 1TB SSD. A SSD might typically use DRAM to cache that lookup table - so the memory controller consults DRAM before being able to retrieve the data. The patent describes this as another potential bottleneck.

Here are the hardware changes the patent proposes vs a 'typical' SSD system:

- SRAM instead of DRAM inside the SSD for lower latency and higher throughput access between the flash memory controller and the address lookup data. The patent proposes using a coarser granularity of data access for data that is written once, and not re-written - e.g. game install data. This larger block size can allow for address lookup tables as small as 32KB, instead of 1GB. Data read by the memory controller can also be buffered in SRAM for ECC checks instead of DRAM (because of changes made further up the stack, described later). The patent also notes that by ditching DRAM, reduced complexity and cost may be possible, and cost will scale better with larger SSDs that would otherwise need e.g. 2GB of DRAM for 2TB of storage, and so on.

- The SSD's read unit is 'expanded and unified' for efficient read operations.

- A secondary CPU, a DMAC, and a hardware accelerator for decoding, tamper checking and decompression.

- The main CPU, the secondary CPU, the system memory controller and the IO bus are connected by a coherent bus. The patent notes that the secondary CPU can be different in instruction set etc. from the main CPU, as long as they use the same page size and are connected by a coherent bus.

- The hardware accelerator and the IO controller are connected to the IO bus.

An illustrative diagram of the system:

At a software level, the system adds a new file system, the 'File Archive API', designed primarily for write-once data like game installs. Unlike a more generic virtual file system, it's optimised for NAND data access. It sits at the interface between the application and the NAND drivers, and the hardware accelerator drivers.

The secondary CPU handles a priority on access to the SSD. When read requests are made through the File Archive API, all other read and write requests can be prohibited to maximise read throughput.

When a read request is made by the main CPU, it sends it to the secondary CPU, which splits the request into a larger number of small data accesses. It does this for two reasons - to maximise parallel use of the NAND devices and channels (the 'expanded read unit'), and to make blocks small enough to be buffered and checked inside the SSD SRAM. The metadata the secondary CPU needs to traverse is much simpler (and thus faster to process) than under a typical virtual file system.

The NAND memory controller can be flexible about what granularity of data it uses - for data requests send through the File Archive API, it uses granularities that allow the address lookup table to be stored entirely in SRAM for minimal bottlenecking. Other granularities can be used for data that needs to be rewritten more often - user save data for example. In these cases, the SRAM partially caches the lookup tables.

When the SSD has checked its retrieved data, it's sent from SSD SRAM to kernel memory in the system RAM. The hardware accelerator then uses a DMAC to read that data, do its processing, and then write it back to user memory in system RAM. The coordination of this happens with signals between the components, and not involving the main CPU. The main CPU is then finally signalled when data is ready, but is uninvolved until that point.

A diagram illustrating data flow:

Interestingly, for a patent, it describes in some detail the processing targets required of these various components in order to meet certain data transfer rates - what you would need in terms of timings from each of the secondary CPU, the memory controller and the hardware accelerator in order for them not to be a bottleneck on the NAND data speeds:

Though I wouldn't read too much into this, in most examples it talks about what you would need to support an end-to-end transfer rate of 10GB/s.
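For a sense of scale, here's my own arithmetic (the 128 KiB access size is an assumption, not a figure from the patent) on what sustaining 10GB/s implies per access:

```python
# At 10 GB/s in 128 KiB accesses, every stage of the pipeline
# (secondary CPU, memory controller, accelerator) must turn each
# access around in roughly 13 microseconds on average.
target_bps = 10 * 10**9
access_bytes = 128 * 1024

accesses_per_sec = target_bps / access_bytes  # ~76,000 accesses/s
budget_us = 1e6 / accesses_per_sec            # ~13.1 us per access
```

That per-access budget is why the patent cares about simple metadata traversal and SRAM-resident tables: a few microseconds of lookup overhead per access would eat the whole budget.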

The patent is also silent on what exactly the IO bus would be - that would obviously be a key bottleneck itself on transfer rates out of the NAND devices. Until we know what that is, it's hard to know what the upper end on the transfer rates could be, but it seems a host of customisations are possible to try to maximise whatever that bus will support.

Once again, this is one described embodiment, not necessarily exactly what the PS5 solution will look like. But it is an idea of what Sony's been researching in how to customise an SSD and software stack for faster read throughput on installed game data.

Didn't see this coming. So Sony is not using a standard SSD; it's using a custom one built for lower latency and seek times, and there's even talk of a second CPU connected to the first by a coherent bus?

Apparently they were serious about ending load times. I'm still in wait-and-see mode.

Avatar image for djoffer
djoffer

1856

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#28 djoffer
Member since 2007 • 1856 Posts

Ohh my, so a prelaunch event for a console promises monster improvements that will kick PC to the curb... yeah, we have never seen that before, and I'm sure they will deliver on that, lol...

Avatar image for michaelmikado
michaelmikado

406

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 5

#29 michaelmikado
Member since 2019 • 406 Posts

@tormentos:

I initially assumed, before the patent came to light, that it was likely to use UFS 3.0, as it doesn't have the NVMe overhead that isn't relevant to gaming. However, the patent suggests it's taking the same principles as UFS - which include high reads and low writes - and using SRAM for an expanded index and table. This could fundamentally change game design, at least for the PS5.

Avatar image for zaryia
Zaryia

21607

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#31  Edited By Zaryia
Member since 2016 • 21607 Posts
@acepilot said:

I think Sony somehow managed to make something that none of the current gen PC can't perform inimitably.

Should PC Master Race freak out for this? What do you guys think?

Just like the master race freaked out over "Teh Cell".

I hope you guys are prepared for another gen of being 2nd/3rd place in hardware.

Much Lower fps. Much Lower gfx. Much Longer Load times.

Avatar image for zaryia
Zaryia

21607

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#32 Zaryia
Member since 2016 • 21607 Posts
@djoffer said:

Ohh my, so a prelaunch event for a console promises monster improvements that will kick PC to the curb... yeah, we have never seen that before, and I'm sure they will deliver on that, lol...

It's like they never learn.

Avatar image for foxhound_fox
foxhound_fox

98532

Forum Posts

0

Wiki Points

0

Followers

Reviews: 13

User Lists: 0

#33 foxhound_fox
Member since 2005 • 98532 Posts

I'm not sure what this is referring to, but I distinctly remember Far Cry 4 and Primal both having no load times at all, unless you quick travel or engage a cutscene, and even then it's barely 1-2 seconds on a SSD.

Avatar image for ivangrozny
IvanGrozny

1845

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#34  Edited By IvanGrozny
Member since 2015 • 1845 Posts

Yes, you know, SSDs and fast RAM have been trending on PC for a while now, much earlier than consoles still using 5400 RPM hard drives lol.

Sony is using the same architecture and the same hardware that is commercially available on PC. So yes, if it's possible on PS, it will be possible on PC. PC will probably outdo consoles again because of better quality components.

When it comes down to new technologies and hardware, consoles have always been playing catching up with PC.

Avatar image for br0kenrabbit
br0kenrabbit

17859

Forum Posts

0

Wiki Points

0

Followers

Reviews: 10

User Lists: 0

#35  Edited By br0kenrabbit
Member since 2004 • 17859 Posts

I wonder if Sony is using something like Optane to pre-cache?

For those who don't know, Optane is an Intel-only option that is kinda half-way between an SSD and RAM. AMD has their own version, can't remember the name, though.

Avatar image for Dark_sageX
Dark_sageX

3561

Forum Posts

0

Wiki Points

0

Followers

Reviews: 236

User Lists: 0

#36 Dark_sageX
Member since 2003 • 3561 Posts
@acepilot said:

I think Sony somehow managed to make something that none of the current gen PC can't perform inimitably.

Should PC Master Race freak out for this? What do you guys think?

Freak out? lol, I promise the cows that this was the last time they saw a 0.8 second load screen; it will be back to your 30 seconds to a whole minute as soon as the system rolls out. The only way these results could ever be replicated is if they make their consoles streaming machines, where the heavy lifting is actually done by a very powerful PC.

Avatar image for 04dcarraher
04dcarraher

23829

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#37  Edited By 04dcarraher
Member since 2004 • 23829 Posts

@br0kenrabbit said:

I wonder if Sony is using something like Optane to pre-cache?

For those who don't know, Optane is an Intel-only option that is kinda half-way between an SSD and RAM. AMD has their own version, can't remember the name, though.

That's what I'm thinking: 64-128GB of onboard storage for caching.

Avatar image for tormentos
tormentos

33784

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#38 tormentos
Member since 2003 • 33784 Posts

@ivangrozny:

Did you read songuku's post in this thread? It's a modified drive, not exactly like the ones on the market.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#39 ronvalencia
Member since 2008 • 29612 Posts

@br0kenrabbit said:

I wonder if Sony is using something like Optane to pre-cache?

For those who don't know, Optane is an Intel-only option that is kinda half-way between an SSD and RAM. AMD has their own version, can't remember the name, though.

AMD has StoreMI which is AMD's Optane alternative.

Avatar image for rmpumper
rmpumper

2133

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#40 rmpumper
Member since 2016 • 2133 Posts

BTW, what is the reason exactly for using a SONY exclusive game to show the load times? Is it because they don't want direct comparisons to current PCs? If so, why, because the regular SSDs would show the same results?

Avatar image for deactivated-5efed3ebc2180
deactivated-5efed3ebc2180

923

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#41 deactivated-5efed3ebc2180
Member since 2006 • 923 Posts

Building huge hype, stunning tech demos at the reveal, performance that will ridicule top-end PCs for just a few bucks... and then the usual downgrades and "PC medium settings @ 30fps" for multiplats at actual launch. So typical of Sony; they should try something new...

Avatar image for tormentos
tormentos

33784

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#42 tormentos
Member since 2003 • 33784 Posts

@rmpumper:

Because Spider-Man is their IP and they can use it however they want, unlike 3rd-party games. Also, you don't need a comparison of the same game; just look for a PC game similar to Spider-Man - let's say GTA - and test it.

Avatar image for rmpumper
rmpumper

2133

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#43  Edited By rmpumper
Member since 2016 • 2133 Posts

@tormentos said:

@rmpumper:

Because Spider-Man is their IP and they can use it however they want, unlike 3rd-party games. Also, you don't need a comparison of the same game; just look for a PC game similar to Spider-Man - let's say GTA - and test it.

That's not how comparisons work. Just look at Battlefield: the maps are a lot smaller than GTA's, but the loading times take forever even on SSDs.

Avatar image for calvincfb
Calvincfb

0

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 5

#44  Edited By Calvincfb
Member since 2018 • 0 Posts

The desperation, salt and hate here are laughable.

Avatar image for ocinom
ocinom

1385

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#45  Edited By ocinom
Member since 2008 • 1385 Posts

Sony uses a specific HDD format. IDK what it is, but it's modified to work very well on their consoles.

Avatar image for The_Stand_In
The_Stand_In

1179

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#46 The_Stand_In
Member since 2010 • 1179 Posts

PCIe 4.0 is coming soon to a PC near you!

PCIe 4.0 is twice as fast as 3.0 and SSDs that take advantage of this have already been announced and are in production.

Since the PS5 supposedly uses Ryzen 3000 (which has 4.0), I'm willing to bet their SSD is just a PCIe 4.0 SSD.

Avatar image for calvincfb
Calvincfb

0

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 5

#47 Calvincfb
Member since 2018 • 0 Posts

I find it really laughable how people here claim to be more knowledgeable than Sony's engineers and their R&D department. lmao

Avatar image for APiranhaAteMyVa
APiranhaAteMyVa

4160

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#48 APiranhaAteMyVa
Member since 2011 • 4160 Posts

Thank you Sony

Avatar image for navyguy21
navyguy21

17425

Forum Posts

0

Wiki Points

0

Followers

Reviews: 10

User Lists: 0

#49 navyguy21
Member since 2003 • 17425 Posts

Elex fast travel on an SSD is less than a second. Witcher 3 takes about 4. It depends on the game, the engine, the amount of RAM, CPU speed, etc.

If the game is able to cache data into extra RAM then it's possible. Xbox One currently does this with 360 games.

Avatar image for fedor
Fedor

11612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#50 Fedor
Member since 2015 • 11612 Posts

@calvincfb: Yea, they never use hyperbole to lure in dimwits.