GDDR 5 vs DDR3


This topic is locked from further discussion.


#51 deactivated-5a44ec138c1e6
Member since 2013 • 2638 Posts

@blackace: That’s what’s important.

YES!

But tech talk is interesting sometimes and might teach you new things.

Don’t get me wrong, I wasn’t trying to say that the Xbox One or the PS4 was better. I was just asking what people thought and whether latency would be such a serious factor. Most people didn’t even take that into account and just spat out something completely irrelevant, notably someone saying that DDR5 is faster, when DDR5 doesn’t even exist yet. GDDR5! Anyway, you’ve got the important factor at hand: both consoles will play awesome games. :)


#52 inggrish
Member since 2005 • 10503 Posts

eh, GDDR5 is probably overall 'superior' for a games console, but we have not and we will probably never see the difference in reality.


#53 tormentos
Member since 2003 • 33784 Posts

@acp_45 said:

Please no fanboys.

Just speak sense. No Fanboy logic.

Be open minded and try not to be bias toward PS4 or Xbox One.

Thanks

Why Sony went for GDDR5 ram and why MS went with DDR3.

They are both rich, and not stupid at all. A lot of money could be at stake, so why choose this ?

The following is without having Xbox One’s ESRAM in mind.

A computer (server or console) doesn't need a large pool of GDDR5 memory to perform at a high level.

The only reason Sony went with one unified pool of GDDR5 memory is that the CPU and GPU sit on the same package and share a single memory controller.

MS had the same dilemma as Sony: one memory controller limits you to a single type of memory, since each memory type has its own controller.

So you either go with cheaper DDR3 or with more expensive GDDR5.

The problem with using all DDR3 is that your GPU needs faster memory to do its work. Sony solved this by just going all GDDR5. 4 GB would have been economical, but they wanted 8 GB mainly with system functions in mind.

MS decided to solve the problem another way: go with 8 GB of the more economical (and more easily available) memory and address the bandwidth problem with DMEs and a fast L4 cache. The DMEs plus cache are like an HOV lane reserved for GPU work.

So we have two companies that faced the same problem but chose to tackle it in different ways, achieving similar results.

Sony chose the brute-force method and MS chose to make a few adjustments.

Sony went with 8 GB mainly for system functions; going from 4 GB to 8 GB will have a negligible effect on graphics.

PS4 graphics would have been fine with 4 GB of memory, but with all they want to do with the OS (new dash, sharing, Gaikai, etc.) more memory was necessary.

I personally think GDDR5 is sort of a bad choice in a way. The PS4 suffers a bit from GDDR5 latency, although Mark Cerny said in an article that it's "NOT MUCH OF A FACTOR". But in that article he only talks about the GPU, for which higher latency won't be a problem at all. What I was wondering is why he didn't talk about the CPU. The OS might be affected by latency, logically.

8 GB of GDDR5 was included for more storage space. The PS4's GPU won't come close to using half of that; the other 4 GB is for game data and OS functions. You'll have game logic and data just sitting in half of your expensive memory while your GPU barely needs 3 GB of it, and I'm being generous. GDDR5 is treated like cache memory: whatever goes in there doesn't sit there for very long.

It's really like buying a 1 TB SSD and storing mostly family pictures and documents on it.

But it still fixes the problem.

I don't want to sound biased at all. I love my PS4, but I think MS came up with the better idea: an efficient, well-designed solution to the same problem, engineering high-speed traffic lanes to its memory.

Adding ESRAM to the Xbox One adds a few complications, not to the hardware but for the developers.

The ESRAM is making things a bit harder for the devs.

Reportedly there was a test showing that the ESRAM boosts the Xbox One's bandwidth to 192 GB/s.

I'm not sure what the PS4's is, but I think 192 GB/s is still fast even if the PS4's is higher.

Apparently, some game developers who had access to the console before launch claimed that the Xbox One's bandwidth is faster than even MS thought.

But GDDR5 is still very tough. I wonder if the DDR3 and ESRAM combination actually comes close to it.

This is some funny sh**: you're "not a fanboy", yet every post you make is skewed to kiss MS's ass over ESRAM.

It's funny that you now say the Xbox One has 192 GB/s of bandwidth; two days ago you claimed 218 GB/s and 200 GB/s, even after I told you, quoting an MS engineer, that it is realistically capable of only 140 to 150 GB/s.

But let me tell you this: the Xbox One could have 500 GB/s of bandwidth, way more than the 7970, and it would mean nothing, because it can't fill that bandwidth. It has power comparable to a 7770, which gets by with 72 GB/s; in fact the full 7790 has 96 GB/s and is stronger than the Xbox One. The Xbox One having more bandwidth than the PS4 means little, because it doesn't have the power to do anything with it.

The PS4 uses more than 4 GB for games, dude; see, you are totally clueless.

And yeah, the so-called developer who claimed the Xbox One had more bandwidth than MS thought was MS itself. It wasn't a developer: MS told that to DF and they pretended it came from a developer. It was later confirmed when questions were asked and DF responded that they didn't know and would have to ask MS about it.. lol

But once again, the Xbox One could have 500 GB/s and it would mean little. It's like having a four-lane race track where any car can pass without waiting for a gap, but your car has 85 horsepower while the other car has 135 horsepower, more torque, and actual racing tires.
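The race-track point, that bandwidth only helps if the GPU has the compute to feed it, can be put in rough numbers. A quick sketch using the commonly published peak figures for these parts (about 1.84 TFLOPs / 176 GB/s for the PS4's GPU and about 1.31 TFLOPs / 68 GB/s of DDR3 for the Xbox One's); the figures are illustrative, and the only point is that bandwidth per unit of compute matters more than the raw bandwidth number:

```python
def bytes_per_flop(bandwidth_gb_s: float, tflops: float) -> float:
    """How many bytes of memory traffic the GPU can afford per
    floating-point operation at peak: bandwidth / compute."""
    return (bandwidth_gb_s * 1e9) / (tflops * 1e12)

# Published peak figures (approximate, for illustration only):
ps4 = bytes_per_flop(176, 1.84)   # ~0.096 bytes/FLOP
x1  = bytes_per_flop(68, 1.31)    # ~0.052 bytes/FLOP (DDR3 alone)

# Giving the weaker GPU 500 GB/s would not make it faster if its
# shaders cannot generate enough work to consume that traffic.
print(ps4, x1)
```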


#54 deactivated-5a44ec138c1e6
Member since 2013 • 2638 Posts

@tormentos: WTF?? So now I can go around saying that everything Nvidia says about their new Tegra K1 chip is BS because it sells? In that case, don't buy any product from now on... It's probably still not faster than the PS4, so why do you cry???

I love how I said theoretically 218 GB/s.

Meaning freaking theoretically.

Stop telling only half the story. You are A FOOL.

Yes, the PS4 uses 5 GB for games. But there aren't any games using more than 4 GB. So I said 3 GB, and I said I was being generous.


#55 btk2k2
Member since 2003 • 440 Posts

@acp_45 said:

@btk2k2: Judging me from what I've heard and the information I have does not make me ignorant. I do digging on a lot of topics.

Even on iPhone vs. other flagship phones, to freaking iOS vs. Windows, and now the next Linux push.

How can you say that I'm ignorant and will not see reason?

Here's what I know. If you disagree, tell me.

1) DDR3 runs at a higher voltage than GDDR5 (typically 1.25-1.65 V, versus GDDR5 at 1 V).

2) DDR3 uses a 64-bit memory controller per channel (a 128-bit bus for dual channel and 256-bit for quad channel), whereas GDDR5 is paired with 32-bit controllers (16 bits each for input and output), while the CPU's memory controller is 64 bits per channel.

3) The GPU can utilise any number of 32-bit I/Os (at the cost of die size) depending on the application (2 for a 64-bit bus, 4 for 128-bit, 6 for 192-bit, 8 for 256-bit... you can go on).

4) The memory is also fundamentally set up for the specific application it serves:

5) System memory (DDR3) benefits from low latency (tight timings) at the expense of bandwidth; GDDR5's case is the opposite. Timings for GDDR5 would seem a little slow relative to DDR3, but the speed of VRAM is blazing fast in comparison with desktop RAM. This follows from the different workloads that a CPU and a GPU undertake.

6) Latency isn't much of an issue for GPUs, since their parallel nature allows them to move on to other calculations when latency cycles cause a stall in the current workload.

7) There is, however, still a little more latency on GDDR5. Notably at least 9 nanoseconds, which isn't much of a difference.

I made this post only to see what people think of this choice, because most people in these forums spit out their console's specs without knowing what's really going on.

I see a lot of Xbox One fanboys throwing ESRAM around, stating that it's faster and that therefore the Xbox One is the better console.

They are right about it being faster, but the fact is that ESRAM makes things harder for developers, and they haven't found an efficient solution to it.

I see a lot of people citing CUs, as if the PS4 having more means it's the better console.

But there is a lot more to it than you think.

Also note that both companies might still have something they haven't really talked about. Notably the hardware-based tiled resources in DX 11.2, which they showed off at the MS Build event in late 2013.

Which could be something huge.

Just think about this quickly:

They were able to build a world worth 3 GB in size with 16 MB while tiled resources were on. Now combine that with cloud gaming.

This isn't 100% sure, but it's highly possible.

This wouldn't only be good news for MS but for the entire industry.

All in all, there are so many factors that could count for both companies.

I just hate the controversies these days, because they are senseless.

I hope you do not mind but I added numbers to your quote so you could see what I was responding to easier.

1) 2133 MHz DDR3 is 1.5-1.65 V or thereabouts; different modules have different specs, but if you look at Newegg (or Ebuyer if you are UK based) you can see the voltages there.

GDDR5 is about the same in terms of voltage, but because it runs at a higher clock speed it draws more power than DDR3 of the same capacity and voltage.

2) DDR3 is 64-bit/channel and GDDR5 is 32-bit/channel. The memory controller on a CPU will obviously be configured for DDR3, but there is no reason a CPU cannot use a GDDR5-configured memory controller; there is very little difference, to be honest.

3) This is true, but usually the limiting factor for bus width is not how small they can make the memory controller; it is how small they can make the physical connections to the socket the GPU/CPU/APU sits in.

4) Not really. GDDR5 is not used in PC systems as standard memory because it is a) expensive and b) overkill. There are very few CPU bound processes that you use on a standard PC that would take advantage of GDDR5.

5) Timings and clock speed are related, hence my car analogy. If you have loose timings but a high clock speed the net effect is the same as tight timings and a slow clock speed.

6) True

7) According to Hynix's specification documents, the timings are as follows.

Action   DDR3-2133N (ns)   5.5 Gbit GDDR5 (ns)
tAA      13.09             -
tRCD     13.09             multiple entries, 10-12
tRP      13.09             12
tRAS     33                28
tRC      46.09             40

DDR3 document Here (download) pg 31

GDDR5 document Here (opens in browser) pg 131 - 135

As you can see in this case the GDDR5 actually has a lower latency than the DDR3. I do not know the exact timings used in the Xbox One or the PS4 but the latency is going to be pretty much the same in both consoles so neither one has an advantage when it comes to latency.
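The cycles-to-nanoseconds conversion behind those spec-sheet numbers is easy to check yourself. A minimal sketch (DDR3 only; the CL values below are typical retail examples, not the consoles' actual timings):

```python
def cas_ns(cl_cycles: int, transfer_rate_mt_s: float) -> float:
    """Absolute CAS latency in ns. CL is counted in I/O-clock cycles,
    and DDR's I/O clock runs at half the transfer rate."""
    io_clock_mhz = transfer_rate_mt_s / 2
    return cl_cycles / io_clock_mhz * 1000  # cycles / MHz -> ns

# Loose timings at a high clock equal tight timings at a low clock:
print(cas_ns(9, 1600))    # DDR3-1600 CL9  -> 11.25 ns
print(cas_ns(12, 2133))   # DDR3-2133 CL12 -> ~11.25 ns
print(cas_ns(14, 2133))   # DDR3-2133 CL14 -> ~13.1 ns, cf. tAA above
```

Which is why comparing cycle counts across memory types says nothing on its own; the absolute nanosecond figures, as in the table above, are what count.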

The advantage GDDR5 gives the PS4 is higher sustained bandwidth. The Xbox 1 can burst small amounts of data at higher bandwidth than the PS4 if you combine the ESRAM and the DDR3, but once you have used what is stored in the ESRAM and have to stream directly from the main memory pool, your bandwidth drops. Managing the ESRAM is key to maximising Xbox 1 performance.
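That burst-then-drop behaviour can be sketched with a toy model. The 109 GB/s (ESRAM) and 68 GB/s (DDR3) figures are the commonly published Xbox One numbers; the hit rate, i.e. the fraction of GPU traffic a developer manages to keep inside the 32 MB, is entirely made up for illustration:

```python
def effective_bw(esram_hit_rate: float, esram_gb_s: float = 109.0,
                 ddr3_gb_s: float = 68.0) -> float:
    """Average bandwidth when a fraction of traffic is served from
    ESRAM and the rest streams from DDR3 (time-weighted blend)."""
    time_per_byte = esram_hit_rate / esram_gb_s + (1 - esram_hit_rate) / ddr3_gb_s
    return 1 / time_per_byte

for hit in (0.9, 0.5, 0.1):
    print(f"hit rate {hit:.0%}: ~{effective_bw(hit):.0f} GB/s")
```

The sustained figure collapses toward plain DDR3 as soon as the working set spills out of the 32 MB, which is exactly the management headache described here.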

The other problem the Xbox 1 has is a less powerful GPU, so even if you managed the ESRAM perfectly to maximise Xbox 1 performance, you would still be at a lower performance level than the PS4, which is also easier to extract performance from.

With regards to your PRT (tiled resources) talk: both consoles have it; it is not unique to the Xbox 1. The idea is to enable the use of higher-resolution textures while reducing the high-speed memory requirements. On a PC you can store the main textures in your main pool of system RAM and then send just the small pieces of the textures you are drawing on screen to the GPU.

On the consoles the main pool of system RAM is also the graphics RAM. Sure, on the X1 you can send the cropped texture data to the ESRAM, but is that the best use of it? While it might work really well, there might be other things a developer wants to use the ESRAM for that are even more bandwidth-constrained. And what happens if the scene you are drawing requires a large chunk of your high-res texture, so that even after you have reduced it down it still needs more than 32 MB? This is why the ESRAM is going to be a headache for devs until the tools improve and devs learn what to use the ESRAM for and what not to.

PRT will never work with cloud gaming, because the latency of sending data to and from the internet is too high; you will have unloaded or low-res textures suddenly changing into their nice, crisp high-res versions, and it will break immersion. 16 MB is not a lot of data, but the math required for PRT to work needs latency measured in nanoseconds, not milliseconds.

To render a scene at 30 FPS, the work for each frame must take no longer than 33.3 ms. If you want to use cloud-based PRT and your ping is 100 ms, then by the time you get the texture back from the server the texture data is at least 3 frames old, perhaps more depending on server-side latency, download speed and all sorts of other factors that affect the internet.
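The arithmetic there is worth spelling out. A tiny sketch, assuming a 100 ms round trip purely for illustration:

```python
FPS = 30
frame_budget_ms = 1000 / FPS   # 33.3 ms of work per frame at 30 FPS
round_trip_ms = 100            # illustrative ping to a texture server

# By the time the requested texture tile arrives, this many frames
# have already been rendered without it:
frames_stale = round_trip_ms / frame_budget_ms
print(frames_stale)            # 3 full frames, before server-side delays
```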

In the end the PS4 has the better graphics hardware. It can push more pixels, handle better-quality textures and throw more (or higher-quality) effects around, and it is up to devs to use what it is capable of to create breathtaking games. Just because the PS4 has an advantage does not mean the Xbox 1 will have ugly games: behind the numbers there is a lot of subjectivity in graphics because of the artistic nature of the medium.


#56 tormentos
Member since 2003 • 33784 Posts

@acp_45 said:

@tormentos: WTF?? So now I can go around saying that everything Nvidia says about their new Tegra K1 chip is BS because it sells? In that case, don't buy any product from now on... It's probably still not faster than the PS4, so why do you cry???

I love how I said theoretically 218 GB/s.

Meaning freaking theoretically.

Stop telling only half the story. You are A FOOL.

Yes, the PS4 uses 5 GB for games. But there aren't any games using more than 4 GB. So I said 3 GB, and I said I was being generous.

Exactly: theory means sh**; practical use is what matters, and you keep throwing a false number around again and again. It's not 192 GB/s, it's 140 to 150 GB/s, that's all, and since that figure comes from MS even it should be taken with a grain of salt.

No, you are the fool, and a blind fanboy too. Hell, look at btk2k2's post: GDDR5 and DDR3 have about the same latency, but GDDR5's speed is way faster. The first post I quoted from you had endless crap about how the CPU in the PS4 would be hurt by latency. Remember the test I posted, which you refused to admit?

Yeah, the PS4 not only has different memory controllers from those on PC, it also has a true HSA design with hUMA. Rather than GDDR5 being an issue for the PS4, who knows, it may be the reason the PS4's CPU beats the Xbox One's even at a lower clock speed.

Your theories are mostly based on fantasies and wishful thinking. The time for excuses is over: the PS4 is stronger and will remain that way the whole generation; no drive from MS or anything else will change that.


#57 Martin_G_N
Member since 2006 • 2124 Posts

The memory design of the X1 takes up more space, is a lot slower at moving data bigger than those 32 MB of ESRAM, and is hard to utilize. Because of the high transistor count of the ESRAM and the Data Move Engines, the GPU had to be smaller. Also, having a more powerful GPU on that memory setup would probably be pointless: most GPUs nowadays at around 2 TFLOPs and up use GDDR5.

Sony had made up their mind to use a unified pool of 4 GB of GDDR5 a few years back, but it was only at the last minute that 8 GB became possible.


#58 jhcho2
Member since 2004 • 5103 Posts

@acp_45 said:

Adding ESRAM to the Xbox One adds a few complications, not to the hardware but for the developers.

The ESRAM is making things a bit harder for the devs.

Reportedly there was a test showing that the ESRAM boosts the Xbox One's bandwidth to 192 GB/s.

I'm not sure what the PS4's is, but I think 192 GB/s is still fast even if the PS4's is higher.

Apparently, some game developers who had access to the console before launch claimed that the Xbox One's bandwidth is faster than even MS thought.

But GDDR5 is still very tough. I wonder if the DDR3 and ESRAM combination actually comes close to it.

The problem is that the ESRAM adds another variable to the equation. From a programming standpoint, having fewer intermediaries is always better. Considering that 100% hardware utilization is near impossible, having more hardware components, each needing its own code, will only reduce your efficiency.

The 192 GB/s bandwidth is faster than anything Sony has, but IMO it's not really a plus point. The higher bandwidth is offset by the fact that a separate set of code has to be written to make use of the ESRAM.


#59  Edited By ronvalencia
Member since 2008 • 29612 Posts

@acp_45 said:

8 GB of GDDR5 was included for more storage space. The PS4's GPU won't come close to using half of that; the other 4 GB is for game data and OS functions.

[full post quoted in #53 above]

Like a gaming PC, most of PS4's memory bandwidth is allocated towards the GPU.


#60 deactivated-5a44ec138c1e6
Member since 2013 • 2638 Posts

@tormentos: LOL, at least he explains things, and I can say that he definitely knows what he's talking about. Unlike you, who just keeps telling me I'm talking BS without explaining anything.

You should stop your sh!t because you aren't proving anything. You are an idiot.

You keep saying I'm talking BS while you claim DirectX is an API when Direct3D is the API. I mean, really?

At least I will take btk2k2's post and use that info in the future.

You are pathetic.

Get a life.

Stop trying to downplay the Xbox One, because it's getting old and extremely ridiculous.

I don't want to be childish.

Grow up, and accept that both consoles are good next-gen consoles.

It's all in your head.


#61  Edited By BlessedChill
Member since 2013 • 697 Posts

PS4 RAM is massively better.

On top of that, the PS4 also has DDR3 RAM.


#62 deactivated-5a44ec138c1e6
Member since 2013 • 2638 Posts

@btk2k2: I can't disagree with most of what you've posted, because you have proof and you seem to know what you are talking about.

I don't know if this is relevant, but doesn't each specific RAM module you buy have a certain CAS latency? Meaning you can buy GDDR5 with lower or higher latency, lower latency being only slightly more expensive since it isn't that important overall. I saw this on sites where you can order PC parts.

I don't know if I just understood it wrong.


#64  Edited By tormentos
Member since 2003 • 33784 Posts

@acp_45 said:

@tormentos: LOL, at least he explains things, and I can say that he definitely knows what he's talking about. Unlike you, who just keeps telling me I'm talking BS without explaining anything.

You should stop your sh!t because you aren't proving anything. You are an idiot.

You keep saying I'm talking BS while you claim DirectX is an API when Direct3D is the API. I mean, really?

At least I will take btk2k2's post and use that info in the future.

You are pathetic.

Get a life.

Stop trying to downplay the Xbox One, because it's getting old and extremely ridiculous.

I don't want to be childish.

Grow up, and accept that both consoles are good next-gen consoles.

It's all in your head.

So how is that theory of the PS4's GPU being held back by the PS4's CPU going, when the PS4 performs better than the Xbox One and GDDR5 has the same or even better latency than DDR3?

Haha.

See, your whole argument was based on repeated crap that was shut down long ago. ESRAM, DMEs, all that crap amounted to 1080p for Sony vs. 720p. The time for excuses is over.

Funny how you even claimed the PS4 has diminishing returns, and how you tried, without success, to prove that the PS4 would stop giving because it would reach its peak, as if that weren't equally the case for the Xbox One. And how you tried to downplay 1.84 TF while cheerleading for the damn Xbox One, which doesn't even have 1.2 TF usable for games now.

Your theories are all pulled from internet crap that has been debunked, just like the 192 GB/s or 218 GB/s: all theoretical, not practical, yet you keep throwing the numbers around as if they really meant anything.

So what new theory do you have to downplay the PS4 while making the Xbox One seem better than it is?


#65 tormentos
Member since 2003 • 33784 Posts
@acp_45 said:

@btk2k2: I can't disagree with most of what you've posted, because you have proof and you seem to know what you are talking about.

I don't know if this is relevant, but doesn't each specific RAM module you buy have a certain CAS latency? Meaning you can buy GDDR5 with lower or higher latency, lower latency being only slightly more expensive since it isn't that important overall. I saw this on sites where you can order PC parts.

I don't know if I just understood it wrong.

See, and there you go trying to spin his info. Yes, the same applies to DDR3: not all modules have the same CAS. But you will assume that MS has the DDR3 with the best CAS while Sony has the GDDR5 with the worst, right?

So the new spin, after being owned, is that Sony may be using GDDR5 with worse CAS than MS.. lol

And while you keep throwing all this crap around, there are tests showing the PS4's CPU performing better, so all of this is meaningless: it shows that GDDR5 latency is not affecting the PS4's CPU in any way. On the contrary, the PS4's CPU is performing better while being clocked lower... hehehe


#66  Edited By kraken2109
Member since 2009 • 13271 Posts


#67 btk2k2
Member since 2003 • 440 Posts

@acp_45 said:

@btk2k2: I can't disagree with most of what you've posted, because you have proof and you seem to know what you are talking about.

I don't know if this is relevant, but doesn't each specific RAM module you buy have a certain CAS latency? Meaning you can buy GDDR5 with lower or higher latency, lower latency being only slightly more expensive since it isn't that important overall. I saw this on sites where you can order PC parts.

I don't know if I just understood it wrong.

You can buy memory with slightly different timings, and I do not know exactly what is in the Xbox One or the PS4. They are probably middle of the road in terms of latency, and the tRC is going to be about the same for both types of memory. A few ns one way or the other is not going to give you a noticeable advantage.

When buying DDR3 for a PC, a lower-latency module is likely to achieve better overclocking results than a higher-latency module of the same speed: DDR3-1600 7-7-7 is more likely to reach DDR3-1866 9-9-9 than DDR3-1600 9-9-9 is, even though the performance difference at stock between DDR3-1600 7-7-7 and DDR3-1600 9-9-9 is almost zero.


#68 leandrro
Member since 2007 • 1644 Posts

@Floppy_Jim said:

@Heil68 said:

DDR5 is better.

GDDR5, chum. The G stands for "Great".

the G stands for "standard memory type on any graphics card above 50 dollars"


#69 leandrro
Member since 2007 • 1644 Posts

@DarthaPerkinjan said:

Hermits use DDR3 in their 1440p ultra settings 120fps PCs, it must be great

DDR3 is only for the CPU.

Not even a cheap 300-dollar PC uses shitty DDR3 for graphics.


#70  Edited By deactivated-5a44ec138c1e6
Member since 2013 • 2638 Posts

@tormentos: GTFO.

Get a life, other than trying to tear people down, because you are failing and no one gives a crap.


#71 lostrib
Member since 2009 • 49999 Posts

@acp_45 said:

@tormentos: GTFO.

Get a life, other than trying to tear people down, because you are failing and no one gives a crap.

lol, fail


#72 deactivated-5a44ec138c1e6
Member since 2013 • 2638 Posts

@btk2k2: Yeah, I thought so.

Couldn't the RAM in the PS4 and Xbox One be modified a little compared to normal RAM?

Because compared to a normal PC, it isn't doing much.


#73  Edited By btk2k2
Member since 2003 • 440 Posts

@acp_45 said:

@btk2k2: Yeah, I thought so.

Couldn't the RAM in the PS4 and Xbox One be modified a little compared to normal RAM?

Because compared to a normal PC, it isn't doing much.

That would require the memory chips to be custom made for the PS4 and the Xbox 1 which would add a significant amount to the cost of those chips. Sony and MS would both be better off buying faster memory modules instead of modifying slower modules as faster modules would have more tangible benefits.


#74 tormentos
Member since 2003 • 33784 Posts

@acp_45 said:

@tormentos: GTFO.

Get a life, other than trying to tear people down, because you are failing and no one gives a crap.

You poor, sad lemming. That epic meltdown...

Any more theories on how the Xbox One will overperform while the PS4 underperforms?


#75 deactivated-5a44ec138c1e6
Member since 2013 • 2638 Posts

@tormentos: Okay.....

I’m fine..........

Are you ?