The Cell and its hidden secrets

#1  Edited By HarnessThePower
Member since 2017 • 267 Posts

I was reading an article the other day about processor architecture and how important it is to developers and their ability to tap into the power of the processor.

The Cell, designed jointly by Sony, Toshiba, and IBM, was one of the most advanced and complicated processors in history, and many articles discuss how it was never fully optimized.

Just how dominant could PS3 have been had The Cell's architecture been less of a secret, or if the developers had been less lazy? Imagine the hidden secrets still stored within the PS3. Could this also happen with Scorpio if its architecture is complicated?

#2 BIack_Goku
Member since 2016 • 724 Posts

Cell had basically unlimited potential. He had Saiyan cells, Frieza's cells, and Piccolo's cells. He would've been one of the most powerful beings in the galaxy through continuous training and zenkai boosts.

#3  Edited By The_Stand_In
Member since 2010 • 1179 Posts

The architecture wasn't "secret", it was just really weird and hard to develop for. That made it not worth the time to tap its full potential, which frankly wouldn't have made that much of a difference anyway paired with such a weak, memory starved GPU.

Also, Scorpio is using x86 and, from what they've shared, is aiming to be even less complicated than the Xbox One by eliminating the weird ESRAM.

#4 HarnessThePower
Member since 2017 • 267 Posts

@The_Stand_In: I don't know, I thought I remembered reading an article saying The Cell had enough untapped power to compete with the latest i7 processors, and that its true form has yet to be optimized.

#5 silversix_
Member since 2010 • 26347 Posts

It's probably 2x more powerful than the Switch's CPU. A 2017 console.

#6  Edited By gfxpipeline
Member since 2014 • 543 Posts

@The_Stand_In said:

The architecture wasn't "secret", it was just really weird and hard to develop for. That made it not worth the time to tap its full potential, which frankly wouldn't have made that much of a difference anyway paired with such a weak, memory starved GPU.

What is hilarious is you honestly believe the garbage you just typed.

For anyone who is honestly interested in why the PS3's Cell/RSX graphics architecture was able to put out graphics that the Windows/Xbox 'graphics experts' at sites like beyond3d claimed were 'all Sony lies/impossible CGI level graphics' in Killzone 2 this presentation is a good high level overview:

THE PLAYSTATION 3’S SPUS IN THE REAL WORLD: A KILLZONE 2 CASE STUDY

https://www.guerrilla-games.com/read/the-playstation-3s-spus-in-the-real-world-a-killzone-2-case-study

#7 adamosmaki
Member since 2007 • 10718 Posts

@harnessthepower said:

@The_Stand_In: I don't know, I thought I remembered reading an article saying The Cell had enough untapped power to compete with the latest i7 processors, and that its true form has yet to be optimized.

No, it didn't have enough power to compete with the latest i7. The only thing the Cell could do much better than a traditional x86 CPU was SIMD processing. At anything else, a traditional 4-core x86 CPU was much, much better. And even the workloads the Cell was great at could be done much faster on GPUs once AMD and Nvidia introduced unified shader architectures.
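For anyone wondering what "SIMD processing" actually buys you: one instruction operates on several values at once. A minimal C sketch with SSE intrinsics (the data is made up; any x86 compiler of that era onward will take it):

```c
#include <xmmintrin.h>  /* SSE intrinsics */
#include <stdio.h>

/* Scale 4 floats at a time: one mulps does the work of four scalar multiplies. */
void scale(float *dst, const float *src, float k, int n) {
    __m128 vk = _mm_set1_ps(k);           /* broadcast k into all 4 lanes */
    int i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(dst + i, _mm_mul_ps(_mm_loadu_ps(src + i), vk));
    for (; i < n; i++)                    /* scalar tail for leftover elements */
        dst[i] = src[i] * k;
}

int main(void) {
    float in[6] = {1, 2, 3, 4, 5, 6}, out[6];
    scale(out, in, 0.5f, 6);
    for (int i = 0; i < 6; i++) printf("%.1f ", out[i]);  /* 0.5 1.0 ... 3.0 */
    printf("\n");
    return 0;
}
```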

#8  Edited By Martin_G_N
Member since 2006 • 2124 Posts

They knew multithreading and multicore was the future, so they invested in it instead of waiting for someone else to develop a multicore CPU. Intel didn't launch its first quad-core CPU until 2007. Remember, the X360 got its 3-core CPU because of the development of the Cell.

It's hard to call the Cell a big failure since developers learned quite a lot from it. Would Guerrilla Games be where they are now if they hadn't worked on the Cell? Or Naughty Dog? They are still doing things few others can, even after the move to x86.

The biggest faults of the PS3 hardware were the slow and old Nvidia GPU, the split pools of RAM, and only having one PPE (the main CPU core on the Cell). If it had had two cores, porting games from PC/X360 would probably have been a lot easier with that fix alone.

#9 imperials23
Member since 2010 • 45 Posts

@biack_goku: lol that was awesome

#10  Edited By uninspiredcup
Member since 2013 • 59238 Posts
@biack_goku said:

Cell had basically unlimited potential. He had Saiyan cells, Frieza's cells, and Piccolo's cells. He would've been one of the most powerful beings in the galaxy through continuous training and zenkai boosts.

Too bad love defeated him!

#11 waahahah
Member since 2014 • 2462 Posts
@Martin_G_N said:

They knew multithreading and multicore was the future, so they invested in it instead of waiting for someone else to develop a multicore CPU. Intel didn't launch its first quad-core CPU until 2007. Remember, the X360 got its 3-core CPU because of the development of the Cell.

It's hard to call the Cell a big failure since developers learned quite a lot from it. Would Guerrilla Games be where they are now if they hadn't worked on the Cell? Or Naughty Dog? They are still doing things few others can, even after the move to x86.

The biggest faults of the PS3 hardware were the slow and old Nvidia GPU, the split pools of RAM, and only having one PPE (the main CPU core on the Cell). If it had had two cores, porting games from PC/X360 would probably have been a lot easier with that fix alone.

The multi-core in the 360 had nothing to do with the multi-core in the Cell; IBM already had multi-core CPUs back in 2001 (POWER4). They borrowed some of the technology developed for the CELL but left the Cell's async architecture on the table.

Naughty Dog/Guerrilla would likely be in the same position even without the Cell. Much of what they learned from it probably ended up being tossed out: they pushed workloads onto the CELL, and now they push workloads onto the GPU... although they probably did get a head start on well-threaded game engines.

Even with two cores it probably still would have been difficult, because of the split RAM. The multithreaded design focus did carry over to the 360 well enough when porting, but when Red Faction: Guerrilla was released, the developer struggled a lot with the memory constraints to get that sucker on there.

#12  Edited By The_Stand_In
Member since 2010 • 1179 Posts

@gfxpipeline: Do you know what a bottleneck is? Because I don't think you do. By the end of the generation, the GPUs in both the Xbox 360 and PS3 were spent. Sure, the processor in the PS3 could have carried it further IF it was paired with a better GPU, but was that the case? No.

Was it good on its own? Sort of (would have been great if development was easier). But it's not freaking magic. Don't delude yourself. It can't just make up for severe shortcomings of outdated hardware.

#13 ConanTheStoner
Member since 2011 • 23719 Posts
@harnessthepower said:

or if the developers had been less lazy?

Go work in game development for a bit. Then come back and tell me if the word lazy applies to anything they do.

#14  Edited By ronvalencia
Member since 2008 • 29612 Posts

@harnessthepower said:

I was reading an article the other day about processor architecture and how important it is to developers and their ability to tap into the power of the processor.

The Cell, designed jointly by Sony, Toshiba, and IBM, was one of the most advanced and complicated processors in history, and many articles discuss how it was never fully optimized.

Just how dominant could PS3 have been had The Cell's architecture been less of a secret, or if the developers had been less lazy? Imagine the hidden secrets still stored within the PS3. Could this also happen with Scorpio if its architecture is complicated?

CELL's SPE is one of the most primitive processor designs, i.e. it's no better than a PowerPC 601 (G1) in terms of architecture, just with a very high clock speed and kit-bashed AltiVec 128-register SIMD units. Copy and paste this SPE design 8 times on a chip. The SPE lacks a PC GPU's fixed-function raster hardware.

The NVIDIA G80 has a few thousand registers and a giga-threads design (thousands of hardware threads). The G80's stream processors follow a MIMD (multiple instruction, multiple data) design instead of a SIMD (single instruction, multiple data) design, and the G80 includes a PC GPU's fixed-function raster hardware.

#15  Edited By ronvalencia
Member since 2008 • 29612 Posts

@gfxpipeline said:
@The_Stand_In said:

The architecture wasn't "secret", it was just really weird and hard to develop for. That made it not worth the time to tap its full potential, which frankly wouldn't have made that much of a difference anyway paired with such a weak, memory starved GPU.

What is hilarious is you honestly believe the garbage you just typed.

For anyone who is honestly interested in why the PS3's Cell/RSX graphics architecture was able to put out graphics that the Windows/Xbox 'graphics experts' at sites like beyond3d claimed were 'all Sony lies/impossible CGI level graphics' in Killzone 2 this presentation is a good high level overview:

THE PLAYSTATION 3’S SPUS IN THE REAL WORLD: A KILLZONE 2 CASE STUDY

https://www.guerrilla-games.com/read/the-playstation-3s-spus-in-the-real-world-a-killzone-2-case-study

That's bullshit. There's nothing special about SPUs.

My background was in PowerPC before Sony switched to PowerPC; Amiga/PowerMac PowerPC AltiVec came before the PS3.

For Folding@home protein folding, the NVIDIA G80 destroys CELL by more than 2X.

x86 CPUs with 128-bit SSE3/SSSE3/SSE4.1 can do software rasterization via SwiftShader.


SwiftShader 3.0 is a 100 percent CPU-driven Direct3D 9c software renderer built on an LLVM JIT.

The LLVM JIT technique is similar to the LLVM JIT tricks in AMD/NVIDIA GPU drivers.

Care for a rematch of the year-2008 NVIDIA G80 vs PS3 debate?

From https://forum.beyond3d.com/posts/1460125/

------------------------

"I could go on for pages listing the types of things the spu's are used for to make up for the machines aging gpu, which may be 7 series NVidia but that's basically a tweaked 6 series NVidia for the most part. But I'll just type a few off the top of my head:"

1) Two ppu/vmx units

There are three ppu/vmx units on the 360, and just one on the PS3. So any load on the 360's remaining two ppu/vmx units must be moved to spu.

2) Vertex culling

You can look back a few years at my first post talking about this, but it's common knowledge now that you need to move as much vertex load as possible to spu otherwise it won't keep pace with the 360.

3) Vertex texture sampling

You can texture sample in vertex shaders on 360 just fine, but it's unusably slow on PS3. Most multi platform games simply won't use this feature on 360 to make keeping parity easier, but if a dev does make use of it then you will have no choice but to move all such functionality to spu.

4) Shader patching

Changing variables in shader programs is cake on the 360. Not so on the PS3 because they are embedded into the shader programs. So you have to use spu's to patch your shader programs.

5) Branching

You never want a lot of branching in general, but when you do really need it the 360 handles it fine, PS3 does not. If you are stuck needing branching in shaders then you will want to move all such functionality to spu.

6) Shader inputs

You can pass plenty of inputs to shaders on 360, but do it on PS3 and your game will grind to a halt. You will want to move all such functionality to spu to minimize the amount of inputs needed on the shader programs.

7) MSAA alternatives

Msaa runs full speed on 360 gpu needing just cpu tiling calculations. Msaa on PS3 gpu is very slow. You will want to move msaa to spu as soon as you can.

8) Post processing

360 is unified architecture meaning post process steps can often be slotted into gpu idle time. This is not as easily doable on PS3, so you will want to move as much post process to spu as possible.

9) Load balancing

360 gpu load balances itself just fine since it's unified. If the load on a given frame shifts to heavy vertex or heavy pixel load then you don't care. Not so on PS3 where such load shifts will cause frame drops. You will want to shift as much load as possible to spu to minimize your peak load on the gpu.

10) Half floats

You can use full floats just fine on the 360 gpu. On the PS3 gpu they cause performance slowdowns. If you really need/have to use shaders with many full floats then you will want to move such functionality over to the spu's.

11) Shader array indexing

You can index into arrays in shaders on the 360 gpu no problem. You can't do that on PS3. If you absolutely need this functionality then you will have to either rework your shaders or move it all to spu.

Etc, etc, etc...

Sony (SCEA)'s study paper "Deferred Pixel Shading on the Playstation 3" compares performance to the GeForce 7800 GTX. It can be found at http://research.scea.com/ps3_deferred_shading.pdf

Quote:

D. Comparison to GeForce 7800 GTX GPU

We implemented the same algorithm on a high end state of the art GPU, the NVIDIA GeForce 7800 GTX running in a Linux workstation. This GPU has 24 fragment shader pipelines running at 430 Mhz and processes 24 fragments in parallel. By comparison the 5 SPEs that we used process 20 pixels in parallel in quad-SIMD form.

The GeForce required 11.1 ms to complete the shading operation. In comparison the Cell/B.E. required 11.65 ms including the DMA waiting time.

In Sony's own words, 5 SPEs (with DMA) are roughly equal to a GeForce 7800 GTX.

For NVIDIA G80 (CUDA Gen1)

#16  Edited By ronvalencia
Member since 2008 • 29612 Posts

@silversix_ said:

It's probably 2x more powerful than the Switch's CPU. A 2017 console.

Wrong. The Switch's SoC/APU *is* NVIDIA's CELL, but with Maxwell v2's fixed-function GPU hardware.

For the workloads that matter for games, the CUDA 8.x-capable GPU inside the Switch is more capable than the SPUs.

@harnessthepower said:

@The_Stand_In: I don't know, I thought I remembered reading an article saying The Cell had enough untapped power to compete with the latest i7 processors, and that its true form has yet to be optimized.

That's not correct.

Intel's Core i7 "Haswell" quad-core has eight 256-bit-wide SIMD units with gather and FMA/FMAC instructions.

The latest Intel Core i7 Skylake has an Iris Pro IGP with up to ~1 TFLOPS of shader/OpenCL power plus fixed-function PC GPU hardware.

AMD Jaguar at 1.6 GHz has superior IPC (instructions per clock) over IBM's PPE and SPUs at 3.2 GHz.

The benchmark is cloth physics simulation.

AMD RYZEN vs Jaguar on SIMD power per CPU core = +2X the hardware resources

Intel Haswell/Skylake/Kabylake vs AMD Jaguar on SIMD power per CPU core = +2X the hardware resources
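For reference, the "gather" mentioned above means loading from scattered addresses with one SIMD instruction, something AVX2 added on Haswell. A minimal C sketch (table contents and indices are arbitrary; compile with -mavx2):

```c
#include <immintrin.h>  /* AVX2 intrinsics */
#include <stdio.h>

int main(void) {
    float table[16];
    for (int i = 0; i < 16; i++) table[i] = (float)(i * 10);

    /* Eight arbitrary indices into the table. */
    __m256i idx = _mm256_setr_epi32(0, 3, 5, 7, 8, 12, 14, 15);

    /* One AVX2 gather replaces eight scalar loads; scale 4 = sizeof(float). */
    __m256 v = _mm256_i32gather_ps(table, idx, 4);

    float out[8];
    _mm256_storeu_ps(out, v);
    for (int i = 0; i < 8; i++) printf("%.0f ", out[i]);  /* 0 30 50 ... 150 */
    printf("\n");
    return 0;
}
```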

#17 Heil68
Member since 2004 • 60721 Posts
@silversix_ said:

It's probably 2x more powerful than the Switch's CPU. A 2017 console.

Well a cell phone can keep up with Switch...lolz

#18 clyde46
Member since 2005 • 49061 Posts

I remember when Nvidia launched the 8800GT. That card basically demolished the PS3 and 360.

#19 Lavamelon
Member since 2016 • 853 Posts

@biack_goku said:

Cell had basically unlimited potential. He had Saiyan cells, Frieza's cells, and Piccolo's cells. He would've been one of the most powerful beings in the galaxy through continuous training and zenkai boosts.

Cell is the biggest badass in the entire universe. If I could be any anime character, it would be Cell. I would conquer the Earth, then raise an army of Cell Juniors to colonise all the planets in the Solar System. Cell = Pure Awesomeness. I even have a Cell toy sitting at my desk at my workplace, lol. All around, he is one hell of an amazing character. Hopefully he returns in DB Super.

#20 gfxpipeline
Member since 2014 • 543 Posts

@The_Stand_In said:

@gfxpipeline: Do you know what a bottleneck is? Because I don't think you do. By the end of the generation, the GPUs in both the Xbox 360 and PS3 were spent. Sure, the processor in the PS3 could have carried it further IF it was paired with a better GPU, but was that the case? No.

Was it good on its own? Sort of (would have been great if development was easier). But it's not freaking magic. Don't delude yourself. It can't just make up for severe shortcomings of outdated hardware.

What an absolutely mind-bogglingly ignorant statement.

Cell+RSX is the GPU in the PS3.

#21  Edited By ronvalencia
Member since 2008 • 29612 Posts

@clyde46 said:

I remember when Nvidia launched the 8800GT. That card basically demolished the PS3 and 360.

NVIDIA's desktop G80/G92 was a long-lived design which ended its run as the GeForce GTS 250.

#22 HarnessThePower
Member since 2017 • 267 Posts

@ConanTheStoner said:
@harnessthepower said:

or if the developers had been less lazy?

Go work in game development for a bit. Then come back and tell me if the word lazy applies to anything they do.

I would argue many developers are lazy, especially when they have to learn new hardware like The Cell. Developers were complaining when they had a supercomputer-like processor on their hands. I don't know, it seems ridiculous to me not to put in the extra effort to get 100% out of The Cell.

#23 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

CELL's SPE is one of the most primitive processor designs, i.e. it's no better than a PowerPC 601 (G1) in terms of architecture, just with a very high clock speed and kit-bashed AltiVec 128-register SIMD units. Copy and paste this SPE design 8 times on a chip. The SPE lacks a PC GPU's fixed-function raster hardware.

The NVIDIA G80 has a few thousand registers and a giga-threads design (thousands of hardware threads). The G80's stream processors follow a MIMD (multiple instruction, multiple data) design instead of a SIMD (single instruction, multiple data) design, and the G80 includes a PC GPU's fixed-function raster hardware.

The Nvidia G80 is not a CPU, it's a GPU. Cell is a CPU with GPU capabilities, so once again your Sony bashing is a joke.

Call me when the G80 can run an OS.

@clyde46 said:

I remember when Nvidia launched the 8800GT. That card basically demolished the PS3 and 360.

It was a more advanced GPU, and it cost as much as the PS3 itself just for the card, so it damn well should demolish them; otherwise it would be a total letdown.

@ronvalencia said:

That's bullshit. There's nothing special about SPUs.

My background was in PowerPC before Sony switched to PowerPC; Amiga/PowerMac PowerPC AltiVec came before the PS3.

For Folding@home protein folding, the NVIDIA G80 destroys CELL by more than 2X.

You are such a Sony hater it's not even funny, not to mention one of the biggest hypocrites and most dishonest fanboys this place has ever seen.

Cell is a CPU, get this fu**ing word in your head: CPU, CENTRAL PROCESSING UNIT.

The freaking G80 is a GPU, GRAPHICS PROCESSING UNIT.

The only reason you compare Cell to the G80 is because it suits your freaking Sony-hating agenda; in fact you don't even like Nvidia, and all you do here is suck MS and AMD ass all day long.

Now, since you are such a fierce hater of Cell, answer me this: in 2006, what CPU was even fu**ing close to Cell in Folding@home?

Not even Intel's shiny new Core Duo was close.

Your background is in buffoonery and ass-kissing of MS and AMD, as it always has been, and your comparisons are freaking god-awful and dishonest. Why you would compare a freaking CPU to a GPU just blows anyone's mind; oh yeah, because it is the only way you can downplay Cell, which was powerful enough to make the Xbox 360 look silly even with its stronger GPU.

As bad as you claim Cell was, the Xenon CPU was three times worse, and you never say shit about it. Oh, and it was PPC too, so how come it sucked so much compared to Cell?

Oh, and that was with 5 SPEs; funny enough, you never say anything bad about the shitty Xenon CPU.

The only reason you compare the G80 is because even the CPUs of that era ate dirt vs Cell, and you know it, so the best you can do is hide behind the G80, which by the way can't run an OS or handle the general-purpose tasks typically suited to a CPU.

#24 LustForSoul
Member since 2011 • 6404 Posts

They're not lazy, that's for sure. There's a reason multi-platform games performed worse on PS3.

#25 tormentos
Member since 2003 • 33784 Posts
@LustForSoul said:

They're not lazy, that's for sure. There's a reason multi-platform games performed worse on PS3.

Many performed better on PS3; what's the excuse there?

#26 LustForSoul
Member since 2011 • 6404 Posts

@tormentos said:
@LustForSoul said:

They're not lazy, that's for sure. There's a reason multi-platform games performed worse on PS3.

Many performed better on PS3; what's the excuse there?

*Correction: More games ran better on 360 than on PS3. Developers have stated that it was harder to develop for PS3 due to the Cell, so there's your reason.

We could also assume 75% of the developers were just lazy, but it's probably more complicated than that.

#27  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

CELL's SPE is one of the most primitive processor designs, i.e. it's no better than a PowerPC 601 (G1) in terms of architecture, just with a very high clock speed and kit-bashed AltiVec 128-register SIMD units. Copy and paste this SPE design 8 times on a chip. The SPE lacks a PC GPU's fixed-function raster hardware.

The NVIDIA G80 has a few thousand registers and a giga-threads design (thousands of hardware threads). The G80's stream processors follow a MIMD (multiple instruction, multiple data) design instead of a SIMD (single instruction, multiple data) design, and the G80 includes a PC GPU's fixed-function raster hardware.

The Nvidia G80 is not a CPU, it's a GPU. Cell is a CPU with GPU capabilities, so once again your Sony bashing is a joke.

Call me when the G80 can run an OS.

@clyde46 said:

I remember when Nvidia launched the 8800GT. That card basically demolished the PS3 and 360.

It was a more advanced GPU, and it cost as much as the PS3 itself just for the card, so it damn well should demolish them; otherwise it would be a total letdown.

@ronvalencia said:

That's bullshit. There's nothing special about SPUs.

My background was in PowerPC before Sony switched to PowerPC; Amiga/PowerMac PowerPC AltiVec came before the PS3.

For Folding@home protein folding, the NVIDIA G80 destroys CELL by more than 2X.

You are such a Sony hater it's not even funny, not to mention one of the biggest hypocrites and most dishonest fanboys this place has ever seen.

Cell is a CPU, get this fu**ing word in your head: CPU, CENTRAL PROCESSING UNIT.

The freaking G80 is a GPU, GRAPHICS PROCESSING UNIT.

The only reason you compare Cell to the G80 is because it suits your freaking Sony-hating agenda; in fact you don't even like Nvidia, and all you do here is suck MS and AMD ass all day long.

Now, since you are such a fierce hater of Cell, answer me this: in 2006, what CPU was even fu**ing close to Cell in Folding@home?

Not even Intel's shiny new Core Duo was close.

Your background is in buffoonery and ass-kissing of MS and AMD, as it always has been, and your comparisons are freaking god-awful and dishonest. Why you would compare a freaking CPU to a GPU just blows anyone's mind; oh yeah, because it is the only way you can downplay Cell, which was powerful enough to make the Xbox 360 look silly even with its stronger GPU.

As bad as you claim Cell was, the Xenon CPU was three times worse, and you never say shit about it. Oh, and it was PPC too, so how come it sucked so much compared to Cell?

Oh, and that was with 5 SPEs; funny enough, you never say anything bad about the shitty Xenon CPU.

The only reason you compare the G80 is because even the CPUs of that era ate dirt vs Cell, and you know it, so the best you can do is hide behind the G80, which by the way can't run an OS or handle the general-purpose tasks typically suited to a CPU.

The SPU is a DSP, not a CPU. I already corrected you with an IBM quote.

The 7th SPU handles DRM-related services.

From http://public.dhe.ibm.com/software/dw/power/pdfs/pa-cmpware1/pa-cmpware1-pdf.pdf

The collection of processors in a Cell Broadband Engine™ (Cell/B.E.) processor displays a DSP-like architecture. This means that the order in which the SPUs...

Notice IBM didn't describe the SPU as CPU-like. IBM > YOU.

I posted the IBM link in 2014 against your stupid claims AND you still argued against the designer of the damn CELL.

Try again.

I'm continuing this debate at https://www.gamespot.com/forums/system-wars-314159282/the-biggest-lie-in-console-gaming-history-32314638/?page=2

There's no need to mention the Xbox 360's 3X PPE solution, since the graph already shows it's inferior to AMD Jaguar in IPC and results.

The SPU wouldn't pass even the Motorola 68000's supervisor/user mode instruction set requirement for an OS, since the SPU only handles user-mode instructions, just like any PC GPU without hardware VM capability. Self-hosting an early Unix-type OS requires both supervisor and user instruction set modes.

The SPU doesn't have a proper MMU to self-host a Unix-like operating system. Only the PPE meets the requirements to self-host a Unix-type OS.

You failed basic university Operating Systems subjects.

Read https://www.cs.utah.edu/~wbsun/kgpu.pdf for OS functions being assisted by a CUDA GPU; the examples are mostly DRM-related workloads.

The "can it run an OS" argument is nearly pointless for game math workloads, i.e. a distraction from the machine's main purpose.

#28 skektek
Member since 2004 • 6530 Posts

@adamosmaki said:
@harnessthepower said:

@The_Stand_In: I don't know, I thought I remembered reading an article saying The Cell had enough untapped power to compete with the latest i7 processors, and that its true form has yet to be optimized.

No, it didn't have enough power to compete with the latest i7. The only thing the Cell could do much better than a traditional x86 CPU was SIMD processing. At anything else, a traditional 4-core x86 CPU was much, much better. And even the workloads the Cell was great at could be done much faster on GPUs once AMD and Nvidia introduced unified shader architectures.

The Cell was superior to all of its x86 peers in floating-point performance as well as parallelization.

Cell BE: 9 cores with 10 execution threads, 100 GFLOPS (8 cores with 9 execution threads in the PS3's Cell).

Xeon X5355: 4 cores with 4 execution threads, 50 GFLOPS.
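For context, peak GFLOPS figures like these come from a simple formula: units × SIMD lanes × FLOPs per lane per cycle × clock. A minimal C sketch using commonly cited counts (treat the inputs as assumptions; different choices of what to count, FMA or not, how many units are enabled, are exactly why quoted figures differ from source to source):

```c
#include <stdio.h>

/* Peak single-precision GFLOPS = units * lanes * FLOPs/lane/cycle * GHz.
   The counts below are commonly cited figures, not measurements. */
static double peak_gflops(int units, int lanes, int flops_per_cycle, double ghz) {
    return units * lanes * flops_per_cycle * ghz;
}

int main(void) {
    /* Each SPE: 4-wide single-precision SIMD, fused multiply-add = 2 FLOPs/lane/cycle. */
    printf("Cell (8 SPEs @ 3.2 GHz): %.1f GFLOPS\n", peak_gflops(8, 4, 2, 3.2));
    printf("Cell (6 SPEs usable by PS3 games): %.1f GFLOPS\n", peak_gflops(6, 4, 2, 3.2));
    /* Core 2 era Xeon: 4-wide SSE add + 4-wide SSE mul issued per cycle per core. */
    printf("Xeon X5355 (4 cores @ 2.66 GHz): %.1f GFLOPS\n", peak_gflops(4, 4, 2, 2.66));
    return 0;
}
```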

#29  Edited By skektek
Member since 2004 • 6530 Posts

@LustForSoul said:
@tormentos said:
@LustForSoul said:

They're not lazy, that's for sure. There's a reason multi-platform games performed worse on PS3.

Many performed better on PS3; what's the excuse there?

*Correction: More games ran better on 360 than on PS3. Developers have stated that it was harder to develop for PS3 due to the Cell, so there's your reason.

We could also assume 75% of the developers were just lazy, but it's probably more complicated than that.

The Cell presented 2 difficulties: most of the cores lacked branch prediction, and most of its performance relied on parallelization at a time when parallelization was not a solved problem.

#30  Edited By ronvalencia
Member since 2008 • 29612 Posts

@skektek said:
@adamosmaki said:
@harnessthepower said:

@The_Stand_In: I don't know, I thought I remembered reading an article saying The Cell had enough untapped power to compete with the latest i7 processors, and that its true form has yet to be optimized.

No, it didn't have enough power to compete with the latest i7. The only thing the Cell could do much better than a traditional x86 CPU was SIMD processing. At anything else, a traditional 4-core x86 CPU was much, much better. And even the workloads the Cell was great at could be done much faster on GPUs once AMD and Nvidia introduced unified shader architectures.

The Cell was superior to all of its x86 peers in floating-point performance as well as parallelization.

Cell BE: 9 cores with 10 execution threads, 100 GFLOPS (8 cores with 9 execution threads in the PS3's Cell).

Xeon X5355: 4 cores with 4 execution threads, 50 GFLOPS.

From http://public.dhe.ibm.com/software/dw/power/pdfs/pa-cmpware1/pa-cmpware1-pdf.pdf

The collection of processors in a Cell Broadband Engine™ (Cell/B.E.) processor displays a DSP-like architecture. This means that the order in which the SPUs...

Notice IBM didn't describe the SPU as CPU-like.

For the PS3's Cell BE, there's 1 PPE (SMT) + 7 SPUs, hence 8 threads. The 8th SPU is disabled for yield reasons.

An SPU thread is not the same as a PPE thread.

#31 svaubel
Member since 2005 • 4571 Posts

Not sure why people keep banging on about the Cell, as it's in an old-gen system.

A super CPU doesn't mean squat if it's a technical nightmare to program for, which it was.

#32 HarnessThePower
Member since 2017 • 267 Posts

@svaubel: The mysteries of The Cell may never be unlocked, which makes it an interesting piece of hardware to discuss.

#33  Edited By ronvalencia
Member since 2008 • 29612 Posts

@harnessthepower said:

@svaubel: The mysteries of The Cell may never be unlocked, which makes it an interesting piece of hardware to discuss.

CELL is not a magical device; i.e., think of it as 8 128-bit AltiVec/VMX SIMD units.

The Intel Core i7-4770K "Haswell" has 8 256-bit AVX2 SIMD units, one available to each of its 8 threads.

Both Haswell's SIMD units and the CELL's SPUs have fused multiply-add instructions.

Only the CELL's PPE can self-host a Unix-type OS, while any of Haswell's CPU cores can.

AMD Bulldozer with 4 modules (8 threads) has 8 128-bit SIMD units with FMA4.

AMD was the first x86 CPU vendor to nullify PowerPC's non-destructive fused multiply-add advantage.

---------

FMA3 = 3-operand fused multiply-add, where the destination overwrites one of the source operands.

FMA4 = 4-operand, non-destructive fused multiply-add.
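To make the FMA3/FMA4 distinction concrete, here is a minimal C sketch using Haswell's FMA3 intrinsic (values are arbitrary; compile with -mfma):

```c
#include <immintrin.h>  /* AVX + FMA intrinsics */
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {2, 2, 2, 2, 2, 2, 2, 2};
    float c[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    float r[8];

    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_loadu_ps(c);

    /* FMA3: one fused a*b+c per lane; at the machine level the destination
       register reuses one of the three sources, which is the "destructive"
       part. FMA4 (Bulldozer) writes to a fourth, independent register. */
    __m256 vr = _mm256_fmadd_ps(va, vb, vc);
    _mm256_storeu_ps(r, vr);

    for (int i = 0; i < 8; i++)
        printf("%.1f ", r[i]);   /* 3.0 5.0 7.0 ... 17.0 */
    printf("\n");
    return 0;
}
```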

#34 HalcyonScarlet
Member since 2011 • 13669 Posts

It's not that it's too complicated for development; it's too complicated for the games business. Developers haven't got time; it's a fast-moving business. With the 360's CPU, and the current CPUs that the Xbox and the PS4 use, they can decide what they need and how best to run the code. No one wants to stop and fight to get that code running.

People need to change their view that there is a best and a worst of things. There is a solution that, going by performance/power consumption/cost, fits a situation better than the others.

#35 ArchoNils2
Member since 2005 • 10534 Posts

@biack_goku said:

Cell had basically unlimited potential. He had Saiyan cells, Frieza's cells, and Piccolo's cells. He would've been one of the most powerful beings in the galaxy through continuous training and zenkai boosts.

It's a shame they didn't add a Cell from another universe to the huge battle in the current Super saga. Can you imagine how awesome a perfect Cell with several absorbed planets would have been? On a battlefield with 70 enemies he absorbs to get stronger and stronger? They really missed a fantastic enemy.

#36  Edited By PAL360
Member since 2007 • 30573 Posts

'Just how dominant could PS3 have been had The Cell's architecture been less of a secret, or if the developers had been less lazy?'

It had nothing to do with the devs. The fact that the Cell was powerful on paper didn't stop the console from having a weak GPU, a bad RAM solution, etc. The PS3 was simply a badly designed console, something the PS4 thankfully isn't.

#37 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

The SPU is a DSP, not a CPU. I already corrected you with an IBM quote.

The 7th SPU handles DRM-related services.

From http://public.dhe.ibm.com/software/dw/power/pdfs/pa-cmpware1/pa-cmpware1-pdf.pdf

What GPU runs at 3.2 GHz, you MORON...

Even better, what 2005 GPU ran at 3.2 GHz?

Cell is a CPU with CPU clock speeds; it's a hybrid, and nothing you quoted from IBM claims that Cell is a GPU. The SPEs are part of Cell, and Cell is a CPU. No GPU runs at 3.2 GHz, much less in 2005 when Cell came out.

DSPs (digital signal processors) are good at some GPU tasks because of how good they are at repeating the same task over and over in loops, which suits jobs like Folding@home, as well as other medical applications outside gaming.

Digital signal processing (DSP) is the use of digital processing, such as by computers, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency.

Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech signal processing, sonar, radar and other sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control of systems, biomedical engineering, seismic data processing, among others.

Digital signal processing can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification[1] and can be implemented in the time, frequency, and spatio-temporal domains.

A graphics processing unit (GPU), occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing, and their highly parallel structure makes them more efficient than general-purpose CPUs for algorithms where the processing of large blocks of data is done in parallel. In a personal computer, a GPU can be present on a video card, or it can be embedded on the motherboard or—in certain CPUs—on the CPU die.

http://digitalcommons.uri.edu/cgi/viewcontent.cgi?article=1002&context=theses

A DSP and a GPU are not the same.

So stop comparing the damn G80 to Cell. Cell is a CPU with DSPs as part of it. No GPU runs at 3.2 GHz; in fact, modern ones run at around 1300 MHz stock or a little more, and in 2005 when Cell was ready, GPU speeds were around 550 MHz, not even close to the DSPs on Cell.

I remember how Major Nelson called the SPEs useless DSPs, lol how wrong he was..lol
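For what it's worth, the textbook DSP workload looks like this: the same multiply-accumulate repeated in a tight loop. A minimal C sketch of a FIR filter (coefficients made up for illustration):

```c
#include <stdio.h>

#define TAPS 4

/* FIR filter: every output sample is the same multiply-accumulate loop,
   which is exactly the regular, branch-light work DSPs are built for. */
void fir(float *out, const float *in, int n, const float coef[TAPS]) {
    for (int i = TAPS - 1; i < n; i++) {
        float acc = 0.0f;
        for (int t = 0; t < TAPS; t++)
            acc += coef[t] * in[i - t];
        out[i] = acc;
    }
}

int main(void) {
    float coef[TAPS] = {0.25f, 0.25f, 0.25f, 0.25f};  /* 4-point moving average */
    float in[8] = {1, 2, 3, 4, 5, 6, 7, 8}, out[8] = {0};
    fir(out, in, 8, coef);
    for (int i = 0; i < 8; i++) printf("%.2f ", out[i]);  /* ... 2.50 3.50 ... */
    printf("\n");
    return 0;
}
```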

@skektek said:

The Cell presented 2 difficulties: most of the cores lacked branch prediction, and most of its performance relied on parallelization at a time when parallelization was not a solved problem.

DSPs are bad at branch prediction, which is funny because Ronvalencia quotes a Sony hater from Beyond3D claiming you have to move branching to the SPEs, which is a joke..lol

@ronvalencia said:

From http://public.dhe.ibm.com/software/dw/power/pdfs/pa-cmpware1/pa-cmpware1-pdf.pdf

The collection of processors in a Cell Broadband Engine™ (Cell/B.E.) processor displays a DSP-like architecture. This means that the order in which the SPUs...

Notice IBM didn't describe the SPU as CPU-like.

For the PS3's Cell BE, there's 1 PPE (SMT) + 7 SPUs, hence 8 threads. The 8th SPU is disabled for yield reasons.

An SPU thread is not the same as a PPE thread.

The Cell Broadband Engine Architecture integrates an IBM PowerPC processor element (PPE) and eight synergistic processor elements (SPEs) in a unified system architecture. The PPE provides common system functions, while the SPEs perform data-intensive processing.

What is a PowerPC processor?

It's a damn CPU, you buffoon; there are x86 CPUs and there are PPC CPUs, and both are classified as CPUs.

So yeah, Cell was a hybrid. Comparing Cell to the G80 is a joke; there are tasks where Cell murders that G80, because that GPU can't run CPU processes. Cell is composed of a PPC core and SPEs on one chip, and DSPs are not even GPUs, so you can't make that comparison. The only real reason you compare Cell to the G80 is blind fanboyism and your hate for anything Sony.

Find me one CPU from 2005 or 2006 that beat Cell in every single regard.

#38 NFJSupreme
Member since 2005 • 6605 Posts

I honestly thought this was a necro thread, lmao. People still care about this in 2017?

#39  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

The SPU is a DSP, not a CPU. I already corrected you with an IBM quote.

The 7th SPU handles DRM-related services.

From http://public.dhe.ibm.com/software/dw/power/pdfs/pa-cmpware1/pa-cmpware1-pdf.pdf

What GPU runs at 3.2 GHz, you MORON...

Even better, what 2005 GPU ran at 3.2 GHz?

Cell is a CPU with CPU clock speeds; it's a hybrid, and nothing you quoted from IBM claims that Cell is a GPU. The SPEs are part of Cell, and Cell is a CPU. No GPU runs at 3.2 GHz, much less in 2005 when Cell came out.

DSPs (digital signal processors) are good at some GPU tasks because of how good they are at repeating the same task over and over in loops, which suits jobs like Folding@home, as well as other medical applications outside gaming.

Digital signal processing (DSP) is the use of digital processing, such as by computers, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency.

Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech signal processing, sonar, radar and other sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control of systems, biomedical engineering, seismic data processing, among others.

Digital signal processing can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification[1] and can be implemented in the time, frequency, and spatio-temporal domains.

A graphics processing unit (GPU), occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing, and their highly parallel structure makes them more efficient than general-purpose CPUs for algorithms where the processing of large blocks of data is done in parallel. In a personal computer, a GPU can be present on a video card, or it can be embedded on the motherboard or—in certain CPUs—on the CPU die.

http://digitalcommons.uri.edu/cgi/viewcontent.cgi?article=1002&context=theses

A DSP and a GPU are not the same.

So stop comparing the damn G80 to Cell. Cell is a CPU with DSPs as part of it. No GPU runs at 3.2 GHz; in fact, modern ones run at around 1300 MHz stock or a little more, and in 2005 when Cell was ready, GPU speeds were around 550 MHz, not even close to the DSPs on Cell.

I remember how Major Nelson called the SPEs useless DSPs, lol how wrong he was..lol

@skektek said:

The Cell presented 2 difficulties: most of the cores lacked branch prediction, and most of its performance relied on parallelization at a time when parallelization was not a solved problem.

DSPs are bad at branch prediction, which is funny because Ronvalencia quotes a Sony hater from Beyond3D claiming you have to move branching to the SPEs, which is a joke..lol

@ronvalencia said:

From http://public.dhe.ibm.com/software/dw/power/pdfs/pa-cmpware1/pa-cmpware1-pdf.pdf

The collection of processors in a Cell Broadband Engine™ (Cell/B.E.) processor displays a DSP-like architecture. This means that the order in which the SPUs...

Notice IBM didn't describe the SPU as CPU-like.

For the PS3's Cell BE, there's 1 PPE (SMT) + 7 SPUs, hence 8 threads. The 8th SPU is disabled for yield reasons.

An SPU thread is not the same as a PPE thread.

The Cell Broadband Engine Architecture integrates an IBM PowerPC processor element (PPE) and eight synergistic processor elements (SPEs) in a unified system architecture. The PPE provides common system functions, while the SPEs perform data-intensive processing.

What is a PowerPC processor?

It's a damn CPU, you buffoon; there are x86 CPUs and there are PPC CPUs, and both are classified as CPUs.

So yeah, Cell was a hybrid. Comparing Cell to the G80 is a joke; there are tasks where Cell murders that G80, because that GPU can't run CPU processes. Cell is composed of a PPC core and SPEs on one chip, and DSPs are not even GPUs, so you can't make that comparison. The only real reason you compare Cell to the G80 is blind fanboyism and your hate for anything Sony.

Find me one CPU from 2005 or 2006 that beat Cell in every single regard.

Gaming PCs come with a CPU and a discrete GPGPU. An AMD APU includes a CPU and a GPGPU.

The GPGPU handles data-intensive parallel processing.

A long time ago, the DSP was a separate chip.

The decision to combine DSP and CPU is mostly for cost and latency reduction reasons. The only buffoon is you.

The PC did have a quad-SPU add-on PCI card, and it flopped, since data-intensive parallel processing was already handled by the GPGPU.

From http://www.geautomation.com/node/12912

GE Intelligent Platforms Speeds Development, Deployment Of GPGPU-based Digital Signal Processing Applications

An NVIDIA GPGPU handles radar DSP work, you stupid clown.

http://www.soundonsound.com/sound-advice/using-your-graphics-card-process-plug-ins

Using a CUDA GPGPU for DSP audio work.

You are using the old GPU definition, not the new GPGPU definition. Again, the only buffoon is you.

http://gpuopen.com/amd-trueaudio-next-and-cu-reservation/

AMD TrueAudio Next and CU Reservation

For TrueAudio, AMD ditched the Cadence Tensilica HiFi EP DSP, and that's one less third-party IP payment.

-----------

IBM CELL lost the war!!!!

Facts:

1. The SPU can't self-host a Unix-type OS.

2. The PPE is a shit CPU.

AMD Jaguar has superior IPC over IBM's PPE and SPU, i.e. IBM played the bullshit GFLOPS marketing game.

http://www.anandtech.com/bench/product/1224?vs=53

The PS3 was released around November 2006, NOT 2005.

A pure 3.2 GHz clock speed argument is meaningless, MORON.

#40  Edited By skektek
Member since 2004 • 6530 Posts

@tormentos said:
@skektek said:

The Cell presented 2 difficulties: most of the cores lacked branch prediction, and most of its performance relied on parallelization at a time when parallelization was not a solved problem.

DSPs are bad at branch prediction, which is funny because Ronvalencia quotes a Sony hater from Beyond3D claiming you have to move branching to the SPEs, which is a joke..lol

The Cell doesn't have any DSPs (although DSP functions can be emulated).

The SPE cores aren't "bad" at branch prediction. They don't have any branch prediction at all. Branch prediction has to be emulated (predicted) in the compiler, which isn't nearly as efficient as hardware branch prediction.
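To make that concrete: on hardware with no branch predictor, you avoid branches outright where possible. A minimal C sketch of the branch-free "select" idiom (plain portable C, not actual SPU intrinsics; SPU code uses compare and select instructions to the same effect):

```c
#include <stdio.h>
#include <stdint.h>

/* Branch-free select: build an all-ones/all-zeros mask from the comparison,
   then blend. A mispredicted jump can never happen because there is no jump. */
static int32_t select_max(int32_t a, int32_t b) {
    int32_t mask = -(int32_t)(a > b);     /* 0xFFFFFFFF if a > b, else 0 */
    return (a & mask) | (b & ~mask);
}

int main(void) {
    printf("%d %d\n", select_max(7, 3), select_max(-2, 9));  /* prints: 7 9 */
    return 0;
}
```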

#41  Edited By ronvalencia
Member since 2008 • 29612 Posts

@skektek said:
@tormentos said:
@skektek said:

The Cell presented 2 difficulties: most of the cores lacked branch prediction, and most of its performance relied on parallelization at a time when parallelization was not a solved problem.

DSPs are bad at branch prediction, which is funny because Ronvalencia quotes a Sony hater from Beyond3D claiming you have to move branching to the SPEs, which is a joke..lol

The Cell doesn't have any DSPs (although DSP functions can be emulated).

The SPE cores aren't "bad" at branch prediction. They don't have any branch prediction at all. Branch prediction has to be emulated (predicted) in the compiler, which isn't nearly as efficient as hardware branch prediction.

The RSX's branching was very bad.

The SPU's branching was better than very bad.

I don't see anything special in 7 copies of a kit-bashed AltiVec-based SPU instruction set.

#42 skektek
Member since 2004 • 6530 Posts

@ronvalencia said:
@skektek said:
@tormentos said:
@skektek said:

The Cell presented 2 difficulties: most of the cores lacked branch prediction, and most of its performance relied on parallelization at a time when parallelization was not a solved problem.

DSPs are bad at branch prediction, which is funny because Ronvalencia quotes a Sony hater from Beyond3D claiming you have to move branching to the SPEs, which is a joke..lol

The Cell doesn't have any DSPs (although DSP functions can be emulated).

The SPE cores aren't "bad" at branch prediction. They don't have any branch prediction at all. Branch prediction has to be emulated (predicted) in the compiler, which isn't nearly as efficient as hardware branch prediction.

The RSX's branching was very bad.

The SPU's branching was better than very bad.

Neither of those devices even supports branch prediction.

You are using words you don't know.

#43  Edited By ronvalencia
Member since 2008 • 29612 Posts

@skektek said:
@ronvalencia said:
@skektek said:
@tormentos said:

DSPs are bad at branch prediction, which is funny because Ronvalencia quotes a Sony hater from Beyond3D claiming you have to move branching to the SPEs, which is a joke..lol

The Cell doesn't have any DSPs (although DSP functions can be emulated).

The SPE cores aren't "bad" at branch prediction. They don't have any branch prediction at all. Branch prediction has to be emulated (predicted) in the compiler, which isn't nearly as efficient as hardware branch prediction.

The RSX's branching was very bad.

The SPU's branching was better than very bad.

Neither of those devices even supports branch prediction.

You are using words you don't know.

Refer to static branch prediction.

The SPU has static branch prediction and prepare-to-branch operations: https://www.research.ibm.com/cell/cell_compilation.html

You are using words you don't know.

-------------

http://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/cudaexperiments/kernellevel/branchstatistics.htm

Note why NVIDIA has PhysX on CUDA.

#45  Edited By scatteh316
Member since 2004 • 10273 Posts

And this is what separates the idiots from other people.

Anyone with half a brain cell knows that Cell (see what I did there) overachieved; you only have to read developer white papers to see how much it was used to make up for the POS that was RSX.

But of course this is System Wars, and idiots can't read.

#46 skektek
Member since 2004 • 6530 Posts

@ronvalencia said:
@skektek said:
@ronvalencia said:
@skektek said:
@tormentos said:

DSPs are bad at branch prediction, which is funny because Ronvalencia quotes a Sony hater from Beyond3D claiming you have to move branching to the SPEs, which is a joke..lol

The Cell doesn't have any DSPs (although DSP functions can be emulated).

The SPE cores aren't "bad" at branch prediction. They don't have any branch prediction at all. Branch prediction has to be emulated (predicted) in the compiler, which isn't nearly as efficient as hardware branch prediction.

The RSX's branching was very bad.

The SPU's branching was better than very bad.

Neither of those devices even supports branch prediction.

You are using words you don't know.

Refer to static branch prediction.

The SPU has static branch prediction and prepare-to-branch operations: https://www.research.ibm.com/cell/cell_compilation.html

You are using words you don't know.

-------------

http://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/cudaexperiments/kernellevel/branchstatistics.htm

Note why NVIDIA has PhysX on CUDA.

/facepalm holyfuckingfuckfuck

Branching and predicting the branch are two separate functions.

Branching is like walking through a maze.

Branch prediction is like calculating the most efficient path from the beginning to the end of a changing maze.

Saying that SPUs have branch prediction is like saying that you have a GPS because you printed out instructions to grandma's house from Mapquest.com.

#47 Wasdie  Moderator
Member since 2003 • 53622 Posts

All of the arguing in this thread is pointless. The Cell was destined to fail simply because it was not X86. Nobody on the planet wanted to bother to support it and what little support it did get for the PS3 was focused around video games, which didn't benefit from the SIMD instructions since video game logic is almost always linear through a single main loop.

Games benefit from increased parallelism, but the SPEs of the Cell were not autonomous like a general processing core and had to use the PPE to "guide" them more or less (massive oversimplification). This basically meant that if you really wanted the Cell to shine, the engine's main loop/timer (the core loop that runs the game) had to be extremely well optimized, otherwise you would be wasting a lot of time waiting for the SPEs to finish their work. It's not really parallelism in the way we think of it, and it's very difficult for developers to develop for.
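A rough sketch of that dispatch-and-wait pattern, in portable C with pthreads rather than real SPE job code (the job layout and names are made up): the main loop hands out a batch of work, then stalls at the join until every worker finishes, which is where badly scheduled frames lose their time. Compile with -lpthread.

```c
#include <pthread.h>
#include <stdio.h>

#define JOBS 6

typedef struct { int id; float result; } Job;

/* Stand-in for an SPE kernel: pure number crunching on its own data. */
static void *crunch(void *arg) {
    Job *j = (Job *)arg;
    float acc = 0.0f;
    for (int i = 1; i <= 1000000; i++)
        acc += (float)j->id / (float)i;
    j->result = acc;
    return NULL;
}

int main(void) {
    pthread_t workers[JOBS];
    Job jobs[JOBS];

    /* The "main loop" dispatches a batch of jobs... */
    for (int i = 0; i < JOBS; i++) {
        jobs[i].id = i + 1;
        pthread_create(&workers[i], NULL, crunch, &jobs[i]);
    }

    /* ...then stalls here until every worker finishes: if dispatch was
       badly timed, this join is where the frame time goes. */
    for (int i = 0; i < JOBS; i++)
        pthread_join(workers[i], NULL);

    for (int i = 0; i < JOBS; i++)
        printf("job %d -> %f\n", jobs[i].id, jobs[i].result);
    return 0;
}
```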

So they put a chip that is terrible at general processing in a machine that needs a lot of autonomous cores to do their own thing. The benchmarks, of course, would favor the Cell since if you did optimize for it properly, it could number crunch very well compared to comparable x86 processors at the time. However, as we saw in the real world, that rarely translated into better performance on the PS3. Throughout the entire life of the PS3, the vast majority of multiplatform games ran better on the Xbox 360. There were a handful that ran better on the PS3, but they were few and far between.

Sony, Microsoft, and now Nintendo were all smart to drop IBM as their chip provider. X86 makes a hell of a lot more sense than the PowerPC cores and whatever instruction set they were using and ARM makes sense for a mobile platform.

It's not that the Cell was an abomination. It was one of the first attempts to utilize "number cruncher" processors in tandem with a general purpose core for improved performance. Intel and AMD have since gone that route with their processors as being able to offload number crunching intensive processing onto the chip, has a lot of real-world applications. It also allowed them to sell a line of CPUs with sufficient GPU capability for outputting higher resolutions at higher framerates for day-to-day usage all while saving on energy consumption. A perfect combination for the mobile world.

#48 scatteh316
Member since 2004 • 10273 Posts

@Wasdie said:

All of the arguing in this thread is pointless. The Cell was destined to fail simply because it was not X86. Nobody on the planet wanted to bother to support it and what little support it did get for the PS3 was focused around video games, which didn't benefit from the SIMD instructions since video game logic is almost always linear through a single main loop.

Games benefit from increased parallelism, but the SPEs of the Cell were not autonomous like a general processing core and had to use the PPE to "guide" them more or less (massive oversimplification). This basically meant that if you really wanted the Cell to shine, the engine's main loop/timer (the core loop that runs the game) had to be extremely well optimized, otherwise you would be wasting a lot of time waiting for the SPEs to finish their work. It's not really parallelism in the way we think of it, and it's very difficult for developers to develop for.

So they put a chip that is terrible at general processing in a machine that needs a lot of autonomous cores to do their own thing. The benchmarks, of course, would favor the Cell since if you did optimize for it properly, it could number crunch very well compared to comparable x86 processors at the time. However, as we saw in the real world, that rarely translated into better performance on the PS3. Throughout the entire life of the PS3, the vast majority of multiplatform games ran better on the Xbox 360. There were a handful that ran better on the PS3, but they were few and far between.

Sony, Microsoft, and now Nintendo were all smart to drop IBM as their chip provider. X86 makes a hell of a lot more sense than the PowerPC cores and whatever instruction set they were using and ARM makes sense for a mobile platform.

It's not that the Cell was an abomination. It was one of the first attempts to utilize "number cruncher" processors in tandem with a general purpose core for improved performance. Intel and AMD have since gone that route with their processors as being able to offload number crunching intensive processing onto the chip, has a lot of real-world applications. It also allowed them to sell a line of CPUs with sufficient GPU capability for outputting higher resolutions at higher framerates for day-to-day usage all while saving on energy consumption. A perfect combination for the mobile world.

Couldn't be more wrong

#49  Edited By Wasdie  Moderator
Member since 2003 • 53622 Posts

@scatteh316: You mean like when every multiplatform game under the sun ran better on the Xbox 360? A handful late in the gen that ran better on the PS3 just proves my point that the thing was too complicated for developers to bother wasting the time to make it perform as well as the 360 version.

The only games where the devs bothered were the exclusives. Those were few and far between. Naughty Dog pushing the PS3 to its absolute limits is the exception, not the rule. The real world proved that the Cell and the entire PS3 were a pain in the ass to develop on and hindered development on the platform constantly. It rarely translated to better performance in practice because it was too difficult to code for in the little amount of time game devs have to build and launch a game.

What's the point of having some super power chip if only a handful of games over the entire life of the machine are going to use it? There is no logic behind that. Winning some internet argument about which is technically more powerful doesn't mean jack shit when the 360, which launched months earlier for $100 less, had comparably good looking games throughout its entire lifespan. It's a pathetic waste of time that cost the PS3 a lot of potential sales early on to the 360 because of the bad PR gained from its noticeably bad ports.

Why do you think they dumped that piece of crap in favor of x86?

#50 deactivated-63d2876fd4204
Member since 2016 • 9129 Posts