Feature Article

The Past, Present, and Future of the CPU, According to Intel and AMD

Power trip.

Ten years ago, if you wanted the best CPU for gaming or anything else, you'd have chosen AMD's Athlon 64. My, how times have changed. While AMD has struggled to rekindle its glory days as the CPU-performance leader, Intel's CPUs have gone from strength to strength over the past decade. Today, Intel's CPUs perform best and use the least power, scaling admirably from powerhouse gaming PCs all the way down to thin-and-light notebooks and tablets--segments that didn't even exist a decade ago. But this return to CPU dominance might never have happened had it not been for the innovations taking place at AMD in the first half of the 2000s, which makes the company's fall from grace all the more galling.

The 64-bit extensions of AMD's Athlon 64 meant it could run 64-bit operating systems, which could address more than 4GB of RAM, while still being able to run 32-bit games and applications at full speed--all important considerations for PC gamers at the time. These extensions proved so successful that Intel eventually ended up licensing them for its own compatible x86-64 implementation. Two years after the launch of the Athlon 64, AMD introduced the Athlon 64 X2, the first consumer multicore processor. Its impact on today's CPUs cannot be overstated: everything from huge gaming rigs to tiny mobile phones now uses CPUs with two or more cores. It's a change that even Intel's Gaming Ecosystem Director, Randy Stude, cited when I asked him what had the biggest impact on CPU design over the last decade.
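As an aside, the 4GB ceiling mentioned above falls straight out of address-space arithmetic. Here is a minimal sketch in Python, purely illustrative (real CPUs expose fewer physical address lines than the full 64 bits):

# Why 32-bit addressing tops out at 4GB: n address bits can name 2**n
# distinct byte addresses. Purely illustrative; actual CPUs implement
# fewer physical address bits than the full 64.

def addressable_bytes(address_bits):
    """Number of distinct byte addresses reachable with the given bit width."""
    return 2 ** address_bits

if __name__ == "__main__":
    for bits in (32, 64):
        gib = addressable_bytes(bits) / 2**30
        print(f"{bits}-bit addressing: {gib:,.0f} GiB of address space")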


AMD's Athlon 64 kickstarted the 64-bit revolution. Image credit: flickr.com/naukim

"So, the answer to the question is cores," Stude tells me. "I was here at Intel through the Pentium IV days. We hit a heat issue with that part and took a big right turn and introduced a very efficient product out of Israel [the Core CPU] that helped us take over performance leadership that--for the most part--we've enjoyed for the better part of a decade. We've been able to add cores quite efficiently, and that's led to some substantial performance gains for the PC in general."

This focus on cores has dominated the last decade of CPU development. Prior to the introduction of multicore CPUs, the focus was very much on increasing clock speeds. This gave games and applications an instant performance boost, with very little effort required from developers to take advantage of it. Moore's Law--which states that the number of transistors in a dense integrated circuit doubles roughly every two years--was in full swing in the 90s and early 2000s. In the period from 1994 to 1998 alone, CPU clock speeds rose by a massive 300 per cent. By the mid-2000s, however, clock speed gains had stalled as power consumption and heat soared, with both Intel and AMD fighting the laws of physics. The solution was to add more cores, so that multiple tasks could be executed simultaneously on separate cores, thus increasing performance.
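To make the doubling arithmetic concrete, here is a minimal sketch in Python. The starting transistor count and years are assumptions chosen purely for illustration, not figures for any real chip:

# Illustrative Moore's Law projection: transistor counts doubling
# roughly every two years. The 10-million starting figure is an
# assumption for the example, not a real product's spec.

def project_transistors(start_count, start_year, target_year, doubling_period=2):
    """Project a transistor count forward under a fixed doubling period (years)."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1994, 1998, 2002, 2006):
        count = project_transistors(10_000_000, 1994, year)
        print(f"{year}: ~{count / 1e6:.0f} million transistors")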

The trouble is, unlike increasing clock speed, increasing the number of cores requires developers to change the way their code is written in order to see a performance increase. And, in the case of games development, that's been a slow process.

Games like Battlefield 4 that make use of multiple CPU cores are still the exception, rather than the rule.

"[Multicore CPUs] have required that the software industry come along with us and understand the notion of threading," says Stude. "For gaming, it's been challenging. Threading on gaming is a much more difficult scenario that both us and AMD have experienced. In general, you've got one massive workload thread for everything, and up until now that's been handled by, let's say, the zero core. The rest of the workload, whatever it might be for a particular game, goes off to the other cores. Today, game engine success is a bit hit and miss. You have some games, the typical console games that come over, that don't really push performance at all, and isn't threaded or lightly threaded."

"[Multicore CPUs] have required that the software industry come along with us and understand the notion of threading. For gaming, it's been challenging." - Intel

"The nature of development work for those platforms, especially in the early years, is that you'd get your game running and publish it and you'd rely heavily on the game engines that you as a publisher own, or that you acquire from third parties like Crytek and Epic," Stude continued. "If Epic and its Unreal engine on console don't have a threaded graphics pipeline--which to date they don't--then you're looking at the same issue that you see on the PC, which is a heavily emphasised single-core performance workload, and then everything else that happens like physics and AI happens on the other cores. It's not a completely balanced scenario, because by far the biggest workload is that render pipeline."

The problem has been more pronounced for AMD. Its Bulldozer CPU architecture (on which modified versions of all its recent processors are based) tried to ramp up clock speeds by lengthening the CPU's pipeline at the cost of higher latency (an approach not too dissimilar to Intel's disastrous Prescott Pentium 4), while increasing the core count by sharing resources like the scheduler and floating point unit between pairs of cores, rather than duplicating them as in a standard multicore CPU. Unfortunately for AMD, Bulldozer's high power consumption meant that clock speeds were limited, leaving the CPU dependent on software that made use of those multiple cores to reach acceptable performance. I asked Richard Huddy, AMD's Gaming Scientist and former Intel and Nvidia employee, whether chasing more cores was the right decision. After all, to this day, Intel's Core series of CPUs consistently outperforms AMD's.

"So if you talk to games programmers--there are other markets as well--they have typically found it easy to share their work over two, four cores," says AMD's Huddy. People have changed the way they program for multi-core stuff recently over the last five years to cope with six-eight cores. They understand this number is the kind of thing they need to target. It's actually genuinely difficult to build work-maps of the kind of tasks you have with games to run on something 32 cores or more efficiently."

AMD's Richard Huddy had a hand in creating Direct X, as well as stints at ATI, Intel, and Nvidia.

"The more cores you have, the harder it gets, so there is a practical limit," continued Huddy. "If we produced 1000-core CPUs then people would find it very hard to drive those efficiently. You'll end up with a lot of idle cores at times and it's difficult. From a programmer's point of view it's super-easy to drive one core. So yeah, if we could produce a 100 GHz single-core processor, we'd have a fantastic machine on our hands. But it's mighty difficult to clock up silicon that fast, as we're up against physical laws here, which make it very difficult. There's only so much you can do that ignores the real world, and in the end you need to help programmers understand the kind of constraints they're building to."

"I'd love for us to build a single-core CPU. Truth is, if you built a single-core CPU, that just took all of the power of the CPU and scaled up in the right kind of way, then no programmer would find it difficult to program, but we have to deal with the real world."

The Death of Moore's Law?

The real world is Moore's Law, or rather, the end of it. The death of Moore's Law has been talked about on and off for years, and yet Intel and AMD have continued to see significant performance boosts across their CPU lines. But the upcoming launch of Intel's Broadwell architecture and its die shrink from 22nm to 14nm has seen several delays, prompting many to proclaim the death of Moore's Law once again. Certainly, both companies face a number of technical challenges when working at such small manufacturing processes. Intel, for example, developed its 3D Tri-Gate transistor technology--which essentially gives electrons three times the surface area to travel along--to deal with current leakage at 22nm and beyond.

"For the last decade--which is a strong portion of our existence, the dominant decade in terms of our revenues and unit sales--we were told Moore's law was dead and that the physics wouldn't allow us to continue to make those advances, and we've proven everyone wrong," says Intel's Stude. "I'm a futurist as a hobby, and I've learned a lot being at Intel. The day I started we had introduced the Pentium and even then the conversation was about what was possible from a die shrink perspective. I'm not ever going to believe in my mind that the pace of innovation will outstrip the human brain."

"I just don't subscribe the concept that there isn't a better way. I think that evidence of the last 50 years would argue that we've got a long way to go on silicon engineering. What we think is possible may completely be eclipsed tomorrow if we find a new element or a new process that would just flip everything on its head. I'm not going to play the Moore's Law is dead game, because I don't think it will be dead. Maybe the timeline slows down, but I just can't subscribe it dying based on what I've seen at my time at Intel."

Intel's "tick-tock" strategy has helped the company stick to Moore's Law, but how just long can it last?

AMD's Huddy shares a similar viewpoint: "Moore's Law looks alive and well, doesn't it? It's always five years from dying. For all practical purposes, I expect us to live on something very much like Moore's Law up until 2020. Our biggest problem is feeding the beast, it's getting memory bandwidth into these designs. I want the manufacturers of DRAM to just keep up with us, and give us not only the higher density--and they do a spectacular job of giving us more memory--but also make that memory work faster. That's a real problem, and if we could just get a lot of super fast memory and not pay the price of that wretched real world physics that gets in the way all the time. I blame them, it's all down to DRAM!"

Better Integrated Graphics

While cores have dominated CPU development over the last decade, both AMD and Intel have made great strides in bringing other parts of a system onto the CPU to improve performance and decrease system size, most notably with graphics. Until recently, Intel's integrated graphics were considered a poor choice for gaming, with performance that was only really good for rendering the 2D visuals of an operating system, rather than sophisticated 3D graphics. But this has changed of late. While Intel's Iris Pro integrated graphics can't compete with a separate GPU, they are able to run many games at acceptable frame rates and resolutions. There have even been some neat small-form-factor gaming systems designed around Iris Pro, such as Gigabyte's Brix Pro.

But when it comes to integrated graphics, AMD is far and away the performance leader. Its purchase of ATI in 2006--despite some integration issues at the time--has given the company quite the performance lead; AMD's APU range of CPUs with built-in Radeon graphics are the best choice for building a small gaming PC without a discrete GPU. It might be just a small win on the CPU side, but it's one that has had a significant impact on the company's focus.

"We took a decision 18 months ago to focus heavily on graphics IP," says Darren Grasby, AMD's VP of EMEA. "Driving the APU first, first with Llano, and fast forward to where we are today with Kaveri. Kaveri is the most complex APU ever built, and if you look at the graphics performance within that, you're not going to get the high-end gamers with that. But if you look at mainstream and even performance gaming, an A10 Kaveri is your product to get in there. And you don't have to go spend $1500 or $2000 dollars on a very high-spec gaming rig, that quite frankly, a mainstream or performance gamer isn't going to be using to its full capability."

"If you think about it from a gaming aspect, what are gamers looking for? They're looking for the compute power from the graphics card. The CPU almost becomes secondary to it in my mind." - AMD

"So you're right on the ‘halo effect’ on the CPU side," continued Grasby. "Obviously we can't talk about forward-looking roadmaps, but it's leaning into where the graphics IP is, and where that broader market is, and where the real revenue opportunities sit within that. That's why, if you look at Kaveri, if you look at the mass market and gaming market you're getting right up there. Then you start to get into 295 X2, and then you're talking about where the gamers are. If you think about it from a gaming aspect, what are gamers looking for? They're looking for the compute power from the graphics card. The CPU almost becomes secondary to it in my mind."

The Growing Threat of ARM and Mobile

While Intel continues to lead on pure CPU performance and AMD leads on integrated graphics, both companies have stumbled when it comes to mobile, which is a problem as PC sales continue to decline. All-in-one system-on-a-chip designs licensed from the UK's ARM Holdings power the vast majority of the world's mobile devices--and that doesn't just mean cellphones and tablets; Sony's PlayStation Vita is built on a quad-core ARM chip. Intel has tried to stay the course with x86, creating the Atom line of processors specifically for low-power devices like phones and tablets. They haven't exactly set the world on fire, though: Intel's Mobile and Communications group lost over $900 million earlier this year.

AMD, meanwhile, took a different path and signed an ARM license to begin developing its own ARM processors. The question is--with the vast majority of the company's experience being in the x86 architecture--why?

Phones and tablets like the Nvidia Shield mostly make use of ARM processors, rather than the traditional x86-based designs that AMD and Intel produce.

"Did you see Intel's earning results yesterday? [Note: this interview took place on July 17, 2014] Just go and have a look at their losses on mobile division," says AMD's Grasby. "I would suggest at some stage their shareholders are going to have a challenge around it. I can’t remember the exact number, it’s on public record, but I think it was 1.1 billion dollars they lost on 80 million dollars of turnover. Our clients suggest that isn’t the best strategy. I encourage them to keep doing it, because if they keep losing that amount of money, it’s definitely not good...the primary reason why we signed the ARM license was because two years ago we bought a company called SeaMicro. We were basically after its Freedom Fabric [storage servers], and that’s why we signed the ARM licence, to go after that dense, power server opportunity that’s out there. It’s a huge opportunity."

"As soon as we got the ARM 64-bit license, other opportunities opened up on the side. Think embedded, for example. Embedded from an AMD perspective had always been an X86 Play. Just to give you an idea, ARM and X86 are a nine to ten billion dollar business. Take ARM out of that it comes to around four to five billion dollars. It’s to exercise the opportunity."

PC Market Decline

Despite AMD's efforts, though, its ARM strategy and planned turnaround haven't gone entirely to plan. The company posted a $36 million net loss in its recent financials, and predicted that its games console business with Sony, Microsoft, and Nintendo--which, to date, has been one of its biggest successes--would peak in September. Shares plummeted by 18 percent after the announcement. The still-declining PC market means both Intel and AMD are looking for ways to expand beyond the desktop, but the companies maintain that their CPU lineups, and in particular their CPUs aimed at gamers and overclockers, remain an important part of what they do.

Despite a decline in recent years, overclocking is still alive and well.

"The overclocker market certainly is relevant," says Intel's Stude. "Every time we come out with a part there's a fraction of a fraction of people that are the utmost enthusiasts. They care about every last aspect of that processor and they want to want to push it to the limits. They are tinkerers, they don't mind buying a handful of processors to blow 'em up just to see what they can do, and to make their own living, be it working in Taiwan for the ODMs who make motherboards, or be it in other capacities in the media to submit their opinions on Intel's top end parts."

"We love the boutique nature of it," continued Stude, "because the people in that seat typically have very interesting compute perspectives that influence the decisions that others make. So, if you're very overclockable, you have a very influential position...so we do the best we can to feed this community our best story and we'll continue to that."

While there's no doubt AMD CPUs offer excellent value for money (we used one to great effect in our budget PC build), they still lag behind Intel when it comes to outright performance and performance per watt; to stay in the PC market, AMD has a much tougher job ahead of it than its rival.

"From an engineering perspective, performance per watt becomes the limiting factor in a lot of situations so there's no doubt that we need to do a better job," says AMD"s Huddy. "It's very clear that Intel and Nvidia, and everyone that competes in the silicon market has to be more aware of this. If you go back 10 and in particular 20 years ago, performance per watt, wasn't a big issue, but it increasingly is, and we aim to do better. I have absolutely no doubt about that. There's a lot of attention being paid to that. There are limits over how much we control our own destiny, but particularly for us where we use companies such as TSMC as others do, then those companies work with the same constraints as us and we should be able to just match them."

"From an engineering perspective, performance per watt becomes the limiting factor in a lot of situations so there's no doubt that we need to do a better job." - AMD

AMD Bets On Mantle

The future for AMD may lie in more than just hardware too. Mantle--its competitor API to OpenGL and Direct X--allows console-like low-level access to the CPU and GPU, and it's clear from speaking to the company that it has a lot of hopes pinned on the technology, even if Microsoft's upcoming Direct X 12 promises to do something very similar.

"It's very clear people have seen there's an artificial limitation that really needs to be fixed, and it's not just about giving you more gigahertz on your CPU," says AMD's Huddy. "We can be extremely proud of Mantle, getting the CPU out of the way when there was an artificial bottleneck. There's no doubt that people will use the extra CPU horsepower for good stuff, and we're seeing that in the demos that we're already able to show. However, let's not get hung up on gigahertz, sometimes it's smarts that get you there, and if you're looking for the fastest throughput API on the planet, then you'd have to say it is Mantle, and you'd have to say 'okay, now I get why AMD is leading the way', don't just count the CPU gigahertz, but look at the technology innovation that we're coming up with."

"Amusingly, and I don't know how relevant it is, you can make your own decision on that, for me it's entertaining: one of the companies that approached us [about Mantle] was Intel, and we said to Intel, 'You know what, can you give us some time, to fully stabilise this because this has to be future proof, but we'll publish the API spec before the end of the year.’ And if Intel want to do their own Mantle driver and want to contribute to that they can build their own. We're trying to build a better future."

For more on AMD's Mantle, and why the company thinks Nvidia is doing "something exceedingly worrisome" with its Gameworks technology, check back later in the week for our look at the developing war between PC graphics' most prolific companies.


Mark Walton

Mark is a senior staff writer based out of the UK, the home of heavy metal and superior chocolate.


Comments

PC market is in decline overall. But not the PC GAMING market, as that is on the rise and is larger than consoles.

PCs sell less now because non-gamers will opt for tablets or smartphones for social and some work-related stuff. It's a statistic that should be ignored as it doesn't affect the gaming market.


    PC market declining? hahahahah tell these jokes to someone else, 700 million total pc users, about 300+ million pc gamers, and about 8-9 million steam users online 24/7.... start brainwashing someone else... Consoles are but limited PC, people with boxed minds buy it or those who just want to play exclusives....


<< LINK REMOVED >> Yes, the PC (desktop) market is declining. There needs to be a DEMAND for new CPUs/graphics cards/memory, etc. Hell, the sound card has been replaced by the video card since technology has advanced enough that it's no longer necessary. Eventually, graphics cards will be replaced by APUs, except for serious gamers. With all the downsizing, the big metal boxes we know and love will be less common, since it would make more business sense to go mobile, reducing the R&D for PCs.


    << LINK REMOVED >> Hello dear fanboy.


    << LINK REMOVED >> not pc gaming, pc market. people buy more tablets and phones than laptops, and more laptops than desktops


    For some reason i always choose Intel/Nvidia


    I'm still using my AMD phenom 2 x4 955 for over 5 years and it's still running new games great at hi res. The only upgrade I've made was to buy an amd 7750 card. I doubt I'll need to rebuild my pc anytime soon. I think the pc market will continue to go down the tubes regardless of what intel or amd does since people today would rather play stupid games on their 4 inch phones instead of a 46 inch screen hooked up to an avr. Eventually both companies are going to say F it and switch over to the mobile market while the PC market will be niche and business only.


    With as much complaining as I do about the quality of the articles on this site, I think it only right that I applaud the good examples. Good read, Mark, thank you.


    How about a DUAL INTEGER CORE on a BIGGER die fed off by a Bulldozer/Piledriver type module? Like a Pentium G3258 cores built into the FX8350's 4 modules.


    Intel is going down hill, their Baytrail is a real POS and their Cherry trail, being 14nm gate size, is hitting a brick wall due to low yield from the fab. The only edge Intel has for now, is their fab, they are only surviving because of 1. The only 10nm Fab in the world, 2. server market.


    the icore's have been around forever it seems (in cpu years). I'm actually surprised we don't see something like a full scale desktop with arm processors.


    << LINK REMOVED >> AMD has gone that direction with servers


    Just wanted to drop by to say this is excellent journalism. Well sourced, well thought out article that is too rare these days not just on Gamespot but the internet abroad. Great work Mark!


    I have an 8 Core Vishera OCed to 4.60 ( i was lazy to juice it up to it's limit) from the stock 3.50 ! Performance wise Intel is still beating my ass to a pulp but there is a huge difference to that : People (like me) that bet on value for money 8core system will rip their rewards post Xmas when good ol' Nvidia and ATI will be FORCED to give us drivers " optimized for 8 cores" and then you'll see the huge gap between 8 cores of the cheaper piledriver based than the expensive 4core i7 ( current gen) Pretty much what happened with the jump from 2 to 4 cores when ppl were mocking the mates that were investing to 4cores because dual cores were dishing out more frames (due to games being optimized for those rather than 4cores) and overnight optimized drivers came out and they were biting their lips!

    Consoles wear 8cores and let's face it , 80% of games are heavily manufactured based on consoles or even worse PORTED from them! It's only a matter of time before full optimization for 8 cores come and then i will smile grandly for my trusty AMD !
    Having that said the day that Intel 8 core releases (and if they make it accessible value-wise which i doubt) then we will be back to the same situation: Intel beating the crap out of AMD on CPUs


The growth in performance from ARM and Imagination Technologies' chips is so much faster than that of Intel or AMD. Both companies claim the next generation of their technology coming next year will be on par with the PlayStation 4 (my dad works at ImgTech as a senior design engineer).

    And so, quite soon AMD and Intel will be pushed out of the market by ARM/Imagination due to the massive increase in efficiency that they provide in terms of space, power, heat dissipation etc.

    In other words, computers are getting smaller and cooler and Intel are way behind.


    We are more software limited than hardware


    << LINK REMOVED >> that won't stop em' from making you feel like you need to go out and buy new hardware for the latest releases. (I'm still using a q6600 desktop for my design work).


    << LINK REMOVED >>

    As the author stated, and I said. What the consumer doesn't know. This is how business works ..

    Free Market ?! Hahahaaa!!


    << LINK REMOVED >> Exactly, Moore lived in a time when technological advancements meant saving millions more lives; nowadays, the only thing that matters is padding the bottom line.

    Just like the constitution needs serious work, so do our dogmatic rules that frame the human condition as constant.


    I know the author knows this but , Intel did not win the cpu war . Intel won the dirty under handed business war and set back tech and cpu tech in general for decades. If the so called Free Market truly was free this would be a different story.

I won't support or buy Intel; they are the perfect picture of the person who couldn't compete, and cheated. On the surface these people or companies look like smart and savvy business winners, but the truth is much more scummy than 99% of people know. The real story is Intel was really good at strong-arming the retail chain and partners, and cutting back-alley deals, rather than actually competing on fair ground.

    Sad fact,most successful people or companies don't get to the top because they were smarter or better. Business and life are dirty, and you don't get to the top Hallmark/ABC special style. You get there by being a son of a b*%ch!


    << LINK REMOVED >> So then basically they won the cpu war.. Cry some more noob while us intel people will always have the better cpu. ;)


    << LINK REMOVED >><< LINK REMOVED >> they didn't win the CPU war, they won the business war, dumb schitz. Learn to read.


    << LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> i win, no moor work for me.


...It's funny how the article starts with saying Intel this and that. But when you think about it, it's a debatable situation. Intel would win this hands down if they kept their prices in a reasonable range. AMD, on the other hand, thinks smarter. Their CPUs are a bit slower, but with that the prices are incredibly affordable. Take the FX-6300 6 core: it's overclocked to 4.1GHz right out of the box for $100 bucks, and it's a Black Edition so you can continue feeding the beast. With all that, you can enjoy doing everything an Intel processor does, be it games or apps. Overall Intel and AMD need to put their heads together to keep the PC genre alive and kicking for the years to come. But a little competition never hurt no one and makes us the consumers the winners.


    << LINK REMOVED >> It's kinda bugged me for ages that AMD claims 6-8 cores when they're not true cores. Intel could claim 8 cores just as easily instead of hyperthreading.

    "Its Bulldozer CPU architecture (which all of its recent processors are based on modified versions of) tried to both ramp up clock speeds by lengthening the CPU’s pipeline, increasing latency (an approach not too dissimilar to the disastrous << LINK REMOVED >> from Intel), and by increasing the number of cores by sharing resources like the scheduler and floating point unit, rather than by duplicating them like in a standard multicore CPU"


    Make the CPU hexagon shaped instead of square. Hexagons are futuristic and advanced.

    I know this because video games.


    << LINK REMOVED >> I cannot argue with your virtual logic and so agree!


    << LINK REMOVED >> they would have to go in all three dimensions if they decided to take that route, like a crystal, whats up?


CPUs are not about who produces the fastest processor; anyone can produce a monster for $10,000 that will beat anything

it's also not only about power consumption: an i3 and an Athlon 740K, and likewise an i5 and an FX-6300, have about the same performance and TDP

    the real winner is the one that is able to sell the fastest processor for the lowest price, this means they have the lead on that technology and are able to produce with less cost

    AMD dominates low and mid-end market, intel gets the high-end but not by much


    << LINK REMOVED >> well the 4770k and 4790k are not that pricey and bang for your buck are not bad either so yeah things are close but intel still leads but as you said not by much.

    but then things are getting to the point where we need a complete flip on its head innovation in order to regain the jumps of the late 90's early 2000's


    How about adding that new IBM brain chip? Maybe games could have not-horrid ai.


    << LINK REMOVED >> won't be in the market for the next 10 years.


    << LINK REMOVED >> idk, a few user created mods I've seen make NPCs rather difficult, rather intelligent.


    I have to give everyone who posted here kudos for being level headed. It's such a good article by Gamespot and takes us PC enthusiast away from the long drawn out console wars on this site.

    Great post everyone nice to see some intellectual debates than pointless dribble.


    Now this is cool chip technology (I'm a brain nerd): << LINK REMOVED >>

    Thanks IBM :-)


<< LINK REMOVED >> It is very cool. I might be pursuing a masters in neural engineering once I finish up my undergrad as I have always been interested in how the brain works and related biochem too. 'Smart drugs' do seem to be shaping up better as well, at least for healing damaged brains. Our brains are essentially electrical circuits that are driven by the electric (action) potential created by calcium/potassium/sodium/chloride transfer. It applies voltage in the same way (on or off) to interact at synapses. Charge builds up on either the inside or outside of cell membranes to drive the interaction in the way it wants to. Ions are at the heart of our brains, the same way the transistors in our electronics are driven by the ions within the doped substrates. Very simplified, but I think that is the basic concept of how they work.

    I looked at your article, it was interesting. The 70mW power consumption comes from the power that our brain cells (simplified) generate in real life, about 60-80mW. It is still a silicon based chip however, but it is a good step into pushing the most we can out of the technology while we are working on completely new ways.

    Look up DNA computation and quantum computing as well, they both have the power to do computations fully parallel (as they are 3D concepts/structures). I believe they actually used salmon DNA integrated with a silicon chip so far. This means that they will be magnitudes more powerful and even smaller than anything we have today. It can also easier make integration with say prosthetics or implants a lot better. Most importantly, they will be powerful enough to perhaps begin to model how our brain works. Even our best supercomputers can only run a basic simulation for a very short period of time, from the last I had read about it.


    good article i dunno why i choose amd over intel same as i always buy nvidia gfx cards one of those things i guess ive stuck with. my last intel pc rig i had was a pentium 2 single core 0.60 mhz it was the dogs danglies back in the day :)


    @Gegglington thats funny that you would choose and amd cpu but not an amd gpu one would think they would work well together, and a bit less well when not paired up but i guess they don't ;)


In a few days Intel is going to announce its next-generation CPUs; new standards incoming.

    Great article good job


AMD boasts HSA in their Kaveri APUs.

But the thing is, software is not optimized to take advantage of that yet.


    They say "the biggest problem is feeding the beast" (talking about RAM) but end the discussion only a few lines later. Would've liked to learn more about this factor. For example, whats the difference between faster RAM and simply more ram? Is it the same silicon heat restraints keeping RAM from getting faster?


    << LINK REMOVED >> my guess is not yet as gddr5 ram is in the 5000+ mhz range and ddr3 is in the 3000+ mhz range so i'm guessing it is a problem with getting a memory controller on the die that small that can handle it but then i don't know that much about this stuff and if someone knows for sure i would like to know


    << LINK REMOVED >><< LINK REMOVED >> Well its too complicated to really give you a good answer here but I'll try. Memory is the ability to store data as bits (1s or 0s; ON or OFF). Bits can be moved(shifted) in either serial (one after another) or parallel methods (a group at a time). Memory is also not just the storage of the bits, but the ability to read/write the bits using decoders/refresh counters/data selectors/registers/buffers/ect. depending on your type of memory. Your CPU has a clock rate, which is what times the transfer of bits around; it is the brain of your computer while the motherboard is the body. All your components are tied to this in a sense, even if they are faster the data is not read until the proper cycle, that is partly why faster CPUs can perform more calculations. I'd be writing a textbook if I went into what everything was, but they are things you can look up if you are interested. My Digital Fundamentals by Floyd book served me pretty well in my courses if you can find it online or in a library.

It is mainly the architecture of how memory works; there are different types of RAM that work in different ways. There is asynchronous and synchronous static RAM, and then there is dynamic RAM with many subsets, there is solid state memory, there is High Density RAM. These use either transistors/capacitors or flip-flops (a Boolean logic gate setup). There are even newer developments such as purely silicon based RAM or even organic DNA based memory. Can't go into it here, and I wouldn't even be able to necessarily tell you everything so these are just things to try to look up on your own. If you can find the logic diagrams and schematics it might help. Don't worry you don't need to be an engineer to get at least a basic understanding.

    Anyways, more RAM only gives you the ability to store more bits, while faster RAM means that read/write speeds might be higher or it has a more efficient architecture; such as having a cache that stores frequently accessed data or tries to generate future dependencies/calculations based upon what is happening at the moment (pre-fetching). Sure you can just keep stacking more RAM on, but it depends on 1) if your OS will address that much 2) if you program is coded to request that much from the OS and 3) the physical limitations in the form of clock timing, internal latency, and the speed of the actual wiring/components 4) any others I'm forgetting right now. Just shoving in more really accomplishes nothing tangible at this point; so they are working on making faster and more efficient designs.

Your PC is most likely using DDR SDRAM, double data rate synchronous dynamic random access memory.

The double data rate part means that it transfers on both the rising and falling edge of the clock cycle (picture a line that starts at 0 and goes to 1, rises up, goes over a unit, then down and continues the line; this pattern is repeated in the same frequency as your CPU; this is the clock cycle). This means it addresses memory two times per clock cycle.

    The synchronous part means that it is timed to read/write at the same cycles as your CPU clock. This means you don't have to have extra buffers to feed data in the correct order and at the time that it will actually be read.

The dynamic part means that this form uses a transistor and a capacitor sequence to store electrical charge. The transistor acts as a gate while the capacitor is like a little bucket to hold an electron. If it is 'full' it stores a '1', if it is 'empty' it stores a '0'. This charge decreases over time (milliseconds), so it must be refreshed every once in a while to keep the stored bit value correct. This means you need your CPU or a memory controller in the chip to keep recharging every capacitor that holds a '1'.

    Some have features such as error correction which takes extra space to check values essentially.

So in the end, silicon is partly responsible for the speeds (as it is in the transistors), but there is a lot more going on as well that determines efficiency. I say efficiency over speed, because more speed in computers isn't necessarily beneficial on its own. It also depends on the other components and coding (OS and micro-coding) as well. This is why people talk of CPU bottleneck (overblown on this forum). Once we move to a new form of RAM, things will get a lot faster, although motherboards will have to change as well. Sorry if this is a little disjointed or not fully correct, it was done quickly.

<< LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> Put simply, a "CPU" (central processing unit) requires instructions and data to work with. Both of which need to be "read" into the CPU, and at some point the data needs to be written out and either used or stored somewhere.

    Since almost forever a CPU can execute instructions in less time than it takes to read both the instructions and the data and write the result back out.

    All the CPU vendors have gone to great lengths in the past to make their parts "process" information as quickly as possible however getting that information to/from your screen, USB port, SDD/HDD or even the (mostly) computers main memory has become problematic.

Modern computers have ginormous amounts of Random Access Memory (RAM). Much of it is used to cache information coming to/from even slower devices such as HDDs and even modern SSDs; however, even with 16, 32, or even 64GB of RAM for storing entire applications and all the data, a "bottleneck" will always form at the weakest link--namely, in modern times, between the CPU and main memory.

When DDR4 is pushed into the mainstream it will help a bit, but unless CPU makers start including ginormous GDDR5/GDDR6 on-die "cache" to help feed data and instructions to the CPUs, there's not much sense in making the CPUs faster.


    << LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> whoa that is a lot to read and i did read all of it. that was very insightful and addressed, many of the things i did not know what they meant some i did know. as i am interested in this stuff as a hobby i will look into what you suggested. my thanks

    i myself have been wrestling with whether i should keep my 2400mhz ram at its stock speed or drop it to 1866 and get a better cas latency, it is for mostly gaming and i got it before i found out the 1600mhz is ideal for gaming. latter i saw some benchmarks the suggested that 1866 was better so i thought that 1866 with a lower cas latency would be better since you have way more knowledge in this field then me maybe you can tell me if what i have found is correct


<< LINK REMOVED >><< LINK REMOVED >> I can't really say how much that would actually help. The lower latency is usually desirable, but it is always a crapshoot as to how much tangible benefit it will have compared to the higher speed. With latency, you are probably only talking nanoseconds or less here. If it is the slowest component of your PC by a noticeable margin, then sure, it will definitely help, otherwise it probably won't make a difference. I'm guessing that increasing RAM frequency and/or decreasing latency really won't have a huge impact for your overall system anyways at this point.

Say your 2400mhz has a cas 9 latency, that means it operates at 2400 million cycles per second, while the cas 9 means that it takes 9 of those cycles before each operation is started after one is completed. That gives you 9/2400, or 0.00375ms of lag.

You go down to 1866mhz with, say, a cas 6 latency. That gives you 6/1866, or 0.00322. Smaller by 0.00053ms, or a 14.13% decrease from 0.00375ms. So, this would be an increase in performance (the same arithmetic is sketched in code below). You have to be careful tightening up your timings after underclocking, it could result in instability or not being able to boot. This can be fixed, but messing with voltages has the ability to fry your hardware, it has small ranges to work with dealing with transistors/heat tolerances and all. In fact your hardware most likely goes bad from heat damage that irreversibly harms the silicon/doping or even melts the casings/connections if you are unlucky enough.

    Plug in your equipment into the fractions and you can see what you end up with. I'd say if you have the time just drop it down and test the results firsthand, that is usually the best way to do things. You can design things and calculate them theoretically, but real world application always throws a wrench into it.

Would I personally do it? No, it probably won't make a difference for the effort you're putting into it. If you want to, by all means, it shouldn't (though there's always a chance) hurt anything. I haven't overclocked my system, it works well enough for me since I was able to find good parts for pretty cheap, so I am only answering based on what I have had some experience with from an engineering perspective. I'd look on an overclocking or gaming forum too if you feel uncomfortable about it.
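For anyone who wants to rerun the latency comparison from the exchange above, here is a small Python sketch using the standard first-word-latency approximation (CAS latency is counted in memory-clock cycles, which tick at half the quoted transfer rate). The two kits are the hypothetical ones from the comment, not recommendations:

# First-word latency approximation for DDR memory:
#   latency_ns = 2000 * CAS / data_rate_in_MTs
# (the memory clock runs at half the quoted transfer rate, hence 2000).
# The two example kits mirror the hypothetical ones discussed above.

def first_word_latency_ns(data_rate_mts, cas_cycles):
    return 2000.0 * cas_cycles / data_rate_mts

if __name__ == "__main__":
    kits = {"DDR3-2400 CL9": (2400, 9), "DDR3-1866 CL6": (1866, 6)}
    latencies = {name: first_word_latency_ns(rate, cas) for name, (rate, cas) in kits.items()}
    for name, ns in latencies.items():
        print(f"{name}: {ns:.2f} ns first-word latency")
    slower, tighter = latencies["DDR3-2400 CL9"], latencies["DDR3-1866 CL6"]
    print(f"The 1866 CL6 kit is {100 * (slower - tighter) / slower:.1f}% lower latency")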



    thanks again.

    i bought a maximus 6 formula mobo a 4770k i'm going to get a liquid cooler, the reason i got this stuff is was to learn how to OC with gear that would make it as safe and easy as it can be, baby steps right.

    i have spent months reading about how to do it, what can go wrong and why, and the only reason i have not tried it yet is a massive expense cleaned me out of money, so thing are tight right now, and i can't get the new cooler which is one of the more important parts as far as safety goes.

    now that i see the math that is not much of a jump but that said its the total of all the tweaks that may add up to a more noticeable improvement like get the ram running better get the cpu turn up to its safe and stable max, which from what i have read will make the most out of the ram tweaks, then see if my gpu can go a bit higher (i know that there is not much room to play with in gpus) now when you take all that into consideration it should add up to a marked improvement

    won't know till i try


    I believe the future of gaming is integrated graphics. The Iris Pro 5200 for Intel was a big step in the right direction. It is capable of 1080p gaming in a lot of cases, but isn't quite there yet. Hopefully the Iris Pro 6200 will see a 30%-40% boost in performance. That would make it capable to play a game like Crysis 3 at a 1080p resolution. If you've never built a PC before it can be a daunting task to select the proper parts. AMD and Intel need to up their igpu to a mid level range so people can stop worrying about GPU's and such and have the ability to just buy a Desktop. A strong igpu in a pre built desktop for a reasonable price could boost sales imo.


<< LINK REMOVED >> The GPU is about the only part gamers look at when choosing a prebuilt; no one is annoyed by looking at a graphics card, it's part of life, and shoving two huge blocks of fans and computer chips together in SLI or Crossfire is part of the fun when building. If everything is integrated, there is no more choice--well, just like a prebuilt--but building your PC is the entry test to the Master Race.


    << LINK REMOVED >> That is a tall order. Games are pushing a lot of pixels around now, just wait till 4K starts to hit mainstream. The dedicated GPU allows a much more doable process as it has its own bank of fast and dedicated memory and the ability to perform calculations internally parallel and independent of the CPU clock. This frees up a lot of your CPU for the number crunching, while letting the GPU handle rendering and maybe some physics. I really don't see integrated graphics ever being anything past something just to let people be able to use a desktop for non-gaming purposes. Even if we could reach GPU levels now on a small integrated chip, it would still never be as good as what we could achieve in a GPU.


I personally find it very amusing and interesting to consider the whole situation of Intel's and AMD's CPU development and what paths they might choose in the future. Kinda funny to think of it: when I got my first own PC, it had an AMD Athlon 64 3200+ CPU, and most of the time I used this CPU for games like UT2004 and Doom 3. However, today I've got an Intel Core i5 3450, and to compare these CPUs is pure madness somehow. I mean, the time difference is only something like 8 years (release), but if you consider the technology, they achieved so very much and they've done a good job. Of course, there are many other CPUs on the market which are much better and faster than my own one, but I'm happy with it.

What I wonder about is whether developers can get better CPU optimization done. I don't play all current games, not even a handful, but it is kinda interesting to see how an application uses your CPU cores. However at this point, you also have to pay attention to the fact that it is not a piece of cake to get this done. I remember talking about this at school and we only scratched the tip of the iceberg. Interesting indeed, but also pretty complicated, and it is not something you can force. It takes time. Looking forward to seeing how this thing turns out in the case of 6-core and even 8-core CPUs. Having a 6 or 8 core CPU is indeed very much fun, but does any game really require this? I'm not really sure about this!

    In the end, I'm really looking forward to the future. Let's see how DX12 turns out, more use of Mantle, which next step CPU development might take, etc. Despite that, I never really got the point of this Nvidia Gameworks business and why AMD is worried about it to be something negative. How many games use this Nvidia Gameworks? Not that many, aye.


    so about ten years ago intel was playing around with using diamonds instead of silicon because it can endure 2-4 times the heat and get speeds in the range of 81ghz.

    now here we are 10 years later and still no diamond based cpus if they wish to keep moores law intact then should they not be pursuing this.

    << LINK REMOVED >> They are very expensive and the diamond has to be very very very chemically pure so it can be doped correctly to create a p-type substrate. The issue is also creating an n-type substrate to form the transistors, as you need both types of doping to create the potential. You also have to be careful with the radiant heat frying everything else on your motherboard. We have pretty much pushed our silicon MOSFETs to their physical limits so they are working more with stacking and better architecture.

    Not sure how well we can create synthetic diamonds, but if that improves, this might become more commonplace, along with diamond coatings to create stronger and lower friction parts.


    @DanGleeSack aren't artificial diamonds too perfect, a discerning feature, and dead giveaway for a jeweler.

    and they are made en masse, many industries require either pieces, or dust of diamond, for cutting and drilling tools/equipment

    but the heat wash issue seems like a damning problem


    << LINK REMOVED >> Yeah I believe that is an issue for cosmetic reasons, it looks too artificial, although now even some jewelers are having a hard time because they can approximate the natural conditions better and use different components.

    You would still need the diamond dust I'm sure for the transistors, but that shouldn't be too big of a problem. Synthetic diamonds are softer than real diamond, so a diamond drill and/or heat should be able to grind them down no problem while keeping impurities to a minimum.

    Again the real issue comes down to the composition of the material; it has to be almost an 100% pure element so that you can reliably dope (introduce impurities, in the form of another pure element, to allow for free and bound electrons) the substrates. It is that potential that allows current to flow with applied voltage. Unaccounted for impurities can potentially harm sensitive electronics that must operate within a finite range of voltage/current/heat.


    @DanGleeSack @suppaphly42 so i looked up all of that before i made that post and you are right on all points but one.

    there are a few companies who make diamonds and with the right infrastructure they could be on par in price with silicon now i don't know how much adding the dope will cost and that might be the problem.

    thank you for responding


<< LINK REMOVED >> No problem, I'm studying for an engineering and physics dual degree, so figured I'd give my 2 cents. Had a class on transistors/semi-conductor physics and it really only went into silicon based components due to time. That's cool what you found, maybe there is good hope in new technology coming public.


I just want IBM CPUs to join the home PC market; then we'll have actual competition and cheaper prices. Currently it is like Intel and AMD decided not to really compete against each other (one takes the high ground, the other takes the low ground)...


    To kill AMD is to kill the console market since every gaming console carries an AMD GPU.


    << LINK REMOVED >> Well, AMD fits very well into weak systems :P


    << LINK REMOVED >><< LINK REMOVED >> That's a pretty ignorant statement from someone who probably has to pay someone to fix his computer.


    << LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> How is that ignorant? They made weaker CPUs and they are in (weak) consoles (compared to Pcs)? Oh and lol, you are totally wrong, I actually studied Computer Science. Nice try though :)


<< LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> As an engineer I tell you, you know the reality, and DirectX is the real problem. I hope Mantle evolves and substitutes for the old DirectX. Or that programmers improve at multicore programming, which would be better.


<< LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> Because AMD isn't weak, Intel Lemming. Did you sleep through computer science class or did you study at Intel University? The only thing you have shown is that you are narrow-minded and clearly don't know as much as you would like people to think you do. Let me guess, anytime your computer has a problem the solution is to format C: right? LOL


    << LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> lol? What's up with you? That Intel is stronger than AMD right now is a fact and is also stated like this in this very article. So yeah, when it comes to power, AMD < Intel and since Consoles (with AMD) < PC I thought it would be funny, hence the :P smiley.

I do like how you try to insult me though. I don't know why you need to attack me personally that much, but I guess this is due to your lack of actual arguments? I really do not need to prove my knowledge to some random dude on the internet :)


@ArchoNils2 @klugenbeel @mjapox You should probably PROVE you have a grasp of the English language first. Also, if you want to use the writer's (Mark Walton) article as facts, you should probably check a few things. It's fairly interesting that someone who studied music production at Buckinghamshire New University is somehow an expert on computer processing chipsets. Next time your computer breaks, are you bringing it to the experts at Geek Squad?


    << LINK REMOVED >> What are you talking about. AMD is clearly behind at the moment and it's hurting the CPU market as a whole. Intel have not been pushed in years and release incremental upgrades each year because the performance gap is so large back to AMD.

    AMD make very cheap parts that were ideally suited to the console market. I mean the consoles are using 8 core AMD CPU's running at 1.6ghz (PS4) and 1.75ghz (X1). AMD even have 5Ghz FX-9590 8 Core CPU on the market at the moment which is clearly outperformed by the cheaper i5 Intel CPUs.

It's not rocket science to see AMD are lagging behind at the moment and that consoles used extremely cheap AMD parts. PCs are my hobby; I love building them, overclocking them, and of course gaming on them.

    If you want some proper numbers head over to guru3d and check out some benchmarks, you don't need a computer science degree to do that do you ???


<< LINK REMOVED >><< LINK REMOVED >> just wanted to say that... that's insane, there's currently no i5 processor that outperforms the FX-9590. The best i5 Intel has on the market, the 4690/4690K, is like 30-32% weaker than the 9590; maybe in single-threaded performance the i5-4690 could be a little better. There are plenty of strong i7 and Xeon models which obviously are better than the 9590--some very high-end Xeon E5s are almost twice as powerful as the FX-9590--but you have to be joking if you think the best i5 at the moment is better than AMD's 9590; the best i5 is probably not even better in performance than an FX-8320...


    << LINK REMOVED >><< LINK REMOVED >><< LINK REMOVED >> That's not even nearly true, even the i5 4670 beats the FX-9590 easily. Not sure where you're getting your information from.

    << LINK REMOVED >>

    << LINK REMOVED >>


    << LINK REMOVED >> Well I am sorry, but after swiss german, german and french, english was the 4th language I learnt. So yeah, I might not be as good as a native speaker, but I'm at least good enough that you understand me.

I really don't get why you are so defensive over AMD? Just check any recent high-tier CPU benchmark. Go over to tomshardware and check some charts and show me any where AMD is in the top 5 spots. Facts are against you and all you can do is insult me. So this is where I leave the conversation. If all you can do is insult me (without even knowing me), I'm done replying to your responses.


<< LINK REMOVED >><< LINK REMOVED >> Yes, AMD is the weaker processor, but to just call it weak pretty much shows your ignorance on the subject. A Lamborghini is weak compared to a Bugatti, but is a Lamborghini really weak? We call this a reality check.


    Where did I call it weak? I called the consoles weak. And I just thought it's funny that the weaker major player on CPUs works with the weaker major player on gaming systems. How anyone can get crazy over a funny comment with a :p smiley is beyond me




    << LINK REMOVED >> AMD always sucked, they still suck, they will continue to suck.


    << LINK REMOVED >>

    Geek Squad Experts... lol...I had a good laugh at that gem of a joke....



1: Gametech should have a permanent spot on the GameSpot homepage.

2: You guys should do reviews of stuff that's really hard to come to a conclusion on: gaming mice, keyboards, headsets.


I know this might sound stupid, but why don't they just increase the size of the CPU with a new socket? Or build them on more than one layer, if the physical restrictions in going smaller are the problem?


    @ArchoNils2 As far as I know, making the die of the cpu larger also means longer physical pipelines, which has a negative effect on speed. But even more importantly: larger dies / cpu's means you'll be able to cut less dies out of the wafer, thus making the cpu more expensive to produce. Also it increases the number of faulty dies, once again making the cpu more expensive to produce.

    As @shymis rightfully said, the way to go might be to use extra layers, but that is still quite early in development.


    << LINK REMOVED >> thanks for the info :)


    @ArchoNils2 Well, it's not stupid at all - Intel introduced a 3D CPU prototype back in 2004. There are many challenges in creating and mass producing something like this - physical restrictions, complexity in designing the die itself, building the mass-production ecosystem and so on. On a side note - there will be a 3D part in the Carrizo APUs - they will have stacked memory. I hope I helped a bit :)


    << LINK REMOVED >> Thank you very much for your reply, looking forward to seeing how CPUs progress :)


    I'll echo everybody else's comments here: brilliant! Just write a good balanced article, and keep to the facts, and let the praise come pouring in! I recently read an article entitled "AMD vs. Intel..." on another site, which in the end was entirely about Intel, and had nothing but generic info about AMD, and that article was roundly and properly criticized. Good on you to actually do more than a half-assed job! I guess we'll be praising more journalists like this these days, just for handling their jobs properly. ;)


    Bloody brilliant article... it's great to hear from industry experts.

    Very informative, even though I am by no means a computer expert and had to look up most of the terms; that said, I appreciated that Mr Walton didn't address the reader as if we were children.



    Great article, Mr. Walton! Really enjoyed reading it!

    Regarding the push from desktop to mobile, I think that every PC-oriented company that decides to enter the mobile market will end up taking a loss, and the move will fail. The same happened to Microsoft and many others when they tried to push their way into markets that weren't their primary field. In such cases there is also an extra downside on their primary market, since they spread their resources everywhere instead of focusing on the primary target.

    I hope that both AMD and Intel, as well as AMD/ATi and Nvidia, stay true to the PC platform and cooperate with others (like the mobile platform) in cases where the exchange of technology and patents can boost innovation in their primary field. Otherwise they'll face more and more losses in the process of experimentation, while the dominant players (i.e. ARM) continue to dominate the mobile platform.


    << LINK REMOVED >> Another problem with jumping into the mobile market, in a word: saturation. You end up with too many companies vying for your dollar because the market is inundated with too many brands. It was a problem in the '80s with Colecovision vs Atari 2600 vs Intellivision vs Leasurevision (Arcadia 2001) vs Odyssey vs Vectrex vs Telestar, etc. That is another reason why Microsoft had issues trying to get a foothold in mobile.


    Great article! Enjoyed reading it.


    Great article Walton.


    Gametech is probably the best feature in Gamespot IMO. :)


    32-bit memory limitations are a function of the OS, not the 32-bit architecture. There are a number of 32-bit systems out there that can address more than 4GB.

    It took years for software to take advantage of 64-bit extensions, because at the time, the only 64-bit version of Windows was XP-64, which remains a broken mess to this day. Same goes for additional cores. By the time those awesome features were properly utilized, AMD had already lost the performance crown. Kinda sad, but that's how adoption goes.

    Prescott was actually the "good" Pentium 4 - at least, it was as good as the Pentium 4 ever really got, and good enough to finally be competitive (unlike the first batch of P4s, which were outperformed by PIIIs). The long pipeline was the basis of the Netburst architecture, which started with a 20-stage pipeline and eventually grew to 31 stages with Prescott (Bulldozer is somewhere in the low 20s), because longer pipelines let you run higher frequencies, and Intel was chasing 10GHz (their early engineering samples started melting somewhere around 4 or 5GHz, at which point they realized they had a problem).

    The idea behind pipelining is that an instruction is broken into stages, and each stage can work on a different instruction at the same time. Once stage one finishes its part, it hands the instruction to stage two and starts on a new one, and so on down the line. So with a 31-stage pipeline you can have 31 instructions in flight at once, ideally completing one every cycle, even though any single instruction takes 31 cycles to get all the way through.

    The problem is that it's sequential. An instruction has to move from one stage to the next and isn't finished until it has passed through the whole pipeline. That means the instruction entering stage one may depend on the result of an instruction that's still working its way through the later stages - a conditional branch whose outcome hasn't been computed yet, for example. For that, you have branch prediction: the CPU guesses which way the branch will go and keeps fetching and executing along the predicted path before the real result is known.

    Obviously, if the prediction is wrong, then all the work done along the predicted path is invalid and has to be thrown out. That's a branch misprediction, and it effectively stalls the pipeline. Keeping each stage short and simple is what lets long pipelines scale to higher frequencies, but a longer pipeline also means more in-flight work is discarded on every misprediction, which lowers the efficiency of each cycle.
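
    To make that concrete, here's a quick C++ toy of my own (nothing from the article; the array size and the 128 threshold are arbitrary). The same loop gets dramatically faster once the data is sorted, because the branch inside it becomes almost perfectly predictable:

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <vector>

    // sums only the "large" values; the if() is the branch the CPU must predict
    int64_t sum_large(const std::vector<int>& v) {
        int64_t sum = 0;
        for (int x : v)
            if (x >= 128)
                sum += x;
        return sum;
    }

    int main() {
        std::vector<int> data(1 << 24);
        std::mt19937 rng(42);
        for (int& x : data) x = rng() % 256;   // random values: the branch is a coin flip

        auto time_it = [&](const char* label) {
            auto t0 = std::chrono::steady_clock::now();
            int64_t s = sum_large(data);
            auto t1 = std::chrono::steady_clock::now();
            std::cout << label << ": "
                      << std::chrono::duration<double, std::milli>(t1 - t0).count()
                      << " ms (sum=" << s << ")\n";
        };

        time_it("unsorted (unpredictable branch)");
        std::sort(data.begin(), data.end());   // now the branch goes false, then true
        time_it("sorted (predictable branch)");
    }

    The deeper the pipeline, the more in-flight work each wrong guess throws away, which is exactly why Netburst paid such a heavy price for mispredictions.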

    Bulldozer actually clocks pretty well, although its performance per watt isn't great and its IPC is fairly low compared to Intel's chips, but I don't think it's fair to blame both of those on pipeline length, since it's not all that much longer than the pipeline in Sandy Bridge. Intel's edge comes from its manufacturing-process lead and design differences that help mitigate branch mispredictions (among other things). Also, the physical cores in the Bulldozer architecture are designed such that care has to be taken with how they are loaded, since cores within a module share a number of resources.

    No idea why I bothered to write all that out.


    << LINK REMOVED >> The bottom line is that while CPUs are frequency-limited by physics, getting instructions into them and data in and out of them is the real bottleneck. Instruction pipelines may mitigate the problem, but that depends on the instruction mix. Intel and AMD need to bolt on larger, faster cache memory along with much faster RAM to keep modern multicore CPUs fed with instructions and data.


    << LINK REMOVED >>

    I'd argue that cache sizes and speeds are perfectly adequate, as we routinely see very little performance difference from increased cache sizes, and the higher-end chips already have gobs of very speedy cache to work with.

    And in Intel's case, faster RAM is really not that big a deal either. Intel's graphics rather stink regardless, their drivers always stink, and any real gamer will be using a dGPU anyway, so feeding the CPU with faster RAM than what is already out there will make practically zero difference with the current architectures. AMD is a different story, because the graphics cores inside their APUs need RAM that's as fast as they can get.

    The main problem is still programming. It's difficult to use multiple cores efficiently. x86 processors decode their instructions into micro-operations, or even fuse them into more complex macro-operations. They execute out of order and separate the decode units from the execution units, all of which lets them keep their cores fed much faster. But they can only do what they're told to do, and that's up to the programmer.
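
    As a rough illustration of what that asks of the programmer (my own minimal sketch, nothing official; the even chunking is the simplest scheme possible), even something as trivial as summing an array has to be split up by hand before extra cores help at all:

    #include <algorithm>
    #include <numeric>
    #include <thread>
    #include <vector>

    // split a big sum across nthreads; each thread sums one slice of the data
    long long parallel_sum(const std::vector<int>& v, unsigned nthreads) {
        std::vector<long long> partial(nthreads, 0);
        std::vector<std::thread> workers;
        const size_t chunk = v.size() / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            size_t begin = t * chunk;
            size_t end = (t + 1 == nthreads) ? v.size() : begin + chunk;
            workers.emplace_back([&v, &partial, t, begin, end] {
                partial[t] = std::accumulate(v.begin() + begin, v.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();   // wait for every slice to finish
        return std::accumulate(partial.begin(), partial.end(), 0LL);
    }

    int main() {
        std::vector<int> data(10'000'000, 1);
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        return parallel_sum(data, n) == 10'000'000 ? 0 : 1;
    }

    And that's the easy case: game code full of interdependent state is far harder to carve into independent slices like this, which is why so many engines still lean on one big thread.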


    << LINK REMOVED >> Ahh an EE or CE major perhaps :P ? Sounds like my digital logic class...


    << LINK REMOVED >>

    College dropout, actually. I studied audio production for a short period of time, then did some time in the Navy as a nuclear machinist's mate, and after a couple of different jobs I now fix computers and phones/tablets. I also love working on cars, so I do a little bit of everything. I just like to learn as much as possible, so I read a lot. :)


    << LINK REMOVED >><< LINK REMOVED >> Nothin wrong with that, as long as you have passion :)

    I understand where you're coming from; sometimes college just drags on and I'd rather be doing something physical or creative. That makes it hard to do well when it's just a grind, but classes are finally getting interesting, so I'm sticking with it. I love music too, so I got into music production and theory on my own and entered some remix competitions and such. I'm kinda the same into-everything way. You seemed like you knew what you were talking about, so I figured I'd ask and see what someone older has to say about the field.


    << LINK REMOVED >> because you're passionate about hardware :-)


    This was the 7th article I've ever read word for word on GS. Way to go, GS. That was really informative.


    Really great article. I'm surprised it didn't mention Intel's upcoming "Knights Landing" processor. It uses Micron's new Hybrid Memory Cube technology... pretty cool stuff: << LINK REMOVED >>


    Nice article; it's been a long time since I read an article with this kind of information. Keep up the good work with Gametech. I am a PC gamer and will definitely be looking forward to reading more of these articles. Great job, GameSpot.


    The CPU is barely relevant for gaming at this point - I'd say 20% importance, 10% for RAM and the rest on your GPU - which is a good thing. It's amazing to see how old quad cores like the Phenom II X4 955 still won't bottleneck the newest mid-level cards like the GTX 750 Ti.


    << LINK REMOVED >> I'm gonna have to disagree. I got a 30+ FPS boost in Watch Dogs - it went from barely playable at around 22 FPS to pretty smooth at 55 FPS - when I went from an A8-6600K to an FX-8350, using an R9 290 with both. The CPU is still pretty important, as it can bottleneck the GPU and degrade gaming performance.


    << LINK REMOVED >><< LINK REMOVED >> It hugely depends on how the game (or any program) is written as to how well it will use your GPU and CPU. It takes a lot of effort to code it well for multiple cores and to divide the work between the CPU and GPU.


    << LINK REMOVED >> The C3 steppings like my 965, OC'd past 4GHz, are still beasts nowadays; the GPU is the only thing I change from time to time.


    << LINK REMOVED >> A year or two ago I was still using an Athlon. 'Nuff said.


    Way to go, GameSpot, with these nice articles. Although I will say that some of your reviews aren't accurate from my point of view (Watch Dogs should've gotten more like a 6 because it is essentially Assassin's Creed: Chicago Parkour Hacking, and CoD should get lower scores because as far as I can see it's been the same dang game since MW2 with different textures), these articles are nice.


    Moore's Law, when size does matter.


    << LINK REMOVED >> owned.


    << LINK REMOVED >>

    "Its not the size that counts, its how you use it"....The Sheriff of Rottingham


    << LINK REMOVED >><< LINK REMOVED >> "Motion of the Ocean"? Don't know about you, but it takes a long time to get to England in a rowboat from the USA.


    Mobile devices are the best thing that has happened to the hardware world.

    The new boom in mobile devices has brought out serious innovation from CPU and hardware makers. At the rate mobile devices are improving every six months, Intel will soon be a forgotten memory.

    Seriously, what has changed in the PC world in the last 15 years? Nothing. RAM still plugs into the mobo, the CPU still plugs into its socket, and the chipset is still integrated onto the motherboard.

    But in mobile devices they have baked most of the important components right onto the CPU itself. Mobile CPUs have the chipset, RAM AND GPU built right in. That cuts out SO much of the latency between parts that PCs struggle with. With RAM built directly onto the CPU itself, it can run at CACHE speeds, which are like 1800x faster and have ZERO CAS latency.

    Because the RAM, GPU and chipset are built right into the mobile chip, there is no such thing as screen tearing, because the latency between all the parts is non-existent. Whereas on PC the GPU has to contact the chipset, which contacts the CPU, which sends a signal to the monitor.

    PC technology is fast, but the setup (plugging multiple parts into one another) is so primitive. At the very LEAST, today's desktop PCs should have the motherboard's chipset baked right into the CPU, and the RAM baked right into the motherboard.


    << LINK REMOVED >>

    What has changed in the PC world in the last 15 years? Processors have gotten dramatically faster, that's all. Oh, and software is finally taking advantage of 64-bit extensions and multiple processors. That wasn't the case even 8 years ago. And memory controllers have moved on-die.

    The main problem I see with putting the GPU on-die is that now you have hamstrung it with the very slow system RAM. The only way to overcome that problem is to change the system RAM to fast graphics RAM, like Sony did with the PS4. None of the ARM SoCs currently out have RAM built into their silicon; they're all using low-power memory which is even slower than desktop/laptop RAM. There's no such thing as "zero CAS latency," because that would mean that the RAM provides data the instant the memory controller sends the signal, and that's not physically possible (do you have a quantum cell phone or something?).

    I don't think you understand what causes screen tearing. It's not caused by latency, it's caused by the way in which the screen is actually drawn (a throwback to CRT monitors that actually scanned lines from a framebuffer onto the screen).
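
    If it helps, here's a tiny C++ toy model of my own (purely illustrative; the line count and swap point are made up) of where a tear actually comes from - the display scans out the front buffer line by line, and if the buffer swap lands mid-scan, the top of that refresh shows the old frame and the bottom shows the new one:

    #include <cstdio>

    int main() {
        const int scanlines = 8;       // pretend the screen is 8 lines tall
        const int old_frame = 1, new_frame = 2;
        const int swap_at_line = 5;    // the new frame arrives mid-scanout

        for (int line = 0; line < scanlines; ++line) {
            int shown = (line < swap_at_line) ? old_frame : new_frame;
            std::printf("scanline %d shows frame %d\n", line, shown);
        }
        // vsync simply delays the swap until after the last scanline,
        // so every line of a refresh comes from the same frame
        return 0;
    }

    That's why vsync removes tearing on any system, integrated graphics or not.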

    All of this prebaking stuff that you seem so keen on is a major buzzkill when, four years down the road, your current equipment is running kind of slow and isn't upgradeable at all. Maybe you like spending hundreds of dollars on computing equipment that isn't in any way configurable or upgradeable, but I sure don't. The northbridge functions have already been integrated into CPUs, but the benefit of having a southbridge is that motherboard features aren't tied to specific CPUs.


    << LINK REMOVED >>

    OK, where to start... it is clear you do not understand modern computing:

    1. "Mobile CPUs have the chipset, RAM AND GPU built right in" - No, they don't. They still have DRAM chips soldered onto a BGA connection, which is no better or worse than an edge connector... just smaller. The chipset is on the board just as in desktops; there are simply fewer chips due to the lack of expandability.

    2. "Run at CACHE speeds which are like 1800x faster and have ZERO CAS latency" - Actually, mobile RAM runs at lower wattage and clock speeds to preserve battery. It generally has higher latency as well, due to the low-cost components used to build a complete system for $200-$300.

    3. "Because the RAM, GPU and chipset are built right into the mobile chipset, there is no such thing as screen tearing" - This is more due to the design of a tablet: the display is built to match the hardware. There is nothing special that would prevent screen tearing on a tablet.

    4. "Whereas on PC the GPU has to contact the chipset, which contacts the CPU, which sends a signal to the monitor" - AMD and Intel CPUs both handle the PCIe bus directly. What you are describing is a northbridge connection to an AGP port... either your information is out of date, or you have no clue how modern PCs actually work.

    5. "At the very least, today's desktop PCs should have the motherboard's chipset baked right into the CPU, and the RAM baked right into the motherboard" - First of all, the chipset is mostly on the CPU already: the DRAM controllers, GPU and PCIe controllers. And solder the RAM onto the board? Building a computer is a hobby for most PC gamers, but clearly this concept eludes you.


    @rolla020980 @dannydopamine8 Lol, none of your corrections are true - just the ramblings of a narcissistic d0ucher hellbent on making someone look bad.

    And what exactly is the point of insulting me? Insulting people you don't know is the definition of irony. You should be able to present a formal argument without having to resort to insults.

    You are discrediting my statements based on particular outdated circumstances. No matter what you say, having all the components on the CPU cuts out tons of latency and makes for a much simpler build.

    Perhaps you are talking about older mobile phones or something - back when you worked on them, before getting fired for being such a self-centered ahole. Anyone who speaks to others like that can't get far.

    And this new technology allows for RAM at cache speeds. It's not a new thing; it will be here soon enough.


    You just tore that post to shreds, lol. Are you a software/hardware engineer by any chance?


    << LINK REMOVED >>

    I am a hardware engineer for a small Microsoft OEM... can't say which one of course, but I engineer tablets and desktops. It is an amazing gig. I get my hands on the latest tech every day. I can't imagine doing anything else.


    That does sound like a dream job.

    Let me ask your opinion on the Surface tabs... What do you think about the Pro 3 vs the Pro 2? I just bought a Pro 2 instead of the 3 because A) a 12+" tab isn't very portable to me and B) they use the same processors (the newer Pro 2 and the Pro 3) and other internals. The 3 seems like it offers little in the way of upgrades. Then there's the move away from Wacom to N-trig, etc.

    The only real benefit seems to be the reduced thickness and weight (because it's obviously spread out more over the bigger unit). The Pro 2 is a fat, heavy beast of a tab, but the Power Keyboard is great.

    Being you work for a MS OEM, I'm curious on your thoughts of the Pro 3.


    << LINK REMOVED >>

    TBH, I think of the Pro 3 as a mini laptop, which kind of aligns with their marketing. It is an awesome device and is very well built, but you are right... it is too big for a tablet. I have an HP Split X2 as a work PC right now and it is a good Surface alternative. The CPU is in the same class as the Surface Pro 2's, and since it is marketed as a laptop that can be a tablet - not the other way around - it includes the keyboard dock for less money. Too bad the Split didn't catch on; it had real potential.

    For a tablet, the Pro 2 is definitely the way to go. The Pro 3 is lighter, but it is too big to use comfortably as a tablet. Microsoft's latest layoffs and their CEO's comments about focusing less on devices and more on services could mean that there will be no Surface 4, but that is something I am not privy to and is speculation on my part.

    TBH, my personal tablet is an HP TouchPad with CM10. It is an old beast that will be replaced sometime in the near future. I find myself moving away from Android and toward Windows on the Bay Trail-T platform. I have been very impressed by what a 2.2W TDP CPU can do. The mobile market will be very interesting in the next few years.


    Mantle shits on DirectX.


    << LINK REMOVED >> Mantle is pretty good, from what I've seen so far. With an R9 290, I get a 35 FPS average boost in Thief's built-in benchmark once I turn Mantle on. Can't wait for more games to start using it.



    Until you get a bad piece of code... DirectX hiccups, then manages the resources and allows the code to continue (so long as it isn't a major problem, but even then you can still get to Task Manager). If Mantle gets a bad piece of code... most likely you will see blue screens. That is the danger of direct access. The OS is there for a reason... even if it is a resource hog.


    @rolla020980 Mantle is the future. CPUs are literally twigs compared to the power of GPUs, and Mantle takes advantage of this and makes sure games use the CPU as little as possible and put the full load on the GPU.

    It's really a no-brainer. Technically we shouldn't even have a dedicated CPU anymore these days; the GPU should just do both jobs.

    There are actually experiments where they put the full load of the PC on the GPU and it literally becomes something like 8000x faster, or something ridiculous like that.


    << LINK REMOVED >>

    Source? CPUs and GPUs are different beasts. CPUs generally have higher clock speeds per core to handle large amounts of irregular, unpredictable work, whereas GPUs have many more cores that run at slower speeds to chew through large amounts of regular, predictable data. You are most likely thinking of Bitcoin mining, which is performed on the GPU - AMD optimizes their cards for it. Bitcoin mining is a hash algorithm that is predictable and well suited to a GPU. Irregular, branchy workloads are still much faster on CPUs.
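
    A rough sketch of that difference, using my own toy code (toy_hash is a made-up stand-in, not a real algorithm): hashing lots of independent blocks is embarrassingly parallel, so thousands of slow GPU threads can each take one, while a chained computation can't be split up no matter how many cores you throw at it:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // made-up mixing function standing in for a real hash
    uint64_t toy_hash(uint64_t x) { return x * 0x9E3779B97F4A7C15ull ^ (x >> 31); }

    int main() {
        std::vector<uint64_t> blocks(1'000'000);
        for (uint64_t i = 0; i < blocks.size(); ++i) blocks[i] = i;

        // embarrassingly parallel: every iteration is independent,
        // so a GPU (or many CPU threads) could do them all at once
        std::vector<uint64_t> hashes(blocks.size());
        for (size_t i = 0; i < blocks.size(); ++i)
            hashes[i] = toy_hash(blocks[i]);

        // inherently serial: each step needs the previous result,
        // so extra cores don't help here at all
        uint64_t chained = 0;
        for (size_t i = 0; i < blocks.size(); ++i)
            chained = toy_hash(chained ^ blocks[i]);

        std::printf("%llu %llu\n", (unsigned long long)hashes.back(),
                    (unsigned long long)chained);
    }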


    << LINK REMOVED >><< LINK REMOVED >> Today, I have learnt that arguing with this dude about tech is pointless. He knows literally everything.



    He does not. All he said was that CPUs and GPUs are different. No sh1t - if they weren't, we wouldn't even have CPUs.

    GPUs have many more cores, yes, and that's why GPUs are better. 2000 cores is better than one, on an absurd scale.


    While ARM is the leader in the mobile space, Intel has come out with a powerful processor to challenge ARM's dominance: the Atom Z3770. It runs both Windows and Android, and I've read that it's comparable to a Core i5. That's very good horsepower in a mobile device.

    This article also left out Qualcomm. They were terrible a few years back, but now Qualcomm powers almost all high-end smartphones.


    << LINK REMOVED >>

    Qualcomm is popular because they incorporate the modem into the SoC, which allows for tighter packaging and lower cost. Also, their Adreno graphics IP was bought from none other than AMD.

    Intel has done a terrible job of scaling down to smartphone levels. Their latest chips might be okay, but they still struggle to compete with ARM on performance per watt. x86 is an ISA that dates all the way back to 1978; ARM has the huge advantage of not dragging that kind of legacy around with it.


    << LINK REMOVED >>

    I had the chance to play with a Z3735D tablet and it is impressive for the price. The best thing is, if you get a Windows tablet and load BlueStacks, you have Windows and Android on one device.


    I was thinking of loading BlueStacks on my Surface Pro 2. I'm not sure I have much need for it, though, because I have a Nexus 5 and 7 that handle everything Android. I've gotta check it out though.
