Processor Wars. The story of the blue hare and the red tortoise

The modern history of the confrontation between Intel and AMD in the processor market dates back to the second half of the 90s. The era of grandiose transformations and entry into the mainstream, when the Intel Pentium was positioned as a universal solution and Intel Inside became perhaps the most recognizable slogan in the world, was marked by bright pages in the history of not only the blues but also the reds - starting with the K6 generation, AMD competed relentlessly with Intel in many market segments. However, it was the events of a slightly later stage - the first half of the 2000s - that played the most important role in the emergence of the legendary Core architecture, which still underlies Intel's processor line.

A bit of history, origins and revolution

The beginning of the 2000s is associated with several milestones in processor development: the race for the coveted 1 GHz mark, the appearance of the first dual-core processors, and the fierce struggle for supremacy in the mass desktop segment. After the hopelessly outdated Pentium 4 and the introduction of the Athlon 64 X2, Intel presented the Core generation of processors, which eventually became a turning point in the development of the industry.


The first Core 2 Duo processors were announced at the end of July 2006, more than a year after the release of the Athlon 64 X2. In working on the new generation, Intel focused primarily on architectural optimization, achieving outstanding energy efficiency already in the first models based on the Core architecture, codenamed Conroe - they outperformed the Pentium 4 by roughly one and a half times and, with a declared thermal package of 65 W, became perhaps the most energy-efficient processors on the market at the time. Playing catch-up for once (a rare position for the company), Intel implemented in the new generation support for 64-bit operations via EM64T, the new SSSE3 instruction set, and an extensive package of x86 virtualization technologies.

Core 2 Duo microprocessor die

In addition, one of the key features of Conroe processors was the large L2 cache, whose impact on overall performance was quite noticeable even then. Deciding to split the processor segments, Intel disabled half of the 4 MB L2 cache in the junior representatives of the line (E6300 and E6400), marking out the entry-level segment. Nevertheless, the technological strengths of Core (low heat output, high energy efficiency, and solder under the heat spreader) allowed advanced users to reach remarkably high frequencies on capable system logic - high-quality motherboards allowed overclocking the FSB, pushing the junior processors to 3 GHz and beyond (roughly a 60% gain overall), thanks to which successful E6400 samples could compete with their older brothers, the E6600 and E6700, albeit at the cost of significant thermal risk. Even modest overclocking, however, yielded serious results - in benchmarks the older processors easily outpaced the flagship Athlon 64 X2, cementing their position as the new leaders and people's favorites.
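To put the overclocking arithmetic into perspective, here is a minimal sketch in C; the E6400's stock 8x multiplier and 266 MHz FSB clock are assumptions not spelled out above, and the 400 MHz FSB figure is just an example of what a good board could reach.

    #include <stdio.h>

    /* Core clock = FSB clock (MHz) x locked multiplier.
       Illustrative sketch: assumes the E6400's stock 8x multiplier and
       266 MHz FSB clock (1066 MT/s quad-pumped). */
    int main(void) {
        const double multiplier = 8.0;    /* E6400 multiplier (locked) */
        const double fsb_stock  = 266.0;  /* stock FSB clock, MHz */
        const double fsb_oc     = 400.0;  /* FSB reachable on a good board */

        double stock = fsb_stock * multiplier;  /* 2128 MHz, i.e. ~2.13 GHz */
        double oc    = fsb_oc * multiplier;     /* 3200 MHz */

        printf("Stock: %.0f MHz, overclocked: %.0f MHz (+%.0f%%)\n",
               stock, oc, (oc / stock - 1.0) * 100.0);
        return 0;
    }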

In addition, Intel launched a real revolution in production - quad-core processors of the Kentsfield family with the Q prefix, built on the same 65 nm process but using two Core 2 Duo dies on a single substrate. Keeping energy efficiency about as high as possible (the package consumed roughly as much as the two dies would separately), Intel showed for the first time how powerful a system with four threads could be - in multimedia applications, archiving, and heavy games that actively parallelized their workload (in 2007 these were the sensational Crysis and the no less iconic Gears of War), the performance difference over a dual-core configuration could reach 100%, an incredible advantage for any buyer of a Core 2 Quad-based system.

Two Core 2 Duo dies on one substrate - Core 2 Quad

As with the Pentium line, the fastest processors received the Extreme Edition designation and the QX prefix, and were available to enthusiasts and OEM system builders at a significantly higher price. The crowning achievement of the 65 nm generation was the QX6850 at 3 GHz with a fast 1333 MHz FSB. This processor went on sale for $999.

Of course, such a resounding success could not go unanswered by AMD, but at the time the red giant had not yet moved to producing quad-core processors. To counter Intel's new products, an experimental Quad FX platform was introduced, developed in collaboration with NVIDIA and represented by only one production motherboard, the ASUS L1N64, designed for two dual-core Athlon 64 FX (or Opteron) processors.

ASUS L1N64

The platform turned out to be an interesting technical novelty in the mainstream, but numerous technical compromises, huge power consumption, and mediocre performance (compared to the QX6700) did not let it compete successfully for the upper market segment - Intel won, and quad-core Phenom processors appeared in the red camp only in November 2007, by which time the competitor was ready to take the next step.

The Penryn line, essentially a die shrink of the 65 nm chips of 2007, debuted on the market on January 20, 2008 with the Wolfdale processors - just two months after the release of AMD's Phenom. The transition to a 45 nm process using new dielectrics and gate materials expanded the horizons of the Core architecture even further. The processors received SSE4.1, new power-saving features (such as Deep Power Down, which brings power consumption close to zero in the deep-sleep state on mobile versions), and also ran noticeably cooler - in some tests the difference reached 10 degrees compared to the previous Conroe series. With higher frequencies and performance, as well as a larger L2 cache (for the Core 2 Duo it grew to 6 MB), the new Core processors consolidated their leading positions in benchmarks and set the stage for a further round of fierce competition and the beginning of a new era. An era of unprecedented success, of stagnation and calm. The era of Core i processors.

A step forward and none back. The first generation Core i7

Already in November 2008, Intel introduced the new Nehalem architecture, which marked the release of the first processors of the Core i series so familiar to every user today. Unlike the well-known Core 2, the Nehalem architecture provided for four physical cores on a single die from the start, as well as a number of features already familiar from AMD's technical innovations: an integrated memory controller, a shared third-level cache, and the QPI interface, Intel's counterpart to HyperTransport.

Intel Core i7-970 microprocessor die

With the memory controller moved under the processor's heat spreader, Intel was forced to rebuild the entire cache structure, reducing the L2 cache in favor of a shared 8 MB L3. This step significantly cut down on requests going to main memory, and shrinking the L2 to 256 KB per core proved an effective solution for multi-threaded workloads, where the bulk of the load was addressed to the shared L3 cache.
In addition to restructuring the caches, Intel took a step forward with Nehalem by giving the processors DDR3 support at 800 and 1066 MHz (though these first standards were far from the limit for these chips) and dropping DDR2 support - unlike AMD, which stuck to backward compatibility with Phenom II processors available for both AM2+ and the new AM3 socket. The memory controller in Nehalem could work in one of three modes - one, two, or three memory channels on a 64-, 128-, or 192-bit bus, respectively - which is why motherboard manufacturers placed up to six DDR3 DIMM slots on the PCB. As for the QPI interface, it replaced the already outdated FSB, at least doubling the platform's bandwidth - a particularly welcome decision given the growing demands on memory frequency.
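To make the channel math concrete, a small sketch of the peak theoretical numbers (each DDR3 channel is 64 bits, i.e. 8 bytes, wide; real-world throughput is of course lower):

    #include <stdio.h>

    /* Peak theoretical DDR3 bandwidth = transfer rate (MT/s) * 8 bytes * channels. */
    static double peak_gbs(double mts, int channels) {
        return mts * 8.0 * channels / 1000.0;   /* -> GB/s */
    }

    int main(void) {
        printf("DDR3-1066, 2 channels: %.1f GB/s\n", peak_gbs(1066, 2)); /* ~17.1 */
        printf("DDR3-1066, 3 channels: %.1f GB/s\n", peak_gbs(1066, 3)); /* ~25.6 */
        return 0;
    }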

Nehalem also brought back the almost forgotten Hyper-Threading, giving four powerful physical cores eight virtual threads and introducing "that very SMT". HT had in fact first been implemented back in the Pentium 4, but Intel had not revisited it since.

Hyper-Threading Technology

Another technical quirk of the first-generation Core i was the separate uncore frequency of the cache and memory controller, which had to be configured via the corresponding BIOS parameters - Intel recommended keeping it at roughly twice the memory frequency for optimal performance, and even such a trifle could become a problem for some users, especially when overclocking via the base clock (BCLK), because an unlocked multiplier was reserved for the incredibly expensive flagship of the line, the i7-965 Extreme Edition, while the 940 and 920 had fixed multipliers of 22 and 20, respectively.
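The clock relationships described above can be condensed into a simple sketch; the ratios below are illustrative assumptions for an i7-920-class setup, not values taken from a specific board.

    #include <stdio.h>

    /* Nehalem, simplified: every clock domain is derived from BCLK, so raising
       BCLK on a multiplier-locked chip drags memory and uncore up with the cores.
       Ratios are illustrative; per the guideline above, the uncore is kept at
       roughly twice the effective memory speed. */
    static void report(double bclk) {
        const int core_ratio   = 20;  /* i7-920 */
        const int uncore_ratio = 16;  /* assumed setting */
        const int dram_ratio   = 8;   /* effective DDR3 rate = BCLK * 8 */

        printf("BCLK %.0f MHz: core %.0f MHz, uncore %.0f MHz, DDR3-%.0f\n",
               bclk, bclk * core_ratio, bclk * uncore_ratio, bclk * dram_ratio);
    }

    int main(void) {
        report(133.0);  /* stock:     2660 / 2128 / 1064 */
        report(160.0);  /* overclock: 3200 / 2560 / 1280 */
        return 0;
    }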

Nehalem grew both physically (the processor became slightly larger than the Core 2 Duo due to the memory controller moving under the lid) and figuratively.

Processor size comparison

Thanks to "smart" monitoring of the power subsystem, the PCU (Power Control Unit), together with Turbo mode, made it possible to squeeze out a little more frequency (and therefore performance) even without manual tuning, limited only by the rated 130 W thermal package. True, in many cases this limit could be pushed back a little by changing BIOS settings, gaining an extra 100-200 MHz.

All in all, the Nehalem architecture had a lot to offer: a significant increase in power over Core 2, multi-threaded performance, powerful cores, and support for the latest standards.

One source of confusion surrounds the first-generation i7: the existence of two sockets, LGA1366 and LGA1156, with (at first glance) the same Core i7. However, the two platforms were driven not by the whim of a greedy corporation, but by the move to the Lynnfield design, the next step in the development of the Core i line.

As for the competition, AMD was in no hurry to switch to a revolutionary new architecture, trying instead to keep pace with Intel. Using the good old K10, the company released the Phenom II, which was essentially a transition of the first-generation Phenom to a 45 nm process without significant architectural changes.


Thanks to the reduced die area, AMD was able to use the freed-up space for an impressive L3 cache, which in its structure (as well as the overall layout of elements on the die) roughly corresponded to what Intel had done with Nehalem, but carried a number of drawbacks stemming from the drive for economy and backward compatibility with the rapidly aging AM2 platform.

Having fixed the Cool'n'Quiet shortcomings that had left the feature practically non-functional in the first Phenom generation, AMD released two revisions of Phenom II: the first addressed users on old AM2-generation chipsets, the second targeted the updated AM3 platform with DDR3 support. It was precisely the desire to keep new processors running on old motherboards that played a cruel joke on AMD (one that would be repeated in the future) - because of platform limitations in the form of a slow northbridge, the new Phenom II X4 could not run at the expected uncore frequency (memory controller and L3 cache), losing yet more performance in the first revision.

Nevertheless, the Phenom II was affordable and powerful enough to match Intel's previous generation, namely the Core 2 Quad. Of course, that only meant AMD was not ready to compete with Nehalem. At all.
And then came Westmere...

Lynnfield. Cheaper than AMD, faster than Nehalem

The advantages of the Phenom II, presented by the red giant as a budget alternative to the Q9400, lay in two things. The first was compatibility with the AM2 platform, which many fans of low-cost computers had bought into during the first Phenom generation. The second was a tempting price, which neither the expensive i7-9xx nor the more affordable (but by then unprofitable to produce) Core 2 Quad series could argue with. AMD was betting on accessibility for the widest range of users, casual gamers, and budget-conscious professionals, but Intel already had a plan for trumping all of the red chipmaker's cards in one move.

That plan was built on Lynnfield, the next stage in the development of Nehalem (the Bloomfield core), which had already proven itself among enthusiasts and those who prefer to take the best. This time Intel abandoned the expensive platform extras - the new LGA1156 platform lost the QPI link in favor of an architecturally simpler DMI, gained a dual-channel DDR3 memory controller, and once again moved part of the chipset's functions under the processor lid - this time the PCI Express controller.

While the new Core i7-8xx and Core i5-750 look identical in package size to the Core 2 Quad, the die itself came out even larger than Bloomfield's - by sacrificing the extra QPI links and consolidating the standard I/O block, Intel's engineers integrated a PCI Express controller occupying about a quarter of the die, designed to minimize latency when working with the GPU; an extra 16 PCIe lanes were never superfluous.

Lynnfield also brought an improved Turbo mode, built on the "more cores - less frequency" principle Intel has used ever since. According to the engineers' logic, the 95 W limit (this is how much the updated flagship was rated to consume) had often gone unreached in the past because of the emphasis on boosting all cores in every situation. The updated mode applied "smart" boosting, dosing frequency so that with one core in use the rest were shut down, freeing up headroom to push the active core higher. In this simple way, a single loaded core reached the maximum clock frequency, two cores got a smaller boost, and all four only a negligible one. Thus Intel ensured maximum performance in the majority of games and applications using one or two threads, while maintaining an energy efficiency AMD could only dream of at the time.
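The principle is easy to express as a lookup from the number of active cores to a turbo bin; the bin values below are invented purely for illustration and do not reproduce any specific model's real table.

    #include <stdio.h>

    /* "More cores - less frequency": the fewer cores are loaded, the more
       turbo bins (133 MHz each on this platform) get applied. */
    static double turbo_mhz(double base_mhz, int active_cores) {
        static const int bins[5] = {0, 5, 4, 1, 1};  /* index = active cores */
        return base_mhz + bins[active_cores] * 133.0;
    }

    int main(void) {
        for (int cores = 1; cores <= 4; ++cores)
            printf("%d active core(s): ~%.0f MHz\n", cores, turbo_mhz(2933.0, cores));
        return 0;
    }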


The Power Control Unit, responsible for distributing power between the cores and other on-die modules, was also significantly improved. Through process refinements and materials engineering, Intel created a near-perfect scheme in which an idle processor consumes almost no power at all. Notably, this result was not down to architectural changes - the PCU block migrated to Lynnfield essentially unchanged, and only stricter requirements for materials and overall manufacturing quality made it possible to reduce leakage currents from the disabled cores and related modules to zero (or almost to zero) in the idle state.

By trading the three-channel memory controller for a dual-channel one, Lynnfield could have lost some performance, but thanks to the higher memory frequency (DDR3-1066 for mainstream Nehalem versus DDR3-1333 for the hero of this part of the article), the new i7 not only didn't fall behind, but at times turned out faster than the Nehalem processors. Even in applications that don't use all four cores, the i7-870 is nearly identical to its bigger brother thanks to the DDR3 clock advantage.

The gaming performance of the updated i7 was almost identical to the best solution of the previous generation, the i7-975, which cost twice as much. Meanwhile the junior model traded blows with the Phenom II X4 965 BE, sometimes confidently outperforming it and sometimes only slightly ahead.

Price was exactly the issue that had always troubled Intel fans - and the answer, an incredible $199 for the Core i5-750, suited everyone perfectly. Yes, there was no SMT here, but powerful cores and excellent performance made it possible not just to overtake AMD's flagship, but to do so much more cheaply.

Dark times had come for the Reds, but they had an ace up their sleeve - the new generation of AMD FX processors was in the works. Intel, though, did not come to the fight unarmed either.

The birth of a legend and a great battle. Sandy Bridge vs AMD FX

Looking back at the history of the relationship between the two giants, it becomes obvious that the period of 2010-2011 was associated with the most incredible expectations for AMD and unexpectedly successful solutions from Intel. Although both companies took risks by presenting completely new architectures, for the Reds the announcement of the next generation could prove fatal, while Intel, by and large, had little to doubt.

If Lynnfield was a massive bug fix, Sandy Bridge took the engineers back to the drawing board. The move to 32 nm brought a monolithic foundation, no longer resembling the split layout used in Nehalem, where two blocks of two cores divided the die into halves with secondary modules along the sides. In Sandy Bridge, Intel created a monolithic layout with the cores arranged in a single block around a shared L3 cache. The execution pipeline was completely redesigned, and a high-speed ring bus provided minimal latency when working with memory and, consequently, top-tier performance in any task.

Intel Core i7-2600K microprocessor die

Integrated graphics also appeared under the lid, occupying roughly 20% of the die area - for the first time in many years, Intel decided to take integrated GPUs seriously. And although by the standards of serious discrete cards such a bonus is nothing remarkable, for the most modest Sandy Bridge builds a discrete graphics card could well turn out to be unnecessary. Yet despite the 112 million transistors allocated to the graphics chip, with Sandy Bridge Intel's engineers bet on increasing core performance without growing the die area - no easy task at first glance, given that the new die is only a couple of square millimeters larger than the Q9000's once was. Did Intel's engineers manage to pull off the impossible? The answer seems obvious now, but let's keep the intrigue. We will return to this soon.

In addition to a completely new architecture, Sandy Bridge also became the largest processor line in Intel's history. Where Lynnfield's launch brought 18 models (11 mobile and 7 desktop), the range now grew to 29 (!) SKUs of every possible profile. Desktop PCs received 8 of them at launch - from the i3-2100 to the i7-2600K. In other words, every market segment was covered. The most affordable i3 was offered for $117, and the flagship cost $317, incredibly cheap by the standards of previous generations.
In marketing presentations Intel called Sandy Bridge the "second generation of Core processors", although technically there had been three such generations before it. The blues explained their logic by the processor numbering, in which the first digit of the model number after the i-designation was equated with the generation - for this reason many still believe Nehalem was the only first-generation i7 architecture.

Sandy Bridge was also the first Intel line to get dedicated naming for unlocked processors - the letter K in the model name, denoting a free multiplier (something AMD had been fond of doing first in its Black Edition processors and then practically everywhere). But, as with SMT, this luxury came at an extra cost and only on a handful of models.

Beyond the classic line, Sandy Bridge also included processors with T and S suffixes, aimed at system integrators and compact machines - a segment Intel had not previously taken seriously.

With the changes to the multiplier and the BCLK clock, Intel shut down overclocking of Sandy Bridge models without the K index, closing a loophole that had worked perfectly on Nehalem. A separate headache for users was the "limited overclocking" scheme, which allowed setting the Turbo frequency for a processor deprived of the charms of an unlocked model. The out-of-the-box boost principle remained unchanged since Lynnfield: with one core in use the system delivers the maximum available frequency (cooling permitting), while under full load the boost is considerably lower, but applied to all cores.

Manual overclocking of unlocked models, by contrast, went down in history thanks to the numbers Sandy Bridge could reach even with the simplest bundled cooler. 4.5 GHz with no money spent on cooling? No one had jumped that high before. Not to mention that even 5 GHz was already achievable with adequate cooling.
Along with the architectural innovations came platform ones - the new LGA1155 socket brought SATA 6 Gb/s support, UEFI firmware in place of the classic BIOS, and other pleasant little things. The updated platform received native support for HDMI 1.4a, Blu-ray 3D, and DTS-HD MA, thanks to which, unlike desktop solutions based on Westmere (the Clarkdale core), Sandy Bridge had no unpleasant difficulties outputting video to modern TVs or playing movies at 24 frames per second, which undoubtedly pleased home theater fans.

Things were even better on the software side, because it was with Sandy Bridge that Intel introduced its now well-known hardware video transcoding technology, Quick Sync, which proved an excellent solution for video work. The gaming performance of Intel HD Graphics, of course, did not mean the need for video cards was now a thing of the past; however, Intel itself rightly noted that for GPUs costing $50 or less its graphics chip could become a serious competitor, which was not far from the truth - at launch Intel demonstrated the 2500K's graphics core performing at the level of the HD 5450, the most affordable AMD Radeon card.

The Intel Core i5-2500K is considered perhaps the most popular processor of its time. No surprise: thanks to an unlocked multiplier, solder under the heat spreader, and modest heat output, it became a true legend among overclockers.

Sandy Bridge's gaming performance reinforced the trend Intel had set in the previous generation: offering the user performance on par with the best $999 Nehalem-class solutions. And the blue giant succeeded - for a modest sum just over $300, the user received performance comparable to the i7-980X, which would have seemed unthinkable six months earlier. Yes, this third (or second?) generation of Core did not conquer new performance heights the way Nehalem had, but the dramatic drop in the price of coveted top-tier performance allowed it to become a truly "people's" choice.

Intel Core i5-2500K

It would seem the time had come for AMD to debut its new architecture, but a real competitor took a while longer to appear - at the moment of Sandy Bridge's triumphant release, the red giant's arsenal held only a slightly expanded Phenom II line, supplemented by Thuban-based solutions: the six-core X6 1055T and 1090T. Despite minor architectural tweaks, these processors could boast mainly of Turbo Core technology, which returned to tuning the boost of each core individually, as in the original Phenom's power management. This flexibility enabled both a maximally economical mode (with idle core frequencies dropping to 800 MHz) and an aggressive performance profile (boosting cores 500 MHz above the factory frequency). Otherwise Thuban differed little from its younger siblings in the series, and its two extra cores served more as an AMD marketing play: more cores for less money.


Alas, more cores did not mean better performance - in gaming tests the X6 1090T aspired to the level of the junior Clarkdales, only occasionally challenging the i5-750. The aging K10 architecture did not allow the Reds to mount serious competition to the first-generation Core and its updated siblings. And with the release of Sandy Bridge, the X6's relevance effectively faded, remaining of interest only to a narrow circle of fans and professional users.

AMD's loud response to Intel's new products came only in 2011, with the introduction of the new AMD FX line based on the Bulldozer architecture. Invoking the memory of its most successful processor series, AMD did not hold back, once again emphasizing grand ambitions and plans for the future - the new generation promised, as before, more cores for the desktop market, an innovative architecture, and, of course, incredible results in price-to-performance terms.


From an architectural standpoint Bulldozer looked bold - a modular layout of cores in four blocks sharing an L3 cache was, under ideal conditions, supposed to deliver optimal performance in multi-threaded tasks and applications. However, in its drive to maintain compatibility with a rapidly aging platform, AMD decided to keep the northbridge controller under the processor lid, creating one of its biggest problems for years to come.

Bulldozer microprocessor die

Despite having four physical modules, Bulldozer processors were offered as eight-core - each compute module contained two integer cores. Every module boasted its own massive 2 MB L2 cache, a decoder, an instruction buffer, and a floating-point unit. This division of functional parts made it possible to process data in eight threads, underscoring the new architecture's bet on the foreseeable future. Bulldozer received support for SSE4.2 and AES-NI, and the single FPU per module was capable of executing 256-bit AVX instructions.

Unfortunately for AMD, Intel had already introduced Sandy Bridge, so the bar for the CPU side had risen considerably. For well below the price of an X6 1090T, the average user could buy the brilliant i5-2500K and get performance on par with the best of the previous generation - and the Reds needed to match that. Alas, the realities at launch had their own opinion on the matter.

Even the six cores of the older Phenom II sat half idle in most cases, let alone the eight threads of AMD FX - given that the vast majority of games and applications used one or two threads, occasionally up to four, the red camp's new chip was only a little faster than the previous Phenom II while losing hopelessly to the 2500K. Despite some advantage in professional tasks (data archiving, for example), the flagship FX-8150 held no interest for a consumer already dazzled by the power of the i5-2500K. There was no revolution, and history did not repeat itself. It is worth mentioning that WinRAR's built-in synthetic benchmark was multi-threaded, whereas in real work the archiver fully used only two threads.

Another bridge. Ivy Bridge, or the waiting game

AMD's example was instructive in many ways, but above all it underlined the need for a solid foundation on which to build a successful (in every respect) processor architecture. That is how AMD became the best of the best in the K7/K8 era, and it is thanks to the same postulates that Intel took its place with the release of Sandy Bridge.

Architectural upheavals became unnecessary once a winning combination was in the blues' hands: powerful cores, a moderate TDP, and a well-established platform built on a ring bus, incredibly fast and efficient for any task. All that remained was to build on the success using everything that came before - and that was the essence of the transitional Ivy Bridge, the third (by Intel's count) generation of Core processors.

Perhaps the most significant change was Intel's move to 22 nm - not a leap, but a sure step toward a smaller die, which again came out smaller than its predecessor's. For comparison, the die of the AMD FX-8150 on the older 32 nm process measured 315 mm², while the Intel Core i5-3570's was less than half that at 133 mm².


This time Intel once again bet on integrated graphics and allotted it more die space - albeit only a little more. The rest of the die topology did not change: the same four core blocks with a shared L3 cache, a memory controller, and a system I/O controller. The scheme may look frighteningly identical, but that was the essence of the Ivy Bridge platform - keep the best of Sandy while adding small pluses to the common piggy bank.

Ivy Bridge microprocessor die

Thanks to the finer process, Intel reduced the processors' overall power consumption to 77 W, down from 95 W in the previous generation. Nevertheless, hopes for even more outstanding overclocking results did not materialize - due to Ivy Bridge's capricious nature, reaching high frequencies required higher voltages than with Sandy, so nobody was in a hurry to set records on this family. The switch of the thermal interface between the heat spreader and the die from solder to thermal paste did overclocking no favors either.

Fortunately for owners of the previous Core generation, the socket did not change, and the new processor dropped straight into an old motherboard. However, the new chipsets offered such frills as USB 3.0 support, so users who follow technological novelties surely rushed to buy a new board based on the Z-series chipset.

Ivy Bridge's overall performance did not grow enough to call it another revolution, but it grew consistently. In professional tasks the 3770K showed results comparable to the professional X-series processors, and in games it outperformed the former favorites, the 2600K and 2700K, by about 10%. Some will consider that insufficient for an upgrade, but it is not for nothing that Sandy Bridge is considered one of the longest-serving processor families in history.

Finally, even the most budget-minded PC gamers could feel at the cutting edge - Intel HD Graphics 4000 was significantly faster than the previous generation, showing an average gain of 30-40%, and gained DirectX 11 support. Popular games could now be played at medium-to-low settings with decent performance.

Summing up, Ivy Bridge was a pleasant addition to the Intel family, avoiding the risks of architectural excess and following the tick-tock principle from which the blues did not deviate. The Reds, for their part, attempted a large-scale work-on-errors in the form of Piledriver - a new generation in the old guise.
The outdated 32 nm process did not allow AMD another revolution, so Piledriver was called upon to correct Bulldozer's shortcomings by addressing the weakest points of the AMD FX architecture. The Zambezi cores were replaced by Vishera, which incorporated some improvements from Trinity, the red giant's mobile APUs, but the TDP stayed at 125 W for the flagship model with the 8350 index. Structurally it was identical to its older brother, but architectural refinements and a 400 MHz frequency bump helped it close the gap.


AMD's promotional slides on the eve of Bulldozer's release had promised fans 10-15% performance gains from generation to generation, but Sandy Bridge's huge leap forward made such promises look anything but ambitious - and now Ivy Bridge was already on the shelves, pushing the performance ceiling even higher. To avoid another misstep, AMD positioned Vishera against the budget part of the Ivy Bridge line - the 8350 was pitted against the i5-3570K, a decision dictated not only by the Reds' caution but also by the company's pricing policy. The Piledriver flagship reached the public at $199, cheaper than its prospective competitor - though the same could definitely not be said of its performance.

Professional tasks were where the FX-8350 revealed its potential most vividly - the cores worked flat out, and in some cases AMD's new product even got ahead of the 3770K. But where most users looked - gaming performance - the processor showed results similar to the i7-920, and at best not too far behind the 2500K. However, this state of affairs surprised no one: the 8350 was 20% faster than the 8150 in the same tasks with an unchanged TDP. The work on errors was a success - albeit not as bright a one as many would have liked.

The world record for overclocking the AMD FX-8370 was set by the Finnish overclocker The Stilt in August 2014, who managed to push the chip to 8722.78 MHz.

Haswell: Too good to be true again

Intel's architectural path, as you can already see, had found its golden mean: stick to the proven recipe for a successful architecture while making improvements across the board. Sandy Bridge pioneered an efficient architecture built on a ring bus and a unified block of cores, Ivy Bridge refined it in terms of internals and power, and Haswell became a kind of continuation of its predecessor, promising new standards of quality and performance.

The architectural slides of Intel's presentation gently hinted that the fundamentals would remain unchanged. The improvements touched only details, in the format of optimization: new execution ports were added to the scheduler, the L1 and L2 caches were optimized, as was the TLB of the latter. Also worth noting is the refinement of the PCU, responsible for the processor's operating modes and the associated power costs. Simply put, at idle Haswell became much more economical than Ivy Bridge, but there was no talk of a general TDP reduction.


Advanced motherboards supporting high-speed DDR3 modules gave enthusiasts some joy, but from an overclocking standpoint things turned out sad - Haswell's results were even worse than the previous generation's, largely because of the switch to a different thermal interface, about which only the lazy haven't joked by now. Integrated graphics also got a performance boost (owing to the growing emphasis on portable laptops), but against the lack of visible IPC growth, Haswell was dubbed "Hasfail" for its measly 5-10% gain over the previous generation. This, coupled with manufacturing problems, led to Broadwell - Intel's next generation - becoming practically a myth on the desktop: its mobile-first release and a pause of nearly a year hurt the public's perception. To patch the situation up at least somewhat, Intel released Haswell Refresh, best known through Devil's Canyon - but since its whole point came down to higher base frequencies than the 4770K and 4670K, we will not devote a separate section to it.

Broadwell-H: Even more economical, even faster

The long pause before Broadwell-H was due to difficulties with the transition to a new process node, yet a look at the architecture makes it clear that Intel's processors had reached a level of performance unattainable for its competitors from AMD. That doesn't mean the Reds were wasting their time - thanks to their investment in APUs, Kaveri-based solutions were in good demand, and the top A8-series models could easily outperform any integrated graphics from the blues. Apparently this state of affairs did not suit Intel at all, which is why the Iris Pro graphics core occupied a special place in Broadwell-H.

Coupled with the move to 14 nm, the Broadwell-H die size effectively stayed the same - but the more compact layout allowed an even greater focus on graphics power. In the end it was in laptops and multimedia centers that Broadwell found its first home, so innovations such as hardware decoding of HEVC (H.265) and VP9 look more than reasonable.

Intel Core i7-5775C microprocessor die

Special mention goes to the eDRAM die, which occupied its own spot on the package substrate and served as a kind of high-speed data buffer - an L4 cache - for the processor cores, whose performance promised a serious step forward in professional tasks particularly sensitive to the speed of cached data. The eDRAM controller took its place on the main processor die, in the space freed up by the move to the new process node.

The eDRAM was also put to work accelerating the integrated graphics, acting as a fast frame cache - at 128 MB, it could greatly ease the integrated GPU's job. In fact, it was in honor of the eDRAM die that the letter C joined the processor's name: Intel's codename for this on-package high-speed cache was Crystalwell.

The newcomer's frequencies, oddly enough, were noticeably more modest than Haswell's - the top 5775C had a base clock of 3.3 GHz, though it could boast an unlocked multiplier. With the lower frequencies came a lower TDP of just 65 W, arguably the best achievement for a processor of this level, because performance did not suffer.

Despite modest (by Sandy Bridge standards) overclocking potential, Broadwell-H surprised with its energy efficiency, proving the most economical and coolest among its competitors, while its integrated graphics outperformed even the AMD A10 family, showing that the bet on a GPU under the lid was justified.

It is worth remembering that Broadwell-H turned out to be so transitional that mere months later processors based on the Skylake architecture arrived - already the sixth generation of the Core family.

Skylake - The time for revolutions is long gone

Oddly enough, many generations had passed since Sandy Bridge, yet none of them managed to shock the public with anything truly innovative, with the possible exception of Broadwell-H - and even there it was more about an unprecedented leap in graphics performance (against AMD's APUs) than about huge CPU breakthroughs. The days of Nehalem were gone, never to return, but Intel kept moving forward in small steps.


Architecturally, Skylake was rearranged: the horizontal row of compute units gave way to the classic square layout, in which the shared LLC separates the cores and the beefed-up graphics core sits to one side.

Intel Core i7-6700K microprocessor die

For technical reasons the eDRAM controller now sits in the I/O block area, alongside the display output module, to ensure the best image quality from the integrated graphics. The integrated voltage regulator used in Haswell disappeared from under the lid, the DMI bus was updated, and thanks to backward compatibility Skylake processors supported both DDR4 and DDR3 memory - the latter in the form of DDR3L modules operating at reduced voltage.

At the same time, one cannot fail to notice how much attention Intel paid to promoting the next generation of integrated graphics - in Skylake's case, already the sixth generation of graphics in the blue line. Intel was especially proud of the performance gains, which had been particularly telling with Broadwell, and this time promised thrifty gamers a high level of performance plus support for all modern APIs, including DirectX 12. The graphics subsystem is part of the so-called System-on-Chip (SoC) layout, which Intel also actively promoted as an example of a successful architectural solution. But remembering that the integrated voltage regulator was gone and the power subsystem again relied entirely on the motherboard's VRM, Skylake clearly had not yet become a full-fledged SoC - and there was no talk at all of bringing the southbridge under the lid.

However, the SoC fabric here plays the role of intermediary, a kind of "bridge" between the Gen9 graphics, the processor cores, and the system I/O controller responsible for component interaction and data handling. At the same time Intel put a serious emphasis on energy efficiency and took many measures in the fight to consume fewer watts - Skylake provides separate power gates (think of them as power states) for each section of the SoC, including the high-speed ring bus, the graphics subsystem, and the media controller. The former P-state-based power management evolved into Speed Shift technology, which handles both rapid switching between states (say, waking from sleep into active work, or launching a heavy game after light browsing) and balancing the power budget between active cores to achieve the best efficiency within the TDP.
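On the software side, the presence of Speed Shift is reported to the operating system as HWP (hardware-controlled P-states) via CPUID; a minimal detection sketch, assuming a GCC- or Clang-compatible compiler on x86:

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

    /* CPUID leaf 6, EAX bit 7 reports HWP support; the actual frequency
       control happens through MSRs and is the operating system's job. */
    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(6, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 6 not available");
            return 1;
        }
        puts((eax & (1u << 7)) ? "HWP (Speed Shift) supported"
                               : "HWP (Speed Shift) not supported");
        return 0;
    }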

The redesign associated with the removal of the voltage regulator forced Intel to move Skylake to the new LGA1151 socket, accompanied at launch by motherboards on the Z170 chipset with support for 20 PCI-E 3.0 lanes, a USB 3.1 Type-A port, more USB 3.0 ports, eSATA, and M.2 drives. On the memory side, support for DDR4 modules at frequencies up to 3400 MHz was announced.

As for performance, Skylake's release brought no shocks. The expected gain of about five percent over Devil's Canyon left many fans puzzled, but Intel's presentation slides made clear that the main focus was energy efficiency and the flexibility of the new platform, suitable both for economical mini-ITX systems and for advanced gaming builds. Users expecting a Sandy Bridge-sized leap from Skylake were disappointed, the situation was reminiscent of Haswell's launch, and the move to yet another socket upset people further.

All that remained was to pin hopes on Kaby Lake - if anyone was going to deliver, surely it would be that one...

Kaby Lake. A fresh lake and an unexpected shade of red

Despite the original logic of the tick-tock strategy, Intel, seeing no real competition from AMD, decided to stretch each cycle to three stages, in which, after the introduction of a new architecture, the existing solution is refined under a new name for the next two years. Broadwell was the 14 nm step, Skylake followed, and Kaby Lake was accordingly meant to show the most polished level of the technology compared to its Skylake predecessor.


Kaby Lake's main difference from Skylake was a 200-300 MHz increase in frequencies, both base and boost. Architecturally the new generation received no changes - even the integrated graphics, despite a labeling update, remained the same - but Intel released a new Z270 logic set, which added 4 PCI-E 3.0 lanes to the functionality of the previous Sunrise Point as well as support for Intel Optane Memory for the giant's advanced devices. Independent multipliers for platform components and other features of the previous platform were preserved, and overclockers received the AVX Offset function, which lowers the processor frequency when executing AVX instructions to improve stability at high clocks.
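The effect of the AVX Offset is plain arithmetic; in the sketch below the 100 MHz base clock and the chosen ratios are illustrative assumptions, not a particular chip's settings.

    #include <stdio.h>

    /* Under AVX load the core ratio is reduced by the configured offset. */
    int main(void) {
        const double bclk       = 100.0;  /* MHz */
        const int    core_ratio = 48;     /* all-core overclock: 4.8 GHz */
        const int    avx_offset = 3;      /* drop three bins under AVX load */

        printf("Normal load: %.1f GHz\n", bclk * core_ratio / 1000.0);
        printf("AVX load:    %.1f GHz\n", bclk * (core_ratio - avx_offset) / 1000.0);
        return 0;
    }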

Intel Core i7-7700K microprocessor die

In terms of performance, the new seventh-generation Core turned out, for the first time, to be almost identical to its predecessors - focusing once again on power optimization, Intel all but forgot about IPC improvements. Nevertheless, unlike Skylake, the newcomer solved the problem of extreme heat during serious overclocking and made things feel almost like the Sandy Bridge days, reaching 4.8-4.9 GHz with moderate power consumption and relatively low temperatures. In other words, overclocking became easier and the processor ran 10-15 degrees cooler - the result of that same optimization, its final cycle.

No one could have guessed that AMD was already preparing a real answer to Intel's years of incremental development. Its name was AMD Ryzen.

AMD Ryzen - When everyone laughed and no one believed

After the updated Bulldozer architecture, Piledriver, was introduced in 2012, AMD largely shifted to other areas of the processor market, releasing several successful APU lines as well as other cost-effective and portable solutions. The company never forgot about resuming the fight for a place in the desktop sun, however - feigning frailty while quietly working on Zen, a genuinely new architecture meant to revive the competitive spirit the CPU market had lost.


To develop the newcomer, AMD turned to Jim Keller, that very "father of two cores", whose work had brought the red giant fame and recognition in the early 2000s. It was he, together with other engineers, who shaped the new architecture meant to be fast, powerful, and innovative. Unfortunately, everyone remembered that Bulldozer had been built on the same promises - a different approach was needed.

Jim Keller

And AMD leaned on marketing, announcing a 52% increase in IPC compared to the Excavator generation - the most recent cores, still descended from the same Bulldozer. This meant that compared to the 8150, Zen processors promised to be more than 60% faster, and that intrigued everyone. At first AMD's presentations dwelt only on professional tasks, comparing the new processor with the 5930K and later the 6800K, but in time the gaming side of the question - the most sensitive in terms of sales - was addressed as well. And here too AMD was ready to fight.

The Zen architecture is built on a new 14 nm process, and its design has nothing in common with the modular architecture of 2011. The chip now carries two large functional blocks called CCX (Core Complex), each holding up to four active cores. As with Skylake, various system controllers sit on the die, including 24 PCI-E 3.0 lanes, support for up to four USB 3.1 Type-A ports, and a dual-channel DDR4 memory controller. The L3 cache deserves special note - in flagship parts it reaches 16 MB. Each core received its own floating-point unit (FPU), solving one of the previous architecture's main problems. Power consumption also dropped drastically - the flagship Ryzen 7 1800X was rated at 95 W against 220 W for the "hottest" (in every sense) AMD FX models.


AMD Ryzen 7 1800X microprocessor die

The technological stuffing was no less rich in innovations: the new AMD processors received a whole set of technologies under the SenseMI umbrella, including Smart Prefetch (preloading data into the cache to speed up programs), Pure Power (essentially an analogue of the "intelligent" power management of the processor and its segments implemented in Skylake), Neural Net Prediction (a prediction algorithm working on the principles of a self-learning neural network), and Extended Frequency Range (XFR), designed to give users with advanced cooling an extra 100 MHz. For the first time since Piledriver, boosting was handled not by Turbo Core but by Precision Boost, an updated technology that raises frequency depending on core load - something similar to what we had seen from Intel since the Sandy Bridge days.

The new Ryzen design rests on the Infinity Fabric interconnect, which links the two CCX blocks and the other units on the die. The high-speed interface was meant to provide the fastest possible interaction between cores and blocks, and to be reusable on other platforms - for example, in economical APUs and even in AMD Vega graphics cards, where the fabric paired with HBM2 memory was expected to operate at a bandwidth of no less than 512 GB/s.

Infinity Fabric

All this stems from ambitious plans to extend the Zen line to high-performance platforms, servers, and APUs - unifying the production process, as always, makes manufacturing cheaper, and temptingly low prices have always been AMD's calling card.

At first AMD introduced only Ryzen 7 - the top models of the line, aimed at the most demanding users and content creators - and a few months later they were followed by Ryzen 5 and Ryzen 3. It was Ryzen 5 that proved the most attractive in terms of both price and gaming performance, something Intel, to put it bluntly, was not ready for at all. And if at first it seemed Ryzen was destined to repeat the fate of Bulldozer (albeit with less drama), over time it became clear that AMD had managed to bring back real competition.

Ryzen's main problems were the technical quirks that dogged owners of early revisions during the first months - because of memory issues, reviewers were in no hurry to recommend Ryzen for purchase, and the processors' dependence on RAM frequency hinted directly at extra expenses. However, timing-savvy users discovered that with high-speed memory modules tuned to the lowest timings, Ryzen could push even the 7700K, causing quite a stir in the AMD fan camp. But even without such frills, the Ryzen 5 family turned out so successful that its sales wave forced Intel into an urgent shake-up of its own lineup. The answer to AMD's coup was the release of the latest (at the time of writing) Coffee Lake generation, which got six cores instead of four.

Coffee Lake. The ice has broken

While the 7700K long held the title of best gaming processor, AMD achieved incredible success in the mid-range by wielding the age-old principle of "more cores for less money". The Ryzen 5 1600 offered 6 cores and a whopping 12 threads while the 7600K was still stuck at 4 cores, handing AMD an easy marketing win, especially with the backing of numerous reviewers and bloggers. Intel then shifted its release schedule and brought Coffee Lake to market - not just another couple of percent and a couple of watts, but a real step forward.

True, even here there was a caveat. The six long-awaited cores, complete with the joys of SMT, were in fact built on the same 14 nm Skylake foundation. Kaby Lake had tweaked that base, fixing overclocking and temperature issues, and Coffee Lake refined it further by adding two more cores and optimizing for cooler, more stable operation. Judged purely on innovations, Coffee Lake brought nothing new apart from the increased core count.

Intel Core i7-8700K microprocessor die

But there were technical limitations tied to the need for new Z370-based motherboards. These restrictions stemmed from higher power-delivery requirements: adding two more cores and accounting for the chip's grown appetite meant raising the bar for the minimum power the socket had to supply. As we remember from the Broadwell story, Intel had been striving for the opposite in recent years - lowering voltages on all fronts - but here that strategy hit a dead end. Technically LGA1151 remained the same, but citing the risk of damaging the VRM circuitry, Intel restricted compatibility with previous motherboards, protecting itself from possible scandals (as happened to AMD with the RX 480 and burned-out PCI-E slots). The updated Z370 also dropped support for DDR3L memory, though nobody really expected such compatibility anyway.

Intel itself was preparing an updated version of the platform with support for second-generation USB 3.1, SDXC memory cards, and an integrated Wi-Fi 802.11 controller, so the rushed release of the Z370 was one of those incidents that said a lot about how hastily the platform had been brought to market. Still, Coffee Lake had plenty of surprises in store - and a special share of them concerned overclocking.

Intel paid it a lot of attention, emphasizing the work done to streamline the overclocking process: Coffee Lake allowed configuring several presets for stepwise overclocking under different core-load conditions, changing memory timings on the fly without leaving the operating system, using practically any DDR4 multiplier (support for frequencies up to 8400 MHz was declared), and leaning on a reinforced power-delivery design built for maximum loads. In practice, though, overclocking the 8700K was far from miraculous - because of the unfortunate thermal interface, without delidding the processor was often limited to 4.7-4.8 GHz while hitting extreme temperatures, although with the interface replaced it could set records in the region of 5.2 or even 5.3 GHz. The vast majority of users had no interest in that, so the overclocking potential of the six-core Coffee Lake can be called restrained. Yes, Sandy has still not been forgotten.

Coffee Lake's gaming performance showed no particular miracles - despite the addition of two physical cores and four threads, the 8700K at launch delivered roughly the same 5-10% step over the previous flagship. Yes, Ryzen could not compete with it in the gaming niche, but in terms of architectural progress Coffee Lake was just another drawn-out "tock", not the kind of leap Sandy Bridge had been in 2011.

Fortunately for AMD fans, after the release of Ryzen the company announced long-term plans for socket AM4 and the development of the Zen architecture through 2020 - and with Coffee Lake having drawn attention back to Intel's mid-range, it was time for the second generation of Ryzen. After all, AMD deserved an evolutionary step of its own.

Cruel truth

We wouldn't see Intel as it is today if the company hadn't used unfair competition to promote its products. In May 2009, for example, the European Commission fined it a hefty €1.06 billion (roughly $1.5 billion at the time) for paying PC makers and a retail chain to favor Intel processors. Intel's management claimed at the time that neither users, who could buy computers more cheaply, nor justice itself would benefit from the decision.

Intel also had an older and more effective competitive tool. Having introduced the CPUID instruction back in the later i486 processors, and by creating and distributing its own compiler, Intel secured an advantage for years to come. The compiler generated optimized code paths for Intel processors and mediocre code for everyone else: at run time, even a technically powerful competitor's chip was routed through the non-optimal branch. This dragged down overall application performance and prevented the competitor from showing roughly equal results against an Intel processor of similar specifications.
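
To make the mechanism concrete, here is a minimal sketch in C of how vendor-string dispatch can work. It illustrates the general technique only - it is not Intel's actual compiler runtime - and assumes a GCC/Clang toolchain (it uses the real <cpuid.h> helper __get_cpuid); the two code paths are hypothetical placeholders.

    /* Minimal sketch of vendor-based dispatch; the dispatcher keys on the
     * CPUID vendor string rather than on actual feature flags. */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    static int is_genuine_intel(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13];

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 0;

        /* CPUID leaf 0 returns the vendor string in EBX, EDX, ECX order. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        return strcmp(vendor, "GenuineIntel") == 0;
    }

    int main(void)
    {
        /* A non-Intel CPU lands on the slow branch even if it supports
         * exactly the same SSE/AVX instructions as the Intel part. */
        if (is_genuine_intel())
            puts("dispatch: vectorized fast path");
        else
            puts("dispatch: generic fallback path");
        return 0;
    }

This is precisely why, as the VIA Nano story below shows, merely changing the reported vendor string could change benchmark results.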

Under such conditions VIA could not withstand the competition and sharply cut back its processor sales: its energy-efficient Nano gave way to the then-new Intel Atom. And so it would have remained, had the well-known researcher Agner Fog not managed to change the CPUID vendor string on the Nano. As one would expect, benchmark results rose and surpassed the competitor's. Yet the news never had the effect of a bombshell.

Competition with Intel did not go smoothly for AMD (the world's second-largest maker of x86/x64 microprocessors) either: in 2008, financial problems forced it to part with its own chip manufacturing business, which was spun off as GlobalFoundries. In the fight against Intel, AMD bet on core count, offering affordable processors with more cores at a time when Intel's answer in the same category was fewer cores paired with Hyper-Threading technology.

For many years Intel kept gaining share in mobile and desktop processors, squeezing out its competitor; the server processor market was almost completely captured. Only recently did the situation begin to change. The release of AMD's Ryzen processors forced Intel to abandon its habitual tactic of nudging clock frequencies up a little each generation. Benchmark suites, admittedly, helped Intel avoid worrying too much: in SYSmark synthetic tests, for example, the gap between the sixth and seventh generations of desktop Core i7 was disproportionate to the frequency increase, even though the core characteristics were otherwise identical.

Now, though, Intel too has begun increasing core counts in desktop processors, while also partially rebranding existing models - a good reason for its customers to become more technically literate.

The author of the article is Pavel Chudinov.

2019 - Blue Point of No Return, or the Chiplet Revolution

Following two highly successful generations of Ryzen processors, AMD was ready to take an unprecedented step forward not only in performance but also in manufacturing technology. The transition to a 7 nm process, promising roughly 25% more performance within the same thermal envelope, combined with numerous architectural refinements, lifted the AM4 platform to a new level and gave owners of the previous "people's" builds a painless upgrade path after a preliminary BIOS update.

The psychologically important 4 GHz mark, long a stumbling block in the fierce competition with Intel, worried enthusiasts in its own way. From the first rumors, many rightly noted that the frequency gain in the Ryzen 3000 family would hardly exceed 20%, but nobody could be forbidden from dreaming of the 5 GHz Intel flaunted. Interest was fueled by numerous "leaks" describing entire processor lineups and improbable details, many of which turned out to be far from the truth. In fairness, though, some leaks matched the eventual results quite closely - with certain reservations, of course.

Technically, the Zen 2 architecture differed radically from the design underlying the first two Ryzen generations. The key change was the processor layout, now made up of as many as three separate dies: up to two chiplets containing the core complexes, plus a third, noticeably larger die housing the controllers and communication channels (I/O). For all the advantages of the energy-efficient 7 nm process, AMD could not avoid noticeably higher production costs, since 7 nm had not yet matured to ideal yields. There was another motive as well - unifying production, allowing the same manufacturing line to supply chiplets for both the affordable Ryzen 5 and the monstrous EPYC. This cost-effective approach let AMD keep prices at the previous level and delight fans with the launch of Ryzen 3000.

Structural layout of the chiplets

Splitting the processor into separate dies let AMD make significant progress on the engineers' most pressing tasks: reducing Infinity Fabric latency, cache access delays, and the cost of data exchange between cores in different CCX blocks. The cache at least doubled in size (32 MB of L3 in the 3600 versus 16 MB in last year's 2600), the mechanisms for working with it were optimized, and the Infinity Fabric clock received its own FCLK multiplier, allowing RAM up to 3733 MHz to be used with optimal results (latencies in that case stayed within 65-70 nanoseconds). Ryzen 3000 is still sensitive to memory timings, however, and expensive low-latency modules can give owners of the new gems performance gains of 30% or more - especially in certain scenarios and games.
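
For readers wondering where the 3733 MHz figure comes from, here is a tiny illustrative calculation in C. It assumes the commonly cited 1:1 coupling between the fabric clock and the memory clock; the numbers are an example for this article, not an AMD specification.

    /* Illustrative sketch: DDR4 transfer rate -> memory clock (MCLK) -> FCLK
     * in the 1:1 mode mentioned above. */
    #include <stdio.h>

    int main(void)
    {
        double ddr_rate_mts = 3733.0;         /* DDR4-3733: 3733 mega-transfers per second   */
        double mclk_mhz = ddr_rate_mts / 2.0; /* DDR transfers twice per clock -> ~1866 MHz  */
        double fclk_mhz = mclk_mhz;           /* 1:1 mode: fabric clock matches memory clock */

        printf("DDR4-%.0f -> MCLK %.0f MHz, FCLK (1:1) %.0f MHz\n",
               ddr_rate_mts, mclk_mhz, fclk_mhz);
        /* Above roughly this point the memory controller typically falls back to
         * a 2:1 divider, adding latency - hence ~3733 MHz as the optimal ceiling. */
        return 0;
    }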

The thermal package of the processors stayed the same, while frequencies grew as expected - from a 4.2 GHz boost on the 3600 to 4.7 GHz on the 3950X. After launch, many users ran into a "boost malaise" problem, when the processor failed to reach the manufacturer's advertised frequencies even under ideal conditions. The "reds" had to ship a special BIOS update (AGESA 1.0.0.3ABBA) that successfully fixed the problem, and a month ago the global 1.0.0.4 arrived with more than a hundred and fifty fixes and optimizations: for some users the boost frequency rose by up to 75 MHz after the update, and stock voltages dropped noticeably. None of this changed the overclocking situation - Ryzen 3000, like its predecessors, works great out of the box and offers little headroom beyond symbolic gains, which makes it boring for enthusiasts but pleases those who wouldn't touch BIOS settings for love nor money.

Zen 2 delivered a significant increase in per-core performance (up to 15% depending on the application), let AMD seriously strengthen its position in every market segment and, for the first time in more than a decade, turn the tide in its favor. What made this possible? Let's take a closer look.

Ryzen 3 - Tech Fantasy

Many who followed the Zen 2 leaks were especially interested in the new Ryzen 3: the affordable processors were promised six cores, capable integrated graphics and a ridiculously low price. Unfortunately, the expected successors to the Ryzen 3 chips with which AMD had equipped the lower end of its platform in 2017 never saw the light of day. Instead, the "reds" kept the Ryzen 3 name for the budget tier and filled it with two inexpensive, straightforward APUs: the 3200G, slightly overclocked compared with its predecessor and carrying integrated Vega 8 graphics capable of handling basic system loads and gaming at 720p, and its older sibling the 3400G, with a faster Vega 11 video core, SMT enabled and frequency gains across the board. The latter is already enough for undemanding gaming at 1080p. These entry-level parts are mentioned here not for that reason, though, but because of how far they diverged from the leaks, which had prophesied a Ryzen 3 with six cores at a still-ridiculous price (around $120-150). Nor should one forget the real status of these APUs: they still use Zen+ cores and belong to the 3000 series in name only.

If we judge the value of the new generation as a whole, however, AMD made sure to cement its claim to leadership across many segments - and it succeeded most convincingly in the mid-range category.

Ryzen 5 3600 – A folk hero without reservation

One of the key features of the Zen 2 architecture was the move from a classic monolithic layout to a "modular" design: AMD implemented its chiplet concept, small dies with processor cores interconnected by the Infinity Fabric bus. The "reds" thus entered the market not only with a batch of innovations, but also with serious work done on one of the most acute problems of previous generations - high latencies both when working with memory and when exchanging data between cores in different CCX blocks.

And this preamble is no accident: the Ryzen 5 3600, the undisputed king of the mid-range segment, owes its unconditional victory precisely to the innovations AMD brought to the new generation. A significant increase in per-core performance and the ability to run memory faster than 3200 MHz (the effective ceiling for most of the previous generation) raised the bar to unprecedented heights, targeting not only the fast i5-9600K but also the flagship i7-9700.

Compared with its predecessor, the Ryzen 5 2600, the newcomer gained not only a raft of architectural improvements, but also a milder temperament (the 3600 objectively runs cooler, which is why AMD even saved on the bundled cooler by dropping its copper core) and essentially nothing to be ashamed of. Why? Simple - the 3600 has no real shortcomings, absurd as that sounds. Judge for yourself: peak frequency rose by 200 MHz, the rated 65 W stopped being a polite fiction, and its six cores matched (or even surpassed!) Intel's current Coffee Lake cores. All of this was served up for the classic $199, with a sauce of backward compatibility with most AM4 motherboards.

The Ryzen 3600 was destined for success, and worldwide sales have shown it clearly for the third month in a row. In some regions loyal to Intel since time immemorial the market changed overnight, and European countries (Russia included) raised a new national sales hero to the top. In Russia the processor took 10% of all CPU sales in the country, outselling the i7-9700K and i9-9900K combined. And if it seems that an appealing price explains everything, it is not that simple: the Ryzen 2600, for comparison, took no more than 3% over the same period after launch. The secret lay elsewhere - AMD beat Intel in the most crowded segment of the processor market, and said so openly when the processors debuted at CES 2019. A tasty price, broad compatibility and a bundled cooler merely cemented an already indisputable lead.


So why was the older brother, the 3600X, needed? Identical in most characteristics, it ran another 200 MHz faster (with a boost of 4.4 GHz) and gained a purely symbolic advantage over the junior chip, which did not look convincing against a noticeably higher price ($229). Still, the senior model had its perks: no need to fiddle with BIOS sliders chasing frequencies above stock, more aggressive Precision Boost 2 behavior that dynamically overclocks the processor under load, and a beefier cooler (Wraith Spire instead of Wraith Stealth). If that sounds tempting, the 3600X is a fine gem in AMD's new lineup. If overpaying isn't your style and a 2-3% performance gap doesn't feel significant, choose the 3600 with a clear conscience - you won't regret it.

Ryzen 7 3700X - Old new flagship

AMD replaced its former leader without much fanfare - everyone understood that against current competition the 2700X looked rather weak, and a big step forward (as with the 3600) was both obvious and expected. Without changing the balance of cores and threads, the "reds" brought a pair of processors to market that differ little from each other technically but noticeably in price.

The 3700X arrived as a direct replacement for the former flagship: at a suggested $329, AMD positioned it as a full-fledged rival to the i7-9700K, stressing every advantage, from the more advanced process technology to multithreading, which Intel had decided to reserve for its "royal" top-tier processors. Alongside it AMD introduced the 3800X, which in practice was merely a slightly faster version (300 MHz higher base clock, 100 MHz higher boost) and otherwise indistinguishable from its younger sibling. For people still horrified by the phrase "manual overclocking" it looks like a decent option, but such trifles cost dearly - a full $70 on top.

Ryzen 9 3900X and 3950X - Show of Power

The most important (and frankly necessary!) proof of Zen 2's success, however, came from the senior Ryzen 9 family - the 12-core 3900X and the 16-core champion, the 3950X. With one foot in HEDT territory, these processors stay true to the logic of the AM4 platform while packing a reserve of resources capable of surprising even fans of last year's Threadripper.

The 3900X was meant above all to pit the Ryzen 3000 line against the reigning gaming legend, the 9900K, and in that role it proved remarkably good. With a single-core boost of 4.5 GHz and around 4.3 GHz across all cores, the 3900X took a big step toward the long-awaited gaming parity with Intel while delivering terrifying power in everything else - rendering, computation, archiving and so on. Its 24 threads let it catch the junior Threadripper in raw throughput without suffering from a shortage of per-core muscle (as the 2700X did) or the quirks of juggling core modes (remember the notorious Game Mode that disabled half the cores on AMD's HEDT chips). AMD played without compromise, and although the crown of fastest gaming processor stayed with Intel (which had just released the 9900KS, a controversial limited edition for collectors), the "reds" delivered the most versatile high-end gem on the market. But not the most powerful one - and that is thanks to the 3950X.

The 3950X became AMD's experimental playground: combining HEDT-class resources with the title of "the world's first 16-core gaming processor" sounds like pure adventurism, yet the "reds" hardly exaggerated. It offers the highest boost frequency of the line at 4.7 GHz (under a single-core load), the ability to run all 16 cores at around 4.4 GHz without exotic cooling, and hand-picked higher-grade chiplets that make the new monster even more frugal than its 12-core sibling thanks to lower operating voltages. The choice of cooling, though, is left on the buyer's conscience: AMD does not bundle a cooler, limiting itself to recommending a 240 or 360 mm AIO liquid cooler.

In most cases the 3950X delivers gaming performance on par with the 12-core model, which is impressive given the sad story of how Threadripper behaved in games. In titles that make poor use of many threads (GTA V, for example) the flagship is less pleasing to the eye, but that is the exception rather than the rule.

In professional workloads the new 16-core chip shows an entirely different face - not for nothing did the leaks claim that AMD had pushed the consumer segment so far that the 3950X feels confident even against expensive rivals like the i9-9960X, posting huge gains in Blender, POV-Ray, Premiere and other resource-hungry applications. The upcoming Threadripper already promised a grand show of computing power, but even the 3950X demonstrated that the consumer segment can be something else entirely - practically semi-professional. And recalling the achievements of the AM4 platform's 16-core flagship, one cannot help but recall how Intel responded to the assault on HEDT.

Intel 10xxxX - Compromise on compromise

Even before the new generation of Threadripper arrived, conflicting information kept surfacing about Intel's upcoming HEDT line. Much of the confusion concerned naming: after the release of the controversial but genuinely new Ice Lake mobile processors on the 10 nm process, many enthusiasts assumed Intel had decided to roll out the coveted 10 nm in small steps, starting with its less crowded niches. For the laptop market the release of Ice Lake caused no particular upheaval - the blue giant has long controlled mobile, and AMD has not yet managed to compete with Intel's enormous OEM machine and the fat contracts of companies that have worked closely with it since the early 2000s. In the high-performance segment, however, things played out quite differently.


We all remember the i9-99xxX line: after two generations of Threadripper, AMD had boldly declared itself a contender in the HEDT market, yet the blues' superiority there remained unshaken. Unfortunately for Intel, the reds did not rest on past achievements, and after the debut of Zen 2 it became clear that AMD's high-performance systems would soon raise the performance bar dramatically - and Intel was powerless to respond, because the blue giant simply had no fundamentally new solutions to offer.

First of all Intel had to take an unprecedented step and cut prices in half, something that had never happened in all the long years of rivalry with AMD. The flagship i9-10980XE with its 18 cores now cost just $979 against $1999 for its predecessor, and the rest of the line was discounted comparably. Many already understood what to expect from the two launches, and who would come out the winner, so Intel resorted to an extreme measure, lifting the review embargo on the new products six hours ahead of the scheduled date.


And the reviews began to appear. Even the biggest channels and outlets were deeply disappointed: for all the radical change in pricing, the new 109xx line turned out to be simple bug-fixing of the previous generation. Frequencies shifted slightly, extra PCI-E lanes appeared, and the thermal package under serious overclocking left even hardcore fans with big liquid coolers no chance - at its peak the 10980XE could draw over 500 W, showing off decent benchmark numbers while demonstrating just as clearly that there was nothing more to squeeze out of great-grandfather's 14 nm.

Compatibility with the previous generation's HEDT platform did not save Intel either: the junior models of the new line lost to the 3950X by a devastating margin, leaving many Intel fans bewildered. And the worst was yet to come.

Threadripper 3000 - 3960X, 3970X. Monsters of the world of computing.

Despite initial skepticism about the relatively modest core counts (24 and 32 cores did not cause the splash that the doubling of cores in earlier Threadrippers once did), it was clear AMD was not bringing these parts to market just for show. A huge uplift from the numerous Zen 2 optimizations and a radically improved Infinity Fabric promised performance never before seen on a semi-professional platform - not 10-20%, but something truly monstrous. And when the embargo lifted, everyone saw that the steep prices of the new Threadrippers were not plucked out of thin air, nor born of a desire to fleece the fans.


For your wallet, Threadripper 3000 is an apocalypse. The expensive processors moved to a completely new, more advanced and complex TRX40 platform offering up to 88 PCI-E 4.0 lanes, enough for elaborate RAID arrays of the latest SSDs or a stack of professional graphics cards. The quad-channel memory controller and enormously powerful power delivery subsystem are designed not only for the current models but also for the future flagship of the line, the 64-core 3990X, promised for release after the New Year.

Cost may look like a serious problem, but in performance terms AMD left nothing standing of Intel's new products: in a number of applications the new Threadrippers were twice as fast as the flagship 10980XE, with an average advantage of around 70%. And that with far more moderate appetites - both the 3960X and 3970X stay within their rated 280 W, and even overclocked to 4.3 GHz on all cores they remain about 20% more frugal than Intel's red-hot nightmare.

Thus, for the first time in its history, AMD offered the market an uncompromising premium product that delivers a huge performance leap with no significant drawbacks - except the price, but, as they say, the best always costs extra. And Intel, absurd as it sounds, has turned into the budget alternative, one that nevertheless looks far from convincing next to the $750 3950X on a much more affordable platform.

Athlon 3000G - Salvation for pennies

AMD hasn't forgotten the budget segment of low-power processors with basic graphics on board: the new (yet also old) Athlon 3000G hurries to the rescue of those who regard the Pentium G5400 with disdain. Two cores and four threads, a 3.5 GHz base clock and the familiar Vega 3 video core (bumped up by 100 MHz), all within a 35 W TDP - and all for a ridiculous $49. The "reds" also made a point of the chip's overclocking headroom, claiming at least 30% more performance at 3.9 GHz. Nor do you have to spend money on a separate cooler in a budget build: the 3000G ships with a decent cooler rated for 65 W of heat, enough even for aggressive overclocking.

In its presentations AMD compared the Athlon 3000G with its current Intel rival, the Pentium G5400, which turned out to be noticeably more expensive (a recommended $73), ships without a cooler, and trails the newcomer badly in performance. Amusingly, the 3000G is not built on Zen 2 at all: it is based on good old Zen+ at 12 nm, which makes the new product essentially a light refresh of last year's Athlon 2xxGE.

The results of the "red" revolution

The release of Zen 2 had a tremendous impact on the processor market - perhaps no change this radical has ever happened in the modern history of the CPU. One can recall the victorious march of the Athlon 64 FX, or Athlon's triumph in the mid-2000s, but there is no precedent in the red giant's past where everything changed so rapidly and the successes were so striking. In just two years AMD has introduced the hugely powerful EPYC server parts, won lucrative contracts from global IT companies, returned to the consumer gaming segment with Ryzen, and even squeezed Intel out of the HEDT market with the inimitable Threadripper. And if it once seemed that all this success rested solely on Jim Keller's brilliant groundwork, the arrival of Zen 2 made it clear that the concept has developed far beyond the original blueprint: we got excellent budget options (the Ryzen 3600 became, and remains, the most popular processor in the world), powerful all-rounders (the 3900X competes with the 9900K while impressing in professional tasks), daring experiments (the 3950X!), and even ultra-frugal chips for the simplest everyday tasks (Athlon 3000G). And AMD keeps moving forward - next year will bring a new generation, new successes and new frontiers that will surely be conquered!


House of NHTi "Processor Wars" rubric in 7 episodes on YouTube - Pumpkin

The author of the article is Alexander Lis.


So what's better?

  • AMD - 68.6% (327 votes)

  • Intel - 31.4% (150 votes)

477 users voted. 158 users abstained.

Source: habr.com
