Are the Intel Core i9 10850K and Core i9 10900K Good for Rendering?

 

Most CPU launches come with a lot of stories, leaks, and fanfare – with curious minds rushing to get the latest updates, system integrators updating their product lines, and enthusiasts doing everything they can to get their hands on one of the new models. The latest processors from AMD and Intel, however, lack almost all of that.

AMD's new Ryzen "XT" processors and Intel's new Core i9 10850K are so close in performance to the original models that there should be no perceptible difference in real-world usage.

I've been switching between these chips to find out which is better for rendering. Unfortunately, neither of these is the best or fastest CPU for rendering; that honor goes to AMD's 3rd Gen Threadripper lineup.

Intel actually released a slightly slower model – presumably to help alleviate some of the supply issues with the Core i9 10900K. The new Core i9 10850K is essentially the same as the 10900K, only with 100 MHz lower base and Maximum Turbo Boost clock speeds.

Does such a small difference in clock speed even matter?

While certainly important, the frequency (whether base or Boost/Turbo) is just one factor in determining the real-world performance of a processor. I ran all of these chips through multiple rendering benchmarks, within the context of CPU-based rendering, to answer the question.

Benchmark Results

Each benchmark (Cinebench R20 and V-Ray Next Benchmark 4.10.06) was run twice per CPU, and the faster of the two results is the one reported (see the figure at the left). That method does slightly complicate things with the Intel processors, though, as these new 10th Gen Core models have multiple power limits which can substantially impact performance. This isn't completely new, but it is more pronounced with this generation of processors… and it leads to a short-term boost in performance when a processor first comes under load, which then drops off after a while. As a result, the first run of each benchmark on the 10900K and 10850K scored substantially higher than the second run. If anything, this may make those Intel processors look better than they should when compared to AMD's models, because in longer workloads (which would include most real-life rendering situations) most of the processing time would be spent at the lower power limit, with correspondingly reduced performance. I may cover this topic in more depth in a later blog post, but I wanted to make sure readers were at least aware of it here.
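As a rough sketch of that scoring method (run_benchmark() here is a hypothetical stand-in for however the Cinebench or V-Ray score is actually collected, not a real API):

```python
# Sketch of the scoring method described above: each benchmark is run
# twice per CPU and only the faster (higher-scoring) run is reported.
from typing import Callable

def best_of_two(run_benchmark: Callable[[], float]) -> float:
    """Run the benchmark twice and keep the higher score."""
    return max(run_benchmark(), run_benchmark())
```

On these 10th Gen Intel parts, the first run tends to be the higher one, because the chip starts out with its full turbo power budget available.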

I've noticed that the Core i9-10850K consistently delivers better performance than the Core i9-10900K in some tests—examples include Blender, Corona, Unreal Engine, MySQL, and Java. I swapped the i9-10850K for the i9-10900K and compared test results back-to-back without changing any settings, software, or other hardware: remove one CPU, insert the other. The results were still consistently faster for the i9-10850K—this was not random variation, nor was it due to temperature differences during testing.

I also recorded the CPU frequency during testing. Given the positioning and specs of the i9-10900K and i9-10850K, I went in expecting the i9-10900K to run at a 100 MHz higher clock than the i9-10850K. Had that been the case, the performance discrepancies would likely have come from some external factor rather than the CPUs themselves.

I logged the frequency of all the cores while running the Unreal Engine test. As expected, the i9-10900K starts out at a 100 MHz higher frequency than the i9-10850K (4.9 GHz vs. 4.8 GHz). What is curious is that the i9-10900K reduces its frequency much sooner than the i9-10850K, and it drops to a lower frequency as well (4.6 GHz vs. 4.7 GHz). That's a 200 MHz swing compared to what I expected.

Overall, this means that on average, over the whole duration of the benchmark, the Core i9-10900K runs at 4.67 GHz, whereas the Core i9-10850K runs at 4.74 GHz—a 1.48% difference in favor of the Core i9-10850K. I also noticed that the CPU power-draw profiles of the two processors differ hugely. This is everything at stock, with all Intel limits at their defaults. The Core i9-10900K jumps to 180 W and drops to 125 W soon thereafter to respect its TDP power limit. The Core i9-10850K, on the other hand, goes to just 140 W and stays at that power level for longer, before eventually dropping to the same 125 W TDP limit as the i9-10900K.
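As a quick sanity check of that 1.48% figure, using just the two measured averages:

```python
# Percentage difference between the two average clocks quoted above.
f_10900k = 4.67   # average GHz over the benchmark run
f_10850k = 4.74
print(f"{(f_10850k - f_10900k) / f_10850k * 100:.2f}% in favor of the i9-10850K")  # -> 1.48%
```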

This explains why the Core i9-10900K runs out of steam more quickly—it exhausts its power budget much sooner because it consumes power at a much higher rate than the Core i9-10850K. We have talked about PL1, PL2, and Tau in previous articles.

In summary, the assumption was that the processor can run for 56 seconds at up to 250 W before dropping to 125 W. The data shows that this is not the case. Rather, there seems to be a certain amount of total energy (Energy = Power × Time) that can be used while boosting, and once that budget is exhausted, the TDP limit kicks in.
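Here is a minimal sketch of that energy-budget reading, assuming Intel's published defaults for this class of CPU (PL1 = 125 W, PL2 = 250 W, Tau = 56 s) and the boost power levels measured above; the formula itself is a simplification for illustration, not Intel's actual algorithm:

```python
# Simplified "energy budget" model of turbo behavior, as described above.
PL1 = 125.0   # long-term limit (TDP), watts
PL2 = 250.0   # short-term limit, watts
TAU = 56.0    # turbo time parameter, seconds

# Above-TDP energy available while boosting: Energy = Power x Time.
budget_joules = (PL2 - PL1) * TAU   # 7000 J

def boost_seconds(measured_watts: float) -> float:
    """How long the measured draw can be sustained before falling back to PL1."""
    excess = measured_watts - PL1
    return budget_joules / excess if excess > 0 else float("inf")

print(f"i9-10900K at 180 W: ~{boost_seconds(180):.0f} s of boost")   # ~127 s
print(f"i9-10850K at 140 W: ~{boost_seconds(140):.0f} s of boost")   # ~467 s
```

Under this model, the i9-10850K's lower boost power lets it stay above the TDP limit several times longer, which matches what the power logs show.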

Digging into Intel's publicly available datasheets, I found this definition: "Turbo Time Parameter (Tau): An averaging constant used for PL1 exponential weighted moving average (EWMA) power calculation." So Tau really isn't a duration in the assumed sense; rather, it is used to estimate how quickly the remaining power budget will be used up.
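Read that way, the limiter tracks a moving average of package power rather than running a fixed countdown. A toy simulation of that EWMA idea (the starting average and update step are illustrative assumptions, not Intel's firmware behavior):

```python
# Toy EWMA model of Tau: boost continues until the exponentially weighted
# moving average of package power reaches PL1.
import math

PL1, TAU, DT = 125.0, 56.0, 0.1   # watts, seconds, simulation step (s)

def seconds_until_throttle(boost_watts: float, start_avg: float = 60.0) -> float:
    avg, t = start_avg, 0.0                # assume the average starts near idle
    alpha = 1.0 - math.exp(-DT / TAU)      # per-step EWMA weight
    while avg < PL1:
        avg += alpha * (boost_watts - avg)
        t += DT
    return t

print(f"180 W boost: EWMA reaches PL1 after ~{seconds_until_throttle(180.0):.0f} s")
print(f"140 W boost: EWMA reaches PL1 after ~{seconds_until_throttle(140.0):.0f} s")
```

The higher the sustained draw, the faster the average climbs to PL1 and the sooner the chip throttles, which is exactly the pattern seen with the i9-10900K.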

Here's the reason for the increased power draw on the Core i9-10900K: it's simply running all its cores at a higher voltage. While running in PL2, the i9-10900K clearly sits at 1.35 V, whereas the i9-10850K runs at only 1.27 V. That's a huge difference, especially since power draw increases quadratically with voltage.
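To put rough numbers on that quadratic relationship (dynamic power scales roughly with the square of voltage at a given clock; this is a back-of-the-envelope estimate, not a measured figure):

```python
# Rough effect of the observed voltage gap, assuming P ~ V^2 * f at equal clocks.
v_10900k, v_10850k = 1.35, 1.27   # observed PL2 voltages
print(f"~{((v_10900k / v_10850k) ** 2 - 1) * 100:.0f}% higher dynamic power at the same clock")  # ~13%
```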

The silicon of this i9-10900K simply isn't as good, so it requires a higher voltage to run stably at a given frequency. The operating voltage of each CPU is fine-tuned at the factory, based on silicon quality and many other factors; this process is called "binning". Chips that don't meet the criteria for a given model often end up as lower-specced SKUs.

My overclocking results show the i9-10850K to be the slightly better overclocker. Both the i9-10900K and the i9-10850K topped out at 5.1 GHz, but the i9-10850K needed a tiny bit less voltage: 1.323 V vs. 1.332 V, a 9 mV difference. That is not huge, and certainly not large enough to explain the 80 mV stock voltage difference observed earlier.

Another possibility is that my i9-10900K is from an earlier batch of silicon, since it was provided by Intel for launch-day review, whereas the i9-10850K here is a retail processor. Still, I would assume that Intel's CPU samples are representative of retail performance—what would be the point in sampling them for review otherwise, especially if the review sample performs worse? I also doubt Intel made big improvements to their process in such a short time frame. The i9-10900K was reviewed in May, just three months ago—it's not like they had a year for optimizations.

Given that the Core i9-10900K is designed to operate at 5.3 GHz, which seems to be very close to the limit of the 14 nm++ process, it's possible that Intel defined a more aggressive voltage-frequency curve for their flagship to ensure sufficient yields. Since the i9-10850K tops out at 5.2 GHz, its V-F curve can be more relaxed.

The Comet Lake architecture, built on the 14 nm++ process, is yet another Skylake derivative, meaning most performance gains come from added features and higher clock rates. Like the Core i9-10900K, the 10850K has an unlocked multiplier that enables easy overclocking, uses solder TIM to improve heat transfer and overclocking headroom, and doesn't come with a bundled cooler.

The T-junction (maximum junction temperature) of the Intel Core i9-10850K is 100°C; any temperature at or below that value is considered normal and expected for this chip, even during intensive workloads such as gaming, and even when a dedicated graphics card is handling the graphics load.

Relative to the i9-10900K, the i9-10850K showed a 2-3% drop in rendering speed in Cinebench, while V-Ray showed a similar drop in CPU-based rendering and no difference in GPU emulation mode.
