Computer game apps read compressed artificial-world descriptions from a disk file. The CPU regenerates this artificial world and loads it into the GPU, which displays it to the gamer. The gamer's actions are fed back to the GPU, which dynamically modifies the artificial world it displays.
When a content creator’s computer is transcoding media the workflow is quite different. Transcoding involves reading compressed video frames from a disk, each of which is decompressed and then recompressed, with the recompressed frame written back to a disk.
Video playback on a content creator's system is also different from gaming. Here a continuous sequence of frames is rapidly pulled from a disk, decompressed by the CPU, and sent to GPU buffers, where mathematical operations, such as those for color correction, are applied; the result is then displayed for a precise interval. This process is repeated for each frame.
When exporting a movie, compressed frames are read from a disk, each of which is decompressed. Effects, including color correction, are applied to these frames which are recompressed, with each recompressed frame written back to disk.
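The read, decompress, process, recompress, write loop described above can be sketched in a few lines. The function names here (decode_frame, apply_effects, encode_frame) are hypothetical stand-ins, not a real codec library's API:

```python
# Illustrative sketch of the transcode/export loop: read compressed
# frames, decompress each, optionally apply effects (e.g. color
# correction), recompress, and write back to disk.
# All five callables are hypothetical stand-ins for a real media API.

def transcode(read_compressed_frames, decode_frame, apply_effects,
              encode_frame, write_frame):
    """Read -> decompress -> effects -> recompress -> write, per frame."""
    for compressed in read_compressed_frames():
        frame = decode_frame(compressed)   # CPU-side decompression
        frame = apply_effects(frame)       # e.g. color correction
        write_frame(encode_frame(frame))   # recompress and write back
```

For straight transcoding with no effects, apply_effects would simply pass each frame through unchanged.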
There is, however, one thing that gaming and content creation have in common: both involve long task execution times during which the CPU and GPU generate heat. Either component can push a computer toward, or into, thermal throttling. (See thermal throttling.) So, although gaming and content creation tasks do not employ the same compute-intensive processes, both can generate high thermal loads.
The question we want answered: do computers that perform well when playing games also perform well when executing content creation tasks? More completely, we want to know whether game play or industry-standard benchmark performance has the higher correlation with content creation performance.
Be sure to read Part 1 of this article.
We need to measure performance on multiple computers because correlation requires multiple data points for game play performance and multiple data points for benchmark performance.
Before describing the measurements taken on each computer, here is a description of each system. In the first phase of this exploration I had access to four systems.
Number 1. Samsung Galaxy TabPro S Windows 10 Tablet: Fanless Intel m3-6Y30 CPU/GPU (similar to an i3); 4GB RAM; 2GB Video memory; 256GB SSD.
Number 2. Lenovo IdeaCentre Y910-27: Intel i7-6700 (3.4-4.0GHz); 16GB DDR4-2133; NVIDIA GTX 1080, 8GB GDDR5 memory; 128GB M.2 NVMe PCIe SSD.
Number 3. Gigabyte 15 OLED Laptop: Intel i7-9750H (2.6-4.5GHz); 16GB DDR4-2666 dual-channel RAM; NVIDIA GTX 1660Ti 6GB GDDR6 memory; 512GB M.2 NVMe PCIe SSD.
Number 4. HP Omen 15: Intel i7-9750H (2.6-4.5GHz); 16GB 2666MHz DDR4 dual-channel RAM; NVIDIA GTX 1660Ti 6GB GDDR6 memory; 512GB M.2 NVMe PCIe SSD; 32GB Intel Optane memory. (Optane increases system performance.)
Figure 1 shows my AERO 15 SA. Its Geekbench 5 Multi-core score is 5486, similar to that of a comparable MacBook Pro 16. (The MBP 16, however, is far more expensive yet offers neither a calibrated OLED 4K display, a full selection of USB ports, nor an SD card reader.)
In the second phase of this experiment, I had access to only three systems. Thankfully these three systems also offered a wide performance range.
Number 1. Samsung Galaxy TabPro S Windows 10 Tablet.
Number 2. Lenovo IdeaCentre Y910-27.
Number 3. HP Omen 15.
Independent Variable 1: Geekbench 4 Multi-core Performance
Figure 2 presents a histogram of Geekbench 4 multi-core performance generated by the four computers. (A histogram is the correct way to plot discrete data.)
A linear trendline has been overlaid on these data. The Coefficient of Determination, r², estimates how closely the Geekbench data points follow the linear trendline; it ranges from 0 to 1.00, with 1.00 indicating a perfect fit.
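For readers who want to check such a fit themselves, a minimal pure-Python r² for a least-squares linear trendline might look like this. The sample scores are made-up stand-ins, not the article's measured Geekbench data:

```python
# r^2 (coefficient of determination) for a least-squares linear fit.
# An r^2 of 1.0 means the points lie exactly on the trendline.

def linear_r_squared(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Nearly linear sample data (hypothetical scores), so r^2 is close to 1.
fit_quality = linear_r_squared([1, 2, 3, 4], [2.0, 4.1, 5.9, 8.0])
```

Spreadsheet charting tools compute the same quantity when you ask for a trendline's "R-squared" value.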
Figure 3 presents a histogram of Geekbench 4 multi-core performance generated by the three computers. Figures 2 and 3 are remarkably similar. The three computers’ trendline has a linear r2 value of ≈1.00.
Figure 4 presents a Geekbench 4 data curve so linear that I had to add labels to indicate the data point locations. This linearity is a chance result: including a different laptop, such as an MBP 15, in the set of tested laptops produced non-linear data.
Independent Variable 2: War Thunder Game Performance
Figure 5 presents game playback performance, in frames-per-second, from the four computers. Each computer’s data point is the average of the automatic playback of three benchmark battles: Pacific War (Morning), Battle of Berlin, and Tank Battle. (Each playback value is an average of Average and Minimal performance framerates.) Also see Figure 6.
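The averaging described above can be sketched as follows. Only the Pacific War (Morning) numbers (207 and 137 FPS, reported later for the Gigabyte laptop) come from the article; the other battles' values are hypothetical placeholders:

```python
# Each computer's single game-performance data point: per battle,
# average the "Average" and "Minimal" framerates, then average
# across the three benchmark battles.
# Only the Pacific War figures are from the article; the rest are
# illustrative placeholders.

battles = {
    "Pacific War (Morning)": {"average": 207, "minimal": 137},
    "Battle of Berlin":      {"average": 180, "minimal": 120},
    "Tank Battle":           {"average": 160, "minimal": 100},
}

per_battle = [(b["average"] + b["minimal"]) / 2 for b in battles.values()]
data_point = sum(per_battle) / len(per_battle)   # one FPS value per computer
```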
Figure 5 shows a logarithmic trendline overlaid on these data. Again, r² estimates how closely the test data follow the trendline.
Figure 6 presents the Average (207 FPS) and Minimal (137 FPS) playback, at 1920x1080, framerates from the Pacific War (Morning) benchmark battle run on my Gigabyte 15 OLED laptop.
Figure 7 presents an image captured from the War Thunder benchmark battle, Pacific War (Morning). I found this benchmark to be the most realistically rendered.
Figure 8 presents game performance from the three computers, with a logarithmic trendline overlaid on these data. Figures 5 and 8 are remarkably similar, although the three-computer r² (0.97) is higher than the four-computer r² (0.91).
Both the three- and four-computer War Thunder performance curves are concave (Figure 9), a result quite different from the linear Geekbench curves shown in Figures 2 and 3. Assuming that the greater a laptop's maximum "power," the greater the thermal throttling imposed when playing a long, demanding game, then across this particular set of laptops each performance increment was disproportionately smaller, hence the logarithmic rolloff in performance seen across the set.
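Fitting such a logarithmic trendline, y = a·ln(x) + b, is itself just a linear least-squares fit with x replaced by ln(x). A small sketch with made-up, exactly logarithmic data:

```python
# Fit y = a*ln(x) + b by substituting u = ln(x) and doing an
# ordinary linear least-squares fit of y on u.
import math

def log_trendline(xs, ys):
    u = [math.log(x) for x in xs]
    n = len(u)
    mu, my = sum(u) / n, sum(ys) / n
    a = (sum((ui - mu) * (y - my) for ui, y in zip(u, ys))
         / sum((ui - mu) ** 2 for ui in u))
    b = my - a * mu
    return a, b

# Concave sample data generated from y = 10*ln(x) + 10, so the fit
# should recover a = 10, b = 10 (up to floating-point error).
xs = [1, 2, 3, 4]
ys = [10 + 10 * math.log(x) for x in xs]
a, b = log_trendline(xs, ys)
```

Spreadsheet "logarithmic trendline" options perform this same substitution internally.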
Because of the high correlation between three- and four-computer data, from this point, only three-computer data will be employed.
Two variables, game playback performance and benchmark performance, are independent variables that will be correlated with four dependent variables.
Figure 10 shows the two independent variables superimposed.
Dependent Variable 1: Timeline Playback
Two non-optimized video clips were placed in a Timeline and a long Dissolve FX applied to the majority of the Timeline’s duration. Figure 11 shows Timeline playback performance.
The (yellow) logarithmic trendline closely matches the Figure 10 gameplay data. It does not match the Figure 10 (blue) linear trendline. See Figure 12.
Dependent Variable 2: Clip Optimization
"Optimization (including resolution and frame-rate independence) …" describes powerful DaVinci Resolve features that enable your computer to work like a much more powerful system. See Figure 13. (The quote is from my new Kindle electronic and paperback book, Rapid DaVinci Resolve.)
"Optimize" refers to transcoding an acquisition codec into a production codec that is more optimal for editing. Another, and to my mind better, term for editing this way is "proxy" editing.
Figure 14 presents, for each of the three computers, the reciprocal of the number of seconds required to optimize a 2160p29.97 clip from a Sony FS5 camcorder.
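The reciprocal metric is simple: 1/t turns a completion time into a throughput-like number where larger means faster, so the chart reads in the same direction as the FPS charts. A sketch with hypothetical timings (not the measured values):

```python
# Convert optimization times (seconds) into 1/seconds so that
# "bigger bar = faster computer," matching the FPS charts.
# These timings are hypothetical, not the article's measurements.

optimize_seconds = {"System 1": 240.0, "System 2": 60.0, "System 3": 80.0}
optimize_rate = {name: 1.0 / t for name, t in optimize_seconds.items()}
# A clip optimized in 60 s plots at 1/60 ~= 0.0167, four times the
# height of one that took 240 s (1/240 ~= 0.0042).
```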
The (yellow) logarithmic trendline closely matches the Figure 10 gameplay data. It does not match the Figure 10 (blue) linear trendline. See Figure 15.
Dependent Variable 3: Optimized Timeline Playback
Figure 16 presents the 2-stream playback task whose results were reported in Figure 11. The Figure 11 results come from non-optimized media, while the Figure 16 results come from optimized media.
Optimization yields over a 35-percent increase in playback performance on the Lenovo (System 2).
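The percentage gain is computed the usual way: 100 × (optimized − original) / original. The FPS values below are hypothetical; the article reports only that the Lenovo's gain exceeds 35 percent:

```python
# Percent increase in playback performance after optimization.
# The before/after FPS values are hypothetical examples.

def percent_increase(before, after):
    return 100.0 * (after - before) / before

gain = percent_increase(20.0, 27.5)   # 37.5 percent
```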
Dependent Variable 4: Timeline Export
Figure 18 presents reciprocals of the number of seconds required to export a 1080p29.97 H.264 (Figure 17) movie from a 2160p29.97 Timeline.
The (yellow) logarithmic trendline closely matches the Figure 10 gameplay data. It does not match the Figure 10 (blue) linear trendline. See Figure 19.
The data collected in this experiment support the idea that computers, in particular laptops, that perform well in gameplay will likely perform well at content creation. Moreover, across the dependent variables tested, gameplay performance tracked content creation performance more closely than did the linear Geekbench results.