Another Monday, another episode of DF Direct Weekly. Myself, John Linneman and Alex Battaglia battle our way through the topics of the week, including initial reactions to the Hellblade 2 previews from last week – but what I want to talk about here is how Alex spent his time last week. Within this week’s Direct, you’ll see his tests with Intel’s PresentMon 2.0, which may well change the face of PC benchmarking… to a certain degree, at least.
PresentMon forms the basis of just about every reputable benchmarking tool out there. Intel pioneered it and continues to add new features (more about that shortly), but it's also used by the likes of CapFrameX, AMD OCAT and even Nvidia FrameView. All of them mix and match various features, but it's PresentMon at the core, which makes the latest additions interesting. We first caught sight of them in a video put together by Gamers Nexus, and we've also seen the new features used 'in anger', so to speak, in GN's excellent performance review of Dragon's Dogma 2.
First of all, let's talk about PresentMon more holistically. It's a great tool. Essentially, it gives accurate frame-time readings for any given benchmark run, and it calculates frame-rates and the various percentiles used in most GPU and CPU reviews (bar ours – we still test based on the actual frames that emerge from the PC). You also get great metrics on important factors including power consumption, CPU core utilisation and more. Much more. Intel's first major change was to implement what it calls 'CPU Busy' and 'GPU Busy'. Rather than just telling you how long it takes to render and present a frame, these show how much of that time was occupied by the CPU and the GPU in delivering any given frame – invaluable in ascertaining the overall balance of your system.
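To make the distinction concrete, here's a minimal sketch (in Python, and very much not Intel's implementation) of how you might lean on those per-frame busy times from a capture. The CSV column names 'CPUBusy' and 'GPUBusy' are assumptions for illustration only, so check the headers your PresentMon build actually writes.

```python
import csv
from statistics import mean

def summarise_busy_times(csv_path):
    """Sketch: classify each captured frame as CPU- or GPU-dominated
    from per-frame busy times in milliseconds.
    'CPUBusy' and 'GPUBusy' are assumed column names for illustration."""
    cpu_busy, gpu_busy = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            cpu_busy.append(float(row["CPUBusy"]))
            gpu_busy.append(float(row["GPUBusy"]))

    gpu_bound_frames = sum(g > c for c, g in zip(cpu_busy, gpu_busy))
    total = len(cpu_busy)
    print(f"Average CPU busy: {mean(cpu_busy):.2f} ms")
    print(f"Average GPU busy: {mean(gpu_busy):.2f} ms")
    print(f"GPU-bound frames: {gpu_bound_frames}/{total} "
          f"({100 * gpu_bound_frames / total:.1f}%)")

summarise_busy_times("presentmon_capture.csv")
```

The point is simply that once the tool splits each frame's time between the two processors, it becomes trivial to say how much of a run was limited by one or the other.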
With these two variables, you can aim for 'balance' – which is to say, maximising utilisation of both components to achieve peak performance. Personally, at DF we tend to be more interested in consistency, delivered either via a more rigorous lock to a target frame-rate (eg 60fps or 120fps) or else by ensuring that the game is always GPU-limited, or as close to it as possible. Any given frame then tends to present similarly to the ones before and after it, meaning consistent frame-times and a perceptually smoother experience – good for VRR monitors too. In comparison, being CPU-limited almost always results in egregious stutter.
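As a rough illustration of that consistency angle (a sketch, not DF's actual methodology), the snippet below takes a list of frame times in milliseconds and reports average frame-rate, a '1% low' figure and the average frame-to-frame delta – the last of which is closest to what you actually perceive as stutter.

```python
def frame_time_stats(frame_times_ms):
    """Sketch: basic consistency metrics from per-frame times (ms).
    '1% low' here is the frame-rate at the 99th-percentile frame time,
    one common convention among several used by benchmarking tools."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)

    ordered = sorted(frame_times_ms)
    p99_frame_time = ordered[min(n - 1, int(n * 0.99))]
    one_percent_low_fps = 1000.0 / p99_frame_time

    # Frame-to-frame variation: big swings read as stutter even when
    # the average frame-rate looks healthy.
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    avg_delta = sum(deltas) / len(deltas)

    return avg_fps, one_percent_low_fps, avg_delta

fps, lows, jitter = frame_time_stats([16.7, 16.6, 16.8, 33.4, 16.7, 16.6])
print(f"Avg: {fps:.1f} fps, 1% low: {lows:.1f} fps, avg delta: {jitter:.2f} ms")
```

Even with a respectable average, one 33ms frame in that example doubles the delivery time of its neighbours, which is exactly the kind of inconsistency a locked or GPU-limited setup is meant to avoid.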
0:00:00 Introduction
0:01:04 News 01: Hellblade 2 previews drop!
0:26:53 News 02: PS Portal hack patched before release
0:34:41 News 03: 90s FPS PO'ed getting Nightdive remaster
0:41:07 News 04: C-Smash VRS hits Meta Quest
0:52:03 News 05: Alex's PresentMon experiments
1:06:57 News 06: Campaign demands: stop killing games!
1:16:19 News 07: Intel updates XeSS with revised modes
1:22:04 Supporter Q1: Would it be more practical to target 1440p than 4K for performance modes?
1:28:11 Supporter Q2: Dragon's Dogma 2 has unimpressive NPC density, so why can't it hit 60fps on consoles?
1:35:17 Supporter Q3: Why do some developers implement incorrect 30fps frame-rate caps?
1:43:36 Supporter Q4: Why isn't low frame-rate compensation the default with 120Hz and VRR on PS5?
1:50:05 Supporter Q5: Could the PS5 Pro be powerful enough to run advanced path traced games?
1:55:04 Supporter Q6: With Microsoft potentially opening Xbox to third parties, was the 3DO approach right all along?
2:01:21 Supporter Q7: Would it have been better to forgo the PS5 Pro and instead shorten this console generation?
Another interesting new data point is 'click to photon' latency, which is essentially the time taken for user input to register on-screen. Nvidia has this function within its own FrameView tool, based on 'markers' the developer needs to add to their code (which happens naturally if using DLSS 3 frame generation). Intel's solution should 'just work'. Latency monitoring is crucial not just for telling you why a game feels poor to control, but for putting an actual number on it – something that's highly time-consuming to do without internal metrics.
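If a capture does include a latency measurement, summarising it is straightforward. The sketch below assumes a per-frame latency column in milliseconds; the 'ClickToPhotonLatency' name is a placeholder for illustration, not necessarily what any given PresentMon-based tool writes, and frames with no registered input are skipped.

```python
import csv
from statistics import mean, median

def latency_summary(csv_path, column="ClickToPhotonLatency"):
    """Sketch: summarise input-to-display latency samples from a capture.
    The column name is an assumption for illustration; PresentMon-based
    tools may label (and unit) this differently, so check your headers."""
    samples = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            value = row.get(column, "").strip()
            if value and value.upper() != "NA":
                samples.append(float(value))

    if not samples:
        print("No latency samples found - was any input registered during capture?")
        return
    print(f"Samples: {len(samples)}")
    print(f"Median latency: {median(samples):.1f} ms")
    print(f"Mean latency:   {mean(samples):.1f} ms")
    print(f"Worst case:     {max(samples):.1f} ms")

latency_summary("presentmon_capture.csv")
```

That's the real appeal of having latency in the same capture as frame-times: rather than filming a screen with a high-speed camera and counting frames by hand, you get a number per run that can be compared across settings.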