Quote: Originally Posted by Ivan the SoSo
If you look at the graphs the 4850 hits higher peaks than the 5770...at one point almost hitting 100%. The 5770 only gets close to 80%
The graph I'm assuming you're looking at is from PERFMON, which is a "CPU utilization over time" display from each of the two [very similar] machines. It's unrelated to GPU utilization on the two video cards. I don't think it should be taken as terribly significant, since it also reflects whatever else I'd been doing over the previous few minutes getting screenshots set up, starting GPU-Z, etc. So it isn't an accurate indicator of WMC-caused CPU utilization unless I let things stabilize first and then take the screenshot. For sure, it has nothing to do with GPU utilization.
In other words, it is the AIDA64 "hardware monitor" information in the lower-left corner which is most relevant, showing the "instantaneous" values for CPU utilization and GPU utilization while WMC is presenting the video frame shown in the screenshot.
Now one other piece of interesting information. The GPU utilization seems directly related to the bitrate of the underlying HD program transport stream. In other words, here in LA, KCBS-DT (which is where that "CSI" program is carried) is the only network station not using sub-channels; instead it dedicates essentially all of its bandwidth to its one-and-only 2.1 CBS programming content. So it runs up at about a 15 Mb/s bitrate.
In contrast, if I watch one of the basic cable channels (where TWC/LA may have recompressed the stream to achieve a lower bitrate, or where the original content from that network is simply at a much lower bitrate than KCBS-DT), the GPU utilization shown on the HD5770 is also much lower, say between 30% and 55%. This seems reasonable and consistent, since the HD5770 hardware is being asked to deal with a lower-bitrate datastream rather than a higher-bitrate one.
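For what it's worth, one way to put firmer numbers on that comparison would be to let GPU-Z log its sensors to a file on each machine while watching the same channel for a few minutes, then summarize the logged GPU load. Below is a minimal Python sketch along those lines; the column name "GPU Load [%]" and the comma-separated log layout are assumptions about the GPU-Z sensor log, so adjust them to match whatever your copy actually writes out.

import csv

def summarize_gpu_load(log_path):
    # Parse a GPU-Z sensor log (Sensors tab -> "Log to file") and return
    # (min, average, max) GPU load over the logged interval.
    loads = []
    with open(log_path, newline="") as f:
        reader = csv.reader(f)
        header = [h.strip() for h in next(reader)]
        col = header.index("GPU Load [%]")   # assumed column name; check your log
        for row in reader:
            if len(row) > col:
                try:
                    loads.append(float(row[col]))
                except ValueError:
                    pass                      # skip blank or malformed rows
    if not loads:
        return None
    return min(loads), sum(loads) / len(loads), max(loads)

# Example: log each card on the same channel, then compare:
# print(summarize_gpu_load("GPU-Z Sensor Log.txt"))

Logging both cards on the same program (say, the 15 Mb/s KCBS-DT feed and then a low-bitrate basic cable channel) would show whether the HD4850's reported load really stays flat while the HD5770's tracks the bitrate.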
So the question then is why would the HD4850 NOT also reflect this same "lesser demand on GPU" when fed lower bitrate programs?
Which brings me back to the original question: why does the HD5770 show so much more GPU utilization than the HD4850 when watching the same datastream with WMC? If the result is a better picture (which to my eyes is undeniable, as my two accidental screenshots demonstrate a clearly improved result from the HD5770), then I have no complaints and will just keep my mouth shut.
Otherwise, this is really just a question of intellectual curiosity: coming up with an explanation for why these two video cards seem to behave so differently in GPU utilization while watching HDTV.