raimue at macports.org
Thu Dec 23 03:31:20 PST 2010
I don't have a good explanation, but there are aspects of virtualization
that make benchmarks taken inside a VM unreliable. Some operations may
take longer, or even shorter, in a VM than on real hardware. For
example, I/O operations in a VM can complete directly in RAM, because
the host system is responsible for syncing the data to the considerably
slower physical disk. So it depends on what your program actually does.
The other processes running on the host system and their scheduling are
also an important factor.
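To make the I/O point concrete, here is a minimal sketch (not from the original mail) that times a plain buffered write against one followed by fsync(). Inside a VM the buffered case can look unrealistically fast, since the write may only reach the host's RAM; the helper name timed_write and the 8 MiB size are just illustrative choices.

```python
import os
import tempfile
import time

def timed_write(path, data, sync):
    """Write data to path; optionally force it out with fsync()."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        if sync:
            f.flush()
            os.fsync(f.fileno())  # force the data down to the (virtual) disk
    return time.perf_counter() - start

data = b"\0" * (8 * 1024 * 1024)  # 8 MiB of zeros
with tempfile.TemporaryDirectory() as d:
    buffered = timed_write(os.path.join(d, "a"), data, sync=False)
    synced = timed_write(os.path.join(d, "b"), data, sync=True)
    print(f"buffered: {buffered:.4f}s  fsync: {synced:.4f}s")
```

On most systems the fsync'd write is noticeably slower; the gap between the two numbers is exactly the caching effect that can distort an in-VM benchmark.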
Timekeeping is a complex topic in virtualization. The guest's wall clock
is only synchronized at defined time slots or on special system calls,
so gettimeofday() will not always return correct values when observed
from outside the VM. For the curious, technical information and
algorithms on this topic can be found in a paper from VMware.
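One way to notice such clock corrections from inside a guest is to compare the wall clock against a monotonic clock over the same interval; the snippet below is a rough sketch of that idea (again not from the original mail). The wall clock can be stepped or slewed by the hypervisor or NTP, while the monotonic clock only moves forward.

```python
import time

# Wall clock (subject to host/NTP corrections) vs. monotonic clock
# (never stepped) measured over the same sleep interval.
wall_start = time.time()
mono_start = time.monotonic()
time.sleep(0.5)
wall_elapsed = time.time() - wall_start
mono_elapsed = time.monotonic() - mono_start
drift = wall_elapsed - mono_elapsed
print(f"wall: {wall_elapsed:.6f}s  monotonic: {mono_elapsed:.6f}s  "
      f"drift: {drift:+.6f}s")
```

On bare metal the drift is normally negligible; inside a VM a visible drift over such a short interval is a hint that the guest clock is being corrected behind your back.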
On 2010-12-23 00:52 , Marko Käning wrote:
> On the other hand I see that "user"+"sys" time is usually equal to
> "real" on Linux, whereas "user"+"sys"<"real" on Mac OS X itself,
> which makes me believe that "user"+"sys" is more accurate than
> "real". :-)
By the way, with a multi-threaded program in a
multi-processor/multi-core environment, the sum of user and sys time
can even exceed real time, since CPU time is counted separately for
each thread.
More information about the macports-dev mailing list