darwin may lose primary target status on FSF gcc
vincent habchi
vince at macports.org
Sat Sep 19 00:01:42 PDT 2009
Jack,
> http://www.nabble.com/x86_64-apple-darwin-Polyhedron-2005-benchmarks-td25108861.html
These figures are very interesting, but I don't understand why they
lack the gcc 4.2.4 -m64 numbers; was that configuration unavailable at
the time? I would also be curious to see the relative performance of
the two compilers on C-based scientific benchmarks (BLAS, GSL, or
similar). I know that because of pointer aliasing, C compilers cannot
optimize as aggressively as Fortran ones, but that may be at least
partly addressed by the C99 restrict qualifier.
It also raises other questions:
1. Could we compare those figures with the ones obtained with a
"professional" compiler, such as the Intel C/Fortran compilers for
Mac?
2. How much headroom for improvement do GCC and LLVM have,
respectively? I mean, maybe LLVM is lagging these days, but if the
intermediate representation it uses is potentially more powerful than
the one chosen for GCC (as is advertised), it might be just a matter
of months before the results get close to a draw.
3. What is the place of Fortran in modern scientific computing
compared to C-based languages? More specifically, in the scientific
packages available in MacPorts, such as SuiteSparse, what part of the
Fortran code is actively under development and what part is only
legacy code?
4. What is the place of OpenCL? Of course, OpenCL is brand new, but
with its capacity to unleash massive parallel computing power, it
could represent an awesome tool for matrix operations. Yet, I
understand that due to the overhead of moving data to and from video
RAM, OpenCL is inefficient for small matrices; but that is also
precisely the regime where optimizing the CPU code hardly matters
either.
Vincent