The test environment consists of IBM H22 Intel Nehalem blades.
>>./configure --prefix=/home/mydir/mpich2-1.3a
>>make
>>make install
No special configure option is required. (In 1.2.1, we needed --with-pm=hydra.)
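For reference, the 1.2.1 configure line would have looked roughly like this (the prefix here is only an example path, not the one I actually used):
>>./configure --prefix=/home/mydir/mpich2-1.2.1 --with-pm=hydra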
Set up the 'myhost' file as follows:
intel01:1 binding=user:0
intel02:1 binding=user:0
intel03:1 binding=user:0
intel04:1 binding=user:0
>>LD_LIBRARY_PATH=../socIntel/goto:$LD_LIBRARY_PATH mpiexec -f myhost -n 4 ./main 62 62 tests/
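One quick way to watch whether the ranks hop between CPUs (just a sanity check, not the only way) is to poll the PSR column of ps, which shows the processor each process last ran on:
>>watch -n 1 'ps -o pid,psr,comm -C main'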
I have to say that there is no process migration among the CPUs. However, I cannot say this installation really has CPU affinity, because when I use -binding user:2,4, the processes are not actually bound to CPUs 2 and 4. Even if I use intel01:4 binding=user:4,5,6,7, I see that CPUs 0, 1, 2, and 3 are busy.
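As another sanity check (assuming taskset and pgrep are available on the blades), the affinity mask the OS actually sees for each rank on the local node can be printed like this:
>>for pid in $(pgrep -x main); do taskset -cp $pid; done
If the binding were in effect, each rank should report only the cores listed in its binding= entry; ranks on the other nodes would have to be checked over ssh the same way.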
Nevertheless, this is the best result I can get on Bluegrit. There, OpenMPI can do CPU affinity only within a single node because of the TCP firewall. MVAPICH2 cannot really support CPU affinity either, since there is no InfiniBand, iWARP, etc. Finally, the early version of MPICH does not support core binding at all. It is really hard to get core mapping as a non-root user. I don't know why the admins are reluctant to install these tools for the users. I wasted a lot of time on that!