[IPOL discuss] [IPOL tech] demo with omp parallelism

Miguel Colom colom at cmla.ens-cachan.fr
Fri Sep 7 10:42:57 CEST 2018


Quoting Boshra Rajaei:

> Dear Miguel,
>
> Regarding our new test demo on IPOL (
> https://ipolcore.ipol.im/demo/clientApp/demo.html?id=55555111), we have a
> question that we thought you may be able to help us with.

Hi Boshra,

For this kind of question, not directly related to the demo system, I  
think it's better to send it to the IPOL discuss list. This way  
anyone can answer it, and the response is archived for others.

For questions on the demo system itself, it's better to use the  
Editors' group. I tried to invite you, but it seems that you've  
configured Google Groups not to deliver invitations to you.

> Actually, the
> algorithm runs slower than we expect, and sometimes it even seems faster
> on my laptop. I am not sure if the algorithm is running in
> parallel or not. We used OpenMP for parallelism. Is there any way we can
> track how many CPU cores are assigned to the algorithm?

To test the program, the best thing is to run and debug it on the Purple  
server. It's a 32-core machine, so if there's any lock or race  
condition problem, it's likely that you'll find it easily.
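
To track how many cores the algorithm actually gets, you can ask OpenMP  
directly. Here is a minimal sketch (the file name and compile line are  
my own, not part of your demo code):

    /* check_threads.c -- print how many cores OpenMP sees and how many
     * threads it actually starts.
     * Compile with: gcc -fopenmp check_threads.c -o check_threads */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        printf("Available processors: %d\n", omp_get_num_procs());
        printf("Max threads OpenMP will use: %d\n", omp_get_max_threads());

        #pragma omp parallel
        {
            /* Only one thread prints, to avoid duplicated output */
            #pragma omp single
            printf("Threads in this parallel region: %d\n",
                   omp_get_num_threads());
        }
        return 0;
    }

You can also set the OMP_NUM_THREADS environment variable before running  
to compare timings with different thread counts.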

If the code runs faster on your laptop than on our servers, the reason  
could be one of these:
- You've fixed the number of threads to something small instead of  
letting OpenMP choose by itself.
- You've parallelized not only a big outer loop, but also inner loops.  
This has two consequences: (i) you create way more threads than you  
need, and (ii) if the cost of creating/releasing the inner threads is  
higher than the actual computation, you'll simply consume system time  
rather than dedicating it to your process, and your execution will  
take longer than expected (see the sketch after this list).
- You're using locks or OpenMP barriers that are actually preventing  
the parallelization.
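
On the second point, here is a small sketch of the difference (the  
function names and the image-scaling loop are invented, not taken from  
your code):

    #include <omp.h>

    /* One parallel region over the outer loop: the thread team is
     * created once for the whole image. */
    void scale_image(float *img, int rows, int cols)
    {
        #pragma omp parallel for
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                img[i * cols + j] *= 2.0f;
    }

    /* Problematic: a new parallel region is opened at every outer
     * iteration, so the thread create/release overhead is paid "rows"
     * times and can easily exceed the work done in the inner loop. */
    void scale_image_slow(float *img, int rows, int cols)
    {
        for (int i = 0; i < rows; i++) {
            #pragma omp parallel for
            for (int j = 0; j < cols; j++)
                img[i * cols + j] *= 2.0f;
        }
    }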

To me, these are the most common causes. If someone has others in  
mind, please tell!

Good luck with your code,
Miguel


>
> Thank you,
> Boshra
>




