EGI Training for AEGIS Site Administrators

Attending EGI training at the Scientific Computing Laboratory of the Institute of Physics Belgrade.

More about it here.

Parallel programming with MPI and OpenMP

Hey hey ppl!

For the past couple of hours I have been writing a seminar paper about parallel programming. It would be more correct to say a dozen hours, but I don’t know if that is grammatically correct? 😐 To be precise, I have been working on the source code for a couple of days, and it left a very strong impression. Unforgettable! How could I forget it when I got so little sleep 🙂 I’m afraid I’m getting hyperactive and very tired at the same time, so don’t mind if I write nonsense every now and then. Yey 😀 Let’s get down to business, or sort of business :S

I won’t bug you with the history of parallel programming… It’s just yadda yadda, big companies having a lot of money to spend on distributed computing research… And so, bla bla, they developed tools for making programs execute at the same time on more than one processor… Yadda yadda, there were a lot of different implementations of almost the same idea, so they made a deal to define some standards. MPI and OpenMP are the well known ones. Boom, you think we’re starting the real job now! Wow! Anyway, different tech for different purposes.

MPI is the Message Passing Interface. Programmers use it for parallel computing on distributed multicomputers. Distributed multicomputers!!! Now, am I a smart guy? No, it’s just a techy name for… let’s say a cluster! What the heck is a cluster, man? 🙂 Ok, let’s say you have a couple of computers and a very fast interconnect between them. At that point you are well on your way to having a cluster! All you need is the right OSes and very powerful processors on those computers. Yeah, and not to forget a lot of RAM, if you are going to run some serious computations that are RAM hungry.

On the other side, there is OpenMP. It stands for Open Multi-Processing. Programmers use it to parallelize program code on centralized, shared-memory machines. SMP if you like! With it you have the power to fully utilize both processors of your computer, or however many you have. Cool, I’ll try and run it on a GPU. Not! I’m afraid you won’t be able to, because OpenMP doesn’t run on GPUs. Either way, you can make practical use of it. I just can’t remember any right now. The examples I’ll talk about here are of some use, but not very practical. Hey, who needs to calculate a list of prime numbers using the sieve of Eratosthenes on a daily basis?? Not me! Let’s explain the ideas behind those examples.
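Just to give you the flavour of the two models (these are tiny throwaway snippets I’m writing here for illustration, not code from my seminar), here is how a “hello world” looks in each. With OpenMP you get threads sharing memory inside one process:

/* OpenMP flavour: one process, several threads sharing memory.
   Compile with e.g.:  gcc -fopenmp hello_omp.c -o hello_omp     */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel            /* fork a team of threads */
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}

With MPI you get separate processes, possibly on different nodes of the cluster, that only talk by passing messages:

/* MPI flavour: separate processes, possibly on different machines.
   Compile with:  mpicc hello_mpi.c -o hello_mpi
   Run with:      mpirun -np 4 ./hello_mpi                         */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us? */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}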

So, I did some research on the sieve of Eratosthenes and sequential program code, and I’m not a nuclear physicist, but I did find some really useless things in some implementations found on the net. Gosh, today everyone with an Internet connection can write something and publish it! Haha, me too! 😀 Tell me I’m goood. Pleaseee??!! Ah, forget it. Back to the subject. I did some research on OpenMP and decided to try to parallelize the sieve of Eratosthenes with OpenMP, having already parallelized the code with MPI. It seemed like a good idea to let both programs calculate the list and compare the times they needed. Since I’m going to give you a link to an archive containing all the source code, I’m not going to get all dirty explaining the code. Buuut! I am going to share the results of the testing with you. Those are:

OMP v1 and OMP v2 are two versions of the OpenMP-parallelized code. The word ukupno (Serbian for “total”) is the time needed for the whole program to finish. As you can see, OpenMP is a little bit better compared to MPI. Of course, MPI has some implicit barriers in its collective communications, and that is what is keeping it from working as fast as OpenMP.
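Just to show the flavour of what the OpenMP versions do (this is a simplified sketch I’m writing here, not the exact OMP v1 or OMP v2 code from the archive), the loop that crosses out multiples is a natural place for a parallel for, and omp_get_wtime() gives you the total time (the ukupno from the results):

/* Simplified OpenMP sieve of Eratosthenes - a sketch of the idea,
   not the exact OMP v1 / OMP v2 code from the archive.
   Compile with:  gcc -fopenmp sieve_omp.c -o sieve_omp            */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n = 10000000;                 /* sieve up to n */
    char *is_prime = malloc(n + 1);
    for (long i = 0; i <= n; i++)
        is_prime[i] = 1;
    is_prime[0] = is_prime[1] = 0;

    double start = omp_get_wtime();

    for (long p = 2; p * p <= n; p++) {
        if (!is_prime[p])
            continue;
        /* crossing out the multiples of p is independent work,
           so the iterations can be split among the threads      */
        #pragma omp parallel for
        for (long m = p * p; m <= n; m += p)
            is_prime[m] = 0;
    }

    double ukupno = omp_get_wtime() - start;  /* total time */

    long count = 0;
    for (long i = 2; i <= n; i++)
        count += is_prime[i];

    printf("primes up to %ld: %ld, time: %f s\n", n, count, ukupno);
    free(is_prime);
    return 0;
}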

Anyway, you can download the source here, where you will also find instructions for running the programs.

Have a nice day! Bye!

Making ganglia work

Hey hey!

Today I’m gonna say a few words about the Ganglia monitoring system.

First things first: I’m a newb at it. But I have managed to make it work on the cluster. I went through compiling version 3.1.2, making the frontend work, and installing all the necessary packages like libconfuse, rrdtool and other stuff. Everything went smoothly, except the graphs didn’t show! 😦 Buuut! Today, after all that blood, sweat and tears 🙂 the graphs finally appeared.

The problem was that I wanted to use a custom directory for the RRD files. Apparently, I did not set the paths correctly. In the end I decided to create the /var/lib/ganglia/rrds directory, and the graphs showed up! YEY 🙂 Cheers!
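For the record, the setting in question is rrd_rootdir in gmetad.conf. If you point it at a custom directory, that directory has to exist and be writable by the user gmetad runs as, otherwise the graphs stay empty. Something like this (from memory, so double-check against your own config):

# /etc/ganglia/gmetad.conf  (path may differ on your install)
# Where gmetad stores the round-robin databases used for the graphs.
# If you change this, the directory must exist and be writable
# by the user gmetad runs as, otherwise the graphs stay empty.
rrd_rootdir "/var/lib/ganglia/rrds"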

PBS4C#

Ok ppl, this is something I’ve been working on for the past few weeks.

There is a particular project called pbs4java. With it, you can call PBS commands from methods implemented in Java. I used it as an idea.

There was a certain need for a .NET implementation, so I did some programming to provide some basic methods for working with PBS from .NET. Now, don’t get me wrong, I’m not a .NET programmer!!! 😀 Nor do I like M$, buuut as this is for science, I think I can live with it.
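Just to illustrate the general idea (this is not code from the actual project, which is in C#, and the script name here is made up), the whole trick boils down to wrapping the PBS command-line tools and reading back what they print. A rough sketch of that in plain C:

/* Rough sketch of the wrapping idea behind pbs4java / PBS4C#:
   run a PBS command (here qsub) and capture its output.
   This is an illustration in C, not code from the actual project. */
#include <stdio.h>
#include <string.h>

/* Submit a job script with qsub and return the job id it prints. */
static int submit_job(const char *script, char *job_id, size_t len)
{
    char command[512];
    snprintf(command, sizeof(command), "qsub %s", script);

    FILE *p = popen(command, "r");          /* run qsub, read its stdout */
    if (p == NULL)
        return -1;

    if (fgets(job_id, (int)len, p) == NULL) {
        pclose(p);
        return -1;
    }
    job_id[strcspn(job_id, "\n")] = '\0';   /* strip trailing newline */

    return pclose(p) == 0 ? 0 : -1;
}

int main(void)
{
    char job_id[128];
    if (submit_job("myjob.pbs", job_id, sizeof(job_id)) == 0)   /* hypothetical script name */
        printf("submitted job: %s\n", job_id);
    else
        printf("qsub failed\n");
    return 0;
}

The real methods of course go through .NET process handling and do proper parsing of the output, but the principle is the same.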

Anyway, as soon as I finish up my first release, I’m going to make some noise in here 🙂