Progress report after mid-term

Woah! I am enjoying every bit of coding now. I have figured out the whole algorithm from the basics! I had been a fool all this time 🙂 . Still, I don't think my current understanding would have been possible if I hadn't made all those mistakes. So thumbs up!

Spending time with the exact inference code already implemented in the pgmpy library, along with a meticulous reading of those algorithms in Koller's PGM book, helped me enormously. Variable Elimination and Belief Propagation are really great methods for beginners to study before jumping into approximate inference!
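For anyone following along, this is the kind of thing I mean. A minimal sketch of exact inference with pgmpy, assuming the current API (the model class has been renamed across pgmpy versions, and the network and CPD values below are made up purely for illustration):

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# A tiny two-node network: Rain -> WetGrass, with made-up probabilities.
model = BayesianNetwork([('Rain', 'WetGrass')])
cpd_rain = TabularCPD('Rain', 2, [[0.8], [0.2]])
cpd_wet = TabularCPD('WetGrass', 2,
                     [[0.9, 0.2],   # P(WetGrass=0 | Rain=0), P(WetGrass=0 | Rain=1)
                      [0.1, 0.8]],  # P(WetGrass=1 | Rain=0), P(WetGrass=1 | Rain=1)
                     evidence=['Rain'], evidence_card=[2])
model.add_cpds(cpd_rain, cpd_wet)

# Variable Elimination sums out the non-query variables one factor at a time.
infer = VariableElimination(model)
print(infer.query(['WetGrass']))
```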

The messages are nothing but the Factor data structures of the pgmpy library. Gosh! I never knew that. I am happy that I now understand how to fit the current Mplp implementation into the library. Status: I have completely reworked the 1st paper into pythonic, class-oriented code. It works well on some of the example UAI files from the PASCAL examples.
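To make that concrete, here is a minimal sketch of the core max-marginalization step expressed directly on a pgmpy factor. The pairwise potential is made up, and I am assuming the DiscreteFactor class of the current library:

```python
import numpy as np
from pgmpy.factors.discrete import DiscreteFactor

# A made-up pairwise potential theta_ij(x_i, x_j) over two binary variables.
theta_ij = DiscreteFactor(['i', 'j'], [2, 2], np.random.rand(4))

# An MPLP-style message from the edge {i, j} to node j is itself a factor:
# just max out x_i from the pairwise potential.
message_to_j = theta_ij.maximize(['i'], inplace=False)
print(message_to_j.scope())  # ['j'] -- the message lives on node j alone
```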

I am itching to finish off the coding; it has become so interesting now that I know what the algorithm means!

Check out my new PR: pull 449

I hope you notice improvements there.

As far as the road-blocks are concerned… they never seem to leave me. After implementing the 1st paper twice (I hope you remember the old crude implementation), I learned that Sontag later modified the update slightly in his PhD thesis from the original one in the Fixing Max-Product (2007) paper that I had implemented. The new one is as follows:


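Rewritten in my own notation (so do double-check it against the thesis), the generalized cluster-based update sends, for every cluster $c$ and every intersection set (edge) $e$ contained in $c$:

$$
\lambda_{c \to e}(x_e) = -\lambda_e^{-c}(x_e) + \frac{1}{|\hat{E}(c)|} \max_{x_{c \setminus e}} \Big[\, \theta_c(x_c) + \sum_{\hat{e} \in \hat{E}(c)} \lambda_{\hat{e}}^{-c}(x_{\hat{e}}) \,\Big]
$$

where $\lambda_e^{-c}(x_e) = \theta_e(x_e) + \sum_{c' \ni e,\ c' \neq c} \lambda_{c' \to e}(x_e)$ collects every message into edge $e$ except the one from $c$ itself, and $\hat{E}(c)$ is the set of edges contained in $c$. For a pairwise cluster $c = \{i, j\}$ this collapses back to the original 2007 edge update with its familiar factor of $\tfrac{1}{2}$.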
Of the set of 3 papers, the later 2 were written by Sontag, along with the C++ code, so I intended to follow his version and change my code accordingly. This part turned out to be trivial, but it still counts as a road-block!

The next thing to expect from my side is the Triplet clustering code; a rough sketch of the selection idea I plan to implement is below.
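As a preview, the selection criterion from the last paper, as I currently understand it, says a triplet is worth adding when maximizing its three edge beliefs jointly over one consistent assignment gives a smaller value than maximizing each edge independently. A hypothetical numpy sketch (the function name and array conventions are mine):

```python
import numpy as np

def triplet_score(b_ij, b_jk, b_ki):
    """Guaranteed dual-bound decrease from adding the cluster {i, j, k},
    given the three current edge beliefs as 2-D arrays (b_ij[x_i, x_j], ...)."""
    # Each edge maximized on its own, ignoring consistency.
    independent = b_ij.max() + b_jk.max() + b_ki.max()
    # Joint maximum over one consistent assignment (x_i, x_j, x_k).
    joint = (b_ij[:, :, None] + b_jk[None, :, :] + b_ki.T[:, None, :]).max()
    return independent - joint  # >= 0; add the triplets with the largest score
```

Till then, bye bye.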


Mid-Term Summary

The mid-term is over. I am through the first half of Google Summer of Code! As far as accomplishments are concerned, I have almost implemented the culmination of the 3 papers in Python and have kept updating my pull request here: pull 420.

Most of my effort went into getting the algorithm to work at all, so I was not able to polish the finer points. Abinash is not happy with the current code: there are many holes in it, and my coding style isn't good. It looks like the way I spent my time, trying to understand the algorithm from its implemented code, wasn't right. I needed to get a grip on the algorithm itself.

So I have spent a lot of time on the theory again. This time, I made meticulous notes; check them out here: http://kislaynotes.droppages.com/

My expectation for the next period is to completely rework the current implementation. The current code can be thought of more as a sample to understand what lies ahead.

One thing has become clear: I need to get more object-oriented. My current approach was more like porting the C++ code to Python while disregarding pgmpy's inherent classes. I now see how sheepishly I read the already implemented exact inference algorithms in the library. I need to get more pythonic; my current code reads like C++, which is very bad.
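For instance, the reworked code should hang off pgmpy's own abstractions instead of free functions pushing raw arrays around. A rough, hypothetical skeleton of what I have in mind (the class layout is mine, not an existing pgmpy API):

```python
from pgmpy.inference.base import Inference

class Mplp(Inference):
    """Hypothetical skeleton: MPLP as a pgmpy Inference subclass, so that
    models and messages come from the library's classes, not bare arrays."""

    def __init__(self, model):
        super().__init__(model)
        # Dual variables (messages), one per (cluster, intersection set) pair,
        # stored as pgmpy factors.
        self.messages = {}

    def _update_cluster(self, cluster):
        # One block coordinate descent step on the dual objective
        # (the edge/cluster update from the papers) goes here.
        raise NotImplementedError

    def map_query(self):
        # Pass messages until the dual bound stops improving, then decode
        # a MAP assignment from the node beliefs.
        raise NotImplementedError
```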

Anyway, my focus now is to spend time with the already implemented library, see how the 1st paper fits in, and discuss a lot with my mentor. I have lost some precious time, but now that I know some of the things I was doing wrong… I hope to proceed swiftly.


Bye! I hope to get some cool Python coding done in the next few days!