Getting OpenGM to work …

I have been trying hard to get accustomed to the OpenGM library, so that I understand what I will need for my implementation of approximate inference algorithms.

I will jot down my findings here:

Below is the most important image, which will help me write things down quite cleanly.

factor_graphs_opengm

 

The things that OpenGM needs to completely specify the above factor model are:

  1. Variables: the number of variables, along with the cardinality (the number of labels) of each. This is done by constructing an object called a label space.
  2. Functions: these describe the different small \varphi_i that are used to decompose the bigger \varphi.
  3. Operations: these can be addition, multiplication, max and min; they are the operations through which \varphi decomposes into the \varphi_i.
  4. Factors: the last missing link, which connects each \varphi_i with the respective x_v's intended as its inputs.
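
To keep these four ingredients straight in my head, here is a minimal sketch in plain Python of a three-variable chain model. This is not the OpenGM API — the names, tables and numbers are all made up for illustration:

```python
import itertools

# 1. Label space: number of labels for each variable.
num_labels = [2, 2, 2]            # three binary variables x_0, x_1, x_2

# 2. Functions: explicit value tables for the small phi_i
#    (the analogue of OpenGM's ExplicitFunction idea).
phi_01 = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.9}
phi_12 = {(0, 0): 0.7, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.9}

# 4. Factors: bind each function to the variables it takes as input.
factors = [(phi_01, (0, 1)), (phi_12, (1, 2))]

# 3. Operation: here multiplication, i.e. phi(x) = prod_i phi_i(...).
def phi(labeling):
    value = 1.0
    for table, variables in factors:
        value *= table[tuple(labeling[v] for v in variables)]
    return value

# Brute-force the best labeling (max over the product of factors).
best = max(itertools.product(*[range(n) for n in num_labels]), key=phi)
```

Swapping the multiplication for addition (and max for min) gives the other operation combinations mentioned above.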

Writing a complete model in OpenGM using ExplicitFunctions.

explicit_func_code_blog

 

 

The above was a very primitive model. Let us now build a complete one, like the one presented below:

factor_snap

 

 


Woah! Perceptron has finally ended

For the last 3 months, my two friends and I worked hard on this event. We wanted to give our juniors a platform from which they could learn. The glut of events presently held in Prastuti always made me sad. To get past all of this, we brought Perceptron.

Perceptron is an event based on computer vision. The aim of the event is to develop a face detection framework from scratch in MATLAB. We used the paper by Viola and Jones for reference. It took us 4 workshops to present an overview of the system to an audience with no previous knowledge of machine learning or image processing!! Personally, I think we may have been just a bit too harsh on them. Nevertheless, 24 students stuck with it till the final day of the event.

 

The whole procedure looked something like this:

  1. Workshop 1 (4-Jan-2014):
  • Formation of Integral Images.
  • Introduction to convolution and Haar features.
  • Basic needs of a face detection framework as a whole.

  2. Workshop 2 (17-Jan-2014):

  • What is machine learning?
  • Supervised vs Unsupervised learning
  • Classification vs Regression.
  • Introduction to Perceptron Learning algorithm.
  • Challenge 1 released.

  3. Workshop 3 (15-Feb-2014):

  • Code PLA in Matlab step by step.
  • Disadvantages of PLA and its ineffectiveness on non-linearly separable data.
  • Modifications in PLA to make it more flexible.
  • A revision of Face detection framework.
  • Challenge 2 released.
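
For reference, the PLA we coded step by step looks roughly like this. The workshop version was in MATLAB; this is my own Python sketch, with a bias term folded in as a constant feature:

```python
def pla(points, labels, max_iters=1000):
    """Perceptron Learning Algorithm on linearly separable data.
    points: list of feature tuples; labels: +1/-1.
    Returns a weight vector w with the bias term at w[0]."""
    w = [0.0] * (len(points[0]) + 1)
    for _ in range(max_iters):
        updated = False
        for x, y in zip(points, labels):
            xa = (1.0,) + tuple(x)                 # augment with constant 1
            s = sum(wi * xi for wi, xi in zip(w, xa))
            if (1 if s > 0 else -1) != y:          # misclassified point:
                w = [wi + y * xi for wi, xi in zip(w, xa)]  # nudge w toward it
                updated = True
        if not updated:                            # a full clean pass: converged
            break
    return w
```

On non-linearly separable data the loop never reaches a clean pass, which is exactly the weakness (and the motivation for the modifications) discussed in this workshop.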

  4. Workshop 4 (29-March-2014):

  • Framework revision.
  • Introduction to Adaboost.
  • Challenge 3 released.
  • Quiz.

Stills from the workshops 🙂

workshop 4 stills

 

Here my teammate and I are showing how to calculate the error in the AdaBoost algorithm used in the Viola-Jones framework.


 

My team and I in action.
