This is the second full-time week spent on things besides my PhD, and the time has been split between algorithms and machine learning. Cramming algorithms (from computer science) has surprised me with how little I know about what goes on behind the scenes. More importantly, I’ve become motivated to absorb powerful design principles and get a grasp of the useful data structures.
Motivated by last week’s success, I took a similar approach and found MIT’s open course 6.006 Introduction to Algorithms to be good starting material. The instructors are great! I highly recommend Prof. Demaine’s lectures on dynamic programming. Some of the assignments are quite challenging, but most of them are fun too. This one on image resizing amazed me!
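For flavour, here is a minimal sketch of the dynamic-programming recurrence behind seam carving, which (if I remember the assignment right) is what this style of content-aware image resizing builds on. The `energy` array is a hypothetical per-pixel cost, e.g. a gradient magnitude; this is my own toy code, not the assignment’s solution:

```python
import numpy as np

def min_seam_cost(energy):
    """Dynamic-programming cost of the cheapest vertical seam.

    energy: 2-D array of per-pixel costs (e.g. gradient magnitude).
    Returns the total energy of the minimum vertical seam.
    """
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        for j in range(w):
            # A seam pixel in row i can extend any of the up-to-three
            # neighbours in row i-1: (i-1, j-1), (i-1, j), (i-1, j+1).
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    return cost[-1].min()
```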
Results summary
Theory notes
- Bishop Notes updated
  - EM for factor analysis
  - Probabilistic graphical models fundamentals
  - Hidden Markov Model
- Reinforcement learning
  - Basic concepts of Markov decision processes, based on the CS229 lecture notes
- Algorithms
  - Notes from MIT 6.006 Introduction to Algorithms
Notebooks
- K-means for image compression
  - Applied an accelerated, vectorised implementation (see the sketch after this list)
  - EM interpretation
- Reinforcement learning of the inverted pendulum
  - Basic implementation of a Markov decision process
- Hidden Markov Model
  - Toy example from Wikipedia
  - Implementation of the forward-backward algorithm for computing posterior probabilities and the Viterbi algorithm for finding the optimal path (see the sketch after this list)
  - Learning for the multinomial example needs further work.
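Here is roughly what I mean by the vectorised K-means implementation: the assignment step computes all point-to-centroid distances at once with NumPy broadcasting instead of looping over pixels. A minimal sketch (variable names and defaults are mine, not the notebook’s):

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Minimal vectorised K-means for image compression.

    pixels: (n, 3) array of RGB values; k: number of colours.
    Returns (centroids, labels).
    """
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assignment step, fully vectorised: (n, k) squared distances
        # via broadcasting -- no Python loop over the n pixels.
        d2 = ((pixels[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return centroids, labels
```

Compressing the image is then just `centroids[labels]` reshaped back to the image dimensions.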
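And a sketch of the Viterbi part, run on Wikipedia’s toy example (two hidden states, Healthy and Fever, observing normal/cold/dizzy); working in log space keeps longer sequences from underflowing:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path (Viterbi), in log space.

    obs: sequence of observation indices
    pi:  (S,) initial state probabilities
    A:   (S, S) transition matrix, A[i, j] = P(state j | state i)
    B:   (S, O) emission matrix,   B[i, o] = P(obs o | state i)
    """
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]   # best log-prob ending in each state
    back = []                              # backpointers, one array per step
    for o in obs[1:]:
        scores = delta[:, None] + logA     # scores[from, to]
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + logB[:, o]
    # Trace back the optimal path from the best final state.
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Wikipedia's toy example: states (Healthy, Fever), obs (normal, cold, dizzy)
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2], pi, A, B))  # -> [0, 0, 1]: Healthy, Healthy, Fever
```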
Some thoughts
Been wondering: how important is understanding algorithms for practising machine learning (ML)? On the one hand, one would most likely work on a team where software developers handle those problems. On the other hand, I feel strongly about the power of knowing the fundamentals well. The reasons: (1) efficient implementation is desirable, and programming logic and intuition are useful; (2) some ML algorithms directly benefit from good data structures and algorithm design, e.g. the link between reinforcement learning and dynamic programming (see the sketch below); (3) it really helps with understanding the complexity of a program, which comes in handy when scaling things up.
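To make point (2) concrete: value iteration for an MDP is textbook dynamic programming, since each Bellman backup reuses the previous sweep’s values as solved subproblems. A toy sketch (my own code, not from the notebooks; the state-only reward follows the CS229 convention):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration: dynamic programming on the Bellman optimality equation.

    P: (A, S, S) transition probabilities, P[a, s, s'] = P(s' | s, a)
    R: (S,) reward for being in each state
    Returns the optimal state values V.
    """
    V = np.zeros(R.shape[0])
    while True:
        # Bellman backup: the previous V plays the role of the memo table,
        # so each sweep reuses the overlapping subproblems already solved.
        Q = R + gamma * (P @ V)        # (A, S): value of each action in each state
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```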
Probabilistic graphical models are beautiful! Last time I mentioned the Bayesian perspective, and the visualisation of such models enlightened me once more. Highly recommend Bishop (2006) §8.1, which builds the graphical model for the same Bayesian curve-fitting example as §1.2.5.
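To spell out why the picture is so appealing: the directed graph in §8.1 encodes exactly the factorisation of the joint distribution for that curve-fitting model, with one node for the weights $\mathbf{w}$ and a plate over the $N$ observed targets $t_n$:

$$p(\mathbf{t}, \mathbf{w}) = p(\mathbf{w}) \prod_{n=1}^{N} p(t_n \mid \mathbf{w})$$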
Things to follow up on
- There is a bigger picture behind PCA, such as probabilistic PCA and kernel PCA. Really tempted to dig in, as I love the basic form!
- Variational methods are essential for intractable problems, such as some extended HMMs.