OpenMLU Newsletter - Issue #2

By Sanny Kim • Issue #2
Hi everyone,
Welcome to the OpenMLU newsletter 🥳 Since our first edition, 400 of you have joined, so a huge thank you to all of you! With this newsletter, I’m hoping to highlight ML resources that will help you become a better ML learner and practitioner.
We will continue with our theme of hidden gems 💎. Last time we featured resources for topics like causal inference, graph neural networks and computer vision. This issue will go into underappreciated resources in CogSci, GOFAI, ML, NLP and RL.
If you have any questions, feedback, or suggestions for the next issue, feel free to let me know! You can always reach me via 📩

Cognitive Science
So much enthusiasm for the receptive field of retinal ganglion cells.
  • MIT The Human Brain (2018): It’s genuinely inspiring listening to Nancy Kanwisher, whose passion for her research permeates all her lectures. I’d recommend the course to anyone who would like to learn more about the human brain. It’s very beginner-friendly and will teach you about many essential ideas like Marr’s levels of analysis, fMRI, double dissociations, and V1. Since it’s taught by Kanwisher, expect a lot of fun face examples! 
  • UC Dublin Intro Cognitive Science (2020): Another very approachable resource for people who aren’t familiar with this field. Compared to Kanwisher’s course, Fred Cummins focuses much less on neuroscience, as the title implies, and instead provides a broader overview of Cognitive Science through topics such as language, perception, representations, and speech.
  • MIT Intro to Neural Computation (2018): A fascinating course on neural computation. Michale Fee articulates difficult ideas in an easily digestible manner. It’s quite an interdisciplinary course as it draws from concepts in biochemistry, CS, math, ML, physics, and neuroscience. The course definitely increased my appreciation for the complexity of neurons. And the last class is on Hopfield networks! 
  • Neuromatch (2020-present): Neuromatch has somehow managed to pull off two virtual conferences and one summer academy in the middle of a global pandemic. While the conference has plenty of interesting talks and (somewhat) productive debates, the academy features a long list of awesome tutorials ranging from network causality to real neurons to RL.
Good-Old-Fashioned AI
Knowledge bases and Wumpus worlds, a symbiotic relationship.
  • UC Berkeley Intro to AI (2018/2019): This course is not solely focused on GOFAI but is more of a buffet of different topics in AI, ranging from search and game trees to RL, neural networks, and robotics. The lecture videos are from 2018, but the 2019 website has some nice complementary discussion videos covering the class material and exam prep. I found Pieter Abbeel’s explanations of search and heuristics to be the best introduction to the topic. 
  • Stanford AI: Principles and Techniques (2019): While neural nets and optimization are introduced towards the end of CS188, Dorsa Sadigh and Percy Liang introduce them at the beginning of CS221. The course is generally similar to CS188, but is more fast-paced and has a couple of lectures on logic, which is a big topic that’s missing in CS188. Both courses have excellent assignment and exam material (featuring Pac-Man!). 
  • Francisco Iacobelli - Russell & Norvig book series (2015): To dive deeper into notions like Horn clauses and forward and backward chaining, the Russell and Norvig book has some lovely material. I’m not sure if the free book PDFs online are legal; alternatively, this video series covers some parts of the book and is really well explained. 
Machine Learning
ML Blinks: the most beautiful ML course online?
  • Bloomberg ML (2017): David Rosenberg does a phenomenal job balancing practical applications and mathematical background in this underrated course. I particularly recommend his lectures on optimization, which offer the most intuitive explanations of topics like Lagrangian duality and subgradients that I know of. 
  • Caltech Data-Driven Algorithm Design (2020): The first half of this graduate-level course consists of classes on model & algorithm design (e.g. learning to search and Bayesian optimization) by Yisong Yue and his TAs, while the other half features guest lectures on various new lines of research. I especially enjoyed the guest lectures on differentiable optimization-based modeling for ML and device placement optimization.
  • VU ML (2020): Peter Bloem, the author of this incredible blog post on Transformers, also teaches this ML course at VU Amsterdam. In the same style as his blog, Peter uses simple visuals to express ideas that consequently become much more digestible. 
  • ML Blinks (2021): Speaking of visualizations, ML Blinks is one of the best visual ML resources online. Islem Rekik has a gift for expressing ideas through beautiful illustrations. Her videos are basically what I’d imagine if 3blue1brown and mathematicalmonk collaborated.
Natural Language Processing
Just a small preview of the content in Eisenstein’s NLP book.
  • CMU Neural Nets for NLP (2020): I really liked the 2019 version and the 2020 version looks even better. This graduate-level course (taught by Graham Neubig and TAs) is similar in content to Stanford’s CS224n, but is a bit more advanced and up-to-date. For instance, CS224n’s contextual word embedding lecture goes up to BERT, whereas CMU’s lecture covers XLNet and RoBERTa as well.
  • Jacob Eisenstein Intro to NLP (2018): A textbook covering a wide range of topics in NLP with a focus on search and learning. You can use it to go more in-depth on topics covered in an NLP course or as a reference book. I personally used the book to learn more about logical semantics and predicate-argument semantics (chapters 12-13) and thought the concepts were clearly explained and examples well-chosen. 
  • Oxford & DeepMind Deep Learning for NLP (2017): While I wouldn’t recommend going through the whole playlist, as there is more up-to-date material like CS224n, there are some real gems in this course. In particular, Phil Blunsom’s lectures on RNNs & language modeling and Andrew Senior’s lectures on speech recognition & text-to-speech are fantastic. 
Reinforcement Learning
Hats off to OpenAI's design team.
  • UWaterloo Reinforcement Learning (2018): Half lectures and half paper reviews, this graduate-level course is worth going through for anyone interested in building a solid understanding of RL and some of its theoretical underpinnings. Pascal Poupart’s approach to teaching makes it easy to follow proofs and conceptual ideas. 
  • UAlberta Reinforcement Learning (2019): This 4-part MOOC is likely the most approachable RL course online. Covering essential ideas like Markov Decision Processes, Dynamic Programming, Temporal Difference Learning and Policy Gradients, Martha and Adam White have built an awesome ecosystem around the lessons, from assignments to discussion forums to quizzes to capstone projects.
  • OpenAI Spinning Up (2018): One of the best all-around deep RL resources out there. Whether you use the site as a starting point, a review of key algorithms, or a reference for PyTorch/TensorFlow RL implementations, you’ll probably find what you need in their docs. There is also a complementary Spinning Up workshop recording worth watching if you want a quick intro to deep RL.
Cheers
If you’ve made it this far, thanks for reading! 
Next time I’m hoping to go into topics like Statistical Learning Theory and Unsupervised Deep Learning. I’m still going through some of the material, so the next issue might take a bit longer. 
If there’s anything you’d like to let me know, just hit reply on this issue 😄 .
Sanny Kim

Curating the internet's best ML resources.
