
First OMSCS course is complete!

Disclaimer: This review applies to the Fall 2018 instance of CS 7637: KBAI.

tldr

So, tldr, I thought this course was fine.

I didn't know much about the topic - knowledge-based AI systems - before signing up, but it sounded intriguing, like maybe it would combine some sort of sublimated Oliver Sacksian neuroscience wonder with programming fun.

It was sorta like that, but I honestly ended up finding the methods of KBAI frustratingly vague. I mean, people get grumpy about the inscrutability of neural nets, and AI modeled on human-like cognitive systems certainly has a contrasting appeal. But coding such systems? Especially when we don't have a full understanding of how human cognition works? Oooof.

The structure of the course was great, though I felt an increasing disconnect between the project and everything else. That said, it was a good, gentle intro to the OMSCS program overall.

Structure

  • Twenty-six lectures on Udacity, each about 45-90 minutes long.
  • Three homeworks, each covering at least one lecture topic as well as some open-ended, creative stuff along the lines of "hey, so how about that crazy AI, eh?!" If you hate writing, you will be sad. If you love writing, these are awesome!
  • Three exams, which covered the lecture material and which I found very ugghh. These were timed, proctored by an Orwellian webcam auto-proctor (heh), and generally relied on tricksy wordings of concepts as opposed to creative applications of concepts. I liked the exams the least, and felt they contributed the least to my learning. (That said, one of the co-instructors, Dr. David Joyner, released some very cool post-exam analyses after each one, which seem to indicate I may have been an outlier?)
  • One zany project. This was the heart and meat of the course for me; it was the place I learned the most. Basically, we had to build - from scratch (!), and in the language of our choosing (either Python or Java) - an AI agent that could parse visual intelligence tests of the following kind:

[Image: an example Raven's Progressive Matrices (RPM) problem]

If that sounds daunting, oh man, you bet it was! The project was an emotional roller coaster of "omg how do i even" and "OMG IT'S WORKING IT'S WORKING".

The project work felt very distinct - even disconnected - from what the lectures were telling me. I basically had to be resourceful and self-teach stuff like (a) how do you parse images in Python (we were allowed to use the Pillow library - a basic image-processing library - but not the more advanced, computer-vision-y OpenCV), (b) once you "see" an image, how do you compare images, and (c) how do you develop generalized logic to compare arbitrary transitions and patterns.
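Just to give a flavor of (a) and (b), here's a rough sketch of the kind of low-level plumbing involved - to be clear, this is not my actual agent code, and the file names are placeholders. It just loads two frames with Pillow and scores how similar they are by counting identical pixels:

```python
# Rough sketch only: load two frames with Pillow and score their similarity.
# "A.png" / "B.png" are placeholder file names, not the course's actual files.
from PIL import Image, ImageChops


def load_frame(path):
    """Open an image, convert to grayscale, and normalize its size."""
    return Image.open(path).convert("L").resize((184, 184))


def similarity(img_a, img_b):
    """Return a 0..1 score: 1.0 means the two frames are pixel-identical."""
    diff = ImageChops.difference(img_a, img_b)
    histogram = diff.histogram()   # counts of each pixel-difference value 0..255
    total = sum(histogram)
    return histogram[0] / total    # fraction of pixels with zero difference


a = load_frame("A.png")
b = load_frame("B.png")
print(similarity(a, b))
```

The genuinely hard part, of course, was (c): turning that kind of pixel-poking into generalized logic that holds up as the problems get harder.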

It was madness!

And a lot of fun. My guesstimate is that the class - which had ~400 people in it - had a long-tail distribution: a few people who put many, many hours into developing near-perfect agents, and a lot of people who put in a decent number of hours to get decent performance. I was more in the fat part of the distribution (yo, I got a life to live here); my agent performed well enough (it crossed each increasingly difficult hurdle as we faced harder and harder visual IQ tests, but often just barely), and I did a fair amount of love/hate refactoring throughout.

I do think I missed out on some "ah ha!" moments where I could truly blend the lectures' vague-feeling "let's model incremental concept learning!" with my AI agent (which mostly used heuristics of the flavor sketched below). But I just didn't have the time to make that leap into KBAI nirvana. (And I'm guesstimating I put ~20 hours total into the agent?)
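For the curious, here's a toy illustration of what I mean by "heuristics" - emphatically not the lectures' concept-learning methods, and not my real agent either. It reuses the load_frame/similarity helpers from the sketch above: for a 2x2 problem "A is to B as C is to ?", it guesses which simple transformation explains A-to-B and picks the answer closest to that transformation applied to C:

```python
# Toy heuristic sketch (assumes load_frame/similarity from the earlier snippet).
# Try a handful of simple transformations, see which best maps A onto B,
# apply the winner to C, and pick the answer frame closest to the result.
from PIL import Image

TRANSFORMS = {
    "identity":  lambda im: im,
    "flip_lr":   lambda im: im.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
    "flip_tb":   lambda im: im.transpose(Image.Transpose.FLIP_TOP_BOTTOM),
    "rotate_90": lambda im: im.rotate(90),
}


def solve_2x2(a, b, c, answers):
    """a, b, c: PIL frames; answers: dict of option number -> frame."""
    # Which candidate transformation best explains A -> B?
    best = max(TRANSFORMS.values(), key=lambda t: similarity(t(a), b))
    predicted = best(c)
    # Which answer option looks most like the predicted frame?
    return max(answers, key=lambda k: similarity(predicted, answers[k]))
```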

Effort

Which brings me to effort! So: this was less effort than any of the Harvard Extension School classes I've taken. And thank God for that, because those classes had me doing ~20 hours/week of problem sets and, man, that shit is unsustainable. (And yes, I did learn a lot, but I do think there were decreasing marginal returns on some of those p-sets.)

I'd guesstimate that I spent, on average, about 7-10 hours/week on this course, and I wasn't super disciplined or time-efficient. I watched the lectures at about 1.25x - 1.5x speed, which was normally fine. I read a little bit of Winston. I was a speed-demon on the exams and homeworks, and I spent the bulk of my time agonizing over the project. This was fine. Again, I probably could have invested more and gotten more out of it, but - well - the thing I love about the OMSCS overall is its flexibility.

Would I recommend this?

  • YES for first-semester OMSCS students. Dr. Joyner is a great edtech educator, and he seems very dedicated to running these courses well. I think any of his courses - like this one, CS 6750: Human-Computer Interaction, or CS 6460: Educational Technology - is a great "intro" to the OMSCS overall. I actually want to take his other courses just because I'm confident they'll be so well-run.
  • NO for people interested in machine learning, since it felt like this type of AI isn't really practiced anymore. Sometimes it's valuable to see a contrasting methodology, even if you never use it, but I found much of the lecture material to be so far in the camp of "I will definitely never do it this way" that it got pretty existentially doubtful. Like, why? WHYYYYY?

Up next?

I'm excited for a bunch of courses. Oddly, the digital Wolf of Wall Street course - Machine Learning for Trading - has me pretty pumped.


This post is part 4 of the omscs series:

  1. Two announcements
  2. A Groundhog Day of linear algebras
  3. oh no
  4. OMSCS Course review: Knowledge-based Artificial Intelligence