Google’s Machine Learning Coursera Specialisation Review

Mark MacArdle
4 min read · Jul 24, 2018

I recently started the Google machine learning specialisation on Coursera, called Machine Learning with TensorFlow on Google Cloud Platform. I'd previously done Andrew Ng's popular machine learning course on Coursera and thought it was very well taught, but also very academic. With this course coming from a company and built around popular private sector tools on Google Cloud, I had high expectations.

There are five courses in the specialisation. They started off great, but as I progressed I became disappointed in the labs and eventually stopped after the third course. There's more detail below, but the TL;DR is that I wasn't doing much independent work, so I didn't think the learning would stick.

Course 1: How Google Does Machine Learning

This was my favourite course in the specialisation. It was a good intro to when machine learning can benefit a process and what Google is trying to achieve with it. Before this course I'd only thought of machine learning as something applied to specific tasks (e.g. image recognition or forecasting). The idea of using it to drive continuous improvement in a whole process, rather than just the individual tasks, was a real eye opener for me. I wrote an earlier blog post discussing this in more depth. They did a great job of clearly explaining why machine learning can be so transformative for their business.

Course 2: Launching into Machine Learning

This course mainly covered the basics of machine learning (e.g. what a loss function is, common algorithms, what neural nets are). As I've previously done introductory machine learning courses, it didn't have a lot of new information for me, aside from two things.

The first was that it introduced Datalab, an easy way to create a virtual machine on Google Cloud that can run Jupyter notebooks. It's really easy to set one up, and it comes with a lot of the data science Python libraries already installed, so it's convenient to use.
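For reference, spinning one up is a couple of commands with the Datalab CLI that ships with the Cloud SDK (the instance name and zone below are just placeholders):

```shell
# Create a Datalab VM and open the notebook UI (name and zone are examples)
datalab create my-datalab-vm --zone us-central1-a

# Reconnect to an existing instance later
datalab connect my-datalab-vm
```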

The second useful piece of info was how to split data into training and test sets repeatably. Previously I'd used functions that split data with random numbers and not thought twice about it. However, if you work in a team, or later want to rerun the model on an updated data set, you need a splitting method that gives the same split every time. The method they use is to hash one of the variables with the FARM_FINGERPRINT function (which gives a long integer), take that value modulo 10 (or however many buckets you want) and split on the remainder.

Splitting off 80% of a data set with a hash function
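The course does this in BigQuery SQL with FARM_FINGERPRINT; the same idea can be sketched in plain Python, using MD5 as a stand-in deterministic hash (the field names and 80/20 threshold here are illustrative):

```python
import hashlib

def bucket(key: str, num_buckets: int = 10) -> int:
    # Deterministically hash a key into one of num_buckets buckets.
    # FARM_FINGERPRINT plays this role in BigQuery; MD5 is used here
    # only so the sketch stays pure standard library.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % num_buckets

# Hashing on a date column: every row with the same date always lands
# in the same bucket, so the split is stable across reruns and teammates.
dates = [f"2018-07-{d:02d}" for d in range(1, 31)]
train = [d for d in dates if bucket(d) < 8]   # buckets 0-7, roughly 80%
test = [d for d in dates if bucket(d) >= 8]   # buckets 8-9, roughly 20%
```

Because the bucket depends only on the hashed value, rerunning on an updated data set keeps every existing row in the same split.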

Course 3: Intro to TensorFlow

The specialisation started to get more technical and work more with code here. It began by explaining how TensorFlow, a library for Python, works. Code you write isn't executed directly; instead it's used to build a "DAG" (Directed Acyclic Graph), which is essentially a flow chart. Each step of your process (e.g. a multiplication or a loss calculation) becomes a node in that graph. Once in graph form, TensorFlow can execute what you're trying to do very efficiently. You then create a session to run your graph.

Creating and running a very simple TensorFlow DAG
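The build-first, run-later pattern can be mimicked in a few lines of plain Python. This is a toy illustration of the idea, not the TensorFlow API itself:

```python
class Node:
    # A graph node: an operation plus the nodes feeding into it.
    def __init__(self, fn, inputs=()):
        self.fn, self.inputs = fn, inputs

def constant(value):
    # Leaf node that simply yields a fixed value when evaluated.
    return Node(lambda: value)

def add(x, y):
    # Records an addition in the graph; nothing is computed yet.
    return Node(lambda a, b: a + b, (x, y))

def run(node):
    # Like a session: walk the DAG, evaluating inputs before each node.
    return node.fn(*(run(i) for i in node.inputs))

a = constant(5)
b = constant(3)
c = add(a, b)   # c is just a graph node at this point
print(run(c))   # only now is the addition actually executed: prints 8
```

In TensorFlow 1.x the equivalent steps were building tensors with ops like tf.add and then executing them inside a tf.Session.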

Labs: The Achilles’ Heel of the Specialisation

I was expecting the labs to be where you independently apply the lessons to new problems with minimal guidance. Unfortunately, here they consisted of giving you Jupyter notebooks of code (all available here), telling you to run them and understand how they work. They recommended changing some parts to see the effect and aid understanding.

I thought this was really weak: even if I did make small changes, it's very hard to apply new learning later without some independent practice along the way.

At the end of the notebooks there were optional challenge exercises. Being optional and not needed for progression, there wasn't much incentive to do them. I did try a few, however: for example, making a dataset of cylinder heights, diameters and volumes, then training a neural net to predict volumes given height and diameter. That exercise was interesting, but all the subsequent ones only involved making small adjustments to that model.

I looked at the content of the labs of the next two courses and it seemed to be more of the same. I decided to stop the specialisation here as, without applying it myself, I was unlikely to come away with more than a high level understanding of the content.

Should You Take This Specialisation?

The first course was great for anyone interested in or already working in machine learning. The second course would be useful for anyone starting out with machine learning. The third course was useful for explaining the basics of TensorFlow but was poor at getting you to practice doing it yourself.

If you currently have a machine learning project to apply what you're learning to, then I think it would be worth doing the whole specialisation. The content of all the courses is interesting, concise and well delivered. If you don't, though, I don't recommend doing the full specialisation; I imagine there are other courses out there where you can better invest your time. I've done a few online courses, and my experience has been that I don't gain a deep understanding until I try to apply the learning independently.
