How to pick a successful AI project, part 3: Working with models

This post is part of a series on ‘how to pick a successful AI project’. Part 1, Part 2.

Here I'll cover what I've learned working with models and how that matters when picking an AI project that will succeed. I've mentored more than 165 such projects (and counting) at AI Deep Dive and Data Science Retreat.

Not much theory

If you ask a meetup presenter 'how did you pick your architecture?' the most likely answer is something like 'I copied it from a blog post or paper, then tweaked it.' There's little theory that guides how to pick an architecture. The field seems to be at the stage of medieval apprenticeships, where apprentices copy the work of masters. Although the field produces thousands of papers per year, there's very little in terms of theory. Practitioners generate 'rules of thumb' to pick architectures and train models. One good example is Andrew Ng's book 'Machine Learning Yearning.'

This book is dense with the closest thing we have to 'theory' for picking architectures and fine-tuning models.

You need to have a gold standard

Your model must improve a KPI that matters. That means there's something observable and measurable that the model does better than the baseline (which often means no model at all).

Supervised learning is better than unsupervised in that you can justify what your model does.

If you use cluster analysis, your boss can always say, 'You show me 3 clusters; why not 5? I think there are 5.' There's no correct answer to this. A supervised model has clear performance measures, plus it can often be checked 'by eye' (hey, your dog classifier missed this dog that looks like a cranberry cupcake).
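To make the contrast concrete, here is a minimal scikit-learn sketch (the labels and data are toy values made up for illustration):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import accuracy_score, silhouette_score

# Supervised: a gold standard exists, so quality is a single number.
y_true = [1, 0, 1, 1, 0, 1]   # labels a human verified
y_pred = [1, 0, 1, 0, 0, 1]   # what the model said
print("accuracy:", accuracy_score(y_true, y_pred))  # ~0.83

# Unsupervised: the same data can be clustered with 3 or 5 centers,
# and there is no ground truth that settles the argument.
X, _ = make_blobs(n_samples=200, centers=4, random_state=0)
for k in (3, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "clusters, silhouette:", silhouette_score(X, labels))
```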

Use a pretrained model

With transfer learning, instead of starting the learning process from scratch, you start from patterns learned while solving a different problem, leveraging all that previous work.

When you repurpose a pretrained model for your own needs, you start by removing the original classifier and then add a new classifier that fits your purposes. You can save weeks of training time when dealing with big deep networks with millions of parameters.
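As a sketch of that classifier swap, assuming PyTorch, torchvision's ResNet-18, and a hypothetical 5-class problem (the class count is made up):

```python
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet; its convolutional layers already
# encode generic visual features (edges, textures, shapes).
model = models.resnet18(pretrained=True)

# Freeze the pretrained weights so only the new classifier gets trained.
for param in model.parameters():
    param.requires_grad = False

# Remove the original 1000-class ImageNet head and attach your own.
num_classes = 5  # hypothetical; set to whatever your problem needs
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Train as usual: only model.fc has trainable parameters now.
```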

There are repositories of models in the public domain, for example:

https://modelzoo.co/

https://github.com/Cadene/pretrained-models.pytorch

Try also https://paperswithcode.com. As the name indicates, it's a searchable collection of papers that come with a public implementation; an excellent place to start.

If you have done fast.ai or any of the other ML courses out there, you know more than enough to start reusing pretrained models. Even if you cannot find a pretrained model that matches your problem, using one that is barely related usually works better than starting from scratch. More so if your data is complex and your architecture will be more than a dozen layers deep. It takes a long time (and big hardware!) to train big architectures.

It's good to stay somewhat on top of new models; tracking the state of the art is not necessary, but it's now easier than ever. Twitter will tell you if anything significant has just popped up; if someone makes a great demo, Twitter catches fire. Just follow a few people who post about these things often.

To navigate arXiv, try Arxiv Sanity (good for picking up trends). I don't recommend making paper reading a priority if you want to be an ML practitioner; you will likely need to move so fast to deliver results that reading papers becomes a luxury you cannot afford. As for talk videos: https://nips.cc now has videos for most talks. 'Processing' NeurIPS is a giant job, so it's easier to read summaries from people soon after they attended.

Most projects I supervised (at least in the last year or two) used transfer learning. Think about your former self, 10 years ago. Would that past self be surprised to hear that in the future, anyone could download a state-of-the-art ML model that took weeks to train, and use it to build anything they want?

Published papers in all scientific disciplines use ML, but their models are not very strong; improving them is a quick win

Take, for example, this paper on predicting battery life from discharge patterns — published in Nature, one of the best scientific journals. Their machine learning is rudimentary at best; this is to be expected, as the authors focused on their domain knowledge in electrical engineering rather than on the machine learning. A very astute team of participants at Data Science Retreat batch 18 (Hannes Knobloch, Adem Frenk, and Wendy Chang) saw an opportunity: what if more sophisticated models could make better predictions? They not only beat the performance of the model in the paper; they got an offer from Bosch to continue working on it for 6 months (paid! no equity stake). They declined the offer because they all had better plans after graduation.

There's an entire world of opportunity in doing what Hannes, Adem, and Wendy did; so many papers out there provide data and a (low) benchmark to beat. Forget about doing Kaggle competitions; there's more opportunity in these high-profile papers!

Avoid the gorilla problem

What follows only applies to models that produce results that a user sees. If your model's end-user is another machine (for example, you produce an API that other machines consume), you can skip this section.

Your ML model provides value to your users, but only as long as they trust the results. That trust is fragile, as you will see.

In 2015, Google Photos used machine learning to tag the contents of pictures and improve search. While the algorithm had accuracy levels that made Google execs approve it for production, it had 'catastrophic mislabelings': it tagged photos of Black people as gorillas. You can imagine the PR disaster this was, both for Google and for machine learning as a field. Google issued a fix, but the first fix was not sufficient, so Google ultimately decided not to give any photos a "gorilla" tag.

What do we learn from this? If your problem depends on an algo that has any chance of misclassifying something that breaks trust: pick another problem.

In the 200 projects I supervised, when a team brought up an idea that had the ‘gorilla problem,’ I steered them away from it. You can spend months doing stellar ML work that is invalidated by the gorilla problem. Another example is tagging ‘fake news’: if your algo tags one of my opinion leaders (one I trust blindly) as ‘fake news,’ you have lost me forever.

Multiple models doing a single task (example: picking up cigarette butts)

Making a self-driving car work involves a constellation of engineering problems. Many different ML models work in unison to take you where you want to go (pardon the pun).

An example from our labs: Emily, the self-driving toy car that finds and picks up cigarette butts (mentioned before), handles 3 subtasks:

– Identify cigarette butts

– Move the car close enough so that the cigarette butts are within reach

– Pick up the cigarette butt (by stabbing it)

Each subtask is a model.

Note that cigarette butts are incredibly poisonous (one can contaminate 40 liters of water) and hard to pick up with a broom because they are so light. As a result, they accumulate in public areas. A swarm of these robots could have a serious ecological impact. Of course, it's still early days, and plenty of practical problems remain: would people steal the cars for parts? Even if they don't, would they share the street with autonomous robots that will make plenty of mistakes and may block their path at crucial times?

One lesson to learn here: combining 3 models lets you solve a problem that was otherwise unreachable. Each subproblem in isolation may not be that tough; in fact, it might already be a solved problem.
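To illustrate the idea (a hypothetical sketch, not Emily's actual code), the glue between the three models can be as simple as a control loop with one function per subtask:

```python
# Three stubs stand in for three separately trained models.

def detect_butts(frame):
    """Subtask 1: vision model returning (x, distance) per cigarette butt."""
    return [(120, 80)]  # stub output

def drive_towards(position):
    """Subtask 2: driving policy returning motor commands for the toy car."""
    return {"steer": 0.1, "throttle": 0.3}  # stub output

def stab(position):
    """Subtask 3: picking controller that triggers the stabbing mechanism."""
    return "picked"  # stub output

def step(frame, reach_cm=20):
    """One tick of the control loop: detect, then either approach or pick."""
    butts = detect_butts(frame)
    if not butts:
        return "searching"
    x, distance = butts[0]
    if distance > reach_cm:
        return drive_towards((x, distance))
    return stab((x, distance))

print(step(frame=None))  # -> motor commands, since the stub butt is far away
```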

Understanding context

What problem is this model trying to solve? You know these ‘product guys’? They think people “hire” products and services to get a job done. What is the job that your model is getting done?

This might be obvious at times, but not so obvious at others, and therein lies opportunity.

Imagine that you work for a hospital. After lots of deliberation, your boss has decided that your next task will be to build a model that predicts when an intensive care patient is going to crash. What is the ‘job’ of this model?

One way to look at it: the job is to save lives.

One other way to look at it: the job is to use the hospital's resources optimally. When a patient crashes, it takes lots of people to get her stable again. Every nurse and doctor who has anything to do with this patient rushes to the room and abandons whatever task they were doing. Resuming those tasks is costly: task switching is very inefficient. Chronometer in hand, try doing two tasks A and B multiple times, AAAABBBB vs. ABABABAB; the second takes longer, for pretty much any tasks A and B. This is why getting distracted by a notification is so damaging for productivity.

In any case, whether you think your model is saving lives (period) or allocating hospital resources optimally (to save more lives!) makes all the difference.

Because 'bosses' who are not 'close to the metal' cannot really estimate what the right job for the model is, you will have to do it yourself. It's a good fit for the analytical mind of a data scientist.

And there you have it: a complete manual for picking a successful AI project, in three installments. I hope this was useful, and that it helps you solve problems real people have with machine learning. There's never been a better time to be alive. So much low-hanging fruit, so much leverage thanks to this flavor of technology, so many productivity gains if we manage to educate a few thousand more people in AI. If this manual has helped, I'd love to hear from you and see the project you built. Send it to me on Twitter at @quesada; my DMs are open.

How to pick a successful AI project, part 2: Working with data

This post is part of a series on ‘how to pick a successful AI project’. Part 1, Part 3.

Here I cover hacks and tricks concerning working with data when selecting an AI project.

Your minimum viable dataset is smaller than you think

How can we double food production by 2050 to feed 9 billion people? Could the solution start with two people walking around taking pictures with a smartphone?

For an example of a bootstrapped dataset, take Blue River Technology. They do precision agriculture: herbicide usage can be cut by 90% by spraying precisely, only at the right spots.

See & Spray machines use deep learning to identify a greater variety of plants with better accuracy and then make crop management decisions on the spot. Custom nozzle designs enable <1-inch spray resolution. They focus on cotton and soybeans.

In September 2017, John Deere acquired Blue River for about $300 million.

What was the dataset Blue River started with? One collected by a handful of people with phones, taking pictures while walking down entire crop fields.

Often the amount of data you need for a proof of concept is 'not much' (thanks to pretrained models!).

You can collect and label the data yourself, with tiny, tiny resources. Maybe 50% of the portfolio projects I have directed started with data that didn't exist; the team generated it. Jeremy Howard, the founder of fast.ai, is also adamant about destroying the myth that you need Google-sized datasets to get value out of AI today.

What a great opportunity; it's exciting to be alive now, with so many problems solvable with off-the-shelf tech. A farming robot that can crunch mobile camera data opens a new path. Next time you think up AI projects, use Blue River as a reference for what's possible with data you create yourself.

Knowing “your minimum viable dataset is smaller than you think” opens up the range of projects you can tackle tremendously.

Because of pretrained models, you don’t need as much data

Model zoos are collections of pretrained networks. Each network there saves a tremendous amount of time AND data for anyone solving a problem similar to one already solved.

Curators at model zoos make your life far easier. With the recent success of ML, researchers are getting bigger grants, and academia publishes models that have enjoyed powerful machines and weeks of computation. Industry leaders publish their models often, hoping to attract talent or nullify a competitor's advantage.

80% of the time of a data scientist is cleaning data; the other 20% is bitching about cleaning data

It's a joke, but it's not far from the truth: don't be surprised if you spend most of your time on data preparation.

Data augmentation works well on images

Don't do it 'by hand'; today there are libraries for the most common data augmentation tasks (horizontal and vertical shifts, horizontal and vertical flips, random rotations, etc.). It's a pretty standard preprocessing step.
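For instance, with torchvision (one of several libraries that cover this; imgaug and albumentations are alternatives):

```python
from torchvision import transforms

# Applied on the fly, so every epoch effectively sees slightly different images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # small shifts
    transforms.ToTensor(),
])
# Pass `augment` as the `transform` argument of your Dataset.
```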

Get unique data by asking companies/people

At Data Science Retreat, one team once needed bird songs. It turns out there's an association of bird watchers with a giant dataset of songs, and they gave it to our team, no questions asked.

Companies may have data they don't care much about. Other times they do care about it but will still give it to you if you offer something of value in exchange, such as a predictive model they can use. Governments often have to give you data if you request it.

If you run out of ideas to get data by more traditional means, try asking for it. Sending a few emails may be a good use of your time. Academics should share their data if they published a paper about it and you ask. It doesn't always work, but it's worth trying. Small companies looking for any advantage may want to partner with you as long as they get some benefit.

Compute on the edge (federated learning), avoid privacy problems

This is our life in 2019: Huawei's new 'superzoom' P30 Pro smartphone camera can identify people from far away, and neural networks can be applied to lip reading. Progress with computer vision systems that can re-identify the same person when they change location (for example, emerging from a subway) indicates that mass surveillance is growing in technical sophistication. Privacy is top of mind.

Corporations get access to more and more of citizens' private data; regulators try to protect our privacy and limit access to such data. Privacy protection doesn't come without side effects: it often produces situations where scientific progress suffers. For example, GDPR in Europe seems to be a severe obstacle for both researchers and companies applying machine learning to lots of interesting problems. At the same time, datasets such as health data benefit from privacy; imagine if you had a severe illness and an employer would not hire you because of it.

A solution: What if, instead of bringing the corpus of training data to one place to train a model, you could bring the model to the data wherever it’s generated? That is called federated learning, first presented by Google in 2017.

This way, even if you don’t have access to an entire dataset at once, you can still learn from it.
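If you want to see the mechanics, here is a toy NumPy sketch of federated averaging, the aggregation step at the heart of federated learning. Real frameworks such as TensorFlow Federated or PySyft add secure aggregation, communication, and client sampling, all omitted here:

```python
import numpy as np

def local_update(weights, client_data, lr=0.1):
    """One client nudges the model on its own data; the raw data never leaves."""
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of mean squared error
    return weights - lr * grad

# Three clients, each holding private data on their own device.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

weights = np.zeros(3)
for _ in range(10):
    # The server broadcasts the current weights; clients train locally.
    local_weights = [local_update(weights, data) for data in clients]
    # The server only ever sees weight updates, never the data itself.
    weights = np.mean(local_weights, axis=0)
```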

Andrew Trask, harbinger of federated learning

If you are interested in this topic, follow Andrew Trask. He teaches an online course covering federated learning and has a handy Jupyter notebook with a worked-out example.

Why is this important in a conversation about picking AI projects? Because if your project uses federated learning, you may have a much easier time getting people to give you data and use your product. It opens the door to a different class of projects.

Data reuse and partnerships

Data has excellent secondary value.

That is, you can often find uses for data that the entity collecting it didn't think about. Often through partnerships, new uses of data can produce a secondary revenue stream. For example, you could integrate:

– data on fraud

– data from credit scoring

– data from churn

– data about purchases (from different sources)

The organisation that published the data (if it contains personal data) might use a license that restricts your usage. Check whether they explicitly license the data for reuse; otherwise, it is best to contact them and agree on a license for your project.

Beware of problems though:

1. If your product depends on data that only one single partner can produce, you are in their hands. The moment they decide to end the partnership, your business is over.

2. Data integration will be difficult, more so if the only variable shared across datasets is a person. Data that would help identify a person is subject to regulation in some parts of the world.

Even if you can legally integrate these different data sources, remember there are entire teams dedicated to integration in big companies. Never assume this will go smoothly.

Using public data is not the only option. Some people will complain that for every dataset out in the open, there are plenty of projects already; it's hard to stand out. Maybe it's worth emailing people in industry to get data nobody else has (data partnerships). If you offer the results of your models in exchange for access, some companies may be persuaded.

How about doing Kaggle? Not a great idea for a portfolio project, because (1) the hard part of finding the problem and the data is already done, and (2) it's hard to stand out from the crowd of competitors who probably spent more time than you fitting models and have better performance.

Finding secondary use in data is a fantastic skill to have. Coming up with project ideas trains this skill.

Use unstructured data

For decades, all the data ML could consume was in the form of tables: those Excel files flying around as attachments, those SQL databases… tabular data was the only thing that could benefit from ML.

Since the 2010s, that changed.

Unstructured data are:

  1. Images
  2. Text written by real people that doesn't follow a predefined model, using language riddled with nuance
  3. Video
  4. Audio (including speech)
  5. Sometimes, sensor data (streams)

Unstructured data is growing at 55 to 65 percent each year. Emails, social media posts, call center transcripts… all are excellent examples of unstructured datasets that can provide value to a business.

You may think 'everyone knows this, so why mention it?'… but in my experience, there are large companies left and right that didn't get the memo. If you work for one of these and happen to find a use case for the unstructured data they hold, you are onto something that could be a career changer.

Take, for example, banks. For them, data has traditionally meant numerical information from markets and security prices. Now they use satellite images: night-light intensity, oil tank shadows, and the number of cars in parking lots can all be used to estimate economic activity.

In my experience at AI Deep Dive and Data Science Retreat, most people choose unstructured data for portfolio projects. It's easy to pass 'the eyebrow test' with it, and it's abundant in the wild: everyone has access to pictures and text, in contrast to tabular data such as money transactions.

One downside: unstructured data can trigger compliance issues. You never know what is lurking in giant piles of text. Is there confidential information in these emails? Is our users' personal data leaking, even though we tried to anonymize it?

You may remember the AOL fiasco. On August 4, 2006, AOL Research released a compressed text file on one of its websites containing twenty million search keywords from over 650,000 users over a 3-month period, intended for research purposes. AOL deleted the search data from its site by August 7, but not before it had been mirrored and distributed on the Internet. AOL did not identify users in the release; however, personally identifiable information was present in many of the queries, creating a privacy nightmare for the users in the dataset.

There you go: things I've learned about picking a good project that have to do with collecting and cleaning data.

In the last part of this series, I’ll cover what I’ve learned on model building that affects how you pick an AI project.

How to pick a successful AI project, part 1: Finding the problem and collecting data

This post is part of a series on ‘how to pick a successful AI project’. Part 2, Part 3.

Imagine that you have 200 hrs of your life to improve your career prospects in ML. What is the best use of this time? I’m betting on doing a portfolio project.

Let’s compare different options:

1. You could go to meetups

2. You could do tutorials, follow template code, and be GitHub user #2245 with that same exercise in their repo

3. You could take classes or MOOCs

Option 1 relies on serendipity. Assume you come across a potential job opportunity every 5 meetups (which could be a bit optimistic!). You may well spend a year going to meetups and getting exactly zero job opportunities if:

  • you don't look amazing on paper/online, and
  • you don't communicate very well in person

Assuming commuting to and attending a meetup takes 5 hours, how many meetups fit in your 200 hours? 40. At one opportunity every 5 meetups, that's 8 conversations worth having. Still, without some strong signal that you can perform (such as work experience, or a tangible ML project you built from scratch), you will not convert these random conversations into job offers.

Options 2 and 3 (tutorials and MOOCs) don't differentiate you enough from the masses; they are a prerequisite for reaching the level that would make a hiring manager take notice. Because work experience is hard to get when you are trying to enter a new field (a chicken-and-egg problem), that leaves you with my preferred option, and the reason I wrote this series of posts: show the world what you can do with an AI portfolio project.

A substantial project strongly dominates the other options

Once I asked Ted Dunning: “What is the one thing you care about as an interviewer?”. His answer: “I only care about one thing: What you have done when nobody told you what to do.”

Having a portfolio of projects shows creativity, which is extremely important in data problems.

Data cleaning, even model fitting, is somewhat grunt work. We will likely automate this in the future.

What is not automatable (and where you want to excel) is:

• finding a problem

• that is worth solving (produces business value; helps someone)

• and is solvable with current tech

Ok, so you want to do a good AI project

The rest of this post is what I’ve learned after mentoring more than 150 AI projects over five years at the companies I’ve founded.

I’ve grouped what I know into four classes:

• Finding the problem

• Collecting data

• Working with data

• Working with models

Finding the Problem

What human task is your project helping or displacing? If the task’s decisions take longer than a second for a human, pick another one

Machines are good at helping humans with boring tasks. The rule of thumb (from Andrew Ng) is that you want to tackle tasks that take a human less than one second. Driving, for example, is a long series of sub-second decisions. There’s little deep thinking there. Writing a novel is not like that. While NLP progress may look like we are getting closer and closer, writing a novel with AI is not a good project.

Myth: you need a lot of data; you need to be Google to get value out of machine learning

When looking for problems, you are looking for data too. Often you feel there's no data to address the problem you found, or not enough of it. Data is valuable, and those who have it don't make it public. There's plenty of public data, but nothing that matches what you need.

You come up with hacks, of course. Maybe you can combine different sources? Maybe you generate and label the data yourself?

Then you read online that you need giant datasets. You are screwed; you are not going to label millions of images by yourself.

It’s true that for very complex deep learning models, you need those giant datasets. An excellent boost for ML in the 21st century is that now we have lots more data, and compute power, to train more complex models.

Many of the revolutionary results in computer vision and NLP come from having a lot of data.

Does this mean that you cannot do productive machine learning if you have no big compute clusters or datasets in the Terabyte range? Of course not.

Libraries contain pretrained models (like Inception V3, ResNet, AlexNet). You can use them to save computation time.

Take, for example, YOLO ('you only look once'). Anyone with an off-the-shelf GPU can do real-time object detection and segmentation in video. That is amazing. How many projects can you conceive with this feature? Make it an exercise: list 10 projects you could build on top of YOLO. By the time you read this, there's likely a new algorithm that does the task even better. It's a beautiful time to be alive. You don't need terabyte-sized datasets to get value out of machine learning.
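To give a sense of how little code such a detector needs today, here is a sketch assuming the ultralytics package (a newer off-the-shelf YOLO implementation; the image name is made up):

```python
# pip install ultralytics
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # small pretrained detector, downloaded on first use
results = model("street_scene.jpg")  # hypothetical image; video paths work too

for box in results[0].boxes:
    label = model.names[int(box.cls)]
    print(label, float(box.conf))    # e.g. "person 0.87"
```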

Pick a problem that passes ‘the eyebrow test’

The skill of picking a problem is as useful as the skill to solve it. Some problems will capture your imagination, and some will leave you unmoved. Pick the first.

And then there are those projects that make people think that what you are saying is impossible. You want one of those.

How do you know you picked the right problem? Use ‘the eyebrow test.’ You want to see the eyebrows of the other person going up when they hear it. If they don’t, keep looking.

Some problems are mere curiosities and can suck up your resources. Don't go for those. There's tremendous potential for impact using ML right now. Why spend your time creating a translation algorithm from Klingon to Dothraki (both invented languages) when you can make people's lives easier?

At AI Deep Dive and Data Science Retreat we believe that because there are plenty of ‘good problems,’ there’s no excuse to pick one that doesn’t pass the eyebrow test. It takes time to find a good problem. How long should you spend? As much as needed to pass the eyebrow test.

It helps to be around people who have found a good problem before. Sometimes, those with deep ML domain knowledge are not as useful as you may think for picking ideas.

You are selling the value of AI. When you do 'eyebrow test' projects, you help not only the people-who-have-that-problem (always keep them in mind) but also your fellow data scientists. The more impressive your projects are, the more the general public will appreciate how transformative AI is for society. And the more opportunities you create for everyone else in the field.

You only need to be in the ballpark of a good idea

One participant came to me saying that he was running late in picking an idea and that he had exhausted all his possibilities. He had nothing.

“Ok, let’s look at what you are passionate about. What’s alive in you?”

“Well,… I hate waste.”

So we googled 'waste trash machine learning.' Nothing too obvious, nor too exciting, came up. After some more searching, we found a Stanford project that categorized trash. The students had a trash dataset, painfully labeled. Still, this was not exactly an idea that passes the 'eyebrow test.'

“What if instead of categorizing trash, we could build something that picks up trash?” (iterating on the idea)

“You mean something that goes around autonomously?”

"Yes, a self-driving toy car. There was one project in batch 08 of DSR that did exactly that. The car ran laps on a circuit. You would have to modify it to identify trash, get close to it, and pick it up."

“That sounds amazing!” (eyebrow test passed)

With time, this ballpark idea morphed from general trash to picking up cigarette butts, which are terrible for the environment. Details about how to pick up the cigarette butts improved with time: stabbing them, instead of trying to grab them with a robotic arm. We will talk more about this project later in this series.

Pick a problem that moves you. If nothing does, use ‘watering holes’ to listen to problems that move people

If you have the problem you want to tackle, you have a significant advantage. You understand the need. You can build the solution and know how well it works by applying it to yourself. You can tell between ‘nice to have’ and ‘pain point.’ At this point, what we are doing is no different from what startup founders and product managers do.

You have in your hands the closest thing to a shortcut: have the problem yourself. You will save time by not going into detours that don't help. You will have a keen sense of what to build.

In 2014 I was building a company (which eventually failed) doing customer lifetime value (CLV) predictions for e-commerce stores. CLV prediction, though, was something I could deliver myself as a consultant, so I became a 'CLV consultant.' One of my clients was hiring a data scientist, and they hired me full-time, so I magically transitioned into data science.

Many others had the same problem: they had tech skills, maybe a Ph.D., and they wanted to become data scientists, but they didn't know how. Remember, this was 2014, before the web started boiling with advice on how to become a data scientist. I built a business around helping others solve this problem, and it's been working well for the last five years.

I knew the problem well: too much information, unclear guidelines, interviewers who don't know how to recognize talent. Every step of the way, I felt I knew what I was doing with this business, a feeling that is extremely valuable. Transitioning to data science is an excellent problem; 5 years later, people still struggle with it.

I don't play golf. If I wanted to build a product for golfers, I'd be lost entirely. I would build features nobody needs; I would miss the pain points. Even if I ran interviews and listened to the market, I would be at a disadvantage compared to a golfer.

So my advice to pick a portfolio project: pick a problem you know well. Even better if it’s a problem that moves you. If you lost three friends to suicide, build something to prevent suicide. In this case, you don’t have the problem yourself, but you have a strong motivation to solve it.

What if you have no problems whatsoever? You have been in the same industry forever (say Oil and Gas), and all valuable problems there got taken care of!

I don’t believe you. There’s no industry so mature that all problems are solved. But, ok, you cannot come up with something that moves you, some problem that you have.

Then observe what problems other people have. Large groups of people tend to congregate in public spaces and bitch about their problems; every time you see people bitching about something, turn it into an opportunity to do a project.

Which public spaces? I call these 'watering holes' (HT to Amy Hoy). Online, there are obscure forums, but Reddit and Twitter are the easiest. Just sit there and 'listen' to people discussing the problem. Learn every detail of it. Is it a real problem, or a 'nice to have'?

For example, you may join a gaming subreddit to see if gamers care about having stronger AI in videogames. Or if they care for VR. These ideas are too abstract to be good project ideas, but you get where I’m going.

Collecting data

Integrate distinct data sources

Often companies are so focused on getting value out of the data they have that they forget they can increase value by using data that is not in their company but publicly available.

There's plenty of open-access data, and there are APIs for data that changes frequently. There's no reason not to use multiple data sources: you can solve a more interesting problem (one that was not obvious before) by integrating APIs.

For a collection of APIs, check https://www.programmableweb.com/.

Problems that seemed impossible with a single source of data become solvable when you add a new data source. Boring projects come alive. Thankless tasks become a joy to work with if you manage to find a twist that shows more value.

When using multiple data sources, you have to stitch them together using a shared key (a column present in both datasets). You cannot combine data sources that don't have a shared key, and that tends to be a showstopper for many ideas.
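In pandas, the stitching itself is a one-liner once the shared key exists (column names here are invented for illustration):

```python
import pandas as pd

# Two hypothetical sources that share the key column "station_id".
air_quality = pd.DataFrame({"station_id": [1, 2, 3], "pm25": [12.4, 48.1, 33.7]})
weather = pd.DataFrame({"station_id": [1, 2, 4], "wind_kmh": [10.0, 3.2, 7.5]})

# The shared key is what makes the integration possible at all.
combined = air_quality.merge(weather, on="station_id", how="inner")
print(combined)  # only stations 1 and 2 exist in both sources
```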

Instead of collecting or reusing data, produce your own data

You don't need to find data, or own it. Thanks to pretrained models (see the later section on them), you don't need all that much data, which means you can produce it yourself. Removing "I don't have much/any data" as an obstacle opens up the space of problems you can tackle.

To produce original data, I found one big hack: use hardware. Sensors are cheap, and they give you data that you own.

JD.com's Shanghai fulfillment center uses automated warehouse robotics to organize, pick, and ship 200k orders per day; 4 human workers tend the facility. JD.com grew its warehouse count and surface area 45% year over year. AI affects manufacturing too: it's easier than ever to produce things en masse, which means there are a lot more hardware 'toys' on the market. Things you would never have considered affordable, like microscopes and spectroscopes, are reaching the mass consumer market, and they are eminently hackable. These are wonderful data sources!

Because of Shenzhen, Kickstarter, etc., hardware is evolving far faster than before. It's never going to be as fast to iterate on as software, but we are getting there. Have you checked what's available on AliExpress? There are multiple sensors you can buy for under 100 bucks. Attach one of these to a phone running your code, and you have a portable, purpose-specific machine.

For example, you can buy a microscope and use it to detect malaria without a human doctor. Deep learning running on a phone is good enough to count parasites in blood.

AliExpress is full of cheap hardware that you can attach to a phone. You can add a lot of value to someone's life using a mixture of phones (which run the ML code) and cheap sensors.

Example: our Malaria Microscope.

Eduardo, AIscope's founder, after reaching 1000x magnification for the first time

Malaria kills about 400k people per year, mostly children. It’s curable, but detecting it is not trivial, and it happens in parts of the world where hospitals and doctors are not very accessible. Malaria parasites are quite big, and a simple microscope can show them; the standard diagnostic method involves a doctor counting them.

It turns out you can attach a USB microscope to a mobile phone and run DL code on the phone that counts parasites with accuracy comparable to a human's. Eduardo Peire, a DSR alumnus, started with a public malaria dataset. While small, it was enough to demonstrate value to people, and his crowdfunding campaign got him enough funds to fly to the Amazon and collect more samples. You can follow the project's progress here: http://aiscope.net.

For another example, you can buy a spectroscope that, pointed at any material, tells you its composition. It's small enough to attach to a phone and use as a handheld scanner. Can you detect traces of peanuts in food? Yes! There you go: a solution to a problem real people have. If you are allergic to peanuts, this will buy you a certain quality of life.

Sensors are cheap nowadays, and they will help you get unique data. You can turn a phone into a microscope, a spectroscope, or any other tool. The built-in camera and accelerometer are excellent sources of data too.

Next on this series: what I’ve learned about working with data and how this can help you pick successful AI projects.

Understanding a Machine Learning workflow through food


Originally posted on Towards Data Science.

Through food?!

Yes, you got that right, through food! :-)

Imagine yourself ordering a pizza and, after a short while, getting that nice, warm and delicious pizza delivered to your home.

Have you ever wondered about the workflow behind getting such a pizza delivered to your home? I mean, the full workflow, from the sowing of tomato seeds to the bike rider buzzing at your door! It turns out it is not so different from a Machine Learning workflow.

Really! Let’s check it out!

This post draws inspiration from a talk given by Cassie Kozyrkov, Chief Decision Scientist at Google, at the Data Natives Conference in Berlin.


1. Sowing

The farmer sows the seeds that will grow to become some of the ingredients of our pizza, like the tomatoes.

This is equivalent to the data generating process, be it a user action, be it movement, heat or noise triggering a sensor, for instance.


2. Harvesting

Then comes the time for the harvest, that is, when the vegetables or fruits are ripe.

This is equivalent to the data collection, meaning the browser or sensor will translate the user action or the event that triggered the sensor into actual data.


3. Transporting

After the harvest, the products must be transported to their destination to be used as ingredients in our pizza.

This is equivalent to ingesting the data into a repository where it's going to be fetched from later, like a database or a data lake.


4. Choosing Appliances and Utensils

For every ingredient, there is the most appropriate utensil for handling it. If you need to slice, use a knife. If you need to stir, a spoon. The same reasoning is valid for the appliances: if you need to bake, use an oven. If you need to fry, a stove. You can also use a more sophisticated appliance like a microwave, with many, many more available options for setting it up.

Sometimes, it is even better to use a simpler appliance — have you ever seen a restaurant advertise “microwaved pizzas”?! I haven’t!

In Machine Learning, the utensils are techniques for preprocessing the data, while the appliances are the algorithms, like a Linear Regression or a Random Forest. You can also use a microwave, I mean, Deep Learning. The different options available for setting them up are the hyper-parameters. There are only a few in simple appliances, I mean, algorithms, but many, many more in a sophisticated one. Besides, there is no guarantee a sophisticated algorithm will deliver better performance (or do you like microwaved pizzas better?!). So, choose your algorithms wisely.
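In scikit-learn terms, the knife-versus-microwave contrast looks roughly like this (a toy sketch; the dataset is synthetic):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# The "knife": almost nothing to set up.
simple = LinearRegression()

# The "microwave": many knobs (hyper-parameters), no guarantee of better taste.
fancy = RandomForestRegressor(n_estimators=200, max_depth=8,
                              min_samples_leaf=3, random_state=42)

print(simple.fit(X, y).score(X, y))  # R^2 on the training data
print(fancy.fit(X, y).score(X, y))
```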


5. Choosing a Recipe

It is not enough to have ingredients and appliances. You also need a recipe, which has all the steps you need to follow to prepare your dish.

This is your model. And no, your model is not the same as your algorithm. The model includes all pre- and post-processing required by your algorithm.
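scikit-learn's Pipeline expresses this 'recipe' idea directly; a minimal sketch (the two steps are just examples):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# The recipe: the model is the whole sequence of steps,
# not just the "appliance" at the end.
model = Pipeline([
    ("scale", StandardScaler()),    # preprocessing step
    ("clf", LogisticRegression()),  # the algorithm
])
# model.fit(X, y) now runs every step in order.
```

And, speaking of pre-processing…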


6. Preparing the Ingredients

I bet you the first instructions in most recipes are like: “slice this”, “peel that” and so on. They don’t tell you to wash the vegetables, because that’s a given — no one wants to eat dirty vegetables, right?

Well, the same holds true for data. No one wants dirty data. You have to clean it, that is, handle missing values and outliers. And then you have to peel and slice it, I mean, pre-process it, like encoding categorical variables (male or female, for instance) into numeric ones (0 or 1).

No one likes that part. Neither the data scientists nor the cooks (I guess).
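For the curious, a minimal sketch of that washing, peeling and slicing in pandas (the tiny dataset is made up):

```python
import pandas as pd

# A made-up toy dataset with the usual dirt: a missing value and a category.
df = pd.DataFrame({
    "age": [34, None, 52, 29],
    "sex": ["male", "female", "female", "male"],
})

# "Wash the vegetables": handle missing values.
df["age"] = df["age"].fillna(df["age"].median())

# "Peel and slice": encode the categorical column as numbers.
df["sex"] = df["sex"].map({"male": 0, "female": 1})

print(df)
```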


7. Special Preparations

Sometimes you can get creative with your ingredients to achieve either a better taste or a more sophisticated presentation.

You can dry-age a steak for a different flavor or carve a carrot to look like a rose and place it on top of your dish :-)

This is feature engineering! It is an important step that may substantially improve the performance of your model, if done in a clever way.

Pretty much every data scientist enjoys that part. I guess the cooks like it too.


8. Cooking

The fundamental step — without actually cooking, there is no dish. Obviously. You put the prepared ingredients into the appliance, adjust the heat and wait a while before checking it again.

This is the training of your model. You feed the data to your algorithm, adjust its hyper-parameters and wait a while before checking it again.


9. Tasting

Even if you follow a recipe to the letter, you cannot guarantee everything is exactly right. So, how do you know if you got it right? You taste it! If it is not good, you may add more salt to try and fix it. You may also change the temperature. But you keep on cooking!

Unfortunately, sometimes your pizza is going to burn, or taste horribly no matter what you do to try to salvage it. You throw it in the garbage, learn from your mistakes and start over.

Hopefully, persistence and a bit of luck will produce a delicious pizza :-)

Tasting is evaluating. You need to evaluate your model to check if it is doing alright. If not, you may need to add more features. You may also change a hyper-parameter. But you keep on training!

Unfortunately, sometimes your model is not going to converge to a solution, or make horrible predictions no matter what you do to try to salvage it. You discard your model, learn from your mistakes and start over.

Hopefully, persistence and a bit of luck will result in a high-performing model :-)


10. Delivering

From the point of view of the cook, his/her work is done. He/she cooked a delicious pizza. Period.

But if the pizza does not get delivered nicely and in time to the customer, the pizzeria is going out of business and the cook is losing his/her job.

After the pizza is cooked, it must be promptly packaged to keep it warm and carefully handled so it doesn't look all squishy when it reaches the hungry customer. If the bike rider doesn't reach the destination, loses the pizza along the way, or shakes it beyond recognition, all the cooking effort was for nothing.

Delivering is deployment. Not of pizzas, but of predictions. Predictions, like pizzas, must be packaged, not in boxes but as data products, so they can be delivered to eager customers. If the pipeline fails, breaks along the way, or modifies the predictions in any way, all the model training and evaluation was for nothing.


That’s it! Machine Learning is like cooking food — there are several people involved in the process and it takes a lot of effort, but the final result can be delicious!

Just a few takeaways:

  • if the ingredients are bad, the dish is going to be bad; no recipe can fix that, and certainly no appliance either;
  • if you are a cook, never forget that without delivering there is no point in cooking, as no one will ever taste your delicious food;
  • if you are a restaurant owner, don't try to impose appliances on your cook — sometimes microwaves are not the best choice — and you'll get a very unhappy cook if he/she spends all his/her time washing and slicing ingredients.

I don’t know about you, but I feel like ordering a pizza now! :-)

If you have any thoughts, comments or questions, please leave a comment below or contact me on Twitter.