“AI IS NOT THE HAMMER FOR WHICH EVERYTHING IS THE NAIL”

GOOGLE’S DI DANG ON USING AI THOUGHTFULLY

  • How Google is revolutionizing one small but vital corner of medicine
  • How AI could help children in India learn how to read
  • Why Di doesn’t think AI will perform at human levels for another 40 years
  • The one question AI startups should be asking themselves

Di Dang has been working as an Emerging Tech Design Advocate at Google for the past year and a half, where she launched the People + AI Guidebook, an open-source framework for designing human-centred AI products, with the People + AI Research (PAIR) team. We spoke to Di about the future of AI, Google’s work in the field, and why human-centric design is so essential.

Our conversation was so fruitful that we’ve divided the discussion into two parts. Stay posted for part two.

WHAT ARE SOME OF THE MOST EXCITING PROJECTS GOOGLE IS WORKING ON IN TERMS OF AI?

One compelling example is in medicine. A Google AI product team has been building and training deep learning models to diagnose diabetic retinopathy in patients. I think it’s an interesting example of a people-and-AI partnership: physicians and the model can work together in a way that provides more accuracy than either might achieve individually.

In effect, instead of having human doctors work on their own in a silo, or having machine learning models operate purely on their own outputs and diagnosis rates, Google can bring together doctors who are highly specialized in this field with highly trained machine learning models, so that you have a human in the loop.

The human doctors are then able to review the output from the machine learning models. This combination of people and AI, of doctor and machine learning model, brings about a higher success rate than either one on its own.


So how does this look in practice? As we see in this study published in Ophthalmology, the adjudication panel found signs of moderate diabetic retinopathy. Without the assistance of machine learning, two out of three ophthalmologists grading the image marked it as no DR, which could have resulted in a patient missing a needed referral to a specialist. Similarly, the model also indicated no DR. However, when the ophthalmologists saw the model’s predictions, perhaps prompted to examine the case more carefully, all three ultimately gave the correct answer of moderate DR.
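To make the pattern concrete, here is a minimal Python sketch of this kind of human-in-the-loop adjudication. The model, graders, and confidence threshold are all hypothetical stand-ins invented for the example, not Google’s actual system; the point is only that the model’s prediction is shown to the human graders, who keep the final say.

```python
from collections import Counter

def adjudicate(image, model_predict, graders):
    """Human-in-the-loop grading: every grader sees the model's
    prediction alongside the image and has the final say; the
    result is the majority vote among the human grades."""
    model_grade, confidence = model_predict(image)
    final_grades = [grade(image, model_grade, confidence) for grade in graders]
    return Counter(final_grades).most_common(1)[0][0]

# Toy stand-ins: a low-confidence model, and graders who re-examine
# the image whenever the model seems unsure.
def toy_model(image):
    return "no DR", 0.55  # low confidence: worth a second look

def cautious_grader(image, model_grade, confidence):
    return model_grade if confidence > 0.9 else "moderate DR"

print(adjudicate("fundus.png", toy_model, [cautious_grader] * 3))  # -> "moderate DR"
```

The design choice worth noticing is that the model never issues the final grade; it only adds a signal that can prompt the humans to look again, which is exactly what happened in the Ophthalmology case above.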

Another exciting project is Bolo, which was created by a Google product team based in Hyderabad, India. According to the ASER 2018 report, only about half of all students enrolled in grade 5 in rural India can confidently read a grade 2 level textbook. This may be due to a number of factors, such as lack of access to quality material, under-resourced infrastructure, and barriers to learning outside the classroom.

So the team created Bolo, an app that helps children improve their Hindi and English literacy with “Diya,” who encourages, explains, and corrects the child as they read aloud. So how does Bolo work? It functions as an on-device version of the Google Assistant API, which means that speech recognition is done locally, on the phone, so the child’s speech input isn’t sent to the cloud. The speech recognition models match the child’s speech against the expected pronunciation of the word, so that Diya can congratulate the child or prompt them to try reading it aloud again with her help.
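As a rough illustration of the feedback loop Di describes, here is a toy Python sketch. The function name, similarity measure, and thresholds are invented for this example; in the real app, the transcription would come from Bolo’s on-device speech models rather than a hard-coded string, so the audio never leaves the phone.

```python
import difflib

def reading_feedback(expected_word: str, recognized_word: str) -> str:
    """Compare the locally transcribed speech with the expected word
    and pick an encouraging response, roughly in the spirit of Diya."""
    similarity = difflib.SequenceMatcher(
        None, expected_word.lower(), recognized_word.lower()
    ).ratio()
    if similarity > 0.9:
        return f"Great job! You read '{expected_word}' perfectly."
    if similarity > 0.6:
        return f"Almost! Listen, then try '{expected_word}' again."
    return f"Let's read '{expected_word}' together."

# A stand-in for the on-device recognizer's output:
print(reading_feedback("elephant", "elefant"))  # -> "Almost! Listen, then try..."
```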

Bolo, a Google app for kids to improve their English and Hindi literacy.

GOOGLE’S AI BEAT TOP HUMAN PLAYERS AT THE GAME STARCRAFT II IN OCTOBER, EARLIER THAN AI RESEARCHERS EXPECTED — WHAT DO YOU THINK THE TIMELINE IS FOR AI PERFORMING AT A SUPERHUMAN LEVEL ACROSS ALL FIELDS?

This is a really good question because it gets at the thorny issue of how to distinguish what type of AI we’re talking about. For the tech industry’s intents and purposes, we conflate AI with machine learning and use the terms interchangeably. But when we talk about AI, the question becomes: how do we demarcate narrow AI from general AI from super AI?

When we say narrow AI, examples that fall into that category include the diabetic retinopathy models, DeepMind’s AlphaStar (which beat the StarCraft II human players), and Bolo: cases where machine learning models are trained to generate a very specific output that helps us solve one specific problem. General AI refers to machine learning that can perform at a level as broad and comprehensive as human intelligence. We’re not even close to that stage today, by any means.

We can train a machine learning model to do one very specific thing, but we can’t train it the way we can help a human learn to do all sorts of things: do math, ride a bike, make music, fill out their taxes, and so on. Those are examples of more general intelligence, so to speak. Super AI raises a different question: when does AI output begin to exceed what we as humans are able to accomplish ourselves?

That’s why it’s significant that we distinguish what constitutes these different levels of AI. On that front, AI researchers give answers that range from general AI arriving in the next five years, to the next 40 years, to never. I myself would estimate maybe 40 years from now, simply based on what I’m reading in research and what I’m seeing in industry. But again, a lot of it boils down to how you demarcate narrow from general from super AI, and what it would even mean to perform at a “superhuman level”.

WHAT TIP WOULD YOU GIVE TO AN AI STARTUP TRYING TO MAKE IT BIG IN 2019?

There’s a lot of hype around ML/AI right now, but I want to make sure we’re using AI in thoughtful, sensitive ways. So my one tip would be for startups to ask themselves: is the way you’re utilising machine learning actually solving a problem for users in a unique way that couldn’t be solved by any other technology?

After all, using machine learning is no small feat: it takes a lot of engineering resources and compute power to train and build models. Moreover, there’s a lot of work required in collecting and cleaning your training data, as well as ensuring you have accounted for unintentional bias. It’s a very laborious process, so maybe AI isn’t the answer in the end.

Even the premise of an “AI startup” is interesting to me, because what does it mean? When you look at applications of AI, AI is usually one aspect of the overall technology; it’s rarely a monolithic thing where something is entirely AI. But due to the hype and the sci-fi-influenced mental models, it’s easy to lose sight of the fact that AI is often a top layer built on heuristic infrastructure, by which I mean rules-based, what we think of as more traditional, programming.
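As a loose illustration of that layering, here is a hypothetical Python sketch (the routing rules, classifier, and confidence threshold are all invented for the example): deterministic heuristics handle the clear-cut cases cheaply and auditably, and a learned model sits on top to cover only the ambiguous remainder.

```python
def route_ticket(ticket_text, ml_classify):
    """Hybrid architecture: a rules-based (heuristic) layer handles the
    clear-cut cases, and an ML layer on top covers only the remainder."""
    text = ticket_text.lower()

    # Heuristic infrastructure: cheap, predictable, easy to audit.
    if "refund" in text or "overcharged" in text:
        return "billing"
    if "password" in text or "log in" in text:
        return "account-access"

    # ML top layer: handles the ambiguous cases the rules can't.
    label, confidence = ml_classify(ticket_text)
    return label if confidence > 0.8 else "human-triage"

# Toy classifier standing in for a trained model.
def toy_classifier(text):
    return ("feedback", 0.9) if "love" in text.lower() else ("other", 0.3)

print(route_ticket("I was overcharged last month", toy_classifier))  # -> "billing"
print(route_ticket("I love the new update!", toy_classifier))        # -> "feedback"
```

Structuring a product this way also makes the cost-benefit question above concrete: if the rules layer already resolves most cases, the expensive ML layer may not be worth building at all.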

For AI startups, I would just caution that AI is not the hammer for which everything is the nail.

Want more great AI content? Stay tuned for the second half of this interview in which Di discusses one key piece of advice for startups in AI, her problem with the phrase “AI expert,” and why looking into your dataset is vital. 

How about learning more about Google’s take on AI? Check out Di’s work on the People + AI Guidebook with the PAIR team.