“Be Wary of Anyone Calling Themselves an AI Expert”

SPEAKING TO GOOGLE’S DI DANG (PART 2/2)

  • Why landing a dream job is as much about luck and privilege as it is about hard work
  • Why using the word “expert” can harm the field of AI
  • How using photos of celebrities can screw with your machine learning model
  • Why the future of AI is human-centric

We recently spoke to Di Dang, an Emerging Tech Design Advocate at Google, where she launched the People + AI Guidebook, an open-source framework for designing human-centred AI products, with the People + AI Research (PAIR) team. In the first part of our interview, we discussed how Google AI is helping to support medicine and child literacy, as well as the importance of differentiating between narrow, general, and super AI. In this week’s installment, Di explains her skepticism towards anyone who too readily calls themselves an AI expert, discusses the importance of datasets, and shares one piece of advice for startups working with AI.

You’re working at one of the most innovative tech companies in the world. What has been key to your professional success?

The number one key to my success has been a cocktail – a mélange – of luck, hard work, and privilege, if I’m perfectly honest. I’ve worked really hard in each position over my career to excel to the best of my abilities. But at the same time, to be perfectly frank, when I moved from Berlin [where Di worked as a freelance UX designer for companies like CareerFoundry] to Seattle, I was very intimidated: there were a lot of big tech organisations there – Amazon, Microsoft, Facebook, Google – and a lot of people in the tech industry who had relocated expressly to work at big companies like these.

So I was concerned about being able to ‘make it’ as a UX designer in a full-time capacity, and I was trying to make the transition from consulting to being in-house. At the time, my partner actually supported me financially for months while I job-hunted, so I could find a job I really loved and make sure I was joining the right company for the right reasons, as opposed to taking the first job I was offered when I relocated.

That’s hard work, that’s privilege, but there’s also luck: I got recruited to join Google while I was leading a workshop on VR/AR design at a conference. What would have happened if I hadn’t been at that conference on that particular day, leading that workshop? Who knows where I would be now?

You’re obviously an AI expert, what for you —

Let me stop you there! I noticed you said “obviously you’re an AI expert.” I’m very wary of anyone who calls themselves an AI expert. That’s not to deride the work of academics, research scientists, or other people who have years of experience and research in the space, but when it comes to AI being used in a product or service for people, it’s a relatively nascent area which we’re still trying to understand. So I caution people not to use the term “expert” lightly, and to be wary when they hear other people describe themselves as experts.

The reason I make this point is that there’s so much demystifying that needs to be done in AI and machine learning, and I find that when people say “That’s our AI expert,” it sometimes shuts down the conversation in the room. Other people feel less space to ask really deep, hard questions like “Can you tell me more about how this model output was achieved? Can you tell me more about the training data used?” That’s something people don’t talk about nearly enough, because we’re so fixated on the ‘sexiness’ of machine learning models.

There’s a lot of work to be done around the datasets that are being used to train machine learning models. There may be implicit biases built into a dataset which then influence what the model picks up on and the kind of output it generates.

This is a long-winded way of saying that when you’re working with AI in a product setting, be aware that it’s still an emergent area. It’s OK if you don’t understand things, it’s OK to be naturally curious, and it’s OK to stop, pause, and ask really thoughtful questions, even of the so-called experts in the room. Because if they really are experts, they won’t mind taking the time to explain and unpack the whats and whys for you.

Image: people spelling out HCAI for human-centered AI
Caption: Google AI Impact Challenge: “We’re spelling out HCAI for human-centered AI like a buncha dorks”

You mentioned the importance of the data set. Do you think that’s a key way to develop AI ethically — ensuring you have as diverse a data set as possible?

Oh, absolutely. I’ll admit here that I’m not a trained data scientist, but as a UX designer I’d feel comfortable working with my data scientists to ask those questions and understand which datasets we’re using. I’d want to know: are they public or private? How much are we able to inspect the dataset and understand whether it’s skewed towards one particular subtype of users or another?

For instance, it’s really easy to find public datasets of celebrity photos. Think one step deeper: what are the implications if you were to train, say, a machine learning model for facial detection on a dataset of just celebrity pictures? First of all, chances are all of these pictures are very high-res and high production value in terms of lighting, and everyone is posed facing the camera, maybe on the runway or the red carpet. Let’s go one step further and analyse the racial make-up. Chances are that this dataset is going to skew in favour of white celebrities over, say, celebrities who identify with underrepresented groups or as people of colour.

That means that if you were to use a facial detection model trained on those celebrity faces, it will perform well for other people who are white, but less well for people of colour, because of the training data you gave it.
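To make that gap concrete, here’s a minimal, hypothetical Python sketch of how a team might measure it. It assumes a labelled evaluation set and a stand-in `detect_face` function; none of these names come from the interview or from any specific library.

```python
# Hypothetical sketch: comparing how often a facial detection model finds a
# face across demographic groups in a labelled evaluation set. `detect_face`
# stands in for whatever model you trained; the field names are illustrative.
from collections import defaultdict

def per_group_detection_rate(samples, detect_face):
    """samples: iterable of dicts like {"image": ..., "group": "..."}."""
    found = defaultdict(int)
    total = defaultdict(int)
    for sample in samples:
        total[sample["group"]] += 1
        if detect_face(sample["image"]):  # True if a face was detected
            found[sample["group"]] += 1
    # Detection rate per group; a large gap between groups is the symptom
    # of the skewed training data Di describes.
    return {group: found[group] / total[group] for group in total}
```

Run on a diverse test set, a model trained only on celebrity photos might return something like `{"white": 0.97, "people of colour": 0.82}` (illustrative numbers, not real results) – and that disparity, not any single score, is what tells you the training data was skewed.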

That’s one example of why it’s so important to understand where your data comes from and what makes up your data. There are tools you can use to help you inspect this more deeply. One is called Facets: without needing to know how to code, it enables you to understand and analyse your training dataset for any skew or bias. It essentially allows you to go deeper into your data.
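Facets itself is point-and-click, but for readers who do code, here is a rough programmatic analogue: a minimal pandas sketch of the same kind of inspection, assuming a hypothetical metadata CSV whose file and column names are invented for illustration.

```python
# Minimal sketch of the kind of inspection Facets does visually: checking
# whether a training set skews towards one subgroup. The file name and the
# column names ("skin_tone", "pose") are assumptions for illustration.
import pandas as pd

df = pd.read_csv("training_set_metadata.csv")  # hypothetical metadata file

# What share of the training data does each subgroup make up?
print(df["skin_tone"].value_counts(normalize=True))

# Cross-tabulate two attributes to spot compounding skew, e.g. whether one
# subgroup is also over-represented in a single pose or setting.
print(pd.crosstab(df["skin_tone"], df["pose"], normalize="index"))
```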


When you think of the future of tech what gets you the most excited?

In my role as an emerging tech design advocate and UX designer, it’s the people. I sound so clichéd, but it’s true! This is a problem I see again and again with emergent tech: there’s such a fixation on what’s new and shiny that we rush ahead, doing things just because we can, not because we should.

I’m less excited by technology itself and more excited about our evolving relationship with technology. There’s a lot of interesting work being done around digital wellbeing. At the end of the day, we want to cultivate healthier, more balanced relationships between people and tech. That’s what I find most exciting, because our industry fixates on what’s shiny and novel, and I’m more focused on bringing it back to the people.

Do you mean digital wellbeing specifically?

I essentially mean centring people in the dialogue. When thinking about digital wellbeing, I believe that just because we can do a thing doesn’t mean we should. I ask myself: how can we design products where people don’t feel icky after using them, where they feel like, “Oh, that was a good interaction. I’m able to walk away. I’m able to use this for as long as I want and not feel addicted afterwards.”

I have some limited amount of power, which I can try to use to bring about more of what I hope for in the future, while trying not to bullshit myself about doing good when I’m not doing good.


Want more? Check out the People + AI Guidebook that Di Dang has been working on for a more in-depth look at Google’s human-centric approach to AI.