<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=2006193252832260&amp;ev=PageView&amp;noscript=1">

17 Min Read

Watch Angela Pham's Keynote from MAICON: How to Talk to Users in a Machine Learning World (Video)



Facebook has more than two billion users. It's not just a social media leader; it's also an innovation leader.

One way in which Facebook innovates is through the use of machine learning and conversational technology. Angela Pham (@angelapham) is a Content Strategist at Facebook, and she joined us this past July at the Marketing AI Conference (MAICON) to discuss the development of content strategy in a machine learning world. 

In her day-to-day role, Pham makes decisions about how the Facebook platform engages with two billion customers. Her team gives a human voice to machine-enabled conversation technology. In her keynote at MAICON, she shared insights and examples of how Facebook is able to use technology to scale a 1:1 conversation to two billion customers, without losing the human touch.

During her talk, she also discussed:

  • How suggested phrases play into brand voice and experience.
  • Leveraging predictive text to drive meaningful (not creepy) conversation.
  • Using suggestive text to serve the world, not just a subset.

Watch the video below, or read the full transcription. Please note that the transcription was compiled using AI with Otter.ai, so blame any typos on the machine :)

 

 

Session Transcription 

Hi, thanks so much for having me. My name is Angela, and I'm a UX content strategist at Facebook. It's always interesting for me to try to explain my job, particularly to a lot of content marketers who are out here, and as an ex content marketer and journalist myself. A lot of our default assumptions about what a writer does at Facebook come with a little bit of confusion, since most of what we see on Facebook's News Feed is user-generated content. I actually work on the feed and stories team. So what exactly do I write? I think of it as choosing the words that we use for people, and the reasons why we do that, in the user interface. One way of explaining it is: why does a button say Send instead of Submit? Why does a header over friends say Suggested Friends or Recommended Friends? Those are the kinds of decisions I make day to day in my role. So it's a very niche sort of industry to work in.

1:10

And knowing that that's already a niche industry to be in, UX content strategy is a sub-niche, which is how this came to play in machine learning. I actually ended up working on a project on the News Feed team that dealt with predictive text suggestions that would be surfaced to users to use as comments. A lot of us are now very familiar with these kinds of product experiences, which often end up looking something like this. We now see these across so many different email services and social media networks as well, where essentially predictive text suggestions look at what message occurred before you started typing, and suggest something that you might want to say that would be a logical next step.

2:00

So a lot of classic examples appear on the screen. The logical one is, you know, if someone says thank you, a suggestion like "You're welcome" or "No problem" is probably likely to be surfaced by any one of these platforms. These are pretty basic forms of machine learning, right? It's pretty easy to do some text analysis and determine that users are commonly saying these things in response to messages like this: would you like to send one of these three options?

And particularly in social media, I think a lot of us believe we're saying vastly unique, additive things in the commenting sections. But ultimately, the stuff that we say can all kind of be combined into phrases like this. This is just a small sample of very popular comments that occur across vacation posts on Facebook. If you look, there are just a lot of different variations on the form of basically complimenting someone's vacation experience, and variants around how beautiful it is. This is a pretty basic model of machine learning, in that it's pretty easy to surface, based on data: hey, these are the most common and popular phrases. I can categorize what kind of News Feed post I commonly see, whether it's a vacation or a celebratory experience, and kind of predict what people are likely to comment with. So content strategy plays a unique role in this space, because we can't just surface any kind of suggestion, right? And it gets even a little more complex when we think about the fact that it's not just words that can be surfaced as comments, but images as well. Content strategy enters this really unique space where I'm determining the kind of rules for what we surface in terms of emojis, as well as stickers and GIFs here in the middle, and even combinations of the two in some instances, so emojis paired with text comments.
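To make that mechanism concrete, here is a minimal sketch of the kind of frequency-based suggestion model Pham describes, which simply surfaces the most common replies seen for a given kind of message. The contexts, replies, and function names are illustrative assumptions, not Facebook's actual implementation:

```python
from collections import Counter

# Hypothetical mined data: (context, reply) pairs from past conversations.
# The contexts and replies here are invented for illustration.
HISTORY = [
    ("thanks", "You're welcome!"),
    ("thanks", "No problem!"),
    ("thanks", "You're welcome!"),
    ("thanks", "Anytime!"),
    ("vacation_post", "Looks amazing!"),
    ("vacation_post", "So beautiful!"),
    ("vacation_post", "Looks amazing!"),
]

def suggest_replies(context: str, k: int = 3) -> list[str]:
    """Surface the k most common replies seen for this kind of message."""
    counts = Counter(reply for ctx, reply in HISTORY if ctx == context)
    return [reply for reply, _ in counts.most_common(k)]

print(suggest_replies("thanks"))         # ["You're welcome!", 'No problem!', 'Anytime!']
print(suggest_replies("vacation_post"))  # ['Looks amazing!', 'So beautiful!']
```

A production system would mine millions of conversations and layer ranking models on top, but the core idea of surfacing the most common responses for a given context is the same.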

4:03

And you can see, on engagement posts like this one that my friend shared on Facebook, there are a lot of different options as to which emoji we even think the user might pick. It would be a mistake to base that on user data and popularity alone. And that's where my role came into play, where I realized for the first time in my career that I was suddenly working in a machine learning world, which before this was very unfamiliar to me.

So I learned a lot through working on this kind of product experience. I learned five considerations to think about in order to build with care, and to use language in a way that puts the user experience first. And I think this can come into play for a lot of you as well, who might be considering or working on a chatbot experience that has a two-way dialogue. Or if you're thinking about voice tech, which we've heard a lot about today: how does voice tech actually get translated into text form? And if that's going to represent the user, how does that look? What kind of grammar and language choices are you making? So I learned a lot of lessons through this kind of experience, and I really knew very little going in. Hopefully these can be useful for a lot of your own experiences as well.

5:24

So the first thing I think about, in order to build and use language with care for user experiences, is asking: who's actually doing the talking here? I always think about this classic New Yorker cartoon, which a lot of you are probably familiar with, where one dog says to the other, "On the internet, nobody knows you're a dog." The kind of modern form of this is that on the internet, no one really knows if you're a bot or not. I think we've heard a lot of examples of this throughout the day, where sometimes I think about Twitter

almost being a conglomeration of ghostwriters and social media managers and bots, all kind of just talking at each other. And sometimes it's actually hard to discern who is a bot and who isn't. Increasingly, that's what a lot of our social network experience is: there's a lot of automation going on now behind the scenes.

6:21

But looking at this particular problem of predictive text suggestions, there's this really interesting question of when the transfer of ownership of the message actually takes place. On Messenger, a colleague of mine on content strategy was also working on a similar problem of predictive text surfaced in the Facebook Messenger app. When that suggestion is first surfaced, it still belongs to the company, right? So in some ways it's kind of representing what Facebook thinks of first, and thinks is at least appropriate to surface on its own platform.

So it's a little bit representative of the company in a lot of ways. If you were to ask a user, "What are these messages at the top of your comment box?" a person might logically say, "Oh, this is what Messenger wants me to say," right? But the second that person chooses one of those predictive text suggestions and hits send on that phrase or word, it suddenly becomes theirs. The ownership transfers over, and then it acts as their voice. Often the other party doesn't even know that it wasn't originally their own suggestion in the first place. So it's this really tricky space, where you're dealing with the balance between almost a brand voice choice and a representation of your company, but then also knowing that you want it to sound like the user as much as possible.

How this plays out gets even more complex when you think about how much power is actually behind these kinds of language suggestions, particularly on a platform like Facebook, where we have over 2 billion users. That's a lot of potential conversations that you're helping shape. So if you look at these examples here, this is a typical instance of an email platform suggesting phrases you might want to send to someone. On one side, you've got "Okay, sounds good" and "Okay, thanks." That's kind of pushing people toward maybe a more positive interaction. It's polite, it's kind of straightforward. And then on the other side, you also have the option to say "That sucks," a completely different and more negative experience that you could potentially push users toward. And that's pretty huge, right? When you think about the consequential dialogue that can come from just those mere suggestions, and which choice users tend to make, either instinctively or with a lot of thought put into it.

And at the bottom here, you can see another set of suggestions that are a lot more formal.

And I remember when I actually got this set on Gmail one day: "Thank you for the information," or "Thank you for the mail," or "Noted with thanks." I kept wondering, you know, what were the inputs that they used to surface these to me? It's almost formal, Shakespearean-style language, and certainly not my own language pattern. And I don't doubt that Gmail's suggestions are actually getting more accurate and personalized over time, which is common with most machine learning models. But it's interesting to think about the fact that this can either match me and feel like me, or it can be creepy if it feels too much like me, and also disconcerting if it's the opposite of how I feel like I talk.

9:47

And it makes me think about a friend of mine who's one of those artsy, stuttery types, right? I'm sure most of you have this kind of friend, where it's almost like a Christmas miracle if you get an email from him. He's the kind of person that maybe types with one finger and doesn't use periods or grammar, just barely coherent sentences. I remember a few years back he responded to an email, and I'm always like, wow, he responded, what's he saying today? And it was the first time I ever saw a full set of coherent sentences from him, with perfect punctuation, exclamation points, capitalization. It was incredible coming from a person like him. And it was not coincidental that it was around the same time Gmail was really testing out predictive email suggestions for the first time. I just remember thinking, this guy either got his act together out of nowhere, or he's like Gmail's dream target user for this kind of experience, where he's just hitting the buttons of all the phrases and composing the perfect, flawless email. So in some sense, I think that's awesome for someone like him, to enable this really easy communication

without putting much effort into it. But at the same time, I felt this almost sense of loss, where my friend didn't sound like himself in the email exchange at all, right? There's something there that feels a little bit sad, and it's something that weighed on me a lot as I worked on a project like this.

And I think this was really well articulated by a tech blogger; I pulled this from his blog, and his Twitter handle is Geek in Chief. He said that chances are that, to save yourself the effort, you'll just start accepting Google's suggestions, and then all of a sudden your voice is the same as everyone else's. So it's kind of a heavy thing to weigh when we think about the future of communication on the internet, and how we're all increasingly able to use bots and suggested text, letting people's dialogue be shaped more and more by the hands of a company.

What does that do to the uniqueness of our voices over time? I think it's something that we should all be considering if we work on similar product experiences.

12:12

The second question I ask myself, to make sure I'm building and using language with care for our users, is: what are we actually helping people do better? This is just another variant of the classic question that before we use any kind of automation as a solution, we should be crystal clear on what the problem is that we're trying to solve. And that was no different with a product like this, where I really wanted to articulate and visualize what it was that these predictive text suggestions could help our users do better on Facebook.

So I developed a framework of sorts that helped me convey to my product team: this is the role that content strategy can play, and here's where I think this space makes the most sense for us, in terms of which kinds of phrases we think are good to surface for our users and which are not. On one side, you've got the mundane, and on the other, you've got the meaningful. One end is throwaway and the other is thoughtful. If you go toward the mundane side, something that's kind of throwaway, like "Haha" or "lol," isn't necessarily helping us achieve the goal of having meaningful conversation in Facebook's News Feed comments.

But if you look at the other extreme, that's not where I thought it made sense for us to play either, especially given our own product goals. Seeing Facebook surface a phrase like "I love you so much, my darling" would probably be eerie for a majority of us, regardless of whether the context was accurate or not. It's also almost dystopian to think of a world in which our human expressions no longer need to be hand-typed by us, but are actually suggested by the platform we happen to be using at a given time, right? So the sweet spot for our product at the time was more in the middle: "Sounds good to me," a phrase that's meaningful enough to not just be a simple throwaway and an efficient tap, but rather something that could help create a little bit more of a meaningful dialogue in the commenting section. Creating a framework like this really helped me articulate why it was so important that we made a decision around what we did and did not want to surface, so that we weren't overwhelmed by the number of phrases we had to choose from, and were more strategic about how we wanted to achieve our goals.
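One way to picture the mundane-to-meaningful framework is as a band filter over candidate phrases: keep suggestions in the middle, and drop both the throwaway and the overly intimate extremes. The scores and thresholds below are invented for illustration; in practice they could come from a model or from editorial judgment:

```python
# A minimal sketch of the mundane-to-meaningful framework described above.
# All scores and cutoffs are illustrative assumptions.
PHRASE_SCORES = {
    "Haha": 0.1,                              # mundane, throwaway
    "lol": 0.1,                               # mundane, throwaway
    "Sounds good to me": 0.5,                 # the sweet spot
    "Congrats, so happy for you!": 0.6,       # still in the sweet spot
    "I love you so much, my darling": 0.95,   # too intimate to suggest
}

MUNDANE_CUTOFF = 0.3   # below this, a phrase adds nothing to the dialogue
INTIMACY_CUTOFF = 0.8  # above this, a platform suggestion feels eerie

def in_sweet_spot(phrase: str) -> bool:
    """True if the phrase is meaningful but not uncomfortably personal."""
    score = PHRASE_SCORES.get(phrase, 0.0)
    return MUNDANE_CUTOFF <= score <= INTIMACY_CUTOFF

print([p for p in PHRASE_SCORES if in_sweet_spot(p)])
# ['Sounds good to me', 'Congrats, so happy for you!']
```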

14:53

So the third thing to ask, in order to build with care and use language with care, is: what are our language principles? A lot of you in the room who are content marketers are familiar with some of the basics that come into play here, things like style guidelines. What are our rules on grammar and punctuation? What do we capitalize and not capitalize? What are some words that we say we will never surface, like curse words, versus ones that we think are one-size-fits-all for everybody?

And it's a little bit trickier when it comes to actually trying to represent the user's voice with products like suggested, predictive text, in that you're suddenly making decisions on behalf of a large number of users. That makes it kind of an elevated style decision. It means you're not just merely saying, hey, I think "looks good" should be lowercase with a period versus uppercase without a period. You're, in a sense, dictating a lot of language decisions and shaping how a lot of dialogues will play out over time.

16:02

Things like "I want to go someday" versus "I wanna go someday": if we actually find that a significant number of users prefer one variant, does that mean we always suggest it? Making strictly data-based decisions is never wise, and I think we've seen a lot of examples of that in earlier discussions today. But at the same time, how do we make those kinds of principles and stick to them, where we're saying, hey, I actually think we should avoid internet slang, or abbreviations and acronyms, even if we see that users commonly use them, and maybe certain demographics are more comfortable with them than others? These are the kinds of decisions you have to make at the very beginning when you're building out a language-based product like this, knowing that there's a lot of responsibility in shaping how people communicate with every single language choice that you make.

So one way that I was really able to make these kinds of micro decisions with confidence was going back to principles that I could find. There wasn't specific guidance inside of Facebook on how to make a style guide for predictive text suggestions. But what we did have, and what a lot of your own companies probably have as well, are AI principles. A lot of these center around ethical choices, things we said we will and will not do when it comes to automated technology. Google famously has been working on theirs a lot over the last year, and Facebook and Google and a lot of other big tech companies have actually partnered with academics and other organizations to agree upon principles early on, just to make sure that we stay within those bounds.

So when I referred to our principles, in places like this, I looked at things like point two, where we said we believe in fair, transparent, and accountable AI. And I used that as kind of my guideline for how I made style choices. If I were to zero in on the word "fair": what's fair when it comes to grammar and punctuation and acronyms? The way I translated that was, at least early on, until our model gets more advanced, fair means making language choices that are as applicable and universal as possible. For that reason, I steered away from acronyms and abbreviations that wouldn't necessarily resonate with, or be understood by, a wider user base.

So if you're ever in a similar situation, and you're dwelling on all the nitty-gritty micro details of style choices, and almost overwhelmingly large language choices to make, definitely go back to the principles and find what you can to help support your choices, so that you don't feel like you're making a subjective language choice all on your own. I think this is something that's really powerful and really makes a difference: if people ever challenge the rules you've put in place, you can say with authority and confidence, this is actually based on very sound strategic principles that our company agreed to early on. And if you're at a smaller company that might not have this level of robust partnerships and principles in place, then I definitely recommend scouring interviews your leadership, your C-suite, has done with the media, where they've talked about similar principles you can latch on to, and making sure that every style choice you're making is backed up by something bigger than your own work.
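As a rough illustration of how a principle like "fair, transparent, and accountable" might translate into enforceable style rules, here is a minimal rule-based filter. The blocked-term list and checks are illustrative assumptions, not Facebook's actual style guide:

```python
import re

# Hypothetical style rules derived from a "universal language" principle:
# avoid slang, acronyms, and anything not broadly understood.
BLOCKED_TERMS = {"lol", "brb", "smh", "imo", "tbh"}  # slang and acronyms

def passes_style_guide(phrase: str) -> bool:
    """Reject slang, acronyms, and all-caps shouting; allow plain phrases."""
    words = re.findall(r"[a-z']+", phrase.lower())
    if any(word in BLOCKED_TERMS for word in words):
        return False
    if phrase.isupper():  # all-caps reads as shouting, not universal style
        return False
    return True

for candidate in ["Sounds good to me", "lol same", "GREAT PIC", "Noted, thanks"]:
    print(f"{candidate!r} -> {passes_style_guide(candidate)}")
```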

19:43

And the number four question to ask, to build with care, is: how do we serve the world and not just a subset of the world? Obviously, at Facebook we particularly think about this in everything we do, because there's a very international audience for all our choices.

Diversity and inclusion is a huge initiative, and all of our product teams make a conscientious effort to be as inclusive as possible in a number of different ways. It was really interesting for me to work on this project and realize that even areas as seemingly innocuous as suggested emojis and comments could be so heavy with controversy and risk in every decision about whether we could potentially surface a given emoji.

On one side, you've got the universal yellow cartoonish emojis that express pretty universal human emotions. So the heart emoji, throughout the world, is a recognized thing; because of the almost cartoonish color of these emojis, most people throughout the world can use them and feel like they can relate to them. On the other hand, you've got ones that look innocuous initially, but were actually pretty stressful.

21:00

We had to think through all the different use cases of which emojis are safe to surface for everybody versus those that are not. Take the dancing woman in the red dress, a really popular emoji used throughout the world. However, what would it be like for Facebook to surface this to someone who doesn't identify with that gender, or who doesn't identify with the garments that the emoji is wearing? That's a high-risk situation, not ideal for us to surface. Similarly, on Day of the Dead in Mexico, it might make sense to surface the skull emoji, and it might be very popularly used during that time. But it would be very eerie and sinister in a lot of other places for Facebook to surface the skull emoji and say, hey, why don't you comment to your friends with this guy?
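One way to sketch this kind of rule, rather than defaulting to whatever emoji is most popular, is to attach hand-written safety constraints to each one. Everything below (the rule shapes, locales, and function) is an illustrative assumption, not how Facebook actually encodes these decisions:

```python
# A sketch of rule-based safety filtering for suggested emoji.
EMOJI_RULES = {
    "😂": {"universal": True},
    "❤️": {"universal": True},
    "💃": {"universal": False},                     # gendered, specific garment
    "💀": {"universal": False, "locales": {"MX"}},  # e.g., Day of the Dead
    "👍🏻": {"universal": False},                     # single skin tone as default
}

def safe_to_suggest(emoji: str, locale: str) -> bool:
    """Only surface emoji that are universal or explicitly safe for a locale."""
    rule = EMOJI_RULES.get(emoji)
    if rule is None:
        return False          # unknown emoji: never surface it
    if rule["universal"]:
        return True           # broadly relatable everywhere
    return locale in rule.get("locales", set())

print([e for e in EMOJI_RULES if safe_to_suggest(e, locale="SE")])  # ['😂', '❤️']
print([e for e in EMOJI_RULES if safe_to_suggest(e, locale="MX")])  # ['😂', '❤️', '💀']
```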

21:50

The same goes for a Caucasian flesh-colored thumbs up. This might make sense and might be one of the most popular emojis in Sweden, but that does not mean it's the one we start defaulting to in every Swede's experience, right? We have to think through: are we actually building for a majority and overlooking minorities in a number of different experiences? So until we're able to get a model that's better at replicating how people really talk, and more personalized to them, those were experiences that weren't really safe for us to surface. Another example of where this gets risky is something that, again, seems innocuous at first. This is actually something I found on Twitter, where a conversation from a stranger asked someone if they were available to meet for coffee, and all of the predictive text suggestions were variants of the word "yes."

So I think, you know, you can see this in a number of different ways, and almost understand what the rationale might have been on the product team that built this. Maybe the data told them, oh, 90% of these kinds of conversations lead to someone saying a yes response, and we are a network that's designed to help people meet and communicate and find new opportunities, so the yeses make sense. But if you've ever been vulnerable on any online domain, felt unsafe, or felt like you did not want anything to default to the word "yes" when you're meeting a stranger from the internet live for the first time, you can understand that this actually creates a lot of discomfort and fear, and assumptions about who built this and how they did not think of this. In tech, we often call these edge cases, which on my team we really try to reframe as stress cases, to not imply that these are something we think about on the edge; these should actually be core considerations in any kind of product experience that we're building. Maybe the team thought through this and didn't realize how it could come off. But that's why it's so important for us to have a diverse set of people helping shape these experiences together. Because if someone has ever felt threatened online, hopefully they would voice that at the table very early on, to say, these are things that will come up, and we don't want people to feel uncomfortable when the platform is telling them in every possible way, you should go meet the stranger.

And the fifth thing to ask is: how do we build experiences if we're not the ones actually building the products? This is particularly interesting for those who, like me, come from humanities and liberal arts backgrounds, where I think a lot of us join these kinds of conversations with almost a pre-warning of, well, I'm not very technical, or I'm not a coder, so I don't really know how this algorithm works; I can talk about the user experience, but maybe not the back end.

25:00

And a really big lesson I learned from being a part of this project for so long was that it didn't really matter that much that I wasn't a coder. If anything, it helped me in a lot of ways, because the skills that make me bad at coding make me really good at empathy, and really good at being able to explain complex things. Because I'm not immersed in the complex things every day, I have the ability to translate for real people. And that's what I really love about working in this kind of space, which is why I always think about this: we call these experiences automated, but they're not autonomous. These are not sentient models at this point; these are all, particularly initially, built with a lot of human touch and a lot of human hand-holding. And I was really proud of the fact that even though I didn't have that technical background, and even though I would identify, as much as many of you do today, as a beginner

to intermediate in the AI space, I was still able to shape a massive part of the product experience for probably millions of people. And that's a lot of impact. I want everyone to remember that even if you don't have all the chops that you think classically come with working in this space, you can still play a really big role just by being a human being.

And one example I have of someone who did a really good job of conveying how humanities and creative writing can actually have a lot of impact in big tech was my colleague, Jasmine Ty, who worked on a similar product with predictive suggested text in the Messenger product. She actually wrote a poem that she presented to her entire product team, and it really moved me to see how she chose her words and used them as a way of articulating how important it was for our teams to think through the entire product experience, and to notice that content strategy plays a very key role here.

27:10

So I'll read it out loud.

In all the conversations that happen day to day,
it's easy now to reply "Thanks" and "Okay,"
or to share cute stickers and funny GIFs too.
But what if you don't want to tell someone "I love you"?
As machines help us communicate and get things done,
how can we make things useful and fun?
As storytellers and communicators, what is our role?
How do we make machines have a soul?
Do our suggestions represent Facebook, as such?
And if so, we probably shouldn't surface curse words much.
But on the other hand, are we controlling the choice,
and denying people their own expression and voice?
What is the line between creepy and cool?
What are the guidelines for our large language pool?
When are these welcome, and when are they not?
These questions are tough, and we have quite a lot.

Thank you.

Transcribed by https://otter.ai

 

Related Posts

Angela Pham from Facebook Speaking at Marketing AI Conference (MAICON)

Paul Roetzer | May 21, 2019

Angela Pham, UX Content Strategist at Facebook, is speaking at the Marketing AI Conference (MAICON) on how her team gives a human voice to machine-enabled conversation technology.

Watch Karen Hao's Session from MAICON: What Is AI? (Video)

Sandie Young | November 12, 2019

In her talk at the Marketing AI Conference, Karen Hao of MIT Technology Review discussed, “What is AI?” Watch the full video here.

8 TED Talks on AI Every Marketer Should Watch

Elizabeth Juran | March 24, 2020

AI TED Talks are an excellent avenue for marketers looking to expand or deepen their knowledge on artificial intelligence. Here are a few we recommend.