[00:46]

 

Over the last decade the market research industry has been disrupted. Our largest agencies are struggling to keep up as their customers turn to newer, faster, and cheaper data sources. Now we are on the edge of yet another market shift. Now is the time for us to reassert ourselves as the rudder of the brands we love. Thank you for tuning in to the Happy Market Research Podcast, where we are charting the path for the future of market researchers and businesses. Hi, I’m Jamin Brazil, and you’re listening to the Happy Market Research Podcast. Today my guest is Tim Peacock, COO of Affectiva. Affectiva spun out of MIT and is the pioneer of emotion AI. Prior to joining Affectiva, Tim had extensive experience in all areas of technology, from startups to large organizations. Tim, thanks very much for being on the Happy Market Research Podcast today.

 

[01:35]

 

Hi, great to be here.

 

[01:37]

 

So, maybe you could start out telling us a little bit about your parents, how they’ve impacted you in your current career.

 

[01:46]

 

Sure, I’d be happy to. It’s going to take us back a little ways. My parents were born in England, and I’m English myself. My dad’s career relates to how I find myself in the United States. Dad was a hardware architect for IBM back in the early days of business computing. And he was working on a large project called System/360, which had the novel idea that software programs should be able to run on all different models of computers. While working on that, which was really a major project in the United States, he was transferred over to New York, and that’s how we came to this country. I think what I got from my dad was an engineering perspective on solving problems and looking at things, framing everything as a problem that needs to be solved. Thus, my interest in software engineering. It was much later, actually, that I discovered that he knew Fred Brooks very well. And Fred Brooks, for your older listeners, wrote the seminal text on software project management called The Mythical Man-Month. So I found it interesting that I had a connection to Fred Brooks like that. My mom was very different: my mom was an executive secretary in New York City for many years, working for various presidents of companies, including Viacom just before Sumner Redstone took it over. And what I really got from her was a feel for operations and how things actually work. So, the presidents of all these very important companies would say, “Oh, you know, I can always get you tickets and I can do this and I can do that.” But all that really meant is that they would turn around to someone like my mom, who knew how to actually do it. So I think I acquired a fascination with how things actually work and not just how people think they work. And the combination of that engineering and operations just sort of led my career to where it is today.

 

[03:43]

 

I think that the operational consideration of how things get done is so powerful and is oftentimes demeaned inside of an organization. People will see the CEO – you pick the bigshot in the company – but it’s actually the people underneath them, providing a level of support, who are really delivering on those promises. My favorite example of that, personally, is an internship I did doing mainframe support back in ’89. [Laughs] Super old, right? This is the vacuum-tube type of thing; Wang was the system. And I had these expense checks – expenses that I incurred doing the job – and I needed to get them reimbursed because, literally, I was going to eat or not eat based on the timing of those payments. And so, I would bring a donut to the lady who did accounts payable for expense reimbursements once a week. It was whatever a donut cost back in those days – 10 cents? – sort of an investment. But I actually got my checks hand-delivered by her a week or two ahead of my boss.

 

[04:59]

 

There you go, there you go.

 

[05:02]

 

It’s all about figuring out how the organization worked and how to accomplish that objective.

 

[05:06]

 

Yeah, at every start-up I’ve been at, I’ve always made a point of understanding who’s doing the payroll, who’s doing accounts payable. Obviously, that function reports to me right now, but it didn’t always. And I would always look at people who would annoy the people who cut their paycheck. And I’m like, “Why would you do that?” [laughter] In what world would you annoy the person who is cutting your paycheck? [laughter]

 

[05:25]

 

It’s like the opposite.  

 

[05:26]

 

As the donut story definitely shows.  

 

[05:30]

 

For sure. You think about sales. A lot of the time you have this bottom-up approach – which most market researchers do – where you’re starting, usually, at a project-manager level, and then, as the tools or services get traction, you move up the value chain, ultimately, of course, hopefully to the final decision makers, whether that’s an SVP of marketing or the CMO. That sort of brand that you build at those early rungs is absolutely critical, because it’s the foundation of the potential size of the engagement that you wind up with.

 

[06:05]

 

Exactly. And what I found interesting in our run in market research, which, as you know, started in 2011… So Affectiva has been in this for about seven years now. It’s not always clear, even in large organizations, where the decision maker is. A lot of our best relationships have started with somebody just authorizing a single little project, and doing that well the first time, impressing someone, and creating a champion in the organization. That’s so critical for future success.

 

[06:35]

 

So seven years you’ve been in this space, and that coincides, of course, with your tenure at Affectiva. Did you help open up that channel, that market?

 

[06:46]

 

Yeah, but… I was there. We all did, really. We were a smaller place then, and this opportunity came in with Kantar Millward Brown to do a pilot. And we could see the potential, and it was very much in the sweet spot of what our technology did then – still does now, of course. So, I was involved in the early pilot. And it’s interesting how much of the stuff that we kind of made up for that pilot, about how it should work, has stuck to this day. I mean, now the science is vastly better than it was. And the norms, and the sheer number of these tests we’ve done – 35,000 plus. But some of the stuff that we did in those halcyon, very first weeks is still really how we do it today. It’s been quite a ride.

 

[07:37]

 

So, Kantar, in a lot of ways, opened up market research for you from an access standpoint. Can you talk to us a little bit about the early days? I know this is going to precede your experience with Affectiva, I think, anyway. But back at MIT – what was the original value prop, and then how did that help you get a major customer like a Kantar?

 

[08:06]

 

Yeah, as you’ve noted, Affectiva began… actually, it begins before MIT, at Cambridge University, where our founder, Dr. Rana el Kaliouby… Rana’s Ph.D. thesis was about humanizing technology, which we’ve now come to call emotion AI. Her thesis demonstrated that a simple laptop webcam can measure the expressions and emotions on one’s face. She was inspired to do that by Dr. Rosalind Picard’s book, Affective Computing, which literally created the field of affective computing in the ‘90s. And so, when she got her doctorate, Rana went to work for Ros in the Affective Computing group at MIT’s Media Lab. And what happened was… The way the Media Lab is funded is through a large number of industry sponsors. And whenever they come in, it’s sort of a demo-or-die experience for the researchers at the Media Lab. And Rana and Ros and their team would be doing that. And the interest from the commercial side just got bigger and bigger and bigger. And eventually, they went to the head of the Media Lab and said, “Well, we just need more graduate students to handle this interest.” And he said, “No, no, that’s not right at all. You actually need to spin out; you need to be a commercial enterprise.” The interest was very broad, but market research, such as from Kantar, was there right from the beginning, and that was because emotions matter. I mean, the premise of our company is that emotions matter. We know from neuroscience research that the emotional centers of the brain kick in before the cognitive centers do in our decision-making process. They’re both involved, but emotions get started first. So, anyone who is attempting to influence decision making, loyalty, things like that – all the classic roots of marketing – knows that emotions matter. The question is how you can best measure them. And that, of course, is where Affectiva and emotion AI come in.
So, to get started, what happened was that Kantar Millward Brown – and there have been a lot of other firms as well – were very strong on the cognitive side: sort of the classic surveys. And then there were various attempts to infer human emotion from how fast you answered a question and things like that, but very few people were directly measuring it. Some of the other technologies around were EEG rigs, where, you know, you’re putting EEG rigs on people’s heads. And those have come a long way in the years since, and people still do that. But Kantar, because their customers were global Fortune 500 companies, needed something that could scale around the world. And our technology had that promise. They basically felt that if our technology really worked – which was the point of the pilot – then this was something they would invest in, and we would scale together. And, indeed, the pilot was very successful. The customer was really happy; they had never really seen anything like this before and thought of it as a great advantage. The rest was a multi-year buildout to work on all the different ways that people do market research in 87 different countries around the world.

 

[11:14]

 

Emotions matter. In a modern context, that is more and more important – like, literally, I’m seeing daily movement toward raising this banner. You think about the impact of voice on a consumer-purchase journey: unless a consumer connects to a specific brand name at an emotional level over the generic, you’ve got a big problem, right? It’s all about… I’m going to call it The Love Connection. (If you remember that show.)

 

[11:52]

 

Yeah, I do.  We’re both showing our age a little bit, but that’s OK.

 

[11:56]

 

Yeah, that’s kind of sad. It’s kind of sad. That’s fine. So, we hire younger people because they keep us relevant, I guess. I don’t know. So, are you seeing a surge of adoption, given that banner?

 

[12:08]

 

Yes, very much. Just to come back to the brand point for a second: the example I always use is my own emotional connection. I’m a BMW driver. I was lucky enough to start driving them many years ago. And I have this emotional connection, as you say, to the brand. I don’t think about which car to buy or do a whole search. It’s like, “Oh, which of the latest BMWs am I going to buy?” And that’s something that BMW has fostered in many ways with me. And that’s the connection that you’re talking about. So, are we seeing a surge? Yeah, absolutely. Market research was very much our breakthrough market. It took our technology worldwide. It has gotten us seven million plus people’s faces – and growing every day – empowering our science and our norms. But it’s spread out. We get inquiries all the time. And we’re very much involved in things like the emotional aspects of automotive and the cognitive aspects of automotive safety and automotive cabin design – cabin experience, I should say. We’ve done some education; we’ve done some health care. We have people using our stuff for online video recruitment. And then, I don’t know if you made it to our recent, second Emotion AI Summit, but one of the things we had on display there was a Pepper robot from SoftBank. What we’re doing there is enabling Pepper to understand emotion. And that’s really getting into the bigger play, which is where Rana’s dream really began back at Cambridge: the human-machine interface. Machines aren’t going to interact with us smoothly and effectively unless they understand human emotion. Our approach to that is to understand emotion the way humans do, which is what we call a multi-modal approach, where we don’t consider just what you’re saying or the commands you’re typing on the screen, but what’s on your face, what’s in the tone of your voice, what gesture you’re making at the same time. All modalities matter. That’s how humans have evolved to convey and understand emotion.
We’re going to enable machines to do that.

 

[14:25]

 

When you think about how much is communicated through spoken word versus body language… I kind of harken back to – this again is real early days – ’96, when I did my very first chat-based focus group. In that experience, I was able to get feedback in that I had transcripts of the discussion, but I felt like I lost a lot of the value because I couldn’t read the room, so to speak.

 

[14:59]

 

Yeah, there’s some social research which shows that, if you and I were actually face-to-face, in terms of the emotion we were communicating to each other, we’d be getting a bit over half – 55% – from face and gesture, 38% from how we are saying things, and only 7% from the actual words. So you look at some of these technologies that are doing sentiment analysis. Now, they only have the text; that’s all they can do. But if you’re doing sentiment analysis off text, you’re actually reading 7% of what the human was conveying. So it’s a very tough approach. Face-to-face, we learn to read it. And that’s the approach we’re taking with our technology, with our emotion AI.

 

[15:51]

 

Run us through those percentages again: so 7% is words…

 

[15:56]

 

38% is how you say the words – the prosodic features – and then 55% is visual: a lot of that is what’s on your face, but it also includes gestures, body language, that kind of thing.

 

[16:13]

 

So, as your systems develop will it be incorporating more and more of the whole person?  The whole body?

 

[16:20]

 

Absolutely, absolutely. I think I may have mentioned this to you, but I’ll happily tell your listeners: we now have a voice capability in pretty late-stage beta, and it’s actually been delivered in the automotive versions of our SDK. We’re already getting everything off the face. There’s always room for improvement, but it’s very well developed. And we’re starting to bring voice into it. We will add gestures, and we’re also going to look at context, which is very important. How you and I judge each other when we’re talking across a table might be very different than when we’re sitting at a sporting event, right? So context matters as well in understanding emotion.

 

[17:02]   

 

It’s super complex, this whole reading of emotion. And I take it for granted – all of us as human beings take it for granted. Somehow we just figure it out early and forget it’s hard, right? This mass of communication. I think of language as the hard part of it, but really it’s the body language piece. Do you see that body language transcends region and culture?

 

[17:28]

 

Ahh, that’s an interesting question. We’re just getting into body language, as opposed to the face. On the face, the expressions that a face can make are universal, because they’re dictated by the muscles. But when people make them, how often they make them, what they can mean – even sometimes a nod and a shake mean something different than what we think they mean in the U.S. So, yes, very much cultural. I’m not an expert in body language itself but, based on head shaking and things like that, I would absolutely expect the meaning of gestures, the norms of gestures, to be culturally specific.

 

[18:06]

 

Again, just kind of a broader context, isn’t it?  

 

[18:09]

 

Yeah, absolutely.   

 

[18:11]

 

So, let’s go back a little bit to when you first started seven years ago in this space.  In terms of trade, was the transaction more of a one-off? So, “Hey, we’ve got a study. We want you to run your algorithms across these videos and we’re going to pay you for that”?

 

[18:30]

 

It began that way. We also had a separate product, a hardware product, which measures the galvanic skin response off your skin. We did a lot of custom research projects for people. So, we were almost like professional services – a research agency, almost. People were going, “Can you guys do this and this, plus analyze it?” And it was very interesting but, in terms of building a business, it was hard work. And it wasn’t obvious how it would scale. There are definitely organizations that have made their mark as services organizations, but we were looking for a more scalable software solution. And that’s where advertising, inside of market research in particular, began to show promise, because it was repeatable, even if we were doing some of it by hand in the very early days. Obviously, we weren’t doing the facial coding by hand. But, in terms of delivery, we knew that we could automate it; it was sort of straightforward software engineering. With time and effort, we would get it. And that’s why we ended up pivoting so hard into focusing on that area for years: it was repeatable, it was scalable. We had other partners, but Kantar became a significant partner. And the drive with them was to get them to standardize on it. Instead of this being an option that a client could elect and their sales force could CHOOSE to sell, it had to have enough value, the right price point, the reliability, the case studies – everything that a large multinational needs to back it up – such that they would just include it in their standard products. And that was a couple-of-year effort, including… You know, we had this wonderful cloud-based system, and then we found out that in parts of Asia, the market research was done offline. [laughter] So we had to figure out how to do that. That was a bit of a surprise. But, anyway, it was successful: they did standardize it.
And, along the way, they went from commissioning projects to bulk ordering – buying batches to get a better price – and then eventually multi-year contracts that were based on it being standardized in their offerings. It was quite a journey. One of the key learnings for us, which we’ve now taken forward to all our partnerships, was a belief that the way we’re going to win business is by having a collaborative, transparent partnership – that we’re going to sit down and say, “This is what our science can do. This is where the research can take us. If we don’t have exactly what you need today, we’re not going to make it up. We will tell you what we have, how it applies, and how we can get to what you want.” That can be a slower sale, but it creates really enduring partnerships, and that, I think, is of tremendous value for Affectiva.

 

[21:28]

 

I love that. Every company that I’ve run into so far, especially in the market research space – 100% in the market research space – has been built on the backs of solid relationships, as opposed to technology or services or price point or you-pick-the-USP. I want to home in a little bit, though, on the early days compared to where you are now. And, for our listeners, you guys are definitely the dominant player in emotion AI, but yet you didn’t start there, right? That wasn’t the initial – I’ll call it – value prop. And Paul Graham, one of the founders of Y Combinator, has this framework for startup success, which is that you start a business in a non-scalable way and then you figure out how to scale it as the business expands, right? You’re sitting on a lot of code and IP and smarts, etc. Is a big part of the value for the company in the norms – the thousands and thousands of hours of video that you have?

 

[22:54]

 

I think it is a big piece. I think norms married with expertise would almost be the formula. We had early expertise, and we knew the value of the data, but it was partnerships, where we brought our expertise to bear, that brought us in more data. So we did get into a virtuous cycle where, simply by doing more business and delivering value, we brought more data in. That data improved the core science – the classification ability of the AI itself – and it also powered the norms database. Fast forward seven years, and it’s seven million plus videos of various people around the world, all of that powering norms, so that a customer in practically any country in the world could come to us and say, “Well, is this the norm for chocolate ads in Morocco?” And we could say, “Yes. No. This is the norm. We know what it is.” So, we didn’t have all that to begin with. To your point, we had to sort of scale and produce it. Very early on it was clear that it helps to be an AI company, so that you sort of appreciate the value of data inherently. But very early on, as you talked to customers, they’d be like, “Well, what does this mean?” And we knew the value of being able to tell them what it means – not just a normative value, but doing case studies that can tie it to things that they actually care about, which are things like sales lift, brand recall, earned media – the stuff that really measures the effectiveness of advertising.

 

[24:34]

 

Which, of course, is a golden goose as you guys have continued to crack that nut.

 

[24:41]  

 

One other thing: I was just thinking back to the point that you start out with non-scalable things. Yes, absolutely, I totally agree with that. You have to find what the customer wants, and the chance that you are going to do that by building a scalable system first – and that it’s going to be exactly what they want – is very small. You have to be able to iterate quickly, get to them, and be flexible. So, you have really non-scalable systems: people dragging stuff into Excel or some data analytics program, producing plots, saying this is the way to do it or that is the way to do it, and writing custom scripts. I think the first project we delivered for Kantar took us a week plus to deliver the results, involving pretty much the entire team to figure out how it should work and to do it. And today it’s a self-service thing that takes two minutes off in the cloud somewhere; we don’t even notice it happening. But the key is, you do the non-scalable stuff to get the feedback from the customers about what they really want. And when you’re reasonably sure that you have what they want, that’s what you scale.

 

[25:46]

 

Are you seeing, on a go-forward basis, that there are other people who are going to figure out – or have figured out – these algorithms, the AI, what have you? The missing piece for them is actually the data to help inform their systems, right? Are you seeing that as part of the go-forward model, or are you keeping that as the moat around your IP?

 

[26:11]

 

Yeah, I think that we view data more as a moat around the IP. I don’t see us sharing it, certainly not in areas that directly compete with where we want to be. So I would view data more as a moat; I think that’s the answer there. We do collaborate on academic research, because there’s a lot of value in our data for analysis that we may never tap. So, from a general knowledge point of view, we love to collaborate, particularly with academics, on certain things. One of the data points that came out of that is the finding that in the U.S. (we know this for a lot of countries), females tend to express about 20% more than males do when reacting to advertising. So, you obviously need to keep that in mind when you’re analyzing an ad. But, if you go over to the U.K., males and females express at the same level, and markedly lower than the U.S. levels. That’s an example of what research can find and the kind of research we would do. And some of that is definitely done with academic institutions that are interested.

 

[27:24]

 

So, are you selling directly?  Obviously, you’ve got strategic partnerships inside the market research agency world.  But then are brands a major part of your customer base?

 

[27:36]

 

Well, that’s a great question. Let me get at it from sort of a core-strategy point of view and try to bring the answer back to you. We’re an emotion AI company. We want to be the experts in understanding human emotion, whether that’s for the analytics case, like market research does, or more for the interactive case – say, when you’re talking to an Alexa, or talking to your car, or interacting with your car. So, our focus is on the emotion side. The full-fledged solution for a brand advertiser should – obviously – include our objective, scalable measure of emotion, but it should also include the cognitive side and, depending on what they’re doing, more or less research assistance: “This is a fifty-million-dollar advertising campaign,” so spend a lot of time with the team that does that every day and understand every last nuance. Or, “This is a quick digital campaign and we’ll see how it works,” in which case do a quick test and move on. We want to be the emotion component. And so, we’re committed to a partnership channel wherein our partners, generally speaking, work with the brands directly and we provide the emotion. Having said that, there are exceptions to that rule. Sometimes the brands themselves are essentially their own research agency, in which case we’re happy to deal with them and work with them. And then, of course, we’ve worked with 25% of the Global 500. Pick any name you want. (I’m not necessarily going to be at liberty to share them.) But when they want someone from Affectiva to get on a call and help them understand what the emotion traces mean, of course, we’re there for our partners. We do that all the time. But the key relationships with these larger firms are through the market research agencies that work with the brands.

 

[29:35]

 

Yeah, and it makes sense on both sides, because what you’re doing is providing the data. Then what the researcher is doing with it, whether they’re internal or external, is providing the so-what, now-what insight.

 

[29:51]

 

Exactly. And there are definitely other firms that have gone other ways, and that’s fine. For us, it keeps us out of channel conflict, right? We don’t find ourselves competing with our partners for business; we have almost exactly the same interests. In the analysis of even a short, 30-second ad – or, if you’re cutting a 60 down to a 30 – intent matters, context matters. I can’t just look at an emotion trace without the context and tell you that it’s a good ad. People ask that all the time: “Well, was that good?” It depends on what you were trying to do, right? What emotion were you trying to evoke? We can tell you what it’s doing, and we can tell you that there are some general rules, of course. And those can be tailored to match exactly what you’re trying to do. But deeply understanding what the brand’s ad campaign is trying to do, and then relating the emotional data and, indeed, the cognitive data back to that – that’s the role of the research agency. And that’s who our partners are.

 

[30:44]

 

So, I want to shift gears a little bit and talk about employees.  We have three listener types: one of which is aspiring insights professionals.  At your company and given your broad experience, what do you see as the three characteristics of an all-star employee?

 

[31:01]   

 

Absolutely.  First of all, I’ll use four.  So, I’ll cheat.

 

[31:05]

 

Yeah, you can use as many as you want.  

 

[31:09]

 

Because I’ve got to claim that there’s a foundation layer, right? You have to have the core expertise in what you’re doing. When we hire an AI researcher, take it as a given that they have to have an expertise in deep learning and the latest in AI research, right? So that’s my given: you have to have the expertise in whatever thing we’re hiring you for. What separates the average machine-learning researcher – or software engineer, or biz-dev person – from an excellent one? I would list three things: passion, flexibility, and getting things done. So, going a little bit deeper into each of those. Passion: it’s a long haul at a startup, right? There are going to be great ups, and there are going to be downs, and there are going to be sideshows and all sorts of things. What brings you to work every single day to do your best – and those occasional late nights and weekends, and messed-up vacations, and all that – is a passion for what you’re doing. So, understand what the company is doing. We’re trying to humanize technology; go there if you have a passion for it. Don’t go to a startup if you don’t have a passion. Don’t go to a startup if you think, “Ahh, you know, they may get bought in six months and I’ll get rich.” That does happen every now and then, but more often it is eight years before you’re an overnight success, right? So have a real passion for what you’re doing. If that passion drives you, it’ll make you stand out as an employee – in contrast to very large companies. (Parenthetically, I worked at a company that got bought by IBM, so I’ve done several years at one of the world’s largest.) Flexibility: yes, there’s the core job that you’re going to do, but there are all sorts of other things. For example, we just put on an Emotion AI Summit a couple of weeks ago. Hundreds of people came to Boston, hosted by us, to talk about trust and AI, diversity and AI – some very important, general AI topics. Yeah, we had help and we spent some money.
But everybody in this company did an amazing job of pitching in on things that had nothing to do with their jobs. We were standing around and we had to move the demos: “OK, it’s a few blocks that way.” “Help us load the van up,” and things like that. But every day in a startup, it isn’t just the trivial stuff like that. It’s very important stuff like: you may not be comfortable on customer calls, but this customer needs an expert, and you’re the one that’s available – so get on that call and help us explain what’s going on to this customer. So, passion, flexibility, and then, finally, getting things done. Don’t be a talker. Don’t sit back and hypothesize this or that. From researchers to engineers to absolutely everyone, startups only create value by getting things done. So have a knack for getting things done. Get to closure. Get to the finish. Deliver stuff to the market. Deliver stuff to the customer. And all of that rests on people who get things done.

 

[34:15]

 

I interviewed a… I don’t even want to say the job title because I don’t want to hurt anybody, but this is such a great story, relevant especially to your last point. Jamie Plunkett and I interviewed this senior-level person. It was a really important, critical role – like a building-block role for the organization. The guy looked great on paper. He left the office – it was an eight-hour interview – and we looked at each other. You know, he was checking all the boxes. And Jamie actually told me, “Jamin, this guy is going to say all the right stuff, but at the end of the day, he’s not going to do shit.” And that was exactly right. He had this sort of view of “I’m going to create this façade of work,” as opposed to the adage of rolling your sleeves up and getting the work done alongside the rest of the employees. Then also, the flexibility: there is no task that’s too small or too great for me to tackle. Even if I’m operating outside my comfort zone, I’m willing to suck it up and have that uncomfortable customer call, right?

 

[34:25]

 

Yeah, so I’m an old engineering guy.  So, I have a standard speech for new engineering managers who have the fortune or misfortune to work for me.  Engineering management is about delivery, right? There are two scales that I’m going to grade you on – two grade ranges, if you will, at the end of a project, right?  There’s one that sort of runs from A to B+ to C, and that’s for projects that finish and deliver, right? And then there’s another one that might just touch B+ but goes a lot lower than that, and that’s for projects that don’t.  So, don’t come to me with great excuses about why you didn’t deliver or why it couldn’t be done. Get it done. You have to get it done to get into that A range. That’s what engineering is about.

 

[36:11]

 

That’s 100% right.  I mean, it’s across the board.  Most obviously in sales, but even marketing or product or customer service…  It’s all about that ownership mentality across the whole organization.

 

[36:26]

 

Exactly.  Yeah, and it can be hard.  I haven’t run into it at Affectiva, but certainly in startups, you find people sitting in key spots in the organization and suddenly…  It’s not sudden; it’s a slow dawning on you. “Oh, they sound great. They’re really good at analyzing the problem.” But you begin to realize that they never actually get anything done with that.  And, if the startup is going to go anywhere, you have to make a hard choice. Maybe you can find a different role where they are better suited but, more likely, it’s time for them to move on.

 

[36:59]

 

Anytime you factor in burn rate, I agree with the latter part, right?  100%, 100%. So, what is one secret that you see or that you may have for driving growth, profitability or success of your company?   

 

[37:17]

 

I think it’s…  One of our secrets is to be true to our core, true to who we are.  When we are delivering emotion AI, and we’re working with partners or large firms to bring our expertise to bear on solving a real problem that they or their customers have, we do great.  When we start getting into “Oh, we’re something more than that. We could be the market research agency. Oh, maybe we can cobble together a panel” – that kind of stuff – that’s when we lose our way. The secret to our success is to stay true to our core.  We are an emotion AI company. We want to be the world’s expert in humanizing technology and helping machines understand and adapt to human emotion. And THAT has to be our focus. There are all sorts of adjacent opportunities that would drag us off focus. I’m not saying those aren’t legitimate strategies for some other company, but that’s not what Affectiva is about.  So the secret to our success in emotion AI is staying on point. It hasn’t been perfect; we’ve definitely moved around a little bit and tried a few other things. And that has just reinforced my belief: stay on your core. Focus on your core. I’d say, parenthetically, the hard part in a startup is that you want to be really focused until you’re sure that focus is wrong. You start off stupid; you have to be open; you have to keep your eyes open. Even when we said we were really, really focused on market research, we had to keep our eyes open for where the other markets were going. And that has helped. But, as a core strategy, even when you say it’s emotion, that still leaves you lots of possible places to focus. For us, we want to be true to emotion AI.

 

[39:13]

 

Not only does that clarity benefit the organization in the trade-offs of where you spend time and treasure, but it also creates channel clarity for your customers.  So that, whether you’re servicing Millward Brown or Pepsi, neither feels threatened.

 

[39:33]

 

Exactly.  I mean, I’ve had that conversation, explaining it to some very large brands, and they appreciate the clarity. They like it: “OK, I understand how I get done what I need to get done.”  They’re not looking for some confused “maybe it’s this,” “maybe it’s that.”  And I think that it also helps them. Some of these large brands have ended up at relatively small partners, and it still helps them. For example, there’s a nice, young market research firm called ProtoBrand that’s doing really great.  We’ve been partnering with them for just a few months, and we love it when they win deals. It’s a passionate agency that wants to be an agency, wants to provide the insights and work closely with the brands. That works really well for us.  And I think that, as you say, it gives simplicity and clarity to the brand about who’s doing what.

 

[40:21]

 

See, what happened right here…  I want to take a moment to highlight this for our audience because it’s so subtle and so important.  The way that I got Decipher’s largest customer – who, incidentally, I believe is still their largest customer – is by doing what you just did: promoting their business alongside, or even ahead of, my own.  In so doing – and I’m going to tie this all the way back to the beginning, right? – you start building or forging these relationships that are mutually beneficial, not just now but over time, where the value compounds like interest.  It’s a very powerful tool. And honestly, I’m surprised more people don’t do that.

 

[41:13]

 

And I think…  I’ve been in a couple of firms whose strategy ended up giving them a channel conflict.  Early on, I think we were considering a path that could create channel conflict, and people were kind of pooh-poohing the concern, saying, “Oh, there’s always a way to work around it.”  But it’s actually really hard, because channel conflicts, when they exist, are fundamental, right? Let’s say that I’m trying to pitch Pepsi against Kantar. That’s an insane place for me to be.  You can’t work around it because, fundamentally, in the end Pepsi is going to pick somebody, right? And whoever loses isn’t going to be terribly pleased about it. So, sometimes you have to have a pretty clean foundation that does not have that conflict in it.   And then, yeah, there’s a lot you have to do to build on top of it.

 

[42:04]

 

So what is Affectiva offering right now that is getting purchased in the market research space?  

 

[42:10]

 

Yeah, we call it Affectiva for Market Research, or MR.  And that is basically a quantitative solution for understanding the emotional reaction to video content.  It’s often, maybe predominantly, used for advertising, but it’s also being used for a lot of other content. Let me give a shout-out to The Good Doctor, which is a TV pilot that we tested.  And because I happened to watch the test, I’m now a fan of the show. It’s quantitative testing of video content. We’ve done 35,000+ of these tests. As I’ve said, our science, and the norms that are derived from the science, are based on over seven million people interacting with video content.  Eighty-seven countries, 25% of the Fortune 500, and now an increasing number of TV studios and networks are using it to analyze their content. And essentially, there’s no change to your process whether you’re using our stuff or not. If you’re just doing a survey, there’s no change. The only difference is that your panelists (and, parenthetically, we strongly believe in informed consent: panelists are asked to opt in to emotion AI, which in market research people call facial coding, so I’ll use that term going forward) are recorded while they’re watching the ad, your branded content, your TV episode, your movie trailer, and their emotions and expressions are measured. This is then presented back quantitatively in two major ways. There are scores – various summary scores that attempt to capture how joyous they were, how confused they were, and their overall engagement with the ad – but also a moment-by-moment trace, where you can synchronize along with the video and say, “Here’s a joke.  This ad built to a joke. And look at this: males loved the joke; females were turned off by the joke.” Did we mean to do that? Usually, probably not. And so you can be informed moment by moment as well as about the overall effectiveness. All of this is then tied to norms based on that database of over 35,000 tests run.
And we can relate those norms – along with the cognitive norms on the survey side – and tell you how they connect to sales lift, brand recall, and shared media.  And the good news is that your intuition about how emotion and sales lift should be related is actually true. So there isn’t a big counter-intuitive case to be made here. It turns out that if people are positively engaged, particularly when your brand is on the screen or when the brand reveal occurs, that’s a good sign for sales lift. Neutral engagement is “meh,” and negative engagement – negative reaction, I should say – is obviously poor. One subtlety: this technology is designed to embed in and work with the methodology that your market research agency uses today. We don’t require any particular methodology or any particular platform or any type of panel. We’ve worked with hundreds of agencies over the years.  Almost certainly, whatever platform you’re on as an agency, we’re already integrated with it. If not, our professional services organization can take you through the integration in a very small number of days.

 

[45:40]

 

I love that.  Very, very valuable.  Are you picking up on voice, or the Amazon effect, in terms of market adoption?  Did you get the question? I did a bad job of articulating it.

 

[45:58]

 

I don’t know if it’s a bad job.  It sort of raises two questions. One I want to touch on is the importance of privacy.  Anyone who does business in Europe knows GDPR came online in May, and we – and presumably everybody else – are fully GDPR-compliant.  So one thing it begins with is that everyone gives informed consent about what you’re doing with the data and how it’s used. And then, of course, you have the right to be forgotten, and the data is only used for the stated need.  By and large, we don’t look at voice in quantitative ad testing. One, because panelists aren’t expected to say anything, so there really isn’t much going on in the voice channel.  And, two, from a privacy point of view, we actually drop the audio out; it’s obviously more private to never have something that you don’t need. However, we have done a few qualitative studies, and our technology is used for online interviewing.  And, obviously, in both those cases, people understand that the voice channel is being measured. That’s a very interesting area. I think it’s a potential growth area for us as our combined voice-and-face capability gets deeper and broader. We’re much earlier on with that than we are with the face-only offerings.  But, in terms of understanding what’s going on when people are talking – obviously, relating it back to those statistics – the voice is extremely important.

 

[47:27]

 

Yeah, makes sense.  I tell you, honestly – I know we’re approaching the end of our time – it’s hard for me to stop.  This is a lot to unpack because, as you just said, voice is really important and becoming more so, especially in the context of the new purchase journey brought to us by Google Home, etc.  It’s just vital that this emotional connection be a pivotal part of any successful brand strategy. So anyway, thank you, Tim, very much for being on the show.

My guest today has been Tim Peacock, COO of Affectiva.  Tim, thank you for being on the Happy Market Research podcast.   

 

[48:16]

 

It’s my pleasure.  Happy to do it.

 

[48:24]

 

Next time on the Happy Market Research Podcast, I’ll be interviewing Rudy Nadilo, President of Dapresy.  We’ll be covering the early days of how he and Toby found their product-market fit here in North America, as well as making some really unusual but important connections between photography and the visualization of data in a meaningful, storytelling-based way.