INTERVIEW

From NLP to LLMs: Insights in Language Understanding from Industry Leaders

Featuring Alyona Medelyan – Founder and CEO, Thematic

A timely panel discussion explores the evolution of language understanding.

Hosted by Richard Owen, the panel features three experts: Brian Curry, Co-founder and Head of Products at OCX Cognition; Maurice Fitzgerald, Head of Content at OCX Cognition; and special guest Alyona Medelyan, CEO and co-founder of Thematic. Medelyan, originally from Ukraine, holds a PhD in Artificial Intelligence, making her uniquely qualified to discuss this rapidly evolving field.

Originally referred to as Text Analytics by business users and Natural Language Processing by researchers, this field has been transformed by the rise of Generative AI. The panel will discuss how Large Language Models (LLMs) have eliminated the need for pre-canned taxonomies and time-consuming training, enabling a more nuanced understanding of language. Brian Curry will break down the game-changing potential of LLMs in solving complex language problems and customizing models for unique business needs. The panel will also explore the transformative applications of LLMs, from enhanced chatbots to natural language data queries. Alyona Medelyan will share invaluable tips on choosing the right AI solution, emphasizing the importance of testing with real data.

Richard
Today’s format is rather unusual compared to our typical podcast, because we’ve got a panel discussion, and we’re going to be focused on a topic which I know is critical for a lot of people who are looking at CX programs: Natural Language Processing.

And in addition to our usual panelists, my colleagues Maurice Fitzgerald, Head of Content at OCX Cognition, and Brian Curry, our Co-founder and Head of Products at OCX Cognition, I’m very pleased to announce that we have a special guest here today: Alyona Medelyan, who is the CEO and founder of Thematic and is probably better qualified than anyone we know to talk on the topic of Natural Language Processing. So welcome, Alyona. Thank you very much for joining us today.

Alyona
Thanks for having me. Exciting.

Richard
And just as a side note, as you rush off to read her LinkedIn profile, I’ll share just one observation here: Alyona is originally from Ukraine and holds a PhD in artificial intelligence. So we’re all a bit upstaged here today on this topic, but we’ll try not to be too overawed, and we’ll just dive in. So Alyona, we’re recording this on a technology, Riverside, as you know, which as I speak is transcribing everything I say into text. And so there is an absolute boom in the application of speech to text. Just about every product I see seems to have it baked in. Is this a good thing from your perspective for NLP? Is it an enabler? Is it commoditizing NLP? What are your thoughts on that?

Alyona
No, it’s amazing. The first step in analyzing speech is basically transcribing it into text, and then all of the other methods that you would apply to text apply as well. So we’re using it in our company. It saves so much time and adds so much value to business users. And for NLP, that just creates more data to analyze.

Richard
So from your perspective, does it move the real value away from the creation of data out of speech and into the analysis?

Alyona
Well, the value for the users is that now you don’t have to write notes. You just copy and paste. And a lot of tools even summarize your meeting notes and recordings of calls for you. From the NLP perspective, it’s always been a challenge in the past when, let’s say, in the customer experience world, we were asked to analyze contact center data and we would run all of the different speech-to-text APIs and compare them. And they were all pretty average, to be honest. And finally, we now have technology at a level that is good enough for companies to benefit from all of these transcriptions of their contact center data.
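To make the pipeline Alyona describes concrete, here is a minimal sketch in Python: a recorded call is first transcribed with a speech-to-text model (the open-source openai-whisper package is assumed here), and the resulting text is then handed to whatever downstream analysis you would apply to any other text source. The analyze_themes function is a hypothetical placeholder for that second step.

```python
# Minimal sketch of the "transcribe first, then analyze as text" pipeline.
# Assumes the open-source `openai-whisper` package (pip install openai-whisper);
# `analyze_themes` is a hypothetical stand-in for any downstream text analysis.
import whisper

def transcribe_call(audio_path: str) -> str:
    """Convert a recorded call into plain text."""
    model = whisper.load_model("base")      # small general-purpose model
    result = model.transcribe(audio_path)   # returns a dict with a "text" field
    return result["text"]

def analyze_themes(text: str) -> list[str]:
    """Placeholder: plug in any text analytics here (themes, sentiment, ...)."""
    raise NotImplementedError("swap in your own NLP or LLM analysis")

if __name__ == "__main__":
    transcript = transcribe_call("contact_center_call.wav")
    print(transcript[:200])
    # themes = analyze_themes(transcript)
```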

Maurice FitzGerald
Could I ask, do you feel that technology for the transcription has moved on a lot recently? I’m thinking of an old example from my past, when I personally had to do the negotiations with a large European television station. At HP we had Autonomy software, and we’d contracted with them for digitizing all of their archives, which included transcribing the voice into text. And how can I put it? It couldn’t deal with regional accents at all. It didn’t matter whether it was French, German, or English. You know, English from the London area was fine… but from certain suburbs of Glasgow, it just returned garbage. Has that progressed a lot?

Alyona
Definitely. So I work in a company full of New Zealanders, and their accent can be quite difficult to understand as well. And yeah, our first question was: will it work on our accent? And it works.

Maurice
Oh.

Richard
New Zealanders, do they speak Hobbit? Is that the problem? I didn’t think Kiwi was that difficult. One of the topics that I think is probably relevant for our group, and Brian perhaps you can comment on this. So, you know, the most obvious application in the past has been looking at survey commentary through the lens of NLP. But…

Alyona
They speak Kiwi.

Brian
Thank you.

Richard
But I think the mathematics of survey performance are getting harder and harder to see that as being valuable. I know you’ve got a point of view on this. Could you share your perspective?

Brian
Yeah, I mean, I think if you back up and think about the use cases we have for processing language, generally speaking it’s to understand the topics of interest and the sentiment or attitudes of customers as they move through experiences with our brands and our products. That’s really what customers are looking to do.

The way in which we make sense of all of these unstructured attitudes in the world is to apply structure to them. And one of the easy ways, one of the most traditional ways, was with surveys, because surveys allow you to structure the question and limit the answers so that you can really get back a structured data set. But the problem that we see for many of our customers today is that it’s a stretch to see survey response rates that really exceed 5% on average in a B2C context; they might get a little higher in B2B. So the survey vehicle itself is just not coming back, let alone the amount of signal in it. If we think of the kind of information we get from a survey, it might start at 5% with the answer to the first question and then wane as you move through the body, so you get different response rates at deeper questions. And probably even more so, you get a little bit of reticence to go into the natural language fields and type in free text that we can perform that kind of Natural Language Processing on. So it’s been a difficult vehicle to yield a lot of signal for customers in the past.

Richard
Does that mean, Alyona you might want to chip in on this, does that mean that the survey won’t be the center of the universe for NLP much longer? I mean, I think it’s where a lot of people started, but it seems like, well, first of all, do you agree that it’s getting thinner and thinner as a data source? And where would we go if not the survey?

Alyona
So you’re right that in the CX context, text analytics has been applied primarily to surveys: NPS surveys that are very short, customer satisfaction surveys and so on. I think what Brian is referring to is the fact that everybody can send a survey within seconds, and this is thanks to the democratization of the technology for sending out surveys.

But traditionally, the survey is a market research method that experts knew how to use in a way that would actually get a representative sample and wouldn’t skew the results. And of course, there are a lot of users who misuse it, and as a result they get an unrepresentative sample. We tend to work with large companies that have quite large teams of experts, research teams, and surveys still play an important role there. And another thing that I’ve noticed, with survey responses being small: as Brian mentioned, the length of the survey impacts it, and so does the wrong methodology, but it also depends on the brand.

There are certain brands that we see where people write a lot and they’re glad that they’re being asked, so why discount all of those signals and all of this data? And there are some brands where people have just been disillusioned and frustrated with them for many, many years. They don’t want to answer another survey, also knowing that nobody is going to do anything with it. So we see brands that kind of facilitate this discussion.

And Atlassian is one of those brands. They are the makers of Jira and Confluence, and developers, as Brian will know, are very vocal about their tools. And they ask for feedback, and then they basically create these communication cycles with the developers, saying, hey, we’ve heard you: 20% of you asked for this, so we focused on it and prioritized it. So it’s an opportunity to build trust with the brand.

Maurice
Yep.

Richard
So what I’m hearing you say is that, to some extent, the better companies will still be able to sustain this, depending on the circumstances companies face. And that’s probably more true for B2C. When you get into B2B, things get a little messy. You can’t really build a representative sample of business-to-business customers; there’s too much heterogeneity in the group. So at some level, what people might like to do is say, well, we’re going to end up with a small sample, but it’s going to be representative. But it’s so hard to get any responses that it’s really hard to imagine they’re going to be able to construct a representative sample from a small group. You know, they’re going to end up with virtually no responses. And mathematically, if you drop the idea of homogeneity, then you lose the ability to extrapolate anyway. So even if you think you’ve got a representative sample, it’s probably not accurate.

Alyona
Yeah.

Richard
So it sounded to me like, you know, consumer or very high volume brands still have more leeway with this than lower volume. Is that fair?

Alyona
Well, we are a B2B business and sending out a survey is tricky, so we typically don’t do them. We try to personalize when we need to find out what people want. We try to schedule as many interviews and discussions as we can, because then, even if your sample is still limited because you can’t talk to every customer about everything all the time, you can go in depth on things. And often even small-scale surveys will still show you the direction. They won’t tell you all of the answers for sure, especially with a small sample, but they can give you a direction and then you can dig deeper in the interviews.

Maurice
And are you recording and then using the software to analyze the responses you get in these interviews?

Alyona
Yep.

Brian
I think that one of the things that we’re learning as we build the OCX platform is that there’s really no safe single source for understanding customer attitudes or predicting their attitudes or behaviors. We’re looking at a variety of sources. For us, in the first epoch of the company, it really meant looking at attitudinal data that most companies were largely collecting through surveys and then joining it with operational signals. These would be more like behaviors, things that are sitting in products like Jira: pulling those in and correlating them with the attitudes, to start to be able to predict the attitudes when those aren’t present but the operational data is there. That was kind of epoch one.

I think where we’re now focused, because of the explosion of NLP, particularly with the emergence of large language models, is that a lot of the signal that before was kind of trapped inside of language is becoming available to us. Without having to natively build another NLP engine, we’re able to use things like one-shot or few-shot learning on top of LLMs from people like Anthropic, or to augment the prompt by looking at local data sources through RAG and things like that. Those are really making language sources much more interesting and available to us. Are you finding similar things yourself, Alyona, when you’re looking out there? It kind of feels a little bit like an emerging golden age for language as a source of intelligence.

Alyona
100%. Yeah, I’m a huge proponent of LLMs, and it’s an amazing tool. I’ve tested it many times on language problems that were tricky: ambiguous words and very tricky negations.

“I never drank a coffee that tastes so good”, for example. Our language is so complex, and a lot of traditional techniques and rule-based techniques just fall short.
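As a rough illustration of the few-shot approach Brian mentions, the sketch below shows an LLM two labeled customer comments and then asks it to label Alyona’s tricky-negation example. Only the prompt construction is shown; call_llm is a hypothetical placeholder for whichever provider’s chat API you use (Anthropic, OpenAI, and so on), and the example labels are invented.

```python
# Few-shot prompt sketch for extracting topic and sentiment from feedback.
# `call_llm` is a hypothetical placeholder for a real provider SDK call;
# the labeled examples below are invented for illustration.

FEW_SHOT_EXAMPLES = [
    ("The app kept crashing during checkout.",
     "topic: reliability | sentiment: negative"),
    ("Support resolved my billing issue in minutes.",
     "topic: customer support | sentiment: positive"),
]

def build_prompt(comment: str) -> str:
    """Assemble a few-shot prompt from the labeled examples plus the new comment."""
    lines = ["Label each customer comment with its main topic and sentiment.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Comment: {text}", f"Answer: {label}", ""]
    lines += [f"Comment: {comment}", "Answer:"]
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    """Hypothetical: send the prompt to your chosen LLM and return its reply."""
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt("I never drank a coffee that tastes so good."))
    # label = call_llm(build_prompt("I never drank a coffee that tastes so good."))
```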

Richard
So on the topic of large language models, isn’t there a risk to some degree that the LLM space commoditizes in favor of the very, very large companies? You know, you’re thinking of Google, AWS, Microsoft. When I looked up NLP just on a browser search, not surprisingly, Google came back with Google, and Google’s recommendation was to go to Google and buy Google machine learning. So for everybody who’s been surrounding the AI space with additional technologies, and I think that’s probably a fair categorization for Thematic, doesn’t that represent a challenge? Is there a risk of commoditization, or is that an opportunity? How do you think about it?

Alyona
I think there’s both. When I started the company, Google already had their NLP API, and Amazon already had a version of that. And there were lots of open source tools that people could use. People were asking me, why are you starting this company? Google has already solved this problem.

Well, nothing has changed: developers, you know, obviously can do what we do in-house. That argument remains, and it may even be stronger now, because it’s even easier to analyze data with off-the-shelf products. But at the same time, I think the positive thing is that people know that AI can actually do this. Because when I started the company, people were questioning: can this algorithm analyze my customer feedback as accurately as I can in my Excel spreadsheet? This was the question we had to answer over and over and over. Now, the only question is: is my AI better than somebody else’s AI?

Brian
Right.

Maurice
Easy.

Alyona
Overall, the reason why I started this company is because I felt like there was this technology that was not reaching people. And I’m excited that there are now even more opportunities for people to benefit from this and save time and be more efficient. And the whole solution is not just the algorithm. Algorithms are just tools to, as Brian said, convert unstructured data into structured data. From then on, the whole product around helping you work with this data and build trust in this data still remains, and LLMs remain very much black boxes. And one of the findings that kind of created our unique advantage for Thematic, and why even companies like Atlassian, who have a lot of engineers, or LinkedIn and DoorDash, use us, is that you still need to teach the language model how you talk and how you want the data to be analyzed, because a language model doesn’t know your business knowledge and it’s very difficult to teach it that. You still want to be able to correct it.

So we build user interfaces where the human and AI work together. And I think that is our unique advantage and something that’s difficult to build.

Richard
So, Brian, that’s an interesting point about LLMs. We were talking earlier this morning about this, and perhaps for the audience who aren’t familiar with how LLMs actually work, there’s this notion that LLMs have to learn, first of all, in the same way that traditional NLP does. Could you take people through that a little?

Brian
Mm.

Sure. I mean, if you think about the way LLMs work today, they sort of have a two-part function. The first part is what they do really, really well, and what Alyona talked about as being kind of a breakthrough: getting past some of the bad tokenization and sentiment analysis we used to suffer from. They do that piece very well. They deconstruct and understand the language very well.

But their mission, really, is bigger: they weren’t built simply to find topics and sentiment like we were often focused on inside of CX questions, right? They’re built to also be conversant and look logical, and they can put together a response that’s sort of bigger than that. When you unpack this as somebody who’s trying to decide how to make natural language a part of your solution, you have to start to understand that, like with all things, there’s an emerging infrastructure layer that’s going to be provided, as you said, by the big three or four, right? It takes a lot of resources, a lot of compute, and access to a lot of data to train these transformer models on such big data sets.

And so what’s going to happen, I think, is we’ll begin to be able to relegate things we used to have to do down to the infrastructure stack, and focus as application providers on the value add that really connects to the business problems, right? It’s getting easier for us to connect an LLM to domain-specific knowledge with technologies like retrieval-augmented generation, to basically wrap context around the prompt that you’re sending to the engine and get back things that feel like it’s speaking your language. And those things will improve over time.

But I really don’t think these general purpose platform providers are going to solve the nuanced business applications that we’re trying to solve in the application layer. And that’s sort of where I see the future panning out. It’s really in the application layer.
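A bare-bones sketch of the retrieval-augmented pattern Brian describes: snippets of domain knowledge are pulled from a local source and wrapped around the question before it is sent to the model. The retrieval here is a naive keyword overlap purely for illustration, the knowledge snippets are invented, and call_llm again stands in for a real provider call; a production system would use embeddings and a vector store.

```python
# Bare-bones retrieval-augmented generation (RAG) sketch.
# Retrieval is a naive keyword overlap for illustration only; real systems use
# embeddings and a vector store. `call_llm` is a hypothetical provider call.

KNOWLEDGE_BASE = [
    "In our company, 'churn risk' means a health score below 40 for two quarters.",
    "The 'Atlas' product line was renamed 'Northstar' in 2022.",
    "Enterprise renewals are negotiated 90 days before contract end.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda s: len(q_words & set(s.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(question: str) -> str:
    """Wrap retrieved company context around the user's question."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question))
    return (f"Use the company context below to answer the question.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

def call_llm(prompt: str) -> str:
    """Hypothetical: forward the augmented prompt to your LLM of choice."""
    raise NotImplementedError

if __name__ == "__main__":
    print(build_rag_prompt("Which accounts count as churn risks?"))
```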

Maurice
I have a question relating to the way that these large language models perform. Alyona, you hinted, I think, at part of this. I’m not talking about the classic ambiguity thing, which has been largely resolved by LLMs. It’s like the old pair of sentences that I used to test NLP software: “The cat walked into the room. It was fluffy.” And then you try to determine what “it” is. Traditionally, in the old days, the software identified the room as being fluffy, because it associates the adjective with the prior noun. Now that has been resolved.

But I remember you talking to me a long time ago about the challenge of building industry specific taxonomies because the same words meant completely different things in different industries. How’s that been developing with LLMs and so on?

Alyona
Yeah, one of the first things I did was put a sentence into ChatGPT: “I liked watching GOT last night.” And “got” is usually a stop word that traditionally you just throw away to focus on all of the others. I asked, what are the themes in this sentence? And ChatGPT said, well, this person really likes Game of Thrones, GOT. So it’s incredible. LLMs know way more than humans, even.

You can put in some specialist sentence from a medical record and it will know what the acronym means, because all of the surrounding words indicate which meaning is used.
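To spell out the contrast Alyona draws: an older pipeline that lowercases text and strips stop words would silently delete “GOT” before any analysis starts, while an LLM sees the full sentence and can resolve the acronym from context. A small sketch, using an illustrative stop-word list rather than any particular library’s:

```python
# Illustrative only: a tiny stop-word filter of the kind older pipelines used.
# Lowercasing plus stop-word removal silently drops "GOT" (Game of Thrones),
# whereas an LLM receives the whole sentence and can resolve the acronym in context.

STOP_WORDS = {"i", "the", "a", "an", "and", "or", "got", "last", "night", "watching"}

def classic_preprocess(sentence: str) -> list[str]:
    """Lowercase, strip punctuation, and drop stop words."""
    tokens = sentence.lower().replace(".", "").split()
    return [t for t in tokens if t not in STOP_WORDS]

if __name__ == "__main__":
    sentence = "I liked watching GOT last night."
    print(classic_preprocess(sentence))
    # -> ['liked']   (the acronym is gone before any analysis happens)
    # An LLM prompt like "What are the themes in: 'I liked watching GOT last night.'"
    # keeps the surrounding words, so the model can infer Game of Thrones.
```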

Brian
I think when we look at the way we used to have to do this, for instance: I’ve been at this a long time, so I’ve seen various iterations along this path, and I do feel like this is a tipping-point moment in NLP. LLMs are a big leap forward. But I’ve seen leaps like this in the past. We used to have to construct all of our knowledge about how to deal with corporate directories and databases before the Lightweight Directory Access Protocol came along; then you had LDAP, and now you basically speak LDAP. What we’re now starting to see is that the application’s role is to speak prompt, right? So you’re really engineering prompts to core infrastructure layers that handle a lot of that heavy lifting. They’re not going to get GOT wrong anymore. You don’t have to worry about fixing that. What you have to worry about is…

Really, what’s the business question you’re trying to answer? What’s the user group or the audience you’re trying to inform and give insight to? How do I stitch together the prompt to extract the right things out of it? And I think one of the things that does feel like it’s still early days is getting that very specific language out of, let’s say, a corporation or an industry. That wouldn’t be common knowledge out of the box to an LLM, because that LLM has been trained on the internet, right? So you have to figure out how to augment that. But I have seen providers starting to build their own LLMs on top of the open source, right? Somebody I recently talked to at an AWS forum had spent about three years building their own Natural Language Processing to look at pharma databases and point out opportunities to take research in new directions based on lots of data they were already collecting. And they were able to outperform their own tool with the LLM they built on top of (in their case it was GPT-3) in less than nine months. So even if you’re going to build your own stuff, it’s going to give you a springboard, a fundamental place to depart from.

Richard
It feels like it’s kind of reset all the rules. I was thinking NLP is not a new concept. I think my first exposure to it was OEMing a product in 2003, from a company that got bought by Business Objects subsequently. I don’t even remember the name of the company now. In some ways, you could argue that Clarabridge was one of the granddaddies of it all, and Sid Banerjee started those guys in the early 2000s.

But I think the playing field is getting leveled now. In some ways, the investments that you might have made as a company five years ago or 10 years ago, how useful are they anymore? You’re really now able to leapfrog all that, as you just described, Brian. But we need better applications, don’t we? I read two pieces in preparation for this, both around applications of large language models and NLP.

And I was a bit disappointed, I have to say. One of them came back and said, well, you can analyze unstructured comments and surveys. Okay, we’ve talked about that. You can convert speech to text. Okay, I think we all understand that pretty well. You can analyze contact center conversations. I think that makes sense. And chatbots. And it seems like chatbots have become the absolute epicenter of the LLM universe right now. Everybody seems to be converging on building a better chatbot. Is that just me, or do I see better chatbots on our horizon as the biggest opportunity that we all have? That can’t be right, can it?

Alyona
The biggest opportunity.

Richard
Please, not just better chatbots. Do we even need chatbots? Somebody save us from better chatbots. What should we be thinking of when we think of the application of LLMs?

Alyona
Well, in the CX context, I think one that you haven’t mentioned that I’m really excited about is making access to all of this data much easier through a natural language query interface. I’ve seen so many solutions, and it doesn’t matter if they’re focused on unstructured data or structured. They basically open up data-driven decision-making to any user in the company, where before they had to construct a complex SQL query or know how to operate a very complex user interface. Now they can just ask a question, and they don’t even need to craft a prompt, because the providers, ourselves included, have built this layer that takes the question, connects it to the data that’s already been analyzed, and makes it accessible to anybody who can ask a question.
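A rough sketch of the layer Alyona describes: the user’s plain-English question is translated into a structured filter over feedback that has already been tagged with themes and sentiment, so nobody has to write SQL or learn a complex interface. The question_to_filter step stands in for a hypothetical LLM call, and the records and field names are invented for illustration.

```python
# Sketch of a natural language query layer over already-analyzed feedback.
# `question_to_filter` stands in for an LLM call that maps a question to a
# structured filter; the records and field names here are invented examples.
import json

ANALYZED_FEEDBACK = [
    {"theme": "late delivery", "sentiment": "negative", "text": "Order arrived a week late."},
    {"theme": "late delivery", "sentiment": "negative", "text": "Still waiting on my package."},
    {"theme": "pricing", "sentiment": "positive", "text": "Great value for the price."},
]

def question_to_filter(question: str) -> dict:
    """Hypothetical: ask an LLM to return a JSON filter such as
    {"theme": "late delivery", "sentiment": "negative"}."""
    raise NotImplementedError

def run_query(filter_spec: dict) -> list[dict]:
    """Apply a simple equality filter to the analyzed records."""
    return [r for r in ANALYZED_FEEDBACK
            if all(r.get(k) == v for k, v in filter_spec.items())]

if __name__ == "__main__":
    # Pretend the LLM translated "What are customers upset about with deliveries?"
    filter_spec = {"theme": "late delivery", "sentiment": "negative"}
    print(json.dumps(run_query(filter_spec), indent=2))
```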

Richard
Brian, I know you love that answer.

Brian
I think it’s a great answer. We’ve been fascinated with this since it was showing up in the early days of the BI stack, with people like ThoughtSpot who were doing NLQ kind of natively before LLMs and saying, listen, you don’t need to know SQL. You can just know how to do a Google search, and you can get there. I think it hasn’t panned out as much as I had hoped in terms of the use cases really showing up as the everyday use cases, but I think it’s got really high potential today. And so natural language query, and I think also natural language summarization of data. You know, if there’s a 90% clickstream in the data, just write me a summary and don’t make me click 17 times on a set of charts. I think that’s really high potential as well.

Alyona
Yeah.

Maurice
Right, but the challenge, or at least a challenge that I see, is that while we can get very clear summaries or themes about what’s going on, the biggest theme that comes out of some survey sources or call center analytics might be, let’s say, a lot of people complaining about late delivery.

The catch is that then you discuss that in the leadership team and the finger-pointing starts. Ah, yeah, it’s these inventory people who don’t know how to pick orders in the warehouse. You know, they’re showing things as available that are out of stock. Or the salesperson told them that it was going to be red, but it’s actually blue, so it’s the salesperson’s fault. And so on.

You’ve got great experience, Alyona, working with companies who have to deal with these themes that are substantial and clearly stated, but don’t necessarily correspond to the work of an individual team. What have you seen as effective ways for companies to get beyond the problem to where it comes from operationally, and not just by gut feel or intuition?

What successes have you seen with that sort of thing?

Alyona
Well, number one is there needs to be trust across all of these departments in how the data is analyzed, so that if there is an insight, it’s a valid insight. And what we’ve seen is that people actually take insights they found in Thematic, put them into a shared document, let’s say a Google Doc, and tag different people. And if there is an issue with late deliveries, we will know that it’s one of those three potential teams you’ve mentioned. Now it’s up to them to provide the data showing it wasn’t their fault: here is my stack of data that shows I’ve delivered on time, here’s my stack of data. So it needs to prompt people to find evidence. And I think it all comes from this culture of “we are running this business together, and if the company succeeds, everybody succeeds.” I think it comes from leadership to have this culture where the data and the insights are used as a helper tool rather than a finger-pointing tool, as you mentioned.

Richard
We’re sort of running up against time here. So I thought I’d finish by giving you what we like to call a low top-spin lob to the net for you to see us out with. Perhaps you could share with the audience what you think are the real choice factors behind an NLP solution. When people are looking at this, what should they be thinking about when they look at different vendors in the space?

Brian
Thanks for watching!

Alyona
I think they should just try it out.

People are very quick.

Richard
Well, that wasn’t the answer I expected, but I liked it. It’s a great answer.

Brian
I’m sorry.

Alyona
Yeah, I mean, the marketing websites sell everything as the latest and greatest AI, right? For any vendor. So I’m not recommending looking for certain keywords on the marketing website, or relying on what the salesperson tells you. Just put in your data and see what comes out. And this is how Maurice and I connected as well.

Richard
That’s great.

That is a great answer, and that’s a perfect place for us to leave it. What strikes me from our conversation is that we are at the absolute early days of all of this. I think that 10 years from now, the types of application and use for all this technology are probably going to blow us away, and they probably won’t be things we’ve thought of right now. They’re going to look completely different.

And so I just think we’re at the first step of a very exciting period for the application of these technologies. As I said, it feels like it’s coming of age because we finally have the ability to convert speech effectively into text. That’s halfway there. And now we have these built-in taxonomies we can use to make sense of it. It could be the start of a golden age, which I think we all hope for. Alyona, thank you so very much for joining us today from New Zealand. It was absolutely wonderful. We appreciate your contribution and hope we get to speak again soon.

Alyona
Thanks for having me.

ABOUT THE CX ICONOCLASTS

Alyona Medelyan, Ph.D. is the CEO and Co-Founder of Thematic, an AI-powered customer feedback solution backed by YCombinator. Originally from Ukraine, Alyona holds a PhD in Artificial Intelligence. Her academic work in this field has been cited by 3,500 researchers worldwide.

She started Thematic after realizing that companies struggle to understand feedback at scale. She designed a solution that uses AI in a way that builds trust: users see what the AI discovered in feedback and why, but can tailor the output. With just one million dollars in funding, Thematic was able to secure some of the world’s biggest brands as customers. It’s now used by LinkedIn, Atlassian and DoorDash, among many others. The impact of Thematic is measured in the billions of users who use these products every day around the world.

FOLLOW ALYONA

Richard Owen is celebrated as a leading figure in the Customer Experience industry, primarily known for his contribution as CEO at Satmetrix, where he and his team, along with Fred Reichheld, developed the Net Promoter Score methodology, now the globally dominant approach to customer experience measurement. His efforts further extended to co-authoring “Answering the ultimate question” with Dr. Laura Brooks, establishing netpromoter.com, and initiating both the NPS Certification program and a successful conference series. Owen’s diverse 30-year career has seen him drive technology-led business transformations at Dell, lead software companies like AvantGo to a Nasdaq listing, and Satmetrix to acquisition by NICE Systems, while also engaging in venture investment and board roles. Today, he spearheads OCX Cognition, leveraging machine learning for real-time NPS and customer health analytics.

FOLLOW RICHARD

Brian Curry is Co-founder and Head of Products at OCX Cognition, where he leads product development for the company’s customer analytics platform, which leverages machine learning for real-time NPS and customer health analytics.

FOLLOW BRIAN

Maurice Fitzgerald is our Editor-in-Chief providing deep insights based on almost 40 years of experience in business and customer experience strategy. Retired VP of Customer Experience for HP’s software division, his career includes HP, Compaq, DEC and Wrangler Jeans. Lead author of several books on CX strategy. Based in Switzerland.

FOLLOW MAURICE

Questions?
success@ocxcognition.com