Cover Story

Alum in AI: Lilly Chen ’19

Megan Clancy ’07

Lilly Chen ’19, CEO and Founder of Contenda

While artificial intelligence isn’t exactly a new technology, its current iteration is certainly a topic of great interest to many. Most people have opinions on its current applications, and many have predictions about where the technology is taking us. But Lilly Chen ’19 isn’t just talking about AI or typing questions into ChatGPT. She is a CC alum working directly in the field. Chen is the CEO and Founder of Contenda, a technical content marketing startup and self-described “human-powered AI company.”

“Contenda is straddling the line between a bespoke consulting company and a software company,” says Chen. “Companies have lots of problems that they want to use AI to fix, or they want to use AI to scale some solutions, except they don’t know what the trade-offs are with AI. They’re not experts in the field. They don’t have the resources, or the bandwidth perhaps, to do it themselves. So, they bring us in on a retainer, and we build the tools for them and educate their team on how to scale it.”

Now in its third year, Contenda is already making its mark in the AI field. Chen credits her time at CC with fostering the drive and determination that have led her to succeed in the start-up world.

“I did a lot of things that I cared about in college,” she says. “A really big part of my experience was the Esports program. It was something I helped found in my freshman year, and now they’ve won multiple titles.”

Her passion for Esports and the development of the program at CC did not, however, come without trade-offs. “I spent a lot of time doing that. If you’re going to spend forty-plus hours a week on something, you’re just not going to study as much. That was a trade-off that I think I very naturally made at CC. For better or for worse, I just went ahead and did it. And that’s part of what being an entrepreneur is like. It’s just making trade-offs. All of my CC professors were very understanding and respectful of my choices, too.”

“For better or for worse, I just went ahead and did it. And that’s part of what being an entrepreneur is like. It’s just making trade-offs.”

Lilly Chen ’19

And it seems she made the right choices, as Chen’s company continues to grow and set new standards in the AI sector.

When asked about how cautiously people are responding to the possibilities of AI, Chen acknowledges that some of those fears are credible. “I think there’s a lot of fear about AI. Personally, I would like to see less AI automation and more AI interfaces. A lot of people are thinking about using AI to do things that humans do, and I don’t necessarily think that’s where we want to be long-term. Obviously, some short-term things would be great, if some robot would just do it for me. But I think that if you actually play that out and you go down the rabbit hole – is that actually a future that I want to live in? I don’t know about that.”

Chen believes the ideal lies less in AI automation and more in AI interface systems. “For example, I don’t want an AI to be taking my phone call with you. I don’t want you to schedule a meeting with my AI, a Lilly bot, where you interview her and ask her the questions and she responds. That’s not really what I want to see. But what I think is helpful is, let’s say, for example, you’re going to take the recording of this conversation and you’re going to have that file. Maybe you run that file through an AI and ask it to name some follow-up questions: what were some interesting things that you think you should ask more about? And then the AI responds with a list of questions that you look through, and maybe you send me an e-mail with a couple of them. Right? That’s an AI interface. We’re using AI to enable and deepen our conversation and our connection, but we’re not using it as a replacement. Unfortunately, the majority of industry leaders would like to see it go toward automation.”

When asked what area of AI intrigues her most, she is quick to bring up anti-hallucination. “We talk about how image models are more likely to be biased. Bias is to those image AIs as hallucination is to the language model (LM). The main issue with LMs is the fact that they make stuff up. They string together words that aren’t true, but they sound plausible. That’s hallucination.”

Back to that previous example with the Lilly bot:

“As a strategy, my company addresses that as an ethos situation, where we don’t rely on AI to be a pure generation platform. If all generative AI LMs are always going to hallucinate, maybe we shouldn’t be using them.”

Lilly Chen ’19

“Part of the reason why I really don’t like AI as a generation tool is because, let’s take that example again where I trained a Lilly bot that you’re interviewing. This bot is trained on my data, sure. But it could say anything, right? Let’s say you asked my bot a pretty off-the-cuff question. Maybe you read somewhere that I had traveled, and so you ask, ‘How are you enjoying London?’ I used to live in London. Maybe the bot says, ‘Yeah, the weather’s great. You know, really loving living in London. I go to this farmers market all the time.’ That was actually true. I did. But the thing is, it’s not anymore. I don’t live in London now. Based on the data that I gave it, the bot could hallucinate this answer. Something that’s plausibly true but undeniably false.”

It’s easy to imagine more serious situations where this could go terribly wrong.

“Hallucinations are a really big problem,” says Chen. “In that example, it’s pretty mundane. But it could easily be a major detail about the latest updates in AI, right? Something completely factually incorrect about how something works. There are a lot of reasons why hallucination happens. And there are types of hallucinations that get really, really bad. So, part of what my company does, and part of the reason why I keep saying that I don’t really think generative AI is the answer, is because of hallucinations. It’s one of those things where you can always marginally improve on it continuously. But if you want to get to 100%, you’re talking about really, really pushing the limits of resources, of time, and all those other things. So, as a strategy, my company addresses that as an ethos situation, where we don’t rely on AI to be a pure generation platform. If all generative AI LMs are always going to hallucinate, maybe we shouldn’t be using them.”

Chen’s work toward ethical AI use extends beyond her company as well; she serves on the board of the AI Infrastructure Alliance. “Part of our role is to educate the industry on what the pros and cons of AI look like and what the ethics should be, and to reach out to companies, actively talking to those major stakeholders and holding people accountable to what they say they’re going to do,” she says.

When asked what the future of AI looks like, Chen pushes back against visions of a purely AI-automated world. “I really find it hard to believe that a future where we have pure generation for everything will be one that’s actually truthful, factual, and one that fosters human connection,” she says.

For CC alumni and current students who are interested in learning more about AI, Chen encourages everyone to try it. “I think that there’s nothing like first-hand experience,” she says. “In terms of seeing the pros and cons of things, I mean, you can listen to me talk all day long, but I think until you actually experience it, it’s a very different understanding. Try out a variety of things. And really try to push on what you think an AI can do, and then evaluate. Is it good? Is it bad? Why is it good? Why is it bad?”

Lastly, to anybody out there working on AI who may have other thoughts: Chen says she would love for you to get in contact with her.
