Health and Tech Summit 2025

Dec 16, 2025

This is the transcript of the keynote speech delivered by Benjamin Crockett, Director of Content at Longbeard, at the Gala Dinner of the Health & Tech Summit 2025, held on Tuesday, December 16, 2025, at the Cercle des Nageurs in Marseille.

Keynote Speech – Health and Tech Summit 2025 Gala Dinner

Bonsoir, bonsoir à toutes et à tous.

I come to you from Rome, where I am part of the Rome operations of Longbeard—a company digitizing the patrimony of the Catholic Church and building what we call “Catholic AI”. Last month, in collaboration with the Pontifical Gregorian University, we organized the Builders AI Forum, an international initiative that brought together tech leaders, investors, academics, and Church leaders to try to understand what the Church’s mission should be in the age of AI. Pope Leo XIV addressed the forum, and his words have shaped much of what I want to share with you tonight.

I’d like to invite three companions to walk with us through these next few minutes. Each of them carries a theme that matters for healthcare and AI:

  • Saint Luke the Evangelist
  • Saint Francis of Assisi
  • Saint Carlo Acutis

If you forget everything else I say tonight, remember those three names. They’re a surprisingly good compass for this moment.

Why Is the Church Talking About AI?

Well, at the Builders AI Forum, Pope Leo XIV sent a letter to the participants. In it, he wrote: “Technological innovation can be a form of participation in the divine act of creation.” That sentence stayed with me.

“Technological innovation can be a form of participation in the divine act of creation.”

That sentence stayed with me because it reframes everything. Technology is not merely neutral infrastructure. It’s not just optimization. It’s not simply efficiency or scale. It is participation. And participation implies responsibility.

AI is arguably the most powerful technology humanity has ever created. It has the capacity for extraordinary good—especially in healthcare—and it also has the capacity for extraordinary harm. So when the stakes are that high, moral voices matter. And here’s where our first companion, Saint Luke, quietly enters the room. Luke reminds us that medicine is not only science—it is also a story. A person is never just a case. A patient is never just data. The first ethical test of AI in healthcare is simple:

Does it help us see the person more clearly—or does it make the person disappear?

What Makes Catholic AI Different

At the Builders AI Forum, we were entrusted with building “Catholic AI”. But what do we mean by Catholic AI? To understand that, it helps to demystify what AI is. Building a large language model isn’t magic—it’s a recipe with three ingredients:

  • Compute — the raw horsepower
  • Architecture — the design of the system
  • Data — the key ingredient: what the model is fed

An AI system is only as good as the diet it is fed.

Today’s dominant models have been trained on vast swaths of the internet: the profound and the profane—Shakespeare and Scripture, yes, but also misinformation, cruelty, and confusion. When you ask them moral questions, they don’t give you “Truth.” They give you the statistical average of what the internet says. The consensus of the crowd.

We realized early on that if we wanted an AI capable of helping people think seriously about human dignity, we couldn’t just place a “Catholic skin” on top of a secular brain. We had to change the diet. At Longbeard, we are focused on scanning, digitizing, and curating the intellectual patrimony of the Church—2,000 years of reflection on suffering, care, dignity, embodiment, and meaning. And for a healthcare audience, that matters—because these aren’t abstract ideas. They are the daily realities of your work.

Digitizing Memory for the Digital Future

This realization led us to build the Alexandria Digitization Hub in Rome, in collaboration with the Pontifical Gregorian University, and piloted with the Pontifical Oriental Institute—home to the world’s largest collection of manuscripts from the Eastern Churches, and often the first written record of Europe’s encounters with cultures ranging from India to Sub-Saharan Africa and the Islamic world. Just yesterday, we finished scanning the works of St. Bernard of Clairvaux—un saint français, advisor to popes, master of spiritual theology, and the guide who leads Dante in his final ascent in the Paradiso of the Divine Comedy.

In the Hub, robotic scanners gently turn the pages of rare manuscripts, converting centuries-old wisdom into digital text. We process these scans through Vulgate AI, transforming images into searchable, semantically rich data. In simple terms, we are making the Church’s memory legible to the digital age, and reintroducing voices that have been silent—sometimes for hundreds of years—back into living research and conversation. Because if wisdom is not visible to the digital eye, it risks being forgotten.

And here, our second companion—Saint Francis—offers a warning and an invitation. Francis reminds us that progress that leaves the poor behind is not progress. It is simply acceleration for the already powerful. So as we digitize and build, we have to keep asking: Who benefits? Who is protected? Who is left behind?

That’s not a side question. That’s central.

Magisterium AI

The first fruit of this work is Magisterium AI. I sometimes describe it as a digital librarian. Unlike a standard chatbot, Magisterium AI does not roam the open internet. It consults a curated database of over 29,000 magisterial and theological documents—and crucially, it cites its sources. That matters because it anchors people back in primary texts, so they can verify and reflect. It doesn’t replace human judgment; it supports it.

Today, Magisterium AI is used in over 165 countries, in more than 50 languages. We also offer infrastructure—an API—so other organizations can build applications on top of our engine. One example is the Hallow App, which uses Magisterium AI to power its chat feature. We’re trying to ensure that the digital ecosystem does not drift—quietly, unintentionally—away from the human person.

From Tools to Agency

But research tools are only the beginning. The next phase of AI is not just chatbots; it is agents—systems that plan, recommend, and act. So the real questions become:

  • Who owns them?
  • Who governs them?
  • And whose values do they serve?

A world where intelligence is concentrated in a handful of distant servers is not just a technical future—it’s a cultural one. It concentrates power. It shapes imagination. It tempts dependence. That is why we are building toward Sovereign AI, grounded in a Catholic principle: subsidiarity—decisions should be made as close as possible to the person, the family, and the community.

Our project Ephrem is a Catholic-aligned small language model designed to run locally, serving people without requiring them to surrender their interior lives to the cloud. It’s not about fear. It’s about agency. And this is where Saint Carlo Acutis becomes surprisingly relevant. Carlo was a teenager who loved computers—and used the internet to spread the Gospel. He didn’t treat technology as the enemy. He treated it as a mission field. Carlo’s presence in this talk is a reminder to all of us: the right response to powerful technology is not panic, and it’s not naive enthusiasm. It’s formation—so that we can use tools without becoming used by them.

From Pope Francis to Pope Leo

Both Pope Francis and Pope Leo XIV have been remarkably prophetic in their engagement with artificial intelligence. Pope Francis consistently emphasized human dignity—warning against systems that reduce people to data points or economic units, and insisting that technological power must always remain under meaningful human responsibility.

Pope Leo has continued—and intensified—this vision, calling builders of AI to cultivate moral discernment, and to develop systems that reflect justice, solidarity, and reverence for life. This matters deeply as automation accelerates and more people find themselves with increasing amounts of unstructured time. The danger is not simply economic displacement, but what I would call an existential cliff—a drift into distraction, isolation, and substitutes for meaning. The Church’s concern is not that people will work less. It is that they might forget what life is for. With proper formation, that time can become an invitation: to cultivate virtue and wisdom, to pursue artistic and intellectual life, to deepen relationships, family, and community. Because you are not your job—and your value is not measured solely by productivity or contribution to GDP.

And fortunately, this is not the Church’s first encounter with technological upheaval.

In many ways, our moment mirrors the end of the nineteenth century. Pope Leo XIV deliberately chose his name in reference to Pope Leo XIII, who guided the Church through the Industrial Revolution. Then, as now, new technologies disrupted labor, social structures, and human identity.

Contrary to popular opinion, the Church has a long history of engaging technology when it serves human dignity—from universities and hospitals, to the printing press, modern communications, and now artificial intelligence. The Church is not afraid of technology. It is afraid of forgetting the human person. And in healthcare, that distinction matters profoundly.

Clarity Creates Agency

I’d like to close with a simple idea: Clarity creates agency. We are living in a moment of extraordinary speed and uncertainty. And in moments like this, what people need most is not predictions—but clarity about where we are going.

Clarity about the kind of world we are building. Clarity about the kind of healthcare we want to practice. Clarity about the relationship we want between human judgment and intelligent machines. Because the future is not inevitable. And it is certainly not finished. As Pope Francis once said, in an address to young people: “It is not enough to take steps which may eventually lead somewhere; each step must be oriented toward a goal.” So the real question beneath every conversation about AI and healthcare is not whether technology will advance—it will.

The question is whether we are clear about what human flourishing actually looks like as technology advances.

A world where technology restores time for care rather than replaces it. Where efficiency serves compassion, not the other way around. Where people are valued not because they are productive, but because they are human.

If we can hold that clarity together, then we are not passengers in this transformation—we are participants. And when we know what we are building toward, we reclaim agency. When we reclaim agency, the future becomes something we shape—rather than something that simply happens to us.

A Final Invitation

So tonight, as we celebrate innovation in healthcare and technology, I invite you to carry one question with you:

What does human flourishing look like—and how can AI help us get there without losing ourselves along the way?

If we take that question seriously, then innovation becomes—as Pope Leo said—participation in the divine act of creation.

Thank you very much. Merci beaucoup.
