There is a moment in every technology wave when someone stops celebrating the progress and starts asking the harder question.
For Adam Bittlingmayer, that moment came while he was working at Google Translate.
Adam was one of three engineers on a team that was, quite literally, changing how billions of people access information. The Google Translate integration with Chrome and Android meant that for the first time, someone who spoke only Kazakh or Tamil or Swahili could land on a webpage in English and get something. Not perfect. Not always reliable. But something.
And that something mattered enormously.
But Adam kept noticing the gap. Machine translation was getting better, faster, more accessible. Yet nobody had solved the fundamental problem: how do you know when to trust it?
That question led him to found ModelFront, a platform that predicts translation quality before a human ever has to review it. It tells you which AI translations are ready to publish and which ones will embarrass you.
In Episode 190 of the Localization Fireside Chat, Adam and I sat down to talk about what six years at Google Translate actually taught him, why the post-editing bottleneck is still broken, and why the real crisis facing LSPs today has very little to do with AI.
One of the most striking things Adam said was about the one billion versus seven billion problem. When Google Translate launched, there were roughly one billion internet users, and most of them spoke English. Everything built in that era was built for them. The remaining seven billion people who don’t speak English were an afterthought, and in many ways they still are.
As Adam put it, nobody tells their kids not to bother learning English because AI will solve it. Not in 2026. Not in 20 years. The gap between what machine translation promises and what it actually delivers for high-value, nuanced content is still very real.
What makes ModelFront different is that it doesn’t try to replace the translator. It tries to make the workflow smarter by identifying which translations need human attention and which ones don’t. That distinction sounds simple, but the implications for any organization managing large volumes of multilingual content are enormous.
We also got into something that doesn’t get talked about enough in this industry. The existential pressure facing language service providers right now is not primarily caused by AI. It’s caused by competition. The barrier to providing translation has dropped so dramatically that LSPs are now competing with tens of millions of qualified translators around the world who are online, accessible, and willing to work at a fraction of the cost. AI is accelerating that pressure, but it didn’t create it.
Adam’s final advice was simple and clear: independence and control. If you are not in control of your own translation workflow, no vendor is going to make it more efficient for you. They have no incentive to.
This was one of those conversations that reminded me why I started this podcast. Adam thinks in systems, speaks eight languages, and has seen this industry from the engineering floor at Google to the founder’s seat at a company trying to fix what machine translation still gets wrong.
I think you’ll enjoy this one.
👉 Watch on YouTube: https://youtu.be/kYlF8ZN0gOQ
👉 Listen on Simplecast: https://localization-fireside-chat.simplecast.com/episodes/your-ai-translation-is-wrong-and-this-ceo-built-a-tool-to-prove-it-adam-bittlingmayer
👉 Connect with Adam on LinkedIn: https://www.linkedin.com/in/adambittlingmayer/
👉 Learn more about ModelFront: https://modelfront.com
👉 Machine Translation resources: https://machinetranslate.org
👉 LFC on LinkedIn: https://www.linkedin.com/company/localization-fireside-chat/
👉 Subscribe to the podcast: https://localization-fireside-chat.simplecast.com/
👉 Robin Ayoub blog: https://robinayoub.blog
👉 N49Networks: https://www.n49networks.com