What Does The Future of Localization Tools Hold?

Will The Standard TMS Become Obsolete?

As AI, automation, and multilingual data orchestration redefine the way businesses handle global content, the emergence of LangOps platforms promises to transform traditional TMS workflows. But what exactly would a fully integrated LangOps platform look like?

To explore this question, I recently moderated a LocDiscussion roundtable with four leading experts in the field: Pascale Tremblay, Jochen Hummel, Manuel Herranz, and Yvan Hennecart. These industry veterans have been at the forefront of localization technology, AI-driven content solutions, and multilingual data management, making them the perfect voices to weigh in on the evolution of LangOps.

The conversation was inspired by a comment from João Graça of Unbabel, who, during the launch event of the LangOps Institute, expressed his eagerness to see the first truly integrated LangOps platform come to life. That sentiment resonated: many in the industry are working on similar solutions, but what would it take to build a comprehensive, scalable, and future-proof LangOps ecosystem?

In this roundtable, we tackled the key components of LangOps platforms, their impact on language professionals, and the business and technology shifts that will define the next phase of multilingual content operations. Each panelist brought a unique perspective, from client-side localization leadership to AI-powered language solutions, making for a rich and insightful discussion.


Here’s what they had to say about the future of LangOps, and why it’s no longer just about translation, but about unlocking the full potential of multilingual AI and content management.

Defining What Makes a True LangOps Platform

Stefan Huyghe: Great! Let’s get started. What would a first true LangOps platform look like? Who wants to take a stab at defining its key features and how it could serve as a one-stop shop for multilingual content and data management?


Jochen Hummel, CEO ESTeam and Coreon

Jochen Hummel: I’ll take that. I think a LangOps platform stands on three fundamental pillars: knowledge, meaning structured multilingual knowledge such as knowledge graphs; data, meaning a large repository of high-quality, bilingually aligned content; and Large Language Models (LLMs), the AI models that power various intelligent automation processes.

Beyond these, you also need an application layer, something that orchestrates these elements, provides an API, and exposes this wealth of multilingual knowledge to the outside world in a structured, accessible way.

Stefan Huyghe: Sounds like you’ve had some hands-on experience putting something like this together. Am I right in guessing that you might be working on a LangOps platform that fits this description?

Jochen Hummel: Yes, actually, we are. We’ve been promoting the concept of a Language Factory for quite some time. The idea is to automate content creation and translation while remaining agnostic about the engines used to power the process.

Our Language Factory already integrates many of these components. It leverages knowledge, data, LLMs, neural machine translation, and NLP technologies to produce multilingual content. However, LangOps goes beyond translation.

The real goal of a LangOps platform is to make multilingual data accessible for broader enterprise applications, not just for translation. This includes use cases like search, classification, chatbots, and other AI-driven solutions. The platform ensures that multilingual content remains synchronized and is easily accessible for various industries and use cases.

Stefan Huyghe: That’s fascinating! Manuel, Pangeanic has been working on AI-driven projects for years. I’ve been loosely following your progress, and I assume you’re also developing a LangOps solution. Does Jochen’s vision align with what you’re building in Spain?


Manuel Herranz, CEO Pangeanic

Manuel Herranz: Yes, absolutely. We’ve had a solid platform for quite some time. We’re now on version two, and version three (V3) is set to be released this year. And no, before you ask, it has nothing to do with DeepSeek V3; it’s just a coincidence!

With this new version, we’re integrating several new features, many of which align with Jochen’s vision. What’s really interesting is that Jochen didn’t even use the word ‘translation’ until your second question. And that’s because LangOps isn’t just about translation.

If anyone is thinking, “Oh, this is just about making translation faster,” they’d be missing the bigger picture. LangOps is about managing content holistically.

I completely agree with Jochen: LangOps should cover the full cycle, from content creation to deployment. Of course, every client has different needs: some will focus more on translation, while others will prioritize knowledge management, data creation, or content optimization. The key is to offer a comprehensive, flexible approach.

Our origins are in Natural Language Processing (NLP), though, nowadays, we just call it AI since NLP stopped being "sexy" about two years ago. If you’re not branding your work as AI, it’s almost like you don’t exist!

Jokes aside, AI’s relevance is only increasing. Language is knowledge, and a significant portion of what we consider "knowledge" is based on language. Sure, there’s speech, audiovisual material, and other data formats, but language remains the key component.

The ability to efficiently manage this linguistic knowledge, whether through knowledge graphs, glossaries, or structured data, is what sets LangOps apart from traditional localization. Different systems will emphasize different capabilities: some will focus on translation, others on knowledge management, and others on content creation.

At the end of the day, LangOps is redefining the role of all language professionals: not just translators, but everyone involved in multilingual content management.


What Businesses Need From the Next-Gen Platform: A Client’s Perspective

Stefan Huyghe: It’s fascinating to hear both of you speak about LangOps from the provider perspective. Of course, Pascale brings a wealth of real-world client-side experience from her time at Gap and has been deeply involved in LangOps initiatives there. Pascale, from a client’s perspective, what do you think are the key features a LangOps platform must have?


Pascale Tremblay, Chief Growth Officer at LangOps Institute

Pascale Tremblay: It’s really exciting to hear Jochen and Manuel describe the future of LangOps platforms because this is exactly where the industry is headed.

From a business standpoint, there are some critical enterprise features that any successful LangOps platform must have. First, interoperability is essential. Enterprises today are incredibly complex, with multiple systems and workflows all interacting with language in different ways. A LangOps platform needs to function as an agnostic language framework, integrating seamlessly into existing ecosystems. Instead of focusing on linear translation workflows, we should shift toward universal language frameworks that support content creation at scale.

Second, modular architecture is key. Enterprises can’t afford large-scale disruptive changes; they need incremental improvements that integrate smoothly with their existing workflows. A modular approach allows businesses to adopt LangOps step by step, testing new technologies while ensuring business continuity.

Another crucial element is resource optimization. One of the biggest advantages of a LangOps framework is that it allows for highly customized workflows, breaking down content and language into granular components. This leads to greater efficiency between clients and vendors, optimizing both time and resources. The real question then becomes: how can we optimize human input and machine automation even further?

Finally, real-time monitoring and analytics are becoming increasingly important. As content gets more fragmented and deployed across multiple channels, keeping track of everything in real time is a growing challenge. When I was at Gap, our team was managing 50 to 100 localization projects per day. Some were long-term initiatives, while others required immediate turnaround. Even with dashboards and automation, keeping everything running smoothly was incredibly demanding. A LangOps platform should provide real-time monitoring and intelligent automation to ensure that projects stay on track without overwhelming localization teams.

From a client perspective, a platform that combines interoperability, modularity, resource efficiency, and real-time tracking would be the perfect complement to what Jochen and Manuel are building.

I think in the near future, especially for those listening today, we’re going to see a shift toward content creation and a reframing of how we approach multilingual content. But it’s important to recognize that different industries will move at different speeds.

We’re going to see a lot of diversification in how LangOps is adopted. Translation isn’t disappearing overnight, far from it. Instead, we’ll experience a transition period where some industries, like tech, will have minimal translation needs, while others, like education or government, will continue to rely heavily on translation for at least the next few years.

The real challenge then becomes: how do we bring it all together? This is where a LangOps framework becomes critical: it structures and supports this transition, ensuring seamless interoperability between workflows, platforms, and technologies.


No More One-Size-Fits-All Localization: How LangOps Unlocks Customization at Scale

Stefan Huyghe: The keyword I’m picking up from everything you just said is flexibility, and that ties in perfectly with my next question.

Yvan, you work with global clients across Asia, the Middle East, and beyond. I assume that regional requirements vary quite a bit. What kinds of integrations do you think LangOps platforms need to provide to be truly effective for global teams working across different languages, cultures, and regions?

Yvan Hennecart

Yvan Hennecart: I completely agree: flexibility is key.

One of the biggest changes we’ve seen over the past few years is that clients, regardless of region, no longer want a black-box approach, where they just hand over content and expect a polished outcome without insight into the process.

Today, clients want to be involved. They want to understand how solutions work, see what’s happening under the hood, and have the ability to customize processes to fit their needs.

Picking up on what Pascale mentioned, interoperability is absolutely essential. A LangOps platform must be deeply integrated so clients can build workflows that match their specific business needs rather than being forced into rigid systems.

When we talk to customers today, we see a clear shift away from one-size-fits-all solutions. They want to mix and match components, bringing in their own tools to create a best-of-breed solution that fits their unique use cases.

For LangOps platforms to be truly effective, they need to offer high-level integration capabilities, including APIs, connectors, and seamless enterprise system integrations. They also need to be modular, allowing clients to add, remove, or customize functionalities to meet specific business requirements. Instead of forcing clients into rigid systems that require constant manual interventions, these platforms should be designed to adapt dynamically as needs evolve.

Historically, TMS platforms dominated for over 25 years because computational power and scalability were limited. But today, we finally have the opportunity to build something better, a truly flexible system that can handle the entire content lifecycle without being locked into legacy constraints.

Stefan Huyghe: That makes sense. Yvan, you mentioned resource optimization, and Pascale, you also touched on this earlier.

Yvan Hennecart: Yes, and one of the biggest challenges we’ve always faced, whether with TMS platforms or other language solutions, is that many of these technologies are incredibly complex.

One of the biggest issues, particularly when I worked with SDL, was training. Many of these systems had tons of features, but only a small percentage of users actually knew how to use them effectively.

So, training and continuous learning will be critical for LangOps adoption. This is where initiatives like the LangOps Institute can play a huge role in helping the industry adapt to new technologies, workflows, and AI-driven tools.

Ultimately, what we need is transparency, so clients and teams clearly understand how content is processed and optimized. Collaboration and fair compensation are also key; we need to rethink how value is distributed across the localization supply chain. Finally, open access and open-source innovation are becoming increasingly important as the industry shifts toward more open-source platforms, allowing for better integrations and more customizable solutions.

This is a major shift, moving away from rigid TMS models and toward a more holistic approach that covers the entire content lifecycle without being locked into outdated frameworks.

Stefan Huyghe: One of the biggest changes we’re seeing is that technology is becoming more accessible. There’s a democratization of access, meaning that software companies will have to adapt to this shift. A major trend is the move toward low-code integrations, where users don’t need deep technical expertise to connect systems. Instead, they can configure workflows with basic knowledge, making localization solutions more flexible and user-friendly.

That leads me to my next question for our software providers, Manuel and Jochen. How do you think the next-generation LangOps platforms will change the way localization teams operate and collaborate with other functions?

Manuel Herranz: That’s a great question. The first and most obvious shift is that everything is moving to the cloud, if it hasn’t already. There’s very little room left for on-premise solutions, except maybe for specific databases that require local storage for security reasons.

I also fully agree with Pascale’s point about modularity. As providers, we are developing platforms that can process language in different ways, whether it’s transcription, translation, or summarization. These operations must be modular, allowing clients to pick and choose the services they need. They also need to be interoperable so that different systems can work together seamlessly across various technologies. And they have to be technology-agnostic, giving organizations the flexibility to plug in different AI models and tools based on their needs.

We’re all used to working with LLMs, and businesses are beginning to expect AI-driven solutions that can write emails, generate summaries, or analyze content. But needs evolve. Something a company doesn’t require today might become critical next year. For example, transcription might not be a priority for Pascale’s team right now, but if a business starts recording and analyzing multilingual meetings, transcription could become a valuable tool for knowledge management. This is why flexible LangOps platforms are crucial: they allow organizations to add functionalities as their needs evolve.

A simple real-world example: this morning, as we were switching to a new customer support platform, we realized that 20–30% of our responses to clients and internal teams could be repurposed into an AI-driven knowledge base. This kind of adaptive, data-driven approach is exactly what LangOps platforms need to support.

Ultimately, clients shouldn’t have to buy the entire platform. They should be able to consume only the services they need and integrate them into their existing workflows.

Stefan Huyghe: That’s a great point. Jochen, I’ve heard you talk many times about how data is shifting away from being stored at the provider level in translation memory files and moving toward a centralized client-controlled system. Do you see this as a fundamental shift in how language service providers will operate? And what do you think will change the most in the way we serve our clients moving forward?

Jochen Hummel: Absolutely. As I mentioned earlier, a LangOps platform must be built on three pillars: structured and interconnected multilingual knowledge, high-quality domain-specific language data, and large language models that enhance language operations.

Historically, localization has been hyper-focused on translation workflows, on recycling content and using translation memories to reduce costs. And yes, while reusing content is still valuable, we need to expand our perspective. The reality is that the data we create in localization is incredibly valuable beyond just translation. And that’s a massive opportunity.

Instead of asking, “What can AI do for localization?”, which is the question our industry keeps debating, we should be asking, “What can localization do for AI?”

We’re sitting on a goldmine of high-quality, structured multilingual data. This data is essential for text-based AI applications across industries. The real potential of LangOps isn’t just about making localization cheaper or faster; it’s about leveraging our data to power the next generation of AI applications.

For instance, the same multilingual data used for translation could be repurposed to improve AI-driven search capabilities in multiple languages, enable better text classification across global markets, and train chatbots and virtual assistants to operate in multiple languages.

This is why LangOps isn’t just another term for localization. Localization has traditionally been about cost-cutting and efficiency. LangOps, on the other hand, is about strategic value, moving from an outsourced service to a core function that helps companies scale AI-driven multilingual experiences.

If we embrace this shift, the localization industry can position itself at the heart of AI innovation, rather than being just another cost center to optimize.

Stefan Huyghe: That’s quite a stretch from traditional localization workflows. Pascale, from your practical experience, if you were managing a LangOps department in this new economy on the client side, what key features or applications would you want that have traditionally been difficult to implement?

I’m thinking about chatbots and other emerging technologies. What are the things you’d be most interested in exploring in this evolving tech landscape?

Pascale Tremblay: Listening to Yvan and Manuel, it’s clear that future LangOps platforms will be highly adaptable and flexible. The fact that they are agnostic opens up so many possibilities; it’s truly about enablement.

Looking at what has already happened on the client side over the last few years, we’ve seen a shift from localization to global enablement. There’s a growing recognition that our role isn’t just about translation; we’re enabling content to flow seamlessly across systems to support business objectives.

The direction of these platforms is great, but beyond the technology, we also need to rethink the business model, particularly the value that LSPs bring to clients. The key question is: how do LSPs evolve and integrate with these new platforms, and how do clients build their own language frameworks to work with them?

For example, when I built the localization program at Gap, I didn’t call it LangOps, but in essence, it was built on LangOps principles. The range of use cases was so diverse that I had no choice but to create a framework that could encompass everything. This brings us to a critical mindset shift: companies need to invest time in designing blueprints for their data and content frameworks.

Over the past 30 years, we’ve become experts at breaking down content into fragments. But now, as we enter a more advanced digital ecosystem, we need to properly structure and organize content and data. A well-structured framework allows companies to move from traditional translation workflows to content creation in any language. Without a structured blueprint, this transition will be painful and inefficient.

The challenge is that this approach is counterintuitive in a corporate environment. Most corporations operate with a mindset based on quarterly goals, prioritizing short-term business objectives. However, LangOps programs require a long-term strategy. They need to be built with incremental progress in mind.

Another major shift we need to think about is how global teams operate. Over the last decade, there has been a significant change in workforce structure within corporations. Localization teams used to have many internal employees, but today, the landscape has changed dramatically. We need to rethink how we operate, not just cross-functionally, but also cross-organizationally. How do we partner with LSPs in a more strategic way? How do we work closely with technology providers?

Instead of seeing vendors and technology providers as separate entities, we should foster closer collaboration, integrating them seamlessly into our workflows through interoperability. Yes, building a robust LangOps framework will require serious effort up front, but it’s a layered approach: one step at a time. The good news? Once this framework is in place, it will stand the test of time because it’s flexible, modular, and interoperable.

Stefan Huyghe: From what I’m hearing in this discussion, the kind of solutions we need to provide are going to operate on a much larger scale than what we’ve traditionally been used to in the localization industry.

Yvan, what do you think are the biggest challenges in building and scaling a LangOps platform? And how can we ensure that it meets the needs of both language service providers and enterprise-level companies?

Yvan Hennecart: Well, first, I think it’s crucial to build a platform that is flexible, modular, and open. It must adapt to different workflows and business needs. Companies should be able to pick and choose the services they need without being locked into a rigid structure. And it should be highly integrated with external tools and systems to create a seamless user experience.

One of the main areas that hasn’t received enough attention in the past is integration. Historically, translation management systems (TMSs) focused on translation, but LangOps platforms need to handle so much more: content distribution, multilingual content delivery, transcription, data labeling, knowledge management, and more.

As Manuel pointed out earlier, there will be times when an organization needs transcription, but at another moment, they might need data management or translation. The platform must be able to handle all these needs dynamically.

Another critical point is transparency. The days of the black-box approach, where clients handed over content without insight into the process, are over. Today, customers expect full visibility and control over how their multilingual content is managed.

And, of course, cloud-based scalability is essential. Everything needs to be in the cloud to ensure seamless global collaboration and scalability. But beyond technology, I think scaling resources is just as important as scaling the platform itself. Too often, we ask linguists to perform tasks that are not strictly linguistic, such as recording, data labeling, and annotation. While these tasks are important within the content lifecycle, we need to rethink workforce structures so that the right expertise is applied in the right areas.

A truly open platform must clarify roles and responsibilities. Who is responsible for what? How do different contributors engage with the platform? What opportunities exist for professionals in this evolving LangOps ecosystem? If we get this right, LangOps platforms won’t just be tools; they’ll be ecosystems that empower professionals to grow, evolve, and develop career paths in this new landscape.

Stefan Huyghe: That’s a really interesting point, and it leads perfectly into a discussion about the evolving role of linguists in this new economy. Pascale, Manuel, do you want to share your thoughts on how the role of language professionals is changing?

Pascale Tremblay: I’d love to add something to Yvan’s point, and maybe also take the opportunity for a shameless plug for The LangOps Institute!

Yes, the role of linguists is going to become much more specialized and better defined. But at the same time, localization leaders, program managers, and anyone managing a LangOps program need to go beyond just language operations; they need to fully understand their organization’s content mapping and all the processes involved in content production.

Why is this important? Because when you understand the full content lifecycle, you can optimize how tasks are assigned, how workflows are structured, and how different roles contribute to LangOps success.

At The LangOps Institute, we’re working on methodologies to help professionals break down content workflows into smaller, more manageable units. Sometimes, workflows can be grouped based on process similarities, and sometimes they need to be structured based on audience impact, even if the underlying functions are entirely different.

This work requires collaboration, and it’s not just about cross-functionality within organizations anymore. We need to think about cross-functional collaboration within companies and cross-organizational collaboration between companies, LSPs, and tech providers.

It’s about building an interoperable, well-structured framework that makes LangOps efficient and future-proof. Yes, it requires a lot of effort upfront, but once the structure is in place, it will be scalable, adaptable, and long-lasting.

Stefan Huyghe: Manuel, do you want to add anything to that?

Manuel Herranz: Yes, absolutely. Just to build on what Yvan and Pascale said, we can’t demand that platforms be flexible, modular, and interoperable while expecting the workforce to remain static. Linguists and localization professionals will need to adapt as well.

Yes, maybe in the past, we’ve asked translators to do tasks they weren’t necessarily trained for, but let’s be honest, times are changing. If you speak a language, you should also be able to annotate data in that language. If you work in localization, you should be able to manage structured multilingual data. This wasn’t something we were taught at university, but our industry has always required adaptation. The tools we once used, like translation memories and CAT tools, are evolving, and so must we.

The way I see it, the LangOps shift isn’t about replacing localization; it’s about expanding it. The work is changing, but for those willing to adapt, the opportunities will be enormous.


Fine-Tuning LLMs: The Key to Smarter Global Content Operations

Stefan Huyghe: Jochen, let’s talk about large language models (LLMs) for a moment. It seems to me that the AI landscape has shifted significantly in just the last month or so, especially with the arrival of DeepSeek. Suddenly, the opportunity to build your own LLM has become much more feasible. What are the key benefits of building something locally? And what are the potential pitfalls?

Jochen Hummel: Well, to be precise, we’re really talking about fine-tuning rather than building an LLM from scratch. What DeepSeek has changed is the hardware requirements and the cost of running your own LLM. Many companies have been reluctant to use cloud-based LLMs because of data privacy concerns: sending their data to a third party for processing can be a security risk, and in some cases, that data might even be used to train external models. Now, with the ability to run LLMs locally, that concern disappears.

Beyond just running an LLM in-house, fine-tuning it for specific tasks and domain-specific data can dramatically improve performance. When working with LLMs, there are two primary ways to customize them: one is prompt engineering, where you adjust the way you feed information into an LLM to get better results. The other is fine-tuning, where you train the model on your specific data and tasks to improve accuracy.

Most people are now familiar with Retrieval-Augmented Generation (RAG), where you provide external knowledge sources to supplement an LLM’s responses. But the key question is: where does this external data come from? A LangOps platform should have access to structured multilingual knowledge so that it can automatically retrieve and incorporate the right data into AI-powered workflows.
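The retrieval step Jochen describes can be sketched in a few lines. This is a minimal, illustrative RAG loop over an in-memory knowledge base: rank stored segments by similarity to the query, then splice the best matches into the prompt sent to an LLM. The tiny bag-of-words similarity and the sample segments are assumptions for demonstration; a production LangOps platform would use embedding models and a structured multilingual knowledge store.

```python
import math
import re
from collections import Counter

# Toy multilingual knowledge base; in a real platform this would be the
# structured content store the panelists describe.
KNOWLEDGE_BASE = [
    "Returns are accepted within 30 days of purchase.",
    "Las devoluciones se aceptan dentro de los 30 dias posteriores a la compra.",
    "Shipping to the EU takes 3-5 business days.",
]

def bow(text):
    """Bag-of-words vector: token -> count, lowercased, punctuation stripped."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k knowledge-base entries most similar to the query."""
    q = bow(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("deadline for returns after purchase"))
```

The same pattern generalizes to the cross-language case: because the knowledge base holds aligned multilingual segments, retrieval can surface an answer regardless of which language the source content was authored in.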

For certain tasks, fine-tuning the LLM directly may be even more effective than a RAG-based approach. Take quality estimation, for example. If your Language Factory is already processing vast amounts of translated content, and you have a human-in-the-loop process where translators review and refine machine output, you can use that real-world correction data to fine-tune an LLM. This allows it to predict quality scores that align with your company’s specific quality standards.
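The human-in-the-loop idea above can be made concrete: each (machine output, post-edited version) pair yields a quality label, and those labeled examples become fine-tuning data for a quality-estimation model. The sketch below uses a simple character-similarity ratio as the label; that metric, the field names, and the sample segments are illustrative assumptions, not a prescribed scoring scheme (real pipelines often use TER or MQM-based scores).

```python
import difflib
import json

def quality_score(mt_output, post_edited):
    """Approximate segment quality as character-level similarity between the
    raw machine output and the translator's corrected version.
    1.0 means the translator changed nothing; lower means heavier editing."""
    return round(difflib.SequenceMatcher(None, mt_output, post_edited).ratio(), 3)

def to_training_record(source, mt_output, post_edited):
    """One fine-tuning example: source/translation pair -> quality label."""
    return {
        "source": source,
        "translation": mt_output,
        "label": quality_score(mt_output, post_edited),
    }

# Hypothetical human-in-the-loop data from a translation pipeline.
record = to_training_record(
    source="Die Lieferung erfolgt innerhalb von drei Tagen.",
    mt_output="The delivery takes place within three days.",
    post_edited="Delivery takes place within three days.",
)
print(json.dumps(record, ensure_ascii=False))
```

Accumulated over thousands of reviewed segments, records like these let a fine-tuned model predict quality scores that reflect the organization’s own editing behavior rather than a generic benchmark.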

Instead of relying on generic AI models, you’re building something that is tailored to your organization’s unique needs. The real game-changer is that fine-tuning is no longer just for tech giants. Companies that don’t have the budget for massive GPU clusters can now train and deploy their own LLMs more affordably, thanks to advances in hardware efficiency. In the future, LLMs will become standard building blocks for LangOps. The challenge will be knowing when to use RAG versus fine-tuning, and having a LangOps platform that simplifies these complex AI engineering decisions will be critical.

Stefan Huyghe: This really sounds like the end of the road for traditional translation memories. I won’t ask you, Jochen, since you were the original creator of Translation Memory (TM); that would be too easy! Maybe Manuel or Pascale can weigh in instead: what should organizations do to prepare for this evolution? I mean, let’s say I’m sitting on millions of words stored in a TM, what do I do with that data now in a world where LangOps solutions don’t seem to need TMs anymore?

Manuel Herranz: First off, you’re talking to an NLP guy here, so to me, Translation Memories are just another form of data. And data is gold. If you have a well-structured TM, that’s an incredibly valuable asset for any AI-driven system. It can be used to train or fine-tune an LLM for domain-specific translations, power a RAG-based system where your TM acts as a reference database for AI-assisted translation, or feed data into AI-driven quality estimation models, improving automated post-editing.
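Treating a TM as "just another form of data" typically starts with extracting its aligned segments. The sketch below reads TMX-style translation units and emits source/target pairs as JSONL, a common input format for fine-tuning or for building a retrieval index. The inline sample and language pair are assumptions for illustration; real memories would be loaded from .tmx files and may carry metadata worth preserving.

```python
import json
import xml.etree.ElementTree as ET

# Minimal inline TMX sample; real memories would be loaded from a .tmx file.
TMX = """<?xml version="1.0"?>
<tmx version="1.4">
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Add to cart</seg></tuv>
      <tuv xml:lang="es"><seg>Añadir a la cesta</seg></tuv>
    </tu>
  </body>
</tmx>"""

# ElementTree exposes xml:lang under the XML namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def tmx_to_pairs(tmx_text, src="en", tgt="es"):
    """Extract (source, target) segment pairs from a TMX document."""
    root = ET.fromstring(tmx_text)
    pairs = []
    for tu in root.iter("tu"):
        # Map each variant's language code to its segment text.
        segs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
        if src in segs and tgt in segs:
            pairs.append({"source": segs[src], "target": segs[tgt]})
    return pairs

# Emit one JSON record per line (JSONL).
for pair in tmx_to_pairs(TMX):
    print(json.dumps(pair, ensure_ascii=False))
```

Once in this shape, the same pairs can feed any of the uses Manuel lists: fine-tuning a domain model, populating a RAG reference database, or training quality-estimation systems.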

So no, I wouldn’t say TMs are useless, certainly not in front of Jochen! Times have evolved, and I know Yvan is well aware of this shift. Translation Memories are still useful, but their primary value is evolving. Rather than being the central asset of a localization workflow, they are becoming just one data source among many.

Organizations will still need a structured database system to store translated content, transcriptions, summarizations, client queries, chatbot responses, search data from knowledge bases, and prompts with LLM-generated answers. The mindset is shifting from saving only translated text to storing all data that contributes to knowledge management.

Stefan Huyghe: Yes, knowledge management seems to be the key here. Even in the old days, translation memories could become unwieldy, especially when shared across departments, leading to contradictions and inconsistencies. Pascale, what are some of the new roles linguists will need to prepare for in the AI era? How will the LangOps Institute help guide this transition?

Pascale Tremblay: First, I want to step back and address the role of TMSs. I fully agree with Manuel: I don’t believe TMSs will disappear anytime soon, but their value and function within the business ecosystem are changing.

For the past decade, the TMS was at the core of linguistic processes, connecting clients and LSPs. Whether an organization owned its own TMS or relied on an external provider, it acted as the backbone of the entire localization process. But now, we’re seeing a diversification of tools and processes. The TMS is no longer the core system; it’s one component in a much larger tech stack that includes AI-driven workflows, multilingual knowledge management, and content orchestration tools.

This shift is redefining how clients approach localization technology. What I’m looking forward to is how TMS providers will adapt to this transition. To stay relevant, they must become more interoperable, able to integrate seamlessly with the new AI-driven LangOps landscape. For at least the next three years, organizations will still rely on existing workflows that include TMSs. The question is: how do we transition from traditional TMS-centric models to LangOps frameworks? How do we ensure we retain and structure valuable content and knowledge for AI-driven applications?

Yes, there’s uncertainty and fear around these changes, but this is not an end; it’s a transformation. I see this as an evolution, and I’m excited to see how TMS providers will innovate to meet new demands.


The Changing Role of Localization Professionals

Stefan Huyghe: That brings us to changing roles. What are some of the biggest shifts you see happening in how we define localization leadership?

Pascale Tremblay: The role of the localization leader is undergoing a fundamental transformation. When you embed language into a LangOps framework, you shift left, meaning language operations move up the strategic ladder, much like DevOps did in software development.

This means localization leadership is no longer just about managing vendors or translation workflows. It’s about the strategic integration of language into global business operations, understanding and optimizing multilingual knowledge structures, and coordinating language at the intersection of content orchestration and AI-driven automation.

At Gap, for example, our language operations didn’t just touch one or two teams; they involved many different functions feeding into the system. To manage this, leaders need a deep understanding of language frameworks, a broad knowledge of business processes and content ecosystems, and the ability to design and orchestrate complex workflows. Localization leadership is becoming more strategic but also far more complex.

At The LangOps Institute, we are preparing for this shift by offering education and training focused on AI-specific language roles, how to work with LLMs, data orchestration, and automation, along with business management, strategy, and change management to help organizations transition from traditional localization to LangOps frameworks.

Jochen Hummel: The problem with TMSs is right in the name: TMS stands for Translation Memory System. It was built to manage translations, and that’s all it was designed to do. The future of LangOps goes far beyond translation.

This is why the industry keeps asking, "What can AI do for localization?" But the real question should be, "What can localization do for AI?"

As long as we limit our thinking to supporting translation workflows, we won’t achieve true LangOps transformation. We need to elevate translation memories into multilingual content repositories, terminology databases, and structured multilingual knowledge. This isn’t just about improving translation efficiency; it’s about supporting multilingual AI applications.

That’s why I’m so excited about LangOps as both a vision and a platform. TMSs are silos, focused only on translation, with a text-in, text-out approach. But LangOps is about building an application layer for multilingual AI, creating something far beyond what TMSs were ever designed to do.

Stefan Huyghe: Believe it or not, we’re already in the last five minutes of our roundtable; that’s how fast this conversation has flown by! Since Jochen already answered the question I was going to ask, let me turn to the rest of the panel. What are you most excited about when it comes to future developments in LangOps? What’s something that our audience might not yet be aware of that’s right around the corner? Who wants to go first?

Manuel Herranz: First of all, the future is bright. That’s my message.

And following up on what Pascale said about new roles, someone is going to get a pay raise from all of this! Instead of worrying about AI, automation, and what’s changing, we should be excited about the new responsibilities, new opportunities, and expanded roles that are emerging.

Yes, translation may become just 10% of what language professionals do, or even less, but that’s not a bad thing. What’s growing rapidly is the need for enterprises to manage their multilingual data, extract knowledge from it, and maintain structured repositories of language assets.

So, is there uncertainty? Sure. But we have a vision of where things are headed, and while we’re all adapting (myself included!), one thing is clear: the value of linguists, localization professionals, and knowledge managers is rising, not diminishing.

Stefan Huyghe: I like that perspective, Manuel, but I want to hear from the rest of the panel as well. Pascale?

Pascale Tremblay: I’m excited about the possibilities that LangOps brings to the table. Over the last 20 years, we’ve seen the localization industry evolve tremendously. And now, we’re entering an era where language technology is no longer just about translation; it’s about global content enablement.

When I was at Gap, I had to explain the concept of LangOps to executives who were still thinking about localization in a linear way. I was lucky; there was a perfect analogy in pop culture: Minority Report. In the movie, there’s a bot that greets people in multiple languages and instantly knows everything about them. That’s the future of data-driven multilingual AI.

The nonlinear nature of modern localization workflows, where data flows dynamically between content, AI, and personalization engines, is exactly where we’re headed. It’s about hyper-specialization, real-time adaptation, and enabling multilingual AI-driven experiences. That’s what excites me the most!

Yvan Hennecart: I completely agree. And to echo what Manuel said, when I started this conversation, I joked that I’ve been in localization for far too long. But for the first time, it feels like we’re finally getting all the right pieces in place to deliver the kind of multilingual experiences that users actually expect.

Instead of just optimizing for cost and efficiency, we are now in a position to deliver content in a way that is highly customized, intelligent, and deeply integrated into global business workflows. That’s the real breakthrough.

Stefan Huyghe: That’s a testament to the quality of this panel: I had 30 questions prepared, and we’ve covered six or seven of them just in the natural flow of discussion! I could keep this conversation going all evening, but let me end with a final question for Jochen.

What’s the next big breakthrough in LangOps? And… are you working on it?

Jochen Hummel: I don’t think the next big breakthrough in LangOps is necessarily a technical one; it’s an organizational one.

If you go into any large company today and ask, "Where do you find the most valuable language data for AI?" very few people will point to the localization team. And that’s a problem, but one we’re working hard to change.

For years, localization teams have been sitting on a goldmine of multilingual data, but this data was locked away in translation memories, terminology databases, and other siloed systems. The real breakthrough in LangOps will happen when organizations recognize the value of their multilingual data and leverage it for AI-driven initiatives, and when LangOps platforms make this data easily accessible, structured, and continually updated.

When companies start realizing that multilingual knowledge is a core business asset, not just a cost center for localization, then we will see a real shift.

And yes, I’m working on it. We’re building platforms that don’t just manage translations but power multilingual AI applications across the enterprise. If we get this right, the future of LangOps is incredibly bright, and instead of being an afterthought, our industry will be at the center of the AI revolution.
