In digital media and advertising, it’s common to embed a range of business and media-related dimensions like brand, campaign objective, or audience directly into asset names across multiple platforms. This pattern appears at both large holding-company agencies and smaller independents. You might already know what I’m talking about, but if not, these examples might ring a bell:
- Using a CM_BundleID or CM_PlacementNameKey within DV360 line items to link them back to Floodlight activities or placement IDs in Campaign Manager.
- Adding an FB_ or IG_ prefix in Facebook Ads names, plus product or audience codes, to match how those assets are tracked in Google Ads or DV360.
- Including YT or GDN in naming conventions, along with brand tags, to quickly distinguish channels even though “YouTube” or “GDN” often appears as a separate field.
- Concatenating large key-value definitions (separated by underscores, tildes, or other tokens) into a single string to create a “unique” key for campaigns, placements, or creatives.
If any of this sounds familiar, then yes, you’re basing your operation on Taxonomy Strings.
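To make this concrete, here's a minimal sketch in Python of the concatenation pattern from the last example above. The field order, separator, and codes (brand, market, channel, and so on) are invented purely for illustration; every agency's taxonomy looks a little different.

```python
# A minimal sketch of the pattern: a hypothetical taxonomy string whose field
# order, separator, and codes are invented here purely for illustration.
TAXONOMY_FIELDS = ["brand", "market", "channel", "objective", "audience", "placement_key"]

def parse_taxonomy_string(name: str, sep: str = "_") -> dict:
    """Split an asset name into its embedded dimensions, positionally."""
    parts = name.split(sep)
    if len(parts) != len(TAXONOMY_FIELDS):
        raise ValueError(f"Expected {len(TAXONOMY_FIELDS)} tokens, got {len(parts)}: {name!r}")
    return dict(zip(TAXONOMY_FIELDS, parts))

# Every downstream report relies on this string being exactly right.
name = "ACME_US_YT_AWARENESS_GENZ_CMPLK12345"
print(parse_taxonomy_string(name))
# {'brand': 'ACME', 'market': 'US', 'channel': 'YT', 'objective': 'AWARENESS',
#  'audience': 'GENZ', 'placement_key': 'CMPLK12345'}
```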
Common issues when using Taxonomy Strings
Despite their widespread use, Taxonomy Strings can be a pain in the neck when you’re aiming for data consistency and accuracy. Non-technical stakeholders often struggle to interpret or verify a long string of codes, which also hinders collaboration among strategy, creative, and data teams. Let’s highlight a few more critical issues:
- Complexity & Potential for Error: Long, convoluted naming structures (platforms may also have character limits) are prone to typos, inconsistent abbreviations, and formatting mistakes. A single mislabel can disrupt reporting across multiple platforms.
- Risk of Inconsistent Naming Conventions: This is closely related to the first one. Different account teams might use slightly different abbreviations or separators. Without tight governance, the “universal key” can quickly become anything but universal.
- High Maintenance Overhead: Whenever a new product line, audience segment, or other business abstraction is introduced, naming conventions must be updated. This can mean editing spreadsheets, scripts, and platform settings, reprocessing historical data, and it can even lead to discrepancies in fast-paced campaigns.
- Siloed Code Logic: The complex scripts or macros that parse these naming conventions often end up in departmental silos. If a key developer leaves, institutional knowledge about how everything is parsed can be lost.
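To see why this is so fragile, here's a minimal sketch (reusing the invented field layout from the earlier example) of how a single inconsistent separator, a dropped token, or a swapped field order derails the parse, and therefore every report built on top of it:

```python
# Hypothetical field layout, invented for illustration only.
FIELDS = ["brand", "market", "channel", "objective", "audience", "placement_key"]

def parse(name: str) -> dict:
    parts = name.split("_")
    # Real-world scripts often skip this length check and silently shift every column instead.
    if len(parts) != len(FIELDS):
        raise ValueError(f"Cannot parse {name!r}: expected {len(FIELDS)} tokens, got {len(parts)}")
    return dict(zip(FIELDS, parts))

names = [
    "ACME_US_YT_AWARENESS_GENZ_CMPLK12345",    # fine
    "ACME_US-YT_AWARENESS_GENZ_CMPLK12346",    # one team used a hyphen: token count breaks
    "ACME_US_YT_AWARENESS_CMPLK12347",         # someone dropped the audience token
    "ACME_YT_US_AWARENESS_GENZ_CMPLK12348",    # swapped order: parses "fine", silently mislabeled
]

for n in names:
    try:
        print(parse(n))
    except ValueError as err:
        print("NEEDS MANUAL FIX:", err)
```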
All of this only gets worse if you're juggling constant value changes in Excel while continuously refreshing dashboards.
Don't you remember working all night long to get those reports updated and ready? Don't you remember fixing the same data-consistency issues over and over again?
Why does this practice persist?
Although taxonomy strings can seem cumbersome, they've become widespread for several key reasons. Here are some I've faced personally, or that close colleagues of mine have been dealing with:
- Default “Solution” for Cross-Platform Consistency: Many ad tech platforms (DSPs, Ad Servers, Social Ads Managers, etc.) lack robust custom fields or metadata support. Embedding dimensions into the asset name ensures a “universal key” that appears in every exported report.
- Legacy In-House Reporting Tools & Excel “Add-Ons”: Over the years, agencies have built homegrown BI solutions or scripts in Excel/Google Sheets to parse these naming conventions. This infrastructure becomes ingrained, making a major overhaul both daunting and time-consuming.
- Limited Support for Granular Business Dimensions: Even when platforms offer labeling or business data fields, they rarely align across all channels. Agencies often need additional fields like product lines, cross-sell flags, or regional variations, leading them to “shoehorn” that info into the asset name.
- Avoiding Overhead of External MDM Tools: Tools like Claravine, Adverity, or a custom data warehouse can centralize attributes and auto-generate campaign names. However, licensing and integration can be expensive or complex, leaving many agencies to default to a naming-based approach.
Possible solutions to the challenge
Large holding companies and consultants often publish “naming convention best practices,” effectively standardizing the use of the taxonomy. Although organizations like the IAB (Interactive Advertising Bureau) propose data standards, they don’t always address every possible business need. So, if you’re looking to move beyond the limitations of the Taxonomy Strings practice, you could consider these approaches:
- Implement a Master Data Layer or MDM (Master Data Management) System: Centralizes key business and media dimensions (e.g., brand, product, audience) so they’re defined once and referenced everywhere. Keep in mind that it requires initial investment, careful governance, and integration with each ad platform.
- Adopt Specialized Taxonomy Tools: Claravine, Adverity, or in-house solutions that auto-generate consistent naming or push metadata directly into campaigns can reduce human error and eliminate complex naming scripts, but tool licensing and training can be costly.
- API-Driven Asset Creation: Rather than manually setting up campaigns in each platform, build or use an orchestration system that uses APIs to standardize metadata (a minimal sketch of this idea follows this list). It requires technical expertise to set up and maintain, but it automates consistency and reduces manual workloads.
- Establish Governance & Collaborative Processes: This is something you should always consider, regardless of the approach you follow. Adopting a governance framework ensures that dimension updates (e.g., new product codes) are systematically tracked and reviewed. It could require a complete culture change, as teams must embrace centralized processes rather than quick-and-dirty fixes.
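To illustrate the direction of these approaches, here's a minimal sketch in Python of the master-data idea combined with API-driven creation: dimension codes live in one central registry, names and metadata are derived from validated codes, and an orchestration layer pushes the result to each platform. The registry contents, field layout, and payload shape are all hypothetical; a real implementation would sit on an MDM tool or data warehouse and each platform's actual API.

```python
from dataclasses import dataclass

# In practice this registry would live in a database, data warehouse, or MDM tool.
REGISTRY = {
    "brand":     {"ACME": "Acme Corp"},
    "channel":   {"YT": "YouTube", "GDN": "Google Display Network"},
    "objective": {"AWARENESS": "Awareness"},
    "audience":  {"GENZ": "Gen Z 18-24"},
}

@dataclass
class CampaignSpec:
    brand: str
    channel: str
    objective: str
    audience: str

def validate(spec: CampaignSpec) -> None:
    """Reject any code that is not defined once, centrally, in the registry."""
    for dim, codes in REGISTRY.items():
        code = getattr(spec, dim)
        if code not in codes:
            raise ValueError(f"Unknown {dim} code: {code!r}")

def build_payload(spec: CampaignSpec) -> dict:
    """Derive the platform-facing name and metadata from governed codes."""
    validate(spec)
    return {
        "name": "_".join([spec.brand, spec.channel, spec.objective, spec.audience]),
        "metadata": {dim: getattr(spec, dim) for dim in REGISTRY},
    }

payload = build_payload(CampaignSpec(brand="ACME", channel="YT",
                                     objective="AWARENESS", audience="GENZ"))
# An orchestration layer would now push `payload` through each platform's own API
# (DV360, Meta, etc.) instead of relying on someone typing the string by hand.
print(payload)
```

The key design point is that the string, where it still has to exist, becomes a derived output of governed data rather than the source of truth itself.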
Conclusion
Taxonomy strings have become, in many cases, the de facto “industry standard” for bridging multiple platforms. Their advantages include consistent identification, reliable reporting visibility, and familiarity for teams. However, the downsides, ranging from messy naming conventions to limited scalability, can’t be ignored.
If you’re thinking of transitioning away from the status quo, you can introduce a master data layer to maintain a single source of truth, adopt specialized tools, or leverage API-based flows. Just remember that implementing these solutions requires alignment between planning teams, data ops/engineering, and executive leadership, plus a willingness to invest in new workflows. But over time, you’ll benefit from streamlined operations, improved data accuracy, and far less frustration when it comes to analytics and reporting.
Have you tackled these challenges, or do you still rely on taxonomy strings? Please share your insights and experiences in the comments below.