Data Pulse by fifty-five: Your Monthly Insights on All Things Tech, Data & AI
[Radar 2025 - which trends are we monitoring]
Generative AI, CTV, and MMM: The Biggest Trends in Adtech at Cannes Lions 2025
From what I saw, three major adtech trends emerged at Cannes Lions 2025: generative AI, next-generation Marketing Mix Modeling (MMM), and Connected TV (CTV).
The first, GenAI, has been a major trend for a few years now, and AI is still driving a transformation as profound as the one brought about by the shift to digital fifteen years ago. Yet I was struck by the fact that not a single piece of generative AI work made it into the creative awards, even though the technology is fully mature. One telling example: the first fully AI-generated TV spot created for the NBA Finals. It’s flawless, and was produced in under 48 hours for just $2,000, whereas a traditional ad would require a seven-figure budget and months of preparation. This kind of shift creates friction in traditional creative agencies, where AI is still often seen as a threat. What used to take dozens of people now takes just one or two, which deeply challenges long-standing creative workflows.
Meanwhile, CTV is emerging as a new playground for brand building. It enables immersive, engaging storytelling, far from the rapid-fire demands of social platforms. Unlike YouTube or Meta, which require a visible logo from the very first second, CTV allows a return to longer narratives. And while this rapidly evolving channel still comes with inconsistent offerings and varying ad loads, CTV is creating the kind of buzz we saw a few years ago around programmatic display, with new startups and unexpected alliances (such as Amazon with Disney, or Netflix with traditional TV networks) forming to simplify inventory and media buying. For brands, the challenge now is to embrace innovation while ensuring quality experiences that truly fit their identity.
Lastly, on the measurement front, and as fifty-five has been predicting for a decade, MMM is entering a new era. It’s no longer about static PowerPoint studies delivered once a year, but about real-time, AI-powered tools connected directly to live data. Today, advertisers can assess the impact of each digital channel not only nationally, but also by product and store. Monthly updates ensure data freshness, and simulations allow for rapid budget optimization. Most importantly, costs have plummeted: less than €100,000 for a system that then runs autonomously, making this once-exclusive tool available even to smaller advertisers. This shift positions MMM as a truly strategic, agile approach to media measurement.
[Behind the scenes]
How We Quantify the Consistency of AI in Conversational Analytics
Conversational AI analytics systems represent one of the most valuable AI use cases for businesses, yet they also pose significant business risks when they generate confident-sounding but fabricated insights. Imagine asking your AI analytics assistant: “Which product categories are driving our Q4 growth?” and receiving a confident response about “smart home devices showing 45% growth compared to last quarter.” The analysis sounds reasonable, the numbers are specific, and the insight seems actionable. But what if the AI fabricated those numbers entirely?
Traditional BI dashboards require users to understand metrics and validate data themselves. Conversational analytics, however, present information in natural language that appears authoritative regardless of accuracy. For e-commerce businesses, where data-driven decisions directly impact revenue and strategy, this uncertainty becomes a significant business risk.
Our solution: integrating UQLM (Uncertainty Quantification for Language Models) into conversational analytics systems to provide quantitative confidence scores for every AI-generated insight.
Our latest proof-of-concept demonstrates how to embed UQLM into a conversational analytics pipeline using Google Gemini and BigQuery. For each user query, the system generates multiple responses, applies semantic negentropy (a consistency measure based on the entropy of semantically clustered responses) and non-contradiction scorers to compute a 0–1 confidence score, and employs a 0.85 threshold to distinguish between reliable insights (achieving 95% accuracy) and suspect outputs. By automatically flagging low-confidence results, our POC empowers risk-aware decision-making while preserving a seamless user experience.
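The core idea, sampling several answers to the same query and scoring their mutual consistency against a threshold, can be sketched without the full UQLM stack. The snippet below is a simplified, hypothetical illustration rather than our POC code: it stands in for UQLM's semantic negentropy and non-contradiction scorers (which rely on semantic clustering and NLI models) with plain string similarity, keeping only the sample-and-threshold logic described above.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Threshold from the article: scores below this flag the insight as suspect.
THRESHOLD = 0.85

def consistency_score(responses: list[str]) -> float:
    """Average pairwise similarity across sampled responses, in [0, 1].

    A toy stand-in for UQLM's black-box scorers: real semantic negentropy
    and non-contradiction scoring would compare meanings, not characters.
    """
    if len(responses) < 2:
        return 0.0  # cannot measure consistency from a single sample
    sims = [
        SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(responses, 2)
    ]
    return sum(sims) / len(sims)

def is_reliable(responses: list[str], threshold: float = THRESHOLD) -> bool:
    """Flag an insight as reliable only if sampled answers agree enough."""
    return consistency_score(responses) >= threshold
```

In practice, an AI that fabricates numbers tends to fabricate different numbers on each sample, so divergent responses pull the score below the threshold and the insight is surfaced to the user with a warning rather than stated as fact.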
For more details on this POC, read my Medium article on the topic.
[Radar 2025 - which trends are we monitoring]
Meta, Scale AI, and the Price of Good Data
By Tiyab K.
"How much would you be willing to pay for clean, well-classified data to fuel your AI ambitions?” For Meta, that number is $14.3 billion, or the amount it paid to own 49% of Scale AI.
Scale AI transforms messy real-world data into the precise fuel that powers AI for OpenAI, Anthropic, Meta, the Pentagon, and major autonomous vehicle players. Its CEO Alexandr Wang calls the business an "AI data foundry." From a business perspective, it's a massive human operation teaching machines how to "think."
After Meta's announcement, Google, Scale AI's biggest customer at $200 million annually, plans to cut ties. Microsoft, OpenAI, and xAI followed. Why? Because your data partner knows more about your strategic direction than your board does. They see what you're building, what problems you're solving, where you're headed.
In fact, what Large Language Model providers don't emphasize enough is that behind every breakthrough model is an army of humans meticulously labeling data, correcting machine errors, and teaching algorithms context and nuance. LLM providers paint pictures of engineer-free futures, of AI that builds itself. They won't mention their dependency: they need companies like Scale AI, which need human intelligence to make artificial intelligence work.
Having clean, well-labeled data isn't some outdated concept from the big data era. Everything we have already learned about data quality, governance, and infrastructure remains the foundation. Without it, AI initiatives are expensive experiments in failure. Meta didn't just buy technology; they bought the expertise required to make technology work. Similarly, LLMs are blind to your business context. Making them useful requires transforming your messy data into teaching material with the proper expertise, exactly what Meta paid $14.3 billion to secure.
So while everyone's distracted by the latest model release or AGI timeline debates, ask yourself: How good is our data, really? Who's curating it? Who has access to it? And what would we pay to make it AI-ready?
[Expert POV]
Media Activation in the Age of User Privacy
As the advertising landscape keeps evolving, brands face significant challenges in executing relevant and effective media activations – a task greatly complicated by technical limitations, growing concerns about user privacy, and the need to accurately measure campaign impact. In this context, various solutions are emerging to help brands navigate this environment while respecting user rights. Among these solutions, some stand out for their commitment to data privacy, allowing users to opt out of being identified after the fact. By leveraging proprietary data provided by strategic partners, these tools aim to harness a wealth of information while preserving the user experience.
Our collaboration with Utiq illustrates this dynamic by offering expert support for the integration of activation and measurement solutions. This type of assistance is essential for maximizing campaign effectiveness and ensuring the upskilling of internal teams. The adoption of these technologies will also depend on advertisers' interest in the inventories on which they will be deployed, particularly in environments such as connected TV (CTV), where persistence and identifiers can enable the invaluable cross-device activations that brands are seeking.