Let me ask you something.
When was the last time a piece of software got dramatically better without anyone changing how it fundamentally works?
Not a bug fix. Not a new feature. Not a redesigned interface. Just the same basic idea, done bigger, and suddenly it could do things that seemed impossible six months ago.
That is what has been happening with AI for the past four years. And the reason it keeps happening is not magic. It is not some secret breakthrough happening behind closed doors at OpenAI or Google. It is something far more straightforward.
It is called scaling laws. And once you understand the idea, you will understand why AI has been improving so fast, why companies are spending hundreds of billions of dollars on it, and why some very smart people are starting to worry the ride might be slowing down.
Start Here: What Even Is a Scaling Law?
Forget the technical term for a moment.
Here is the honest version. AI scaling laws are essentially observed recipes for intelligence. They tell us that if you feed an AI model more high quality data, give it more computational power to train on, and make it bigger and more complex, the resulting AI gets significantly and predictably better. It is an empirical observation that the more resources you pour into training a large AI model, the smarter and more capable it becomes, following consistent, smooth power-law patterns.
That word predictably is the important one. This is not guesswork. This is not hoping for a breakthrough. Scaling laws transform AI development from a series of gambles into a predictable engineering discipline. Researchers can train a few smaller models, plot their performance, and then confidently predict how a much larger model will perform before spending millions of dollars training it.
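To make that concrete, here is a minimal sketch of how such a prediction works, using made-up (compute, loss) numbers. A power law is a straight line in log-log space, so a simple linear fit over a few small training runs can be extrapolated to a frontier-scale run:

```python
import math

# Hypothetical (training compute in FLOPs, final loss) pairs from small runs.
# Scaling laws say loss follows a power law: loss = a * compute^(-b),
# a straight line in log-log space: log(loss) = log(a) - b*log(compute).
runs = [(1e18, 3.20), (1e19, 2.70), (1e20, 2.28)]

xs = [math.log10(c) for c, _ in runs]
ys = [math.log10(l) for _, l in runs]

# Ordinary least squares for slope and intercept in log-log space.
n = len(runs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict_loss(compute: float) -> float:
    """Extrapolate the fitted power law to a much larger training run."""
    return 10 ** (intercept + slope * math.log10(compute))

# Predict the performance of a frontier-scale run before paying for it.
print(f"fitted exponent: {-slope:.3f}")
print(f"predicted loss at 1e25 FLOPs: {predict_loss(1e25):.2f}")
```

The numbers are illustrative, not from any real lab, but the workflow (fit small, extrapolate big) is exactly what makes the billion-dollar decision calculable.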
Think about what that means for a company deciding whether to spend a billion dollars building a new AI model. They do not have to wonder if it will be smarter than the last one. They can calculate it in advance with reasonable confidence.
That predictability is why companies keep spending. Because the outcome is not a guess.
The Three Ingredients That Make AI Smarter
An AI model's performance improves in a surprisingly predictable way as you scale up three key ingredients: the size of the model, the amount of training data, and the computing power used.
Here is what each one actually means in plain English.
Ingredient 1: More Parameters (Making the Brain Bigger)
Parameters are the adjustable values inside an AI model that determine how it responds to any given input. You can think of them like the synapses in a human brain. More synapses means more connections, more nuance, more ability to hold complex ideas simultaneously.
GPT-3, which shocked the world in 2020, had 175 billion parameters. The models powering leading AI tools today operate in the trillions. The original scaling law showed that more data, more parameters, and more compute yield more capability. This drove the initial AI revolution.
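You can sanity-check that 175 billion figure yourself with a rule-of-thumb formula (an approximation, not any lab's exact architecture): a GPT-style transformer has roughly 12 * d_model^2 weights per layer, plus an embedding matrix.

```python
# Back-of-envelope parameter count for a GPT-style transformer.
# Rule of thumb: each layer holds ~12 * d_model^2 weights
# (4*d^2 for the attention projections, 8*d^2 for the MLP),
# plus a vocab * d_model token-embedding matrix.
def transformer_params(d_model: int, n_layers: int, vocab: int) -> int:
    per_layer = 12 * d_model ** 2
    embeddings = vocab * d_model
    return n_layers * per_layer + embeddings

# GPT-3's published shape: 96 layers, d_model of 12288, ~50k vocab.
gpt3 = transformer_params(d_model=12288, n_layers=96, vocab=50257)
print(f"{gpt3 / 1e9:.0f}B parameters")  # lands close to the reported 175B
```

The estimate ignores biases, layer norms, and positional embeddings, but those are rounding error next to the attention and MLP weights.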
Ingredient 2: More Data (More to Learn From)
An AI model learns by reading vast amounts of text. The more it reads, the more patterns it absorbs, the more nuanced its understanding becomes. Early models were trained on portions of the internet. Recent models have consumed essentially all publicly available text that exists in human history, multiple times over.
If you have a massive model but not enough data, the scaling laws show you will get diminishing returns. All three ingredients must be scaled in balance for optimal results.
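That balance point has even been roughly quantified. The widely cited Chinchilla result suggests around 20 training tokens per parameter for compute-optimal training; the 20:1 ratio below is that approximation, not a universal constant.

```python
# Chinchilla-style rule of thumb: for compute-optimal training,
# feed the model roughly 20 tokens of data per parameter.
TOKENS_PER_PARAM = 20

def compute_optimal_tokens(n_params: float) -> float:
    """Approximate training-data budget for a model of a given size."""
    return TOKENS_PER_PARAM * n_params

# A 70B-parameter model "wants" about 1.4 trillion tokens;
# training it on far less wastes the extra parameters.
tokens = compute_optimal_tokens(70e9)
print(f"{tokens / 1e12:.1f}T tokens")
```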
Ingredient 3: More Compute (More Power to Process It All)
You can have all the data in the world and a beautifully designed model but if you do not have enough computing power to run the training process, none of it works. This is why companies like Microsoft, Google, and Meta are spending tens of billions of dollars building massive data centres filled with specialised chips. Next-generation models require between 10^26 and 10^27 FLOPs of compute and a billion dollars or more to train, with some expecting fourth-generation models to cost over $10 billion.
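Those FLOP figures come from a standard back-of-envelope rule: training a dense transformer costs roughly 6 FLOPs per parameter per token (forward plus backward pass). A quick sketch, with a hypothetical model size:

```python
# Standard estimate: training a dense transformer costs about
# 6 FLOPs per parameter per token (forward + backward pass).
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# A hypothetical 2-trillion-parameter model trained on 40T tokens:
c = training_flops(n_params=2e12, n_tokens=40e12)
print(f"{c:.1e} FLOPs")  # 4.8e+26, inside the 10^26 to 10^27 band
```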
The chips are not just expensive. They are genuinely hard to build and the supply chain for them is one of the most strategically contested in the world right now.
The Moment Things Get Weird: Emergent Abilities
Here is the part that genuinely surprises people when they first hear it.
When you scale an AI model past certain thresholds, it does not just get incrementally better at the things it already could do. It suddenly gains abilities that smaller versions of the same model had no trace of at all.
Superlinear scaling occurs when output grows faster than input: a modest increase in scale produces a much larger jump in capability. It is rare, but in very large AI models it shows up as emergent abilities. A big enough jump in model size and data can suddenly give the model new skills that smaller models simply did not have.
A concrete example. Early versions of GPT could not do multi-step maths. Not because the developers forgot to add that feature. Because the model was not big enough for that ability to exist. Once the model crossed a certain size threshold, mathematical reasoning appeared as if from nowhere.
Nobody programmed it. Nobody explicitly taught it. It emerged from scale.
This is why researchers find scaling laws both exciting and slightly unnerving. You are not always sure what new capability will appear on the other side of the next threshold until you get there.
The New Frontier: Teaching AI to Think Before It Answers
For the first few years of the scaling era, the only kind of scaling that mattered was training scaling. Make the model bigger, feed it more data, give it more compute during training. Job done.
But in 2025 and into 2026, a completely different kind of scaling became just as important. It is called test-time scaling, or inference scaling. And it works very differently.
It turns out that inference compute, the amount of computing power spent thinking about a problem, also has a scaling law all its own. This thinking process is essentially the model performing multiple internal reasoning steps before producing an output, which can lead to more accurate responses.
Think of it this way. A small child asked a basic addition question answers instantly. Asked to solve a word problem, they need to slow down, read carefully, work through the steps. The thinking time is part of what produces the right answer.
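One concrete, deliberately simplified form of test-time scaling is self-consistency sampling: spend more inference compute by drawing several independent answers and taking a majority vote. In the sketch below, `sample_answer` is a hypothetical stand-in for a real model call, simulated as a noisy solver:

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    # Hypothetical stand-in for one model call: right 60% of the
    # time, otherwise a random wrong answer.
    return "42" if random.random() < 0.6 else str(random.randint(0, 99))

def self_consistency(question: str, n_samples: int) -> str:
    """Spend more inference compute: sample many answers, majority-vote."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    answer, _ = votes.most_common(1)[0]
    return answer

random.seed(0)
# One sample is right only 60% of the time; the vote over many
# samples is right almost always. Compute buys accuracy.
print(self_consistency("What is 6 * 7?", n_samples=25))
```

Real reasoning models do something richer (long internal chains of thought, self-checking, revision), but the underlying scaling law is the same: more thinking compute, better answers.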
OpenAI's reasoning models introduced exactly this concept at scale: models that think before responding. By 2026, extended reasoning chains are standard in production systems. Intelligence now improves along three dimensions simultaneously, pretraining, post-training refinement, and test-time reasoning, and each dimension scales with compute. The frontier keeps moving.
This is why AI models today feel qualitatively different from the ones that existed two years ago. They are not just faster at pattern matching. They are actually working through problems in steps, checking their own reasoning, and revising answers before they show you the result.
The Question Everyone Is Asking: Is It Running Out of Steam?
This is where it gets genuinely interesting. And genuinely uncertain.
It is an open secret in the AI industry that for over a year now, frontier models have appeared to be approaching a ceiling. The scaling laws that powered the exponential progress of large language models and fuelled bold predictions of artificial general intelligence by 2026 have started to show diminishing returns. Inside labs, the consensus is growing that simply adding more data and compute will not create the all-knowing digital systems once promised.
For the last few years, the AI industry has had one answer to every problem: make it bigger. If the model hallucinates, add more parameters. If it fails at reasoning, throw more compute at it. But as we move through 2026, the laws of diminishing returns are kicking in. The next leap forward will not come from making models larger. It will come from making them efficient, stable, and architecturally better designed.
So what happens now?
The honest answer is that researchers are pursuing several different paths simultaneously. Some are focused on architectural improvements, finding smarter ways to wire the connections inside models rather than just adding more of them. Some are focused on better data quality rather than raw data quantity. Some are focused on inference scaling, getting more intelligence out of the thinking process rather than the training process.
The original scaling laws have not been formally broken, and the loss curves are still inching downward. But 2026 is increasingly about who has the smartest architecture, not who has the biggest model.
Why Any of This Matters to You
Here is the practical takeaway from all of this.
Understanding scaling laws helps you make sense of things that otherwise seem mysterious. Why do AI companies keep spending billions even though they already have impressive models? Because the math tells them that more spending produces better models in a predictable way, and better models unlock new markets and new revenue.
Why does a new best model seem to appear every few months? Because the improvement is not random. It is being engineered systematically, ingredient by ingredient.
And why should you pay attention to the debate about whether scaling is hitting its limits? Because if it genuinely does plateau, the pace of AI improvement will slow from what we have experienced over the past four years. The jumps will get smaller. The time between genuinely transformative releases will get longer.
That does not mean AI stops being important. It means it starts behaving more like other mature technologies, improving steadily rather than leaping forward every few months.
We are not there yet. But we are probably closer to that inflection point than the headlines suggest.
The scaling laws fundamentally shift our understanding of how intelligence emerges and scales. This phenomenon is not just interesting. It is the core driver behind the rapid advancements we have seen in large language models and other AI systems in the past few years.
For now, the recipe still works. Pour in more data, more compute, smarter architecture. Get smarter AI.
The question nobody can fully answer yet is how many more times that recipe will surprise us before the returns start to genuinely shrink.
My honest bet is at least a few more times. And they will be worth watching.
— Roo



