Why real thought leaders are leaving LinkedIn
Your Best Work Is Invisible (And It's Not Your Fault)
Red Wine, White Wine, and the Algorithmic Death of Originality
In 2001, a researcher at the University of Bordeaux served wine to a group of oenology students. Nothing unusual about that; Bordeaux is, after all, Bordeaux. What was unusual was what happened next. The students were asked to describe two wines: one white, one red. They dutifully complied. The white wine, they said, was fresh, dry, honeyed, lively. The red wine was intense, spicy, supple, and deep.
Both wines were, in fact, the same white wine. The only difference was that the researcher had slipped a tasteless red dye into one of the glasses.
I’ve been brooding about this experiment for some time now, because it seems to me to contain a truth far more important than the somewhat gleeful conclusion that wine experts are easily fooled. The students weren’t stupid. They weren’t even wrong, exactly. They were doing what all human beings do, all the time: they were actively constructing meaning from the signals they received. They brought to each glass everything they knew about red wine and white wine, and they experienced each accordingly. The experience was their own creation. The dye was merely a trigger.
Arthur Koestler, in his dense but thoughtful book The Act of Creation, put it this way:
“Language itself is never completely explicit. Words have suggestive, evocative powers; but at the same time they are merely stepping stones for thought. The artist rules his subjects by turning them into accomplices.”
The keyword there is accomplices. Not passive recipients. Not empty vessels awaiting instruction. Accomplices.
Which brings me, somewhat circuitously, to LinkedIn’s new algorithm.
The Algorithm That Doesn’t Understand You
In an unprecedented move for a social media platform, LinkedIn has recently published three technical papers outlining the future of its feed.
360Brew: A Decoder-only Foundation Model for Personalized Ranking and Recommendation (Link. The paper has been mysteriously withdrawn, but you can read it here)
Industrial-Scale Embedding Generation and Usage Across LinkedIn Posts (link)
Large-Scale Retrieval for the LinkedIn Feed Using Causal Language Models (link)
(Thank you, Amy Marriott, for making me aware of the last two).
Taken together, these papers describe a radical shift in how LinkedIn processes content, interprets users, and decides what anyone sees.
The engineering language is polished and confident, and the claims are ambitious. The algorithm, we are told, now understands your content. The algorithm understands your interests. And the algorithm can now match them with semantic precision.
I confess I find the word “understand” doing rather a lot of heavy lifting here.
Because what the papers actually describe is something more modest, though in its own way more troubling.
And that is that LinkedIn is building a feed that simulates understanding without possessing it and then uses that simulated understanding to decide what deserves visibility before a human being (aka you and your audience) ever gets a chance to see it.
Once again, a tech company hasn’t stopped to ask the philosophical questions behind what they are saying, assuming, or doing, and the implications are enormous. And not just for creators, but for meaning itself.
In this newsletter I hope to:
Ask those basic philosophical questions
Reveal why that leads to something deeply unintuitive
Show you how this severely limits the choices you have to do something about it
Show you what you CAN do about it.
But before we get into all of that, let’s unpack what the papers claim.
What LinkedIn’s new system claims to do
Across the three papers, LinkedIn positions its new feed as a unified, AI-first, giant-leap-forward system that can finally “understand” both posts and people.
And if you take what is said in them at face value, here’s how LinkedIn says it works:
1. It embeds every post into a rich semantic vector
The Industrial-Scale Embedding paper describes how LinkedIn now takes every post (text, video transcript, hashtags, layout metadata) and compresses it into a tiny 50-dimensional vector. Their term for this is “semantic embedding.”
But what does that actually mean?
Imagine taking a movie (characters, plot, lighting, soundtrack) and boiling the entire thing down into 50 numbers. Just 50. That’s the fingerprint LinkedIn generates for every post.
This, theoretically, allows the system to take a post’s various threads (text, video transcript, layout, metadata), compress them into a single “meaning representation” (i.e., an embedding), find similarities, and make sense of the tapestry’s meaning, perceiving connections across very different content on the platform.
So, instead of searching for posts by keywords (like “marketing” or “AI”), LinkedIn now searches by these meaning patterns. But here’s the catch: in order for something to be meaningful, it must already be known.
The model only learns meaning from past patterns, past topics, past hashtags, past search relevance, and past content clusters curated by editors (remember those yellow badges). Those are literally the training tasks listed in the paper.
So the “fingerprint” isn’t looking for what makes your post unique.
It’s looking for what makes it similar to things that existed before.
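To make that concrete, here is a minimal Python sketch (using NumPy) of fingerprint-based matching. Everything in it (the `embed` function, the hash-derived vectors, the post titles) is invented for illustration; LinkedIn’s real model is a trained neural network whose weights are not public. What the sketch does capture is the mechanic: every post becomes a fixed-length vector, and “similar meaning” is operationalized as a high cosine score between vectors, with no keywords involved.

```python
import hashlib

import numpy as np

# Hypothetical stand-in for LinkedIn's embedding model. The real model is
# a trained neural network; here a post's "fingerprint" is a deterministic
# pseudo-random 50-dimensional unit vector derived from a hash of the text,
# which is enough to show the mechanics of fingerprint-based retrieval.
def embed(post_text: str, dim: int = 50) -> np.ndarray:
    seed = int(hashlib.sha256(post_text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)  # unit length, so dot product == cosine

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # For unit vectors, the dot product is exactly the cosine similarity.
    return float(a @ b)

post_a = embed("5 marketing lessons I learned from my dog")
post_b = embed("What my cat taught me about B2B sales")

# Retrieval compares fingerprints, not keywords: the score depends only on
# where the two vectors point, not on any shared words.
score = cosine_similarity(post_a, post_b)
```

Note what the score cannot tell you: two posts with zero word overlap can land close together in this space, but nothing in the number says whether either post is any good.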
2. It also embeds every user the same way
In the same papers, LinkedIn explains that they build user embeddings, meaning that you also get compressed into a pattern.
They do this by clustering your engagement history: what you liked, commented on, paused on, and shared. Then they take the fingerprints of the posts you’ve touched, mash them together, and say:
“This is who you are.”
It’s like a dating app that doesn’t know your personality, only that people who clicked the same things you clicked tend to like similar content.
But once again, this means the system isn’t matching you with posts.
It’s matching your historical pattern with a post’s historical pattern. Which means that your profile and your activity have essentially become a prompt, and the feed becomes a giant autocomplete engine of that prompt.
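A toy version of that “mashing together,” assuming simple mean-pooling. The papers describe more elaborate clustering, so treat this as a sketch of the principle, not the implementation; every vector here is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 50-dim fingerprints for four posts a user liked, shared, or paused on.
# In the real system these would come from the post-embedding model.
engaged_post_vecs = rng.normal(size=(4, 50))
engaged_post_vecs /= np.linalg.norm(engaged_post_vecs, axis=1, keepdims=True)

# Assumed pooling recipe: average the fingerprints of everything the user
# touched, then renormalize. "You" become a blend of your past clicks.
user_embedding = engaged_post_vecs.mean(axis=0)
user_embedding /= np.linalg.norm(user_embedding)
```

The design choice worth noticing: nothing about you enters this vector except what you have already clicked on, which is why the feed behaves like an autocomplete of your own history.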
3. It fuses every modality into one universal model
In the papers, LinkedIn stresses that the model is “multimodal.” That means:
text → embedding
video transcripts → embedding
hashtags → embedding
metadata → embedding
engagement signals → embedding
Essentially, all your content is forced into the same universal representation as everyone else’s, which sounds efficient. But it is also a bit like putting every language (English, Mandarin, Spanish, sarcasm, poetry) through a blender and expecting the puree to taste like all of them at once.
A heartfelt video, a nuanced essay, and a generic motivational post get reduced to vectors in the same mathematical space. If, somehow and for some unknown reason, they trigger similar engagement patterns, they become neighbors in that space.
But the main result of this is that the texture of the medium disappears, and only statistical resemblance remains.
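Here is roughly what that fusion looks like mechanically. Every dimension and the projection matrix below are invented for illustration; a real system learns the projection, whereas this one is random, which is enough to show the effect:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy per-modality embeddings (dimensions are illustrative, not LinkedIn's).
text_emb       = rng.normal(size=32)   # post text
transcript_emb = rng.normal(size=32)   # video transcript
hashtag_emb    = rng.normal(size=16)   # hashtags
metadata_emb   = rng.normal(size=8)    # layout / metadata

# One assumed fusion recipe: concatenate the modalities, then project the
# result into a shared 50-dim space with a single matrix.
concat = np.concatenate([text_emb, transcript_emb, hashtag_emb, metadata_emb])
W = rng.normal(size=(50, concat.size))
fused = W @ concat
fused /= np.linalg.norm(fused)

# Whatever the input modality, the output lives in the same 50-dim space:
# a video, an essay, and a motivational one-liner all become neighbors or
# strangers by geometry alone.
```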
4. It uses a large language model for retrieval
This part comes from the third paper: Large-Scale Retrieval Using Causal Language Models.
Essentially, it explains how Old LinkedIn fetched posts based on network connections, recency, or simple heuristics. Which is engineer talk for human fleshpods doing human fleshpody things.
It then goes on to explain how they “fixed” that inefficiency and how New LinkedIn uses an LLM as the retrieval engine itself.
This means (and this is VERY IMPORTANT) that before a human being sees your post, and even before the AI ranking model looks at anything, the LLM selects the tiny subset of posts that are allowed to exist for you or your readers.
Think of it like a library where the librarian is an AI that only brings you books whose “fingerprints” look like books you’ve liked before.
And, if she can’t find a match, she doesn’t bring you the book.
Which means you never know it existed. Or to put it from the perspective of the author, some people will never know your book existed, either.
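The library metaphor maps onto a few lines of code. In this sketch (all vectors are random stand-ins), retrieval is a hard top-k cutoff: posts outside the cutoff are not ranked lower, they are simply never fetched:

```python
import numpy as np

def retrieve_top_k(user_vec: np.ndarray, post_vecs: np.ndarray, k: int = 3):
    """Return the indices of the k posts most similar to the user.

    Posts outside this set never reach the ranking model, the feed,
    or a human eyeball."""
    sims = post_vecs @ user_vec           # cosine scores (all unit vectors)
    return np.argsort(sims)[::-1][:k]     # indices of the k highest scores

rng = np.random.default_rng(42)
user = rng.normal(size=50)
user /= np.linalg.norm(user)
posts = rng.normal(size=(100, 50))
posts /= np.linalg.norm(posts, axis=1, keepdims=True)

visible = retrieve_top_k(user, posts, k=3)
# The other 97 posts are not downranked; for this user they do not exist.
```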
5. It incorporates popularity and engagement signals directly into the representations.
One of the most important (and least understood) details about LinkedIn’s new algorithm is buried in the Post Embeddings paper. And that is the fact that LinkedIn mixes engagement labels into the representation during training.
That means:
A post’s “meaning” is not defined by what it is saying but is partially defined by how well similar posts performed in the past.
Which, to return to my library metaphor, is like saying a book’s literary value is partly determined by prior Amazon sales of books with similar descriptions.
Which sounds smart, but it’s actually a trap because:
Your embedding is built from content you’ve engaged with.
A post’s embedding is built from content like it.
Retrieval selects posts most similar to both embeddings.
It’s a closed loop. The algorithm will always retrieve posts that look like what you’ve engaged with before.
In other words, the system bakes “what performed well” directly into the meaning fingerprint. So a post’s “meaning” is no longer separate from its popularity — they’re mathematically fused together.
The algorithm now treats “popular” and “meaningful” as the same thing.
And the result of this, of course, is that when engagement shapes meaning, originality becomes structurally disadvantaged. The model can’t recognize something genuinely new as valuable, because new things have no engagement-based anchor in the embedding space and don’t match any existing pattern. So instead of showing you posts from people you follow, or posts that are trending, the system uses AI to find posts that are mathematically similar to your fingerprint.
And the problem with this is that original ideas are, by definition, unpopular (because they’re new), so they’re marked as “not meaningful” and never retrieved.
And this, as we will discuss below, could very well be the death knell for originality on LinkedIn and, if they are not careful, of the platform itself (at least as a social one).
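The closed loop itself can be simulated. In the sketch below (all numbers invented), the user embedding is built from the same cluster of familiar posts it is later matched against, and a genuinely novel post, one that points in an unrelated direction, is thrown into the candidate pool:

```python
import numpy as np

rng = np.random.default_rng(7)

# Familiar posts cluster around the patterns the user has engaged with.
center = rng.normal(size=50)
center /= np.linalg.norm(center)
familiar = center + 0.1 * rng.normal(size=(50, 50))
familiar /= np.linalg.norm(familiar, axis=1, keepdims=True)

# The user embedding is built FROM those same posts: the closed loop.
user = familiar.mean(axis=0)
user /= np.linalg.norm(user)

# A genuinely novel post points in an unrelated direction.
novel = rng.normal(size=50)
novel /= np.linalg.norm(novel)

candidates = np.vstack([familiar, novel])  # the novel post is index 50
sims = candidates @ user
top_10 = np.argsort(sims)[::-1][:10]       # what retrieval lets through
# The familiar cluster scores high against the user vector, while the
# novel post scores near zero, so a top-k cutoff never surfaces it.
```

Nothing in this loop ever evaluates the novel post on its merits; it loses purely on geometry, which is the structural disadvantage described above.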
The Philosophical Question AI Engineers Hate to Ask
All three papers repeatedly describe embeddings as meaning representations, and describe LLM-based retrieval using words like understanding, thinking, and reading.
And it may be doing the engineers an injustice, but there seems to be a fundamental confusion at work here about what communication actually is.
Because none of the papers ask the foundational questions:
What does it mean for a machine to “understand” anything?
Is that different from the way human beings “understand” things?
If so, are we simply assuming that difference away?
Are embeddings comprehension, or compression?
If originality is statistically deviant, can an embedding-based system ever detect it?
What happens to meaning when engagement shapes the semantic space?
In Order To Understand, We Need To Understand Communication
The standard model of communication (the one taught in business schools and assumed by most marketing departments) goes something like this: a sender transmits a message through a medium to a receiver. It’s clean, it’s linear, it’s reassuringly simple. And it’s almost entirely wrong.
Because what actually happens is rather more complicated. The “message” I put in is not the message you take out. They are, almost invariably, two different things. I say “I am modest,” and you conclude (quite rightly) that I am conceited. The same words, the same medium, the same transmission, and yet the meaning that emerges in your head is not the meaning I intended.
But rather than seeing this as a flaw in the communication process (as engineers and some marketers do), linguists and good marketers understand that it is the communication process.
Receivers of messages do not receive messages passively. They actively construct meaning from whatever triggers are provided. They bring their own knowledge, their own experience, their own prejudices to every encounter. Like those Bordeaux students with their dyed wine, they are creating the experience as much as receiving it. Which, I hope I don’t need to prove, is something only a human being can do.
And this is why the algorithm’s fundamental premise is so troubling. Because by reducing both posts and users to embedded fingerprint patterns, and by matching these patterns against patterns, the system has eliminated the one thing that makes communication work: the creative participation of the receiver (aka a human being).
AI (in its current form as LLMs) doesn’t think. LLMs do not “understand” the words they use; embeddings do not “know” anything, and pattern compression is not comprehension.
As Edsger Dijkstra put it, in a line Noam Chomsky is fond of quoting: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”
And by “understand” and “think” I mean “understand or think about anything.” AI doesn’t understand the words it’s using or the relationships those words have to each other. It doesn’t even understand what the word “word” means. It can only guess what the next one could or should be. And it can only do that by compressing patterns. Which, of course, is not the way human beings think or use words.

But more importantly, it also reveals the severe limitations of AI and LinkedIn’s algorithm. Because when all you can do is compress patterns, then all you can do, by definition, is recognize things that look like past patterns. And originality, by definition, doesn’t look like the past.
Engineers Are Not Evil
I should perhaps pause here to acknowledge that I have been somewhat unfair to the engineers. They are, after all, solving a genuine problem: how to surface relevant content from an ocean of noise. And pattern-matching is, within its limits, genuinely useful.
“If you liked this, you might like that” is the same logic that drives Amazon recommendations, Spotify playlists, and even “Santa Claus,” and nobody pretends those systems are useless.
But there is a difference between “useful for selling more products” and “capable of recognising value,” just as there is a difference between being “efficient at distributing familiar content” and being “hospitable to original thought.”
What LinkedIn’s New AI is actually doing
It’s not that LinkedIn’s new algorithm dislikes originality. That would imply a judgment, a preference, a stance. The algorithm has no stance. LLMs don’t understand intention, just as multimodal fusion doesn’t create comprehension.
All AI systems can do is perform one function:
Pattern compression.
All it can do is transform:
ideas → patterns
experiences → patterns
emotions → patterns
stories → patterns
originality → statistical deviation
This turns people’s feeds into a mathematical similarity engine, not a discovery engine. And this distinction is critical because once you understand that retrieval has become the true gatekeeper and that the retrieval model does not select anything outside of a pattern as “relevant,” then:
ranking never sees it
the feed never sees it
and your audience never sees it
This is presented, in their paper, as “deep relevance.” But when you really think about it, it’s more like pattern-matching masquerading as comprehension. Because when you strip away the engineering confidence of these papers, only one fact remains:
LinkedIn is no longer showing you content based on social connections, recency, or even engagement. It is showing you content based on how statistically similar your embedding is to the embedding of a post.
You have an embedding.
Every post has an embedding.
The retrieval model fetches posts most similar to your pattern.
The ranking model reorders those similar posts.
Popularity influences the embedding space itself.
Which means that the most original voices (the ones who have something genuinely new to say, something that doesn’t fit the patterns) are not being suppressed or deprioritized or penalized. They are simply being made invisible.
Not shadowbanned.
Or deprioritized.
Or penalized.
Invisible.
And not because it’s bad.
Or wrong.
Or uninteresting.
But because it’s unfamiliar.
Fame and the Second Law of Thermodynamics
There is a useful metaphor from physics that may help explain this better.
The Second Law of Thermodynamics, very roughly, says this: when two objects touch, and their temperatures are different, heat will flow from the warmer to the cooler until their temperatures are equalized. Left to themselves, things run down and get colder. That is the natural state of the physical world.
With the new algorithm, something very similar is happening inside LinkedIn’s feed.
And if the system only retrieves posts that match existing patterns (if it only surfaces content that resembles content that has already succeeded), then over time, the feed will inevitably homogenize. Posts will adopt the same cadence. The same emotional beats. And the same “story-shaped” structures as other posts (until people get bored and a new structure goes viral).
Not because humans choose these. But because the machine retrieves these, and only these.
In other words, as the temperatures equalize, the warmth of genuine distinctiveness slowly bleeds out of the system. And what we will soon be left with is what can only be called lukewarm content: safe enough to be retrieved, familiar enough to be recognized, and similar enough to be served.
The irony is that this is precisely the opposite of what makes something worth paying attention to. And what people really want.
People, for the most part, want to avoid being two things: confused and bored. This is why and how famous people become famous. Fame (the kind of indiscriminate, slightly mysterious quality that allows brands and people to command attention and charge premiums) depends on being different, standing out, and occupying a distinctive position in the collective imagination.
And you cannot achieve that by being similar to everything else. Because that would be boring, and boredom is one of the two things people most want to avoid. Unfortunately, similarity is the only currency the new algorithm recognizes.
So if old LinkedIn was like a radio station where a DJ played songs based on what was trending or what your friends requested, New LinkedIn is like a radio station that’s been trained on millions of listening patterns, but it doesn’t play songs because they’re good. It plays songs because they’re statistically similar to songs people have heard before.
So if you’ve listened to a lot of pop music, the station will play more pop. If you’ve engaged with motivational posts, you will see more motivational posts. But a genuinely experimental song (something that doesn’t fit the pattern) never gets airtime. Not because it’s banned. But because the algorithm doesn’t recognize it as “music worth playing.”
In other words, the algorithm that LinkedIn has invented is extremely good at playing what worked yesterday. But it’s incapable of recognizing what might matter tomorrow.
Which leads to the most important point in this entire newsletter. How do you get people to listen to your new song?
Resistance From The Inside is Futile
And the answer to that question in the world of the New LinkedIn radio station is not going to be “you get them to play it on the radio.”
If retrieval eliminates originality by design, then resistance (trying to write more boldly, more uniquely, more challengingly) gets interpreted by the system as:
noise
deviation
low-similarity mismatch
unaligned content
You could get lucky; you could happen to be one of the songs that gets statistically played and discovered that way. Sure, writing about the things everyone else is talking about may get you views. But you don’t want to rely on that. Because now your most human ideas, the ones that help you build a brand and stand out, become the least visible. And unfortunately, there is no way to hack this. In the retrieval-first, embedding-driven feed, the machine decides what exists. You can’t “optimize” your way out of a system that treats originality as an outlier. At least not from within the system.
I am conscious that this may all sound rather gloomy. And perhaps it is. But there is, I think, a reason for cautious optimism buried somewhere in all of this.
The Silver Lining
The algorithm is, in the end, a mirror. It reflects back at us what we have already rewarded, what we have already clicked on, and what we have already engaged with. If the feed is full of tepid sameness, that is because we have trained it to serve tepid sameness. The system is doing exactly what we asked.
Which means that what happens next is, to some extent, up to us.
We could, if we chose, reward distinctiveness over familiarity. We could engage with ideas that challenge rather than confirm. We could seek out the statistical outliers (the strange, the new, the genuinely surprising) and, by doing so, teach the machine that these things, too, are valuable. But that, of course, relies on millions of people doing it. So instead, what we (and by we I mean you) need to start doing is actually doing those strange, new, genuinely surprising things, and by doing them, getting people to talk about what we are doing, and in so doing teaching the machine that these things, too, are valuable.
The algorithm cannot understand what any of this means. That, I think, is clear enough.
But humans can.
So if LinkedIn wants to remove the one thing that actually makes anything meaningful from its algorithm, we can, in effect, force it back in by doing things that human beings will talk about.
And perhaps that’s the lesson in all this: we should not outsource the things that matter most to systems that are structurally incapable of caring about them.
Meaning has always lived in human minds.
It seems unlikely that it will be moving any time soon.
The question now is, how do we do this?
How do we reintroduce humanity back into a system that wants to reject it?
Resistance Starts Off-Feed
The first step is to step outside the feed entirely and recognize that real ideas spread in human networks, through human judgment, by human recommendation, and that DMs, group chats, email threads, podcasts, and the newsletter you are reading right now are increasingly becoming the places the algorithm cannot follow.
The algorithm can observe this happening after the fact (it’s quite good at spotting bandwagons once they’ve started rolling), but it cannot and will not initiate it.
The creators who are still surviving inside the feed are the ones who are achieving escape velocity. In other words, the ones whose outside reputation (fame, relevance, or whatever you want to call it) forces inclusion regardless of the embedding space.
Which leads us to the second way rebellion happens from outside the feed.
Do Things Worth Writing About.
The second step is more fundamental, and it echoes something Benjamin Franklin once said:
“You can write things worth reading, or you can do things worth writing about.”
In other words:
You can post things worth engaging with. Or you can do things other people post about.
For most of LinkedIn’s history, that was a nice distinction. Today, it may be a survival strategy. Because when other people talk about what you have done (when they quote your work, or reference your ideas, or when they can’t help but share what you’ve created), the algorithm has no choice but to notice.
My friend Jason Mitchell, CEO of Movement Strategy, talks about this all the time. The goal of social media isn’t to post things that go viral. It’s to be the thing that goes viral. It’s to do something so remarkable that other people can’t help but talk about it. Sometimes that thing you do can be something you do on the platform. But the more powerful ones usually aren’t. They are usually things that happen in the real world, or at least algorithmically free ones.
In other words:
Posting about yourself = you fighting the algorithm.
Other people posting about you = the algorithm has no choice but to notice.
Because when people talk about you, when they quote your work, when they reference what you’ve done, LinkedIn’s system detects relevance signals it can’t ignore. You’re no longer an outlier. You’re a pattern. You’re a topic. You’re something the algorithm recognizes as worth retrieving.
The goal, in other words, is not to game the algorithm’s pattern recognition. The goal is to become a pattern the AI algorithm recognizes.
And then when you post about what you’ve done yourself, those posts work even better. But not because the algorithm suddenly understands you. But because the algorithm has already learned that people are talking about you. Your posts become retrieval-worthy because they’re connected to something the system has already identified as relevant.
In Conclusion
LinkedIn has built a system that:
confuses pattern with meaning
confuses similarity with relevance
confuses engagement with value
confuses conformity with quality
And in doing so, it has created a feed that is extremely good at one thing:
Reproducing the past.
Which means that the best creators, the ones who are truly trying to build a personal brand that lasts longer than a Bitcoin influencer, won’t be the ones who try to game or hack the AI algorithm. They will be the ones who step entirely outside the embedding space.
Meaning (at least as far as LinkedIn is concerned) has moved elsewhere.
Meaning lives in the places the algorithm cannot see. And that is where the next generation of original thought will go (and has already), which is one of the many reasons why you are reading this either in your email or on Substack, and not on your LinkedIn feed.
This Is Why I Started a Substack
Actually, my last three Substack posts are a great example of this.
First, they live off the LinkedIn feed, in an algorithm-free space (your email). Second, they’ve triggered something in some readers, enough that people are copying certain sections and creating LinkedIn posts about them.
I have no way of calculating the impressions from this on LinkedIn. I can’t track how many times they were quoted on LinkedIn, forwarded in Slack, discussed in group chats, or referenced in conversations I’ll never see. But I know with absolute certainty that those “impressions” are more valuable than anything I could post directly to the LinkedIn feed.
Why? Because they’re real. They’re human-driven.
This is one of the biggest issues the search industry is facing right now. AI doesn’t fall for traditional SEO tricks. It doesn’t reward keyword stuffing or link manipulation. What it looks for is genuine relevance. It looks for what people are actually talking about. It looks for patterns that are already rolling.
So now we know that the best LinkedIn strategy is to build up your reputation by doing things outside of LinkedIn that are worth talking about on LinkedIn. The question from here becomes:
What kind of things should I do?
How do you do things worth writing about?
AI-Algorithm-Proof Yourself By Mastering The Art of Personal Publicity
I’ve written about this extensively in past newsletters (back when this was called “The Unforgettable Newsletter”). But it’s also been the central lesson I teach in my free monthly masterclasses and in my new digital course.
And it all comes down to two principles.
The first one is:
You don’t need to build a personal brand. You need to master the art of personal publicity.
There’s a massive difference.
A personal brand is what you say about yourself. Personal publicity is what the world says about you. One is marketing. The other is magnetism.
Marketing is beholden to algorithms.
Magnetism, on the other hand, is only beholden to people.
And magnetism is more powerful than marketing. It has the power to override algorithms. And in some cases, even become them.
But magnetism comes from showmanship. Not salesmanship.
The second principle is:
The only way to build a personal brand is to use it.
And the best way to use it is through personal publicity.
Most people, however, do this backward. They spend all their time crafting a personal brand first. This is a big mistake.
But I have already talked about this ad nauseam in past newsletters and on LinkedIn.
And this newsletter is already too long.
So if you’d like to learn more, I invite you to take a look at the archives, sign up for the free masterclass, or take the Art of Personal Publicity digital course. Code SHOWTIME or BARNUM → $199 (normally $450)
In Thursday’s paid-subscriber email, I’m going to show you how to use AI to “do things worth writing about.” That post is out and here is the link.
Until then,
Keep it human
Justin (advertising's newest philosopher) Oberman
p.s. This is really my email. If you hit reply, I will see it.





