Spite House: AI, disintermediation and the end of the free web

A zine about becoming a spite house. Refusing to move as AI bulldozes the free web and disintermediation kills content discovery.

Chapter 1: Inat kuća

A house featuring ornate windows and the name "Inat Kuća" on its facade
Fred Romero from Paris, France, CC BY 2.0 via Wikimedia Commons

When I was 21, I went to Bosnia. I remember the delicious, strong, sweet coffee, the crystal-clear, fast-flowing rivers, and the Sarajevo Roses made by filling mortar-shell scars in the pavement with red resin. And I remember Inat kuća, the ‘House of Spite’.

In the late 19th century, the Habsburg monarchy took power in Bosnia. The authorities wanted to build a grand new city hall in the capital, Sarajevo. They picked a site, bought the properties that stood on the plot, and were ready to start building. Except for one detail: a stubborn man named Benderija refused to sell his property.

The authorities kept pressuring him. Eventually, Benderija agreed – but on his own terms. He made them dismantle his house brick by brick and rebuild it across the river – directly facing the new city hall. The house still stands today as a monument to righteous, defiant spite.

The story always stuck in my head. It resonated with my sense of independence and admiration for people who stand up to bullying ruling classes. You can find spite houses – also known as nail houses and holdouts – wherever and whenever there’s growth. People who refuse to give up and somehow manage to keep their home standing while everything changes around them. 

But most are sadder and more futile than Inat kuća. They’re not moved and rebuilt. They don’t stop the change. The highways, shopping centres and office buildings appear around them regardless. Looking at those other spite houses – stranded, cut off, bypassed – I wonder: is it admirable to refuse to move when the world changes around you? Or are you just cutting off your nose to spite your face?

Chapter 2: First-movers and holdouts

I’ve been thinking about spite houses a lot lately. 

In October 2025, I got this LinkedIn InMail ad:

Screenshot of a message from a LinkedIn bro. It reads ‘Hi Lauren, By the end of 2025, there will be three types of People: Those who make things happen - securing first-mover advantage in AI. Those who watch things happen - stuck in reaction mode as peers move faster. Those who wonder what happened - left behind as AI transforms the world without them. Which one will you become?’

I couldn’t stop ruminating on it. It’s the perfect example of the dystopia I feel like we’ve entered this year, as AI disrupts everything around us. 

We’re being told that AI is inevitable, that using it is what the cool, successful movers and shakers do, and that ignoring it is for the crusty luddite holdouts. 

And I feel stuck between those two positions.

It certainly feels like an AI-dominated future is inevitable. But I still haven’t quite given up hope that there’s another way forward.

I’m a curious person when it comes to digital, and I love to experiment with anything new. But I feel intense dread when I think about what AI means for the content sector, my peers, my nonprofit clients, and myself.

I’m also still in love with the idealism and promise of the free web. I started my first blog when I was 21 – where I wrote mortifyingly earnest posts about that trip to Bosnia. I turned 42 this year. So 2025 is the halfway point for me – the point at which I’ve been publishing on the free web for longer than I haven’t.

So, just like the LinkedIn Bro suggested, I’m asking myself ‘Which one will I become?’ Do I move with the times, accept that AI is inevitable, and dive in? Or do I become a holdout – a spite house – clinging on to the idea of the free web and website publishing, while AI builds a new world around me?

Chapter 3: Impressions up, traffic down

When I talk about AI, I’m talking about large language models and generative AI tools – things like ChatGPT, Perplexity, Google’s Gemini and AI Overviews, Apple Intelligence – that are changing how people find and consume information online.

As this kind of AI has crashed into the content discovery ecosystem, content publishers have felt the impact. People are using AI tools instead of search, and AI Overviews are resulting in ‘no-click searches’, as users find the answer they need on the search engine results page without ever visiting a website.

I’ve seen it in my own data. Organic search sessions on my website have fallen by just over 15% since the introduction of AI Overviews. At the same time, my impressions – appearances in Google search results pages – have increased.

It’s not just me. Beth Downes and Chris Unitt reported a 10% drop in organic search traffic across a group of 100 cultural organisations’ websites from January to July 2025, compared to a 30% rise over the same period in 2024. Scope, the disability charity, has seen a 13% drop in active users compared to the previous year, despite an increase in impressions.

The pattern is consistent: impressions up, traffic down. People are getting their answers without visiting the source.

I’ve spent a lot of time arguing that page views are a ‘so what?’ metric. What matters is whether you’re getting the right information to the right person. And I stand by that, but I do think that page views count here. Because it’s not just traffic that’s dropping, but content discovery itself.

How disintermediation works

For years, there’s been an unspoken agreement: publishers create content and make it freely available online. In exchange, search engines and social platforms help people discover that content. Users click through to websites. Everyone benefits.

AI is breaking this agreement through disintermediation. AI tools still use our content, they just don’t send anyone to our websites. They’re taking content from multiple sources, synthesising it, and delivering it directly to users. People don’t need to visit the source.

Before AI: Person searches → finds your website in results → clicks through → reads your content on your site → you can convert them/build relationship

With AI: Person searches → AI scrapes your content → AI presents synthesised answer → person never visits your site

A study by Pew Research found that people who see an AI summary are half as likely to click on a search result link as those who don’t: 8% of people who see an AI summary click, versus 15% of people who do not see one. Only 1% clicked a citation link in the AI summary itself. Given that more than 1 in 4 Google searches now trigger an AI Overview result, the impact is significant.

What’s really at stake

Organisations publish content on websites for different reasons: marketing and promotion, building reputation, driving conversions, supporting sales. For my charity clients, content is about working towards a cause – reaching people who need help, providing advice and support, building awareness of issues, driving action, donations, and volunteering.

Imagine a charity that exists to support people with a rare disease. It has advice for dealing with symptoms on its website, based on original research and the lived experience of the people that use its services. This is a major way that it delivers on its charitable goals. It builds its email list through the content, developing relationships with people who might become donors or volunteers. When someone visits the site, they can be retargeted with ads for fundraising campaigns. The charity gets a grant from a foundation, partly because it can demonstrate the website’s reach. 

Now imagine someone searching for support with this rare disease. They get an AI-generated summary using the charity’s information, mixed with content from a blog post by someone selling a questionable supplement. The person never visits the charity’s website. They never see the full context of the advice. They never join the email list. They never become a donor or volunteer.

The charity loses the ability to offer comprehensive support. It loses the chance to build trust with someone who needs support. It loses potential volunteers, donors, and advocates. And most dangerous of all, it loses control over the accuracy and safety of the information. When people don’t get good information, the consequences can be huge. Without proper advice for early intervention, that user might need more intensive in-person services later.

That’s what disintermediation is doing, here and now.

Some people are being pragmatic about the shift. They’re adapting to the new reality: optimising for AI summaries, or lobbying the tech companies to think more about accuracy and trusted sources when it comes to sensitive content.

But I’m not feeling pragmatic. I’m filled with spite.

This isn’t just about traffic. It’s about who owns and benefits from the work you’ve done. Years of graft and institutional knowledge from charities and publishers and organisations and individuals (like me) are all being used to train models we don’t control, to benefit companies that never asked permission.

And I’m not ready to accept that any of this is inevitable.

A narrow wooden house with pretty window boxes wedged tightly between two much larger red brick apartment buildings
Skinny House, Boston. Image by Rhododendrites, CC BY-SA 4.0, via Wikimedia Commons

Chapter 4: The web that was

I’m not willing to accept disintermediation because I’m not willing to give up on the free web. It’s not just that I’ve grown up with/on it and built my career within it – it’s that it’s a beautiful, utopian idea.

Tim Berners-Lee’s vision for the web was radical. A space where anyone could publish anything, and anyone could access it – all for free. No gatekeeping, no fees, no censorship, no intermediaries. 

Before the web, if you wanted to share information with the world, you needed a printing press, a publisher, a broadcasting licence, or a distribution network. And to get those, you needed money and connections. (And that’s why zines and pirate radio were so punk and transgressive.) The web removed those barriers. All you needed was something to say and a way to get online. (With the important acknowledgment that many people around the world still face barriers to getting online.)

But what made it truly special was that Berners-Lee convinced his bosses at CERN to put the web’s intellectual property into the public domain. He believed that for the web to contain everything, everyone had to be able to use it freely. You couldn’t charge people to search or upload and expect mass adoption.

This created something new: a distributed, open commons. No one owned it. No one controlled it. Content creators kept their intellectual property. The web was just the infrastructure – the roads and the signposts – connecting one person’s work to another’s. 

That vision worked, for a while. Millions of us built our digital homes on that free web. We published blogs, created resources, shared knowledge, and found communities. The web’s value came from its openness – anyone could link to anyone, anyone could be discovered, anyone could contribute. 

The platform era

AI isn’t the first example of disintermediation or the first threat to the free web. 

In the early days of the internet, someone gave me a book called The Rough Guide to the Internet. It was like a travel guide for the web, making recommendations for websites you could visit for different things. 

The fact that the book existed (and ran to many editions) hints at the issue with a free web made of citations and links: the bigger it gets, the harder it is to find things.

As well as that book, other solutions were emerging to fill the gap. To start with, it was link lists, forums and directories – word of mouth in written form. Then we got search engines, which systematised the process of finding things. Eventually came social networks, which added a layer of social proof back into discovering things online. These tools started off free and quality-driven, but they were private businesses, not public infrastructure. So naturally, after a while, you had to pay to be seen, and enshittification followed.

Enshittification is a term coined by Cory Doctorow to describe the way that the quality of online products and services declines as big tech companies seek ever-increasing profits. Cory is talking about things we’ve probably all experienced as users. Needing to get past all of Google’s ads and a bunch of spammy listicles to find the information you want on a search results page. Having to scroll through a thousand words of preamble on a recipe website (or just looking for the ‘Jump to recipe’ link) because the publisher had to bump up the word count to rank and make money from ad revenue. Or scrolling through rage-bait posts and mindless ads from everyone and anyone but the people you actually follow on social networks.

For those of us working in content, comms and marketing, the platform era was tough. You had to play a game and pay close attention to changing rules if you wanted to be seen, and commit an ever-increasing amount of budget to visibility. And every action we took – professionally and personally as a user of the web – benefitted the 1%ers behind the platforms. But at least we still owned our content, could control distribution, and could use the stuff we published to work towards our goals.

Chapter 5: The web they’re building

It feels like the platform era is solidifying into something even further away from the promise of the free web. A web where AI is the gatekeeper, intermediary and lens for everything. One where the content and websites we publish are training data, not destinations. 

When asked about the future of the web, Demis Hassabis, head of DeepMind, Google’s AI research lab, hinted at a future where publishers will want to upload their content directly to an AI model or LLM: ‘The web is going to change quite a lot. If you think about an agent first web… it doesn’t necessarily need to see renders and things like we do as humans.’ 

The vision seems to be a web without websites – where AI consumes raw data and information, rather than websites and digital experiences. That’s what ChatGPT’s Atlas browser is doing right now. And the push towards this version of the web is relentless. It’s seemingly impossible to opt out of your content being fed into AI if you still want it to be discoverable. 
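
To make that opt-out bind concrete, here’s a sketch of the robots.txt directives publishers typically reach for. The bot names below are the documented ones, though honouring them is voluntary. The trap is that Google’s AI Overviews are built from the same Googlebot crawl that powers ordinary search, so there’s no directive that blocks them without also removing you from search results:

  User-agent: GPTBot           # OpenAI’s training crawler – blocked
  Disallow: /

  User-agent: CCBot            # Common Crawl, widely used as training data – blocked
  Disallow: /

  User-agent: Google-Extended  # opts out of Gemini model training only –
  Disallow: /                  # it has no effect on AI Overviews

  User-agent: Googlebot        # block this and you vanish from search entirely
  Allow: /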

The more sinister take: if all information is filtered through AI, how can we trust it? Are we comfortable that these tech companies will reword, reinterpret and filter in a fair and truthful way? That biases and ideologies won’t make their way through? I’m not. Throughout the history of the web, we’ve seen example after example showing that algorithmic technology is biased, and AI is built on algorithms. As Safiya Umoja Noble wrote in Algorithms of Oppression:

‘Algorithmic oppression is not just a glitch in the system but, rather, is fundamental to the operating system of the web.’

Safiya Umoja Noble

We need a web, and a world, where a small number of companies don’t have a monopoly on information.

Elsewhere at Google, they’re insisting that the free web is doing great. In a podcast, Nick Fox, Google’s senior vice president of knowledge and information, said: ‘From our point of view, the web is thriving’, and ‘There’s probably no company that cares more about the health and the future of the web than Google.’ It’s like the truth doesn’t exist anymore. It all feels extremely Trump-coded to me. Even the sentence structure – ‘There’s probably no x more y than me’ – is straight out of the Trump gaslighting phrasebook.

The free web isn’t thriving. Anyone who publishes on it knows this. We’re losing traffic and business models, but we’re also losing a vision of what the web could be. Open, distributed, owned by everyone and no one. A space where you could build something and people could find it. Where linking mattered, where citation mattered, where the web was a conversation, not a one-way extraction of value. Where you could fact-check and unpick the biases (some of the time, anyway). I’ve lived on this web my whole adult life. I can’t watch it become something else without saying something. 

AI slop

AI slop. The phrase of the year. It’s been hard to go a day without hearing it. It speaks to something I think we all know, but that we’re not really supposed to acknowledge. And we’re definitely not supposed to say it if we want to be employable. But I’m going to say it: AI’s not that great.

I’m sure some people will want to come at me with the ‘No, buts…’: ‘It’s as good as the prompt’, ‘It’s great at doing x’, ‘It’s getting better all the time’, ‘It doesn’t matter – if we don’t use it we’ll get left behind’ etc, etc, blah, blah, blah. 

But the kind of AI we’re talking about here is built on the concept of endless recycling. And chewing up and spitting out the same content over and over again is enshittification. 

This kind of slop didn’t start with AI. For as long as I’ve worked in the discipline, a huge amount of content has been created through regurgitation, because companies wanted to rank for a keyword, irrespective of whether they actually had something to add to the topic. AI is just speeding this up, because it’s making it easier than ever to create unoriginal content. 

But it’s also exacerbating it. If the free web is over, if the deal between the publishers and platforms is broken, where’s the incentive to create anything original and new? It’s a race to the bottom, where we’re all trapped in a closed loop of ever crappier content and more and more unreliable information. 

There’s evidence showing that AI’s not that good, too. BBC research found that more than half of AI answers to questions about news had significant issues. 19% of answers citing BBC content introduced factual errors (wrong facts, numbers, dates). And 13% of quotes were either altered or didn’t exist in the source article. An MIT report showed that 95% of AI pilots at companies are failing – delivering little to no measurable impact on profit and loss.

And I want to briefly mention two very important issues that should have been ethical barriers but which are being swept under the carpet. The first is that this kind of AI is built on theft of intellectual property and copyright. The legalities of this might be complicated, but the ethics of it seem incredibly clear to me. The second is the carbon and environmental cost. The data centres that AI tools use consume a huge amount of energy and water. By 2026, the electricity consumption of data centres is expected to approach 1,050 terawatt-hours, making them the fifth-biggest electricity consumer in the world, between Japan and Russia.

Based on all this, I think Ethan Marcotte summed it up perfectly when he wrote:

‘as a product class, “AI” is a failed technology. I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors.’

Ethan Marcotte

In the same article, he also said ‘large language models (LLMs) have repeatedly and consistently failed to demonstrate value to anyone other than their investors and shareholders. The technology is a failure, and I’d like to invite you to join me in treating it as such.’

But despite all the evidence to the contrary, it’s being marketed as perfect, infallible and inevitable. And it’s everywhere. The hovering icons, the constant ads, the hot takes on LinkedIn, the conversations over coffee, company-wide AI-first mandates… it’s hard to resist the push.

The Theranos of it all

There’s an Emperor’s New Clothes element to it all, that reminds me of Theranos. For anyone who doesn’t share my obsession with this story, and didn’t read, listen to, or watch the book, podcast, or TV show, here’s a brief overview. Theranos was a health technology company founded by Elizabeth Holmes with an irresistible proposition: a revolutionary product that could run hundreds of tests from a few drops of blood. Sounds great, right? Lots of people thought so. Theranos raised hundreds of millions in investment, reached a $9 billion valuation, and Holmes became a Silicon Valley and media darling. 

Except the technology didn’t actually work. At all. Theranos’s proposition was just that – an idea, not a reality. Holmes and her business partner, Sunny Balwani, were able to keep up the pretence, thanks to excessive hype, uncritical investment, and people wanting to believe in the dream so badly that they ignored the warning signs. And I can see that happening with AI.

The deception isn’t just about the capabilities. (And I don’t want to dismiss AI as a whole. Just this specific LLM-based aspect.) It’s also about finances and business models.

We’ve all heard that AI companies are haemorrhaging money and that the bubble is going to burst at some point. And when you dig into it, the numbers are pretty grotesque.

Bain & Company’s numbers say AI companies need $2 trillion in revenue by 2030 to justify their current investments. That’s ‘more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.’ The AI industry says it’s currently making $45 billion a year.

In case you struggle to wrap your head around big numbers, as I do: if you made $1,000 every single day, it would take you 123,000 years to reach $45 billion. Saving at the same rate, it would take you 5.5 million years to reach $2 trillion.
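
For anyone who wants to check those sums, a few lines of Python reproduce them (assuming a 365-day year, and nothing fancier than a piggy bank):

  # A quick sanity check of the savings arithmetic above: $1,000 a day
  DAYS_PER_YEAR = 365

  for target, label in [(45_000_000_000, "$45 billion"), (2_000_000_000_000, "$2 trillion")]:
      years = target / 1_000 / DAYS_PER_YEAR
      print(f"{label}: about {years:,.0f} years")

  # Prints:
  # $45 billion: about 123,288 years
  # $2 trillion: about 5,479,452 years – roughly 5.5 million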

Apparently the issue is that the margins are negative – it costs more to run an LLM than you can charge for it. And with each new version, and each new customer, the companies lose more money. This might be the biggest bubble ever. And if/when it bursts, it will create an economic disaster affecting the average person more than it does any of the tech elites pushing AI.

Chapter 6: Building a spite village

I’m seeing a wide range of emerging responses to the death of the free web and the ongoing challenges with content discovery, from AI-integrated to AI-resistant.

Many organisations are learning all they can about generative search and working on optimising their content to be more visible. Others are exploring a new business model: licensing their content to LLMs – but, at the moment, this only seems to be an option for major players in the publishing world.

In the pragmatic middle, there are organisations trying to push for a better version of the new web. They’re documenting and sharing what AI is good at, and making sure people understand what it isn’t good at and why. 

Others are using their influence and lobbying for a better version of the technology. For example, the Patient Information Forum has led an initiative among health charities asking Google to prioritise verified sources of health information in AI Overviews.

Elsewhere, some organisations are leaning into platforms by investing more time, effort and money into social networks. Subscription-based platforms like Substack and Beehiiv are booming too. And there’s an interesting world of more community-based platforms, like Discord, Reddit, the Fediverse, even WhatsApp – where people find content and information every day.

Doubling down on platform reliance makes me nervous – it feels like short-term thinking. Social networks and subscription-based platforms can help you reach your audience, but ultimately, if they exist to make money, they will enshittify, and it will get harder and more expensive for your content to be discovered.  

I also think platforms mask what really matters when it comes to getting your content to people, which is:

  • Owning your data and information, and the places it’s stored and published
  • Having a direct route to reach your audience – no intermediary
  • Building a relationship with that audience

Email and a website – the most under-hyped digital channels – give you this, and there’s no substitute. So that’s what I’m sticking with.

I know I could (should?) optimise and adapt. And sometimes I want to. Writing is hard. Thinking is hard. Resisting the temptation to hand the reins to my little pal Claude is tough some days. Is it better than me? Am I missing something? Am I just a crusty luddite holdout? These aren’t rhetorical questions. They’re real doubts I sit with.

But I’m choosing to keep publishing freely anyway – on my website, in emails, in zines – even though I can’t work out where the incentive for original content comes from if we’re all just feeding LLMs. I’m prioritising linking and citation, even though I don’t know if they matter anymore when AI strips them away. (Wait until you see the bibliography for this zine!) I’m building for a web that might not survive, because doing nothing feels worse.

I’m becoming a spite house.

I don’t know if being a spite house is brave or foolish. I don’t know if it matters that spite houses usually lose – that the highway gets built anyway, that progress (or what passes for progress) rolls on regardless.

I just know that I can’t bring myself to dismantle what I’ve built and rebuild it on terms set by Google, OpenAI, and the tech bros who are reshaping the web without asking permission.

One little thought gives me hope though. A spite house is only a spite house if it stands alone. Individual resistance is futile – and lonely. But collective resistance? That’s different.

There are people choosing the harder path, and pushing back. An AI web isn’t inevitable. There are content creators prioritising research and originality. There are publishers experimenting with community models. People are building on the Fediverse and in decentralised spaces. Initiatives like Solid offer the promise of tech that would protect people and their data while promoting collaboration. People still believe in the free web.

This isn’t about nostalgia or refusing to adapt. It’s about questioning whether AI dominance is actually inevitable, or whether it’s just being marketed that way.

I don’t have neat answers. I don’t know if this approach will work in any measurable sense. But I know that this zine exists because I chose to publish it on the free web, to work through these ideas in public, to invite conversation.

Maybe that’s all a spite house really is: a visible reminder that another way is possible, even if it’s harder. Even if it doesn’t win.

The question isn’t whether to resist the inevitable. The question is whether it’s actually inevitable at all.

A solitary narrow residential tower standing on an island of earth surrounded by deep construction excavations, with modern high-rise buildings behind it
Nail House in Guangzhou. Image by Tim Wu, CC BY-SA 4.0, via Wikimedia Commons

Bibliography/reading list

$2 trillion in new revenue needed to fund AI’s scaling trend, Bain & Company

Against the protection of stocking frames, Ethan Marcotte  

AI Is the Bubble to Burst Them All, WIRED

Algorithms of oppression: how search engines reinforce racism, Safiya Umoja Noble

AI means the end of internet search as we’ve known it, MIT Technology Review, Mat Honan

Atlas of AI, Kate Crawford

ChatGPT’s Atlas: The Browser That’s Anti-Web, Anil Dash

Cocomelon for adults, Search Engine

Colossus 1, Search Engine

Colossus 2, Search Engine  

DeepMind CEO Demis Hassabis + Google Co-Founder Sergey Brin: AGI by 2030?, Alex Kantrowitz, YouTube 

Disintermediation is a wake-up call for our sector, Third Sector, Zoe Amar

Explained: Generative AI’s environmental impact, Adam Zewe, MIT

Google users are less likely to click on links when an AI summary appears in the results, Pew Research

Google’s Nick Fox: Reinventing Search with AI Mode, AI Overviews, and Agents, AI Inside, YouTube 

Groundbreaking BBC research shows issues with over half the answers from Artificial Intelligence (AI) assistants, BBC 

Health charities publish report on Google AI search results, Patient Information Forum 

How are Google’s AI Overviews affecting search traffic for arts and culture websites?, Cultural Content, Beth Downes and Chris Unitt

How to Reduce the Risk of Disintermediation on Your Platform, HBR 

Is Google about to destroy the web?, BBC, Thomas Germain

It’s the End of the Web as We Know It (And I Feel Fine), MG Siegler, Spyglass

Just Keep Writing, Soft Coded, Danielle McClune

Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory, BBC

MIT report: 95% of generative AI pilots at companies are failing, Sheryl Estrada, Fortune 

Once the AI bubble pops, we’ll all suffer. Could that be better than letting it grow unabated?, Eduardo Porter, The Guardian

Searches up, traffic down: How the AI search cliff impacted 17 nonprofits, M+R

Social Quitting, Cory Doctorow

Spending on AI Is at Epic Levels. Will It Ever Pay Off?, Eliot Brown & Robbie Whelan, Wall Street Journal (https://archive.is/yVm8w)

Technofeudalism: What Killed Capitalism, Yanis Varoufakis

The 2026 AEO / GEO Benchmarks Report, Conductor 

The AI-Scraping Free-for-All Is Coming to an End, John Herrman, NY Mag

The era of free websites is coming to an end and there’s nothing you can do about it, Lance Ulanoff, TechRadar

The Hater’s Guide To The AI Bubble, Where’s Your Ed At

The impact of AI on online support and advice, Content at Scope, Stephanie Coulshed

The state of AI in 2025: Agents, innovation, and transformation, McKinsey 

The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence, Timnit Gebru & Émile P. Torres

This Is How the AI Bubble Bursts, Yale Insights

We deserve better than an AI-powered future, Jane Ruffino 

What we mean when we talk about an AI ‘bubble’, World Economic Forum 

Why I gave the world wide web away for free, The Guardian  

Yes, everything online sucks now—but it doesn’t have to, Ars Technica

Icons by Knickknacks Design from Noun Project (CC BY 3.0)

Credits/thank yous

Ruth Oliver for helping my thought process and editing

Rob Mansfield, Rachel McConnell and Yasmin Georgiou for listening to my existential crisis 

Ash Mann for providing so many of the sources for this via his newsletter
