How I Turned The Common News Into a Local SEO Engine

April 14, 2026

When we first shipped meeting summaries in The Common News, they were useful product pages, but they were not yet strong search assets.

That distinction matters. A page can be valuable to an existing user in the app and still be weak from an SEO perspective if search engines cannot clearly discover it, understand what it is about, or connect it to a larger topical structure across the site.

This work changed that: we turned meeting summaries from isolated content objects into indexable, canonical, locally specific pages that search engines can reliably crawl, interpret, and rank.

The Shift: From App Pages to Search Assets

The biggest shift was treating each meeting summary as a real SEO asset instead of just a record in the app.

That meant putting the canonical URL structure first:

  • We added clean, slug-based URLs for summary pages.
  • We created permanent redirects from the old UUID-style URLs.
  • We upgraded metadata for title, description, Open Graph, and Twitter cards.
  • We expanded the JSON-LD so Google can better understand the subject of each page.
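To make the JSON-LD piece concrete, here is a minimal sketch of what an expanded structured-data builder might look like. The record shape, field names, and URL path are illustrative assumptions, not the actual schema of The Common News.

```typescript
// Illustrative summary record; the real schema's field names may differ.
interface MeetingSummary {
  title: string;
  slug: string;
  description: string;
  municipality: string;
  meetingDate: string; // ISO 8601 date
}

// Build the JSON-LD object embedded in each summary page so crawlers
// can identify what the page is about: the meeting, its date, and the
// governing body behind it.
function buildJsonLd(summary: MeetingSummary, baseUrl: string) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: summary.title,
    description: summary.description,
    datePublished: summary.meetingDate,
    mainEntityOfPage: `${baseUrl}/summaries/${summary.slug}`,
    about: {
      "@type": "GovernmentOrganization",
      name: summary.municipality,
    },
  };
}
```

The resulting object gets serialized into a `<script type="application/ld+json">` tag in the page head, alongside the title, description, and social-card metadata.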

We also tightened the crawl surface around those canonical pages:

  • The sitemap now emits the canonical URLs instead of stale or duplicate variants.
  • Each entry includes lastModified so crawlers have a stronger signal for reprocessing updated content.
  • We cleaned up robots.txt.
  • We made sure Search Console submission reflected the right sitemap and canonical structure.
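The sitemap side of that contract can be sketched as a small mapping function. This is a simplified illustration with hypothetical field names, not the production code:

```typescript
// Hypothetical row shape from the summaries table.
interface SummaryRow {
  slug: string;
  updatedAt: Date;
}

// Emit one sitemap entry per summary, using only the canonical
// slug-based URL. Old UUID-style URLs are deliberately excluded;
// those resolve through permanent redirects instead.
function buildSitemapEntries(rows: SummaryRow[], baseUrl: string) {
  return rows.map((row) => ({
    url: `${baseUrl}/summaries/${row.slug}`,
    // lastModified gives crawlers a stronger signal to revisit
    // pages whose content has actually changed.
    lastModified: row.updatedAt.toISOString(),
  }));
}
```

The key design choice is that the sitemap is generated from the same canonical records as the pages themselves, so it can never drift into listing stale or duplicate variants.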

Individually, none of those changes are groundbreaking. Together, they create a much clearer contract with search engines about which URLs matter, what each page represents, and how quickly crawlers should revisit them.

Topical and Geographic Specificity Was the Real Advantage

The more important work was not metadata. It was relevance.

Local civic search is won by specificity. People do not just search for broad phrases like "city council summary." They search for project names, committee hearings, neighborhood controversies, wards, streets, rezonings, and infrastructure issues happening near them.

So we added structured SEO fields and entity relationships to each summary, tying pages to real local concepts such as:

  • projects
  • committees
  • wards
  • streets
  • neighborhoods

That is a much stronger signal than relying on vague generic copy. It gives the page a durable relationship to real civic entities that recur across meetings and across the municipality over time.
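One way to model those relationships is a typed entity link that keys into a canonical entity table. This sketch assumes a shape like the following; the kinds and field names are illustrative:

```typescript
type EntityKind = "project" | "committee" | "ward" | "street" | "neighborhood";

interface EntityLink {
  kind: EntityKind;
  entityId: string; // key into the canonical entity table
  name: string;     // display name rendered on the page
}

// Deduplicate entity links by (kind, entityId) so repeated mentions
// in the prose produce exactly one durable relationship per entity.
function dedupeEntityLinks(links: EntityLink[]): EntityLink[] {
  const seen = new Set<string>();
  return links.filter((link) => {
    const key = `${link.kind}:${link.entityId}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```

Because each link points at a canonical record rather than free text, the same ward or project accumulates relationships across many meetings instead of fragmenting into near-duplicate strings.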

We also moved more meeting and agenda content into the server-rendered HTML.

That mattered because search engines can now see the actual local issues, governing bodies, and project names directly in the first response instead of waiting on client-side fetches. In practice, that makes the pages more eligible to rank for long-tail queries around:

  • specific hearings
  • recurring committee topics
  • named infrastructure projects
  • local controversies tied to a place or governing body

This was especially important for civic content, where the details are the whole point. If the first HTML response does not expose the specific names and context, the page is less likely to rank for the exact searches residents actually make.
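The server-rendering point can be illustrated with a trivial sketch: the entity names are interpolated into the HTML string the server returns, rather than fetched later by client-side JavaScript. This is a toy example (no escaping, no templating framework), just to show what ends up in the first response:

```typescript
// Render local context into the initial HTML response so the first
// crawl sees project names, wards, and governing bodies without
// executing any client-side JavaScript.
function renderSummaryIntro(summary: {
  title: string;
  municipality: string;
  entities: { name: string }[];
}): string {
  const entityNames = summary.entities.map((e) => e.name).join(", ");
  return [
    `<h1>${summary.title}</h1>`,
    `<p>Meeting summary for ${summary.municipality}.</p>`,
    `<p>Covers: ${entityNames}</p>`,
  ].join("\n");
}
```

A crawler fetching this page sees "Ward 3" or "Main St Repaving" in the raw HTML, which is exactly what makes the page eligible for those long-tail queries.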

We Built Internal Linking Around Real Civic Entities

Another major piece was internal linking.

Search engines do not just evaluate individual pages. They evaluate how pages relate to one another. So in addition to the summary pages themselves, we started building municipality-scoped hub pages for recurring entities like projects and committees.

The structure works like this:

  • A summary page can rank for the specific event or meeting outcome.
  • A project hub can rank for the broader ongoing issue.
  • A committee hub can rank for the governing body over time.

That turns the site into a connected topical graph instead of a pile of disconnected articles.
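The hub structure above can be sketched as a simple grouping step: summaries are indexed by the committee (or project) they belong to, and each hub page links down to its summaries. The field names here are assumptions for illustration:

```typescript
// Hypothetical reference linking a summary to its committee.
interface SummaryRef {
  slug: string;
  committeeSlug: string;
}

// Group summaries by committee to drive municipality-scoped hub pages.
// Each hub links down to its summaries, and each summary links back up,
// forming a connected topical graph rather than isolated articles.
function buildCommitteeHubs(summaries: SummaryRef[]): Map<string, string[]> {
  const hubs = new Map<string, string[]>();
  for (const s of summaries) {
    const list = hubs.get(s.committeeSlug) ?? [];
    list.push(s.slug);
    hubs.set(s.committeeSlug, list);
  }
  return hubs;
}
```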

We also made sure committee pages were built from canonical records in the real committees table rather than noisy text extraction. That matters because weak entity extraction creates duplicate or low-quality pages, which hurts the overall SEO surface instead of improving it.

Cleaning the Data Layer Was Part of the SEO Work

One of the most important lessons here is that SEO quality is downstream of data quality.

If your publishing system generates noisy entities, weak slugs, or inconsistent metadata, you eventually pollute your sitemap and dilute the authority of your strongest pages. So part of the work was cleaning the data layer itself:

  • Future summaries now auto-generate slugs, SEO metadata, and entity links through the publishing pipeline.
  • We backfilled the historical archive so older summaries benefit too.
  • We removed low-quality committee entity generation from free-form summary prose.
  • We replaced that noisy extraction with canonical committee data.

That prevented the site from generating junk pages while also making the archive much more coherent over time.
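The slug-generation step in the pipeline might look something like this. The exact rules are an assumption; the point is that slugs are derived deterministically from canonical fields rather than hand-written:

```typescript
// Generate a stable, human-readable slug in the publishing pipeline.
// The meeting date disambiguates recurring meetings of the same body.
function makeSummarySlug(title: string, meetingDate: string): string {
  const base = title
    .toLowerCase()
    .normalize("NFKD")            // split accented characters apart
    .replace(/[^a-z0-9\s-]/g, "") // drop accents and punctuation
    .trim()
    .replace(/[\s-]+/g, "-");     // collapse whitespace runs to hyphens
  return `${base}-${meetingDate}`;
}
```

Because the same function runs for new summaries and for the backfill, old and new URLs follow one consistent scheme.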

The Result

The result was not just "better metadata." It was a full publishing system for local civic content that is built to be crawled, understood, and ranked.

Each summary now has:

  • a stable canonical URL
  • stronger metadata and structured data
  • visible local context in the server-rendered HTML
  • entity relationships to recurring civic concepts
  • internal links into broader municipality-level hubs

That combination is what gives the site a real chance to rank for high-intent long-tail local queries, which is exactly where a product like The Common News should win.

Closing

The broader lesson is that SEO is rarely one setting or one tag. Good rankings usually come from systems work: URL structure, rendering strategy, entity modeling, internal linking, and data quality all reinforcing one another.

For The Common News, that meant turning civic summaries from app content into durable local search pages. That is what made the SEO work meaningful.