How Binformation keeps 334 councils up to date

One of the first questions people ask when they hear about Binformation is "how does it know my bin day?" The assumption is usually that there's some central government API for UK bin collections. There isn't. There is no national database of collection schedules. Every council publishes their own data on their own website in their own format.

So how do you get 334 councils into one app? You scrape them all. Here's how that works.

The UK Bin Collection Data project

The data behind Binformation comes from an open-source project called UK Bin Collection Data, hosted on GitHub. It's community-maintained and actively developed: over 320 stars, more than 210 forks, and, as of May 2026, over 360 releases, the latest being v0.166.1 on 2 May 2026.

The project maintains an individual scraper script for every supported council. Each scraper knows how to visit that council's bin collection page, submit a postcode and address, and extract the resulting schedule data. The output is standardised JSON: dates, bin types, collection days.
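The exact schema is defined by the project, but the shape is roughly this (the values here are illustrative):

```json
{
  "bins": [
    { "type": "Recycling", "collectionDate": "07/05/2026" },
    { "type": "General Waste", "collectionDate": "14/05/2026" }
  ]
}
```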

How the scrapers work

Most council websites are straightforward HTML pages. For these, the scrapers use Beautiful Soup 4, a Python library that parses HTML and lets you pull out specific data points by navigating the page structure. You tell it "find the table with class 'collection-dates', then get the text from each row," and it does.
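A minimal sketch of that pattern, using a hypothetical council URL and the 'collection-dates' class from the example above:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical council page; a real scraper targets that council's actual URL.
resp = requests.get("https://www.example-council.gov.uk/bin-collections?uprn=123456")
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")

# Find the schedule table by its CSS class, then read each data row
# (assuming a simple two-column layout: bin type, collection date).
table = soup.find("table", class_="collection-dates")
for row in table.find_all("tr")[1:]:  # skip the header row
    bin_type, date = (cell.get_text(strip=True) for cell in row.find_all("td"))
    print(bin_type, date)
```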

Some councils build their schedule pages using JavaScript. The page loads empty and then populates via AJAX calls or a client-side framework. Beautiful Soup can't handle these because there's nothing in the initial HTML to parse. For these councils, the scrapers use Selenium, which automates a real web browser. Selenium opens the page, waits for the JavaScript to run, then reads the rendered result.
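The Selenium path looks roughly like this, again against a hypothetical council page:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # no visible window needed on a server

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.example-council.gov.uk/bin-collections")
    # Wait (up to 20 seconds) for the JavaScript to render the schedule table.
    rows = WebDriverWait(driver, 20).until(
        EC.presence_of_all_elements_located(
            (By.CSS_SELECTOR, "table.collection-dates tr")
        )
    )
    for row in rows[1:]:  # skip the header row
        print(row.text)
finally:
    driver.quit()
```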

A handful of councils actually publish their data in structured formats: CSV files, or occasionally something that looks like an API. These are the easiest to work with. The scraper just downloads the file and parses it directly.
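The CSV case is only a few lines (URL and column names made up for illustration):

```python
import csv
import io
import requests

# Hypothetical council that publishes its schedule as a CSV download.
resp = requests.get("https://www.example-council.gov.uk/downloads/collections.csv")
resp.raise_for_status()

reader = csv.DictReader(io.StringIO(resp.text))
for row in reader:
    print(row["BinType"], row["CollectionDate"])
```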

Each scraper produces the same output format regardless of how it got the data. This is what makes it possible to support hundreds of councils with a single app.

Server-side, not on-device

Binformation does not run these scrapers on your phone. That would be impractical. Some scrapers need Selenium, which requires a full browser engine. Even the simple ones need network requests to council websites, which would drain battery and hit rate limits.

Instead, Binformation's server handles all of this. When you set up the app and enter your postcode, the server identifies your council, runs the appropriate scraper against the council's website, and returns the results to the app. The schedule is then cached on the server for six hours and on your device until the next daily refresh.
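Stripped to its essentials, that flow looks something like this. It's a simplified sketch, not the production code, and the two helpers are stand-ins:

```python
import time

CACHE_TTL = 6 * 60 * 60  # cache scraped schedules server-side for six hours

_cache: dict[str, tuple[float, dict]] = {}

def lookup_council(postcode: str) -> str:
    """Stand-in for the real postcode-to-council lookup."""
    return "example_council"

def run_scraper(council: str, postcode: str, address: str) -> dict:
    """Stand-in for dispatching to that council's scraper."""
    return {"bins": [{"type": "Recycling", "collectionDate": "07/05/2026"}]}

def get_schedule(postcode: str, address: str) -> dict:
    key = f"{postcode}:{address}"
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]  # fresh enough; every user in the area shares this result
    schedule = run_scraper(lookup_council(postcode), postcode, address)
    _cache[key] = (time.time(), schedule)
    return schedule
```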

[Image: server infrastructure handling council data]
Binformation's backend checks council data daily.

This architecture has several advantages for you as a user:

Your phone never contacts a council website. No battery drain from web scraping. No risk of your IP getting rate-limited by a council server. No need for browser automation on a mobile device.

The server can retry failed requests, cache results across all users in the same area, and serve stale data briefly if a council's website is temporarily down. Binformation's server refreshes daily between 06:00 and 20:00, so your schedule is updated without you doing anything.
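In sketch form (again simplified, not the production code), the retry-then-serve-stale behaviour might look like:

```python
import time

def refresh_with_fallback(key: str, scrape, cache: dict, retries: int = 3) -> dict:
    """Retry a flaky scrape; fall back to the last cached schedule if it keeps failing."""
    for attempt in range(retries):
        try:
            schedule = scrape()  # call the council scraper
            cache[key] = (time.time(), schedule)
            return schedule
        except Exception:
            time.sleep(2 ** attempt)  # back off before retrying
    stale = cache.get(key)
    if stale:
        return stale[1]  # briefly serving stale data beats serving nothing
    raise RuntimeError(f"no schedule available for {key}")
```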

What happens when a scraper breaks

Scrapers break. It's inevitable. A council redesigns their website, changes the CSS class names on their collection table, switches to a new platform, or moves the page to a different URL. When that happens, the scraper for that council stops returning data.

The UK Bin Collection Data project runs regular integration tests against every council scraper. These tests verify that each scraper can still fetch and parse data correctly. Failures are flagged immediately in the project's test suite (tracked via Codecov) and on their interactive coverage map.
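I won't reproduce the project's suite here, but a per-council integration test might be shaped like this (the scraper call is a stub; the real tests live in the repository):

```python
import pytest

COUNCILS = ["example_council_a", "example_council_b"]

def run_scraper(council: str) -> dict:
    """Stub standing in for invoking the real council scraper."""
    return {"bins": [{"type": "Recycling", "collectionDate": "07/05/2026"}]}

@pytest.mark.parametrize("council", COUNCILS)
def test_scraper_still_parses(council):
    schedule = run_scraper(council)
    assert schedule["bins"], f"{council} returned no collections"
    for entry in schedule["bins"]:
        assert entry["type"] and entry["collectionDate"]
```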

When a scraper breaks, someone in the community (or the project maintainers) updates it. This typically means adjusting CSS selectors, handling a new page structure, or switching from Beautiful Soup to Selenium if the council added JavaScript rendering. With over 210 forks and regular contributors, most fixes land within a few days.

During the gap between a scraper breaking and being fixed, Binformation serves the last cached schedule. Bin collection schedules don't change very often outside of bank holidays, so a cached schedule from a few days ago is usually still accurate. If you see a schedule that looks stale or doesn't match your council's website, it's likely a temporary scraper issue.

Reliability in practice

Binformation's effective uptime across all 334 councils is around 98%. The 2% gap comes from brief outages on individual council websites and occasional scraper breakages. Most of those outages are measured in days, not weeks.

That 98% figure isn't a guarantee. It is what I've observed over the months the app has been running. Some councils are rock-solid. Others (particularly ones that use JavaScript-heavy platforms or frequently redesign their sites) break their scrapers more often. The community maintenance model means fixes are driven by volunteers who care about the project, which works well but isn't the same as a paid support team on call 24/7.

I think honesty about this is important. Binformation is not going to be perfect 100% of the time. What it does is give you a much better experience than manually checking your council website every week, with the trade-off that occasionally, for a day or two, a scraper might be catching up.

Why open source matters here

The UK Bin Collection Data project works because it's open source. No single person or company could maintain 334 individual scrapers against 334 independently changing council websites. It takes a community. Anyone can contribute a new council scraper or fix a broken one. The project's 210+ forks and 360+ releases show that the community is active and engaged.

Binformation benefits from this directly, and I'm grateful to the contributors who keep it running. The app builds on their work by providing a consumer-friendly interface, push notifications, and server-side caching that makes the data accessible to people who'd never open a GitHub repository. When I find or fix a broken scraper on my side, I push the fix back to the project so everyone benefits.

If you're technically inclined and your council isn't supported yet, you can contribute a scraper to the project. The repository has documentation on how to write one. Most scrapers are 50-100 lines of Python. Once it's merged into the project and passing tests, Binformation can pick it up.
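To give a sense of the shape, here's a rough skeleton loosely modelled on the project's conventions. Check the repository's documentation for the exact base class and output schema before writing a real one:

```python
from bs4 import BeautifulSoup

class CouncilClass:
    """Skeleton scraper for a hypothetical council with a simple HTML table."""

    def parse_data(self, page: str, **kwargs) -> dict:
        soup = BeautifulSoup(page, "html.parser")
        data = {"bins": []}
        # Assuming a two-column table: bin type, collection date.
        for row in soup.select("table.collection-dates tr")[1:]:  # skip header
            bin_type, date = (td.get_text(strip=True) for td in row.find_all("td"))
            data["bins"].append({"type": bin_type, "collectionDate": date})
        return data
```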