Build Lean SaaS
Back to Always-On Agents
Course lesson 7 · decision · skill · intermediate

Build an AI Release Notes Watcher

Build an AI release notes watcher that monitors GitHub releases and RSS/Atom feeds, dedupes changes, and turns important updates into sourced article briefs.

Austin Witherow
10 min read

If you write about fast-moving AI tools, the hard part is not finding news. It is noticing the right release early, checking it against primary sources, and turning it into a useful implementation article before the topic goes stale.

This lesson builds a cheap hourly release notes watcher for Codex, Hermes, OpenAI, and other trusted agent tooling sources. It watches GitHub releases and RSS/Atom feeds, including official changelog or blog feeds when a vendor exposes them; ignores what it has already seen; and only wakes Codex or Hermes when there is a release worth researching.

The output is a sourced research brief, not an auto-published article. A human still approves the angle, checks the claims, and decides whether the change deserves a public article, a course update, a skill patch, a digest item, or no action.

Want the working capture layer? Install the free Release Source Capture skill to watch GitHub releases and RSS feeds, dedupe events locally, and emit normalized JSON that Codex or Hermes can turn into article briefs.

Who this is for

Use this workflow if you are a developer, indie SaaS builder, technical writer, or course creator who needs to monitor AI tooling changes without manually checking GitHub releases and official RSS/Atom feeds every day.

It is especially useful when your content can go stale because a tool changes install commands, config behavior, auth, model selection, plugin loading, scheduled jobs, or API behavior.

What this watcher helps you catch

  • A Codex CLI release that changes install commands or config behavior.
  • A Hermes Agent update that changes tool execution, scheduled jobs, or Discord behavior.
  • An OpenAI API or model update that affects an existing tutorial.
  • A docs update that invalidates part of a course lesson.
  • A new feature that deserves a hands-on implementation article.

Instead of relying on memory or social feeds, the watcher creates a repeatable editorial intake system.

What makes this different

This is not a generic RSS reader, social scraper, or auto-blogging workflow. The watcher is designed for implementation content:

  • It uses an allowlist of trusted primary sources.
  • It dedupes events locally so hourly polling stays quiet.
  • It emits structured JSON instead of prose.
  • It separates detection from interpretation.
  • It routes only meaningful changes to Codex or Hermes.
  • It creates briefs for human approval instead of publishing automatically.

The workflow in one picture

Keep the detector cheap and deterministic. It should run from cron without an LLM call and answer one question: "Did an allowlisted source publish something new enough to inspect?"

Let Codex or Hermes do the expensive work only after a change survives dedupe and ranking.
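Sketched as a plain pipeline, the loop looks like this (labels are illustrative):

```text
sources (allowlist)
   │  hourly cron fetch
   ▼
normalize ──▶ hash ──▶ dedupe against state file
   │  new events only
   ▼
rank (0–10 rubric)
   │  score ≥ 7
   ▼
Codex / Hermes research brief
   │  human approval
   ▼
article / course update / digest / no action
```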

Step 1: define the trusted source registry

Start with a small allowlist. Do not point the system at the whole web.

A good v1 registry has five fields per source:

  • id — the stable key in the state file.
  • type — tells the capture script how to parse the source.
  • url — the canonical source.
  • product — helps the later writing brief group related changes.
  • importance — gives the ranking layer a useful default.
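As an illustration, a registry using those five fields might look like this (file format, field values, and URLs are assumptions, not the skill's exact schema):

```json
[
  {
    "id": "codex-cli-releases",
    "type": "github_releases",
    "url": "https://github.com/openai/codex/releases",
    "product": "Codex CLI",
    "importance": "high"
  },
  {
    "id": "openai-blog",
    "type": "rss",
    "url": "https://openai.com/blog/rss.xml",
    "product": "OpenAI platform",
    "importance": "medium"
  }
]
```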

For v1, only include sources you would cite directly in a public technical article:

  • GitHub releases for installable tools.
  • Official changelogs and docs pages.
  • Official RSS or Atom feeds.
  • Vendor blog posts.
  • Maintainer-authored announcement posts if they are consistently reliable.

Avoid scraped social timelines in v1. They are noisy, brittle, and usually need a human interpretation layer anyway.

Step 2: separate detection from interpretation

The hourly detector should not decide whether a release matters to your roadmap. It should normalize facts and stop.

A normalized JSON event should look like this:
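A plausible sketch of that shape (field names and values are illustrative, not the skill's exact schema):

```json
{
  "source_id": "codex-cli-releases",
  "type": "github_release",
  "product": "Codex CLI",
  "title": "v0.42.0",
  "url": "https://github.com/openai/codex/releases/tag/v0.42.0",
  "published_at": "2025-01-15T18:00:00Z",
  "summary": "Changes the default config location and adds a sandbox flag.",
  "hash": "sha256:2f6c..."
}
```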

That shape gives the next agent enough context to research without making the polling job clever. The hourly run stays mechanical: fetch sources, parse items, hash canonical fields, compare with state, print new events.

The paired Release Source Capture skill emits this shape.

Step 3: dedupe with a state file

The state file is the difference between "check the feed" and "tell me only when something changed."

The state file should store each seen event hash by source:
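A minimal sketch of that state file (hashes and source ids invented for the example):

```json
{
  "codex-cli-releases": [
    "sha256:2f6c...",
    "sha256:9a1d..."
  ],
  "hermes-agent-releases": [
    "sha256:77b0..."
  ]
}
```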

This is enough for an hourly detector. If the same release appears again, the script ignores it. If a release page changes materially and the hash changes, it can surface again as an updated event.

Start with --dry-run so you can inspect the first batch without marking anything as seen. Remove it only after the registry is clean enough to keep.

If you want to skip the boilerplate, use the free Release Source Capture skill. It already implements the registry, fetch, normalize, hash, and state-file dedupe loop described above.

Step 4: rank the content opportunity

After detection, use a small rubric before involving a writing agent.

Score each new event from 0 to 10:

  • 3 points if it is from a high-priority source like Codex, Hermes, OpenAI, or a direct dependency.
  • 2 points if it includes new user-facing features.
  • 2 points if it may cause breaking compatibility or migration work.
  • 1 point if it unlocks a tutorial or demo.
  • 1 point if it has clear primary sources.
  • 1 point if it affects an existing Build Lean SaaS lesson, skill, or template.

A release that scores 7 or higher deserves a research brief. A score of 4 to 6 can go into a weekly digest. Below that, store the event and do nothing.
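The rubric and thresholds above can be sketched as a small scorer. The event field names here are assumptions about what the normalized JSON carries, not the skill's actual schema:

```python
# Illustrative scorer for the 0-10 rubric; event keys are hypothetical.
HIGH_PRIORITY = {"codex", "hermes", "openai"}

def score_event(event: dict) -> int:
    score = 0
    # 3 points: high-priority source or direct dependency.
    if event.get("product", "").lower() in HIGH_PRIORITY or event.get("direct_dependency"):
        score += 3
    if event.get("new_user_facing_features"):
        score += 2
    if event.get("breaking_or_migration"):
        score += 2
    if event.get("unlocks_tutorial"):
        score += 1
    if event.get("has_primary_sources"):
        score += 1
    if event.get("affects_existing_lesson"):
        score += 1
    return score

def route(event: dict) -> str:
    s = score_event(event)
    if s >= 7:
        return "research_brief"
    if s >= 4:
        return "weekly_digest"
    return "store_only"
```

Keeping the scorer deterministic means the cron job can rank events without an LLM call.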

The phrase that should stop the scroll is "breaking compatibility." If a release can break install commands, config, auth, model selection, API behavior, plugin loading, or cron execution, treat it as maintenance work first and news second.

Step 5: generate the research brief

When a new release clears the threshold, hand the normalized event to Codex or Hermes with a constrained research prompt:
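One possible shape for that prompt (the wording is illustrative):

```text
You are preparing a research brief, not an article.

Input: the normalized release event JSON below.

1. Verify the release against the primary source URL.
2. List what changed, citing only primary sources.
3. Assess impact: install, config, auth, API, or workflow breakage.
4. Identify which existing lessons, skills, or templates are affected.
5. Propose one article angle, or recommend "digest" or "no action".

Do not write the article. Output the brief only.
```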

That prompt keeps the agent from skipping straight to prose. The brief is the quality gate. It lets you check whether the release is real, whether the impact is meaningful, and whether the site already has a better page to update.

Step 6: turn the brief into a tutorial

If the human approves the angle, write the article as an implementation tutorial. Do not summarize the changelog and call it done.

A strong structure is:

  1. What shipped.
  2. Who needs to care.
  3. The compatibility or workflow impact.
  4. A quick reproduction or install check.
  5. A mini tutorial that uses the new capability.
  6. Common failure cases.
  7. How to update your agent workspace.
  8. Links to primary sources.

For Codex and Hermes content, the tutorial should show the operator loop: inspect, plan, run a dry run, apply the change, verify, and leave notes for the next heartbeat.

The value is the operator answer: what should someone running always-on agents install, change, test, avoid, or document because this shipped?

Step 7: keep human approval in the loop

There are three places to require approval:

  • Before adding a new source to the registry.
  • Before generating a full article from a release event.
  • Before publishing or updating a live lesson.

The hourly job can ping Discord with a short message:
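For example, via a Discord webhook (the payload wording is illustrative, and sending it requires a real webhook URL):

```shell
# Compose a short alert; the commented curl shows the actual send.
payload='{"content": "release-watcher: 1 new event scored 8/10 - brief ready for review"}'
echo "$payload"
# curl -s -X POST "$DISCORD_WEBHOOK_URL" -H "Content-Type: application/json" -d "$payload"
```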

That keeps the watcher useful without letting it become a publishing machine. Its job is to surface opportunities, not create obligations.

Step 8: install the paired skill

Install the skill into a local Codex or Hermes skill directory:
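The exact install path depends on your setup; a hypothetical copy into a local Codex skill directory might look like:

```shell
mkdir -p ~/.codex/skills
cp -r ./release-source-capture ~/.codex/skills/release-source-capture
```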

Create a registry and save it somewhere stable:
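For example (the path and registry contents here are illustrative):

```shell
# Write a one-entry registry to a stable config path.
mkdir -p ~/.config/release-watcher
cat > ~/.config/release-watcher/sources.json <<'EOF'
[
  {
    "id": "codex-cli-releases",
    "type": "github_releases",
    "url": "https://github.com/openai/codex/releases",
    "product": "Codex CLI",
    "importance": "high"
  }
]
EOF
```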

Run a dry run from the installed skill directory:
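Assuming the skill exposes a capture script (the script name and flags here are hypothetical):

```shell
cd ~/.codex/skills/release-source-capture
python3 capture.py --registry ~/.config/release-watcher/sources.json --dry-run
```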

Before writing state, check that the URLs are canonical, the summaries are useful enough for triage, and repeated dry runs do not produce surprise duplicates.

When the output looks right, run once without --dry-run to seed the state file. The first real run marks current items as seen, so the hourly job does not alert on every old release.

Step 9: run it hourly

Cron is enough for v1. Create a log directory first:
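For example (the log path is an assumption; use whatever location your workspace already logs to):

```shell
mkdir -p ~/logs/release-watcher
```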

Then add the hourly job:
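A crontab entry following that pattern (the script name and paths are hypothetical):

```shell
0 * * * * cd ~/.codex/skills/release-source-capture && python3 capture.py --registry ~/.config/release-watcher/sources.json >> ~/logs/release-watcher/run.log 2>&1
```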

If you are using Hermes cron jobs instead of system cron, keep the same pattern: the script collects data, and the agent reasons only when stdout contains new events.

Example end-to-end handoff

A good first event looks like this:
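An illustrative first event and its routing (all values are invented for the example):

```json
{
  "event": {
    "source_id": "codex-cli-releases",
    "product": "Codex CLI",
    "title": "v0.42.0",
    "url": "https://github.com/openai/codex/releases/tag/v0.42.0",
    "summary": "Changes the default config location and adds a sandbox flag."
  },
  "score": 8,
  "route": "research_brief",
  "human_decision": "approve: update install lesson, write migration tutorial"
}
```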

That is the full loop: source change, structured event, ranking, research, and a human publishing decision.

The operating rule

Detection should be cheap. Publishing should be deliberate.

The watcher wins when it catches meaningful changes early, hands Codex or Hermes enough primary-source context to prepare a useful brief, and still leaves the final call with a human.

Next action

Keep this inside the course path

Continue the lesson sequence, install the skill when one exists, or use DevelopJoy when you want the workflow wired into your real workspace.