Also, don't forget to set up an RSS or Atom feed for your website. Contrary to the recurring claim that RSS is dead, most of the traffic to my website still comes from RSS feeds, even in 2̶0̶2̶5̶ 2026! In fact, one of my silly little games became moderately popular because someone found it in my RSS feed and shared it on HN. [1]
From the referer (sic) data in my web server logs (which is not completely reliable but still offers some insight), the three largest sources of traffic to my website are:
1. RSS feeds - People using RSS aggregator services as well as local RSS reader tools.
2. Newsletters - I was surprised to discover just how many tech newsletters there are on the Web and how active their user bases are. Once in a while, a newsletter picks up one of my silly or quirky posts, which then brings a large number of visits from its followers.
3. Search engines - Traffic from Google, DuckDuckGo, Bing and similar search engines. This is usually for specific tools, games and HOWTO posts available on my website that some visitors tend to return to repeatedly.
RSS is my preferred way to consume blog posts. I also find that blogs with an RSS feed tend to be more interested in actually writing interesting content than in just chasing views or advertising. I guess this makes sense: it's hard to monetize views through an RSS reader.
It's funny: back in the Google Reader days, monetizing via RSS was quite common. You'd publish a truncated version to RSS and force readers to visit the site for the full version, usually just in exchange for ad views. Honestly, while it wasn't the greatest use of RSS, it was better than most paid blogs today, which are ad-wall pop-up pay-gate nightmares of UX.
Even the short snippets are better if you want to aggregate interesting topics and then read whatever catches your eye, rather than endlessly scrolling through each site individually.
Please also enable CORS[1] for your RSS feed. (If your whole site is a static site, then please just enable CORS site-wide. This is how GitHub Pages works. There's pretty much no reason not to.)
Not having CORS set up for your RSS feed means that browser-based feed readers won't be able to fetch your feed to parse it (without running a proxy).
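For a static site behind nginx, a minimal sketch might look like this (the feed path and nginx as the server are assumptions; adjust for your own setup):

```nginx
# Allow browser-based feed readers to fetch the feed cross-origin.
# The feed is public anyway, so a wildcard origin is harmless here.
location = /feed.xml {
    add_header Access-Control-Allow-Origin "*" always;
}
```

On GitHub Pages you get the equivalent `Access-Control-Allow-Origin: *` header for free on every response, which is why browser-based readers work with feeds hosted there.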
If you want to get a red line, you need to use red ink. If you use blue ink, you'll get blue lines. And I can draw you a cat. (I'm no artist, but I can give it a try.) But it won't be a line anymore. A line and a cat: those are two different things.
Now that browser developers did their best to kill RSS/Atom...
Does a Web site practically need to do anything to advertise their feed to the diehard RSS/Atom users, other than use the `link` element?
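For anyone wondering what that looks like: a single `link` element in the page's `head` is what most feed readers use for auto-discovery (the `title` and `href` below are placeholders; point them at your actual feed):

```html
<!-- Feed auto-discovery: readers look for this in <head>. -->
<link rel="alternate" type="application/atom+xml"
      title="My Blog" href="/feed.xml">
```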
Is there a worthwhile convention for advertising RSS/Atom visually in the page, too?
(On one site, I tried adding an "RSS" icon, linking to the Atom feed XML, alongside all the usual awful social media site icons. But then I removed it, because I was afraid it would confuse visitors who weren't very Web savvy, and maybe get their browser displaying XML or showing them an error message about the MIME content type.)
I use RSS Style[1] to make the RSS and Atom feeds for my blog human readable. It styles the XML feeds and inserts a message at the top saying that the feed is meant for news readers, not people, thus technically making it "safe" for less tech-savvy people.
Browsers really should have embraced XSLT rather than abandoned it. Now we're stuck trying yet again to reinvent solutions already handled by REST [1].
XSLT is the solution of domain specialists and philosophers. Abandoning it is the vote of the market and market interests, the wisdom of crowds at work. This is the era of scale, not expertise; enjoy the fruits.
Effectively no one was using XSLT at any point (certain document pipelines, or indie hackers like Paul Ford, being the exceptions that proved the rule). Browsers keep all kinds of legacy features, of course, and they could well have kept this one, and doing so would’ve been a decision with merit. But they didn’t, and the market will ratify their decision. Just like effectively no one was using XSLT, effectively no one will change their choice of browser over its absence.
It's hard to judge usage when browsers stopped maintaining XSLT at the 1.0 spec. Version 1.0 was very lacking in features and is difficult to use.
Browsers also never added support for some of the most fundamental features to support XSLT. Page transitions and loading state are particularly rough in XSLT in my experience.
Blizzard used to use it for their entire WoW Armory website for looking people up. They converted off it years ago, but for a while they used XML/XSLT to render the entire page.
RSS.style is my site. I'm currently testing a JavaScript-based workaround that should look just like the current XSLT version. It will not require the XSLT polyfill (which sort-of works, but seems fragile).
One bonus is that it will be easier to customize for people who know JavaScript but don't know XSLT (which is a lot of people, including me).
You'll still need to add a line to the feed source code.
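For the current XSLT version, that line is an `xml-stylesheet` processing instruction near the top of the feed (the stylesheet filename here is a placeholder):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Tells browsers to render the raw feed through a stylesheet
     instead of showing bare XML or an error page. -->
<?xml-stylesheet type="text/xsl" href="/feed-style.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <!-- feed content as usual -->
</feed>
```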
> message at the top about the feed being meant for news readers
There's no real reason to take this position. A styled XML document is just another page.
For example, if you're using a static site generator where the front page of your /blog.html shows the most recent N posts, and the /blog/feed.xml shows the most recent N posts, then...?
Shout out to Vivaldi, which renders RSS feeds with a nice default "card per post" style. Not to mention that it has a feed reader built in as well.
Isn't it ironic that browsers do like 10,000 things nowadays, but Vivaldi (successor to Opera) is the only one that does the handful of things users actually want?
I don't use it myself because my computer is too slow (I think they built it in node.js or something). But it makes me happy that someone is carrying the torch forward...
With the lack of styling, I'm sorry to say I didn't notice the RSS icon at first at all. Adding the typical orange background to the icon would fix that.
For a personal site, I'd probably just do that. (My friends are generally savvy and principled enough not to do most social media, so no need for me to endorse it by syndicating there.)
But for a commercial marketing site that must be on the awful social media, I'm wondering about quietly supporting RSS/Atom without compromising the experience for the masses.
Is there any reason today to use RSS over Atom? Atom sounds like it has all the advantages, except maybe compatibility with some old or stubborn clients?
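For a sense of what Atom asks of you: a minimal valid feed is quite small (every name and URL below is a placeholder). The Atom spec requires `id`, `title`, and `updated` on both the feed and each entry, plus a feed-level `author` unless every entry carries its own:

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Blog</title>
  <link href="https://example.com/"/>
  <id>https://example.com/</id>
  <updated>2026-01-01T00:00:00Z</updated>
  <author><name>Alice Example</name></author>
  <entry>
    <title>Hello, world</title>
    <link href="https://example.com/hello"/>
    <id>https://example.com/hello</id>
    <updated>2026-01-01T00:00:00Z</updated>
    <summary>First post.</summary>
  </entry>
</feed>
```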
Based on my own personal usage, it makes total sense that RSS feeds still get a surprising number of hits. I have a small collection of blogs that I follow and it's much easier to have them all loaded up in my RSS reader of choice than it is to regularly stop by each blog in my browser, especially for blogs that seldom post (and are easy to forget about).
Readers come with some nice bonus features, too. All of them normalize styling, for example, and native reader apps support offline reading.
If only there were purpose-built open standards and client apps for other types of web content…
This is what I use. It’s on macOS too and amazing on both. Super fast, focused, and efficient.
It’s by far the best I’ve tried. Most other macOS readers aren’t memory managing their webviews properly which leads to really bad memory leaks when they’re open for long periods.
iCloud sync is a nice feature too. I use the Mac app mostly for adding feeds and the iOS app for reading. Anytime I read an interesting web post, I pop its URL into the app to see if it has an RSS feed.
Same question, but for Android and desktop/laptop too. I've never used RSS much before, hardly at all in fact. I don't know why, since I first learned about it many years ago, but after reading this thread, I want to.
The question is, do you have this traffic because of RSS client crawlers that pre-loaded the content or from real users? I'm not pro killing RSS, by the way, but genuinely doubtful.
> The question is, do you have this traffic because of RSS client crawlers that pre-loaded the content or from real users?
I have never seen RSS clients or crawlers preload actual HTML pages. I've only seen them fetch the XML feed and present its contents to the user.
When I talk about visitors arriving at my website from RSS feeds, I am not counting requests from feed aggregators or readers identified by their 'User-Agent' strings. Those are just software tools fetching the XML feed. I'm not talking about them. What I am referring to are visits to HTML pages on my website where the 'Referer' header indicates that the client came from an RSS aggregator service or feed reader.
It is entirely possible that many more people read my posts directly in their feed readers without ever visiting my site, and I will never be aware of them, as it should be. For the subset of readers who do click through from their feed reader and land on my website, those visits are recorded in my web server logs. My conclusions are based on that data.
> I have never seen RSS clients or crawlers preload actual HTML pages
Some setups, like tt-rss with the Mercury plugin, will do that to restore full articles to the feed, but it's either on-demand or manually enabled per feed. Personally, I don't run it on many feeds other than a few more commercial platforms that heavily limit their feeds' default contents.
Presumably some of the more app-based RSS readers have such a feature, but I wouldn't know for certain.
I do not deliberately measure traffic. And I certainly never put UTM parameters in URLs as a sibling comment mentioned, because I find them ugly. My personal website is a passion project and I care about its aesthetics, including the aesthetics of its URLs, so I would never add something like UTM parameters to them.
I only occasionally look at the HTTP 'Referer' header in my web server logs and filter them, out of curiosity. That is where I find that a large portion of my daily traffic comes via RSS feeds. For example, if the 'Referer' header indicates that the client landed on my website from, say, <https://www.inoreader.com/>, then that is a good indication that the client found my new post via the RSS feed shown in their feed aggregator account (Inoreader in this example).
Also, if the logs show that a client IP address with the 'User-Agent' header set to something like 'Emacs Elfeed 3.4.2' fetches my '/feed.xml' and then the same client IP address later visits a new post on my website, that is a good indication that the client found my new post in their local feed reader (Elfeed in this example).
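A rough sketch of that kind of Referer filtering (the log format is assumed to be the common "combined" format, and the referrer list is a made-up example, not my actual setup):

```python
import re

# Hypothetical list of referrer domains that indicate a feed
# aggregator service; extend with whatever shows up in your logs.
FEED_REFERRERS = ("inoreader.com", "feedly.com", "newsblur.com")

# Matches the request and referer fields of a combined-format log line:
#   ... "GET /path HTTP/1.1" 200 1234 "referer" "user-agent"
LOG_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) [^"]*" \d+ \S+ "(?P<referer>[^"]*)"')

def feed_reader_hits(lines):
    """Return the paths of visits whose Referer points at a feed service."""
    hits = []
    for line in lines:
        m = LOG_RE.search(line)
        if m and any(r in m.group("referer") for r in FEED_REFERRERS):
            hits.append(m.group("path"))
    return hits

sample = [
    '1.2.3.4 - - [01/Jan/2026:00:00:00 +0000] "GET /post.html HTTP/1.1" 200 1234 "https://www.inoreader.com/" "Mozilla/5.0"',
    '5.6.7.8 - - [01/Jan/2026:00:00:01 +0000] "GET /feed.xml HTTP/1.1" 200 5678 "-" "Emacs Elfeed 3.4.2"',
]
print(feed_reader_hits(sample))  # only the first line has a feed-service referer
```

The second line is the feed fetch itself (plain tool traffic), which this deliberately ignores, matching the distinction above between feed fetches and actual click-throughs.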
[1] https://susam.net/from-web-feed-to-186850-hits.html