NHacker Next
Experimental blog that is only available to read through a feed reader (theunderground.blog)
chrismorgan 3 days ago [-]
You can very practically make this viewable in normal web browsers if you give it an XSLT stylesheet, and preferably use <content type="xhtml"> instead of <content type="html"> so that you don’t even need any JavaScript to unescape things (grumble grumble, disable-output-escaping, grumble grumble, messy unmaintained XML pipelines with ancient feature support, grumble grumble).

Here’s a sample, having taken this feed.xml, switched it to <content type="xhtml">, and added my own stylesheet: https://temp.chrismorgan.info/2024-05-04-hn-40246841.xml
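
For anyone who hasn’t seen it done, the wiring is just a processing instruction before the root element, plus real XHTML inside a div. A minimal sketch (filenames and titles here are illustrative, not the actual feed’s):

  <?xml version="1.0" encoding="utf-8"?>
  <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
  <feed xmlns="http://www.w3.org/2005/Atom">
    <title>Example feed</title>
    <!-- id, updated, etc. elided -->
    <entry>
      <title>Example entry</title>
      <content type="xhtml">
        <div xmlns="http://www.w3.org/1999/xhtml">
          <p>The post body as actual XHTML, no escaping needed.</p>
        </div>
      </content>
    </entry>
  </feed>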

There’s all kinds of fun stuff you can then add, such as pagination as you get more entries, rather than just deleting old ones. I’d suggest adding actual links using the fragment and xml:id attribute (mapped to HTML id in the stylesheet), but that wouldn’t play nicely with pagination shifting entries.

You can even do things like publish an Atom entry document for each entry, so they have their own paths, but each of those URLs is still an Atom document. Basically, if you want to, you can completely realistically have a full blog where everything is in Atom containers instead of HTML containers.

giantrobot 3 days ago [-]
While XML and its ecosystem have faults, I love the model where annotated/hinted serialized data is sent to the client. The user agent is in charge of deciding how to display that data. It doesn't need a mountain of arbitrary third-party code to display it.

I've done the "transform XML with XSLT in the browser" thing a few times before and it worked pretty well. Writing the XSLT was a pain in the ass because the tooling sucked, but once it was written it just worked.

It even worked with JavaScript, since it could just grab the XML via XHR, ignore the stylesheet, and proceed. The modern JSON-based equivalent is just an underspecified mess.

XML is actually pretty cool and useful. I find JSON to be a poor replacement.

ciabattabread 3 days ago [-]
Oh man, XSLT stylesheets. I remember working with those for a templating engine for a college project. But I haven't dealt with web programming since just before the iPhone achieved mass popularity. Is XSLT still a thing?
basscomm 3 days ago [-]
> Is XSLT still a thing?

On the frontend, not really. I built a small blog with XML and XSLT a few years ago and was able to get things to mostly work, but XML and XSLT support in browsers is bad. It's been stuck at XSLT 1.0 with missing features for years, and Chromium and Firefox both keep threatening to remove it. One day they will follow through.

dmorgan81 3 days ago [-]
I doubt it's still used much in web programming, but for backend data processing there are plenty of systems that output XML. XSLT is a great resource when you need to simplify a gnarly document.
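
The workhorse pattern for that is the identity transform plus overrides: copy everything through unchanged, then strip or rewrite just the parts you don't want. A minimal sketch (the dropped element name is illustrative):

  <?xml version="1.0" encoding="utf-8"?>
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- identity transform: copy every node and attribute unchanged -->
    <xsl:template match="@*|node()">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>
    <!-- override: silently drop the noise -->
    <xsl:template match="internal-metadata"/>
  </xsl:stylesheet>
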
vallismortis 3 days ago [-]
I love XSL. There are some problems that it is absolutely stellar at solving. NCBI recently changed their JATS schema. No need to change any code, just modify the stylesheet and everything hums along as if nothing changed.
neuronexmachina 3 days ago [-]
They come up sometimes in the GIS space when working with https://en.wikipedia.org/wiki/Geography_Markup_Language
captn3m0 3 days ago [-]
An extension like Feed Preview also works well: https://code.guido-berhoerster.org/addons/firefox-addons/fee...
chrismorgan 3 days ago [-]
That’s user-side. Giving the feed a stylesheet works for everyone, not requiring that they install an extension first.
basscomm 3 days ago [-]
Making an XSLT stylesheet for an RSS feed is pretty easy, but I found out the hard way that you have to put about 512 characters of junk in your RSS file to do it. Otherwise browsers will just show unstyled XML no matter what else you do:
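
Roughly like this at the top of the file; the comment exists purely to push the real content past the browser's sniffing window (the filler content and exact size here are illustrative):

  <?xml version="1.0" encoding="utf-8"?>
  <?xml-stylesheet type="text/xsl" href="rss.xsl"?>
  <!-- ~512 bytes of filler so the browser applies the stylesheet
       instead of dropping into its built-in XML/feed view:
       xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
  <rss version="2.0">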

See also: https://www.nfriedly.com/techblog/2009/06/how-to-use-xslt-to...

chrismorgan 3 days ago [-]
That was fifteen years ago, back when most browsers tried to do useful things with feeds. They’ve all long since given up on that.
basscomm 3 days ago [-]
The link is from 2009, but nothing has changed.

I tried everything I could think of to style my RSS feed, but nothing worked until I added a bunch of garbage at the top of the file to keep the browser from ignoring my instructions, and this was all the way back in 2020.

chrismorgan 3 days ago [-]
I’ve been styling Atom feeds with no bother since 2019, and worked with RSS feeds briefly in 2020 and likewise experienced no issues.

The two browsers you were complaining about are IE, long dead, and Firefox, which removed its feed reading functionality in 2018. I can only assume that you’re misremembering when you last tried this, because what you’re describing simply doesn’t happen any more.

basscomm 1 day ago [-]
> I can only assume that you’re misremembering when you last tried this, because what you’re describing simply doesn’t happen any more.

You assume incorrectly. The earliest date in my rss file is August 2019, which is the test entry I made to make sure it worked.

Firefox dropped Live Bookmark support in December 2018, version 64.0. I was using Debian stable and Firefox ESR at the time, which was Firefox 60.9.

So I did get bitten by this, because I decided to try styling RSS feeds at exactly the wrong time.

internetter 3 days ago [-]
You can even do it without switching the content type: https://boehs.org/in/blog.xml
chrismorgan 3 days ago [-]
That applies a CSS stylesheet to the XML tree directly, which has serious limitations: you can’t make links, you don’t get a document title, you don’t get a useful accessibility tree, you can’t change the order of the tree…

I would always recommend using an XSL transformation first, in order to avoid these limitations.
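
Even a skeleton transform fixes those: you get a document title, a normal accessibility tree, and whatever ordering you want. A minimal sketch for an Atom feed, assuming <content type="xhtml"> (layout illustrative):

  <?xml version="1.0" encoding="utf-8"?>
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:atom="http://www.w3.org/2005/Atom"
      xmlns="http://www.w3.org/1999/xhtml"
      exclude-result-prefixes="atom">
    <xsl:template match="/atom:feed">
      <html>
        <head>
          <title><xsl:value-of select="atom:title"/></title>
        </head>
        <body>
          <h1><xsl:value-of select="atom:title"/></h1>
          <xsl:for-each select="atom:entry">
            <article>
              <h2><xsl:value-of select="atom:title"/></h2>
              <!-- with type="xhtml", the content div copies straight through -->
              <xsl:copy-of select="atom:content/node()"/>
            </article>
          </xsl:for-each>
        </body>
      </html>
    </xsl:template>
  </xsl:stylesheet>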

kwhitefoot 3 days ago [-]
Surely the biggest problem that most bloggers have is that no one reads what they write. Doesn't this make it worse?
bunderbunder 3 days ago [-]
I recently stumbled across a neocities site where the author specifically called this factor out as a reason why she uses a 90s-style personal site instead of posting things to social media or Medium or whatever. She observed that she's grown tired of the modern Web's obsession with accumulating Whuffie (my choice of words, not hers), and thinks it's probably an emotional net negative. With a personal site, she feels freer to treat her Web presence as a personal project that she does purely for her own personal satisfaction. And, because of that, she gets more joy out of it. She specifically called out that she doesn't have any traffic counters because she feels she's better off not worrying about that.

For my part, I also enjoyed browsing her site more than I do modern blogs. It was a refreshing reminder of what the Internet was like back when spending hours browsing it was called "surfing", and the term "doomscrolling" hadn't been invented yet. I'm seriously considering creating my own Neocities site (or similar). If I do, I won't be worrying about engagement metrics, either.

giantrobot 3 days ago [-]
I much prefer the "homepage" model over the "blog" model. A blog and the ecosystem around blogs have an implicit need for constant content. Blogs are typically displayed by feed readers sorted by time. If you don't post for some period of time, your content will fall out of view. If blogging isn't your main interest, it's fucking tiring.

A homepage can feel finished. You build some pages around your interests and can stop. You don't need to be a "blogger", you can just share your interests. A homepage doesn't even need tending. It can just be a self contained thing. You can constantly update it but you don't need to.

Unfortunately, Google punishes any content that wasn't published recently. Just to be found by most people you have to jump on the content-generation treadmill. It's commendable when services like NeoCities bubble up existing content to new viewers. Same with Marginalia, which does a good job finding content based on your search terms, not on whether a site carries Google's ad network and claims to have been updated in the last nanosecond.

thejohnconway 3 days ago [-]
Yeah, seriously, the nostalgia for blogs is understandable, but I remember feeling they were ruining the internet. People used to have sites laid out with categorised pages that were linked together. A bit like a personal Wikipedia. They would occasionally edit pages and keep them up to date. Blogs replaced that with a chronological layering of short posts. Bloggers feel pressure to post little and often. Blogs are not a good way to read about a subject (Wikipedia isn’t a blog!).
derefr 3 days ago [-]
> Blogs are typically displayed by feed readers sorted by time. If you don't post for some period of time your content will fall out of view.

I've never used an (RSS/Atom) feed reader that would just barf every post of every blog into a single top-level feed. Feed readers, almost by definition of the genre, are programs that give you a sidebar with a hierarchy of subscribed feeds, grouped into categories (where a given feed can appear under multiple categories), and with unread counts beside each category and feed. You can then scroll through a category, or a specific blog. There usually isn't even an option to scroll through "everything" (though I guess you could do this by having every feed in the same category.)

And personally, I don't think I've ever bothered scrolling through a category, either. I usually just decide what category I feel like looking at; notice a few blogs in it that are badged as unread, that I haven't looked at in a while; click into them; look at the few newest (or oldest) posts; and then maybe mark the rest as read if I feel like reading those few "caught me up."

It's basically the same flow as if the feed posts were emails in an email client, sorted into individual per-blog folders.

In short: what are you talking about?

> You can constantly update it but you don't need to.

The thing that confuses me about the "homepage" website style — and has since the 90s, when I was maintaining one of my own! — is: how do I present content that I am constantly adding bits and bobs to, in such a way that people can discover that the content is evolving, and go look at it?

Like, picture a "homepage"-style site, with one page showing off the author's rock collection, and another page with lines from famous rap battles. I can browse this at my leisure the first time I visit, and enjoy it, and that's great. But say I do that, and bookmark the site, and then I come back to it again a year later.

Am I expected to just click through to each page again? Find the new rocks among the existing rocks, and the new quotes among the existing quotes? I don't think I'd bother. (Maybe I would if they were added chronologically to the bottom over time. The original "Evil Overlord list" was a site that grew like this. But most sites just slotted new stuff in alphabetically, or some other even-more-obscure way. The Jargon File always bothered me for this reason.)

Presuming the author has added stuff since then, they likely did that because they want me (and their other "fans") to see it; but due to the lack of discoverability, I'm never going to. I feel like there's got to be some solution to this problem that doesn't involve just reinventing the modern web.

Maybe a changelog, right there on the home page? I've seen "homepage"-style sites with these before. (Usually they use unstyled HTML4 tables.)

But if I picture a "homepage" with a changelog... and I take that and automate the process of producing the changelog from the changes (as any programmer would feel the urge to do)... and I get annoyed at how mechanistic and opaque the changelog entries are, so I modify the workflow so that each change collects a plaintext "commit message" from me along with the change... then I feel like the results would just gradually converge into looking and acting like a blog. A blog that links to static "shrine" pages, sure! But still, the site's "spine" would be a blog. Disappointing.

But maybe the "homepage"-idiomatic approach would be to have a changelog per page, rather than one global one? It feels less blog-like, yes; and I've definitely seen 90s websites that do this as well, each page being its own standalone "document" including its own embedded revision history. (RFCs are such documents.)

My issue with that is that it'd kind of suck for being able to visit the site and find out whether anything had changed since you last looked over everything.

Feels like you'd need a separately-maintained email newsletter?

Or perhaps some kind of indexer that regularly spiders these sites, scrapes the revision logs from each page (or just deltas them from previous versions), and then either spits out an RSS feed — or exposes a special kind of search engine that allows you to search any given website for its newest pages, taking a URL and outputting the site's pages ordered by last-modified date descending. Ideally with some heuristic that synthesizes some optimum between a summary of the page, and a highlight of its most recent change.

...but then, isn't that too a blog?

khrbrt 3 days ago [-]
"Whuffie" is a reference to the Cory Doctorow novel Down and Out in the Magic Kingdom. It's set a post-scarcity society, Whuffie acts as a sort of currency and is earned from clout and social status. The protagonist goes from being wealthy in Whuffie to destitute and needs to earn his way back up.

I read it a long time ago as a weird story. Might be time for a reread.

https://en.wikipedia.org/wiki/Down_and_Out_in_the_Magic_King...

debo_ 3 days ago [-]
I do this too. I have a Gemini gemlog with an http proxy in front, and I write up a little summary of my posts each quarter and email it to a group of ~30 people I like to keep in touch with. I do this in place of having social media.
DHPersonal 3 days ago [-]
The 90s style is certainly a fun marvel to me, but I think Neocities is an easy place to set up a modern site built with static site generators. I built my own non-90s blog with Jekyll and use the Neocities CLI to update the site.
Gormo 3 days ago [-]
Cloudflare Pages is also a decent option. They offer completely free static site hosting.
urban_alien 3 days ago [-]
What's the link to her blog?
bunderbunder 3 days ago [-]
I think that sharing a link to her site on a high traffic venue like Hacker News might be contrary to the author's wishes.
xandrius 3 days ago [-]
I guess the situation is so bad that a gimmick like this might actually help.
Kwpolska 3 days ago [-]
I have a lot of feeds in my reader, but I don't add feeds without first seeing the content on the Web, so this gimmick is nothing special to me. And for someone who doesn't use feeds, I doubt they would start doing so for a mystery-meat blog.
nazgulsenpai 3 days ago [-]
I imagine the author has baked in that people who don't use feeds won't use a blog that requires it be read as a feed.
Retr0id 3 days ago [-]
That's only a problem if your goal is to be read by as many people as possible.
baby_souffle 3 days ago [-]
Or if you’re publishing helpful reference material.

I am under no illusion that my content is popular… but if I spend the better part of a weekend trying to get a config file working for a poorly documented tool, I’ll share it knowing that a low-double-digit number of people will google it and find it.

Content exclusively behind RSS is like tech support exclusively on Discord. Discoverability approaches zero.

Closi 3 days ago [-]
Or if your goal is that interested parties can find and access it easily.

Otherwise why make it a blog and not just a private document?

Retr0id 3 days ago [-]
But it's not private?
antiframe 3 days ago [-]
By adding additional friction, one reduces that 'accessibility'. The author does extra work setting something up as feed-only, and readers have to do extra work to decide whether the feed-only content is worthwhile. The question I have is: why?

Subscribing to a site sight unseen, no pun intended, is not something I or anyone who is protective of their feed inbox would do. I curate my feeds so that I can sit down a few times a week and deeply read.

Retr0id 3 days ago [-]
Precisely, that's the whole point.
Kwpolska 3 days ago [-]
It absolutely does. I can't share a link to a post, so spreading the word about the blog's existence is pretty hard.
darreninthenet 3 days ago [-]
And if they want it read, they need to write something to read... this experimental blog has made four posts, the last one in January.
miduil 3 days ago [-]
genmon 3 days ago [-]
thanks for sharing! I also run https://www.aboutfeeds.com which is a simple "explainer" for those new to RSS

the limitation of feed styling is that it requires custom http headers:

  Content-Type: application/xml; charset=utf-8  # not application/rss+xml
  x-content-type-options: nosniff
and so it doesn't work on GitHub Pages, which is otherwise my go-to for building + hosting static sites.

so... if anyone from GitHub is reading this, please make it possible to tweak the page headers! (and if you know anyone there, please pass on this request :)

arccy 3 days ago [-]
joneil 3 days ago [-]
I never realised this - thanks!

It feels like by adding styling I could give those who click on the “feed” link, but don’t yet know what RSS is, a much better experience than seeing unstyled XML and being confused.

Retr0id 3 days ago [-]
> I'm sorry about using a non-spec compliant ID on the entries, but there are no URLs for blog posts to set it to.

Assuming they're just arbitrary IDs, using urn:uuid: might be a more compliant version of the same idea: https://www.ietf.org/archive/id/draft-ietf-uuidrev-rfc4122bi...
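
i.e. something along these lines (the UUID value is just an example):

  <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>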

kevincox 1 day ago [-]
The URLs also don't need to exist as an HTML webpage. They just need to be unique (ideally globally unique). urn:uuid: is always a good option but using URLs under a prefix you control is just an easy way to get a globally unique ID.

(In fact, one of the problems with live URL IDs is that people often accidentally change the ID if the live URL changes.)
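
For example, an ID minted under a domain you control never needs to resolve to anything (hypothetical URL):

  <!-- never needs to serve a page; it only has to stay stable and unique -->
  <id>https://example.com/blog/ids/2024-05-04-first-post</id>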

jtvjan 3 days ago [-]
Just gotta add a nice XSLT stylesheet to that feed.xml and people with only web browsers can also enjoy the blog.

ex1: https://darekkay.com/atom.xml ex2: https://feeds.nos.nl/nosnieuwsalgemeen

superkuh 3 days ago [-]
There was a military analysis site with free and paid versions. For seven years running, if you subscribed to the RSS feed you received the full paid content in the feed for free. It was great. Eventually enough people must have been clicking links to paid resources (PDFs, images, etc.) in the feed content that the site noticed and stopped providing full post content in RSS, for both free and paid subscribers.
esbranson 3 days ago [-]
As mentioned by chrismorgan, internetter, miduil, and others, CSS can be used to style XML without XSL transforms, and is much more performant. The first time I saw this in the wild was the United States Legislative Markup (USLM) XML used by the US government; it can style large documents that their previous XSLT-based XML format would choke on.[1] It's always interesting when government is more cutting edge than experimental stuff on the Web. It would be nice to get some standardized, basic CSS going for Atom feeds that anyone can link to and just works.
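
The hookup is the same kind of processing instruction, just pointing at CSS instead of an XSLT transform (the filename is illustrative); the CSS then targets the feed's element names directly, e.g. entry { display: block }:

  <?xml-stylesheet type="text/css" href="feed.css"?>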

[1] https://www.govinfo.gov/features/beta-uslm-xml

Fileformat 3 days ago [-]
I made feed.style[1] to help people add a decent XSLT stylesheet to their feed.

I'm using it myself[2] and really like the effect.

I think it always makes sense to have a stylesheet (and use the text/xml content type): otherwise people clicking an RSS/Atom link are greeted with a wall of XML (or a download prompt). Hard to think of a worse UI for people who aren't familiar with feeds & feed readers.

[1] https://www.feed.style/

[2] https://www.fileformat.info/news/rss.xml

Fileformat 3 days ago [-]
feed.style also has a 'try it' feature: Here is what the OP's feed looks like with the stylesheet:

https://www.feed.style/example.xml?feedurl=https%3A%2F%2Fthe...

codetiger 3 days ago [-]
BYOF - Bring your own frontend
flir 3 days ago [-]
That sounds a lot more fun than Gemini, tbh.
sgtnoodle 3 days ago [-]
I read the XML directly in my browser out of spite.
082349872349872 3 days ago [-]
I had hoped it would include a harsh criticism of determinism, as well as of intellectual attempts at dictating human action and behavior by logic, which the Underground Man discusses in terms of the simple math problem: two times two makes four, but these XML nodes from the Underground seem to have a different author.
bayindirh 3 days ago [-]
This is the same as tdarb.org's "Shinobi Website" phase[0]. For more information see [1].

Sadly, tdarb.org is dead, but the HN link[1] is not.

[0]: https://web.archive.org/web/20220523124655/https://tdarb.org...

[1]: https://news.ycombinator.com/item?id=31372373

javajosh 3 days ago [-]
I guess it's time to make a blog that's only available to read through SSH now.
hyperorca 3 days ago [-]
You can even start selling coffee there, while you're at it.
sujayk_33 3 days ago [-]
someone's already doing that LoL
tentacleuno 3 days ago [-]
pedrogpimenta 2 days ago [-]
that's the joke
wffurr 3 days ago [-]
Do you mean a .plan and .project file for finger?
anthk 3 days ago [-]
SDF has blogs over gopher://

gopher://sdf.org/1/phlogs

And often, if not always, the gopher content outmatches the web sites, as maintaining a gopher blog is dead easy compared to the web. No styles, you can post via a script, and images are just placed as links at the end, or under an images/ directory to be read later.

thejohnconway 3 days ago [-]
As someone that considers images to be really important to most things I’m interested in, that completely sucks. As does Gemini for that matter.

Otherwise I do actually quite like the idea of a protocol that is styled by the user agent, not the creator.

anthk 3 days ago [-]
A lot of Gemini clients, such as Lagrange, will inline images. Not an issue like you'd have with Gopher.
thejohnconway 3 days ago [-]
A lot… but what percentage? I’d want a protocol that recognised the semantic relation of images to text, and clients should follow that.
anthk 3 days ago [-]
With Gemini, even in terminal clients you can open the images inlined in a typical blog post on a Gemini capsule with a fast external viewer, such as sxiv, where you can fullscreen with 'f' and quit with 'q'. In the case of Gopher, it's either a directory with a gophermap, or a text file with the content. You can't have both at the same time. Using a gophermap directory as a fake text post is a bit of a hack abusing the 'i' item type, and it can look really bad on some older clients.
082349872349872 3 days ago [-]
> the idea of a protocol that is styled by the user agent

You mean a markup language styled by the user agent? And it could be served by a stateless request protocol, so all requests are equivalent? We could call them JCML and JCTP, I guess...

thejohnconway 3 days ago [-]
You’re being snarky, but here’s a serious answer. I don’t see what relevance statelessness has to do with it, so I’ll leave that aside.

The ‘problem’ with HTML is that it’s lacking a lot of features we now realise we want, and at the same time is flexible enough with CSS and JavaScript for people to solve them in their own ways. This has led to a proliferation of approaches, and stuff can’t be automated or made accessible reliably. It also makes creation more complicated than it needs to be.

082349872349872 3 days ago [-]
Fair enough, my apologies (and I'll leave the other aside too).

I guess where we differ is not in the identification of the problem, but you think an improved markup language might be a solution, and I think any solution would necessarily be more social than technical?

thejohnconway 2 days ago [-]
It would be both. Technical solutions are social solutions. A new language could be part of the solution.

I don’t think it’s going to happen, by the way; I’m just identifying the problem as I see it, and why I don’t think HTML, Gopher, or Gemini are going about solving it in the right way.

bovermyer 3 days ago [-]
Nothing's stopping you.

Or you could just run a blog on the Gemini protocol.

Either is fine.

surfingdino 3 days ago [-]
Go mTLS, go hardcore.
CalRobert 3 days ago [-]
What a coincidence! I was just looking up a comic I used to read - Bizarro - and recalled that years ago I used an RSS reader to keep up with all sorts of comics, etc., and just fell out of the habit. And then I found that Bizarro has no RSS feed.

Thanks for doing one small thing to keep a great tool alive.

gradientsrneat 3 days ago [-]
Objection! The blog's RSS endpoint doesn't block common browser user agents, so it's still technically possible to view it in a web browser. ;)

More seriously, RSS readers do exist that use the browser's user agent, so I don't recommend blocking those.

PurpleRamen 3 days ago [-]
Curious what impact this will have on web crawlers. Will they ignore it? Will they push it, since they have subscribed themselves to the feed? Or will they experience a divide-by-zero situation and go bonkers?
kevincox 1 day ago [-]
Many web crawlers (including Google, IIUC) will subscribe to the feeds. But I think they just use them as a way to efficiently discover new pages rather than directly surfacing the results in search. So, since the posts don't have a web URL, they likely won't appear in search results.
doodpants 3 days ago [-]
...But I don't want to read blogs in my feed reader. I always click the permalink to read the post at its original source. What's the purpose of forcing me to endure my reader's inferior UX?
WorldMaker 3 days ago [-]
You could try reading this site in a different feed reader that does polish its UX?
doodpants 2 days ago [-]
Well, it's not really about the UX. My purpose for using a feed aggregator is to keep up with blogs that I want to read, not to replace the sites on which I read them.
rubinlinux 3 days ago [-]
How do I know if I want to sub to this if I can't see what type of things they post first?
crtasm 3 days ago [-]
I can see by clicking the .xml link in my web browser.
Waterluvian 3 days ago [-]
It’s basically just an API then?
Gormo 3 days ago [-]
No, it's just an Atom feed, with no corresponding HTML page. No APIs are really involved.
PurpleRamen 3 days ago [-]
It's delivered as HTML embedded in XML, instead of pure HTML or some JavaScript/HTML shenanigans. Not really an API.
lasermike026 3 days ago [-]
This works.