Distribution
https://twitter.com/Iron_Spike/status/1495165593456160769
Nah, WP Crowdfunding was a one-time $400 fee, and when the project is over, I'm converting the funding page to a store page. It's actually really well-integrated into Wordpress!
And my online store is already high-traffic, has been for years, so the hosting plan won't change. https://twitter.com/PaulFerence/status/1495164119250579459
My most expensive ongoing charge, actually? My mailing list.
After you get over 50k addresses, the cost goes monthly and triple-digits.
Weep. 💸
@cabel · Feb 19, 2022
Our list is over 500k at this point and we’ve since switched to a self-install of “Sendy” and I absolutely love it. We pay very little to Amazon SES to deliver the emails and Sendy is a one-time purchase. Lots of admin, yes, but tuck this one into your back pocket for someday!
@Iron_Spike · Feb 19, 2022
OOOOOO. Thank you!
- https://developers.google.com/search/docs/appearance/site-names
- Mastodon instance with 6 files
- https://muenchen.social/@fnordius/110855606336446530
- https://web.archive.org/web/20220316060312/https://whalecoiner.com/articles/progressive-enhancement
- https://seirdy.one/posts/2020/11/23/website-best-practices/
- https://tracydurnell.com/2022/07/20/add-your-website-to-directories/
- https://defector.com/the-internet-isnt-meant-to-be-so-small
- https://mxb.dev/blog/the-indieweb-for-everyone/
- https://biblioracle.substack.com/p/speed-and-efficiency-are-not-human#%C2%A7in-pursuit-of-the-non-disposable
- https://notes.jim-nielsen.com/#2023-04-03T0916
- https://web.archive.org/web/20221220052540/https://manuelmoreale.com/on-the-current-decentralisation-movement
- https://www.webcomics.com/comiclab/comiclab-ep-251-everything-old-is-new-again/
- https://adactio.com/notes/20226
- https://www.princexml.com/
- https://mastodon.social/@lori@hackers.town/110551141192054607
- https://mastodon.social/@socketwench@hackers.town/110531690905675718 -- this is why you want an XSLT stylesheet for RSS/Atom pages (sadly does not work for JSON, but I guess you could do content negotiation for that?)
- https://natclark.com/tutorials/xslt-style-rss-feed/
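  To hook a stylesheet up, you add an `xml-stylesheet` processing instruction at the top of the feed, and browsers that open the feed URL directly will render the transformed HTML instead of raw XML. A minimal sketch (the `/feed.xsl` path is made up; the stylesheet itself transforms the feed into HTML):

  ```xml
  <?xml version="1.0" encoding="utf-8"?>
  <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
  <feed xmlns="http://www.w3.org/2005/Atom">
    <!-- the rest of the feed as usual -->
  </feed>
  ```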
- POSSE
- https://adactio.com/journal/20323
- https://www.postybirb.com/
- https://rarebit.neocities.org
- https://ericwbailey.website/published/my-jeans-metadata-may-outlive-the-company-that-sold-them/#absurdity%2C-ai%2C-and-archiving
- https://www.onceupondata.com/post/objective-function-engineering-interface-design/
- https://biblioracle.substack.com/p/speed-and-efficiency-are-not-human#%C2%A7but-what-if-were-not-geniuses
- https://www.smashingmagazine.com/2020/08/autonomy-online-indieweb/
- https://cheapskatesguide.org/articles/why-have-a-personal-website.html
- Metadata
- Offline with service workers
- Oh, Google doesn't use `link[rel=next|prev]` anymore…
- You can do some wonderful things with the site design to reinforce your comic, like https://lackadaisy.com and Paranatural do
- How Perry Bible Fellowship makes the curtains part when the comic image starts loading using nothing more than HTML and CSS
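  I haven't read PBF's actual source, so treat this as a guess at one no-JS way to get a curtain reveal: two absolutely-positioned curtain halves over the comic, slid aside by a CSS animation that plays on page load (PBF's real trigger may differ). All class names here are made up:

  ```html
  <div class="stage">
    <img src="comic.png" alt="Today's page">
    <div class="curtain left"></div>
    <div class="curtain right"></div>
  </div>
  ```

  ```css
  .stage { position: relative; overflow: hidden; }
  .stage img { display: block; width: 100%; }
  /* each curtain covers just over half the stage so they overlap in the middle */
  .curtain { position: absolute; top: 0; width: 51%; height: 100%; background: darkred; }
  .curtain.left  { left: 0;  animation: part-left 1.5s ease-in-out 0.5s forwards; }
  .curtain.right { right: 0; animation: part-right 1.5s ease-in-out 0.5s forwards; }
  @keyframes part-left  { to { transform: translateX(-101%); } }
  @keyframes part-right { to { transform: translateX(101%); } }
  ```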
- Is it still worth submitting to…
- https://www.topwebcomics.com
- http://www.thewebcomiclist.com
- Piperka? If so, use `<a rel="start prev next index">` (sketch below)
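  Something like this on each comic page, with whichever `rel` values apply (the URLs are placeholders):

  ```html
  <nav>
    <a rel="start" href="/comic/1">First</a>
    <a rel="prev"  href="/comic/41">Previous</a>
    <a rel="next"  href="/comic/43">Next</a>
    <a rel="index" href="/archive">Archive</a>
  </nav>
  ```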
- http://fleen.com/2008/01/01/i-want-my-ten-dollars/
- HTTPS
- https://www.phpied.com/minimum-viable-sharing-meta-tags/
- https://snook.ca/archives/html_and_css/open-graph-and-sharing-tags
- https://meiert.com/en/blog/minimal-social-markup/
- what about RSS readers/email? the latter might be hackable with `<table>`, but the former… https://twitter.com/mrled/status/1369419333613613059 — https://twitter.com/mrled/status/1369665161611591680
- `meta[name=description]` is probably more important than usual, since for regular web pages you can easily just reuse the first few sentences of text content. That’s not as desirable for webcomics, though, I think. (See the sketch below.)
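  A minimal set along the lines of what those posts suggest (the exact tags vary a bit between the posts; the URLs and copy here are placeholders):

  ```html
  <meta name="description" content="Page 42: our hero finally opens the door.">
  <meta property="og:title" content="My Webcomic, Page 42">
  <meta property="og:description" content="Our hero finally opens the door.">
  <meta property="og:image" content="https://example.com/comics/42/preview.png">
  <meta name="twitter:card" content="summary_large_image">
  ```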
- Why bother with valid, semantic HTML?
- Default (and usually non-overrideable) styles in RSS readers, Reader Modes, Apple Watch
- Better autotranslation results
- Better results in assistive technology (but not as good as you’d hope)
- Browsers are nowadays quite good at fixing invalid HTML all the same way, but keeping your HTML valid helps other services, tools, and parsers that aren’t browsers:
- https://stackoverflow.com/questions/48670709/how-can-i-prevent-google-translate-from-changing-the-html-structure-of-my-page
- https://www.stubbornella.org/2023/09/17/expanding-your-touch-targets/
- “PS: There are various reasons to not register on a website merely to see things on the website, including that registering usually requires you to agree to their terms of service, which often contain things ranging from bad to terrible.”
- https://wheresyoured.at/p/absentee-capitalism
- https://mastodon.social/@noracodes@tenforward.social/110559608594656805
- https://manuelmoreale.com/on-the-state-of-the-web:
You can still set up your tiny quiet corner on the web, do your own things, and connect slowly with other people. You can still set up a forum dedicated to something you're passionate about and create a community with 50 other people, even if Reddit turns to shit. Things can live on the web simply because enough people care about them and pour time and love into them. And that is what makes the web special.
The organization saves development and testing costs by writing and deploying native JavaScript that targets only modern browsers. Through an approach inspired by BBC News’ cutting the mustard, the Foundation enables millions of people (1% of its 2 billion monthly users) to access Wikipedia through a JavaScript-free experience. This is the same experience that all page views start at prior to the (optional) arrival of JavaScript code.
The Wikimedia Foundation’s development principles and browser support policy reflect this by emphasizing the importance of progressive enhancement.
Viewing Wikipedia through a web browser is the most common access method, but Wikipedia’s knowledge is consumed far beyond the canonical experience at Wikipedia.org. “Wikipedia content goes everywhere. It’s distributed offline through Kiwix and IPFS, rendered in native apps like Apple Dictionary, and even shared peer-to-peer through USB sticks,” said Timo. What these environments have in common is that they may not involve JavaScript as they require high security and high privacy. This is made possible at no extra cost due to APIs offering complete content HTML-first, with CSS and embedded media based on open formats only.
https://openjsf.org/blog/2023/10/05/wikimedia-case-study/
Digital preservation
- https://xkcd.com/1683/
- https://cloudfour.com/thinks/spoiled-by-the-web/
- https://wordpress.com/100-year/
A public website is a great first step towards preserving your work, since that lets the Wayback Machine and other web archivers automatically save copies. (Many of my favorite webcomics from growing up can now only be found there; 8EB, etc. Note that the autocrawler isn’t guaranteed to get everything, especially for less popular sites, so it’s a good thing these services also let you manually request archiving particular URLs. I wonder if you can script an archival request when publishing a page…) Letting others archive their own copies is an especially good idea in the face of unnoticed data corruption over time.
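Turns out you probably can: the Wayback Machine has a “Save Page Now” endpoint at web.archive.org/save. A sketch, assuming that endpoint’s current unauthenticated behavior (check their docs before relying on it):

```python
# Sketch: ask the Wayback Machine to archive a page right after publishing it.
# Assumption: https://web.archive.org/save/<url> still accepts plain GETs.
from urllib.request import urlopen

def request_archive(url: str) -> None:
    # The Wayback Machine crawls the URL and stores a snapshot.
    with urlopen("https://web.archive.org/save/" + url) as response:
        # A 200 here means the snapshot request went through.
        print(response.status, response.geturl())

request_archive("https://example.com/comic/42")
```

You could call this from whatever script or CI step publishes a new page, so every update gets archived without you thinking about it.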
Even the Library of Congress recognizes HTML for its digital preservation qualities:
- https://www.loc.gov/preservation/digital/formats/fdd/fdd000475.shtml
- https://www.loc.gov/preservation/digital/formats/fdd/fdd000481.shtml
- Avoid 3rd parties, because they go down independently of your website (like that one Robin Rendle post's TypeKit fonts)
- Lots Of Copies Keep Stuff Safe (LOCKSS)’s name has a good point, especially with the unfortunate lifespan of most digital storage methods. You may even want to print out hard copies, since normal-ass paper can last 70+ years if stored properly
- Descriptive filenames/organized filepaths
- https://en.wikipedia.org/wiki/Digital_preservation#Personal_archiving
- Platforms like Instagram, Twitter, Facebook, Imgur, etc. are great for immediate reach, but very bad for preserving your work in both the longish and long term
- Aggressive media compression you can’t control
- Can change terms/owners/business models on you, or shutter
- Twitter.
- Tumblr’s legendary attempt at automated porn detection that had overzealous blast ranges
- Flickr’s scares over the years despite being the OG photo archive
- Imgur’s 2023 ToS changes
- Hyper-optimized for “now”, almost never good for archival reading. (Like when I tried to read that Nuzlocke comic on dA: slow, janky pageloads even when just clicking “next” (maybe because I was logged out); having to zoom in on each page; the author clearly struggling to get pages presented in the right order; and every ~20 pages the next button plain not working, because of dA’s boneheaded React code.)
- Static HTML is best for keeping websites up with a minimum of maintenance
- Don’t abuse JS or your site’ll end up like the Wacky Crackheads — archive.org tried its best, but alas
- Adobe/Pantone have shown you can’t trust their file formats. And, you know… Flash. (And Director/Shockwave, like early Platinum Grit)
- But what’s so different about Web standard formats like HTML, CSS, and SVG compared to formats like Flash, PSD, and native apps?
- Open standards with multiple competing implementations
- Browsers are ferocious about backwards compatibility; even Chrome didn’t succeed in offing SMIL
- Lack of licensing fees
- What about the formats that aren’t Web standards, like JPEG/PNG/GIF, MP3/MP4, etc.?
- JPEG is indeed not great as a source format, and even PNG has disadvantages compared to archive-quality TIFF.
- You probably shouldn’t publish your comics’ original/raw source files online (unless maybe you want to?), but:
- Compression is great for ensuring the maximum number of readers can/will read your comic in the short- to longish-term, but it hurts truly long-term preservation (even the lossless kind). However, holy god, video without compression is a giant pain — if you keep your video files small and/or short, maybe it’s doable? Maybe.
But I’m an artist, not a coder!
- https://buttondown.email/cascade/archive/001-css-is-a-liberal-art/
- https://thoughtbot.com/blog/tailwind-and-the-femininity-of-css
- https://codepen.io/tag/art
That’s how I felt for about a decade of my life.
If 12-year-olds could learn how to make Web pages through word-of-mouth back in the 90s…
- when they weren’t “supposed” to,
- when computers and internet connections were hundreds of thousands of times slower,
- when browsers were incredibly worse,
- when there were far fewer tutorials freely available
…then you can learn how to make Web pages today. In my experience, learning how to make comics takes way more technique and patience. If you can make comics, then you absolutely can learn enough HTML+CSS to make your webcomics do things that interest you.
https://daringfireball.net/2003/07/independent_days
The web is where independents shine. Independent web sites tend to look better and are better produced. Their URLs are even more readable. This isn’t bluster about the future, this is a description of today. With a text editor and an Apache web server, you’re on equal footing with any web site in the world.
This can still be true today. Big, well-funded websites have drawbacks your site doesn’t have to:
- Too much JavaScript
- More specifically, too much tracking JS, which necessitates cookie banners and other bullshit readers hate
- Image presentations that aren’t ideal for comics
- Overaggressive image compression
- Modals, carousels, and deceptive patterns
- Being next to content/ads that’re optimized for engagement (sex, violence, anger, political causes opposed to what your work is trying to say, etc.)
- “Recommended” sidebars and other widgets that try to get your readers to leave your work
- Their own branding and UI fighting against or competing with your work
- If you choose to run ads, you get to control where they go, how big they are, what they advertise, and what network you get them from (there are even ad networks optimized for webcomics specifically!)
Also, there’s lots of historical context behind front-end web developers (the kinds that use HTML & CSS) being thought of as artists.
Why your own website?
You can (at least try to) block AI from training on the work you post to your own website.
- https://howtomarketagame.com/2021/11/01/dont-build-your-castle-in-other-peoples-kingdoms/
- https://neil-clarke.com/block-the-bots-that-feed-ai-models-by-scraping-your-website/
- https://github.com/healsdata/ai-training-opt-out/blob/main/README.md
- https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers#google-extended
- https://searchengineland.com/google-extended-does-not-stop-google-search-generative-experience-from-using-your-sites-content-433058
Hey! For anyone who has a website and doesn’t want their work used for generative models (a.k.a. “AI”), I did some research and found ways to block some of them.
For ChatGPT and Google, add the following to your site’s `robots.txt`:

```
User-agent: Google-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /
```
`GPTBot` is used for the actual Large Model training, and `ChatGPT-User` is, I guess, a sneaky workaround to let people using the chat interface ask ChatGPT to fetch and process URLs?
The `noai,noimageai` pseudo-standard
For deviantArt and hopefully others, add the following to every HTML page’s `<head>`:

```html
<meta name="robots" content="noai,noimageai">
```
This may or may not work for companies other than deviantArt in the future — as the Cohost admins point out:
It’s possible that dataset or model vendors will see platforms blanket-tagging pages with `noai` and `noimageai`, and decide that the original intent of the standard — to reflect an artist’s personal, conscious decision not to have their artwork scraped — has been compromised, and their scrapers will start disregarding the flags altogether. In this case, they’ll probably blame us for not playing fair; so be it. At least then they’ll have to admit how thin their commitment to respecting this standard was in the first place.
(You can also set it as an HTTP header on every URL, not just HTML ones, via `X-Robots-Tag: noai,noimageai`. Doing that is a little more involved, since setting it there requires access to your Web server software and syntax specific to it.)
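For example, in the two most common servers (assuming the stock header modules; adjust for your host):

```apache
# Apache: in .htaccess or your vhost config; needs mod_headers enabled
Header set X-Robots-Tag "noai,noimageai"
```

```nginx
# nginx: inside a server or location block
add_header X-Robots-Tag "noai,noimageai" always;
```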
There’s also `ai.txt`; I don’t know who actually respects it, but it’s a small text file, so it doesn’t hurt to have one.
```
# ai.txt
User-Agent: *
Disallow: *.txt
Disallow: *.pdf
Disallow: *.html
Disallow: *.gif
Disallow: *.ico
Disallow: *.jpeg
Disallow: *.jpg
Disallow: *.png
Disallow: *.svg
Disallow: *.svgz
Disallow: *.webp
Disallow: *.m4a
Disallow: *.mp3
Disallow: *.opus
Disallow: *.mp4
Disallow: *.webm
Disallow: *.ogg
Disallow: *.js
Disallow: *.css
Disallow: *.php
Disallow: /
Disallow: *
```
What about Google (Bard) and Bing (AI)? [outdated, see above]
Unfortunately there seems to be no way to disallow Google and Bing from using your site for large model training (i.e. Google Bard and Bing AI) without also blocking them from accessing your website entirely. That’s probably intentional on their part — you could block `Googlebot` and `Bingbot` entirely if you’re okay with being unsearchable on the Web.
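If you do want that nuclear option, it’s just more `robots.txt` (remember, this also removes you from their search results, which is the whole problem):

```
User-agent: Googlebot
Disallow: /

User-agent: Bingbot
Disallow: /
```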
I’ve seen recommendations to block CommonCrawl’s bot, but I doubt that does any good:
- It doesn’t retroactively remove you from older crawls (starting from 2008)
- There are plenty of other crawlers for “AI” peddlers to use if enough people block CommonCrawl that it’s no longer useful to them
- CommonCrawl is a charity with good uses, for research and suchlike
- Gives you a big head start each time you start on a new platform, so you don't have to rebuild your follower count completely from scratch each time
- https://mastodon.social/@anniegreens@social.lol/110638969297544737
- https://mastodon.social/@bramus@front-end.social/110635311524556630 (note this was already pretty true for the Media tab, which is especially relevant to comics)
- You don’t have to second-guess and protect against basic formatting/mangling of your art
Nobody can shadowban you, autocensor your work, or abuse automation to falsely detect Terms of Service violations. You can draw nipples of any kind, and Tumblr can’t tell you no. Your website is immune to the enshittification every big company platform seems doomed to.
Your webhost may screw with you, kowtow to DMCA takedown requests, or even just plain go out of business — but web hosting is a commodity. You can take your website to a competitor. Or even host it yourself, if you’re willing to leave a computer (or phone!) on 24/7.
The tradeoff for this is you’ll probably have to start paying for your website if it gets sufficiently popular, whether that be from outgrowing free hosting plans, signing up for a CDN, or paying a guy to worry about it for you.
But a popular website is a good problem to have!
Several years ago, I was unhappy with the hosting service that my website was on. So I downloaded all the data from the entire website, uploaded a multi-gigabyte tarball to a new hosting service, changed my domain name so it pointed to the new service, then closed my account with the old service.
Erin Ptah is the author/artist of the comics Leif & Thorn and But I’m A Cat Person. Both are hosted on multi-database websites that integrate third-party applications, open-source components, seamless interactions with major platforms like Patreon and Twitter, and the artist’s own custom-written code.
—https://www.comicsbeat.com/opinion-kickstarter-wont-explain-their-blockchain-protocol-so-i-will/
There can be technical hiccups with transferring website data like this, but notice the words I used. Technical hiccups. The kind that are inconvenient, but solvable.
Web publishing is weirder than normal publishing
One thing that can trouble authors is that the Web has its own style of “good publishing”.
Graphics editor programs let you control every pixel, promising identical reproduction when you publish. But the Web’s not like that. Its unprecedented ease of access and global reach come at a price: inherent flexibility, and eternally-partial feature support across almost every personal computing device there is:
- Differences of how browsers/operating systems draw text, scale images, etc.
- Screens of differing resolutions, color depth, and quality
- Differing support for features in HTML, CSS, JS, etc.
A Dao of Web Design
Website platforms are still platforms
Instagram
- Portability
- None.
Can publish HTML comics
- No.
Worthwhile site functionality
- Almost none. You get one link in bio, and that's it.
- No theming possible
- [Does Instagram have RSS?]
- Reading the comic chronologically from the beginning is not possible.
Shitty parts
- Users without Instagram accounts have a really bad time
- No control over how images are displayed
- Aggressive image optimization can make your comic look bad and you can’t do anything about it
- Supporting probably the worst tech company in the world
- Followers not actually likely to see your updates
Do you own your site?
- No
Tumblr
- Portability
- eh. There are export tools, at least
Can publish HTML comics
- Kinda — they let you make “custom HTML pages” in the theme editor. Those don’t work in their normal posting workflow though, so you’ll need to make a post linking to them to show up in people’s Dashboards/RSS/schedule posts, etc.
Worthwhile site functionality
- ✅ Good RSS (choose from all posts or specific tags-only)
- Themes as custom as you can code
- Seems to have notifications support for updates once you “follow” a blog, but that requires a Tumblr account
- 🆗 You can mess with the theme to do things like PWAs
- The right theme can set up `/chrono` posts to read posts in order, like most webcomic archives should
Shitty parts
- Automated NSFW detection algorithms are still annoying in the best case
- Tons of shitty JS you can’t remove
- Image resizing/optimization can be aggressive, very long images get cropped
Do you own your site?
- No
Wix
- Portability
- [I don’t know yet]
Can publish HTML comics
- [dunno yet]
Worthwhile site functionality
- ❌ No RSS as far as I can tell
Do you own your site?
- No
wordpress.com
- Portability
- ✅ Great — you can freely export all your site content and infrastructure and upload it to your own hosted WordPress install
Can publish HTML comics
- I think so? Might require some creativity with their tag allowlist, though.
Worthwhile site functionality
- Can choose from various themes, but I dunno if you can code your own
- Good RSS (choose from all posts or comments)
Do you own your site?
- No, but good portability
Self-hosted WordPress (a.k.a. wordpress.org)
- Portability
- 🌟 Most popular option by far — hosts have tons of import/export tools
Can publish HTML comics
- 🌟 (not as easy as it could be, but a plugin could easily make it so)
Worthwhile site functionality
- 🌟 (for everything other than HTML comics, there’s a plugin for it, or content online showing you how to code it yourself)
- Email subscription updates, good RSS, infinitely customizable RSS
Shitty parts
- You have to stay on top of updates. Like your phone/app updates, but with higher security problems if you fall behind.
Do you own your site?
- Yes
Squarespace
- Portability
- Not sure yet, but nothing else out there uses its HTML templating system if you turn on Developer Mode
Can publish HTML comics
- [not sure]
Worthwhile site functionality
- Sort of has RSS, but it’s annoying and you can’t get one for all site updates.
Do you own your site?
- No
Static HTML hosts
- Portability
- Best of all possibilities.
Can publish HTML comics
- Duh
Worthwhile site functionality
- Things like comments and dynamic behavior are by default not possible. But if you’re not afraid of code, you can set it up with IndieWeb technologies and regenerating pages.
Do you own your site?
- Yes
Why mailing lists?
https://buttondown.email
Why old-fashioned feeds?
- https://toot.cat/@jamey/110798736243334838
- https://utcc.utoronto.ca/~cks/space/blog/tech/AtomVsRSS
- https://darekkay.com/blog/rss-styling/
- https://www.theverge.com/23778253/google-reader-death-2013-rss-social
- https://notes.jim-nielsen.com/#2022-12-31T1228
- https://justingarrison.com/blog/2022-11-22-hugo-rss-improvements/ (note: the author seems to not have closely read the feedly docs he links to; his desired improvements, like theme color, are mentioned by feedly as also needing an SVG `<webfeeds:logo>`, among other bits & bobs — see the sketch below)
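  From what I remember of feedly’s `webfeeds` extension docs, the extra bits look roughly like this in an Atom feed (treat the element names and values as assumptions to verify against their docs):

  ```xml
  <feed xmlns="http://www.w3.org/2005/Atom"
        xmlns:webfeeds="http://webfeeds.org/rss/1.0">
    <webfeeds:logo>https://example.com/logo.svg</webfeeds:logo>
    <webfeeds:accentColor>00FF00</webfeeds:accentColor>
    <!-- the rest of the feed as usual -->
  </feed>
  ```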
a.k.a. Why RSS/Atom/`h-feed`/JSON Feed/etc.?
https://developers.google.com/search/docs/appearance/google-discover#feed-guidelines
RSS by platform
Some platforms make this easier than others:
- WordPress and Tumblr: It Just Works. They were doing RSS long, long before it was cool.
- Squarespace: error-prone. It only generates them per-collection, but most themes don’t have the HTML necessary to make subscribing to them easy. Squarespace also makes it easy to upload comics as things other than collections (which then get no feed at all).
- Wix: not that I know of
- Adobe Portfolio: doesn’t seem to. It can consume RSS feeds from elsewhere and display them though, which is a fine how-do-you-do.
- Most open-source CMSes either support RSS by default or have RSS plugins
- Build your own: you can make RSS files by hand… (see the sketch after this list)
- Neocities mentions RSS on the homepage, but the first search result for “neocities rss” is rssguide.neocities.org, which describes building it by hand; https://indieweb.org/NeoCities; https://webmentions.neocities.org
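Hand-writing one really is feasible. A minimal Atom feed, for instance (all names and URLs made up; run yours through a validator like validator.w3.org/feed before publishing):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>My Webcomic</title>
  <id>https://example.com/</id>
  <link href="https://example.com/"/>
  <link rel="self" href="https://example.com/feed.xml"/>
  <updated>2023-10-06T12:00:00Z</updated>
  <author><name>Your Name</name></author>
  <!-- one <entry> per update; bump <updated> each time -->
  <entry>
    <title>Page 42</title>
    <id>https://example.com/comic/42</id>
    <link href="https://example.com/comic/42"/>
    <updated>2023-10-06T12:00:00Z</updated>
    <summary>Our hero finally opens the door.</summary>
  </entry>
</feed>
```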
Why bother?
If this is all so confusing, and most readers don’t use feed readers, then why bother?
- RSS subscribers are power users: I used to check my comic bookmarks manually every day, but when I switched to RSS I managed to read like five times the webcomics without feeling overwhelmed
- RSS subscribers are fiercely loyal — because subscribing to feeds is a little bit annoying, anyone who does make it through is pretty damn invested in you
- Some of it gets surfaced automatically
- Chrome Mobile’s recommended posts use RSS under the hood
- search engines will use your feeds to more efficiently crawl and index your site updates
- Tons of services let you import RSS feeds to automatically update widgets or pages
RSS, also known as (and confused with) Feeds, Atom, and Google Reader, is how websites let people subscribe to updates.
To be fair, the whole situation is confusing. The underlying technologies don’t have that much going on, but they seem complex and unfriendly to non-techie users because of historical reasons, bad acronyms, and developers’ inability to let things be.
- RSS feed
- A URL that promises that every time you view it, you receive a list of the website’s recent updates in a data format that isn’t for humans to read, but for other programs to process and then show humans the updates.
- The feed can republish the full contents of the updates, or just provide a link to the full URL of the updates, or anywhere in between.
- RSS reader
- a.k.a. Feed reader: A program designed to automatically check RSS feed URLs, process the results, then show the updates to humans.
- Google Reader
- A specific RSS reader popular back in the day, mostly because it combined “read website updates” with a social layer that recommended things from RSS feeds other people were reading. Google killed it despite its popularity because that’s how Google do.
- Feed reader
- a.k.a. Reader app: Turns out most programs that call themselves “RSS readers” can subscribe to more than just RSS. Feedbin, for example, can also subscribe to email newsletters, social media accounts, podcasts, and YouTube channels. (Don’t tell anyone, but podcasts and YouTube channels publish their own RSS feeds, so Feedbin just knows how to look for those.)
- They can be phone apps, desktop programs, websites themselves, or a mixture of any of those. Some, like Feedbin, also let you consume their subscribed updates in other reader apps if you want, so the distinctions are all pretty loose.
Different kinds of feeds
- RSS, the standard feed file format
- a.k.a. RSS 2.0: Different from the phrase “RSS feed”, because that one has the genericized trademark problem.
- A specific way to code website updates into an XML file. Because it came first, not all of the decisions it made were good ones.
- Atom feed
- A specific flavor of RSS feed. All Atom feeds are RSS, but not all RSS feeds are Atom.
- Basically any RSS reader treats them the same to their users, but if you’re a programmer, going with Atom tends to be less buggy and easier to code: it’s still XML under the hood, but it has a better-defined processing model, more explicit escaping, and the weird warts of RSS 2.0 filed off.
- JSON Feed
- Even though the difference between RSS and Atom already confuses people, one day a programmer thought the technologies they’re based on were kind of old and icky. So they made a new data format for feeds using a technology called JSON that is more appealing to modern programmer sensibilities, but accomplishes exactly the same thing.
- To be more fair to the inventors, programming a correct RSS/Atom feed is harder than just passing some data to the JSON-creating function that every programming language has nowadays.
- h-feed
- The opposite of JSON Feed, sorta. One of the general issues with feeds is that you’re duplicating your website updates:
  - First, you publish updates on your website — in HTML.
  - Then, you publish those updates again as RSS-flavored XML, Atom-flavored XML, or JSON
- But HTML is already a data format that computers are really good at parsing and showing to humans. Why not just sprinkle some hints into your website’s HTML to let Reader software reuse your website itself as the update feed? That’s `h-feed`: some `class` attributes added to a website’s existing HTML that let software check for updates.
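A minimal sketch of the microformats2 markup involved (the class names are the real ones; everything else here is made up):

```html
<main class="h-feed">
  <h1 class="p-name">My Webcomic</h1>
  <article class="h-entry">
    <a class="u-url p-name" href="/comic/42">Page 42</a>
    <time class="dt-published" datetime="2023-10-06">October 6, 2023</time>
    <div class="e-content"><img src="/comics/42.png" alt="Our hero opens the door."></div>
  </article>
</main>
```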
As much as I like `h-feed`, separation between your site’s HTML and your feed updates can be nice:
- Feed-only content
- Reader programs don’t always have the features browsers do (embeds, CSS, JavaScript, etc.), so sometimes you want to provide a lower-fidelity version for Reader programs specifically. Like, if you embed an interactive data visualization into your blog post to support your point, RSS readers probably won’t support the `<iframe>` used to embed it, the JavaScript that parses the dataset, or the CSS+SVG used to actually visualize the data — so instead, you’d give Reader programs a preview image wrapped in a link to the full thing.
- Or, what Adrian Roselli does with his RSS Club
- Some RSS feeds don’t have websites attached.
Ads? Affiliates? "How can my site make money, or at least break even?"
- RIP Project Wonderful
- https://danluu.com/blog-ads/
- https://blog.pragmaticengineer.com/affiliates/
- Patreon or its more creator-friendly alternatives
- https://blog.pragmaticengineer.com/how-to-become-a-full-time-creator/
- https://lethain.com/tech-influencer/
Beware bringing Notification Syndrome onto your site
- The actual quote by Deming: “It is wrong to suppose that if you can’t measure it, you can’t manage it — a costly myth.”
- https://web.archive.org/web/20230522195228/https://blog.patreon.com/please-please-dont-a-b-test-that
- https://justingarrison.com/blog/2023-09-06-the-data-driven-falacy/
- https://mastodon.social/@sinbad@mastodon.gamedev.place/110791222429968133
- https://www.alexmurrell.co.uk/articles/the-age-of-average
- https://www.alexmurrell.co.uk/articles/the-errors-of-efficiency
- https://www.alexmurrell.co.uk/articles/big-questions-for-big-data
- https://www.alexmurrell.co.uk/articles/magpie-marketing
- https://www.alexmurrell.co.uk/articles/the-digital-dark-side
- ComicLab #251: Everything Old is New Again
- Whom the Gods Would Destroy, They First Give Real-time Analytics
- Beware the Cult of Numeracy
- Some blogging myths § myth: page views matter
- The Calm Web: A Solution to Our Scary and Divisive Online World
- https://critter.blog/2023/06/09/are-toddlers-polluting-your-analytics/
- https://themarkup.org/hello-world/2023/05/27/our-maps-dont-know-where-you-are
- https://www.newyorker.com/magazine/2021/03/29/what-data-cant-do
- https://tylergaw.com/blog/no-more-google-analytics/
- https://anotherangrywoman.com/2023/07/05/scams-upon-scams-the-data-driven-advertising-grift/
- https://en.wikipedia.org/wiki/Project_Wonderful:
  Ryan North was talking with a friend about how they severely disliked existing online advertisement services such as Google ads and AdSense, because advertisements on those platforms are priced based on user clicks or displays, and "the Internet isn't really designed to keep track of who clicked where, when, and who viewed what page when."
- https://robinrendle.com/notes/notes-on-hypertext/
- https://mastodon.social/@yuki2501@hackers.town/110814821490396003
Surveillance and the great splitting of the web
I just stumbled upon @ploum 's blog. Their latest article is called "splitting the web". It describes how the internet - mainly the WWW, is splitting in two: A corporate version filled with ads and trackers, and hobby version, the darknet where none of that shit exists. Very recommended read. https://ploum.net/2023-08-01-splitting-the-web.html
I'd like to add that just a few minutes after reading it, this conversation happened between me and another person here on Mastodon:
What site do you recommend me to do this or that? I'm asking because I want to be sure which domain I add to my javascript allow-list.
And that made me realize that we're already living in an online, virtual version of a cyberpunk dystopian setting: We're the outcasts, the people who are trying to live by; we are living in the slums where there are no commodities, no fancy stores, no malls, no holographic assistants, no flying cars and no gigantic glass skyscrapers; but we're also free from the high tech surveillance machinery, the hovering camera drones and servitors and police agents wearing faceless helmets and ready to strike down on anyone who gives them a bad look.
Like Ploum said in their essay, this place used to be filled with criminals and seedy services but now more and more normal people are driven to it out of necessity because life in Babylon City has just become impossible to keep up with.
The web's infrastructure is crumbling down, and tons of services exist just to monitor "who watches what" for advertisement purposes. Over 90% of the space from that one page article you're reading is filled with obnoxious ads with microscopic close buttons - some don't even have a close button, depending on who made the juiciest deal with the website - and over 90% of internet traffic is not the content, it's thousands of requests between you, the site, and ads and tracking companies.
If you're zapping YouTube, over 50% of your time is spent watching mandatory ads that don't give a shit how much content you actually watch, and most of the other half is you inadvertently getting pulled by the riptide of recommendations designed by a behavior-aware algorithm. And while the algorithm says to itself, "oh, this person's behavior is exactly as predicted, feed that info to the algorithm!", the truth is that you forgot to click pause and went for a bathroom break while running YT on autoplay. The overhyped AI sold to surveillance companies keeps feeding its own data like Ouroboros - the snake that eats its own tail, except you're part of it sometimes. Just sometimes. The algorithms have passed their usefulness and are devolving like a runaway tire out of its car, driven by inertia and are only spinning around. I'm willing to bet that not even their designers are entirely sure how it works.
Corporations and states are all fighting each other like dogs to harvest surveillance data while all the economy, all the jobs, all the air conditioned rooms with server racks and CAT-5 tendrils, all the cloud services, all the fast food restaurants strategically located, are spiraling around the very surveillance machine users want to get rid of. It's just a matter of time before the entire thing crumbles down beautifully like dominoes, and we are the outsiders who want to be in a safe place when that happens.
Away from us-east-1, us-west-2, and other cloud servers where everything is happening, we are setting up our crappy DIY servers running Apache or nginx where everything looks like a bad 1990s website written by a 15yo kid who just discovered their fandom; but it just works and our friends are happy with it.
Welcome to the Sprawl.
Notes from old Sparkbox presentation on analytics
- A/B testing: remember A/A testing, signal bias, outside factors, bots
- What scale do you need before A/B testing can even work?
- avoid simple counts — “vanity metrics”
- find what your weakness is with analytics — it’s not always on the site itself
- redundant data is required; across situations, even better
- avoid huge surveys
- What speed impact do analytics have?
- BEWARE simplistic measurements. Analysis is 95% of the job, and what you think is biased. People get degrees in statistics for a reason
- IF the data is accurate
- AND complete
- AND you’re asking the right questions
- AND you understand the situation
- AND you analyzed correctly
- AND the collection software works enough of the time %-wise (blockers, slow devices, etc.)
- THEN you can trust the data. (So basically, never. Unfortunately, data looks trustworthy)
- Acquisition: they came
- Activation: they tried
- Retention: they returned
- Referral: they recommended
- Revenue: $
- Happiness
- Engagement (time spent)
- Adoption
- Retention
- Task success (accomplishment)
- See: search/browse
- Think: save for later/discuss/reviews
- Do: action
Designing with Data (note the first 3 steps are "free", which means too many organizations stop there even though those steps are insufficient)
- Frame the question: known unknowns, unknown unknowns, iterate, new sources
- Start w/analytics: segment, "behaviors", general trends
- Add social listening: see what social media says (potential bias: only talky social users)
- Consider a study: attitude and/or behavior
- A/B test alternatives; but remember big picture
- Measure w/meaning: what, why, who
- End w/more questions
Punk DIY websites
- https://cheapskatesguide.org/helper/website-creation.html ; https://cheapskatesguide.org/articles/joys-and-sorrows.html (though this guy seems to be a conservative nut, so maybe don't link him)
- that maya.land link about personal websites being a radical act