The official Obsidian Web Clipper is *almost* exactly what I need, but it's a narrow miss. 😕 obsidian.md/clipper

Up until now I've used Markdownload and then copied the resulting files into Obsidian, which is clunkier than the new clipper, but it has one important feature: it downloads images. I clip articles not only to help me find them again but also to protect against web rot, and the official clipper only links to the web versions of the images. I don't trust those not to randomly go away.

Unfortunately Obsidian doesn't seem to have a way to convert those to local images, either automatically on clipping or even easily by hand afterwards. So I'll be sticking to Markdownload until they add that as an option.
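The manual fix isn't complicated, just tedious. A rough sketch of the idea, untested; it assumes notes with plain ![alt](https://...) image links, and the attachments folder and filename handling are simplistic:

#!/usr/bin/env bash
# localize-images.sh: download the remote images referenced in a
# markdown note and rewrite the links to point at local copies.
# Usage: ./localize-images.sh note.md
set -eu
note="$1"
mkdir -p attachments
# Pull the URLs out of ![alt](https://...) image links.
grep -oE '!\[[^]]*\]\(https?://[^)]+\)' "$note" \
  | sed -E 's/^!\[[^]]*\]\((.*)\)$/\1/' \
  | while read -r url; do
      # Naive filename handling: drop any query string, keep the basename.
      file="attachments/$(basename "${url%%\?*}")"
      # Skip images that fail to download.
      wget -q -O "$file" "$url" || continue
      # Point the note at the downloaded copy (assumes the URL contains
      # no characters that sed's | delimiter can't cope with).
      sed -i "s|$url|$file|g" "$note"
    done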

@kepano OK, the fact that it was mentioned suggested a long lead time, but I've solved this via a plugin for now anyway, so all good. Thanks!

@sinbad if I'm understanding correctly, you just want to download web pages to refer to later, even if they drop offline?

wget -pk something.com

👍

-p downloads the page requisites (images, stylesheets and so on) as well as the raw page, so that it renders fully offline. -k rewrites the page to link to the downloaded assets instead of the originals.

Full man page here: man7.org/linux/man-pages/man1/

(works on Windows too, just needs installing first)

@sinbad urgh, thanks Mastodon for making something.com the link card instead of the more useful man page 😂

@dev_ric not exactly, I want them stripped of all the cruft and converted to markdown. Markdownload works well for that as a general tool, but now Obsidian has a clipper which does the same, barring the images, and I've solved that with a plugin. Thanks anyway!

@sinbad ah I see. You could still use wget to grab the page and then pandoc to convert the HTML into MD, so still just two commands (or throw them into a bash script you just feed the URL to, like the sketch below), but removing the 'cruft' would probably require more effort. Parsing the HTML into a tree and stripping the unwanted nodes during conversion would be my approach. Not worth the time if you've sorted it with another tool anyway.
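A minimal sketch of that two-command script, assuming wget and pandoc are installed (the script name, the intermediate filenames and pandoc's gfm output format are just illustrative choices):

#!/usr/bin/env bash
# clip.sh: fetch a web page and convert it to markdown.
# Usage: ./clip.sh https://example.com/article
set -eu
url="$1"
# Grab just the page HTML; -p/-k matter more when you want a fully
# rendering local HTML copy rather than a markdown note.
wget -q -O page.html "$url"
# Convert the HTML to GitHub-flavoured markdown.
pandoc -f html -t gfm page.html -o page.md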