infosex.exchange <3

You are probably looking for the infosec.exchange Mastodon instance

This host is mostly for my random stuff, and in small part acts as a well-intentioned placeholder for the typosquatted domain.

Discoverability and Archiving

Currently I'm using this host to save the items from my own feeds to the Wayback Machine and to provide in-links for search engines. I hate that I have to do this, but the nonsense ideology of Mastodon pretty much ruined search for the Fediverse as a whole, and even the fact that they eventually owned their mistake and implemented search didn't change that.
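A minimal sketch of that workflow, assuming an RSS 2.0 feed and the Wayback Machine's "Save Page Now" URL shape (the feed content and post URLs below are placeholders, not my actual feed):

```python
# Sketch: collect item links from an RSS 2.0 feed and build
# Wayback Machine "Save Page Now" request URLs for them.
# The feed string here is a stand-in; a real run would fetch
# the live feed and actually issue the GET requests.
import xml.etree.ElementTree as ET

def item_links(rss_xml: str) -> list[str]:
    """Extract <link> values from the <item> elements of an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("link") for item in root.iter("item")]

def save_page_now_url(url: str) -> str:
    """Build a Save Page Now URL; requesting it asks Wayback to archive `url`."""
    return "https://web.archive.org/save/" + url

FEED = """<rss version="2.0"><channel>
<item><link>https://infosex.exchange/post/1</link></item>
<item><link>https://infosex.exchange/post/2</link></item>
</channel></rss>"""

if __name__ == "__main__":
    for link in item_links(FEED):
        print(save_page_now_url(link))
```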

Yes, I (or anyone else) could do the same with other people's published feeds, regardless of the tantrum. No, you can't defederate this, because the process doesn't rely on an instance.

Gluttony Section for Search Engines

@MxVerda AFAICT, while Neocities states that it generates RSS, it is in fact just a hosting platform, and it is up to you to include RSS in your uploaded content (it may be worth asking them about this). Handcrafting RSS is a bit cumbersome, and since the links on your blog already seem to be broken, you are probably better off using a static site generator that takes care of proper linking and RSS too (possibly via a plugin).
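For reference, a handcrafted RSS 2.0 feed is just a small XML file along these lines (site name, URLs, and dates below are placeholders, not anyone's actual feed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.neocities.org/</link>
    <description>Posts from my Neocities site</description>
    <item>
      <title>First post</title>
      <link>https://example.neocities.org/posts/first.html</link>
      <pubDate>Mon, 02 Dec 2024 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

The cumbersome part is keeping every `<item>` and its `<link>` in sync with the real pages by hand, which is exactly what a static site generator automates.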

this post | permalink
@FuzzyAleks I'm bringing this up because a clearly visible migration to bsky started once the election was decided. I don't think the timing is a coincidence.
this post | permalink
[RSS] Cross-Site POST Requests Without a Content-Type Header

https://nastystereo.com/security/cross-site-post-without-content-type.html
this post | permalink
@pancake 1st
this post | permalink
@mrose.ink.bsky.social said it perfectly:

https://bsky.app/profile/mrose.ink/post/3lbwpud2mes2n

"One enduring complication with all this is that scraping happens all the time for reasons that people *don’t* find inherently objectionable, and in fact support—the Wayback Machine, all kinds of public health and extremism research, etc. The mistake was assuming that goodwill transfers.

A key problem in the Disc Horse (and policy to a lesser extent) is reminding people that scraping as a technological process is Important, Actually, for all the things You Think Are Good, and any proposed solutions to curtail GAI training uses need to be VERY narrowly tailored to not impact those.

All the proposed solutions so far have had some critical flaw that makes them unworkable.

Manual consent? Ok, how do we implement that at scale? robots.txt style flags are fine, but they’re also not legally binding—and that’s good! If they were, Wayback wouldn’t be able to index!

So exclusion protocols can be ignored, For Good Reason. “What if we give an exclusion protocol the force of law for this specific use?” Closer, but there’s active debate in the courts about whether this is all a fair use, and if the answer is “yes,” then it doesn’t matter

…then best case scenario the tags are rendered null (because you can’t legally override fair use), and worst case you’ve just recreated a DMCA 1201 style lockout trick, and we have spent the last 25 years seeing just how incredibly those fuck up everything around them."
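For context on the "robots.txt style flags" mentioned in the quote: an exclusion file is just a plain-text hint like the sketch below (GPTBot, CCBot, and ia_archiver are published crawler tokens; compliance is entirely voluntary, which is exactly the quote's point):

```
# Ask AI-training crawlers to stay away...
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# ...while explicitly welcoming the Wayback Machine.
User-agent: ia_archiver
Allow: /
```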
this post | permalink
@joepie91 OK, please let me know when the scraping stops because of our collective will!
this post | permalink
@joepie91 - You're still assuming you can know about the scraping in the first place
- Money doesn't stink
this post | permalink
@joepie91 Do you really think people who want to e.g. earn money with this give a flying fart if they are excluded from a community (which they weren't part of in the first place)?
this post | permalink
@joepie91 Based on arguments I had over here, people definitely believe that technical measures at the publishing platform (such as limiting search) can affect this. Also, what is the point of being outraged at the single person who is open about his scraping, while I guarantee you a dozen other orgs are doing the same right now and just not talking about it?
this post | permalink
Here we go again, explaining to supposedly technologically literate people that what they *publish* on the Internet can and will be scraped... Bluesky's explanation ("we can't enforce this") is on point, btw.

RE: https://infosec.exchange/@josephcox/113551853623942786
this post | permalink
Next Page