Spamming the Internet
When Google announced that Reader was to be no more, I, like many other geeks, took matters into my own hands. I was, until recently, a daily user of Reader. There are only a handful of RSS feeds that I follow, but I like them precisely because they are not Rich Content or an Immersive Experience.
They are plain-text, pure content.
My solution was pretty hacky, I must admit. I set up a WordPress blog and pointed IFTTT at some of my favorite feeds to re-post them there automatically. The whole thing took a couple of hours, plus a few minutes for the AJAXy goodness of inline-expanding posts.
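For the curious, the recipe boils down to something like the sketch below. It’s a rough stand-in written in Python rather than what IFTTT actually runs, and it isn’t my exact setup: it assumes a stock WordPress install with the REST API enabled, and the feed list, blog URL and credentials are all placeholders.

    # Rough sketch of the "watch a feed, re-post it to WordPress" idea.
    # Everything here is a placeholder: swap in your own feeds, blog URL
    # and credentials. Needs the feedparser and requests packages.
    import feedparser
    import requests

    FEEDS = [
        "https://news.ycombinator.com/rss",  # example feed
    ]
    WP_API = "https://example.com/wp-json/wp/v2/posts"  # hypothetical blog
    AUTH = ("bot-user", "application-password")         # hypothetical creds

    def repost(feed_url):
        for entry in feedparser.parse(feed_url).entries:
            post = {
                "title": entry.title,
                # plain text, plus a link back to the original source
                "content": entry.get("summary", "")
                           + '<p><a href="%s">Original post</a></p>' % entry.link,
                "status": "publish",
            }
            requests.post(WP_API, json=post, auth=AUTH, timeout=10)

    for url in FEEDS:
        repost(url)

Run that on a cron job and you have a poor man’s aggregator, minus de-duplication; in practice you’d also want to remember which entries you’ve already posted.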
It was only today that I found one of the posts in a Google search result. Weird.
I don’t pretend that this is original content, and I’m not entirely sure that what I’m doing is permissible. In my defense, every post links back to the original source, and many of them (Hacker News, in particular) are just links with no other content.
But it really got me thinking about what the next stage in the evolution of the internet might look like. Let’s say stage 1 was about publishing. Not really big-media publishing; just making your voice heard and communicating. And stage 2 was about interacting: linking, commenting and dialogue. I’m going to say stage 3 is about living. People putting huge amounts of their lives on the internet simply because that’s where they live. Not really seeking a reaction or looking for pageviews or likes or whatever. Just existing on the internet.
But humanity is not the only inhabitant of the web. My robot assistant will testify to that. My aggregator is gathering stuff for me to read and, as a side effect, publishing something that someone else might like to read too. No longer am I posting stuff that I want others to hear, or that I want others to engage with. I am farming the work out to robots that post stuff purely for my own consumption.
So my question is: am I spamming the internet? Or is this the fourth stage of the internet? A stage where original, human-produced content is just a small droplet in an ocean of automatically generated content.
On May 7, 2013 7:07 AM, “Jack” wrote:
Funny story: I found this post via completely unrelated search terms that hit feedr.
IMO you’re fine; re-distributing content is pretty normal. You could always slap a robots.txt on feedr.
Jack
Good point. A robots.txt would effectively make it a non-issue. The problem is that I think I do want it available in search results. I just wish there were some human- and machine-friendly way of saying “hey, this is not original content, but feel free to bookmark and index it if you like”.
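For reference, Jack’s robots.txt suggestion would only take a couple of lines, assuming feedr sits on its own domain:

    User-agent: *
    Disallow: /

Which is precisely the problem: it tells every crawler to stay away entirely, rather than “go ahead and index this, just know it isn’t original”.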