I’m lucky to have had access to the internet for most of its maturation. From the GeoCities era to the present, quite a number of phases have unfolded.
[Point 1 – Financial incentive is a large part of why the internet is now garbage]
When the internet was in its infancy, collaboration was good-natured, if slap-dash. You could generally trust what you found online to be well-intentioned, if not entirely accurate.
Unfortunately, money has repeatedly come into the picture and made the experience palpably worse. The first wave I experienced first-hand was the pop-up revolution. Swaths of the internet (mostly the shadier sections) became borderline unusable as ads for gambling, pornography, sweepstakes, and the like multiplied exponentially. Technologically, it was easily defeated at the browser level, albeit slowly.
The next problem I remember was the spam revolution. At its worst I would receive nearly a dozen spam messages a day in my Yahoo Mail account trying to sell me illicit medications or dubious products.
What’s interesting about these two is how much damage they did to the internet experience for such paltry returns (spammers would send thousands of messages for pennies).
The next victims were chat rooms and internet forums. Around this time Google’s PageRank algorithm had been publicized, and this led to mass exploitation: posting unrelated links on all kinds of forums (before rel=nofollow) in an attempt to gain search ranking. Though the other problems I have mentioned were eventually fixed technologically, to this very day spam comments are still attempted on this blog.
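To see why forum link spam paid off, here is a minimal sketch of the PageRank idea, using hypothetical page names and a bare-bones power iteration (not Google’s actual implementation): every extra inbound link, even from an unrelated forum comment, funnels some rank to the spammer’s page.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping page -> list of pages it links to.
    Returns a rough rank score per page via power iteration."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Everyone gets a small base score; the rest flows along links.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# Without spam: the forums link only to each other.
clean = pagerank({"forum1": ["forum2"], "forum2": ["forum1"], "spam_site": []})
# With spam: a comment on each forum now links to the spammer's site,
# so the spam site siphons rank from both.
spammed = pagerank({"forum1": ["forum2", "spam_site"],
                    "forum2": ["forum1", "spam_site"],
                    "spam_site": []})
```

The fix, rel=nofollow, works by telling the crawler to leave user-submitted links out of the `links` graph entirely, so they contribute no rank.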
Next were the “Nigerian princes,” a whole array of scams run out of Nigeria that relied on banks being unable to correctly identify bad checks, even though the necessary public-key cryptography had existed for decades.
However, all of this pales in comparison to the next set of problems.
The next wave was the commercialization of individuals through mass tracking. Though this had been underway for a long time, it didn’t hit its stride until at least 2010.
And, in parallel, hacking: ransomware (SF Muni was hit, HBO was extorted, as were numerous individuals), Bitcoin hacks. The influx of money drew an influx of parasites with nothing to lose.
But in my opinion the most insidious form is the one we’re just seeing now.
[Point 2 – a class of people are willing to sell their integrity by dishonestly promoting beliefs and brands without disclosing financial incentive]
Wikipedia has always been a gem of the internet to me. Considering the raw pool of inputs the internet offers (judge by the average set of unfiltered comments on any forum), the idea of taking only anonymous contributions and distilling them to a point of reputability, all without financial incentive, was an amazing one.
Reddit (like HN, YouTube, and Facebook) similarly relies on a system of upvoting and downvoting by anonymous members to filter the popular, true, or funny ideas to the top.
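At its core, this kind of filter is just a sort by net score, which also makes the attack surface obvious: a small coordinated group of accounts can reorder the front page. A toy sketch, with hypothetical posts and vote counts (real sites use more elaborate ranking formulas):

```python
def front_page(posts, limit=3):
    """posts: list of (title, upvotes, downvotes) tuples.
    Returns the top titles ranked by net score (up - down)."""
    ranked = sorted(posts, key=lambda p: p[1] - p[2], reverse=True)
    return [title for title, up, down in ranked[:limit]]

# Organic voting: the crowd's favorites rise to the top.
organic = [("honest review", 40, 5),
           ("funny cat", 90, 10),
           ("niche howto", 12, 1)]

# A handful of coordinated accounts upvote a sponsored post and
# downvote its competitor, lifting it into the visible slots.
brigaded = [("honest review", 40, 15),
            ("sponsored post", 50, 5),
            ("funny cat", 90, 10)]
```

The point of the sketch is that nothing in the mechanism distinguishes ten genuine voters from ten paid ones, which is what makes the shill problem structural rather than incidental.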
Reddit has long been obsessed with the idea of the “shill”: a corporate or political account paid to promote an opinion online. I think the most common response has been skepticism, mostly because the idea that what’s said on reddit matters wasn’t really taken seriously.
However, it’s hard to deny now that the shill exists, in light of revelations about Monsanto’s “Let Nothing Go” program. Monsanto hasn’t denied these accusations, and hopefully the relevant testimony is fully declassified soon.
In the prior cases of financial incentives wrecking the internet, the answer has mostly been technical, if increasingly complicated. The issue of paid dishonesty (fake reviews, false claims on discussion sites, manufactured outrage on Twitter) seems the most complicated of all. For bot accounts a technological answer is certainly possible (a captcha on every Twitter login, retweet, and like).
But in the Monsanto case, we’re hearing about individuals paid through a third party. One answer might be to make such actions illegal and offer financial rewards to those who blow the whistle on corporate offers (though as a political rather than technical solution, it would be an order of magnitude more involved). The upside is that this would also apply to undue influence on government offices, scientists, and wikipedia editors, as well as individuals.
Another notion, if simplistic, is that logic should hold more sway than the bandwagon. That’s certainly not true of all communities, but a community shift away from emotion-laden narratives toward plain facts could undercut its susceptibility to being swayed. On the downside, it’s this very emotional attachment to news that motivates a lot of people to read it (“Biases confirmed. Knew it.”). And on the issue at hand, there are also numerous accusations that Monsanto tried to exert undue influence on the scientific research itself.
So the open question is this – is there a forum structure that incentivizes individuals to approach the objective truth while resisting the influence of an arbitrary number of antagonistic actors?
(To get hypothetical: if a truth serum existed, for example, a solution would be trivial, if impractical. Compel each post to be accompanied by a video of its author drinking the serum and then saying what’s in the post.)