You read something posted by a random person on the internet. You, or more likely the operator of a social network working on your behalf, do some big-data, artificial-intelligence crunching to figure out whether you can trust that statement. Among other things, because it is difficult to remain anonymous on the internet, the agent acting on your behalf figures out the identity of the poster, what else they have said, and their reputation and trustworthiness for similar statements in the past. Perhaps it does collaborative filtering to group the poster with similar identities and decide whether the entire group can be trusted for statements like this.
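To make the collaborative-filtering idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the identities, their topic vectors, and their accuracy histories are invented, and a real system would use far richer signals. It only shows the shape of the computation: find the identities most similar to the poster and use their track record as the trust estimate.

```python
# Hypothetical sketch: estimate trust in an unknown poster from the
# track record of the most similar known identities.
from dataclasses import dataclass
import math

@dataclass
class Identity:
    name: str
    topic_vector: list[float]   # how often this identity posts about each topic
    accuracy: float             # fraction of past statements later verified

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def trust_estimate(poster: Identity, population: list[Identity], k: int = 3) -> float:
    """Average the historical accuracy of the k identities most similar to the poster."""
    neighbors = sorted(population,
                       key=lambda p: cosine(poster.topic_vector, p.topic_vector),
                       reverse=True)[:k]
    return sum(n.accuracy for n in neighbors) / len(neighbors)

# Invented data for illustration only.
population = [
    Identity("a", [1.0, 0.0, 0.2], accuracy=0.9),
    Identity("b", [0.9, 0.1, 0.3], accuracy=0.8),
    Identity("c", [0.0, 1.0, 0.0], accuracy=0.2),
]
poster = Identity("unknown", [0.95, 0.05, 0.25], accuracy=0.0)  # no history of their own
print(f"estimated trust: {trust_estimate(poster, population, k=2):.2f}")
```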
Perhaps, in order to protect some notion of anonymity, the social network tells you only a probability estimate of whether the statement can be trusted. However, this leads to the disconcerting feeling of not knowing how the number was arrived at.
Figuring out which statements can be trusted is an extremely difficult problem, but perhaps a solvable one. The trick we are exploiting is that the social network operator has a panopticon: it gets to be Big Brother, seeing everything, giving the untrustworthy nowhere to hide. That power, normally reserved for three-letter agencies in big government, has been made available to you.
People will try very hard to game the system.
We need a machine learning feedback mechanism for grading whether your trust was misplaced.
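Here is a minimal sketch of what such a feedback loop might look like, assuming a simple Beta-style reputation update; the poster name and outcomes are hypothetical. Each time a statement you trusted is later verified or debunked, the outcome is folded back into the poster's reputation.

```python
# Hypothetical sketch of the feedback mechanism: grade each trusted
# statement once its truth is known, and update the poster's reputation.
from collections import defaultdict

# Start every poster with a weak prior of one correct and one incorrect observation.
counts = defaultdict(lambda: {"correct": 1, "incorrect": 1})

def record_outcome(poster: str, statement_held_up: bool) -> None:
    """Record whether trust in one statement turned out to be well placed."""
    key = "correct" if statement_held_up else "incorrect"
    counts[poster][key] += 1

def reputation(poster: str) -> float:
    """Current estimate of how often this poster's statements hold up."""
    c = counts[poster]
    return c["correct"] / (c["correct"] + c["incorrect"])

# Invented outcomes for illustration.
record_outcome("poster123", statement_held_up=True)
record_outcome("poster123", statement_held_up=False)
record_outcome("poster123", statement_held_up=True)
print(f"reputation: {reputation('poster123'):.2f}")  # 3 correct out of 5 observations = 0.60
```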
This is at odds with the social network giving you the ability to hide information, keeping it from being visible to those who would use it to hurt you, which could include the social network itself when it lowers your reputation because of it.