There's been a lot of talk on G+ lately about Google's supposed ban on "fake accounts". Anecdotal evidence of otherwise worthy folk getting their free Google+ accounts shut down for violating the terms of service, which disallow participation under a false identity, has raised the ire of many a googling geek. While no different from Facebook's stated policy, it seems that Google might actually be enforcing theirs. Geeks have pointed to at least one obvious problem with the real-identity plan: how does Google verify that a legitimate-looking name is not, in actuality, a fake one? They could correlate a Google account with the IP addresses used to access their sites at a given time and fold in any number of other clues from their vast archives (searches, cookies, geo-location data, etc.), but that would amount to an investigation of some magnitude for each and every case. From a business perspective, they'd be foolish to spend much time on it. As I said in one comment:

Google really doesn't care so much that you are using an alias, but instead doesn't want it to appear that you are using an alias to whomever they are planning to sell the data to. ...this isn't about Google not knowing who they're dealing with, it's about the PERCEIVED quality of Google's product (information about you) to potential buyers.

I've [proposed](/blog/2011-07-10-does-comment-quality-indicate-content-quality) in the past that Slashdot's crowd-sourced engine of reputation (karma) and moderation might be a good model for other comment engines, and I think Google can learn from it in the context of Google+ as well. Google should build in methods to verify that people really are who they say they are. For starters, they could add OpenID verification against multiple providers, domain-ownership verification like they already do with Analytics, and GnuPG signature verification. They could also make it easier for users of Disqus and similar systems to authenticate with Google and cross-associate their content. This would allow Google to begin to build a "web of trust" around user accounts. Using the "key signing" model of OpenPGP while hooking in the additional verifications above, a user account (or its posts) could eventually be assigned a "veracity" score indicating how confident Google is that a given user is the same person behind those other sites, posts, or comments. A high score would be much coveted because it would clearly mark a known source.
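To make the idea concrete, here's a minimal sketch of how such a score might be computed. Everything here is my own assumption for illustration: the attestation names, the weights, and the `Account`/`veracity` structure are invented, not anything Google has described.

```python
# Hypothetical "veracity" score: an account accumulates weighted attestations
# from verification methods (OpenID, domain ownership, GnuPG signatures) and
# from other accounts vouching for it, key-signing style.
from dataclasses import dataclass, field

# Illustrative weights for each kind of verification (assumed, not Google's).
VERIFICATION_WEIGHTS = {
    "openid": 0.15,         # proved control of an external OpenID identity
    "domain": 0.25,         # verified domain ownership (as with Analytics)
    "gpg_signature": 0.30,  # content signed with a published GnuPG key
}
PEER_WEIGHT = 0.10          # another account vouching, key-signing style


@dataclass
class Account:
    handle: str                                       # display name; never scored
    verifications: set = field(default_factory=set)
    vouched_by: list = field(default_factory=list)    # other Account objects


def veracity(account: Account, _seen: set | None = None) -> float:
    """Return a 0..1 score estimating how sure we are this account is
    consistently the same person across the sites and posts linked to it."""
    if _seen is None:
        _seen = set()
    if id(account) in _seen:
        return 0.0  # break vouching cycles
    _seen.add(id(account))

    score = sum(VERIFICATION_WEIGHTS.get(v, 0.0) for v in account.verifications)
    # Each voucher counts in proportion to its own veracity, so a chain of
    # well-attested accounts matters more than a pile of empty ones.
    for peer in account.vouched_by:
        score += PEER_WEIGHT * veracity(peer, _seen)
    return min(score, 1.0)
```

The weights themselves are arbitrary; the point is that the score grows out of verifiable links and peer attestations rather than out of anything about the name on the account.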

The best thing about this kind of system is that Google would have little need to worry about whether an account was using a "real" name, because the veracity (truthfulness) of a user's identity would depend not on how realistic the name looks but on the quantity and quality of trust relationships developed between users over time.
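Continuing the hypothetical sketch above, a pseudonymous account with verified links and vouchers would outrank a plausible-looking name with none; the display name never enters the math.

```python
# A realistic-looking name with no attestations scores lower than a pseudonym
# with verified links and a voucher.
alice = Account("Alice Smith")                        # plausible name, no proof
libby = Account("LibbyTheLibrarian",
                verifications={"domain", "gpg_signature"})
libby.vouched_by.append(Account("Bob", verifications={"openid"}))

print(veracity(alice))  # 0.0
print(veracity(libby))  # 0.25 + 0.30 + 0.10 * 0.15 = 0.565
```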
