I first noticed the new experimental @twittersuggests feature a couple of months ago when it @mentioned me in a tweet to a newly registered Twitter user. At the time I thought this was a cool way for the company to actively use its own product to help solve a discovery problem for new users to the service. My Twitter account was included in a series of tweets that mentioned other notable accounts (@superamit, @juliebenz, and @sacca), so my secondary reaction was a positive emotional one: I was flattered.
Twitter describes the service on its help pages as:
…an experimental feature that helps you find interesting new accounts to follow by tweeting Who To Follow suggestions, personalized just for you! This feature was created by Twitter, and it looks like a normal Twitter account – it will Tweet recommendations which you can reply to, retweet or mark as favorites.
Pretty cool, right?
Since then not every mention has been as flattering (obviously, the purpose of this service isn't to dole out flattery to nobodies like myself), but for the most part they have been decent. Over time, though, the quality of the mentions declined, and today tipped the scale. In a tweet posted earlier, I was @mentioned alongside what can only be described as a spam account. Nay, a porn spam account. See for yourself:
So, I may be guilty of tweeting a lot. I may also be guilty of running my mouth off from time to time. But how in the world am I in the same class as a porn spam account? Better yet, how can this possibly be acceptable from an official Twitter account?
How does it work?
@twittersuggests is a feature which looks like a Twitter account – it algorithmically generates suggestions of users to follow and sends them to you.
@twittersuggests will tweet recommendations to you via @mentions, and this Tweet will appear in your @mentions timeline.
Sure, the company describes this with words like "algorithmically" and "experimental," but it's really hard to believe that this was launched with any sort of testing whatsoever. If any resources are applied to this experiment, the tuning certainly isn't having a positive impact; to the contrary, the quality appears to be decreasing over time. The sad thing is, if I were new to Twitter I might find a service like this valuable, provided the recommended accounts remained of decent quality, but that's just not the case here. Worse still, there are so many simple ways this could be avoided.
Before I get pummeled with the "false positives are expensive" argument (yes, I've read @kellan's excellent write-up, and I have firsthand experience with this as well), let me call out that this is an entirely different scenario. The cost of false positives is only applicable when you choose to deny accounts access to basic services. If a company restricts an account from using the basic functionality of a site because of an unsubstantiated suspicion, then sure…that's expensive.
However, tweeting recommendations for accounts that would trip even an overly sensitive spam-detecting algorithm is an entirely avoidable mistake. Twitter owns this account; they have every right to be overly choosy about the accounts featured in their recommendations, and an account with obvious keywords like "sex" and "porn" in its profile is a safe one to filter out of that list. Now, building a recommendations engine is tough. It's not easy to get these things right, and I'm certainly sympathetic to that. I guess I'm reacting so strongly here because this feels like one of those avoidable mistakes, especially because there is literally no harm in excluding an account like this from the recommendations.
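The kind of conservative filter I have in mind could be as simple as a keyword blocklist applied to candidate accounts before a recommendation is ever tweeted. Here's a minimal sketch in Python with made-up account data and a made-up blocklist; it's obviously nothing like Twitter's actual pipeline, just an illustration of how cheap this safeguard would be:

```python
# Hypothetical pre-filter for recommendation candidates.
# Accounts whose handle or bio contains an obviously unsafe
# keyword are dropped before being recommended.

BLOCKLIST = {"sex", "porn", "xxx"}

def is_safe_to_recommend(account):
    """Return False if the account's handle or bio contains a
    blocklisted keyword. Errs on the side of filtering too much,
    since there's no harm in withholding a recommendation."""
    text = (account["handle"] + " " + account.get("bio", "")).lower()
    return not any(term in text for term in BLOCKLIST)

# Made-up candidate accounts for illustration.
candidates = [
    {"handle": "maps_and_coffee", "bio": "Maps, GIS, and espresso."},
    {"handle": "hot_xxx_pics", "bio": "porn links all day"},
]

recommendable = [a for a in candidates if is_safe_to_recommend(a)]
```

Naive substring matching like this will over-filter (it would flag "sussex", for instance), but that's exactly the point: in a recommendations context, a false positive costs you nothing, so the crudest possible filter is still better than none.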
In other news…
Speaking of mouthing off…I shared my thoughts this weekend on the news of the Beyoncé-pregnancy-VMA-induced milestone Twitter reached in terms of TPS (FYI, that's, obnoxiously, "tweets per second"), and look what happened. Awesomesauce.