
Compromising Twitter’s OAuth security system


Twitter officially disabled Basic authentication this week, the final step in the company's transition to mandatory OAuth authentication. Sadly, Twitter's extremely poor implementation of the OAuth standard offers a textbook example of how to do it wrong. This article will explore some of the problems with Twitter's OAuth implementation and some potential pitfalls inherent to the standard. I will also show you how I managed to compromise the secret OAuth key in Twitter's very own official client application for Android.

OAuth is an emerging authentication standard that is being adopted by a growing number of social networking services. It defines a key exchange mechanism that allows users to grant a third-party application access to their account without having to provide that application with their credentials. It also allows users to selectively revoke an application's access to their account.

Some of the more technical aspects of this article will be easier to understand if you have a basic familiarity with the standard and the problems that it is trying to solve. We published a primer earlier this year that you can refer to if you are looking for additional background information.

The OAuth standard has many significant weaknesses and limitations. A number of major Web companies are collaborating through the IETF to devise an update that will fix some of the problems, but it's still largely a work in progress. The current version of the standard—OAuth 1.0a—is an inelegant hack that lacks maturity and fails to provide clear guidance on many critical issues that are essential to building a robust authentication system.

Website operators who adopt the current version of the standard have to tread carefully and concoct their own solutions to fill in the gaps in the specification. As a result, there is not much consistency between implementations. Facebook, Twitter, and Google all have different variants of the standard that have to be handled differently by third-party applications. Twitter's approach is, by far, the worst.

Not so secret consumer key

Applications that communicate with OAuth-enabled services can use a set of keys—called the consumer key and consumer secret—to uniquely identify themselves to the service. This allows the OAuth-enabled service to tell the user what third-party application is gaining access to their account during the authorization process. This works relatively well for server-to-server authentication, but there is obviously no way for a desktop or mobile application that is distributed to end users to guarantee the secrecy of its consumer secret key.

If the key is embedded in the application itself, it's possible for an unauthorized third party to extract it through disassembly or other similar means. It will then be possible for the unauthorized third party to build software that masquerades as the compromised application when it accesses the service.
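As a rough illustration of how little protection embedding offers, recovering a secret can be as simple as scanning the application binary for long runs of printable characters, much as the Unix `strings` utility does. This sketch is not Twitter-specific; the byte blob and the embedded `Qn3xAMPLE...` value are invented for the example:

```python
import re

def printable_runs(blob: bytes, min_len: int = 16) -> list:
    """Return ASCII runs long enough to be candidate embedded secrets."""
    return re.findall(rb"[ -~]{%d,}" % min_len, blob)

# Toy stand-in for an application binary with a hard-coded key
# (the "Qn3xAMPLE..." secret here is made up for illustration):
binary = b"\x00\x7fELF\x01" + b"Qn3xAMPLEconsumerSECRET1234" + b"\x00\x02"
print(printable_runs(binary))  # the embedded secret pops right out
```

Real attackers would use `strings`, a hex editor, or a disassembler, but the principle is the same: anything shipped inside the binary can be read back out of it.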

A compromised key isn't quite as bad as it sounds; the real problem is how Twitter is using it. It's very important to understand that a compromised consumer secret key doesn't jeopardize the security of the application's users. The key can't be used to gain access to other users' accounts, because accessing an individual account requires an access token that each instance of the client application obtains automatically on behalf of its user during the authorization process.

The function of the consumer secret is really just to let the remote OAuth-enabled Web service know who is making the request—kind of like a user agent string. In the context of a desktop or mobile client application, it's basically superfluous and shouldn't be trusted in any capacity.
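To see why the consumer secret alone is of limited value, consider how an OAuth 1.0a request is actually signed: the HMAC-SHA1 signing key concatenates the consumer secret with the per-user token secret, so forging requests against someone's account still requires that user's access token. A minimal sketch of the signing step, following the OAuth 1.0a specification (the URL and parameter values below are made up for illustration):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_signature(method, url, params, consumer_secret, token_secret):
    """Compute an OAuth 1.0a HMAC-SHA1 signature for a request."""
    enc = lambda s: quote(s, safe="")
    # Percent-encode and sort the request/OAuth parameters.
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), enc(url), enc(param_str)])
    # The signing key combines BOTH secrets; a leaked consumer secret
    # alone cannot produce a valid signature for a user's account.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = oauth_signature(
    "POST",
    "https://api.twitter.com/1/statuses/update.json",
    {"status": "hello", "oauth_nonce": "abc123"},
    consumer_secret="not-a-real-consumer-secret",
    token_secret="not-a-real-token-secret",
)
```

The consumer key travels with the request merely to identify the application; the cryptographic weight rests on the token secret, which never leaves the individual user's client.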

Against all reason, Twitter requires every single application—including desktop and mobile software—to supply a consumer key and a consumer secret. To make matters worse, Twitter intends to systematically invalidate compromised keys. This means that when somebody extracts the key from a popular desktop Twitter client and publishes it on the Internet, Twitter will revoke access to the service for that client application. All of the users who rely on the compromised program will be locked out and will have to use other client software or the Twitter website in order to access the service.

To restore access after a key is exposed and invalidated, the developer of the compromised application will have to register a new key, embed the key in a new version of the program, deploy the new version to end users, and get the users to go through the authorization process again. This is going to be especially challenging for developers who rely on distribution channels like the iPhone application store, which have a lengthy review process. They could find themselves in a situation where their users are locked out for weeks when a key is compromised.

When this happens, the users will simply get authentication errors and will have no way of knowing the cause. They will likely switch to a different client application rather than waiting for the developer of their preferred client software to issue an update with a new key. It's obvious that this could be enormously problematic for client application developers—the risk alone could potentially deter developers from wanting to write software that works with Twitter.

When some concerned third-party developers brought this issue to Twitter's attention, the company refused to change course and responded by saying that they expect developers to take a "best-effort security" approach to protecting the integrity of their keys. Twitter acknowledges that it will always be possible for a determined attacker to extract the consumer secret from a desktop or mobile client application, but the company believes that such attacks will largely be deterred if developers take basic steps to obscure and obfuscate their keys in their source code.
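In practice, that kind of "best-effort" obscuring tends to look something like the following sketch (the split-XOR scheme and the `NotARealSecret` value are invented for illustration). It keeps the key out of a naive `strings` dump, but anyone with a debugger can simply let the program reconstruct the key and read it out of memory:

```python
# Hypothetical "best-effort" obfuscation: store the consumer secret
# as two byte arrays and XOR them back together at runtime. This
# defeats a plain strings search, nothing more.
PART_A = bytes([0x1B, 0x3A, 0x21, 0x14, 0x07, 0x30, 0x34,
                0x39, 0x06, 0x30, 0x36, 0x27, 0x30, 0x21])
PART_B = b"\x55" * len(PART_A)

def consumer_secret() -> str:
    """Rebuild the key in memory; trivially recoverable with a debugger."""
    return bytes(a ^ b for a, b in zip(PART_A, PART_B)).decode()

print(consumer_secret())  # -> NotARealSecret
```

This is the level of protection Twitter is counting on, which is why it is a deterrent against casual copying at best, not a security boundary.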

The issue here is that Twitter wants to use the keys as an abuse control mechanism to centrally disconnect spammers and other unwanted users of the service, but OAuth was simply not designed to be used for that purpose. The idea is that centrally disabling a spammer's consumer secret key will lock out all of the spammer's user accounts, theoretically simplifying spam control for Twitter. It's unlikely that this naive strategy will work in practice, however.

Any spammer with a hex editor can trivially compromise the keys of popular applications and use those keys to evade Twitter's abuse controls. By using the consumer key and consumer secret key from a popular third-party Twitter application, a spammer can make it harder for Twitter to lock out all of his spam accounts at once without also locking out a large number of legitimate users of the compromised application. Even if individual spammers aren't sophisticated enough to know how to extract the keys, they can easily buy consumer secret keys from people who know how to get them out of mainstream Twitter clients.

There are a lot of other scenarios where "best-effort security" and a little bit of obfuscation aren't going to be a sufficient deterrent. For example, the developer of a popular commercial third-party Twitter client might compromise and anonymously publish the consumer secret key of a competing application so that they can get it temporarily disabled. In addition to those kinds of obvious business incentives, mischief makers might compromise keys just for the lulz.

I repeatedly attempted to make Twitter aware of the problems with its OAuth implementation, but the company largely ignored my concerns. When I opened a support ticket, it was promptly closed and I was directed back to the developer mailing list, where I received no response from Twitter after writing several posts outlining my concerns. My attempts at responsible disclosure were unsuccessful.
