According to the OAuth about page, it was Blaine Cook who initiated the standard while working at Twitter in November 2006. Blaine mobilized the effort by getting Chris Messina involved, which attracted others at Citizen Space to join (an excellent demonstration of the benefits co-working social environments offer). By April 2007 the initiative had been formalized, and by October 2007 the OAuth Core 1.0 spec was finalized. The question of interest to me is: why did it take a year and a half to uncover the first vulnerability?
It’s puzzling because OAuth was well known and widely publicized, attracted a large body of developers, many of whom I presume read the spec, and was implemented by many parties, including some very large companies. I’ve read the spec as well, and discussed it with peers and partners in the security and payment industry on several occasions.
I think the right answer might be that our collective perspective in dealing with the standard was focused on implementation, application, and hype while wrongly assuming that the standard was secure. Recollecting my thoughts when I was reading the spec for the first time, I now realize that it was the safety in numbers and the lure of promising applications that influenced me to focus only on implementation.
The good news is that I think OAuth will now be given the proper shake it needs to get any remaining kinks out. The bad news is that we are likely to repeat the mistake when the next popular grassroots standard emerges in a hurry. The relatively fast pace of community/grassroots standards initiatives is acceptable only if their mass appeal can be effectively leveraged to shine an intensive searchlight on all aspects of the standard.
I recently implemented my first OAuth client and had a slightly uneasy feeling that there was a bit of magic — I couldn’t clearly see how it was truly secure. The fact is, it was pretty darn close: the hole was never exploited (that we know of), and steps are being taken to close it.
That said, a simple fix will still leave a bit of “magic”: there is an authentication equivalency that is not being addressed (in OAuth terms, that user@consumer == user@service_provider) by way of a proper out-of-band mechanism. I would prefer to see that hole sewn up more definitively — essentially “whitelisting good guys” rather than “blacklisting bad guys” (which can become a game of whack-a-mole). I should note that many knowledgeable folks on the list feel the proposed fix is adequate. Two-legged OAuth, a less common use than the typical three-legged flavor, is an excellent protocol (the exploit was discovered only in the three-legged variety), and I suspect we’ll see more adoption of it as a stand-in for HTTP Basic and HTTP Digest authentication.
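To make the two-legged case concrete, here is a minimal sketch (in Python) of the HMAC-SHA1 signature construction from the OAuth Core 1.0 spec. In the two-legged flow there is no user token, so the signing key is just the consumer secret followed by an ampersand — which is what lets it stand in for Basic/Digest auth. The fixed timestamp and nonce below are illustrative assumptions; a real client generates both fresh per request.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_two_legged(method, url, params, consumer_key, consumer_secret):
    """Build an OAuth 1.0 HMAC-SHA1 signature for a two-legged request."""
    oauth_params = {
        "oauth_consumer_key": consumer_key,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": "1234567890",  # illustrative; normally current Unix time
        "oauth_nonce": "abc123",          # illustrative; normally a random one-time value
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth_params}
    # Percent-encode keys and values, sort, and join into the parameter string.
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(v, safe=""))
        for k, v in all_params.items()
    )
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    # Signature base string: METHOD & encoded-URL & encoded-parameter-string.
    base_string = "&".join(
        urllib.parse.quote(part, safe="")
        for part in (method.upper(), url, param_str)
    )
    # Two-legged: the token-secret half of the key is empty, hence the trailing "&".
    key = urllib.parse.quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_two_legged("GET", "http://example.com/api", {"q": "test"}, "key", "secret")
```

Because the signature covers the method, URL, and every parameter, a tampered request fails verification at the service provider even though no user ever authorized a token.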
I don’t think it’s that complicated. There have been all manner of crypto protocols and security protocols that have been in use for years before critical flaws were discovered. Some were developed by standards bodies or subject to academic peer review. The fact is that any flaws left in a system after it’s been reviewed and revised and published can be quite hard to see, even if they’re obvious after someone points them out.
Matt, while I agree that other protocols have had flaws of varying nature, the uncovering of fundamental flaws like the OAuth and DNS ones are rare events, often resulting in an industry-wide scramble.
Also, I don’t think we should accept incidental trickles of flaw discoveries as the norm when there are actions we can take to improve the process.
IMO, it’s the same with open source projects. Open source participants are primarily focused on using, extending, or repurposing the project code, which means that while bugs are uncovered, security vulnerabilities are rarely found until attack incidents occur, or by chance.
So I think some attention should be paid to finding ways, or factors to introduce into grassroots initiatives and open source projects, that encourage early detection of security vulnerabilities.