Identicon and QR Code

I was recently asked to provide some information on identicons, a good excuse to restart blogging.

This post, more like notes actually, compares identicons to QR codes, which may seem visually similar but are not.

WARNING: I think in random fragments, brief moments of coherency, so my posts will be the same.

Machine vs People

Content

  • QR codes are containers of information.
  • Identicons are shadows of the information they are associated with.

Usage

  • QR codes are used to transfer information from real life (RL) objects to computers using only optical means.
  • Identicons are used to distinguish individuals or groups of information.
More to come later. Sorry.

Installing sqlite3-ruby gem on Snow Leopard

Problem:

After upgrading to Snow Leopard, I had to rebuild/reinstall MacPorts and RubyGems as recommended. While doing this, I found that the sqlite3-ruby gem install failed with errors related to the extconf.rb file.

Solution:

I'm not sure why this works, but I found a working solution on StackOverflow which replaces:

/usr/local/lib/libsqlite3.dylib

with a symbolic link to the one that came with Xcode for Snow Leopard:

/Developer/SDKs/MacOSX10.6.sdk/usr/lib/libsqlite3.0.dylib

You can find the full 'ln' command on the StackOverflow page above, but be sure to rename the original in case you need to restore it.
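Roughly, and assuming the paths above, the fix amounts to something like the following; check the StackOverflow answer for the exact command before running anything:

# rename the original so it can be restored later
sudo mv /usr/local/lib/libsqlite3.dylib /usr/local/lib/libsqlite3.dylib.orig
# point the expected path at the Snow Leopard SDK copy
sudo ln -s /Developer/SDKs/MacOSX10.6.sdk/usr/lib/libsqlite3.0.dylib /usr/local/lib/libsqlite3.dylib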

Using JSP with Jersey JAX-RS Implementation

This post shows you some tips you’ll likely need to use JSP with Jersey in typical Java webapps.

Tested Conditions

While Jersey 1.1.1-ea or later is probably the only hard requirement for the tips to work, my development environment is listed here for your info. You are welcome to add to this rather meager basis for sanity.

  1. Jersey 1.1.1-ea
  2. Tomcat 6.0.20
  3. JDK 1.5
  4. OS X Leopard

Change JSP Base Template Path

The default base path for templates is the root of the webapp. So if my webapp is at "/…/webapps/myapp", then Viewable("/mypage", null) will map to "/…/webapps/myapp/mypage.jsp".

To change this, say to "/WEB-INF/jsp" as is commonly done for security reasons, add the following init-param to the Jersey servlet/filter in web.xml:

<init-param>
 <param-name>com.sun.jersey.config.property.JSPTemplatesBasePath</param-name>
 <param-value>/WEB-INF/jsp</param-value>
</init-param>

Return Viewable as part of Response

It was not obvious to me (doh) where Viewable fits into Response when I have to return a Response instead of a Viewable. It turns out Viewable can be passed wherever a message body entity is passed. Example:

return Response.ok(new Viewable("/mypage", model)).build();
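For context, here is a minimal sketch of a resource class using both forms. The resource path, class name, and model value are made up for illustration, and the template path assumes the /WEB-INF/jsp base path configured above:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import com.sun.jersey.api.view.Viewable;

@Path("/hello")  // hypothetical resource path
public class HelloResource {

  // Return Viewable directly; "/mypage" resolves to /WEB-INF/jsp/mypage.jsp
  @GET
  @Produces("text/html")
  public Viewable page() {
    Object model = "world";  // whatever model object your JSP expects
    return new Viewable("/mypage", model);
  }

  // Return a Response with the Viewable passed as the message body entity
  @GET
  @Path("/alt")
  @Produces("text/html")
  public Response pageAsResponse() {
    Object model = "world";
    return Response.ok(new Viewable("/mypage", model)).build();
  }
}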

Use “/*” as servlet-mapping for Jersey

The primitive servlet-mapping URI pattern scheme, which has somehow survived many iterations of the servlet API, hits JAX-RS hard when the mapping is overly broad. Unfortunately, pretty RESTful URLs call for a servlet-mapping of "/*" instead of something like "/jersey/*", which breaks access to JSP files as well as static resources.

To work around this, you'll have to use Jersey as a filter instead of a servlet and set a regular-expression init-param value to punch pass-through holes in Jersey's routing scheme. To enable this, replace the Jersey servlet entry in web.xml with something like this:

<filter>
 <filter-name>jersey</filter-name>
 <filter-class>com.sun.jersey.spi.container.servlet.ServletContainer</filter-class>
 <init-param>
  <param-name>com.sun.jersey.config.property.WebPageContentRegex</param-name>
  <param-value>/(images|js|styles|(WEB-INF/jsp))/.*</param-value>
 </init-param>
</filter>
<filter-mapping>
 <filter-name>jersey</filter-name>
 <url-pattern>/*</url-pattern>
</filter-mapping>

That’s all for now. Hope this post saved you some headaches.

Firefox Extension Developer Tips

Just a couple of tips for Firefox extension developers, hard-earned after many hours of head scratching. Not adhering to either tip will confuse Firefox, and your XPCOM component will fail to load.

XPCOM components get loaded before chrome is loaded.

[Update: The most common symptom of this is a Components.utils.import call failing during launch with an NS_ERROR_FAILURE exception. To fix it, wait until the app-startup notification is received before importing JavaScript modules.]

This means anything defined in chrome.manifest won't be available until the "app-startup" notification is observed. Note that the resource URI scheme ("resource://") introduced in Firefox 3 relies on resource directives in chrome.manifest, which means you should defer Components.utils.import calls until "app-startup".
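As a rough sketch of the fix, assuming a component registered in the "app-startup" category; the module URL and names are made up for illustration:

var MyComponent = {
  observe: function(subject, topic, data) {
    if (topic == "app-startup") {
      // Safe to import here; doing this at component load time fails
      // with NS_ERROR_FAILURE because chrome.manifest hasn't been processed yet.
      Components.utils.import("resource://myext/mymodule.jsm");
    }
  }
  // QueryInterface and registration boilerplate omitted
};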

XPCOM components implemented in JavaScript should be defined as a pure object, not a function.

So it should look something like this:

var MyServiceModule = {
  registerSelf: function(compMgr, fileSpec, location, type) {
    // ...
  },
  // ...
};

Real-Time State of Mind

I need to get back to blogging more often. Having to type more than 140 characters feels weird. 😉

Given that I’ll be attending TechCrunch’s Real-Time Stream CrunchUp this Friday, I thought a blog post on a key real-time stream problem would help me into a real-time state of mind.

Real-time streams have many technical problems to overcome, many of which are thankfully being resolved by advances in technology and infrastructure, but the problem that interests me the most is the user experience problem:

Information, real-time or otherwise, is meaningless if users are drowned within it.

Typical Twitter users see only a fraction of tweets from the people they follow. The notion of Top Friends (related to my social radar diagram from 8 years ago) will help, but at the cost of additional chores users have to do to separate the greens from the weeds.

The financial industry has used real-time streams for a long time, so there is a lot to learn there technically. But when it comes to user experience, they haven't cracked the nut either, forcing traders to use a bewildering number of charts and numbers across multiple displays and input devices to trade. So we emerging consumer real-time stream developers will have to break new ground ourselves.

HTML5 Microdata Fantasy

I haven't been tracking HTML5 design efforts lately, but what's being proposed for microdata (see posts by Sam Ruby and Shelly Powers) yucked me out sufficiently to revisit an old fantasy of mine about HTML (man, what a boring life I have). My fantasy was to add a general element/structure definition facility to HTML. It could easily be extended to support microdata as well.

The way I envisioned it being used is like this:

<address>
<street>123 ABC St.</street>
<city>Foobar</city>
<state>CA</state>
<zip>94065</zip>
</address>

which sure is preferable to:

<div item>
<span itemtype="street">123 ABC St.</span>
<span itemtype="city">Foobar</span>
<span itemtype="state">CA</span>
<span itemtype="zip">94065</span>
</div>

As to how such semantic structures and syntactic sugar could be defined, one very arbitrary way could be:

<head>
<def name="address" package="http://test.com/1/mapking"
    params="{{street city state zip}}">
  <div>
    <span>{{street}}</span>
    <span>{{city}}</span>
    <span>{{state}}</span>
    <span>{{zip}}</span>
  </div>
</def>
</head>

I don’t have any illusions that this fantasy has even a tiny chance of coming true though. Besides, it’s like a beggar asking for caviar when any kind of microdata support will satiate our hunger.

Boss! Boss! The Plane. The Plane!

update:

Here is a more elaborate version of the def element for the bored:

<def name="name" package="http://ting.ly/name"
  attrs="$$first last$$">
  <span>$$first$$ $$middle$$ $$last$$</span>
</def>

which could be used like this:

<name first="Don" last="Park"/>

There are lots of holes in this sketch, which is why it's a fantasy.

Why wasn’t OAuth Vulnerability found earlier?

According to the OAuth about page, it was Blaine Cook who initiated the birth of the standard while working at Twitter in Nov. 2006. Blaine mobilized the initiative by getting Chris Messina involved, which attracted others at CitizenSpace to join the effort (an excellent demonstration of the benefits co-working social environments offer). By April 2007 the initiative had formalized and, by October 2007, the OAuth Core 1.0 spec was finalized. The question of interest to me is: why did it take a year and a half to uncover the first vulnerability?

It's puzzling because OAuth was well known and popularized, attracted a large body of developers (many of whom, I presume, read the spec), and was implemented by many, including some very large companies. I've read the spec as well and discussed it with peers and partners in the security and payment industries on several occasions.

I think the right answer might be that our collective perspective in dealing with the standard was focused on implementation, application, and hype while wrongly assuming that the standard was secure. Recollecting my thoughts when I was reading the spec for the first time, I now realize that it was the safety in numbers and the lure of promising applications that influenced me to focus only on implementation.

The good news is that I think OAuth will be given the proper shake it needs to get any remaining kinks out. The bad news is that we are likely to repeat the mistake when the next popular grassroots standard emerges in a hurry. The relatively fast pace of community/grassroots standard initiatives is not a concern only if mass appeal can be effectively leveraged to shine an intense searchlight on every aspect of the standard.

On Twitter’s OAuth Fix

While the OAuth team is working on addressing the OAuth session fixation vulnerability at the spec level, Twitter made the following changes to reduce the exposure window:

  • Shorter Request Token timeout – This is good practice in general. Developers tend to be too generous and, all too often, forget to enforce or verify enforcement.
  • Ignore oauth_callback in favor of the URL set at registration time – this prevents hackers from intercepting the callback.

A double callback is still possible, though, which means Twitter OAuth Consumers will have to detect extraneous callbacks and invalidate access for everyone involved, because they have no way of telling who is who.

The remaining exposure to the vulnerability is when the hacker's simulated callback arrives before the user's. We are talking about a temporal exposure of a couple of seconds at most which, given current Twitter use-cases, is not that big a deal. I wouldn't do banking over Twitter though. 😉

On OAuth Vulnerability

Twitter's OAuth problem turned out to be a general problem affecting other OAuth service providers as well as consumers using the '3-legged' OAuth use-case. For details, you should read not only the relevant advisory but also Eran Hammer-Lahav's post Explaining the OAuth Session Fixation Attack.

The first hint of the vulnerability surfaced last November as a CSRF attack at the Clickset Social Blog, which was initially diagnosed as an implementation-level issue. Well, it turned out to be a design flaw requiring some changes to the protocol.

There are actually two flaws.

The first flaw is that the parameters of HTTP redirects used in OAuth can be tampered with or replayed.

This flaw allows hackers to capture, replay, and mediate the conversation between the OAuth Consumer and the Service Provider that flows through the User's browser.

I think the easiest general remedy for this flaw is to include a hash of the HTTP redirect parameters and some shared secret, like the consumer secret. A more general solution, like evolving tokens, could work as well but is inappropriate as a quick remedy.
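To make the idea concrete, here is a minimal sketch of what such a parameter hash might look like, using HMAC-SHA1 keyed with the consumer secret. This is not part of the OAuth spec, and the parameter names and values are made up for illustration:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class RedirectSigner {

  // Sign the redirect parameters with the consumer secret so the
  // receiving side can detect tampering or replay.
  public static String sign(String redirectParams, String consumerSecret) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(consumerSecret.getBytes("UTF-8"), "HmacSHA1"));
    byte[] digest = mac.doFinal(redirectParams.getBytes("UTF-8"));
    return Base64.getEncoder().encodeToString(digest);
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical values for illustration only.
    String params = "oauth_token=abc123&oauth_callback=http%3A%2F%2Fconsumer.example%2Fcb";
    String hash = sign(params, "consumer-secret");
    // The sender would append the hash as an extra parameter (a hypothetical
    // name like param_hash), and the receiver would recompute and compare it
    // before trusting the redirect parameters.
    System.out.println(hash);
  }
}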

This flaw should not affect OAuth service providers that manage and monitor callback URLs rigorously.

The second and more serious flaw is that the User talking to the Consumer may *not* be the same User talking to the Service Provider.

This means that a hacker can start a session with TwitsGalore.com, then phish someone into authorizing at Twitter, to gain access to TwitsGalore.com as that someone without stealing a password or session-jacking.

Solving the first flaw simplifies the solution to the second flaw by reducing the possibility of the hacker intercepting the callback from the Service Provider to the Consumer, which is not supposed to carry any sensitive information but might in some implementations. Wire sniffing is a concern if HTTPS is not used, but the relevant concerns for this flaw are integrity and identity, not secrecy, which is an application factor.

Removing the possibility of callback URL tampering leaves the double callback, meaning the hacker starts things off, tricks someone into authorizing without intercepting the callback, then simulates a callback to the Consumer. Note that the Consumer would have started an HTTP session with the hacker, a session associated with the RequestToken in the callback. Even if the HTTP session is not created until the callback is received, there is no way for the Consumer to tell who is who.

I think the Service Provider has to send back a verifiable token, like a hash of the RequestToken and the consumer secret, so the hacker can't simulate the callback.

Regardless of which solutions the OAuth guys decide on, one thing is clear: it will take time, many weeks at least, if not months. That's going to put quite a damper on developers on the Consumer side of OAuth as well as the Service Provider side.

OpenID Middlemen

Apparently the invite-only OpenID meetup at Facebook took place tonight. The fact that it was held at Facebook points to a shift taking place in the OpenID world. What’s coming is obvious: somehow retrofit Facebook Connect into OpenID architecture. Repeat after me. Yes, we can.

Facebook Connect can become an OpenID middleman, serving attribute-enriched OpenID to consumer sites that select Facebook as their OpenID supplier. OpenID middlemen solve two key OpenID usability issues and open up the potential to solve some privacy issues as well.

The first usability issue the middleman solves is the need to type in an OpenID URL, by replacing the URL input box with a button saying "Sign in with OpenID" or a branded version like the Facebook Connect button.

The second usability issue is users forgetting which OpenID they've used at an OpenID consumer site. The site can save that in a cookie, but that opens up privacy and taste issues, particularly since consumer sites will be less trusted than OpenID supplier services like Facebook and Google.

The middleman can also support anonymous personas for users to minimize privacy issues but, to do so, they'll have to provide a bridging service between the sites and the real identity to meet the needs of consumer sites.

Who will be the players? Facebook and Google, of course. Throw in MySpace, Yahoo, Microsoft, and AOL as well. I reckon security, payment, and infrastructure companies will come in too, late of course. Now, they are all OpenID providers but, to act as middlemen, they'll also have to act like OpenID consumers to either pass on a third-party OpenID identity or return a proxy identity. It's a very small price to pay, IMHO, since only oddball users will choose to do so.

Yes, it's going to be a party night and, when the dawn comes, small OpenID providers will just fade away like old soldiers, taking the name with them and leaving behind only big-name portals and social networks wrapped in brand names.