Sonoma and TEN

I used to frequent Sonoma coast and Napa when I was single.  Monterey too.  It's not that I liked the scenery.  Being young and hot-blooded doesn't leave much room for the scenery.  I usually went there with girls I dated so I could score.  Dating is fun, but waiting is not.  It's silly to travel somewhere and sleep in two rooms.  So a yes was a meaningful yes, oh yeah.  Sometimes I miss being a shallow bastard with a one-track mind.

Tim Oren is back from his Sonoma run and makes some nice suggestions on places to go there.

Tim also comments on my Fixing E-Mail post and my response to his metadata rant.  Tim points to Postini, champ of his portfolio, as an example of new e-mail technologies that will rescue us from spammers.  Cool.  I wonder where they got the name Postini?  Houdini?  'Ini' is a bit too addictini IMHOini.

Here is a bit more on the trusted e-mail network idea:

A Trusted E-mail Network (TEN) is a PKI network of mail servers.

Every message sent is signed by the originating TEN server to identify the sender.  The performance hit from the crypto operations is offset by crypto hardware and by meaningful application of mail priority.  Ultimately, slower delivery means more protection against spammers.
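The signing flow could be sketched like this; a toy version in Python, using a shared-secret HMAC as a stand-in for the real PKI signature, with invented X-TEN-* header names:

```python
import hashlib
import hmac

# Stand-in key material; a real TEN server would sign with its PKI
# private key, not a shared secret.
SERVER_KEY = b"ten-server-secret"

def sign_outgoing(message: bytes, sender: str) -> dict:
    # The originating TEN server signs the sender identity plus the body,
    # so a receiving server can tell who vouched for the message.
    digest = hmac.new(SERVER_KEY, sender.encode() + b"\n" + message,
                      hashlib.sha256).hexdigest()
    return {"X-TEN-Sender": sender, "X-TEN-Signature": digest}

def verify_incoming(message: bytes, headers: dict) -> bool:
    expected = sign_outgoing(message, headers["X-TEN-Sender"])
    return hmac.compare_digest(expected["X-TEN-Signature"],
                               headers["X-TEN-Signature"])

headers = sign_outgoing(b"Hello", "alice@example.com")
print(verify_incoming(b"Hello", headers))     # True
print(verify_incoming(b"Tampered", headers))  # False
```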

Only TEN users can send e-mail from a TEN network.  Anyone can receive a TEN e-mail.

Every TEN user is identified and the user's profile is maintained actively by at least one TEN service provider.  Each complaint against the user affects the user's profile as well as the profile of the organization sponsoring the user and the TEN service provider.  Various types of penalties affecting the quality of service are applied to the offending user or organization according to their profiles.  TEN service providers that fail to maintain the required level of service are ejected from TEN.

Quality of identity is maintained by payment.  New subscribers start at the ground level.  A side benefit of time-based quality of identity is reduced turnover: users can carry their accumulated quality of identity with them only if they switch to another service provider within the network.

That's it for now.  I am still thinking about TEN, but I had to spew just now because I need a refreshing drink of peer criticism.  You wouldn't give TEN a Ten, would you?  Heh.  When I briefly dug into sendmail, I was surprised to find it almost ready for TEN.  With a bit of codework and a load of capital, TEN could be the Visa of e-mail.

Update #1: 12:26PM

TEN differs from S/MIME-based solutions because it doesn't require the sender to have a user cert which must be issued, installed, revoked, and checked, the nightmare that brought down lofty dreams of PKI.  With TEN, all that's required is for the sender to add an SMTP server to his e-mail client, because it's the user's TEN server that signs the e-mail, possibly in S/MIME format.

To the receiver, it doesn't really matter who signed the e-mail as long as someone trustworthy did.  Non-repudiation can be provided as a value-added TEN service that requires stronger authentication methods.  Recipient feedback to the originating TEN server can be done in several ways, including attaching a message-specific URL to the end of the message or a Click Here to Kickass hyperlink.

A typical TEN user will have two SMTP accounts, a TEN account and a junk-mail account.  Because e-mails sent via the TEN account are fee-based, the user will use the junk-mail account for unimportant messages and mailing-list subscriptions.  For important e-mails, though, the user will choose TEN.

A TEN account should cost around $20 to open so everyone can get a TEN account just in case.  There will be a low monthly maintenance fee with reasonable monthly traffic allowances.  Traffic overflow can be sold per message or in bulk.  Spammers can't abuse the system because bulk e-mail is trickled out initially, giving enough time for complaints to flow back to the TEN server, which then squashes the remainder and slaps the sender around.
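The trickle-and-squash mechanic might look something like this; a sketch with invented thresholds and batch sizes:

```python
# Bulk mail is released in small batches; complaints arriving between
# batches can squash the remainder of the run.
COMPLAINT_THRESHOLD = 3   # invented threshold
BATCH_SIZE = 10           # invented trickle rate

def trickle_send(recipients, poll_complaints):
    """Send in batches until complaints cross the threshold."""
    sent = []
    for i in range(0, len(recipients), BATCH_SIZE):
        if poll_complaints() >= COMPLAINT_THRESHOLD:
            break  # squash the remainder and ding the sender's profile
        sent.extend(recipients[i:i + BATCH_SIZE])
    return sent

# Simulate complaints pouring in while the run trickles out.
complaints = {"count": 0}

def poll():
    complaints["count"] += 2  # two complaints arrive per polling interval
    return complaints["count"]

delivered = trickle_send([f"user{n}" for n in range(50)], poll)
print(len(delivered))  # 10: only one batch escaped before the squash
```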

 

Eolas Legacy

The picture of the post-Eolas era has become clearer with Microsoft's Changes to Default Handling of ActiveX Controls by Internet Explorer, and it's not pretty.

To avoid violating the disputed Eolas patent, web developers have three options:

  1. Generate <OBJECT> tag using JavaScript.
  2. Add a special (read non-standard) attribute.
  3. Require users to click OK on a dialog.

It's like some jerk patenting bottled beer and forcing everyone to either pour canned beer into a bottle or stop calling it beer.  Otherwise, the bottled beer will ask "Are you sure you want to drink me?".  Sheesh.  By the way, the jerk's name is Mike Doyle.

Here is a beer-belly's salute to Your-Ol'-Ass, er, Eolas.  *Burp*

RSS-Data Clarified

It seems that RSS-Data is confusing to some people (hello Matt ;-p) so I'll attempt to clarify.

RSS-Data is a proposal to create an RSS 2.0 extension that allows arbitrary instances of generic data to be embedded in RSS 2.0 feeds without introducing new elements with their own micro-schema and namespace.

It is just a proposal at this stage, meaning there is no concrete format yet.  Examples out there are just sketches of what it might look like.

It is an RSS 2.0 extension that adds a new way to extend RSS 2.0, an extension extension, if you will.

A typical way to extend RSS 2.0 involves adding a set of new elements belonging to a namespace like this:

<item>
  <link>blah</link>
  <description>blah</description>
  <my:element xmlns:my="http://...">value</my:element>
  <your:element xmlns:your="http://...">value</your:element>
  <Signature xmlns="http://www.w3.org/...">...</Signature>
</item>

Note that 'my:element' has its own schema and namespace, knowledge of which leaks to the application layer even if RSS parsers push all unknown elements up to the application layer.  At the application level, these new elements and attributes have to be fetched and navigated somehow.  Unfortunately, this is cumbersome in most languages.  Think about having to compare namespace URIs to distinguish elements, and how one would differentiate attributes from child elements.  XPath could be used, but there are performance and readability penalties.
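To make the pain concrete, here is what it looks like with Python's ElementTree (the namespace URI is a made-up placeholder): the application has to know each extension's full namespace URI just to locate its elements.

```python
import xml.etree.ElementTree as ET

ITEM = """<item>
  <link>blah</link>
  <my:element xmlns:my="http://example.com/my">value</my:element>
</item>"""

root = ET.fromstring(ITEM)
# The extension element arrives as a namespace-qualified tag, so the
# application must compare against the full URI to find it.
elem = root.find("{http://example.com/my}element")
print(elem.text)  # value
```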

An extension by RSS-Data might look like this:

<item>
  <link>blah</link>
  <description>blah</description>
  <data xmlns="http://rss-data.org/..."
        name="signature" type="structure">
    <data name="value" type="binary">Akjdfaiwjfqeesdah=</data>
    <data name="cert" type="binary" transform="base64, asn.1">
      askjdfa;kljalkjweqasdf
    </data>
  </data>
</item>

With just one legal RSS 2.0 extension that adds one new element plus a set of attributes, I can add arbitrary data into an RSS 2.0 feed.  While this rendition of RSS-Data removes the need to define new elements and an associated namespace for each type of data, there are two problems:

  • Language bindings are still problematic.  'name', 'value', and 'type' can be mapped fairly well to language-specific features, but mapping 'transform' is difficult.  I suppose 'transform' could be removed, but it is nice to have.
     
  • Existing XML standards like XML-Signature are not being used.

Solving the first problem involves turning all attributes into child elements and pushing them down another level.  This is essentially what XML-RPC and SOAP have done.  A nice bonus is that this approach works very well with modern scripting languages (e.g. item.signature.cert[0].name).
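A sketch of that mapping in Python, using a made-up element-only fragment: leaf elements become plain values and structures become attribute-accessible objects.

```python
import xml.etree.ElementTree as ET
from types import SimpleNamespace

# A hypothetical element-only RSS-Data fragment (names and values invented).
XML = """
<data name="signature">
  <data name="cert">
    <data name="name">My Cert</data>
  </data>
</data>
"""

def to_obj(elem):
    # Leaf <data> elements become their text; structures become
    # namespaces keyed by each child's 'name' attribute.
    children = list(elem)
    if not children:
        return (elem.text or "").strip()
    return SimpleNamespace(**{c.get("name"): to_obj(c) for c in children})

item = SimpleNamespace(signature=to_obj(ET.fromstring(XML)))
print(item.signature.cert.name)  # My Cert
```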

The second problem is harder to solve, and the RSS-Data proposal is silent on it.

An obvious use of RSS-Data is shoveling data with dynamic structure (e.g. database query results) out through RSS 2.0 feeds and having the data displayed in a scrollable grid or a nice chart by RSS clients without them really understanding what the data is.

Given that RSS-Data is ultimately just an extension of RSS 2.0 and does not prevent other extensions, I think RSS-Data is a Good Thing.  Let people vote with their feet.

Hurrah for The Terminator

Despite confusing Tim Oren, I am pro-Terminator and am happy to see Arnold win the election.  It turns out Tim is also a supporter of Arnold.  Right on, Tim!  Arnold said all the right things in his victory speech and I am looking forward to watching him kick California into shape.  Hurrah!

RSS-Data

What I like about Jeremy Allaire's RSS-Data proposal:

  1. Reduced need to change RSS schema, binding, and parser to support new payloads.
     
  2. Possibility of reusing XML-RPC code and SOAP code.
     
  3. Arguably faster to parse.
     
    In my experience, element-rich XML documents are faster to parse than attribute-rich XML documents.  But this is not important given readily available processing power at the consuming end.

What I dislike:

  1. Ugly and harder to read although not as bad as RDF.
     
  2. Increased need to change RSS application to support new payloads.
     
  3. Contextually inconsistent and verbose.

    <name>
      <name>value</name>  <!-- implicit style -->
      <name>
        <name>name</name> <!-- explicit style -->
        <value>value</value>
      </name>
    </name>
     

With RSS-Data, developers' attention will shift from RSS parsers to RSS application frameworks capable of supporting new payload types and routing mechanics via plugins.  Despite the irksome cosmetic downsides of RSS-Data, I like that.

The only problem is that one could make the same arguments for RDF, which makes me a hypocrite.  No news there, but I find it ironic to see RDF folks attacking RSS-Data.

Update #1 – 2003/10/08 10:21AM PST

Text moved to RSS-Data Clarified because it was too long.

Outsourcing in Korea

Yes, it's happening to Korea too.  Kuk-Min, one of the largest banks in Korea, is outsourcing its call center operation to China, where there is a large population of ethnic Koreans willing to work for much less money than those living in Korea.  Cost savings of up to 63% are expected.

Red Hat 9 and IBM JVM 1.4.1

For an hour, I tried to install IBM JVM 1.4.1 on my Red Hat 9 server without success.  Uploading the 61MB RPM file took a while via SFTP, hopefully because I haven't tweaked the bandwidth throttling on the SFTP daemon and not due to some bandwidth congestion at the server end.

The trouble started when I gave the file to RPM.  It was my first invocation of RPM.  That was fast, I said.  The output said the package was installed but nothing about where it was installed to.  After stumbling around the file system and books, I found the command to get information from the package.  Apparently, the files were relocated to the /opt/ directory.  That's strange, I said.

I thought maybe I was supposed to run a script there to install for real.  Nope.  I browsed the docs and read the readme file, which basically said that IBM JVM 1.4.1 is not compatible with Red Hat 9 due to the thread library change in Red Hat 9.  Great, isn't it?  I decided to post this info here because I couldn't find it on the web.  I hope it helps someone out later.

Linux is for geeks with very high tolerance level in all things except Microsoft.

Update #1 – 2003/10/06 1:49AM PST

I don't know exactly how it happened, but both IBM's and Sun's JVMs run just fine on my Red Hat 9 server now.  Weird.  I ran Tomcat on all three JVMs (IBM 1.4.1, Sun 1.4.2, Blackdown 1.4.1) without a glitch.

BTW, Tomcat 5.0.12 has been solid so far even though it's a beta.  The cool admin consoles are nice and JSPs are snappy.  Reading through the Tomcat developer list and bug database, there are no serious bugs remaining.  In fact, the Tomcat team is talking about whether to release 5.0.14 now or wait until the JSP 2.0 and Servlet 2.4 APIs are approved by Sun.  They are not expecting any more changes in the specs and want more people to bang on Tomcat 5.0.

I recommend that you try it now so the poor team can have some bugs to fix.  Boy, I wish Sun had this sort of problem.