While reading about new and noteworthy features in Eclipse 3.1M7, I ran across the Eclipse performance bloopers page, which I found interesting.
For example, I didn't know that String.substring().intern() can prevent garbage collection of the original string's character buffer: substring() keeps a reference to the buffer, and intern() adds another reference to it that won't be released until VM shutdown. I suspect some XML parser, DOM, and XSLT implementors are doing this unintentionally.
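A minimal sketch of the blooper and its usual fix, assuming the String implementation of that era (where substring() shared the parent's backing char[] rather than copying it); the method names and the sample document are hypothetical:

```java
public class InternBlooper {
    // Risky pattern (on old JDKs): the interned substring shares the parent
    // string's entire char[] buffer, so interning a tiny token can pin a
    // huge document's buffer in memory until VM shutdown.
    public static String riskyToken(String bigDocument) {
        return bigDocument.substring(0, 4).intern();
    }

    // Common fix: new String(...) copies just the characters the substring
    // actually needs, so only the small copy ends up in the intern pool.
    public static String safeToken(String bigDocument) {
        return new String(bigDocument.substring(0, 4)).intern();
    }

    public static void main(String[] args) {
        String doc = "name=value; imagine megabytes of parsed XML here";
        // Both return the same canonical "name" string; only the memory
        // retention behind the scenes differs (on pre-copying JDKs).
        System.out.println(riskyToken(doc).equals(safeToken(doc))); // true
    }
}
```

The two methods are observably identical; the difference only shows up in heap retention, which is exactly why the blooper is easy to commit unintentionally.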
Another blooper is, IMHO, a general design problem with the Java stream I/O API. Specifically, there is no direct way for library implementors to tell whether an InputStream or OutputStream passed into their library (e.g. an XML parser) is buffered or not.
If the implementor assumes it is buffered and the caller forgets to buffer, I/O performance crawls. If the implementor always buffers and the caller does too, I/O performance suffers again. Of the two evils, double buffering is better than unbuffered I/O, but with N layers of libraries you could end up with N buffers.
IMHO, the code that creates a stream should buffer it as needed. Libraries can also use markSupported() on input streams, which returns true for most buffering input stream implementations, to decide whether to add buffering. For output streams, you are out of luck.
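The markSupported() heuristic can be sketched as a small helper; the class and method names here are made up, and the heuristic is admittedly rough (mark/reset support is only a proxy for "probably already buffered"):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class StreamUtil {
    // Wrap the stream in a BufferedInputStream only when it does not
    // already support mark/reset, avoiding pointless double buffering.
    public static InputStream bufferIfNeeded(InputStream in) {
        return in.markSupported() ? in : new BufferedInputStream(in);
    }

    public static void main(String[] args) {
        // ByteArrayInputStream supports mark/reset, so it is left alone.
        InputStream inMemory = new ByteArrayInputStream(new byte[] {1, 2, 3});
        System.out.println(bufferIfNeeded(inMemory) == inMemory); // true

        // A bare InputStream does not (the default markSupported() is
        // false), so it gets wrapped.
        InputStream raw = new InputStream() {
            public int read() { return -1; }
        };
        System.out.println(bufferIfNeeded(raw) instanceof BufferedInputStream); // true
    }
}
```

Note the heuristic's limits: a FileInputStream wrapped by some custom mark-supporting filter would be skipped even if no real buffering is happening, and, as the post says, there is no OutputStream equivalent at all.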