Friday, December 21, 2007

Everybody Hates the Popular Kid

Via Patrick, I just read Steve's latest blog post about code complexity. I think the vast majority of Steve's points are spot-on. Complexity in a codebase ultimately leads to disproportionate support and maintenance costs, friction when on-boarding new developers, and a general productivity hit. When it takes me 30 minutes to get "in the zone", all it takes is one call from the boss to tear that down. That's a 60-minute productivity hit for a benign interruption, and no tooling in the world will help you keep it all in programmer "RAM".

I was talking with a colleague of mine about this (Alok Singh) and we both agree that certain levels of complexity simply extend beyond the limits of most people's cognitive skills. It's a simple observation and a complicated problem, all in one.

I agree with many of Steve's core assertions in this piece. But I can't quite get past the Java-bashing and stereotyping that permeates the post and seems en vogue these days. Yes, there are bad Java programmers out there, but is he really hitting on the limitations of the Java language and/or Java programmers in general?

A couple of things that stuck out for me:

"[The only people who read] Design Patterns are programmers, and only programmers who use certain languages. Perl programmers were, by and large, not very impressed with Design Patterns."
I'm not sure I see this as an endorsement. I don't really know any non-Perl programmers who find another programmer's Perl easy to maintain. I've written a decent amount of it, and two of the top ten most miserable experiences in my programming career have been trying to decipher CPAN modules. Terse code does not make for manageable code and Perl programmers tend to pride themselves on brevity, readability be damned.
"It's obvious now, though, isn't it? A design pattern isn't a feature. A Factory isn't a feature, nor is a Delegate nor a Proxy nor a Bridge."
I don't really know what Steve's background was prior to Google, but I can say that as someone who works on commercial software which is regularly extended, our customers do actually think that having a factory is a feature. If we swap out authentication mechanisms, customers extending or writing plugins want to use a factory - they want one way to access a single API in Java. Lots of them. We could say, "no, use the REST API" but that hasn't actually seen much uptake (although it's there) and quite simply doesn't scale for core functionality like authentication.
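To make the point concrete, here is a minimal sketch of the kind of factory a plugin author would code against. The names (Authenticator, LdapAuthenticator, AuthenticatorFactory) are hypothetical, invented for illustration, not our actual API; the point is that swapping the mechanism never touches plugin code.

```java
// Hypothetical sketch: a factory that hides which authentication
// mechanism is in use, so plugin authors code against one API.
interface Authenticator {
    boolean authenticate(String user, String password);
}

class LdapAuthenticator implements Authenticator {
    public boolean authenticate(String user, String password) {
        // A real LDAP bind would go here; stubbed for illustration.
        return user != null && !user.isEmpty();
    }
}

class DatabaseAuthenticator implements Authenticator {
    public boolean authenticate(String user, String password) {
        // A real database lookup would go here; stubbed for illustration.
        return user != null && password != null;
    }
}

final class AuthenticatorFactory {
    // Swapping "ldap" for "database" changes the mechanism without
    // touching any plugin code that calls getAuthenticator().
    static Authenticator getAuthenticator(String mechanism) {
        if ("ldap".equals(mechanism)) {
            return new LdapAuthenticator();
        }
        return new DatabaseAuthenticator();
    }
}
```

A plugin simply calls `AuthenticatorFactory.getAuthenticator(...)` and never learns which implementation it got back.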

Next is:
"Dependency Injection is an amazingly elaborate infrastructure for making Java more dynamic in certain ways that are intrinsic to higher-level languages. And – you guessed it – DI makes your Java code base bigger."
Actually, I can't see how this could possibly be correct, so I'd love to hear more detail. IoC approaches like Spring reduce my code because I'm not using those design patterns that Steve is so against. I don't really care about creational logic any more because the reference I need is injected for me. I don't see Singletons or Factories any more, just references. I may have more configuration files than I did before, but I certainly have less code. Mature libraries like Spring do other nice things for me, like translating noisy checked exception handling into unchecked exceptions, managing transactions across multiple resources, and making an object remote when it has no inherent remote-ness. For example, I can publish any interface across Hessian, Burlap, JMX or RMI using Spring configuration; the code doesn't know the difference. IoC has its issues to be sure, but code bloat is not one of them.
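Constructor injection, the core idea behind Spring-style IoC, shows where the code savings come from. This is a hedged sketch with invented names (MailSender, OrderService); in real Spring the wiring below would live in XML configuration rather than in code, but the class itself looks the same either way.

```java
// Hypothetical sketch of constructor injection: the class declares
// what it needs and never runs creational logic itself.
interface MailSender {
    void send(String to, String body);
}

class OrderService {
    private final MailSender mailSender;

    // The container (or a test) hands in the dependency; there is no
    // MailSenderFactory.getInstance() call anywhere in this class.
    OrderService(MailSender mailSender) {
        this.mailSender = mailSender;
    }

    void confirm(String customer) {
        mailSender.send(customer, "Order confirmed");
    }
}
```

Notice what's absent: no Singleton lookup, no Factory, no creational code at all. That's the code that DI deletes.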

Moving on:
"I recently had the opportunity to watch a self-professed Java programmer give a presentation in which one slide listed Problems... The #1 problem he listed was code size: his system has millions of lines of code."
Ok, so is the guy a bad programmer, or is he a bad programmer by virtue of being a Java programmer? I don't see the correlation. If he were giving a talk on VB, would this even make the blog rounds? There are lots of bad programmers out there, but I feel like there is a real tendency to use guilt by association against the Java community. At the end of the day, one thing that is generally not disputable is that aside from the core *nix and C library collection, Java is *the* most ubiquitous set of libraries available. Steve likely has the ability to write a native hook against a native C library; I quite simply don't have the time, and the Java libraries are my second-best resource.

As a counter-point, I looked at the amount of code in the mainline stable Linux kernel, which by count runs at 768,119 lines in just the C - no headers, macros, etc. That's pretty big, and if you look at the committers, there are hundreds of people and companies (but not the almighty Google, I might add) contributing towards that code base. But, for the life of me, I can't find a blog post slamming the amount of code in the Linux kernel, or that it's too complex, or that it isn't contributing towards the greater good. Similarly, Apache HTTPD core seems to be running over 250K lines for just the HTTPD C code and surely requires a more skilled developer pool than your average Java developer would fill.

At the end of the day, I don't think Java is a programming panacea. But if I put on my agile hat, and see that customers want solutions that they can run in their existing environments without worrying about low-level library dependencies, that reuse a crazy amount of existing code, and that they can hire decent programmers to deliver, then Java meets those tests and might actually be the simplest solution that delivers business value. C might be faster, Perl more "cool", but neither necessarily makes the most business sense. Business value isn't just about what the technology Illuminati tip their hat to. It's grounded in accessibility and, more importantly, in whether I can hire people to implement and support it. In that sense, Java is a decent solution these days. I tend to think business viability is a decent measure of a language, regardless of how it got there.

Thursday, December 20, 2007

Socializing Eventual Consistency

I've followed Werner's blog for a while now and his latest on eventual consistency is one of my favorite posts. I recently advocated for an eventual consistency approach for a service offering we're putting together and lost. Part of that I think was that the technology to build an eventually-consistent client could really use some improvement and I'll talk to that in a later post. More at play though was the general issue of pushing people's comfort zones with new concepts.

I was somewhat conflicted after the discussion, because the Agile part of my brain wanted to believe that the simplest thing we could do was flow with our skill set and just write to an SQL database. The systems experience in me was screaming the whole way at building a tightly-coupled, highly intertwined system when staring at a green field that didn't really have highly-structured relational data.

As always, I could have done a better job in making my case, focusing more on the technical and less on the emotional, but I sure wish I had read Werner's blog before the discussion. Two things really stood out:

"Eventual consistency is not some esoteric property of extreme distributed systems. Many modern RDBMS systems that provide primary-backup reliability implement their replication techniques in both synchronous and asynchronous modes."

When I talk to people about things like eventual consistency, I generally get a response along the lines of "that's cool, but we're not Amazon". No, most companies aren't, but I'm not sure that we shouldn't leverage all the brain power and learning from Amazon to our benefit. You can only make buggy whips for so long while thought leaders speed ahead.
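Werner's point about asynchronous primary-backup replication is easy to demonstrate. Here's a toy model, entirely hypothetical and single-threaded for clarity, showing that a read from a lagging replica is stale until replication catches up, then converges:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy model of asynchronous primary-backup replication: writes are
// acknowledged by the primary immediately and reach the replica later,
// so a read from the replica can be stale until replication runs.
class ReplicatedStore {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> replica = new HashMap<>();
    private final Queue<String[]> pending = new ArrayDeque<>();

    void write(String key, String value) {
        primary.put(key, value);                 // acknowledged right away
        pending.add(new String[] { key, value }); // shipped to replica later
    }

    String readFromReplica(String key) {
        return replica.get(key);                 // may lag behind the primary
    }

    void replicate() {                           // runs "eventually"
        while (!pending.isEmpty()) {
            String[] kv = pending.remove();
            replica.put(kv[0], kv[1]);
        }
    }
}
```

Any shop running an async-replicated database is already living with exactly this window, whether they call it "eventual consistency" or not.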

The other piece that stood out was:
"Inconsistency can be tolerated for two reasons: for improving read and write performance under highly concurrent conditions and for handling partition cases where a majority model would render part of the system unavailable even though the nodes are up and running."

Customers are always going to want the C, A and P from commercial software, but experience tells us that they can't have all three at once. If you buy Werner's arguments, partitions are unavoidable in distributed systems, so the real trade is between the C and the A. Most business-focused users I've encountered, when really pressed, almost always choose the A over the C.
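The choice shows up concretely in what a node does when it can't reach the authoritative copy. A hedged sketch, with invented names, of the two answers: fail the read (favoring consistency) or serve the last-known value (favoring availability, at the risk of staleness):

```java
// Hypothetical sketch of the A-versus-C choice during a partition:
// a node that can't reach the authoritative copy either refuses to
// answer (consistency) or serves its last-known value (availability).
class PartitionAwareReader {
    private String lastKnown = "v1";
    private boolean partitioned = false;
    private final boolean preferAvailability;

    PartitionAwareReader(boolean preferAvailability) {
        this.preferAvailability = preferAvailability;
    }

    void setPartitioned(boolean p) {
        partitioned = p;
    }

    String read() {
        if (partitioned && !preferAvailability) {
            // The consistency-favoring node would rather be down
            // than risk returning a stale answer.
            throw new IllegalStateException("cannot guarantee consistency");
        }
        return lastKnown; // possibly stale while partitioned
    }
}
```

When you phrase it as "would you rather see slightly old data or an error page?", most business users pick the old data, which is choosing A over C.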

When trying to socialize concepts like these, there is truly an art to helping people out of their comfort zones. I'm not as good at it as many people are, but posts like Werner's go a long way towards helping with that.

Now, if they can just fix that SimpleDB API...

Thursday, December 13, 2007

Cheers to Adobe

This is great news from Adobe and should really help them distance themselves from other RIA platforms by providing the middleware to help build better apps without the hefty price that used to come with LiveCycle DS. While I always liked LiveCycle DS, the price and proprietary nature always made recommending it difficult. No more! Now, if we can just get them to keep opening up and give the community the player :)

I also read this announcement to mean that the AMF spec is fully open as well, which is great to hear. I was always unclear on exactly what was available surrounding AMF; removing that ambiguity is equally big news.