I just posted a piece to InfoQ on the SEI's new 87-page "Evaluating and improving architectural competence" paper. Toward the end of the report, there is a section discussing the principles embodied in the models, one of which is that goals should be clearly articulated. One of the examples they used made me chuckle:
Documenting the architecture is likely to lead to high-quality architectures because documentation is essential to effective communication, which is essential to effective understanding and use by the architecture’s stakeholders, which is in turn essential to providing timely and useful feedback.
Don't get me wrong - I love almost every aspect of my job, but "documenting the architecture" is not one of my favorite activities by a long shot. It's right up there with filling in timesheets and doing SR&ED claims.
I think part of the problem is figuring out what to document. Most of the stuff I've read suggests spending weeks working through the various architectural views. Obvious guy (aka Heraclitus) points out that since change is the only constant, you're taking on a significant maintenance burden keeping the document in sync with the evolving system. Meanwhile, the developers and other architects have filed your treatise in the tl;dr folder.
I agree that "effective communication" is "essential to effective understanding and use by the architecture’s stakeholders", but I'll take providing working code any day.
Tim Bray started a project last year that attempted to find the fastest way to do string wrangling on log files using Erlang. The idea is to see how languages (like Erlang) that work well on multi-core machines compare with more traditional languages, both in how a solution is designed and in performance. Pretty soon, Really Smart People were contributing solutions, including some written in languages that are not concurrency-savvy per se (e.g. Perl). As it turns out, a Perl implementation kicked Erlang's butt, but the project was somewhat flawed, as Tim points out in "Wide Finder 2":
There were a few problems last time. First, the disk wasn’t big enough and the sample data was too small (much smaller than the computer’s memory). Second, I could never get big Java programs to run properly on that system, something locked up and went weird. Finally, when I had trouble compiling other people’s code, I eventually ran out of patience and gave up. One consequence is that no C or C++ candidates ever ran successfully.
This time, we have sample data that’s larger than main memory and we have our own computer, and I’ll be willing to give anyone who’s seriously interested their own account to get on and fine-tune their own code.
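For readers who missed round one, the core task is simpler than it sounds: scan a web server log and tally which articles were fetched most often. Here's a minimal single-threaded sketch in Python (the sample log lines and helper name are mine, not from the project; the regex follows the article-path pattern Tim described for his ongoing blog) — the whole point of Wide Finder is how you spread this trivial loop across many cores:

```python
import re
from collections import Counter

# Pattern for article fetches in Tim Bray's "ongoing" access log, e.g.
#   GET /ongoing/When/200x/2007/09/20/Wide-Finder HTTP/1.1
# The capture group is the article's date-and-title path.
ARTICLE = re.compile(r'GET /ongoing/When/\d{3}x/(\d{4}/\d{2}/\d{2}/[^ .]+) ')

def top_articles(lines, n=10):
    """Tally article fetches and return the n most popular (path, count) pairs."""
    counts = Counter()
    for line in lines:
        m = ARTICLE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)

# Hypothetical sample data standing in for the real multi-gigabyte log:
log = [
    '127.0.0.1 - - "GET /ongoing/When/200x/2007/09/20/Wide-Finder HTTP/1.1" 200 -',
    '127.0.0.1 - - "GET /ongoing/When/200x/2007/09/20/Wide-Finder HTTP/1.1" 200 -',
    '127.0.0.1 - - "GET /ongoing/When/200x/2007/10/30/WF-Results HTTP/1.1" 200 -',
]
print(top_articles(log))
```

The interesting engineering is everything this sketch leaves out: chunking a file bigger than RAM, fanning chunks out to worker processes, and merging the per-worker counters — which is exactly where Erlang, Perl, and friends diverge.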
What I loved about this the first time around was the discussion of how people approached the problem and refined their strategies based on some serious analysis of the bottlenecks. This is kind of geeky, but it's the type of stuff I find fascinating. Tim must have spent a fair bit of time lobbying for the corporate resources needed to get round 2 off the ground, so hats off to you, Tim, and well done Sun for having the foresight to back this project.