Wednesday, November 20, 2013


 I am now doing my writing over at my personal blog which can be found at http://blog.paulwilliams.biz.

Monday, March 17, 2008

The good about Windows

There is one good thing you can say about Windows: It's a known quantity. I couldn't run Firefox 3 Beta on my Linux machine because the mouldy old version of RHEL 4 my company lets me run doesn't ship with the right library set. Furthermore, it's so old that the right library set won't compile without more handholding than I care to apply. With Windows, the target is well understood, even if it is an older version.

Service providers can learn from this. Make sure that your outputs withstand the test of time, and build into your deliverables an upgrade path that cannot become obsolete. Your outputs will eventually reach end-of-life, but make sure you take care of the customers you leave behind. At least leave a kind word and some suggestions for dealing with life beyond support.

Wednesday, April 4, 2007

Achieve Corporate Strategy in 11 Easy Steps

It's probably not that easy, but it is documented. Dr. Glenn Gomes has published a set of presentation slides on strategic management on his website. The slides provide a light overview of the strategic management process. While light, they will get you thinking about strategic issues. That's never a bad thing.

Wednesday, March 28, 2007

The 6 Sigma Myth

Admittedly, I have very little direct experience with 6 Sigma. It reeks to me of a management fad, much like other "Quality" initiatives. I'm sure there's nothing wrong with the fundamental tenets of 6 Sigma, but sometimes it seems as if simply "getting a couple of 6 Sigma Black Belts in the room should solve our problems!" (actual quote).

I had an interesting experience with a 6 Sigma-trained project manager once. We were getting ready to "go live" -- that is, to process data in our system. We did not have any of the connectivity ready, nor any of the automation. However, she found a loophole for calling the task done: apparently the concept of "go live" had been narrowly defined as the ability to process data (which was complete). Therefore the task of "going live" was done, even though at a practical level we were nowhere near that milestone.

It seems funny to me how stamping a label on these quality fads (which really seem to amount to writing down and agreeing to a project plan) immediately impresses business people.

Then again, maybe I've had a tainted experience or two.

For a final laugh, check out this screen shot. It's from a Six-sigma how-to page at the Six Sigma Guide website.

Six-Sigma at work

Monday, March 26, 2007

Useful posts this week

I don't want this site to just be a link farm -- I really don't. I hope to come up with some of my own useful analysis and become a thinker on my own terms. But damn it if there aren't some posts out there really worth reading. I guess I haven't cornered the market on brains just yet.

Saturday, March 24, 2007

The Cure for Silver Bullet Syndrome

I had an invigorating conversation with a co-worker the other day. We were discussing the next generation of technologies for application component integration: Web Services and SOA. As we got into the discussion, he made a statement to the effect that tools will flatten out any difficult learning curve in implementing the architecture. This seems to be a fundamental flaw in thinking at businesses worldwide, and thus is a worthy topic for consideration here.

What is it about software "tools" that allures? In my experience as a software engineer, I have been exposed to a great number of "tools" meant to "help" the development process. Unfortunately, precious few actually were helpful, and more than a couple were complete disasters.

There was one tool, called Xenos d2e, which was supposed to help our company mine print data so we could reformat documents with a new "look and feel" that conformed to the client's changing tastes. This tool worked well enough -- provided the pages were static. Add too much dynamic formatting and the tool would lose track of its page markers. Of course, that could be worked around by scripting code on the back end -- in REXX. Well, nobody knew REXX, so those tasked to work with this product had to ramp up quickly on the language. Then, when the project transitioned to a new team, the new team had to learn REXX to maintain the software. The tool was unhelpful.

Contrast that experience with a tool written by a colleague which did almost the same thing, though it had no back-end programming language. The tool did exactly what we needed: extract data from the input print stream based on defined markers. The difference? Even though my colleague's tool was less capable, it was designed to do exactly what was needed, took less overall time to learn, and had fewer maintenance issues. This tool was helpful, and cost less in learning, maintenance, and license fees.
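The marker-based approach that colleague's tool took can be sketched in a few lines. The marker names and sample record below are hypothetical, purely to illustrate the idea; a real print stream would need format-specific handling.

```python
# A minimal sketch of marker-based extraction from a print data stream.
# The markers and field layout here are hypothetical illustrations.

def extract_fields(stream, markers):
    """Scan each line for a defined marker and capture the text after it."""
    results = {}
    for line in stream.splitlines():
        for name, marker in markers.items():
            if line.startswith(marker):
                results[name] = line[len(marker):].strip()
    return results

sample = """ACCT# 12345-678
NAME  Jane Q. Customer
AMT   $1,234.56"""

fields = extract_fields(sample, {
    "account": "ACCT#",
    "name": "NAME",
    "amount": "AMT",
})
print(fields)  # {'account': '12345-678', 'name': 'Jane Q. Customer', 'amount': '$1,234.56'}
```

The point is that there is nothing here to "ramp up" on: the whole mechanism fits in one screen and any maintainer can read it.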

In my current position, I deal with a certain source control package which has many integrated features and lots of bells and whistles. Unfortunately, it cannot easily define multiple lines of development against a common code base, so we have much difficulty doing parallel development: small enhancements based on current production alongside major feature rewrites for upcoming releases. This extra management costs quite a bit of time and is therefore not efficient. And yet we gladly paid $20,000 for the privilege of being less productive than we would have been paying $0 for Subversion, all for an integrated bug-tracking function that we rarely make use of anyway.

Kathy Sierra over at Creating Passionate Users foresees an even more sinister issue: by hiding implementation details, tools are training us not to understand how technology works. One initial reaction might be that since the tool helps you get the job done, you don't need to know the underlying technology. This might be true for many end users, but it's extremely important for technology service providers to understand what's going on beneath the surface.

What would happen if you suddenly had a client that was using a slightly different version of the technology? For instance, suppose your tool handled web services abstraction reasonably well, but your next 800 lb gorilla client used a web services handler from IBM that deviated from the standard in subtle but frustrating ways; your tool would be worthless.* If you never bothered to learn how to implement a web service without the tool, you would be stuck. Likewise if the spec was revised by the standards body and your tool's author had let the tool become obsolete. It's quite likely that your tool would limit your effectiveness in dealing with clients that had recently purchased their technology stack.
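To see what "knowing what's beneath the surface" means here: a web services tool ultimately just constructs XML like the SOAP 1.1 envelope below. The service namespace and operation name are hypothetical, and no network call is made; the point is only that the wire format is plain XML you can build and inspect yourself.

```python
# A sketch of what a web services tool hides: constructing a SOAP 1.1
# request envelope by hand. The service namespace and operation are
# hypothetical; no actual request is sent.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/quoteservice"  # hypothetical service namespace

def build_soap_request(operation, params):
    """Build a SOAP envelope wrapping the given operation and parameters."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}{operation}")
    for name, value in params.items():
        child = ET.SubElement(op, f"{{{SVC_NS}}}{name}")
        child.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

request = build_soap_request("GetQuote", {"symbol": "IBM"})
print(request)
```

If a vendor's stack deviates from the spec, someone who can read and write this envelope directly can diagnose and patch around the deviation; someone who only knows the tool's wizard cannot.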

Don't let it happen to you! Use tools, sure. But be sure you know what's beneath the surface.

* I'm not saying that IBM has a deviant web services stack, but they're an easy target as their J2EE stack seems to deviate from the spec.

Tuesday, March 20, 2007

Why do most projects fail?

Michael Wade at Execupundit.com has written a short list of the reasons why most projects fail. He posits that many projects fail due to individual errors of omission. There are two things to like about this:

  1. I've been guilty of this, which ultimately caused problems, and
  2. This idea fits with one of my professional philosophies: "If there's a problem, look first to yourself."

That second one is key to leadership. I feel that as a leader it's my responsibility to figure out whether there was something I could have done differently to avoid the problem situation. Only if there is nothing I could have done can I fully blame another individual or organization. In my experience, the most effective leaders I've followed used this philosophy; they weren't "born" leaders, but learned from their mistakes by reviewing their performance and incorporating those lessons into their approach.