
“Fed” Up With Bad Software? Learn from the Woes of Healthcare.gov

by Brenda Hall

All the noise and chatter over the ability to sign up for healthcare on the Healthcare.gov website is a clear indicator of one of the things that irks me the most: websites that don’t work. An online experience, whether shopping, planning a vacation, or any other effort requiring interaction with a computer or mobile device, that doesn’t deliver a successful result is very annoying. It’s also detrimental to the lifespan of the product and ultimately to the parent company responsible for its operation.

I see a growing trend of people getting plain fed up, and in most cases the issue can be traced directly to ‘buggy’ software. As with any effort to build a website, an experienced team under the guidance of strong leadership is critical for success. The “team” is vast and varied… not everyone is there to write code. Business analysts working with the users (in this case, everyday Americans from many walks of life) should be making sure the requirements given to the technical team are clear and well documented. “Leadership” will include people who have built e-Commerce websites in the past. And project managers are there to assemble the project plans that deliver a successful result on time. The failure of this contractor to deliver a working health care enrollment portal may involve many ‘gotchas’ along the way… but Quality was certainly the sacrificial lamb on the altar.

Any experienced project manager will tell you projects are driven by three pillars: Cost, Schedule and Quality. If one of those elements goes awry, the project is at high risk of failure. And far too often, pressures on cost and schedule force quality to draw the short straw. No time to test? Really? So it’s okay to turn your customers loose on something that isn’t working? Testing is a critical part of producing good software (and websites). Testing is expensive. Testing takes time. Testing is hard. Testing takes experienced professionals to execute.

The experience so many people had trying to sign up for healthcare is a modern-day example of what so many of us have come to tolerate, but not be happy about. “Why did I get all the way to the end and then get stuck? Why did I get bounced out just as I entered my credit card data? I haven’t changed my password… why isn’t it working now? Why is my data lost every time there’s an upgrade?” The complaints go on and on. I think people are fundamentally fed up with bad software in the market, and this most recent fiasco has given those voices volume.

There really is no good excuse for Healthcare.gov to have gone ‘live’ in the condition it was in. We can ‘sidewalk supervise’ all day long, but the message is clear: someone dropped the ball on Quality. It just wasn’t important enough, and a lack of testing is the reason this website failed to deliver. Costs have now skyrocketed with all the media attention and what President Obama has described as a ‘24/7’ repair effort, far more than should have been spent if the job had simply been done right from the start.

However, Healthcare.gov has one enormous advantage over most projects — it’s being built by the government using federal tax dollars. As crippling as this failure has been, the project will still move forward. A similar fiasco at a private-sector company would undoubtedly cause such a reputational disaster that the software would be doomed beyond repair.



Understanding Build Server and Configuration Management – Part 2

Implementing Configuration Management: How to Start the Process

by Morgan McCollough, Senior Software Engineer

In my last post we discussed the benefits of creating a development environment with continuous integration and configuration management using a centralized build server. We have had a number of clients who hesitated to start the process of building such an environment because they were too busy and thought it would take too much time and too many resources to make meaningful progress. After all, if you have a team of smart developers, they should be able to handle the build process on their own without any trouble, right? Other teams simply lack the leadership or the buy-in from management to put the proper processes in place.

The following are a few basic steps your team can take to start the process of configuration management. Each step solves specific problems and will be invaluable in bringing more consistency and stability to the development process.

1. Establish a source control branching strategy and stick to it

The first rule of branching is: don’t. The goal here is to establish a small number of branches in source control to help you manage the process of merging and releasing code to test and production. Avoid the common situation where multiple, nearly identical branches are created for each deployment environment. Merge operations can be a nightmare in any environment. The number of branches should be kept to a minimum, with new branches created only when something needs to be versioned separately. This applies even with distributed source control systems like Git: there may be many disparate development branches, but QA and release branches should be kept to an absolute minimum.

2. Establish basic code merging and release management practices

The goal is to establish this process along with the branching strategy in order to foster a more stable code base whose state is known at all times. There are many resources available that give examples of the kinds of strategies that can be used for environments with different levels of complexity. Avoid the situation where features for a given release are spread across multiple branches and integration has to be done at the last minute through multiple merges.

3. Create and document an automated build process

Make sure the entire source tree can be built using a single automated process, and document it. Avoid the situation where only one person knows all the dependencies and environmental requirements for a production build. All too often we have seen cases where it takes too much time and money to reverse engineer and reproduce a build when the point person is not available, or where critical steps of the manual process are forgotten or performed incorrectly.
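As a minimal sketch, assuming a Gradle-based Java project (the post does not prescribe a particular build tool), a single build.gradle.kts at the root of the source tree can capture the dependencies and the test step so that any developer, or the build server, can reproduce the build with one command such as ./gradlew build:

```kotlin
// build.gradle.kts -- a minimal, illustrative Gradle build script (Kotlin DSL).
// It declares the compile and test setup in one versioned file so the build is
// reproducible on any machine, not just the "build person's" workstation.
plugins {
    java
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation("junit:junit:4.13.2")
}

tasks.test {
    useJUnit() // run the unit tests as part of every build
}
```

Because the script lives in source control next to the code, it doubles as documentation of the build's dependencies and steps.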

4. Create and configure a build server using off the shelf components

There are many packages available for putting together your own build server and continuous integration environment. TeamCity is one of the more popular of these.

5. Establish continuous integration on the central development branch

Designate a central development branch for continuous integration. The first step is simply to initiate the automated build process every time someone checks in code. Packages like TeamCity make this very easy and include notification systems to provide instant feedback on build results. Avoid the situation where one team member checks in code that breaks the build and leaves other developers to debug the problem before a release.

6. Create and document an automated deployment process

Avoid the inevitable mistakes of a manual deployment process, and the huge waste of time spent manually reconfiguring a deployment for a different environment: create automated scripts so that a build can be easily deployed to any environment.
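To make that concrete, here is a minimal sketch in Kotlin of a deployment step that is parameterized by environment rather than performed by hand. The post does not specify a scripting language or directory layout; every path and file name below is purely illustrative.

```kotlin
import java.io.File

// Illustrative deployment step: copy the built artifact and the environment-specific
// configuration into an assumed target directory. Paths and names are hypothetical.
fun deploy(artifact: File, environment: String) {
    val targetDir = File("/opt/app/$environment") // assumed target layout
    targetDir.mkdirs()
    artifact.copyTo(File(targetDir, artifact.name), overwrite = true)

    // Each environment (dev, qa, prod) keeps its own config under version control.
    val envConfig = File("config/$environment.properties")
    envConfig.copyTo(File(targetDir, "app.properties"), overwrite = true)

    println("Deployed ${artifact.name} to $environment")
}

fun main() {
    deploy(File("build/libs/app.jar"), environment = "qa")
}
```

The same script runs unchanged for dev, QA, and production; the environment name is the only input, which is what removes the error-prone manual reconfiguration step.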

7. Create unit tests and add them to continuous integration

Avoid nasty system integration problems before they happen. Also, avoid situations where changes in one area break basic functionality in another by establishing a process and standard for developers to create automated unit tests using a common framework like JUnit or NUnit. These tests should be configured to run as part of the continuous integration process; if any of them fails, the entire build fails and the whole team is notified.
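For illustration, a unit test in this kind of setup might look like the following Kotlin/JUnit 4 sketch. The PriceCalculator class is invented here purely to show the shape of a test the build server would run on every check-in.

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Illustrative class under test (invented for this example): computes an order
// total with a sales tax rate applied.
class PriceCalculator(private val taxRate: Double) {
    fun total(subtotal: Double): Double = subtotal * (1 + taxRate)
}

// A JUnit test the build server runs on every check-in; if it fails,
// the whole build fails and the team is notified.
class PriceCalculatorTest {
    @Test
    fun totalIncludesSalesTax() {
        val calculator = PriceCalculator(taxRate = 0.0825)
        assertEquals(108.25, calculator.total(100.0), 0.001)
    }
}
```

Because the tests run on the same central build, a change that breaks behavior in another area surfaces within minutes of check-in rather than at integration time.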

Finally, remember that software development is a very complicated process. Even with a team staffed with geniuses, people will make mistakes, undocumented branching and build processes will confuse and confound the team, and there will always be turnover in any organization. With even the most basic software configuration management processes in place, any organization can avoid the horror of weeks spent debugging code integration problems, or, even worse, the situation where no one is quite sure how to reproduce the production version of an application in order to find and fix a critical bug. No one is going to jump into this and reach CMM level 5 in a few weeks, but starting small and establishing meaningful and attainable milestones can get any team a long way toward the goal of a stable, reproducible, and flexible build and deployment environment.



Invisible Pillars of Development: Motivation, Morale, Skill, Diversity, and Chemistry

by Phil Smith, Vice President of Operations and Services

A few months back I wrote a blog post for this site that talked about the three visible pillars of development: cost, schedule, and quality. Like arithmetic, these items are fact-based with little room for interpretation. Dollars, time, and adherence to requirements are all measurable and tangible. Multiply this, divide by that, and you can measure earned value and know if your project is ahead, behind, or progressing as expected.

Yet the most important dynamics of the workplace are not always measurable. For instance, how is team morale? Does the organization have enough diversity? Is the team chemistry contributing to overall performance, or hurting it? This time around I’ve elected to write about things that are more subjective, including the less visible “pillars” of development and the unspoken roles of team leadership.

Let’s look at that first.

A project manager works within the realm of cost, schedule, and quality. It is possible to affect one or two of these elements by making adjustments to the other(s). For example, we can reduce cost and advance the schedule if we are willing to sacrifice quality.

A project leader creates an environment where motivation, morale, skills, diversity, and chemistry all contribute to improved cost, schedule, and quality. It is important for us to think at a higher level specifically about these items that I’ve listed, because improving these areas is the best way to affect cost, schedule, and quality without having to make sacrifices.

Now I’ve stepped into a leadership dialog that may require multiple books to explain. Given that this is a blog and that my goal is to provoke thought, I’ll stick with the high level and let the people who write books do their writing. In fact, I’ve included some books at the end of this blog that I believe cover these topics quite extensively.

I do want to share some of my own thoughts and I invite you to comment on the blog with your own ideas in return. In the title of the article I refer to the following concepts as development pillars as well, because they are essential to success.