
Understanding Build Server and Configuration Management – Part 2

Implementing Configuration Management: How to Start the Process

by Morgan McCollough, Senior Software Engineer

In my last post we discussed the benefits of creating a development environment with continuous integration and configuration management using a centralized build server. We have had a number of clients who hesitated to start building such an environment because they were too busy and thought it would take too much time and too many resources to make meaningful progress. After all, if you have a team of smart developers, they should be able to handle the build process on their own without any trouble, right? Other teams simply lack the leadership or the buy-in from management to put the proper processes in place.

The following are a few basic steps your team can take to start the process of configuration management. Each step solves specific problems and will be invaluable in bringing more consistency and stability to the development process.

1. Establish a source control branching strategy and stick to it

The first rule of branching is: don't. The goal here is to establish a small number of branches in source control to help you manage the process of merging and releasing code to test and production. Avoid the common situation where multiple, nearly identical branches are created for each deployment environment. Merge operations can be a nightmare in any environment. Keep the number of branches to a minimum, creating new branches only when something needs to be versioned separately. This applies even to distributed source control systems like Git: there may be many disparate development branches, but QA and release branches should be kept to an absolute minimum.

2. Establish basic code merging and release management practices

The goal is to establish this process along with the branching strategy in order to foster a more stable code base whose state is known at all times. There are many resources available with example strategies for environments of varying complexity. Avoid the situation where the features for a given release are spread across multiple branches and integration has to be done at the last minute through a series of merges.

3. Create and document an automated build process

Make sure the entire source tree can be built using a single automated process, and document it. Avoid the situation where only one person knows all the dependencies and environmental requirements for a production build. All too often we have seen cases where reverse engineering and reproducing a build takes far too much time and money because the point person is not available, or because critical steps of the manual process are forgotten or performed incorrectly.
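As a sketch of what "a single automated process" can look like, here is a minimal build driver in Python. The step names and commands are placeholders, not a real project's build; the point is that every step lives in one script that anyone can run, and the build fails fast at the first broken step:

```python
import subprocess
import sys

# Hypothetical build steps; replace with your project's real commands.
BUILD_STEPS = [
    ("clean",   ["rm", "-rf", "build"]),
    ("compile", ["mvn", "package"]),
]

def run_build(steps, runner=subprocess.run):
    """Run each step in order; stop and report at the first failure."""
    for name, cmd in steps:
        result = runner(cmd)
        if result.returncode != 0:
            print(f"Build failed at step: {name}", file=sys.stderr)
            return False
    print("Build succeeded")
    return True
```

Because the script itself is the documentation of the build, a new team member can produce a production build by running one command instead of reverse engineering someone's desktop setup.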

4. Create and configure a build server using off the shelf components

There are many packages available for putting together your own build server and continuous integration environment. TeamCity is one of the more popular of these.

5. Establish continuous integration on the central development branch

Designate a central development branch for continuous integration. The first step is simply to initiate the automated build process every time someone checks in code. Packages like TeamCity make this very easy and include notification systems to provide instant feedback on build results. Avoid the situation where one team member checks in code that breaks the build and leaves other developers to debug the problem before a release.

6. Create and document an automated deployment process

Avoid the inevitable mistakes of a manual deployment process, and the huge waste of time of manually reconfiguring a deployment for a different environment: create automated scripts so that a build can be easily deployed to any environment.
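A minimal sketch of the idea in Python: one build artifact, with per-environment settings kept in data rather than baked into the build. The environment names, settings, and `deploy-tool` command here are invented for illustration:

```python
# Per-environment configuration; these hosts and names are illustrative only.
ENVIRONMENTS = {
    "qa":         {"host": "qa.example.com",  "db": "app_qa"},
    "production": {"host": "www.example.com", "db": "app_prod"},
}

def deploy_command(artifact, env):
    """Build the (hypothetical) command that deploys `artifact` to `env`."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"Unknown environment: {env}")
    cfg = ENVIRONMENTS[env]
    return ["deploy-tool", artifact, "--host", cfg["host"], "--db", cfg["db"]]
```

The same artifact goes to QA and production; only the configuration lookup changes, so "reconfiguring for a different environment" stops being a manual step anyone can get wrong.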

7. Create unit tests and add them to continuous integration

Avoid nasty system integration problems before they happen. Also avoid situations where changes in one area break basic functionality in another by establishing a process and standard for developers to create automated unit tests using a common framework like JUnit or NUnit. These tests should run as part of the continuous integration process; if any of them fail, the entire build fails and the whole team is notified.
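As an illustration, here is the same pattern using Python's built-in unittest framework (the equivalent of a JUnit or NUnit suite); the `apply_discount` function is a made-up stand-in for real application code:

```python
import unittest

def apply_discount(price, percent):
    """Toy application function under test (hypothetical example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # The CI server runs these on every check-in; any failure fails the build.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`; wiring that command into the CI build is what turns a broken change into an immediate team-wide notification instead of a late surprise.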

Finally, remember that software development is a very complicated process. Even with a team staffed with geniuses, people will make mistakes, undocumented branching and build processes will confuse and confound the team, and there will always be turnover in any organization. With even the most basic processes put in place for software configuration management, any organization can avoid the horror of weeks of time spent debugging code integration problems or even worse the situation where no one is quite sure how to reproduce the production version of an application in order to find and fix a critical bug. No one is going to jump into this and reach CMM level 5 in a few weeks, but starting small and establishing meaningful and attainable milestones can get any team a long way towards the goal of a stable, reproducible, and flexible build and deployment environment.



How to Prepare for Test Automation

by Nadine Parmelee, Senior Quality Assurance Engineer

Executed correctly, test automation is a major benefit to an organization, delivering a strong return on investment and faster time to market than the competition. There are other benefits as well, but these are the top two. When launching a test automation initiative, there are a number of factors to consider. Perhaps foremost is how test automation will need to integrate with the current business and technical resources of the organization.

Assessing Resources
Resources must be aligned in order to maintain and continue to build upon the test automation investment made by the organization. Projects mature, morph and change, and test automation is part of that living ecosystem.

An organization needs to look across the product delivery landscape to make sure its entire team is well positioned and has the expertise to move forward with implementing test automation. The “team” in automation consists of everyone from Product Managers (responsible for product delivery) to the Development Team (programmers) to the Test Team (testers) all the way to Customer Support (who have to manage any issues coming in from the field). Once resources are in line with the initiative, one of the next steps is to focus on how success will be measured.

Which Tools to Use
One of the first test automation project decisions to make is what tool(s) to use. There are many automation tools available today, and all have their own specific advantages. Take the time to determine which tool is best for your delivery goal. There is no “one-size-fits-all” when it comes to tools. And not all tools are appropriate for all projects, so it is important to make sure the tool you are using will work in your project environment (or environments). Will your project need to be supported on mobile devices? Some tools work better with some coding languages than others, and the controls the developers are using may make different tools a better option for your environment. Be sure to try the tools you are considering with your application before making the decision about which ones to purchase, and keep in mind that tools do change and mature just like your software projects.

There’s an old saying, “I’m in a hurry, so please slow down.” The point, of course, is to take your time and do the task correctly to avoid mistakes that cause you to start over. Properly executed test automation is much the same thing, and the surest way to deliver a high-quality software product is by realizing these three keys:

  • Making Test Automation a part of the overall Quality Assurance Strategy yields the most efficiency and reduces time to market
  • Test Automation is an investment that brings ROI only when an organization embraces it as part of their living ecosystem
  • Test Automation requires experienced resources that span the internal delivery team



Adapting Enterprise Applications for Mobile Interfaces

by Morgan McCollough, Senior Software Engineer

Since the explosion of smartphone and mobile device technology spurred by the release of the iPhone in 2007 and Android-based devices starting in 2008, mobile devices are making more and more inroads into the enterprise business environment. Many major websites have released mobile versions that automatically adapt to the smaller screen real estate, and others have developed native applications that allow for even greater flexibility and function than is available on a web page. Smartphones and tablets are also being used as mobile point-of-sale devices, having completely replaced cash registers in some places. The future possibilities are nearly endless. However, integrating a new technology into an existing application infrastructure can be a challenge, especially for large enterprises, and in this post, we will discuss a basic strategy for adapting an existing enterprise application for use with a mobile interface.

Many large enterprises have invested in web application technologies, some homegrown and others off-the-shelf. While these applications are technically accessible from any device with a browser, adapting an application to a smaller interface can be a much bigger challenge than a simple website. Most applications have interactive editing features that can be difficult to navigate in a mobile interface. Creating a native application for these devices is arguably a better investment for the long run, especially considering that the enterprise has the luxury of dictating the specific platform to be used.

The environments for developing native mobile applications have evolved significantly, making the task of creating an interface for a mobile application a relatively easy one. The challenge then becomes the integration with back-end services. This is an area where the benefits of advancements in web services and service-oriented architecture (SOA) can really be seen. Enterprise applications that are already built on a service-oriented architecture will likely already have a web service interface to give clients access to data and application services. Also, development tools have advanced to the point that adding basic web services onto an existing server side application can be a relatively straightforward task. The key is to look at all mobile application interfaces in your enterprise simply as new consumers of these data and application services. If you carefully consider the specific use cases of the mobile interface you would like to create, it is likely you will be able to narrow down the required services to a relatively small number. It is therefore possible to roll out a whole new mobile interface to your existing infrastructure within a few short weeks or months, depending mostly on the complexity of the services rather than the mobile interface itself.

Finally, there is the question of how to interact with the web services from the point of view of the mobile client. With traditional SOAP web services this is a relatively easy task on the desktop, thanks to the large number of development tools and libraries that provide most of the client code for you. In a mobile operating system, your options are much more limited, and in the absence of a helping library, writing an interface to a SOAP web service can be time consuming and challenging. It is therefore much simpler and more efficient to use architectures based on existing transport technologies and semantics. This is where REST-based service architecture excels. Web services based on REST use basic HTTP and can pass data using simple existing formats like XML or JSON. There are a number of open-source libraries available to read and write JSON data, and more modern versions of mobile operating systems actually include JSON libraries in their development APIs. With these tools in hand, calling a REST service becomes a simple matter of sending an HTTP request to the right URL and reading the JSON response.
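To show how little client code this takes, here is a sketch using only Python's standard library. The endpoint URL and the shape of the response are invented for illustration, and the parsing is demonstrated against a canned response so the example stands alone:

```python
import json
from urllib.request import urlopen

def fetch_orders(url):
    """GET a (hypothetical) REST endpoint and decode its JSON body."""
    with urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def open_order_ids(payload):
    """Pull the ids of open orders out of a decoded response."""
    return [o["id"] for o in payload["orders"] if o["status"] == "open"]

# A canned response, standing in for what a server might return:
sample = json.loads('{"orders": [{"id": 1, "status": "open"},'
                    ' {"id": 2, "status": "closed"}]}')
```

The entire "client stack" is an HTTP GET plus a JSON parse; there is no WSDL, no generated stubs, and nothing the mobile platform does not already provide.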

On the server side, adding a REST layer over an existing service infrastructure is also relatively straightforward. Many of the existing web service application frameworks already contain features to make the addition of REST services very simple. Also, because the mechanics of calling the services are so simple, the process of testing the services in isolation can be as simple as opening a browser and hitting the right URL.

Obviously, the details can get more complicated than what I have outlined here, but in our business we have been able to prove that using this approach, a couple of developers can create a new mobile interface to an enterprise system in a matter of weeks rather than months.



Starting Your Development Effort Right

by Chris Durand, CTO

I recently had a discussion with a friend who owns a small business and needs a web application to run day-to-day operations. She’s a pretty savvy person but has a limited technology background. She came to us looking mostly for help laying out the requirements and hiring a contract developer. But software can be a complicated business with thousands of little questions and details lurking around every corner, so we wanted to give her some advice to help her avoid the pitfalls. Here were the key takeaways.

Define the Scope

First, be clear about the scope and what you are trying to accomplish. For many projects, a phased approach is best, especially for smaller companies with limited resources. Trying to do everything in a single release costs too much and takes too long, which drives up risk. If you have finite time and resources (and most of us do), every addition to scope is a potential subtraction from quality and from your delivery schedule. You are better off determining what you absolutely need to get out the door in phase 1 and letting it generate revenue while you plan your next release. Be ruthless about cutting features. Often what you leave out is just as important as what you put in.

Screen Mockups

Once you have an idea of what you are building, you should make screen mockups to clarify which screens you need in your application and what is on them. You can include brief notes with the mockups to point out key details or functionality that is not obvious. You should have at least one mockup per screen in your application. Screens that have different modes or layouts may need multiple mockups to fully illustrate what is going on.

I like to start out sketching on paper; then once I have a rough idea of the layouts and content, I use a mockup tool to do more comprehensive mockups. My favorite tool for this is Balsamiq (http://www.balsamiq.com/). It’s cheap, easy to use, and gives great results. The key here is to focus on layout, content, and function, not colors, graphics, and making things look pretty.

User Stories or Workflows

Now that you know what screens you need, it’s time to write up workflows (sometimes called user stories) to explain how key features will work in the application. You know that hand-waving you’ve been doing about how customers will find a product, add it to their cart, and check out? Now is the time to get down to specifics: which page are users on at each step? Which button is the user clicking on each page? Simple workflows can be done in a bulleted list, while more complex workflows may require a flowchart or other diagram. This helps you understand how your screens work together, and the application as a whole.

Requirements

Mockups and workflows fall into the general category of requirements, but here I’m talking about other details that are not easily captured in these earlier efforts. If you want your application to do something specific, you need to write it down. The more detail you can provide, the better you understand your application and the more likely you will get what you expect from your development team. Here you can list out details for each screen like how to handle errors, how many characters each text field should hold, and anything else that you can think of that is important.

Summary

Software applications contain thousands of details. What color should something be? What happens when a user clicks this button? How long should a product name be? (30 characters? 100?) Developers have to make dozens of decisions about these kinds of things every day. If you don’t make these decisions up front, your developer will make them for you, and the results may not always be what you want. Good developers know which details matter and will ask for your input on those without bothering you with trivial issues, but they are not psychic. The more clearly you understand your application and can convey it to the development team, the more likely you are to get what you expect and have a successful development project. Good luck!



Even Agile Needs a Quality-First Focus

by John Brooks, Vice President of Engineering

There is no doubt that in the software development world, Agile Development is a hot topic. Just about every company engaged in software development describes itself as an “Agile house” and I haven’t seen a software-related resume without Agile listed in a very long time. Amazon Books has more than 2,000 books related to the topic.

This often leads to an automatic assumption in our industry that by practicing Agile methodology, the resulting software will be of higher quality than it would be under another development methodology. References to this assumption appear almost everywhere Agile is discussed, and it’s easy to understand why. I believe there are high-value aspects of Agile practice that can benefit almost every project. But what we as an industry sometimes miss when we treat one development practice as a miracle method is a focus on the real end goal of software development—quality—rather than on how we develop it. We should be focused on delivering quality software that truly works as it should.

Too many companies and Agile practitioners take it for granted that high software quality is an automatic side effect of Agile; but just like any other methodology, Agile’s practices must be properly executed, or quality will suffer. Agile is great, but Agile alone does not a perfect project make. There are some ways to bolster your Agile practices for a tighter, more quality-focused development process overall.

Software veterans can tell you that the Waterfall model, with its “iron triangle,” was almost universal back in the 1970s and ’80s, and it is still in practice at many companies today.[i] That iron triangle, consisting of requirements, cost, and schedule, had an unintended negative effect: the critical dimension of quality was not included at all. With requirements, schedule, and cost all fixed, quality became the only variable left to give. And we all know how well that worked out.

Steve McConnell, author of Code Complete and recognized authority on software development, recently addressed this during a speech in Austin that I attended. He has found, as I have, that what does work for most enterprise projects is a hybrid approach, and he’s done the research to prove it. Business leaders must be able to plan, budget, and determine business value at the beginning of a project. A pure Agile practice can make that difficult because the planning elements (of the iron triangle) are too vaguely defined at that point. By adding a bit of Waterfall up front, with Agile for the rest of the project, the business gets what it needs when it needs it, and development can utilize the more efficient Agile methodology. A Waterfall-Agile hybrid.

There still needs to be a focus on quality during all phases of the software development life cycle. The tendency to skimp on QA during a Waterfall project has transferred right over to the practice of Agile on many projects. Bad habits like code-and-throw, a lack of unit testing, long release to QA cycles, and not having testers involved early enough are symptoms of this.

The “extra room” created by the flexibility and speed of Agile should be used to more tightly integrate testing into the practice. Otherwise, the real world pressures of the iron triangle will still surface.

As an industry, we need to include predictability and quality as equals to the classic iron triangle elements of requirements, cost, and schedule. None of these elements should be considered immutably fixed. We can do this without throwing out the best of either Waterfall or Agile. At Bridge360, we specialize in the practice of delivering quality software first, no matter what methodology our clients are using. We help our clients implement lightweight, lean methods of delivering the best quality software possible.


[i] Leffingwell, Dean (2010). Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley.



The ‘Ghost in the Machine’

by Jerry D. Cavin, Senior Software Engineer


Jerry Cavin is also an Adjunct Professor of Computer Science and Astronomy at Park University.

How many times have you coded an equation and the results produced are not what you expected? Then you rearrange the equation and get completely different results! If the computer is calculating correctly, why does it produce incorrect answers? Could it be the ‘Ghost in the Machine?’

Gilbert Ryle’s notion of the ‘Ghost in the Machine’ was introduced in his book, The Concept of Mind, a critique of René Descartes’ discussion of the relationship between the mind and the body (the mind-body dualism). The expression has since been widely used in the context of a computer’s tendency to make unexplainable numerical errors. Are the errors the fault of the human, or the fault of the ‘Ghost in the Machine?’ To gain insight into these mathematical errors made by computers, let us examine how the ‘mind’ of the computer views our concept of numbers.

INTEGER NUMBERS

The ‘mind’ of the computer perceives finite number sets. The size of the set depends on the size of the space where the number is stored. If a number is stored in 8 bits, the set starts at 0 and ends at 255. One of the bits can be reserved to represent the sign of the value (i.e. + or -); in that case, the numbers that exist run from -128 to 127. Numbers can also be stored in a larger space, but that does not change the fact that the ‘mind’ of the computer still perceives a finite set.

So what happens when 1 is added to the maximum value in a finite integer set? It wraps around to the smallest negative value in the set. In our 8-bit example, when the computer adds 1 to 127 it gives the answer -128. This overflow, or wrap-around error, can occur whether you are adding, subtracting, multiplying, or dividing. Integer overflow errors are dangerous because they cannot be detected after they have happened. The ISO C99 standard states that an integer overflow causes ‘Undefined Behavior,’ which allows conforming compilers to implement anything from completely ignoring the overflow to aborting the program. Most compilers ignore the overflow entirely, resulting in erroneous results. Depending upon how the integer variable is used, the overflow can cause a multitude of serious problems: if it occurs in a loop index variable, it may cause an infinite loop; if it occurs in a table index variable, the resulting buffer overflow may corrupt data.

Another strange condition can arise with integers that we do not see in the human world. Under some circumstances an equation may give an answer of +0, and under others an answer of -0. These two values of 0 are equal, but the difference can cause confusion. This was a problem for many older compilers; some detect the condition and simply change the answer to +0. Newer compilers use two’s-complement arithmetic and never encounter this issue.
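The 8-bit wrap-around described above is easy to reproduce. Python’s own integers never overflow, so this sketch simulates signed 8-bit two’s-complement arithmetic by masking the result to 8 bits:

```python
def to_int8(n):
    """Interpret the low 8 bits of n as a signed two's-complement value."""
    n &= 0xFF                      # keep only 8 bits (0..255)
    return n - 256 if n >= 128 else n

# 127 is the largest signed 8-bit value; adding 1 wraps to the smallest.
assert to_int8(127 + 1) == -128
```

In C, the same addition on a signed 8-bit type is the undefined behavior the standard warns about; the simulation simply shows the wrap-around that most compilers silently produce.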

What does your computer do?




Requirements, really?


by Nadine Parmelee, Senior Quality Assurance Engineer

Change is hard. Project teams frequently resist the addition of new processes and procedures, and one process they often fight is the creation and review of requirements. The reaction is understandable—it’s most likely resistance to creating more work in the midst of already tight schedules. On the other hand, some of the consequences of not having requirements, or of having poorly thought-out requirements, are:

  • Developers guessing how the application should behave
  • Secondhand guessing: testers guessing how the developers guessed the application should behave
  • Customers not liking or having difficulty with the product
  • Spending time and money reworking a product to remove ambiguities
  • Inability to map test cases to requirements, which can lead to not really knowing whether the testing is complete

The benefits of creating requirements and making them clear in the first place are numerous. Here are just a few:

  • Understanding the purpose of a new feature
  • Understanding the workflow of a new feature
  • Understanding the options of a new feature
  • Understanding how a new feature integrates with existing functionality

And that’s just the beginning of what ultimately amounts to a little time spent upfront for maximum gain and minimizing stress on a project.

Creating requirements does not have to be an exercise in torture. Even the most basic requirements list is better than no requirements at all. Whenever there is functionality without requirements, developers and testers will make assumptions on how each thinks the functionality should work. Those assumptions may not match the intended behavior. By at least starting a list with the most basic of the requirements, early misunderstandings of what is expected can be avoided.

Additionally, the more people who review the requirements, and the earlier they do so, the sooner questions can be raised about functional behavior, and about whether certain functionality is even possible, potentially saving large amounts of time and money. Reworking requirements adds delay and cost to any project, so the better the requirements are to begin with, the more efficiently the project schedule and budget can be maintained. Reworking code over and over, and retesting it each time, adds still more cost and delay while the project team works toward agreement on the desired functionality. The earlier everyone shares the same understanding of what needs to be done and of the desired outcomes, the more quickly a project can progress, and the more quickly new personnel can get up to speed on what they need to do to help complete it.

When requirements are well organized, succinct, and clear, it is easier for developers and testers alike to meet them, to map specifications and test cases to them, and to improve product quality from the beginning. With better quality early on, the odds of completing a project on time and on budget are greatly improved.