
Analyzing and Selecting the Right Software Automation Tool

By Nadine Parmelee, Senior Quality Assurance Engineer

With all the tool choices available today, selecting an automation tool is a project all on its own. There are a number of variables to consider, because the upfront cost of the tool is not the only consideration: there are potential costs in tool configuration as well as in resource training. You also have to identify the resources you need and how long they will be needed for your project. Taking the time to research the right tool is essential for both tactical and strategic success. Below is a list of elements to take into consideration when researching automation tools:

  • Is this tool recommended for the programming language used for your product?
  • Initial cost per user of the tool – Is it a “per seat” cost or is there a “site” cost available?
  • What are the maintenance costs of the tool?
  • What are the configuration costs to interact with defect and manual test case tools?
  • How many systems can the tool be used on?
  • Are there any training costs?
  • Is customer support provided by the tool vendor?
  • How readily can the tool expand to future platforms and environments?
  • What expectations should the tool fulfill for test coverage?

We have helped a number of clients with their tool analysis and have seen firsthand that there is no single solution that works for everyone. In some cases clients wanted to use tools they had already purchased corporate licenses for, and we suggested a different tool because of application compatibility or the amount of time that would be spent creating workarounds to fit the tool to their situation. Some clients look for “free” open source tools thinking their return on investment will be greater, yet they neglect to account for the configuration costs and the training involved in getting resources up to speed. There may also be an obligation to contribute back to the open source community any changes made to the code that might help others; complying with that while still getting the product tested is another time-consuming effort.

To ensure that your team selects the right tool, make sure they are asking the questions that are most relevant and essential to your requirements. This keeps the effort moving forward without losing time exploring options that will never fit. I recommend creating a BRD (business requirements document) for the tool evaluation effort up front. It doesn’t need to be exhaustive, but collectively focusing stakeholders on requirements is worth the effort to keep the business and technical teams in sync on expectations. Having your automation project needs clearly defined and understood is always a best practice. I also recommend that the selection process include obtaining evaluation copies of the candidate tools and giving your team time to work with them. Some tools might be eliminated from consideration quickly, while others will require more time to evaluate.

In my opinion, one of the best tool evaluation methods is to take one of the most complex tests expected for the automation project and try to create those automated scripts with each evaluation package. Make sure your project team keeps the primary automation goals in focus: if data creation is the goal of the automation project, a tool that doesn’t work with a specific GUI screen or control may still be in the running; if user interaction is the primary concern, a tool that works great with the GUI but not so well with your database structure may still be a viable candidate. Analyze the strengths and weaknesses of the tools against your project needs and pick the one that will be the most efficient for your situation. Your automation project requirements may be adjusted during the tool evaluation process as your team evaluates both together.
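To make this concrete, here is a minimal sketch of the kind of proof-of-concept script such an evaluation might produce, written in Java against Selenium WebDriver (just one example of an open source GUI automation library). The URL, element IDs, and expected page text are hypothetical placeholders; in a real evaluation you would drive the screens from your most complex candidate test case.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            // Hypothetical application under test; swap in the screens from
            // the most complex test case you expect to automate.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/login");
                driver.findElement(By.id("username")).sendKeys("eval-user");
                driver.findElement(By.id("password")).sendKeys("eval-password");
                driver.findElement(By.id("submit")).click();

                // Check that the tool can read results back, not just drive the GUI.
                String heading = driver.findElement(By.tagName("h1")).getText();
                if (!heading.contains("Dashboard")) {
                    throw new AssertionError("Unexpected landing page: " + heading);
                }
                System.out.println("Evaluation script passed.");
            } finally {
                driver.quit();
            }
        }
    }

Exercising each candidate tool against the same scenario makes the strengths and weaknesses much easier to compare side by side.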

Test automation, when well executed, is essentially a strategy that accelerates delivery to market with confidence. Taking the time to plan well and select the right tool or tools is critical to that confidence. Remember, once you take steps toward test automation, you should consider the effort part of your ongoing quality assurance ecosystem: you will want to keep your automation scripts up to date as your product changes over time.



Understanding Build Server and Configuration Management – Part 2

Implementing Configuration Management: How to Start the Process

by Morgan McCollough, Senior Software Engineer

In my last post we discussed the benefits of creating a development environment with continuous integration and configuration management using a centralized build server. We have had a number of clients who hesitated to start building such an environment because they were too busy and thought it would take too much time and too many resources to make meaningful progress. After all, if you have a team of smart developers, they should be able to handle the build process on their own without any trouble, right? Other teams simply lack the leadership or the buy-in from management to put the proper processes in place.

The following are a few basic steps your team can take to start the process of configuration management. Each step solves specific problems and will be invaluable in bringing more consistency and stability to the development process.

1. Establish a source control branching strategy and stick to it

The first rule of branching is don’t. The goal here is to establish a small number of branches in source control to help you manage the process of merging and releasing code to test and production. Avoid the common situation where multiple, nearly identical branches are created for each deployment environment. Merge operations can be a nightmare in any environment. The number of branches should be kept to a minimum, with new branches created only when something needs to be versioned separately. This applies even to distributed source control systems like Git: there may be many disparate development branches, but QA and release branches should be kept to an absolute minimum.

2. Establish basic code merging and release management practices

The goal is to establish this process along with the branching strategy in order to foster a more stable code base whose state is known at all times. There are many resources available that give examples of the kinds of strategies that can be used for environments with different levels of complexity. Avoid the situation where features for a given release sit on multiple different branches and integration has to be done at the last minute through multiple merges.

3. Create and document an automated build process

Make sure the entire source tree can be built using a single automated process, and document it. Avoid the situation where only one person knows all the dependencies and environmental requirements for a production build. We have all too often seen cases where it takes too much time and money to reverse engineer and reproduce a build when the point person is not available, or when critical steps of the manual process are forgotten or performed incorrectly.

4. Create and configure a build server using off the shelf components

There are many packages available for putting together your own build server and continuous integration environment. TeamCity is one of the more popular of these.

5. Establish continuous integration on the central development branch

Designate a central development branch for continuous integration. The first step is simply to initiate the automated build process every time someone checks in code. Packages like TeamCity make this very easy and include notification systems to provide instant feedback on build results. Avoid the situation where one team member checks in code that breaks the build and leaves other developers to debug the problem before a release.

6. Create and document an automated deployment process

Create automated scripts to make sure a build can be easily deployed to any environment. This avoids the inevitable mistakes of a manual deployment process and the huge waste of time spent manually reconfiguring a deployment for a different environment.

7. Create unit tests and add them to continuous integration

Avoid nasty system integration problems before they happen. Also, avoid situations where changes in one area break basic functionality in another by establishing a process and standard for developers to create automated unit tests using a common framework like JUnit or NUnit. These tests should be configured to run as part of the continuous integration process; if any of them fail, the entire build fails and the whole team is notified.
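As a minimal sketch of what one of these tests might look like with JUnit: the class under test (OrderCalculator) is a hypothetical stand-in, included inline so the example compiles; in practice it would live in your application code, and the test would run on every check-in once wired into the CI build.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class OrderCalculatorTest {

        // Hypothetical class under test, included inline so the example compiles.
        static class OrderCalculator {
            private final double taxRate;
            OrderCalculator(double taxRate) { this.taxRate = taxRate; }
            double totalWithTax(double subtotal) { return subtotal * (1 + taxRate); }
        }

        @Test
        public void totalIncludesTax() {
            OrderCalculator calc = new OrderCalculator(0.0825); // 8.25% tax rate
            assertEquals(108.25, calc.totalWithTax(100.00), 0.001);
        }

        @Test
        public void emptyOrderTotalsToZero() {
            OrderCalculator calc = new OrderCalculator(0.0825);
            assertEquals(0.0, calc.totalWithTax(0.0), 0.001);
        }
    }

Each test is small and isolated, so a failure points directly at the change that broke it rather than surfacing weeks later as an integration problem.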

Finally, remember that software development is a very complicated process. Even with a team staffed with geniuses, people will make mistakes, undocumented branching and build processes will confuse and confound the team, and there will always be turnover in any organization. With even the most basic software configuration management processes in place, any organization can avoid the horror of weeks spent debugging code integration problems, or, even worse, the situation where no one is quite sure how to reproduce the production version of an application in order to find and fix a critical bug. No one is going to jump into this and reach CMM level 5 in a few weeks, but starting small and establishing meaningful, attainable milestones can get any team a long way toward the goal of a stable, reproducible, and flexible build and deployment environment.



My View on the 15th International Conference on Human-Computer Interaction

by Jerry Cavin, Senior Software Engineer

Every year, HCI International brings together thousands of people from all over the globe who are interested in the seemingly endless approaches to interaction between computers and people. This past July, that conference came to Las Vegas, and I was fortunate enough to be invited as a presenter for the poster paper I submitted titled “A HCI/AI Tool for Astronomy.”

More on my presentation in a moment, but first let me tell you about some of the other amazing things on display at the conference. The conference is divided into three parts: Vendor Exhibits, Poster Paper Sessions, and the parallel Paper Sessions.

Vendor Exhibits

The vendor exhibition was held in The Mirage Hotel Event Center and featured companies of virtually all sizes demonstrating countless products used in human-computer interface experiments. Among those that caught my eye were Brain Products and Mindo, both displaying portable, wireless EEG monitoring devices. Others, like Smart Eye and EyeTracking Inc., were demonstrating their eye-tracking devices.

These were just some of the companies showing products for monitoring brain activity during interaction studies. A pair of publishers also attracted my attention.

One of the publishers present was Springer Publishing, one of the world’s leading publishers of scientific, technical and medical content. They’re also the publisher of my astronomy book, “The Amateur Astronomer’s Guide to the Deep-Sky Catalogs.” At the Springer booth, I enjoyed visiting with their Editorial Director, Beverly Ford. She knew many of the people who were key sources for my book.

I also spent some time with CRC Press, which specializes in producing technical books for engineering, science and mathematics. Here I met with Cindy Renee Carelli, the Senior Editor of Industrial Engineering and Human Factors and Ergonomics. For anyone interested, Cindy is currently seeking writers for a wide variety of Computer Science fields.

Like many conferences, this one offered access to material from preceding events. But, because this is HCI, it was made available in a very interactive fashion. Highly interactive computer systems allowed attendees to use intricate hand gestures to move backwards and forwards along a timeline exploring past HCI conferences. You could then spend time reviewing pictures and events from each conference before returning to the timeline.

Poster Paper Sessions

Jerry Cavin is also an Adjunct Professor of Computer Science and Astronomy at Park University.

The poster presentations occurred over three days toward the end of the conference. There were over 300 posters covering a vast array of topics. I was very proud to present my poster, “A HCI/AI Tool for Astronomy,” alongside my son, Zac, who is very knowledgeable in astronomy and has accompanied me on many visits to observatories across the country to observe the night sky with professional astronomers. Zac presented and described the pictures and graphs illustrating the patterns of binary stars, exoplanets and variable stars, while I described how the Expert System-based application could analyze the data and identify the patterns.

The conference required that poster paper authors be available for an hour each day to present their topics; however, we found it very easy to spend up to three hours talking with people who stopped by our poster. We spoke with people from many different countries, including Denmark, Sweden, Taiwan, Sri Lanka, Spain, and Germany. We also spoke with biotech researchers from the University of Virginia and engineers from Sandia Labs.

One gentleman showed particular interest in how well Expert System-based applications would perform on large data set analysis. He indicated that he was working on a project for the London Stock Exchange to design an application capable of providing real-time fraud detection. We discussed several different solutions to his problem of real-time pattern detection, and he returned to our area several times to discuss other challenges of working with “big data.”

The subject of manipulating “big data” came up several times in other conversations, as well. It’s certainly an indication that the manipulation of “big data” is becoming more commonplace in many different industries.

The Parallel Paper Sessions

The Parallel Paper Sessions were held every day of the conference in several small conference rooms outside the Event Center. During these sessions, the authors were given 10-15 minutes to present their papers. One of the sessions I found particularly intriguing was “Reconsidering the Notion of User Experience.” The session presented several papers describing how to capture subjective and objective measurements of a user interacting with an application to better document the user experience.

Other sessions I attended discussed the tradeoffs among emotions, effectiveness, expectations, and efficiency in user interface design. Their shared objective was increasing user satisfaction, allowing the designer to create a longer product life cycle.

Finally, some ‘me’ time

When finally afforded some time to escape the conference center halls of The Mirage, I did some window shopping across the street at The Venetian Hotel. Inside The Venetian is a large shopping mall reminiscent of a small Italian village, and includes the store, Bauman Rare Books. Ahh, heaven! The store’s proprietor, Mary Olsson, shared with me a 1927 copy of E. E. Barnard’s “A Photographic Atlas of Selected Regions of the Milky Way.” The book is exceptional in that it contains 51 of Barnard’s original linen-backed silver photographic prints. There are only 700 copies of this book known to exist, and I was thrilled to turn the pages of this one. But, at $13,500, it was also a book that needed to stay in the store on this day.

I also was allowed to examine a 1929 printing of “A Relation Between Distance and Radial Velocity Among Extra-Galactic Nebulae.” This is the publication in which Edwin Hubble proposed that the velocities at which galaxies recede are proportional to their distance from Earth, a principle known today as Hubble’s Law, which describes the expanding universe. Although priced a bit lower at $7,500, it too remained with the store.



The Convenience of ClojureScript for Complex Applications

by Paul Bostrom, Senior Software Engineer

In my last blog post, The Case for Clojure, I discussed the advantages of using the Clojure programming language for writing applications on the Java Virtual Machine (JVM). Clojure’s success on the JVM led the language designers to compile the language to JavaScript as well, resulting in ClojureScript. Practically every device that can connect to the Internet has a web browser containing a JavaScript engine, making the browser an appealing target for developing applications.

Unfortunately, the JavaScript language contains flaws that make it difficult for developers to manage the complexity inherent in large applications, and the widespread adoption of JavaScript presents a barrier to fixing these flaws without breaking existing applications. The JavaScript community has constructed various conventions in an attempt to work around JavaScript’s faults; however, these conventions are ad hoc and not consistently applied. ClojureScript provides developers with an entirely new language and set of semantics that can be executed on any existing JavaScript engine. ClojureScript avoids JavaScript’s shortcomings while providing a number of programming paradigms not currently available in the JavaScript ecosystem.

A Comparison of Clojure and ClojureScript

Clojure and ClojureScript are essentially identical languages, although they differ in the way that they deal with their respective execution environments, the JVM and the JavaScript engine. ClojureScript has the ability to interoperate with all of the interfaces that JavaScript uses, like the browser window, the document object model (DOM), and HTML5 APIs. ClojureScript uses the same immutable data structures as Clojure on the JVM. In a multi-threaded environment like the JVM, immutable data structures provide a nice alternative to the complex locking code used to preserve the integrity of data structures across multiple threads. Almost all JavaScript engines are single threaded, and only one particular section of code can run at any time. However, JavaScript engines are designed to handle asynchronous events that might be generated by a user interaction like a mouse click. It is hard to predict in what order related sections of code will run when developing JavaScript applications since these events can occur at any time. Immutable data structures make the development process easier by explicitly delineating when application data can be modified.

ClojureScript and JavaScript share some common characteristics, so the difference between program structures is not as vast as the difference between Clojure and Java. ClojureScript and JavaScript are both dynamic, functional languages, so many concepts can be expressed with the same degree of conciseness. ClojureScript has an additional advantage over JavaScript, though, since it uses the same macro system provided by Clojure, which allows developers to extend the capabilities of the compiler. Well-designed macros allow repetitive, boilerplate code to be abstracted away, resulting in much more concise source code.

Namespaces

ClojureScript enforces a specific organizational structure for code called namespaces. All libraries outside of the core language must be referenced by their unique namespace (similar to Java packages). The lack of an equivalent structure can be a pain point for JavaScript developers. An interesting scenario involves the use of the popular JavaScript libraries jQuery and Prototype. Each of these libraries defines a global symbol ‘$’. JavaScript code using both of these libraries cannot differentiate between calls to ‘$’ in jQuery or Prototype without writing additional code to manage this symbol conflict. ClojureScript’s namespaces conveniently avoid such conflicts.
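The parenthetical comparison to Java packages is a useful way to picture this. As a rough analogy only (the package and class names below are hypothetical, and this is Java rather than ClojureScript), the same kind of collision between two libraries that each export a type with the same name is resolved by qualifying each reference with its package:

    // File: vendor/alpha/Widget.java (hypothetical library A)
    package vendor.alpha;
    public class Widget {
        public String describe() { return "alpha widget"; }
    }

    // File: vendor/beta/Widget.java (hypothetical library B)
    package vendor.beta;
    public class Widget {
        public String describe() { return "beta widget"; }
    }

    // File: Client.java - fully qualified names keep the two Widgets distinct,
    // much as ClojureScript namespaces keep two libraries' symbols distinct.
    public class Client {
        public static void main(String[] args) {
            vendor.alpha.Widget a = new vendor.alpha.Widget();
            vendor.beta.Widget b = new vendor.beta.Widget();
            System.out.println(a.describe() + " / " + b.describe());
        }
    }

A ClojureScript namespace declaration plays a similar role, giving every library symbol an unambiguous home, whereas plain JavaScript has no built-in equivalent.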

Development Tools

The ClojureScript community has contributed a number of tools to increase productivity during application development. The tool ‘cljsbuild’ leverages the Clojure build tool ‘lein’ to automatically download third-party dependencies and then compile ClojureScript code into JavaScript. It has a convenient feature to automatically recompile when it detects changes to the source code.

One of the most innovative development tools for ClojureScript is called the “browser REPL.” The acronym “REPL” stands for “Read-Evaluate-Print-Loop.” It is a concept that is found in a number of development languages and usually consists of a text prompt that accepts source code to be instantly evaluated. The browser REPL is an extension of this idea and will evaluate ClojureScript code in the context of a running web browser, skipping the usual compile/refresh process. While this is certainly a valuable tool, most major web browsers provide a similar console for evaluating JavaScript code. The ClojureScript community has taken this concept a step further and integrated the browser REPL with a number of popular text editors. This allows editing of currently executing ClojureScript source code directly in the browser from the comforts of a full-featured text editor. There are few other development environments that offer such a dynamic experience.

Language Innovation

The Clojure development community has a reputation for producing innovative new libraries to resolve issues across all types of problem domains. A recent example of this is a new library to do asynchronous programming called core.async. The main architect of the library (as well as the creator of Clojure) describes the library here. To summarize, JavaScript applications have traditionally used callback functions to handle asynchronous events like mouse clicks or responses from AJAX calls. This library has the potential to simplify how ClojureScript applications are structured, providing a nice alternative to callback functions. Due to the compiled nature of ClojureScript, in addition to the macro system’s ability to create abstractions out of portions of code, a library like core.async can provide new syntax without introducing any drastic changes to the language or the runtime environment. In contrast, the standard that JavaScript is based on, called ECMAScript, moves at a much slower pace in order to standardize any new syntax. A lot of the features available today in ClojureScript cannot be introduced in the ECMAScript standard until a large group of stakeholders agree upon them. This is often necessary due to the importance of stability in future JavaScript engines, but it often leaves JavaScript developers wanting for more powerful language features for constructing applications.

As browser applications become more complex, JavaScript developers must expend additional effort to produce correct, maintainable code. ClojureScript provides numerous features for dealing with application complexity and creating clear and concise code. I encourage readers to consider using ClojureScript for your next web-based project.



Adapting Enterprise Applications for Mobile Interfaces

by Morgan McCollough, Senior Software Engineer

Since the explosion of smartphone and mobile device technology spurred by the release of the iPhone in 2007 and Android-based devices starting in 2008, mobile devices have been making more and more inroads into the enterprise business environment. Many major websites have released mobile versions that automatically adapt to the smaller screen real estate, and others have developed native applications that allow for even greater flexibility and function than is available on a web page. Smartphones and tablets are also being used as mobile point-of-sale devices, having completely replaced cash registers in some places. The future possibilities are nearly endless. However, integrating a new technology into an existing application infrastructure can be a challenge, especially for large enterprises, and in this post, we will discuss a basic strategy for adapting an existing enterprise application for use with a mobile interface.

Many large enterprises have invested in web application technologies, some homegrown and others off-the-shelf. While these applications are technically accessible from any device with a browser, adapting an application to a smaller interface can be a much bigger challenge than adapting a simple website. Most applications have interactive editing features that can be difficult to navigate in a mobile interface. Creating a native application for these devices is arguably a better investment for the long run, especially considering that the enterprise has the luxury of dictating the specific platform to be used.

The environments for developing native mobile applications have evolved significantly, making the task of creating an interface for a mobile application a relatively easy one. The challenge then becomes the integration with back-end services. This is an area where the benefits of advancements in web services and service-oriented architecture (SOA) can really be seen. Enterprise applications already built on a service-oriented architecture will likely have a web service interface that gives clients access to data and application services. Also, development tools have advanced to the point that adding basic web services onto an existing server-side application can be a relatively straightforward task. The key is to look at all mobile application interfaces in your enterprise simply as new consumers of these data and application services. If you carefully consider the specific use cases of the mobile interface you would like to create, it is likely you will be able to narrow down the required services to a relatively small number. It is therefore possible to roll out a whole new mobile interface to your existing infrastructure within a few short weeks or months, depending mostly on the complexity of the services rather than the mobile interface itself.

Finally, there is the question of how to interact with the web services from the point of view of the mobile client. With traditional SOAP web services this is a relatively easy task on the desktop, thanks to the large number of development tools and libraries that provide most of the client code for you. In a mobile operating system, your options are much more limited, and in the absence of a helping library, writing an interface to a SOAP web service can be time consuming and challenging. It is therefore a much simpler and more efficient approach to use architectures based on existing transport technologies and semantics, and this is where a REST-based service architecture excels. Web services based on REST use basic HTTP and can pass data using simple existing formats like XML or JSON. In fact, there are a number of open-source libraries available to read and write JSON data, and more modern mobile operating systems have included JSON libraries in their development APIs. With these tools in hand, calling a REST service becomes a simple act of sending an HTTP request to the right URL and making a simple call to read the JSON response.
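As a rough sketch of how small that client code can be, the snippet below uses HttpURLConnection from the standard Java library and the org.json classes bundled with Android. The endpoint URL and the JSON field names are hypothetical placeholders for whatever your enterprise services actually expose.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.json.JSONObject;

    public class OrderStatusClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical REST endpoint exposed by the existing enterprise services.
            URL url = new URL("https://erp.example.com/api/orders/12345");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");

            // Read the response body into a string.
            StringBuilder body = new StringBuilder();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            reader.close();

            // Parse the JSON payload; the field names are illustrative only.
            JSONObject order = new JSONObject(body.toString());
            System.out.println("Order " + order.getString("id")
                    + " status: " + order.getString("status"));
        }
    }

On Android the same request would typically be moved off the main thread, but the mechanics of the call itself stay this simple.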

On the server side, adding a REST layer over an existing service infrastructure is also relatively straightforward. Many of the existing web service application frameworks already contain features that make the addition of REST services very simple. Also, because the mechanics of calling the services are so straightforward, testing them in isolation can be as simple as opening a browser and hitting the right URL.
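In a Java environment, for example, JAX-RS is one framework that can expose an existing service with little more than a few annotations. The following is only a sketch: the resource path and the Order/OrderService types are hypothetical stand-ins for whatever service layer your application already has, and the stubs are included inline so the example compiles.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/orders")
    public class OrderResource {

        // Minimal stand-in for an existing domain object.
        public static class Order {
            public String id;
            public String status;
            public Order(String id, String status) { this.id = id; this.status = status; }
        }

        // Minimal stand-in for the existing service layer.
        static class OrderService {
            Order findOrder(String id) { return new Order(id, "SHIPPED"); }
        }

        private final OrderService orderService = new OrderService();

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Order getOrder(@PathParam("id") String id) {
            // The JAX-RS runtime maps the HTTP request to this method and,
            // with a JSON provider configured, serializes the result to JSON.
            return orderService.findOrder(id);
        }
    }

Once deployed, pointing a browser at /orders/12345 is enough to verify the service in isolation, which is exactly the kind of quick test described above.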

Obviously, the details can get more complicated than what I have outlined here, but in our business we have been able to show that, using this approach, a couple of developers can create a new mobile interface to an enterprise system in a matter of weeks rather than months.



Even Agile Needs a Quality-First Focus

by John Brooks, Vice President of Engineering

There is no doubt that in the software development world, Agile Development is a hot topic. Just about every company engaged in software development describes itself as an “Agile house,” and I haven’t seen a software-related resume without Agile listed in a very long time. Amazon lists more than 2,000 books related to the topic.

This often leads to an automatic assumption in our industry that practicing Agile methodology will, by itself, yield higher-quality software than another development methodology would. References to this assumption appear almost everywhere Agile is discussed, and it’s easy to understand why: I believe there are very high-value aspects of Agile practice that can benefit almost every project. But what we as an industry sometimes miss when we treat one development practice as a miracle method is a focus on the real end goal of software development, quality, rather than on how we develop it. We should be focused on delivering quality software that truly works as it should.

Too many companies and Agile practitioners take it for granted that high software quality is an automatic side effect of Agile; but just like any other methodology, Agile’s practices must be properly executed, or quality will suffer. Agile is great, but Agile alone does not a perfect project make. There are some ways to bolster your Agile practices for a tighter, more quality-focused development process overall.

Software veterans can tell you the Waterfall model with its “iron triangle” was almost universal back in the 1970s and ’80s, and is still in practice at many companies today.[i] That iron triangle, consisting of requirements, cost, and schedule, created an unintended negative effect: the critical dimension of quality was not even included! With requirements, schedule, and cost all fixed, the only variable was quality. And we all know how well that worked out.

Steve McConnell, author of Code Complete and recognized authority on software development, recently addressed this during a speech in Austin that I attended. He has found, as I have, that what does work for most enterprise projects is a hybrid approach, and he’s done the research to prove it. Business leaders must be able to plan, budget, and determine business value at the beginning of a project. A pure Agile practice can make that difficult because the planning elements (of the iron triangle) are too vaguely defined at that point. By adding a bit of Waterfall up front, with Agile for the rest of the project, the business gets what it needs when it needs it, and development can utilize the more efficient Agile methodology. A Waterfall-Agile hybrid.

There still needs to be a focus on quality during all phases of the software development life cycle. The tendency to skimp on QA during a Waterfall project has transferred right over to the practice of Agile on many projects. Bad habits like code-and-throw, a lack of unit testing, long release-to-QA cycles, and not involving testers early enough are symptoms of this.

The “extra room” created by the flexibility and speed of Agile should be used to more tightly integrate testing into the practice. Otherwise, the real world pressures of the iron triangle will still surface.

As an industry, we need to include predictability and quality as equals to the classic iron triangle elements of requirements, cost, and schedule. None of these elements should be considered immutably fixed. We can do this without throwing out the best of either Waterfall or Agile. At Bridge360, we specialize in the practice of delivering quality software first, no matter what methodology our clients are using. We help our clients implement lightweight, lean methods of delivering the best quality software possible.


[i] Leffingwell, Dean (2010). Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise.



Requirements, really?


by Nadine Parmelee
Senior Quality Assurance Engineer

Change is hard. Project teams frequently resist the addition of new processes and procedures, and one process frequently fought is the creation and review of requirements. The reaction is understandable: it’s most likely resistance to creating more work in the midst of already tight schedules. On the other hand, some of the consequences of not having requirements, or of having poorly thought-out requirements, are:

  • Developers guessing how the application should behave
  • Secondhand guessing: testers guessing how the developers guessed the application should behave
  • Customers not liking or having difficulty with the product
  • Spending time and money reworking a product to remove ambiguities
  • Inability to map test cases to requirements, which can lead to not really knowing whether the testing is complete

The benefits of creating requirements and making them clear in the first place are numerous. Here are just a few:

  • Understanding the purpose of a new feature
  • Understanding the workflow of a new feature
  • Understanding the options of a new feature
  • Understanding how a new feature integrates with existing functionality

And that’s just the beginning of what ultimately amounts to a little time spent upfront for maximum gain and minimizing stress on a project.

Creating requirements does not have to be an exercise in torture. Even the most basic requirements list is better than no requirements at all. Whenever there is functionality without requirements, developers and testers will each make their own assumptions about how the functionality should work, and those assumptions may not match the intended behavior. By at least starting a list with the most basic of the requirements, early misunderstandings of what is expected can be avoided.

Additionally, the more people who review the requirements as early as possible, the sooner questions can be raised about functional behavior, and about whether certain functionality is even possible, potentially saving large amounts of time and money. Reworking requirements adds delays and costs to any project, so the better the requirements are to begin with, the more efficiently the project schedule and budget can be maintained. Reworking code over and over, and then retesting it, adds even more cost and delay until the project team can agree on the desired functionality. The earlier everyone has the same understanding of what needs to be done and the desired outcomes, the more quickly a project can progress and the more quickly new personnel can get up to speed on what they need to do to help complete the project.

When requirements are organized well and are succinct and clear, it is easier for both developers and testers to meet those requirements, map specifications and test cases to them, and improve product quality from the beginning. With better quality early on, the odds of getting a project completed on time and on budget are greatly improved.