
Managed Migration with Alembic

By Avishek Bhattarai, Senior Software Engineer

Database structure changes can be stressful, especially in production. Migrating databases without a formalized solution is a pain, and we all know that keeping track of schema changes is important and goes a long way toward reducing the overall cost of change.

Database migration primarily involves moving the database structure from one point in time to another, resulting in a new version. Preventing errors introduced by human factors is critical when maintaining database servers that run different schema versions. Importantly, each schema change should be applied exactly once, and when two or more operations run at the same time, we need to make sure the changes are free of race conditions.

While other database migration tools are available, we preferred Alembic as the automated migration tool for a client application with multiple databases. We use SQLAlchemy as the Object Relational Mapper (ORM) in the application, and Alembic is a lightweight database migration tool designed for use with SQLAlchemy. Alembic is also developed and maintained by Mike Bayer, the author of SQLAlchemy.

Alembic can be useful for:

  • Multiple database support

  • Automatic scripts generation

  • Avoidance of race conditions

  • Scripts encapsulation in a single file

  • Backward or downgrade compatibility

  • Offline mode support

A typical Alembic environment structure looks like this:

project/
    alembic.ini 
    alembic/  
        env.py  
        script.py.mako
        versions/  
            generated_migration_scripts.py

Alembic can be configured by providing the database driver information in the alembic.ini file, or by customizing the database config in env.py so that the configuration stays in sync between the application and the migration scripts. Alembic also supports a multidb configuration that keeps the versions for multiple databases in the same environment. In our application, the databases have separate schema definitions, so we created two different Alembic environment directories, each holding its own configuration and versioned migration scripts. This helped us maintain the versions separately and avoid probable schema conflicts while running them. One way or the other, Alembic is highly customizable and easy to adapt as needed.
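To illustrate keeping the application and migrations in sync, env.py can pull the connection URL from the application’s own settings instead of hard-coding it in alembic.ini. The sketch below assumes a hypothetical myapp.settings module; it is a simplified outline, not our exact configuration:

# alembic/env.py (excerpt) -- a minimal sketch; myapp.settings is hypothetical
from alembic import context
from sqlalchemy import engine_from_config, pool

from myapp.settings import DATABASE_URL  # hypothetical application setting

config = context.config
# Override the URL from alembic.ini so the app and migrations always agree.
config.set_main_option("sqlalchemy.url", DATABASE_URL)

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        # target_metadata would point at the models' MetaData for autogenerate.
        context.configure(connection=connection, target_metadata=None)
        with context.begin_transaction():
            context.run_migrations()

run_migrations_online()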

Our client application is hosted in Amazon’s Elastic Compute Cloud (EC2), with the database in Amazon’s Relational Database Service (RDS). We configured our AWS CloudFormation script, adding a new task definition to run the database migration script as a standalone task. We included the alembic upgrade commands in a bash script, which is triggered by the defined AWS task. The task can be run using the ‘Run Task’ operation in the Amazon ECS cluster console or via the AWS Command Line Interface:

aws ecs run-task --cluster <ECSCluster_identifier> \
    --task-definition <alembic_taskdefinition_identifier>
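For illustration, the same step can also be driven from Python through Alembic’s command API rather than shelling out to the CLI. This is a minimal sketch assuming one ini file per database environment; the file names are hypothetical placeholders:

# run_migrations.py -- a sketch of the migration task's entry point;
# the ini file paths are hypothetical placeholders.
from alembic import command
from alembic.config import Config

# One Alembic environment per database, each with its own config and versions/.
for ini_path in ("alembic_main.ini", "alembic_reporting.ini"):
    cfg = Config(ini_path)
    # Apply every revision from the current database state up to head.
    command.upgrade(cfg, "head")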

The setup process is straightforward and well documented. Using SQLAlchemy and Alembic together, we can automatically generate and customize the migration scripts, and we can identify the differences between the existing database tables and the models defined in code. The autogenerate option has some limitations, which are covered in the documentation. Each migration is called a revision; revisions know the order in which to run because each one records a down_revision identifying its parent. The revisions range from base to head, base being the initial or stamped revision and head being the latest revision.
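To make the revision chain concrete, a generated migration script has roughly the following shape. The revision identifiers, table, and column here are illustrative inventions, not from our application:

# versions/3ab1f9d2c4e5_add_user_email.py -- an illustrative revision script;
# the identifiers, table, and column are hypothetical.
from alembic import op
import sqlalchemy as sa

# Alembic orders revisions by following down_revision links from head to base.
revision = "3ab1f9d2c4e5"
down_revision = "1c8f0a7b6d21"  # the parent revision; None marks the base

def upgrade():
    op.add_column("users", sa.Column("email", sa.String(255), nullable=True))

def downgrade():
    # Downgrade support reverses the change for backward compatibility.
    op.drop_column("users", "email")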


Adopting Alembic’s simple workflows has helped us reduce complexity and risk, save implementation time, and increase development throughput. It provides an automated database refactoring technique that gives us more control over the release process for new changes and enables continuous delivery of our product.



The State of Being Secure: A Primer on Security in your Organization

by Karel Gonzalez, Senior Software Engineer

A few weeks ago, I had the opportunity to attend the Lonestar Application Security Conference here in Austin. Security is something I have always been mindful of during my development, but I still felt a sense of futility about it. I ask myself on a fairly regular basis, “I’m doing something, but am I doing enough?”



How to Lower Your Defect Rate Using Simple Requirements Techniques

By Chris McIntosh, Senior Software Developer

We have all seen the various studies of software development and the causes of failures to deliver on time and cost overruns. The original Chaos report stated that a mere 16.2% of projects finished on time and on budget. There have also been numerous studies of the cost of defects and how it varies depending on when in the lifecycle they are discovered. The consensus, first reported by Barry Boehm in the 1980s, is that the later in the software process a defect is discovered, the more expensive it becomes. There is some debate as to whether this is a hard and fast rule, but suffice it to say, defects are rarely free to fix. Agile has cropped up to try to address some of these issues, and it has certainly helped. A more recent report on software project failures puts the figure at 50–70% of projects finishing on time and on budget, with the projects using more agile techniques at the upper end of the spectrum. Agile practices succeed in reducing the failure rate in part by making the team test the development more frequently and elicit requirements more often. But this is wholly dependent on your team’s ability to gather, record, and test requirements efficiently.

Here are some simple techniques that you can slowly introduce to decrease the defect rate due to poor requirements.




Codeception: A Clean and Simple Solution for Web Test Automation

by Troy Rudolph, Senior Software Engineer

The market certainly offers many test automation tools for a variety of environments, but there is a relatively new one I particularly like for automated testing of web applications. While Codeception is intended primarily for testing PHP applications, its UI testing tools can be used to easily create automated tests for any web application.

In Codeception, these tests are referred to as acceptance tests. They are based on the notion of Behavior Driven Development (BDD), which essentially states that tests should be specified in terms of desired behavior; in this case, the behavior described is that of a user (or tester). To learn more about BDD, I encourage reading the inventor’s article at http://dannorth.net/introducing-bdd.

A simple test might look like…



How to Quickly Solve Technical Problems With “Straw Man” Technique

by Chris Durand, CTO

Are you looking for a new way to solve pressing technical problems? Well, I’ve found one, and I recommend it for anyone looking for a fast way to start a problem-solving process. I call it the “straw man” technique, and it’s pretty straightforward:

  • Think of the simplest possible solution or partial solution for a problem. This is the “straw man.”
  • Discuss with your team reasons why you think the solution will work or not. My favorite question to ask is, “Why won’t this work?”
  • Modify the straw man accordingly and repeat until you have a useful solution.

That’s it! I find myself using the straw man technique often; you can, too. Imagine you and your team are looking for answers on how to solve a challenging problem or implement what appears to be a complex feature. Think of a super simple solution that addresses at least 50-60% of the problem and ask the team why it won’t work. Then iterate from there until you reveal an acceptable solution.



An Introduction to Property-Based Testing

By Paul Bostrom, Senior Software Engineer

Do your software testing teams ever discover bugs that seemingly should have been found by the developers’ unit tests? Quite often, the developer actually did unit test the software, but perhaps simply failed to think of scenarios using the problematic inputs. What if we could tell the computer to “think” of all the values used in our unit tests? This is the approach of property-based testing.

Instead of specifying a limited number of inputs and outputs for testing a unit of software, property-based testing specifies properties that must hold for that unit of software, and then relies on the computer to generate the test values.

The authoritative library for property-based testing is called QuickCheck (http://en.wikipedia.org/wiki/QuickCheck), created for the Haskell programming language, but implementations of the library exist for many other popular programming languages. To illustrate the differences between these two testing approaches, we will use a simple example — testing a square root function.
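As a taste of what this looks like in practice, here is a minimal sketch using Hypothesis, a QuickCheck-inspired library for Python. The strategy bounds and tolerances are my own illustrative choices:

# test_sqrt.py -- a property-based test of math.sqrt using Hypothesis,
# a QuickCheck-style library for Python. Run with pytest.
import math

from hypothesis import given
from hypothesis import strategies as st

# Let the library generate non-negative floats instead of hand-picking inputs.
# The upper bound avoids overflow to infinity when squaring the result.
@given(st.floats(min_value=0.0, max_value=1e300,
                 allow_nan=False, allow_infinity=False))
def test_sqrt_squared_roundtrips(x):
    # Property: squaring the square root gives back the original value,
    # within floating-point tolerance.
    assert math.isclose(math.sqrt(x) ** 2, x, rel_tol=1e-9, abs_tol=1e-12)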



Top 5 Ways Software Quality Assurance Supports the Development Team

by Chris Durand, CTO

Are you thinking Development and Quality Assurance are separate, independent activities? Think again. Development and QA go hand in hand, and the better (and earlier) both teams are engaged in solid quality processes, the stronger your software will be. Here are five ways software quality supports your development team.

1. Software Quality Reduces the Cost of Fixing Bugs

At the risk of beating a dead horse, if you are not familiar with the cost of fixing a bug at various stages in the software lifecycle, this graph is crucial:

[Graph: the cost of fixing a defect rises the later it is discovered in the development lifecycle]

As you can see, the later in the development cycle a bug is discovered, the more it costs to fix. If you find a problem in the requirements analysis phase, you simply change the requirements document to fix it. If you don’t discover that same issue until you are testing your beta release, you have to change the requirements document, rewrite code, and retest. A good QA process finds defects earlier in the development process and reduces the cost of fixing them.

2. Software Quality Improves Requirements

Doing requirements well is hard without a strong software quality process in place. In many projects, testing begins too late since testers don’t start writing test cases until they have (hopefully) working code in their hands to break. If this is your process, you are missing out on the benefits of having the QA team engaged early on in the process. Strong QA professionals have an eye for detail and ensure your requirements are clear and testable. If your QA team cannot start writing test cases based on requirements, it is likely you have insufficient detail in your requirements documentation. This means you are leaving it up to a developer to self-determine many details of how your application should work instead of building a customer-driven application. So if your QA team says they cannot start writing test cases until they have a working application to reference, get them involved early and shore up those requirements.

3. Software Quality Improves Predictability of Releases

Predicting time to completion is challenging on poor-quality projects with loose processes and parameters. Again, testing early is key. Feature development has a finite list of things to do, but the number of bugs in an application is unknown until you start testing. A strong QA process tests early and often, so at every stage of development you have an idea of where you stand. A weak QA process tests too late or not very thoroughly, and you find your dates slipping because you keep finding more bugs with no end in sight. High-quality software also has fewer customer support issues, so your team can stay focused on new features instead of being pulled off at random to deal with the latest customer crisis, torpedoing your release schedule.

4. Software Quality Allows You to Refactor with Confidence

Writing software is a lot like gardening. Old plants must be removed, new plants added, existing plants trimmed, and dead growth cleared. Failing to keep your software well groomed results in buggy software that is difficult to understand and expensive to maintain. You find you can’t easily add a simple feature because it breaks something elsewhere. To avoid this you must constantly keep your software pruned and restructure parts that no longer make sense. Strong software quality allows you to do this with confidence, since you know how the software was supposed to work before you made changes and can verify that it still works afterward. If you have an automated unit or functional test infrastructure in place, that’s even better.

5. Software Quality Improves Team Morale

No one likes working on projects that have a bad reputation within the company or with customers. High quality software improves the morale of everyone from the sales team to the support team. The sales team has confidence that the product won’t blow up in their faces during a demo and that they won’t be damaging future sales opportunities by selling the customer a “lemon”. Developers feel more pride in their work and perform better since they want to maintain the good reputation for the project or team. The support team does not dread yet another call from an upset customer due to issues that clearly should have been caught during the development process.

In summary, if you don’t have strong quality practices in place, your development team (and the rest of the company) is missing out! Get your QA team involved early and often in your development process and watch your development costs shrink, schedules become shorter and more predictable, and customer satisfaction soar. Happy testing!



Understanding Build Server and Configuration Management – Part 2

Implementing Configuration Management: How to Start the Process

by Morgan McCollough, Senior Software Engineer

In my last post we discussed the benefits of creating a development environment with continuous integration and configuration management using a centralized build server. We have had a number of clients who hesitated to start building such an environment because they were too busy and thought it would take too much time and too many resources to make meaningful progress. After all, if you have a team of smart developers, they should be able to handle the build process on their own without any trouble, right? Other teams simply lack the leadership or the buy-in from management to put the proper processes in place.

The following are a few basic steps your team can take to start the process of configuration management. Each step solves specific problems and will be invaluable in bringing more consistency and stability to the development process.

1. Establish a source control branching strategy and stick to it

The first rule of branching is don’t. The goal here is to establish a small number of branches in source control to help you manage the process of merging and releasing code to test and production. Avoid the common situation where multiple and nearly identical branches are created for each deployment environment. Merge operations can be a nightmare in any environment. The number of branches should be kept to a minimum with new branches created only when something needs to be versioned separately. This even applies with distributed source control systems like Git. There may be many disparate development branches but QA and release branches should be kept to an absolute minimum.

2. Establish basic code merging and release management practices

The goal is to establish this process along with the branching strategy in order to foster a more stable code base whose state is known at all times. Many resources give examples of strategies suited to environments with different levels of complexity. Avoid the situation where features for a given release live on multiple branches and integration has to be done at the last minute through multiple merges.

3. Create and document an automated build process

Make sure the entire source tree can be built using a single automated process, and document it. Avoid the situation where only one person knows all the dependencies and environmental requirements for a production build. All too often we have seen it take far too much time and money to reverse engineer and reproduce a build when the point person is not available, or when critical steps of a manual process are forgotten or performed incorrectly.

4. Create and configure a build server using off the shelf components

There are many packages available for putting together your own build server and continuous integration environment. TeamCity is one of the more popular of these.

5. Establish continuous integration on the central development branch

Designate a central development branch for continuous integration. The first step is simply to initiate the automated build process every time someone checks in code. Packages like TeamCity make this very easy and include notification systems to provide instant feedback on build results. Avoid the situation where one team member checks in code that breaks the build and leaves other developers to debug the problem before a release.

6. Create and document an automated deployment process

Avoid the inevitable mistakes of a manual deployment process or the huge waste of time to manually reconfigure a deployment for a different environment: Create automated scripts to make sure a build can be easily deployed to any environment.

7. Create unit tests and add them to continuous integration

Avoid nasty system integration problems before they happen. Also, avoid situations where changes in one area break basic functionality in another by establishing a process and standard for developers to create automated unit tests using a common framework like JUnit or NUnit. These tests should run as part of the continuous integration process; if any of them fail, the entire build fails and the whole team is notified.
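The post names JUnit and NUnit; purely to illustrate the same xUnit-style idea in compact form, here is a sketch in Python’s built-in unittest framework. The function under test is a hypothetical example:

# test_discounts.py -- an illustrative xUnit-style test; apply_discount is
# a hypothetical function invented for this sketch.
import unittest

def apply_discount(price, percent):
    # Return price reduced by percent, rejecting out-of-range percentages.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # Under continuous integration, a failure here should fail the whole build.
    unittest.main()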

Finally, remember that software development is a complicated process. Even with a team staffed with geniuses, people will make mistakes, undocumented branching and build processes will confuse and confound the team, and there will always be turnover in any organization. With even the most basic software configuration management processes in place, any organization can avoid the horror of weeks spent debugging code integration problems or, even worse, the situation where no one is quite sure how to reproduce the production version of an application in order to find and fix a critical bug. No one is going to jump into this and reach CMM level 5 in a few weeks, but starting small and establishing meaningful, attainable milestones can take any team a long way toward a stable, reproducible, and flexible build and deployment environment.



How to Prepare for Test Automation

by Nadine Parmelee, Senior Quality Assurance Engineer

Executed correctly, test automation is a major benefit to an organization, delivering a strong return on investment and faster time to market than the competition. There are other benefits as well, but these are the top two. When launching a test automation initiative, there are a number of factors to consider. Perhaps foremost is how test automation will need to integrate with the current business and technical resources of the organization.

Assessing Resources
Resources must be aligned in order to maintain and continue to build upon the test automation investment made by the organization. Projects mature, morph and change, and test automation is part of that living ecosystem.

An organization needs to look across the product delivery landscape to be confident the entire team is positioned well and has the expertise to move forward with implementing test automation. The “team” in automation consists of everyone from Product Managers (responsible for the product delivery) to the Development Team (programmers) to the Test Team (testers) all the way to Customer Support (who have to manage any issues coming in from the field). Once the resources are in line with the initiative, one of the next steps is to focus on how success will be measured.

Which Tools to Use
One of the first test automation project decisions to make is what tool(s) to use. There are many automation tools available today, and all have their own specific advantages. Take the time to determine which tool is best for your delivery goal. There is no “one-size-fits-all” when it comes to tools. And not all tools are appropriate for all projects, so it is important to make sure the tool you are using will work in your project environment (or environments). Will your project need to be supported on mobile devices? Some tools work better with some coding languages than others, and the controls the developers are using may make different tools a better option for your environment. Be sure to try the tools you are considering with your application before making the decision about which ones to purchase, and keep in mind that tools do change and mature just like your software projects.

There’s an old saying, “I’m in a hurry, so please slow down.” The point, of course, is to take your time and do the task correctly to avoid mistakes that cause you to start over. Properly executed test automation is much the same thing, and the surest way to deliver a high-quality software product is by realizing these three keys:

  • Making Test Automation a part of the overall Quality Assurance Strategy yields the most efficiency and reduces time to market
  • Test Automation is an investment that brings ROI only when an organization embraces it as part of their living ecosystem
  • Test Automation requires experienced resources that span the internal delivery team



Adapting Enterprise Applications for Mobile Interfaces

by Morgan McCollough, Senior Software Engineer

Since the explosion of smartphone and mobile device technology spurred by the release of the iPhone in 2007 and Android-based devices starting in 2008, mobile devices have made more and more inroads into the enterprise business environment. Many major websites have released mobile versions that automatically adapt to the smaller screen real estate, and others have developed native applications that allow for even greater flexibility and function than is available on a web page. Smartphones and tablets are also being used as mobile point-of-sale devices, having completely replaced cash registers in some places. The future possibilities are nearly endless. However, integrating a new technology into an existing application infrastructure can be a challenge, especially for large enterprises. In this post, we will discuss a basic strategy for adapting an existing enterprise application for use with a mobile interface.

Many large enterprises have invested in web application technologies, some homegrown and others off the shelf. While these applications are technically accessible from any device with a browser, adapting an application to a smaller interface can be a much bigger challenge than adapting a simple website. Most applications have interactive editing features that can be difficult to navigate in a mobile interface. Creating a native application for these devices is arguably a better investment for the long run, especially considering that the enterprise has the luxury of dictating the specific platform to be used.

The environments for developing native mobile applications have evolved significantly, making the task of creating an interface for a mobile application a relatively easy one. The challenge then becomes the integration with back-end services. This is an area where the benefits of advancements in web services and service-oriented architecture (SOA) can really be seen. Enterprise applications that are already built on a service-oriented architecture will likely already have a web service interface to give clients access to data and application services. Also, development tools have advanced to the point that adding basic web services onto an existing server side application can be a relatively straightforward task. The key is to look at all mobile application interfaces in your enterprise simply as new consumers of these data and application services. If you carefully consider the specific use cases of the mobile interface you would like to create, it is likely you will be able to narrow down the required services to a relatively small number. It is therefore possible to roll out a whole new mobile interface to your existing infrastructure within a few short weeks or months, depending mostly on the complexity of the services rather than the mobile interface itself.

Finally, there is the question of how to interact with the web services from the point of view of the mobile client. With traditional SOAP web services this is a relatively easy task on the desktop, thanks to the large number of development tools and libraries that provide most of the client code for you. In a mobile operating system, your options are much more limited. In the absence of a helping library, writing an interface to a SOAP web service can be time consuming and challenging. It is therefore much simpler and more efficient to use architectures based on existing transport technologies and semantics. This is where REST-based service architecture excels. Web services based on REST use basic HTTP and can pass data using simple existing formats like XML or JSON. In fact, a number of open-source libraries are available to read and write JSON data, and more modern versions of mobile operating systems have included JSON libraries in their development APIs. With these tools in hand, calling a REST service becomes a simple matter of sending an HTTP request to the right URL and reading the JSON response.
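To make that concrete, here is a minimal sketch of the pattern using only Python’s standard library. The endpoint URL and response fields are hypothetical; a mobile client would do the equivalent through its platform’s HTTP and JSON APIs:

# rest_client.py -- a sketch of calling a REST service and reading JSON.
# The URL and response fields are hypothetical placeholders.
import json
from urllib.request import urlopen

# Calling the service is just an HTTP GET to the right URL...
with urlopen("https://example.com/api/orders/42") as response:
    payload = json.load(response)

# ...and reading the JSON response is a single call.
print(payload["status"], payload["total"])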

On the server side, adding a REST layer over an existing service infrastructure is also relatively straightforward. Many of the existing web service application frameworks already contain features to make the addition of REST services very simple. And because the mechanics of calling the services are so simple, testing them in isolation can be as easy as opening a browser and hitting the right URL.

Obviously, the details can get more complicated than what I have outlined here, but in our business we have been able to prove that using this approach, a couple of developers can create a new mobile interface to an enterprise system in a matter of weeks rather than months.