
Managed Migration with Alembic

By Avishek Bhattarai, Senior Software Engineer

Database structure changes can be very stressful, especially in production, and migrating databases without a formalized process is a pain. We all know that keeping track of schema changes is important and goes a long way toward reducing the overall cost of change.

A database migration changes the database structure from one point in time to another, resulting in a new schema version. Preventing human error in this process is critical when maintaining database servers that run different schema versions. Importantly, each schema change should be applied exactly once, and when two or more migration operations run at the same time, we need to make sure they are free of race conditions.

While there are other database migration tools available, we chose Alembic as the automated migration tool for a client application with multiple databases. We use SQLAlchemy as the Object Relational Mapper (ORM) in the application, and Alembic is a lightweight database migration tool designed for use with SQLAlchemy. Alembic is also developed and maintained by Mike Bayer, the author of SQLAlchemy.

Alembic can be useful for:

  • Multiple database support

  • Automatic script generation

  • Avoidance of race conditions

  • Encapsulation of each migration in a single script file

  • Backward compatibility through downgrade scripts

  • Offline mode support

A typical Alembic environment structure looks like this:

project/
    alembic.ini 
    alembic/  
        env.py  
        script.py.mako
        versions/  
            generated_migration_scripts.py

Alembic can be configured by providing the database connection information in the alembic.ini file, or by customizing the database configuration in env.py so that the application and the migration scripts stay in sync. Alembic also supports a multidb configuration that keeps versions for multiple databases in the same environment. In our application, the databases have separate schema definitions, so we created two Alembic environment directories, each holding its own configuration and versioned migration scripts. This keeps the version histories separate and avoids schema conflicts when running migrations. Either way, Alembic is easy to customize and adapt as needed.
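For illustration, a minimal sketch of the env.py approach might look like the following. The get_database_url helper and the myapp modules are hypothetical stand-ins for your own application config and models; a real env.py generated by alembic init also handles offline mode and logging.

# alembic/env.py (excerpt): a minimal sketch, not a full generated file.
from alembic import context
from sqlalchemy import engine_from_config, pool

# Hypothetical application modules: substitute your own config and model metadata.
from myapp.config import get_database_url
from myapp.models import Base

config = context.config

# Keep the migration scripts in sync with the application's database settings.
config.set_main_option("sqlalchemy.url", get_database_url())

# Needed for autogenerate to compare the models in code against the live schema.
target_metadata = Base.metadata

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()

run_migrations_online()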

Our client application is hosted in Amazon’s Elastic Compute Cloud (EC2) with its database in Amazon’s Relational Database Service (RDS). We updated our AWS CloudFormation template to add a new task definition that runs the database migration script as a standalone task. The alembic upgrade commands live in a bash script that the task executes. The task can be run using the ‘Run Task’ operation in the Amazon ECS cluster console, or from the AWS Command Line Interface:

aws ecs run-task --cluster <ECS_cluster_identifier> \
    --task-definition <alembic_task_definition_identifier>
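The bash script itself simply contains the alembic upgrade commands for our environments. As a rough sketch of the same step driven through Alembic’s Python API instead (the .ini file names below are hypothetical placeholders for our two environment configs), it could look like:

# Hypothetical sketch: apply the latest revisions for two Alembic environments
# from Python, equivalent to running "alembic upgrade head" in each directory.
from alembic import command
from alembic.config import Config

for ini_path in ("alembic_main.ini", "alembic_reporting.ini"):
    cfg = Config(ini_path)
    command.upgrade(cfg, "head")  # apply every revision up to head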

The setup process is straightforward and well documented. Using SQLAlchemy and Alembic together, we can automatically generate and then customize migration scripts, because Alembic can detect the differences between the existing database tables and the models defined in code. The autogenerate option does have some limitations, which are covered in the documentation. Each migration is called a revision, and revisions know what order to run in because each one records a down_revision identifying its parent. Revisions range from base to head, where base is the initial or stamped revision and head is the latest revision.
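As a hypothetical illustration of what a versioned migration script in the versions/ directory looks like (the revision identifier and the accounts table here are made up; a script produced by alembic revision --autogenerate would reflect your own models):

# alembic/versions/3b1c9d2a5e7f_add_accounts_table.py (hypothetical example)
from alembic import op
import sqlalchemy as sa

# Revision identifiers used by Alembic to order the migration chain.
revision = "3b1c9d2a5e7f"
down_revision = None  # None marks this as the base revision

def upgrade():
    op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(length=100), nullable=False),
    )

def downgrade():
    # Downgrade support: undo exactly what upgrade() did.
    op.drop_table("accounts")

A new revision is typically created with alembic revision --autogenerate -m "add accounts table", applied with alembic upgrade head, and rolled back one step with alembic downgrade -1.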


Adopting Alembic’s simple workflows has helped us reduce complexity and risk, save implementation time, and increase development throughput. It provides an automated database refactoring technique that gives us more control over the release process for new changes and enables continuous delivery of our product.



The State of Being Secure: A Primer on Security in your Organization

by Karel Gonzalez, Senior Software Engineer

A few weeks ago, I had the opportunity to attend the Lonestar Application Security Conference here in Austin. Security is something I have always been mindful of during my development, but I still felt a sense of futility about it. I ask myself on a fairly regular basis “I’m doing something, but am I doing enough?”



How to Lower Your Defect Rate Using Simple Requirements Techniques

By Chris McIntosh, Senior Software Developer

We have all seen the various studies of software development and the causes of failures to deliver on time and cost overruns. The original Chaos report stated that a mere 16.2% of projects finished on time and on budget. There have also been numerous studies surrounding the cost of defects and how it varies depending on when in the lifecycle they are discovered. The consensus, first reported by Barry Boehm in the 1980s, is that the later in the software process a defect is discovered, the more expensive it becomes. There is some debate as to whether or not this is a hard and fast rule, but suffice it to say, defects are rarely free to fix. Agile has cropped up to try to address some of these issues, and it has certainly helped. A more recent report on software project failures puts the figure at 50% to 70% of projects finishing on time and on budget, with the projects using more agile techniques at the upper end of the spectrum. Agile practices reduce the failure rate, in part, by making the team test the development more frequently and elicit requirements more often. All of this, however, depends on your team’s ability to gather, record, and test requirements efficiently.

Here are some simple techniques that you can slowly introduce to decrease the defect rate due to poor requirements.




Codeception: A Clean and Simple Solution for Web Test Automation

by Troy Rudolph, Senior Software Engineer

The market certainly offers many test automation tools for testing in a variety of environments, but there is a relatively new one I particularly like for automated testing of web applications. While Codeception is intended primarily for testing PHP applications, its UI testing tools can also be used to easily create automated tests for web applications built with other technologies.

In Codeception, these tests are referred to as acceptance tests. These tests are based on the notion of Behavior Driven Development (BDD). Essentially, BDD states that tests should be specified in terms of desired behavior. In the case of BDD, the behavior described is that of a user (or tester). To learn more about BDD, I would encourage reading the inventor’s article at http://dannorth.net/introducing-bdd.

A simple test might look like…



How to Quickly Solve Technical Problems With the “Straw Man” Technique

by Chris Durand, CTO

Are you looking for a new way to solve pressing technical problems? Well, I’ve found one, and I recommend it for anyone looking for a fast way to start a problem-solving process. I call it the “straw man” technique, and it’s pretty straightforward:

  • Think of the simplest possible solution or partial solution for a problem. This is the “straw man.”
  • Discuss with your team reasons why you think the solution will work or not. My favorite question to ask is, “Why won’t this work?”
  • Modify the straw man accordingly and repeat until you have a useful solution.

That’s it! I find myself using the straw man technique often; you can, too. Imagine you and your team are looking for answers on how to solve a challenging problem or implement what appears to be a complex feature. Think of a super simple solution that addresses at least 50-60% of the problem and ask the team why it won’t work. Then iterate from there until you reveal an acceptable solution.



An Introduction to Property-Based Testing

By Paul Bostrom, Senior Software Engineer

Do your software testing teams ever discover bugs that seemingly should have been found by the developers’ unit tests? Quite often, the developer actually did unit test the software, but perhaps simply failed to think of scenarios using the problematic inputs. What if we could tell the computer to “think” of all the values used in our unit tests? This is the approach of property-based testing.

Instead of specifying a limited number of inputs and expected outputs for a unit of software, property-based testing specifies properties that the unit must satisfy, and then relies on the computer to generate the test values.

The original library for property-based testing is called QuickCheck (http://en.wikipedia.org/wiki/QuickCheck), created for the Haskell programming language, but implementations of the library exist for many other popular programming languages. To illustrate the differences between these two testing approaches, we will use a simple example: testing a square root function.
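As a rough illustration of the idea in Python, here is a minimal sketch using the Hypothesis library, one of the QuickCheck-style implementations; the square_root function and the chosen property are assumptions for this sketch, not the article’s actual code.

# A property-based test sketch using the Hypothesis library (pip install hypothesis).
import math
from hypothesis import given, strategies as st

def square_root(x: float) -> float:
    # The function under test.
    return math.sqrt(x)

# Hypothesis generates many non-negative floats instead of a handful of
# hand-picked inputs; the property below must hold for every generated value.
@given(st.floats(min_value=0, max_value=1e12))
def test_squaring_the_root_returns_the_input(x):
    root = square_root(x)
    assert root >= 0
    assert math.isclose(root * root, x, rel_tol=1e-9, abs_tol=1e-12)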



Top 5 Ways Software Quality Assurance Supports the Development Team

by Chris Durand, CTO

Are you thinking Development and Quality Assurance are separate, independent activities? Think again. Development and QA go hand in hand, and the better (and earlier) both teams are engaged in solid quality processes, the stronger your software will be. Here are five ways software quality supports your development team.

1. Software Quality Reduces the Cost of Fixing Bugs

At the risk of beating a dead horse, if you are not familiar with the cost of fixing a bug at various stages in the software lifecycle, this graph is crucial:

[Graph: the relative cost of fixing a defect at each stage of the software lifecycle, rising sharply in later stages]

As you can see, the later in the development cycle a bug is discovered, the more it costs to fix. If you find a problem in the requirements analysis phase, you simply change the requirements document to fix it. If you don’t discover that same issue until you are testing your beta release, you have to change the requirements document and rewrite code and retest. A good QA process will find defects earlier in the development process and reduce the cost of fixing those defects.

2. Software Quality Improves Requirements

Doing requirements well is hard without a strong software quality process in place. In many projects, testing begins too late since testers don’t start writing test cases until they have (hopefully) working code in their hands to break. If this is your process, you are missing out on the benefits of having the QA team engaged early on in the process. Strong QA professionals have an eye for detail and ensure your requirements are clear and testable. If your QA team cannot start writing test cases based on requirements, it is likely you have insufficient detail in your requirements documentation. This means you are leaving it up to a developer to self-determine many details of how your application should work instead of building a customer-driven application. So if your QA team says they cannot start writing test cases until they have a working application to reference, get them involved early and shore up those requirements.

3. Software Quality Improves Predictability of Releases

Predicting time to completion is challenging on poor quality projects with loose processes and parameters. Again, testing early is key. Feature development has a finite list of things to do, but the number of bugs in an application is unknown until you start testing. A strong QA process tests early and often, so at all stages in the development process you have an idea of where you stand. A weak QA process tests too late or not very thoroughly, and you find your dates slipping because you are still finding more and more bugs with no end in sight. High quality software also has fewer customer support issues and therefore your team can stay focused on new features instead of getting randomly pulled off to deal with the latest customer crisis and torpedoing your release schedule.

4. Software Quality Allows You to Refactor with Confidence

Writing software is a lot like gardening. Old plants must be pulled out, new plants added, existing plants trimmed, and dead growth cleared away. Failing to keep your software well-groomed results in buggy software that is difficult to understand and expensive to maintain. You find you can’t easily add a simple feature because it breaks something elsewhere. To avoid this you must constantly keep your software pruned, and restructure parts that no longer make sense. Strong software quality allows you to do this with confidence, since you know how the software was supposed to work before you made changes, and you can verify the software still works after your changes. If you have an automated unit or functional test infrastructure in place, that’s even better.

5. Software Quality Improves Team Morale

No one likes working on projects that have a bad reputation within the company or with customers.  High quality software improves the morale of everyone from the sales team to the support team. The sales team has confidence that the product won’t blow up in their faces during a demo and that they won’t be damaging future sales opportunities by selling the customer a “lemon”. Developers feel more pride in their work and perform better since they want to maintain the good reputation for the project or team. The support team does not dread yet another call from an upset customer due to issues that clearly should have been caught during the development process.

In summary, if you don’t have strong quality practices in place, your development team (and the rest of the company) is missing out! Get your QA team involved early and often in your development process and watch your development costs shrink, schedules become shorter and more predictable, and customer satisfaction soar. Happy testing!