Managed Migration with Alembic

By Avishek Bhattarai, Senior Software Engineer

Database structure changes can be stressful, especially in production, and migrating databases without a formalized process is painful. We all know that keeping track of schema changes is important, and doing so consistently reduces the overall cost of each change.

Database migration is the process of moving a database structure from one version to another at a given point in time. Preventing human error in that process is critical when maintaining database servers that run different schema versions. Importantly, each schema change should be applied exactly once, and when two or more operations run at the same time, we need to make sure the changes are free of race conditions.

While other database migration tools are available, we chose Alembic as the automated migration tool for a client application with multiple databases. The application uses SQLAlchemy as its Object Relational Mapper (ORM), and Alembic is a lightweight database migration tool designed for use with SQLAlchemy. Alembic is also developed and maintained by Mike Bayer, the author of SQLAlchemy.

Alembic can be useful for:

  • Multiple database support

  • Automatic script generation

  • Avoidance of race conditions

  • Encapsulation of each migration script in a single file

  • Backward or downgrade compatibility

  • Offline mode support

A typical Alembic environment structure looks like this:

project/
    alembic.ini 
    alembic/  
        env.py  
        script.py.mako
        versions/  
            generated_migration_scripts.py

Alembic can be configured by providing the database driver information in the alembic.ini file, or by customizing the database settings in env.py so that the configuration stays in sync between the application and the migration scripts. Alembic also supports a multidb configuration, which keeps the versions for multiple databases in the same environment. In our application, the databases have separate schema definitions, so we created two Alembic environment directories, each holding its own configuration and versioned migration scripts. This let us maintain the versions separately and avoid schema conflicts when running them. Either way, Alembic is easily customized and adapted as needed.
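
For example, env.py can pull the connection URL straight from the application’s own settings. A minimal sketch, assuming a hypothetical myapp.config module that exposes DATABASE_URL:

# fragment of env.py
from alembic import context
from myapp.config import DATABASE_URL  # hypothetical application setting

config = context.config
# Override sqlalchemy.url from alembic.ini so migrations always run
# against the same database the application uses.
config.set_main_option("sqlalchemy.url", DATABASE_URL)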

Our client application is hosted in Amazon’s Elastic Compute Cloud (EC2) with the database in Amazon’s Relational Database Service (RDS). We configured our AWS CloudFormation script to add a new task definition that runs the database migration as a standalone task. The task executes a bash script containing the alembic upgrade commands, and it can be launched with the ‘Run Task’ operation in the Amazon ECS cluster console or with the AWS Command Line Interface:

aws ecs run-task --cluster <ECSCluster_identifier> \
    --task-definition <alembic_taskdefinition_identifier>
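
The bash script that the task runs can be as small as a couple of upgrade commands. A minimal sketch, assuming one alembic.ini per environment directory (the paths are placeholders):

#!/bin/bash
# Apply all pending migrations; abort on the first failure.
set -e

alembic -c /app/alembic_db1/alembic.ini upgrade head
alembic -c /app/alembic_db2/alembic.ini upgrade head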

The setup process is straightforward and well documented. Using SQLAlchemy and Alembic together, we can automatically generate and then customize migration scripts, because Alembic can identify the differences between the existing database tables and the models defined in code. The autogenerate option has some limitations, which are noted in the documentation. Each migration is called a revision, and Alembic knows what order to run revisions in because each one records a down_revision identifying its parent. Revisions range from base to head, base being the initial (or stamped) revision and head the latest.
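
For reference, a revision file generated by ‘alembic revision --autogenerate’ looks roughly like this (the identifiers, table, and column are illustrative):

"""add last_login to users

Revision ID: a1b2c3d4e5f6
Revises: f6e5d4c3b2a1
"""
from alembic import op
import sqlalchemy as sa

# The down_revision pointer is what orders the revision chain.
revision = "a1b2c3d4e5f6"
down_revision = "f6e5d4c3b2a1"  # None would mark this as the base

def upgrade():
    op.add_column("users", sa.Column("last_login", sa.DateTime()))

def downgrade():
    op.drop_column("users", "last_login")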


Adopting Alembic’s simple workflows has helped us reduce complexity and risk, save implementation time, and increase development throughput. It has given us an automated database refactoring technique, more control over the release process for new changes, and a path to continuous delivery of our product.


The State of Being Secure: A Primer on Security in your Organization

by Karel Gonzalez, Senior Software Engineer

A few weeks ago, I had the opportunity to attend the Lonestar Application Security Conference here in Austin. Security is something I have always been mindful of during my development, but I still felt a sense of futility about it. I ask myself on a fairly regular basis, “I’m doing something, but am I doing enough?” Continue reading


How to Lower Your Defect Rate Using Simple Requirements Techniques

By Chris McIntosh, Senior Software Developer

We have all seen the various studies of software development and the causes of late deliveries and cost overruns. The original Chaos report stated that a mere 16.2% of projects finished on time and on budget. There have also been numerous studies of the cost of defects and how it varies depending on when in the lifecycle they are discovered. The consensus, first reported by Barry Boehm in the 1980s, is that the later in the software process a defect is discovered, the more expensive it becomes. There is some debate about whether this is a hard and fast rule, but suffice it to say, defects are rarely free to fix. Agile has cropped up to address some of these issues, and it has certainly helped. A more recent report on software project outcomes finds that 50% to 70% of projects finish on time and on budget, with the projects using more agile techniques at the upper end of that range. Agile practices reduce the failure rate in part by making the team test the software more frequently and elicit requirements more often. That benefit, however, is wholly dependent on your team’s ability to gather, record, and test requirements efficiently.

Here are some simple techniques that you can slowly introduce to decrease the defect rate due to poor requirements.

Continue reading


Understanding Build Server and Configuration Management – Part 1

A two-part series by Morgan McCollough, Senior Software Engineer

The terms configuration management, continuous integration, and build server are used like buzzwords in the software world today. But how many people really understand what they mean in terms of time, efficiency, and money? Most university computer science programs don’t discuss the practical application and benefit of these concepts, possibly because they are beyond the purview of academic computer science.

In general, software configuration management is the systematic management of software changes both in terms of code and configuration for the purpose of maintaining integrity and traceability throughout the life cycle. In practical terms this can mean a wide variety of things, all the way from a simple source control branching and release management process to CMM level 5.

Continuous integration is the practice of merging all developer working copies of a source tree several times a day into a central development branch for the purpose of building, testing, and providing instant feedback. Implementing some version of one or both of these environments requires the creation of a centralized build server, the system where all code is merged, builds are produced, unit tests are executed, and eventually automated deployments are initiated.
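At its core, the cycle a build server repeats on every merge is short enough to sketch in a few lines. The commands below are placeholders for a project’s real build and test steps; dedicated servers such as Jenkins or TeamCity add scheduling, build history, and notifications on top of this loop:

#!/usr/bin/env python3
# A minimal sketch of one build-server cycle (commands are placeholders).
import subprocess
import sys

steps = [
    ["git", "pull", "--ff-only"],  # pick up the latest merged code
    ["make", "build"],             # produce the build artifacts
    ["make", "test"],              # run the unit tests
]

for step in steps:
    if subprocess.run(step).returncode != 0:
        # Fail fast so the team gets instant feedback on a broken build.
        print("FAILED: " + " ".join(step), file=sys.stderr)
        sys.exit(1)

print("Build and tests passed.")
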

In my line of work, I am in a position to work with a number of different clients and see diverse development environments and teams. In teams larger than two people, the difference between those that use continuous integration and solid configuration management and those that don’t is significant. Many teams likely forgo continuous integration or a build server because they don’t fully appreciate what can really go wrong, or the potential negative effects, when these systems are not in place. The time wasted doing builds, fixing deployment configurations, and correcting broken builds when configuration management is absent is a true hidden cost in software development.

Teams that have these systems in place:

  • Have a more stable code base
  • Tend to be more flexible
  • Can deploy and test in disparate and new environments with minimal effort, and
  • Can reproduce any specific build or release from any point in the history of the application

Many software development teams are familiar enough with the concepts of build servers, configuration management and continuous integration to know they are a good idea, but often decide it’s more trouble than it is worth to implement them. In my next post, I’ll outline the basic steps you can follow to implement continuous integration and configuration management in your development environment.


The Convenience of ClojureScript for Complex Applications

by Paul Bostrom, Senior Software Engineer

In my last blog post, The Case for Clojure, I discussed the advantages of using the Clojure programming language for writing applications on the Java Virtual Machine (JVM). Clojure’s success on the JVM led the language designers to compile the language to JavaScript as well, resulting in ClojureScript. Practically every device that can connect to the Internet has a web browser containing a JavaScript engine, making it an appealing target for developing applications.

Unfortunately, the JavaScript language contains flaws that make it difficult for developers to manage the complexity inherent in large applications. The widespread adoption of JavaScript presents a barrier to fixing these flaws without breaking existing applications. The JavaScript community has constructed various conventions in an attempt to work around JavaScript’s faults; however, these conventions are ad hoc and not consistently applied. ClojureScript provides developers with an entirely new language and set of semantics that can be executed on any existing JavaScript engine. ClojureScript avoids JavaScript’s shortcomings while providing a number of programming paradigms not currently available in the JavaScript ecosystem.

A Comparison of Clojure and ClojureScript

Clojure and ClojureScript are essentially identical languages, although they differ in the way that they deal with their respective execution environments, the JVM and the JavaScript engine. ClojureScript has the ability to interoperate with all of the interfaces that JavaScript uses, like the browser window, the document object model (DOM), and HTML5 APIs. ClojureScript uses the same immutable data structures as Clojure on the JVM. In a multi-threaded environment like the JVM, immutable data structures provide a nice alternative to the complex locking code used to preserve the integrity of data structures across multiple threads. Almost all JavaScript engines are single threaded, and only one particular section of code can run at any time. However, JavaScript engines are designed to handle asynchronous events that might be generated by a user interaction like a mouse click. It is hard to predict in what order related sections of code will run when developing JavaScript applications since these events can occur at any time. Immutable data structures make the development process easier by explicitly delineating when application data can be modified.

ClojureScript and JavaScript share some common characteristics, so the difference between program structures is not as vast as the difference between Clojure and Java. ClojureScript and JavaScript are both dynamic, functional languages, so many concepts can be expressed with the same degree of conciseness. ClojureScript has an additional advantage over JavaScript, though, since it uses the same macro system provided by Clojure. This allows developers to extend the capabilities of the compiler. Well-designed macros allow repetitive, boilerplate code to be abstracted away, resulting in much more concise source code.

Namespaces

ClojureScript enforces a specific organizational structure for code called namespaces. All libraries outside of the core language must be referenced by their unique namespace (similar to Java packages). The lack of an equivalent structure can be a pain point for JavaScript developers. An interesting scenario involves the use of the popular JavaScript libraries jQuery and Prototype. Each of these libraries defines a global symbol ‘$’. JavaScript code using both of these libraries cannot differentiate between calls to ‘$’ in jQuery or Prototype without writing additional code to manage this symbol conflict. ClojureScript’s namespaces conveniently avoid such conflicts.
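
A minimal sketch of the idea, with hypothetical namespace and function names: each library lives under its own namespace and is referred to through an alias, so identical function names never collide.

(ns myapp.core
  (:require [myapp.widgets :as widgets]
            [myapp.util :as util]))

;; Both libraries may define a function named render;
;; the alias makes each call unambiguous.
(widgets/render)
(util/render)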

Development Tools

The ClojureScript community has contributed a number of tools to increase productivity during application development. The tool ‘cljsbuild’ leverages the Clojure build tool ‘lein’ to automatically download third-party dependencies and then compile ClojureScript code into JavaScript. It has a convenient feature to automatically recompile when it detects changes to the source code.

One of the most innovative development tools for ClojureScript is called the “browser REPL.” The acronym “REPL” stands for “Read-Evaluate-Print-Loop.” It is a concept that is found in a number of development languages and usually consists of a text prompt that accepts source code to be instantly evaluated. The browser REPL is an extension of this idea and will evaluate ClojureScript code in the context of a running web browser, skipping the usual compile/refresh process. While this is certainly a valuable tool, most major web browsers provide a similar console for evaluating JavaScript code. The ClojureScript community has taken this concept a step further and integrated the browser REPL with a number of popular text editors. This allows editing of currently executing ClojureScript source code directly in the browser from the comforts of a full-featured text editor. There are few other development environments that offer such a dynamic experience.

Language Innovation

The Clojure development community has a reputation for producing innovative new libraries to resolve issues across all types of problem domains. A recent example is a new library for asynchronous programming called core.async, which its main architect (who is also the creator of Clojure) has described in detail. To summarize, JavaScript applications have traditionally used callback functions to handle asynchronous events like mouse clicks or responses from AJAX calls. This library has the potential to simplify how ClojureScript applications are structured, providing a nice alternative to callback functions. Because ClojureScript is compiled, and because the macro system can build abstractions out of portions of code, a library like core.async can provide new syntax without introducing any drastic changes to the language or the runtime environment. In contrast, the standard that JavaScript is based on, called ECMAScript, moves at a much slower pace in order to standardize any new syntax. Many of the features available today in ClojureScript cannot be introduced into the ECMAScript standard until a large group of stakeholders agrees on them. That caution is often necessary given the importance of stability in future JavaScript engines, but it often leaves JavaScript developers wanting more powerful language features for constructing applications.

As browser applications become more complex, JavaScript developers must expend additional effort to produce correct, maintainable code. ClojureScript provides numerous features for dealing with application complexity and creating clear and concise code. I encourage readers to consider using ClojureScript for your next web-based project.


Adapting Enterprise Applications for Mobile Interfaces

by Morgan McCollough, Senior Software Engineer

Since the explosion of smartphone and mobile device technology spurred by the release of the iPhone in 2007 and Android-based devices starting in 2008, mobile devices have been making more and more inroads into the enterprise business environment. Many major websites have released mobile versions that automatically adapt to the smaller screen real estate, and others have developed native applications that allow for even greater flexibility and function than is available on a web page. Smartphones and tablets are also being used as mobile point-of-sale devices, having completely replaced cash registers in some places. The future possibilities are nearly endless. However, integrating a new technology into an existing application infrastructure can be a challenge, especially for large enterprises, so in this post we will discuss a basic strategy for adapting an existing enterprise application for use with a mobile interface.

Many large enterprises have invested in web application technologies, some homegrown and others off-the-shelf. While these applications are technically accessible from any device with a browser, adapting an application to a smaller interface can be a much bigger challenge than a simple website. Most applications have interactive editing features that can be difficult to navigate in a mobile interface. Creating a native application for these devices is arguably a better investment for the long run, especially considering that the enterprise has the luxury of dictating the specific platform to be used.

The environments for developing native mobile applications have evolved significantly, making the task of creating an interface for a mobile application a relatively easy one. The challenge then becomes the integration with back-end services. This is an area where the benefits of advancements in web services and service-oriented architecture (SOA) can really be seen. Enterprise applications that are already built on a service-oriented architecture will likely already have a web service interface to give clients access to data and application services. Also, development tools have advanced to the point that adding basic web services onto an existing server side application can be a relatively straightforward task. The key is to look at all mobile application interfaces in your enterprise simply as new consumers of these data and application services. If you carefully consider the specific use cases of the mobile interface you would like to create, it is likely you will be able to narrow down the required services to a relatively small number. It is therefore possible to roll out a whole new mobile interface to your existing infrastructure within a few short weeks or months, depending mostly on the complexity of the services rather than the mobile interface itself.

Finally, there is the question of how the mobile client should interact with the web services. With traditional SOAP web services this is a relatively easy task on the desktop, thanks to the large number of development tools and libraries that provide most of the client code for you. In a mobile operating system, your options are much more limited, and in the absence of a helper library, writing an interface to a SOAP web service can be time consuming and challenging. It is therefore much simpler and more efficient to use architectures based on existing transport technologies and semantics. This is where REST-based service architectures excel. Web services based on REST use basic HTTP and can pass data using simple existing formats like XML or JSON. In fact, there are a number of open-source libraries available to read and write JSON data, and more modern versions of mobile operating systems include JSON libraries in their development APIs. With these tools in hand, calling a REST service becomes a simple act of sending an HTTP request to the right URL and making a simple call to read the JSON response.
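
For illustration, here is what that amounts to in a few lines of Python, using only the standard library (the URL and field name are hypothetical); the equivalent calls on a mobile platform are similarly short once a JSON library is available:

import json
import urllib.request

# Hypothetical REST endpoint that returns a JSON document.
url = "https://example.com/api/customers/42"

with urllib.request.urlopen(url) as response:
    customer = json.load(response)  # parse the JSON response body

print(customer["name"])  # read a field from the parsed document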

On the server side, adding a REST layer over an existing service infrastructure is also relatively straightforward. Many of the existing web service application frameworks already contain features to make the addition of REST services very simple. Also, because the mechanics of calling the services are so simple, the process of testing the services in isolation can be as simple as opening a browser and hitting the right URL.
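
As a sketch of how thin that layer can be, here is a hypothetical REST resource wrapped around an existing service function, using the Flask microframework (all names are illustrative, not a specific framework recommendation):

from flask import Flask, jsonify

app = Flask(__name__)

def get_customer(customer_id):
    # Stand-in for a call into the existing service layer.
    return {"id": customer_id, "name": "Example Customer"}

@app.route("/api/customers/<int:customer_id>")
def customer_resource(customer_id):
    # Expose the existing service call as JSON over HTTP.
    return jsonify(get_customer(customer_id))

if __name__ == "__main__":
    app.run()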

Obviously, the details can get more complicated than what I have outlined here, but in our business we have been able to prove that using this approach, a couple of developers can create a new mobile interface to an enterprise system in a matter of weeks rather than months.


Even Agile Needs a Quality-First Focus

by John Brooks, Vice President of Engineering

There is no doubt that in the software development world, Agile development is a hot topic. Just about every company engaged in software development describes itself as an “Agile house,” and I haven’t seen a software-related resume without Agile listed in a very long time. Amazon lists more than 2,000 books related to the topic.

This often leads to an automatic assumption in our industry that practicing Agile methodology will produce higher-quality software than any other development methodology would. References to this assumption exist almost everywhere Agile is discussed, and it’s easy to understand why. I believe there are very high value aspects of Agile practice that can benefit almost every project. But when we treat one development practice as a miracle method, we as an industry sometimes lose sight of the real end goal of software development: quality, not how we develop it. We should be focused on delivering quality software that truly works as it should.

Too many companies and Agile practitioners take it for granted that high software quality is an automatic side effect of Agile; but just like any other methodology, Agile’s practices must be properly executed, or quality will suffer. Agile is great, but Agile alone does not a perfect project make. There are some ways to bolster your Agile practices for a tighter, more quality-focused development process overall.

Software veterans can tell you the Waterfall model, with its “iron triangle,” was almost universal back in the 1970s and ’80s, and it is still in practice at many companies today.[i] That iron triangle, consisting of requirements, cost, and schedule, created an unintended negative effect: the critical dimension of quality was not even included. With requirements, schedule, and cost all fixed, the only variable left was quality. And we all know how well that worked out.

Steve McConnell, author of Code Complete and recognized authority on software development, recently addressed this during a speech in Austin that I attended. He has found, as I have, that what does work for most enterprise projects is a hybrid approach, and he’s done the research to prove it. Business leaders must be able to plan, budget, and determine business value at the beginning of a project. A pure Agile practice can make that difficult because the planning elements (of the iron triangle) are too vaguely defined at that point. By adding a bit of Waterfall up front, with Agile for the rest of the project, the business gets what it needs when it needs it, and development can utilize the more efficient Agile methodology. A Waterfall-Agile hybrid.

There still needs to be a focus on quality during all phases of the software development life cycle. The tendency to skimp on QA during a Waterfall project has transferred right over to the practice of Agile on many projects. Bad habits like code-and-throw, a lack of unit testing, long release-to-QA cycles, and involving testers too late are symptoms of this.

The “extra room” created by the flexibility and speed of Agile should be used to more tightly integrate testing into the practice. Otherwise, the real world pressures of the iron triangle will still surface.

As an industry, we need to include predictability and quality as equals to the classic iron triangle elements of requirements, cost, and schedule. None of these elements should be considered immutably fixed. We can do this without throwing out the best of either Waterfall or Agile. At Bridge360, we specialize in the practice of delivering quality software first, no matter what methodology our clients are using. We help our clients implement lightweight, lean methods of delivering the best quality software possible.


[i] Leffingwell, Dean (2010). Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley.