DevOps – a DBA’s perspective

Earlier this year we ran a DevOps 101 webinar in conjunction with Redgate for their predominantly DBA audience. In this blog post our own DevOpsGuys DBA, Paul Ibison, gives his views on what DevOps means from a DBA's perspective. Note that Paul is a SQL Server DBA, so some of the comments are specific to SQL Server, although the overall approach should work for any database platform.

Over to Paul!

OK. In a DevOps pipeline, the methodology is sometimes to treat the database as an isolated “black box”, so that we deploy changes to it in the same manner as the normal packaged code deployments.

To achieve this, the process may look something like this:

  1. We compare the current state of the database to a baseline state using a tool to create a comparison file.
    1. The tool will look at the database schema and create a comparison (DACPAC/NuGet/etc.) file. A comparison-based approach can cause issues that result in data loss (e.g. splitting a single column into two new columns), although some tools (e.g. Redgate’s) allow you to mitigate this without resorting to manual deployments.
    2. The tool may or may not compare some reference data and generate change scripts for this as well; it all depends on the requirements. Comparing the data held in user tables is sometimes a little tricky, as the key values might differ across environments, but it is certainly possible.
    3. We create scripts using unit test frameworks such as tSQLt to test the code elements of the database, such as stored procedures and functions.
  2. The database comparison files are published with the other elements of the software release. These files should be checked into an artefact repository at this stage to ensure we always have a record of what was changed with that release. Your base schema and other database code should be in source control too!
  3. The database package is then deployed automatically and updates the database, ideally using the same or similar deployment methods to the code.
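As an illustration of step 1.3, a minimal tSQLt unit test might look like the sketch below. The procedure `dbo.GetOrderTotal` and the `dbo.OrderLines` table are hypothetical stand-ins for your own database code:

```sql
-- Assumes the tSQLt framework is installed in the development database.
-- dbo.GetOrderTotal and dbo.OrderLines are hypothetical examples.
EXEC tSQLt.NewTestClass 'OrderTests';
GO
CREATE PROCEDURE OrderTests.[test GetOrderTotal sums line items]
AS
BEGIN
    -- FakeTable isolates the table so the test doesn't depend on real data
    EXEC tSQLt.FakeTable 'dbo.OrderLines';
    INSERT INTO dbo.OrderLines (OrderId, Amount) VALUES (1, 10.00), (1, 15.50);

    DECLARE @actual MONEY;
    EXEC dbo.GetOrderTotal @OrderId = 1, @Total = @actual OUTPUT;

    EXEC tSQLt.AssertEquals @Expected = 25.50, @Actual = @actual;
END;
GO
-- Run the whole test class as a pipeline step:
EXEC tSQLt.Run 'OrderTests';
```

The pipeline can then fail the build if `tSQLt.Run` reports any failing tests, just as it would for failing application unit tests.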

In the ideal deployment pipeline scenario it really is this straightforward with databases.

I don’t want to trivialise the role of the DBA (especially as I am one!), but in this instance they are not really involved in the deployment process in any significant way. This of course leaves them free to concentrate on the (very important!) related core operational tasks – backups, re-indexing, high availability, security, optimisation, troubleshooting and so on.

So, what if the deployment pipeline scenario is not ideal? Well, the problems come when the database is dependent on other bits of what we can call “peripheral architecture”. These extra “bits” could exist in the current database, another user database or one of the system databases or even further afield.

To avoid getting overwhelmed by all these variations, I like to separate this peripheral architecture into two classes:

  1. Basic configurations. Many of these come from the system databases. From the master database we might include logins, server triggers, linked servers, configurations and encryption (certificates and keys). For the msdb database we’ll have the jobs, operators, alerts, messages etc. All these are “basic configurations” in the sense that they are usually a single row in a system table and quite easy to script.
  2. Complex configurations. Here we consider the setup of replication, log shipping, SSIS packages, availability groups, clustering, mirroring, service broker, database mail, CLR DLLs, change tracking, extended events and so on. These technologies involve entries in many different system tables in system databases and scripting them might involve a fair bit of work.

One thing the Basic and Complex Configurations have in common is that bringing them into scope for the deployment pipeline means we no longer treat the system as a simple database-level black box; the black box becomes the SQL instance itself. This defines the outer boundary of our dependencies – remember that the deployable artefacts themselves are far more granular than this. We could of course push things even further and widen the boundary to include the server (ever heard of the GPO setting “Perform volume maintenance tasks”?), and then why not include the DNS/AD servers too – but in the interests of being practical we have to stop somewhere, and the SQL instance seems like the best point of demarcation.

The other point is that when we add the Basic and Complex Configurations together, the resulting list contains some definite dependencies for our build, while many of the others are really only potential dependencies. As an example of a definite dependency, we can be sure that a build script for a new database user requires the Basic Configuration of a login. This is solved by having a script that checks for the login during the deployment and creates it if it doesn’t exist. Other configurations (Basic and Complex) could be relevant to our build, but we don’t initially know for sure – an SSIS package, a SQL job or replication articles might touch parts of our build: these are the potential dependencies.
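The login example can be sketched as a simple guard script; the login name and password here are hypothetical placeholders (in practice the password would come from the pipeline’s secret store, not the script):

```sql
-- Guard script for a "definite dependency": create the login only if missing.
-- [AppServiceLogin] and the password are hypothetical placeholders.
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'AppServiceLogin')
BEGIN
    CREATE LOGIN [AppServiceLogin] WITH PASSWORD = N'ChangeMe!123';
END;

-- And the database user that depends on the login:
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'AppServiceLogin')
BEGIN
    CREATE USER [AppServiceLogin] FOR LOGIN [AppServiceLogin];
END;
```

Because the script is idempotent, it can run safely on every deployment, whether the login already exists or not.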

At this stage let me say that totally ignoring the list of potential dependencies often works! The build runs, deployments happen and users are happy. That is – until it doesn’t work! Then questions are asked – why is this job that failed not also being updated and deployed? Why didn’t the SSIS package get updated during the build? Why didn’t you test replication before deploying? Etc. What we need is an automated build process that takes the finger-crossing out of the equation as far as possible.

So how do we deal with all these configurations and get a build that always works, or at least gets nearer to that goal? There are some guidelines we can follow:

  1. Make a list of your configurations which are to be considered to be in scope.

    In particular examine the list of potential dependencies and clarify which are truly relevant. Are there jobs which refer to user tables? Are the SSIS packages importing into our tables? Are the replicated tables the same as our deployment tables? Are we using SMTP? Are these linked servers being used? Etc.
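For the job-related questions above, the `msdb` system tables can give a quick first answer. A sketch query like the following (the table name `OrderLines` is a hypothetical example – substitute each table in your build) lists SQL Agent job steps that mention one of your deployment tables:

```sql
-- Sketch: find SQL Agent job steps whose command text references one of
-- our deployment tables, to help decide whether those jobs are in scope.
SELECT j.name AS job_name, s.step_name
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s
    ON s.job_id = j.job_id
WHERE s.command LIKE N'%OrderLines%';  -- hypothetical table name
```

A crude text search like this won’t catch everything (e.g. dynamic SQL or SSIS packages stored outside `msdb`), but it is a useful starting point for building the in-scope list.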

  2. Decide to what extent each environment should aim to have the same configurations.

    If there is replication in Prod then why not have it in Dev also? Ideally we decide to have all environments identical so we can thoroughly test the build before deploying to Production but there may be financial constraints which make this impossible. Clustering is an obvious case – often the cluster exists in Preprod and Prod but not Dev, Test and UAT due to the expense.

  3. Make a recreate/update decision for each configuration

    I.e. do we always drop and recreate, or do we create only if the configuration doesn’t exist and update it if it does? It is easiest to drop the extra architecture and recreate it each time we deploy: the scripting is simpler this way and we ensure the configuration is identical. For something like the database mail setup this can be done quite nicely. For some configurations, however, it is bad practice – e.g. we won’t want to lose the job history each time we recreate the jobs. In other cases it is simply impractical, e.g. having to reinitialize all those replication subscribers or set up log shipping destinations as part of each deployment – I doubt anyone will agree to that.
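The “update if exists” pattern for a SQL Agent job might be sketched as below, which preserves the rows in `msdb.dbo.sysjobhistory`; the job name `NightlyETL` is hypothetical:

```sql
-- Sketch of create-or-update for a SQL Agent job, keeping job history intact.
-- 'NightlyETL' is a hypothetical job name.
IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE name = N'NightlyETL')
BEGIN
    EXEC msdb.dbo.sp_add_job @job_name = N'NightlyETL', @enabled = 1;
    EXEC msdb.dbo.sp_add_jobserver @job_name = N'NightlyETL';
END
ELSE
BEGIN
    -- Job exists: update its properties rather than drop/recreate,
    -- so the rows in msdb.dbo.sysjobhistory are preserved.
    EXEC msdb.dbo.sp_update_job @job_name = N'NightlyETL', @enabled = 1;
END;
```

A full script would also reconcile the job steps and schedule, but the shape is the same: check, then create or update.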

  4. Create deployment scripts for the configurations.

    These scripts will deploy all the peripheral architecture and will most likely be environment-specific. For example, linked servers in Dev are unlikely to be the same as those in Prod, logins are often set up differently across environments by design for security purposes, the SMTP profiles in Dev shouldn’t send emails to Prod support and so on. Third-party tools might help us create some scripts here as a starting point.
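One common way to keep such scripts environment-specific is SQLCMD variables, with the pipeline passing in the per-environment value. A sketch for a linked server (the server name `DEV-REPORTS01` is a hypothetical example):

```sql
-- Run in SQLCMD mode. The pipeline would normally supply the variable, e.g.
--   sqlcmd -i linkedserver.sql -v LinkedServerName="DEV-REPORTS01"
:setvar LinkedServerName "DEV-REPORTS01"

IF NOT EXISTS (SELECT 1 FROM sys.servers WHERE name = N'$(LinkedServerName)')
BEGIN
    EXEC sp_addlinkedserver
        @server = N'$(LinkedServerName)',
        @srvproduct = N'SQL Server';
END;
```

The same script then deploys unchanged to every environment, with only the variable value differing per pipeline stage.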

  5. Create test scripts for some of the complex configurations.

    One of the deployment steps checks that replication still works, that the SSIS packages run, that jobs still succeed, that emails still get sent and so on. This is a validation – a functional smoke test / integration test – confirming that nothing is broken.
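One such check can be sketched directly against `msdb`: fail the deployment if any SQL Agent job has failed since the deployment day started (the date filter here is a deliberately crude, illustrative assumption – a real pipeline would record the actual deployment start time):

```sql
-- Post-deployment smoke check (sketch): raise an error if any SQL Agent job
-- has failed today. run_date is stored as an integer yyyymmdd in msdb.
DECLARE @Today INT = CONVERT(INT, CONVERT(CHAR(8), GETDATE(), 112));

IF EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobhistory
    WHERE step_id = 0          -- job outcome rows (not individual steps)
      AND run_status = 0       -- 0 = failed
      AND run_date >= @Today
)
    RAISERROR (N'Smoke test failed: a SQL Agent job has failed today.', 16, 1);
```

Similar queries against the replication monitoring procedures or `msdb.dbo.sysmail_event_log` can cover the replication and database mail checks.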

To summarise: there are some elements of a database that can be thought of as code and others that can be thought of as infrastructure. When working with these elements, the complexity comes from preserving data and from the need for a more holistic view of “the database”. While it is not simple, there are already well-established ways of managing all this – we just need to implement the process to put it in place.

Thanks Paul!

From a wider DevOps perspective much of what Paul discusses relates to the First Way of DevOps – Systems Thinking. Always looking at the holistic “Big Picture” – in this case, always looking beyond the simple database-level solutions and seeking to provide a holistic solution that covers all areas of the database platform (“replication, log shipping, SSIS packages, availability groups, clustering, mirroring, service broker, database mail, CLR DLLs, change tracking and so on”).
