
Feb 11, 2015

Liquibase: Mopping up Issues

Justin Wilson


Today’s blog post covers various issues I’ve run into using Liquibase and their solutions. Parts one and two of this series provide an introduction to Liquibase and some of its basic concepts.

Could Not Acquire Change Log Lock

Liquibase maintains a table called DATABASECHANGELOGLOCK for the sole purpose of maintaining a lock on the database while applying changesets. If its process shuts down before it’s had a chance to undo the lock, manual intervention will be required before Liquibase can execute against the database again. Liquibase update will display the following error when it’s executed against a locked database:

“Liquibase Update Failed: Could not acquire change log lock”

If this happens and there really isn’t another Liquibase instance executing against the database, run the following SQL command to remove the lock:

UPDATE DATABASECHANGELOGLOCK SET LOCKED = FALSE;

Rename Column in MySQL

While Liquibase changesets that avoid manual SQL are usually database agnostic, some changes fail on certain types of databases but not on others. A common cause of Liquibase failures is using the renameColumn element without the columnDataType attribute, which only MySQL requires. If developer machines don't use MySQL but QA or production servers do, the changeset will pass locally but fail when it reaches the MySQL environment.
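As a minimal sketch (the table and column names here are made up, and the changeset is assumed to sit inside the usual databaseChangeLog root element), a portable rename includes the columnDataType even though only MySQL needs it:

<changeSet id="rename-person-surname" author="example">
    <!-- columnDataType is ignored by most databases but required by MySQL -->
    <renameColumn tableName="PERSON"
                  oldColumnName="SURNAME"
                  newColumnName="LAST_NAME"
                  columnDataType="varchar(255)"/>
</changeSet>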

Recovering From Inconsistent Changeset Failures Across Environments

If a changeset succeeds in one environment but fails in another, there are several options available for recovery.

1. Replace the old changeset with one or more changesets that use the precondition element to ensure that databases where the old changeset succeeded, databases where it failed, and databases that have never applied it all end up in the same state. The precondition element allows changesets to execute conditionally based on the state of the database, which is the perfect way to recover when those states are inconsistent.

For example, consider this changeset that renames a column without specifying a columnDataType. It will succeed on a developer’s H2 database and fail on a QA MySQL database.
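A minimal sketch of such a changeset (the ID, author, and table and column names here are illustrative):

<changeSet id="rename-attribute-column" author="example">
    <!-- Succeeds on H2, fails on MySQL: no columnDataType supplied -->
    <renameColumn tableName="PERSON_ATTRIBUTE"
                  oldColumnName="ATTR_NAME"
                  newColumnName="ATTRIBUTE_NAME"/>
</changeSet>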

One solution is to replace it with a second changeset that uses the changeSetExecuted precondition. That changeset will skip execution on any developer database that already ran the original changeset, run successfully on any MySQL database, and succeed on any other database the original changeset never executed against (such as other developer databases that haven't checked out the first changeset yet).
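A sketch of that replacement changeset (the IDs, author, and changelog file name are placeholders):

<changeSet id="rename-attribute-column-fixed" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- Skip databases that already ran the original changeset -->
        <not>
            <changeSetExecuted id="rename-attribute-column"
                               author="example"
                               changeLogFile="db.changelog-master.xml"/>
        </not>
    </preConditions>
    <!-- columnDataType is included this time so MySQL accepts the rename -->
    <renameColumn tableName="PERSON_ATTRIBUTE"
                  oldColumnName="ATTR_NAME"
                  newColumnName="ATTRIBUTE_NAME"
                  columnDataType="varchar(255)"/>
</changeSet>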

Alternatively, rather than checking for a previously executed changeset, the precondition could check for a different property of the database. In this case, it can check whether the column has already been renamed, and if it has, the rename won't be attempted again.
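Using the same placeholder names, that version of the precondition could be sketched as:

<changeSet id="rename-attribute-column-fixed" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- Only run if the column hasn't already been renamed -->
        <not>
            <columnExists tableName="PERSON_ATTRIBUTE" columnName="ATTRIBUTE_NAME"/>
        </not>
    </preConditions>
    <renameColumn tableName="PERSON_ATTRIBUTE"
                  oldColumnName="ATTR_NAME"
                  newColumnName="ATTRIBUTE_NAME"
                  columnDataType="varchar(255)"/>
</changeSet>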

2. If the old changeset only successfully ran on developer machines and failed when pushed to the first non-developer environment, the changeset can be directly modified without changing its ID or author. There won’t be any MD5 hash conflict errors on the non-developer environments since the changeset failed to apply to them. However, there will be hash conflicts on developer databases. Developers can either delete the MD5 sum of the changeset from the DATABASECHANGELOG table on their local databases and execute SQL to manually remove its side effects before running the fixed changelog, or just wipe their local databases and let Liquibase re-construct them when the application starts. Wiping local databases is generally the easiest and fastest option in this situation.
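For the first of those two options, the manual cleanup amounts to SQL along these lines (a sketch only, assuming "deleting the MD5 sum" means removing the changeset's DATABASECHANGELOG row so the fixed version re-runs, and reusing the placeholder ID and author from the sketches above):

-- Forget the old changeset so the fixed version runs again on the next update
DELETE FROM DATABASECHANGELOG
 WHERE ID = 'rename-attribute-column' AND AUTHOR = 'example';

-- ...followed by database-specific SQL that reverses the original changeset's side effects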

Liquibase Upgrades

Liquibase has occasionally changed the methodology used to calculate MD5 hashes on changesets as new versions have been released. Generally these changes have improved how whitespace and comment changes are ignored. However, an upgrade can cause changesets executed with a previous version of Liquibase to generate a different MD5 sum even though they were never modified, causing Liquibase update failures. There are two solutions to this situation:

  1. The new MD5 sum can be set in the correct DATABASECHANGELOG row in each affected database so that Liquibase will think that the changeset successfully ran with the new version.

  2. The old MD5 sum can be added as a validCheckSum element to each affected changeset to whitelist it.
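A sketch of the second option, reusing the placeholder rename changeset from earlier (the checksum value shown is just a stand-in for whatever sum the older Liquibase version recorded):

<changeSet id="rename-attribute-column" author="example">
    <!-- Accept the checksum computed by the previous Liquibase version -->
    <validCheckSum>7:00000000000000000000000000000000</validCheckSum>
    <renameColumn tableName="PERSON_ATTRIBUTE"
                  oldColumnName="ATTR_NAME"
                  newColumnName="ATTRIBUTE_NAME"
                  columnDataType="varchar(255)"/>
</changeSet>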

This situation is very rare and should occur less frequently now that Liquibase has matured.

Liquibase Getting Slower?

Liquibase takes more time to apply updates when there are more changesets. Every once in a while (such as after a major application release), it’s a good idea to merge all existing changesets into one to improve Liquibase execution speeds.

To do this, make one changeset and copy the contents of every other changeset into it in order. If certain changesets modified older ones, those changes could be consolidated at this time. For example, if a column was added to a table by a later changeset, it could be copied into the earlier declaration of the table if there aren’t any conflicts with other changesets in between. Collapsing modifications can make the changelog file much more readable by cutting down on the total number of operations the master changeset will contain.

After consolidating everything into one changeset, add a precondition element that prevents it from running on databases that the original changesets were already applied to. The changeSetExecuted precondition, using the ID of the last original changeset, is a good candidate for this.
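A sketch of how the merged changeset's guard might look (the IDs, author, changelog file name, and the single createTable shown are placeholders for the real consolidated contents):

<changeSet id="consolidated-schema" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- Skip databases that were already built from the original, unmerged changesets -->
        <not>
            <changeSetExecuted id="last-original-changeset"
                               author="example"
                               changeLogFile="db.changelog-master.xml"/>
        </not>
    </preConditions>
    <!-- The contents of every original changeset, copied here in order -->
    <createTable tableName="PERSON">
        <column name="PERSON_ID" type="bigint">
            <constraints primaryKey="true" nullable="false"/>
        </column>
    </createTable>
</changeSet>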

Primary Key Length Differences

Due to an odd Liquibase rollback defect, one of the most complicated situations to resolve is creating a primary key whose length is legal on a local developer database (e.g., H2) but illegal on a different environment's database, such as a QA MySQL database. MySQL won't accept index keys longer than 767 bytes, and with a four-byte-per-character encoding such as MySQL's utf8mb4, 4 bytes are reserved for each character, which limits a key column to 191 characters. It's easy to concatenate a couple of strings into a primary key without realizing this limit has been exceeded, especially when using custom properties such as ${type.uuid} that hide the length of the column in the property declaration rather than in the changeset itself.
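As an illustration, here is a sketch using a hypothetical ${type.name} property set to varchar(255): the length only ever appears in the property declaration, not in the changeset that uses it.

<!-- Master changelog: the column length is only visible here -->
<property name="type.name" value="varchar(255)"/>

<!-- A changeset (inside a createTable): nothing here hints that, with a 4-byte
     encoding, this column alone is 255 * 4 = 1020 bytes, over MySQL's 767-byte limit -->
<column name="ATTRIBUTE_NAME" type="${type.name}">
    <constraints primaryKey="true" nullable="false"/>
</column>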

Normally, Liquibase treats changeset applications as transactions. When something goes wrong while applying a changeset, it will roll back any changes that occurred. However, Liquibase does not roll back changes that occur before applying an invalid primary key constraint! It will not execute changes that occur after the violation, but it won’t clean up anything else that came before it. This can make recovery complicated when a lot of changes occur within one changeset.

Consider the following example, which assumes that a table called PERSON_REQUIRED_ATTRIBUTE exists with PERSON_ID and ATTRIBUTE_ID columns that together form a composite primary key constraint called PK_PERSON_REQUIRED_ATTRIBUTE. This changeset attempts to replace the ATTRIBUTE_ID column with an ATTRIBUTE_NAME column.

Make the PERSON_REQUIRED_ATTRIBUTE table contain an ATTRIBUTE_NAME column instead of an ATTRIBUTE_ID column.
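A sketch of such a changeset (the author and the ${type.name} property are placeholders; the table, column, and constraint names come from the description above):

<changeSet id="bad-add-person-required-attribute" author="example">
    <comment>Make the PERSON_REQUIRED_ATTRIBUTE table contain an ATTRIBUTE_NAME column instead of an ATTRIBUTE_ID column.</comment>
    <dropPrimaryKey tableName="PERSON_REQUIRED_ATTRIBUTE"
                    constraintName="PK_PERSON_REQUIRED_ATTRIBUTE"/>
    <dropColumn tableName="PERSON_REQUIRED_ATTRIBUTE" columnName="ATTRIBUTE_ID"/>
    <addColumn tableName="PERSON_REQUIRED_ATTRIBUTE">
        <column name="ATTRIBUTE_NAME" type="${type.name}">
            <constraints nullable="false"/>
        </column>
    </addColumn>
    <!-- Fails on MySQL when ${type.name} is too long; the changes above are NOT rolled back -->
    <addPrimaryKey tableName="PERSON_REQUIRED_ATTRIBUTE"
                   columnNames="PERSON_ID, ATTRIBUTE_NAME"
                   constraintName="PK_PERSON_REQUIRED_ATTRIBUTE"/>
</changeSet>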

If ${type.name} is too long (e.g., 255 characters), then the primary key will be too long for MySQL, and everything up to (but not including) the addPrimaryKey element will still be applied when the changeset fails on a MySQL database. This makes recovery complicated: new changesets must be created that undo the creation of the invalid-length column, using preconditions to make sure they aren't run on databases that never encountered the issue.

The following series of four changesets could replace the bad changeset above to recover from this situation:

1. The primary key constraint needs to be removed before it can be re-created. That applies both to developer databases that already ran the “bad-add-person-required-attribute” changeset but didn’t encounter the primary key failure, and to fresh databases that still have the constraint from the original changeset that created it back when it used the ATTRIBUTE_ID column. Since the constraint was already removed on the MySQL database where the “bad-add-person-required-attribute” changeset failed, the precondition prevents this changeset from executing against that database and failing by trying to drop something that was already dropped.

Remove existing constraints in preparation for the "fixed-add-person-required-attribute" changeset.
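A sketch of this first changeset (the ID and author are placeholders):

<changeSet id="fix-1-drop-existing-primary-key" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- Already dropped on the MySQL database where the bad changeset failed -->
        <primaryKeyExists tableName="PERSON_REQUIRED_ATTRIBUTE"
                          primaryKeyName="PK_PERSON_REQUIRED_ATTRIBUTE"/>
    </preConditions>
    <comment>Remove existing constraints in preparation for the "fixed-add-person-required-attribute" changeset.</comment>
    <dropPrimaryKey tableName="PERSON_REQUIRED_ATTRIBUTE"
                    constraintName="PK_PERSON_REQUIRED_ATTRIBUTE"/>
</changeSet>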

2. The ATTRIBUTE_ID column has already been removed from every database that the “bad-add-person-required-attribute” changeset executed against, but now that the bad changeset is being removed from the changelog, databases it never ran against still need to drop that column.

Remove existing column in preparation for the "fixed-add-person-required-attribute" changeset.
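A sketch of the second changeset (the ID and author are placeholders):

<changeSet id="fix-2-drop-attribute-id-column" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- Databases that ran the bad changeset have already lost this column -->
        <columnExists tableName="PERSON_REQUIRED_ATTRIBUTE" columnName="ATTRIBUTE_ID"/>
    </preConditions>
    <comment>Remove existing column in preparation for the "fixed-add-person-required-attribute" changeset.</comment>
    <dropColumn tableName="PERSON_REQUIRED_ATTRIBUTE" columnName="ATTRIBUTE_ID"/>
</changeSet>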

3. In a similar fashion to step two, only databases that have encountered the old “bad-add-person-required-attribute” changeset will have an ATTRIBUTE_NAME column that’s too large to use. This changeset will remove that column if it exists and still succeed validation even if there’s nothing to remove.

Ensure any existing ATTRIBUTE_NAME column with the wrong size is dropped.
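A sketch of the third changeset (the ID and author are placeholders):

<changeSet id="fix-3-drop-oversized-attribute-name" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- Only present on databases that ran the bad changeset before it failed -->
        <columnExists tableName="PERSON_REQUIRED_ATTRIBUTE" columnName="ATTRIBUTE_NAME"/>
    </preConditions>
    <comment>Ensure any existing ATTRIBUTE_NAME column with the wrong size is dropped.</comment>
    <dropColumn tableName="PERSON_REQUIRED_ATTRIBUTE" columnName="ATTRIBUTE_NAME"/>
</changeSet>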

4. After the previous three changesets have been executed, the table will not have any primary key constraints and it will not have an ATTRIBUTE_NAME column on any database, regardless of whether or not a database has applied the original “bad-add-person-required-attribute” changeset. At this point, the ATTRIBUTE_NAME column can be re-created with the right size and the new primary key constraint can be applied.

Make the PERSON_REQUIRED_ATTRIBUTE table contain an ATTRIBUTE_NAME column instead of an ATTRIBUTE_ID column.
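A sketch of the final changeset, using the “fixed-add-person-required-attribute” ID mentioned above (the author and the varchar(191) size are assumptions):

<changeSet id="fixed-add-person-required-attribute" author="example">
    <comment>Make the PERSON_REQUIRED_ATTRIBUTE table contain an ATTRIBUTE_NAME column instead of an ATTRIBUTE_ID column.</comment>
    <addColumn tableName="PERSON_REQUIRED_ATTRIBUTE">
        <!-- varchar(191) keeps the column's index entry under MySQL's 767-byte limit
             with a 4-byte-per-character encoding -->
        <column name="ATTRIBUTE_NAME" type="varchar(191)">
            <constraints nullable="false"/>
        </column>
    </addColumn>
    <addPrimaryKey tableName="PERSON_REQUIRED_ATTRIBUTE"
                   columnNames="PERSON_ID, ATTRIBUTE_NAME"
                   constraintName="PK_PERSON_REQUIRED_ATTRIBUTE"/>
</changeSet>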

Solving New Issues

Any Liquibase changeset failure can be solved with creative manipulation of precondition elements and/or modifications to the DATABASECHANGELOG table. The examples in this blog don’t cover every possible situation that could occur in a continuous deployment environment, but the techniques covered can be combined to resolve new situations as they’re encountered.

One large topic not covered in this blog series is the ability to specify rollback operations within changesets. Check out Liquibase.org for more information on additional features outside the scope of this blog series, including how to set up and use Liquibase for the first time.

Please post any questions to the comments section below, and be sure to keep in touch with us on LinkedIn and Twitter at @CrederaOpen.
