Liquibase + Hibernate 5 + Envers + Spring Boot Naming Conventions

Starting with a project using Spring Boot 1.5.8 and Hibernate 5.2.12, I implemented Liquibase to handle database schema changes. The runtime side is straightforward enough, and plenty of other tutorials cover how to set it up.

The schema diff generation side was another matter. The issue I hit was that the identifier names did not match – Spring Boot would produce underscored names, e.g., my_table, while Liquibase would generate names straight from the class or field name, e.g., MyTable. To get around this I could specify the actual table or column names in the @Table or @Column annotations (see the sketch below). However, this was not going to work once we put Envers into the mix.
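
For illustration, the explicit overrides look something like this – a hypothetical entity, not from the project, with names chosen to match the underscored form Spring Boot generates:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical entity: the explicit names force Hibernate and Liquibase
// to agree on my_table / display_name.
@Entity
@Table(name = "my_table")
public class MyTable {

    @Id
    private Long id;

    @Column(name = "display_name")
    private String displayName;
}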

Envers automatically produces a revision table, REVINFO, plus an audit table for each audited entity, normally suffixed with _AUD. Spring Boot wanted the lowercase versions of these names, but Liquibase wanted to create the uppercase versions, and while I could override the audit table names by passing the relevant Envers property via system parameters, the revision table name is hard coded.

The root cause is that Liquibase wasn't set up with the same naming strategies as Spring Boot. It was being called from a Gradle JavaExec task that was based on a slightly naive template which didn't pass the relevant naming strategies in the referenceUrl. The key is to append hibernate.physical_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy&hibernate.implicit_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy to that URL.

Unfortunately, there are few search results that really explain what is going on for the symptoms I was seeing, but the JHipster project generator does exactly this, so I'd suggest generating a project with it and inspecting the result for a fuller example.

For reference, this is the basic task:

task liquibaseDiffChangelog(type: JavaExec) {
    group = "liquibase"

    // configurations.liquibase is a custom configuration declared elsewhere in
    // the build, holding the Liquibase and liquibase-hibernate dependencies.
    classpath sourceSets.main.runtimeClasspath
    classpath configurations.liquibase
    main = "liquibase.integration.commandline.Main"

    // buildTimestamp() and config are helpers defined elsewhere in the build
    // script (a timestamp formatter and the parsed Spring datasource settings).
    args "--changeLogFile=" + buildTimestamp() + "_changelog.groovy"
    args "--referenceUrl=hibernate:spring:org.nigelsim.springbootproject?dialect=org.hibernate.dialect.MySQLDialect&hibernate.physical_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy&hibernate.implicit_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy"
    args "--username=${config.spring.datasource.username}"
    args "--password=${config.spring.datasource.password}"
    args "--url=${config.spring.datasource.url}"
    args "--driver=${config.spring.datasource['driver-class-name']}"
    args "diffChangeLog"
}


Telegram notification from Jenkins

The following Groovy Postbuild script can be used to send build notifications to Telegram via a Telegram bot account. Note that this only uses Telegram's HTTP API to push messages, so you can't interact with Jenkins via Telegram.

The message will include JaCoCo coverage information if this script is placed after the JaCoCo post-build reporting step.

Use the following, dropping in the appropriate values in the first few lines:

// Drop in your own bot token and chat id here.
def botId = 'YOUR_BOT_TOKEN'
def chatId = 'YOUR_CHAT_ID'

def validStatuses = [hudson.model.Result.FAILURE, hudson.model.Result.UNSTABLE, hudson.model.Result.SUCCESS]

// Current build status and job name from the Groovy Postbuild 'manager' object.
def result = manager.build.result
def name = manager.build.project.name

def coverage = ''

// Look for the JaCoCo coverage action among the build's actions.
def action = manager.build.actions.find { it.getUrlName() == "jacoco" }
if (action != null) {
    def percentage = action.getLineCoverage().getPercentageFloat()
    def previous = action.getPreviousResult()
    if (previous) {
        def delta = percentage - previous.getLineCoverage().getPercentageFloat()
        coverage = String.format("Coverage at %.02f%% (%+.02f)", percentage, delta)
    } else {
        coverage = String.format("Coverage at %.02f%%", percentage)
    }
} else {
    manager.listener.logger.println "No coverage info available"
}

if (validStatuses.contains(result)) {
    def urlText = URLEncoder.encode("$result building $name $coverage", "utf-8")
    println new URL("https://api.telegram.org/bot$botId/sendMessage?chat_id=$chatId&text=$urlText").getText()
}



Gradle version in Jenkins build

Out of the box Jenkins can extract metadata from a Maven build for use as environment variables in the build steps, but it doesn't have the same support for Gradle. This can be achieved using the EnvInject plugin and a custom task in your build.gradle file.

The first step is to set up a task in build.gradle that will print out the project name and version in a format that can be used directly in a properties file.

task projectDetails {
    doLast {
        // Print the details in Java properties format.
        println "PROJECT_NAME=${project.name}"
        println "PROJECT_VERSION=${project.version}"
    }
}
You can test this works by running ./gradlew -q projectDetails and you should see something like this (with your own project's name and version, of course):

PROJECT_NAME=my-project
PROJECT_VERSION=1.0.0-SNAPSHOT

Now, to get these values into the Jenkins build environment we need the EnvInject plugin. The trick here is that the EnvInject plugin doesn't allow environment variables to be set from scripts (or at least not in any obvious way), so we do this in a two-step process. First, select the Build Environment option called Inject environment variables to the build process. Then in its script contents place the code

./gradlew -q projectDetails > version.properties

This will place the environment variables into a properties file (version.properties is an arbitrary name here – any path will do) that we'll read in during the next step.

Next, add a build step called Inject environment variables at the top of the build steps, and supply the file name from above (version.properties in this example) as the properties file path.

Now you can use $PROJECT_NAME and $PROJECT_VERSION in your build steps.
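
For example, a simple shell build step could then report them (illustrative):

echo "Building $PROJECT_NAME version $PROJECT_VERSION"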

Note: the reason we can't do all this in one step in the environment setup is that the properties file is read before the script contents run, and the script contents can't affect the environment. Perhaps a Groovy script could read the values and return the environment variables, but that is for someone else to figure out.


The strange beast of PyTZ and datetime

The crux of this post: never use datetime.replace(tzinfo=...) when working with PyTZ; use tz.localize(...) instead, otherwise you'll end up with some very strange times.

The PyTZ docs do mention this helper method as a way to fix incorrect conversions across timezones, but out of the box PyTZ timezones look odd. Consider this simple code that tries both the native datetime.replace approach and the localize approach:

import pytz
import datetime

lunchtime = datetime.datetime(2014, 11, 12, 12, 30)
print 'lunchtime =', lunchtime

local_tz = pytz.timezone('Australia/Brisbane')
print 'local_tz =', repr(local_tz)

lunchtime_local = lunchtime.replace(tzinfo=local_tz)
print 'lunchtime_local =', lunchtime_local
print 'lunchtime_local to UTC =', lunchtime_local.astimezone(pytz.utc)

lunchtime_localize = local_tz.localize(lunchtime)
print 'lunchtime_localize =', lunchtime_localize
print 'lunchtime_localize to UTC =', lunchtime_localize.astimezone(pytz.utc)

The output is:

lunchtime = 2014-11-12 12:30:00
local_tz = <DstTzInfo 'Australia/Brisbane' LMT+10:12:00 STD>
lunchtime_local = 2014-11-12 12:30:00+10:12
lunchtime_local to UTC = 2014-11-12 02:18:00+00:00
lunchtime_localize = 2014-11-12 12:30:00+10:00
lunchtime_localize to UTC = 2014-11-12 02:30:00+00:00

Off the bat, notice that the representation of the PyTZ timezone is strange: "LMT+10:12:00". For a start, what is LMT? And why is there an extra 12 minutes in the offset? LMT is Local Mean Time – a PyTZ timezone object defaults to the earliest entry in its historical data until it is localized to an actual date, which for Brisbane is the pre-standardisation offset of +10:12. What you would expect is something like AEST+10:00:00, i.e., the timezone abbreviation AEST and an offset of 10 hours.

When you apply the timezone naively with replace() you end up with that incorrect offset, and when you convert across to UTC the answer is wrong. But when you attach the timezone using the localize method you get the correct offset, and the UTC conversion comes out right.
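
To make the takeaway concrete, here is a minimal helper following the safe pattern (the timezone name is just the example used above):

import datetime
import pytz

def to_utc(naive_dt, tz_name='Australia/Brisbane'):
    # Attach the local timezone safely with localize(), then convert to UTC.
    local_tz = pytz.timezone(tz_name)
    return local_tz.localize(naive_dt).astimezone(pytz.utc)

print to_utc(datetime.datetime(2014, 11, 12, 12, 30))
# 2014-11-12 02:30:00+00:00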

Certainly something to be aware of.


Null and JPA2 Native Query in MSSQL

This post covers a slight edge case that I encountered a couple of months ago, so hopefully I get all the details right from memory.

Essentially, I had a JPA2 project using Hibernate 3.6.10 as the ORM. The project required some native SQL for dynamic table creation, so I would call Query q = em.createNativeQuery(sql); and then proceed to call q.setParameter(...). This worked fine both for setting columns to a value and for setting them to null, at least on H2 and MySQL. However, if you tried to set a column to null on SQL Server you'd get the following:

java.sql.SQLException: Operand type clash: varbinary is incompatible with float
	at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(
	at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(
	at net.sourceforge.jtds.jdbc.TdsCore.nextToken(
	at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(
	at net.sourceforge.jtds.jdbc.JtdsStatement.processResults(
	at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQL(
	at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeUpdate(
	at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(
	at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(
	at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(
	at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(
	at org.hibernate.impl.SessionImpl.executeNativeUpdate(
	at org.hibernate.impl.SQLQueryImpl.executeUpdate(
	at org.hibernate.ejb.QueryImpl.internalExecuteUpdate(
	at org.hibernate.ejb.AbstractQueryImpl.executeUpdate(

It was very curious where the varbinary was coming from (float is the correct type for the column). What happens is this: in JPA2 there is just one call to set a parameter, q.setParameter(...), unlike plain JDBC, which has a specific setNull(position, type) method that lets you specify the underlying column type. To get around this the JPA2 provider has to either know what the column types are, which is fine if you are using JPA-mapped entities, or fall back to a generic type. In the case of Hibernate it uses setParameter( position, val, Hibernate.SERIALIZABLE ). Serializable maps to varbinary and, even though the value is null, this cannot be coerced to a float by SQL Server. See the conversions chart about a third of the way down this page.
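
To make the failure mode concrete, here is a hypothetical version of the kind of call that triggers the error – the table and column names are made up, but the shape is the same:

import javax.persistence.EntityManager;
import javax.persistence.Query;

// Hypothetical example: a dynamically created table "measurements"
// with a float column "value".
public class NullParameterExample {
    static void clearValue(EntityManager em, long id) {
        Query q = em.createNativeQuery(
                "UPDATE measurements SET value = ? WHERE id = ?");
        q.setParameter(1, null);  // Hibernate binds null as SERIALIZABLE -> varbinary
        q.setParameter(2, id);
        q.executeUpdate();        // fails on SQL Server: varbinary vs float
    }
}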

For me, the solution was to unwrap the Hibernate Session object from the entity manager, and use Hibernate's native query interface, which allows you to specify the underlying column types.

SQLQuery q = em.unwrap(Session.class).createSQLQuery(sql);
q.setParameter(1, null, Hibernate.DOUBLE);

Unless there is some way to supply column type mappings to the JPA2 provider that I haven't found, I see this as a significant shortcoming of JPA2's native query interface.


Saltstack: Passing objects to templates

Quick one. When you pass a variable like this to a template through the context/defaults parameter it is interpreted as a literal string:

tomcat_server_xml:          # state ID – the name here is illustrative
  file:
    - managed
    - name: /opt/tomcat/conf/server.xml
    - template: jinja
    - source: salt://tomcat/files/server.xml.tmpl
    - context:
        deploy_conf: deploy_conf

Which means that you end up with errors like this:

Unable to manage file: Jinja variable 'unicode object' has no attribute 'control_port'

To pass the object itself through, you need to wrap it in braces:

tomcat_server_xml:          # state ID – the name here is illustrative
  file:
    - managed
    - name: /opt/tomcat/conf/server.xml
    - template: jinja
    - source: salt://tomcat/files/server.xml.tmpl
    - context:
        deploy_conf: {{ deploy_conf }}
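
For context, the error above comes from the template dereferencing attributes of the object, along these lines in server.xml.tmpl (illustrative):

<Server port="{{ deploy_conf.control_port }}" shutdown="SHUTDOWN">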

Using MapMessage with ActiveMQ with a Python consumer

Out of the box a STOMP consumer on an ActiveMQ broker will be able to receive TextMessages, but MapMessages will arrive without content. This is because we need to tell ActiveMQ to transform the message for us, which we can do in the subscription setup, in a similar way to how this thread discusses sending MapMessages from the Python end. Unfortunately there is a little bit of manual handling, because the best the client library can do is deliver a string, so you'll have to handle the deserialisation yourself (a sketch of this follows the listing below).

The following example uses JSON as the encoding. If you change the transformation to jms-map-xml you can get the encoding as XML.

import time

import stomp

class SimpleListener(object):
    def on_error(self, headers, message):
        print('received an error %s' % message)

    def on_message(self, headers, message):
        print('received a message %s (%s)' % (message, headers))

def listen():
    con = stomp.Connection(host_and_ports=[('localhost', 61613)])
    con.set_listener('', SimpleListener())
    con.start()
    con.connect(wait=True)
    # The transformation header asks ActiveMQ to convert MapMessages to JSON.
    con.subscribe(destination='/queue/test', id=1, ack='auto',
                  headers={'transformation': 'jms-map-json'})

if __name__ == '__main__':
    listen()
    while True:  # keep the connection alive to receive messages
        time.sleep(1)

There is no simple way to automatically detect the encoding, so we rely on the data travelling through a given topic or queue to be in a consistent format. Also, the JSON encoding is a little strange, but not incomprehensible.
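
As a sketch of the manual deserialisation, here is a listener variant that decodes the body with the json module – the exact nesting of the decoded structure depends on ActiveMQ's transformer, so treat any further unpacking as something to inspect for your own messages:

import json

class JsonMapListener(object):
    def on_error(self, headers, message):
        print('received an error %s' % message)

    def on_message(self, headers, message):
        # The broker delivers the MapMessage body as a JSON string.
        body = json.loads(message)
        print('received map message %s' % body)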

Broker config, a sender example, and the Python sample code are available here.


Simple type checking using property()

Python usually relies on duck typing for type safety, but from time to time it can be handy to enforce some type checking, particularly when new users are going to be using your objects. The following are some utility functions for applying type checking to class properties, using property() on new-style classes.

def deleter(attr):
    """Deleter closure, used to remove the inner variable"""
    def deleter_real(self):
        return delattr(self, attr)
    return deleter_real

def getter(attr):
    """Getter closure, used to simply return the inner variable"""
    def getter_real(self):
        return getattr(self, attr)
    return getter_real

def setter(attr, valid_types):
    """Setter closure, used to do type checking before storing var"""
    def setter_real(self, var):
        if not isinstance(var, valid_types): raise TypeError("Not of required type: "+str(valid_types))
        setattr(self, attr, var)
    return setter_real

def typed(attr, valid_types, docs=""):
    """Wrapper around property() so that we can easily apply type checking
    to properties"""
    return property(getter(attr), setter(attr, valid_types), deleter(attr), docs)

# Example class
class A(object):
    a = typed("_a", int)

# Testing output
a1 = A()
a1.a = 1
print "Got stored value = " + str(a1.a)

a1.a = "1"

The results are:

$ python 
Got stored value = 1
Traceback (most recent call last):
  File "", line 28, in <module>
    a1.a = "1"
  File "", line 11, in setter_real
    if not isinstance(var, valid_types): raise TypeError("Not of required type: "+str(valid_types))
TypeError: Not of required type: <type 'int'>
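
Since valid_types is handed straight to isinstance(), a tuple of types (and the optional docstring) works as well – an illustrative extension of the example above:

class B(object):
    b = typed("_b", (int, float), "a numeric property")

b1 = B()
b1.b = 2.5      # accepted: float is one of the allowed types
print "Got stored value = " + str(b1.b)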

Active Directory on EC2/VPC – Using Elastic IP in DNS

The basic use case is this: we want an Active Directory server running in an AWS VPC that can serve machines within the VPC and in other locations. The AD DC has an Elastic IP so that external entities can reach it, specifically for DNS. However, because of the way Elastic IPs work, the Windows network stack only sees the private IP from the VPC range, and so the dynamic updating of the DC's DNS entries results in all the address records pointing to this private IP.

What is happening here is that the NetBIOS stack is doing its routine DNS updates, and it is pulling the private IP from the network stack. This isn't a NIC dynamic DNS update as you might expect. You can, however, override this behaviour by following the instructions here, which require you to create a registry entry listing the IP the server should publish:

Registry Value: PublishAddresses
Registry Value Type: REG_MULTI_SZ
Registry Value Data: <the server's Elastic IP address>

This could be baked into a user data startup script if required across multiple server images.

Note that this is a more advanced configuration change, so document it in case it causes issues in the future.


Reloading Tiles2 Config in Spring 3.x

When you are using Tiles for layout composition with Spring you configure it as a view resolver by adding something like this to the applicationContext.xml (the definitions path below is illustrative):

    <!-- Configure the Tiles templates -->
    <bean id="tilesConfigurer"
        class="org.springframework.web.servlet.view.tiles2.TilesConfigurer">
        <property name="definitions">
            <list>
                <value>/WEB-INF/tiles.xml</value>
            </list>
        </property>
        <property name="preparerFactoryClass"
            value="org.springframework.web.servlet.view.tiles2.SpringBeanPreparerFactory" />
    </bean>

    <!-- Resolve views using Tiles -->
    <bean id="tilesViewResolver"
        class="org.springframework.web.servlet.view.UrlBasedViewResolver">
        <property name="viewClass"
            value="org.springframework.web.servlet.view.tiles2.TilesView" />
    </bean>

This will read the tiles.xml on startup. If you want it to refresh when the file changes you either need to add the Tiles filter, or more elegantly just add this context-param to the web.xml

<context-param>
    <param-name>org.apache.tiles.definition.dao.LocaleUrlDefinitionDAO.CHECK_REFRESH</param-name>
    <param-value>true</param-value>
</context-param>

Even better for testing environments, you can put the declaration in the Tomcat context.xml file instead, so you can switch it on and off on a per-deployment basis:

    <Parameter name="org.apache.tiles.definition.dao.LocaleUrlDefinitionDAO.CHECK_REFRESH"
        value="true" override="false" />

