Gradle version in Jenkins build

Out of the box, Jenkins can extract metadata from a Maven build for use as environment variables in the build steps, but it doesn’t have the same support for Gradle. The same thing can be achieved with the EnvInject plugin and a custom task in your build.gradle file.

The first step is to set up a task in build.gradle that prints the project name and version in a format that can be used directly as a properties file.


task projectDetails {
    doLast {
        println "PROJECT_NAME=${project.name}"
        println "PROJECT_VERSION=${project.version}"
    }
}

You can check that this works by running ./gradlew -q projectDetails; you should see something like:


PROJECT_NAME=MyProject
PROJECT_VERSION=0.2

Now, to get these values into the Jenkins build environment we need the EnvInject plugin. The catch is that EnvInject doesn’t allow environment variables to be set from scripts (or at least not in any obvious way), so we do this in two steps. First, select the Build Environment option called Inject environment variables to the build process. Then in its script content place the code

./gradlew -q projectDetails > build-env.properties

This writes the variables into a properties file that we’ll read in during the next step.

Next, add a build step called Inject environment variables at the top of the build steps, and supply build-env.properties as the properties file path.

Now you can use $PROJECT_NAME and $PROJECT_VERSION in your build steps.
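
For instance, an Execute shell build step could then use the values like this (the artifact path is only an illustration, and assumes the default Gradle jar naming):

echo "Building ${PROJECT_NAME} version ${PROJECT_VERSION}"
./gradlew clean build
cp "build/libs/${PROJECT_NAME}-${PROJECT_VERSION}.jar" /tmp/artifacts/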

Note, the reason we can’t do all of this in one step in the Build Environment section is that the properties file is read prior to running the script content, and the script content can’t affect the environment. Perhaps a Groovy script could read the values and return the environment variables, but that is for someone else to figure out.


The strange beast of PyTZ and datetime

The crux of this post: never use datetime.replace(tzinfo=...) when working with PyTZ; use tz.localize(...) instead, otherwise you’ll end up with some very strange times.

The PyTZ docs do mention this helper method as the way to fix incorrect conversions across timezones, but out of the box PyTZ timezones seem odd. Consider this simple code, which takes both the plain datetime.replace approach and the localize approach:

import pytz
import datetime

lunchtime = datetime.datetime(2014, 11, 12, 12, 30)
print 'lunchtime =', lunchtime

local_tz = pytz.timezone('Australia/Brisbane')
print 'local_tz =', repr(local_tz)

lunchtime_local = lunchtime.replace(tzinfo=local_tz)
print 'lunchtime_local =', lunchtime_local
print 'lunchtime_local to UTC =', lunchtime_local.astimezone(pytz.utc)

lunchtime_localize = local_tz.localize(lunchtime)
print 'lunchtime_localize =', lunchtime_localize
print 'lunchtime_localize to UTC =', lunchtime_localize.astimezone(pytz.utc)

The output is:

lunchtime = 2014-11-12 12:30:00
local_tz = <DstTzInfo 'Australia/Brisbane' LMT+10:12:00 STD>
lunchtime_local = 2014-11-12 12:30:00+10:12
lunchtime_local to UTC = 2014-11-12 02:18:00+00:00
lunchtime_localize = 2014-11-12 12:30:00+10:00
lunchtime_localize to UTC = 2014-11-12 02:30:00+00:00

Right off the bat, notice that the representation of the PyTZ timezone is strange: “LMT+10:12:00”. For a start, what is LMT? And why is there an extra 12 minutes in the offset? The expected form would be something like AEST+10:00:00, i.e. the timezone abbreviation AEST with an offset of 10 hours. (LMT is Local Mean Time, the zone’s earliest historical offset, which is what a pytz timezone object defaults to before it is attached to a specific datetime.)

When you apply the timezone naively with replace you end up with this incorrect offset, and when you convert across to UTC the answer is wrong. When you apply the timezone using the localize method you get the correct offset.
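
Going the other way is also safe with astimezone(), since a UTC instant is unambiguous — a quick sketch continuing from the code above:

utc_lunchtime = pytz.utc.localize(datetime.datetime(2014, 11, 12, 2, 30))
print 'back to local =', utc_lunchtime.astimezone(local_tz)
# back to local = 2014-11-12 12:30:00+10:00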

Certainly something to be aware of.


Null and JPA2 Native Query in MSSQL

This post covers a bit of an edge-case scenario that I encountered a couple of months ago, so hopefully I get all the details right the first time.

Essentially, I had a JPA2 project using Hibernate 3.6.10 as the ORM. This project needed some native SQL for dynamic table creation, so I would call Query q = em.createNativeQuery(sql); and then proceed to call q.setParameter(...). This worked fine both for setting columns to a value and for setting them to null, at least on H2 and MySQL. However, if you tried to set a column to null on SQLServer you’d get the following:

java.sql.SQLException: Operand type clash: varbinary is incompatible with float
	at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
	at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
	at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
	at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
	at net.sourceforge.jtds.jdbc.JtdsStatement.processResults(JtdsStatement.java:613)
	at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQL(JtdsStatement.java:572)
	at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeUpdate(JtdsPreparedStatement.java:727)
	at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
	at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
	at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
	at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:210)
	at org.hibernate.impl.SessionImpl.executeNativeUpdate(SessionImpl.java:1310)
	at org.hibernate.impl.SQLQueryImpl.executeUpdate(SQLQueryImpl.java:396)
	at org.hibernate.ejb.QueryImpl.internalExecuteUpdate(QueryImpl.java:188)
	at org.hibernate.ejb.AbstractQueryImpl.executeUpdate(AbstractQueryImpl.java:99)

It was very curious where the varbinary was coming from (float is the correct type for the column). What happens is this: in JPA2 there is just one call to set a parameter, q.setParameter(...), unlike plain JDBC which has a specific setNull(position, type) method that lets you specify the underlying column type. To work around this the JPA2 provider has to either know what the column types are, which is fine if you are using JPA-mapped entities, or fall back to a generic type. In Hibernate’s case it uses setParameter(position, val, Hibernate.SERIALIZABLE). Serializable maps to varbinary and, even though the value is null, SQLServer cannot coerce that to a float. See the conversions chart about a third of the way down this page.
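
A minimal sketch of the failing pattern, assuming a table with a float column (the table and column names are mine):

// JPA2 has no way to say which type the null should be bound as, so Hibernate
// binds it as SERIALIZABLE (varbinary), which SQLServer refuses to coerce to float.
Query q = em.createNativeQuery("UPDATE readings SET reading = ? WHERE id = 42");
q.setParameter(1, null);   // bound as varbinary -> "Operand type clash" on SQLServer
q.executeUpdate();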

For me, the solution was to unwrap the Hibernate Session object from the entity manager and use Hibernate’s native query interface, which allows you to specify the underlying column types.

SQLQuery q = em.unwrap(Session.class).createSQLQuery(sql);
q.setParameter(1, null, Hibernate.DOUBLE);

Unless there is some way to supply column type mappings to the JPA2 native query that I haven’t found, I see this as a significant shortcoming of JPA2’s native query interface.


Saltstack: Passing objects to templates

Quick one. When you pass a variable like this to a template through the context/defaults parameter, it is interpreted as a literal string:

server_xml:
  file:
    - managed
    - name: /opt/tomcat/conf/server.xml
    - template: jinja
    - source: salt://tomcat/files/server.xml.tmpl
    - context:
        deploy_conf: deploy_conf

Which means that you end up with errors like this:

Unable to manage file: Jinja variable 'unicode object' has no attribute 'control_port'

To pass the object itself in, you need to put it in braces:

server_xml:
  file:
    - managed
    - name: /opt/tomcat/conf/server.xml
    - template: jinja
    - source: salt://tomcat/files/server.xml.tmpl
    - context:
        deploy_conf: {{ deploy_conf }}
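
For reference, the template then treats deploy_conf as an object — a sketch of the relevant line of server.xml.tmpl (the control_port key is taken from the error above):

<Server port="{{ deploy_conf.control_port }}" shutdown="SHUTDOWN">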

Using MapMessage with ActiveMQ and a Python Stomp.py consumer

Out of the box a STOMP consumer on an ActiveMQ broker will be able to receive TextMessages, but MapMessages will arrive without content. This is because we need to tell ActiveMQ to apply a message transformation, which we can do in the subscription setup, in a similar way to how this thread discusses sending MapMessages from the Python end. Unfortunately there is a little bit of manual handling, because the best the client library can do is deliver a string, so you’ll have to handle the deserialisation yourself.

The following example uses JSON as the encoding. If you change the transformation to jms-map-xml you can get the encoding as XML.

import time

import stomp

class SimpleListener(object):
    def on_error(self, headers, message):
        print('received an error %s' % message)

    def on_message(self, headers, message):
        print('received a message %s (%s)' % (message, headers))

def listen():
    con = stomp.Connection(host_and_ports=[('localhost', 61613)])
    con.set_listener('', SimpleListener())
    con.start()
    con.connect()

    # ask the broker to transform MapMessages to JSON before delivery
    con.subscribe(destination='/queue/test', id=1, ack='auto',
                  headers={'transformation': 'jms-map-json'})

if __name__ == '__main__':
    listen()
    time.sleep(60)  # keep the connection alive long enough to receive messages

There is no simple way to automatically detect the encoding, so we rely on the data travelling through a given topic or queue being in a consistent format. Also, the JSON encoding is a little strange, but not incomprehensible.
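
A minimal sketch of handling the body, assuming only that the transformed MapMessage arrives as a JSON string (the exact structure inside it depends on the transformer):

import json

class MapListener(object):
    def on_error(self, headers, message):
        print('received an error %s' % message)

    def on_message(self, headers, message):
        # the transformation delivers the MapMessage body as a JSON string,
        # so decode it by hand before using it
        payload = json.loads(message)
        print('received map payload %s' % payload)

Swap it in with con.set_listener('', MapListener()) in the listen() function above.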

Broker config, a sender example, and the Python sample code are available here.


Simple type checking using property()

Python usually relies on duck typing for type safety, but from time to time it can be handy to enforce some type checking, particularly when new users are going to be using your objects. The following is a handful of utility functions for applying type checking to class properties, using new-style classes and the property() built-in.

def deleter(attr):
    """Deleter closure, used to remove the inner variable"""
    def deleter_real(self):
        return delattr(self, attr)
    return deleter_real

def getter(attr):
    """Getter closure, used to simply return the inner variable"""
    def getter_real(self):
        return getattr(self, attr)
    return getter_real

def setter(attr, valid_types):
    """Setter closure, used to do type checking before storing var"""
    def setter_real(self, var):
        if not isinstance(var, valid_types): raise TypeError("Not of required type: "+str(valid_types))
        setattr(self,attr,var)
    return setter_real

def typed(attr, valid_types, docs=""):
    """Wrapper around property() so that we can easily apply type checking
    to properties"""
    return property(getter(attr), setter(attr, valid_types), deleter(attr), docs)

# Example class
class A(object):
    a = typed("_a", int)

# Testing output
a1 = A()
a1.a = 1
print "Got stored value = " + str(a1.a)

a1.a = "1"

The results are:

$ python tmp.py 
Got stored value = 1
Traceback (most recent call last):
  File "tmp.py", line 28, in <module>
    a1.a = "1"
  File "tmp.py", line 11, in setter_real
    if not isinstance(var, valid_types): raise TypeError("Not of required type: "+str(valid_types))
TypeError: Not of required type: <type 'int'>
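
Since isinstance() accepts a tuple of types and property() takes a docstring, the same helper extends naturally — a small sketch:

class B(object):
    b = typed("_b", (int, float), "numeric field accepting int or float")

b1 = B()
b1.b = 2.5     # accepted
b1.b = "2.5"   # raises TypeError: Not of required type: (<type 'int'>, <type 'float'>)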

Active Directory on EC2/VPC – Using Elastic IP in DNS

The basic use case is this: we want an Active Directory server running in an AWS VPC that can serve machines within the VPC and in other locations. The AD DC has an Elastic IP so that external entities can access it, specifically for DNS. However, due to the way Elastic IPs work, the Windows network stack sees its IP as being in the 10.0.0.0/16 range of the VPC, and so the dynamic updating of the DC’s DNS entries results in all the address records pointing to this private IP.

What is happening here is that the NetBIOS stack is doing its routine updates of the DNS and pulling the private IP from the network stack; this isn’t the NIC dynamic DNS update you might expect. You can disable this behaviour by following the instructions here, which require you to create a registry entry containing the IP the server should publish:

Registry Key: HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters
Registry Value: PublishAddresses
Registry Value Type: REG_MULTI_SZ
Registry Value Data: <the Elastic IP of the server>

This could be baked into a user data startup script if it is needed for multiple server images.
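
For example, the change could be applied with reg.exe — a sketch, with the Elastic IP left as a placeholder:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters" /v PublishAddresses /t REG_MULTI_SZ /d <elastic-ip> /f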

Note that this is a more advanced configuration change, so document it in case it causes issues in the future.


Reloading Tiles2 Config in Spring 3.x

When you are using Tiles for layout composition with Spring, you configure it as a view resolver by adding something like this to the applicationContext.xml:

    <!-- Configure the Tiles templates -->
    <bean id="tilesConfigurer"
        class="org.springframework.web.servlet.view.tiles2.TilesConfigurer">
        <property name="definitions">
            <list>
                <value>/WEB-INF/tiles.xml</value>
            </list>
        </property>
        <property name="preparerFactoryClass"
            value="org.springframework.web.servlet.view.tiles2.SpringBeanPreparerFactory" />
    </bean>
    <!-- Resolve views using Tiles -->
    <bean id="tilesViewResolver"
        class="org.springframework.web.servlet.view.UrlBasedViewResolver">
        <property name="viewClass"
            value="org.springframework.web.servlet.view.tiles2.TilesView" />
    </bean>

This will read tiles.xml on startup. If you want it to refresh when the file changes, you either need to add the Tiles filter or, more elegantly, just add this context-param to the web.xml:

<context-param>
    <param-name>org.apache.tiles.definition.dao.LocaleUrlDefinitionDAO.CHECK_REFRESH</param-name>
    <param-value>true</param-value>
</context-param>

Even better for testing environments, you can put the declaration in your context.xml file, so you can switch it on and off on a per-deployment basis:

    <Parameter name="org.apache.tiles.definition.dao.LocaleUrlDefinitionDAO.CHECK_REFRESH"
        value="true" override="false" />



Spring MVC Validation BindingResult

A quick note about using BindingResult to detect and report errors in a form. One gotcha that caught me was the need to set a name on the @ModelAttribute in order to properly relate the form:form commandName to the validation object. If you don’t set a name explicitly, Spring derives the model attribute name from the parameter’s type (“objectClass” here) rather than from the name you use in the JSP, so the binding result ends up stored under a different key and form:errors displays nothing.

i.e., this didn’t work because the JSP was bound to “myObject” while the model attribute and its BindingResult were stored under “objectClass”:

public String saveObject(ModelMap model,
            @Valid @ModelAttribute ObjectClass myObject, BindingResult result)

This did work:

public String saveObject(ModelMap model,
            @Valid @ModelAttribute("myObject") ObjectClass myObject, BindingResult result)

At the JSP end things looked like this, where “attr” is an attribute that is being checked:

<form:form commandName="myObject">
...
<form:errors path="attr" />
...
</form:form>
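
For completeness, the controller body typically checks the result and returns to the form view when validation fails — a minimal sketch (the view names are mine):

public String saveObject(ModelMap model,
            @Valid @ModelAttribute("myObject") ObjectClass myObject, BindingResult result) {
    if (result.hasErrors()) {
        // re-render the form so <form:errors> can display the messages
        return "objectForm";
    }
    // ... persist myObject ...
    return "redirect:/objects";
}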



Programmatically getting the Maven version of your project

It is often handy to be able to extract the Maven version of your project at run time, either for displaying in an about box, or in debugging information. One option is to read /META-INF/maven/${groupId}/${artifactId}/pom.properties. However, this file is only available in the packaged version of your project, so during development the method will fail.

The approach I’ve taken is to create a text (properties) file in the project resources and have Maven filter it, so the version number is injected at build time. The following snippets need to be configured.

The version file, such as /src/main/resources/version.prop

version=${project.version}

pom.xml, with resource filtering enabled for *.prop files

<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
      <includes>
        <include>**/*.prop</include>
      </includes>
    </resource>
  </resources>
</build>

Java method to extract the version

public String getAPIVersion() {
	String path = "/version.prop";
	InputStream stream = getClass().getResourceAsStream(path);
	if (stream == null) return "UNKNOWN";
	Properties props = new Properties();
	try {
		props.load(stream);
		stream.close();
		return (String)props.get("version");
	} catch (IOException e) {
		return "UNKNOWN";
	}
}
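
Calling it is then straightforward; assuming the method lives on a class called VersionInfo (the class name is mine):

String version = new VersionInfo().getAPIVersion();
System.out.println("Running version " + version);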

Note, I only filter *.prop files, because if you have any placeholders such as ${catalina.base} in your log4j.properties, filtering everything would expand those as well.
