Ideas, Solutions, Work in progress

and other things

Ubuntu and the Dell XPS 9560 Touchpad

I recently purchased a Dell XPS 15 9560 which I use for my day to day work, with Ubuntu 16.04 as the only operating system. It is a wonderful laptop and Ubuntu runs on it with virtually no issues. One issue, however, bothered me more than the others: touchpad sensitivity and palm detection. Whenever I tried to type a long piece of text I would invariably find myself typing a couple of words somewhere else on the screen.

The problem was that when I reached for letters in the middle of the keyboard, such as T or Y, the inside of my palm would touch the touchpad in the top corner. The touchpad perceived that as a tap and I’d continue typing wherever the mouse pointer was at the time.

I usually have a mouse plugged in and have set the touchpad to be disabled when a mouse is plugged in. I used these instructions from askubuntu to do that. It helped but I do quite a bit of work away from a desk and then I need the touchpad enabled.

After a bit of googling I found a couple of questions and answers on numerous forums and managed to piece things together from there. See the references at the bottom of the post.

To make it work you first have to work out what your touchpad is called internally. You do this by running:

xinput --list

which prints a tree of your input devices.

From this output you can see there are a number of touch input devices:

  • Virtual core XTEST pointer - a virtual device the X server’s XTEST extension uses to synthesize input events
  • ELAN Touchscreen
  • DLL07BE:01 06CB:7A13 Touchpad

Look for the one called ‘Touchpad’. You can now list all the configuration for this device by using xinput list-props. For the XPS it is:

xinput list-props "DLL07BE:01 06CB:7A13 Touchpad"

This yields a long list of properties and their values. Look for the following:

  • Synaptics Palm Detection
  • Synaptics Palm Dimensions
  • Synaptics Area

In my case these were set as follows:

Property Value Description
Synaptics Palm Detection 0 Palm detection is off
Synaptics Palm Dimensions 10, 200 The minimum width and minimum pressure for a touch to be treated as a palm
Synaptics Area 0, 0, 0, 0 The touchpad surface area in which touches are detected


Most of the references I found talk about changing the first two properties, Synaptics Palm Detection and Synaptics Palm Dimensions. However, changing those didn’t make a difference for me. The cursor still jumped around, because at the edge of the touchpad my palm looks like a finger no matter how small I made the palm detection setting. This is understandable since only a small part of my palm actually touches the touchpad surface while typing.

The setting which made the biggest difference for me was the last one, Synaptics Area. It manages the detection of touches at the edges of the touchpad. By changing the four values associated with Synaptics Area you can change the area of the touchpad that responds to touches.

Note that Synaptics Area only affects initial touches. The disabled areas still track movement if a touch is initiated in the active area.

The first value defines how far from the left of the touchpad edge touches are detected. Anything to the left of this value is not considered a touch. The second value sets how far to the right the active part of the touchpad stretches. The third sets how far from the top edge the active area starts and the fourth is how far down the active part stretches.

To configure these you first have to work out how large the touchpad is by running the following command:

grep -i range /var/log/Xorg.0.log
[     6.904] (--) synaptics: DLL07BE:01 06CB:7A13 Touchpad: x-axis range 0 - 1228 (res 12)
[     6.904] (--) synaptics: DLL07BE:01 06CB:7A13 Touchpad: y-axis range 0 - 928 (res 12)
[     6.904] (--) synaptics: DLL07BE:01 06CB:7A13 Touchpad: invalid pressure range.  defaulting to 0 - 255
[     6.904] (--) synaptics: DLL07BE:01 06CB:7A13 Touchpad: invalid finger width range.  defaulting to 0 - 15
[     7.028] (--) synaptics: SynPS/2 Synaptics TouchPad: x-axis range 1278 - 5664 (res 0)
[     7.028] (--) synaptics: SynPS/2 Synaptics TouchPad: y-axis range 1206 - 4646 (res 0)
[     7.028] (--) synaptics: SynPS/2 Synaptics TouchPad: pressure range 0 - 255
[     7.028] (--) synaptics: SynPS/2 Synaptics TouchPad: finger width range 0 - 15

From this you can see that the touchpad has a horizontal range of 0-1228 and a vertical range of 0-928. I don’t know what these numbers mean or measure, but I played around with different values a bit and found that, for me, the magic number is 70.

xinput set-prop "DLL07BE:01 06CB:7A13 Touchpad" "Synaptics Area" 70 1168 70 0
  • Start detecting touches 70 from the left
  • Stop detecting touches 1228-70 = 1168 from the left
  • Start detecting touches 70 from the top
  • Detect touches all the way down

This setup works perfectly for me without even changing the Synaptics Palm Dimensions. I can now type without worrying about my cursor jumping all over the place. The best part is that if you initiate a drag of the pointer in the active area of the touchpad, the touchpad will track your finger all the way to the edge, even in the ‘no touch’ zone.

To make the changes permanent, put them in a script and run it at login using the Startup Applications GUI.
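Such a login script might look like the following minimal sketch. The device name and area values are the ones worked out above for this XPS 9560, so substitute your own; the guard around `xinput` is my addition so the script is harmless outside an X session.

```shell
#!/bin/sh
# Apply the touchpad area fix at login.
# Device name and values are from this post's XPS 9560; adjust for your hardware.
TOUCHPAD="DLL07BE:01 06CB:7A13 Touchpad"
if [ -n "$DISPLAY" ] && command -v xinput >/dev/null 2>&1; then
    # Restrict the active area: 70 in from the left, right and top edges.
    xinput set-prop "$TOUCHPAD" "Synaptics Area" 70 1168 70 0
else
    echo "no X session or xinput missing; skipping touchpad setup" >&2
fi
```

Save it somewhere like ~/bin/touchpad-area.sh, make it executable, and add it as a startup application.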

References:

Octopress in a Docker Container

I use Octopress for my, albeit infrequent, blogging. I like the idea of statically generating the blog site with the posts and keeping it all in Git. I also enjoy the ability to quickly type up a post using markdown and then publish it. No need to log in to a site to write the post.

My biggest problem with Octopress is that it requires me to install and manage a Ruby environment, either by outright installing Ruby on my machine or through rbenv. This usually means, for me at least, a struggle to get all the gems successfully installed. Adding plugins often results in errors while installing gems, either due to dependencies between gem versions or some system level dependency that is not available.

These system level dependencies really bug me, so when I recently replaced my laptop I decided to keep my OS Ruby free by managing the Octopress Ruby environment and the system level dependencies in a Docker container rather than using rbenv.

TLDR; The impatient can find the Dockerfile at: https://github.com/dirkvanrensburg/octopress2-dockerfile

Tested environment

Thing Version
OS Ubuntu 16.04
Docker 17.03.1-ce
Octopress 2.0


Assumptions

I use Github Pages to publish and host my blog, and this post assumes that you do too. However, it should be useful even if you don’t want to use Github Pages.

Octopress

Now that you have a working container (see the Docker section below for how to build it), it is time to do the Octopress installation. You can either start with a fresh copy of Octopress (see the documentation) or with an existing Octopress blog. If you have an existing blog then you may have to tweak the Dockerfile a bit to add any system level dependencies your blog may have.

New Octopress blog

For a new blog you have a bit of a chicken and egg problem. You need to get the Octopress stuff, but you can’t run any Ruby dependent commands because Ruby is in the Docker container.

Clone Octopress and take note of the Gemfile in the root of the repository:

git clone git://github.com/imathis/octopress.git octopress
cd octopress

You’ll need the Gemfile in the root of the blog repository for building the Docker image later on.

Existing octopress blog

The following steps should get the blog repository in the correct state for generating and publishing.

  • If you don’t have a current copy of your blog then clone the repository from Github
  • Once you have cloned the repository change into the repository directory and change the working branch from master to source
  • Create the _deploy folder and change into that directory
  • Now link the _deploy directory with the master branch of your blog repository
    • git init
    • git remote add origin <githubrepourl>
    • git pull origin master
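
The steps above can be sketched end to end. The following is a self-contained demo that runs in a temporary directory and uses a local bare repository as a stand-in for the GitHub remote. All paths and names here are illustrative, and the source branch is created fresh since the stand-in repo only has master; in a real Octopress blog repository the source branch would already exist.

```shell
set -e
work=$(mktemp -d)
REPO="$work/origin.git"            # stand-in for your GitHub repository URL
git init --bare -q "$REPO"

# Seed a master branch, as a published blog repository would already have.
git clone -q "$REPO" "$work/seed" 2>/dev/null || true
(
  cd "$work/seed"
  git checkout -q -b master
  echo site > index.html
  git add . && git -c user.email=demo@example.com -c user.name=demo commit -qm init
  git push -q origin master
)
git -C "$REPO" symbolic-ref HEAD refs/heads/master

# The steps from the post: clone, switch to a source branch, link _deploy to master.
git clone -q "$REPO" "$work/blog" 2>/dev/null || true
cd "$work/blog"
git checkout -q -b source          # blog source lives on the source branch
mkdir _deploy && cd _deploy        # generated site goes here, tracking master
git init -q
git remote add origin "$REPO"
git pull -q origin master
```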

You’ll need the Gemfile in the root of the blog repository for building the Docker image later on.

Docker

Docker gives you the ability to create a lightweight container for processes and their dependencies. You create a container from some published base image and then install the necessary packages as you would on a normal machine. You can then run the container, which will start the required process and run it without polluting the host operating system with unnecessary dependencies.

Dockerfile

A Dockerfile is a file which tells the Docker daemon what you want in your container. How to write a Dockerfile and what you can do with it is extensively documented here.

You basically start with a FROM statement telling Docker which base image to start from, and then tell it which packages to install. In this case there are a number of dependencies, as seen here:

FROM ubuntu:16.04

RUN apt-get update -y && apt-get -y install \
  sudo \
  gcc make \
  git \
  vim less \
  curl \
  ruby ruby-dev \
  python2.7 python-pip python-dev

These packages should be enough to get going with Octopress. The next step is to set up a user so that you don’t have to run the rake commands as root.

# Add blogger user
RUN adduser --disabled-password --gecos "" blogger && \
    echo "blogger ALL=(root) NOPASSWD:ALL" > /etc/sudoers

USER blogger

Next create the working folder /octopress and grant the blogger user permissions by changing ownership of the folders it will need to modify in the future.

# Directory for the blog files
RUN sudo mkdir /octopress
WORKDIR /octopress

# Set permissions so blogger can install gems
RUN sudo chown -Rv blogger:blogger /octopress
RUN sudo chown -Rv blogger:blogger /var/lib/gems
RUN sudo chown -Rv blogger:blogger /usr/local/bin

Running rake preview in your octopress blog folder will generate the blog and serve it on port 4000. In order to access the blog from outside the container you need to tell Docker to expose port 4000 for connections.

# Expose port 4000 so we can preview the blog
EXPOSE 4000

Next it adds the Gemfile. The contents of this file are specific to your blog, so copy it from the blog repository as mentioned earlier. The easiest approach is to put your Gemfile in the same folder as the Dockerfile, since the Docker ADD instruction is relative to the directory you build from.

The next section will add the Gemfile to the Docker image and install the bundles.

# Add the Gemfile and install the gems
ADD Gemfile /octopress/Gemfile
RUN gem install bundler
RUN bundle install

Then it adds your gitconfig to the image. This is necessary to provide the same git experience inside and outside of the container. As with the Gemfile, you’ll have to copy your .gitconfig file to the same folder as the Dockerfile.

1
ADD .gitconfig /home/blogger/.gitconfig

And that is it. See the repository mentioned above for the complete Dockerfile.

Build the docker image

Everything is ready so you can now build the Docker image.

  • Get the Dockerfile from Github git clone git@github.com:dirkvanrensburg/octopress2-dockerfile.git octopress-dockerfile
  • Then copy the Gemfile from the blog repository into the same folder.
  • Then copy your .gitconfig file from your home folder into the same folder
  • Then in the folder containing your Dockerfile run the following:
docker build . -t blog/octopress
Flag Description
-t Tag the built image with that name. Later you can use the tag to start a container.


This command instructs Docker to create a container image using the instructions in the Dockerfile. If all goes well you should see a message saying something like this: Successfully built b847ccd963fa

Test the container

To test the Docker image, start the container using the following command:

docker run --rm -ti blog/octopress /bin/bash
Flag Description
--rm Clean up by removing the container and its filesystem when it exits.
-ti Tells docker to create a pseudo TTY for the container and to keep standard in open so you can send keystrokes to the container.


The container will start up and your terminal should be attached and in the octopress folder. You should see something like:

blogger@eba0e6a691ef:/octopress$

The exit command will exit the container and clean up.

Rakefile

In order to preview the blog while working on a post, you need to change the Rakefile in the root of the blog repository to bind the preview server to the IP wildcard 0.0.0.0. This makes it possible to access the blog preview in your browser at http://localhost:4000

Change

  rackupPid = Process.spawn("rackup --port #{server_port}")

To

  rackupPid = Process.spawn("rackup -o 0.0.0.0 --port #{server_port}")

Launch the container

The next step is to launch the container. Execute this command from anywhere, replacing the paths to your blog and .ssh keys:

docker run -p 4000:4000 --rm --volume <absolute-path-to-blog-repository>:/octopress --volume <absolute-path-to-user-home>/.ssh:/home/blogger/.ssh -ti blog/octopress /bin/bash
Flag Description
docker run Instructs docker to run a previously built image, blog/octopress in this case
-p 4000:4000 Instructs docker to expose the container’s internal port to the host interfaces so that the host can send data to a process listening on that port in the container
--rm Tells docker to remove the container and its file system when it exits. We don’t need to keep the container around since the blog source is external to the image
--volume Mounts folders on the host system into the container. This allows processes in the container to access the files as if they were local to the container. In this case two folders are mounted: the blog repository as /octopress and the local .ssh folder of the host user as /home/blogger/.ssh. The ssh keys are used by git to authenticate and encrypt traffic to and from github. Feel free to change this so that only the github keys are available in the container.
-ti Tells docker to create a pseudo TTY for the container and to keep standard in open so you can send keystrokes to the container.
blog/octopress The name of the image to run. This is the image built earlier using docker build
/bin/bash The command to run when starting the container.


Docker will start the container, create a pseudo tty, open standard in and run /bin/bash so that your terminal is now effectively inside the container.

It is handy to place the command above in a script in the ~/bin folder of your user. For example create a file called ~/bin/blog and place the command in there. Then you can run blog from any terminal to immediately start and access the container.
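As a sketch, the launcher could be created like this. The blog path is a placeholder you must point at your own checkout, and the docker command inside the heredoc is the one from this section:

```shell
#!/bin/sh
# Write an executable ~/bin/blog launcher for the container (illustrative sketch).
mkdir -p "$HOME/bin"
cat > "$HOME/bin/blog" <<'EOF'
#!/bin/sh
# Placeholder: point BLOG at your blog repository checkout.
BLOG="$HOME/src/myblog"
exec docker run -p 4000:4000 --rm \
  --volume "$BLOG":/octopress \
  --volume "$HOME/.ssh":/home/blogger/.ssh \
  -ti blog/octopress /bin/bash
EOF
chmod +x "$HOME/bin/blog"
```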

Do blog stuff

Now run the Octopress blogging commands as you would if Ruby was installed on your local machine.

Install theme

If you created a new Octopress blog then you now have to install your theme. The following installs the default Octopress theme:

rake install

New post

rake new_post

This will ask for the post name and create the post file in source/_posts.

Preview

To preview blog posts:

rake preview

Then in your browser navigate to http://localhost:4000 and you should see the preview of your blog.

Generate

rake generate

This will generate the blog into the public folder.

Publish

rake deploy

This will commit the generated blog and push it to github.

Hazelcast, JCache and Spring Boot

Hazelcast is an in-memory data grid which enables data sharing between nodes in a server cluster, along with a full set of other data grid features. It also implements the JCache (JSR107) caching standard, so it is ideal for building a data aggregation service.

In this post I’ll go through the motions of adding Hazelcast to a Spring Boot REST application and resolving the issues until we have a functioning REST service with its response cached in Hazelcast via JCache annotations.

TLDR; I suggest reading the post to understand the eventual solution, but if you are impatient see the solution on github:
* hazelcast-jcache option 1 and 2 and
* hazelcast-jcache option 3

UPDATE 1: It seems that this issue will be resolved soon due to the hard work of @snicoll over at Spring Boot and the Hazelcast community. See the issues:

UPDATE 2: The problem described in this post was fixed in Spring Boot release 1.5.3. Check this repository for a clean example based on Spring Boot 1.5.3. I am leaving the post here since it is still interesting due to the different ways the problem could be worked around.

Versions

Dependency Version
Spring Boot 1.5.1
Hazelcast 3.7.5


Spring boot REST

I am going to assume a working knowledge of building REST services using Spring Boot so I won’t be going into too much detail here. Building a REST service in Spring is really easy and a quick Google will bring up a couple of tutorials on the subject.

This post will build on top of a basic REST app found on github. If you clone that you should be able to follow along.

Adding Hazelcast

To add Hazelcast to an existing Spring Boot project is very easy. All you have to do is add a dependency on Hazelcast, provide Hazelcast configuration and start using it.

Step 1

For maven add the following dependencies to your project pom.xml file:

Hazelcast dependencies in pom file
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
</dependency>

<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-spring</artifactId>
    <version>${hazelcast.version}</version>
</dependency>

Step 2

To keep things simple for now, I left the Hazelcast configuration empty; Hazelcast will apply defaults for all the settings. You can either provide a hazelcast.xml file on the classpath (e.g. src/main/resources)

XML example of default Hazelcast configuration
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd"
       xmlns="http://www.hazelcast.com/schema/config"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

</hazelcast>

or provide a com.hazelcast.config.Config bean by means of Spring Java configuration like this:

Java example of default Hazelcast configuration
@Configuration
public class HazelcastConfig {

    @Bean
    public Config getConfig() {
        return new Config();
    }
}

The hazelcast config can also be externalised from the application by passing the -Dhazelcast.config system property when starting the service.

Step 3

Hazelcast will not start up if you start the application now. The Spring magic happens because of the org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration configuration class which is conditionally loaded by Spring whenever the application context sees that:

  1. HazelcastInstance is on the classpath
  2. There is an unresolved dependency on a bean of type HazelcastInstance

To start using Hazelcast let’s create a special service that will wire in a Hazelcast instance. The service doesn’t do anything since it exists only to illustrate how Hazelcast is configured and started by Spring.

Illustrate starting Hazelcast by adding a dependency
@Service
public class MapService {

    @Autowired
    private HazelcastInstance instance;

}

If you start the application now and monitor the logs you will see that Hazelcast is indeed starting up. You should see something like:

[LOCAL] [dev] [3.7.5] Prefer IPv4 stack is true.                                                                                             
[LOCAL] [dev] [3.7.5] Picked [192.168.1.1]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true   
[192.168.1.1]:5701 [dev] [3.7.5] Hazelcast 3.7.5 (20170124 - 111f332) starting at [192.168.1.1]:5701                                     
[192.168.1.1]:5701 [dev] [3.7.5] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.                                             
[192.168.1.1]:5701 [dev] [3.7.5] Configured Hazelcast Serialization version : 1                                                            

and a bit further down:

Members [1] {                                                               
    Member [192.168.1.1]:5701 - f7225da2-a428-4849-944f-43abfb12063a this 
}                                                                           

This is great! Hazelcast running with almost no effort at all!

Add JCache

Next we want to start using Hazelcast as a JCache provider. To do this add a dependency on spring-boot-starter-cache in your pom file.

Spring Boot Caching dependency in pom file
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>

Then, in order to use the annotations, add a dependency on the JCache API

JCache dependency in pom file
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
</dependency>

Finally, to tell Spring to configure caching, add the @EnableCaching annotation to the Spring Boot application class (the one currently annotated with @SpringBootApplication).

Enable Caching
@SpringBootApplication
@EnableCaching
public class Application {

Now something unexpected happens. Starting the application creates two Hazelcast nodes which, if your local firewall allows multicast, join up to form a cluster. If multicasting works you should see the following:

Members [2] {
    Member [192.168.1.1]:5701 - 18383f04-43ac-41fc-a2bc-cd093a9706b6 this
    Member [192.168.1.1]:5702 - b654cb85-7b59-489d-b599-64ddd2dc0730
}

2017-02-16 08:41:46.154  INFO 14141 --- [ration.thread-0] c.h.internal.cluster.ClusterService      : [192.168.1.1]:5702 [dev] [3.7.5] 

Members [2] {
    Member [192.168.1.1]:5701 - 18383f04-43ac-41fc-a2bc-cd093a9706b6
    Member [192.168.1.1]:5702 - b654cb85-7b59-489d-b599-64ddd2dc0730 this
}

This says that there are two Hazelcast nodes running, one on port 5701 and another on port 5702, and that they have joined to form a cluster.

This is an unexpected complication, but let’s ignore the second instance for now.

Caching some results

Let’s see if the caching works. Firstly we have to provide some cache configuration. Add the following to the hazelcast.xml file.

Hazelcast Cache configuration
<cache name="sumCache">
    <expiry-policy-factory>
        <timed-expiry-policy-factory expiry-policy-type="CREATED"
                                     duration-amount="1"
                                     time-unit="MINUTES"/>
    </expiry-policy-factory>
</cache>

Next, to start caching change the sum function in the CalculatorService to:

Cached Sum Service
@CacheResult(cacheName="sumCache")
@RequestMapping(value = "/calc/{a}/plus/{b}", method = RequestMethod.GET)
public CalcResult sum(@PathVariable("a") Double a, @PathVariable("b") Double b) {
     System.out.println(String.format("******> Calculating %s + %s",a,b));

     return new CalcResult(a + b);
}
  1. The @CacheResult annotation indicates that we want to cache the result of this function and place it in the cache called sumCache
  2. The println writes a message to standard out so we can see the caching in action

Start the application and call the CalculatorService to add two numbers together. You can use curl or simply navigate to the following URL in a browser window

http://localhost:8080/calc/200/plus/100 

Note: The port could be something other than 8080. Check standard out when starting the application for the correct port number.

You’ll get the following response

{"result":300.0}

and should see this output in standard out:

******> Calculating 200.0 + 100.0

Subsequent calls to the service will not print this line, but if you try again after a minute you’ll see the message again.

So caching works! But what about the second HazelcastInstance?

Only one instance

So you may think that the second Hazelcast instance is due to the example MapService we added earlier. To test that, disable it by commenting out the @Service annotation. You can even delete the class if you like, but the application will still start two Hazelcast nodes.

My guess as to what is going on is: When the application context starts, a Hazelcast instance is created by the JCache configuration due to the @EnableCaching annotation but this instance is not registered with the Spring context. Later on a new instance is created by the HazelcastAutoConfiguration which is managed by Spring and can be injected into other components.

Solution

I have found three solutions to the ‘two instance problem’ so far, each with its own drawbacks.

Option 1

I got the following idea from Neil Stevenson over on stackoverflow.

Disable the Hazelcast auto configuration
@EnableAutoConfiguration(exclude = {
        // disable Hazelcast auto configuration and use the JCache configuration
        HazelcastAutoConfiguration.class, CacheAutoConfiguration.class
})

Drawback: You can’t use the Hazelcast instance created this way directly. Spring has no knowledge of it, so you can’t get it wired in anywhere.

Option 2

This has the same effect as option 1 above, except that you can use the instance. You have to name the Hazelcast instance in the config:

<instance-name>test</instance-name>

and then tell the Spring context to use it by getting it by name:

Bring the instance into Spring
@Bean
public HazelcastInstance getInstance() {
    return Hazelcast.getHazelcastInstanceByName("test");
}

Drawback: This relies on the order of bean creation, so I can only say that it works in Spring Boot 1.5.1.

Option 3

The best solution so far is to set an instance name as in Option 2 above and then set spring.hazelcast.config=hazelcast.xml in the application.properties file.

see: hazelcast-jcache-option3

Conclusion

I personally think that option 3 is the best approach. That gives you the best of both worlds with minimum configuration.

Spring Boot can be a bit magical at times and doesn’t always do exactly what you would expect, but there is always a way to tell it to get out of the way and do it yourself. The people over at Spring are working hard to make everything ‘just work’ and I am confident that these things will be ironed out over time.

Java Double and NaN Weirdness

We learn something everyday. We don’t always realise it, but we do. Sometimes the thing you learn isn’t new at all, but something you sort of knew but never really thought about too much.

I recently learned that my understanding of what causes a NaN value in Java’s double was wrong.

The story

I was working on an integration project and received a bug report on one of my services. The report said that my service is returning an HTTP code ‘500’ for a specific input message.

During my investigation I found the cause of the exception was an unexpected value returned from a down stream service. It was a SOAP service which returned something like the following in its XML response:

<SomeNumberField type="number">NaN</SomeNumberField>

I was a bit surprised to see the NaN there, since I would expect them to either leave the field off or set it to null if they don’t have a value. This looked like a calculation bug, since we all know that, in Java and C# at least, dividing a double by 0 results in a NaN. (Spoiler: It doesn’t)

However, this got me thinking and I tried to remember what I knew about double and NaN. This resulted in an embarrassingly deep spiral down the rabbit hole.

NaN

Well, if you think about it, NaN is kind of like a number in this case, even though NaN means Not-a-Number. It exists so that calculations with indeterminate results can be represented as a “number” in the set of valid double values. Without NaN you would either get completely wrong results or an exception, which isn’t ideal either. NaN is defined, like Infinity, to be part of the set of valid doubles.

System.out.println(Double.isNaN(Double.NaN)); //true
System.out.println(Double.POSITIVE_INFINITY == Double.POSITIVE_INFINITY); //true
System.out.println(Double.NEGATIVE_INFINITY == Double.NEGATIVE_INFINITY); //true
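One related quirk not shown above: NaN is the only double value that is not equal to itself under ==, which is exactly why Double.isNaN exists.

```java
public class NaNEquality {
    public static void main(String[] args) {
        System.out.println(Double.NaN == Double.NaN); // false: NaN is unequal to everything, itself included
        System.out.println(Double.isNaN(Double.NaN)); // true: the reliable way to test for NaN
    }
}
```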

I played around with double a bit and I thought to share it in a post, because I think the various edge cases of double are interesting.

I started with the following experiment:

//Lets make a NaN!
double NaN = 5.0/0;
System.out.println("NaN: " + NaN);

>> NaN: Infinity

Wait. What?

It turns out that I have lived with a misconception about what happens when you divide a double by zero. I seriously expected that a double divided by 0 is NaN. Well, I was wrong. You get:

POSITIVE_INFINITY

double infinity = 5.0/0;
System.out.println((infinity == Double.POSITIVE_INFINITY)); //true

I can sort of rationalise that the answer could be infinity because you are dividing something largish by something much, much smaller. In fact, dividing it by nothing, so you could argue the result should be infinitely large. Mathematically, though, this does not make sense: x/0 is undefined since there is no number you can multiply by 0 to get back to x again (for x != 0).
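In IEEE 754 terms the rule is: a nonzero double divided by zero yields a signed infinity, while 0.0/0.0, the genuinely indeterminate case, yields NaN. A quick check:

```java
public class DivideByZero {
    public static void main(String[] args) {
        System.out.println(5.0 / 0);  // Infinity: nonzero over zero
        System.out.println(-5.0 / 0); // -Infinity: the sign carries through
        System.out.println(0.0 / 0);  // NaN: zero over zero is indeterminate
        // Integer division is different: 5 / 0 throws ArithmeticException instead.
    }
}
```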

Anyway, let’s play with NaN a bit.

double NaN = Double.NaN;
System.out.println("NaN: " + NaN); //NaN: NaN

System.out.println((NaN + 10)); //(NaN + 10): NaN
System.out.println((NaN - 10)); //(NaN - 10): NaN
System.out.println((NaN - NaN)); //NaN - NaN: NaN
System.out.println((NaN / 0));     //NaN / 0: NaN
System.out.println((NaN * 0));     //NaN * 0: NaN

Well, no surprises here. Once a NaN, always a NaN.

I used Double.NaN above to be sure I have a NaN but if you want to make one yourself then calculating the square root of a negative number is an easy way:

System.out.println((Math.sqrt(-1))); //NaN

Max and Min value

Before we get to infinity, let’s take a quick look at Double.MAX_VALUE and Double.MIN_VALUE. Double.MAX_VALUE is the largest finite value a double can represent; push past it and the result overflows to Double.POSITIVE_INFINITY. Note that, perhaps surprisingly, Double.MIN_VALUE is not the most negative double: it is the smallest positive nonzero value a double can hold. The most negative finite value is -Double.MAX_VALUE, beyond which you get Double.NEGATIVE_INFINITY.
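A quick check of these constants (note in particular that Double.MIN_VALUE is a tiny positive number, not the most negative double):

```java
public class DoubleLimits {
    public static void main(String[] args) {
        System.out.println(Double.MAX_VALUE);     // 1.7976931348623157E308, largest finite double
        System.out.println(Double.MIN_VALUE);     // 4.9E-324, smallest positive nonzero double
        System.out.println(-Double.MAX_VALUE);    // most negative finite double
        System.out.println(Double.MIN_VALUE > 0); // true: MIN_VALUE is positive
    }
}
```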

Something to note about double is that it can represent ridiculously large numbers using a measly 64 bits. The maximum value is larger than 1.7*10^308!

System.out.println("Double.MAX_VALUE is large! : " + (Double.MAX_VALUE == 1.7976931348623157 * Math.pow(10,308)));

> Double.MAX_VALUE is large! : true

It can represent these large numbers because it encodes numbers as a small real number multiplied by some exponent. See the IEEE spec

Let’s see what it takes to make Double.MAX_VALUE overflow to infinity.

double max = Double.MAX_VALUE;

System.out.println((max == (max + 1))); //true
System.out.println((max == (max + 1000))); //true
System.out.println("EVEN...");
System.out.println((max == (max + Math.pow(10,291)))); //true

System.out.println("HOWEVER...");
System.out.println((max == (max + Math.pow(10,292)))); //false
System.out.println((max + Math.pow(10,292))); //Infinity

This ability to represent seriously large numbers comes at the price of accuracy. After a while only changes in the most significant parts of the number can be represented, as seen in the following code snippet:

double large_num = Math.pow(10,200);
System.out.println("large_num == (large_num + 1000): " + (large_num == (large_num + 1000))); //true

At large values the steps between adjacent doubles are very large, since the double has no place to record a change that doesn’t affect its roughly 16 most significant digits. As shown above, 1000 plus a very large number is still that same very large number.
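The size of these steps can be inspected directly with Math.ulp, which returns the distance from a value to the next representable double:

```java
public class UlpDemo {
    public static void main(String[] args) {
        double large = Math.pow(10, 200);
        System.out.println(Math.ulp(large));       // the step to the next double is itself astronomically large
        System.out.println(large + 1000 == large); // true: 1000 is far below half an ulp
        System.out.println(Math.ulp(1.0));         // about 2.2E-16, hence roughly 16 significant digits
    }
}
```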

Infinity

Java’s double supports two kinds of infinity, positive and negative. The easiest way to make them is to divide a nonzero value by 0.

double pos_infinity = 5.0/0;
System.out.println("POSITIVE_INFINITY == pos_infinity: " + (Double.POSITIVE_INFINITY == pos_infinity));

double neg_infinity = -5.0/0;
System.out.println("NEGATIVE_INFINITY == neg_infinity: " + (Double.NEGATIVE_INFINITY == neg_infinity));
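Note that this only works for floating point division. Integer division by zero does not produce infinity; it throws an ArithmeticException instead:

```java
public class DivideByZero {
    public static void main(String[] args) {
        System.out.println(5.0 / 0);   // Infinity: floating point division
        try {
            System.out.println(5 / 0); // integer division throws
        } catch (ArithmeticException e) {
            System.out.println("ArithmeticException: " + e.getMessage());
        }
    }
}
```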

In maths, infinity is a numerical concept representing the idea of an infinitely large number. It is used, for example in calculus, to describe an unbounded limit: some quantity that can grow without bound.

Here things are much the same as in maths: POSITIVE_INFINITY and NEGATIVE_INFINITY represent numbers that are infinitely large. In practice, however, they mostly function as a sign that something went wrong in your calculation. You are either trying to compute something too large to store in a double, or there is a bug in the code.

There are once again some interesting things to note when playing with positive and negative infinity.

double pos = Double.POSITIVE_INFINITY;

System.out.println("POSITIVE_INFINITY + 1000 = " + (pos + 1000));
System.out.println("POSITIVE_INFINITY + 10^1000 = " + (pos + Math.pow(10,1000)));
System.out.println("POSITIVE_INFINITY * 2 = " + (pos * 2));

Once a value is infinity it stays there, even if you add or subtract ridiculously large numbers. There is one interesting case, however: subtracting infinity from infinity.

double pos = Double.POSITIVE_INFINITY;
double neg = Double.NEGATIVE_INFINITY;

System.out.println("POSITIVE_INFINITY - POSITIVE_INFINITY = " + (pos - pos));
System.out.println("POSITIVE_INFINITY + NEGATIVE_INFINITY = " + (pos + neg));

Subtracting infinity from infinity yields NaN, and, as you would expect, adding or subtracting NaN yields NaN again.

System.out.println("POSITIVE_INFINITY + NaN = " + (pos + Double.NaN));
System.out.println("POSITIVE_INFINITY - NaN = " + (pos - Double.NaN));
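One more NaN quirk worth knowing: NaN compares unequal to everything, including itself, so always test for it with Double.isNaN rather than ==.

```java
public class NanChecks {
    public static void main(String[] args) {
        double nan = Double.POSITIVE_INFINITY - Double.POSITIVE_INFINITY;
        System.out.println(nan == Double.NaN);  // false! NaN is not even equal to itself
        System.out.println(Double.isNaN(nan)); // true: the correct way to test
    }
}
```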

In closing

Both Java’s float and double types follow the IEEE 754-1985 standard for representing floating point numbers. I am not going to go into great detail on the internals of double; suffice it to say that double and float are not perfectly accurate when you use them to perform arithmetic. The Java primitive type documentation says:

This data type should never be used for precise values, such as currency. For that, you will need to use the java.math.BigDecimal class instead.

If precision is your main concern then it is generally better to stick with good old java.math.BigDecimal. BigDecimal is immutable, which makes it nice to work with, but the most important thing is precision: you have absolute control over numeric precision, without the rounding or overflow surprises you get with double and float. However, if performance is the main concern it is better to stick with float or double and live with the inaccuracies.
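To illustrate the difference, here is a small sketch comparing double arithmetic with BigDecimal (constructed from strings, which keeps the decimal values exact):

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // double cannot represent 0.1 or 0.2 exactly, and the error shows up
        System.out.println(0.1 + 0.2); // 0.30000000000000004

        // BigDecimal built from strings carries the exact decimal value
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum); // 0.3
    }
}
```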

For more information on how Java handles NaN, infinity and rounding, read the documentation here.

Extending Metrics for Complex Dashboards in AppDynamics

Overview

Some time ago, I was tasked to replicate one of our client’s Wily Introscope dashboards in AppDynamics. The Wily dashboard displayed a number of status lights indicating how recently activity was detected from a particular client of the application.

The status light colours were assigned as follows:

Status Meaning
GREY No activity since 5am this morning
RED No activity in the last hour, but something since 5 am
YELLOW No activity in the last 10 minutes, but some in the last hour
GREEN Activity in the last 10 minutes

 
The data for each light was gathered by Introscope using custom instrumentation points looking for calls to a particular POJO method. The first parameter to this method was the client identifier, so Introscope collected metrics for each call to this method grouping it by 10 minutes, 1 hour and 1 day.

In this post I will describe what I did to reproduce the dashboard in AppDynamics. Even though it is a rather hacky workaround, it is still interesting. The solution works by extracting metrics from AppDynamics using the REST API and sending them back in as new metrics, which health rules can then use to drive status lights.

The code and examples in this post are from an example application built to illustrate the solution more clearly.

See github: https://github.com/dirkvanrensburg/blogs-appd-metrics-for-dashboards

Status lights in AppDynamics

The status light in AppDynamics relies on a health rule to drive the state of the light. The AppDynamics status light is green by default to indicate no health rule violations, yellow for WARNING rule violations, and red for CRITICAL rule violations. The status light in Introscope is grey when there is no data, so it essentially has four states compared to the three states available in AppDynamics.

As mentioned, the AppDynamics status light uses one health rule, which means you cannot tie the different colours of the light to metrics gathered over different time ranges. The time range for the light is determined by the setting on the widget or the dashboard, whereas the Introscope status light can use separate metrics for each status.

Getting the information

The first step to solving the problem is to gather the information we need to display. We can look at the Introscope probe configuration to see what it uses for the status light:

TraceOneMethodWithParametersOfClass: SomeCentralClass loadItems(Ljava/lang/String;)V BlamedMethodRateTracer "SYS|Service By Client|{0}:Invocations Per Second"

This means that Introscope will count the invocations per second of a method called loadItems, on an instance of the class SomeCentralClass and group this count by the client identifier (the String parameter to loadItems).

Information points

To capture that type of information in AppDynamics you use information points. Information points track calls to a method on a POJO and collect metrics such as Calls Per Minute and Average Response Time. AppDynamics does not allow information points to be “split” by parameter in a generic way, which means that to get the required information we have to create an information point for every client.

You create information points by opening the Information Points view from the side menu and clicking on New

Analyse -> Information Points -> New

Information points track calls to specific methods, so you need to provide the class name and method name of the method to collect metrics for. In this case we want separate information points based on the parameter to the method call, so we also need to set a match condition.

The information point will then start collecting data for Average Response Time, Calls per minute, and Errors per minute as seen on the following dashboard view.

Once defined, the information points are also available in the metric browser where you can plot the different metrics of each information point on the same graph. The following image shows the Average Response Time for CLIENT2 and CLIENT4

Analyse -> Metric Browser

Using the REST API

The AppDynamics controller provides a REST API, which enables you to programmatically extract information out of the controller and, in the case of configuration, send information to the controller. This means that we can call the controller to get the metric information of the information points we just configured. The URL to an information point metric can be retrieved from the metric browser. Right click on the information point and the metric you are interested in, Calls per Minute in our case, and select Copy REST URL.

"Rest URL from metric browser"

This will copy the URL to the clipboard and you can test it by pasting it into a new tab in your web browser. You should see something like this

"Example REST results"

The URL can be changed to get the information over different time ranges by changing the time-range-type and duration-in-mins fields. The time-range-type field is used to get information relative to a point in time, so for example it can be used to get information for the last 15 minutes or for the 30 minutes after 10 am this morning. We can use this to get the information we are after out of AppDynamics. We can get the number of times each client called the service in the last 10, 60 or 960 minutes by changing these fields and calling the controller.

Having the information available as a REST service call is one thing, but it is of no real use outside the controller; we need it back inside so we can create a dashboard. To get metrics into the controller we need to use the Standalone Machine Agent.

The Standalone Machine Agent

The Standalone Machine Agent is a Java application whose primary function is to monitor machine statistics such as CPU, memory utilisation and disk IO. It also provides a way to send metrics into AppDynamics by means of a Monitoring Extension. The extension can supplement the existing metrics in AppDynamics by sending your custom metrics to the controller. A custom metric can be common across the nodes or associated with a specific tier. You specify the path, as seen in the metric browser, where the metrics should be collected relative to the root Custom Metrics.

Get the information out

As mentioned before, the metrics we are interested in can be extracted from the AppDynamics controller using the REST API, and using the Standalone Machine Agent we can create new metrics, which we can use for the dashboard. Using the following REST API call, we can get the metrics captured by our information points rolled up to the different time ranges. The call below gets the Calls per Minute metric for CLIENT1:

http://controller:8090/controller/rest/applications/ComplexDashboardTest/metric-data?metric-path=Information Points|C1|Calls per Minute&time-range-type=BEFORE_NOW&duration-in-mins=10

By calling the above REST call multiple times for every client we can get values for Calls per Minute rolled up over the periods we are interested in (10, 60 and 960 minutes). However, just getting the values for the last 960 minutes (16 hours) is not good enough, since it gives incorrect values early in the day: until 16h00 the trailing 16-hour window still reaches into the previous day, and until 21h00 (5am plus 16 hours) it includes time before the 5am reset. So we need a different approach. We change the time-range-type to AFTER_TIME and provide a start time of 5am that morning. This then only returns values for the 960 minutes after 5am.

The following REST call will do that - replace the ${timeat5am} value with the UNIX time for 5am of that day.

http://controller:8090/controller/rest/applications/ComplexDashboardTest/metric-data?metric-path=Information Points|C1|Calls per Minute&time-range-type=AFTER_TIME&start-time=${timeat5am}000&duration-in-mins=960
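One way to compute that ${timeat5am} value is with java.time; a sketch (note the URL above appends 000 to convert the seconds value to milliseconds):

```java
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.ZoneId;

public class TimeAt5am {
    public static void main(String[] args) {
        // UNIX time in seconds for 5am today in the local time zone
        long timeAt5am = LocalDate.now()
                .atTime(LocalTime.of(5, 0))
                .atZone(ZoneId.systemDefault())
                .toEpochSecond();
        System.out.println(timeAt5am);
    }
}
```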

Send the information back in

To send the information back in we need to create the monitoring extension. This is essentially a script which the Standalone Machine Agent calls periodically; any values the script writes to standard output are forwarded to the controller. We want the script to emit metrics such as the following:

name=Custom Metrics|Information Points|CLIENT1|Calls per 10 Minutes,value=0
name=Custom Metrics|Information Points|CLIENT1|Calls per 60 Minutes,value=2
name=Custom Metrics|Information Points|CLIENT1|Calls per 960 Minutes,value=2
name=Custom Metrics|Information Points|CLIENT2|Calls per 10 Minutes,value=0
name=Custom Metrics|Information Points|CLIENT2|Calls per 60 Minutes,value=1
name=Custom Metrics|Information Points|CLIENT2|Calls per 960 Minutes,value=3519

...And so on for all the clients
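The output contract is simple enough to sketch. A hypothetical helper that formats one such line per client and time range might look like this (client names and values are placeholders; the REST fetching is omitted):

```java
public class MetricPrinter {
    // The machine agent forwards every "name=...,value=..." line on stdout
    static String metricLine(String client, int minutes, long value) {
        return "name=Custom Metrics|Information Points|" + client
                + "|Calls per " + minutes + " Minutes,value=" + value;
    }

    public static void main(String[] args) {
        System.out.println(metricLine("CLIENT1", 10, 0));
        System.out.println(metricLine("CLIENT1", 60, 2));
        System.out.println(metricLine("CLIENT1", 960, 2));
    }
}
```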

Once we have the extension installed and reporting, the new metrics will show up in the AppDynamics metric browser at the following location, assuming the machine agent is reporting for the tier called ‘OneTier’.

Application Infrastructure Performance -> OneTier -> Custom Metrics

There will be a section for each client (CLIENT1 to CLIENTx) and each will have a metric for each of the time ranges we are interested in (10, 60 and 960 minutes)

"The new metrics displayed in the browser"

Health Rules

Health Rules provide a way to specify conditions which the system considers WARNING or CRITICAL. You specify the metric to monitor and the threshold or baseline to compare it to for both the WARNING and the CRITICAL condition.

We can now create health rules to track these metrics, so that the dashboard lights can show how recently a particular client accessed the system. To create a health rule we use the side menu in the AppDynamics controller.

Alert & Response -> Health Rules -> (click on +)

First, specify a name for the rule, the type of metric to use, and the time range to use when evaluating the health rule. The last 5 minutes is good enough, since the machine agent sends a value every minute and the value it sends is already summed over the period in question.

We need to create one health rule for every client "Create the health rule"

The WARNING condition is raised if there were no calls in the last 10 minutes, but some in the last 60 minutes. "Create the health rule"

The CRITICAL condition is raised if there were no calls in the last 60 minutes. "Create the health rule"

Putting it all together

Now we have all the information we need to start assembling the dashboard. Status lights only work on Custom Dashboards, as opposed to Node/Tier dashboards. To create a Custom Dashboard, click on the AppDynamics logo at the top left and choose Custom Dashboards

"Create Custom Dashboard"

Next we create a new dashboard by clicking on Create Dashboard and set the layout of the canvas to absolute. This is because the grid layout does not support metric labels on top of other widgets, and we need those to complete the dashboard.

"Create new Dashboard"

Put a label with the text Client Access at the top of the dashboard, add a label for the first status light with the text CLIENT 1, and then add the status light for client 1. The status light is linked to the health rule for CLIENT1 by selecting it in the status light properties.

"Status light properties"

We can now repeat these steps for the remaining 5 clients, linking each to the appropriate health rule, and finally the dashboard looks like this

"All the status lights"

As mentioned at the start of the post, the Introscope status light can be in four states and the AppDynamics status light only three. To represent the fourth state we can put the value of the Calls per 960 Minutes metric on the status light as a dynamic label.

"Metric label"

The label background is set as transparent, and sized so that it will fit on the status light for client 1. After adding a metric label for each client, the dashboard is complete. We now have a fully functional dashboard which displays the same information as the original Introscope dashboard. In fact, it shows a little more information because we added the ‘calls today’ label on the status to make up for the missing fourth state. Knowing the number of calls for the day is much better than just having a red light meaning ‘some calls today but nothing in the last hour’.

"Completed Dashboard"

Conclusion

Using the AppDynamics REST API and Standalone Machine Agent allows you to do powerful enrichment of the metric information in AppDynamics. You could, for example, monitor an internal legacy system through a custom built API and combine that data with information already captured by AppDynamics. This can then be stored as a new metric which you can use to visualise the data.

Split Business Transactions on Oracle Service Bus Using AppDynamics

Overview

We recently helped a customer configure AppDynamics to monitor their business transactions on Oracle OSB. AppDynamics does not have built-in support for Oracle OSB, although it does support Weblogic Application Server. It detected various business transactions out of the box, but one type of transaction in particular proved to be a little tricky.

The OSB was accepting SOAP messages from a proprietary upstream system all on one endpoint. It then inspected the message and called one or more services on the OSB, essentially routing the incoming messages. AppDynamics grouped all these messages as one business transaction because they all arrived at the same endpoint. This was not acceptable, as a significant number of distinct business transactions were processed this way. We had to find a way to separate the business transactions using the input data.

Changing the application was not an option, so we solved this by augmenting the application server code to give AppDynamics an efficient way to determine the business transaction name. The rest of this article describes how AppDynamics was used to find a solution, and how we improved the solution using custom byte code injection.

Example Application

An example OSB application which reproduces the design of the actual application is used to illustrate the problem and solution.

Finding the business transactions

It is a bit tricky to find the business transactions for this application because the services on Oracle OSB are all implemented by configuring the same Oracle classes. Each Proxy Service is just another instance of the same class, and the call graph in the transaction snapshot is full of generic-sounding names like ‘AssignRuntimeStep’.

The first step in figuring out how to separate the business transactions is to use the AppDynamics Method Invocation Data Collector feature. This gives you a way to inspect the method parameters in a call and print their values. Method invocation data collectors allow you to configure AppDynamics to capture the value of a parameter for a particular method invocation; not only the value of the parameter itself, but also the result of applying a chain of getter methods to it.

The following figure shows data collector configuration to get information out of the parameters passed to the processMessage method on the AssignRuntimeStep class we noticed in the call graph. This data collector tells AppDynamics to capture the first parameter to the processMessage method on the class AssignRuntimeStep and then to collect the result of calling toString() and getClass().getName() on that parameter.

The results of this can be seen in the following images. The first shows the result of the toString() applied to the first parameter

and the second shows the class of the parameter. Notice that the class name is repeated three times. It is a list of values, one value saved for every invocation of the processMessage method.

From the first image it is obvious that the input message is contained in the first parameter. You can also see that the message is stored in a map-like structure whose key is called body. Note that the business transaction name is visible in the first image: TransactionName=“Business Transaction1”. The second image shows the type of the first parameter, so the message is contained in an object of class MessageContextImpl.

The next step is to tell AppDynamics what to use for splitting the business transactions, and this can be done by using a Business Transaction Match Rule. The number of characters from the start of the message to the field we are interested in is roughly 126, and assuming the transaction names will be around 20 characters, we can set up a match rule as follows:

Note the number of arguments (2) set in the above image. That value is important and the transactions will not show up at all if the wrong value is used. We determined the value by decompiling the Weblogic class but you can always do it by first trying 1 and then 2.

With the above configuration in place the AppDynamics agent is able to pick up the different transactions. The transaction names aren’t perfect but it works!

Optimise and get the correct name

This solution is OK, but it has a few issues. Firstly, the transaction names are either cut off or include characters that are not part of the transaction name. It is also not very efficient, because it requires the entire MessageContextImpl instance to be serialised to a String just to extract a small part of it. To improve this we need to add custom code to the MessageContextImpl class so that we can access the data in a more efficient way.

Consider the following Java code to search a string for the transaction name:

  private static final String SEARCH_TOKEN = "TransactionName=\"";

  private static String getTransactionType(String input) {
      int startIndex = input.indexOf(SEARCH_TOKEN);
      int endIndex = startIndex;

      if (startIndex == -1) return null;

      startIndex += SEARCH_TOKEN.length();              //Jump past the open quote
      if (startIndex < input.length() - 1) {
          endIndex = input.indexOf("\"", startIndex);   //Find the end quote
      }

      if (endIndex > startIndex && endIndex < input.length()) {
          return input.substring(startIndex, endIndex);
      }

      return null;
  }

This is a statically accessible piece of code that extracts the transaction name from an arbitrary string. It first tries to find the token in the input string; once the token is found, it determines the positions of the opening and closing quotes and returns the transaction name between them. If nothing is found it returns null.
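A quick, self-contained way to sanity-check the extraction logic (the method body is repeated here in compact form so the sketch runs on its own; the sample message is made up):

```java
public class ExtractorDemo {
    private static final String SEARCH_TOKEN = "TransactionName=\"";

    static String getTransactionType(String input) {
        int start = input.indexOf(SEARCH_TOKEN);
        if (start == -1) return null;
        start += SEARCH_TOKEN.length();      // jump past the opening quote
        int end = input.indexOf('"', start); // find the closing quote
        return (end > start) ? input.substring(start, end) : null;
    }

    public static void main(String[] args) {
        String msg = "<m:Message TransactionName=\"Business Transaction1\">...</m:Message>";
        System.out.println(getTransactionType(msg));        // Business Transaction1
        System.out.println(getTransactionType("no token")); // null
    }
}
```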

The next step is to write some Java code that can use the above code without loading the entire string into memory.

  public static short WITHIN = 512;
  public static short BUFFER_SIZE = 256;

  public static String getTransactionType(InputStream inputStream) {
      String result = null;
      BufferedReader reader = null;
      try {
          reader = new BufferedReader(new InputStreamReader(inputStream), BUFFER_SIZE);
          int read, total = 0;

          //Read up to WITHIN characters in BUFFER_SIZE chunks, then stop
          StringBuilder sb = new StringBuilder();
          boolean stop = false;
          do {
              char[] cbuf = new char[BUFFER_SIZE];

              read = reader.read(cbuf, 0, BUFFER_SIZE);
              if (read > -1) {
                  sb.append(cbuf, 0, read); //only append what was actually read

                  //Search for the transaction type in the buffer so far
                  result = getTransactionType(sb.toString());

                  total = total + read;
                  stop = stop || result != null || total >= WITHIN;
              }
          } while (!stop && (read != -1));
          reader.close();

          return result;
      }
      catch (Throwable e) {
          Logger.INSTANCE.trace("Failed: {0}", e, e.getMessage());
      }
      finally {
          //Omitted clean up code
      }
      return null; //Something went wrong
  }

This method accepts an InputStream and reads it progressively, 256 characters at a time, to find the transaction type. As an optimisation based on the known message structure, it searches only the first 512 characters: the transaction type will almost always be found within the first 256 characters, but 512 makes it a certainty. Also note the variables WITHIN and BUFFER_SIZE, which are there to make the code configurable and future proof.

The code listed above can be included in a custom Java agent that instruments the Weblogic code using a ClassFileTransformer. Creating Java agents and class transformers is outside the scope of this article, which focuses on the bits actually injected. For more on creating a custom Java agent see the java.lang.instrument documentation.

The next step is to make the above getTransactionType accessible to the AppDynamics agent.

Using custom byte code injection to expose internals to AppDynamics

Byte code injection can be achieved in different ways; one way is with the ASM library. The basic idea is to inject a method into the MessageContextImpl class that AppDynamics can access as a getter on the first parameter of the processMessage method of AssignRuntimeStep.

So for the agent to inject the following piece of code into the MessageContextImpl class,

 public String ec_getTransType() {
      try {
          Logger.INSTANCE.debug("getTransactionType called");
          return TransactionTypeExtractor.getTransactionType(getBody().getInputStream(null));
      } catch (Exception e) {
          Logger.INSTANCE.error("Failed to get transactionType", e);
      }
      return null; // Something went wrong
  }

you can use ASM as listed below. It effectively writes the above method into the MessageContextImpl class before it is loaded by the class loader. For more information on how to use ASM see the ASM User Guide.

     MethodVisitor mv = super.visitMethod(ACC_PUBLIC, "ec_getTransType", "()Ljava/lang/String;", null, null);
      mv.visitCode();
      Label l0 = new Label();
      Label l1 = new Label();
      Label l2 = new Label();
      mv.visitTryCatchBlock(l0, l1, l2, "java/lang/Exception");
      mv.visitLabel(l0);
      mv.visitFieldInsn(GETSTATIC, "au/com/ecetera/javaagent/logging/Logger", "INSTANCE", "Lau/com/ecetera/javaagent/logging/Logger;");
      mv.visitLdcInsn("getTransactionType called");
      mv.visitInsn(ICONST_0);
      mv.visitTypeInsn(ANEWARRAY, "java/lang/Object");
      mv.visitMethodInsn(INVOKEVIRTUAL, "au/com/ecetera/javaagent/logging/Logger", "debug", "(Ljava/lang/String;[Ljava/lang/Object;)V", false);
      Label l3 = new Label();
      mv.visitLabel(l3);
      mv.visitVarInsn(ALOAD, 0);
      mv.visitMethodInsn(INVOKEVIRTUAL, "com/bea/wli/sb/context/MessageContextImpl", "getBody", "()Lcom/bea/wli/sb/sources/Source;", false);
      mv.visitInsn(ACONST_NULL);
      mv.visitMethodInsn(INVOKEINTERFACE, "com/bea/wli/sb/sources/Source", "getInputStream", "(Lcom/bea/wli/sb/sources/TransformOptions;)Ljava/io/InputStream;", true);
      mv.visitMethodInsn(INVOKESTATIC, "au/com/ecetera/javaagent/vha/TransactionTypeExtractor", "getTransactionType", "(Ljava/io/InputStream;)Ljava/lang/String;", false);
      mv.visitLabel(l1);
      mv.visitInsn(ARETURN);
      mv.visitLabel(l2);
      mv.visitFrame(Opcodes.F_SAME1, 0, null, 1, new Object[] {"java/lang/Exception"});
      mv.visitVarInsn(ASTORE, 1);
      Label l4 = new Label();
      mv.visitLabel(l4);
      mv.visitFieldInsn(GETSTATIC, "au/com/ecetera/javaagent/logging/Logger", "INSTANCE", "Lau/com/ecetera/javaagent/logging/Logger;");
      mv.visitLdcInsn("Failed to get transactionType");
      mv.visitVarInsn(ALOAD, 1);
      mv.visitInsn(ICONST_0);
      mv.visitTypeInsn(ANEWARRAY, "java/lang/Object");
      mv.visitMethodInsn(INVOKEVIRTUAL, "au/com/ecetera/javaagent/logging/Logger", "error", "(Ljava/lang/String;Ljava/lang/Throwable;[Ljava/lang/Object;)V", false);
      Label l5 = new Label();
      mv.visitLabel(l5);
      mv.visitInsn(ACONST_NULL);
      mv.visitInsn(ARETURN);
      Label l6 = new Label();
      mv.visitLabel(l6);
      mv.visitLocalVariable("this", "Lcom/bea/wli/sb/context/MessageContextImpl;", null, l0, l6, 0);
      mv.visitLocalVariable("e", "Ljava/lang/Exception;", null, l4, l5, 1);
      mv.visitMaxs(4, 2);
      mv.visitEnd();

Change AppDynamics configuration

Now the AppDynamics agent configuration can be updated to use the new ec_getTransType method.

"Using new method to split"

The resulting business transaction names now look much better.

"Properly named transactions"

Conclusion

With AppDynamics it is possible to get really useful information out of a running application. Its very flexible configuration allows you to dive deep into the application internals to find issues and separate transactions. However, sometimes it is better to give AppDynamics a hook into the internal information so that it can work more efficiently. When you have access to the application code this can easily be achieved by adding some code; when it is not practical to rebuild the entire application, you can always use byte code injection.