Technology | Oct 20, 2016

Tailoring JavaAgent Setup for Microservice Applications

Christopher Blewett and Sean Nixon

During a recent client engagement, our team was asked to configure a distributed Java microservice web application to run with a third-party monitoring service. We set about defining and executing a solution to their challenge. Here’s how we did it.

Problem

As mentioned above, the application we were building uses Java with a microservice architecture. Normally, enabling this particular monitoring agent would simply involve downloading a jar file, updating values in a configuration file, and including the jar in the start command at runtime. (Note: A controller web server must normally be set up to view the monitor’s reports in the browser; in our case, the controller had already been set up.) For our client’s purposes, though, the typical plug-and-play use case was not viable for several reasons.
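In that simple case, enabling the agent amounts to prepending a flag to the usual start command. The paths and file names below are hypothetical stand-ins, not the vendor’s actual artifacts:

```shell
# Hypothetical paths and names -- the vendor's actual jar will differ.
AGENT_HOME="/opt/monitoring-agent"
AGENT_JAR="${AGENT_HOME}/javaagent.jar"

# Enabling the agent is just a -javaagent flag prepended to the usual start command.
START_CMD="java -javaagent:${AGENT_JAR} -jar app.jar"
echo "${START_CMD}"
```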

The application is built on a distributed system made up of eight different environments. We have four testing environments for new feature development, a staging environment tested internally to simulate production, and a beta environment for participating users who help test features that are not yet ready for general release. These six sites are hosted on three boxes. The remaining two environments are the production and pilot setups, which share hosts but differ in function. Unlike the four test sites, the higher environments are scaled to three instances each.

While each host is home to at least two environments, we want to monitor each separately. This is where the challenges come into play:

  • How do we leverage the monitoring service’s configuration to distinguish each instance of the application?

  • How can we integrate the monitor into our existing build and deployment pipeline?

  • How can we manage our client’s costs while maintaining the ability to scale up as needed?

Paths We Considered

As consultants, our team put our heads together to come up with possible solutions, evaluate how each would meet our client’s needs, and make a final recommendation.

  1. We could use a unique jar and configuration file for each environment on each box. That would keep each one separate, so each agent would only have to worry about monitoring and reporting on its own services. This would be the most straightforward approach but could prove troublesome depending on how the solution handles licensing (see the note on licensing below). It also complicates the DevOps side of things, requiring that we keep track of separate paths, jars, and configuration files.

  2. We could use a single jar for each box and different configuration files for each site, but the trouble there is overriding the default configuration file location that the agent jar expects. From a DevOps perspective, we would again have to manage the different files and their locations across each box, and ensure that the correct configuration file is pulled in with each instance of the application.

  3. After digging into the tool’s documentation, we found an alternative that would achieve the granular configuration we were looking for while only requiring a single instance of the agent jar. Using environment variables, we would be able to leverage our existing startup scripts to define the values we would need right before each service was started. Most of them were already defined in the scripts and being used elsewhere, with the exception of the path to the jar that runs the monitor. We use the same version of the deployment scripts on every server, so keeping our solution as box-agnostic as possible by avoiding the need for configuration files would allow us to continue that practice.
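The idea behind the third option can be sketched as follows. The `AGENT_*` variable names here are illustrative assumptions; the actual names the agent reads are vendor-specific:

```shell
# Values our startup scripts already defined for other purposes:
currentEnvironment="prod_env"   # which of the eight environments we are in
serviceName="service-a"         # which microservice is being started

# Derive the agent's settings from those values -- no per-box config files needed.
if [ "${currentEnvironment}" = "prod_env" ]; then
  export AGENT_APPLICATION_NAME="app.example.com"
fi
export AGENT_SERVICE_NAME="${serviceName}"
```

Because every value comes from variables the deployment scripts already carry, the same script version can run unmodified on every box.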

Following some collaboration involving our client, the third-party monitor’s technical and sales teams, and our own team, we went with the last option. This decision accomplished our goals of monitoring each environment separately and fully, maintaining our current deployment workflow, and doing so at a reasonable cost to the client.

Implementation

Now that the plan had a green light, we were ready to set everything up. The actual changes themselves were not extensive. Here’s a simplified example of how we changed our scripts to enable monitoring.

Notice that we use the $JAVAAGENTMONITOR variable to define the path to the agent jar. It’s in the same location on all the boxes for simplicity’s sake but could be moved as needed. Next, we explicitly define which services we are monitoring.

```shell
# Path to monitoring agent
export JAVAAGENTMONITOR="-javaagent:$HOME/path/to/javaagent.jar"

# List of monitored services
export monitoredServices="service-a service-b service-c service-d"

function MonitoringSetup() {
  # Set URLs for environment names
  case "${currentEnvironment}" in
    beta_env|stage_env|prod_env|pilot_env)
      case "${currentEnvironment}" in
        beta_env)
          # We are in Beta
          SITENAME="beta-app.example.com"
          SERVICENAME="${serviceName}"
          ;;
        stage_env)
          # We are in UAT
          SITENAME="stage-app.example.com"
          SERVICENAME="${serviceName}"
          ;;
        prod_env)
          # We are in Production
          SITENAME="app.example.com"
          SERVICENAME="${serviceName}"
          ;;
        pilot_env)
          # We are in Pilot
          SITENAME="pilot-app.example.com"
          SERVICENAME="${serviceName}"
          ;;
        *)
          echo "Bad condition: script should never reach this branch"
          ;;
      esac

      # Set vars based on results
      AGENT_APPLICATION_NAME="${SITENAME}"
      AGENT_SERVICE_NAME="${SERVICENAME}"
      AGENT_NODE_NAME="${SERVICENAME}::${ServerName}"
      export AGENT_APPLICATION_NAME AGENT_SERVICE_NAME AGENT_NODE_NAME
      ;;
    *)
      # We don't want monitoring in other environments
      unset JAVAAGENTMONITOR
      ;;
  esac
}

if [[ ${monitoredServices} =~ "${serviceName}" ]]; then
  MonitoringSetup
else
  unset JAVAAGENTMONITOR
fi

# Normal startup, now with JAVAAGENTMONITOR included, if defined
java $JAVAAGENTMONITOR -jar jarfile.jar
```
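One caveat worth noting: the `=~` membership check above performs a substring match, which works here because no monitored service name is a substring of another. If that ever changed, a whole-word check like this sketch would be safer:

```shell
# Whole-word membership check: pad both the list and the name with spaces so
# "service-a" cannot accidentally match inside a longer service name.
monitoredServices="service-a service-b service-c service-d"
serviceName="service-a"
if [[ " ${monitoredServices} " == *" ${serviceName} "* ]]; then
  MONITORED="yes"
else
  MONITORED="no"
fi
```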

The bulk of the logic goes into the MonitoringSetup function. We were already passing $serviceName and $currentEnvironment into our script, so we used those values to determine how to define the $SITENAME and $SERVICENAME variables. We derived $ServerName from some formatted output of the hostname command. We check the environment name because we don’t want to enable monitoring in the testing environments, so we unset $JAVAAGENTMONITOR in those places. Otherwise, we export the environment variables the jar is looking for. The MonitoringSetup function is only called if we are starting one of our monitored services. Once all that is done, we just call our normal Java start command, and the correct value of $JAVAAGENTMONITOR is used (either empty, for a service or environment we aren’t interested in, or the agent flag in all other cases).
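For illustration, $ServerName could be derived along these lines. Our exact formatting was project-specific, so treat the hostname and commands below as an assumption:

```shell
# Illustrative assumption: take the short hostname and uppercase it.
hostname_output="webhost-prod-01.internal.example.com"   # stand-in for $(hostname)
ServerName="$(echo "${hostname_output}" | cut -d. -f1 | tr '[:lower:]' '[:upper:]')"
echo "${ServerName}"   # → WEBHOST-PROD-01
```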

With the code changes complete, we pushed out the script to the appropriate servers and restarted the services to enable monitoring. Now our client has insight into the application’s performance, responsiveness, load, and status, and they can view these metrics on application, microservice, and individual server levels. As we pass development ownership back to the client, they are equipped with the tools they need to support the application on their own.

A Note About Licensing

During this process, we explored various pricing strategies from the third-party vendor. As microservice architectures continue to be adopted more widely, vendors are having to restructure their pricing policies, which may still be holdovers from a more monolith-friendly era. Be sure to explore your options when it comes to microservices. In most cases, we were able to use microservice license pricing, which significantly reduced the number of licenses our client needed to buy. In its pricing model, this particular vendor distinguishes microservices from services requiring a full license based on the memory requirements of the service. Some of our not-so-micro services did not qualify for shared licensing, so be sure to involve representatives from the technical side of things during your negotiations to ensure you are purchasing the solution that meets your needs.

If you have questions about microservices, monitoring tools, or how we prioritize the long-term success of our clients at Credera, comment below or connect with us through our contact page and we’ll be happy to follow up with you.