In a recent Credera blog post, "Lambda, Lambda, Lambda," Vikalp Jain reviewed some internal success we have had leveraging AWS Lambda. He notes that by 2020 this could be one of the predominant methods of code deployment because it abstracts the hardware and execution environment away from a developer's purview.
While I was not personally involved in the OpenGive project Vik mentions, I have some experience with AWS Lambda and the serverless Java ecosystem. Many within this community proclaim the benefits of running on Lambda, as it represents a significant savings in overhead and provides a low-cost way to build RESTful, dynamic websites. Some fairly complex websites even run on Lambda and offer a great end-user experience while minimizing non-development overhead (A Cloud Guru, for example).
In my own adventures of leveraging Java with Lambda, I have learned that while the end state represents a leap forward, there are some important obstacles and shortcomings that you should be aware of before you take the plunge. I’ll walk through a few of those shortcomings and discuss ways to overcome them.
In developing any application, the difference between what your application is really doing and what you believe you have instructed it to do is an important distinction, and one that is typically sorted out with logging. AWS Lambda provides a simple custom Log4j appender (com.amazonaws.services.lambda.runtime.log4j.LambdaAppender) as a straightforward way to accomplish this. On a typical Java project, logs are dumped to a file or sent to a log analysis/search tool. In AWS's case, this appender sends logs to Amazon CloudWatch. While this is a common pattern for AWS, and it works fine for a single thread or a single request, it does not scale well. When you have thousands of often-simultaneous requests, trudging through the AWS interface to debug an application may not be reasonable. Knowing this going in, you may want to be prepared to hook up an ELK stack.
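As a sketch of the setup, a minimal `log4j.properties` on the classpath wires the appender in. The class names below follow the AWS documentation for the Log4j appender; the log level and pattern layout are my own choices:

```properties
# Route everything through the Lambda appender, which forwards to CloudWatch.
log4j.rootLogger = INFO, LAMBDA
log4j.appender.LAMBDA = com.amazonaws.services.lambda.runtime.log4j.LambdaAppender
log4j.appender.LAMBDA.layout = org.apache.log4j.PatternLayout
# Including the AWS request ID makes it possible to correlate interleaved requests.
log4j.appender.LAMBDA.layout.conversionPattern = %d{yyyy-MM-dd HH:mm:ss} <%X{AWSRequestId}> %-5p %c{1}: %m%n
```

Even with the request ID in every line, you are still grepping CloudWatch streams by hand, which is why shipping these logs onward to something searchable pays off quickly.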
For those familiar with AWS EC2, Amazon offers a variety of non-proportional instance types (more CPU, memory, I/O, or graphics) to fit your application. With Lambda this is not the case: the amount of CPU you get is locked to the memory allocation. From the AWS documentation, "In the AWS Lambda resource model, you choose the amount of memory you want for your function and are allocated proportional CPU power and other resources."
So how much CPU or memory do you need? Do you know? Do your needs even scale proportionally? Not every developer plans for this up front, but it is something that needs attention on Lambda. AWS does provide metrics for a running function, but I found local profiling and testing/tweaking to be an easy strategy for getting the response times I desired.
Lambda has limits. This isn't so much a knock against AWS as it is about knowing what to expect. Lambda is not designed for large frameworks. When thinking about Lambda, keep the function-as-a-service (FaaS) concept at the forefront and know that you are creating a method.
This isn’t necessarily a plug for a microservice, this isn’t a destination for a function of a multi-tiered service application, it is a method.
It is a method.
It is a method.
Now that the concept has sunk in: there are deployment package size limits and more, so be careful about which dependencies you pull in when building out your code. If you find you need Spring and Hibernate in your Java application, I'd suggest not using Lambda. Keep it simple and code on.
One of my biggest frustrations with Lambda was that there did not seem to be a common pattern for executing code locally. Lambda was designed for methods that get up and running quickly and execute fast, and frameworks such as the Serverless Framework will assist in deploying code and the associated resources quickly. For me, however, the turnaround time was not fast enough. What is needed is the ability to run code locally, make changes, see results, and iterate. It's a style of development that allows for rapid changes and quick discovery, aided by tools such as JRebel that hot-swap code and lower the overhead of deploy cycles.
While there is no "common" way of running Lambda locally, I found myself creating a local Lambda runner that spins up its own web server using the built-in Java HTTP server `com.sun.net.httpserver.HttpServer` and then handles each exchange by funneling it to the appropriate Lambda function. While this was simple enough, by merely mocking Lambda you lose much of the context AWS provides and may find that local results differ from deployed results.
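A stripped-down version of that runner looks something like the sketch below. The `handleRequest` method here is a hypothetical stand-in for a real `RequestStreamHandler`, reduced to a plain `(InputStream, OutputStream)` method so the example compiles without the AWS SDK on the classpath:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Minimal local Lambda runner: an HttpServer that funnels each request body
// into a Lambda-style stream handler.
public class LocalLambdaRunner {

    // Hypothetical stand-in stream handler: echoes the request body as JSON.
    static void handleRequest(InputStream in, OutputStream out) throws Exception {
        String body = new String(in.readAllBytes());
        out.write(("{\"echo\":\"" + body + "\"}").getBytes());
    }

    // Build (but do not start) a server that routes /lambda to the handler.
    static HttpServer create(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/lambda", exchange -> {
            try {
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                handleRequest(exchange.getRequestBody(), buffer);
                byte[] response = buffer.toByteArray();
                exchange.sendResponseHeaders(200, response.length);
                exchange.getResponseBody().write(response);
            } catch (Exception e) {
                // Surface handler failures the way API Gateway roughly would.
                exchange.sendResponseHeaders(500, -1);
            } finally {
                exchange.close();
            }
        });
        return server;
    }
}
```

In practice you call `create(3000).start()`, iterate against `http://localhost:3000/lambda` with curl or your front end, and `stop(0)` when done. The routing, error mapping, and context are all approximations, which is exactly the caveat above.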
Testing locally comes after running locally. Just as there was not a common pattern for running locally, I did not find a common pattern for testing code in the Lambda documentation (quite different from Azure Functions). That said, local testing is simple enough once you dig in a little: first, mock the environment; second, mock the context; third, invoke the handler.
I did find some helpful tools here, though they were limited. I ended up creating my own environment before the tests with the `@BeforeClass` JUnit annotation, initializing my own context object and seeding it in each test, and finally loading a JSON payload to invoke my handler and assert the result. Along the way I was able to use Amazon's Local DynamoDB libraries, which could use better documentation about running within JUnit. I also found Amazon's event package, `com.amazonaws.services.lambda.runtime.events`, which lets you formulate and mock external non-HTTP events.
While these are helpful, just as with running locally, without the proper context and harness of the AWS environment your mocks may diverge from the real thing, meaning a test could succeed locally and fail at runtime.
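The three-step pattern (mock the environment, mock the context, invoke the handler) can be sketched as plain Java. `TestContext` and `handleRequest` below are hypothetical stand-ins so the example compiles without JUnit or aws-lambda-java-core on the classpath; in a real suite the setup would live in an `@BeforeClass` method and the assertions in `@Test` methods:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the three-step local test, written as a plain main().
public class HandlerTestSketch {

    // Minimal stand-in for com.amazonaws.services.lambda.runtime.Context.
    interface TestContext {
        String getAwsRequestId();
    }

    // Hypothetical handler under test.
    static String handleRequest(Map<String, String> input, TestContext context) {
        return "Hello, " + input.getOrDefault("name", "world")
                + " (request " + context.getAwsRequestId() + ")";
    }

    public static void main(String[] args) {
        // 1. Mock the environment: in a real @BeforeClass this is where you
        //    would stand up Local DynamoDB or stub configuration lookups.

        // 2. Mock the context.
        TestContext context = () -> "test-request-id";

        // 3. Invoke the handler with a seeded payload and assert the result.
        Map<String, String> payload = new HashMap<>();
        payload.put("name", "Lambda");
        String result = handleRequest(payload, context);
        if (!result.equals("Hello, Lambda (request test-request-id)")) {
            throw new AssertionError("unexpected result: " + result);
        }
    }
}
```

The real `Context` interface carries much more (remaining time, memory limit, logger), so a fuller mock grows with whatever your handler actually touches.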
As we have discussed, Lambda is a method, and AWS provides two basic Java shapes for this method: one takes a raw InputStream/OutputStream combination, while the other offers POJO-based input/output types.
The benefit of the second is that any JSON payload delivered to your method can automatically be un-marshalled into a well-formed object ready for you to consume. This paradigm of operating on objects fits the FaaS model, as it is similar to how you would normally write a Java method.
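The two shapes, reduced to their essence, look like this. The real interfaces are `RequestStreamHandler` and `RequestHandler<I, O>` from aws-lambda-java-core; they are rewritten here as plain static methods (and the `Context` parameter is dropped) so the sketch stands alone:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class HandlerShapes {

    // Shape 1: raw streams. You parse and serialize the payload yourself.
    static void handleStream(InputStream in, OutputStream out) throws IOException {
        byte[] payload = in.readAllBytes();
        out.write(("received " + payload.length + " bytes")
                .getBytes(StandardCharsets.UTF_8));
    }

    // Shape 2: POJO in, POJO out. On Lambda, the runtime's built-in Jackson
    // un-marshals the JSON request into GreetRequest before your code runs.
    static class GreetRequest { public String name; }
    static class GreetResponse { public String message; }

    static GreetResponse handlePojo(GreetRequest request) {
        GreetResponse response = new GreetResponse();
        response.message = "Hello, " + request.name;
        return response;
    }
}
```

The POJO shape is clearly the more pleasant one to write and test, which makes what follows all the more frustrating.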
However, once connected to an API Gateway (you know, to actually leverage what you build), things start to change. API Gateway has "mapping templates" (a great idea) that transform an incoming request's body into more of a meta-object, one that can include information about the request itself. What does this mean? The POJO you expected is now nested inside this larger object, and your method and its input must change accordingly.
I was able to easily create a wrapper around each input object to address this, but it became tedious. I then started on a generic wrapper, since all of my template mappings were the same, but ran into a brick wall: AWS Lambda would not let me implement a generic wrapper, or a wrapper with a custom deserializer that could store JSON objects as strings. Their internal Jackson implementation takes precedence and stops such behavior, requiring a strictly defined object to un-marshal.
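For concreteness, the per-type wrapper approach looked roughly like the sketch below. The field names mirror a hypothetical mapping template that forwards the original body alongside request metadata; they are my assumptions, not an AWS-defined contract:

```java
import java.util.Map;

// Hand-rolled wrapper for one input type, matching a hypothetical mapping
// template. Multiply this by every handler and the tedium becomes clear.
public class GreetRequestWrapper {

    // The POJO you actually wanted before API Gateway got involved.
    public static class GreetRequest {
        public String name;
    }

    public GreetRequest body;
    public Map<String, String> headers;
    public Map<String, String> queryParams;
    public String httpMethod;

    // The handler now accepts the wrapper and digs the real payload out.
    public static String handle(GreetRequestWrapper wrapper) {
        return wrapper.httpMethod + " Hello, " + wrapper.body.name;
    }
}
```

Because Lambda's Jackson needs a strictly defined object, each wrapper must spell out concrete types for `body`; a `GreetRequestWrapper<T>` or a string-capturing deserializer is exactly what it refuses to accept.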
I found this odd and defeating, so I started looking around. What I found was AWS's Secure Pet Store sample, and I deconstructed its pattern. Essentially, it creates a single endpoint that takes an InputStream, converts it to an object, and forwards it to the appropriate handler. While this works, it is not a typical API pattern: each endpoint should serve its own requests rather than rely on another layer to absorb and route (as this is what API Gateway and Lambda are supposed to provide inherently).
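Reduced to its essence, the single-endpoint pattern is a dispatch table behind one stream handler. Everything in this sketch is hypothetical, including the crude `resource|body` framing, which in the real sample comes from the un-marshalled request object rather than string splitting:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// One endpoint, many handlers: the routing layer Lambda and API Gateway
// were supposed to provide, rebuilt inside a single function.
public class SingleEndpointRouter {

    private final Map<String, Function<String, String>> routes = new HashMap<>();

    public void register(String resource, Function<String, String> handler) {
        routes.put(resource, handler);
    }

    // Stands in for the lone stream handler the pattern exposes.
    public String handle(InputStream in) throws IOException {
        String raw = new String(in.readAllBytes(), StandardCharsets.UTF_8);
        // Naive framing: "resource|body". Assumes the delimiter is present.
        int split = raw.indexOf('|');
        String resource = raw.substring(0, split);
        String body = raw.substring(split + 1);
        Function<String, String> handler = routes.get(resource);
        return handler == null ? "404" : handler.apply(body);
    }
}
```

The duplication is the point: the routing, 404 handling, and payload framing all re-implement things API Gateway already does per endpoint.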
For now, I like having the wrappers better than a single destination, but it shouldn’t take much for AWS to allow a custom deserializer on payloads in the future.
Move Beyond the Problems
Much of this was not documented, or the documentation and examples were somewhat misleading, so my hope is that this information empowers you not to be distracted by these shortcomings. Instead, know where the problem spots are with Java on Lambda and move beyond them, creating and implementing for the next wave of web apps. Also remember that this is a relatively new technology and offering from AWS; it has only been in service for about 18 months. As usage and competition grow, I would expect these limitations to disappear quickly.
About: Dustin Talk is a Senior Architect at Credera. He enjoys discussing technology perspective and strategy, running projects with agility, implementing application integrity, and understanding how technology can deliver business value. Read more about Credera’s perspective.