Caching services

Optimizing services and their associated payloads is often an afterthought. That is not necessarily a problem, because caching solutions such as EhCache can easily be sprinkled into the service implementation at a later point in time. These can be added in as aspects expressed through annotations. Spring's caching annotations are one such implementation that works wonderfully when Spring is already part of your application stack. Otherwise, if need be, one can easily build equally functional custom aspect implementations to weave in caching at a later point in time.
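To illustrate the hand-rolled route, here is a minimal, runnable sketch of a caching decorator in plain Java (all names here are illustrative, not from the sample); a real implementation would add eviction policies, TTLs, and aspect weaving, or simply use EhCache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal hand-rolled caching decorator (illustrative only; a real solution
// would use EhCache or Spring's caching annotations as described above).
final class CachingDecorator<K, V> implements Function<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> delegate;

    CachingDecorator(Function<K, V> delegate) {
        this.delegate = delegate;
    }

    @Override
    public V apply(K key) {
        // Compute the costly result once; later calls are served from the cache.
        return cache.computeIfAbsent(key, delegate);
    }

    void evict(K key) {
        cache.remove(key);
    }
}
```

Wrapping a costly service call in such a decorator leaves the service implementation itself untouched, which is the same non-intrusive property the annotation-driven approach gives you.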

Basically, with the help of caching, service implementations can non-intrusively control the lifecycle not only of the costly domain objects they construct, but also of the caches holding the service payloads generated from those domain objects, such as REST JSON representations.


All GET requests to a REST resource are cached in an EhCache key-value store, with the REST resource URI as the key and the corresponding JSON HTTP payload as the value. A key-value pair in this cache looks something like:

Key:   /rest/services/order/customer/1
Value: {"id":"customer1", "product":"iPhone 5", "status":"shipped", ... }

Key:   /rest/services/customer/2
Value: {"id":"customer2", "name":"vinay nair", ... }

The open source EhCache servlet filter does pretty much everything required for caching REST payloads into a key-value (KV) store (named "allRestJsonResponses" in the sample). A minor extension to this EhCache servlet filter restricts it to caching GET requests only. Here is the snippet from web.xml that shows both the EhCache filter and the CXF servlet that acts as the REST services endpoint:
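A sketch of the relevant web.xml fragment might look like the following (the filter class name here is hypothetical; the actual sample's subclass of EhCache's SimplePageCachingFilter may differ):

```xml
<!-- GET-only extension of EhCache's SimplePageCachingFilter
     (sample.GetOnlyCachingFilter is an assumed name) -->
<filter>
    <filter-name>RestCachingFilter</filter-name>
    <filter-class>sample.GetOnlyCachingFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>RestCachingFilter</filter-name>
    <url-pattern>/rest/*</url-pattern>
</filter-mapping>

<!-- CXF servlet acting as the REST services endpoint -->
<servlet>
    <servlet-name>CXFServlet</servlet-name>
    <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>CXFServlet</servlet-name>
    <url-pattern>/rest/*</url-pattern>
</servlet-mapping>
```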



With the standard JAX-RS programming model, one can easily map REST resource URIs to service implementations. Through the annotations provided by the JAX-RS specification, a service implementation is aware of its corresponding REST representation(s), so these backing service implementations have easy access to the natural keys under which REST payloads are cached. Therefore, for any PUT/DELETE/POST operation that changes a REST resource, one can easily remove the specific stale elements from the REST cache (named "allRestJsonResponses" in the sample). Here is the sample code that does exactly that:


     @PUT
     @Path("/order/customer/{customerID}")  // path assumed from the cache key below
     @CachePut(value = "customerAndOrders", key = "#customerID")
     @CacheEvict(value = "allRestJsonResponses", key = "'/rest/services/order/customer/'+#customerID", beforeInvocation = false)
     public Customer addOrder(@PathParam("customerID") String customerID,
               Order order) {
         // ... update the domain model and return the updated Customer ...
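The effect of those two annotations can be mimicked in plain Java; this runnable sketch (class and field names are illustrative, not from the sample) performs the same put-into-domain-cache plus evict-stale-payload steps on a write:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Plain-Java mimic of the @CachePut/@CacheEvict pair above
// (class name, fields, and JSON shape here are illustrative).
final class OrderServiceSketch {
    final Map<String, String> customerAndOrders = new ConcurrentHashMap<>();    // domain-object cache
    final Map<String, String> allRestJsonResponses = new ConcurrentHashMap<>(); // REST JSON payload cache

    String addOrder(String customerID, String order) {
        String updated = "{\"id\":\"" + customerID + "\",\"product\":\"" + order + "\"}";
        // @CachePut equivalent: refresh the domain-object cache entry under its natural key
        customerAndOrders.put(customerID, updated);
        // @CacheEvict equivalent: drop the now-stale JSON payload keyed by the resource URI
        allRestJsonResponses.remove("/rest/services/order/customer/" + customerID);
        return updated;
    }
}
```

The next GET for that URI misses the payload cache, is recomputed from the refreshed domain objects, and is cached again by the servlet filter.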

So, with the combination of a servlet filter and Spring's caching annotations, one can cache both service payloads and domain objects using the same backing caching solution.
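For completeness, the Spring side of such a setup is typically wired along these lines (a sketch, assuming the Spring cache namespace is declared and an ehcache.xml is on the classpath; bean ids are illustrative):

```xml
<!-- Enable @CachePut/@CacheEvict processing, backed by EhCache -->
<cache:annotation-driven cache-manager="cacheManager"/>

<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
    <property name="cacheManager" ref="ehcache"/>
</bean>

<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="configLocation" value="classpath:ehcache.xml"/>
</bean>
```

Because the servlet filter and the annotations resolve cache names against the same EhCache CacheManager, both layers share one caching infrastructure.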


The size of the data being cached is often an issue. With Terracotta's BigMemory, however, scaling the solution both vertically (on one server) and horizontally (across an array of servers) is straightforward and involves no code changes, not even changes to the annotations. One can simply configure the cache definitions to leverage Terracotta off-heap memory to scale up a specific application server instance caching service payloads.

<!-- allocate 3gb of off-heap memory on the local app server instance for caching domain objects such as orders -->
<cache name="customerAndOrders" maxEntriesLocalHeap="100" maxBytesLocalOffHeap="3g" statistics="true"/>

<!-- allocate 1gb of off-heap memory on the local app server instance for caching REST JSON payloads -->
<cache name="allRestJsonResponses" maxEntriesLocalHeap="100" maxBytesLocalOffHeap="1g" statistics="true"/>

It is equally easy to hook the caches up to a Terracotta Server Array so that they are distributed and more than one instance can leverage the cached service payloads, thereby providing the ability to scale out the services caching infrastructure. Here are the two minor changes to the cache definitions that distribute them over a server array:

<!-- Terracotta server array composed of 2 stripes that work together to present a combined in-memory data store -->
<terracottaConfig url="localhost:9510"/>

<cache name="customerAndOrders" maxEntriesLocalHeap="100" maxBytesLocalOffHeap="3g" statistics="true">
    <!-- distribute this cache so that all instances can share the cached data -->
    <terracotta/>
</cache>

<cache name="allRestJsonResponses" maxEntriesLocalHeap="100" maxBytesLocalOffHeap="1g" statistics="true">
    <!-- distribute this cache so that all instances can share the cached data -->
    <terracotta/>
</cache>

Configuring a Terracotta Server Array is equally easy; instructions can be found on the Terracotta web site.

All in all, without tying oneself to costly XML edge appliances for caching, or hacking up a software caching solution that works only for services, one can leverage BigMemory to build a caching solution that acts as a hardware-agnostic, scalable key-value store for application data as well as for service payloads.

Sample Code
See the sample code, which makes use of Spring with CXF, EhCache, and Terracotta's BigMemory, to illustrate the above technique for caching services as well as domain objects and other data cached by the backing service implementation.