By Devarsh Suthar

How to Accelerate Your Microservice APIs with CloudFront: A Comprehensive Guide

Introduction


In the era of microservices, where applications are broken down into smaller, independently deployable services, the need for speed and efficiency has never been more critical. This post explores how leveraging Amazon CloudFront can supercharge your microservice APIs, significantly reducing latency for clients and improving user experience.


Understanding the Problem


Leaving an application unoptimized can lead to multiple issues, both within the service and downstream of it. In this article, we will explore how to optimize microservice API performance at the service level.


To grasp the problem, let’s consider an example. Imagine an e-commerce web application with a service named product-service. This service manages all product details, including images, descriptions, and titles. Whenever a user accesses the product page, the service retrieves image data from storage and transmits it across the network to the requesting client. This process is repeated every time a user, whether new or returning, requests the product details. This approach presents several challenges:


  • Redundant API calls

  • High resource utilization

  • Degraded performance

  • Increased costs

  • Increased latency


[Diagram: microservice architecture showing interconnected services, databases, and APIs for scalable system design.]

How We Reduced Tons of API Calls by Using CloudFront


Our scenario involved fetching images and audio files stored on AWS EFS. We had already optimized the service by mounting the EFS file system on the EC2 instance over a VPC connection to minimize response times. However, the primary motive for introducing CloudFront was to decrease the file transfer time from the service to the client.



[Diagram: Amazon CloudFront CDN with global edge locations caching API responses, reducing latency and improving API performance for users worldwide.]

The diagram illustrates CloudFront's mechanism (here, CDN = CloudFront). When a client request arrives, the API gateway routes it to the appropriate microservice through internal logic. For cached content, CloudFront fulfills the request before it ever reaches the actual microservice, saving the latency of fetching the file and converting it into an HTTP response. Origin resources are used only for fresh requests that are not yet cached.
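The edge-caching behavior described above can be sketched in a few lines. This is an illustrative model only, not AWS SDK code; the function names and the in-memory dict cache are stand-ins for CloudFront's edge cache:

```python
# Minimal sketch of the edge-cache flow described above: repeated requests
# for the same path are served from the cache, so the origin (the actual
# microservice) is hit only for fresh, uncached requests.

origin_calls = 0

def fetch_from_origin(path: str) -> bytes:
    """Stand-in for the microservice fetching a file and building a response."""
    global origin_calls
    origin_calls += 1
    return f"contents of {path}".encode()

# Stand-in for CloudFront's edge cache.
cache: dict[str, bytes] = {}

def cdn_get(path: str) -> bytes:
    """Serve from the cache when possible; fall through to the origin otherwise."""
    if path not in cache:
        cache[path] = fetch_from_origin(path)
    return cache[path]

# Three requests for the same image reach the origin only once.
for _ in range(3):
    body = cdn_get("/products/42/image.png")
```

The point of the sketch is the ratio: the origin does work once per unique path, no matter how many clients request it.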


After seeing a tremendous improvement on the media API, we researched whether we could use the CloudFront CDN on a non-static API, meaning an API whose response might change over time. We had one heavily used API in our service that we had already optimized at the service level, using a Redis cache cluster as a caching layer: only on a cache miss would the service go to the DB and fetch and prepare the data. The API latency was already in the single digits, but now that we had introduced a CDN on top of our service, we wanted to remove the Redis cluster dependency and leverage the CDN instead.


There was a big “BUT” in this case. Because this API was non-static, not every API response could be cached, as responses might change over time. So we needed a way to cache only the API responses that would stay static throughout. We buckled up and brainstormed the issue, because we had no indicator or attribute in the DB to identify whether a response item was mutable. Thankfully, we found a way: using another microservice's APIs, we created a method, shouldCacheResponseItem, that returns a boolean indicating whether an item's response is safe to cache.
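A rough sketch of what such a check might look like. The field names (`status`, `editable`) and the rule itself are assumptions for illustration; the article does not describe the real logic, which lived behind another microservice's APIs:

```python
# Hypothetical shouldCacheResponseItem-style check. The item fields and the
# rule below are illustrative assumptions, not the actual production logic.

def should_cache_response_item(item: dict) -> bool:
    """Return True only for items whose API response will stay static."""
    # Example rule: an item is safe to cache once it is finalized
    # and no longer open for edits.
    return item.get("status") == "finalized" and not item.get("editable", False)
```

Whatever the actual rule, the important property is that it is conservative: when in doubt, return False, so a mutable item is never frozen into the CDN cache.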


The next question was how the CDN would know when to keep a response in the cache and when to skip it. This step was easy, since CloudFront honors standard response headers:

ShouldCache | Header (key: value)
true        | Cache-Control: public, max-age=604800
false       | Cache-Control: no-cache
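Mapping that decision to a header value is a one-liner in the service's response path. This is a minimal sketch, assuming a simple helper in the origin service (the function name is ours, not from the article):

```python
# Map the shouldCache decision to the Cache-Control header values from the
# table above. 604800 seconds = 7 days.

def cache_control_header(should_cache: bool) -> str:
    """Return the Cache-Control value the origin should attach to its response."""
    if should_cache:
        # Static item: let CloudFront serve it from the edge cache for a week.
        return "public, max-age=604800"
    # Mutable item: tell CloudFront not to reuse a stored copy without
    # revalidating against the origin.
    return "no-cache"
```

With this in place, the origin stays in control: CloudFront simply obeys whatever Cache-Control value each response carries.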


The Improvement


Implementing CloudFront reduced the overall request load by 35% and conserved server resources while maintaining the same throughput. We were experiencing around ~8 million requests daily to fetch media files; after the CloudFront implementation, 99% of these requests are served from the CloudFront cache. The non-static API also achieves a cache-hit ratio of about 70%. This improvement has enabled our servers to handle a higher load without additional costs, and API call latency has improved.
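As a quick sanity check on those numbers: with roughly 8 million media requests per day and a 99% cache-hit ratio, only about 80,000 requests per day still reach the origin servers (the exact daily figure is ours, derived from the stated percentages):

```python
# Back-of-the-envelope check of the load reduction described above.
daily_media_requests = 8_000_000
media_hit_ratio = 0.99  # fraction of media requests served from the CloudFront cache

# Requests that still fall through to the origin each day.
origin_media_requests = daily_media_requests * (1 - media_hit_ratio)
print(round(origin_media_requests))  # roughly 80,000 requests/day
```

That is a two-orders-of-magnitude drop in origin traffic for media files, which is where the resource and cost savings come from.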


Conclusion


Adopting CloudFront for our microservice APIs not only streamlined our resource utilization but also enhanced the scalability and performance of our services. By intelligently caching both static and dynamic content, we achieved significant reductions in latency and cost. This journey underscores the importance of strategic optimization in cloud environments, proving that with the right tools and approaches, even high-traffic services can be made more efficient and cost-effective.
