Our story begins like most stories in the SRE/Devops culture, with a simple request: “Two of our applications need a cost-effective solution for interactive maps.” Our first thought (of course) was to use Google Maps or MapBox. These are popular, easy-to-use, well-supported and documented services.
As we investigated though, we realized that we had some slightly different needs; this led to a number of design choices that we wanted to share.
In the last year and a half, we’ve launched several products that included interactive maps. For example, DataVis is a storytelling tool for presenting a data-driven narrative about a market; the natural setting for this data is a map.

Visualizing data for 10 submarkets in L.A.
When we started developing these products, we quickly realized that there are multiple components that make interactive maps work:

1. Map tilesets
2. A tile server
3. Map styles
4. A map renderer
We looked at multiple services and began using Mapbox for 1-3 and Mapbox GL JS for 4. Mapbox GL JS is an interactive map renderer: it takes map styles conforming to the Mapbox Style Specification, applies them to vector tiles per the Mapbox Vector Tile Specification, and renders the result using WebGL.
While Mapbox was impressive and exceptionally easy to use, it was missing a key feature: offline maps for web applications. At the time, offline maps were in beta for mobile apps only; web applications required self-hosting the tilesets and the tile server.
As a result, we ended up replacing 1-3 with open-source or custom components:

1. Self-hosted map tilesets
2. The open-source TileServer
3. Custom map styles
Cue our Devops team: how do we host our map tilesets, TileServer, and map styles in a robust, scalable way, with automated deployments and updates?
Upon hearing that we needed to host our own map components, I said, “No problem! Let’s do it the same way we always do—EC2 instances and Auto Scaling Groups!”
Unfortunately, although the TileServer’s documentation includes instructions for installing from npm, after attempting to create an AMI from the npm installation I realized it would be difficult to install the virtual display drivers the TileServer uses and have them work well with the NodeJS module.
An alternative way to install the TileServer is as a Docker container. This saves us the time and effort of creating the AMI itself, but containerized services had not previously been part of our infrastructure.
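As a rough sketch of what the containerized route looks like, the TileServer publishes an official Docker image that serves any tilesets mounted into its data directory. The image name, port, and local paths below are illustrative, not taken from our deployment:

```shell
# Run the TileServer from its published Docker image (illustrative invocation).
# Any .mbtiles tilesets placed in ./data are picked up automatically;
# the image name shown here is the one the project published at the time.
docker run --rm -it \
  -v "$(pwd)/data":/data \
  -p 8080:8080 \
  klokantech/tileserver-gl
```

The container bundles the virtual display drivers the server needs for raster rendering, which is exactly the part that made the npm-on-AMI approach painful.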
How do you design the deployment of a containerized service on Amazon? Where do you even begin?
Our options were:

1. AWS Fargate, with fully managed infrastructure
2. ECS on self-managed EC2 instances
Although AWS Fargate removes the need to create and manage your own EC2 instances, we went with option 2, self-managed EC2 instances: at the time, it was impossible to automatically create an EFS mount point and Docker shared volumes on Fargate-managed infrastructure.
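On self-managed instances, the EFS mount point can be created at boot so every container instance shares the same tileset storage. A minimal user-data sketch for Amazon Linux, assuming a hypothetical filesystem ID and mount path:

```shell
# Illustrative EC2 user-data fragment: mount a shared EFS filesystem at boot
# so containers on this instance can bind-mount it as a Docker volume.
# fs-12345678 and /mnt/efs are placeholders, not our real values.
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs
# Persist the mount across reboots.
echo "fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0" >> /etc/fstab
```

This is the piece Fargate could not automate for us at the time, and it is what tipped the decision toward EC2-backed ECS.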
It’s always good practice to “expect the best but prepare for the worst.” That’s why it’s important to make sure the cluster components are scalable and can recover automatically if something goes wrong.
For those reasons, and to make the application infrastructure more resilient to varying request volumes and load, we created two autoscaling layers:

1. EC2 Auto Scaling, which scales the cluster’s container instances
2. ECS Service Auto Scaling, which scales the number of running TileServer tasks
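The two layers can be sketched with the AWS CLI; the cluster, service, and capacity numbers below are illustrative placeholders:

```shell
# Layer 1: EC2 Auto Scaling group bounds for the container instances
# (group name and sizes are placeholders).
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name map-cluster-asg \
  --min-size 2 \
  --max-size 6

# Layer 2: register the ECS service's desired task count as a scalable target
# (cluster/service names and capacities are placeholders).
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/map-cluster/tileserver \
  --min-capacity 2 \
  --max-capacity 10
```

Scaling policies (e.g., on CPU or ALB request count) would then attach to each layer; the point is that instances and tasks scale independently.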
This diagram demonstrates the ECS cluster components and their relationships:

The red-colored components are the main ones: ALB, ECS, and EFS. The rest are necessary sub-components, which were also automated to ensure the ECS cluster functions properly.
Idan Shifres is a Sr. Devops Engineer at CBRE Build and a Devops Evangelist. Between finding the best Devops delivery practices and developing Terraform modules, you can probably find him at meetups or bars looking for the next IPA to add to his list.