AWS Proton

Proton is one of the many new services and service improvements announced by AWS at re:Invent 2020, and it is currently in preview.

It is designed to enable seamless cooperation between infrastructure teams and their development counterparts when delivering microservices, primarily through Lambda or Fargate. It does this by letting each team focus on its own area of expertise: the developers develop, whilst the infrastructure people handle the rest.

The problem

Historically, this is a challenge we have felt at KCOM. Like many companies, we have separate teams for cloud infrastructure and for application development and maintenance; although they work very closely together, they have different skillsets. This works well when the lines between infrastructure and application are clearly defined, but rapidly falls down as microservices and cloud-native technologies become more prevalent. When we built our first primarily serverless platform, we had to decide where to draw the line, and the cloud team eventually ended up writing and then maintaining the bulk of the application code because it was so intrinsically linked with the infrastructure. That approach risks making inefficient use of people's skills, or relying on a few generalists with a broad skillset, which brings problems of its own.

AWS has released services in the past to try to tackle this kind of issue, normally focusing on making developers' lives as easy as possible by abstracting away the infrastructure requirements; this is the approach taken by services like Elastic Beanstalk, CodeStar, Lightsail, CodeDeploy and others. However, in most of these cases the abstraction came at the cost of some flexibility, or else still required the developers to know more about AWS infrastructure than they should have to.

Proton as a solution?

Proton approaches this in quite a different manner, more akin to a specialised service catalogue. It enables the infrastructure team to create templates of environments and services, built on parameterised CloudFormation. These templates can contain any AWS resources that can be created by CloudFormation (which, if you include custom resources, is pretty much all of them) and they can be as complex or as simple as required. The infrastructure team then defines the inputs required from the developers and publishes the template for use.
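
To make that concrete, here is a minimal sketch of the register-then-publish flow, assuming the boto3 "proton" client; the template name, S3 bucket and bundle key are hypothetical placeholders rather than a real setup:

```python
import boto3

proton = boto3.client("proton")

# Register the template itself; at this point it is just metadata.
proton.create_environment_template(
    name="shared-vpc-env",                    # hypothetical template name
    displayName="Standard VPC environment",
    description="Shared VPC, subnets and cluster for microservices",
)

# Create a version from a template bundle (CloudFormation plus an input
# schema) that is assumed to have been packaged and uploaded to S3.
version = proton.create_environment_template_version(
    templateName="shared-vpc-env",
    source={"s3": {"bucket": "my-proton-templates",     # hypothetical bucket
                   "key": "shared-vpc-env/v1.tar.gz"}},
)["environmentTemplateVersion"]

# New versions start as drafts; publishing makes this one visible to
# developers in the catalogue.
proton.update_environment_template_version(
    templateName="shared-vpc-env",
    majorVersion=version["majorVersion"],
    minorVersion=version["minorVersion"],
    status="PUBLISHED",
)
```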

Now a developer can browse the available templates and create environments and services as they require, knowing that the infrastructure behind them will be consistent. There are also built-in integrations with popular source-control repositories that link the application code to CodePipeline, enabling fast, automatic deployments with minimal interaction with AWS itself. This automation is of course fully customisable, so the team's best-practice deployment strategies can still be followed.
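
As a sketch of the developer-side experience, again assuming the boto3 "proton" client, the following creates a service from a published template and links it to a Git repository; the template name, spec inputs and repository details are all hypothetical placeholders:

```python
import boto3

proton = boto3.client("proton")

# The spec answers the inputs the template author declared; Proton uses it
# to provision both the service instances and the deployment pipeline.
spec = """\
proton: ServiceSpec
pipeline:
  unit_test_command: "npm test"
instances:
  - name: "orders-prod"
    environment: "shared-vpc-env-prod"
    spec:
      desired_count: 2
"""

proton.create_service(
    name="orders-service",
    templateName="fargate-web-service",       # a published service template
    templateMajorVersion="1",
    spec=spec,
    # The source-control integration: a CodeStar Connections link to the
    # application repository that the generated pipeline builds from.
    repositoryConnectionArn=(
        "arn:aws:codestar-connections:eu-west-1:111122223333:"
        "connection/example-id"
    ),
    repositoryId="my-org/orders-service",
    branchName="main",
)
```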

One of the most interesting features of Proton from a maintenance point of view is template versioning. As templates are updated to reflect general improvements and changes in company policy, Proton tracks both minor and major versions. This means that live services are not automatically updated to the latest template, but can be updated as and when the development team deems it appropriate, enabling proper testing and avoiding unexpected production changes.
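
For example, opting a live environment into the latest minor version of its template might look something like this sketch (again assuming the boto3 "proton" client; the environment name is a hypothetical placeholder):

```python
import boto3

proton = boto3.client("proton")

# "MINOR_VERSION" moves to the newest minor version within the current
# major version; "MAJOR_VERSION" would opt into potentially breaking
# template changes instead, and "NONE" leaves the version alone.
proton.update_environment(
    name="shared-vpc-env-prod",
    deploymentType="MINOR_VERSION",
)
```

Because nothing moves until a call like this is made, a team can try a new template version in a test environment first and promote it to production on their own schedule.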

From a practical standpoint, Proton is targeting quite a specific audience: an organisation with distinct application development and infrastructure teams that is looking for a brand-new mechanism for deploying its microservices. Proton is arguably not particularly compatible with organisations' existing pipelines, and it relies heavily on CloudFormation as the Infrastructure-as-Code tool of choice.

Conclusion

Ultimately, Proton seems like a very interesting new tool that strikes a balance between enabling developers to focus on development and retaining the flexibility that comes from a dedicated, skilful cloud infrastructure team. Many organisations may already have their own tools to solve these problems, or may lack the cloud skills to make this work better than some of the services mentioned above, but for others this tool will be a perfect fit, with the potential to accelerate their DevOps journey. For the right piece of work, I can absolutely see the potential, and KCOM may evaluate the service in the coming months.