This article is interesting and correctly points out something which I think is often missed when considering the risk of moving applications into the cloud. What do you do when there is a platform vulnerability which the provider is not addressing?
Fundamentally, many cloud providers such as Google, Amazon, and probably even Microsoft now, offer services with a bit of a “you get what you get” mentality. Support is often email-only and, as the reporter of the vulnerability referenced in this article found out, there are sometimes no pathways at all to reach the parts of the organization that handle more specialized issues.
Security should be a big deal to Amazon. Indeed, I think it probably is a big deal to them internally. They have built many mechanisms into their platform to provide security and stability in many ways – it’s really quite amazing. I do hope they improve this particular situation, but it makes me think about the ways this might be handled if you had an application in your corporate datacenter vs. running with a cloud provider.
First of all, because your in-house system is likely not multi-tenant, the risk of MITM attacks or observation of your messages is much lower. Second, if you care enough and are in control of the underlying OS, there is probably a lot you can do to mitigate the risk: additional monitoring, perhaps even secure links between nodes and applications. While HTTPS is an option in this case, I’m not sure that the cloud gives you the same monitoring capabilities that you would have in-house.
Now, this is by no means a statement that we should abandon Amazon’s services; these are the sorts of things which should mature over time and get worked out of the system. I do think it points out that a service provider like Amazon has to raise the bar a bit. Because they offer customers a little less control over their environment, they need to take extra steps to respond to issues such as this. Response time is one metric, but so is offering workarounds which, while imperfect, help mitigate the risk.
I wonder how many AWS developers knew that using HTTPS would have mitigated this issue for the 7.5 months it existed in production? Did Amazon communicate to its customers that this issue existed? I’m guessing at least a few of those developers, if they were concerned enough, could have built in some additional checks to help minimize the risk.
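To make the idea of “additional checks” concrete, here is a minimal sketch of what a concerned developer might have built client-side: sign each outgoing request with an HMAC so tampering is detectable, and refuse to send anything over plain HTTP. This is purely illustrative, using a made-up `sign_request` helper and a hypothetical endpoint; it is not Amazon’s actual signing scheme.

```python
import hmac
import hashlib
from urllib.parse import urlparse

def sign_request(secret_key: str, method: str, url: str, body: bytes = b"") -> str:
    """Compute an HMAC-SHA256 signature over the request so that a
    tampered-with message can be detected by the receiving side.

    Illustrative only -- not AWS's real request-signing algorithm.
    """
    parsed = urlparse(url)
    # Refuse to send a signed request over plain HTTP at all.
    if parsed.scheme != "https":
        raise ValueError("refusing non-HTTPS endpoint: " + url)
    # Build a canonical string covering the parts an attacker might alter.
    canonical = "\n".join([method.upper(), parsed.netloc, parsed.path or "/"]).encode() + b"\n" + body
    return hmac.new(secret_key.encode(), canonical, hashlib.sha256).hexdigest()

# Hypothetical usage against an imagined service endpoint:
sig = sign_request("my-secret", "GET", "https://queue.example.amazonaws.com/list")
```

The point isn’t the specific algorithm; it’s that a client-side guard like the HTTPS check above costs a few lines and would have closed the window on its own, without waiting for the provider.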
Having a publicized contact path for security issues seems obvious, but apparently was not in place either. This will very likely get fixed, but I wonder if other organizations have similar oversights?
With every new technology come bumps in the road, and this is just one of them. Truth be told, it doesn’t sound like the issue was heavily exploited, and it was probably low risk as long as knowledge of it was kept quiet. Obscurity isn’t ideal, but its effect is real and meaningful, even if unpredictable and easily overturned.
I think security professionals who watch these trends and problems, and learn from them, will see a pattern emerge for what security architecture around a service looks like. It’s a little different from your traditional datacenter (a lot different in some cases), and those who figure out that difference and turn it into a playbook will probably be pretty valuable.