
How to Take AWS Cloud Security Seriously: The 6 Worst Mistakes to Avoid

Learn how to take AWS cloud security seriously by avoiding the 6 worst mistakes. Protect sensitive data and avoid common misconfigurations. 
Przemek Królik

Mar 24, 2023 | 10 min read


Many of today's businesses rely on AWS computing and storage services. This makes security and compliance a major concern – the risk of making a configuration mistake and inviting bad actors into your network is very high. In fact, according to estimates, 46% of buckets on Amazon's Simple Storage Service (S3) are misconfigured.

Sensitive data is increasingly being stored in the cloud, so how your organization secures its cloud instances and infrastructure is critical. In this article, we're going to look at the six most common AWS security mistakes.

As a CTO, I have worked with MANY (really, MANY) different cloud architectures; many of these were either a pleasure to work on or a nightmare to deal with. I love cloud solutions. They give you so much freedom to set everything up exactly as you want, and yet they are complicated enough that the setup itself becomes a hindrance, especially for newcomers or small, fast-moving teams working under a time crunch.

How does AWS care about security?

AWS has a reputation as a secure cloud platform. However, no platform is inherently safe, and cloud users shouldn't rely solely on Amazon to take care of every security risk. Instead, cloud security must be viewed as a shared responsibility between the platform and its users. For example, Amazon secures the server hardware your instances run on, but you are responsible for securing everything you build on top of that hardware. AWS calls this the Shared Responsibility Model.

To secure practically any workload, you can use a wide range of built-in and third-party security tools. However, everything should start with proper configuration.

The importance of taking proper care of AWS security configuration

AWS's EC2 infrastructure-as-a-service platform puts more responsibility on users than a platform-as-a-service such as Elastic Beanstalk does. But user misconfigurations can create security vulnerabilities on any cloud service, which is why user error lies behind all of the common security problems we'll discuss here.

In the cloud, you can create a new server in minutes without waiting for IT. However, that same ease of deployment, combined with the democratic nature of cloud management, can turn into a security nightmare: a simple configuration mistake, oversight, or administrative error can compromise your company's cloud environment.

Mistake 1: Not taking encryption seriously

Many users fail to activate the strong encryption that Amazon's data storage services offer.

Encryption at rest and in transit makes data worthless to bad actors – even if it is leaked or intercepted.

There are many reasons why organizations do not enable encryption in their AWS infrastructure, ranging from finding it too difficult to not realizing it is essential.

Why it matters

Relational Database Service (RDS) instances are often created without encryption, and EC2 workloads often run on unencrypted Elastic Block Store (EBS) volumes. In such situations, a data breach is just a matter of time.

How to avoid these mistakes

Data in S3 should be protected, and traffic between EC2 instances should be encrypted and routed through private subnets. Amazon offers tools that make this easy. However, implementing encryption incorrectly can be as bad as having no encryption at all. If administrators are concerned about managing keys, they should let AWS manage them; there is always the option of migrating to the organization's own public key infrastructure later.

S3 provides various server-side encryption options alongside key management solutions that simplify the use of encryption keys, but users can still choose to store data unencrypted. Elastic Block Store offers encryption for data at rest, data in transit between an instance and its volume, and snapshots, yet users can still select unencrypted volumes, creating a risk of accidental misconfiguration that exposes sensitive data.
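If you use the AWS SDK, turning these protections on takes only a couple of calls. Here is a minimal boto3 sketch, assuming a placeholder bucket named `example-data-bucket` and credentials already configured; it enables default server-side encryption on the bucket and switches on EBS encryption by default for the current region:

```python
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# Enforce default server-side encryption (SSE-S3) for every new object
# in the bucket; swap in "aws:kms" plus a KMSMasterKeyID to use KMS keys.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Make every newly created EBS volume in this region encrypted by default,
# so a forgotten flag no longer produces an unencrypted volume.
ec2.enable_ebs_encryption_by_default()
```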

Mistake 2: Configuring S3 Buckets with Public Availability

AWS S3 provides users with reliable and inexpensive storage. A user selects a region to store their data, creates a bucket within that region, and then uploads objects to that bucket. S3 stores each object on multiple devices at multiple locations within the chosen region, and once an object has been stored, S3 immediately detects any loss of redundancy and repairs it.

S3 buckets are private by default, but an administrator can make a bucket public. The trouble starts when a user uploads private content to a public bucket.

A bucket can be, and often is, configured so that anyone on the internet can access its data. Sometimes this is a genuine mistake, but it usually happens because a user finds it convenient to bypass access controls.

Why it matters:

S3 configuration errors have, on many occasions, exposed highly sensitive data on the open internet, and misconfigured buckets remain the cause of numerous data leaks. S3 is an object storage service: data is kept in buckets, and access permissions are configured per bucket. By default, buckets are private, so only accounts with explicit permission can access them.

AWS users can also control who has access to S3 buckets and objects and which permissions they are granted. Using the AWS console, access to a bucket can be given to authenticated users (anyone with an AWS account) or to everyone (anonymous access).

These grantees can be given list, upload/delete, view, and edit permissions. Users can also write custom bucket policies, which provide greater flexibility than the console options.

How to avoid these mistakes

Granting these permissions may or may not be problematic, depending on the bucket and the objects it contains. However, it is essential to review any bucket with "everyone" permissions, since such anonymous access could lead to data being stolen or compromised.

According to Symantec's 2018 report, poor configuration resulted in the theft or leakage of 70 million records from S3 buckets. Also keep in mind that you pay for outbound traffic from your S3 bucket ;)
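The most reliable guardrail here is S3's Block Public Access feature. A minimal boto3 sketch, again using a placeholder bucket name, that shuts off every form of public access at the bucket level:

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings so neither ACLs nor
# bucket policies can expose this bucket's objects to the open internet.
s3.put_public_access_block(
    Bucket="example-data-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # neutralize existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # limit access to AWS principals
    },
)
```

The same settings can also be applied account-wide, which is worth doing unless you knowingly host public content.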


Mistake 3: Connecting EC2 Instances Directly to the Internet

There are legitimate reasons to assign an EC2 instance a public IP address. However, in most cases, instances should be deployed on an internal network, with access limited to other resources under your control.

How to avoid this mistake:

For example, if you host a web application's database server on an EC2 instance, it should not be directly connected to the internet. Access should be mediated by firewalls and limited to the web or application servers that need to request data.
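As an illustrative sketch, here is how you might launch such a database instance into a private subnet with boto3; the AMI ID, subnet ID, and security group ID are placeholders you would replace with your own:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch the instance into a private subnet and explicitly decline a
# public IP, so it is reachable only from inside the VPC.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",  # private subnet
            "Groups": ["sg-0123456789abcdef0"],      # app-tier-only SG
            "AssociatePublicIpAddress": False,
        }
    ],
)
```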

Mistake 4: Leaving Insecure Ports Open

Software services run on a server and connect to the network via a numbered port. Many services use a standard port: SSH on port 22, HTTP on port 80. Several services are widely recognized as insecure because they send unencrypted data or contain software vulnerabilities; FTP (21), Telnet (23), and SNMP (161) fall into this category.

How to avoid this mistake:

Ideally, these services should not run on EC2 instances at all, and AWS security groups and network access control lists should block the associated ports. However, if you do need them, restrict them to your own IP address and/or allow-list them as required.
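A hedged boto3 sketch of that restriction, assuming a placeholder security group ID and office IP: it opens SSH to a single /32 address rather than to the whole internet.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from one known address instead of 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.10/32",  # placeholder office IP
                    "Description": "SSH from office only",
                }
            ],
        }
    ],
)
```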

Mistake 5: Not Using Multi-Factor Authentication (MFA)

AWS's Identity and Access Management (IAM) service allows authentication with only a username and password, which is a potential security risk.

How to avoid this mistake:

It is recommended that all users take advantage of multi-factor authentication. MFA requires users to provide an additional authentication factor, such as a one-time code sent to a mobile device or a hardware security key.

MFA mitigates the risk that leaked passwords or brute-force attacks could give bad actors access to your AWS account.
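Beyond enabling MFA per user, you can enforce it with an IAM policy. The sketch below, adapted from the pattern AWS documents for this purpose, denies everything except basic MFA self-management whenever a request arrives without MFA; the policy and group names are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Deny all actions (except enrolling an MFA device) for any request
# that was not authenticated with MFA.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllExceptMfaSetupIfNoMfa",
            "Effect": "Deny",
            "NotAction": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:ListMFADevices",
                "iam:ListVirtualMFADevices",
                "iam:ResyncMFADevice",
                "sts:GetSessionToken",
            ],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

response = iam.create_policy(
    PolicyName="RequireMFA",  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)

# Attach it to a group so every member is covered.
iam.attach_group_policy(
    GroupName="developers",  # placeholder group name
    PolicyArn=response["Policy"]["Arn"],
)
```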

Mistake 6: Giving Users Overly Broad Roles

Access keys and user access control play an integral role in AWS security. You may be tempted to give developers administrator rights to handle specific tasks, but in most situations, those rights can be granted through narrower policies. Not everyone needs to be an admin.

In a report published by Saviynt, 35 percent of privileged AWS users have complete access to a wide range of services, including the ability to bring down the whole AWS customer environment. Another common mistake is leaving high-privilege AWS accounts active for users who have already left the company.

Administrators often fail to establish policies that cover a variety of user scenarios, instead creating policies so broad that they are no longer effective. Well-scoped AWS policies and roles reduce your attack surface and the chance that your entire AWS environment can be compromised by an exposed key, leaked account credentials, or a configuration error on your team's part.

Why it matters:

Misconfigurations commonly occur when every AWS resource is granted the entire set of permissions. With full access to Amazon S3, an application that only needs to write files can read, write, and delete every file in the account's S3 storage.

Never give anyone full access to a service. An organization's policies should grant the lowest permissions necessary to perform a particular task.

How to avoid such mistakes

IAM (Identity and Access Management) is integral to AWS deployment security. You can use it to quickly set up users, roles, and pre-made policies, and to customize permissions at a granular level. Use it, for instance, to assign a role to an EC2 instance and attach a policy to that role: the instance then gets exactly the permissions the policy defines without storing credentials locally, and lower-privileged users can perform specific tasks without requiring higher-level (e.g., admin) access.
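A minimal boto3 sketch of that pattern, with placeholder names throughout: it creates a role EC2 can assume, wraps it in an instance profile, and attaches it to a running instance.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy letting EC2 assume the role on the instance's behalf.
trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(RoleName="app-role", AssumeRolePolicyDocument=json.dumps(trust))

# EC2 consumes roles via instance profiles, so wrap the role in one.
iam.create_instance_profile(InstanceProfileName="app-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-profile", RoleName="app-role"
)

# Attach the profile to an existing instance; no credentials ever touch disk.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-profile"},
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```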

For example, a script that runs quarterly cleanups of unused files does not need read permissions. Instead, use IAM to grant the application access to a single, specific S3 bucket. Permissions scoped this way prevent the script from reading files or touching data in any other bucket.
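Here is what such a narrowly scoped policy might look like in boto3, assuming a hypothetical `example-cleanup-bucket`: it permits listing and deleting objects in that one bucket and nothing else.

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy for the cleanup script: list and delete in
# one bucket only; no read access, no other buckets.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-cleanup-bucket",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:DeleteObject"],
            "Resource": "arn:aws:s3:::example-cleanup-bucket/*",
        },
    ],
}

iam.create_policy(
    PolicyName="quarterly-cleanup-policy",  # placeholder name
    PolicyDocument=json.dumps(policy),
)
```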

Why is avoiding AWS security mistakes so important?

It isn't the fault of the cloud that privileged users can halt an entire AWS environment full of sensitive data and critical applications. It shows that an understanding of security and its implementation is often lacking in many organizations. Cloud administrators must apply the same rigorous controls to their cloud infrastructure and configuration as they do to their data centers.

Many of these configuration mistakes are easy to fix. By addressing them, administrators can focus on more in-depth tasks, such as running vulnerability scans with tools like Amazon Inspector or Tenable's vulnerability assessment tool Nessus.

Remember, you don't always need to use AWS for every workload, and you may even settle on their PaaS solutions if those fit your needs. Just keep in mind that simplicity is key. On the other hand, repeatability and codified security standards (set up in Terraform or CloudFormation, which will keep enforcing them ;) ) give you more flexibility and make your solutions more robust for future extension and scaling.

Related articles:

Cloud vendor lock-in: 4 real-life scenarios and lessons learned
/blog/cloud-vendor-lock-in-4-real-life-scenarios-and-lessons-learned/
What exactly does getting locked in with a cloud vendor mean? Here are 4 scenarios that show how vendor lock-in works in real-life examples.

Why Cloud Computing Architecture Components Are Like... LEGO Blocks?
/blog/why-cloud-computing-architecture-components-are-like-lego-blocks/
Cloud architecture is like putting together LEGO blocks and creating wonderful things: nearly endless opportunities to build your own app.

Serverless Computing - 5 pitfalls to avoid in your project
/blog/Serverless_computing_pitfalls_to_avoid/
The top 5 pitfalls of serverless computing and how to overcome them: microservices, timeouts, vendor lock-in, cold starts, and running dry of database connections.
