Amazon VPC (Virtual Private Cloud) is one of the most widely used and well-known services in the Amazon Web Services suite, because it sits at the heart of security in the cloud and of controlling access to our data inside a third-party data center such as Amazon's.
In this article we will talk about controlling network access, primarily via VPC and its associated networking resources. We need to create rules that determine who has what kind of access. In IAM, the rules are policies, which define the actions that can be performed in the API or the console, and these rules are applied to IAM entities (users, groups, and so on), which are authenticated using AWS credentials. In the networking sphere, the rules are concerned with what kind of traffic is permitted into your network, and further into specific resources within your network. For example, a rule might permit only HTTPS traffic into your network, and only on port 443. Rather than being applied to authenticated entities, these rules are applied based on the source of the traffic. For example, you might apply the HTTPS rule above to any traffic originating outside your network, while allowing any kind of traffic originating inside your network. While the concepts are similar, the mechanisms for creating and configuring these access rules are completely different.
Security incidents, such as denial-of-service (DoS) attacks and brute-force SSH attacks, can be prevented if users conduct due diligence when it comes to network security. The best way to avoid these attacks is by creating secure VPCs, subnets, and security groups.
There are four components that help to secure the network: routing, firewalls, network ACLs, and security groups.
Routing is one of the most important parts of networking (we probably all studied this in a Networking 101 class or while preparing for CCNA exams). If an attacker can't reach a system, they can't attack it. So the first thing we can do is control access at the routing level; that determines who can reach our systems at all.
How do we implement access control at the routing level? The first step is to figure out who needs access to what. Let's say there is a network operations department, a dev-test group, and another group. You would place each group in its own VLAN; a VLAN is a way to logically separate different parts of a switch. Now granting access is easy: we create an IP interface in each VLAN and provide access by advertising the route to AWS so it knows how to reach back to these VLANs, and by giving the VLANs the routing information of AWS so they can reach the cloud. No other access is required. By constraining users to specific VLANs and exchanging only the necessary routes, this simple step goes a long way toward securing the VPC.
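On the AWS side, this route exchange comes down to entries in the VPC route table. The boto3 sketch below is a minimal illustration, assuming a hypothetical route table ID, virtual private gateway ID, and on-premises CIDR; it adds a route back to one on-premises VLAN so only that prefix is reachable from the VPC.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for illustration only.
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # route table of a private subnet
VGW_ID = "vgw-0123456789abcdef0"           # virtual private gateway to on-prem

# Add a route back to the on-premises network-operations VLAN (example CIDR),
# so only that prefix is reachable from (and can reach back into) the VPC.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="10.10.20.0/24",  # example on-prem VLAN prefix
    GatewayId=VGW_ID,
)
```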
But what if the VPC gets compromised? That is what a private VPC is for; a private VPC is another step toward security. On the physical side, we can also authenticate users who connect directly over an Ethernet cable by using 802.1X. When you plug an Ethernet cable into the network, the switch looks at a database and says: this MAC address, the physical address associated with the port, is allowed on the network, so open the port. If an unauthorized user tries to plug into the network, they'll be shut down and the port will simply close.
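As a rough sketch of what "private" looks like in practice (with example CIDR blocks): a VPC containing only a private subnet, with no internet gateway created or attached, so nothing in it is reachable from the public internet.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a VPC with an example CIDR block and a single subnet.
# No internet gateway is created or attached, so the subnet stays private:
# instances in it have no route to or from the public internet.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("Private subnet:", subnet["Subnet"]["SubnetId"])
```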
A firewall builds a strong, secure perimeter around the edge of a network. With firewalls, you create policies that allow in only what needs to come in. Typically that's TCP port 179 for BGP routing, plus whatever ports your systems need, whether HTTPS, SSH, or something else. Policies allow what needs to come in and block everything else. Firewalls are stateful.
The firewall offering on AWS is the Web Application Firewall (WAF). It is typically placed on an Amazon CloudFront distribution, an Amazon API Gateway REST API, or an Application Load Balancer. Placing the firewall on these resources, at the edge of the network, blocks unwanted traffic long before it ever reaches your instances. WAF works like any other firewall: it controls access based on a policy, and it gives you fairly granular control to protect your resources. You control access with web ACLs, rules, and rule groups. A web access control list simply allows or denies traffic based on what you want to permit, just like any other access list, but these lists are stateful. You set up allow or deny rules, or create rule groups, which are groups of individual rules that can be reused. Because WAF is an AWS product, it is automatically integrated with CloudWatch, so you can monitor your traffic metrics out of the box.
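As a rough sketch rather than a production configuration, the boto3 call below creates a regional web ACL whose default action is to allow traffic and that attaches one AWS managed rule group; the ACL name and metric names are placeholders.

```python
import boto3

# WAF for regional resources such as an ALB or API Gateway; use
# Scope="CLOUDFRONT" and the us-east-1 region for CloudFront distributions.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="demo-web-acl",                      # placeholder name
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},              # allow by default, block via rules
    Rules=[
        {
            "Name": "common-rule-set",
            "Priority": 0,                    # rules are evaluated in priority order
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},   # keep the rule group's own actions
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,   # metrics appear in CloudWatch
                "MetricName": "CommonRuleSet",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "DemoWebAcl",
    },
)
```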
Network ACLs are not stateful like a firewall, which means you have to write rules for both inbound and outbound traffic; the ACL has no way of knowing which direction a flow belongs to because it is not tracking connections. Network ACLs are simple packet-filtering rules: allow or deny, and that's it.
All network ACLs have a default policy of denying all traffic, on top of which you create your own rules. When you build a rule, you specify the source or destination address (wildcards are allowed), the protocol, and the port number. To say it again: network ACLs are stateless, so you must write rules in both the inbound and outbound directions. The order in which you write your rules is absolutely critical, because network ACLs process rules in numbered order and apply the first match.
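A minimal boto3 sketch, assuming a hypothetical network ACL ID: because network ACLs are stateless, allowing inbound HTTPS also requires an explicit outbound rule for the ephemeral return ports, and the rule numbers decide the evaluation order.

```python
import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-0123456789abcdef0"   # hypothetical network ACL ID

# Inbound: allow HTTPS (TCP 443) from anywhere. Rule number 100 is evaluated
# before higher-numbered rules; the first match wins.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",                   # TCP
    RuleAction="allow",
    Egress=False,                   # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)

# Outbound: because NACLs are stateless, the return traffic on ephemeral
# ports must be allowed explicitly.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,                    # outbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```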
The good news about security groups is that they are stateful: you specify what you want to allow in, and the return traffic is automatically allowed back out. This is another layer of protection, because if an intruder gets past your routing, your firewall, and your network ACLs, you still have the security group protecting your services.
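A short sketch with a hypothetical security group ID: a single ingress rule for HTTPS on port 443 is enough, since the security group statefully allows the corresponding responses back out.

```python
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"      # hypothetical security group ID

# One inbound rule is enough: security groups are stateful, so response
# traffic for these connections is allowed out automatically.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}
            ],
        }
    ],
)
```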
What is a denial-of-service attack? Consider an example in which my organization runs a server with the capacity to serve 1,000 requests per second. An attacker tries to exhaust that capacity by sending 1,000 requests per second. All of the server's resources are consumed, so it can no longer serve requests from other users; legitimate users are effectively denied service.
To avoid this situation in a cloud environment, we can use auto scaling. We can configure auto scaling so that it launches new server instances as soon as the workload on the existing servers increases. This helps organizations mitigate a denial-of-service attack by simply scaling out to more servers that can handle more requests.
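As a rough illustration (the Auto Scaling group name is hypothetical), a target tracking policy keeps average CPU utilization around a target value by adding instances when load rises.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group name for illustration.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # When average CPU rises above the target under heavy request load,
        # the group launches additional instances to absorb the traffic.
        "TargetValue": 50.0,
    },
)
```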
A distributed denial-of-service (DDoS) attack is a malicious attempt to disrupt the normal traffic of a targeted server, service or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic.
AWS provides Shield to defend against these attacks. AWS Shield Standard and AWS Shield Advanced provide protection against distributed denial-of-service (DDoS) attacks on AWS resources at the network and transport layers (layers 3 and 4) and the application layer (layer 7). In a DDoS attack, multiple compromised systems try to flood a target with traffic; this can prevent legitimate users from accessing the target's services and can cause the target to crash under the overwhelming traffic volume.
AWS Shield Standard defends against the most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. While AWS Shield Standard helps protect all AWS customers, you get particular benefit with Amazon Route 53 hosted zones, Amazon CloudFront distributions, and AWS Global Accelerator standard accelerators.
For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. When you subscribe to Shield Advanced and add protection to your resources, Shield Advanced provides expanded DDoS attack protection for those resources.
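If you have a Shield Advanced subscription on the account, adding protection to a resource is a single API call; the sketch below uses a hypothetical Application Load Balancer ARN.

```python
import boto3

shield = boto3.client("shield")

# Requires an active AWS Shield Advanced subscription on the account.
# The resource ARN below is a hypothetical Application Load Balancer.
shield.create_protection(
    Name="web-alb-protection",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/web-alb/0123456789abcdef"
    ),
)
```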
When there is an attack on the system, an intrusion detection system (IDS) is charged with analyzing the traffic that passes through it. It identifies any abnormality and sends an alert, for example via email or text, to the people operating the servers. Implementations vary, but intrusion detection systems share a network baselining process in which they learn the normal behavior of the network; this is how they tell normal activity apart from anomalies. In short, an IDS detects and sends alerts.
There are multiple ways to engineer an intrusion prevention system (IPS) into a network. One possibility is to configure it as a passive monitoring device with an out-of-band response. Out-of-band means the IPS receives a copy of the traffic, typically mirrored from a switch, and decides what action the traffic demands. Because it only sees a copy, the IPS is not in the direct path of the traffic; it sits outside the band of communication.
In that case, when traffic travels through the network, the IPS receives a copy, and if it identifies the traffic as malicious it sends a TCP reset to both the source and the destination of the communication. The reset tears down the connection between the two hosts, preventing them from continuing to send traffic to each other.
For tighter control, however, configuring the IPS for in-line monitoring is preferred. In this mode, all traffic passes directly through the IPS rather than the IPS receiving a copy, so the IPS itself decides whether each flow is allowed to traverse the network.
Overall, when you secure your cloud VPC, requests and responses pass through several layers, and the best way to secure the network is to combine all of the methods above.
If a request comes from outside the cloud, you first control it by configuring appropriate access at the routing layer. Next, a firewall at the edge of the network blocks unwanted requests; firewalls are stateful. Next in line, network ACLs help secure your VPC traffic through inbound and outbound rules. Finally, the security group is the last layer, sitting closest to your application and protecting your instances.