

Published Articles (117)


You are viewing the Articles/Questions in AWS category

AVR posted:
12 months ago
What is Cloud?
Cloud computing means we can get the infrastructure for our project easily, on demand.
When we use the cloud, we don't have to invest money upfront in servers/storage/manpower/electricity etc.

How to launch an EC2 instance in AWS?
EC2 stands for Elastic Compute Cloud.
We need an EC2 instance to install applications and expose them to the outside world via port no.
Go to AWS Account
Click on EC2
Launch instance
Name: (Provide a meaningful name)
OS:(Pick an OS, and I would recommend Red Hat Linux)
Instance type:(As a part of learning, we could go with t2.micro, which is eligible for the free tier)
key pair: (create one as we need this to connect to the EC2 instance)
Once the instance is successfully launched, we can connect to it via the Git Bash terminal.
If you don't have Git Bash, you can download Git Bash for Windows.
In the terminal, go to the folder where the .pem key is saved; that is where we execute the ssh command.
The ssh command itself is shown under the instance's Connect - SSH client tab in the AWS console.
Once we are connected to the EC2 instance via the Git Bash terminal, we can execute all the basic commands of Linux like
sudo -i
whoami
date
cal
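The connection steps above can be wrapped in a small Git Bash helper, just as an illustration; the key path and host name passed in are placeholders, not values from a real account:

```shell
# Hypothetical helper for the connect step; key path and host are placeholders.
connect_ec2() {
  local key="$1" host="$2"
  chmod 400 "$key"                  # ssh refuses key files with loose permissions
  ssh -i "$key" "ec2-user@${host}"  # ec2-user is the default user on Red Hat AMIs
}
# Usage: connect_ec2 ~/Downloads/my-key.pem <EC2-public-DNS>
```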


How to install nginx on AWS EC2 Instance?
nginx and Apache are web servers, and their default port is 80.
Tomcat/WebLogic/WebSphere are application servers; Tomcat's default port is 8080.
We must execute the below commands in the Red Hat Linux EC2 instance.
sudo yum install -y nginx
sudo systemctl enable nginx
sudo systemctl start nginx
systemctl status nginx


How to check the nginx in the browser?
Go to the browser and enter the public IP of the EC2 instance.
Browsers default to port 80 for HTTP, so there is no need to specify port 80 separately.
Opening port 80 in the security group is mandatory; if it is not open, Nginx will not load in the browser.
How can the security group be changed if port no 80 is not allowed as inbound?
Go to the appropriate security group
edit inbound rules
add rule
Custom TCP    80    Anywhere
SAVE
Please note that we allow Anywhere only in the training sessions, not in an enterprise environment.
Once port 80 is allowed as inbound,
go to the browser and enter the public IP of the EC2 instance.
We should be able to see the Nginx landing page successfully.
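The same inbound rule can also be added from the AWS CLI; this is a sketch assuming the CLI is installed and configured, and the security group ID below is a placeholder:

```shell
# Hypothetical sketch: open port 80 to the world on a given security group.
open_http_port() {
  aws ec2 authorize-security-group-ingress \
    --group-id "$1" \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0   # "Anywhere" - fine for training, not for enterprise
}
# Usage: open_http_port sg-0123456789abcdef0
```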

How to stop Nginx?
systemctl stop nginx


How to start Nginx?
systemctl start nginx



How to install Apache on AWS EC2 Instance?
We must execute the below commands in the Red Hat Linux EC2 instance.
sudo yum install -y httpd
sudo systemctl enable httpd
sudo systemctl start httpd
Please note that only one service can run on one port.
We need to ensure that no other services are running on port no 80, as Apache uses this port no.



How to see the list of all executed commands from the terminal?
history is the command we need to use to get the list of all executed commands.
Posted in: AWS | ID: Q118 | May 08, 2023, 06:53 PM | 1 Replies
AVR posted:
1 year ago
What do you know about the s3 bucket in AWS?
Amazon S3 stands for Amazon Simple Storage Service.
We can store files/images/videos/log files/war files/objects etc
The maximum size of a single S3 object is 5 TB; the bucket itself has no size limit.
By default, we can create 100 S3 buckets per account. Beyond this, we need to raise a service quota request with AWS.
What are the advantages of an S3 bucket?
Any IT company can use an S3 bucket, with or without DevOps.
S3 is just storage; IT companies have the flexibility to adopt it and start using it.
Every company would have some data, and they have to store it somewhere, and this is where they could use AWS S3 Service.
The S3 bucket name MUST be unique globally
S3 bucket name MUST be with lowercase letters
The S3 bucket can be accessed globally
S3 also has lifecycle policies that we can use to reduce billing as per the business need.
S3 Standard is the more expensive class, meant for everyday usage. It is like an instant download; we can download files immediately without waiting.
S3 Glacier is inexpensive and suited to data accessed once in a while, say every 3 or 6 months. Download won't happen immediately; retrieving the files may take from minutes to several hours depending on the retrieval option.
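The naming rules above (lowercase, globally unique) can be sanity-checked locally before creating a bucket. This is an illustrative helper covering only the basic rules mentioned here, not the full AWS rule set:

```shell
# Rough check of the basic S3 bucket naming rules: 3-63 characters,
# lowercase letters/digits/hyphens/dots, starting and ending with a
# letter or digit. (AWS applies additional rules on top of these.)
is_valid_bucket_name() {
  local name="$1"
  [[ ${#name} -ge 3 && ${#name} -le 63 ]] || return 1
  [[ "$name" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]]
}
```

Global uniqueness, of course, can only be checked against AWS itself at creation time.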
Posted in: AWS | ID: Q106 | November 01, 2022, 12:19 AM | 0 Replies
AVR posted:
1 year ago
Let's learn something about MFA(Multi-factor authentication)
Go to AWS Account at header menu - click on Security Credentials
This is where we can see the MFA option
Click on MFA, where we can see the option "Activate MFA"
Click on Activate MFA
Select the option(Virtual MFA device)
Click on Continue
Click on Show QR Code
We need to download an authenticator app, such as Microsoft Authenticator.
From the app, we scan the QR code shown on the screen.
The app then generates consecutive codes.
Enter code 1.
Enter code 2.
Now click on Assign MFA
How to test the MFA?
Sign out and sign back in to the console.
After the password, it now asks for the MFA code to log in.
Why are we activating MFA? It adds a second layer of security, and companies have started making MFA mandatory.

How to delete MFA?
Go to AWS Account at header menu - click on Security Credentials
This is where we can see the MFA option
Click on MFA, where we can see the option "Manage"
If we want to remove this, then click on the Remove option
Posted in: AWS | ID: Q105 | November 01, 2022, 12:17 AM | 0 Replies
AVR posted:
1 year ago
How to delete IAM users in AWS?
Go to IAM
Click on users
Select the appropriate users
Click on the Delete button
If any prompt comes, follow the AWS instructions
Posted in: AWS | ID: Q104 | November 01, 2022, 12:15 AM | 0 Replies
AVR posted:
1 year ago
What is IAM & What do you know about IAM in AWS?

IAM stands for Identity and Access Management
Let's assume that we have 100 users in the company, and all will access only one AWS Account.
There could be two accounts also depending on the environments and how their infrastructure has been planned
Now the question is how the access would be granted to the users.
Some people may need only access to the s3/ec2/load balancer. Not everyone needs full access to AWS.
Now we need to learn how to restrict the user or users with roles
IAM is the one who helps with this requirement

Search for IAM
We can see the IAM dashboard
Left Menu - Click on Users
Click on Add users
username-chak
Select AWS credential type - we have two options, and we can select both checkboxes.
Programmatic access means accessing AWS from the command line rather than the GUI; this is where we use the access key and secret key.
Click on Next: Permissions.
Click on Create group.
Group name - devgroupaccess
Search for s3 as a keyword
Select AmazonS3FullAccess
Here I'm giving only AmazonS3FullAccess. Other than this, the user cannot access anything else.
Click on Create group
Click on the Next tags
(Key, Value) we can specify anything as these are just tags (name IAM)
Click on Review
Click on Create user
On the confirmation page, we can see the sign-in URL and a Download .csv option.
Now the user can log in with credentials.
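The same user/group setup can be sketched with the AWS CLI; the names match the walkthrough above, and this assumes the CLI is configured with admin credentials:

```shell
# Hypothetical CLI equivalent of the console steps above.
create_s3_user() {
  aws iam create-group --group-name devgroupaccess
  aws iam attach-group-policy \
    --group-name devgroupaccess \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
  aws iam create-user --user-name chak
  aws iam add-user-to-group --group-name devgroupaccess --user-name chak
}
```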

NOTE:
For the root user, we don't need an Account ID in the URL. The root user is nothing but the Admin in the company.
For a normal user, we need an Account ID in the URL
When a normal user signs in as an IAM user, it asks the below fields as MANDATORY.
Account ID
IAM user name
Password
Users must change the password at the time of first-time login as per the policy.

How to give AmazonEC2FullAccess to the normal user?
Go to the Admin/Root user account
Go to IAM
Go to Users - click on the correct user where we need to grant permissions
Click on the Groups tab
Click on the Group name it is assigned
Click on the Permissions tab
Click on Add permissions
Click on Attach policies
Now search for the policy "AmazonEC2FullAccess"
Click on Add permissions
The group's permissions have been updated, and the user gets the newly added permissions as expected.
Posted in: AWS | ID: Q103 | November 01, 2022, 12:14 AM | 0 Replies
AVR posted:
1 year ago
What is EC2?
EC2 stands for Elastic Compute Cloud.
It is nothing but an instance or VM.

How to launch Instance?
Click on the launch Instance
Name is Mandatory
Application and OS images- We need to select the appropriate OS(As an example: I pick Red Hat)
Instance type - t2.micro is free-tier eligible and meant for learning purposes; enterprise workloads typically need larger instance types.
Key pair - We need to create one which is used for authentication(this is for login purposes)
Click on create new key pair
Provide a name
We have two formats(One is .pem, and the other one is .ppk)
.pem is for use with OpenSSH
.ppk is for use with PuTTY
Here I select .pem
Click on Create key pair (Automatically .pem file gets downloaded locally)
Network settings - We should understand what VPC is and how VPC is working internally
AWS gives default VPC for self-learning
We also have a security group which allows port numbers
For example
A web server's default port is 80
To connect to a Linux machine, port 22 (SSH) should be opened
Security group name - Specify a meaningful name
SSH stands for Secure Shell
TCP stands for Transmission Control Protocol
Source type - Anywhere (In companies, there would be a range where the connectivity happens only from those given IPs)
We can always add more security group rules as needed.
Custom TCP is the Type. Here I give 80 as the port range. The source type is Anywhere, an IP range, or My IP.
Next
Storage: 10 GB is enough, and up to a maximum of 30 GB is FREE-tier eligible for Linux machines as part of self-learning.
Click on Launch Instance
Click on view all instances
Now we can see our EC2 Instance up and running
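The launch steps above have a CLI equivalent too; this is a hypothetical sketch where the AMI ID, key name and security group ID are placeholders you would look up for your own region:

```shell
# Hypothetical CLI version of the launch wizard above.
launch_learning_instance() {
  # ami-... is a placeholder; look up a Red Hat AMI ID for your region.
  aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1
}
```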
How to connect to an EC2 machine?
Select the EC2 Instance and click on Connect button
Click on SSH client, where we can see all the instructions given by the AWS
AWS also provides an example for beginners who can understand easily.
The format looks like as below
ssh -i "NAME OF THE PEM FILE" username@ec2-instance-public-dns
We can use the Git bash terminal to connect to EC2 Machine
Below are the basic commands to play around with in the Git Bash terminal
pwd is a command - present working directory
The catch here is that we need to be in the directory where the .pem file was downloaded
To get there:
cd Downloads - this command takes us to the location where we saved the .pem file
Now paste the ssh command here to connect to the EC2 Linux machine
Once successfully connected, we get a prompt with the username and private IP of the EC2 machine
This is an example of how it looks:
[ec2-user@ip-172-31-41-51 ~]$
$ says the user is a normal user
We can verify this with the command whoami
If we want to switch from normal user to sudo user
sudo -i
Now we can see #
# says the user is root user
We can try a few commands like
whoami
date
cal
All the Linux commands should work as expected here
Typing exit twice takes us out of the session (once to leave the root shell, once to close the SSH connection)
We can also log in with the below command
ssh -i pemfile.pem ec2-user@ec2instancepublicip
Posted in: AWS | ID: Q100 | October 30, 2022, 05:02 AM | 0 Replies
AVR posted:
3 years ago
Let's have a basic understanding of Elastic Beanstalk in AWS.

Elastic Beanstalk is mainly used by developers
Developers write the code for creating applications
For testing the code, developers need EC2 Instance(s).

Developers generally prefer not to create infrastructure on their own

The solution for developers is Elastic Beanstalk.

Developers write the code, they upload the code into Elastic Beanstalk.

Infrastructure would be created automatically for testing the applications.

The objective of CloudFormation is to create infrastructure.
The objective of Elastic Beanstalk is to deploy and test applications, with the infrastructure created for you behind the scenes.
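As a sketch, the developer workflow described above can look like this with the Elastic Beanstalk CLI (eb); the application name and platform string are examples, not fixed values:

```shell
# Hypothetical Elastic Beanstalk workflow, run from the folder with the code.
deploy_to_beanstalk() {
  eb init my-app --platform python-3.11 --region us-east-1  # one-time setup
  eb create my-app-env   # Beanstalk provisions EC2, load balancer, etc.
  eb deploy              # upload and deploy the current code version
}
```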
Posted in: AWS | ID: Q62 | August 09, 2021, 12:45 PM | 0 Replies
AVR posted:
3 years ago
Let's have a basic understanding of CloudFormation in AWS.

CloudFormation:
We create the AWS Infrastructure by writing & running code.

We have three ways to create AWS Infrastructure
1)GUI(Graphical user interface)
2)CLI(Command Line Interface)
3)IAC(Infrastructure as Code) - Write the code and run the code

Example:
How do we create 1000 EC2 Machines at a time?

This is where Cloud Formation comes into the picture.
The code is written as a JSON or YAML template.
When we run the code, infrastructure would get created.
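A minimal illustrative template might look like this in YAML; the AMI ID is a placeholder you would replace with a valid one for your region:

```yaml
# Minimal illustrative CloudFormation template - launches one EC2 instance.
AWSTemplateFormatVersion: '2010-09-09'
Description: Launch a single EC2 instance
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      InstanceType: t2.micro
```

To create 1000 machines, the same template idea extends with a count/loop construct or by stamping out the resource many times.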

Advantages:
We can store the code in S3 for repeated execution for multiple regions(Known as reusability)
We can also have version control.

We also have Terraform.
It is similar to CloudFormation.
By using Terraform, we can create AWS/Azure/GCP Infrastructure quickly & efficiently.
The beauty of Terraform is that the same code can be used across multiple clouds, which is known as reusability.
Posted in: AWS | ID: Q61 | August 09, 2021, 12:38 PM | 0 Replies
AVR posted:
3 years ago
Let's learn about CloudTrail in AWS.

CloudTrail is an auditing service.

The root user can track all the history from Event history.

Let's say some of the AWS users have deleted s3 buckets, objects & some EC2 instances permanently.

How do we trace which user deleted the resources, what resources were deleted, and the date and time of the deletion?

We can track this information using CloudTrail.
All these activities can be tracked by using the CloudTrail service in AWS.

Go to CloudTrail - Event history.
We can see all the records of the events.
Select a particular event; we can get more detailed information.

We can apply filters.
Filter: Resource type Bucket
We can see the events related to the s3 bucket.

We can apply a filter based on the time
We can download the list of events
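The same lookup can be sketched from the CLI; this assumes a configured AWS CLI, and the event name below is just one example of a deletion event:

```shell
# Hypothetical sketch: list recent S3 bucket deletion events from Event history.
recent_s3_deletions() {
  aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=DeleteBucket \
    --max-results 20
}
```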

Note:
Only the root user can see the Event history by default.
IAM users cannot see the Event history unless they are explicitly granted CloudTrail permissions.
Posted in: AWS | ID: Q60 | August 09, 2021, 12:29 PM | 0 Replies
AVR posted:
3 years ago
Let's learn about CloudWatch Service in AWS.

CloudWatch is a monitoring service provided by AWS.

The monitoring helps you get the CloudWatch metrics like
CPU Utilization
Disk read(Bytes)
Disk write(Bytes)
Network packets coming in
Network packets coming out etc


Basic monitoring is FREE; these metrics are updated every 5 minutes.
Detailed monitoring is PAID.

What is detailed monitoring?
These metrics are updated every 1 minute.

Go to the CloudWatch dashboard.
Services - Management & Governance - CloudWatch
Select Metrics - EC2 - Per Instance Metrics
We can see all the metrics available
Please select the required machine & metrics so that we can monitor
Setting up an Alarm to take appropriate action
Topic Name - Specify the topic name
Alarm Name - Specify the Alarm name

How to set a billing alarm?
My Account - Billing Preferences - We need to make sure that Receive Billing Alerts option is checked as this is mandatory.

In the navigation - CloudWatch - Alarms (The below options are only available in N Virginia)
Create Alarm
Click on Select metric
Click on Billing
Click on Total estimated charge
Select USD Currency as Metric Name
Click on Select Metric
Conditions - Greater/Equal
Threshold value - You can define the value based on your requirement. (Example: 1000 USD)
Next
Next
Send a notification to - Specify an email or group of emails
Like this, we can create many alarms based on the threshold value
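The billing alarm above can be sketched with the CLI as well; the SNS topic ARN is a placeholder, and note the region is pinned to us-east-1 (N. Virginia) where the billing metrics live:

```shell
# Hypothetical CLI version of the console billing alarm above.
create_billing_alarm() {
  aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name monthly-bill-over-1000-usd \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 1000 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
}
```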

NOTE:
Only the N Virginia region has the above option where we can set threshold values.
Other regions do not have this option.
Posted in: AWS | ID: Q59 | August 08, 2021, 08:21 PM | 0 Replies
AVR posted:
3 years ago
Let's learn about SNS in AWS.

SNS(Simple notification service):
=========================
Notifications are nothing but alerts.

When auto scaling launches a new machine, we need a notification.

In Route 53, failover routing policy, when one region is down, we need notification.

We get notifications quickly with SNS.

Subscribers are nothing but the users/AWS Admins that are responsible for managing infrastructure.

We create a group(Also known as topic)
We add the email addresses of subscribers into the group
The process of adding subscribers to the group is known as a subscription.

Notifications we can receive through
1)Email with normal text
2)Email with JSON script
3)Mobile SMS
4)HTTP/HTTPS requests

We need to create Topics/Subscriptions to receive notifications.

The subscriber must confirm the subscription from the confirmation email, as this is mandatory.
The status changes from Pending confirmation to Confirmed once the email validation succeeds.

We use SNS mainly with Auto Scaling, where notifications are essential.
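The topic/subscription flow above can be sketched with the CLI; the topic name and email address are examples, and the CLI is assumed to be configured:

```shell
# Hypothetical sketch: create a topic and subscribe an email address to it.
setup_alerts_topic() {
  local topic_arn
  topic_arn=$(aws sns create-topic --name infra-alerts \
    --query TopicArn --output text)
  # The subscriber receives a confirmation email and must confirm (mandatory).
  aws sns subscribe \
    --topic-arn "$topic_arn" \
    --protocol email \
    --notification-endpoint admin@example.com
}
```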
Posted in: AWS | ID: Q58 | August 08, 2021, 11:54 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about SQS in AWS.

SQS(Simple Queue Service):
======================
SQS is a message queue used to store messages while they wait to be processed.

A queue is a temporary repository for messages that are awaiting processing.


The backup mechanism for the Application server is SQS.

The actual representation is as follows:

Route 53 (www.domain.com)
        |
  Load Balancer (LB)
        |
  WS1  WS2  WS3   (Web Servers)
        |
       SQS
        |
    AS1  AS2      (Application Servers)
        |
     DB Server

User request www.domain.com goes to the load balancer.
Web Servers(WS1 WS2 WS3) are attached to Load balancers
From the Web Server, the request goes to the Application Server(AS1, AS2)
The actual application runs on the Application Server
Web Server is just providing web pages to the users
Finally, Application servers communicate with DB Server

SQS is a communication channel between Web Servers and Application servers

What is the advantage of SQS?
The requests don't get lost.
All the requests from the web servers are stored in SQS in queue format.

The application server will pull the requests from SQS.

SQS will control the flow of the requests

Once a request is pulled from SQS, it is not deleted immediately; it becomes invisible for 30 seconds (the default visibility timeout).

Within those 30 seconds, the Application server should process the request.

If the Application server processes the request within 30 seconds, the request is deleted from SQS.

If the Application server is unable to process the request within 30 seconds, the request becomes visible again in SQS.
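The send/receive/delete cycle described above can be sketched with the CLI; the queue URL is a placeholder:

```shell
# Hypothetical sketch of the SQS request cycle; the queue URL is a placeholder.
QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

send_request() {
  aws sqs send-message --queue-url "$QUEUE_URL" --message-body "$1"
}

receive_request() {
  # Receiving makes the message invisible for the visibility timeout
  # (30 seconds by default); it is NOT deleted yet.
  aws sqs receive-message --queue-url "$QUEUE_URL" --max-number-of-messages 1
}

finish_request() {
  # Delete after successful processing, passing the ReceiptHandle from the
  # receive call; otherwise the message becomes visible again after 30s.
  aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$1"
}
```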
Posted in: AWS | ID: Q57 | August 08, 2021, 11:49 AM | 0 Replies
AVR posted:
3 years ago
Let's understand AWS RDS.

RDS stands for Relational Database Service.

RDS is used by the clients to host their databases.

Relational databases are stored in the form of rows and columns in a table format.

The below are the most popular relational database engines:
SQL Server
Oracle
MySQL
PostgreSQL
Aurora
MariaDB


The advantages of RDS are (Automatic Backups, Multi-AZ feature and Read Replica)


RDS Back-ups:
---------------------
We have two types of Back-ups
1)Automated Back-ups
2)DB Snapshots which is nothing but manual backup


Automated Back-ups:
----------------------------
A)Automated backups allow us to recover our database to any point in time within a "retention period".
The retention period can be between 1 and 35 days.
B)Automated backups take a complete daily snapshot and also store transaction logs throughout the day.

When we make a recovery, AWS first chooses the most recent daily backup and then applies the transaction logs relevant to that day.

Automated Backups are enabled by default.

When we delete the original RDS Instance, automatic backups also get deleted automatically, which is a drawback.

DB Snapshots:(This is a manual process)
--------------------------------------------------------
DB Snapshots are done manually. (These are user-initiated).

They are stored even after we delete the original RDS instance, unlike automated backups.

In this, when we delete the original RDS Instance, we can still have DB Snapshots.

DBA's use DB Snapshots whenever they apply patches to ensure that they do have a working backup of DB

Restoring Back-ups:
---------------------------
Whenever we restore either an Automatic Backup or a manual Snapshot, the restored version of the database will be a new RDS instance with a new DNS endpoint.



Multi-AZ:(Availability Zone):
------------------------------------
Multi-AZ allows us to have an exact copy of our production database in another AZ.

AWS handles the replication

So whenever the PROD database is written to, the write is automatically synchronised to the standby database.

In the event of planned database maintenance, DB Instance failure or an AZ failure, Amazon RDS will automatically
failover to the standby so that database operations can resume quickly without administrative intervention.

The DNS endpoint stays the same after a failover; RDS simply redirects it to the standby, so applications don't need reconfiguration.


Read Replica:
-------------------

We use this for better performance when many users are reading/archiving data from the database.

The replica is nothing but a duplicate.

Read replicas allow us to have a read-only copy of our PROD Database.

This is achieved by using Asynchronous replication from the primary RDS instance to the read replica.

We use read replicas primarily for very read-heavy database workloads.

We can have up to 5 read-replica copies of any database.

We can have read replicas of read replicas.

Each Read Replica will have its own DNS endpoint.

For Read Operations - We use the Select command.
For Write operations - We use Insert/Update/Delete commands.
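The features above (retention period, Multi-AZ, read replica) can be tied together in a CLI sketch; instance identifiers and the password are placeholders, and the CLI is assumed to be configured:

```shell
# Hypothetical sketch of creating an RDS instance with automated backups
# (7-day retention) and Multi-AZ, then a read replica of it.
create_db_with_backups() {
  aws rds create-db-instance \
    --db-instance-identifier prod-db \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'ChangeMe123!' \
    --backup-retention-period 7 \
    --multi-az
}

create_read_replica() {
  # The replica gets its own DNS endpoint, as noted above.
  aws rds create-db-instance-read-replica \
    --db-instance-identifier prod-db-replica-1 \
    --source-db-instance-identifier prod-db
}
```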
Posted in: AWS | ID: Q56 | August 06, 2021, 10:48 AM | 0 Replies
AVR posted:
3 years ago
Let's understand Route 53 in AWS.

Route 53 is AWS's Domain Name System (DNS) service.

The name comes from 53, the standard DNS port number.

Route 53 is a highly reliable and cost-effective way to route end users to Internet applications by translating names.

Route 53 is responsible for converting IP to name and name to IP.


Advantages of Route53:
==================
1)DNS is used to convert a human-friendly domain name into an IP(Internet Protocol) address and vice versa

Computers use IP Addresses to identify each other on the network.

We have two types of IP addresses (IPv4 & IPv6)

2)Route53 helps protect against regional failures.

If one region fails, the end-users have access to another region.

Traffic gets diverted to the standby region, also known as Disaster Recovery.




Route53 Routing policies:
====================
Simple
Weighted
Latency
Failover
Geolocation




Simple Routing Policy:
------------------------------
This is the default routing policy.

This is most commonly used when a single region performs a given function for our domain.


Weighted Routing Policy:
---------------------------------
Weighted Routing Policies let you split your traffic based on different weights assigned.

Example1:
10% of traffic going to US-EAST-1
90% of traffic going to EU-WEST-1

Example2:
20% of traffic going to US-EAST-1
80% of traffic going to EU-WEST-1
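One weighted record from Example1 can be sketched with the CLI; the hosted zone ID, domain and IP address are placeholders. A second record with SetIdentifier "eu-west-1" and Weight 90 would complete the 10/90 split:

```shell
# Hypothetical sketch of one weighted record set (the 10% side of Example1).
set_weighted_record() {
  aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789ABCDEFGHIJ \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "A",
          "SetIdentifier": "us-east-1",
          "Weight": 10,
          "TTL": 60,
          "ResourceRecords": [{"Value": "192.0.2.10"}]
        }
      }]
    }'
}
```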


Latency Routing Policy:
---------------------------
Latency based routing allows you to route your traffic based on the lowest network latency for your end-user (i.e. which region will give them the fastest response time)
Latency refers to delay

Assuming the below regions as an example,
100ms to US-EAST-1
300ms to US-WEST-1
The request always goes to the region with the lowest network latency, i.e. US-EAST-1


Failover Routing Policy:
--------------------------
Failover Routing policies are used when we want to create an active/passive set-up.

Example:
The primary site is in US-EAST-1, and the secondary DR site is in US-WEST-1
Route53 will monitor the health of the primary site using health checks
User requests always go to the active (primary) region; if its health checks fail, Route 53 fails over to the secondary.




Geolocation Routing Policy:
------------------------------
Geolocation routing lets you choose where your traffic will be sent based on the geographic location of your users.
EU customers request goes to EU-WEST-1
US customers request goes to US-EAST-1
Posted in: AWS | ID: Q55 | August 05, 2021, 02:12 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create NACL(Network Access control list)?

Create NACL
Name - Specify the name correctly
VPC - Select VPC correctly
Create
Now we need to attach this to the public subnet
Select NACL - Actions - Edit subnet associations - Select webSN assuming that this is Web Subnet where Web Server EC2 Instance is created.
SAVE

Now go to browser - PublicIP of WebServer EC2 Instance
The page doesn't load, because a newly created custom NACL denies all incoming connections by default
We need to open ports at the NACL
We need to open HTTP port 80, as we're accessing the Web Server EC2 instance from the browser.
We also need to open SSH port 22
Go to NACL - Select NACL - Go to inbound rules
Add new rule
Add SSH & HTTP
SAVE

Now go to browser - PublicIP of WebServer EC2 Instance
The page still doesn't load as expected

We need to understand stateful and stateless in nature
We need to open outbound ports explicitly
NACL is stateless

Select the NACL - we have both Inbound and Outbound rules
Go to Outbound rules - Edit
Add SSH & HTTP
SAVE

Now go to browser - PublicIP of WebServer EC2 Instance
The page still doesn't load as expected

Even though the ports are opened at Outbound, the web page doesn't load as expected.
Now we need to understand ephemeral ports
Ephemeral means temporary
The ephemeral port range is 1024 to 65535

Go to Inbound - Edit
Add a rule allowing the ephemeral port range 1024 to 65535

Go to Outbound - Edit
Add a rule allowing the ephemeral port range 1024 to 65535

For NACL, we need to apply rules at both Inbound & Outbound explicitly.

Now go to browser - PublicIP of WebServer EC2 Instance
The page should load as expected
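The full rule set above can be sketched with the CLI; the NACL ID is a placeholder. Because NACLs are stateless, both ingress and egress entries are needed, including the ephemeral range for return traffic:

```shell
# Hypothetical CLI version of the NACL rules above; the NACL ID is a placeholder.
allow_web_traffic() {
  local acl="$1"
  # Inbound: HTTP, SSH, and the ephemeral return-traffic range.
  aws ec2 create-network-acl-entry --network-acl-id "$acl" --rule-number 100 \
    --protocol tcp --port-range From=80,To=80 \
    --cidr-block 0.0.0.0/0 --rule-action allow --ingress
  aws ec2 create-network-acl-entry --network-acl-id "$acl" --rule-number 110 \
    --protocol tcp --port-range From=22,To=22 \
    --cidr-block 0.0.0.0/0 --rule-action allow --ingress
  aws ec2 create-network-acl-entry --network-acl-id "$acl" --rule-number 120 \
    --protocol tcp --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 --rule-action allow --ingress
  # Outbound: ephemeral range so responses can leave the subnet.
  aws ec2 create-network-acl-entry --network-acl-id "$acl" --rule-number 100 \
    --protocol tcp --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 --rule-action allow --egress
}
```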



We also need to understand what is stateful & stateless.
When we open an inbound port in the security group, the outbound port is open to all by default.
This status is called stateful.

For NACL, we need to open outbound port explicitly
NACL is stateless in nature
Posted in: AWS | ID: Q54 | August 04, 2021, 10:56 AM | 0 Replies
AVR posted:
3 years ago
Let's learn the difference between Security group & NACL(Network Access control list)

The security group will provide security at the Instance level.

The security group is stateful in nature

NACL(Network Access control list) would provide security at the subnet level.

NACL is stateless in nature

NACL would provide one more layer of security at the subnet level.

As a part of NACL, opening the ephemeral ports is mandatory; otherwise, the NACL blocks the return traffic.

Ephemeral ports need to be opened at both NACL inbound & outbound
Posted in: AWS | ID: Q53 | August 04, 2021, 10:32 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create VPC, Subnets, Internet Gateway & Route table.

How to create VPC?
Name-myvpc
IPv4-10.0.0.0/16
Create

Assume that we have two subnets. One is webSN & the other one is dbSN

Web servers should be in public subnet & DB servers in a private subnet as per the standards.

How to create a subnet?
Select VPC where we want to create a subnet
Subnet name - webSN
IPv4 - 10.0.1.0/24
Availability zone - 1a
Create


How to create a subnet?
Select VPC where we want to create a subnet
Subnet name - dbSN
IPv4 - 10.0.2.0/24
Availability zone - 1b
Create


By default, every subnet is private.
If we create VPC with a private subnet, then there is no connectivity to the outside world.
WebServer should be available to the public.
DBServer shouldn't be accessed by the public & DBServer MUST have high security.


How to make subnet public?
This is a two-step process.
Step1:
We need to enable public IP at the subnet level
Select subnet - actions - modify auto-assign IP settings
Enable auto-assign IPv4
Step2:
Create IGW & attach it to VPC
Create Internet gateway - IGW
By default, the IGW is detached, and we need to attach it to the VPC
Attach internet gateway
The purpose of the IGW is to provide internet connectivity to the subnet
one IGW can be attached to one VPC only
IGW cannot be connected to webSN directly
We need one more component, the Route table, which is present between IGW and webSN.
Hence we need a Route table
The route table is in between IGW & Subnet
One end of the RT is connected to IGW & the Other end of the RT is connected to webSN.
Step3:
How to create a Route table?
Name - RT
VPC - myvpc
Create
Once the Route table is created
One end of the Route table, we need to attach to webSN
Edit subnet associated - select webSN
Another end of the Route table, we need to attach to IGW
Edit Routes
0.0.0.0/0 - select the IGW, so the whole world can reach webSN, as this is a web server
Save
Now we can confirm the subnet is public
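All of the above console steps have a CLI equivalent; this is a hypothetical sketch, assuming a configured AWS CLI, with the IDs captured into variables for the later steps:

```shell
# Hypothetical CLI version of the VPC / subnet / IGW / route table steps above.
make_public_web_subnet() {
  VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --query Vpc.VpcId --output text)
  SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
    --availability-zone us-east-1a --query Subnet.SubnetId --output text)
  # Step 1: enable auto-assign public IPs at the subnet level.
  aws ec2 modify-subnet-attribute --subnet-id "$SUBNET_ID" --map-public-ip-on-launch
  # Step 2: create the IGW and attach it to the VPC.
  IGW_ID=$(aws ec2 create-internet-gateway \
    --query InternetGateway.InternetGatewayId --output text)
  aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
  # Step 3: route table with one end on webSN, the other on the IGW.
  RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
    --query RouteTable.RouteTableId --output text)
  aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
  aws ec2 create-route --route-table-id "$RT_ID" \
    --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
}
```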

How to prove that subnet is public?
Create a webEC2 Machine inside the subnet(webSN) and see if we can connect
Public subnets should be available to the public
Use bootstrap script

#!/bin/bash
# User-data already runs as root, so no sudo/su is needed.
yum update -y
yum install httpd -y
cd /var/www/html
echo "Connecting to webSN" > index.html
service httpd start
chkconfig httpd on

Open ports SSH & HTTP
The EC2 gets created with public IP address
Go to browser - Public IP
We should get a Web Server that confirms that it is working as expected.



How to prove that subnet is private?
Create dbEC2 Machine inside the subnet(dbSN) and see if we can connect
the port should be opened across the subnet
Because webSN should communicate with dbSN
MySQL/Aurora port should be opened for the entire webSN
Because all web servers from webSN should communicate with dbSN
MySQL/Aurora 3306 10.0.1.0/24(This is Subnet for WebServer)
create
The EC2 instance gets created without a public IP address because we're using dbSN and we did not enable auto-assign public IPv4 addresses on dbSN
The EC2 gets created only with a private IP address
We should be careful while creating dbEC2 Machine, and we need to make sure that we're creating at appropriate VPC & dbSN
Posted in: AWS | ID: Q52 | July 29, 2021, 10:40 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create a subnet in VPC?

Just to recap of creating VPC.
Name - myVPC
IPv4 CIDR block - 10.0.0.0/16 {Technically 10.0.0.0/16 is a private address range}
Create VPC

Now let's create two subnets in the above VPC

Create subnet
Select VPC where we want to create the subnet
Subnet name - webSN
IPv4 - 10.0.1.0/24
Availability zone - 1a
Create


Create subnet
Select VPC where we want to create the subnet
Subnet name - dbSN
IPv4 - 10.0.2.0/24
Availability zone - 1b
Create



By default, every subnet is private.
We need to make a private subnet to a public subnet, and this is a two-step process.



Step1:
We need to enable public IP
Select subnet
Enable auto-assign IPv4

Step2:
Create IGW & attach it to VPC
Create Internet gateway - IGW
By default, the IGW is detached, and we need to attach it to the VPC
Attach internet gateway
The purpose of the IGW is to provide internet connectivity to the subnet
IGW cannot be connected to the subnet directly
Hence we need a Route table
The route table is in between IGW & Subnet
Posted in: AWS | ID: Q51 | July 28, 2021, 10:06 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about subnets in AWS.

What is a subnet?
A subnet is a partition that is created inside the VPC
We shouldn't have everything in one single subnet as a part of security.
It is always recommended to have more than one subnet.

Example:
The client has 1000 Web Servers, 1000 Application Servers & 1000 Database Servers.
Web Servers(1000 in total) - Create one subnet partition for 1000 Web Servers & place them in that subnet
Application Servers(1000 in total) - Create one subnet partition for 1000 Application Servers & place them in that subnet
Database Servers(1000 in total) - Create one subnet partition for 1000 Database Servers & place them in that subnet


By default, every subnet is private.
If we create a VPC with only private subnets, then there is no connectivity to the outside world.
We need to turn the private subnet into a public subnet
The public should access only Web Servers
The public shouldn't access DB servers
Also, DB servers MUST have high security



The first four IP addresses and the last IP address in each subnet CIDR block are unavailable for us to use and cannot be assigned to an instance.

For example, in a subnet with CIDR block 10.0.0.0/24, the following five IP addresses are reserved:

10.0.0.0: Network address.

10.0.0.1: Reserved by AWS for the VPC router.

10.0.0.2: Reserved by AWS for the DNS server.

10.0.0.3: Reserved by AWS for future use.

10.0.0.255: Network broadcast address.
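The five reserved addresses are easy to check with a little arithmetic: a /24 leaves 32 - 24 = 8 host bits, i.e. 256 addresses, of which 5 are reserved.

```shell
# Host addresses in a /24 subnet, minus the 5 AWS-reserved addresses
prefix=24
total=$((2 ** (32 - prefix)))
usable=$((total - 5))
echo "$total total, $usable usable"   # prints: 256 total, 251 usable
```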
Posted in: AWS | ID: Q50 | July 28, 2021, 09:50 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about VPC in AWS.

VPC stands for Virtual Private Cloud

VPC is a virtual data centre in the Cloud

VPC is nothing but creating a partition in AWS Data Center

Whenever we create/launch EC2 Instance, make sure that we're creating that in our VPC



How to create VPC in AWS?
======================
We need to create VPC(myvpc) in AWS Cloud.

10.0.0.0/16 (16 is the prefix length of the subnet mask, and it can be at most 32)

The format is like IP ADDRESS/Subnet mask.

Name - myvpc

IPv4 CIDR block - 10.0.0.0/16 {Technically 10.0.0.0. is a private ip address}

Create VPC

Creating VPC is as simple as that.
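As a sanity check on the /16 above: the prefix length leaves 32 - 16 = 16 host bits, so the VPC spans 65,536 addresses.

```shell
# Number of addresses in a 10.0.0.0/16 VPC
prefix=16
addresses=$((2 ** (32 - prefix)))
echo "$addresses"   # prints: 65536
```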
Posted in: AWS | ID: Q49 | July 28, 2021, 07:41 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create an IAM custom role & assign that to EC2 Instance.

A role is a replacement for credentials.

In simple terminology, a role would have two ends like source and destination where the source is EC2 Instance & destination is AmazonS3FullAccess OR AmazonEC2FullAccess.

Generally, Roles are assigned to EC2 Instances.

How to attach a role to EC2 Instance?
We can attach it in step 3 while creating the EC2 Instance
OR
For existing EC2 Instances, Select EC2 Machine - Actions - Modify IAM Role - Select custom role.


NOTE:
If the role assigned is AmazonS3FullAccess, we can work only with S3 (e.g., create buckets) from the EC2 terminal.
If the role assigned is IAMFullAccess, we can do anything related to IAM from the EC2 terminal.

We don't need to configure anything.
We can start using all AWS CLI commands from the terminal based on the role assigned.

Connect to EC2 Instance via Putty
Execute AWS CLI Commands as per the given role


Below are a few examples:
===================

From the CLI, how do we list all the buckets that were already created?
aws s3 ls (lists S3 buckets)

Every command starts with aws followed by service name.

How to create a bucket via CLI?
aws s3 mb s3://mybucket

How to upload an object into a bucket via CLI?
aws s3 cp test.txt s3://mybucket/test.txt

How to create an IAM user via CLI?
aws iam create-user --user-name john

How to create an IAM group via CLI?
aws iam create-group --group-name mygroup


AWS CLI Command Reference - https://docs.aws.amazon.com/cli/latest/reference/
Posted in: AWS | ID: Q48 | July 28, 2021, 07:29 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to use AWS CLI in AWS IAM.


To work with AWS CLI Access, we need Access Key ID & Secret Access Key.

Go to IAM Dashboard, where we can see an option to generate the access key.

We need to install the AWS CLI tool for the Windows Operating system
https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

Once AWS CLI is installed successfully on Windows,

We can follow the below steps

Open CMD prompt & execute the below command.

aws configure
It asks for the Access key ID & Secret Access key
It also asks for the Default region name: type your region name correctly
Default output format: text
Now we're connected to the AWS account
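Under the hood, `aws configure` just writes two small local files; the values below are placeholders, not real keys.

```
# ~/.aws/credentials
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>

# ~/.aws/config
[default]
region = us-east-1
output = text
```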


From the CLI, how do we list all the buckets that were already created?
aws s3 ls (lists S3 buckets)

Every command starts with aws followed by service name.

How to create a bucket via CLI?
aws s3 mb s3://mybucket

How to upload an object into a bucket via CLI?
aws s3 cp test.txt s3://mybucket/test.txt

How to create an IAM user via CLI?
aws iam create-user --user-name john

How to create an IAM group via CLI?
aws iam create-group --group-name mygroup


AWS CLI Command Reference - https://docs.aws.amazon.com/cli/latest/reference/


NOTE:
When we have AWS Console Access, why do we need CLI Access?
For the operations team's convenience, AWS gives both console access and a CLI interface.
Posted in: AWS | ID: Q47 | July 26, 2021, 01:03 PM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create User groups in AWS IAM.

1st of all, IAM stands for Identity and Access Management.

IAM is a part of Security, Identity & Compliance.

Go to the IAM Dashboard - Click on User groups to see the option for creating a group.
On the User groups dashboard, we can see the group name, users, permissions & creation time.

Click on create group
Specify User group name
Attach permissions policies - Optional
You can attach up to 10 policies to this user group. All the users in this group will have permissions that are defined in the selected policies.

Create group
That's it.



Example:

Name of the group - EC2Group
Attach permissions policies - Select AmazonEC2FullAccess
Create group

Name of the group - S3Group
Attach permissions policies - Select AmazonS3FullAccess
Create group


Whenever new people join the company, we can add them directly to the groups if the groups are in place with policies.
Instead of assigning something manually to each user, we can promptly use the groups.
Posted in: AWS | ID: Q46 | July 25, 2021, 09:23 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create users in AWS IAM.

1st of all, IAM stands for Identity and Access Management.

IAM is a part of Security, Identity & Compliance.

Go to the IAM Dashboard - Click on Users to see the Add users option
We need to specify the below details
user name -
Access type - (Programmatic access/AWS Management console access) - We can choose our options based on the requirement
Console password -
Add user to a group(if applicable) / copy permissions from existing user(if applicable) / Attach existing policies directly(if applicable)

The below are a few examples of existing policies:
1)AdministratorAccess
2)AmazonEC2FullAccess
3)AmazonS3FullAccess


Add tags(this is optional but good to use)
Review
Create user



Important points to remember:
i)Console access is nothing but logging in with email and password
ii)CLI stands for Command Line Interface, which is nothing but programmatic access
When a user gets AdministratorAccess, the user can create IAM users
Posted in: AWS | ID: Q45 | July 25, 2021, 08:50 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about AWS Identity and Access Management (IAM), which securely manages AWS services and resources.

IAM is related to administration.

AWS IAM is a service that helps you securely control access to AWS resources.

We use IAM to control the users with the necessary permissions to access AWS Services/Resources
..
When we create an AWS account, we get complete access to all AWS resources, as we get root user access by default.
Every company would have only one AWS root account
The owner of the account can create user accounts with limited privileges
Examples:
User A should have EC2 Full Access
User B should have S3 Full Access
User C should have EC2 read-only access


IAM allows to manage users, groups and their level of access to the AWS Services

Advantages of IAM:
===============
Centralised control of the AWS Account
Shared access to the AWS Account
Granular permissions
Identity federation (users can log in using LinkedIn, Facebook, etc.)
Multifactor Authentication (password & OTP)
Password rotation policy (e.g., expires every 30 days)


Important terms:
=============
users - end-users (people)
groups - a collection of users under one set of permissions
policies - set of permissions
roles - we can create roles for the users to make use of AWS Resources


Whenever a consultant/engineer joins the company, the user gets IAM user access, not root user access.

Root user credentials cannot be shared so easily in any organization.

We have 2 types of users in AWS (Root user & IAM user)
IAM users get limited permissions from the AWS Administrator
Posted in: AWS | ID: Q42 | July 23, 2021, 11:40 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Life cycle management which is one of the AWS S3 features.

Life cycle management:
==================
Go to the bucket
Click on Management
Go to Lifecycle rules - Create lifecycle rule
Specify Lifecycle rule name
Choose a rule scope -
Acknowledge the settings
Lifecycle rule actions -

Example:
Storage class    Days after object creation
Standard-IA      30 (after 30 days, the objects move to Standard-IA)
Glacier          90 (after 90 days, the objects move to Glacier)

Create rule

Once the rule is created, we can see the timeline summary

This reduces the bill for the client.
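The example rule reduces to a tiny helper that tells us where an object of a given age would live (the 30/90-day thresholds come from the example; the function name is made up for illustration):

```shell
# Map an object's age in days to its storage class, per the example rule
storage_class_for_age() {
  local days="$1"
  if [ "$days" -ge 90 ]; then echo "Glacier"
  elif [ "$days" -ge 30 ]; then echo "Standard-IA"
  else echo "Standard"; fi
}

storage_class_for_age 45   # prints: Standard-IA
```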
Posted in: AWS | ID: Q41 | July 23, 2021, 11:13 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about the Bucket policy, which is one of the AWS S3 features.

Bucket policy: (Applicable only at Bucket level)
==============================
Go to the bucket - Select Permissions tab
We can see Bucket policy in (JSON)
Only AWS Administrators are allowed to write Bucket policies.

The purpose of the ACL & Bucket policy is the same in AWS.

ACL - We can apply at the Bucket level & also at the Object level

Bucket policy - We can only apply at the Bucket level.
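For illustration, here is a minimal bucket policy in JSON that grants public read access to all objects; the bucket name mybucket is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
```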
Posted in: AWS | ID: Q40 | July 23, 2021, 10:59 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about the Access control list, which is one of the AWS S3 features.

ACL(Access control list):
===================
Using ACL, we can control bucket & also objects.
ACL is there at the bucket level & also at the object level.


What is a canonical ID?
The canonical ID is a long, obfuscated identifier associated with the AWS account, which we can find in the account's security credentials page.



Where do we see the ACL option?
Go to the bucket - Select Permissions tab
We can see the ACL option where we can edit
Click on Edit
Click on Add grantee
Grantee - This is canonical id
Object permissions (Select List/Write checkboxes) & Bucket ACL (Select Read/Write checkboxes)
Save changes


We can also apply ACL at the object level
Go to the object
Click on Permissions
Click on Edit
Click on Add grantee
Grantee - This is canonical id
Object permissions (Select List/Write checkboxes) & Object ACL (Select Read/Write checkboxes)
Save changes
Posted in: AWS | ID: Q39 | July 23, 2021, 10:45 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Encryption which is one of the AWS S3 features.

Encryption: (Data gets encrypted and saved into the bucket)
=========
We use Encryption for very sensitive objects in the S3 bucket.

Why should customers use AWS compared to on-prem?
The most serious concern from customers is security, and encryption helps address it.

There are 2 types of encryption
1)AES-256 (Advanced Encryption Standard) - single encryption
2)AWS-KMS (Key Management Service) - double encryption (more secure)


How do we enable Encryption?
Select the bucket - Properties - Default Encryption - Edit - Enable
Posted in: AWS | ID: Q38 | July 23, 2021, 10:41 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Transfer Acceleration which is one of the AWS S3 features.

Transfer Acceleration:
==============
S3 TA enables fast, easy & secure transfer of files over long distances between end-users and the s3 bucket.

As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

When we enable TA, data will be transferred to the edge location 1st, and then from the edge location, data will be transferred to the bucket.

How do we enable TA?
Select source bucket - properties - TA - Edit - Enable - Save changes

NOTE:
Users upload to the edge location at their own connection speed
From the edge location to the bucket, AWS uses its high-speed optimized network
Posted in: AWS | ID: Q37 | July 23, 2021, 10:36 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Cross-region replication(CRR) which is one of the AWS S3 features.

Why do we need CRR? Companies usually implement CRR to reduce cross-region latency and network traffic.
Example: OTT Platforms like Netflix & Amazon prime movies

Replication is nothing but a duplication.
When we upload objects in one region, they should be available in another region automatically.

The prerequisite is that bucket versioning must be enabled in both regions for Cross-region replication(CRR)

We need to enable cross-region replication at the source side(1st region)

Go to source bucket - Management - Replication Rules - Create Replication Rule.
Specify Replication Rule Name
Specify Destination bucket where we need the replication
IAM Role - Create a new role (To establish a connection between two regions, we need to have IAM Role in place)
Save

Now the Cross-region replication(CRR) should work as expected.

Upload an object in the source bucket and see if that is coming to the destination bucket automatically(known as replication)
Posted in: AWS | ID: Q36 | July 21, 2021, 10:11 AM | 0 Replies
AVR posted:
3 years ago
What do you know about Amazon S3 Storage Classes?

Amazon S3 offers a range of storage classes designed for different use cases.

These include
S3 Standard for general-purpose storage of frequently accessed data;
S3 Intelligent-Tiering for data with unknown or changing access patterns;
S3 Standard-Infrequent Access (S3 Standard-IA) and
S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and
Amazon S3 Glacier (S3 Glacier) and
Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation.

If you have data residency requirements that can’t be met by an existing AWS Region, you can use the S3 Outposts storage class to store your S3 data on-premises.

Amazon S3 also offers capabilities to manage your data throughout its lifecycle.

Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.
Posted in: AWS | ID: Q35 | July 20, 2021, 10:38 PM | 0 Replies
AVR posted:
3 years ago
Let's understand how we can use static website hosting in AWS.

By default, static website hosting is disabled in AWS & we need to enable this by going to bucket properties.

The website endpoint shown in the bucket's properties is the URL where we can access the website files via a browser.

To link a website name with a website endpoint, we use Route53.
Posted in: AWS | ID: Q34 | July 20, 2021, 06:58 PM | 0 Replies
AVR posted:
3 years ago
Let's learn about AWS S3 Versioning.

AWS S3 Versioning has two advantages
i)We can recover deleted objects easily
ii)We can maintain different versions of the object

By default, this feature is disabled, so we need to enable versioning 1st to use it.
Go to Bucket Versioning - Enable and Save the changes.

When we enable versioning, the bucket should maintain the current version & also the previous versions of the objects.

When versioning is enabled, technically, the object is not deleted; it only gets a delete marker.
Click on the Show versions radio button.
Delete the delete marker, and the object automatically reappears in the bucket.
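Versioning can also be enabled from the CLI; this is a sketch, and mybucket is a placeholder bucket name:

```shell
# Enable versioning on the bucket
aws s3api put-bucket-versioning --bucket mybucket \
    --versioning-configuration Status=Enabled

# List every version (and delete marker) of the objects in the bucket
aws s3api list-object-versions --bucket mybucket
```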
Posted in: AWS | ID: Q33 | July 20, 2021, 11:07 AM | 0 Replies
AVR posted:
3 years ago
Let's learn something about AWS S3 Features:

The below are the most important S3 features used by most of the companies

Versioning
Static website hosting
Classes/Tiers
Cross-region replication
Transfer Acceleration
Encryption
Tags
Metadata
ACL
Bucket policies
Life cycle management
Posted in: AWS | ID: Q32 | July 20, 2021, 10:32 AM | 0 Replies
AVR posted:
3 years ago
Let's understand about AWS S3
S3 stands for Simple Storage Service
S3 is a storage service, and this is paid service at the Enterprise level.

What type of storage is S3?
It is object storage.
S3 is secure, durable and highly scalable object storage.
S3 is easy to use, with a simple web service interface to store & retrieve any amount of data from anywhere on the web.
We upload files into a bucket with a globally unique name (create a bucket & upload the objects into that bucket)



EC2 - root drive comes with 8GB storage
Why do you need s3, and how is this different from EC2 storage?
s3 is pure object storage
EC2 root drive is not object storage




Features of S3:
==========
Built for 99.99% availability of the S3 platform
Amazon guarantees 99.999999999% (11 9s) durability
Tiered Storage Available
Lifecycle Management
Versioning
Encryption
Secure your data using Access control lists & Bucket policy



S3 Storage Classes/Tiers:
=================
S3 Standard
S3 Intelligent Tiering
S3 Standard IA
S3 One Zone-IA(Infrequently Access)
S3 Glacier (to get data back, standard retrieval takes 3-5 hours)
S3 Glacier Deep Archive (to get data back, retrieval takes up to 12 hours)




How to create an S3 bucket? (Bucket names are global, regardless of the region a bucket is created in)
=================
The S3 namespace is global, not regional: a bucket name must be unique across all accounts and regions, even though each bucket physically lives in one region.
The S3 Dashboard is Global & the EC2 Dashboard is Regional
Step 1: Create Bucket
Step 2: Upload objects
Click on Create bucket
Bucket name - (Name MUST be unique)
AWS Region - Select your region
Block Public Access settings for this bucket - By default, Block all public access is enabled.
We need to acknowledge the settings
Create Bucket
Once the Bucket is created,
Go inside the Bucket, Choose Upload option
Select the objects we need to upload
Go to Permissions - the last option is Predefined ACLs - Select Grant public-read access.
Click on Upload
Click on the Object & look for the Object URL
Since this is public-read access, anyone can access the object via a browser.

How to change the permissions?
Go to permissions-Edit
We can remove the public access(Unselect the tick box)
Save changes
When there is no public access, we see an AccessDenied error message
We can control the objects via permissions.

Deletion of bucket steps:
---------------------------
1st we need to delete the objects inside the bucket(Also known as Empty the bucket)
2nd we need to delete the bucket when there are no objects internally(Only when the bucket is empty)



Bucket naming rules:
-----------------------
The following rules apply for naming buckets in Amazon S3
Bucket names must be between 3 and 63 characters long
Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-)
Bucket names must begin and end with a letter or number
Bucket names must not be formatted as an IP address (for example, 192.168.5.4)
Bucket names must be unique within a partition
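The rules above can be checked mechanically; a small sketch (the helper name is made up for illustration) that validates a candidate name against them:

```shell
# Return success only if the name satisfies the S3 bucket naming rules above
valid_bucket_name() {
  local name="$1"
  # 3-63 chars, lowercase letters/digits/dots/hyphens, alphanumeric at both ends
  echo "$name" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$' || return 1
  # must not be formatted as an IP address, e.g. 192.168.5.4
  echo "$name" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' && return 1
  return 0
}

valid_bucket_name "my-bucket" && echo "ok"   # prints: ok
```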
Posted in: AWS | ID: Q31 | July 17, 2021, 06:06 AM | 0 Replies
AVR posted:
3 years ago
Let's understand more about volumes in AWS.

Scenario1: (Same Region)
Let's say that we have two EC2 Instances. 1st Instance is with Root and EBS volumes, and 2nd Instance is with only Root volume
Now my requirement is I would like to detach the EBS volume from 1st Instance and attach it to 2nd Instance.
Detach it from the 1st instance and attach it to the 2nd instance, as simple as that, but make sure both instances are in the same availability zone, because an EBS volume can only be attached to an instance in its own AZ.

An availability zone is one of the isolated data-center locations inside an AWS Region.

Scenario2: (Multiple Regions)
Let's say that I have one EC2 Instance with Root & EBS volumes in one region, and I also have another EC2 Instance in another region with only Root volume.
Now the requirement is I would like to have a copy of the EBS volume from one region to another region.
How can we do this?
Volume (Region1) has to be converted to Snapshot(Region1)
Snapshot(Region1) has to be copied to Snapshot(Region2)
Snapshot(Region2) has to be converted to Volume(Region2)
Volume(Region2) has to be attached to EC2 Instance(Region2)
Both the Volume & the EC2 Instance MUST be in the same availability zone
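The four conversion steps above map directly onto AWS CLI calls; all IDs, regions, and the device name below are placeholder assumptions:

```shell
# Region1: snapshot the EBS volume
aws ec2 create-snapshot --volume-id vol-0abc123 --region us-east-1

# Copy the snapshot from Region1 to Region2
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0abc123 \
    --region us-west-2

# Region2: create a volume from the copied snapshot, in the target AZ
aws ec2 create-volume --snapshot-id snap-0def456 --availability-zone us-west-2a

# Attach the new volume to the instance (which must be in the same AZ)
aws ec2 attach-volume --volume-id vol-0def456 --instance-id i-0abc123 --device /dev/sdf
```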
Posted in: AWS | ID: Q30 | July 16, 2021, 10:11 AM | 0 Replies
AVR posted:
3 years ago
Should we go with AMI or Snapshots in AWS?

root volumes - we go with AMI
EBS volumes - we go with Snapshots
Posted in: AWS | ID: Q29 | July 16, 2021, 09:32 AM | 0 Replies
AVR posted:
3 years ago
What is AMI in AWS?

AMI stands for Amazon Machine Image

We can take the complete image/backup of any EC2 Instance easily.
How do we create AMI?
This is very simple
Select the Instance - Go to Actions - Choose Image & Templates - Create Image
We need to specify the Image name, Description while creating an image.

After creating the image, please keep an eye on the status to confirm whether the AMI is ready.


We can use AMI in the same region & also between multiple regions.

Same Region:
If we have AMI in one region, and if we need to replicate the same, then we can easily do this by launching a new EC2 Instance with the help of My AMI.


Multiple Regions:
If we have AMI in one region, and if we need to replicate the same in another region,
then we can easily do this by copying AMI from one region to another region
Once the copying of AMI is done from one region to another region
We can launch a new EC2 Instance in the 2nd region with the help of copied AMI
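Both cases can be driven from the CLI as well; a sketch with placeholder IDs, regions, and names:

```shell
# Create an AMI from an existing instance
aws ec2 create-image --instance-id i-0abc123 --name "my-web-ami"

# Copy the AMI from one region to another
aws ec2 copy-image --source-region us-east-1 --source-image-id ami-0abc123 \
    --region us-west-2 --name "my-web-ami-copy"
```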
Posted in: AWS | ID: Q28 | July 16, 2021, 07:30 AM | 0 Replies
AVR posted:
3 years ago
Let's understand more about the EC2 dashboard.

EC2 Dashboard is Region specific
If we create EC2 machines in one region, then they won't be visible in another region.
AWS Management Console is Region specific dashboard
Every region is independent.
The experience of creating the EC2 instance is always the same in any region.
Posted in: AWS | ID: Q27 | July 16, 2021, 07:04 AM | 0 Replies
AVR posted:
3 years ago
What is Scale-up in AWS?
What is Scale-down in AWS?
Scale-up and Scale-down are also known as vertical scaling in AWS
What is Scale-in in AWS?
What is Scale-out in AWS?
Scale-in and Scale-out are also known as horizontal scaling in AWS


Scale-up in AWS is increasing the hardware configuration of the specific instance
-----------------------------------------------------------------------------------------------------------

Let's take a look at Scale-up in AWS.
How do we increase the hardware configuration such as Harddrive/RAM/CPU?

Hard drive - We can increase it from Volumes - Actions - Modify volume - Specify the new size - Confirm the modification.
For this, we don't have to stop the instance.

RAM & CPU - We must stop the instance before making any changes, as this means changing the instance type from one type to another
Please note that RAM & CPU change together, because we must pick from the instance types offered by AWS.
Select the instance - Actions - Instance settings - Change instance type - Select the new instance type correctly - Apply.




Scale-down in AWS is decreasing the hardware configuration of the specific instance
----------------------------------------------------------------------------------------------------------------

Let's take a look at Scale-down in AWS.
How do we decrease the hardware configuration such as hard drive/RAM/CPU?

Hard drive - Note that EBS volumes can only be increased in size, never decreased; to shrink storage, we have to create a smaller volume and migrate the data to it.

RAM & CPU - We must stop the instance before making any changes, as this means changing the instance type from one type to another
Please note that RAM & CPU change together, because we must pick from the instance types offered by AWS.
Select the instance - Actions - Instance settings - Change instance type - Select the new instance type correctly - Apply.
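Both vertical-scaling paths can be sketched with the CLI (the IDs, the size, and the instance type are placeholder assumptions):

```shell
# Grow the EBS volume in place; no stop required
aws ec2 modify-volume --volume-id vol-0abc123 --size 20

# Changing RAM & CPU means changing the instance type, so stop first
aws ec2 stop-instances --instance-ids i-0abc123
aws ec2 modify-instance-attribute --instance-id i-0abc123 --instance-type Value=t3.small
aws ec2 start-instances --instance-ids i-0abc123
```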




Scale-out - Launching an additional instance, without making any changes to the existing EC2 instances, is nothing but Scale-out.

Scale-in - Removing an instance, without making any changes to the remaining EC2 instances, is nothing but Scale-in

Example of Horizontal scaling: Autoscaling
Posted in: AWS | ID: Q26 | July 14, 2021, 09:28 PM | 0 Replies
AVR posted:
3 years ago
How do we protect critical EC2 Instances without any accidental termination?

To overcome the accidental termination, we have a termination protection feature in AWS.

Where can we see this option?

Select the EC2 Instance - Go to Actions - Instance settings - Change termination protection - Enable - SAVE
Once this is enabled, we can avoid accidental termination.


Removing the termination protection is also the same
Select the EC2 Instance - Go to Actions - Instance settings - Change termination protection - Remove Enable option - SAVE
Posted in: AWS | ID: Q25 | July 14, 2021, 10:07 AM | 0 Replies
AVR posted:
3 years ago
What are status checks in AWS?

We need to understand the EC2 Instance status checks when the instance is created/started: the console shows Initializing, then 2/2 checks passed.

Any EC2 Instance should pass both of the below status checks
Instance status check
System status check

Instance status check usually refers to OS(Operating System)
System status check usually refers to Hardware

When the Instance status check fails, we need to reboot the Instance as this is OS related.

When the System status check fails, the Instance status check automatically fails as well - we need to Stop & Start the EC2 Instance.
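The remediation for each failing check can be sketched with the CLI (the instance ID is a placeholder):

```shell
# Inspect both status checks for an instance
aws ec2 describe-instance-status --instance-ids i-0abc123

# Instance status check failed (OS-level) -> reboot
aws ec2 reboot-instances --instance-ids i-0abc123

# System status check failed (hardware-level) -> stop & start
aws ec2 stop-instances --instance-ids i-0abc123
aws ec2 start-instances --instance-ids i-0abc123
```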
Posted in: AWS | ID: Q24 | July 14, 2021, 09:52 AM | 0 Replies
AVR posted:
3 years ago
What is Autoscaling in AWS?

Autoscaling is nothing but an extension to load balancing in AWS.

We use load balancing along with the Autoscaling feature.

To recap,
The load balancer receives incoming requests/traffic and distributes the requests/traffic to registered target instances.


What is the Advantage of a Load balancer?
If one target instance is down/unhealthy, we can still get the application from other target instances.
Also, there is no limit of instances while attaching to the load balancer.


Companies use Load balancers & Autoscaling as a part of their business.

Example:
Let's consider that CompanyA is using a Load balancer
How many instances can CompanyA use as a part of the load balancer?
When there is an increase/decrease in traffic, how the company can handle this scenario?
The number of instances we attach shouldn't be static, considering the business as a priority.
The number of instances we attach should be purely based on the traffic
When traffic increases, we need more EC2 instances
When traffic decreases, we need fewer EC2 instances
The number of instances we attach should be dynamic.
Based on the traffic, the scaling should happen dynamically.

The auto-scaling feature helps us automatically scale up the infrastructure and scale down the infrastructure based on the traffic.


The following are the sequence of steps we need to perform while working with Autoscaling.
Step1 - Create a load balancer
Step2 - Create Launch configuration
Step3 - Create Topic in SNS(Simple Notification Service)
Step4 - Create Autoscaling group(min/max)
Step5 - Create Alarm in CloudWatch (This is condition)
Step6 - Add policy in Auto Scaling (This is an action for the condition)




Step1 - Create load balancer(For WebServers, we use Classic Load Balancer)
Specify Load balancer name
Create security group
Specify the security group name
Open SSH(22) and HTTP(80) ports for communication
Configure Health Check
Create



What is Launch configuration & Why do we need Launch configuration?
When traffic increases, Autoscaling should add the new EC2 instances automatically.
What happens when there is unexpected traffic on any special day/occasion? For this, we have business analysts who give instructions to the infrastructure team.
Based on the previous statistics, business analysts develop some plans and give their inputs to the internal teams accordingly to be prepared for any unexpected traffic.
When traffic decreases, Autoscaling should remove the EC2 instances automatically.
There is no manual process here, especially while adding or removing EC2 instances.
The most important questions as a part of Autoscaling are
What would be the RAM for the new EC2 instances?
What would be the HDD for the new EC2 instances?
What would be the Processor for the new EC2 instances?
What would be the OS for the new EC2 instances?
What are the ports we need to open for new EC2 instances?
Should we install any software while the EC2 instance gets created?
We need to pre-define all the above as part of the Launch configuration so that auto-scaling helps us automatically scale up the infrastructure and scale down the infrastructure based on the traffic.




Step2 - Create Launch configuration (This is a part of Autoscaling, and we can see it in the Autoscaling menu)
Specify the launch configuration name
Choose the AMI image/ID correctly
Choose the instance type correctly
Since this is a web server in our example, go to Advanced details - User data - add the bootstrap script here
Storage - EBS volumes - Select the size correctly
Security groups - Assign a security group correctly
Key pair - Choose the key pair correctly
The purpose of the key pair is to get connected to the EC2 machine.
Autoscaling uses every one of these settings when launching new EC2 instances as a part of scaling up the infrastructure.




Step3 - Create Topic in SNS(Simple Notification Service)
What is Topic & Why do we need SNS?
The topic is nothing but a group where the entire group can receive email notifications.
SNS stands for Simple Notification Service
When the traffic increases, auto-scaling happens as per the configuration. We need to receive email notifications to scale up the infrastructure and scale down the infrastructure in this scenario.
Go to Application Integration - Simple Notification Service
Click on Topics - Create Topic - Choose standard
Specify Topic Name -
Specify Topic Display name -
Create
Now we need to add email ids to the Topic.
Go to the Topic - Look for the subscription option
Click on create subscription
Protocol - Email
Endpoint - Specify email id OR group of email ids to receive notifications
Create subscription
We need to confirm email as a part of validation to receive notifications as this is mandatory.





Step4 - Create Autoscaling group(min/max)
We need to create an Autoscaling group based on the Launch configuration that we have created.
Select Launch configuration - Go to Actions - Create Autoscaling group
Specify Autoscaling group name
Choose VPC correctly(This is where Auto scaling happens)
Subnet - Choose this correctly(Subnet is nothing but a partition in your AWS Region. You may have more than one subnet, hence choose the subnet correctly so that the Autoscaling happens in the given subnet partition)
Load balancing - Attach to an existing load balancer - Choose from Classic Load balancers - Select your Load balancer correctly so that whenever Auto scaling happens, automatically EC2 Instances would be a part of Load balancer by default.
Group size - (Desired - 1) (Min - 1) (Max - 1) We can use any capacity, but as a part of learning/training, the minimum is recommended. At companies, the size is chosen based on statistics from previous sales/business.
(Desired is the number to start with) (Minimum is the minimum number of instances) (Maximum is the maximum number of instances)
Create Autoscaling group
Check the status of the Autoscaling group
Add notifications - Select topic where we need to receive notifications
Add tags appropriately
Review
Create
Creation may take some time, depending on the configuration we have specified. Keep checking the status, and we can proceed with the next step once the group is created successfully.
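The Desired/Min/Max relationship above can be sketched in a few lines of Python. This is an assumption-labeled illustration of the rule, not AWS code: Auto Scaling always keeps the running instance count within [Min, Max].

```python
def clamp_desired(desired, minimum, maximum):
    """Auto Scaling keeps the running instance count within [minimum, maximum]."""
    return max(minimum, min(desired, maximum))


# With (Desired-1) (Min-1) (Max-1), the group always runs exactly one instance:
assert clamp_desired(1, 1, 1) == 1

# A scale-out request beyond Max is capped at Max:
assert clamp_desired(5, 1, 3) == 3

# A scale-in request below Min is raised back to Min:
assert clamp_desired(0, 1, 3) == 1
```

This is why the training setup of 1/1/1 never adds or removes instances, while a production setup such as 2/2/10 has room to grow.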





Step5 - Create Alarm in CloudWatch (This is condition)
How do we measure the traffic for the auto-scaling?
When Traffic increases/Traffic decreases, what is the condition?
Creating an alarm is nothing but defining a condition for when traffic increases or decreases.
We have a metric known as CPUUtilization provided by AWS
If CPUUtilization >= 70%, then launch one or more new EC2 instances as per the business requirement
If CPUUtilization <= 30%, then remove one or more EC2 instances
In this scenario, we have to create 2 Alarms in CloudWatch
Alarm1-CPUUtilization-GTE70
Alarm2-CPUUtilization-LTE30
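The two alarm conditions above map onto a simple decision function. A minimal Python sketch (the function name and return strings are illustrative assumptions, not CloudWatch API):

```python
def scaling_action(cpu_utilization):
    """Map the CPUUtilization metric to an Auto Scaling action.
    Thresholds mirror Alarm1 (GTE 70) and Alarm2 (LTE 30)."""
    if cpu_utilization >= 70:
        return "scale-out"   # Alarm1 fires: add EC2 instances
    if cpu_utilization <= 30:
        return "scale-in"    # Alarm2 fires: remove EC2 instances
    return "no-action"       # between the thresholds, nothing happens


assert scaling_action(85) == "scale-out"
assert scaling_action(50) == "no-action"
assert scaling_action(20) == "scale-in"
```

Note the dead band between 30% and 70%: it prevents the group from flapping (constantly adding and removing instances) when utilization hovers around a single threshold.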



In the above step, we have created 2 conditions.
Now, the action to take for each condition is nothing but adding a scaling policy in Auto Scaling.




Step6 - Add policy in Auto Scaling (This is an action for the condition)
In this scenario, we have to create 2 policies from the Dynamic scaling policy
Policy1 is for Alarm1-CPUUtilization-GTE70 (Adding EC2 Instances)
Policy2 is for Alarm2-CPUUtilization-LTE30 (Removing EC2 Instances)
Once the policies are created, Auto Scaling takes the appropriate action automatically whenever the corresponding alarm fires.


The Application should not go down no matter what happens.



Deleting process:
============
1) Delete the Auto Scaling group (its EC2 instances are terminated automatically)
2) Delete the launch configuration
3) Delete the Load balancer
4) Delete the Topic
5) Delete the Alarms
Posted in: AWS | ID: Q23 | July 13, 2021, 04:57 PM | 0 Replies
AVR posted:
3 years ago
What is a Load balancer & Why do we need a Load balancer?

A load balancer accepts incoming traffic from users/clients and routes requests to EC2 Instances or targets.

The load balancer also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets.

When the load balancer detects an unhealthy target, it stops routing traffic to that target.

It then resumes routing traffic to that target when it detects that the target is healthy again.


The load balancer is region-specific.

1. Define Load Balancer
2. Assign Security Groups
3. Configure Security Settings
4. Configure Health Check
5. Add EC2 Instances
6. Add Tags
7. Review

Once the Load Balancer is successfully created, we can see the DNS Name of the load balancer on the dashboard.

How to test the Load Balancer?
Go to browser - Type the DNS Name and check the response
We should get responses from all the machines that are associated with the Load Balancer
This confirms that the load balancer is working as expected.
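The health-check behavior described above (route only to healthy targets, resume when they recover) can be sketched as a small round-robin balancer in Python. This is a conceptual model, not the ELB implementation; the class and instance IDs are illustrative.

```python
class LoadBalancer:
    """Toy round-robin load balancer that skips unhealthy targets."""

    def __init__(self, targets):
        self.health = {t: True for t in targets}  # target -> passing health check?
        self._i = 0

    def set_health(self, target, healthy):
        # In real ELB this is driven by periodic health checks, not a manual call.
        self.health[target] = healthy

    def route(self):
        # Only registered targets that are currently healthy receive traffic.
        healthy = [t for t, ok in self.health.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy targets")
        target = healthy[self._i % len(healthy)]
        self._i += 1
        return target


lb = LoadBalancer(["i-aaa", "i-bbb", "i-ccc"])
lb.set_health("i-bbb", False)   # i-bbb fails its health check and is skipped
first, second = lb.route(), lb.route()
# requests alternate between the two remaining healthy instances
lb.set_health("i-bbb", True)    # once healthy again, i-bbb rejoins rotation
```

Testing via the browser as described above exercises exactly this behavior: each refresh should land on a different healthy instance.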
Posted in: AWS | ID: Q21 | July 09, 2021, 10:43 AM | 0 Replies
AVR posted:
3 years ago
How to launch AWS Linux Instance?

EC2 stands for Elastic Compute Cloud

In this post, let's discuss how to create a Linux EC2 Instance from the AWS Management Console
1.Choose AMI correctly
2.Choose Instance type appropriately
3.Configure Instance
4.Add storage
5.Add tags
6.Configure security group(SSH Port 22 should be opened)
7.Review

AWS provides a .pem file while launching any Instance.

Using the PuTTYgen tool, we can convert (.pem) to (.ppk)

Now we can get connected to the Linux machine using PuTTY.

In order to use PuTTY, we need the Hostname or IP Address & the (.ppk) file

PPK is nothing but (PuTTY Private Key)



Each Linux instance launches with a default Linux system user account.
The default user name is determined by the AMI that was specified when you launched the instance.

For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.

For a CentOS AMI, the user name is centos.

For a Debian AMI, the user name is admin.

For a Fedora AMI, the user name is ec2-user or fedora.

For a RHEL AMI, the user name is ec2-user or root.

For a SUSE AMI, the user name is ec2-user or root.

For an Ubuntu AMI, the user name is ubuntu.

Otherwise, if ec2-user and root don't work, check with the AMI provider.
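The AMI-to-username list above is effectively a lookup table, which a small Python sketch makes concrete. The `ssh_command` helper and key file name are hypothetical, added only for illustration:

```python
# Default login user per AMI family, from the list above.
DEFAULT_USERS = {
    "amazon-linux": "ec2-user",
    "centos": "centos",
    "debian": "admin",
    "fedora": "fedora",     # ec2-user on some Fedora images
    "rhel": "ec2-user",     # or root
    "suse": "ec2-user",     # or root
    "ubuntu": "ubuntu",
}


def ssh_command(ami_family, host, key_file="my-key.pem"):
    """Hypothetical helper: build the ssh command with the right default user.
    Falls back to ec2-user for unknown AMIs, then try root, then ask the provider."""
    user = DEFAULT_USERS.get(ami_family, "ec2-user")
    return f"ssh -i {key_file} {user}@{host}"


assert ssh_command("ubuntu", "203.0.113.10") == "ssh -i my-key.pem ubuntu@203.0.113.10"
```

Using the wrong username is one of the most common causes of "Permission denied (publickey)" errors, so it is worth checking this table before blaming the key.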
Posted in: AWS | ID: Q20 | July 09, 2021, 10:19 AM | 0 Replies
AVR posted:
3 years ago
How to launch AWS Windows Instance?

EC2 stands for Elastic Compute Cloud

In this post, let's discuss how to create a Windows EC2 Instance from the AWS Management Console
1.Choose AMI correctly
2.Choose Instance type appropriately
3.Configure Instance
4.Add storage
5.Add tags
6.Configure security group(RDP Port 3389 should be opened)
7.Review

AWS provides a .pem file while launching any Instance.

We need RDP(Remote Desktop Protocol) to get connected to Windows Machine.

We need to make sure that we have DNS Name, Username & Password.

To get the password, we upload the downloaded .pem file, which decrypts the encrypted administrator password so we can log in to the Windows Instance.
Posted in: AWS | ID: Q19 | July 09, 2021, 10:12 AM | 0 Replies
AVR posted:
3 years ago
What is AWS?

AWS is nothing but a Cloud Platform.

In 2006, Amazon officially launched AWS(Amazon Web Services), which has become one of the major providers of cloud computing services.

AWS is a collection of remote computing services (web services) that together make up a cloud computing platform offered over the Internet by Amazon.

Website: https://aws.amazon.com/

AWS Global Infrastructure https://aws.amazon.com/about-aws/global-infrastructure/

What does AWS offer?
Availability
Reliability
Scalability
Pay-as-you-go


We also have other cloud platforms.
AWS
AZURE
GCP


Availability
AWS designs its services for high availability; for example, S3 is designed for 99.99% availability and 11 9's (99.999999999%) of durability.


Scalability
Ability to grow in size
Instant elasticity (scaling up and down) based on the business requirement (also known as dynamic scaling)
Eliminate guessing on your infrastructure capacity needs


Pay-as-you-go
This comes with no up-front expenses or long-term commitments.

Gartner Report - https://pages.awscloud.com/EMEA-field-DL-gartner-2021-learn-long.html?sc_channel=em&sc_campaign=EMEA_FIELD_LN_emea-enterprise-bdm-nurture_20210315_7014z000001MOOU&sc_medium=em_341818&sc_content=REG_ln_field&sc_geo=emea&sc_country=mult&sc_outcome=reg&sc_publisher=aws&trkCampaign=emeafy21entnurture&trk=em_inv1_emeafy21entnurture



The list of below AWS Services is always good to start with.
EC2
S3
IAM
ELB
AS
VPC
Route 53
RDS
Elastic Beanstalk
CloudTrail
SES
SQS
SNS
CloudFormation
CloudFront
CloudWatch


Available AWS Certifications - https://aws.amazon.com/certification/
Posted in: AWS | ID: Q18 | July 06, 2021, 11:18 AM | 0 Replies
AVR posted:
3 years ago
What are the different types of Routing policies we have in Route 53?
Simple
Weighted
Latency
Failover
Geolocation
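Of the policies listed, weighted routing is the easiest to illustrate with code: each record receives traffic in proportion to its weight. A minimal Python sketch of the idea (the record names and the deterministic `roll` parameter are assumptions for the example; Route 53 itself randomizes the selection):

```python
def weighted_pick(records, roll):
    """Weighted routing: each (endpoint, weight) record gets traffic
    proportional to its weight. `roll` is any non-negative integer;
    it is reduced modulo the total weight to select a record."""
    total = sum(weight for _, weight in records)
    point = roll % total
    for endpoint, weight in records:
        if point < weight:
            return endpoint
        point -= weight
    return records[-1][0]  # unreachable, kept for safety


records = [("us-east", 70), ("eu-west", 30)]
# rolls 0..69 map to us-east, rolls 70..99 map to eu-west: a 70/30 split
assert weighted_pick(records, 10) == "us-east"
assert weighted_pick(records, 85) == "eu-west"
```

The same cumulative-weight idea underlies blue/green and canary rollouts: shift the weights gradually instead of cutting traffic over all at once.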
Posted in: AWS | ID: Q14 | June 18, 2021, 09:20 AM | 0 Replies
AVR posted:
3 years ago
Have you accidentally deleted Default VPC from your AWS Account?

Do you want to recreate your Default VPC using AWS Management Console?

It's straightforward.

Go to AWS Console

Click on Your VPC from VPC Dashboard

Go to Actions - Select Create Default VPC

Create


Please note that a default VPC enables you to launch Amazon EC2 resources without having to create and configure your own VPC and subnets. AWS creates a default VPC with a default subnet in each Availability Zone, an Internet gateway, and a route table with a route to the Internet gateway.
View replies (0)
Posted in: AWS | ID: Q9 | June 10, 2021, 03:46 AM | 0 Replies
V posted:
3 years ago
I'm looking for AWS Material/Documents
Posted in: AWS | ID: Q6 | June 08, 2021, 09:05 AM | 1 Replies
Ayaz posted:
3 years ago
I want Aws Solutions Architect Course
Posted in: AWS | ID: Q4 | June 08, 2021, 07:35 AM | 1 Replies