
To stop spammers/bots in Telegram, we have added a captcha for joining the Telegram group: every new member has to authenticate within 60 seconds of joining.


Published Articles (117)


You are viewing the Articles/Questions in Azure category

AVR posted:
2 years ago
What is AKS?
-------------------
AKS stands for Azure Kubernetes Service.
We should understand the difference between monolithic and microservices architectures.
Monolithic means the application's components are tightly coupled.
If any change is made to a monolithic application, the entire application has to be brought down.
Microservices is nothing but breaking an application down into multiple independent pieces.
E-commerce websites in particular treat microservices as MANDATORY.
We should have a basic understanding of virtualization vs containerization.
Every microservice can be containerized.
Containers are lightweight and portable.
Posted in: Azure | ID: Q113 | February 22, 2023, 10:52 AM | 0 Replies
AVR posted:
2 years ago
SPARK ARCHITECTURE:
------------------------------------
There are 5 parts to it
->Driver program
->Cluster Manager
->Worker Node
->Executor
->Task

Driver program:
---------------------
The Driver Program in the Apache Spark architecture runs the main program of an application and creates the SparkSession.
The SparkSession provides all the basic functionality.
The Spark Driver contains various other components, such as the DAG Scheduler, Task Scheduler, and Backend Scheduler, which are responsible for translating the user-written code into jobs that are actually executed on the cluster.
A job is split into multiple smaller tasks, which are further distributed to worker nodes and can also be cached there.
The Spark Driver and SparkSession collectively watch over job execution within the cluster.
The Spark Driver works with the Cluster Manager to manage the various jobs.

Cluster Manager:
-----------------------
The role of the cluster manager is to allocate resources across applications. Spark is capable of running on a large number of clusters.
Several types of cluster managers are supported: Hadoop YARN, Apache Mesos, and the Standalone Scheduler.

Worker Node:
------------------
The worker node is the slave node in this architecture.
Its role is to run the application code in the cluster.

Executor:
-------------
An executor is a process launched for an application on a worker node.
It runs tasks and keeps data in memory or disk storage across them.
It reads and writes data to external sources.
Every application has its own executors.

Task:
-------
A unit of work that will be sent to one executor.
Posted in: Azure | ID: Q99 | May 07, 2022, 10:30 AM | 0 Replies
AVR posted:
2 years ago
What do you know about Spark Architecture?

SPARK ARCHITECTURE:
-----------------------------------
There are 5 parts to it; let's understand how they interconnect internally
->Driver program
->Cluster Manager
->Worker Node
->Executor
->Task

A user writes the code and submits it to the Driver program and this is where the Spark Session gets started
Spark Session establishes the communication between the Driver program, Cluster manager & Worker nodes.
Driver Program has the code and asks the Cluster Manager to get the work done with the help of worker nodes
Now the Cluster Manager brings up some worker nodes.
Internally we can have 1, 2, 3, or more worker nodes.
Typically, a worker node consists of executors and tasks.
The Driver program asks the Cluster Manager to launch the workers.
The Cluster Manager's responsibility is only to monitor the workers/worker nodes.
Once the Cluster Manager notices the active worker nodes, it informs the Driver program.
Now Driver Program gives the actual work to Worker Nodes directly
The executor executes the job and gives the result back to the Driver program.
Finally, the Driver program returns the end result to the user 
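The driver/worker flow above can be sketched in plain Python. This is a toy illustration using a thread pool, not Spark itself; all names in it (square_task, data) are made up for the example:

```python
# Toy sketch of the Spark flow: the "driver" splits a job into tasks,
# a pool of "workers" executes them, and results come back to the driver.
from concurrent.futures import ThreadPoolExecutor

def square_task(chunk):
    # A "task" is a unit of work sent to one executor
    return [x * x for x in chunk]

data = list(range(10))
# The "driver" splits the job into smaller tasks (chunks)
chunks = [data[i:i + 5] for i in range(0, len(data), 5)]

# The "workers" (threads here) execute the tasks in parallel
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(square_task, chunks))

# The driver collects and combines the partial results for the user
final = [y for part in results for y in part]
print(final)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

In real Spark, the cluster manager and worker nodes are separate machines, but the division of a job into tasks and the collection of results by the driver follow the same shape.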
Posted in: Azure | ID: Q98 | May 07, 2022, 10:28 AM | 0 Replies
AVR posted:
2 years ago
Real-time data processing vs. Batch processing:
----------------------------------------------------------------
->Real-time processing collects the data and processes it immediately/on the same day
->Collection & processing happen on the same day in real-time processing
->Batch processing collects the data today and processes it tomorrow or the next day
->Batch processing has a one-day delay in processing the data
->Batch processing can be daily/weekly/monthly/quarterly
->Every business has to adopt new changes; if not, they cannot run the business in today's world
->Every customer looks for new features, and they have to be available directly from a mobile device
->If the website or the mobile app is not working, we all know how the business could get affected.
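The batch idea above can be sketched in plain Python: collect timestamped records, then process them grouped by day. The records below are made-up examples:

```python
from collections import defaultdict
from datetime import date

# Hypothetical records collected over two days: (event_date, amount)
records = [
    (date(2022, 5, 6), 100),
    (date(2022, 5, 6), 250),
    (date(2022, 5, 7), 75),
]

# Daily batch processing: group the collected data by day, process each batch later
batches = defaultdict(list)
for event_date, amount in records:
    batches[event_date].append(amount)

daily_totals = {d: sum(v) for d, v in batches.items()}
print(daily_totals)  # {datetime.date(2022, 5, 6): 350, datetime.date(2022, 5, 7): 75}
```

A real-time pipeline would instead compute on each record as it arrives, rather than accumulating a day's worth first.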
Posted in: Azure | ID: Q97 | May 07, 2022, 10:25 AM | 0 Replies
AVR posted:
2 years ago
What do you know about Exception handling in python?

Exception handling exists in all programming languages, such as:
.NET
Java
Python
SQL
Scripting (shell/bash)

Why do we need exception handling?
Exception handling is designed so that the code doesn't break if something is unavailable or if something is not working as expected

Exceptions are of two types:
->System-raised exceptions
->User-raised exceptions
Typically, only if a run succeeds does it place its file in the storage account.
If the previous run failed, the failed job wouldn't place any file in the storage account.
The next run wouldn't find the previous job's file, and eventually this run fails too.
Run2 is dependent on the Run1 file.
A file-not-found exception is what we would see.


How to make use of exception handling without failing Run2?
===========================================
Exception handling doesn't mean we're fixing the issue,
and we're not placing any temporary file to run the job;
instead, we redirect the flow to keep execution smooth.
Without interrupting the flow of execution:
if the file is not located in the expected location, then go to the archive folder and pick the backup file to read and run the job.
We have a main folder & an archive folder.
The main folder is the place where the original file must be.
The archive folder is the place where we keep a backup file to run the job and avoid failures.

NOTE:
====
raise is a keyword.
This is what we use to raise our own exceptions.
In a Python environment, run help("keywords") to see all the reserved words; raise is part of that list.
try, except, else, and finally are keywords as well.
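We can confirm this from any Python environment with the built-in keyword module:

```python
import keyword

# All the words above are reserved words in Python
for kw in ("raise", "try", "except", "else", "finally"):
    print(kw, keyword.iskeyword(kw))  # each prints True
```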





The sequence is as follows:
===================
try block -
This is a mandatory block in exception handling.
We need to keep the risky code in the try block.
The try block always executes when we implement exception handling.

except block -
When will the except block execute? Will it always execute?
Only when an exception occurs in the try block does execution move to the except block.
The except block gets executed only if the try block encounters an exception.
The except block is NOT mandatory.
Exception handling can also be done without an except block (for example, try/finally);
in that case we need to maintain alternative logic or alternative files.

else block -
This is not a mandatory block.
It is meant for printing success messages.
else executes only when NO exception is raised in the try block.
If the try block doesn't raise an exception, the else block gets executed automatically.
If the try block raises an exception, the else block won't get executed.

finally block -
This is not mandatory.
It is used for closing all database or storage connections.
It always executes.

raise -
This is used to raise custom/user-defined exceptions.
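The main-folder/archive-folder flow described above can be sketched as follows. This is a sketch with made-up folder names and file contents; a real job would read from the storage account instead of local temp directories:

```python
import os
import tempfile

# Set up a hypothetical layout: the main folder is empty (Run1 failed),
# the archive folder holds the backup file.
base = tempfile.mkdtemp()
main_path = os.path.join(base, "main", "data.csv")
archive_path = os.path.join(base, "archive", "data.csv")
os.makedirs(os.path.dirname(main_path))
os.makedirs(os.path.dirname(archive_path))
with open(archive_path, "w") as f:
    f.write("backup data")

try:
    # Risky code: the original file may be missing
    with open(main_path) as f:
        content = f.read()
except FileNotFoundError:
    # Redirect the flow instead of failing Run2: fall back to the archive folder
    with open(archive_path) as f:
        content = f.read()
else:
    # Runs only when the try block raised no exception
    print("Read from main folder")
finally:
    # Always runs: close connections, clean up, etc.
    print("Job finished")

print(content)  # backup data
```

Here Run2 completes using the archive copy instead of failing with FileNotFoundError.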




Common error messages are as follows:
========================
->SyntaxError: unmatched
->TypeError: unsupported operand type(s) for 'int' and 'str'
->NameError: name 'x' is not defined (variable x is not defined)
->IndentationError: unexpected indent
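A couple of these messages can be reproduced and caught deliberately:

```python
messages = []

try:
    1 + "a"          # mixing int and str
except TypeError as e:
    messages.append(str(e))

try:
    print(x)         # x was never defined
except NameError as e:
    messages.append(str(e))

print(messages)
```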




How to check mount points?
======================
dbutils.fs.mounts()
Posted in: Azure | ID: Q92 | May 07, 2022, 10:04 AM | 0 Replies
AVR posted:
3 years ago
Let's understand the Data flow architecture in Azure databricks.
We need to have good knowledge of how different components are connected and internally what happens when creating a databricks cluster.

Microsoft provides Azure Databricks, which means servers/storage will be used from the Microsoft datacenter.
Likewise,
AWS provides Databricks on AWS, which means compute/storage will be used from the AWS datacenter.

What is the difference between an All-purpose cluster & job cluster?
The all-purpose cluster is used in the development environment.
The job cluster is used in the production environment.


What is the difference between real-time data processing and batch data processing?
Real-time data processing handles a negligible amount of data at a time. If the data is very small and the processing takes a minute or two, then we can consider this real-time data processing.
What is Batch data processing?
If we are collecting one hour of data to process, then we can call this hourly batch data processing (a small job).
If we are collecting twenty-four hours of data to process, then we can call this daily batch data processing (a big job).
If the batch window is five or ten minutes, then we can call this small-batch data processing (a very small job).
Posted in: Azure | ID: Q91 | April 18, 2022, 11:21 AM | 0 Replies
AVR posted:
3 years ago
What is Databricks & What do you know about Databricks?

Databricks is a new analytics service.
Azure Databricks is a fast, easy, scalable, and collaborative Apache Spark-based analytics service on Azure.

Why do we call it fast? Because it uses a Spark cluster.
Why do we call it easy? We don't need an IDE like Eclipse, PyCharm, or Visual Studio to write the code.
Why do we call it scalable? Dynamic allocation of resources (nodes) as per the requirement is possible; we always need more nodes to process more data in Databricks.
What is collaborative? Data engineers, data scientists, and business users can all work in a Databricks notebook collaboratively. Instead of working in isolation, they all work in Databricks to be more productive.
We can seamlessly connect from Databricks to other Azure services (Data Lake, Blob storage account, SQL Server, Azure Synapse). It reduces cost and complexity with a managed platform that auto-scales up and down.

Let's understand more about Azure Databricks Architecture
Once Databricks Workspace is created, we have the flexibility to create clusters
We also have the option to upload data via DBFS, though this is NOT recommended at the enterprise level, considering security as a high priority.
DBFS is the Databricks File System.
When we store data internally via DBFS, it gets stored at the backend in a storage account, depending on the cloud we choose (AWS/Azure).
If we choose AWS, then EC2 instances spin up and the data gets stored internally in AWS S3.
If we choose Azure, then VMs spin up and the data gets stored internally in a Blob storage account.
Databricks knows all the dependencies at the time of workspace creation; it creates all the prerequisites needed for the Databricks workspace.
A Databricks cluster is nothing but a group of VMs.
When we create a cluster, the VMs get created at the backend in the Azure portal.
In order to run a notebook, we need a Databricks cluster in place; we attach the notebook to the cluster, and the notebook code gets executed there.

Databricks has 2 options: one is auto-scaling, and the other is terminating the cluster after inactivity. These two options are very helpful in reducing cost.
Posted in: Azure | ID: Q90 | April 18, 2022, 11:19 AM | 0 Replies
AVR posted:
3 years ago
How do we create a notebook in databricks, and what are our options while creating a notebook?
We can create a notebook using any of the below languages.
python
scala
SQL 
When we create a notebook with Python, for example, we also have the option to change it to another language, such as Scala, after creating it.
We have the flexibility to switch from one language to another.


Let's understand more about the databricks cluster.
What is a databricks cluster & why do we need a databricks cluster?
Let's assume that we have some data in the storage, and we need to process this data. To process this data with some ETL operations, we need some computing power to execute the notebook logic, and this is where the cluster would come in place.
A cluster is nothing but a group of nodes.
An Apache Spark cluster has Spark installed across its multiple nodes, along with the Spark features; the nodes all work together internally to achieve a common goal.
DBU stands for Databricks Unit: a unit of processing capability per hour, used for billing.
Roughly speaking, the more nodes a cluster has, the more DBUs it consumes per hour.
The nodes are used to execute the notebook code.
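As a rough sketch, cluster cost scales with DBU consumption and hours of usage. The rate below is hypothetical; real rates depend on tier, region, and workload type:

```python
# Rough cost sketch: DBUs are billed per hour of cluster usage.
# rate_per_dbu is a made-up example rate, not an actual Databricks price.
def estimate_cost(dbu_per_hour, hours, rate_per_dbu=0.40):
    return dbu_per_hour * hours * rate_per_dbu

# e.g. a cluster consuming 8 DBU/hour, running for 10 hours, at $0.40/DBU
print(estimate_cost(8, 10))  # about $32
```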

Below are the options we have while creating a databricks notebook:
Clone
Rename
Move
Delete
Upload Data
Export
Publish
Clear Revision History
Change Default Language

When we upload files to the Databricks workspace, we have two types of formats: Spark API format & File API format.
How to Access Files from Notebooks?
For PySpark, pandas, R, and Scala, the access code can be generated easily through the UI.


Below is an example of how we can read a CSV file from the Databricks file system:
df1 = spark.read.format("csv").option("header","true").load("dbfs:/FileStore/tables/credit.csv")
df1.show()


What are the formats we have while exporting databricks notebooks?
DBC Archive format comes as (.dbc) - an archived format that is not easy to read/understand
Source File format comes as (.py), assuming we are exporting a Python notebook - easy to read/understand
IPython Notebook format comes as (.ipynb) - readable, but again not in an easy format to understand
HTML format comes as (.html) - easy to read, but again not in an easy format to understand


How do we import notebooks from the databricks workspace?
Go to the workspace - Make use of the import option wherever we want
We can import only one at a time as per the current databricks standards.


Publish notebook:
When a notebook is published, we get a URL that is publicly accessible
The link remains valid for 6 months in general
Go to a browser and paste the URL; the link should work as expected


Clear revision history:
All notebook code changes get recorded and this is what we call versioning.
We can always go back to a version history to see all the code changes.
Versioning plays a major role in any development activity in general.
Versioning helps when something goes wrong with new version changes - this is where rollback comes into play.


Change Default Language
Python/Scala/SQL/R
We can change the language to any of the above using the default language options given by Databricks


Clear cell outputs only clears the outputs; the state is still active if the cells were executed
Clear state means the state is fully cleared, and no values remain stored unless the cells are re-executed manually
Posted in: Azure | ID: Q89 | April 17, 2022, 11:02 PM | 0 Replies
AVR posted:
3 years ago
What do you know about Databricks runtime(DBR)?

The set of core components that run on the clusters managed by Databricks is nothing but DBR.
Databricks offers several types of runtimes versions.
Below are LTS versions
7.3 LTS
9.1 LTS
10.4 LTS

What is Databricks Runtime?
It includes Apache Spark and adds a number of components and updates that substantially improve the usability, performance, and security of big data analytics.

Databricks Runtime for ML:
It is built on Databricks Runtime and provides a ready-to-go environment for machine learning and data science. It contains multiple popular libraries, including
TensorFlow
Keras
PyTorch
XGBoost

Databricks Runtime for Genomics:
It is a version of Databricks Runtime optimized for working with genomic and biomedical data. This cluster type is mainly used in the healthcare industry.


Databricks clusters have Autoscale and Auto Termination (terminate after inactivity) options, which are very helpful in cutting down the cost of cluster usage.
Posted in: Azure | ID: Q88 | April 07, 2022, 08:46 AM | 0 Replies
AVR posted:
3 years ago
What do you know about pool in databricks?

The Instance pools API allows you to create, edit, delete and list instance pools.
A pool is a set of idle, ready-to-use instances that reduces cluster start and auto-scaling times.
When attached to a pool, a cluster allocates its driver and worker nodes from the pool.
If the pool does not have sufficient idle resources to accommodate the request of the cluster, the pool expands by allocating new instances from the instance provider.
When an attached cluster is terminated, the instances used are returned to the pool and can be reused by a different cluster.
If we have a pool in place, clusters can get their nodes directly from it.
Posted in: Azure | ID: Q87 | April 07, 2022, 07:35 AM | 0 Replies
AVR posted:
3 years ago
What do you know about Computation Management in Databricks?

We have two types of clusters in Databricks:
1) Interactive cluster (also called all-purpose cluster)
2) Job cluster

Interactive cluster:
We can manually terminate/restart the cluster
Multiple users can share these clusters to do collaborative, interactive analysis
Manual operations are possible in this cluster

Job cluster:
This is a cost-saving cluster in general
When we schedule a job, the job would automatically spin up the job cluster and terminate the cluster once the job is complete.
We cannot start the job cluster manually
We cannot restart the job cluster manually
Posted in: Azure | ID: Q86 | April 07, 2022, 07:27 AM | 0 Replies
AVR posted:
3 years ago
What do you know about Data Management in Databricks?
We can manage the data, and we can also upload the data. When we log in to Databricks, we can go to Data - Create Table - Upload File; this is where we store the files.
If the storage is a storage account, a file format is used; if the storage is a database, a table format is used.
Posted in: Azure | ID: Q85 | April 07, 2022, 05:21 AM | 0 Replies
AVR posted:
3 years ago
What is a workspace in Databricks?
Databricks Workspace is an environment for accessing all of your Databricks assets.
The dashboard is a direct interface where we can create notebooks, import data, etc.
The dashboard is an interface that provides organized access to visualizations.
What are notebooks?
A notebook is a web-based interface to a document containing a series of runnable cells (commands)
When we use the Run All option in a notebook, the commands run in sequence automatically
Below are examples of what we can do when we log in to the Databricks workspace
create a notebook/folder
clone a notebook
import a notebook
export a notebook
Posted in: Azure | ID: Q84 | April 07, 2022, 04:59 AM | 0 Replies
AVR posted:
3 years ago
Databricks Community Edition is FREE for self-learning.
There are three ways to interact with Databricks Interface.
UI - Using UI, we can create a cluster and make the changes easily.
CLI - Using CLI, we can run commands to interact with the Databricks Workspace.
REST API: The Databricks REST API allows us to programmatically access Databricks instead of the web UI. When we run a command, it calls the API URL at the backend to interact with the Databricks workspace.
curl -n -X GET /api/2.0/clusters/list
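The same Clusters List call can be prepared from Python as well. Here the request is only constructed, not sent; the workspace URL and token are placeholders:

```python
# Build the request for the Databricks Clusters List API (2.0).
# WORKSPACE_URL and TOKEN are placeholders - fill in your own values.
WORKSPACE_URL = "https://example.cloud.databricks.com"
TOKEN = "dapi-EXAMPLE"

def clusters_list_request(workspace_url, token):
    # Same endpoint the curl example above calls
    url = workspace_url.rstrip("/") + "/api/2.0/clusters/list"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = clusters_list_request(WORKSPACE_URL, TOKEN)
print(url)  # https://example.cloud.databricks.com/api/2.0/clusters/list
# To actually send it: requests.get(url, headers=headers)
```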
Posted in: Azure | ID: Q83 | April 07, 2022, 04:41 AM | 0 Replies
AVR posted:
3 years ago
What is VNet Peering?
VNet peering establishes communication between two virtual networks.
These virtual networks can be in the same region or in different regions (the latter is also known as Global VNet peering).
Once virtual networks have peered, resources in both virtual networks can communicate with each other, with the same latency and bandwidth as if the resources were in the same virtual network.


Example1:
RG1 is located in EUS
Created VNet1  10.10.0.0/16
Created Subnet1 10.10.1.0/24
Created Subnet2 10.10.2.0/24
Created LVM1(Linux VM1)
Created LVM2 (Linux VM2)


Example2:
RG1 is located in WUS
Created VNet2  10.11.0.0/16
Created Subnet3 10.11.1.0/24
Created LVM3(Linux VM3)


By default, without any VNet peering, communication can happen in the following way.
LVM1 to LVM2  (Yes, because they are in the same VNet)
LVM1 to LVM3  (No, because they are in different VNets)
LVM2 to LVM1  (Yes, because they are in the same VNet)
LVM2 to LVM3  (No, because they are in different VNets)
LVM3 to LVM1  (No, because they are in different VNets)
LVM3 to LVM2  (No, because they are in different VNets)


Resource Group is not regional bound, whereas VNet is regionally bound.


Same VNet- Virtual machines can communicate by default
Different VNets - Virtual machines wouldn't communicate by default
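One prerequisite worth checking before peering: the two address spaces must not overlap. Python's ipaddress module can verify this for the example VNets above:

```python
import ipaddress

vnet1 = ipaddress.ip_network("10.10.0.0/16")  # VNet1 in EUS
vnet2 = ipaddress.ip_network("10.11.0.0/16")  # VNet2 in WUS

# Peering requires non-overlapping address spaces
print(vnet1.overlaps(vnet2))  # False -> safe to peer
```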


How to create VNet peering?
Go to one of the VNets.
Go to settings - Peerings
Click on Add button to have a new peering
We need to fill in the details below correctly
Below are the options for the current VNet where we opened the settings
Peering link name -  
Traffic to remote virtual network - Allow
Traffic forwarded from the remote virtual network - Allow
Virtual network gateway or Route Server - None
Remote VNet options
Peering link name -  
Subscription -
Virtual Network -
Traffic to remote virtual network - Allow
Traffic forwarded from the remote virtual network - Allow
Virtual network gateway or Route Server - None
Click on Add
Now check the VNet peering connection status
The peering status is Connected
If you don't see a Connected status, select the Refresh button.
Posted in: Azure | ID: Q81 | January 26, 2022, 09:14 AM | 0 Replies
AVR posted:
3 years ago
What do you know about Network Security Group(NSG) in Azure?
A group of network security rules is nothing but an NSG
Create a network security group with rules to filter inbound traffic to, and outbound traffic from, virtual machines and subnets.

What do you know about the network security Rule?
To allow or disallow inbound and outbound traffic.

The most commonly used ports to allow or disallow in an NSG are as follows.
RDP 3389(To connect to Windows machines)
SSH 22(To connect to Linux machines)
HTTP 80(To connect to web-based application)
HTTPS 443(To connect to secured web-based application)
TCP 1433(To connect to SQL Server Database)
The port range 11000-11999 is a newer addition, also used for connections to Azure SQL Database.

Creation of VM Hierarchy
1st Create RG
2nd Create VNet
3rd Create Subnet
4th Create NSG
5th Create VM

What do you know about Network ID & Broadcast ID IP Range?
Assume that I have created a VNet (10.10.0.0/16) with a subnet (10.10.1.0/24).
Azure reserves the first four IP addresses (network address, default gateway, and two DNS addresses) and the last IP address (broadcast) in every subnet.
We get only 251 IP addresses to give to virtual machines.
10.10.1.4 is the starting IP, as all the initial ones are reserved IPs.
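The arithmetic can be checked with Python's ipaddress module: a /24 has 256 addresses, and Azure reserves 5 of them:

```python
import ipaddress

subnet = ipaddress.ip_network("10.10.1.0/24")
usable = subnet.num_addresses - 5  # Azure reserves the first 4 and the last IP
print(subnet.num_addresses, usable)  # 256 251
print(subnet[4])  # 10.10.1.4 - the first assignable address
```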

What do you know about Public IP & Private IP?
Public IP is nothing but access from outside (Internet to VM)
Private IP is nothing but internal access (VM to VM)

We have two options to allow/disallow ports.
One is from VM Networking & the other one is from NSG Inbound and outbound
At an Enterprise level, we use only NSG to allow/disallow the traffic.

How to install the Nginx web server on a Linux machine?
Use the below commands one by one in the sequence
sudo su -
apt update -y
apt install nginx -y

What do you know about priority value?
Usually, a lower priority number has higher precedence, and a higher priority number has lower precedence.
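Rule evaluation can be sketched in a few lines: rules are processed in ascending priority order, so the lowest number is evaluated first. The rule names and priorities below are made up:

```python
# Hypothetical NSG rules: (priority, name, action)
rules = [
    (300, "AllowHTTPS", "Allow"),
    (100, "AllowSSH", "Allow"),
    (4096, "DenyAllInbound", "Deny"),
]

# Lower priority number = evaluated first; the first match wins
for priority, name, action in sorted(rules):
    print(priority, name, action)
```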
Posted in: Azure | ID: Q80 | January 25, 2022, 07:55 AM | 0 Replies
AVR posted:
3 years ago
Virtual Network Service Endpoints in Microsoft Azure

A Virtual Network (VNet) service endpoint provides secure and direct connectivity to Azure services via an optimized route over the Azure backbone network.

Endpoints allow you to secure your critical Azure service resources to only your virtual networks.

Service Endpoints enables private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the Vnet.

Resources maintain a public IP address
The IP resolves via Microsoft DNS
Not available from private, on-premises networks
Posted in: Azure | ID: Q79 | October 17, 2021, 03:13 PM | 0 Replies
AVR posted:
3 years ago
What are the three different types of services we have in Azure Cloud?

Infrastructure as a Service (IaaS),
Platform as a Service (PaaS), and
Software as a Service (SaaS).

These are the three major service categories provided by any cloud provider.
Posted in: Azure | ID: Q78 | October 03, 2021, 07:13 PM | 0 Replies
AVR posted:
3 years ago
Microsoft provides Azure support options in multiple ways.
Basic
Developer
Standard
Professional Direct

If you’re looking for a comprehensive, organization-wide support plan that includes Azure, Microsoft 365, and Dynamics 365, we also have enterprise support from Microsoft.
Posted in: Azure | ID: Q77 | September 21, 2021, 08:51 PM | 0 Replies
AVR posted:
3 years ago
How to organise Azure resources effectively?

Azure provides four levels of management scope:

Management groups
Subscriptions
Resource groups
Resources


Management groups:
Management groups help us manage access, policy, and compliance across multiple subscriptions.
We can have multiple administrators, co-administrators, or Enterprise Admins,
but the owner (whoever created the account) is always unique.


Subscriptions:
We can have many subscriptions, and each subscription has some limits or quotas on the resources we create.


Resource groups:
A resource group is a logical container where we deploy and manage Azure resources.


Resources:
Resources are nothing but instances of services that we create.



We can apply management settings like policies and Azure RBAC(Role-based access control) at any management level.

The level we select determines how widely the settings are applied.

Please note that lower levels always inherit the settings from higher levels.
Example:
When we apply a policy to a specific subscription, that policy applies to all resource groups and resources in that subscription.
Posted in: Azure | ID: Q76 | September 20, 2021, 08:07 PM | 0 Replies
AVR posted:
3 years ago
What are the features of Delta Lake?

An open-source storage format that brings ACID transactions to Apache Spark and big data workloads.
The key features are
Open format
ACID Transactions
Schema Enforcement and Evolution
Audit History
Time Travel
Deletes and upserts
Scalable Metadata management
Unified Batch and Streaming Source and Sink
Posted in: Azure | ID: Q75 | September 20, 2021, 06:37 AM | 0 Replies
AVR posted:
3 years ago
What is Delta Lake?

Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads on Data Lakes.
Delta Lake provides ACID transactions on spark and scalable metadata handling.
Delta Lake runs on top of our existing data lake and is fully compatible with the Apache Spark APIs.
Delta Lake supports Parquet format, Schema enforcement, Time travel, Upserts and deletes.
Posted in: Azure | ID: Q74 | September 20, 2021, 06:35 AM | 0 Replies
AVR posted:
3 years ago
Why do we need Delta Lake?

Challenges in the implementation of a data lake:
Missing ACID properties (A.C.I.D. properties: Atomicity, Consistency, Isolation, and Durability)
Lack of Schema enforcement
Lack of Consistency
Lack of Data Quality
Too many small files
Corrupted data due to frequent job failures in PROD (we cannot roll back)
Posted in: Azure | ID: Q73 | September 20, 2021, 06:34 AM | 0 Replies
AVR posted:
3 years ago
Let's learn VNet peering in Azure.

By default, there is communication between virtual machines in the same virtual network.

We need VNet peering to establish communication between virtual machines in different virtual networks (or) regions.

Log in to the Azure portal at portal.azure.com
Create two Virtual networks in different regions
Go to any one of the two Virtual Networks, select Peerings under Settings, and then select Add.
Configure the peering for the two virtual networks and select Add.

This virtual network:
Remote virtual network:

Once the PEERING STATUS is showing as Connected,

connect to any one of the VMs and then try to ping the private IP of the second virtual machine to test the peering.
(Peering carries traffic between private IPs; pinging the public IP would go over the Internet instead.)


If you have any other comments or suggestions regarding this, please feel free to leave a comment below.

Happy Learning
Posted in: Azure | ID: Q72 | August 29, 2021, 04:25 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to deploy a Linux Virtual Machine using the Azure Portal.

Let's begin with a basic understanding of the following:
Resource Group: a group of services
Virtual Network: an imaginary (virtual) network
Subnet: a subdivision of a network into multiple smaller networks
Virtual Machine: an imaginary (virtual) computer



We need to fill in all the mandatory details on the following
Basics
Disks
Networking
Management
Advanced
Tags
Review+Create



How to Connect to a Linux Virtual Machine using username, password & Putty?
Connect with PuTTY
Fill in the hostname or IP address of VM from the Azure portal
Click Open to connect to your VM.



Basic Linux Commands:
____________________
sudo su -: user to root user (admin)
mkdir: Make Directory
ls: list
ls -al: List hidden files
cd: change directory
touch: To create a file
nano: Edit the file
Ctrl+O: save (write out) in nano; newer nano versions also accept Ctrl+S
Ctrl+X: exit nano
cat: to see the information in the file
cd ..: change directory backwards
cd ~/ : change directory to home (ex: cd ~/)
rm -f: force-remove files
rm -rf: remove a directory and its contents (rm -d removes only an empty directory)
clear: clear the screen
exit: exit

If you have any other comments or suggestions regarding this, please feel free to leave a comment below.

Happy Learning
Posted in: Azure | ID: Q71 | August 29, 2021, 03:02 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to deploy a Windows Virtual Machine using the Azure Portal.

Let's begin with a basic understanding of the following:
Resource Group: a group of services
Virtual Network: an imaginary (virtual) network
Subnet: a subdivision of a network into multiple smaller networks
Virtual Machine: an imaginary (virtual) computer



We need to fill in all the mandatory details on the following
Basics
Disks
Networking
Management
Advanced
Tags
Review+Create



How to connect to Azure Virtual Machine using RDC?
Type remote desktop connection in the Search bar, then hit Enter key to run it.
Enter the IP address of the remote computer, and then click Connect.
Enter the username and password of the remote computer and click OK.
Click Yes to confirm this connection if prompted with the security message.
You will now be able to connect and access the Windows virtual machine.



If you have any other comments or suggestions regarding this, please feel free to leave a comment below.

Happy Learning
Posted in: Azure | ID: Q70 | August 29, 2021, 03:00 AM | 0 Replies
AVR posted:
3 years ago
Let's understand the networking basics to use in Azure.

Network Security Group is nothing but a Group of Network Security Rules.

Network Security Rule is nothing but to allow or disallow inbound and outbound traffic.

Below are the common port numbers we use in general.

RDP: 3389
SSH: 22
HTTP: 80
HTTPS: 443


What is Public IP?
Public IP is nothing but communication from the Internet(Outside world) to Azure Virtual Machine.

What is Private IP?
Private IP is nothing but communication between Virtual Machines within the same virtual network.
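As a sketch of how such a rule looks in practice, the Azure CLI can create an NSG and an inbound rule for one of the ports above (the resource names here are placeholders, and running this assumes an Azure subscription and az login):

```shell
# Create a Network Security Group (group of network security rules)
az network nsg create --resource-group demo-rg --name demo-nsg

# Add a rule that allows inbound SSH traffic on port 22
az network nsg rule create \
  --resource-group demo-rg \
  --nsg-name demo-nsg \
  --name Allow-SSH \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 22
```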
View replies (0)
Posted in: Azure | ID: Q69 | August 29, 2021, 12:36 AM | 0 Replies
AVR posted:
3 years ago
We have different ways of accessing Data lake storage.
A)Mount ADLS Containers to DBFS using service principal and OAuth 2.0
B)Use access keys directly

We should also be aware of the following
Creating App Registration
Creating Key Vault Service
Creating Azure Databricks Workspace & spinning up Databricks cluster
Creating Databricks Secret scope for connectivity
Creating mount point
Reading different format files
Reading multiple files
Writing dataframe as CSV file
Writing dataframe as parquet file



Assume that we have data in Datasource
Considering that we have uploaded data in Azure Data Lake
If we have to access the data in Azure Data Lake, the below are the steps we need to follow
1)Read the data from the source
2)Apply transformations as per the business needs
3)Write data back to the target/sink (Ex: SQL Database)

How to read the data in Data Lake?
We need to register a new App at App Registration, and after registering, we get credentials
We cannot access Azure Data Lake with the App credentials alone
We need to grant the App access on Azure Data Lake for the communication to work
We also have Key Vault to store all sensitive details like secrets
We store all App credentials as secrets in Key vault
Databricks connects to the key vault; from here, both read and write operations can happen easily
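A minimal sketch of the supporting setup with the Azure CLI (all names here are hypothetical placeholders, and running this assumes an Azure subscription; the mount itself is then done from a Databricks notebook via the secret scope):

```shell
# Register a new App (service principal) for OAuth access to Data Lake
az ad app create --display-name demo-adls-app

# Create a Key Vault to hold sensitive details like secrets
az keyvault create --resource-group demo-rg --name demo-kv --location eastus

# Store the App's client secret in the Key Vault
az keyvault secret set --vault-name demo-kv \
  --name adls-client-secret --value "<app-client-secret>"
```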
Posted in: Azure | ID: Q68 | August 28, 2021, 09:46 AM | 1 Replies
AVR posted:
3 years ago
How to deploy Azure Virtual Machine using CLI?

CLI stands for command-line interface.
Deploying via the CLI is much faster than doing it via the Azure Portal.
We use shell scripting to create the services/infrastructure
We can also use PowerShell to create the services/infrastructure

Resource group - Group of services
Virtual network - Imaginary network
Subnet - A smaller network carved out of a larger network
NW Security group - Group of NW Security rules
NW Security Rules - To allow or deny inbound & outbound traffic (Examples: SSH 22, RDP 3389)
Availability set - Logical grouping of VMs
Virtual Machine - Imaginary Computer/Machine


Tools we need:
-----------------
Download Visual Studio Code for Windows (this is nothing but a code editor)



Scenario:
-----------
If we need to deploy 100 VMs via the Portal, it is time-consuming and not the best solution at the enterprise level.


Go to the URL
shell.azure.com

Bash is nothing but shell scripting.

The below are the steps we need to create
1)Create a Resource Group using Bash
2)Create a Virtual Network and Subnet using Bash
3)Create an additional subnet using Bash(Optional)
4)Create Network Security Group using Bash
5)Create Network Security Rules using Bash
6)Create Availability set using Bash
7)Create Ubuntu/RHEL/CentOS/Windows Virtual Machine using Bash
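Steps 1-7 above can be sketched as a single Bash script using the Azure CLI (all names, address prefixes, and the location are hypothetical placeholders; run after az login):

```shell
#!/bin/bash
# 1) Resource Group
az group create --name demo-rg --location eastus

# 2) Virtual Network with a Subnet
az network vnet create --resource-group demo-rg --name demo-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name demo-subnet --subnet-prefix 10.0.1.0/24

# 3) Additional subnet (optional)
az network vnet subnet create --resource-group demo-rg \
  --vnet-name demo-vnet --name demo-subnet2 --address-prefixes 10.0.2.0/24

# 4) Network Security Group
az network nsg create --resource-group demo-rg --name demo-nsg

# 5) Network Security Rule (allow SSH)
az network nsg rule create --resource-group demo-rg --nsg-name demo-nsg \
  --name Allow-SSH --priority 1000 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 22

# 6) Availability set
az vm availability-set create --resource-group demo-rg --name demo-avset

# 7) Ubuntu Virtual Machine
az vm create --resource-group demo-rg --name demo-vm \
  --image Ubuntu2204 --availability-set demo-avset \
  --vnet-name demo-vnet --subnet demo-subnet --nsg demo-nsg \
  --admin-username azureuser --generate-ssh-keys
```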



Difference between VM Creation using Azure Portal & CLI?
Using the CLI, we can deploy more than one machine at a time
Deploying via the Portal takes a minimum of 5 mins per machine
Deploying via the CLI is very fast

If we want to deploy 100 machines at a time using the CLI
Copying and pasting the script into Cloud Shell every time is not the best approach
For this, we use Visual Studio Code
Visual Studio Code is nothing but a code editor
https://code.visualstudio.com/
Once the installation is complete

From the CLI
We need to execute the below commands
az
code .
touch vmcreationscript.sh
Copy the entire script into the file in the editor and save it
./vmcreationscript.sh
If you see permission denied error, execute the below chmod command
chmod 777 vmcreationscript.sh (777 gives User, Group, and Others RWX permissions, where R=4 W=2 X=1)
./vmcreationscript.sh
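The permission digits decompose as User, Group, Others with R=4, W=2, X=1 (execute); a small sketch with a hypothetical file:

```shell
touch vmdemo.sh          # hypothetical script name for this sketch
chmod 754 vmdemo.sh      # user rwx (4+2+1=7), group r-x (4+1=5), others r-- (4)
stat -c '%a' vmdemo.sh   # prints 754 (GNU stat)
rm -f vmdemo.sh
```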



If you have any other comments or suggestions, please feel free to leave a comment below.

Happy Learning
View replies (0)
Posted in: Azure | ID: Q67 | August 25, 2021, 11:25 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create an Azure account.
https://portal.azure.com

What do we need to create an Azure account?
We need an Email Id
We need to solve the puzzle to confirm that the registrant is not a robot.
When we first log in, by default we won't have any subscriptions.
We need a subscription to work on any Azure Resources.
Start with an Azure Free Trial, which gives a $200 credit valid for 30 days.
Create a profile with personal details.
A temporary authorisation is placed on the card, so we need to provide the card details correctly; we won't be charged unless we upgrade.
Only Visa & Mastercard credit cards are accepted.
Microsoft does not accept debit or pre-paid cards because they do not support monthly payments.
Once the credit is over, Microsoft asks to continue with pay-as-you-go.


Grouping of all resources is nothing but Resource Group.

Location is also known as Region. This is nothing but a data centre in simple terminology.
View replies (0)
Posted in: Azure | ID: Q63 | August 13, 2021, 07:41 AM | 0 Replies
Bharath posted:
3 years ago
Azure devops course
Posted in: Azure | ID: Q8 | June 08, 2021, 05:59 PM | 1 Replies