Friday, March 31, 2023

DOCKER COMMANDS






                  BASIC DOCKER COMMANDS

WHAT IS DOCKER
Docker packages software into units called CONTAINERS, which allow you to build, test, and deploy applications quickly.

DOCKER REGISTRY-
This is a software application used to store and distribute DOCKER IMAGES. It is a central repository of images that are used to create containers. The registry is open source and allows you to scale.

docker run command : Used to run a container from an image.
For example: Nginx, Node.js, MySQL, Python, Apache Kafka, MongoDB, Alpine, Ubuntu. All of these are images that can be pulled from a Docker registry to the host machine.

docker run nginx : This will run an instance of the nginx application on the Docker host.

docker ps : Lists all running containers. This shows the CONTAINER ID (by default Docker assigns a random ID), time of creation, status, port (80/tcp), and name.

For example: to keep this in mind, think of the container ID the way you think of an AMI ID in AWS — a unique identifier you reference in commands.

docker ps -a : Lists all running containers as well as previously stopped and exited containers.

docker stop : To stop a container, use the stop command with the container ID or container name. If unsure of the name, run "docker ps" to get it. After it succeeds, run "docker ps" again to confirm there is no running container (it now shows as "Exited" under docker ps -a).

docker rm (container name) : This will permanently remove a stopped container (use the -f flag to force-remove a running one).
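Putting these lifecycle commands together, a minimal session sketch (the container name "webserver" is illustrative; -d runs the container in the background, covered further below):

$ docker run -d --name webserver nginx   # start an nginx container in the background
$ docker ps                              # confirm it is running
$ docker stop webserver                  # stop it
$ docker ps -a                           # now listed in the Exited state
$ docker rm webserver                    # remove it permanently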

HOW TO REMOVE IMAGES ON THE HOST.
To get rid of an image that is not needed, we first need to see the list of images on our host, so we run the "docker images" command.

docker images : To see all the images and sizes on our host machine. 

docker rmi alpine : Removes an image. You must first stop and delete all containers that depend on the image before you can delete it.

HOW TO GET IMAGES FROM DOCKER REGISTRY.

docker pull command : This command only pulls the image from the registry to the host; it does not create or run a container.

To make this simpler, a Docker image is like a blueprint or template that contains all the necessary parts needed to run an application. When you build an image, it's like creating a blueprint for a house that you can use to build multiple identical houses.
Once you've created the Docker image, you store it in a central location called a DOCKER REGISTRY, which is like a warehouse where you store your blueprints.
When you want to run your application, you use the Docker image as a starting point to create a Docker container. The container is like a virtual environment that contains everything needed to run the application.
Using Docker images makes it easier to deploy and run applications in different environments because you can create multiple identical containers from the same image. This makes it easy to move applications from development to testing to production environments.

docker push : You can set up an account on Docker Hub to manage a repository and push your project to it. Pushing to a self-hosted registry works the same way, except the image name is prefixed with the registry address.
For example : docker push localhost:4000/(dockerhub userid)
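Before pushing, the image must be tagged with the repository name; a minimal sketch, assuming a hypothetical Docker Hub username "myuser":

$ docker tag nginx myuser/my-nginx:v1    # tag the local image with your repo name
$ docker login                           # authenticate to Docker Hub
$ docker push myuser/my-nginx:v1         # upload the image to the registry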

WHY SOME CONTAINERS TIME-OUT
There are a few reasons why a container could time out or exit immediately.
For example, an "Ubuntu image":

docker run ubuntu : This command runs an instance of the ubuntu image, which exits immediately. If you list all containers, it is in an "Exited" state. WHY? Because containers are not designed to host an entire O/S; the ubuntu image is just an image of an O/S that is used as a base image for other applications.

For example, you can give the container a process that keeps it alive for a set time. Run the sleep command so the container stays up until the process exits:

"docker run ubuntu sleep 5"

Containers host applications like web servers, databases, etc. A container only lives as long as the application process inside it is alive. If the web server inside the container crashes, the container exits.

docker exec -it : Executes a command on a running container; the "-it" flags make it interactive. If you want to read a file in the container, pass the file name to the exec command (e.g. docker exec -it <container> cat /etc/hosts). This is how you log in and get access to any container in the world of Docker.

docker run -d : The "-d" flag gives you detached mode, running the container in the BACKGROUND.

docker attach (container ID) : To specify the container ID, you can simply provide the first few characters, as long as they uniquely identify the container.
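A minimal detach/attach session sketch (IDs are illustrative):

$ docker run -d ubuntu sleep 100   # prints the full container ID, e.g. a1b2c3d4...
$ docker ps                        # shows the short ID, e.g. a1b2c
$ docker attach a1b2c              # reattach using just the first few characters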


Next slide we will discuss Docker Compose. Happy Learning !!! 😊


Referencing: https://docs.docker.com/registry/


Sunday, March 26, 2023

CLOUDWATCH

 



CloudWatch is a monitoring service provided by AWS. CloudWatch monitors resources such as EC2 instances, RDS databases, Lambda functions, and applications running within AWS. The CloudWatch dashboard allows you to visualize performance and operations across multiple regions, and CloudWatch also supports cross-account access for monitoring applications.

CloudWatch captures a variety of operational data from your resources.

Metrics- Examples of metrics would be CPU utilization, disk usage, and network traffic.

Logs- CloudWatch captures and stores logs from AWS resources and applications like Lambda functions and Elastic Beanstalk.

Alarms- You can set up an alarm on the metrics (as well as logs) you have created, and CloudWatch will send you an SNS notification once a threshold is breached.

Dashboards- These display customized views of the metrics and logs that CloudWatch monitors.
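For alarms specifically, a minimal AWS CLI sketch (the instance ID, topic ARN, and thresholds are placeholders):

# hypothetical alarm: notify an SNS topic when average CPU exceeds 80% for two 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name dev-high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts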


CLOUDWATCH AGENT 

The CloudWatch agent is a software application that can be installed on your servers. The agent supports servers on-premises or in the cloud, and it can be configured to collect logs from applications like NGINX, MySQL, Docker, and Apache.

HOW THE CLOUDWATCH AGENT WORKS IN CI/CD.

Install the agent with a package manager such as YUM or APT.

Configure the agent using Ansible, Chef, or Puppet.

Integrate the CloudWatch agent into your CI/CD pipeline by adding it to the build/deployment script or a post-build step.

The CloudWatch agent then collects metrics and logs according to its configuration. It can also collect request latency and error rates in addition to application logs.
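A minimal sketch of such a post-build step on an EC2 host (assumes Amazon Linux and the agent's default config path; adjust for your distro):

# install the agent and start it with a prepared configuration file
sudo yum install -y amazon-cloudwatch-agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s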



Referencing : AWS documentation


Saturday, March 11, 2023

KUBERNETES -CONFIGURE SECRETS






HOW TO MANAGE KUBERNETES BUILT-IN SECRETS

A Secret is an object that stores and manages sensitive information like API keys, TLS certificates, passwords, etc. A Secret can be manually created or automatically generated, and it is consumed through environment variables, volumes, and manifest files. For example:

API_TOKEN: password123

ssl-cert :----CERTIFICATE---


TYPES OF SECRET

Built-in Type                       | Usage
Opaque                              | arbitrary user-defined data
kubernetes.io/service-account-token | ServiceAccount token
kubernetes.io/dockercfg             | serialized ~/.dockercfg file (stores credentials for a Docker registry)
kubernetes.io/dockerconfigjson      | serialized ~/.docker/config.json file
kubernetes.io/basic-auth            | credentials for basic authentication
kubernetes.io/ssh-auth              | credentials for SSH authentication
kubernetes.io/tls                   | data for a TLS client or server
bootstrap.kubernetes.io/token       | bootstrap token data


HOW DO WE MANAGE SECRETS IN KUBERNETES.

There are two ways to create a Secret.

First, we define a "secret.yaml" file that specifies the Kind, Metadata, and Value. The value is encoded in Base64 (note that Base64 is an encoding, not encryption).
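A minimal sketch of such a file, reusing the example values above (the Secret name "api-secret" is illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: api-secret              # hypothetical name
type: Opaque
data:
  API_TOKEN: cGFzc3dvcmQxMjM=   # base64 of "password123"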

BASE64: This encoding is used for data in email attachments and can represent binary data in JSON/XML formats, etc.

NB: YAML and JSON are data-exchange formats.


HOW TO ENCODE YOUR SECRET:
You can use the "base64" command line tool by piping the output of the "echo" command into it. The "-n" flag prevents echo from appending a trailing newline, which would otherwise change the encoded string. See below:

$ echo -n "password123" | base64
cGFzc3dvcmQxMjM=


HOW TO CREATE A SECRET IN A CLUSTER.
In Kubernetes (K8s), kubectl supports variants that do not require us to provision a file to store the secret, which is more secure. With this, you can pass your secret without having to encode it, as shown below:
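A minimal sketch of this imperative form, reusing the example key above (kubectl does the Base64 encoding for you):

$ kubectl create secret generic api-secret --from-literal=API_TOKEN=password123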



HOW TO CONSUME SECRET AND REFERENCE IT

To achieve this, we identify two ways to consume a Secret in the cluster within an application (a combined sketch follows below).
  • First, we can introduce it as environment variables within the container when defining the POD/CONTAINER in the manifest file. We define the "env" section so that the value is pulled from a reference to the Secret we created; it can then reference the key-value pair. The application running within the container can access it as "process.env.API_TOKEN" in JavaScript (or os.environ["API_TOKEN"] in Python).

  • The second method to consume a Secret is through a "MOUNT VOLUME". This helps when we want to configure and consume sensitive information such as reading an SSL certificate.
We need to configure TWO THINGS in the manifest file:
  • We declare the volume within the pod that references our Secret.
  • We create a volume mount within the container specification.
To Note: The name of the volume and the volume mount MUST match.
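A minimal Pod manifest sketch showing both methods together (the pod name and mount path are illustrative; "api-secret" is the Secret created above):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                 # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: API_TOKEN            # exposed as an environment variable
      valueFrom:
        secretKeyRef:
          name: api-secret
          key: API_TOKEN
    volumeMounts:
    - name: secret-volume        # must match the volume name below
      mountPath: "/etc/secret"
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: api-secret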



 
See next slide on RBAC in cluster. 😊

TERRAFORM MODULE

 



                        WHAT ARE MODULES

Modules provide a way to organize Terraform code into reusable components, making it easier to manage and maintain complex infrastructure deployments.


       HOW TO USE TERRAFORM MODULE

To use a Terraform module, you declare it in your Terraform configuration file and provide the necessary inputs as variables. The module then provisions infrastructure resources according to its defined configuration.
 
WHAT DOES TERRAFORM MODULE DO?

You can leverage Terraform modules to create logical abstractions on top of sets of resources.


You define the module by making use of the "RESOURCE BLOCK" and you consume the module by making use of the "MODULE BLOCK"

For example:
Root --------------Module Block
Child-------------Resource Block

ROOT MODULE
When you run Terraform commands directly from a directory containing ".tf" files, that directory is considered the root module. The root module calls the child modules.

CHILD MODULE:
A child module is a configuration that is called (possibly multiple times) by another module.


OUTPUT VALUES:
Output values in Terraform allow you to export particular values from one module (or multiple modules) to another.

USE CASE:
During a Terraform deployment there are attributes whose values we want to see. For example, to get your public IP without going to the console each time a resource is created, we can use output values to display the public IP or public DNS at the level of your CLI (see the sketch after this list).

 TWO MAIN USE CASES OF OUTPUT VALUES
  • Printing values on the CLI
  • Resolving resource dependencies ** very important**
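A minimal sketch of the first use case, assuming the EC2 resource "prod-vm" defined later in this post:

output "prod_vm_public_ip" {
  value = aws_instance.prod-vm.public_ip # printed on the CLI after terraform apply
}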
       
           MODULE SOURCE

The module source is the path where the actual child module configuration files sit.


module "module_name" {
source = "module_source_location" ( This will be the path)

variable_name = "variable_value"
}


 META-ARGUMENTS ARE:

  • count
  • depends_on
  • providers
  • for_each

Let's develop a root module and child modules. Keep in mind that in a real environment no one writes the same code over and over.

STEP 1:
Create a folder "developer-env"

STEP 2:
Within the above folder create two folders and give them any names: "Uche" and "Hodalo". We assume Uche and Hodalo are developers in our exercise.

STEP 3:
Create a file "modules" within the developer-env folder.

STEP 4:
Within the modules folder create a folder "custom".

STEP 5:
Within the custom folder create 4 folders "EC2" "NETWORK" "RDS" "SG".
 
STEP 6:
Within the EC2 folder create two files "variable.tf" "webapp-ec2.tf".

variable.tf

# Dev instance ami_id
variable "dev-instance-ami-id" {
  type    = string
  default = "ami-0b******2a63"
}

# dev instance type
variable "dev-instance-type" {
  type    = string
  default = "t2.micro"
}

variable "ami" {
  type = string
}

variable "key_name" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "name" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "vpc_security_group_ids" {
  type = string
}

STEP 7:
"webapp.tf"

resource "aws_instance" "prod-vm" {
# (resource arguments)
ami = var.ami
key_name = var.key_name
instance_type = var.instance_type
user_data = file ("webapp-deploy.sh")
subnet_id = var.subnet_id
vpc_security_group_ids = [var.vpc_security_group_ids]
associate_public_ip_address = true # assign public Ip to the EC2 at the time of creating EC2
tags = {
Name = var.name
}
}

STEP 8:
Within the network folder, create three files: "outputs.tf", "variables.tf", "webapp-network.tf".

"output.tf"
# exporting subnet1 id
output "subnet_1_id_export_output" {
value = aws_subnet.Dev-subnet-1.id
}

# exporting subnet2 id
output "subnet_2_id_export_output" {
value = aws_subnet.Dev-subnet-2.id
}

# exporting vpc id
output "vpc_id_export_output" {
value = aws_vpc.Dev-vpc.id
}

STEP 9:
"variable.tf"
# Dev instance ami_id
variable "dev-instance-ami-id" {
description = "Development ami id"
type = string
default = "ami-0b0d********a63"
}
# dev instance type
variable "dev-instance-type" {
description = "Development instance type"
type = string
default = "t2.micro"
}
# dev vpc cidrblock
variable "cidr_block" {
description = "Development vpc cidr_block"
type = string
}

variable "sn1_cidr_block" {
description = "Development subnet 1 cidrblock"
type = string
}
variable "sn1_availability_zone" {
description = "Development subnet 1 avialability_zone"
type = string
}
variable "sn2_cidr_block" {
description = "Development subnet 1 cidr_block"
type = string
}
variable "sn2_availability_zone" {
description = "Development subnet 2 avialability_zone"
type = string
}
variable "vpc_id" {
type = string
description = "vpc_id"
}

variable "instance_tenancy" {
description = "Development vpc instance_tenancy"
type = string
}

STEP 10:
"webapp-network.tf". At the vpc_id is been referenced because we need to make use of the variable, we will tell terraform to make use of this particular subnet. We want flexibility so that if we have multiple VPC'S and we want to create subnet in other VPC, variablelizing it will not be a constraint. 
# Create Development VPC
resource "aws_vpc" "Dev-vpc" {
  cidr_block       = var.cidr_block
  instance_tenancy = var.instance_tenancy
  tags = {
    Name = "Dev-vpc"
  }
}

# Create Development subnet 1
resource "aws_subnet" "Dev-subnet-1" {
  vpc_id            = var.vpc_id # to cross-reference a resource in terraform use: resource_type.LocalResourceName.id
  cidr_block        = var.sn1_cidr_block
  availability_zone = var.sn1_availability_zone
  tags = {
    Name = "Dev-subnet-1"
  }
}

# Create Development subnet 2
resource "aws_subnet" "Dev-subnet-2" {
  vpc_id            = var.vpc_id
  cidr_block        = var.sn2_cidr_block
  availability_zone = var.sn2_availability_zone
  tags = {
    Name = "Dev-subnet-2"
  }
}

# to create Dev-vpc internet gateway
resource "aws_internet_gateway" "Dev-vpc-igw" {
  vpc_id = var.vpc_id # to cross-reference the vpc resource id
  tags = {
    Name = "Dev-vpc-igw"
  }
}

# to create Subnet 1 Public RT
resource "aws_route_table" "Dev-SN1-RT" {
  vpc_id = var.vpc_id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.Dev-vpc-igw.id
  }

  tags = {
    Name = "Dev-SN1-RT"
  }
}

# to create Subnet 2 Private RT
resource "aws_route_table" "Dev-SN2-RT" {
  vpc_id = var.vpc_id
  tags = {
    Name = "Dev-SN2-RT"
  }
}

# Public RT Association
resource "aws_route_table_association" "Dev-SN1-RT-Association" {
  subnet_id      = aws_subnet.Dev-subnet-1.id
  route_table_id = aws_route_table.Dev-SN1-RT.id
}

# Private RT Association
resource "aws_route_table_association" "Dev-SN2-RT-Association" {
  subnet_id      = aws_subnet.Dev-subnet-2.id
  route_table_id = aws_route_table.Dev-SN2-RT.id
}

STEP 11:
Within the "Security-group" folder create three files.

outputs.tf

# exporting security group id
output "security_group_id_export_output" {
  value = aws_security_group.Development-SG.id
}

STEP 12 :
"variable.tf"
variable "vpc_id" {
type = string
description = "vpc_id"
}

STEP 13:
"webapp-sg.tf". You noticed there's no variable in this file. 

resource "aws_security_group" "Development-SG" {
name = "Development-SG"
description = "Development-SG"
vpc_id = var.vpc_id
ingress {
description = "TLS from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"] #default description for IPV6
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Name = "allow_http traffic"
}
}
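Before running the commands, each developer folder (e.g. "Uche") needs a root configuration that consumes the child modules through module blocks. A partial sketch (names and values are illustrative; the network module block, with its CIDR and AZ inputs, is declared the same way). Note how the exported outputs resolve dependencies between modules:

module "sg" {
  source = "../modules/custom/SG"
  vpc_id = module.network.vpc_id_export_output # output exported by the network module
}

module "ec2" {
  source                 = "../modules/custom/EC2"
  ami                    = "ami-0b******2a63" # placeholder, as above
  key_name               = "cicd"             # hypothetical key pair
  instance_type          = "t2.micro"
  name                   = "Uche-dev-vm"      # hypothetical tag
  subnet_id              = module.network.subnet_1_id_export_output
  vpc_security_group_ids = module.sg.security_group_id_export_output
}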

STEP 14:
Run the terraform commands
-init
-plan
-apply


You have successfully deployed a custom module. 😊 Let's try a few projects by clicking the link below.







Wednesday, March 8, 2023

TERRAFORM IMPORT



                 TERRAFORM IMPORT

Terraform import brings an actual existing piece of infrastructure into your local state file or remote state file. Terraform import will not create a configuration file for you; we have to write that ourselves to sync the environment.

In our previous slide we discussed drift and saw how terraform import resolves drift. Today we will deploy a simple webpage application.


STEP 1:
We will create a resource manually. Create an EC2 instance (dev-vm) in your console. We will use "Ubuntu 18.04". Give it a key pair.

STEP 2:
Select "HTTP" traffic.

STEP 3:
Pass the "user data" below:

#!/bin/bash
# install and start Apache
sudo apt update -y
sudo apt -y install apache2
sudo systemctl start apache2
sudo systemctl enable apache2
# fetch and deploy the streaming application
sudo apt install wget -y
sudo wget https://github.com/UCHE2022/Uche-streaming-application/raw/jjtech-flix-app/jjtech-streaming-application-v1.zip
sudo apt install unzip -y
sudo unzip jjtech-streaming-application-v1.zip
sudo rm -f /var/www/html/index.html
sudo cp -rf jjtech-streaming-application-v1/* /var/www/html/


STEP 4:
Launch your instance.

STEP 5:
Create a "terraform import folder".

STEP 6:
Create a "provider.tf" file within the folder.

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}

STEP 7:
 Run the "init" command.

STEP 8:
"cd" into the path folder and copy your instance id and paste it below. 

To import you have to specify the resources name and instance id, whatsoever resource you want to indicate (Rds,Subnet,Nat gateway). This is set back doing things manually. 

STEP 9:
In the terraform import directory run the command below: the resource type, a dot, the local name of the resource, and the specific instance ID.

terraform import aws_instance.dev-vm i-01****42c*****34e50



STEP 10:
Copy the resource argument and create the file "ec2.tf" within the terraform import folder. Note that the file name can be anything, e.g. main.tf.

The reason we created this file is that terraform import only captures the import at the level of the state file. Since the goal is resolving the "drift", we capture the resource block in the .tf file and pass the arguments within the configuration file.

resource "aws_instance" "dev-vm" {
# (resource arguments)
}

STEP 11:
Rerun terraform import command 




STEP 12:
A "tf state file" has been created locally. We did not run "apply validate", and because we're importing an infrastructure, it has to create a "tf file" with the exact description of that particular resource. 
  

STEP 13: 

We need to define the instance type, key pair, and AMI.
Create a file "variable.tf". We need to turn the ami, instance type, and key pair into variables.

variable "ami" {
  type        = string
  description = "dev ami"
}

variable "key_name" {
  type        = string
  description = "dev instance key_name"
}

variable "instance_type" {
  type        = string
  description = "dev instance_type"
}

STEP 14:
Update your ec2.tf file with the ami, key pair, and instance type:

resource "aws_instance" "dev-vm" {
  # (resource arguments)
  ami           = var.ami
  key_name      = var.key_name
  instance_type = var.instance_type
}

STEP 15:
Now we define the values from our "state file", because we already imported the EC2 instance from the console into the state file.

Create a file "dev.auto.tfvars". Go to the state file, copy your actual values, and pass them into this file:

ami = "ami-026***eb4****90e"
key_name = "cicd"
instance_type = "t2.micro"


Run terraform plan and you will see that the user data has not been specified. Update the user data in your ec2.tf file.

STEP 16:
Create a file "webapp.sh".

STEP 17:
Copy and paste the public IP of your instance into the "web browser", and click on "dist" to see the webapp application.




STEP 18:
Run terraform apply and you should see
- Nothing to change

You have successfully used terraform import to resolve drift in your configuration. You can see the workflow via the GitHub link below. Happy Learning 😊


Referencing : 


Sunday, March 5, 2023

DRIFT IN TERRAFORM



 



A drift is said to have occurred when changes are made via the console to resources that were previously set up via Terraform configuration files.

For example:
Most organizations have policies that enforce the creation of resources through Terraform only. However, sometimes a software engineer encounters an issue and, as a result, creates an EC2 instance via the console. A DRIFT is then experienced, given that the newly created instance is not part of the Terraform configuration file.

HOW DO WE RESOLVE A DRIFT?

One of the commands that can be used to reconcile the change made by the software engineer (as mentioned above) is "terraform import".

On the part of the software engineer, the reason could be that the resources created via the console were critical, involving live systems that other integrations and users are making use of. In this situation, the engineer cannot delete and re-provision them through the Terraform configuration file; any deletion and re-provisioning could result in DOWNTIME. To avoid that downtime, we leverage the terraform import command.

To note: Apart from terraform import, another solution being worked on by HashiCorp to solve drift is to allow you to import the resources created in the console along with generated configuration files.

For example:

By running "terraform import", the resource causing the drift is imported into the state of the local environment you are working in. With the generated-configuration workflow mentioned above, you would not have to write the configuration file; Terraform would develop it automatically.

Lets get few hands-on done on Drift: 

Commands you should know.

## shows the state file in a human-friendly manner
terraform show

## gives a high-level overview of what was captured in the state file.
## It helps to show any missing resources that have been created.
terraform state list

## syncs the state file with the real infrastructure
terraform refresh


THE FOUR RESOURCE BEHAVIOURS IN TERRAFORM:

  • Initial creation
  • Update in place
  • Destroy and recreate
  • Destroy 

STEP 1:
Create a folder called "terraform".

STEP 2:
Within the folder, create a file "provider.tf"

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}


STEP 3:
Create a file call "dev.tfvars"

Dev-vpc-cidr_block = "10.0.0.0/16"
Dev-vpc-instance-tenancy = "default"
Dev-subnet-1-cidr = "10.0.1.0/24"
Dev-subnet-1-availability-zone = "us-east-1a"

 STEP 4:
Create a file "sg-group.tf"

resource "aws_security_group" "development-SG" {
name = "development-SG"
description = "development security Group"
vpc_id = aws_vpc.Dev-VPC.id

ingress {
description = "TLS from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}

tags = {
Name = "allow http traffic"
}
}

STEP 5:
Create a file "variable.tf".

# dev vpc cidr block
variable "Dev-vpc-cidr_block" {
  type = string
}

# dev vpc instance tenancy
variable "Dev-vpc-instance-tenancy" {
  type = string
}

# dev vpc subnet1 cidr block
variable "Dev-subnet-1-cidr" {
  type = string
}

# dev vpc subnet1 availability zone
variable "Dev-subnet-1-availability-zone" {
  type = string
}

STEP 6:
Create a file "Vpc-network.tf".

# Development VPC
resource "aws_vpc" "Dev-VPC" {
  cidr_block       = var.Dev-vpc-cidr_block
  instance_tenancy = var.Dev-vpc-instance-tenancy

  tags = {
    Name = "Dev-VPC"
  }
}

# Development Subnet 1
resource "aws_subnet" "Dev-Subnet-1" {
  vpc_id            = aws_vpc.Dev-VPC.id
  cidr_block        = var.Dev-subnet-1-cidr
  availability_zone = var.Dev-subnet-1-availability-zone
  tags = {
    Name = "Dev-Subnet-1"
  }
}

# Development VPC internet Gateway
resource "aws_internet_gateway" "Dev-VPC-IGW" {
  vpc_id = aws_vpc.Dev-VPC.id

  tags = {
    Name = "Dev-VPC-IGW"
  }
}

STEP 7:
"cd" into the path folder.

STEP 8:
Run the terraform commands.
-init
-validate
-plan
-apply

STEP 9:
Run the command "terraform state list".
We "list" out resources after the command "terraform apply". Terraform list gives you an overview of what was captured in the state-file. 



STEP 10:
Verify the resources that were provisioned in your console.


The steps below introduce a change / drift by adding a "Tag" in the console.

STEP 11:

Lets add "tag" to the VPC and subnet resources created in your console. At this point, we assume that the engineer literally had no idea that the resources were initially provisioned using terraform. 




STEP 12:
 Run "terraform plan" to detect the "drift". This tag was not specified in terraform configuration file. We have leveraged "update in place" which is a kind of terraform resource. 

Terraform does not need to destroy the resources because it has the ability to go in and add the tag without changing the state of the resources. 


STEP 13:
Run "terraform refresh", lets see the action at the level of the state file. Refresh captures all the changes that were made in the console and you see that in your state file not at the level of your configuration file.



STEP 14:

Go into your "vpc-network.tf" file and pass the "env =dev" as a tag name to VPC in the configuration file.
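A minimal sketch of the updated tags block on the VPC resource:

resource "aws_vpc" "Dev-VPC" {
  cidr_block       = var.Dev-vpc-cidr_block
  instance_tenancy = var.Dev-vpc-instance-tenancy

  tags = {
    Name = "Dev-VPC"
    env  = "dev" # the tag that was added in the console, now captured in configuration
  }
}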



STEP 15:
Run "terraform plan". We have added the changes in the configuration file, thats why there are no changes required. 



STEP 16:
Run "terraform state" list
To destroy a specific resource run "terraform destroy --target <resources>"


We have seen how drift works !! Next slide we will deploy terraform import. Happy Learning !!! 😊

                       https://docs.aws.amazon.com/
