Sunday, April 9, 2023

DOCKER ARCHITECTURE




Docker uses a client-server architecture at its core. The Docker Engine runs applications as containers on the host operating system, where each workload previously needed its own virtual machine. One advantage of Docker is that you do not have to purchase extra hardware for each OS. Docker architecture comprises three main components.

  • Docker client
  • Docker daemon
  • Docker registry.
DOCKER DAEMON : This is responsible for managing docker objects like images, containers, volumes and networks.

DOCKER CLIENT : This is how Docker users interact with Docker. When you run commands like "docker run", the client sends them to dockerd, which carries them out. The client communicates with the Docker daemon over a REST API.

Put more simply, the Docker client is like a teacher who tells the Docker daemon what to do. When you use commands like docker run, the daemon is what actually carries them out.

DOCKER REGISTRY: This stores docker images.
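A quick sketch of how the three components interact when you run a command (nginx here is just an example image):

```shell
# The client sends the command to the daemon over its REST API;
# the daemon pulls the image from the registry (Docker Hub by default)
docker pull nginx
# The daemon then creates and starts a container from that image
docker run -d nginx
```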

IN WHAT INSTANCES CAN I USE DOCKER?

  • Code Pipeline Management
  • Debugging Capabilities
  • Simplifying Configuration
  • Server Configuration
  • Multi-workflow

 DIFFERENCE BETWEEN DOCKER AND VIRTUAL MACHINE

  • Docker images are measured in megabytes (smaller); VM images are larger, measured in gigabytes.
  • Docker is faster, and it shares your local machine and its OS.
  • Docker uses the kernel of the host; a VM boots its own guest OS, so it can run any OS on the host.


The image shows the new generation of virtualization architecture in the world of Docker.



  • The server is the physical machine that can host multiple VMs.
  • The Host Operating System (OS), such as Ubuntu, Linux, or Windows, serves as the base.
  • The Docker Engine runs on the host OS and launches the containers.
  • All the apps run as Docker containers.
                 
OPERATING SYSTEM 
The operating system has two layers.

Kernel : This communicates with the CPU and other hardware.
Application: Applications run on top of the kernel.



I hope this explains how Docker is structured. Have an excellent learning experience 😊!

Referencing : Docker Docs: How to build, share, and run applications




















Thursday, April 6, 2023

DOCKER COMMANDS HANDS-ON

 



Each lab scenario gives you the opportunity to familiarize yourself with the setup. You will be expected to perform tasks like running, building, stopping, and deleting containers.

STEP 1: 
docker version : To check the version of the Docker Engine client and server running on the host.

The output will show you both the client and server version of Docker engine installed on your system. 

$ docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:17:11 2021
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:15:40 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.18.0
GitCommit: fec3683
$




STEP 2:
How many images are available? Run the "docker images" command.

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mysql latest 4f06b49211c0 5 weeks ago 530MB
nginx alpine 2bc7edbc3cf2 7 weeks ago 40.7MB
postgres latest 680aba37fd0f 7 weeks ago 379MB
alpine latest b2aa39c304c2 7 weeks ago 7.05MB
redis latest 2f66aad5324a 7 weeks ago 117MB
nginx latest 3f8a00f137a0 7 weeks ago 142MB
ubuntu latest 58db3edaf2be 2 months ago 77.8MB


STEP 3:
You can run a container using the redis image. 


$ docker run -t redis
1:C 05 Apr 2023 16:10:51.377 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 05 Apr 2023 16:10:51.377 # Redis version=7.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 05 Apr 2023 16:10:51.377 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 05 Apr 2023 16:10:51.378 * monotonic clock: POSIX clock_gettime
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 7.0.8 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 1
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | https://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'


STEP 4:
docker run redis 

$ docker run redis
1:C 05 Apr 2023 16:55:43.279 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 05 Apr 2023 16:55:43.279 # Redis version=7.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 05 Apr 2023 16:55:43.279 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /pa

STEP 5:
Can you stop the running container you just created?


$ docker stop 8413d7fdbfdb
8413d7fdbfdb
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8413d7fdbfdb redis "docker-entrypoint.s…" 5 minutes ago Exited (0) 10 seconds ago jolly_nightingale


STEP 6:
To check the running container. 

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$


STEP 7:
How many containers are present on the host, including both running and stopped ones?

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8413d7fdbfdb redis "docker-entrypoint.s…" 5 minutes ago Exited (0) 10 seconds ago jolly_nightingale
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8413d7fdbfdb redis "docker-entrypoint.s…" 8 minutes ago Exited (0) 3 minutes ago jolly_nightingale
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
37bf0d26e4c1 alpine "/bin/sh" About a minute ago Exited (0) About a minute ago charming_cori
cd25b2b81889 alpine "sleep 1000" About a minute ago Up About a minute pedantic_kapitsa
f30f483b1c0a nginx:alpine "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp nginx-2
d74280453797 nginx:alpine "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp nginx-1
c841b3e99c4a ubuntu "sleep 1000" About a minute ago Up About a minute awesome_northcut
8413d7fdbfdb redis "docker-entrypoint.s…" 11 minutes ago Exited (0) 5 minutes ago jolly_nightingale
$



STEP 8:
To stop the container you just created.

$ docker stop 2f791
2f791
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2f791c2115ff redis "docker-entrypoint.s…" 7 minutes ago Exited (0) 6 minutes ago competent_cartwright
$

STEP 9:
To stop containers, run "docker stop (container id) (container name)", and then to delete them run "docker rm (container id)".

$ docker stop 86eff91657af crazy_meitner
86eff91657af
crazy_meitner
$ docker rm 86eff91657af
86eff91657af
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ef7675b669eb nginx:alpine "/docker-entrypoint.…" 10 minutes ago Up 10 minutes 80/tcp nginx-2
998bd5125c4d nginx:alpine "/docker-entrypoint.…" 10 minutes ago Up 10 minutes 80/tcp nginx-1
5792f6aaa0b0 ubuntu "sleep 1000" 11 minutes ago Up 10 minutes awesome_northcut
2a0ee489807d redis "docker-entrypoint.s…" 12 minutes ago Exited (0) 11 minutes ago ecstatic_darwin
2f791c2115ff redis "docker-entrypoint.s…" 24 minutes ago Exited (0) 23 minutes ago competent_cartwright
$ docker stop ef7675b669eb nginx-2
ef7675b669eb
nginx-2
$ docker rm ef7675b669eb
ef7675b669eb
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
998bd5125c4d nginx:alpine "/docker-entrypoint.…" 12 minutes ago Up 12 minutes 80/tcp nginx-1
5792f6aaa0b0 ubuntu "sleep 1000" 12 minutes ago Up 12 minutes awesome_northcut
2a0ee489807d redis "docker-entrypoint.s…" 14 minutes ago Exited (0) 13 minutes ago ecstatic_darwin
2f791c2115ff redis "docker-entrypoint.s…" 26 minutes ago Exited (0) 25 minutes ago competent_cartwright
$


STEP 10:
To delete the ubuntu image, the command is "docker rmi ubuntu".

$ docker rmi ubuntu
Untagged: ubuntu:latest
Untagged: ubuntu@sha256:9a0bdde4188b896a372804be2384015e90e3f84906b750c1a53539b585fbbe7f


STEP 11:
To pull an image. 

$ docker pull nginx:1.14-alpine
1.14-alpine: Pulling from library/nginx
bdf0201b3a05: Pull complete
3d0a573c81ed: Pull complete
8129faeb2eb6: Pull complete
3dc99f571daf: Pull complete
Digest: sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
Status: Downloaded newer image for nginx:1.14-alpine
docker.io/library/nginx:1.14-alpine
$


STEP 12:
You can run a container with the nginx:1.14-alpine image and name it webapp .

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker run -d --name webapp nginx:1.14-alpine
3b1729e87ea63258d4a99d95ee7481f35fccdd83a95f8fa7ec3dc8d131879d80
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b1729e87ea6 nginx:1.14-alpine "nginx -g 'daemon of…" 9 seconds ago Up 7 seconds 80/tcp webapp
$
Lab environment: KodeKloud
In the next post we will discuss docker run. Happy Learning!!! 😊

Friday, March 31, 2023

DOCKER COMMANDS






                  BASIC  DOCKER COMMANDS.

WHAT IS DOCKER
Docker packages software into units called CONTAINERS, which allow you to build, test, and deploy applications quickly.

DOCKER REGISTRY-
This is a software application used to store and distribute DOCKER IMAGES. It is a central repository of images, which are used to create containers. The registry is open source and allows you to scale.

Docker run command : Used to run a container from an image.
For example: Nginx, Node.js, MySQL, Python, Apache Kafka, MongoDB, Alpine, Ubuntu. All of these are images that can be pulled from a Docker registry to the host machine.

Docker run nginx : This will run an instance of the nginx application on the docker host.

Docker ps : This is to list all running containers. It shows the CONTAINER ID (by default Docker assigns a random ID), time of creation, status, port (80/tcp), and name.

For example: To keep this in mind, think of the container ID the way you think of an AMI ID for your OS in AWS.

Docker ps -a : To list all containers, including previously stopped and exited ones.

Docker stop : To stop a container, use the stop command with the container ID or container name. If unsure of the name, run "docker ps" to get it. After it succeeds, run "docker ps" again: the container no longer appears, and "docker ps -a" shows it as "Exited".

Docker rm (container name): This permanently removes a stopped container; add the -f flag to force-remove a running one.

HOW TO REMOVE UNNEEDED IMAGES ON THE HOST.
In order to get rid of an image that is not needed, we first need the list of images on our host, so we run the "docker images" command.

docker images : To see all the images and their sizes on our host machine.

docker rmi alpine : This removes an image. You must first stop and delete all containers that depend on the image before you can delete it.

HOW TO GET IMAGES FROM DOCKER REGISTRY.

docker pull command : This command only pulls the image from the registry; it does not create a container.

 To make this simpler, Docker image is like a blueprint or template that contains all the necessary parts needed to run an application. When you build an image, it's like creating a blueprint for a house that you can use to build multiple identical houses. 
Once, you've created the Docker image, you store it in a central location called DOCKER REGISTRY which is like a warehouse where you can store your blueprints. 
When you want to run your application, you use the Docker image as a starting point to create a Docker container. The container is like a virtual environment that contains everything needed to run the application. 
Using Docker images makes it easier to deploy and run applications in different environments because you can create multiple identical containers from the same image. This makes it easy to move applications from development to testing to production environments.  

Docker push : You can set up an account on Docker Hub to manage your repository and push projects to it.
For example: docker push localhost:4000/(dockerhub userid) pushes to a registry running on localhost:4000.
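As a sketch, pushing to Docker Hub usually means tagging the image with your Docker Hub namespace first (myapp and <dockerhub-userid> below are placeholders):

```shell
# Log in to Docker Hub with your account
docker login
# Tag the local image with your Docker Hub namespace
docker tag myapp <dockerhub-userid>/myapp:latest
# Push the tagged image to the registry
docker push <dockerhub-userid>/myapp:latest
```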

WHY SOME CONTAINERS EXIT IMMEDIATELY
There are a few reasons why a container could exit right away.
For example, the "Ubuntu image":

Docker run ubuntu : This command runs an instance of the ubuntu image, which exits immediately. If you list all containers, it is in the "exited" state. WHY? Because containers are not structured to host a full OS; ubuntu is just an image of an OS that is used as a base image for other applications.

For example, you can give the container a process to run so it stays alive for a while. The sleep command keeps it running before it exits:

"docker run ubuntu sleep 5"

Containers host applications like web servers, databases, etc. The container only lives as long as the application process inside it is alive. If the web server inside the container crashes, the container exits.

docker exec -it : This executes a command in a running container, with "-it" giving you an interactive terminal. If you want to read a file in the container, you pass the file name to the exec command. You are able to log in and get access to any container in the world of Docker.

docker run -d : This runs the container in detached mode ("-d"), in the BACKGROUND.

docker attach (container ID) : To reattach to a detached container; you only need to provide the first few characters of the container ID.
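The three commands above can be sketched together (the container name webapp-demo is just an example):

```shell
# Run an nginx container detached (-d) in the background
docker run -d --name webapp-demo nginx
# Execute a command inside the running container (-it = interactive terminal)
docker exec -it webapp-demo cat /etc/hostname
# Reattach to the container's main process (a unique ID prefix also works)
docker attach webapp-demo
```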


Next slide we will discuss docker compose. Happy Learning!!! 😊


Referencing: https://docs.docker.com/registry/


Sunday, March 26, 2023

CLOUDWATCH

 



CloudWatch is a monitoring service provided by AWS. CloudWatch monitors resources such as EC2 instances, RDS databases, Lambda functions, and applications running within AWS. The CloudWatch dashboard lets you visualize performance and operations across multiple regions, and CloudWatch also supports cross-account access and application monitoring.

CloudWatch captures varieties of operational data within your resources. 
Metrics- Examples of metrics would be CPU utilization, disk usage, Network traffic.

Logs- CloudWatch captures and stores logs from AWS resources and applications like Lambda functions and Elastic Beanstalk.

Alarms- You can set up an alarm on the metrics you created, as well as on logs, and CloudWatch will send you an SNS notification once the threshold is breached.

Dashboards- This displays customized views of metrics and logs which CloudWatch monitors. 


CLOUDWATCH AGENT 

The CloudWatch agent is a software application that can be installed on your servers. The agent can run on-premises or in the cloud, and it can be configured to collect logs from applications like NGINX, MySQL, Docker, and Apache.

HOW THE CLOUDWATCH AGENT WORKS IN CI/CD.

Install the agent with a package manager such as YUM or APT.

Configure the agent using  Ansible, Chef, Puppet 

Integrate the CloudWatch agent into your CI/CD pipeline by adding it to the build deployment script or post-build.

The CloudWatch agent collects the metrics and logs it has been configured for. Beyond application logs, it can also collect request latency and error rates.
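The install and start steps above might look like this on an Amazon Linux build host (the config file path is the agent's default; your pipeline may place it elsewhere):

```shell
# Install the agent via the YUM package manager
sudo yum install -y amazon-cloudwatch-agent
# Start the agent, fetching the metrics/logs configuration from a file
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 \
    -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
```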



Referencing : AWS documentation


Saturday, March 11, 2023

KUBERNETES -CONFIGURE SECRETS






HOW TO MANAGE KUBERNETES BUILT-IN SECRETS

A Secret is an object that stores and manages sensitive information like API keys, TLS certificates, passwords, etc. A Secret can be created manually or generated automatically, and it is consumed through environment variables, volumes, and the manifest file. For example:

API_TOKEN: password123

ssl-cert :----CERTIFICATE---


TYPES OF SECRET

Built-in Type                           Usage
Opaque                                  arbitrary user-defined data
kubernetes.io/service-account-token     ServiceAccount token
kubernetes.io/dockercfg                 serialized ~/.dockercfg file (stores credentials for a Docker registry)
kubernetes.io/dockerconfigjson          serialized ~/.docker/config.json file
kubernetes.io/basic-auth                credentials for basic authentication
kubernetes.io/ssh-auth                  credentials for SSH authentication
kubernetes.io/tls                       data for a TLS client or server
bootstrap.kubernetes.io/token           bootstrap token data


HOW DO WE MANAGE SECRETS IN KUBERNETES.

There are two ways to create a secret.

First, we define a "secret.yaml" file that specifies the kind, metadata, and data values. The values are encoded (not encrypted) in "Base64".

BASE64: This is used to encode data in email attachments and can represent binary data in text-based formats like JSON or XML.

NB: YAML and JSON are data exchange formats.
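A minimal "secret.yaml" along these lines (the name api-secret is illustrative, and the data value is the Base64 encoding of "password123"):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-secret
type: Opaque
data:
  API_TOKEN: cGFzc3dvcmQxMjM=   # base64 of "password123"
```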


HOW TO ENCODE YOUR SECRET:
You can encode the value on the command line with the "base64" utility by applying the "echo command". The "-n" flag prevents a trailing newline from being encoded. We pipe into base64 to get the encoded string. See below:

$ echo -n "password123" | base64
cGFzc3dvcmQxMjM=
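Base64 of "password123" is cGFzc3dvcmQxMjM=, and you can confirm the round trip by decoding it back (a quick sketch using the standard coreutils flag):

```shell
# Decode the Base64 string to recover the original secret value
echo -n "cGFzc3dvcmQxMjM=" | base64 --decode
```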


HOW TO CREATE A SECRET IN A CLUSTER.
In Kubernetes (K8s), kubectl supports variants that do not require us to provision a file to store the secret, which is more secure. With these, you can pass your secret without having to encode it yourself, as shown below:
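A sketch of the imperative form (the secret name api-secret and the key are illustrative):

```shell
# Create the secret directly from a literal; kubectl handles the Base64 encoding
kubectl create secret generic api-secret --from-literal=API_TOKEN=password123
# Inspect it (values are shown Base64-encoded)
kubectl get secret api-secret -o yaml
```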



HOW TO CONSUME SECRET AND REFERENCE IT

To achieve this, we identify two ways to consume secret in the cluster within an application. 
  • First, we can introduce it as environment variables within the container when defining the POD/CONTAINER in the manifest file. We define the "env" section, where the value is pulled from a reference to the secret we created; it references the key-value pair. The application running within the container can then access it, e.g. "process.env.API_TOKEN" (JavaScript).

  • The second method to consume a secret is through a MOUNTED VOLUME. This helps when we want to configure and consume sensitive information such as an SSL certificate read from a file.
We need to configure TWO THINGS in the manifest file.
  • We declare the volume within the pod that references our secret.
  • We create a volume mount within the container specification.
To Note: The name of the volume and the volume mount MUST match.
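Both consumption patterns might look like this in a pod spec (the pod, container, and secret names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:          # pull the value from the Secret
              name: api-secret
              key: API_TOKEN
      volumeMounts:
        - name: cert-volume        # must match the volume name below
          mountPath: /etc/certs
          readOnly: true
  volumes:
    - name: cert-volume
      secret:
        secretName: api-secret
```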



 
See next slide on RBAC in a cluster. 😊

TERRAFORM MODULE

 



                        WHAT ARE MODULES

Modules provide a way to organize Terraform code into reusable components, making it easier to manage and maintain complex infrastructure deployments.


       HOW TO USE TERRAFORM MODULE

To use a Terraform module, you declare it in your Terraform configuration file and provide the necessary inputs as variables. The module then uses those inputs to provision infrastructure resources according to its defined configuration.
 
WHAT DOES TERRAFORM MODULE DO?

You can leverage Terraform modules to create logical abstractions on top of resource configurations.


You define a module's contents using RESOURCE BLOCKS, and you consume the module using a MODULE BLOCK.

For example:
Root --------------Module Block
Child-------------Resource Block

ROOT MODULE:
When you run Terraform commands directly from a directory of ".tf" files, that directory is considered the root module. The root module calls the child modules.

CHILD MODULE:
A child module is a configuration that is called, possibly multiple times, by another module.


OUTPUT VALUES:
Output values in Terraform allow you to export particular values from a module (or multiple modules) to another.

USE CASE:
During a Terraform deployment, there are attributes whose values we want to see. Instead of going to the console each time a resource is created, we can use output values to display the public IP or public DNS right in your CLI.

 TWO MAIN USE CASES OF OUTPUT VALUES
  • Printing Values on CLI
  • Resolve resource dependencies ** very important**
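For instance, an output can print a value on the CLI and feed another module (the attribute names follow the network module defined later in this post; the module names are illustrative):

```hcl
# In the root module: print the VPC id on the CLI after apply
output "dev_vpc_id" {
  value = module.network.vpc_id_export_output
}

# Resolve a dependency: pass one module's output into another module
module "ec2" {
  source    = "./modules/custom/EC2"
  subnet_id = module.network.subnet_1_id_export_output
}
```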
       
           MODULE SOURCE

Module source is the path where the actual child module configuration files sit.


module "module_name" {
  source = "module_source_location" # this will be the path to the child module

  variable_name = "variable_value"
}


 META ARGUMENTS ARE:

  • count
  • depends_on
  • providers
  • for_each

Let's develop a root module and child modules. Keep in mind that in a real environment no one writes the same code over and over.

STEP 1:
Create a folder "developer-env"

STEP 2:
Within the above folder create 2 folders and name them "Uche" and "Hodalo". We assume Uche and Hodalo are developers in our exercise.

STEP 3:
Create a folder "modules" within the developer-env folder.

STEP 4:
Within the modules folder create a folder "custom".

STEP 5:
Within the custom folder create 4 folders "EC2" "NETWORK" "RDS" "SG".
 
STEP 6:
Within the EC2 folder create two files: "variable.tf" and "webapp-ec2.tf".

variable.tf

# Dev instance ami_id
variable "dev-instance-ami-id" {
  type    = string
  default = "ami-0b******2a63"
}

# dev instance type
variable "dev-instance-type" {
  type    = string
  default = "t2.micro"
}

variable "ami" {
  type = string
}

variable "key_name" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "name" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "vpc_security_group_ids" {
  type = string
}

STEP 7:
"webapp-ec2.tf"

resource "aws_instance" "prod-vm" {
  # (resource arguments)
  ami                         = var.ami
  key_name                    = var.key_name
  instance_type               = var.instance_type
  user_data                   = file("webapp-deploy.sh")
  subnet_id                   = var.subnet_id
  vpc_security_group_ids      = [var.vpc_security_group_ids]
  associate_public_ip_address = true # assign a public IP to the EC2 instance at creation time
  tags = {
    Name = var.name
  }
}

STEP 8:
Within the network folder, create three files: "outputs.tf", "variables.tf", and "webapp-network.tf".

"outputs.tf"

# exporting subnet1 id
output "subnet_1_id_export_output" {
  value = aws_subnet.Dev-subnet-1.id
}

# exporting subnet2 id
output "subnet_2_id_export_output" {
  value = aws_subnet.Dev-subnet-2.id
}

# exporting vpc id
output "vpc_id_export_output" {
  value = aws_vpc.Dev-vpc.id
}

STEP 9:
"variable.tf"

# Dev instance ami_id
variable "dev-instance-ami-id" {
  description = "Development ami id"
  type        = string
  default     = "ami-0b0d********a63"
}

# dev instance type
variable "dev-instance-type" {
  description = "Development instance type"
  type        = string
  default     = "t2.micro"
}

# dev vpc cidr_block
variable "cidr_block" {
  description = "Development vpc cidr_block"
  type        = string
}

variable "sn1_cidr_block" {
  description = "Development subnet 1 cidr_block"
  type        = string
}

variable "sn1_availability_zone" {
  description = "Development subnet 1 availability_zone"
  type        = string
}

variable "sn2_cidr_block" {
  description = "Development subnet 2 cidr_block"
  type        = string
}

variable "sn2_availability_zone" {
  description = "Development subnet 2 availability_zone"
  type        = string
}

variable "vpc_id" {
  type        = string
  description = "vpc_id"
}

variable "instance_tenancy" {
  description = "Development vpc instance_tenancy"
  type        = string
}

STEP 10:
"webapp-network.tf". The vpc_id is referenced as a variable because we want flexibility: if we have multiple VPCs and want to create subnets in another VPC, parameterizing it will not be a constraint.
# Create Development VPC
resource "aws_vpc" "Dev-vpc" {
  cidr_block       = var.cidr_block
  instance_tenancy = var.instance_tenancy
  tags = {
    Name = "Dev-vpc"
  }
}

# Create Development subnet 1
resource "aws_subnet" "Dev-subnet-1" {
  vpc_id            = var.vpc_id # to cross reference a resource in terraform use: resource_type.LocalResourceName.id
  cidr_block        = var.sn1_cidr_block
  availability_zone = var.sn1_availability_zone
  tags = {
    Name = "Dev-subnet-1"
  }
}

# Create Development subnet 2
resource "aws_subnet" "Dev-subnet-2" {
  vpc_id            = var.vpc_id
  cidr_block        = var.sn2_cidr_block
  availability_zone = var.sn2_availability_zone
  tags = {
    Name = "Dev-subnet-2"
  }
}

# to create Dev-vpc internet gateway
resource "aws_internet_gateway" "Dev-vpc-igw" {
  vpc_id = var.vpc_id # to cross reference the vpc resource id
  tags = {
    Name = "Dev-vpc-igw"
  }
}

# to create Subnet 1 Public RT
resource "aws_route_table" "Dev-SN1-RT" {
  vpc_id = var.vpc_id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.Dev-vpc-igw.id
  }

  tags = {
    Name = "Dev-SN1-RT"
  }
}

# to create Subnet 2 Private RT
resource "aws_route_table" "Dev-SN2-RT" {
  vpc_id = var.vpc_id
  tags = {
    Name = "Dev-SN2-RT"
  }
}

# Public RT Association
resource "aws_route_table_association" "Dev-SN1-RT-Association" {
  subnet_id      = aws_subnet.Dev-subnet-1.id
  route_table_id = aws_route_table.Dev-SN1-RT.id
}

# Private RT Association
resource "aws_route_table_association" "Dev-SN2-RT-Association" {
  subnet_id      = aws_subnet.Dev-subnet-2.id
  route_table_id = aws_route_table.Dev-SN2-RT.id
}

STEP 11:
Within the "SG" (security group) folder create three files: "outputs.tf", "variable.tf", and "webapp-sg.tf".

outputs.tf

# exporting security group id
output "security_group_id_export_output" {
  value = aws_security_group.Development-SG.id
}

STEP 12 :
"variable.tf"

variable "vpc_id" {
  type        = string
  description = "vpc_id"
}

STEP 13:
"webapp-sg.tf". Notice that apart from vpc_id, no variables are used in this file.

resource "aws_security_group" "Development-SG" {
  name        = "Development-SG"
  description = "Development-SG"
  vpc_id      = var.vpc_id

  ingress {
    description      = "TLS from VPC"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"] # default rule for IPv6
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_http traffic"
  }
}

STEP 14:
Run the Terraform commands:
terraform init
terraform apply


You have successfully deployed a custom module. 😊 Let's try a few projects by clicking the link below.






