Sunday, February 19, 2023

VPC FLOW LOGS

 






      
WHAT IS VPC FLOW LOGS?


ANSWER:

VPC Flow Logs is a feature in Amazon Web Services (AWS) that allows you to capture details of the traffic flowing in and out of the network interfaces in your VPC. It helps you detect potential security threats and optimize application performance.


HOW DO YOU CONFIGURE VPC FLOW LOGS?

ANSWER:

You can enable VPC Flow Logs from your console: choose the maximum aggregation interval for capturing flows (1 minute or 10 minutes), choose the destination (an S3 bucket or CloudWatch Logs), and, for CloudWatch Logs, specify the log group where the flow log records are delivered.
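The same setup can also be expressed in Terraform. A minimal sketch, assuming a VPC and an S3 bucket already exist in the configuration ("aws_vpc.example" and "aws_s3_bucket.flow_logs" are placeholder names):

resource "aws_flow_log" "example" {
  vpc_id                   = aws_vpc.example.id
  traffic_type             = "ALL" # ACCEPT, REJECT, or ALL
  log_destination_type     = "s3"
  log_destination          = aws_s3_bucket.flow_logs.arn
  max_aggregation_interval = 60    # 60 or 600 seconds
}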
                                      

IF YOU MAKE A CALL TO THE INSTANCE METADATA SERVICE, DO VPC FLOW LOGS CAPTURE IT?

ANSWER:

No, VPC Flow Logs do not capture instance metadata API calls; traffic to and from the instance metadata service is excluded. VPC Flow Logs only capture information about the network traffic that flows in and out of your network interfaces.

WHAT ARE THE LIMITATIONS OF VPC FLOW LOGS?

ANSWER:

Tags - You cannot tag a flow log.

IAM - You cannot change the configuration of a flow log after it has been created (for example, you cannot associate a different IAM role with it).

Cost - VPC Flow Logs can incur additional charges, from storage to data delivery. AWS Cost Explorer or AWS Budgets can be used to keep an eye on this spend.

Retention - A flow log itself does not keep data for long periods; retention is controlled, and can be customized, at the destination (CloudWatch Logs retention settings or S3 lifecycle rules).

Content - Flow logs capture only traffic metadata such as IP addresses and ports. They do not capture packet payloads or the keys used to encrypt the traffic.

WHAT PROBLEM CAN VPC FLOW LOG SOLVE?

ANSWER:

Compliance: Flow logs provide an audit trail of network activity.

Congestion: They help identify traffic bottlenecks so you can tune quality of service (QoS).

Security: VPC Flow Logs help detect unauthorized access attempts.


WHAT INFORMATION DO VPC FLOW LOGS CAPTURE?


ANSWER:

VPC Flow Logs capture the source and destination IP addresses, ports, protocol, packet and byte counts, and whether the traffic was accepted or rejected.
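For example, a single flow log record in the default format looks like this (all values are illustrative):

2 123456789012 eni-0a1b2c3d4e5f 10.0.1.5 10.0.2.7 49152 443 6 10 840 1676800000 1676800060 ACCEPT OK

The fields are: version, account ID, interface ID, source address, destination address, source port, destination port, protocol, packets, bytes, start, end, action, and log status.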


AT WHICH THREE LEVELS CAN VPC FLOW LOGS BE CONFIGURED?

ANSWER:

VPC, subnet, and network interface (ENI) level.


HOW DO VPC FLOW LOGS STORE DATA TO A DESTINATION?

ANSWER:

VPC Flow Logs data can be exported to tools such as Datadog for analysis.

VPC Flow Logs can be sent to CloudWatch Logs, where they can be queried with CloudWatch Logs Insights.

VPC Flow Logs can be stored in an S3 bucket, which integrates with Amazon Athena for querying.


WHAT IS THE DIFFERENCE BETWEEN VPC FLOW LOGS AND CLOUDTRAIL?

ANSWER:

VPC Flow Logs capture the network traffic within your VPC - the traffic flowing to and from resources such as instances and subnets, as allowed or denied by security groups.

CloudTrail records comprehensive details of API activity across your entire AWS account.




Happy Learning!!😊

Friday, February 17, 2023

HANDS-ON FOR STARTERS IN TERRAFORM

 



These steps will guide beginners on how to provision an EC2 instance and a VPC, use the count meta-argument, and associate a public IP address.


NOTE : 

- If you do not specify a VPC/subnet argument, Terraform will create the EC2 resource inside the default VPC.

- The AMI ID should be copied from your console (Ubuntu 18.04 in this example).

Let's get some hands-on practice:

STEP 1: 

Create a folder called "Practice-Day1".

Within the folder, create an "ec2.tf" file and a "provider.tf" file.

STEP 2:  

Below is the resource block for "ec2.tf":

resource "aws_instance" "Env" {
ami = "ami-0263e4deb****0e"
instance_type = "t3.micro"
count = 2
associate_public_ip_address = true
tags = {
Name = "Env"
}
}


STEP 3:

Below is the "provider" argument

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

# Create a VPC
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}


STEP 4:
"cd" into the folder path.
"ls" - list the files inside the folder.

STEP 5:
Run the Terraform commands:

-Init
-Validate
-Plan
-Apply
-Destroy


NB: If you successfully deploy this hands-on, kindly leave a comment and feedback. Happy Learning! 😊


TERRAFORM PROVISIONER







                            PROVISIONER
Provisioners are used to execute scripts and apply configuration on resources created by Terraform.

TERRAFORM SUPPORTS SEVERAL PROVISIONERS, such as:
  • Terraform file provisioner
  • Remote Provisioner
  • Local exec Provisioner
  • Chef and puppet
TERRAFORM FILE PROVISIONER: It is used to copy files or directories from the machine where Terraform runs (your local Windows or Linux machine) to the remote system, using SSH. At the level of the file provisioner, we are asking Terraform to copy whatever is inside the source directory on our local machine to the destination path on the remote machine.

Potential Interview Question: HOW HAVE YOU TAKEN ADVANTAGE OF THE FILE PROVISIONER?

USE CASE:
EFS mount: In my project, we had to create separate virtual machines for the different developers that joined the project; they needed these environments to complete their development work, along with storage to hold all the application files. A few of the files were generic, so whenever a new machine was created those files had to be uploaded to every VM. To handle this, you create a mount point and upload the shared files - and the file provisioner is what we used to accomplish that while orchestrating the project with Terraform.



STEP 1:

NB: Create all the files before running steps 4 and 5.

 

File provisioner - we are using the Ubuntu 18.04 operating system. Create a folder, for example "file-provisioner".


STEP 2:
Within the "file folder" create a file "app" and within the app create a folder "ec2-keypair". 

STEP 3:
Within the "ec2-keypair" folder, create a file "ec2.tf" and paste the configuration below into it, specifying the connection block as shown.

There are two ways to pass user data in Terraform: you can pass it inline as "raw data", as below, or pass it as an external file, where a shell script is created in the local environment.

resource "aws_instance" "Test-VM" {
ami = "ami-061d******4525c"
instance_type = "t3.micro"
#count = 3 # create three similar EC2 instances
key_name = "cicd"
vpc_security_group_ids = ["sg-0b00b*******7539"]
user_data = <<-EOF
sudo apt update -y
sudo apt -y install apache2
sudo systemctl start apache2
sudo systemctl enable apache2
EOF

tags = {
Name = "Test-VM"
}
# Copies the file as the root user using SSH
connection {
type = "ssh"
user = "ubuntu"
password = ""
host = self.public_ip
private_key = file("ec2-keypair/cicd.pem")
}
provisioner "file" {
source = "app/"
destination = "/home/ubuntu"
}
}
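As mentioned in Step 3, the user data can also be passed as an external file instead of inline. A minimal sketch, assuming a shell script named "install-apache.sh" sits next to the configuration (the script name is illustrative):

resource "aws_instance" "Test-VM" {
  ami           = "ami-xxxxxxxxxxxx" # replace with your AMI ID
  instance_type = "t3.micro"
  key_name      = "cicd"

  # Load the bootstrap script from a local file instead of an inline heredoc
  user_data = file("install-apache.sh")
}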


STEP 4:

Your terminal
"cd" into the folder file provisioner to ec2-keypair
ls
ls ~/downloads/cicd.pem   
The location of the pem ( local) should be specified. 

STEP 5:
cp ~/downloads/cicd.pem . /  


STEP 6:
"ls"

STEP 7:
You have now copied the key pair from your local machine into the key pair configuration folder.

STEP 8: 
Create a file, for example "provider.tf":

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}


REMOTE-EXEC PROVISIONER: This provisioner lets you run commands on a remote resource, for example an instance, to bootstrap or configure it after it has been created. At this point your bash shell or Linux commands are required.

STEP 1: 
Create a folder  for example "Remote-exec".

STEP 2:
Create another folder "ec2-keypair" within the "Remote-exec".

STEP 3:
Within the "ec2-keypair" folder, create a file "ec2.tf" as below. The "EOF" marker indicates where the inline user data ends.

resource "aws_instance" "Test-VM" {
ami = "ami-061dbd1******5c"
instance_type = "t3.micro"
#count = 3 # create three similar EC2 instances
key_name = "cicd"
vpc_security_group_ids = ["sg-0b0***b77****9"]
user_data = <<-EOF
sudo apt update -y
sudo apt -y install apache2
sudo systemctl start apache2
sudo systemctl enable apache2
EOF

tags = {
Name = "Test-VM"
}
# Copies the file as the root user using SSH
connection {
type = "ssh"
user = "ubuntu"
password = ""
host = self.public_ip
private_key = file("ec2-keypair/cicd.pem")
}
provisioner "remote-exec" {
inline = [
"sudo apt update -y"
]
}
}
output "public_ip" {
value = aws_instance.Test-VM.*.public_ip
}


STEP 4:
Create a file, for example "provider.tf":

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}


LOCAL-EXEC PROVISIONER: This provisioner runs scripts on the local machine after resources have been created. You mostly use it when you want to log values (such as IP addresses) locally - in other words, to capture metadata of the resources that you created.

For example:
Resources can be EC2, DNS (PUBLIC OR PRIVATE), IP (PUBLIC OR PRIVATE)

STEP 1:
Create a folder, for example "Local-exec".

STEP 2: 
Within the local folder, create "ec2.tf".

resource "aws_instance" "Test-VM" {
ami = "ami-061d*********5c"
instance_type = "t3.micro"
#count = 3 # create three similar EC2 instances
key_name = "cicd"
vpc_security_group_ids = ["sg-0b00bb***7539"]
user_data = <<-EOF
sudo apt update -y
sudo apt -y install apache2
sudo systemctl start apache2
sudo systemctl enable apache2
EOF

tags = {
Name = "Dev-VM"
}

provisioner "local-exec" {
command = "echo ${self.public_ip} >> public_ips.txt"
}
}

output "public_ip" {
value = aws_instance.Test-VM.*.public_ip
}

STEP 3:
Create a file, for example "provider.tf":

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}


STEP 4:
The local-exec provisioner creates (or appends to) a file called "public_ips.txt" containing the instance's public IP, for example:

"44.**.171.1*5"

STEP 5 : 
"cd" into the path 

STEP 6:
Run the Terraform commands. (If you hit an error during these steps, it is usually the result of an incorrect key pair referenced in your configuration file.)
-Init
-Validate
-Plan
-Apply


STEP 7:
Verify the provisioned resources in your console and make sure port 22 is open.

-Destroy (when you are done)




NOTE: You need to change the "source" and "destination", and pass your own "user data" and "security group", in the steps above.


Potential Interview Question: HOW WILL TERRAFORM AUTHENTICATE?

If you're creating a VM, Terraform needs to log in to it; for this, Terraform provides a "CONNECTION BLOCK".
If you're using Linux, you provide the SSH user and the PRIVATE KEY, and Terraform uses them to authenticate through the connection block.
To authenticate to Windows machines, "WINRM" is used instead.
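A minimal sketch of both connection styles, placed inside the resource block (the values are placeholders, and var.admin_password is assumed to be declared elsewhere):

# Linux: authenticate over SSH with a private key
connection {
  type        = "ssh"
  user        = "ubuntu"
  private_key = file("ec2-keypair/cicd.pem")
  host        = self.public_ip
}

# Windows: authenticate over WinRM with a username and password
connection {
  type     = "winrm"
  user     = "Administrator"
  password = var.admin_password # assumed to be declared elsewhere
  host     = self.public_ip
}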

SELF FUNCTION:
"self" lets a provisioner or connection block refer back to the resource it belongs to once that resource has been created.

       OR

"self" is an object in Terraform that is used to reference the attributes of the resource block it appears in (for example, self.public_ip).



Tuesday, February 14, 2023

TERRAFORM VARIABLES





                               WHAT ARE VARIABLES
 
Variables allow you to take hard-coded values out of your configuration files in Terraform. Variables in Terraform are declared using a variable block and can be used to store values like strings, numbers, booleans, or lists.

The equivalent concept in CloudFormation is "Parameters" (whose values can also be pulled from Parameter Store).

For example:
"aws_instance" - is the resource type specified.
"instance_count" - is the variable that tells us how many instances are created, based on the value of the variable.


resource "aws_instance" {
count = var.instance_count
}
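The variable referenced above would be declared in a variable block, for example (a minimal sketch; the default value is illustrative):

variable "instance_count" {
  description = "Number of EC2 instances to create"
  type        = number
  default     = 1
}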

    
TERRAFORM VARIABLES
Below are the different types of variables that Terraform supports (a declaration sketch for each type follows this list):

List:
A list is a collection of values of the same type, for example ["d", "e", "f"].

Number:
A number is a numeric value such as 90 or 2.13.

Boolean:
A value that can only be true or false.

Object:
A group of named attributes, each with its own type - useful for describing something like an EC2 instance's settings.

Map:
A collection of key/value pairs, where each value is looked up by its key.
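A minimal sketch of how each of these types can be declared (all names and values are illustrative):

variable "availability_zones" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b"]
}

variable "instance_count" {
  type    = number
  default = 2
}

variable "enable_monitoring" {
  type    = bool
  default = false
}

variable "instance_settings" {
  type = object({
    instance_type = string
    volume_size   = number
  })
  default = {
    instance_type = "t3.micro"
    volume_size   = 20
  }
}

variable "common_tags" {
  type = map(string)
  default = {
    Environment = "dev"
    Owner       = "platform-team"
  }
}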


                        LOCAL VALUES
  • Input Variables
  • Output Values

INPUT VARIABLES
Input variables let you pass values into your modules from different Terraform configuration files, making your modules reusable.


OUTPUT VALUES
Output values are similar to return values in a programming language: they expose information about your resources after they are created.
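A minimal sketch of an output value (the resource reference is illustrative):

output "instance_public_ip" {
  description = "Public IP of the example instance"
  value       = aws_instance.example.public_ip
}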


MANUAL VALUES: 

Manual values make the code less flexible, harder to manage, and not scalable. You take the hard-coded values out of the configuration, but Terraform then asks you for the values on the CLI: when you run "terraform plan" and "terraform apply", Terraform prompts you to provide the variable values in the terminal. One benefit of manual values is that THEY DO NOT EXPOSE YOUR SECRETS/VALUES, so you can share your configuration files on GitHub.

DEFAULT VALUES:

Terraform reads the variable declaration and, if no other value is supplied, picks up the value from the "default" argument.

For example:

 To pass a default argument, as in the case below, set default = 1.
 You pass the default argument = value (inside variable.tf).

LIMITATION OF DEFAULT VALUES:
In most cases, when you reproduce an infrastructure in another region, you have to update the variable file itself, and hand-editing it invites mistakes. To solve this, you remove all the values from the variable declarations and keep only the variable definitions; the actual values live in a separate file, so any future change is made to the values file alone. This is where terraform.tfvars comes in.

# variable.tf
variable "instance_count" {
  type    = number
  default = 1
}



TERRAFORM.TFVARS:
With a terraform.tfvars file, you have the ability to house all the resource values in one place. At this level you provide the values for the specific variable keys, and those values are applied to the resources.

For example:
Each file contains only what belongs in it - values, resources, or variable declarations - and the values are supplied dynamically through terraform.tfvars.
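A minimal sketch of a terraform.tfvars file (the variable names and values are illustrative and must match variables declared in variable.tf):

instance_count = 2
region         = "us-east-1"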

TFVARS CONNECTION WITH THE KEYS:

Terraform uses variable references to make the connection between the values in the .tfvars file and the variable keys declared in variable.tf.

CUSTOM VARIABLES:
These are .tfvars files with a custom name whose values you pass in at run time, for example terraform apply -var-file="prod.tfvars".

AUTO.TFVARS:
Terraform automatically loads variable values defined in any file whose name ends in .auto.tfvars, giving you the ability to customize the values and have them picked up automatically at run time.

Let's get a small project done on custom auto.tfvars files.

STEP 1:
Create a folder, for example called "input-variables".

STEP 2:
Create another folder inside "input-variables", for example called "custom-auto-tfvars".

STEP 3:

Within the 2nd folder, create a file, for example called "provider.tf":

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}

STEP 4:
Within the same 2nd folder, create a file, for example called "dev.auto.tfvars":

Dev-vpc-cidr_block = "10.0.0.0/16"
Dev-vpc-instance-tenancy = "default"
Dev-subnet-1-cidr = "10.0.1.0/24"
Dev-subnet-1-availability-zone = "us-east-1a"


STEP 5:
Create another file inside the 2nd folder called "security-group.tf".

Note: instead of making use of port "443", which requires a certificate, we will make use of port "80".

The "-1" referenced in the egress block grants access to all protocols. You can also specify TCP or UDP as the protocol type.

resource "aws_security_group" "development-SG" {
name = "development-SG"
vpc_id = aws_vpc.Dev-VPC.id

ingress {
description = "TLS from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}

STEP 6:
Create a file inside custom-auto-tfvars called "variable.tf":

# dev vpc cidr block
variable "Dev-vpc-cidr_block" {
  description = "Development vpc cidr block"
  type        = string
}

# dev vpc instance tenancy
variable "Dev-vpc-instance-tenancy" {
  description = "Development vpc instance tenancy"
  type        = string
}

# dev vpc subnet1 cidr block
variable "Dev-subnet-1-cidr" {
  description = "Development vpc subnet1 cidr"
  type        = string
}

# dev vpc subnet1 availability zone
variable "Dev-subnet-1-availability-zone" {
  description = "Development vpc subnet1 az"
  type        = string
}

STEP 7:
Create a file inside custom-auto-tfvars called "vpc-network.tf":

# Development VPC
resource "aws_vpc" "Dev-VPC" {
  cidr_block       = var.Dev-vpc-cidr_block
  instance_tenancy = var.Dev-vpc-instance-tenancy

  tags = {
    Name = "Dev-VPC"
  }
}

# Development Subnet 1
resource "aws_subnet" "Dev-Subnet-1" {
  vpc_id            = aws_vpc.Dev-VPC.id
  cidr_block        = var.Dev-subnet-1-cidr
  availability_zone = var.Dev-subnet-1-availability-zone

  tags = {
    Name = "Dev-Subnet-1"
  }
}

# Development VPC internet gateway
resource "aws_internet_gateway" "Dev-VPC-IGW" {
  vpc_id = aws_vpc.Dev-VPC.id

  tags = {
    Name = "Dev-VPC-IGW"
  }
}

STEP 8:
In this hands-on, we provisioned five resources. Depending on how much of the configuration you deploy, you may see fewer resources.



STEP 9:
"cd" into the path (custom-auto-tfvars) where your files are located.

STEP 10:
Run the Terraform commands:
-Init
-Validate
-plan
-apply
-destroy

STEP 11:
Congratulations, you have successfully used Terraform variables and values to provision resources.

NB: If you successfully deploy this hands-on, kindly leave a comment and feedback. 😊


Referencing : Terraform- https://registry.terraform.io/
                    Mbandi  Awanmbandi






MODULE- TERRAFORM- HANDS-ON







Today we will do a few projects:

 1). Develop an RDS instance module and add it to the modules/code we developed in our previous post for the three modules (network, ec2 and security-group).

-- Create a folder and call it rds-database

-- Develop an RDS Module

-- Make the module dynamic ( do not hard code)

-- Use Parameter Store to store both the Username and Password

2). NOTE: Make sure to provide the developers/ops with the option of passing a username and password of their choice from where the module is being called/consumed, to avoid any update action on the database module.

-- Use the Data Function to call the values from Parameter Store (see the sketch after this list).


3). Make sure to provide both Single-AZ and Multi-AZ deployment options in your configuration, in case the module is needed to provision database resources in production.

-- Use both Subnet-1 and Subnet-2 for the DB placement.

-- Deploy the Resources and confirm everything was successful including the Database Module
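A minimal sketch of the Parameter Store lookup and the database resource (the parameter names, var.multi_az, and aws_db_subnet_group.dev are illustrative assumptions, not part of the assignment):

# Read the DB credentials from SSM Parameter Store with the data source
data "aws_ssm_parameter" "db_username" {
  name = "/dev/rds/username"
}

data "aws_ssm_parameter" "db_password" {
  name            = "/dev/rds/password"
  with_decryption = true
}

# Pass the values to the RDS instance instead of hard-coding them
resource "aws_db_instance" "dev" {
  allocated_storage    = 20
  engine               = "mysql"
  instance_class       = "db.t3.micro"
  username             = data.aws_ssm_parameter.db_username.value
  password             = data.aws_ssm_parameter.db_password.value
  multi_az             = var.multi_az # true for production, false for Single-AZ
  db_subnet_group_name = aws_db_subnet_group.dev.name # group spanning Subnet-1 and Subnet-2
  skip_final_snapshot  = true
}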


NB: Kindly leave a comment if you are able to deploy. Happy Learning 😊

Referencing https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance

                Mbandi Awan - Instructor

Sunday, February 12, 2023

AWS AUTHENTICATION

 

 

WHAT IS A WEB IDENTITY ROLE?

Answer:

It allows federated users, authenticated by an external web identity provider, to get access into your AWS EKS cluster (or other AWS resources) by assuming an IAM role.


HOW DOES WEB IDENTITY FEDERATION WORK?

  • The user signs in through the web identity provider and obtains temporary AWS credentials.
  • The EKS cluster (via AWS STS) verifies the credentials and the identity of the web provider, and assumes the IAM role on behalf of the user.
  • The application is then able to access the resources allowed by the IAM role, such as S3 buckets and DynamoDB tables.


WHAT DO YOU UNDERSTAND ABOUT THE BELOW:

IAM LEAST PRIVILEGE

Answer:

Organizations implement least privilege by leveraging IAM policies that grant the minimal level of access adequate to perform a specific task.

For example: you give a user access to ONLY "read data" from an S3 bucket.
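A minimal sketch of such a read-only policy expressed in Terraform (the bucket name is a placeholder):

resource "aws_iam_policy" "s3_read_only" {
  name = "s3-read-only-example"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }]
  })
}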


FEDERATION:

Answer:

To federate an organization's identities into AWS IAM, you can leverage an AWS identity provider and IAM roles, or connect a corporate directory using AWS Single Sign-On and then set up the permission sets that manage the IAM roles.


SINGLE SIGN-ON (SSO):

Answer:

With SSO, you can centrally manage access to multiple accounts and business applications. The AWS SSO management console or the AWS CLI can be used to set up and manage the SSO environment. Below are the steps taken to configure SSO:


  • Review the AWS SSO documentation
  • Set up SSO
  • Create a domain
  • Add AWS accounts
  • Choose the applications to be accessed
  • Create users in the domain
  • Test SSO


WHAT IS THE DIFFERENCE BETWEEN A ROLE AND ROLES?

Answer:

ROLE: a single IAM entity, with an associated set of permissions, that can be assumed by a user or service.

ROLES: the entire set of roles that exist within an AWS account.


LIST THE TYPES OF IAM POLICIES:

Answer:

  • Service control policies (SCPs)
  • Permissions boundaries
  • Identity-based policies
  • Resource-based policies


WHAT DO YOU UNDERSTAND ABOUT SECURITY TOKEN SERVICE (STS)?

Answer:
STS issues temporary credentials with an expiration timestamp, ensuring that users have the access they need only for the duration they need it. It helps secure the environment by enabling fine-grained access controls.

 WALK THROUGH HOW TO ACCESS STS.

STS IS GLOBAL: it is accessed programmatically (via the API or CLI), not through the AWS console search.

What STS returns: once you use the API actions, you get back an access key, a secret key, a session token, and an expiration time.

A typical walkthrough: "CREATE USER" with programmatic access and no permissions; create a role with a managed permission (for example S3FullAccess); in the role's trust relationships, trust the user by copying the user's ARN and changing the principal from the AWS service to that AWS ARN; update the trust policy; the user can now assume the role.
$creds = (Use-STSRole -RoleArn <ARN of the trusted role>) then generates the temporary STS credentials, with an expiration set.
i.e. one principal can assume multiple roles this way.
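A minimal sketch of the trust relationship described above, expressed in Terraform (the account ID, user name, and role name are placeholders):

resource "aws_iam_role" "s3_full_access" {
  name = "s3-full-access-role"

  # Trust policy: only the specified IAM user may assume this role through STS
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111111111111:user/example-user" }
    }]
  })
}

# Managed permission granted to whoever assumes the role
resource "aws_iam_role_policy_attachment" "s3_full_access" {
  role       = aws_iam_role.s3_full_access.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}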



This is to guide beginners on what to expect during interviews. I will be posting more tips 😊

For more interview tips, kindly click the link below.

Reference: https://docs.aws.amazon.com/iam/

https://emmie-questia.blogspot.com/2023/02/top-10-interview-questions-on-s3.html


Saturday, February 11, 2023

TERRAFORM META-ARGUMENT



Meta-Argument: Meta-arguments help engineers create resources across multiple regions and multiple cloud environments from the resource block - for example the provider meta-argument.

For Example:
To create 20 resources at a time, you can use looping or the count meta-argument in Terraform to achieve that workflow.

Provider config file: inside the provider config file, you define an ALIAS, a new value that is given to each additional provider configuration.

An alias: a unique name that references a specific provider configuration. Each time you reference that logical name in Terraform, you are calling out the "LOCAL NAME" of that provider.

For Example:
The "ALIAS" provider argument requires you to target the region (or another cloud provider) you want and call it out, pulling in that provider configuration on the resource.
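A minimal sketch of a provider alias used to target a second region (the regions and AMI are placeholders):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_instance" "west_vm" {
  provider      = aws.west            # uses the aliased provider configuration
  ami           = "ami-xxxxxxxxxxxx"  # replace with a valid AMI in us-west-2
  instance_type = "t3.micro"
}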
                                        

                            DEPENDS_ON

Terraform has a feature that identifies its dependency. This is a meta-argument that explicitly defines the dependency, meaning, terraform can actually know the sequence in which the dependent resource needs and provision it.  Below are the steps taken to achieve this:
      Depends_on is used on those resources that depends on other resources that terraform cannot automatically infer like VPC. 

STEP 1: 
Create a folder, for example call it "depends_on".

STEP 2:
Create a file inside the folder, for example call it "provider.tf":
terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}


STEP 3:
Create a file, for example call it "ec2.tf":
resource "aws_instance" "Development-VM" {
ami = "ami-0b0d****867f****3"
instance_type = "t3.micro"
#count = 3 # create three similar EC2 instances

tags = {
Name = "Dev-VM"
}
}


STEP 4: 
Create a file inside the folder, for example call it "eip.tf" (Elastic IP). Note that here the EIP is attached to the Development-VM instance (dev-eip).

resource "aws_eip" "dev-eip" {
  instance = aws_instance.Development-VM.id
  vpc      = true

  depends_on = [
    aws_instance.Development-VM
  ]
}

                                   COUNT
Another meta-argument is count, a parameter used to create multiple copies of the resource specified in the resource block. Below are the steps taken to demonstrate this:

STEP 1: 
Create a folder, for example call it "count".

STEP 2:
Create a file inside the folder, for example call it "provider.tf":
terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}


STEP 3: 
Create a file inside the folder, for example call it "ec2.tf":
resource "aws_instance" "specify your environment name" {
ami = "ami-0b*******f052a63"
instance_type = "t3.micro"
count = 3 # create three similar EC2 instances

tags = {
Name = "specify your environment name"
}
}

                        MULTIPLE PROVIDERS
Using multiple providers is useful when you need to manage resources from different providers in a single project.

The example below uses different providers to provision instances across AWS and Azure environments.

STEP 1 : 
Create a folder, for example call it "multiple-provider".

STEP 2 :
Create a file inside the folder, for example call it "provider.tf":
terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.47.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# The azurerm provider requires an (empty) features block
provider "azurerm" {
  features {}
}



STEP 3:
Create a file inside the folder, for example called "instance-VM.tf", containing both the AWS (EC2 instance) and Azure (virtual machine) resources.

# AWS instance block
resource "aws_instance" "example" {
  ami           = "ami-0b0dc***7f05**2a63"
  instance_type = "t3.micro"

  tags = {
    Name = "example" # replace with your environment name
  }
}

# Azure virtual machine (VM) block - a skeleton only; azurerm_virtual_machine also
# requires resource_group_name, network_interface_ids, storage_os_disk and os_profile
resource "azurerm_virtual_machine" "example" {
  name     = "example-vm" # replace with your environment name
  location = "eastus"
  vm_size  = "Standard_D2s_v3"
  # ...
}

STEP 4 : 
"cd" into the path 

STEP 5 :
Run the terraform commands
-Init
-Validate
-plan
-apply
-destroy


NB: In our next post, we will talk about Terraform provisioners.

                   Prof Mbandi : https://github.com/awanmbandi/Terraform-1
