Tuesday, February 28, 2023

GRAFANA LOKI


 

 


                  GRAFANA LOKI 
Grafana Loki is an open-source log aggregation tool that scales horizontally and can handle large volumes of log data. It is designed to work alongside Prometheus and Grafana and is configured in a YAML file (loki.yaml).

HOW DOES GRAFANA LOKI COLLECT LOGS:

Loki can store log data in S3 or locally on the filesystem, and it indexes ONLY the metadata (labels) of your logs rather than the full log content. The index can be kept in a separate store; for example, Grafana Loki can manage index storage using a DynamoDB table.

HOW ARE LOGS FILTERED?
We can use "LOGQL", a query language that lets you filter logs using the LABELS attached to the log streams.

FILTERING STAGES IN GRAFANA LOKI:
  • Stream selector: This selects log streams by their labels (key-value pairs); for example, you may pick the logs from a particular application over a certain period and then search them for errors (see the sketch after this list). 
  • Line filters: These operate on the log line itself; each line is matched against a string using a filter operator such as |= or !=.
  • Label filters: These filter on label values, including labels extracted from the log line by a parser, using the same label syntax Prometheus users will recognize.
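
For illustration, a query combining all three stages might look like the sketch below (the app="payments" label and the status_code field are assumptions about how your application's logs are labelled and formatted):

{app="payments"} |= "error" | logfmt | status_code >= 500

Here {app="payments"} is the stream selector, |= "error" is the line filter that keeps only lines containing the string "error", | logfmt parses each line into labels, and status_code >= 500 is the label filter applied to the extracted label.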

MAJOR TYPES OF LOGQL

Log queries: These return the contents of the matching log lines.

Metric queries: These extend log queries to calculate numeric values from them, for example an error rate or the average duration of a request. 
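
For example, the metric query sketched below counts error lines per second over the last five minutes (again assuming an app="payments" label):

sum(rate({app="payments"} |= "error" [5m]))

Unwrapped range aggregations such as avg_over_time can similarly compute averages of numeric fields extracted from log lines, for example a request duration.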


TWO RULES OF LOKI:

  • Alert rules.
  • Recording rules. 

GRAFANA LOKI & HTTP:
Every user must authenticate to obtain a token; once authorized, you can send HTTP requests to the API endpoints to carry out tasks. 

ENDPOINTS:
A few of the endpoints are:

GET /ready
GET /metrics
GET /config
GET /services
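
As a rough sketch, assuming Loki is listening on localhost:3100, the endpoints above (and the query API) can be called with curl; the X-Scope-OrgID header is only needed when Loki runs in multi-tenant mode, and the tenant name and query here are made-up examples:

curl http://localhost:3100/ready
curl http://localhost:3100/metrics
curl -G -H "X-Scope-OrgID: tenant1" \
  --data-urlencode 'query={app="payments"}' \
  http://localhost:3100/loki/api/v1/query

If your Loki sits behind an authenticating proxy or Grafana Cloud, add the appropriate Authorization header or basic-auth credentials to each request.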


GRAFANA LOKI WITH KUBERNETES:
Grafana Loki can be used to collect pod logs in Kubernetes clusters. To collect the logs, an agent such as Promtail or Fluentd is installed so that it runs on each node within the Kubernetes cluster; Promtail then ships the logs to Loki for storage. 
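
A trimmed Promtail configuration sketch is shown below; the Loki service URL and the label mapping are assumptions, and a real deployment (for example the official Helm chart) ships a much fuller set of relabel rules that map pod metadata to log file paths:

server:
  http_listen_port: 9080
positions:
  filename: /run/promtail/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push   # assumed in-cluster Loki service
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                            # discover pods running on the node
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod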


HOW IS GRAFANA LOKI CONFIGURED IN A KUBERNETES MANIFEST FILE? 
 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: query-frontend
  name: query-frontend
  namespace: <namespace>
spec:
  minReadySeconds: 10
  replicas: 2
  selector:
    matchLabels:
      name: query-frontend
  template:
    metadata:
      labels:
        name: query-frontend
    spec:
      containers:
      - args:
        - -config.file=/etc/loki/config.yaml
        - -log.level=debug
        - -target=query-frontend
        image: grafana/loki:latest
        imagePullPolicy: Always
        name: query-frontend
        ports:
        - containerPort: 3100
          name: http
          protocol: TCP
        resources:
          limits:
            memory: 1200Mi
          requests:
            cpu: "2"
            memory: 600Mi
        volumeMounts:
        - mountPath: /etc/loki
          name: loki-frontend
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: loki-frontend
        name: loki-frontend

Next slide we will look at Prometheus. Happy Learning!!! 😊

Monday, February 27, 2023

TERRAFORM LOCAL & REMOTE BACKENDS AND STATE FILE

 





                     TERRAFORM BACKENDS:
This is the foundation Terraform uses to persist its state file, which maps your configuration to the real infrastructure it manages. 

                     TYPES OF BACKEND

LOCAL BACKEND:
Here, the state file is stored on the local machine within the Terraform configuration folder (i.e. a local backend).
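
The local backend is Terraform's default, so declaring it is optional; a minimal sketch that makes the state file path explicit looks like this:

terraform {
  backend "local" {
    path = "terraform.tfstate"   # state file kept alongside the configuration
  }
}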

HANDS-ON:

Let's deploy using the local backend. To achieve this, we create two environment variable files that share the same configuration and pass each one to Terraform on the command line. For this project, we use an "ubuntu 18.04" OS.


STEP 1:
Create a folder called "local-Backend"

STEP 2:
Within the folder, let's create a "dev.tfvars" file and in this file, add the configuration below:

dev-vpc-cidr-block = "10.0.0.0/16"
dev-vpc-instance-tenancy = "default"
dev-subnet-1-cidr-block = "10.0.1.0/24"
dev-subnet-1-availability-zone = "us-east-1a"
dev-subnet-2-cidr-block = "10.0.2.0/24"
dev-subnet-2-availability-zone = "us-east-1b"
key_name = "cicd"
ami = "ami-02***eb42***0e"
instance_type = "t2.micro"


STEP 3:
Let's create another file "preprod.tfvars" and in this file, add the configuration below:
dev-vpc-cidr-block = "10.0.0.0/16"
dev-vpc-instance-tenancy = "default"
dev-subnet-1-cidr-block = "10.0.1.0/24"
dev-subnet-1-availability-zone = "us-east-1a"
dev-subnet-2-cidr-block = "10.0.2.0/24"
dev-subnet-2-availability-zone = "us-east-1b"
key_name = "cicd"
ami = "ami-026****eb*****90e"
instance_type = "t2.micro"

STEP 4:
Create a file "provider.tf"and in this file, you have the below configuration:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

STEP 5:
Create a file "variable" and in this file, you have the below configuration:
# dev vpc cidr block
variable "dev-vpc-cidr-block" {
  description = "development vpc cidr block"
  type        = string
  # default = "10.0.0.0/16"
}

# dev vpc instance tenancy
variable "dev-vpc-instance-tenancy" {
  description = "development vpc instance tenancy"
  type        = string
  # default = "default"
}

# dev vpc subnet 1 cidr block
variable "dev-subnet-1-cidr-block" {
  description = "development subnet 1 cidr block"
  type        = string
  # default = "10.0.1.0/24"
}

# dev vpc subnet 1 availability zone
variable "dev-subnet-1-availability-zone" {
  description = "development subnet 1 availability zone"
  type        = string
  # default = "us-east-1a"
}

# dev vpc subnet 2 cidr block
variable "dev-subnet-2-cidr-block" {
  description = "development subnet 2 cidr block"
  type        = string
  # default = "10.0.2.0/24"
}

# dev vpc subnet 2 availability zone
variable "dev-subnet-2-availability-zone" {
  description = "development subnet 2 availability zone"
  type        = string
  # default = "us-east-1b"
}

variable "ami" {
  type        = string
  description = "prod instance ami"
}

variable "key_name" {
  type        = string
  description = "prod instance key name"
}

variable "instance_type" {
  type        = string
  description = "prod instance type"
}

STEP 6:
 Create a file "ec2.tf" and in this file, you have the below configuration:
resource "aws_instance" "prod-vm" {
# (resource arguments)
ami = var.ami
key_name = var.key_name
instance_type = var.instance_type
user_data = file("webapp-deploy.sh")
subnet_id = aws_subnet.dev-subnet-1.id
#vpc_security_group_ids = [ "aws_security_group.Development-SG.id" ]
tags = {
Name = "Dev-vm"
}
}


STEP 7:
Create a file "network.tf" with the configuration below:

# Development vpc
resource "aws_vpc" "dev-vpc" {
cidr_block = var.dev-vpc-cidr-block
instance_tenancy = var.dev-vpc-instance-tenancy

tags = {
Name = "dev-vpc"
}
}

# Development subnet 1
resource "aws_subnet" "dev-subnet-1" {
vpc_id = aws_vpc.dev-vpc.id
cidr_block = var.dev-subnet-1-cidr-block
availability_zone = var.dev-subnet-1-availability-zone
tags = {
Name = "dev-subnet-1"
}
}

# Development subnet 2
resource "aws_subnet" "dev-subnet-2" {
vpc_id = aws_vpc.dev-vpc.id
cidr_block = var.dev-subnet-2-cidr-block
availability_zone = var.dev-subnet-2-availability-zone
tags = {
Name = "dev-subnet-2"
}
}

# Development vpc internet gateway
resource "aws_internet_gateway" "dev-vpc-igw" {
vpc_id = aws_vpc.dev-vpc.id
tags = {
Name = "dev-vpc-igw"
}
}

STEP 8:
Create a file "Sg.tf" and in this file, you have the below configuration:
resource "aws_security_group" "Development-SG" {
name = "Development-SG"
description = "Development-SG"
vpc_id = aws_vpc.dev-vpc.id

ingress {
description = "TLS from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
#tfsec:ignore:aws-vpc-no-public-ingress-sgr
cidr_blocks = [ "0.0.0.0/0" ]
#tfsec:ignore:aws-vpc-no-public-ingress-sgr
ipv6_cidr_blocks = [ "::/0" ]
}

#tfsec:ignore:aws-vpc-add-description-to-security-group-rule
egress {
from_port = 0
to_port = 0
protocol = "-1"
#tfsec:ignore:aws-vpc-no-public-egress-sgr
cidr_blocks = ["0.0.0.0/0"]
#tfsec:ignore:aws-vpc-no-public-egress-sgr
ipv6_cidr_blocks = ["::/0"]
}

tags = {
Name = "allow_tls"
}


STEP 9:
"cd" into the path (Local backend) where your files are located.

STEP 10:
Run the terraform commands
-Init
-Validate



-plan

Kindly ensure you pass the variables file, for example "terraform plan --var-file dev.tfvars" (or preprod.tfvars), and enter any remaining values manually when prompted.





-apply



-destroy
In order to destroy, run "terraform destroy --var-file dev.tfvars" (using the same var file you applied with).



REMOTE BACKEND:
A remote backend stores the state file outside your machine, for example in an S3 bucket, with a DynamoDB table used to lock the state file so that only one person can modify it at a time. As a best practice we can enable encryption (managed by AWS) and S3 access logging to capture all activity against the bucket. None of this is available with a local backend. 

For example:
With the STATE FILE located in an S3 bucket, we can also enable OBJECT LOCK and VERSIONING, so that when a software engineer makes changes they are not able to delete or overwrite the STATE FILE history.
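
A minimal remote backend sketch using S3 and DynamoDB is shown below; the bucket name, key and table name are placeholders you would replace with your own (the DynamoDB table needs a partition key named LockID):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # placeholder bucket name
    key            = "dev/terraform.tfstate"      # path of the state object inside the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"       # placeholder lock table
    encrypt        = true                         # encrypt the state object at rest
  }
}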

             BENEFITS OF REMOTE BACKEND 

Collaboration: 
Multiple users can access at the same time.

Scalability:
An infrastructure deployment grows in size over time; storing the state file in an S3 bucket means the backend scales along with it. 

               


Congratulations, you have successfully deployed infrastructure using a Terraform local backend.

NB: If you successfully deploy this hands-on, kindly leave a comment and feedback. 😊


Thursday, February 23, 2023

VARIABLE LIST

 



            A  VARIABLE  LIST 

A variable list is a type of input variable that allows you to define a list of values in your Terraform configuration, and it comes in handy when parameterizing. With a variable list, the Terraform configuration uses "list" in the variable declaration. 


HANDS-ON:
The steps below show how to deploy a variable list. A list can be seen as an ordered collection of strings, for example ["d", "e", "f"]. We need the instance, security group, network and variables files. 

STEP 1: 
First, create a folder "variable-list" and within the folder create a file "provider.tf":

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}


STEP 2:
Create a file "Variables.tf" and pass instance type variable as well as the count and list them as strings.  

# dev instance ami id
variable "Dev-instance-ami-id" {
  type    = string
  default = "ami-0b0dc*****52a63"
}

# dev instance type
variable "Dev-instance-type" {
  type    = list(string)
  default = ["t2.micro", "t2.nano", "t2.large", "t2.small"]
}

# dev vpc cidr block
variable "Dev-vpc-cidrblock" {
  type    = string
  default = "10.0.0.0/16"
}

# dev vpc instance tenancy
variable "Dev-vpc-instance-tenency" {
  type    = string
  default = "default"
}

# dev subnet 1 cidr block
variable "Dev-subnet-1-cidrblock" {
  type    = string
  default = "10.0.1.0/24"
}

# dev subnet 1 availability zone
variable "Dev-subnet-1-availability-zone" {
  type    = string
  default = "us-east-1a"
}

# dev subnet 2 cidr block
variable "Dev-subnet-2-cidrblock" {
  type    = string
  default = "10.0.2.0/24"
}

# dev subnet 2 availability zone
variable "Dev-subnet-2-availability-zone" {
  type    = string
  default = "us-east-1b"
}

variable "provider-profile" {
  type    = string
  default = "default"
}

variable "dev-count" {
  description = "dev count"
  type        = list(number)
  default     = [1, 3, 5, 10]
}

STEP 3:
Create a file "ec2.tf" and pass the instance type and the count using the applicable "index" (0,1,2,.....n)

resource "aws_instance" "Development-VM" {
ami = var.Dev-instance-ami-id
instance_type = var.Dev-instance-type[1]
count = var.dev-count[0] # create four similar EC2 instances
subnet_id = aws_subnet.Dev-subnet-1.id
vpc_security_group_ids = [aws_security_group.Development-SG.id]
tags = {
Name = "Dev-VM"
}
}


STEP 4:
 We provisioned a "SG"

resource "aws_security_group" "Development-SG" {
name = "Development-SG"
description = "Development Security Group"
vpc_id = aws_vpc.Dev-VPC.id

ingress {
description = "TLS from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}

### -1 protocol for egress means allow all traffic, and the below notation for ipv6 is the general way in which ipv6 is recognized.connection {

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}

tags = {
Name = "allow_http traffic"
}
}

STEP 5:
Create a file called "Vpc-network.tf"

# Development VPC
resource "aws_vpc" "Dev-VPC" {
cidr_block = var.Dev-vpc-cidrblock
instance_tenancy = var.Dev-vpc-instance-tenency

tags = {
Name = "Dev-VPC"
}
}

# Development subnet 1
resource "aws_subnet" "Dev-subnet-1" {
vpc_id = aws_vpc.Dev-VPC.id
cidr_block = var.Dev-subnet-1-cidrblock
availability_zone = var.Dev-subnet-1-availability-zone
tags = {
Name = "Dev-subnet-1"
}
}

# Development subnet 2
resource "aws_subnet" "Dev-subnet-2" {
vpc_id = aws_vpc.Dev-VPC.id
cidr_block = var.Dev-subnet-2-cidrblock
availability_zone = var.Dev-subnet-2-availability-zone
tags = {
Name = "Dev-subnet-2"
}
}

# Development VPC internet Gateway
resource "aws_internet_gateway" "Dev-VPC-IGW" {
vpc_id = aws_vpc.Dev-VPC.id

tags = {
Name = "Dev-VPC-IGW"
}
}

STEP 6:
"cd" into the path of the folder.
 "ls" - list what you have inside the folder.    

STEP 7: 
Run the terraform commands

-Init
-Validate





-Plan
-Apply



- Destroy



Happy learning 😊

Referencing: HashiCorp - https://developer.hashicorp.com/terraform/language/values/variables

AWS COST SAVING SERVICES

 









HOW DO YOU DEBUG A BASH SHELL? 

ANSWER:

  • Downloading bashdb - Install bashdb using your system package manager. The package documents the steps for debugging scripts.
  • STEPS:
  • Create a bash script file, for example "script.sh"
  • Copy the file from "._DEBUG.sh"
  • Set up the debug configuration under "run - debug configuration - bash scripts"
  • Launch your "script.sh" from the bash shell.
  • Debugger port 33333 - Ensure that the firewall is configured to allow connections to the port.

. _DEBUG.sh
  • "Set  -x" option - This helps to execute the script and the argument. You can use set -x and set +x to see whats happening in your script.
#!/bin/bash

set -x
# ...code to debug...
set +x

  • Echo command - Use echo to print variables and command output so you can see where the script is likely failing; a combined sketch is shown below. 
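
Putting the two together, a minimal runnable sketch (the variable names are made up for illustration) looks like this:

#!/bin/bash
name="world"
echo "DEBUG: name is '$name'"   # echo-based debugging: print variable contents

set -x                          # start tracing: each command is printed before it runs
greeting="Hello, $name"
echo "$greeting"
set +x                          # stop tracing

You can also run the whole script with tracing enabled from the start using "bash -x script.sh".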


WHICH AWS SERVICES MANAGE COST? 

ANSWER

AWS Trusted Advisor - Scans the environment, checks it against best practices and helps you optimize it. Trusted Advisor implements checks in five categories: cost optimization, performance, service limits, security and fault tolerance. 

HOW DO YOU QUERY TRUSTED ADVISOR PROGRAMMATICALLY USING AWS BOTO3?

import boto3

client = boto3.client('support')

response = client.describe_trusted_advisor_check_result(
    # Do not forget to replace with the ID of the check you want to retrieve
    checkId='*******',
    language='en'  # This depends on your language preference, for example en=English, fr=French, etc.
)

print(response['result']['status'])


WHAT ARE THE CORE CHECKS OF TRUSTED ADVISOR?

ANSWER:

  • EBS public Snapshots

  • RDS public Snapshots.

  • IAM Use

  • S3 Bucket Permission

  • Security Groups - Specific port unrestricted.

  • MFA on Root Account

  • Service Limits

                                

                  AWS SAVING PLANS

AWS Savings Plans: A flexible pricing model that helps to reduce bills significantly. AWS offers three types of Savings Plans.


Compute Savings Plans - Apply to compute usage such as AWS Fargate and AWS Lambda, regardless of Region or Availability Zone, saving up to 66%.


EC2 Instance Savings Plans - Apply to a chosen instance family in a Region, regardless of AZ, instance size (t2.micro, t2.medium) or operating system, saving up to 72%.

SageMaker Savings Plans - Apply to any ML instance size in any Region, saving up to 64%.


               AWS COST EXPLORER : 
We use it to monitor and track spending and usage patterns and to forecast future cost.

  AWS BUDGET:
You can set a cost or usage budget threshold for your AWS resources (monthly, quarterly or annually) and set up an alert should you exceed the budget. 


AWS COST ALLOCATION TAGS
Cost allocation tags are metadata and AWS allows you to assign up to 50 cost allocation tags to each resource.  


AWS COST ANOMALY DETECTION
This service is available at no cost and can be accessed via APIs. It detects sudden spikes in cost and sends an alert, using machine learning algorithms to establish a baseline of your cost and monitor current spend against that baseline.

USE CASE:

Think about this: you are in an environment where developers create resources, and some of those resources are not used, accumulating cost. As a best practice, using Trusted Advisor to scan the environment will expose both tagged and untagged resources as well as their overhead cost.





These are potential interview questions. Happy Learning!! 😊

Referencing:

AWS: https://aws.amazon.com/savingsplans/

https://emmie-questia.blogspot.com/

Tuesday, February 21, 2023

SENSITIVE IN TERRAFORM

 



In an environment where you deploy Terraform remotely through CI/CD (for example with Jenkins), you have to ensure that any password you provision is not exposed in your central repository. In the world of Terraform, there is a feature that prevents your credentials from being printed; that feature is called "SENSITIVE". 

Inside "variable.tf" we set sensitive = true on the variable and Terraform will automatically pick it up. Sensitive will not hide the secret at the level of the STATE FILE, but at least it hides it from the plan and apply output that Terraform generates. 


Hands-on:
Let's provision a few resources, marking values as sensitive while deploying our mysql-db.


STEP I:
Create a folder for example called "sensitive"

STEP 2: 
Within the folder create a file called "db.auto.tfvars" We specified the user name and password.

Dev_allocated_storage = "10"
Dev-db_name = "devdb"
Dev-db_engine = "mysql"
Dev-db_engine_version = "5.7"
Dev-db_instance_class = "db.t3.micro"
Dev-db_username = "admin"
Dev-db_password = "adminpassword"
Dev-dbskip_final_snapshot = true

STEP 3:
Within the folder create a file called"mysql-db.tf" and at this level we, "variablize" and consumed username, password and engine version.  

resource "aws_db_instance" "dev-mysqldb" {
allocated_storage = var.Dev_allocated_storage
db_name = var.Dev-db_name
engine = var.Dev-db_engine
engine_version = var.Dev-db_engine_version
instance_class = var.Dev-db_instance_class
username = var.Dev-db_username
password = var.Dev-db_password
skip_final_snapshot = var.Dev-dbskip_final_snapshot
}

STEP 4:
Create another file "provider.tf"

terraform {
  required_version = "1.3.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.39.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}


STEP 5:
Create a "variable.tf" file and pass the "true" function in the file at the level of sensitive

variable "Dev_allocated_storage" {
type = number
}
# DB_name
variable "Dev-db_name" {
type = string
}
# DB_engine
variable "Dev-db_engine" {
type = string
sensitive = true
}
variable "Dev-db_engine_version" {
type = string
sensitive = true
}
variable "Dev-db_instance_class" {
type = string
}

variable "Dev-db_username" {
type = string
sensitive = true
}
variable "Dev-db_password" {
type = string
sensitive = true
}
variable "Dev-dbskip_final_snapshot" {
type = string
}


STEP 6:
"cd" into the path, i.e. the folder.

STEP 7:
Run the terraform commands
-Init
-Validate



-plan



-apply



-destroy

STEP 8:
Congratulations, you have successfully used terraform sensitive to deploy a mysql-db. In the next slide we will discuss modules.

NB: If you successfully deployed this hands-on, kindly leave a comment and feedback. 😊

Referencing: Terraform documentation.




CONFIGURING A PHISHING CAMPAIGN IN MICROSOFT DEFENDER.

Configuring a phishing campaign in Microsoft Defender (specifically Microsoft Defender for Office 365) involves creating a simulated attack ...