Saturday, April 29, 2023

GCP CONSOLE SET-UP (TEAM & ADMIN)


 

Today, we will guide you through setting up your cloud environment as an admin and team. I'd advise you to use Chrome.


STEP 1:


In the address bar, go to https://console.cloud.google.com/


STEP 2:

Click "Start for free".

STEP 3 & STEP 4:
(screenshots)



STEP 5:

Select one organization from the list and click next. 

STEP 6:
Select three choices and click Next.

STEP 7:
 Select any four choices. 


STEP 8:

Select your role (e.g., DevOps, Cloud).



STEP 9:
You can access the GCP services via the hamburger button (navigation menu).





STEP 10:
Set up your admin account. Go to Google Domains with the link.
The domain is the actual piece that you'll be interacting with.


STEP 11:
Click "my domains".

STEP 12:
(screenshot)

STEP 13:
Choose your domain name, select the price, disable auto-renew, add to cart, and buy.




STEP 14:
After you purchase the domain, check your email and verify. It will take you back to Google Domains.

STEP 15:
Click on manage.


STEP 16:
You get charged when you deploy resources within Workspace as a user. For learning purposes we will make use of a single user, which is the free tier. Google Cloud will grant you $134 in credits within the admin console, and $300 for 3 months in the team console.

Open this link and provide a business name. 



STEP 17:
Provide the domain name you just registered. 

STEP 18:
At this point you'll be creating a user email "administrator@<your domain>" and a user password.



STEP 19:
Click on Protect, then "I'm ready to protect my domain".

STEP 20:
Click on sign in to verify.

STEP 21:
Click this link to sign in with the administrator email, which ends with your domain.
Example: administrator@xyz.com

STEP 22:
Now you're officially in your cloud admin console.
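
As an optional check, here is a minimal sketch (assuming the Cloud SDK is installed; the email is the placeholder from Step 21) to confirm the new admin identity and organization from the command line:

# Sketch: verify the console set-up from the Cloud SDK.
gcloud auth login administrator@xyz.com   # the admin email created above
gcloud organizations list                 # the new domain should appear here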







Awesome, you have successfully set up your console 😊. In the next slide, you will set up an organization.



Friday, April 28, 2023

GCP - IDENTITY AUDITING, ROLE RECOMMENDER, SERVICE ACCOUNT

 







CLOUD IAM is a service that you use to authenticate and authorize users or applications. Should you want to integrate users from on-premises, AWS, Azure, IBM, etc., you need IAM to create access. It lets you consistently manage the access of software engineers within the platform and maintain best practice with a high security posture in your environment. For example, a colleague might reach out because they do not have access and ask you to provide it; you should not grant it automatically, because there is a possibility that the individual does not need that access, given the role or service they are assigned to. As a cloud engineer you need to be aware of security. Google Cloud has provided services to monitor your platform and analyze the behaviour in your environment by implementing best practice:

1.  CLOUD IAM POLICY ANALYZER: Allows you to identify a specific resource, like a restricted GCS bucket, and scan the environment to find out who has access to that resource. This gives you the ability to get a detailed review of a locked-down environment in an automated fashion. Policy Analyzer uses the Cloud Asset API, which offers best-effort data freshness: while almost all policy updates appear in Policy Analyzer within minutes, it's possible that Policy Analyzer won't include the most recent policy updates.
There are two main inputs within Policy Analyzer for this analysis:

a. Query Scope: This is the level at which you want to perform the analysis within your organization. When it comes to identities, do you want to analyze from the root down to projects? You have the ability to target a specific project or folder.

b. Query Parameter: This is the actual resource (e.g., a GCS bucket) you want to audit or validate. Here you can also provide the principal as a parameter, to validate whether the user has been given that permission at the folder or organization level. A CLI sketch follows below.
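
As a minimal sketch (the organization ID and bucket name are placeholders), a Policy Analyzer query can be run from the Cloud SDK via the Cloud Asset API:

# Sketch: who has access to a restricted bucket, scoped to the organization?
gcloud asset analyze-iam-policy \
  --organization="123456789012" \
  --full-resource-name="//storage.googleapis.com/projects/_/buckets/my-restricted-bucket"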


2.  CLOUD POLICY TROUBLESHOOTER: Makes it easier to understand why a user can access a resource, or why they don't have permission to call an API.

Example: perhaps a colleague is having trouble reaching a resource. You can investigate further by making use of Policy Troubleshooter and providing the full details.

For example: HR hires you as a developer and you're given Organization Admin access, and a Quality Assurance engineer is also given organization or folder access. In this regard the Quality Assurance team has been given escalated privileges, which is bad. To automatically check your environment with regard to previous and current teams, IAM Role Recommender is the service to use.

There are three components you provide for the policy check:

a. The actual principal: At this level you provide the email, meaning the identity you want to validate against.

b. Resources: If you want to access a Compute Engine instance and you're getting an error, perhaps you're trying to create a snapshot of a Compute Engine disk. At this level you provide the specific resource involved in the action the individual is trying to initiate.

c. Permission: You pass the actual permission (e.g., on a GCS bucket) in this field, to validate it against the resource and principal. The troubleshooter will check the pool of roles to see whether the individual has access, and gives you the final result in milliseconds. A sketch follows below.
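
Here is a minimal sketch (placeholder project, user, and disk names) of a Policy Troubleshooter check matching the snapshot example above:

# Sketch: can jane@example.com create a snapshot of this disk?
gcloud policy-troubleshoot iam \
  //compute.googleapis.com/projects/my-project/zones/us-central1-a/disks/my-disk \
  --principal-email="jane@example.com" \
  --permission="compute.disks.createSnapshot"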


3.  IAM ROLE RECOMMENDER: This suggests a trimmed-down set of permissions for a role. It does its work in the background, and you find the results within the security insights dashboard. It is a Google-native artificial intelligence tool that automatically checks historical data, looks at the access an individual actually uses, and recommends a better-fitting role. Companies like Uber make heavy use of Role Recommender.

Example: if you have 40 developers and 20 cloud architects, and you have been creating GCS and GCE resources, it's hard to track all the activities. Over time, Role Recommender will monitor and gather the data for the last 90 days and suggest the best-fit role for each engineer. The AI estimates which permissions the developer actually needs and which go unused; that is very specific. If you were doing this manually, it would take you months to achieve. A sketch follows below.
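
As a sketch (placeholder project ID), IAM role recommendations can also be pulled from the CLI through the Recommender service:

# Sketch: list role recommendations generated from the last 90 days of usage.
gcloud recommender recommendations list \
  --project="my-project" \
  --location="global" \
  --recommender="google.iam.policy.Recommender"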

4.  IAM POLICY SIMULATOR: This allows you to validate the effect of a policy change on a particular user before it is implemented. It only works against existing policies, whether for users, groups, service accounts, etc.

5.  SERVICE ACCOUNTS: These are used by particular types of entities like machines and applications; a developer can make an API request call via the application using one. Service accounts are one of the biggest risks in IAM.

Example: you have an automation pipeline, there's a service account in that process being used for authorization, and you want to modify it. If you're not careful your code could break, because you did not take precautions around your pipeline set-up. In this case Policy Simulator will simulate the behaviour based on the existing user access.

Example: you could have several service accounts sitting idle, probably because the team no longer exists, yet these identities still live within IAM. This can pose a certain level of risk; in this case you can make use of Activity Analyzer to find and decommission those accounts immediately. A sketch follows below.
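
Activity Analyzer itself lives in the console, but as a minimal CLI sketch (placeholder project and account names), idle accounts can be listed and switched off like this:

# Sketch: enumerate service accounts, then disable one that is no longer used.
gcloud iam service-accounts list --project="my-project"
gcloud iam service-accounts disable old-pipeline@my-project.iam.gserviceaccount.com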


6.  ACTIVITY ANALYZER/EVENT MANAGER: Tracks and traces every activity within your GCP ecosystem.


Referencing: https://cloud.google.com/docs

Wednesday, April 26, 2023

GCP RESOURCE MANAGER



GCP RESOURCE MANAGER - As an engineer designing an architecture within Google Cloud, the Google Cloud global structure and the resource you want to create determine the placement, either a region or a zone. This in turn determines the level of availability of the resources.

Google Cloud has provided a global structure which is divided into regions and zones (AZs). A region is an area like New York, North Virginia, California, or Texas, while availability zones are smaller parts within the region. The availability zones are powered by data centres, and within a region you can have AZ1, AZ2, AZ3 and more, depending on the Google Cloud services and on whether the user chooses to increase the AZ count.
 

The data centres are the actual locations where the physical systems reside: all the virtual machines, network resources, database systems, and storage facilities run within particular data centres, which sit within availability zones, which sit within regions. This is the hierarchy. Within regions you have availability zones, which are the deployment areas for Google Cloud's compute-related resources, and if one AZ goes down, it will not affect the whole structure.

One of the pillars Google Cloud provides is reliability. There are two main points: FAULT TOLERANCE and HIGH AVAILABILITY. This means creating your resources across multiple areas, combined with other practices that make your environment resilient.





 


You might be asked why, when one data centre goes down, it does not affect the others. The reason is that a certain distance is required between a new data centre and the existing data centres within Google Cloud, and the design makes each centre completely independent: the network connection is isolated from data centre A to data centre B. Natural disasters or power failures can cause an AZ to go down.


ZONAL: This refers to virtual machines (Google Compute Engine). VMs reside in a particular availability zone, and persistent disks (PD) are used for their storage within Google Cloud. By default, VMs are zonal resources.
For example, based on the above structure: you created a VM placed in ZONE A, and a colleague mistakenly terminated ZONE A. Will you be able to log in through ZONE B, or will the resource be lost completely? In this scenario your virtual machine is completely gone, because you specified that AZ, unless you design a disaster-recovery architecture to take BACKUPS, SNAPSHOTS, IMAGES, etc. By default Google Cloud has automatic replication across the board for certain resources. A sketch follows below.
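
As a minimal sketch (placeholder names and zone), here is a zonal VM plus a disk snapshot as a simple disaster-recovery measure:

# Sketch: a zonal VM; its boot disk defaults to the instance name.
gcloud compute instances create my-vm --zone=us-central1-a

# A snapshot of that disk survives even if the zone is lost.
gcloud compute disks snapshot my-vm --zone=us-central1-a --snapshot-names=my-vm-backup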

CLOUD SQL DATABASE: These are regional, not zonal, like RDS. When we talk about high availability, a service like load balancing acts as an interface distributing requests to the backend. Each load balancer has an IP acting as the entry and exit point of your systems. We will elaborate more when we get to networking.
For example, take for instance a massive storm: a region goes down and nothing is accessible within that region. For this case you design a DUAL-REGION SOLUTION. With GCS you can provision storage that spans two different regions; your data will be replicated and stored across both.

MULTI-REGION RESOURCE PLACEMENT: This helps you place resources across more regions, each of which has multiple AZs. The service you need to achieve this is GOOGLE CLOUD STORAGE. A sketch follows below.
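
A minimal sketch (hypothetical bucket name), using the multi-region "US" location, which spans several US regions:

# Sketch: a multi-region bucket; data is replicated across regions automatically.
gcloud storage buckets create gs://my-demo-bucket --location=US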

GLOBAL RESOURCES: When you create a network in any of the domains within Google Cloud, by default that network is global; it spans multiple regions. Even if the two regions in the structure above go down, it will NOT affect the resource, because it is global. A Cloud VPC network is global, not regional and not AZ-specific. Other cloud providers do not have this when it comes to networks.


Next week, we will expand on the resource hierarchy. Direct link below. Happy Learning!! 😊

References:
Questia
IAM & Resource hierarchy: https://www.blogger.com/blog/post/edit/5428112557550405099/2839242659608842442
Google Cloud:

Monday, April 24, 2023

SIX PILLARS OF GOOGLE CLOUD



 
In building a structure, as a Cloud Architect you are literally responsible for the actual intelligence behind designing a solution that solves an organization's problem. You are responsible for guiding clients towards the solution, and there are a variety of factors you have to consider or offer the client based on the problem statement.

Here are some of the bases for deciding on a particular architecture in Google Cloud. Google Cloud's architectural framework has six pillars:

  • Design Consideration: When you provide solutions to clients, you have to consider the base foundation and introduce practices around the actual technology you will be taking advantage of. For example, a customer wants to store data in the cloud: for design consideration you have to look at who accesses that data and how you manage access control, and when it comes to compute and network resources, you have to develop best practices that span the workload in a consistent manner.
  • Operational Excellence: Here you focus solely on the stability of your environment, for example for an e-commerce or banking company, and on being able to gain insight into its operations.
  • Security, Privacy, Compliance: As a solution architect, when you design and develop a solution for the client, make sure you consider the security aspect of that application. For example, hackers may breach your environment and ask for ransom, forcefully disrupt the system, try to bring your business down on behalf of competitors, launch other malicious attacks, or spread bugs in your system. As an architect you should design a robust solution to protect the system.
  • With compliance, this has to do with government and body regulations that govern how you operate. It is your responsibility as a solution architect to design systems that are compliant with government rules and regulations as well as third-party regulations, for example around US citizen data and Canadian data. Data can be broken down into two kinds:
  1. Sensitive: information that can literally be used to trace an individual, like email, contact, name, date of birth. This kind of information is handled by companies, and there are certain rules put in place by government which MUST be complied with to keep the data secure. If you are not compliant, you will pay severely.
  2. Non-sensitive: information that cannot on its own be traced back to an individual.
  • Reliability, Fault Tolerance and High Availability: Reliability is the ability to build and design systems that are fault tolerant, elastic, and highly available. Fault tolerance is the ability to have a system where the failure of one component does not render the environment a complete failure. For example, you have 5 systems serving the application and users are accessing it; two go down, and the 3 left are still up and running. If you have this kind of structure, your environment is FAULT TOLERANT. Note that you can have an environment that is fault tolerant and yet not highly available.
          High Availability: This is the main goal of an infrastructure. You should have an environment that is well architected and can span regions. This means that at any hour of the week, day, or month, you have direct access to the application. This includes fault tolerance and also the ability to scale. You cannot have a highly available environment which is not FAULT TOLERANT.
  • Cost Optimization: As you architect the infrastructure you need to consider cost. Google Cloud provides 400+ services, and you will find that 5-6 services can solve the client's problem. You consider which serves the customer best and decide which service is most cost-effective.
  • Performance Optimization: Your infrastructure should remain responsive as the application is accessed. Resource allocation is another factor here; it is sized to the user base.

The next slide talks about GCP Resource Manager. Hope you find this tip informative 😊! Happy Learning!

Sunday, April 23, 2023

SIMILARITIES OF PRIVATE & PUBLIC CLOUD

 







Today, we will expand on the main instrument behind cloud computing.

VIRTUALIZATION: Before virtualization there was nothing like cloud; virtualization gave birth to cloud and is what makes cloud computing possible. The reason is in the word "virtual": you have the ability to make particular resources consumable via the network through API CALLS, over either a public or a private network.

It all started with private data centres/clouds; public cloud did not exist before private cloud. Organizations took advantage of virtualization, with the USE OF A HYPERVISOR, to gain flexibility in partitioning and allocating resources as needed and in efficiently utilizing the resources they had on the ground. Even though they were in the private cloud sector, they still managed all the different appliances within the physical data centre: they needed physical F5 LOAD BALANCERS, FIREWALLS, etc. All of this falls within the loop of expenses.

For example: 

An organization wants to host an application which will probably be launched in Europe. You have to factor in setting up a data centre in the EU, which comes with certain limitations, like capital. This is where big organizations like GOOGLE and MICROSOFT leveraged virtualization over a network structure that is spread across the globe via the open INTERNET, and provided security services to secure those resources virtually by integrating different technologies. This is how public cloud came to exist.

You give people the opportunity to consume those same virtual resources via the internet, not through the corporate network. However, the fact that you can access them through the internet does not mean they're not secure; that's where most companies are skeptical. Google Cloud has developed robust technology which you can use to protect your environment and improve your security posture, making it even better compared to someone running a private data centre.

Today, government agencies are adopting cloud due to its security posture.


PHYSICAL DATA CENTRE.




In our next slide, we will discuss the six pillars of the GCP framework! Happy Learning 😊!!

Saturday, April 22, 2023

CHALLENGES IN CLOUD


 

We will discuss some of the challenges faced by the 80% of organizations that still operate some kind of IT system or resources on-premises; this figure is based on Google's analysis. If we take a look at the pool of companies that have some type of application they are running, the majority are still in the private data centre, a physical location, managing the workload themselves. That is what has triggered the massive disruption by CLOUD.

Today, GCP provides various services to solve these on-premises challenges.

On-premises: it is difficult to estimate workload capacity, meaning the amount of resources you will need to manage your workload. Hence, Google Cloud provides services that let you pay ONLY for what is used.

Based on your experience as a cloud expert, can you tell me some of the difficulties your clients face on-premises which literally push them to start migrating to the cloud or to GCP?

On-premises: it's not possible to go global in minutes. With GCP you can reproduce your infrastructure in multiple regions within minutes; GCP uses automation to achieve this.

On-premises: security is another concern. GCP, however, is highly secured: each service within the network environment has its own security, and features such as maintenance and management are centralized.

On-premises is expensive to manage: the software is installed and runs on the company's own hardware infrastructure and is hosted locally. GCP, in contrast, optimizes cost: the software is stored and managed on the provider's servers and accessed through a web browser or another interface.


In our next slide, we will discuss deployment models! Happy Learning 😊!!

Thursday, April 20, 2023

GCP SERVICE MODEL





The service model has to do with the public cloud providers: how you choose a particular service depends on what you want to accomplish on the platform.

There are a few key domains when it comes to compute, storage, network, database, and so on. When it comes to GCP and you have a particular application you want to deploy, as a sophisticated engineer you will decide how much effort you're ready to put into the project, as well as how much flexibility you need.

Effort: You may decide that you do not want to manage the application or environment within GCP, perhaps because the organization does not have the expertise or the bandwidth allocated for the workload. In that case you want a service that provides some MANAGED CAPABILITIES, and your decision is driven by how much effort you want to put into the project.

Flexibility: This has to do with the different changes, from the base to the top of the configuration, which you want to introduce within the deployed application in your environment. For example, Google Cloud can maintain and manage the actual compute layer and so on. Here your decision is driven by how much flexibility you need with regard to your workload management.

    THERE ARE FIVE SERVICE MODELS.

 Infrastructure As A Service (IAAS)

For instance, a company wants to be able to install certain software on the specific resources the application will be running on, which gives the company control over how the application and its libraries (such as Python, Java, Ruby) behave, including updating them to a particular version. The KEY word is CONTROL: you configure applications as you wish and decide what to install inside the machine. IAAS provides you that level of flexibility. Google Cloud is still responsible for providing the machine, managing the network layer that the compute sits on, and keeping the compute itself up and running at all times. As a platform professional you can use GOOGLE COMPUTE ENGINE (GCE) to accomplish this as a virtual machine. A sketch follows below.
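
A minimal sketch (placeholder names; the image flags are illustrative) of provisioning such a VM:

# Sketch: an IaaS building block - a VM you fully control.
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --machine-type=e2-micro \
  --image-family=debian-11 \
  --image-project=debian-cloud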


Platform As A Service (PAAS):

In this case Google Cloud provides services to host your application so you do not have to manage the infrastructure at all; the infrastructure is fully managed by Google Cloud. When we talk about the infrastructure, Google Cloud takes care of the compute layer, everything below it, and some of the things above it. The only thing the client does is hand its application over to Google Cloud to run. As a cloud expert you can use GOOGLE APP ENGINE (GAE) to accomplish this, as sketched below.
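
As a sketch (assuming an app.yaml already exists in the current directory), handing an application to App Engine comes down to:

# Sketch: PaaS - deploy the app; Google manages the infrastructure underneath.
gcloud app create --region=us-central
gcloud app deploy app.yaml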

Software As A Service (SAAS):

Google Cloud offers software to you as a service: you do not have to do anything on the software when it comes to management; Google Cloud takes full responsibility, and you only manage your data. There are many services Google Cloud offers in this domain, one of which is GOOGLE CLOUD STORAGE (GCS).

Container As A Service (CAAS):
This is a subscription-based cloud service model that allows you to manage containers, applications, and clusters using APIs, container-based virtualization, and so on. It helps you streamline and manage containers within your software infrastructure, either on-premises or in the cloud. A container is a package of software that includes all dependencies: code, runtime, configuration, and libraries. Examples: Google Kubernetes Engine (GKE), Cloud Run. A sketch follows below.
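
A minimal sketch (hypothetical image name) of the Cloud Run flavour of CaaS:

# Sketch: CaaS - run a container without managing the cluster underneath.
gcloud run deploy my-service \
  --image=gcr.io/my-project/my-app \
  --region=us-central1 \
  --allow-unauthenticated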

Function As A Service( FAAS):
This is a way to build and run applications without worrying about the underlying infrastructure. It works as serverless computing, which allows HTTP functions to be deployed for use by other services or users: developers write code that is triggered by an incoming HTTP request. Some examples of FAAS include AWS Lambda and Cloud Functions. A sketch follows below.
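
As a sketch (placeholder function and entry-point names), an HTTP-triggered Cloud Function can be deployed like this:

# Sketch: FaaS - the code in the current directory runs on each HTTP request.
gcloud functions deploy hello-http \
  --runtime=python311 \
  --trigger-http \
  --entry-point=hello \
  --allow-unauthenticated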




In our next slide, we will expand on cloud challenges! Happy Learning 😊!!

Referencing:
https://cloud.google.com/iap/docs/concepts-overview
Mbandi AAK
Questia: https://www.blogger.com/blog/post/edit/5428112557550405099/1296614191750267780?hl=en

Wednesday, April 19, 2023

GCP IAM & RESOURCE HIERARCHY




Cloud Identity and Access Management (Cloud IAM) is a security framework that verifies users, controls their access rights, and denies access privileges where appropriate. You are able to authenticate users and secure access across cloud, SaaS, on-premises systems, and APIs. Here are some tips to help you use it better!

  • Make sure that you only give people access to what they need.
  • Make sure that you take away access when people don't need it anymore.
  • Make sure that different people have different jobs so that no one person has too much power.
  • Make sure that you have a plan for how to manage all the different people who need access.
  • Make sure that you keep your passwords (credentials) safe.
  • Make sure that you give people different levels of access depending on what they need.

The "who" can be a person, group, or application. 
The "what" refers to specific privileges or actions and the "resources" could be any Google Cloud service.


  GOOGLE CLOUD RESOURCE HIERARCHY

There is a resource hierarchy within Resource Manager, with four levels: organization, folders, projects, and resources (labels come in addition). One of the first things you define, which will help you design the infrastructure, is:

  1. ROOT ORGANIZATION: This is mainly the domain of the company, and Google Cloud needs this as the principal piece representing your organization within GCP. Everything you manage as an environment will be tied to this piece. For example, facebook.com, uber.com, shoeline.com: each of these domains identifies that company's structure within GCP.

Another example: let's say you have 500 employees within your organization, and each of them has an email that ends with the company domain, like uche@saskhealthregion.com.

If you're making use of Workspace, formerly called G Suite, you can integrate all 500 users into the cloud platform and centralize control, even if you need to block a particular employee.

  2. FOLDERS (departments): Can be used to segregate the different workloads you are engaged in within the organization. For example, you have four teams (A, B, C, D) that handle independent projects; generally, they will not need access to each other's work because they are working on completely different projects. Within a folder, you can have multiple projects in which to create resources.

Folders are NOT used to deploy resources. The folder sits within the domain.


  3. PROJECTS: In your cloud console you create a project; that project is the container where you house all the different resources you deploy within GCP. Resources sit within a project, which means the project sits within the folder.




  4. RESOURCES: Resources (shown in blue in the diagram) have one parent, and they inherit policies from that parent. Examples of resources: Cloud Run, GCS, Cloud VPC, GCE. Within Google Cloud we also have QUOTAS: quotas are limits set on your resources, and APIs manage resource consumption within your ecosystem against them. You can request a quota increase through Google Cloud support. Managing limits can help with security and billing.

  5. LABELS: These are objects that help you manage and organize your workloads around GCP, for example for billing, governance, or automation. They are based on key-value pairs. A sketch follows below.
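
As a minimal sketch (placeholder IDs throughout), the hierarchy above can also be built from the CLI:

# Sketch: a folder under the organization, then a project under that folder.
gcloud resource-manager folders create --display-name="team-a" --organization="123456789012"
gcloud projects create my-demo-project --folder="987654321"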

QUESTION: One of the first questions you may be asked: how will you access resources in GCP, meaning which interface do you use to gain access to the platform and get familiar? When we talk about an interface, we mean entry and exit communication. There are four major interfaces to interact with:
1. Cloud Console (the web UI),
2. mobile app (iOS and Android),
3. Cloud SDK (software development kit), which allows you to interact with your environment programmatically. This interface comprises 3 major components: gcloud (most Google Cloud services), bq (the BigQuery warehouse), and gsutil (storage). See the sketch after this list.
4. GCP client libraries (Python, Node.js, Ruby), mainly used by software developers.
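
A quick sketch of the three Cloud SDK tools in action (the bucket name is hypothetical):

# Sketch: the three command-line components of the Cloud SDK.
gcloud compute instances list   # gcloud: most GCP services
gsutil ls gs://my-demo-bucket   # gsutil: Cloud Storage
bq ls                           # bq: BigQuery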

                 Cloud IAM
Allows you to set policies at the organization, folder, project, or resource level.

Each policy contains a set of roles and role members, with resources inheriting policies from their parent. Think of it this way: a resource's effective policy is the union of the parent policy and the resource policy, which means a less restrictive parent policy will always override a more restrictive resource policy.

The Organization Administrator role gives a user access to all resources within the organization, while the Project Creator role allows users to create projects within the organization.
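
As a minimal sketch (placeholder project and user), this is how a role is granted as a policy binding at the project level:

# Sketch: bind a role to a principal on one project; child resources inherit it.
gcloud projects add-iam-policy-binding my-demo-project \
  --member="user:jane@example.com" \
  --role="roles/viewer"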





     WHAT IS G SUITE

GCP is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products. G Suite was rebranded as Google Workspace in 2020.

G Suite is a collection of cloud-based productivity and collaboration tools developed by Google. It includes Gmail, Google Drive, Google Docs, Calendar, Sheets, and so on.

The three main editions of G Suite were, monthly, Basic at $6, Business at $12, and Enterprise at $25. There are several alternatives to G Suite you can consider, like Fastmail, Office 365, Zoho Workplace, GoDaddy Email & Office, etc.


       THE ROLE OF RESOURCE MANAGER


   TYPES OF IAM ROLES

There are three types of IAM roles: Primitive/Basic, Predefined, and Custom.

Primitive/basic roles are the original roles that were available in the Cloud Console (Owner, Editor, Viewer), and they are broad. IAM basic roles offer fixed, coarse-grained levels of access.






GCP services offer their own sets of predefined roles, and they define where those roles can be applied. This provides members with granular access to specific GCP resources and prevents unwanted access to other resources. The permissions themselves correspond to classes and methods in the APIs. A sketch follows below.
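
As a sketch (placeholder project and role ID), here is how predefined roles can be inspected and a custom role created from the CLI:

# Sketch: browse predefined storage roles, then define a narrow custom role.
gcloud iam roles list --filter="name:roles/storage"
gcloud iam roles create customBucketViewer \
  --project=my-demo-project \
  --title="Custom Bucket Viewer" \
  --permissions=storage.buckets.get,storage.buckets.list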


In our next slide, we will expand on service models! Happy Learning 😊!!

Referencing:
https://cloud.google.com/iap/docs/concepts-overview
https://domains.google/?pli=1
Polarsparc
Questia: https://www.blogger.com/blog/post/edit/5428112557550405099/7327624698061978921?hl=en

Tuesday, April 11, 2023

DOCKER RUN COMMAND

 



The docker run command is used to run a command in a new container. It creates a new space where you can put things and play with them without affecting the rest of your room (the host). Each time an operator executes docker run, the container process that runs is isolated: it has its own file system and its own networking, separate from the host.

With docker run you must specify an IMAGE from which to derive the container. When running a container from an image, we can control features such as:
  • network settings
  • container identification
  • runtime on the CPU and memory
  • detached running
There are a few docker run commands that we would like to learn. In this case we will run Redis and Jenkins containers using docker run.

STEP 1:

Launch an EC2 t2.micro instance (allow SSH and port 80), then SSH in:

sudo yum update
sudo su -
yum install docker -y
systemctl start docker
systemctl enable docker
systemctl status docker

STEP 2: 
Run "docker run redis"; in this case, the latest Redis version is 7.0.10:

docker run redis
Unable to find image 'redis:latest' locally
latest: Pulling from library/redis
f1f26f570256: Pull complete
8a1809b0503d: Pull complete
d792b14d05f9: Pull complete
ad29eaf93bf6: Pull complete
7cda84ccdb33: Pull complete
95f837a5984d: Pull complete
Digest: sha256:7b83a0167532d4320a87246a815a134e19e31504d85e8e55f0bb5bb9edf70448
Status: Downloaded newer image for redis:latest
1:C 06 Apr 2023 19:48:06.966 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 06 Apr 2023 19:48:06.966 # Redis version=7.0.10, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 06 Apr 2023 19:48:06.966 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 06 Apr 2023 19:48:06.967 * monotonic clock: POSIX clock_gettime
1:M 06 Apr 2023 19:48:06.968 * Running mode=standalone, port=6379.
1:M 06 Apr 2023 19:48:06.969 # Server initialized
1:M 06 Apr 2023 19:48:06.969 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 06 Apr 2023 19:48:06.969 # WARNING Your system is configured to use the 'xen' clocksource which might lead to degraded performance. Check the result of the [slow-clocksource] system check: run 'redis-server --check-system' to check if the system's clocksource isn't degrading performance.
1:M 06 Apr 2023 19:48:06.970 * Ready to accept connections



What if we need to run an older version of Redis? You specify it with a colon (:), known as a tag:


## The (:4.0) is the tag.
docker run redis:4.0
Also, if you do not specify a tag, Docker will automatically assume the default tag "latest". Latest is the tag associated with the software's most recent version. As a user, to find information about available versions, visit Docker Hub.

STEP 3:
For example: let's deploy the Jenkins image from Docker Hub. Jenkins is a build application; it is a continuous integration and delivery server. Instead of running so many dependencies on your host machine, all you do is run Jenkins as a container. Keep in mind that Jenkins is a web server.

[root@ip-172-**-00-00 ec2-user]# docker run jenkins/jenkins
Using default tag: latest
Error response from daemon: manifest for jenkins:latest not found: manifest unknown: manifest unknown
[root@ip-172-*1-00-00 ec2-user]# docker pull jenkins:2.60.3
2.60.3: Pulling from library/jenkins
55cbf04beb70: Pull complete
1607093a898c: Pull complete
9a8ea045c926: Pull complete
d4eee24d4dac: Pull complete
c58988e753d7: Pull complete
794a04897db9: Pull complete
70fcfa476f73: Pull complete
0539c80a02be: Pull complete
54fefc6dcf80: Pull complete
911bc90e47a8: Pull complete
*************************************************************
*************************************************************
*************************************************************

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

# This output generated a password as an admin user to unlock JENKINS
1ea25d6b860e4bc186fc2ece7a7aad02

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************
*************************************************************
*************************************************************

2023-04-08 17:36:44.465+0000 [id=29] INFO jenkins.InitReactorRunner$1#onAttained: Completed initialization
2023-04-08 17:36:44.504+0000 [id=22] INFO hudson.lifecycle.Lifecycle#onReady: Jenkins is fully up and running
2023-04-08 17:36:44.597+0000 [id=42] INFO h.m.DownloadService$Downloadable#load: Obtained the updated data file for hudson.tasks.Maven.MavenInstaller
2023-04-08 17:36:44.599+0000 [id=42] INFO hudson.util.Retrier#start: Performed the action check updates server successfully at the attempt #1

Open another shell to access the Docker host. You can see Jenkins is running on port 8080. We are currently on the internal IP of the Docker host.

Package docker-20.10.17-1.amzn2023.0.6.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@ip-172-31-81-127 ec2-user]# systemctl start docker
[root@ip-172-31-81-127 ec2-user]# systemctl enable docker
[root@ip-172-31-81-127 ec2-user]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: d>
Active: active (running) since Sat 2023-04-08 17:34:11 UTC; 20min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 26694 (dockerd)
Tasks: 10 (limit: 1112)
Memory: 76.7M
CPU: 14.200s
CGroup: /system.slice/docker.service
└─26694 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/con>

Apr 08 17:34:10 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:34:10 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:34:11 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:34:11 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:34:11 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:34:11 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:34:11 ip-172-31-81-127.ec2.internal systemd[1]: Started docker.servic>
Apr 08 17:34:11 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:35:00 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
Apr 08 17:40:43 ip-172-31-81-127.ec2.internal dockerd[26694]: time="2023-04-08T>
lines 1-22
^C
[root@ip-172-31-81-127 ec2-user]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45199e9a57b6 jenkins/jenkins "/usr/bin/tini -- /u…" 11 minutes ago Up 11 minutes 8080/tcp, 50000/tcp practical_feistel


DOCKER INSPECT
To find the IP or other details of a specific container, run "docker inspect <container id>". It returns the details of the container in JSON format, including:
  • Configuration 
  • network settings ( Within Bridge network)
  • mounts
  • state
  • container id. etc.
## To find the container's internal IP, run docker inspect with the container ID (a unique prefix like 45199 works)
docker inspect 45199

"NetworkSettings": {
"Bridge": "",
"SandboxID": "00000000000002270000000000",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"50000/tcp": null,
"8080/tcp": null
},
"SandboxKey": "/var/run/docker/netns/47929a2f8743",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "000000000002cae00000000000",
"Gateway": "00.100.0.00",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "00:00:hf:00:00:00",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "**********000000000000",
"EndpointID": "0000000000000000000527f8a000000000000",
"Gateway": "00.100.0.00",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "00000000",
"DriverOpts": null
}
}
}
}
]

The IP "172.17.0.2:8080" to access the browser




              PORT MAPPING ON CONTAINERS:
To apply a port mapping you first have to stop and re-run the Docker container, because ports are assigned as part of docker run. Consider the case where you run a web application in a Docker container on your Docker host, and the server is running. The question is: how does a user access the application?

For example, let's say our application is running on port 3200, which means you can access the application on port 3200. You can use the IP of the Docker container, which defaults to 172.17.0.2 (an internal IP) and is only accessible within the Docker host: from the Docker host you would browse to http://172.17.0.2:3200. Since this is an internal IP address, users in the outside world cannot reach it.

However, we can use the IP of the Docker host, 192.168.1.5, and for this to work we have to map the port inside the Docker container to A FREE PORT ON THE DOCKER HOST.

For example, if you want users (the world) to access your application through port 80 on the Docker host, you can map port 80 of the host to port 3200 in the container using Docker's -p (publish) parameter in the run command below. Now users can access the application by going to the URL http://192.168.1.5:80, which means all traffic on port 80 of the Docker host gets routed to port 3200 inside the container. You also have the option to map multiple containers to different host ports.


HOW DATA IS CAPTURED IN REAL TIME
For the data aspect, we will make use of a MySQL database.


docker run mysql

Each time a database is created, the files are stored in /var/lib/mysql inside the Docker container. By default the Docker container has its own file system, and any changes live only inside the container.
In order to persist data, you map a directory OUTSIDE the container, on the Docker host, to a directory INSIDE the container. In this case we use the -v parameter, create an external directory /uche/datadir, and map it to /var/lib/mysql. When the Docker container runs, it will implicitly mount the external directory onto that folder inside the container.
All your data will be stored in the external volume, so you do not have to worry about losing it. A sketch follows below.
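
A minimal sketch (the directory names come from the example above; the root password variable is required by the official mysql image):

# Sketch: host directory /uche/datadir persists the data in /var/lib/mysql.
docker run -d \
  -v /uche/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  mysql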




Docker run STDIN: The -i parameter attaches standard input to the container; it stands for interactive mode. Another flag you can attach is -t, which allocates a pseudo-terminal. Together, -it lets you interact with the container's terminal, as sketched below.
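
For instance (the image is chosen arbitrarily):

# Sketch: -i keeps stdin open, -t allocates a pseudo-terminal.
docker run -it ubuntu bash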
      
 DOCKER LOGS
You can view the logs of your container by running "docker logs <container id>", as sketched below.
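
For example (the container ID is taken from the docker ps output above):

# Sketch: -f follows the log stream, like tail -f.
docker logs -f 45199e9a57b6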



In the next slide we will talk about networking. Happy Learning!!! 😊
Kindly like and comment.









