About Forgeware
20 minutes
Software Engineering is a complicated subject. Breaking through the myriad of technologies, frameworks, good practices, and programming languages is hard. Forgeware exists to demystify Software Engineering in a hands-on way: we demonstrate concepts in real-life scenarios and show the code behind those applications so you get a sense of what you are dealing with. Please note that the scripts we provide are not meant to be used in any production environment.
Automating the creation of users, directories, permissions, and groups with Bash
This script will automate the creation of new developer accounts on the system, assign them directories and permissions based on their role, and add them to relevant development groups. This eliminates manual setup and ensures consistency.
BEGINNER
BASH
LINUX
# Creating directories
sudo mkdir /public # sudo executes the command with superuser privileges
sudo mkdir /frontend # mkdir is the command for creating a directory
sudo mkdir /backend
sudo mkdir /ops
# Creating groups
sudo groupadd GRP_FRONTEND # groupadd creates a new group
sudo groupadd GRP_BACKEND
sudo groupadd GRP_OPS
Implementing a version control system with Git/GitHub
We will use Git for version control and GitHub as a central repository. Developers will have access to the latest codebase and can track changes efficiently.
BEGINNER
GIT
GITHUB
git clone <URL>
git add <filename> # to stage specific files for commit.
git commit -m "Meaningful commit message" # to create a snapshot of their changes with a descriptive message
git branch -M main
git remote add origin <repo URL> # This command configures a remote repository.
git push origin <branch_name>
Deploying infrastructure with Terraform
Terraform will automate infrastructure provisioning, such as servers, databases, and storage, on a chosen cloud platform. This ensures consistent and repeatable infrastructure deployment.
INTERMEDIATE
TERRAFORM
CLOUD COMPUTING
# Create a Resource Group for desktop
resource "azurerm_resource_group" "rg-desktop-bknd-001" { # Define components of your infrastructure
location = "westeurope"
name = "rg-desktop-bknd-001"
tags = var.desktop_tags
}
# Create a Resource Group for mobile
resource "azurerm_resource_group" "rg-mobile-bknd-001" {
location = "westeurope"
name = "rg-mobile-bknd-001"
tags = var.mobile_tags
}
Creation of CI/CD pipeline with Azure DevOps
Azure DevOps will be used to create a continuous integration and continuous delivery (CI/CD) pipeline. Upon code commit, the pipeline will automatically build, test, and deploy the application to a staging environment.
INTERMEDIATE
AZURE DEVOPS
DEVOPS
- task: TerraformTaskV4@4 # Executes Terraform commands.
  inputs: # Defines the input parameters for the task
    provider: 'azurerm' # Tells Terraform to use the Azure Resource Manager provider.
    command: 'init' # Specifies the Terraform command to be executed.
    backendServiceArm: 'bknd-prod-001(f97a229a-2aa9-47e7-ae31-76ed06c11e1d)'
    backendAzureRmResourceGroupName: 'rg-terraformdevops-001'
    backendAzureRmStorageAccountName: 'stterraformdevops01'
    backendAzureRmContainerName: 'terraform'
    backendAzureRmKey: '<STORAGE_ACCOUNT_KEY>' # Use a secret pipeline variable; never commit a real storage key
- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'validate'
- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'apply'
    environmentServiceNameAzureRM: 'bknd-prod-001(1)(f97a229a-2aa9-47e7-ae31-76ed06c11e1d)'
Containerizing an application with Docker
The application will be containerized using Docker, creating a self-contained package with all dependencies. This promotes portability and simplifies deployments.
INTERMEDIATE
DOCKER
LINUX
version: '3.9'
services:
  apache:
    image: httpd:latest
    container_name: my-apache-app
    ports:
      - '8081:80'
    volumes:
      - ./website:/usr/local/apache2/htdocs
Deploying a web application with Kubernetes
Kubernetes will be used to orchestrate the deployment of Docker containers across multiple servers for scalability and high availability.
ADVANCED
KUBERNETES
DOCKER
echo "Creating images..."
docker build -t alemorales9011935/projeto-backend:1.0 backend/. # Builds a Docker image: a self-contained, executable package for running the application
docker build -t alemorales9011935/projeto-database:1.0 database/.
echo "Pushing images..."
docker push alemorales9011935/projeto-backend:1.0 # Uploading a completed Docker image to a Docker registry.
docker push alemorales9011935/projeto-database:1.0
echo "Creating Services..."
kubectl apply -f ./services.yml --validate=false # Applies the desired state of the infrastructure we define
echo "Creating Deployment..."
kubectl apply -f ./deployment.yml --validate=false # --validate=false controls the validation behaviour
Automating Web App deployment with Bash
A Bash script will trigger the Azure DevOps pipeline upon code commit, automating the entire deployment process.
ADVANCED
BASH
LINUX
AZURE DEVOPS
# ---------------------------------------------------------------------------------
echo "Updating and installing apache2 and unzip..."
apt-get update
apt-get upgrade -y
apt-get install apache2 -y
apt-get install unzip -y
# ---------------------------------------------------------------------------------
echo "Getting the website from a remote repo..."
cd /tmp
wget https://github.com/denilsonbonatti/linux-site-dio/archive/refs/heads/main.zip
# ---------------------------------------------------------------------------------
echo "Unzipping the file and copying it into the apache directory..."
unzip main.zip
cd linux-site-dio-main
cp -R * /var/www/html/
# ---------------------------------------------------------------------------------
Setting up a development environment with Vagrant
To provide developers with a consistent local development environment, Vagrant will be used to create virtual machines pre-configured with all the necessary tools and dependencies.
ADVANCED
VAGRANT
IAC
# Defines three virtual machines, each with 1024 MB of memory, one CPU, a unique IP suffix, and a base box
machines = {
"master" => {"memory" => "1024", "cpu" => "1", "ip" => "100", "image" => "bento/ubuntu-22.04"},
"node01" => {"memory" => "1024", "cpu" => "1", "ip" => "101", "image" => "bento/ubuntu-22.04"},
"node02" => {"memory" => "1024", "cpu" => "1", "ip" => "102", "image" => "bento/ubuntu-22.04"}
}
# Sets the Vagrant configuration version to "2"
Vagrant.configure("2") do |config|
DevOps
20 minutes
Introduction
The Software Engineering market is like any other market: everyone wants to be first. Meaning? We must match our users' needs before our competitors do. For that to happen, we need to ship quality code faster. And for that, we need to automate.
Anything that can be automated should be. Normally that goes for three main operations: testing, building, and deployment. The diagram below illustrates a core software development pipeline and the corresponding phases of the SDLC (Software Development Life Cycle).
Next
User management automation with Linux. How to assign directories and permissions based on roles, and add them to relevant development groups.
User Management Automation
40 minutes
As we said before, the main goal of DevOps is to ship code faster and more efficiently. However, for that to happen, software companies first need to create groups, directories, and users for new employees. A company might do this with Linux in a scenario where they are using a self-hosted deployment model.
In this self-hosted scenario, the company would need to create user accounts on the Linux server for each new employee. These accounts grant them access to the specific resources and applications they need to do their jobs.
Table of Contents
- Payroll Table
- Script
- Permissions Strings
- Example
- Conclusion
Payroll Table
Below is the payroll table that will be used to create the user management script.
Employee | Directory | Group | Permission |
---|---|---|---|
Jane | /frontend | GRP_FRONTEND | rwx (read, write, execute) |
Devin | /frontend | GRP_FRONTEND | rwx |
Bryan | /frontend | GRP_FRONTEND | rwx |
Sarah | /backend | GRP_BACKEND | rwx |
Elijah | /backend | GRP_BACKEND | rwx |
Maya | /backend | GRP_BACKEND | rwx |
Noah | /ops | GRP_OPS | rwx |
Amelia | /ops | GRP_OPS | rwx |
Bryan | /ops | GRP_OPS | rwx |
Script
The script below creates the infrastructure described above:
#!/bin/bash
# Creating directories
sudo mkdir /public # sudo executes the command with superuser privileges
sudo mkdir /frontend # mkdir is the command for creating a directory
sudo mkdir /backend
sudo mkdir /ops
# Creating groups
sudo groupadd GRP_FRONTEND # groupadd creates a new group
sudo groupadd GRP_BACKEND
sudo groupadd GRP_OPS
# Creating users for the FRONTEND group
sudo useradd Jane -m -s /bin/bash -G GRP_FRONTEND # -m: This option tells useradd to create a home directory for the new user. The home directory will be created with the same name as the username.
sudo useradd Devin -m -s /bin/bash -G GRP_FRONTEND # -s /bin/bash: specifies the default shell for the new user
sudo useradd Bryan -m -s /bin/bash -G GRP_FRONTEND # -G: This option adds the new user to a group
# Creating users for the BACKEND group
sudo useradd Sarah -m -s /bin/bash -G GRP_BACKEND
sudo useradd Elijah -m -s /bin/bash -G GRP_BACKEND
sudo useradd Maya -m -s /bin/bash -G GRP_BACKEND
# Creating users for the OPS group
sudo useradd Noah -m -s /bin/bash -G GRP_OPS
sudo useradd Amelia -m -s /bin/bash -G GRP_OPS
sudo useradd Bryan -m -s /bin/bash -G GRP_OPS
# Specifying permissions for directories
sudo chown root:GRP_FRONTEND /frontend # chown (change owner) sets root as the owner and GRP_FRONTEND as the group of the directory
sudo chown root:GRP_BACKEND /backend
sudo chown root:GRP_OPS /ops
# Managing permissions
sudo chmod 770 /frontend # chmod modifies the permissions assigned to files and directories in the system.
sudo chmod 770 /backend # 770 is the permission string*
sudo chmod 770 /ops
sudo chmod 777 /public
Permissions Strings
The permissions string defines a specific access level for the owner, the group, and others on a file or directory.
- 4 represents read only (r = 4)
- 5 represents read and execute (r = 4, x = 1)
- 7 represents all permissions (read, write, and execute: 4 + 2 + 1)
Example
Consider a permission string -rwxr--r--.
The first character - indicates it's a regular file.
- Owner (first set of three): rwx - read, write, and execute permissions for the owner (the user who owns the file).
- Group (second set of three): r-- - read permission only for the group.
- Others (third set of three): r-- - read permission only for others (users who are not the owner and not in the group).
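You can verify this mapping yourself on any Linux shell; a quick sketch (the file name is illustrative, and `stat -c` is GNU coreutils, so on macOS you would use `stat -f` instead):

```shell
touch demo.txt
chmod 744 demo.txt      # 7 = rwx for the owner, 4 = r-- for group and others
stat -c '%A' demo.txt   # prints -rwxr--r--
rm demo.txt
```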
Conclusion
It's important to note that this scenario is less common for modern SaaS companies. Most SaaS companies would leverage a cloud-based deployment model where user management and access control are handled by the provider, eliminating the need for direct Linux administration.
There might be some hybrid deployments where core functionalities are self-hosted on Linux servers while other aspects leverage cloud services. In such cases, user and group management on the Linux side might still be necessary.
Next
Version control with Git/GitHub. How to easily keep track of all the code in your system and collaborate with other developers in a professional software development environment.
Implementing a version control system with Git/GitHub
25 minutes
Version control creates a history of all changes made to your files. You can easily revert to previous versions if something goes wrong or you decide you prefer an older iteration. It's like having a safety net for your work.
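As a quick illustration of that safety net, here is a throwaway-repo sketch (the repo name, identity, and file contents are all made up for the demo):

```shell
mkdir demo-repo && cd demo-repo
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"  # local identity for the demo
echo "v1" > app.txt && git add app.txt && git commit -qm "first version"
echo "v2" > app.txt && git commit -qam "second version"
git revert --no-edit HEAD   # undoes the last commit by creating a new commit
cat app.txt                 # app.txt is back to "v1"
```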
Table of Contents
- Installing Git (One-time setup)
- Clone the main repository
- Make changes and add files
- Renaming your branch
- Configuring a remote repository
- Push changes
- Conclusion
Installing Git (One-time setup)
- Windows/macOS: Download and install Git from the official website: https://git-scm.com/downloads.
- Linux: Git is usually pre-installed on most Linux distributions. You can check by opening the terminal and typing `git --version`. If not installed, use your package manager (e.g., `sudo apt install git` on Ubuntu).
Clone the main repository
git clone <URL of Jane's repository>
This downloads a copy of the entire repository, including all branches and history, to the Devs' local machines.
Make changes and add files
The Devs can use standard Git commands like:
git add <filename> # to stage specific files for commit.
git commit -m "Meaningful commit message" # to create a snapshot of their changes with a descriptive message
Renaming your branch
git branch -M main
By default, Git creates a branch named `master` when you initialize a repository. This command renames the current branch (which points to the first commit) from `master` to `main`.
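You can check the rename in a sandbox repo (identity, file, and messages below are placeholders):

```shell
mkdir rename-demo && cd rename-demo
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
echo "hello" > readme.md && git add . && git commit -qm "initial commit"
git branch -M main          # renames the current branch to main
git branch --show-current   # prints main
```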
Configuring a remote repository
This step is necessary to push code to the main repository of the company.
git remote add origin <repo URL> # This command configures a remote repository.
Push changes
git push origin <branch_name>
This command pushes the employee's local commits directly to the remote repository. Here:
- `origin` is the default shortcut for the remote repository.
- `<branch_name>` specifies the branch you want to push your changes to (usually `main` for the main branch).
[!Note] Pull with rebase (`git pull --rebase`): Sometimes, pushing changes produces an error. This usually means the remote branch is ahead of yours, so you need to pull its changes before pushing your own.
`git pull --rebase` fetches the latest changes from the remote repository and then replays your local commits on top of the updated remote branch head.
git pull origin <branch_name> --rebase
Checking Remote Repositories
This will display a list of all remote repositories associated with your project, along with their URL and fetch/push specifications.
git remote -v
Erasing a Remote Repository
This will completely remove the specified remote repository from your local project configuration.
git remote rm <remote_name>
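The remote commands above can be tried safely in an empty local repo (the URL is a placeholder; nothing is contacted until you fetch or push):

```shell
mkdir remote-demo && cd remote-demo
git init -q
git remote add origin https://example.com/team/repo.git  # placeholder URL
git remote -v       # lists origin with its fetch and push URLs
git remote rm origin
git remote -v       # prints nothing: the remote is gone
```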
Conclusion
This Git tutorial has equipped you with the essential tools to manage your code effectively. By mastering these core Git concepts, you've unlocked a powerful way to manage projects, track progress, and collaborate efficiently. This foundation will serve you well as you explore more advanced Git features and delve deeper into the world of version control.
Next
We will see how to deploy Infrastructure with Terraform.
Deploying infrastructure with Terraform and Azure
60 minutes
Terraform's ability to manage Infrastructure as Code (IaC) makes it ideal for deploying infrastructure across different environments (development, staging, production) and cloud vendors.
Table of contents
- Benefits of using the BFF pattern
- Prerequisites
- What's Terraform
- Installing Terraform
- Installing Azure CLI
- Authenticating with Azure CLI
- Create a Service Principal
- Set your environment variables
- Initialize Terraform
- Write configuration
- Conclusion
Benefits of using the BFF pattern
A software company might use the Backend for Frontends (BFF) pattern to tailor data and functionality to each specific UI (desktop vs. mobile).
Prerequisites
- An Azure subscription
- Terraform installed
What's Terraform?
Terraform is an Infrastructure as Code (IaC) tool. IaC tools allow you to manage infrastructure with configuration files that you can version, reuse, and share, rather than through a graphical user interface.
Installing Terraform
To install Terraform, find the appropriate package for your system and download it as a zip archive. After downloading Terraform, unzip the package.
Installing Azure CLI
The Azure CLI will allow you to authenticate with Azure.
Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; rm .\AzureCLI.msi
Authenticating with Azure CLI
Terraform must authenticate to Azure to create infrastructure.
az login
Create a Service Principal
Next, we need to create a Service Principal: an application within Azure Active Directory with the authentication tokens Terraform needs to perform actions on your behalf.
Update `<SUBSCRIPTION_ID>` with the subscription ID you specified in the previous step.
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<SUBSCRIPTION_ID>"
Set your environment variables
A good practice HashiCorp recommends is setting these values as environment variables rather than saving them in your Terraform configuration, to avoid passing sensitive information in the configuration code.
$Env:ARM_CLIENT_ID = "<APPID_VALUE>"
$Env:ARM_CLIENT_SECRET = "<PASSWORD_VALUE>"
$Env:ARM_SUBSCRIPTION_ID = "<SUBSCRIPTION_ID>"
$Env:ARM_TENANT_ID = "<TENANT_VALUE>"
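The commands above are PowerShell. On Linux or macOS (bash/zsh), the equivalent, assuming the same placeholder values, would be:

```shell
export ARM_CLIENT_ID="<APPID_VALUE>"
export ARM_CLIENT_SECRET="<PASSWORD_VALUE>"
export ARM_SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
export ARM_TENANT_ID="<TENANT_VALUE>"
env | grep ARM_   # confirm the variables are set for this shell session
```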
Initialize Terraform
Initialize the project. This downloads a plugin called a provider, which lets Terraform interact with your cloud platform.
terraform init
Write configuration (main.tf)
# Terraform Settings Block contains Terraform settings, including the required providers Terraform will use
# to provision your infrastructure.
terraform {
required_version = ">= 1.0.0"
required_providers {
azurerm = { # A plugin that Terraform uses to create and manage your resources.
source = "hashicorp/azurerm"
version = ">= 2.0" # Optional but recommended in production
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
}
# Create a Resource Group for desktop
resource "azurerm_resource_group" "rg-desktop-bknd-001" { # Define components of your infrastructure
location = "westeurope"
name = "rg-desktop-bknd-001"
tags = var.desktop_tags
}
# Create a Resource Group for mobile
resource "azurerm_resource_group" "rg-mobile-bknd-001" {
location = "westeurope"
name = "rg-mobile-bknd-001"
tags = var.mobile_tags
}
# Tags for desktop department
variable "desktop_tags" {
type = map(string)
default = {
Department = "Des FEP Team"
Environment = "Testing"
Owner = "John Doe"
Purpose = "Frontend"
}
}
# Tags for mobile department
variable "mobile_tags" {
type = map(string)
default = {
Department = "Mob FEP Team"
Environment = "Testing"
Owner = "Jane Doe"
Purpose = "Frontend"
}
}
# Create a Storage Account for desktop
resource "azurerm_storage_account" "stdesktopbknd001" {
name = "stdesktopbknd001"
resource_group_name = azurerm_resource_group.rg-desktop-bknd-001.name
location = "westeurope"
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "LRS"
}
# Create a Storage Account for mobile
resource "azurerm_storage_account" "stmobilebknd001" {
name = "stmobilebknd001"
resource_group_name = azurerm_resource_group.rg-mobile-bknd-001.name
location = "westeurope"
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "LRS"
}
# Create a Storage Container for desktop
resource "azurerm_storage_container" "stdesktopui001" {
name = "stdesktopui001"
storage_account_name = azurerm_storage_account.stdesktopbknd001.name
}
# Create a Storage Container for mobile
resource "azurerm_storage_container" "stmobileui001" {
name = "stmobileui001"
storage_account_name = azurerm_storage_account.stmobilebknd001.name
}
Variables (var.tf)
Variable files separate configuration values from the main Terraform code. This allows you to reuse the same Terraform codebase for multiple deployments by simply changing the variable values.
variable "location" {
type = string
default = "westeurope"
}
variable "tags" {
type = map(string)
default = {
"Environment" = "Development"
"Integration" = "DevOps process"
"Company" = "DevOps class"
"Area" = "Marketing"
}
}
Conclusion
By leveraging Terraform's Infrastructure as code (IaC) approach, you can achieve consistent, repeatable, and version-controlled infrastructure deployments.
Next
CI/CD pipelines. Which is arguably the most important skill for DevOps engineers to master.
Creating a CI/CD pipeline with Azure DevOps
CI/CD is a methodology that automates the software development process, from building and testing to deployment. By automating repetitive tasks, CI/CD significantly reduces the time it takes to get new features or bug fixes into production.
trigger:
- main # The pipeline will be triggered whenever there's a push to the main branch.
pool:
  vmImage: ubuntu-latest # Defines where the pipeline tasks will be executed.
steps:
- task: TerraformInstaller@1 # Installs a specific version of Terraform
  inputs:
    terraformVersion: 'v1.7.4'
- task: TerraformTaskV4@4 # Executes Terraform commands.
  inputs: # Defines the input parameters for the task
    provider: 'azurerm' # Tells Terraform to use the Azure Resource Manager provider.
    command: 'init' # Specifies the Terraform command to be executed.
    backendServiceArm: 'bknd-prod-001(f97a229a-2aa9-47e7-ae31-76ed06c11e1d)'
    backendAzureRmResourceGroupName: 'rg-terraformdevops-001'
    backendAzureRmStorageAccountName: 'stterraformdevops01'
    backendAzureRmContainerName: 'terraform'
    backendAzureRmKey: '<STORAGE_ACCOUNT_KEY>' # Use a secret pipeline variable; never commit a real storage key
- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'validate'
- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'apply'
    environmentServiceNameAzureRM: 'bknd-prod-001(1)(f97a229a-2aa9-47e7-ae31-76ed06c11e1d)'
Conclusion
By leveraging Azure Pipelines' features, you can achieve continuous integration and continuous delivery (CI/CD) for your projects, ensuring consistent and reliable deployments. We encourage you to explore the code, customize it for your specific needs, and contribute improvements. The provided examples showcase common CI/CD workflows, but the possibilities are vast.
Next
Containerization with Docker.
Containerizing With Docker Engine
2 hours
Docker containers are self-contained units that include everything an application needs to run, from its code to its libraries and dependencies.
Simply put, Docker solves the "it worked on my machine" problem.
Docker Concepts
Name | Description |
---|---|
Container | A box filled with dependencies |
Client | Where we run docker commands |
Host | Where containers live |
Daemon | What manages Docker operations |
Registry | Where images live |
Images | A container blueprint |
Dockerfile | What creates images |
Objects | Instances of things like images |
Storage | A place to keep files |
Docker Engine vs Docker Desktop
Docker Engine is a lower-level tool, geared towards system administrators. On the other hand, Docker Desktop provides developers with a user-friendly interface and additional tools.
Installing Docker Engine
The easiest way to install Docker is with a convenience script they provide.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Pulling a Docker Image
Pulling an image from a public repository like Docker Hub allows you to quickly deploy a pre-built application without manually installing all the dependencies. Let's pull the hello-world image.
sudo docker pull hello-world
Checking the images
sudo docker images
Running the image
sudo docker run hello-world
Running an Interactive Image in the Background
In most cases, you'll use the `-d` or `-it` flags along with the `run` command.
- Use `-d` for background processes or services.
- Use `-it` to start a container in the foreground and interact with it like a regular terminal session.
Docker --help
Use `sudo docker --help` to see all commands and `sudo docker <command> --help` to see all available flags (options) for a command.
Useful Docker Commands
Name | Description |
---|---|
docker rm id | Removes a container |
docker rmi name | Removes an image |
docker ps | Lists containers |
docker stop id | Stops a running container |
docker run -dti --name container image | Names a container |
docker exec -ti Ubuntu /bin/bash | Executes bash inside the Ubuntu container |
docker exec Ubuntu mkdir /destiny | Creates the /destiny directory inside the container |
docker cp file container:/destiny | Copies a file from the local machine to the container |
docker cp container:/originfolder/file new-filename | Copies a file from the container to the local machine |
docker pull debian:9 | Pulls a specific Debian image version |
Installing a MySQL container
Running a MySQL database in a container prevents conflicts with other software or system-wide configurations.
docker pull mysql
Running a MySQL container
docker run -e MYSQL_ROOT_PASSWORD=password --name container-name -d -p 3306:3306 image-name
- `-e` sets environment variables.
- `-d` runs the database in the background.
- `-p` opens a port to communicate with the database.
Accessing MySQL to create a Database
Executing bash inside the MySQL container to manage the database.
docker exec -it container-name bash
MySQL client
Another way to interact with the database is through the MySQL client.
apt -y install mysql-client
Logging in to MySQL to create a new database
After accessing bash, we must log in to MySQL to manage our databases via SQL. After entering the command below, we will be prompted for the root user's password.
mysql -u root -p --protocol=tcp
Troubleshooting connection issues
Docker assigns an IP to every container so that it can connect with other distributed systems independently. To troubleshoot connection issues, we use the `inspect` command.
docker inspect container-name
Data storage in containers
Containers are ephemeral, meaning their data is lost when the container stops. Mounts allow you to persist data beyond the container's lifecycle. A mount is a way to share data or files between the host system and the container. It essentially creates a link between a directory or file on the host machine and a directory within the container.
docker run -e MYSQL_ROOT_PASSWORD=your-password --name container-name -d -p 3306:3306 --volume=/host-path:/container-path image-name
Mount Types
- Bind: Binds a directory from the container to a directory in the host (example above)
docker run -v /hostdir:/containerdir image-name
- Named: Manually created volumes inside a standard directory.
docker volume create volume-name # Creates volumes
docker run -v volume-name:/container-directory image-name # References the created volume
Pulling an Apache Container
Apache, also called Apache HTTP Server, is free and open-source software that powers many websites. When you type a web address into your browser, Apache receives the request and fetches the relevant files (like HTML, CSS, and images) that make up the webpage. It then sends this information back to your browser, which interprets it and displays the webpage.
docker pull httpd # Pulls the official Apache image from Docker Hub
Interesting Fact
Apache is a core component of the LAMP stack, a popular combination of open-source software for building websites. LAMP stands for Linux (operating system), Apache (web server), MySQL (database), and PHP (programming language).
Binding a local volume to an Apache Web Server Container
By binding a local folder on your machine (where you're developing your website) to the Apache container's document root (the folder where it looks for website files), you can make changes to your code and see them reflected immediately in your web browser, without rebuilding the container image or copying files back and forth. This allows for a much faster development workflow.
docker run --name container-name -d -p 80:80 --volume=host-volume-path:/usr/local/apache2/htdocs httpd
- host-volume-path: Where the website files are
- -d: Runs the container in the background
- /usr/local/apache2/htdocs: The standard directory for file storage in the Apache container.
- httpd: The image name
CPU and Memory Optimization
Resource optimization is important for containerized apps, since a container consuming unbounded resources can hinder the performance of the whole system.
docker update container-name -m 128M --cpus 0.2
docker stats container-name # Provide container stats
- -m 128M: Limits the container's memory to 128 MB
- --cpus 0.2: Limits CPU usage to 20% of a core
Stress Testing
Stress testing helps us understand how a containerized application behaves under heavy load. This can reveal bottlenecks in CPU, memory, or disk usage. The `stress` command is a stress-test generator that simulates heavy workloads to see how your system responds.
apt install -y stress
stress --cpu 1 --vm 1 --vm-bytes 50m
- --cpu 1: Number of CPU workers to spawn
- --vm 1: Number of memory workers to spawn
- --vm-bytes 50m: Memory allocated per memory worker
Docker Info, Logs, Top
Here are some useful Docker commands for managing containers.
docker info # Shows server info
docker logs container-name # Shows container logs
docker top container-name # Shows container processes
Networking
In containerized environments like Docker, multiple containers are often used to build applications. Each container typically runs a single service, but these services often need to communicate with each other to function properly. This is where container networking comes in.
docker network ls # Lists networks
docker network inspect network-name # Lists containers within the network
docker network create new-network # Create a new network
Container isolation: Network isolation allows you to define which containers can communicate, and how. For example, a container running a database might not need access to the internet. Also, if a container gets compromised by malware, network isolation can limit the attacker's ability to spread to other containers on the system.
docker run -dti --name container-name --network network-name image-name # Creates a container inside a defined network
Dockerfile
A Dockerfile is a text document that contains instructions for building a Docker image. It essentially acts like a recipe that Docker follows to create a customized environment for your application to run in.
Common instructions include:
- FROM: Specifies the base image to start from (like Ubuntu or a pre-built image).
- COPY: Copies files from your local machine into the image.
- RUN: Executes commands within the image to install dependencies, configure the environment, etc.
- EXPOSE: Documents the ports that the application running in the container will listen on.
- CMD or ENTRYPOINT: Specifies the command to run when the container starts (like launching your application).
- Creating a Dockerfile
nano dockerfile # Creates a dockerfile
- Editing a Dockerfile example that creates an Ubuntu and Python image.
FROM ubuntu
RUN apt update && apt install -y python3 && apt clean
COPY app.py /apt/app.py
CMD python3 /apt/app.py
- Building a Dockerfile
The basic syntax for docker build is:
docker build [OPTIONS] <context>
# Example
docker build . -t ubuntu-python
`<context>` is typically the path to the directory containing your Dockerfile. By default, the current directory is used if not specified. There are several options available to customize the build process, such as:
- -t: Tags the resulting image with a name and version.
- -f: Specifies an alternative Dockerfile location.
Docker Compose
A tool used to define and run multi-container applications. Employs a YAML file to configure "services" (essentially, containers) and their relationships. Docker Compose revolves around three main components that work together to define and run your multi-container application:
- Services: The fundamental building blocks of your application, representing individual containers.
- Networks: Docker Compose can create virtual networks for your containers to communicate with each other securely. This eliminates the need for complex manual network configuration.
- Volumes: As mentioned earlier, volumes allow you to persist data outside the container itself. You map a directory on the host machine to a directory within the container. This ensures data isn't lost when the container restarts.
Dockerfile vs Docker Compose
In essence, Dockerfile provides the building blocks (images), and Docker Compose orchestrates them to run a multi-container application efficiently.
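To see how the two fit together, a Compose file can build a service's image straight from a Dockerfile in the same directory. A minimal sketch (the service name, image tag, and ports below are illustrative):

```yaml
version: '3.9'
services:
  app:
    build: .             # Docker Compose builds the image from ./Dockerfile
    image: my-app:1.0    # tag for the built image
    ports:
      - '8080:80'
```

Running `docker-compose up -d` then builds the image (if needed) and starts the container in a single step.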
Installing Docker Compose
Before executing the YAML file, we must install Docker Compose by running the following command in the terminal.
apt install docker-compose
Create the Compose file
After installing docker-compose, we must create the YAML file containing the infrastructure we desire to create. The script below shows an example of an Apache web server container deployment.
version: '3.9'
services:
  apache:
    image: httpd:latest
    container_name: my-apache-app
    ports:
      - '8081:80'
    volumes:
      - ./website:/usr/local/apache2/htdocs
Executing the script
Finally, we must run the file so our infrastructure (in this case, the Apache web server) is created. We can do that by typing the following command into the console.
docker-compose up -d
Conclusion
Docker offers a standardized approach to packaging, deploying, and running applications. This has revolutionized how software is built and delivered, making it a vital tool for developers and operations teams today.
Next
We will see how to perform container monitoring and optimization.
Performance & Optimization
By monitoring container performance metrics like CPU, memory, and network usage, you can identify and troubleshoot issues before they impact users.
Requirements
- Docker: Container management
- Prometheus: Scrapes metrics
- Node Exporter: Export metrics
- Grafana: Build dashboards for monitoring
Folder Structure
A well-defined structure promotes modularity. Each service or component can reside in its own directory, encapsulating its configuration files, Dockerfile, and related scripts.
#!/bin/bash
echo "Creating Folder Structure"
mkdir -p grafana prometheus node-exporter
touch prometheus/prometheus.yml
touch grafana/docker-compose.yml
touch prometheus/docker-compose.yml
echo "Folder Structure Created"
Grafana
Grafana empowers you to turn raw data into actionable insights. By providing a centralized platform for visualization, exploration, and alerting, it helps you make data-driven decisions and optimize your systems.
Here's how to spin up Grafana using docker-compose.
version: "3.8"
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: unless-stopped # Restart Grafana automatically, e.g. after the Docker daemon restarts
    ports:
      - '3000:3000' # Exposes the UI on host port 3000
    volumes: # Without a volume, data is lost when the container is removed
      - grafana-storage:/var/lib/grafana # Mounts the named volume grafana-storage
                                         # at /var/lib/grafana inside the container
volumes:
  grafana-storage: {}
Prometheus
Prometheus provides a robust and efficient solution for monitoring and alerting. Here's how to spin up Prometheus with a docker-compose file.
services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    command: # Arguments passed to the Prometheus binary inside the container
      - '--config.file=/etc/prometheus/prometheus.yml' # Tells Prometheus to use the configuration file
                                                       # located at /etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
    restart: unless-stopped
    volumes:
      - ./prometheus:/etc/prometheus # Mounts the local ./prometheus directory into the container,
                                     # so configuration files can be managed from the host
      - prom_data:/prometheus # Mounts the named volume prom_data, where Prometheus
                              # stores its time-series data
volumes:
  prom_data:
prometheus.yml
prometheus.yml is the configuration file that defines the core behavior of a Prometheus server. It dictates how Prometheus collects, stores, and processes metrics.
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets: []
      scheme: http
      timeout: 10s
      api_version: v1
scrape_configs:
  - job_name: prometheus
    honor_timestamps: true # Instructs Prometheus to preserve the original timestamps of scraped metrics
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          - localhost:9090
  - job_name: "node" # Scrape metrics from node-exporter
    static_configs:
      - targets: ["192.168.0.7:9100"]
Node Exporter
Node Exporter is a crucial component in the Prometheus ecosystem, primarily designed to collect and expose metrics about the underlying host machine. It's essentially a bridge between your hardware and the powerful analysis capabilities of Prometheus.
services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro # Mounts the host's /proc into the container in read-only mode,
                            # so the container can read process and kernel data
      - /sys:/host/sys:ro
      - /:/rootfs:ro # Mounts the entire host file system in read-only mode
    command:
      - '--path.procfs=/host/proc' # Tells node-exporter where to find the proc filesystem
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)' # Excludes these
                                                                                     # mount points from collection
    ports:
      - 9100:9100
Grafana: Data Sources
Once logged in to Grafana by browsing to your-container-ip-address:grafana-port, Prometheus can be added as a data source by navigating to Home > Connections > Data Sources > Add Data Source.
Then the URL where Prometheus is active must be provided, usually your-container-ip-address:prometheus-port.
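Alternatively, Grafana can provision data sources from a YAML file placed in its provisioning directory, so the UI steps can be skipped entirely. A minimal sketch; the URL is a placeholder for your own Prometheus address:

```yaml
# e.g. /etc/grafana/provisioning/datasources/prometheus.yml inside the Grafana container
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.0.7:9090 # placeholder: your-container-ip-address:prometheus-port
    isDefault: true
```

Grafana reads this file at startup, so the data source is available as soon as the container comes up.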
Importing a Dashboard
Importing a dashboard is arguably the fastest way to get started with Grafana. To do that, navigate to Home > Dashboards > New.
Node Exporter Full
By providing a starting point for system monitoring, the Node Exporter Full dashboard accelerates the process of gaining valuable insights from your infrastructure.
Next
Deploying a web application with Kubernetes. How to deploy a multi-tier app with Kubernetes.
Deploying a web application with Kubernetes
60 minutes
This project demonstrates the deployment of a multi-tier application
on Kubernetes, a container orchestration platform. The application consists of a MySQL database and a PHP backend service.
Methodology
Containerized application deployment using Docker and Kubernetes to illustrate containerization best practices. All the required services for the proper functioning of the application were created from scratch.
Table of Contents
- Persistent Volume Claim
- MySQL Database Deployment
- PHP Deployment
- Load Balancer Service
- MySQL Database Service
- Deployment Script
- Conclusion
Persistent Volume Claim
The provided YAML snippet is a well-structured Persistent Volume Claim (PVC) requesting persistent storage in Kubernetes.
This PVC effectively requests 10 gigabytes of persistent storage with ReadWriteOnce access for a single Pod at a time. The standard-rwo StorageClass dictates where and how this storage will be provisioned.
# Persistent Volume Claim to request persistent storage from Kubernetes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-dados
spec:
  accessModes:
    - ReadWriteOnce # Read-write access for one pod at a time
  resources:
    requests:
      storage: 10Gi # Request 10 GiB of storage
  storageClassName: standard-rwo # Pre-configured storage class that provisions ReadWriteOnce volumes
MySQL Database Deployment
This deployment creates pods running a MySQL container
with persistent storage for the database data. The provided code snippet defines a deployment for a MySQL
database container in Kubernetes.
# MySQL deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector: # Select pods with the label "app: mysql"
    matchLabels:
      app: mysql
  template: # A blueprint for creating pods
    metadata:
      labels: # Labels that identify the pods
        app: mysql
    spec:
      containers:
        - image: alemorales9011935/projeto-database:1.0 # Docker image used for the deployment
          args:
            - "--ignore-db-dir=lost+found" # Ignore the lost+found directory left by previous deployments
          imagePullPolicy: Always # Always pull the image, even if it exists locally
          name: mysql
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-dados
              mountPath: /var/lib/mysql/ # Where the database files are stored
      volumes:
        - name: mysql-dados
          persistentVolumeClaim:
            claimName: mysql-dados
PHP Deployment
The YAML below defines a Kubernetes Deployment configuration for a PHP application. It describes a Deployment that creates six pods running the container image alemorales9011935/projeto-backend:1.0. These pods will be labeled with app: php and will expose their application on port 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    app: php
spec:
  replicas: 6 # Number of pods to be created
  selector:
    matchLabels:
      app: php
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
        - name: php
          image: alemorales9011935/projeto-backend:1.0
          imagePullPolicy: Always # Always pulls the image from the registry, even if it exists locally
          ports:
            - containerPort: 80
Load Balancer Service
The configuration below defines a php-Service
. This configuration creates a LoadBalancer
service for your PHP application.
Kubernetes will work with your cloud provider to set up an external load balancer
that will distribute traffic across multiple pods running the PHP application.
The external IP address for accessing the service will be dynamically assigned by the cloud provider and can be retrieved later using kubectl get service php
.
apiVersion: v1
kind: Service # Defines the type of Kubernetes object
metadata:
  name: php
spec:
  selector: # This service finds and routes traffic to pods with the label "app: php"
    app: php
  ports:
    - port: 80 # The port users will access from outside the cluster
      targetPort: 80 # The port the PHP application listens on
  type: LoadBalancer # Type of Service
MySQL Database Service
The configuration below creates a headless Service for the MySQL database (clusterIP: None means no cluster-internal virtual IP is allocated). It is only accessible from within the Kubernetes cluster: pods can reach the MySQL service at mysql-connection on port 3306.
apiVersion: v1
kind: Service
metadata:
  name: mysql-connection
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
Deployment Script
The provided script is a Bash script for deploying the project.
#!/bin/bash
echo "Creating images..."
docker build -t alemorales9011935/projeto-backend:1.0 backend/. # Build a Docker image: a self-contained,
# executable package for running an application
docker build -t alemorales9011935/projeto-database:1.0 database/.
echo "Pushing images..."
docker push alemorales9011935/projeto-backend:1.0 # Upload the finished Docker image to a Docker registry
docker push alemorales9011935/projeto-database:1.0
echo "Creating Services..."
kubectl apply -f ./services.yml --validate=false # Figure out how to achieve the desired state
# of the infrastructure we define; --validate=false skips client-side schema validation
echo "Creating Deployment..."
kubectl apply -f ./deployment.yml --validate=false
Conclusion
This repository serves as a solid foundation for Kubernetes deployment.
Web App Deployment Automation: Bash
Manually deploying a web server involves running several commands in a specific order. A Bash script can automate these steps, saving you time and effort, especially if you need to deploy Apache on multiple servers.
Web Server Deployment
This Bash script provides a streamlined and automated approach to deploying an Apache web server.
#!/bin/bash
# ----------------------------------------------------------------------------------
echo "Updating and installing apache2 and unzip..."
apt-get update
apt-get upgrade -y
apt-get install apache2 -y
apt-get install unzip -y
# ----------------------------------------------------------------------------------
echo "Getting the website from a remote repo..."
cd /tmp
wget https://github.com/denilsonbonatti/linux-site-dio/archive/refs/heads/main.zip
# ----------------------------------------------------------------------------------
echo "Unzipping the file and copying it into the Apache directory..."
unzip main.zip
cd linux-site-dio-main
cp -R * /var/www/html/
# ----------------------------------------------------------------------------------
Conclusion
Scripts ensure consistency, reduce errors, and save time compared to manual deployments. The script can be easily customized to fit your specific needs and integrated into larger automation workflows.
Setting up a development environment with Vagrant
60 minutes
Docker Swarm manages many containers. It auto-deploys across machines and reschedules containers if a machine fails, keeping your application running. This simplifies managing large container deployments.
Table of Contents
- Vagrantfile
- Docker script
- Master node script
- Worker node script
Vagrantfile
First, we must create a Vagrantfile, which is a configuration file used with Vagrant, a tool for managing virtual machines. It acts as a blueprint for setting up your development environment. In this case, we use it to create and set up 3 virtual machines in combination with Bash scripts.
# -*- mode: ruby -*-
# vi: set ft=ruby :
# A hash describing 3 virtual machines: memory, CPU count, a unique IP suffix, and a base image
machines = {
  "master" => {"memory" => "1024", "cpu" => "1", "ip" => "100", "image" => "bento/ubuntu-22.04"},
  "node01" => {"memory" => "1024", "cpu" => "1", "ip" => "101", "image" => "bento/ubuntu-22.04"},
  "node02" => {"memory" => "1024", "cpu" => "1", "ip" => "102", "image" => "bento/ubuntu-22.04"}
}
# Sets the Vagrant configuration version to "2"
Vagrant.configure("2") do |config|
  machines.each do |name, conf|
    # Defines a virtual machine within the Vagrant configuration
    config.vm.define "#{name}" do |machine|
      # Sets the box (pre-built VM image) for the virtual machine
      machine.vm.box = "#{conf["image"]}"
      machine.vm.hostname = "#{name}"
      # Configures the network settings for the virtual machine
      machine.vm.network "private_network", ip: "10.10.10.#{conf["ip"]}"
      machine.vm.provider "virtualbox" do |vb|
        vb.name = "#{name}"
        # Memory and CPU allocation
        vb.memory = conf["memory"]
        vb.cpus = conf["cpu"]
      end
      machine.vm.provision "shell", path: "docker.sh"
      if "#{name}" == "master"
        machine.vm.provision "shell", path: "master.sh"
      else
        machine.vm.provision "shell", path: "worker.sh"
      end
    end
  end
end
Docker Script
This script simplifies Docker and Docker Compose setup for containerized workflows. It automates installation and grants the vagrant user the proper permissions.
#!/bin/bash
curl -fsSL https://get.docker.com | sudo bash
sudo curl -fsSL "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo usermod -aG docker vagrant
Master Node Script
This script is also run from the Vagrantfile. It initializes a Docker Swarm cluster in manager mode and generates a join token for worker nodes. The script stores the resulting join command in a file in the shared /vagrant folder.
#!/bin/bash
sudo docker swarm init --advertise-addr=10.10.10.100
sudo docker swarm join-token worker | grep docker > /vagrant/worker.sh
Worker Script
This command instructs a machine to join an existing Docker Swarm cluster as a worker node, managed by the node at 10.10.10.100:2377, using the provided join token.
docker swarm join --token SWMTKN-1-3pj8k0i4tn77bd93a0yxhgh36hxuef5q5oyg1732rztnfy29ll-a94q0ipwgrjs4xikzyb4yb3n5 10.10.10.100:2377
Conclusion
Docker Swarm tames complex container deployments. It automates deployments, keeps apps running during failures, and uses resources efficiently, making container management a breeze.
Security
DevOps, focusing on speed and efficiency, can inadvertently introduce security risks if not managed carefully.
Infrastructure as Code (IaC)
Infrastructure as Code (IaC) brings significant efficiency, consistency, and reproducibility advantages.
However, it also introduces new security challenges, such as the amplification of errors and an increased attack surface due to the many interconnected pieces in distributed systems.
Security Configuration | Description | Example |
---|---|---|
Security by Design | Enforce security configurations | Least privilege using sudo for specific commands. Firewall using iptables or ufw |
Continuous Monitoring | Monitor IaC for security risks | Use fail2ban to detect and block brute-force attacks. Log analysis |
Immutable Infrastructure | Immutable components to reduce the attack surface | Use Ansible for infrastructure provisioning and updates. |
Secret Management | Implementing secure methods to manage credentials | Use Ansible Vault to encrypt sensitive data in configuration files |
This playbook secures the SSH server on target hosts.
- name: Secure SSH Server
  hosts: servers # This playbook targets hosts in the "servers" group
  become: yes # Tasks require elevated privileges (sudo)
  tasks:
    # Disable password authentication (more secure with key-based auth)
    - name: Disable password authentication
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication yes' # Find lines starting with "PasswordAuthentication yes"
        line: 'PasswordAuthentication no' # Replace with "PasswordAuthentication no"
    # Require SSH key-based authentication for improved security
    - name: Require SSH key-based authentication
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PubkeyAuthentication no' # Find lines starting with "PubkeyAuthentication no"
        line: 'PubkeyAuthentication yes' # Replace with "PubkeyAuthentication yes"
    # Set strong ciphers for secure encryption
    - name: Set strong SSH ciphers
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^Ciphers' # Find lines starting with "Ciphers"
        line: 'Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,chacha20-poly1305@openssh.com' # Recommended ciphers
    # Set strong MACs for message authentication
    - name: Set strong MACs
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^MACs' # Find lines starting with "MACs"
        line: 'MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128@openssh.com' # Recommended MACs
    # Restart SSH service to apply changes
    - name: Restart SSH service
      service:
        name: sshd
        state: restarted
Application Security
Applications often handle sensitive information such as financial data, personal information, and intellectual property. A compromised application can lead to significant data breaches with severe consequences.
Security Configuration | Description | Example |
---|---|---|
Code Analysis | Using code analysis tools to identify vulnerabilities. | Use cppcheck , clang-tidy , or Valgrind to find potential issues. |
Secure Coding Practices | Adhering to secure coding standards and guidelines | Use coding standards like OWASP |
Dependency Management | Keeping software dependencies up-to-date and patched | Use apt or yum to update system packages |
Network Security
Network security prevents unauthorized access to confidential data, protecting sensitive information from theft or misuse.
Security Configuration | Description | Example |
---|---|---|
Firewall configuration | Managing firewalls to protect network perimeters. | Use iptables or ufw to create rules that filter network traffic |
Intrusion detection (IDPS) | Deploying and managing IDPS to detect and prevent attacks | Use Snort or Suricata to monitor network traffic |
Network segmentation | Isolating critical systems and data to limit the impact. | Use VLANs to separate different network segments |
Cloud Security
Cloud environments often store critical data, making them prime targets for cyberattacks. A breach can lead to significant financial losses, reputational damage, and legal consequences.
Security Configuration | Description | Example |
---|---|---|
Identity and Access Management (IAM) | Implementing strong IAM controls to protect cloud resources. | Use least-privilege IAM roles and policies in your cloud provider |
Data Encryption | Encrypting data at rest and in transit. | Use gpg or openssl to encrypt data at rest |
Security Groups (NACLs) | Configuring security groups and NACLs | ---- |
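The data-encryption row above can be tried end to end with openssl. This is a minimal sketch: the file names and the passphrase are illustrative only, and in practice the passphrase would come from a secret manager, never the command line.

```shell
# Encrypt a file at rest with AES-256, then decrypt it to verify the round trip
echo "sensitive data" > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.txt.enc -pass pass:example-passphrase
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.txt.enc -out roundtrip.txt -pass pass:example-passphrase
cat roundtrip.txt
```

The -pbkdf2 flag derives the key with a proper key-derivation function instead of OpenSSL's weaker legacy scheme.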
Security Testing
Security testing helps uncover weaknesses and vulnerabilities in software that could be exploited by malicious actors. Regular security testing can help prevent data breaches and other security incidents by proactively identifying and mitigating risks.
Security Configuration | Description | Example |
---|---|---|
Vulnerability Scanning | Regularly scanning systems and applications | Use OpenVAS or Nessus to scan for vulnerabilities. |
Penetration Testing | Conducting simulated attacks to identify weaknesses. | Use Metasploit to simulate various attack scenarios |
DevSecOps Practices
DevSecOps is a software development approach that integrates security into the entire development lifecycle.
Security Configuration | Description | Example |
---|---|---|
Shift-left security | Incorporating security into the early stages of development | Use tools like linter to check for potential vulnerabilities in code. |
CI/CD Testing | Integrating security checks into the CI/CD pipeline | Use tools like checkmarx to scan code for vulnerabilities during the build |
Incident response | Developing and practicing incident response plans | Use Ansible to automate incident response. |
Additional Skills
Security Configuration | Description | Example |
---|---|---|
Cryptography | Understanding cryptographic principles to protect data | SSH key-based authentication |
Compliance | Adhering to relevant security regulations and standards | Comply with standards like PCI DSS |
Automation | Using tools and scripts to streamline security tasks | Vulnerability scanning with OpenSCAP |
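The cryptography row above mentions SSH key-based authentication. Generating a key pair is a one-liner; the file name demo_key and the empty passphrase are for illustration only (use a passphrase in practice).

```shell
# Generate an Ed25519 key pair: demo_key (private) and demo_key.pub (public)
ssh-keygen -t ed25519 -f ./demo_key -N "" -q
# The public key's contents go into ~/.ssh/authorized_keys on the server
cat demo_key.pub
```

Combined with the Ansible playbook earlier (PasswordAuthentication no), this is what makes key-based login the only way in.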
Disaster Recovery
60 minutes
A Disaster Recovery Plan (DRP) and a Business Continuity Plan (BCP) are crucial for the survival and resilience of any organization. They serve as a roadmap to navigate through unforeseen challenges and minimize disruptions.
Example Disaster Recovery Plan: Small E-commerce Business
Introduction
This Disaster Recovery Plan (DRP) outlines procedures for recovering IT systems and restoring business operations in case of a disaster affecting [Company Name]. The plan aims to minimize downtime, data loss, and financial impact.
Disaster Recovery Objectives
- RTO (Recovery Time Objective): Restore critical systems within 4 hours of a disaster.
- RPO (Recovery Point Objective): Maximum data loss is limited to 24 hours.
Disaster Recovery Team
- DR Coordinator: [Name], [Title]
- IT Support: [Names and Roles]
- Management: [Names and Roles]
- Communications: [Names and Roles]
Disaster Recovery Procedures
- Phase 1: Notification and Activation. Upon incident notification, the DR Coordinator activates the DR team. Initiate the communication plan to notify key personnel and stakeholders. Activate the disaster recovery site if necessary.
- Phase 2: Assessment and Prioritization. Assess the extent of the disaster and its impact on business operations. Prioritize system and data recovery based on criticality.
- Phase 3: Recovery and Restoration. Restore critical systems from backups. Implement data recovery procedures. Reconfigure network infrastructure. Test system functionality.
- Phase 4: Resumption of Operations. Gradually restore business operations. Conduct system and data verification. Initiate business continuity plans.
Backup and Recovery Procedures
- Daily backups of critical data to an offsite location.
- Weekly full system backups.
- Regular testing of backup and restore procedures.
- Use of cloud-based backup for additional redundancy.
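A daily backup like the one described above can be sketched with tar. The directory names and sample data below are illustrative assumptions, not part of the plan itself:

```shell
# Archive a data directory into a date-stamped tarball (a minimal nightly-backup sketch)
backup_src="./data"      # directory to back up (placeholder)
backup_dir="./backups"   # local staging area before the offsite copy (placeholder)
mkdir -p "$backup_src" "$backup_dir"
echo "example record" > "$backup_src/orders.txt"   # sample data for the demo
stamp=$(date +%Y-%m-%d)
archive="$backup_dir/backup-$stamp.tar.gz"
tar -czf "$archive" "$backup_src"
# An offsite copy could then be pushed from cron with rsync or a cloud CLI
```

Scheduling this from cron and verifying restores regularly is what turns the sketch into the tested procedure the plan calls for.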
Communication Plan
- Primary and secondary contact information for key personnel.
- Notification procedures for employees, customers, and partners.
- Communication channels (email, SMS, phone).
Disaster Recovery Testing
- Conduct regular tabletop exercises and full-scale tests.
- Document test results and lessons learned.
- Update the DR plan based on test findings.
Appendices
- Contact list
- Hardware and software inventory
- Network diagrams
- Backup schedules
- Detailed recovery procedures
Next
Metrics and Analytics. Which metrics are important for DevOps and how to calculate them?
Metrics and Analytics
30 minutes
Metrics and analytics are the lifeblood of DevOps. They provide the data-driven insights needed to optimize software delivery pipelines, improve efficiency, and enhance overall product quality.
Metrics
These metrics provide a foundation. The specific metrics you track will depend on your organization's goals and industry.
Metric | Formula | Description |
---|---|---|
Deployment Frequency | # of deployments / time period | How often code is deployed to production |
Lead Time for Changes | Time from code commit to production | Time taken for code to move from development to production |
Mean Time to Recover (MTTR) | Downtime / # of failures | Average time to restore a service after failure |
Change Failure Rate | # of failed deployments / # of total deployments | Percentage of deployments resulting in failures |
Cycle Time | Time from code commit to feature release | Total time spent on a piece of work |
Mean Time Between Failures (MTBF) | Time between failures / # of failures | Average time between system failures |
Defect Escape Rate | # of defects found / # of total defects | Percentage of defects that reach production |
Build Success Rate | # of successful builds / # of total builds | Percentage of successful build processes |
Test Case Execution Time | Total test execution time / # of test cases | Average time taken to run a test case |
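Several of the formulas above reduce to simple arithmetic. Here is a sketch with hypothetical sample figures; all numbers are invented for illustration.

```shell
# Hypothetical figures for a 30-day period
deployments=30          # total deployments
period_days=30          # length of the period in days
failed_deployments=3    # deployments that caused a failure
downtime_minutes=120    # total downtime across all incidents
incidents=4             # number of service failures

freq_per_day=$((deployments / period_days))              # Deployment Frequency
failure_rate=$((100 * failed_deployments / deployments)) # Change Failure Rate, in %
mttr=$((downtime_minutes / incidents))                   # Mean Time to Recover, in minutes

echo "Deployment frequency: ${freq_per_day}/day"
echo "Change failure rate: ${failure_rate}%"
echo "MTTR: ${mttr} minutes"
```

In practice these inputs would come from your CI/CD system and incident tracker rather than hard-coded values.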
Next
Databases. How to design, model and deploy a MySQL Database.
DBMS: Database Management System
120 minutes
Introduction
DBMSs are important to software engineers because they provide efficient, scalable, and secure ways to store, manage, and retrieve data, enabling the development of reliable, data-driven applications.
Table of Contents
- Prerequisites
- Connecting to a MySQL Database Via CLI
- Modeling And Designing A Database With MySQL Workbench
- Deploying A Database Via Script
- Inserting Data Via Script
- Creating A View For Easy Access
- Conclusion
Prerequisites
- MySQL Community Server: MySQL Community Server is a free, open-source version of the MySQL relational database management system, widely used for web applications and data-driven platforms, offering essential database features for developers and organizations.
- MySQL Workbench: MySQL Workbench is a visual design tool for MySQL databases, providing a unified interface for database modeling, SQL development, administration, and data migration tasks.
Connecting Via CLI
The CLI (command-line interface) provides a direct way to interact with the database server.
mysql -u root -p
Modeling With MySQL Workbench
Modeling with MySQL Workbench is valuable because it allows developers and database administrators to visually design, create, and manage database schemas, making it easier to understand complex relationships, generate SQL scripts, and ensure database structure consistency before implementation.
SQL Categories
In SQL, there are five main categories of commands, each serving a specific purpose in managing and interacting with databases. By understanding these different categories of SQL commands, you can effectively manage your database, ensuring proper data structure, manipulation, security, and retrieval capabilities.
DML (Data Manipulation Language) | DDL (Data Definition Language) |
---|---|
SELECT column1 FROM table; | CREATE VIEW view_name AS SELECT column1, column2 FROM table_name WHERE condition; |
INSERT INTO table_name VALUES (value1); | CREATE DATABASE my_database; CREATE TABLE employees (id INT PRIMARY KEY, name VARCHAR(100)); |
UPDATE table_name SET column1 = value1 WHERE condition; | ALTER TABLE employees ADD email VARCHAR(100); ALTER TABLE employees MODIFY email VARCHAR(100); ALTER TABLE employees DROP COLUMN email; |
DELETE FROM table_name WHERE condition; | TRUNCATE TABLE employees; |
| DROP DATABASE my_database; DROP TABLE employees; DROP VIEW employee_view; |
| ALTER TABLE orders ADD CONSTRAINT fk_employee FOREIGN KEY (emp_id) REFERENCES employees(id); |
| ALTER TABLE employees ADD CONSTRAINT unique_name UNIQUE (name); |
| ALTER TABLE employees ALTER COLUMN status SET DEFAULT 'active'; |
Transactions
Here's an example of a transaction in SQL that includes all the main mechanisms: COMMIT, ROLLBACK, SAVEPOINT, and ROLLBACK TO SAVEPOINT. This transaction simulates a banking system where we perform a few account updates, use a savepoint, and handle an error that requires a rollback to a specific point.
-- Start the transaction
START TRANSACTION;
-- Step 1: Deduct $500 from Account A
UPDATE accounts
SET balance = balance - 500
WHERE account_id = 'A';
-- Step 2: Set a SAVEPOINT after deducting from Account A
SAVEPOINT deduct_from_A;
-- Step 3: Add $500 to Account B
UPDATE accounts
SET balance = balance + 500
WHERE account_id = 'B';
-- Step 4: Set another SAVEPOINT after adding to Account B
SAVEPOINT add_to_B;
-- Step 5: Check balances to validate transaction
SELECT account_id, balance FROM accounts WHERE account_id IN ('A', 'B');
-- Let's assume we find an issue here (e.g., Account B is overdrawn due to a previous transaction error),
-- so we decide to ROLLBACK to the savepoint where we had only deducted from Account A.
-- Step 6: Rollback to the savepoint after deducting from Account A
ROLLBACK TO deduct_from_A;
-- Step 7: Now, decide whether to commit the transaction (if everything is fine), or rollback the entire thing
-- In this case, let's assume we fix the issue and commit the transaction
COMMIT;
DCL (Data Control Language) | TCL (Transaction Control Language) | Operations |
---|---|---|
GRANT SELECT, INSERT ON employees TO user1; | COMMIT; | SELECT name AS employee_name, salary AS monthly_salary FROM employees; |
REVOKE INSERT ON employees FROM user1; | ROLLBACK; | SELECT department, COUNT(*) AS total_employees FROM employees GROUP BY department; |
| SAVEPOINT sp1; | SELECT department, COUNT(*) AS total_employees FROM employees GROUP BY department HAVING COUNT(*) > 5; |
| ROLLBACK TO sp1; | SELECT name, salary FROM employees ORDER BY salary ASC; |
| | SELECT name, salary FROM employees ORDER BY salary DESC; |
Joins
Joins |
---|
SELECT employees.name, departments.department_name FROM employees INNER JOIN departments ON employees.department_id = departments.id; |
SELECT employees.name, departments.department_name FROM employees LEFT JOIN departments ON employees.department_id = departments.id; |
SELECT employees.name, departments.department_name FROM employees RIGHT JOIN departments ON employees.department_id = departments.id; |
SELECT employees.name, departments.department_name FROM employees FULL JOIN departments ON employees.department_id = departments.id; |
DML
Operation | SQL Query Example | Explanation |
---|---|---|
SELECT (Columns) | SELECT name, salary FROM employees; | Selects the name and salary columns from the employees table. |
SELECT (All Table *) | SELECT * FROM employees; | Selects all columns from the employees table. |
INSERT (Data to Table) | INSERT INTO employees (name, department, salary) VALUES ('John Doe', 'HR', 5000); | Inserts a new row into the employees table with name , department , and salary values. |
UPDATE (Field) | UPDATE employees SET salary = 6000 WHERE name = 'John Doe'; | Updates the salary field to 6000 for the employee with name = 'John Doe' . |
DELETE (Field) | DELETE FROM employees WHERE name = 'John Doe'; | Deletes the row from the employees table where the name is John Doe . |
Window Functions
Window Function | SQL Query Example | Explanation |
---|---|---|
OVER() | SELECT name, salary, SUM(salary) OVER () AS total_salary FROM employees; | The OVER() function calculates the total salary of all employees without GROUP BY . |
ROW_NUMBER() | SELECT name, salary, ROW_NUMBER() OVER (ORDER BY salary DESC) AS row_num FROM employees; | Assigns a unique row number to each row ordered by salary in descending order. |
RANK() | SELECT name, salary, RANK() OVER (ORDER BY salary DESC) AS rank FROM employees; | Assigns a rank to each row based on salary, with gaps if there are ties. |
DENSE_RANK() | SELECT name, salary, DENSE_RANK() OVER (ORDER BY salary DESC) AS dense_rank FROM employees; | Similar to RANK() , but without gaps in the ranking when there are ties. |
NTILE() | SELECT name, salary, NTILE(4) OVER (ORDER BY salary DESC) AS quartile FROM employees; | Divides the employees into 4 equal groups (quartiles) based on their salary. |
LAG() | SELECT name, salary, LAG(salary, 1) OVER (ORDER BY salary DESC) AS prev_salary FROM employees; | Retrieves the previous row's salary based on the current order, showing NULL for the first row. |
LEAD() | SELECT name, salary, LEAD(salary, 1) OVER (ORDER BY salary DESC) AS next_salary FROM employees; | Retrieves the next row's salary based on the current order, showing NULL for the last row. |
Functions
Function | Example Query | Explanation |
---|---|---|
AVG() | SELECT AVG(salary) FROM employees; | Returns the average value of the salary column from the employees table. |
SUM() | SELECT SUM(sales) FROM orders; | Returns the total sum of the sales column from the orders table. |
COUNT() | SELECT COUNT(*) FROM customers; | Returns the total number of rows in the customers table. |
MIN() | SELECT MIN(price) FROM products; | Returns the minimum value from the price column in the products table. |
MAX() | SELECT MAX(age) FROM people; | Returns the maximum value from the age column in the people table. |
WHERE Clause
Condition/Clause | Example Query | Explanation |
---|---|---|
<, >, <>, <=, >=, = | SELECT * FROM employees WHERE salary > 50000; | Selects all employees with a salary greater than 50,000. |
AND, OR, NOT | SELECT * FROM employees WHERE age >= 30 AND department = 'HR'; | Selects employees who are 30 years or older and work in the HR department. |
BETWEEN | SELECT * FROM products WHERE price BETWEEN 100 AND 500; | Selects products with a price between 100 and 500. |
LIKE | SELECT * FROM customers WHERE name LIKE 'J%'; | Selects customers whose name starts with "J". |
IN | SELECT * FROM orders WHERE order_status IN ('Shipped', 'Pending'); | Selects orders where the status is either "Shipped" or "Pending". |
ANY | SELECT * FROM employees WHERE salary > ANY (SELECT salary FROM managers); | Selects employees with a salary greater than any of the managers' salaries. |
ALL | SELECT * FROM employees WHERE salary > ALL (SELECT salary FROM interns); | Selects employees whose salary is higher than all the interns' salaries. |
EXISTS | SELECT * FROM departments WHERE EXISTS (SELECT * FROM employees WHERE department_id = departments.id); | Selects departments where there are employees assigned to them. |
Conclusion
This repository has equipped you with a comprehensive understanding of MySQL, from its core concepts to advanced functionalities. You've explored data manipulation, querying, security, administration, and even how to leverage views for optimized data access.
SDLC & Testing
10 minutes
The Software Development Lifecycle is a process used to create software that meets the stakeholders' requirements. It comprises six core phases: Planning, Design, Development, Testing, Deployment, and Maintenance.
Testing Throughout the SDLC
Planning | Design | Development | Testing | Deployment | Maintenance |
---|---|---|---|---|---|
Requirement Analysis | Feedback | Write Test Cases | Execute Cases | Analysis & Reporting of Test Cases | Regression |
Write Test Plan | Write Traceability Matrix | Track Defects | Fix Bugs & Retest | Monitoring | |
Set up Test Environment | | | | | |
Agile Methodologies Differences
- Scrum has a fixed-length Sprint cycle, while Kanban is continuous.
- Scrum has dedicated review and retrospective ceremonies, while Kanban uses daily stand-up meetings for continuous improvement.
- Testing in Scrum is integrated throughout the Sprint, while Kanban testing is continuous throughout the workflow.
Test Techniques, Types, and Levels
- Execution: manual or automated.
- Techniques: Specific methods used to execute tests - White Box, Black Box, Experience-based.
- Types: Categories of testing based on different objectives - Functional, Non-functional.
- Levels: Stages of testing within the software development lifecycle - Unit, Integration, System, Performance, Acceptance, UI.
- Design Strategies: Approaches used to create test cases - a subset of black-box, white-box, and experience-based.
Functional
Evaluates the functions that a component or system should perform.
LEVEL | DESCRIPTION |
---|---|
Unit Testing | Tests individual units of code (functions, modules) |
Integration Testing | Tests how different units interact with each other. |
System Testing | Tests the entire system as a whole. |
Acceptance Testing | Verifies if the system meets user requirements. |
Non-Functional
Evaluates attributes other than functional characteristics of a component or system. It tests "how well the system behaves".
LEVEL | DESCRIPTION |
---|---|
Performance Testing | Evaluates system performance under load. |
Usability Testing | Evaluates user experience with the software. |
Mutation Testing | Introduces deliberate errors (mutations) in the code and checks if tests detect them. |
Capture-Replay Testing | Records user interactions and replays them for regression testing. |
Black-box
Also known as specification-based techniques. They are based on the behavior of the test object without reference to its internal structure. Therefore, they are independent of how the software is implemented. Consequently, if the implementation changes but the required behavior stays the same, the test cases are still useful.
Example: Decision Table Testing
Decision table testing is a method to identify test cases based on input conditions and their corresponding actions/outputs, especially used for complex requirements.
Why is it important?
Decision tables provide comprehensive test coverage by considering all input combinations, helping uncover defects while improving test case design efficiency and clarity.
How It Works
1. Identify conditions and actions:
Determine the input conditions and potential outputs (actions) based on the system's behavior.
2. Create a decision table:
Construct a table with conditions as columns and rows representing different combinations of conditions.
3. Define rules:
Specify the actions to be performed for each combination of conditions.
4. Design and execute test cases:
Create test cases based on the defined rules, then compare the results with the expected outcomes.
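The four steps above can be sketched in code. Below is a minimal sketch assuming a hypothetical discount rule with two conditions (membership status and whether the order total reaches 100), which yields a four-rule decision table and one test case per rule. The function name, conditions, and discount values are invented for this illustration.

```java
// Decision table for a hypothetical discount rule.
// Conditions: isMember, orderTotal >= 100. Action: discount percentage.
//   Rule 1: member, total >= 100 -> 15%
//   Rule 2: member, total <  100 -> 10%
//   Rule 3: guest,  total >= 100 ->  5%
//   Rule 4: guest,  total <  100 ->  0%
public class DiscountDecisionTableTest {
    // System under test (hypothetical).
    static int discountPercent(boolean isMember, double orderTotal) {
        if (isMember) {
            return orderTotal >= 100 ? 15 : 10;
        }
        return orderTotal >= 100 ? 5 : 0;
    }

    public static void main(String[] args) {
        // One test case per rule gives full coverage of the table.
        check(15, discountPercent(true, 150));  // Rule 1
        check(10, discountPercent(true, 50));   // Rule 2
        check(5,  discountPercent(false, 150)); // Rule 3
        check(0,  discountPercent(false, 50));  // Rule 4
        System.out.println("All decision table rules passed");
    }

    static void check(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }
}
```

With two binary conditions the table has 2^2 = 4 rules; this exponential growth is exactly the limitation noted below.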
Limitations
Decision table testing can become impractical for systems with many conditions, leading to an overwhelming number of test cases due to the exponential growth in possible combinations.
White-box
Also known as structure-based techniques. They are based on analyzing the test object's internal structure and processing. As the test cases are dependent on how the software is designed, they can only be created after the design or implementation of the test object.
Example: Branch Testing
Branch testing is a white-box testing technique that ensures every possible path through a code's decision points is executed at least once. A decision point is typically an 'if' or 'switch' statement where the program's flow can diverge based on the condition's outcome.
Branch testing aims to cover all potential branches or outcomes of each decision point, guaranteeing that all reachable code is executed. Any set of test cases achieving 100% branch coverage also achieves 100% statement coverage (but not vice versa).
Why is Branch Testing Important?
Comprehensive Coverage - testing all branches helps to ensure that all possible code paths are exercised, reducing the likelihood of hidden bugs.
How to Use Branch Testing
1. Identify Decision Points:
Locate all if, switch, and similar statements in the code.
2. Determine Branches:
For each decision point, identify the possible outcomes or branches.
3. Design Test Cases:
Create test cases that specifically target each branch to ensure it's executed.
4. Execute Test Cases:
Run the test cases to cover all identified branches.
5. Measure Coverage:
Calculate the branch coverage percentage to assess test effectiveness.
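The procedure above can be illustrated with a small sketch: a hypothetical grading function with two decision points, and one test case aimed at each branch. The function name and thresholds are assumptions made for this example, not part of any real system.

```java
// Branch testing sketch: grade(score) has two decision points.
public class GradeBranchTest {
    // System under test (hypothetical).
    static String grade(int score) {
        if (score < 0 || score > 100) { // Decision point 1
            return "invalid";           // Branch: condition true
        }
        if (score >= 60) {              // Decision point 2
            return "pass";              // Branch: condition true
        }
        return "fail";                  // Branch: condition false
    }

    public static void main(String[] args) {
        // Together these cases execute every branch of both decision points,
        // i.e., 100% branch coverage (which implies 100% statement coverage).
        check("invalid", grade(-5)); // Decision 1: true
        check("pass",    grade(75)); // Decision 1: false, Decision 2: true
        check("fail",    grade(40)); // Decision 2: false
        System.out.println("All branches covered and passing");
    }

    static void check(String expected, String actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }
}
```

Note the limitation mentioned next applies here too: these tests only catch defects because each case asserts a concrete expected value, not merely because every branch runs.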
Limitation
While high branch coverage is good for testing, 100% branch coverage can still miss verifying the correct behavior of the code if the test cases lack proper assertions or only use trivial input values.
Experience-based
They use testers' knowledge and experience to design and implement test cases. Experience-based testing can detect defects that may be missed with black-box and white-box. Hence, they are complementary to the other techniques.
Example: Error Guessing
Error Guessing is a software testing strategy where testers use their experience and intuition to predict potential errors or defects in an application. It's an informal approach based on the tester's ability to anticipate issues from past encounters with similar systems.
Why It Is Important
Testers can focus on areas that might not be covered by formal test cases, providing a different perspective on the software.
How It Works
Error guessing is a relatively unstructured process. Testers typically follow these steps:
- Analyze the software: Understand the system's functionality, requirements, and design.
- Identify potential problem areas: Based on experience, intuition, and domain knowledge, guess where errors might occur.
- Design and execute test cases: Create and run test cases to specifically target these areas.
- Report and track defects: Document found issues and track their resolution.
Important Note: While error guessing is valuable, it should not replace structured testing methods. A combination of both approaches is essential for comprehensive test coverage.
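As a minimal sketch of error guessing, assume a hypothetical age-field parser; the "guessed" inputs below (null, empty, whitespace, negative, out-of-range, non-numeric) are exactly the kind of error-prone values an experienced tester probes first. The parser, its name, and its range limits are all invented for this example.

```java
// Error guessing sketch against a hypothetical age parser.
public class AgeParserErrorGuessing {
    // System under test (hypothetical): parses an age field from a form.
    static int parseAge(String input) {
        if (input == null || input.trim().isEmpty())
            throw new IllegalArgumentException("age is required");
        int age = Integer.parseInt(input.trim()); // may throw NumberFormatException
        if (age < 0 || age > 150)
            throw new IllegalArgumentException("age out of range");
        return age;
    }

    public static void main(String[] args) {
        // Inputs a tester "guesses" will expose defects.
        expectRejection(null);
        expectRejection("");
        expectRejection("   ");
        expectRejection("-1");
        expectRejection("999");
        expectRejection("forty");
        System.out.println("All guessed error cases were rejected");
    }

    static void expectRejection(String input) {
        try {
            parseAge(input);
            throw new AssertionError("input was accepted: " + input);
        } catch (IllegalArgumentException expected) {
            // NumberFormatException is a subclass, so non-numeric input lands here too.
        }
    }
}
```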
API Testing
Application Program Interface (API) testing is a type of software testing that validates the behavior and performance of an application program interface.
Why It Is Important
API testing evaluates an API's ability to handle increased load and traffic. Testers can identify bottlenecks, latency issues, and resource constraints to optimize performance and ensure it can handle the expected user load.
How it Works
- Understand the API: Understand the API's functionalities, endpoints, data formats, authentication methods, and expected behaviors.
- Create Test Cases: Develop test cases covering various scenarios, including positive, negative, boundary value, performance, and security tests.
- Set Up Test Environment: Establish the necessary environment with appropriate tools, data, and API access.
- Execute Test Cases: Run the tests.
- Analyze Results: Evaluate test outcomes, compare actual results with expected results, and generate reports.
- Iterate and Improve: Address identified defects, refine test cases, and automate where feasible.
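The steps above can be exercised end to end in a self-contained sketch: it starts a tiny local HTTP server (standing in for a real API, using only JDK classes) and runs one positive and one negative functional test case against it. The /status endpoint and its JSON body are invented for this demonstration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiTestSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in API: a local server with one hypothetical endpoint.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        int port = server.getAddress().getPort();
        HttpClient client = HttpClient.newHttpClient();

        // Positive test case: known endpoint returns 200 and the expected body.
        HttpResponse<String> ok = client.send(
            HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/status")).build(),
            HttpResponse.BodyHandlers.ofString());
        if (ok.statusCode() != 200 || !ok.body().contains("ok"))
            throw new AssertionError("positive case failed");

        // Negative test case: unknown endpoint should return 404.
        HttpResponse<String> missing = client.send(
            HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/nope")).build(),
            HttpResponse.BodyHandlers.ofString());
        if (missing.statusCode() != 404)
            throw new AssertionError("negative case failed");

        server.stop(0);
        System.out.println("API checks passed");
    }
}
```

In a real project the same request/assert pattern would run against the deployed API's base URL instead of a local stub.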
API Testing vs. API Method Testing
API testing covers all the testing activities on an API to ensure it functions correctly, reliably, and securely. It includes functional testing, performance testing, security testing, and more. On the other hand, API method testing focuses on testing the individual operations or endpoints of an API, such as creating, reading, updating, and deleting operations on a resource.
To summarize:
- API testing is the overall process of evaluating an API.
- API method testing is a specific type of API testing that targets individual API endpoints.
In essence, API method testing is a component of API testing, just as unit testing is a component of software testing.
Test Automation
Test automation uses software to execute test cases, validate results, and provide feedback. It replaces manual testing efforts, particularly for repetitive and time-consuming tasks, thereby enhancing efficiency and accuracy.
How It Works
1. Test Case Selection
Identify test cases suitable for automation, prioritizing those with high execution frequency, critical business logic, or prone to human error.
For demonstration, the test case below was created based on the sample URL Selenium provides:
Test Case: Simple Form Submission with Selenium
Test Objective:
Verify that the script successfully fills a text box and submits a form on the Selenium Dev website.
Pre-Requisites:
- Java installed
- Selenium WebDriver libraries downloaded and configured
- Chrome browser installed (or another browser supported by Selenium)
Test Steps:
- Run SeleniumJunitTest.java (replace SeleniumJunitTest.java with the name of the test file).
- Verify Chrome opens the URL and the script interacts with the "my-text" box, entering "Selenium".
- Confirm the script clicks the submit button and retrieves a confirmation message.
- Check that Chrome closes after execution.
Pass/Fail Criteria:
- Pass if all expected results are observed.
- Fail if any are missing or errors occur.
2. Tool Selection
Choose test automation tools or frameworks that align with project requirements, team expertise, and budget constraints. Consider factors such as ease of use, test coverage, reporting capabilities, and integration with other tools.
- Tools: Provide functionalities to automate test actions and interact with UI elements directly. Examples: Selenium (for web browsers), Appium (for mobile apps), and SoapUI (for SOAP and REST API tests).
- Frameworks: Provide structure and best practices for building and managing test scripts, along with reusable components and libraries. Examples: Robot Framework, Playwright, TestNG.
3. Test Environment Setup
Establish a stable test environment to ensure reliable test execution.
4. Test Script Development
Selenium will be used for demonstration purposes.
4.1 Creating a Maven Project (Optional but Recommended):
Selenium WebDriver
can be used without a dedicated build system. However, using a build system like Maven
offers advantages such as dependency management and project organization. Here's how to create a Maven project in IntelliJ IDEA Community:
- Open IntelliJ IDEA Community.
- Click "New" -> "Project."
- Select "Maven" and choose a project location. Click "Next."
- Fill in the "Group ID," "Artifact ID," and "Version." Click "Finish."
4.2 Adding Selenium Dependency:
The following dependency should be added to the pom.xml file, inside the <dependencies> section:
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>4.x.x</version> <!-- Replace with the latest version -->
</dependency>
This line tells Maven to download the Selenium Java library, which provides the necessary classes for interacting with web browsers.
In some situations, it's common to use test frameworks like JUnit or TestNG along with Selenium for writing and executing test cases to improve organization and readability. For this script, JUnit was implemented.
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.x.x</version> <!-- Replace with the latest version -->
<scope>test</scope>
</dependency>
4.3 Folder Structure
Maven provides a standardized directory structure for Java projects.
Core Directories
- Directories containing software logic:
  - src/main/java: Contains the project's main source code.
    - Package: Organizes code into packages based on functionality or modules.
    - Classes: Java classes implementing the application's logic.
  - src/test/java: Houses the project's test code.
    - Package: Similar to src/main/java but for test classes.
    - Test classes: JUnit or TestNG test classes.
- Other directories:
  - src/main/resources: Stores configuration files, properties files, and other static resources used during compilation.
  - src/test/resources: Stores test-specific resources like test data or configuration files.
  - target: Used by Maven to store generated output, such as compiled classes, test results, and final artifacts (JAR, WAR, etc.).
  - pom.xml: The project's configuration file, defining dependencies, build process, and other project-related settings.
4.4 Build the Script
This example demonstrates a basic test script using Selenium WebDriver in Java. It opens the URL, finds the text box, enters the text, and clicks the "submit" button.
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.time.Duration;
public class SeleniumJunitTest {
private WebDriver driver;
@Before
public void setUp() {
// Create a new Chrome WebDriver instance
driver = new ChromeDriver();
// Set implicit wait (optional)
driver.manage().timeouts().implicitlyWait(Duration.ofMillis(500));
}
@Test
public void testSeleniumForm() {
// Navigate to the URL being tested
// In this case, this is the test URL provided by Selenium
driver.get("https://www.selenium.dev/selenium/web/web-form.html");
// Find the 'text box' and 'button' elements by their locators
WebElement textBox = driver.findElement(By.name("my-text"));
WebElement submitButton = driver.findElement(By.cssSelector("button"));
// The script will automatically type "Selenium" and click the page button
textBox.sendKeys("Selenium");
submitButton.click();
// Find the element displaying the message by its ID
// Wait for message element to be visible (recommended)
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(30));
WebElement message = wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("message")));
String messageText = message.getText();
// Assert on the retrieved text; the Selenium sample page displays "Received!"
org.junit.Assert.assertEquals("Received!", messageText);
}
@After
public void tearDown() {
// Close the browser window
driver.quit();
}
}
5. Test Execution and Analysis
Integrate test automation into the development lifecycle, and analyze test results. Use tools like Jira for bug tracking and reporting.
6. Continuous Improvement
Refine test automation processes to optimize efficiency and effectiveness.
Conclusion
In summary, test automation is essential for improving software quality, efficiency, and speed while reducing costs and risks.
ForgeGuard - Functionality Testing - Login
Estimated Time: 2 hours
Tech Stack: Selenium - Java - Cucumber
Keywords: Automation - Testing - QA - BDD
Experience Level: Beginner - Advanced
Why Web Testing
The main use case for Web Browser Automation is regression testing. Most companies today have websites, and they deploy new code every day. This new code is meant to fix issues or add new functionality, but it often breaks things that were already working. Regression testing guards against this by verifying all of the application's existing functionality before new code is deployed.
Objective
Create automated test cases using Selenium, Java, and Cucumber to verify the functionality of the login page at "https://practice.expandtesting.com/login". Login is arguably the most used and most common functionality of all web applications. Mastering it covers roughly 80% of the knowledge required to perform this activity.
These tests will:
- Test login with valid credentials.
- Test login with invalid credentials.
- Validate error messages for incorrect input.
- Verify successful redirection after a valid login.
Encouragement
Progress in automation testing requires persistence, and each effort lays a solid foundation for future success.
Table Of Contents
- Functionality Testing
- Step 01 Understanding Automation Workflow(Java, Selenium,Cucumber)
- Step 02 Environment Set Up
- Step 03 IDE and Plugin Configuration Set Up
- Step 04 Project Initialization Set Up
- Step 05 Core Library Installation Set Up
- Step 06 Basic Selenium Test Set up
- Step 07 Basic Cucumber Set Up
- Step 08 Understand Automation Logic: Selenium
- Step 09 Scripting
- Conclusion
Functionality Testing
Functionality testing is the first step of the automation journey because it serves the main objective of testing in general: verifying that a software application performs as intended, meets all specified functionalities and user requirements, and is reliable and free from major defects, with issues identified and fixed early in the development cycle.
Step 01 Understanding Automation Workflow(Java, Selenium,Cucumber)
When automating how someone interacts with a website, Java, Selenium, and Cucumber each play a unique role in making this happen.
- Java is the programming language you use to write instructions for the automation.
- Selenium acts as the "robot" that opens a browser, clicks buttons, fills out forms and verifies things on the webpage as per your Java code.
- Cucumber lets you write those instructions in plain English (called feature files) using a style called Behavior-Driven Development (BDD).
Instead of writing complex code for "login with valid credentials," it can be written as: Given I am on the login page, When I enter valid credentials, Then I should see the dashboard. Cucumber links this plain English text to your Java code (called step definitions), which uses Selenium to perform the actual actions in the browser.
Together, they make testing clear, readable, and efficient.
Set Up
After understanding the stack workflow, it's time to set up all these technologies in the local environment. The setup involves several moving parts and, depending on experience, can take between 1 and 15 days to complete. That is why it's helpful to split it into steps such as:
- Environment Set Up
- IDE and Plugin Configuration Set Up
- Project Initialization Set Up
- Core Library Installation Set Up
- Basic Selenium Test Set Up
- Basic Cucumber Set Up
Step 02 Environment Setup
Objective: Install and verify foundational tools. Tasks:
- Install Java and ensure it's properly configured with java -version. Without this: You won't be able to write or execute any Java code, as Java is the foundation of the entire project.
- Install the JDK and confirm its functionality with javac -version.
- Install Maven and verify it with mvn -v. Without this: You'll need to manually download and manage all dependencies (like Selenium, JUnit, and Cucumber), which is time-consuming and error-prone.
Step 03 IDE and Plugin Configuration
Objective: Set up the development environment. Tasks:
- Install IntelliJ IDEA or your preferred IDE. Without this: You'll lack a professional development environment to write, debug, and run your code efficiently, making development harder and slower.

Note: IntelliJ bundles Java and Maven; however, a system-wide installation is still needed to execute mvn commands from the terminal.

- Add the Cucumber for Java plugin to enable Gherkin and Cucumber support. Without this: Step definitions written in Java won't be supported, breaking the connection between feature files and the automation logic.
- Configure the project JDK in the IDE. IntelliJ does this automatically.
Step 04 Project Initialization
Objective: Create and configure the project. Tasks:
- Use the plain Maven project builder or Maven archetypes to create an empty Cucumber project. The code below can be used to create a Cucumber project from the archetype in a pwsh terminal.
mvn archetype:generate `
"-DarchetypeGroupId=io.cucumber" `
"-DarchetypeArtifactId=cucumber-archetype" `
"-DarchetypeVersion=7.20.1" `
"-DgroupId=<project-name>" `
"-DartifactId=<project-name>" `
"-Dpackage=<project-name>" `
"-Dversion=1.0.0-SNAPSHOT" `
"-DinteractiveMode=false"
Step 04.01 Project Structure
Structuring the project directories correctly is crucial. After executing the commands above, the project structure should look like the example below; this structure is mandatory for the project to work properly.
src
├── main
│   └── java
│       └── <Project-Name>
│           └── [Production code files]
└── test
    ├── java
    │   └── <Project-Name>
    │       ├── RunCucumberTest.java
    │       └── [Step definitions here]
    └── resources
        └── <Project-Name>
            └── [Feature files]
Step 05 Dependencies Set Up
The test automation environment contains several moving parts. To function properly, all of them need to be declared in the pom.xml file (the backbone of the automation testing project) so that Maven can build the project correctly.
Step 05.01 POM File Template
Using the mvn archetype above installs the Cucumber dependencies, but not the Selenium dependencies, the web driver (which controls the browser), or other required plugins. Missing dependencies or improper configuration means the project will face issues. Here's a POM file template that can be used to avoid misconfigurations.
<?xml version="1.0" encoding="UTF-8"?>
<!-- This is a Maven POM file template for a Selenium + Cucumber test automation project -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion> <!-- Version of the POM model -->
<!-- General project information -->
<groupId>org.example</groupId> <!-- A unique ID for your project, usually your organization's domain in reverse -->
<artifactId>test-automation-template</artifactId> <!-- The name of the project -->
<version>1.0-SNAPSHOT</version> <!-- The current version of the project -->
<properties>
<!-- Java version for compiling the project -->
<maven.compiler.source>17</maven.compiler.source> <!-- Java version for source code -->
<maven.compiler.target>17</maven.compiler.target> <!-- Java version for compiled classes -->
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- Encoding to handle special characters -->
<!-- Versions for external dependencies -->
<cucumber.version>7.20.0</cucumber.version> <!-- Cucumber version -->
<selenium.version>4.11.0</selenium.version> <!-- Selenium version -->
<webdriver.version>5.5.3</webdriver.version> <!-- WebDriverManager version -->
<junit.version>5.11.0</junit.version> <!-- JUnit version -->
</properties>
<!-- Project dependencies -->
<dependencies>
<!-- JUnit is used as the testing framework -->
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>${junit.version}</version> <!-- Use the JUnit version defined in properties -->
<scope>test</scope> <!-- Indicates this dependency is only needed for tests -->
</dependency>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-suite-api</artifactId>
<version>${junit.version}</version>
<scope>test</scope>
</dependency>
<!-- Cucumber dependencies for Behavior-Driven Development (BDD) -->
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-java</artifactId>
<version>${cucumber.version}</version> <!-- Cucumber library for writing step definitions -->
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-junit-platform-engine</artifactId>
<version>${cucumber.version}</version> <!-- Integrates Cucumber with JUnit -->
<scope>test</scope>
</dependency>
<!-- Selenium dependencies for browser automation -->
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>${selenium.version}</version> <!-- Selenium Java binding -->
</dependency>
<!-- WebDriverManager for automatically managing browser drivers -->
<dependency>
<groupId>io.github.bonigarcia</groupId>
<artifactId>webdrivermanager</artifactId>
<version>${webdriver.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<!-- Maven Compiler Plugin for Java compilation -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version> <!-- Plugin version -->
<configuration>
<release>17</release> <!-- Java version for compiling -->
</configuration>
</plugin>
<!-- Maven Surefire Plugin for executing tests -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M7</version> <!-- Plugin version -->
<configuration>
<includes>
<include>**/*.java</include> <!-- Include all Java files in the test folder -->
</includes>
<systemPropertyVariables>
<!-- Pass configuration to Cucumber -->
<cucumber.features>src/test/resources</cucumber.features> <!-- Path to feature files -->
<cucumber.glue>your.step.definition.package</cucumber.glue> <!-- Step definitions package -->
<cucumber.plugin>pretty, json:target/cucumber.json</cucumber.plugin> <!-- Report plugins -->
</systemPropertyVariables>
</configuration>
</plugin>
<!-- Maven Failsafe Plugin for integration tests -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>3.0.0-M7</version>
<executions>
<execution>
<goals>
<goal>integration-test</goal> <!-- Runs integration tests -->
<goal>verify</goal> <!-- Verifies test results -->
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Step 05.02 Runner Class Template
The final component of the setup is the runner class, which tells JUnit where the feature files and step definitions are. It is also used to pass customized test execution configurations.
src/test/java/ForgeGuard/RunCucumberTest.java
package ForgeGuard;
import org.junit.platform.suite.api.ConfigurationParameter;
import org.junit.platform.suite.api.IncludeEngines;
import org.junit.platform.suite.api.SelectDirectories;
import org.junit.platform.suite.api.Suite;
import static io.cucumber.junit.platform.engine.Constants.*;
@Suite
@IncludeEngines("cucumber") // Use the Cucumber engine
@SelectDirectories("src/test/resources/ForgeGuard") // Path to your feature files
@ConfigurationParameter(key = GLUE_PROPERTY_NAME, value = "ForgeGuard") // Package containing step definitions
@ConfigurationParameter(key = PLUGIN_PROPERTY_NAME, value = "pretty, json:target/cucumber.json") // Plugins
public class RunCucumberTest {
}
Encouragement
Every project milestone brings a sense of achievement, and even challenges contribute to stronger skills and deeper understanding.
Step 06 Basic Selenium Test Setup
Once the project structure is set up, the POM file is correct, and the runner class is implemented, it's time to test the setup and verify everything runs smoothly.
Objective: Write and execute initial tests to verify the setup. Tasks:
- Create a simple Selenium test to open a browser and validate functionality. Inside the java directory, a directory with the project name (not mandatory; it can have any name) should exist. This is the folder where BasicTest.java will be created to verify the setup works as expected.
<your-project-name>/
├── .idea/
└── <your-project-name>/
    └── src/
        └── test/
            └── java/
                └── <your-project-name>/
                    └── BasicTest.java
After the file is created, paste the code below to verify the setup is properly configured.
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.junit.jupiter.api.Test;
public class BasicTest {
@Test
public void testGooglePageTitle() {
// WebDriver setup
WebDriverManager.chromedriver().setup();
WebDriver driver = new ChromeDriver();
// Open Google and print title
driver.get("https://google.com");
System.out.println("Page Title: " + driver.getTitle());
// Quit browser
driver.quit();
}
}
To execute the BasicTest.java file, paste the command below into a new terminal.
mvn test -Dtest=BasicTest
Note: mvn commands sometimes fail in pwsh terminals; if that happens, use bash instead.
Step 07 Basic Cucumber Test Setup
If the test above runs, Selenium is working. Now it's time to verify that Cucumber works too. Add a sample Cucumber feature file and step definitions to validate the Cucumber setup.
Objective: Write and execute initial tests to verify the setup. Tasks:
- Navigate to resources/.
<your-project-name>/
├── .idea/
└── <your-project-name>/
    └── src/
        └── test/
            ├── java/
            │   └── <your-project-name>/
            │       ├── BasicTest.java
            │       ├── StepDefinitions.java
            │       └── TestRunner.java
            └── resources/
                └── <your-project-name>/
                    └── is_it_friday_yet.feature
Add the following scenario:
# Feature name
Feature: Is it Friday yet?
  # Description
  Everybody wants to know when it's Friday

  # Tags for identifying scenarios. Multiple scenarios can exist in a file.
  @WeekendCheck
  # Scenario name to be able to identify it.
  Scenario: Sunday isn't Friday
    # Steps that will be defined in the "Step Definition" file.
    Given today is Sunday
    When I ask whether it's Friday yet
    Then I should be told "Nope"
Then navigate to:
<your-project-name>/
├── .idea/
└── <your-project-name>/
    └── src/
        └── test/
            └── java/
                └── <your-project-name>/
                    ├── BasicTest.java
                    ├── StepDefinitions.java
                    └── TestRunner.java
The following code adds the logic to test the previously created scenario. Note that a Java class name cannot contain a dot; the ForgeGuard prefix belongs in the package declaration, so the classes are named IsItFriday and StepDefinitions.
package ForgeGuard;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.jupiter.api.Assertions.*;

class IsItFriday {
    static String isItFriday(String today) {
        return "Nope";
    }
}

public class StepDefinitions {
    private String today;
    private String actualAnswer;

    @Given("today is Sunday")
    public void today_is_Sunday() {
        today = "Sunday";
    }

    @When("I ask whether it's Friday yet")
    public void i_ask_whether_it_s_Friday_yet() {
        actualAnswer = IsItFriday.isItFriday(today);
    }

    @Then("I should be told {string}")
    public void i_should_be_told(String expectedAnswer) {
        assertEquals(expectedAnswer, actualAnswer);
    }
}
Step 07.01 Basic Cucumber Test Setup: Execution
To execute the test there are several choices:
- Run scenarios by name:
mvn test -Dcucumber.filter.name="Sunday isn't Friday"
- Run all scenarios in a file:
mvn test -Dcucumber.features=src/test/resources/ForgeGuard/is_it_friday_yet.feature
- Run scenarios by tag:
mvn test -Dcucumber.filter.tags="@WeekendCheck"
- Re-install dependencies and compile test code without executing tests:
mvn clean install -DskipTests
- Re-install dependencies without compiling and executing test code:
mvn clean install -Dmaven.test.skip=true
Congratulations! Set Up Completed
Now the setup is complete and it's time to start testing. You're doing a fantastic job! Every new test you write is another step toward mastering automation testing. Keep building momentum; you've got this!
Step 08 Understand Automation Logic: Selenium
Before writing automated tests, a basic understanding of Selenium is needed. Selenium is an open-source project dedicated to browser automation with several products under its brand; for web-browser testing purposes, the product used is Selenium WebDriver.
Selenium WebDriver
- Definition: WebDriver is a high-level Selenium API that provides a programming interface to interact with web browsers.
- Role: It defines how commands are sent to the browser and manages the communication between your test scripts and the browser-specific driver.
WebDriver: 06 Core Components
WebDriver is composed of 6 core components and several subcomponents. They are the building blocks Selenium uses to interact with the browser, and therefore they are also the building blocks of the step definitions file needed for automation testing.
01 Drivers
- Options: Configures browser options like headless mode or window size.
- Http Client: Handles HTTP requests between Selenium and the browser driver.
- Service: Manages driver lifecycle (start, stop, port configuration).
- Remote WebDriver: Enables testing on remote browsers or cloud grids.
02 Browsers
- Chrome: Automates Google Chrome browser actions and interactions.
- Firefox: Automates Mozilla Firefox browser actions and interactions.
- Edge: Automates Microsoft Edge browser actions and interactions.
03 Waits
- Implicit: Global wait for element presence across all interactions.
- Explicit: Waits for specific conditions like visibility or clickability.
- Custom: User-defined waits for unique or complex conditions.
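To make the idea concrete, the polling loop that explicit and custom waits perform under the hood can be sketched in plain Java. The PollingWait class below is an illustrative stand-in, not Selenium's actual WebDriverWait implementation:

```java
import java.util.function.Supplier;

// Minimal sketch of the polling loop behind explicit/custom waits.
// Class name and timings are illustrative, not Selenium API.
public class PollingWait {

    // Polls the condition every pollMillis until it returns true
    // or timeoutMillis elapses; returns whether the condition was met.
    public static boolean until(Supplier<Boolean> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (Boolean.TRUE.equals(condition.get())) {
                return true; // condition met before the deadline
            }
            Thread.sleep(pollMillis); // pause before polling again
        }
        return false; // timed out
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms, like an element appearing late.
        boolean met = until(() -> System.currentTimeMillis() - start > 200, 2000, 50);
        System.out.println(met); // true
    }
}
```

Selenium's real explicit wait works the same way: it re-evaluates an ExpectedCondition on a polling interval until it passes or the timeout throws.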
04 Elements
4.1 Uploads: Handles file input fields for uploading files.
4.2 Locators (find elements based on attributes):
- class: Finds elements by class attribute.
- css: Uses CSS selectors for locating elements.
- id: Finds elements by unique id attribute.
- name: Finds elements by name attribute.
- link text: Finds links by exact visible text.
- partial link text: Finds links by partial visible text.
- tag name: Finds elements by HTML tag name.
- xpath: Uses XPath expressions to locate elements.
4.3 Finders: Locate elements on the page using locators.
4.4 Interactions:
- Click: Clicks a web element.
- Send Keys: Inputs text into fields.
- Clear: Clears input fields or text areas.
- Submit: Submits forms programmatically.
- Select: Selects dropdown options.
4.5 Information:
- Is displayed: Checks if an element is visible.
- Is enabled: Checks if an element is interactable.
- Is selected: Checks if an element is selected (e.g., checkbox).
- Tag name: Gets the HTML tag of an element.
- Size and position: Retrieves element size and screen position.
- Get CSS Value: Gets computed CSS property values of elements.
- Text Content: Fetches visible text from elements.
- Fetch Properties/Attributes: Retrieves attributes or DOM properties.
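As a browser-free illustration of how the XPath locators above work, Java's built-in javax.xml.xpath package can evaluate the same kind of expression this project later passes to By.xpath. The markup and helper method below are made up for demonstration; real pages are located through Selenium:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XPathDemo {

    // Evaluates an XPath expression against an XML snippet and
    // returns the requested attribute of the first matching element.
    static String findAttribute(String xml, String expression, String attribute) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Element match = (Element) XPathFactory.newInstance().newXPath()
                .evaluate(expression, doc, XPathConstants.NODE);
        return match.getAttribute(attribute);
    }

    public static void main(String[] args) throws Exception {
        // Tiny, well-formed stand-in for a login page (not the practice site's real HTML).
        String html = "<form id=\"login\">"
                + "<input id=\"username\" name=\"user\"/>"
                + "<input id=\"password\" name=\"pass\"/>"
                + "</form>";
        // Same expression style as Selenium's By.xpath("//*[@id='username']").
        System.out.println(findAttribute(html, "//*[@id='username']", "name")); // user
    }
}
```

The `//*[@id='...']` pattern used throughout this project simply means "any element anywhere whose id attribute equals the given value".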
05 Interactions
- Navigation: Performs URL navigation (back, forward, refresh, etc.).
- Alerts: Handles JavaScript alerts and confirmation popups.
- Cookies: Manages cookies for sessions and authentication.
- Frames: Switches to or interacts with iframe content.
- Print Pages: Prints page to PDF or retrieves print content.
- Windows: Manages multiple browser windows or tabs.
- Virtual Authenticator: Tests WebAuthn functionality programmatically.
06 Actions
- Keyboard: Simulates keyboard inputs like typing or key combinations.
- Mouse: Simulates mouse actions like hover, drag, and drop.
- Pen: Simulates pen input for touch devices.
Encouragement
The journey of learning and improvement is always worthwhile, and small steps lead to remarkable outcomes.
Selenium Script: 8 Components
When writing a Selenium script for web automation there are always 8 basic components involved. Mastering them gives you a step-by-step process to follow when writing the step definitions file.
- Start Session: Initialize the WebDriver
- Take Action: Perform browser-level actions like navigating to a URL
- Request Information: Retrieve details like page title, URL, or cookies
- Waiting Strategy: Wait for elements to load or be interactable using explicit waits
- Find Element: Locate elements on the page with locators like ID, CSS, or XPath
- Interact: Interact with elements by clicking, typing, or selecting
- Request Information: Extract details like text or attributes from elements
- End Session: Close the browser and clean up resources
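Putting the 8 components together, a minimal sketch looks like this. It assumes the Selenium and WebDriverManager dependencies set up earlier and a local Chrome installation; the element id and sample input target the practice site used later in this project:

```java
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.time.Duration;

public class EightComponents {
    public static void main(String[] args) {
        // 1. Start Session: initialize the WebDriver.
        WebDriverManager.chromedriver().setup();
        WebDriver driver = new ChromeDriver();
        try {
            // 2. Take Action: navigate to a URL.
            driver.get("https://practice.expandtesting.com/login");
            // 3. Request Information: browser-level details such as the title.
            System.out.println("Title: " + driver.getTitle());
            // 4. Waiting Strategy: explicit wait for the element to appear.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            // 5. Find Element: locate the username field by id.
            WebElement username = wait.until(
                    ExpectedConditions.visibilityOfElementLocated(By.id("username")));
            // 6. Interact: type into the field.
            username.sendKeys("practice");
            // 7. Request Information: element-level details.
            System.out.println("Entered: " + username.getAttribute("value"));
        } finally {
            // 8. End Session: quit the browser and clean up resources.
            driver.quit();
        }
    }
}
```

The step definition files in the following sections are this same skeleton, split across Cucumber-annotated methods.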
Step 09 Scripting
Understanding WebDriver's 6 components and the Selenium script's 8 components is the first step toward mastering web-browser automation scripting. The next step is to write the first script. The objective of this project is to create automated test cases using Selenium, Java, and Cucumber to verify the functionality of the login page at
"https://practice.expandtesting.com/login".
These tests will:
- Test login with valid credentials.
- Test login with invalid credentials.
- Validate error messages for incorrect input.
- Verify successful redirection after a valid login.
Step 09.01 Feature File
After the set-up steps are complete, the first step to automate the tests above is to write the feature file. This file describes all the steps to execute the test in the Gherkin language, which reads much like plain English.
Feature: Login Functionality
  As a user
  I want to be able to login with valid credentials
  So that I can access the application

  @valid-login
  Scenario: Valid Login
    Given I am in the login page
    When I enter the valid credentials "practice" and "SuperSecretPassword!"
    Then I should be redirected to the secure page
    And I should see a welcome message "Secure Area page for Automation Testing Practice"

  @invalid-login
  Scenario: Invalid Login
    Given I am in the login page
    When I enter the invalid credentials "2" and "2"
    Then I should see an error message "Your username is invalid!"
Step 09.02 StepDefinitions
The step definition file is where the automation instructions are written, using Selenium's WebDriver and script components.
package ForgeGuard;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.time.Duration;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class ValidLogin { // Declares the main class. This is the file name that encapsulates all data and methods.
    WebDriver driver; // Declares a WebDriver variable provided by Selenium to control the browser.
    WebDriverWait wait; // Declares a WebDriverWait instance from Selenium to implement explicit waits.

    @Given("I am in the login page") // Cucumber annotation that links the step definition with the feature file.
    public void i_am_on_the_login_page() { // Defines a method that implements the behaviour of the step.
        WebDriverManager.chromedriver().setup(); // Calls the WebDriverManager library to set up the Chrome driver.
        ChromeOptions options = new ChromeOptions(); // Object of the ChromeOptions class that customizes driver behaviour.
        options.addArguments("--headless", "--disable-gpu", "--window-size=1920,1080"); // Pass arguments.
        driver = new ChromeDriver(options); // Initializes the driver object as a new instance of ChromeDriver.
        driver.get("https://practice.expandtesting.com/login"); // Navigates to the URL via the WebDriver's get method.
        wait = new WebDriverWait(driver, Duration.ofSeconds(10)); // Initializes the wait object.
    }

    @When("I enter the valid credentials {string} and {string}")
    public void i_enter_the_valid_credentials(String username, String password) {
        WebElement usernameField = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id=\"username\"]")));
        WebElement passwordField = driver.findElement(By.xpath("//*[@id=\"password\"]"));
        WebElement loginButton = driver.findElement(By.xpath("//*[@id=\"login\"]/button"));
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);
        loginButton.click();
    }

    @Then("I should be redirected to the secure page")
    public void i_should_be_redirected_to_the_secure_page() {
        String currentUrl = wait.until(ExpectedConditions.urlContains("secure")) ? driver.getCurrentUrl() : "";
        assertTrue(currentUrl.contains("secure"), "The user is not redirected to the secure page.");
    }

    @Then("I should see a welcome message {string}")
    public void i_should_see_a_welcome_message(String expectedMessage) {
        WebElement welcomeMessage = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("/html/body/main/div[4]/div/h1")));
        assertTrue(welcomeMessage.isDisplayed(), "Welcome message is not displayed");
        assertEquals(expectedMessage, welcomeMessage.getText(), "The message does not match");
        driver.quit();
    }

    @When("I enter the invalid credentials {string} and {string}")
    public void i_enter_the_invalid_credentials(String username, String password) {
        WebElement usernameField = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id=\"username\"]")));
        WebElement passwordField = driver.findElement(By.xpath("//*[@id=\"password\"]"));
        WebElement loginButton = driver.findElement(By.xpath("//*[@id=\"login\"]/button"));
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);
        loginButton.click();
    }

    @Then("I should see an error message {string}")
    public void i_should_see_an_error_message(String expectedMessage) {
        WebElement errorMessage = wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("flash")));
        assertTrue(errorMessage.isDisplayed(), "Error message is not displayed");
        assertEquals(expectedMessage, errorMessage.getText(), "Text does not match");
        driver.quit();
    }
}
Congratulations - Functionality Test Completed!
You're making steady progress toward building a robust test automation framework! Every refinement you make brings you closer to a more polished and professional setup. Stay focused, and don't forget to celebrate your wins, no matter how small! Keep up the great momentum! This project is a great first step, and every effort brings it closer to excellence! The next step is to add database interactions to functionality tests.
ForgeGuard - Functionality Testing - Database Interactions - Login
Estimated Time: 2 hours
Tech Stack: Selenium - Java - Cucumber
Keywords: Automation - Testing - QA - BDD
Experience Level: Beginner - Advanced
Note: This project is a continuation of the project
ForgeGuard - Functionality Testing
Why Database Interactions
When automating tests for modern applications, it is crucial to interact with a database to extract and validate data following business logic (mostly for filling out forms and logging in), since the operations the user can perform in the app depend largely on information from the database.
Objective
Create automated test cases using Selenium, Java, and Cucumber that use a database to interact with an app following business logic. Around 90% of the actions a user performs on any given application require some kind of database interaction.
These tests will:
- Test login with valid credentials.
- Test login with invalid credentials.
- Validate error messages for incorrect input.
- Verify successful redirection after a valid login.
Encouragement
Progress in automation testing requires persistence, and each effort lays a solid foundation for future success.
Table Of Contents
- Why Database Interactions
- Objective
- Database Testing
- Database Testing With H2
- Step 01 Set Up H2 in Your Project
- Step 02 Write a Test Scenario
- Step 03 Write The Step Definition File
- Step 04 Login Test With Database Interaction
Database Testing
Database testing is crucial because it verifies the accuracy, consistency, and integrity of data stored within a database. It ensures the reliability and credibility of an application by identifying issues like missing or duplicate records, incorrect data types, or inconsistent data relationships, ultimately preventing faulty decision-making based on inaccurate data.
Database Testing With H2
H2 Database: An in-memory, lightweight, Java-based database ideal for testing purposes. It simulates a real database environment without needing a full-fledged database installation, making it perfect for integration testing where test data is crucial.
Step 01 Set Up H2 in Your Project
The first step to set up the H2 database is to add the H2 dependency to the pom.xml:
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>2.2.220</version>
    <scope>test</scope>
</dependency>
Step 02 Write a Test Scenario
The next step is to write a test scenario to verify that the database is working correctly.
Feature: H2 Database Testing
  As a developer
  I want to test the H2 database connection
  So that I can verify that the database operations work as expected

  @database-validation
  Scenario: Validate H2 database connection and query
    Given a sample H2 database is initialized
    When a user with ID 1 is queried
    Then the user name should be "John Doe"
Step 03 Write The Step Definition File
This file will execute a test to verify the database works properly.
package ForgeGuard;

import io.cucumber.java.en.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class DatabaseSteps { // DatabaseSteps is the name of the file and the container for everything in it.
    private Connection connection; // Instance variable for the SQL connection.
    private Statement statement; // Instance variable for SQL queries.
    private ResultSet resultSet; // Instance variable for the SQL query result.

    @Given("a sample H2 database is initialized") // Cucumber annotation that maps to the feature file step.
    public void a_sample_h2_database_is_initialized() throws Exception { // Method to handle the connection.
        // Connect to the H2 database
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", ""); // Connects to H2.
        statement = connection.createStatement(); // Creates an object to send SQL queries.
        // Create a sample table and insert data
        statement.execute("CREATE TABLE USERS (ID INT PRIMARY KEY, NAME VARCHAR(255));"); // Executes SQL queries.
        statement.execute("INSERT INTO USERS (ID, NAME) VALUES (1, 'John Doe');"); // Executes SQL queries.
    }

    @When("a user with ID {int} is queried")
    public void a_user_with_id_is_queried(Integer id) throws Exception {
        // Query the database
        resultSet = statement.executeQuery("SELECT * FROM USERS WHERE ID=" + id + ";");
        resultSet.next();
    }

    @Then("the user name should be {string}")
    public void the_user_name_should_be(String expectedName) throws Exception {
        // Validate the result
        String actualName = resultSet.getString("NAME");
        assertEquals(expectedName, actualName);
        // Clean up resources
        resultSet.close();
        statement.close();
        connection.close();
    }
}
Step 04 Login Test With Database Interaction
The next step is to perform a login test that queries data from a database, simulating what happens in a real-life situation. This file expands the 000_smoketest_login.feature step definitions to include database interactions.
package ForgeGuard;

import io.cucumber.java.en.And;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.time.Duration;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class ValidLogin {
    WebDriver driver;
    WebDriverWait wait;
    private String username;
    private String password;

    @Given("I am in the login page")
    public void i_am_on_the_login_page() {
        WebDriverManager.chromedriver().setup();
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless", "--disable-gpu", "--window-size=1920,1080");
        driver = new ChromeDriver(options);
        driver.get("https://practice.expandtesting.com/login");
        wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    @When("I enter the valid credentials {string} and {string}")
    public void i_enter_the_valid_credentials(String username, String password) {
        WebElement usernameField = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id=\"username\"]")));
        WebElement passwordField = driver.findElement(By.xpath("//*[@id=\"password\"]"));
        WebElement loginButton = driver.findElement(By.xpath("//*[@id=\"login\"]/button"));
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);
        loginButton.click();
    }

    @Then("I should be redirected to the secure page")
    public void i_should_be_redirected_to_the_secure_page() {
        String currentUrl = wait.until(ExpectedConditions.urlContains("secure")) ? driver.getCurrentUrl() : "";
        assertTrue(currentUrl.contains("secure"), "The user is not redirected to the secure page.");
    }

    @Then("I should see a welcome message {string}")
    public void i_should_see_a_welcome_message(String expectedMessage) {
        WebElement welcomeMessage = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("/html/body/main/div[4]/div/h1")));
        assertTrue(welcomeMessage.isDisplayed(), "Welcome message is not displayed");
        assertEquals(expectedMessage, welcomeMessage.getText(), "The message does not match");
        driver.quit();
    }

    @When("I enter the invalid credentials {string} and {string}")
    public void i_enter_the_invalid_credentials(String username, String password) {
        WebElement usernameField = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id=\"username\"]")));
        WebElement passwordField = driver.findElement(By.xpath("//*[@id=\"password\"]"));
        WebElement loginButton = driver.findElement(By.xpath("//*[@id=\"login\"]/button"));
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);
        loginButton.click();
    }

    @Then("I should see an error message {string}")
    public void i_should_see_an_error_message(String expectedMessage) {
        WebElement errorMessage = wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("flash")));
        assertTrue(errorMessage.isDisplayed(), "Error message is not displayed");
        assertEquals(expectedMessage, errorMessage.getText(), "Text does not match");
        driver.quit();
    }

    @When("I enter the special character{string}and{string}")
    public void i_enter_the_special_character(String special_one, String special_two) {
        WebElement usernameField = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id=\"username\"]")));
        WebElement passwordField = driver.findElement(By.xpath("//*[@id=\"password\"]"));
        WebElement loginButton = driver.findElement(By.xpath("//*[@id=\"login\"]/button"));
        usernameField.sendKeys(special_one);
        passwordField.sendKeys(special_two);
        loginButton.click();
    }

    @Then("I should see an error{string}")
    public void i_should_see_an_error(String expectedMessage) {
        WebElement errorMessage = wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("flash")));
        assertTrue(errorMessage.isDisplayed(), "Error message is not displayed");
        assertEquals(expectedMessage, errorMessage.getText(), "Text does not match");
        driver.quit();
    }

    @When("I query the database for the credentials")
    public void i_query_the_database_for_the_credentials() throws Exception {
        // Step 01 Connect to the database.
        Connection connection = DriverManager.getConnection("jdbc:h2:file:./data/testdb", "sa", "");
        Statement statement = connection.createStatement();
        // Step 02 Create the table if it does not exist.
        statement.execute("CREATE TABLE IF NOT EXISTS USERS (ID INT PRIMARY KEY, NAME VARCHAR(255), PASSWORD VARCHAR(255));");
        // Step 03 Insert data if the table is empty.
        statement.execute("MERGE INTO USERS KEY(ID) VALUES (1, 'practice', 'SuperSecretPassword!');");
        // Step 04 Query the credentials.
        ResultSet resultSet = statement.executeQuery("SELECT NAME, PASSWORD FROM USERS WHERE ID = 1;");
        // Step 05 Move to the first result row.
        resultSet.next();
        // Step 06 Store the credentials in temporary (method-local) variables.
        String username = resultSet.getString("NAME");
        String password = resultSet.getString("PASSWORD");
        // Step 07 Transfer data from local to instance variables so they can be accessed from outside the method.
        this.username = username;
        this.password = password;
    }

    @Then("I should use those credentials to log in")
    public void i_should_use_those_credentials_to_log_in() {
        WebElement usernameField = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id=\"username\"]")));
        WebElement passwordField = driver.findElement(By.xpath("//*[@id=\"password\"]"));
        WebElement loginButton = driver.findElement(By.xpath("//*[@id=\"login\"]/button"));
        // Step 08 Use the data from the instance variables to log in.
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);
        loginButton.click();
    }

    @And("I should see a message{string}")
    public void i_should_see_a_message(String message) {
        WebElement welcomeMessage = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("/html/body/main/div[4]/div/h1")));
        assertTrue(welcomeMessage.isDisplayed(), "Welcome message is not displayed");
        assertEquals(message, welcomeMessage.getText(), "The message does not match");
        driver.quit();
    }
}
Conclusion
This Database Testing project is a fantastic step forward in combining data management with testing workflows. By integrating H2, Selenium, and Cucumber, it demonstrates how dynamic database connections can enhance testing efficiency and realism. This success lays a strong foundation for tackling more complex scenarios and ensures confidence in your application's functionality. Great work; keep building on this momentum!
Functionality Testing - Database Interactions - Form
Estimated Time: 2 hours
Tech Stack: Selenium - Java - Cucumber - H2
Keywords: Automation - Testing - QA - BDD
Experience Level: Beginner - Advanced
Note: This project is a continuation of the project
ForgeGuard - Database Testing
Why Form Testing
After logins, forms are the most used functionality on the web, as they collect user information so that business logic can then be applied.
Objective
Create automated test cases using Selenium, Java, and Cucumber that use a database to get the data to fill out a form.
These tests will:
- Test form with valid credentials.
- Test form with invalid credentials.
- Validate error messages for incorrect input.
- Verify successful redirection after form completion.
Encouragement
"Discovering the unexpected is more important than confirming the known." - George E. P. Box
Table Of Contents
- Why Form Testing
- Objective
- Step 01 Database Interaction Via Terminal
- Step 02 Feature File
- Step 03 Steps Definition File
- Step 04 Conclusion
Step 01 Database Interaction Via Terminal
To interact with a form during testing, it's possible to pass the form data in the step definitions file, as in the ForgeGuard - Database Testing example. However, in production, logic and data commonly don't mix, mostly for security reasons but also for practicality when the data is long.
- Navigate to the H2 jar file. Using Maven, the file will normally be at:
cd ~/.m2/repository/com/h2database/h2/2.2.220
- Launch the H2 Shell:
java -cp h2-2.2.220.jar org.h2.tools.Shell
- Connect to the database. Some information is required, including the URL, driver, user, and password:
URL: jdbc:h2:file:C:/03JOB SEARCH/04ForgeGuard/ForgeGuard/data/testdb # Absolute path
USER: sa
PASSWORD:
DRIVER:
- Querying the database: Once in the database shell, it's possible to interact with it through SQL queries such as:
SHOW TABLES;
- Altering/modifying tables: Tables can be modified to hold form data as follows:
ALTER TABLE USERS ADD (CONTACT_NAME VARCHAR(255), CONTACT_NUMBER VARCHAR(15), PICKUP_DATE DATE, PAYMENT_METHOD VARCHAR(50));
- Updating rows to fit form data:
UPDATE USERS SET NAME = 'Jane Doe', CONTACT_NUMBER = '012-5678901', PICKUP_DATE = '2025-03-01', PAYMENT_METHOD = 'credit card' WHERE ID = 2;
- Inserting data into the table:
INSERT INTO USERS (ID, NAME, PASSWORD, CONTACT_NUMBER, PICKUP_DATE, PAYMENT_METHOD)
VALUES (2, 'Jane Doe', 'password123', '012-5678901', '2025-03-01', 'cash on delivery');
Step 02 Feature File
Feature: Form
  As a developer
  I want to be able to extract data from the database
  So I can fill up forms with it

  @form-submission
  Scenario: Form Submission
    Given I am in the form page
    When I extract the information from the database
    When I use the information to fill up the form
    Then I should see the message"Thank you for validating your ticket"
Step 03 Steps Definition File
package ForgeGuard;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.Select;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.time.Duration;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class FormSteps {
    WebDriver driver;
    WebDriverWait wait;
    private String username;
    private String number;
    private String date;
    private String payment;

    @Given("I am in the form page")
    public void i_am_in_the_form_page() {
        WebDriverManager.chromedriver().setup();
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless", "--disable-gpu", "--window-size=1920,1080");
        driver = new ChromeDriver(options);
        driver.get("https://practice.expandtesting.com/form-validation");
        wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    @When("I extract the information from the database")
    public void i_extract_the_information_from_the_database() throws Exception {
        // Connect to the H2 database and execute a query to fetch user data
        try (Connection connection = DriverManager.getConnection("jdbc:h2:file:C:/03JOB SEARCH/04ForgeGuard/ForgeGuard/data/testdb", "sa", "");
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT * FROM USERS WHERE ID = 2;")) {
            if (resultSet.next()) {
                // Store the retrieved data into instance variables
                this.username = resultSet.getString("NAME");
                this.number = resultSet.getString("CONTACT_NUMBER");
                this.date = resultSet.getString("PICKUP_DATE");
                this.payment = resultSet.getString("PAYMENT_METHOD");
            } else {
                throw new Exception("No data found in the USERS table.");
            }
        }
    }

    @When("I use the information to fill up the form")
    public void i_use_the_information_to_fill_up_the_form() {
        // Locate the form fields
        WebElement usernameField = wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("validationCustom01")));
        WebElement numberField = driver.findElement(By.id("validationCustom05"));
        WebElement dateField = driver.findElement(By.name("pickupdate"));
        Select paymentField = new Select(driver.findElement(By.id("validationCustom04")));
        WebElement submitButton = driver.findElement(By.xpath("/html/body/main/div[3]/div/div/div/div/form/div[5]/button"));
        // Fill the form with extracted data
        usernameField.sendKeys(username);
        numberField.sendKeys(number);
        dateField.sendKeys(date);
        paymentField.selectByVisibleText(payment);
        submitButton.click();
    }

    @Then("I should see the message{string}")
    public void i_should_see_the_message(String expectedMessage) {
        // Verify the success message
        WebElement submissionMessage = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("/html/body/main/div[3]/div/div/p")));
        assertEquals(expectedMessage, submissionMessage.getText(), "The message does not match");
        driver.quit();
    }
}
Step 04 Conclusion
The project has been a remarkable journey, showcasing the power of integrating Selenium, Cucumber, and the H2 database to build a comprehensive and efficient test automation framework. The progress made is a clear indicator of how dedication and persistence lead to meaningful achievements. This is only the beginning of a promising path in test automation. Keep up the fantastic work!
ForgeGuard - API Testing
Estimated Time: 2 hours
Tech Stack: Selenium - Java - Cucumber - RestAssured
Keywords: Automation - Testing - QA - BDD
Experience Level: Beginner - Advanced
Note: This project is a continuation of the project
ForgeGuard - Form Testing
Why API Testing
API testing is crucial because it ensures that Application Programming Interfaces (APIs) function correctly, guaranteeing their reliability, performance, security, and overall quality. By identifying potential issues early in the development cycle, it minimizes the risk of bugs reaching production and allows for faster fixes, ultimately leading to a better user experience and cost savings.
Objective
To perform manual (Postman) and automated (REST Assured) API tests against the Notes API at https://practice.expandtesting.com/notes/api. These tests will:
- Create notes with title and content.
- Categorize notes into different categories.
- Update and delete notes.
- Search notes by title or category.
Table Of Contents
- Why API Testing
- Objective
- Best Practice Step 1: Understand the API Requirements
- Best Practice Step 2: Set Up a Testing Environment
- Best Practice Step 3: Perform Functional Testing
- Best Practice Step 4: Perform Authentication & Security Testing
- Best Practice Step 5: Perform Performance & Load Testing
- Best Practice Step 6: Run Regression & Integration Tests
- Best Practice Step 7: Monitor & Log API Calls
Encouragement
You're making great progress; keep up the momentum!
Best Practice Step 1: Understand the API Requirements
Review API Documentation
Identify available endpoints, methods (GET, POST, PUT, DELETE), request parameters, and response structures. Check authorization methods (API key, Bearer token, OAuth, etc.).
Define the Test Scope
Determine what needs to be tested:
- Functional Testing (Does it work?)
- Security Testing (Is it secure?)
- Performance Testing (Is it fast?)
- Load Testing (Can it handle traffic?)
- Integration Testing (Does it work with other systems?)
Best Practice Step 2: Set Up a Testing Environment
Choose API Testing Tools
- Postman - Manual and automated API testing.
- REST Assured - Automated API testing for Java.
- Newman - Postman CLI runner for automation.
- JMeter - Load testing for APIs.
- Cypress/Playwright - API testing within E2E tests.
Ensure Test Data Availability
If the API interacts with a database, make sure you have test data set up. Use mock servers (e.g., Postman Mock Server, WireMock) for isolated testing.
Set Up Authentication & Headers
Store API keys/tokens securely (e.g., in environment variables). Example: Use Postman Environments to manage variables like BASE_URL, TOKEN, USER_ID.
Best Practice Step 3: Perform Functional Testing
Validate HTTP Methods & Endpoints
Test all supported request methods (GET, POST, PUT, DELETE).
Ensure that endpoints return expected responses and status codes:
200 OK - Success
201 Created - Resource Created
400 Bad Request - Invalid Request
401 Unauthorized - Missing Token
403 Forbidden - Insufficient Permissions
404 Not Found - Invalid Endpoint
Check Response Data Consistency
- Verify correct data types (string, int, boolean).
- Ensure required fields are present.
- Validate date formats (ISO 8601).
- Test empty or missing fields.
Validate Error Handling
- Send invalid inputs and verify proper error messages.
- Ensure rate limits are enforced.
Best Practice Step 4: Perform Authentication & Security Testing
Verify Authentication Mechanisms
- Test login requests and token expiration.
- Try unauthorized requests (no token, wrong token).
Check for Security Vulnerabilities
- SQL Injection - Try sending " OR 1=1; -- in input fields.
- Cross-Site Scripting (XSS) - Test by injecting script tags into input fields.
- Broken Authentication - Try using expired tokens.
Best Practice Step 5: Perform Performance & Load Testing
Measure Response Times
Ensure API response time meets SLAs (e.g., < 500ms). Test different payload sizes.
Simulate High Traffic
Use JMeter or K6 to send concurrent requests. Monitor API failures and slowdowns.
Best Practice Step 6: Run Regression & Integration Tests
Automate API Tests
Write automated test scripts using Postman/Newman, REST Assured, or Cypress. Integrate with CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD).
Check API Dependencies
Ensure the API integrates well with databases, other APIs, or frontend applications. Use mock services when dependencies are unavailable.
Best Practice Step 7: Monitor & Log API Calls
Enable Logging
- Use tools like Grafana + Loki, New Relic, or Elastic Stack (ELK) for monitoring.
Track API Health
- Set up alerts for API failures (e.g., UptimeRobot, Prometheus).
Manual Testing with Postman
About Postman:
Postman is a tool that can be used without much training, for both manual and automated testing of APIs. It makes it easy to automate API tests and group them into collections so you can chain tests together.
Step 01 Understand the Notes API requirements
To access the API, you must first create an account and log in using your email and password. This generates an authentication token needed for protected resources. Keep the token safe; it is required for authorization.
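As a rough sketch of how that token travels with a request, here is a JDK-only example (java.net.http, rather than Postman or RestAssured) that builds, but does not send, a profile request; the token value is made up for illustration:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class AuthHeaderSketch {
    // Builds (but does not send) a request carrying the auth token header.
    static HttpRequest profileRequest(String token) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://practice.expandtesting.com/notes/api/users/profile"))
                .header("x-auth-token", token) // token obtained from the login response
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = profileRequest("example-token"); // placeholder token
        System.out.println(req.method() + " " + req.uri());
        System.out.println("x-auth-token present: "
                + req.headers().firstValue("x-auth-token").isPresent());
    }
}
```

Sending the request would use HttpClient.send; Postman does the same thing through its Headers tab and environment variables.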
Step 02 Set Up a Testing Environment
A Postman project is structured as Workspaces, which contain collections, which in turn contain the requests. For this project:
- Tasks:
- Create a collection.
- Create an Environment to store variables such as base_url and x-auth-token (header).
- Create a user to interact with the API.
POST https://practice.expandtesting.com/notes/api/users/register
Body parameters (info that goes along with the request):
{
"user": "practice",
"email": "practi1@expandtesting.com",
"password": "SuperSecurePassword123!"
}
Step 03 Perform Functional Testing
After setting up the environment, it's time to test the functionality.
- Health Check
GET /health-check
Check the health of the Notes API service.
- Get Profile
GET /users/profile
Retrieve user profile information.
Automating API Testing with Rest Assured
"Rest Assured" is an open-source
Java library primarily used for testing REST APIs
. It is specifically designed for
testing REST APIs, which are a common architecture for web services. it integrates seamlessly with Java-based testing
frameworks like JUnit and TestNG.
Step 01 RestAssured Set Up
To use RestAssured, the following dependencies must be added to the pom.xml file.
<dependencies>
<!-- RestAssured for API Testing -->
<dependency>
<groupId>io.rest-assured</groupId>
<artifactId>rest-assured</artifactId>
<version>5.3.0</version>
<scope>test</scope>
</dependency>
<!-- JSON Processing -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.15.0</version>
</dependency>
<!-- JUnit for Assertions -->
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.9.2</version>
<scope>test</scope>
</dependency>
<!-- JUnit Engine -->
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<version>5.9.2</version>
<scope>test</scope>
</dependency>
</dependencies>
After adding the dependencies to the pom.xml, run a clean install so Maven can download the new dependencies.
mvn clean install -DskipTests # -DskipTests saves resources by skipping test execution
Step 02 API feature file
The process for testing an API using BDD with Cucumber doesn't change: once the dependencies are in the pom.xml file, create a feature file with the test scenarios.
Feature: API Tests
As a tester I want to test the Notes API
Using the RestAssured Library to make sure I can automate
this process in the future.
@api-check
Scenario: Verify API is running
When I send a health check request
Then the response status should be 200
@api-login
Scenario: Login
When I send a login request
Then the response status must be 200
Step 03 Step Definitions
With the feature file in place, it's time to develop the step definition file where the actual test logic will live.
package ForgeGuard;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import static io.restassured.RestAssured.given;
import static org.junit.jupiter.api.Assertions.assertEquals;
public class ApiStepDefinitions {
private Response response; // Instance variable of type Response (a RestAssured class exposing response methods)
@When("I send a health check request")
public void i_send_a_health_check_request() {
response = given() // Method chaining. Each method response depends on the one before.
.baseUri("https://practice.expandtesting.com/notes/api")
.when()
.get("/health-check");
}
@Then("the response status should be {int}")
public void the_response_status_should_be(int expectedStatus) {
System.out.println("Actual Response Status: " + response.getStatusCode());
System.out.println("Response Body: " + response.getBody().asString());
assertEquals(expectedStatus, response.getStatusCode(), "API response status mismatch");
}
@When("I send a login request")
public void i_send_a_login_request() {
response = given()
.baseUri("https://practice.expandtesting.com/notes/api")
.contentType(ContentType.JSON)
.body("{ \"email\": \"practi1@expandtesting.com\", \"password\": \"SuperSecurePassword123!\" }")
.when()
.post("/users/login");
}
@Then("the response status must be {int}")
public void the_response_status_must_be(int expectedStatus) {
System.out.println("Actual Response Status: " + response.getStatusCode());
System.out.println("Response Body: " + response.getBody().asString());
assertEquals(expectedStatus, response.getStatusCode(), "API response status mismatch");
}
@When("I request user info")
public void i_request_user_info() {
response = given()
.baseUri("https://practice.expandtesting.com/notes/api")
.contentType(ContentType.JSON) //Authorization info must be looked for in the API docs.
.header("x-auth-token", "9f1272d3b3e146089d475a1ee024dc95a6451739c975432ea41d0e2ed76c77a6")
.body("{ \"email\": \"practi1@expandtesting.com\", \"password\": \"SuperSecurePassword123!\" }")
.when()
.get("/users/profile");
}
@Then("the response must be {int}")
public void the_response_must_be(int expectedStatus) {
System.out.println("Response Status: " + response.getStatusCode());
System.out.println("Response body: " + response.getBody().asString());
assertEquals(expectedStatus, response.getStatusCode(), "There is a status mismatch");
}
}
Conclusion
Congratulations on finishing the testing project! That's a huge achievement; be proud of the dedication and effort you put into it. The API testing project has successfully validated endpoints, ensured data integrity, and strengthened automation workflows, marking a solid foundation for scalable and reliable API quality assurance.
OOP with Java
1 hour
OOP (object-oriented programming) allows developers to model software much the way we think: in terms of classes (a category of something), objects (an element of that category), attributes (a characteristic of the element), and methods (something the element does).
Class:
public class ClassName {
// Class body containing attributes and methods
}
public:
This is an access modifier, in this case, making the class accessible from anywhere in the program.
ClassName:
This is the name you choose for your class, following Java naming conventions (starts with an uppercase letter).
// Class body:
This is where you define the attributes and methods of the class.
Object:
ClassName objectName = new ClassName();
ClassName:
Replace this with the actual name of your class.
objectName:
This is the name you choose for your object (reference variable).
new ClassName():
This creates a new instance (object) of the ClassName class using the new keyword and the class constructor (discussed later).
Attribute:
private dataType attributeName;
private:
This is an access modifier, in this case restricting access to the attribute to within the class. Other access modifiers like public and protected are also available.
dataType:
This specifies the data type of the attribute (e.g., int, String, double).
attributeName:
This is the name you choose for your attribute.
Method:
public void methodName(dataType parameter) {
// Method body containing statements
}
public:
Similar to the class, this makes the method accessible from anywhere. Other access modifiers exist as well.
void:
This specifies the method's return type. void means it doesn't return any value. Methods can also return other data types.
methodName:
This is the name you choose for your method.
dataType parameter:
This defines an optional parameter the method can receive. Methods can have multiple parameters with different data types.
// Method body:
This is where you write the code the method executes when called.
Example:
Let's model a new Tesla Model S car with an engine that can be started.
public class Car { // Declare class Car
private String model; // with a model attribute
public void setModel(String model) { // Setter so other classes can assign the private attribute
this.model = model;
}
public void startEngine() { // and a startEngine method
System.out.println("Engine started!");
}
}
public class Main { // Defines the entry point where execution starts.
public static void main(String[] args) {
Car myCar = new Car(); // Create object
myCar.setModel("Tesla Model S"); // Set attribute value
myCar.startEngine(); // Call method
}
}
Static Methods
Static methods in Java are a special type of method that belongs to the class itself, rather than to an object of the class. This is useful when you want to call a method straight from the class without having to instantiate (create) an object first. While static methods offer convenience, they can also lead to tight coupling between classes if overused. Favor non-static methods when dealing with object-specific data or behavior.
public class MathUtils {
public static int add(int a, int b) { // declare static method add
return a + b;
}
public static double calculateArea(double radius) { //declare static method calculateArea
return Math.PI * radius * radius; // Accessing a static member of Math class
}
public static final double PI = 3.14159; // Static final variable (constant)
}
public class Main {
public static void main(String[] args) {
int sum = MathUtils.add(5, 3); // Calling static methods without object
System.out.println("Sum: " + sum);
double circleArea = MathUtils.calculateArea(10.0);
System.out.println("Circle Area: " + circleArea);
}
}
Constructor Methods
The primary purpose of constructors is to initialize an object's attributes with starting values. Otherwise, attributes start with default values (0 for numbers, false for booleans, and null for object references). If you don't want the object initialized like that, use a constructor.
public class Car {
private String model;
private int year;
// Default constructor (no-arg)
public Car() {
// Assigning default values (optional)
this.model = "Unknown";
this.year = 2000;
}
// Parameterized constructor
public Car(String model, int year) {
this.model = model;
this.year = year;
}
// Getters so other classes can read the private attributes
public String getModel() { return model; }
public int getYear() { return year; }
}
public class Main {
public static void main(String[] args) {
// Using default constructor
Car car1 = new Car();
System.out.println("Car 1: Model - " + car1.getModel() + ", Year - " + car1.getYear());
// Using parameterized constructor
Car car2 = new Car("Tesla Model S", 2023);
System.out.println("Car 2: Model - " + car2.getModel() + ", Year - " + car2.getYear());
}
}
Overloading Methods
Sometimes you need a method to behave slightly differently according to the number and data types of its parameters. By overloading methods, you can provide methods with the same name but specialized behavior based on the arguments provided. This makes code easier to understand and avoids a clutter of differently named methods with nearly identical purposes.
public class Calculator {
// Add two integers
public int add(int a, int b) {
return a + b;
}
// Add two doubles
public double add(double a, double b) {
return a + b;
}
// Add three integers (optional)
public int add(int a, int b, int c) {
return a + b + c;
}
}
public class Main {
public static void main(String[] args) {
Calculator calc = new Calculator();
int sumInt = calc.add(5, 3); // Calls the first add method (int, int)
double sumDouble = calc.add(2.5, 1.7); // Calls the second add method (double, double)
System.out.println("Integer sum: " + sumInt);
System.out.println("Double sum: " + sumDouble);
}
}
Overriding Methods
Subclasses can redefine inherited methods from their parent classes. By overriding methods, you can create more specialized classes that inherit core functionality from parent classes but customize specific behaviors.
public class Animal {
public void makeSound() {
System.out.println("Generic animal sound");
}
}
public class Dog extends Animal {
@Override
public void makeSound() {
System.out.println("Woof!");
}
}
public class Cat extends Animal {
@Override
public void makeSound() {
System.out.println("Meow!");
}
}
public class Main {
public static void main(String[] args) {
Animal animal1 = new Animal();
animal1.makeSound(); // Generic sound
Animal animal2 = new Dog(); // Upcasting (treated as Animal at compile time)
animal2.makeSound(); // Overridden sound (Woof!) due to polymorphism at runtime
Cat cat = new Cat();
cat.makeSound(); // Meow!
}
}
Decision Structures
Allow your Java programs to make choices and execute different code blocks based on certain conditions.
If Statement
int age = 20;
if (age >= 18) {
System.out.println("You are eligible to vote.");
}
If Else
int number = 10;
if (number > 0) {
System.out.println("The number is positive.");
} else {
System.out.println("The number is non-positive.");
}
If Else If Statement
char grade = 'A';
if (grade == 'A') {
System.out.println("Excellent!");
} else if (grade == 'B') {
System.out.println("Well done!");
} else {
System.out.println("Keep practicing!");
}
Switch Statement
- Used for multi-way branching based on the value of an expression.
- Each case label checks for a specific value of the expression.
- An optional break statement prevents fall-through to the next case.
- A default case can handle situations where none of the other cases match.
String day = "Monday";
switch (day) {
case "Monday":
case "Tuesday":
case "Wednesday":
case "Thursday":
case "Friday":
System.out.println("It's a weekday!");
break;
case "Saturday":
case "Sunday":
System.out.println("It's a weekend!");
break;
}
Conclusion
Learning OOP with Java equips you with a powerful and adaptable approach to software development, preparing you for success in a wide range of programming endeavors.
Next
We will discuss more advanced topics such as operators, data types, and repetition structures.
OOP with Java II
1 hour
Operators
Operators are special symbols that perform specific operations on values (operands) and produce results. They are the building blocks of expressions and statements in your Java code.
Arithmetic
Symbol | Example |
---|---|
+ | 3+5 |
- | 5-3 |
* | 3*5 |
/ | 5/3 |
% | 5%3 |
public class AreaCalculator {
public static void main(String[] args) {
// Declare variables to store length and width
int length = 10;
int width = 5;
// Calculate area using arithmetic operator
int area = length * width;
// Print the calculated area
System.out.println("The area of the rectangle is: " + area);
}
}
Relational
Name | Symbol | Example | Value |
---|---|---|---|
Equality | == | 3 == 3 | True |
Inequality | != | 3 != 3 | False |
Greater than | > | 5 > 7 | False |
Less than | < | 5 < 7 | True |
Greater than or equal to | >= | 5 >= 7 | False |
Less than or equal to | <= | 5 <= 7 | True |
public class RelationalOperatorExample {
public static void main(String[] args) {
int age = 25;
boolean isAdult = age >= 18; // Checking if age is greater than or equal to 18
if (isAdult) {
System.out.println("You are an adult.");
} else {
System.out.println("You are not an adult.");
}
}
}
Compound Assignment
Example | Use Case |
---|---|
x+=3 | x = x + 3 |
x-=3 | x = x - 3 |
x*=3 | x = x * 3 |
x/=3 | x = x / 3 |
x%=3 | x = x % 3 |
int count = 10;
// Increment count by 1 (same as count = count + 1)
count += 1;
// Decrement count by 2 (same as count = count - 2)
count -= 2;
// Multiply count by 5 (same as count = count * 5)
count *= 5;
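Putting those compound-assignment steps into a runnable snippet makes the running value easy to follow (starting from count = 10, as in the fragment above; the class and method names are ours):

```java
public class CompoundOps {
    // Applies the same sequence of compound assignments as the fragment above.
    static int applyAll(int count) {
        count += 1;  // 10 -> 11
        count -= 2;  // 11 -> 9
        count *= 5;  // 9 -> 45
        return count;
    }

    public static void main(String[] args) {
        System.out.println(applyAll(10)); // prints 45
    }
}
```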
Increment and Decrement
Example | Use Case |
---|---|
++x | Add 1 before using x |
x++ | Use x and then add 1 |
--x | Subtract 1 before using x |
x-- | Use x and then subtract 1 |
public class IncrementDecrementExample {
public static void main(String[] args) {
int count = 5;
// Pre-increment: increment count by 1 and then assign the new value to result
int result = ++count; // result becomes 6, count becomes 6
System.out.println("After pre-increment, count = " + count); // Output: After pre-increment, count = 6
int anotherResult = count--; // anotherResult gets the current value of count (6), then count is decremented to 5
System.out.println("After post-decrement, anotherResult = " + anotherResult); // Output: After post-decrement, anotherResult = 6
System.out.println("After post-decrement, count = " + count); // Output: After post-decrement, count = 5
}
}
Logical
Name | Example | Value |
---|---|---|
And | true && false | False |
Or | true || false | True |
Not | !true | False |
public class AgeChecker {
public static void main(String[] args) {
int age = 25;
boolean isAdult = age >= 18 && age < 65; // Combining conditions with AND
if (isAdult) {
System.out.println("You are an adult.");
} else {
System.out.println("You are not an adult.");
}
}
}
Data Types
Here are the most used data types.
Primitive
Data Type | Description | Example |
---|---|---|
int | integer values | 1; 45; 465 |
float | decimal numbers | 1.34F; 4.56F |
double | precise decimal numbers | 4.54; 6.42 |
char | single characters | '1'; '%' |
boolean | logical values | true, false |
public class AreaCalculator {
public static void main(String[] args) {
// Declare variables with appropriate data types
int length = 10; // int for a whole number (length)
double width = 5.2; // double for a decimal number (width)
double area; // double to store the calculated area (decimal)
// Calculate the area
area = length * width;
// Print the result with a descriptive message
System.out.println("The area of the rectangle is: " + area);
}
}
Non-Primitive
Data Type | Description | Example |
---|---|---|
String | Sequence of characters | "Hello World" |
Array | Ordered items(same data type) | int[] numbers = {1,2,3,4}; |
Class | Object blueprint | public class Classname {} |
Interface | Specifies methods for a class | public interface Interfacename |
public interface Drawable {
// Declare an abstract method without implementation
void draw();
}
public class Square implements Drawable {
@Override
public void draw() { // Provide the implementation the interface requires
System.out.println("Drawing a square");
}
}
public class Main {
public static void main(String[] args) {
// Create an object of a class implementing the interface
Drawable drawable = new Square();
// Call the draw method through the interface reference
drawable.draw();
}
}
Repetition Structures / Loops
Allow you to execute a block of code multiple times based on a certain condition.
While loop
This is the most basic loop construct. It repeatedly executes a code block as long as a specified condition evaluates to true.
while (condition) {
// code to be executed
}
Do-while loop
Similar to the while loop, but it guarantees that the code block is executed at least once, even if the condition is initially false.
do {
// code to be executed
} while (condition);
For loop
This loop combines initialization, condition checking, and increment/decrement in a concise syntax. It's often preferred for iterating a fixed number of times.
for (initialization; condition; increment/decrement) {
// code to be executed
}
Package Structure
Packages provide a logical way
to group related classes, while directories on your disk reflect this structure.
com
- yourcompany.ecommerce
- model
- Product.java
- Order.java
- Customer.java
- service
- ProductService.java
- OrderService.java
- CustomerService.java
- controller
- ProductController.java
- OrderController.java
- CustomerController.java
Directory Structure
Directories map directly to the package structure on disk. A package named com.example.myapp would have a corresponding directory structure like com/example/myapp. Each directory can contain Java source files (.java) and potentially subdirectories for sub-packages.
src
- com
- yourcompany
- ecommerce
- model
- Product.java
- Order.java
- Customer.java
- service
- ProductService.java
- OrderService.java
- CustomerService.java
- controller
- ProductController.java
- OrderController.java
- CustomerController.java
Access Modifiers
Keywords that define the accessibility of classes, methods, variables, and constructors within a program.
Modifier | Description |
---|---|
public | Accessible everywhere |
private | Within the class only |
protected | Within the same package and in subclasses |
default | Within the same package only |
Code Encapsulation
Code encapsulation is a fundamental principle in object-oriented programming (OOP): data (attributes) and the methods (functions) that operate on that data are bundled together within a class. Attributes and helper methods are marked private so they can be accessed only within the class. This is useful when you don't want objects outside the class to access the class's data, as with a bank account. Take a look at the example below.
public class CurrentAccount {
private double balance; // Private attribute to store the balance
public void deposit(double amount) { // Deposits a given amount into the account balance
balance += amount;
}
public double getBalance() { // Retrieves the balance from CurrentAccount
return balance;
}
}
Inheritance
Allows you to create sub-classes that inherit attributes and methods from the superclasses.
public class Account { // Superclass - defines generic account properties
private int accountNumber; // Account number
public void withdraw(double amount) {
// Implement logic to withdraw from the account
}
}
public class CurrentAccount extends Account { // Subclass inherits from Account by "extending" it.
private double balance; // Specific to current accounts
public void deposit(double amount) {
// Implement logic to deposit into the current account (can leverage withdraw from Account)
}
public double getBalance() { // New behavior specific to the subclass
return balance;
}
}
Polymorphism
Allows objects to take different forms. We have already seen this concept in action in the overriding methods section. Here's a refresher.
class Animal {
public void makeSound() {
System.out.println("Generic animal sound");
}
}
class Dog extends Animal {
@Override // This annotation indicates method overriding
public void makeSound() {
System.out.println("Woof!");
}
}
class Cat extends Animal {
@Override
public void makeSound() {
System.out.println("Meow!");
}
}
public class Main {
public static void main(String[] args) {
Animal animal1 = new Dog(); // Upcasting (assigning subclass to superclass)
Animal animal2 = new Cat();
animal1.makeSound(); // Output: Woof! (calls Dog's makeSound)
animal2.makeSound(); // Output: Meow! (calls Cat's makeSound)
}
}
Conclusion
Operators are the essential tools that let you perform operations on data in your Java programs. Without them, you wouldn't be able to do basic things like calculations (addition, subtraction, etc.) or comparisons.
Next
We will learn about abstract classes, interface definitions, and graphic interfaces.
OOP III
2 hours
Pass Arguments Via Command Line
In Java, you can access arguments passed via the command line through the String[] args parameter in the main method of your program.
public class ArgumentExample {
public static void main(String[] args) {
// Access command-line arguments here
System.out.println("Number of arguments: " + args.length);
for (int i = 0; i < args.length; i++) {
System.out.println("Argument " + i + ": " + args[i]);
}
}
}
Running the Program via the CLI
java ArgumentExample This is "a string argument! Another argument"
Output
Number of arguments: 3
Argument 0: This
Argument 1: is
Argument 2: a string argument! Another argument
Java directory structure
project_name/
README.md # Project description and instructions
LICENSE # License file
src/ # Source code directory
main/
java/ # Java source code goes here, organized by package
com/
example/
... your project's java classes ...
resources/ # Resource files (images, configuration files, etc.)
test/ # Unit test source code (if applicable)
java/ # Similar structure for test code packages
pom.xml # Project configuration file (for Maven projects)
build.gradle # Project configuration file (for Gradle projects)
Processing User Input with Scanner
The Scanner class in Java provides a convenient way to read user input from the console. Here's a practical example that demonstrates using the Scanner class to calculate the area of a rectangle based on user-provided width and height.
import java.util.Scanner;
public class RectangleArea {
public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);
System.out.print("Enter the width of the rectangle: ");
double width = scanner.nextDouble(); // Read width as a double
System.out.print("Enter the height of the rectangle: ");
double height = scanner.nextDouble(); // Read height as a double
// Calculate and display the area
double area = width * height;
System.out.println("The area of the rectangle is: " + area);
scanner.close(); // Close the scanner (optional but good practice)
}
}
Exception Handling
Exception handling in Java is a powerful mechanism for managing errors that occur during program execution.
public class ExceptionExample {
public static void main(String[] args) {
int[] numbers = {1, 2, 3};
try {
System.out.println(numbers[10]); // This will cause an IndexOutOfBoundsException
} catch (IndexOutOfBoundsException e) {
System.out.println("Array index out of bounds: " + e.getMessage());
} finally {
System.out.println("This code will always execute.");
}
}
}
Assertions
An assertion is a statement you believe to be true during program execution. If the assertion evaluates to false, the program throws an AssertionError. Note that assertions are disabled by default; enable them with the -ea JVM flag (java -ea Factorial).
Syntax Example:
int age = 20;
assert age >= 18 : "Person must be an adult";
Example:
public class Factorial {
public static long calculateFactorial(int n) {
// Assertion for non-negative input
assert n >= 0 : "Factorial is only defined for non-negative numbers";
long result = 1;
for (int i = 2; i <= n; i++) {
result *= i;
}
return result;
}
public static void main(String[] args) {
// Valid input
long result = calculateFactorial(5);
System.out.println("5! = " + result); // Output: 5! = 120
// Invalid input (negative number) - throws AssertionError
try {
calculateFactorial(-2);
} catch (AssertionError e) {
System.out.println(e.getMessage()); // Output: Factorial is only defined for non-negative numbers
}
}
}
Abstract classes
Abstract classes cannot be instantiated directly. They serve as base classes that define a common structure and behavior for subclasses.
public abstract class Shape {
public abstract double calculateArea(); // Abstract method
public void printInfo() {
System.out.println("This is a shape.");
} // Concrete method with implementation
}
public class Circle extends Shape {
private double radius;
public Circle(double radius) {
this.radius = radius;
}
@Override
public double calculateArea() {
return Math.PI * radius * radius;
}
}
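A short usage sketch makes the key point concrete: Shape itself cannot be instantiated, but a Shape reference can hold a Circle. The nested-class layout and the areaOf helper below are our own, used only to keep the demo self-contained:

```java
public class ShapeDemo {
    static abstract class Shape {
        abstract double calculateArea(); // Abstract method: subclasses must implement it
    }

    static class Circle extends Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        @Override double calculateArea() { return Math.PI * radius * radius; }
    }

    // Helper for demonstration: builds a Circle behind a Shape reference.
    static double areaOf(double radius) {
        // Shape s = new Shape(); // would not compile: Shape is abstract
        Shape s = new Circle(radius); // a Shape reference can hold any concrete subclass
        return s.calculateArea();
    }

    public static void main(String[] args) {
        System.out.println("Area: " + areaOf(2.0));
    }
}
```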
Javadoc
Javadoc generates HTML documentation. These pages explain the classes, methods, and fields in your code.
/**
* This class represents a simple calculator.
*
* @author Your Name Here
*/
public class Calculator {
/**
* Adds two numbers together.
*
* @param num1 the first number
* @param num2 the second number
* @return the sum of num1 and num2
*/
public int add(int num1, int num2) {
return num1 + num2;
}
}
For Each Loop
A simpler loop syntax compared to traditional for loops.
for (dataType element : array/collection) {
// Code to be executed for each element
}
For Each vs For Loop
// For each loop
int[] numbers = {1, 2, 3, 4, 5};
for (int number : numbers) {
System.out.println(number);
}
For Loop
int[] numbers = {1,2,3,4,5};
for (int i = 0; i < numbers.length; i++) {
System.out.println(numbers[i]);
}
Enum
An enum in Java is a special data type that allows you to define a set of named constants. They are commonly used to represent fixed values, like days of the week (MONDAY, TUESDAY, WEDNESDAY) or compass directions (NORTH, SOUTH, EAST, WEST).
public enum Day {
MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY
}
Day today = Day.FRIDAY;
if (today == Day.SATURDAY || today == Day.SUNDAY) { // The enum defines no WEEKEND constant, so compare against its actual values
System.out.println("Time to relax!");
}
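Building on the Day enum above, here is a hedged sketch of two common enum idioms: iterating over values() and switching on a constant (the isWeekend helper is our own, added for illustration):

```java
public class DayDemo {
    enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    // A switch over enum constants; the compiler checks the case labels.
    static boolean isWeekend(Day d) {
        switch (d) {
            case SATURDAY:
            case SUNDAY:
                return true;
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        for (Day d : Day.values()) { // values() lists every constant in declaration order
            System.out.println(d + " weekend? " + isWeekend(d));
        }
    }
}
```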
Comparable vs Runnable vs Serializable Interfaces
Feature | Comparable | Runnable | Serializable |
---|---|---|---|
Purpose | Ordering objects | Defining a thread task | Object persistence |
Methods | compareTo | run | (none - marker interface) |
Return Value | int | void | Not applicable |
Use Case | Sorting, comparisons | Multithreading | Saving/loading object state |
Comparable
Defines object ordering for sorting and comparisons.
public class Student implements Comparable<Student> {
int id;
String name;
int age;
// Constructor and other fields...
@Override
public int compareTo(Student other) {
return Integer.compare(this.age, other.age); // Sort by age in ascending order (avoids int overflow)
}
}
In this example, the compareTo method compares the age of the current object (this) with another Student object (other). It returns a negative integer if the current object is younger, zero if they have the same age, and a positive integer if the current object is older.
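To see that ordering in action, the sketch below sorts a list of students by age with Collections.sort, which relies on compareTo. The class and method names (ComparableDemo, sortedNames) are illustrative, not part of any library.

```java
import java.util.*;

public class ComparableDemo {
    // Minimal Student with an age-based natural ordering, as in the example above
    static class Student implements Comparable<Student> {
        final String name;
        final int age;
        Student(String name, int age) { this.name = name; this.age = age; }
        @Override
        public int compareTo(Student other) {
            return Integer.compare(this.age, other.age); // ascending by age
        }
    }

    // Sorts a copy of the list and returns the names in age order
    static List<String> sortedNames(List<Student> students) {
        List<Student> copy = new ArrayList<>(students);
        Collections.sort(copy); // uses compareTo under the hood
        List<String> names = new ArrayList<>();
        for (Student s : copy) names.add(s.name);
        return names;
    }

    public static void main(String[] args) {
        List<Student> students = Arrays.asList(
            new Student("Ana", 25), new Student("Bo", 19), new Student("Cy", 31));
        System.out.println(sortedNames(students)); // [Bo, Ana, Cy]
    }
}
```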
Runnable
Enables multithreading for concurrent task execution.
public class DownloadTask implements Runnable {
String url;
String filename;
public DownloadTask(String url, String filename) {
this.url = url;
this.filename = filename;
}
@Override
public void run() {
// Download logic using URL and filename
System.out.println("Downloaded: " + filename);
}
}
This DownloadTask implements Runnable and defines the run method, which contains the code to download the file from the specified URL and save it with the given filename.
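To actually run the task concurrently, wrap it in a Thread and call start(). The sketch below is a minimal, self-contained variant of the example above; the runConcurrently helper and the done flag are illustrative additions, not part of the original class.

```java
public class RunnableDemo {
    // A task implementing Runnable; the thread executes run() when started
    static class DownloadTask implements Runnable {
        final String filename;
        volatile boolean done = false;
        DownloadTask(String filename) { this.filename = filename; }
        @Override
        public void run() {
            // Real download logic would go here
            done = true;
        }
    }

    // Starts the task on a new thread and waits for it to finish
    static boolean runConcurrently(DownloadTask task) throws InterruptedException {
        Thread t = new Thread(task); // wrap the Runnable in a thread
        t.start();                   // executes run() on the new thread
        t.join();                    // block until the task completes
        return task.done;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runConcurrently(new DownloadTask("report.pdf"))); // true
    }
}
```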
Serializable
Allows object persistence for data storage and for sharing, especially over a network.
public class Person implements Serializable {
private String name;
private int age;
// Getters and setters omitted for brevity
}
With this implementation, you can create Person objects and serialize them to files or transmit them over networks using streams.
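As a minimal sketch of that round trip, the example below serializes a Person to a byte stream and reads it back; with files you would swap in FileOutputStream/FileInputStream. The helper name roundTrip and the added constructor are illustrative.

```java
import java.io.*;

public class SerializationDemo {
    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // Serialize the object to bytes, then deserialize a copy from those bytes
    static Person roundTrip(Person p) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(p); // write the object graph to the stream
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Person) in.readObject(); // reconstruct a copy
        }
    }

    public static void main(String[] args) throws Exception {
        Person copy = roundTrip(new Person("Alice", 30));
        System.out.println(copy.name + " " + copy.age); // Alice 30
    }
}
```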
Next
We will discuss vectors, arrays, and strings; relational and non-relational databases with Java; and threads.
System Design
2 hours
System design is the blueprint for a software system. It's about defining the architecture, components, modules, interfaces, and data flow to achieve specific goals.
Please Notice
This project is inspired by concepts from "System Design Interview" by Alex Xu. The book provided invaluable insights into system design fundamentals.
Single Server Set Up
A single server setup is a configuration where all system components, such as the web server, database, application server, and file storage, reside on a single physical or virtual machine.
- The user accesses the website through a domain name.
- The IP address is returned to the browser.
- Hypertext Transfer Protocol (HTTP) requests are sent to the web server.
- The web server returns the HTML to the browser for rendering.
The script below illustrates an Apache web server deployment.
#!/bin/bash
# ----------------------------------------------------------------------------------
echo "Updating and installing apache2 and unzip..."
apt-get update
apt-get upgrade -y
apt-get install apache2 -y
apt-get install unzip -y
# ----------------------------------------------------------------------------------
echo "Getting the website from a remote repo..."
cd /tmp
wget https://github.com/denilsonbonatti/linux-site-dio/archive/refs/heads/main.zip
# ----------------------------------------------------------------------------------
echo "Unzipping the file and copying it into the Apache directory..."
unzip main.zip
cd linux-site-dio-main
cp -R * /var/www/html/
# ----------------------------------------------------------------------------------
Database
With the growth of the user base, one server is not enough, and we need multiple servers: one for web/mobile traffic and another for the database. Separating web/mobile traffic (web tier) and database (data tier) servers allows them to be scaled independently.
Here's a MySQL database containerized deployment using Kubernetes.
# Mysql deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector: # Select pods with the label "mysql"
matchLabels:
app: mysql
template: # a blueprint for creating pods
metadata: # Data about the data
labels: # Identify pods
app: mysql
spec:
containers:
- image: alemorales9011935/projeto-database:1.0 # Docker image used for the deployment
args:
- "--ignore-db-dir=lost+found" # Ignore the filesystem's lost+found directory
imagePullPolicy: Always # Ensure the image is pulled even if it exists locally
name: mysql
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-dados
mountPath: /var/lib/mysql/ # Where the database files are stored
volumes:
- name: mysql-dados
persistentVolumeClaim:
claimName: mysql-dados
Load Balancer
A load balancer evenly distributes incoming traffic among web servers that are defined in a load-balanced set. Users connect directly to the load balancer's public IP. With this setup, web servers are no longer directly reachable by clients.
For better security, private IPs are used for communication between servers. A private IP is an IP address reachable only between servers in the same network; it is unreachable over the Internet.
The configuration below defines a LoadBalancer service for a PHP application using Kubernetes.
apiVersion: v1
kind: Service # Defines the type of Kubernetes object
metadata:
name: php
spec:
selector: # This service will find and route traffic to pods that have the label app: php.
app: php # Select pods with the label "app: php"
ports:
- port: 80 # The port users will access from outside the cluster
targetPort: 80 # The port the PHP application listens on
type: LoadBalancer # Type of Service
Database Replication
Database replication can be used in many database management systems, usually with a master/slave relationship between the original (master) and the copies (slaves). This architecture allows for failover and redundancy.
Here's a Data replication Deployment with docker-compose.
version: '3.7'
services:
mysql-master:
image: mysql:latest
container_name: mysql-master
environment:
MYSQL_ROOT_PASSWORD: your_root_password
MYSQL_DATABASE: your_database
MYSQL_USER: your_user
MYSQL_PASSWORD: your_password
MYSQL_REPLICATION_MODE: master
MYSQL_REPLICATION_USER: repl_user
MYSQL_REPLICATION_PASSWORD: repl_password
ports:
- "3306:3306"
volumes:
- mysql-data:/var/lib/mysql
mysql-slave:
image: mysql:latest
container_name: mysql-slave
environment:
MYSQL_ROOT_PASSWORD: your_root_password
MYSQL_DATABASE: your_database
MYSQL_USER: your_user
MYSQL_PASSWORD: your_password
MYSQL_REPLICATION_MODE: slave
MYSQL_REPLICATION_USER: repl_user
MYSQL_REPLICATION_PASSWORD: repl_password
MYSQL_MASTER_HOST: mysql-master
MYSQL_MASTER_PORT: 3306
ports:
- "3307:3306"
depends_on:
- mysql-master
volumes:
- mysql-data:/var/lib/mysql
volumes:
mysql-data:
Cache
A cache is a temporary storage area that stores the result of expensive responses or frequently accessed data in memory so that subsequent requests are served more quickly.
Common Use Cases
- Caching: Improve application performance by storing frequently accessed data in memory.
- Session management: Store user session data for faster access.
- Messaging: Real-time communication between applications.
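The cache-aside pattern behind these use cases can be sketched in a few lines of Java. Here an in-memory HashMap stands in for Redis; in production the get/put calls would go to a Redis client instead. All names (CacheAside, backendHits) are illustrative.

```java
import java.util.*;
import java.util.function.Function;

public class CacheAside {
    private final Map<String, String> cache = new HashMap<>();
    int backendHits = 0; // counts expensive backend lookups, for illustration

    // Cache-aside: check the cache first; on a miss, load from the backend and store the result
    String get(String key, Function<String, String> backend) {
        String cached = cache.get(key);
        if (cached != null) return cached;   // cache hit: skip the expensive call
        backendHits++;
        String value = backend.apply(key);   // cache miss: query the slow data store
        cache.put(key, value);               // populate the cache for next time
        return value;
    }

    public static void main(String[] args) {
        CacheAside c = new CacheAside();
        Function<String, String> db = k -> "value-for-" + k; // stand-in for a database query
        c.get("user:1", db);
        c.get("user:1", db); // second call is served from the cache
        System.out.println(c.backendHits); // 1
    }
}
```

The same flow applies with Redis: the HashMap becomes a Redis GET/SET, usually with a TTL so stale entries expire.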
Caching with Redis: Redis stands for Remote DIctionary Server. It's an open-source, in-memory data structure store that's primarily used as a cache or quick-response database. Here's a simple Redis deployment using docker-compose.
version: '3.7'
services:
redis:
image: redis:latest
container_name: redis
ports:
- "6379:6379"
CDN (Content Delivery Network)
A CDN is a network of geographically dispersed servers used to deliver static content. CDN servers cache static content like images, videos, CSS, and JavaScript files.
- User A tries to get image.png by using an image URL. The CDN provider provides the URL's domain.
- If the CDN server does not have image.png in the cache, the CDN server requests the file from the origin, which can be a web server or online storage like Amazon S3.
- The origin returns image.png to the CDN server, including an optional HTTP Time-to-Live (TTL) header that describes how long the image should be cached.
- The CDN caches the image and returns it to User A. The image remains cached in the CDN until the TTL expires.
- User B sends a request to get the same image.
- The image is returned from the cache as long as the TTL has not expired.
Here's a basic implementation of a CDN using Nginx:
http {
proxy_cache_path /var/cache/nginx/proxy_cache levels=1 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
server {
listen 80;
server_name cdn.example.com;
location /images/ {
proxy_pass http://origin_server;
proxy_cache my_cache;
proxy_cache_valid 200 1h;
proxy_cache_use_stale error timeout invalid_header;
}
}
}
- proxy_cache_path: Defines the cache directory, size, and other parameters.
- server block: Configures the CDN server listening on port 80.
- location blocks: Define specific paths for different content types.
- proxy_pass: Specifies the origin server where the content resides.
- proxy_cache: Enables caching for the specified location.
- proxy_cache_valid: Sets the cache expiration time for successful responses.
- proxy_cache_use_stale: Defines behavior when the cache is stale.
Stateless vs Stateful
A stateful server remembers client data (state) from one request to the next. A stateless server keeps no state information.
We move the session data out of the web tier and store it in a persistent data store. The shared data store could be a relational database, Memcached/Redis, a NoSQL store, etc. The NoSQL data store was chosen because it is easy to scale.
Autoscaling means adding or removing web servers automatically based on traffic load. Once the state data is removed from the web servers, autoscaling the web tier becomes straightforward.
Data Centers
To improve availability and provide a better user experience across wider geographical areas, supporting multiple data centers is crucial. In normal operation, users are geoDNS-routed (also known as geo-routed) to the closest data center, with traffic split x% to US-East and (100 − x)% to US-West. geoDNS is a DNS service that resolves domain names to IP addresses based on the location of the user.
Message Queue
A message queue is a durable component, stored in memory, that supports asynchronous communication. It serves as a buffer and distributes asynchronous requests.
Input services, called producers/publishers, create messages and publish them to a message queue. Other services or servers, called consumers/subscribers, connect to the queue and perform actions defined by the messages.
Common Use Cases:
- Order Processing: Handling order placement, inventory updates, shipping notifications, etc.
- Data Processing: Batching and processing large datasets.
- Microservices Architecture: Enabling communication between services.
- Event-Driven Architectures: Processing events and triggering actions.
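The producer/consumer flow described above can be sketched with Java's BlockingQueue, which plays the role of the message queue in-process. The class name and the sample message are illustrative.

```java
import java.util.concurrent.*;

public class QueueDemo {
    // A producer thread publishes a message; the consumer takes it asynchronously
    static String processOneMessage() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10); // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                queue.put("order-42"); // publish a message; blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        String message = queue.take(); // consume; blocks until a message arrives
        producer.join();
        return message;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processOneMessage()); // order-42
    }
}
```

A real broker (RabbitMQ, Kafka, SQS) adds durability and lets producers and consumers live on different machines, but the buffering contract is the same.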
Monitoring
- Logging: Monitoring error logs is important because it helps identify errors and problems in the system.
- Metrics: Collecting different types of metrics helps us gain business insights and understand the health status of the system.
- Automation: When a system gets big and complex, we need to build or leverage automation tools to improve productivity.
Vertical vs Horizontal Scaling
- Vertical scaling: Switching to a more capable server (scaling up). Simple, but expensive.
- Horizontal scaling: Adding more servers (scaling out). More complex, but cheaper at scale.
Conclusion
System design is the blueprint for a software system, outlining its components, architecture, and how they interact. It's the foundation upon which software development and DevOps processes are built.
ForgeAlgo - DSA
Estimated Time: 10 hours
Tech Stack: Java
Keywords: Data Structures - Algorithms
Experience Level: Beginner-Advanced
Table of Contents:
- Step 1: Learn (Study Smart Using Pareto's Law)
- Step 2: Build (Define an MVS - Minimum Viable Solution)
- Step 3: Measure (Test Mastery & Adapt to Variations)
- Efficiency Strategies with Pareto's Law:
Lean Engineering Thinking Framework for DSA
The Lean Engineering Mindset applies Lean Learning to optimize DSA study by focusing on high-impact learning, efficient problem-solving, and continuous iteration. Instead of solving problems randomly, we follow a structured 3-step approach based on the 80/20 rule (Pareto's Law):
Learn → Build → Measure → Repeat
3-Step Lean Learning Approach for DSA
Step | Focus | Why It Works? |
---|---|---|
1. Learn | Apply Pareto's Law (80/20 Rule) to Study Efficiently | Focus on the 20% of concepts that solve 80% of problems. |
2. Build | Define a Minimum Viable Solution (MVS) | Identify reusable components that solve multiple problems. |
3. Measure | Test & Apply Mastery to Similar Problems | Ensure you can recall & modify solutions effectively. |
Step 1: Learn (Study Smart Using Pareto's Law)
Goal: Use the 80/20 Rule to focus on high-impact learning rather than consuming excessive theory.
Why This Works
- Traditional study methods focus on covering everything, which is inefficient.
- Pareto's Law states that 80% of DSA problems are solved using 20% of concepts.
- Instead of learning 100 sorting algorithms, focus on Merge Sort, QuickSort, and HeapSort.
Actionable Strategy (Lean Learning with the 80/20 Rule)
1. Identify the 20% of DSA concepts that appear in 80% of problems.
- Sorting: QuickSort, MergeSort, HeapSort.
- Graphs: BFS, DFS, Dijkstra.
- Dynamic Programming: Knapsack, LIS, Coin Change.
- Arrays & Strings: Sliding Window, Two Pointers, Hashing.
2. Study high-impact solutions before solving problems.
- Instead of struggling blindly, absorb the optimal approach first.
- Focus on understanding patterns rather than memorizing solutions.
3. Deconstruct solutions into reusable components.
- Identify common reusable logic across problems.
- Example: Sliding Window works for Substring, Sub-array, and Window problems.
Lean Engineering Takeaway:
"Learn the 20% of solutions that solve 80% of problems. Master patterns, not problems."
Step 2: Build (Define an MVS - Minimum Viable Solution)
Goal: Memorize a core solution template that is reusable across multiple problems.
Why This Works
- Instead of memorizing 100+ problems, focus on 10-20 core reusable solutions.
- When facing a new problem, adapt an existing MVS instead of starting from scratch.
Actionable Strategy
1. Define the simplest reusable solution (MVS).
- Find the smallest version of the solution that works in most cases.
2. Identify reusable components.
- What parts can be reused across multiple problems?
- Example: The Sliding Window approach applies to Substring, Sub-array, and Window problems.
Lean Engineering Takeaway:
"Don't memorize problems; memorize templates that solve multiple problems!"
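As a concrete MVS, here is a minimal sliding-window template in Java: the maximum sum of any contiguous window of size k. The same skeleton adapts to min-sum, averages, and longest-substring variants by changing what the window tracks.

```java
public class SlidingWindowTemplate {
    // MVS: maximum sum of any contiguous window of size k
    static int maxWindowSum(int[] nums, int k) {
        int windowSum = 0;
        for (int i = 0; i < k; i++) windowSum += nums[i]; // sum of the first window
        int best = windowSum;
        for (int right = k; right < nums.length; right++) {
            windowSum += nums[right] - nums[right - k]; // slide: add the new element, drop the old
            best = Math.max(best, windowSum);
        }
        return best;
    }

    public static void main(String[] args) {
        // Windows of size 3: [2,1,5]=8, [1,5,1]=7, [5,1,3]=9, [1,3,2]=6
        System.out.println(maxWindowSum(new int[]{2, 1, 5, 1, 3, 2}, 3)); // 9
    }
}
```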
Step 3: Measure (Test Mastery & Adapt to Variations)
Goal: Test your ability to implement the solution from scratch.
Why? If you can recall and adapt it, you've truly mastered it!
Why This Works
- Forces active recall (rewiring memory for long-term retention).
- Ensures understanding, not just copying.
- Helps identify which patterns apply to new problems.
Actionable Strategy
1. Implement the solution from scratch (no looking back!).
2. Adapt it to similar problems.
- Change constraints (e.g., max sum → min sum).
- Modify the logic (e.g., sum → product).
3. Track your progress with metrics.
- How many problems can you solve without looking?
- How long does it take you to implement?
- How often do you recognize reusable components?
Example (Measuring Progress on Sliding Window)
1. Solve Maximum Sub-array Sum → Done.
2. Solve Smallest Sub-array with Sum ≥ X without looking → Struggled.
3. Solve Longest Substring Without Repeating Characters in under 10 minutes → Improving!
Lean Engineering Takeaway:
"Track mastery by testing yourself. If you can implement and adapt, you've truly learned!"
Final Takeaways: Lean Engineering Mindset for DSA
Step | Principle | Why It's Effective? |
---|---|---|
1. Learn | Study existing optimal solutions using Pareto's Law (80/20). | Avoids wasted effort on bad approaches. |
2. Build | Define a Minimum Viable Solution (MVS) and memorize templates. | Solves more problems with fewer solutions. |
3. Measure | Test mastery by implementing & adapting solutions. | Ensures real understanding, not memorization. |
Lean Engineering Philosophy:
"Don't just solve problems; build a toolkit of reusable, scalable solutions."
Efficiency Strategies with Pareto's Law:
The Pareto Principle (also known as the 80/20 Rule) states that 80% of the results come from 20% of the efforts. In the context of Data Structures & Algorithms (DSA), we can use this principle to focus on high-impact topics that yield maximum improvement in coding skills and problem-solving.
1. Identify the High-Impact 20% of DSA Concepts
Instead of studying everything with equal effort, focus on the core DSA topics that appear most frequently in coding interviews, competitive programming, and real-world problem-solving.
DSA Category | Core 20% Concepts (High-Yield) | Why It Matters? |
---|---|---|
Arrays & Strings | Two Pointers, Sliding Window, Sorting, Prefix Sum | Covers 80% of problems in real-world coding |
Hashing & Sets | HashMaps, HashSets, Frequency Counting | Used in 80% of optimization problems |
Recursion & Backtracking | Subset Problems, Permutations, N-Queens, Sudoku Solver | Helps solve complex brute-force problems efficiently |
Linked Lists | Fast-Slow Pointers, Reverse a List, Merge Two Lists | 80% of linked list problems use these techniques |
Stacks & Queues | Monotonic Stack, Sliding Window Max, Min Stack | Key for efficient range queries |
Binary Search | Search on Sorted Arrays, Lower/Upper Bound, Peak Element | Speeds up the search from O(n) to O(log n) |
Sorting | Merge Sort, QuickSort, HeapSort | Sorting is a building block for optimization |
Greedy Algorithms | Interval Scheduling, Huffman Coding, Kruskal's MST | 80% of optimization problems use greedy |
Dynamic Programming (DP) | Knapsack, LIS, Coin Change, Matrix Chain | 20% of DP problems appear in 80% of interviews |
Graphs & Trees | BFS, DFS, Dijkstra, Binary Trees, Trie | Used in 80% of real-world applications (networking, AI, etc.) |
Takeaway:
- 20% of DSA topics cover 80% of real-world coding needs.
- Focus on patterns (e.g., Two Pointers, Sliding Window, Fast-Slow Pointers) instead of memorizing problems.
2. Prioritize Problems That Give Maximum ROI (Return on Investment)
Instead of solving random problems, target high-frequency problems that teach reusable techniques.
DSA Problems That Follow Pareto's Law
Pattern | Example Problem (LeetCode) | Why It's Important? |
---|---|---|
Two Pointers | Two Sum II - Sorted Input (#167) | Appears in 80% of array problems |
Sliding Window | Longest Substring (#3) | Covers substring and sub-array problems |
Binary Search | Search in Rotated Sorted Array (#33) | Speeds up search dramatically |
Recursion | Subsets (#78) | Teaches problem breakdown |
Sorting + Greedy | Meeting Rooms (#252) | 80% of scheduling problems use sorting |
Graph BFS/DFS | Number of Islands (#200) | BFS/DFS is key for networks & AI |
Dynamic Programming | Climbing Stairs (#70) | Introduces DP memoization |
Stack/Queue | Valid Parentheses (#20) | Foundation for stack-based problems |
Takeaway:
- Solve problems that teach patterns instead of random problems.
- Reapply solutions across multiple problems (e.g., Sliding Window in substring and sub-array problems).
3. Use the 80/20 Rule to Optimize Your Study Plan
Instead of grinding 500+ problems, optimize your study time by following these key rules:
80/20 Rule for DSA Study Plan
Pareto Strategy | How to Apply? |
---|---|
20% Learning, 80% Practice | Don't over-learn theory; implement quickly! |
20% of problems teach 80% of patterns | Solve high-impact problems first |
20% of mistakes cause 80% of failures | Debug and analyze mistakes carefully |
20% of time should be revision | Spend time reviewing past mistakes |
20% of coding habits create 80% success | Focus on writing clean, optimized code |
Takeaway:
- Plan your study to focus on the 20% of topics that give 80% of mastery.
- Mix problems across different patterns instead of solving all DP or all Graph problems at once.
4. Focus on the 20% of Learning Strategies That Improve 80% of Retention
Many learners spend time passively watching tutorials instead of actively practicing.
Study Smarter with 80/20
Bad Study Habit (80%) | Better Study Habit (20%) |
---|---|
Watching hours of tutorials | Solve problems after 10 min of theory |
Memorizing solutions | Memorizing patterns |
Repeating solved problems | Solve variations of the same problem |
Not debugging | Spend time analyzing why a solution works |
Skipping easy problems | Master easy concepts before hard ones |
Takeaway:
- Active problem-solving beats passive learning.
- Don't just memorize; break down problems into patterns.
Final Takeaways: Apply Pareto to Your DSA Journey
- Focus on the 20% of DSA topics that appear in 80% of problems.
- Solve high-impact problems that teach reusable patterns.
- Optimize study time: don't over-learn theory, apply quickly.
- Master debugging: 20% of mistakes cause 80% of wrong answers.
- Simulate real-world coding interviews with mock tests.
Category | Topics |
---|---|
Most Common (Almost Always Asked) | Two Pointers, Sliding Window, Hash Maps, Modified Binary Search, Dynamic Programming, Backtracking, Tree DFS, Tree BFS, Graphs, Heaps |
Very Common (Frequently Asked) | Fast and Slow Pointers, Sort and Search, Stacks, Greedy Techniques, Subsets (Power Set Problems), Top K Elements, Union Find, Merge Intervals |
Occasionally Asked (Advanced Problems) | Topological Sort, Trie (Prefix Tree), Bitwise Manipulation, K-way Merge, Cyclic Sort, Matrices, In-Place Manipulation of a Linked List |
Conclusion:
Try to solve similar challenges with it. Treat problems like algebra, where there are variables (data structures) and formulas (algorithms). Understand the variables, then substitute them into the solution.
ForgeAlgo - Two Pointers
Estimated Time: 5 min
Tech Stack: Java
Keywords: Data Structure - Algorithms
Experience Level: Beginner - Advanced
Step 1: Understanding the Two-Pointer Technique
Two pointers refers to using two indices to traverse a data structure efficiently. There are different variations of this approach. The most common are:
- Opposite Ends (Left-Right Pointers): used for problems like palindrome checks, sorting, and pair sums.
- Same Direction (Fast-Slow Pointers): used for linked lists (detecting cycles), sliding windows, and merging sorted arrays.
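As a minimal sketch of the opposite-ends variant, here is a palindrome check in Java: two indices start at the ends and move toward each other until they meet or mismatch.

```java
public class PalindromeCheck {
    // Opposite-ends variant: one pointer from each side, moving toward the middle
    static boolean isPalindrome(String s) {
        int left = 0, right = s.length() - 1;
        while (left < right) {
            if (s.charAt(left) != s.charAt(right)) return false; // mismatch: not a palindrome
            left++;   // move inward from the left
            right--;  // move inward from the right
        }
        return true; // pointers crossed without a mismatch
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("racecar")); // true
        System.out.println(isPalindrome("forge"));   // false
    }
}
```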
Summary of Two-Pointer Variations
Variation | Use Case Example |
---|---|
Opposite Ends | Two Sum, Palindrome Check |
Fast-Slow Pointers | Cycle Detection, Middle of List |
Merging Two Pointers | Merge Sort, Merge Arrays |
Sliding Window | Longest Substring, Sub-arrays |
Removing Elements | Remove Duplicates, Filtering |
Reverse Two-Pointer | Reverse String, Reverse List |
Step 2: Identifying When to Use It
You can use Two Pointers when:
- The problem involves sorted arrays or linked lists.
- You need to compare elements at different positions.
- A nested-loop (O(n²)) solution exists, and you want to optimize it.
- It involves pairing elements, finding a sub-array, or checking conditions between two elements.
Final Thoughts
The Two-Pointer technique is highly adaptable and can be combined with other techniques to optimize solutions across arrays, graphs, bit manipulation, and dynamic programming.
TwoPointers - Two Sum
Estimated Time: 2 hours
Tech Stack: Java
Keywords: Data Structure - Algorithms
Experience Level: Beginner - Advanced
Category: Opposite Ends
Two Sum
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input would have exactly one solution, and you may not use the same element twice. You can return the answer in any order.
Example 1:
Input: nums = [2,7,11,15], target = 9
Output: [0,1]
Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].
Example 2:
Input: nums = [3,2,4], target = 6
Output: [1,2]
Example 3:
Input: nums = [3,3], target = 6
Output: [0,1]
Constraints:
- 2 <= nums.length <= 10^4
- -10^9 <= nums[i] <= 10^9
- -10^9 <= target <= 10^9
- Only one valid answer exists.
Follow-up: Can you come up with an algorithm that is less than O(n²) time complexity?
Solution:
package TwoPointers; // Defines the namespace or directory that will contain the class.
import java.util.*; // Import java utils to be able to use arrays.
public class TwoSum { // Define the class that will contain the algorithm logic.
public int[] findTwoSum(int[] nums, int target) { // Method that takes the array and the target, and returns the pair's indexes.
// Step 1: Store value and original index.
int[][] indexedNums = new int[nums.length][2]; // Creates a 2D array that stores each value alongside its original index.
for (int i = 0; i < nums.length; i++) { // Loops through the array.
indexedNums[i][0] = nums[i]; // Stores values
indexedNums[i][1] = i; // Stores original index
}
// Step 2: Sort by value
Arrays.sort(indexedNums, Comparator.comparingInt(a -> a[0])); // Sort the rows by the value column (a[0]) so we can use the Two Pointers technique.
// Step 3: Two pointers
int left = 0, right = nums.length - 1; // Define pointers.
while (left < right) { // Move pointers.
int sum = indexedNums[left][0] + indexedNums[right][0]; // Add the values to the two pointers.
if (sum == target) { // If the sum matches.
return new int[] { indexedNums[left][1], indexedNums[right][1] }; //Return the indexes.
} else if (sum < target) { // Sum too small: move the left pointer right.
left++;
} else { // Sum too large: move the right pointer left.
right--;
}
}
return new int[0]; // If no pair is found, return an empty array.
}
public static void main(String[] args) { // Testing. Standard entry point in java.
TwoSum solver = new TwoSum(); // Creates a new TwoSum object to use its method.
int[] nums1 = {2, 7, 11, 15}; // Defines the nums array.
System.out.println("Output: " + Arrays.toString(solver.findTwoSum(nums1, 9))); // [0, 1] //Prints the output.
//Arrays.toString is needed to print the result properly.
}
}
โ๏ธ Challenges of the Two Sum (Two Pointers + Sorting) Solution
Challenge | Why It Can Be Tricky |
---|---|
1. Preserving original indexes after sorting | You need to return the original positions of the numbers, but sorting the array changes their order. You must track both the value and original index. |
2. Creating and using a 2D array (indexedNums[i][0] = value; indexedNums[i][1] = index; ) | Storing values with their original indexes in a new structure is conceptually new for many beginners. Understanding the layout ([value, index] ) and how to access it correctly is essential. |
3. Sorting with a custom comparator | Using Arrays.sort() with a lambda like Comparator.comparingInt(a -> a[0]) requires understanding of lambda expressions, functional interfaces, and how sorting works in Java. |
4. Writing Two Pointers logic | Choosing when to move left++ or right-- based on the sum comparison (sum < target or sum > target ) is a critical logic decision. It feels like math + intuition, and mistakes here lead to off-by-one errors or infinite loops. |
5. Understanding that it only finds the first valid pair | The problem guarantees one solution, but understanding why this algorithm stops at the first match, and why that's okay, is important to avoid over-engineering. |
6. Remembering when to use Two Pointers vs HashMap | Not every problem allows sorting (especially if original array order matters). Knowing that the two-pointer approach only works after sorting is a key insight. |
โ Summary
The core challenge of the Two Pointers version of Two Sum is managing value + index tracking correctly after sorting, and using a comparator and pointer logic fluently in Java. Once you master that, you unlock a reusable pattern that helps you solve many classic problems.
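For comparison, row 6 of the table mentions the HashMap alternative. A one-pass HashMap sketch answers the O(n) follow-up and sidesteps the index-tracking problem entirely, because the array is never sorted. The class name is illustrative.

```java
import java.util.*;

public class TwoSumHashMap {
    // One-pass HashMap: for each number, check whether its complement was already seen
    static int[] findTwoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> original index
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            if (seen.containsKey(complement)) {
                return new int[] { seen.get(complement), i }; // original indexes are never disturbed
            }
            seen.put(nums[i], i); // record this value's index for later lookups
        }
        return new int[0]; // no pair found
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(findTwoSum(new int[]{2, 7, 11, 15}, 9))); // [0, 1]
    }
}
```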
ForgeAlgo - 3Sum
Estimated Time: 2 hours
Tech Stack: Java
Keywords: Data Structure - Algorithms
Experience Level: Beginner - Advanced
Category: Arrays & Strings - Two Pointers - Sliding Window
3Sum
Try to solve the 3Sum problem.
Statement
Given an integer array nums, find and return all unique triplets [nums[i], nums[j], nums[k]], where the indexes satisfy i ≠ j, i ≠ k, and j ≠ k, and the sum of the elements nums[i] + nums[j] + nums[k] == 0.
Constraints:
- 3 ≤ nums.length ≤ 500
- -10³ ≤ nums[i] ≤ 10³
Solution:
package ForgeAlgo;
import java.util.*; // Imports all Java utils (Arrays, List, ArrayList, etc.).
// 1. Initialize:
public class ThreeSum { // Define the class where the solution will be stored.
public List<List<Integer>> threeSum(int[] nums) { // Method definition. Creates a List of Lists containing integers.
List<List<Integer>> result = new ArrayList<>(); // Creates an empty list to store valid triplets.
Arrays.sort(nums); // Sort the Array to make the search more organized.
// 2. Process:
for (int i = 0; i < nums.length - 2; i++) { // Ensure we stay in the boundaries of the array.
if (i > 0 && nums[i] == nums[i - 1]) continue; // Skip "i" duplicates. We need unique triplets. Only works if the array is sorted.
int left = i + 1, right = nums.length - 1; // Initialize two pointers at the two ends of the remaining array.
while (left < right) { // Moves pointers left and right until they meet.
int sum = nums[i] + nums[left] + nums[right]; // Sum the index and the two pointers.
if (sum == 0) { // Check for a valid triplet
result.add(Arrays.asList(nums[i], nums[left], nums[right])); // Store in the result list.
// Move pointers while skipping duplicates
while (left < right && nums[left] == nums[left + 1]) left++; // Skip duplicate values on the left.
while (left < right && nums[right] == nums[right - 1]) right--; // Skip duplicate values on the right.
left++; // Moves pointers left and right after finding a triplet.
right--;
} else if (sum < 0) { // Adjust pointers based on sum.
left++; // If sum < 0 we need a larger number so we move left to right.
} else { // If sum > 0
right--; // We need a smaller number so we move right to left.
}
}
}
return result; // Return the result.
}
// 3. Test:
public static void main(String[] args) { // Standard java main function to execute programs.
ThreeSum solver = new ThreeSum(); // Creates an instance of the "Threesum" class called solver.
int[] nums = {-1, 0, 1, 2, -1, -4}; // Defines the input.
System.out.println("Output: " + solver.threeSum(nums)); // Call the solver object with the input and print the result.
}
}
Java Code in Plain English: Dictation for Memorization
Step 01: Package and Imports
- Start by defining the package(directory where the class lives for compiling purposes).
- Import all the java util package which contains the lists and arrays methods.
Step 02: Define the class and method
- Define a class that will contain the method in charge of executing the algorithm.
- Define that method as returning a List of Lists of integers, taking a single parameter: an integer array.
- Create a List of Lists of integers called result, backed by an ArrayList, where the triplets will be stored.
- Finally, sort the array to make iteration orderly and duplicates easy to skip.
Step 03: Process
- Create a for loop that will handle the entire algorithm, adding conditions one by one.
- Stay within the boundaries of the array: start i at zero; while i is less than nums.length minus 2, move i one index to the right each iteration.
- Skip i duplicates: if i is greater than 0 and nums[i] (the value at index i) equals the previous value nums[i - 1], skip it.
- Initialize the two pointers: integer left equals i + 1, and right equals nums.length - 1.
- Stop condition: while left is less than right, meaning until the pointers cross.
- Sum the pivot and the two pointers: integer sum equals nums[i] + nums[left] + nums[right].
- Processing condition: whenever sum equals zero, add the three values to the result list as an ArrayList. This allows saving multiple triplets.
- Move pointers and avoid duplicates: while left is less than right and nums[left] equals nums[left + 1], left skips ahead; and while left is less than right and nums[right] equals nums[right - 1], right skips back.
- Move both pointers: left moves to the right and right moves to the left until the next triplet is found.
- Handle sum > 0 and sum < 0: if sum < 0, left moves to the right; if sum > 0, right moves to the left.
- Return result: finally, return what is stored in the result list.
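Put together, the Step 03 dictation maps onto a loop like this (a minimal sketch of the sorted two-pointer scan; the class name ThreeSumSketch is mine, chosen to avoid clashing with the original Three_sum class):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ThreeSumSketch {
    public List<List<Integer>> threeSum(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        Arrays.sort(nums); // sort so duplicates sit next to each other
        for (int i = 0; i < nums.length - 2; i++) {
            if (i > 0 && nums[i] == nums[i - 1]) continue; // skip duplicate pivots
            int left = i + 1, right = nums.length - 1;
            while (left < right) { // until the pointers cross
                int sum = nums[i] + nums[left] + nums[right];
                if (sum == 0) {
                    result.add(Arrays.asList(nums[i], nums[left], nums[right]));
                    while (left < right && nums[left] == nums[left + 1]) left++;   // skip duplicate lefts
                    while (left < right && nums[right] == nums[right - 1]) right--; // skip duplicate rights
                    left++;
                    right--;
                } else if (sum < 0) {
                    left++;  // sum too small: move left pointer right
                } else {
                    right--; // sum too big: move right pointer left
                }
            }
        }
        return result;
    }
}
```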
Step 04: Test
- Use the standard Java main entry point for programs: public static void main(String[] args).
- Create a new Three_sum object called solver.
- Define an int[] array called nums and pass in the array values.
- Call the solver object with the input and print the result.
ForgeAlgo - DNA Sequence
Estimated Time: 2 hours
Tech Stack: Java
Keywords: Data Structure - Algorithms
Experience Level: Beginner - Advanced
Category: Arrays & Strings - Two Pointers - Sliding Window
Find Repeated DNA Subsequences
📌 Problem: Given a string dna representing a DNA sequence (composed of A, C, G, T) and an integer k, return all substrings of length k that appear more than once. The order of the returned substrings does not matter.
📌 Example:
Input: dna = "ACGAATTCCGGAACGAATTCCG", k = 10
Output: {"ACGAATTCCG"}
📌 Constraints:
- 1 ≤ dna.length ≤ 10³
- 1 ≤ k ≤ 10
- dna[i] ∈ {'A', 'C', 'G', 'T'}
Solution in plain English:
Create a package called ForgeAlgo. Import HashSet and Set from Java utilities. Define a public class named RepeatedDnaSequences. Inside the class, create a method called findRepeatedDnaSequences that takes a DNA string and an integer k as input. Initialize two HashSets: one for tracking seen substrings and another for storing repeated sequences. Use a while loop to extract substrings of length k and check if they have been seen before. If so, add them to the result set; otherwise, add them to the seen set. Move the sliding window forward by incrementing left. Once all substrings are processed, return the result set. In the main method, create an instance of RepeatedDnaSequences, call findRepeatedDnaSequences with a test DNA string, and print the output.
Solution:
package ForgeAlgo;
import java.util.HashSet; // A Java collection that stores unique elements using hashing; duplicates are ignored.
import java.util.Set; // The interface HashSet implements; a collection that stores unique elements only.
// 1️⃣ Initialize:
public class RepeatedDnaSequences {
public Set<String> findRepeatedDnaSequences(String dna, int k) { //Find repeated substrings of length k.
Set<String> seen = new HashSet<>(); // Stores unique substrings while scanning.
Set<String> result = new HashSet<>(); // Stores repeated substrings
int left = 0; // Starting index of the window.
// 2️⃣ Process:
while (left + k <= dna.length()) { //Ensure we stay in the boundaries of the string.
String substring = dna.substring(left, left + k); //Extracts a substring of length k starting at left.
if (seen.contains(substring)) { // If it's a duplicate add to result.
result.add(substring);
} else {
seen.add(substring); // If it's unique add to "seen".
}
left++; // Move a step forward after completing the operation.
}
return result; // Return all repeated DNA Sequences found.
}
// 3️⃣ Test:
// The block below will test the program.
public static void main(String[] args) { // Main entry for execution. static means no instance is needed to run it.
RepeatedDnaSequences solver = new RepeatedDnaSequences(); // Creates an object of the class to call it.
// Call the class with test data. Stores the output in result.
Set<String> result = solver.findRepeatedDnaSequences("ACGAATTCCGGAACGAATTCCG", 10);
System.out.println("Output: " + result); // Prints the result.
}
}
Executing From Terminal:
- Compile
javac RepeatedDnaSequences.java
- Execute
java RepeatedDnaSequences
Output
Output: [ACGAATTCCG]
🟢 Java Code in Plain English – Dictation for Memorization
Step 1: Define the Package and Imports
- Start by creating a package named ForgeAlgo.
- Import two essential Java utilities:
- HashSet (which allows storing unique substrings).
- Set (which is the interface for the HashSet).
Step 2: Create the Class
- Define a public class called RepeatedDnaSequences: This class will contain the logic to find repeated DNA sequences.
Step 3: Define the findRepeatedDnaSequences Method
Inside the class, create a public method called findRepeatedDnaSequences. It takes two parameters:
- A string named dna, representing the DNA sequence.
- An integer named k, representing the length of the sequence we are looking for.
The method returns a Set<String>, which will store the repeated sequences.
Step 4: Initialize Data Structures
- Create a HashSet called seen, which will store all substrings we encounter while scanning the DNA sequence.
- Create another HashSet called result, which will store only the substrings that appear more than once.
Step 5: Iterate Through the DNA Sequence
- Initialize an integer variable called left and set it to 0. This will serve as the starting index for extracting substrings.
- Start a while loop, which will continue as long as (left + k) <= dna.length(). This ensures we do not go beyond the bounds of the string.
Step 6: Extract and Process Each Substring
- Inside the loop, extract a substring of length k, starting at index left.
- Check if the substring already exists in the seen set:
- If it does, add it to the result set (since it has appeared more than once).
- If it does not, add it to the seen set (to track that we have encountered it).
Step 7: Move the Sliding Window
- After checking the substring, increment left by 1 to shift the sliding window forward.
Step 8: Return the Result
- Once the loop is done, return the result set, which contains all repeated sequences.
Step 9: Implement the main Method
- Inside the main method, create an instance of RepeatedDnaSequences called solver.
- Call the findRepeatedDnaSequences method with the DNA string "ACGAATTCCGGAACGAATTCCG" and k = 10.
- Store the result in a Set of Strings named result.
- Print "Output: " followed by the result set.
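As a side note, the contains/add branch can be collapsed into a single call, because Set.add returns false when the element is already present. This is a variant sketch of mine, not the original solution (the class and method names here are assumptions):

```java
import java.util.HashSet;
import java.util.Set;

public class DnaSketch {
    // Same sliding-window algorithm, but relying on the boolean returned by
    // Set.add: add() returns false if the element was already in the set.
    public static Set<String> findRepeated(String dna, int k) {
        Set<String> seen = new HashSet<>();   // substrings encountered so far
        Set<String> result = new HashSet<>(); // substrings seen more than once
        for (int left = 0; left + k <= dna.length(); left++) {
            String sub = dna.substring(left, left + k);
            if (!seen.add(sub)) { // already seen, so it repeats
                result.add(sub);
            }
        }
        return result;
    }
}
```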
AdHoc - Hello World Function
Estimated Time: 2 hours
Tech Stack: Java
Keywords: Data Structure - Algorithms
Experience Level: Beginner - Advanced
Category: AdHoc
Skill: High order java functions.
Create Hello World Function:
Write a function createHelloWorld. It should return a new function that always returns "Hello World".
Example 1:
- Input: args = []
- Output: "Hello World"
Explanation:
const f = createHelloWorld(); f(); // "Hello World"
The function returned by createHelloWorld should always return "Hello World".
Example 2: Input: args = [{},null,42] Output: "Hello World"
Explanation:
const f = createHelloWorld(); f({}, null, 42); // "Hello World"
Any arguments could be passed to the function but it should still always return "Hello World".
Constraints:
0 <= args.length <= 10
💡 Insight: Transferable Pattern
This problem trains you in a reusable DSA and engineering skill:
- Writing higher-order functions
- Using Java's functional interfaces
- Practicing lambda syntax and concise function design
❓ What is a Higher-Order Function?
A higher-order function is a function that does one of two things (or both):
- Takes another function as a parameter
- Returns another function as a result
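To make both flavors concrete, here is a small sketch (the class and method names applyTwice and adder are mine, not part of the problem): one method takes a function as a parameter, the other returns a function as its result.

```java
import java.util.function.Function;

public class HigherOrder {
    // Flavor 1: takes a function as a parameter and applies it twice.
    static int applyTwice(Function<Integer, Integer> f, int x) {
        return f.apply(f.apply(x));
    }

    // Flavor 2: returns a function as a result (a "function generator").
    static Function<Integer, Integer> adder(int n) {
        return x -> x + n;
    }

    public static void main(String[] args) {
        System.out.println(applyTwice(x -> x * 2, 3)); // 12
        System.out.println(adder(5).apply(10));        // 15
    }
}
```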
Solution:
You're doing an amazing job diving into the details! Let's go line by line and explain exactly what's happening in this Java program, with zero mystery.
package AdHoc;
import java.util.function.*; // Provide functional interfaces.
public class HelloWorld {
public static Supplier<String> createHelloWorld() { // Defines a static method that takes no input and returns a String supplier.
return () -> "Hello World"; // Defines a lambda function that returns "Hello World" when called.
}
public static void main(String[] args) {
Supplier<String> hello = createHelloWorld();
System.out.println(hello.get());
}
}
✅ Code:
package AdHoc;
🗣️ Say it like:
"This class belongs to the AdHoc package."
📘 Explanation:
- Packages are like folders or namespaces that help organize your Java code.
- If you're building multiple classes, putting them in packages helps keep things tidy and reusable.
import java.util.function.*;
🗣️ Say it like:
"Import everything from Java's function library."
📘 Explanation:
- java.util.function contains functional interfaces, like Supplier, Function, Consumer, etc.
- We're using Supplier<String> here, so this import gives us access to that.
public class HelloWorld {
🗣️ Say it like:
"This is a public class named HelloWorld."
📘 Explanation:
- The class is named HelloWorld and it's public, meaning it can be accessed from anywhere in your program.
public static Supplier<String> createHelloWorld() {
🗣️ Say it like:
"Define a static method called createHelloWorld that returns a Supplier<String>."
📘 Explanation:
- Supplier<String> is a function that takes no input and returns a String.
- This method creates and returns a supplier function.
return () -> "Hello World";
🗣️ Say it like:
"Return a lambda function that gives back the string Hello World."
📘 Explanation:
- () -> "Hello World" is a lambda expression, Java's way of writing functions inline.
- This is the function you're returning; it will always return "Hello World" when called.
public static void main(String[] args) {
🗣️ Say it like:
"This is the main method, where the program starts."
📘 Explanation:
- Every Java program starts executing from the main method.
- String[] args lets you pass command-line arguments (not used here, but required by Java).
Supplier<String> hello = createHelloWorld();
🗣️ Say it like:
"Call the createHelloWorld method and store the returned function in a variable called hello."
📘 Explanation:
- You're calling the method that returns a lambda, and saving that function in the variable hello.
- hello is now a Supplier<String> that can be called with .get().
System.out.println(hello.get());
🗣️ Say it like:
"Call the get() method on hello and print its result to the console."
📘 Explanation:
- hello.get() runs the function we created earlier: () -> "Hello World"
- So it prints: Hello World
✅ Output:
Hello World
Conclusion:
You just wrote and understood a function generator in Java. That's advanced thinking with clean style!
Two Pointers - Word Abbreviation
Estimated Time: 2 hours
Tech Stack: Java
Keywords: Data Structure - Algorithms
Experience Level: Beginner - Advanced
Category: TwoPointers
Skill: Parsing Numbers; Using Two Pointers on two different strings.
408. Valid Word Abbreviation
A string can be abbreviated by replacing any number of non-adjacent, non-empty substrings with their lengths. The lengths should not have leading zeros. For example, a string such as "substitution" could be abbreviated as (but not limited to):
- "s10n" ("s ubstitutio n")
- "sub4u4" ("sub stit u tion")
- "12" ("substitution")
- "su3i1u2on" ("su bst i t u ti on")
- "substitution" (no substrings replaced)
The following are not valid abbreviations:
- "s55n" ("s ubsti tutio n", the replaced substrings are adjacent)
- "s010n" (has leading zeros)
- "s0ubstitution" (replaces an empty substring)
Given a string word and an abbreviation abbr, return whether the string matches the given abbreviation.
A substring is a contiguous non-empty sequence of characters within a string.
Example 1:
Input: word = "internationalization", abbr = "i12iz4n"
Output: true
Explanation: The word "internationalization" can be abbreviated as "i12iz4n" ("i nternational iz atio n").
Example 2:
Input: word = "apple", abbr = "a2e"
Output: false
Explanation: The word "apple" cannot be abbreviated as "a2e".
Constraints:
- 1 <= word.length <= 20
- word consists of only lowercase English letters.
- 1 <= abbr.length <= 10
- abbr consists of lowercase English letters and digits.
- All the integers in abbr will fit in a 32-bit integer.
Solution:
package TwoPointers;
public class AbbreviationValidator {
public static boolean validWordAbbreviation(String word, String abbr) {
int i = 0; // pointer for word
int j = 0; // pointer for abbr
while (i < word.length() && j < abbr.length()) {
char current = abbr.charAt(j);
// 🔢 Case 1: Digit in abbreviation
if (Character.isDigit(current)) {
if (current == '0') {
return false; // 🚫 Leading zero
}
int num = 0;
while (j < abbr.length() && Character.isDigit(abbr.charAt(j))) {
num = num * 10 + (abbr.charAt(j) - '0');
j++;
}
i += num; // Skip characters in word
}
// 🔤 Case 2: Letter in abbreviation
else {
if (i >= word.length() || word.charAt(i) != current) {
return false; // 🚫 Mismatch
}
i++;
j++;
}
}
// ✅ Both pointers must reach the end
return i == word.length() && j == abbr.length();
}
public static void main(String[] args) {
System.out.println(validWordAbbreviation("internationalization", "i12iz4n")); // true
System.out.println(validWordAbbreviation("apple", "a2e")); // false
System.out.println(validWordAbbreviation("substitution", "s10n")); // true
System.out.println(validWordAbbreviation("substitution", "s010n")); // false (leading zero)
System.out.println(validWordAbbreviation("substitution", "s0ubstitution")); // false (empty substring)
}
}
📌 Class Header
package TwoPointers;
🗣️ Say it out loud:
"This class is part of the TwoPointers package."
📘 Explanation:
Organizes your code inside a named package, like putting a file in a folder.
public class AbbreviationValidator {
🗣️ Say it out loud:
"This is a public class named AbbreviationValidator."
📘 Explanation:
Defines the class where your logic and main method will live.
🧠 Method Declaration
public static boolean validWordAbbreviation(String word, String abbr) {
🗣️ Say it out loud:
"Define a public static method called validWordAbbreviation that takes a word and an abbreviation, and returns true or false."
📘 Explanation:
This method checks if abbr is a valid abbreviation for word.
🧮 Pointers Initialization
int i = 0; // pointer for word
int j = 0; // pointer for abbr
🗣️ Say it out loud:
"Create two integer pointers i and j, both starting at zero."
📘 Explanation:
i tracks position in word, j tracks position in abbr.
🔁 Loop Through Characters
while (i < word.length() && j < abbr.length()) {
🗣️ Say it out loud:
"While both pointers are within bounds of their strings, keep looping."
📘 Explanation:
The loop continues as long as we haven't finished parsing both strings.
📌 Get the Current Abbreviation Character
char current = abbr.charAt(j);
🗣️ Say it out loud:
"Get the current character in abbr at position j."
📘 Explanation:
Stores the character to decide if it's a letter or digit.
🔢 If the Current Character is a Digit
if (Character.isDigit(current)) {
🗣️ Say it out loud:
"If the character is a digit..."
📘 Explanation:
Abbreviations can include numbers that tell us how many characters to skip in the word.
🔍 Check for Leading Zero
if (current == '0') {
return false;
}
🗣️ Say it out loud:
"If the digit is zero, return false."
📘 Explanation:
Leading zeros like 01 or 007 are not valid; a skip count is not allowed to begin with zero.
🔢 Parse the Number
int num = 0;
while (j < abbr.length() && Character.isDigit(abbr.charAt(j))) {
num = num * 10 + (abbr.charAt(j) - '0');
j++;
}
🗣️ Say it out loud:
"While we're still seeing digits, build the full number and move j forward."
📘 Explanation:
This handles multi-digit numbers (e.g., "12"), and moves the j pointer past all digits.
🧪 Walkthrough 1: Parsing "15"
Let's say abbr = "a15b" and we're starting at index j = 1.
Step | abbr.charAt(j) | abbr.charAt(j) - '0' | num = num * 10 + digit | num value | j |
---|---|---|---|---|---|
1 | '1' | 1 | 0 * 10 + 1 | 1 | 2 |
2 | '5' | 5 | 1 * 10 + 5 | 15 | 3 |
3 | 'b' | not a digit | - | end loop | - |
✅ Final result: num = 15
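The same digit-parsing loop can be lifted into a standalone helper and exercised on its own (a sketch; the class and the name parseNumber are my own, not part of the original solution):

```java
public class ParseSketch {
    // Reads a run of digits starting at index j and returns the parsed value,
    // exactly as the inner while loop in validWordAbbreviation does.
    static int parseNumber(String s, int j) {
        int num = 0;
        while (j < s.length() && Character.isDigit(s.charAt(j))) {
            num = num * 10 + (s.charAt(j) - '0'); // shift one decimal place, add new digit
            j++;
        }
        return num;
    }
}
```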
⏩ Skip Characters in Word
i += num;
🗣️ Say it out loud:
"Move the word pointer forward by that number."
📘 Explanation:
You're skipping over num characters in the original word, as instructed by the abbreviation.
🔤 If It's a Letter
} else {
if (i >= word.length() || word.charAt(i) != current) {
return false;
}
i++;
j++;
}
🗣️ Say it out loud:
"Otherwise, if it's a letter, compare it with the current letter in word."
📘 Explanation:
If the letters don't match, it's invalid. If they do, move both pointers forward.
✅ Final Check
return i == word.length() && j == abbr.length();
🗣️ Say it out loud:
"Return true only if both pointers reached the end of their strings."
📘 Explanation:
We're only done if both strings are fully parsed, with no leftovers.
🧪 Test Code
public static void main(String[] args) {
🗣️ Say it out loud:
"Main method: this is where the program starts."
📘 Explanation:
This is your test area to run the validator with different inputs.
System.out.println(validWordAbbreviation("internationalization", "i12iz4n")); // true
System.out.println(validWordAbbreviation("apple", "a2e")); // false
System.out.println(validWordAbbreviation("substitution", "s10n")); // true
System.out.println(validWordAbbreviation("substitution", "s010n")); // false
System.out.println(validWordAbbreviation("substitution", "s0ubstitution")); // false
🗣️ Say it out loud:
"Call the method with different test cases and print the results."
📘 Explanation:
Validates both correct and incorrect abbreviations to confirm your logic works as expected.
✅ Output
true
false
true
false
false
⚙️ Challenges in the Word Abbreviation Problem
🧠 You're doing something that few learners take the time to do: extracting wisdom from experience. Let's now reflect on the challenges of the Word Abbreviation Validation problem.
🚩 Challenge | 💬 Why It's Tricky |
---|---|
1. Parsing numbers from a string | You have to manually parse digits into full numbers ("12" → 12) without using Integer.parseInt(), handling them character by character. |
2. Dealing with leading zeros | A tricky constraint: abbreviations like "s010n" are invalid. You need to explicitly check for '0' and reject it early. |
3. Using two pointers on two different strings | You're walking two strings at the same time (word[i] and abbr[j]), and syncing them correctly is harder than it looks, especially with skips involved. |
4. Skipping characters based on a parsed number | Once you parse a number from abbr, you skip that many characters in word. If you're not careful, you'll go out of bounds or mismatch letters. |
5. Comparing letters correctly | When not looking at a digit, you're expected to match the character in abbr with word. A mismatch at any position should immediately return false. |
6. Edge case handling (empty abbreviation, k = 0, etc.) | Handling very short inputs, all-digit abbreviations (e.g., "10"), or full word abbreviations requires precise thinking. |
7. Returning false on invalid structure | Not all abbreviation attempts are valid; your algorithm needs to catch violations without false positives. |
8. Knowing when both pointers should end together | A valid abbreviation must consume the full word and abbr. If one finishes early, it's invalid, and that's easy to forget. |
🧠 Bonus Challenge: Manual character-by-character parsing
Parsing a number from a string without built-in parsing teaches you how characters and digits relate via ASCII math, which is an advanced beginner skill you now have!
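A quick sketch of that ASCII math (the codes shown are the standard ASCII values for '0' and '7'):

```java
public class AsciiMath {
    public static void main(String[] args) {
        char c = '7';
        int digit = c - '0';           // '7' (code 55) minus '0' (code 48) = 7
        System.out.println(digit);     // 7
        System.out.println((int) '0'); // 48
        System.out.println((int) '7'); // 55
    }
}
```

This is why `abbr.charAt(j) - '0'` in the parsing loop yields the numeric value of a digit character.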
✅ Summary
Concept | Challenge |
---|---|
Dual pointer logic | Advancing i and j correctly with different rules (digit vs letter) |
Number parsing | Interpreting multiple digits from the abbreviation and handling edge cases |
Validation | Matching letters or rejecting early on invalid abbreviation structure |
Constraint handling | Rejecting abbreviations with leading zeros or invalid skips |
Final check | Making sure both i and j end together for a full match |
💡 Reflection Insight
This problem builds precision, pointer mastery, and string-parsing fluency: a combo that makes you dangerous in string-heavy DSA challenges (like wildcard matching, regex parsing, or expression evaluation). You're building a Jedi toolkit, shinobi 🥷
Lean Learning
The software engineering landscape is constantly evolving. Understanding learning theory empowers one to learn faster and fosters continuous growth and adaptation.
1. Simulation-Based Training
Simulation-based training, as the name suggests, is a training method that creates realistic imitations of real-world scenarios. It's like practicing in a controlled environment that mimics the situations you might encounter in the field. Several research studies show that simulating real-life scenarios drastically improves the quality and speed of learning.¹
Classic vs SIM (Pareto's Law)
In a classic learning environment (school, college, professional courses, etc.), topics are assigned equal weight and taught in chronological order. In doing so, the function that describes the knowledge acquired is linear: you can only say you have learned the topic at the end of the course. The chart below explores this with the commonly required skills for DevOps Engineers.
The red zone indicates the moment where 80% of the knowledge is achieved.
On the other hand, in SIM (simulation-based learning), topics are meant to imitate real-life scenarios, weighted and ordered by relevance using Pareto's Law. Applied to learning, this means one can achieve 80% of the results by focusing on the 20% most important skills. This shift changes the learning function from linear to exponential, which allows for faster learning: you can master 80% of the topic at roughly 20% of the course length. The chart below illustrates how the learning function changes by applying Pareto's Law and weighting and ordering topics by relevance.
The red zone indicates the moment where 80% of the knowledge is achieved.
The SIM also illustrates the point of diminishing returns (the point where your effort starts to bring less value). Knowing the diminishing-returns point is critical in determining when to stop investing in learning a topic; it saves us both the time and effort of learning things we don't need.
2. Reuse not Redo
Reusing existing components or designs accelerates development cycles, as engineers can focus on new features or improvements rather than starting from scratch. Reusing resources often translates to financial savings in labor, materials, and time.
The chart below illustrates the number of Machine Learning papers uploaded to arXiv (a popular public repository of research papers). It is noteworthy that the function describing this phenomenon exhibits exponential growth: a growing body of papers correlates with an exponential increase in potential paper production. As the field matures, accumulated solutions and novel inquiries provide engineers with an expanding resource base for knowledge reapplication.
Engineering vs Research
Engineering | Research |
---|---|
Problems are known | Problems are unknown |
Execution-based | Research-based |
Failure is not acceptable | Failure is acceptable |
Priority: Value/Time | Priority: Recognition/Time |
3. Input Quality
Pattern recognition aligns with your level and chunks you can swallow