Introduction

Welcome to my notebook, a personal guide and companion through the intricate world of programming with a focus on Rust, Go, and PHP. This notebook is designed to serve as a structured journey through the fundamentals and advanced concepts that underpin modern software development, including algorithms, data structures, design patterns, and architectural insights.

As we delve into these languages, each known for its unique features and applications, the goal is to build a robust understanding that transcends language-specific syntax and dives into the core principles of efficient and effective programming.

This notebook will document key learnings, practical examples, and personal insights, acting as both a reference and a reflection of progress. Having built a solid foundation in PHP and C, I am now expanding my programming expertise by diving into Go and Rust. This expansion of my skill set is motivated by a desire to master diverse programming paradigms and technologies, enhancing my ability to tackle more complex software challenges efficiently.

Let this journey be both challenging and rewarding, as you not only learn how to code but also how to think and solve problems like a seasoned software architect. Here's to the many lines of code that will lead to new discoveries and opportunities!

Introduction to Data Structures

Data structures are fundamental components of programming, enabling the organization, management, and storage of data in efficient ways. Each programming language offers its unique take on data structures, tailored to its paradigms and typical use cases. This chapter explores the implementation and utilization of data structures in three powerful and diverse programming languages: PHP, Go, and Rust.

PHP, a dynamically typed language predominantly used for server-side scripting, provides a rich set of built-in data structures such as arrays and objects, along with more specialized structures through its Standard PHP Library (SPL) and the newer DS (Data Structures) extension. PHP's approach to data structures is flexible and dynamic, catering well to web development needs.

Go, a statically typed language designed for simplicity and performance, emphasizes data structures that support concurrent programming patterns out of the box. Go's built-in data types like slices and maps are complemented by powerful synchronization primitives to build complex data structures safely and efficiently in a concurrent environment.

Rust, known for its focus on safety and performance, offers data structures that prevent many common bugs, such as null pointer dereferencing and concurrent data races, without the overhead of a garbage collector. Rust’s standard library includes collections such as vectors, hash maps, and more complex structures that enforce its strict ownership and borrowing rules, ensuring thread safety.

This chapter will delve into each language’s available data structures, discussing their characteristics, typical use cases, and the best practices for their implementation. By comparing these languages, we aim to highlight how data structures can be optimally utilized in different programming environments to achieve efficient and effective solutions.

Data Structures in PHP

Basic Data Structures

Arrays

An array in PHP is an ordered map (PHP arrays are in fact implemented as ordered hash tables). This single type can be used as an array, list, hash table, dictionary, collection, stack, or queue.

$array = ["foo", "bar", "baz"];

$array = [
    "foo" => "bar",
    "bar" => "foo",
];

Objects

Objects in PHP are created by instantiating a class: the class is the blueprint, and the object is a concrete instance built from it.

class Foo {
    public $foo;
    public $bar;

    public function __construct($foo, $bar) {
        $this->foo = $foo;
        $this->bar = $bar;
    }
    
    public function getFoo() {
        return $this->foo;
    }    
}

Enumerations

With the release of PHP 8.1, PHP introduced built-in support for enumerations, commonly known as enums. Enums are a way to define a type that can have a fixed set of values. Prior to PHP 8.1, developers often used class constants to achieve similar functionality, but native enum support offers a more robust, integrated, and semantic approach.

Basic Enum

enum Status {
    case Pending;
    case Approved;
    case Denied;
}

function printStatus(Status $status) {
    echo $status->name;
}

printStatus(Status::Pending); // Outputs: Pending

Enum with Values

enum Status: string {
    case Pending = 'p';
    case Approved = 'a';
    case Denied = 'd';
}

function printStatus(Status $status) {
    echo $status->value;
}

printStatus(Status::Pending); // Outputs: p

Advanced Data Structures

The Standard PHP Library (SPL) provides a number of ready-made data structures. The examples below demonstrate doubly linked lists, stacks, queues, heaps, priority queues, fixed-size arrays, and object storage.

$list = new SplDoublyLinkedList();
$list->push('a');
$list->push('b');
$list->push('c');
$list->push('d');

$stack = new SplStack();
$stack[] = 1;
$stack[] = 2;
$stack[] = 3;
foreach ($stack as $elem) {
    echo $elem."\n"; // Iterates LIFO: 3, 2, 1
}

$queue = new SplQueue();
$queue->enqueue('first');
$queue->enqueue('second');
$queue->enqueue('third');

echo $queue->dequeue(); // Outputs: first
echo $queue->bottom();  // Outputs: second, now the front of the queue

$h = new SplMinHeap();
// [parent, child]
$h->insert([0, 1]);
$h->insert([1, 2]);
$h->insert([1, 3]);
$h->insert([1, 4]);

for ($h->top(); $h->valid(); $h->next()) {
    list($parentId, $myId) = $h->current();
    echo "$myId ($parentId)\n";
}

$queue = new SplPriorityQueue();
$queue->insert('a', 3);
$queue->insert('b', 6);
$queue->insert('c', 1);

foreach ($queue as $elem) {
    echo $elem."\n"; // Extracted in priority order: b, a, c
}

$array = new SplFixedArray(5);
$array[0] = 'a';
$array[4] = 'e';
echo $array->getSize(); // Outputs: 5

$s = new SplObjectStorage();
$o1 = new stdClass;
$s->attach($o1);

DS Library

The DS extension is a C-based library that provides specialized data structures as an alternative to the PHP array. It is designed to be fast and memory-efficient.

Installation

pecl install ds
# or 
dnf install php-ds

Remember to enable the extension in your php.ini file.

Data Structures

  • Vector: Similar to an array, a Vector is a sequence of values in a contiguous buffer that grows automatically. It is more performant in scenarios where you need to frequently append values.
  • Deque: A double-ended queue that allows elements to be added or removed from both the front and the back of the sequence. It is optimized for scenarios where you need to manipulate both ends.
  • Map: An ordered dictionary that maps keys to values. It allows for any type as keys, maintaining order by insertion.
  • Set: A collection of unique values. This structure is optimized for finding and checking for the existence of values.
  • PriorityQueue: Similar to a heap where each value has a priority, and the order of values is determined by their priority.
  • Stack: A last in, first out (LIFO) data structure. It is optimized for cases where you only need to add or remove elements from one end.
  • Queue: A first in, first out (FIFO) data structure, ideal for scenarios where elements need to be processed in the order they were added.
$vector = new \Ds\Vector();

$vector->push('a');
$vector->push('b', 'c');
$vector[] = 'd';

$map = new \Ds\Map();

$map->put('a', 1);
$map->put('b', 2);
$map['c'] = 3;


Data Structures in Go

Basic Data Structures

Arrays

An array in Go is a fixed-length sequence of elements of a single type. Because the length is part of the array's type, an array cannot be resized after it is declared.

var a [3]int
a[0] = 1
a[1] = 2
a[2] = 3

Slices

A slice is a variable-length sequence of elements of the same type. It is a lightweight descriptor (a pointer, a length, and a capacity) that provides access to a section of an underlying array.

var s []int
s = append(s, 1)
s = append(s, 2)
s = append(s, 3)

Maps

A map is an unordered collection of key-value pairs. Maps are used to look up a value by its associated key.

var m map[string]int
m = make(map[string]int)
m["foo"] = 1
m["bar"] = 2

Structs

A struct is a composite data type that groups together variables of different types.

type Person struct {
    Name string
    Age  int
}

p := Person{Name: "Alice", Age: 30}

Advanced Data Structures

Go's standard library does include a few generic containers (container/list, container/heap, and container/ring), but it ships no ready-made stack or queue types. In practice, these structures are usually built directly on top of slices and maps.

type Stack struct {
    items []int
}

func (s *Stack) Push(item int) {
    s.items = append(s.items, item)
}

func (s *Stack) Pop() (int, bool) {
    if len(s.items) == 0 {
        return 0, false // Empty stack: report failure via the second return value
    }
    item := s.items[len(s.items)-1]
    s.items = s.items[:len(s.items)-1]
    return item, true
}

Data Structures in Rust

Arrays

An array in Rust is a fixed-size sequence of elements of the same type. Because the length is part of the array's type, an array cannot be resized after it is declared.

let a = [1, 2, 3];

Vectors

A vector (Vec) is a growable sequence of elements of the same type. Unlike a slice, a vector owns its elements, which are stored in a heap-allocated buffer that is reallocated as the vector grows.

let mut v = vec![1, 2, 3];
v.push(4);
v.push(5);

Slices

A slice is a reference to a contiguous sequence of elements in a collection. Slices are used to give a reference to a section of an array or a vector.

let a = [1, 2, 3, 4, 5];
let slice = &a[1..3];

Hash Maps

A hash map is a collection of key-value pairs. Hash maps are used to look up a value by its associated key.

use std::collections::HashMap;

let mut scores = HashMap::new();

scores.insert(String::from("Blue"), 10);
scores.insert(String::from("Yellow"), 50);

Enums

An enum is a type that can have a fixed set of values. Each value is called a variant. Enums are useful for representing a small, fixed set of related values.

enum IpAddr {
    V4(u8, u8, u8, u8),
    V6(String),
}

let home = IpAddr::V4(127, 0, 0, 1);
let loopback = IpAddr::V6(String::from("::1"));
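Enums pay off most when combined with pattern matching. A minimal sketch of matching on the variants above (the describe helper is illustrative, not part of the standard example):

```rust
enum IpAddr {
    V4(u8, u8, u8, u8),
    V6(String),
}

// Hypothetical helper: produces a human-readable description per variant.
fn describe(addr: &IpAddr) -> String {
    match addr {
        IpAddr::V4(a, b, c, d) => format!("IPv4: {}.{}.{}.{}", a, b, c, d),
        IpAddr::V6(s) => format!("IPv6: {}", s),
    }
}

fn main() {
    let home = IpAddr::V4(127, 0, 0, 1);
    let loopback = IpAddr::V6(String::from("::1"));
    println!("{}", describe(&home));     // IPv4: 127.0.0.1
    println!("{}", describe(&loopback)); // IPv6: ::1
}
```

The match is exhaustive: the compiler rejects the function if a variant is left unhandled, which is a large part of what makes enums safer than loose constants.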

Advanced Data Structures

Structs

Structs in Rust are used to create custom data types by combining multiple other types. Structs are crucial for practical Rust programming, allowing you to encapsulate related data into one cohesive unit.

#![allow(unused)]
fn main() {
struct Person {
    name: String,
    age: u8,
}
impl Person {
    fn greet(&self) {
        println!("Hello, my name is {} and I am {} years old.", self.name, self.age);
    }
}

let p = Person {
    name: String::from("Alice"),
    age: 30,
};
p.greet();
}

Linked Lists

Rust's standard library provides two list-like collections: LinkedList, a doubly linked list, and VecDeque, a growable ring buffer. Both support efficient insertion and removal at either end; VecDeque is usually the better default because its contiguous storage is more cache-friendly.

#![allow(unused)]
fn main() {
use std::collections::LinkedList;

let mut list: LinkedList<u32> = LinkedList::new();
list.push_back(1);
list.push_back(2);
list.push_front(0);
println!("{:?}", list); // Outputs: [0, 1, 2]
}

Stacks

A stack is a last in, first out (LIFO) data structure. In Rust, you can implement a stack using a Vec and push and pop elements from the end of the vector.

struct Stack {
    items: Vec<i32>,
}

impl Stack {
    fn push(&mut self, item: i32) {
        self.items.push(item);
    }

    fn pop(&mut self) -> Option<i32> {
        self.items.pop()
    }
}

fn main() {
    let mut stack = Stack { items: vec![] };
    stack.push(1);
    stack.push(2);
    stack.push(3);

    while let Some(item) = stack.pop() {
        println!("{}", item);
    }
}

Queues

A queue is a first in, first out (FIFO) data structure. In Rust, you can implement a queue using a VecDeque from the standard library.

use std::collections::VecDeque;

fn main() {
    let mut queue = VecDeque::new();
    queue.push_back(1);
    queue.push_back(2);
    queue.push_back(3);

    while let Some(item) = queue.pop_front() {
        println!("{}", item);
    }
}

Alignment and Padding

Rust

Rust inserts padding so that each field in a struct lands on a memory address that is a multiple of the field's alignment requirement. This is done to optimize memory access and to avoid the performance penalties (or, on some architectures, hardware faults) caused by misaligned access.

Consider the following example:

use std::mem;
use std::time::{Duration, Instant};

// The #[repr(C)] attribute in Rust specifies the memory 
// layout of a struct to be compatible with C.
// Otherwise, the compiler is free to reorder fields 
// for better alignment and padding.

#[repr(C)] 
struct Suboptimal {
    a: u8,   // 1 byte
    b: u64,  // 8 bytes
    c: u16,  // 2 bytes
}

#[repr(C)]
struct Optimized {
    b: u64,  // 8 bytes
    c: u16,  // 2 bytes
    a: u8,   // 1 byte
}

trait Initializable {
   fn new(a: u8, b: u64, c: u16) -> Self;
}

impl Initializable for Suboptimal {
   fn new(a: u8, b: u64, c: u16) -> Self {
       Self { a, b, c }
   }
}

impl Initializable for Optimized {
   fn new(a: u8, b: u64, c: u16) -> Self {
       Self { b, c, a }
   }
}

trait Summable {
   fn sum(&self) -> u64;
}

impl Summable for Suboptimal {
   fn sum(&self) -> u64 {
       self.a as u64 + self.b as u64 + self.c as u64
   }
}

impl Summable for Optimized {
   fn sum(&self) -> u64 {
       self.a as u64 + self.b as u64 + self.c as u64
   }
}

const NUM_STRUCTS: usize = 1_000;

fn populate<T: Initializable>() -> (Vec<T>, Duration) {
   let start = Instant::now();
    let data: Vec<T> = (0..NUM_STRUCTS)
        .map(|i| T::new((i % 256) as u8, i as u64, (i % 65536) as u16))
        .collect();
   let population_time = start.elapsed();
   (data, population_time)
}

fn repeat_population<T: Initializable>(n: usize) -> Vec<Duration> {
   (0..n)
       .map(|_| {
           let (_, duration) = populate::<T>();
           duration
       })
       .collect()
}

fn iterate_and_calculate<T: Summable>(data: &Vec<T>) -> (u64, Duration) {
   let start = Instant::now();
    let total: u64 = data.iter().map(|s| s.sum()).sum();
   let iteration_time = start.elapsed();
   (total, iteration_time)
}

fn repeat_iteration<T: Summable>(data: &Vec<T>, n: usize) -> (u64, Vec<Duration>) {
    (0..n)
        .map(|_| iterate_and_calculate(data))
        .fold((0, Vec::new()), |(total, mut durations), (new_total, duration)| {
            durations.push(duration);
            (total + new_total, durations)
        })
}

fn main() {
   println!("Optimized and Suboptimal Structs both hold 11 Bytes of data.");
   println!("Size of Suboptimal Struct is {} bytes", mem::size_of::<Suboptimal>());
   println!("Size of Optimized Struct is {} bytes", mem::size_of::<Optimized>());
   println!("Field offsets in Suboptimal: {:?}",
            [mem::offset_of!(Suboptimal, a),
             mem::offset_of!(Suboptimal, b),
             mem::offset_of!(Suboptimal, c)]);
   println!("Field offsets in Optimized: {:?}",
            [mem::offset_of!(Optimized, b),
             mem::offset_of!(Optimized, c),
             mem::offset_of!(Optimized, a)]);

    let n: usize = 3;

    let (sub_data, _) = populate::<Suboptimal>();
    let sub_population_time = repeat_population::<Suboptimal>(n);

    let (opt_data, _) = populate::<Optimized>();
    let opt_population_time = repeat_population::<Optimized>(n);

    let (sub_total, sub_iteration_time) = repeat_iteration(&sub_data, n);
    let (opt_total, opt_iteration_time) = repeat_iteration(&opt_data, n);

   println!("Population Time:");
   println!("  Suboptimal: {:?}", sub_population_time);
   println!("  Optimized:  {:?}", opt_population_time);

   println!("Iteration Time:");
   println!("  Suboptimal: {:?}", sub_iteration_time);
   println!("  Optimized:  {:?}", opt_iteration_time);

   assert_eq!(sub_total, opt_total);
}

It is worth noting that the Suboptimal struct has a size of 24 bytes, while the Optimized struct has a size of 16 bytes, even though both hold the same 11 bytes of data. Ordering fields from largest to smallest alignment minimizes padding, shrinking the struct and improving cache utilization.

When running the code, you will typically observe that the Optimized struct performs better in population and iteration times than the Suboptimal struct: the denser layout touches fewer cache lines, improving memory access efficiency. With only 1,000 structs the difference can be small and noisy; increase NUM_STRUCTS for a clearer signal.

The Rust compiler provides the #[repr(C)] attribute to make a struct's memory layout compatible with C, which is essential when interfacing with C libraries or when you need a predictable layout. Without this attribute, the compiler is free to reorder fields to minimize padding, which is usually what you want; #[repr(C)] trades that optimization for layout predictability.
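To see the effect of field reordering in isolation, compare the same field set with and without #[repr(C)]. A minimal sketch, assuming a typical 64-bit target (the struct names here are made up for illustration):

```rust
use std::mem;

#[repr(C)]
struct CLayout {    // fields kept in declaration order
    a: u8,          // offset 0, then 7 bytes of padding
    b: u64,         // offset 8
    c: u16,         // offset 16, then 6 bytes of tail padding
}

struct RustLayout { // default repr: the compiler may reorder fields
    a: u8,
    b: u64,
    c: u16,
}

fn main() {
    // 1 + 7 + 8 + 2 + 6 = 24 bytes under C layout rules
    println!("C layout:    {} bytes", mem::size_of::<CLayout>());
    // Typically 16 bytes: the compiler packs b, c, a to minimize padding
    println!("Rust layout: {} bytes", mem::size_of::<RustLayout>());
}
```

The exact default-layout size is not guaranteed by the language, only that it will never be worse than what alignment requires, so treat the 16-byte figure as typical rather than contractual.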

Algorithms

Big O Notation

Big O notation is a mathematical concept used in computer science to describe the performance or complexity of an algorithm. Specifically, it expresses the upper bound of the time complexity or space complexity in terms of the size of the input data (denoted as \( n \) ). Understanding Big O notation is crucial for evaluating which algorithms are most efficient under different circumstances.

Big O notation helps in understanding the worst-case scenario of an algorithm's running time or space requirement, which is critical for determining the scalability of an algorithm.

The notation is generally expressed as \[ O(f(n)) \] , where \( f(n) \) represents a function that approximates the number of steps (in time complexity) or the amount of memory (in space complexity) in terms of the input size \( n \).

Common Time Complexities

  • Constant Time \( O(1) \) : The execution time or space remains constant regardless of the input size.
  • Logarithmic Time \( O(\log n) \) : The execution time increases logarithmically with the input size.
  • Linear Time \( O(n) \) : The execution time increases linearly with the input size.
  • Linearithmic Time \( O(n \log n) \) : The execution time increases linearly with the input size multiplied by a logarithmic factor.
  • Quadratic Time \( O(n^2) \) : The execution time increases quadratically with the input size.
  • Exponential Time \( O(2^n) \) : The execution time grows exponentially based on the input size.
  • Factorial Time \( O(n!) \) : The execution time grows factorially with the input size.
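The first three classes above can be made concrete with a short sketch (the function names are illustrative):

```rust
// O(1): indexing into a slice touches one element regardless of length.
fn first(v: &[i32]) -> i32 {
    v[0]
}

// O(log n): binary search halves the remaining range on each step.
fn contains_sorted(v: &[i32], target: i32) -> bool {
    v.binary_search(&target).is_ok()
}

// O(n): summing must visit every element exactly once.
fn total(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let v = vec![1, 3, 5, 7, 9];
    println!("{} {} {}", first(&v), contains_sorted(&v, 7), total(&v)); // 1 true 25
}
```

Note that contains_sorted is only O(log n) because the input is sorted; on unsorted data a linear scan, O(n), is the best a single lookup can do.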

Fundamental Algorithms

  • Sorting Algorithms
  • Search Algorithms
  • Recursive Algorithms

Sorting

The organization of an array, or the way its elements are arranged before sorting, can significantly impact the performance and behavior of sorting algorithms. The initial order of elements — whether they are already sorted, randomly distributed, or sorted in reverse — can make a considerable difference in how quickly and efficiently the sorting process completes. Here are some key ways the organization of an array influences different sorting algorithms:

  • Already Sorted Arrays
#![allow(unused)]
fn main() {
let mut already_sorted_numbers = [11, 22, 25, 34, 64, 90];
}
  • Reverse Sorted Arrays
#![allow(unused)]
fn main() {
let mut reverse_sorted_numbers = [90, 64, 34, 25, 22, 11];
}
  • Randomly Distributed Arrays
#![allow(unused)]
fn main() {
let mut randomly_distributed_numbers = [64, 34, 25, 12, 22, 11, 90];
}
  • Nearly Sorted Arrays
#![allow(unused)]
fn main() {
let mut nearly_sorted_numbers = [11, 25, 22, 34, 64, 90];
}
  • Few Unique Elements
#![allow(unused)]
fn main() {
let mut few_unique_numbers = [34, 34, 34, 34, 34, 34];
}

Bubble Sort

Bubble Sort is a straightforward sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which means the list is sorted. Despite being easy to understand and implement, Bubble Sort is not very efficient for large datasets as its average and worst-case complexity is \( O(n^2) \).

Implementation

PHP

function bubbleSort(array &$arr) {
    $n = count($arr);
    do {
        $swapped = false;
        for ($i = 1; $i < $n; $i++) {
            if ($arr[$i - 1] > $arr[$i]) {
                // Swap elements
                $temp = $arr[$i - 1];
                $arr[$i - 1] = $arr[$i];
                $arr[$i] = $temp;
                $swapped = true;
            }
        }
        $n--; // Decrement the count of items to be sorted
    } while ($swapped);
}

// Example usage
$arr = [64, 34, 25, 12, 22, 11, 90];
echo "Original array:\n";
print_r($arr);

bubbleSort($arr);
echo "Sorted array:\n";
print_r($arr);

Go

package main

import "fmt"

func bubbleSort(slice []int) {
    n := len(slice)
    for i := 0; i < n-1; i++ { // Outer loop for multiple passes
        swapped := false
        for j := 0; j < n-i-1; j++ { // Inner loop for each pass
            if slice[j] > slice[j+1] {
                // Swap elements if they are in the wrong order
                slice[j], slice[j+1] = slice[j+1], slice[j]
                swapped = true
            }
        }
        // If no elements were swapped, the slice is sorted
        if !swapped {
            break
        }
    }
}

func main() {
    slice := []int{64, 34, 25, 12, 22, 11, 90}
    fmt.Println("Original slice:", slice)
    bubbleSort(slice)
    fmt.Println("Sorted slice:", slice)
}

Rust

Implementing Bubble Sort in Rust is simple. The following code snippet demonstrates how to sort an array of integers using Bubble Sort in Rust. The bubble_sort function takes a mutable reference to an array of integers and sorts it in place. The main function creates an array of integers, prints the original array, sorts it using Bubble Sort, and then prints the sorted array.

use std::time::Instant;

fn bubble_sort(arr: &mut [i32]) {
    let mut n = arr.len();
    let mut swapped = true;

    while swapped {
        swapped = false;
        for i in 1..n {
            if arr[i - 1] > arr[i] {
                arr.swap(i - 1, i);
                swapped = true;
            }
        }
        n -= 1; 
    }
}

fn sort_with_info(description: &str, arr: &mut [i32]){
    let start = Instant::now();
    println!("{}: {:?}", description, arr);
    bubble_sort(arr);
    println!("Sorted Result: {:?}", arr);
    println!("Time elapsed is: {:?}", start.elapsed());
    println!("");
}

fn main() {
    let mut already_sorted_numbers = [11, 22, 25, 34, 64, 90];
    sort_with_info("Already Sorted", &mut already_sorted_numbers);
    let mut reverse_sorted_numbers = [90, 64, 34, 25, 22, 11];
    sort_with_info("Reverse Sorted", &mut reverse_sorted_numbers);
    let mut randomly_distributed_numbers = [64, 34, 25, 12, 22, 11, 90];
    sort_with_info("Randomly Distributed", &mut randomly_distributed_numbers);
    let mut nearly_sorted_numbers = [11, 25, 22, 34, 64, 90];
    sort_with_info("Nearly Sorted", &mut nearly_sorted_numbers);
    let mut few_unique_numbers = [34, 34, 34, 34, 34, 34];
    sort_with_info("Few Unique", &mut few_unique_numbers);
}

Quick Sort

Quick Sort is a comparison-based sorting algorithm that uses a divide-and-conquer strategy. On average it runs in \( O(n \log n) \) time, though its worst case is \( O(n^2) \), which occurs when the pivots split the array badly (for example, an already sorted input with a last-element pivot). Quick Sort works by selecting a pivot element from the array and partitioning the remaining elements into two sub-arrays according to whether they are less than or greater than the pivot; the sub-arrays are then recursively sorted.

Rust

use std::time::Instant;

fn quicksort(arr: &mut [i32]) {
    // Base case: arrays with less than 2 elements are already sorted
    if arr.len() <= 1 {
        return;
    }

    // Partitioning phase
    let pivot_index = partition(arr);

    // Recursively apply the same logic to the left and right subarrays
    quicksort(&mut arr[0..pivot_index]);
    quicksort(&mut arr[pivot_index + 1..]);
}

fn partition(arr: &mut [i32]) -> usize {
    let pivot = arr[arr.len() - 1];  // Using the last element as the pivot
    let mut i = 0;

    for j in 0..arr.len() - 1 {
        if arr[j] <= pivot {
            arr.swap(i, j);
            i += 1;
        }
    }

    // Swap the pivot element to its correct position
    arr.swap(i, arr.len() - 1);
    i  // Return the index of the pivot element
}

fn sort_with_info(description: &str, arr: &mut [i32]){
    let start = Instant::now();
    println!("{}: {:?}", description, arr);
    quicksort(arr);
    println!("Sorted Result: {:?}", arr);
    println!("Time elapsed is: {:?}", start.elapsed());
    println!("");
}

fn main() {
    let mut sorted = vec![11, 22, 25, 34, 64, 90];
    sort_with_info("Already Sorted", &mut sorted);
    
    let mut reverse_sorted = vec![90, 64, 34, 25, 22, 11];
    sort_with_info("Reverse Sorted", &mut reverse_sorted);
    
    // Randomly distributed array
    let mut random = vec![64, 34, 25, 12, 22, 11, 90];
    sort_with_info("Randomly Distributed", &mut random);
    
    // Nearly sorted array
    let mut nearly_sorted = vec![11, 25, 22, 34, 64, 90];
    sort_with_info("Nearly Sorted", &mut nearly_sorted);
    
    // Few unique elements
    let mut few_unique = vec![34, 34, 34, 34, 34, 34];
    sort_with_info("Few Unique", &mut few_unique);
}

PHP

function quickSort(&$arr, $left = 0, $right = null) {
    // If right is null, we're considering the whole array at the start.
    if ($right === null) {
        $right = count($arr) - 1;
    }

    if ($left < $right) {
        // Partition the array and get the pivot index.
        $pivotIndex = partition($arr, $left, $right);

        // Sort the sub-array to the left of the pivot.
        quickSort($arr, $left, $pivotIndex - 1);

        // Sort the sub-array to the right of the pivot.
        quickSort($arr, $pivotIndex + 1, $right);
    }
}

function partition(&$arr, $left, $right) {
    $pivot = $arr[$right];  // Using the last element as the pivot.
    $i = $left - 1;

    for ($j = $left; $j < $right; $j++) {
        // If the current element is smaller than or equal to pivot,
        // swap it with the element at $i.
        if ($arr[$j] <= $pivot) {
            $i++;
            swap($arr, $i, $j);
        }
    }

    // Swap the pivot element with the element at i+1 so that the pivot
    // is now at its correct sorted position.
    swap($arr, $i + 1, $right);

    return $i + 1;  // Return the index position of the pivot.
}

function swap(&$arr, $i, $j) {
    $temp = $arr[$i];
    $arr[$i] = $arr[$j];
    $arr[$j] = $temp;
}

$array = [64, 34, 25, 12, 22, 11, 90];
quickSort($array);
print_r($array);

Radix Sort

Radix Sort is a non-comparative integer sorting algorithm that sorts data with integer keys by grouping keys by individual digits that share the same significant position and value. It runs in \( O(d \cdot (n + k)) \) time, where \( d \) is the number of digits and \( k \) the radix, which is effectively linear when the key length is bounded. It is well suited to sorting large sets of non-negative integers with a fixed number of digits (the implementation below assumes non-negative input).

use std::time::Instant;

fn radix_sort(arr: &mut [i32]) {
    // Here we define the radix of the number system, which is 10 for decimal numbers
    let radix = 10;  
    let mut done = false;
    // Starting with the least significant digit
    let mut digit_position = 1;  

    while !done {
        // Assume the loop is done unless we find an element with a higher place value
        done = true;  
        // Create buckets for each digit from 0 to 9
        let mut buckets: Vec<Vec<i32>> = vec![vec![]; radix as usize];  

        // Place each number in the corresponding bucket based on its current digit
        for &number in arr.iter() {
            let digit = (number / digit_position) % radix;
            buckets[digit as usize].push(number);
            // A nonzero digit at this position means more passes are needed
            if done && digit > 0 {
                done = false;
            }
        }

        // Collect numbers back from buckets to array
        let mut idx = 0;
        for bucket in &buckets {
            for &number in bucket {
                arr[idx] = number;
                idx += 1;
            }
        }

        // Move to the next more significant digit
        digit_position *= radix;  
    }
}

fn sort_with_info(description: &str, arr: &mut [i32]){
    let start = Instant::now();
    println!("{}: {:?}", description, arr);
    radix_sort(arr);
    println!("Sorted Result: {:?}", arr);
    println!("Time elapsed is: {:?}", start.elapsed());
    println!("");
}

fn main() {
    let mut sorted = vec![11, 22, 25, 34, 64, 90];
    sort_with_info("Already Sorted", &mut sorted);
    
    let mut reverse_sorted = vec![90, 64, 34, 25, 22, 11];
    sort_with_info("Reverse Sorted", &mut reverse_sorted);
    
    // Randomly distributed array
    let mut random = vec![64, 34, 25, 12, 22, 11, 90];
    sort_with_info("Randomly Distributed", &mut random);
    
    // Nearly sorted array
    let mut nearly_sorted = vec![11, 25, 22, 34, 64, 90];
    sort_with_info("Nearly Sorted", &mut nearly_sorted);
    
    // Few unique elements
    let mut few_unique = vec![34, 34, 34, 34, 34, 34];
    sort_with_info("Few Unique", &mut few_unique);
}

Breadth-First Search

What is the shortest path to reach X?

Breadth-first search (BFS) is an algorithm used for traversing tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a 'search key') and explores the neighbor nodes first, before moving to the next level neighbors.

BFS explores the nodes one level at a time, following a breadthward motion. It uses a queue to keep track of the nodes that still need to be explored; because nodes are processed in order of discovery, the first time the target is reached is guaranteed to be via a shortest path (fewest edges) in an unweighted graph.

#![allow(unused)]
fn main() {
use std::collections::VecDeque;

fn bfs(graph: &Vec<Vec<usize>>, start: usize, end: usize) -> Option<Vec<usize>> {
    let mut queue = VecDeque::new();
    let mut visited = vec![false; graph.len()];
    let mut parent = vec![0; graph.len()];

    queue.push_back(start);
    visited[start] = true;

    while !queue.is_empty() {
        let node = queue.pop_front().unwrap();

        if node == end {
            let mut path = vec![];
            let mut at = end;

            while at != start {
                path.push(at);
                at = parent[at];
            }

            path.push(start);
            path.reverse();
            return Some(path);
        }

        for &neighbour in &graph[node] {
            if !visited[neighbour] {
                visited[neighbour] = true;
                parent[neighbour] = node;
                queue.push_back(neighbour);
            }
        }
    }

    None
}

}
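A self-contained usage sketch of the same BFS shortest-path idea, run against a small made-up graph:

```rust
use std::collections::VecDeque;

// Shortest path (fewest edges) from `start` to `end` via BFS,
// mirroring the function above; the graph is an adjacency list.
fn bfs(graph: &[Vec<usize>], start: usize, end: usize) -> Option<Vec<usize>> {
    let mut queue = VecDeque::from([start]);
    let mut visited = vec![false; graph.len()];
    let mut parent = vec![usize::MAX; graph.len()];
    visited[start] = true;

    while let Some(node) = queue.pop_front() {
        if node == end {
            // Walk parent links back to the start, then reverse.
            let mut path = vec![end];
            let mut at = end;
            while at != start {
                at = parent[at];
                path.push(at);
            }
            path.reverse();
            return Some(path);
        }
        for &n in &graph[node] {
            if !visited[n] {
                visited[n] = true;
                parent[n] = node;
                queue.push_back(n);
            }
        }
    }
    None
}

fn main() {
    // 0 - 1 - 3
    //  \  |   |
    //   \ 2 - 4
    let graph = vec![
        vec![1, 2],    // neighbours of 0
        vec![0, 2, 3], // neighbours of 1
        vec![0, 1, 4], // neighbours of 2
        vec![1, 4],    // neighbours of 3
        vec![2, 3],    // neighbours of 4
    ];
    println!("{:?}", bfs(&graph, 0, 4)); // Some([0, 2, 4])
}
```

The node indices and edges are invented for illustration; the point is that BFS finds 0 → 2 → 4 (two edges) rather than the longer 0 → 1 → 3 → 4 route.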

Design Patterns

Design patterns are standard approaches to solving frequent issues in software design. They act as ready-made templates that you can adapt to address specific recurring challenges in your programming projects.

Creational

Creational design patterns provide various object creation mechanisms, increasing flexibility and reuse by creating objects in a manner suited to the situation.

Behavioral

Behavioral design patterns focus on communication between objects, defining how objects interact and how responsibilities are distributed among them.

Structural

Structural design patterns deal with object composition, defining how objects and classes can be assembled into larger, flexible structures.

Creational

Creational design patterns deal with object creation mechanisms, trying to create objects in a manner suitable to the situation. The basic form of object creation can result in design problems or added complexity; creational patterns solve this by taking explicit control of how objects are created.

Creational design patterns are composed of two dominant ideas. One is encapsulating knowledge about which concrete classes the system uses. Another is hiding how instances of these concrete classes are created and combined.

List of Creational Design Patterns

  1. Abstract Factory
  2. Builder
  3. Factory Method
  4. Prototype
  5. Singleton
  6. Multiton
  7. Object Pool
  8. Lazy Initialization

Abstract Factory

The Abstract Factory pattern is a creational design pattern that allows you to produce families of related objects without specifying their concrete classes. This pattern is particularly useful when a system needs to be independent of how its products are created, composed, and represented. It also provides a way to encapsulate a group of individual factories that have a common theme without specifying their concrete classes.

Implementation in Rust

In Rust, implementing the Abstract Factory pattern involves using traits to define an interface for creating an abstract product and implementing this interface in different factories to create concrete products. Here’s a detailed example to demonstrate how to implement the Abstract Factory pattern in Rust.

trait Button {
    fn click(&self);
}

trait Checkbox {
    fn check(&self);
}

struct WinButton;
struct MacButton;

impl Button for WinButton {
    fn click(&self) {
        println!("Windows Button clicked");
    }
}

impl Button for MacButton {
    fn click(&self) {
        println!("Mac Button clicked");
    }
}

struct WinCheckbox;
struct MacCheckbox;

impl Checkbox for WinCheckbox {
    fn check(&self) {
        println!("Windows Checkbox checked");
    }
}

impl Checkbox for MacCheckbox {
    fn check(&self) {
        println!("Mac Checkbox checked");
    }
}

trait GUIFactory {
    fn create_button(&self) -> Box<dyn Button>;
    fn create_checkbox(&self) -> Box<dyn Checkbox>;
}

struct WinFactory;
struct MacFactory;

impl GUIFactory for WinFactory {
    fn create_button(&self) -> Box<dyn Button> {
        Box::new(WinButton)
    }

    fn create_checkbox(&self) -> Box<dyn Checkbox> {
        Box::new(WinCheckbox)
    }
}

impl GUIFactory for MacFactory {
    fn create_button(&self) -> Box<dyn Button> {
        Box::new(MacButton)
    }

    fn create_checkbox(&self) -> Box<dyn Checkbox> {
        Box::new(MacCheckbox)
    }
}

fn build_ui(factory: &dyn GUIFactory) {
    let button = factory.create_button();
    let checkbox = factory.create_checkbox();

    button.click();
    checkbox.check();
}

fn main() {
    let win_factory = WinFactory;
    let mac_factory = MacFactory;

    println!("Test UI with Windows Factory:");
    build_ui(&win_factory);

    println!("Test UI with Mac Factory:");
    build_ui(&mac_factory);
}

Builder

The Builder pattern is a creational design pattern used to construct complex objects step by step. It allows you to produce different types and representations of an object using the same construction code. This pattern is particularly useful in Rust when you need to build objects that have several optional or configurable properties.

Implementation in Rust

In Rust, implementing the Builder pattern involves defining a builder struct that is responsible for constructing the object. The builder struct contains methods to set the optional properties of the object and a build method to create the final object. Here's an example to demonstrate how to implement the Builder pattern in Rust.

struct Car {
    make: String,
    model: String,
    color: Option<String>,
    transmission: Option<String>,
    convertible: Option<bool>,
}
struct CarBuilder {
    make: String,
    model: String,
    color: Option<String>,
    transmission: Option<String>,
    convertible: Option<bool>,
}

impl CarBuilder {
    // Constructor to start the building process
    fn new(make: &str, model: &str) -> Self {
        CarBuilder {
            make: make.to_string(),
            model: model.to_string(),
            color: None,
            transmission: None,
            convertible: None,
        }
    }

    // Method to set the color of the car
    fn color(mut self, color: &str) -> Self {
        self.color = Some(color.to_string());
        self
    }

    // Method to set the transmission type
    fn transmission(mut self, transmission: &str) -> Self {
        self.transmission = Some(transmission.to_string());
        self
    }

    // Method to specify if it is a convertible
    fn convertible(mut self, convertible: bool) -> Self {
        self.convertible = Some(convertible);
        self
    }

    // Final method to build the Car
    fn build(self) -> Car {
        Car {
            make: self.make,
            model: self.model,
            color: self.color,
            transmission: self.transmission,
            convertible: self.convertible,
        }
    }
}

fn main() {
    let car = CarBuilder::new("Toyota", "Corolla")
        .color("red")
        .transmission("manual")
        .convertible(true)
        .build();

    println!("Car Make: {}", car.make);
    println!("Car Model: {}", car.model);
    println!("Car Color: {:?}", car.color);
    println!("Car Transmission: {:?}", car.transmission);
    println!("Car Convertible: {:?}", car.convertible);
}

Factory Method

Factory Method is a creational design pattern that provides an interface for creating objects in a superclass, but allows subclasses to alter the type of objects that will be created.

The Factory Method pattern is useful when you need to create an object that is part of a family of related objects. It allows you to create objects without specifying the exact class of object that will be created. This pattern is particularly useful when you need to create objects that share a common interface but have different implementations.

Implementation in Rust

In Rust, implementing the Factory Method pattern involves defining a trait that serves as the factory interface and implementing this trait in different concrete factories to create specific objects. Here's an example to demonstrate how to implement the Factory Method pattern in Rust.

trait Vehicle {
    fn drive(&self);
}

struct Car;
impl Vehicle for Car {
    fn drive(&self) {
        println!("The car is driving.");
    }
}

struct Bike;
impl Vehicle for Bike {
    fn drive(&self) {
        println!("The bike is pedaling.");
    }
}

trait VehicleFactory {
    fn create_vehicle(&self) -> Box<dyn Vehicle>;
}

struct CarFactory;
impl VehicleFactory for CarFactory {
    fn create_vehicle(&self) -> Box<dyn Vehicle> {
        Box::new(Car)
    }
}

struct BikeFactory;
impl VehicleFactory for BikeFactory {
    fn create_vehicle(&self) -> Box<dyn Vehicle> {
        Box::new(Bike)
    }
}

fn main() {
    let car_factory = CarFactory;
    let bike_factory = BikeFactory;

    let car = car_factory.create_vehicle();
    let bike = bike_factory.create_vehicle();

    car.drive();
    bike.drive();
}

Prototype

The Prototype pattern is a creational design pattern that allows cloning objects, even complex ones, without coupling to their specific classes. It is useful when the creation of an object is expensive or complex, and the object is similar to an existing object.

The Prototype pattern is particularly useful in Rust when you need to create objects that are similar to existing objects but have different configurations. It allows you to clone an existing object and modify its properties without creating a new object from scratch.

Implementation in Rust

In Rust, implementing the Prototype pattern involves defining a struct that serves as the prototype and implementing the Clone trait for that struct. The Clone trait provides a clone method that creates a new object with the same properties as the original object. Here's an example to demonstrate how to implement the Prototype pattern in Rust.

#[derive(Clone)]
struct Circle {
    pub x: u32,
    pub y: u32,
    pub radius: u32,
}

fn main() {
    let circle1 = Circle {
        x: 10,
        y: 15,
        radius: 10,
    };

    // Prototype in action.
    let mut circle2 = circle1.clone();
    circle2.radius = 77;

    println!("Circle 1: {}, {}, {}", circle1.x, circle1.y, circle1.radius);
    println!("Circle 2: {}, {}, {}", circle2.x, circle2.y, circle2.radius);
}

Singleton

The Singleton pattern is a creational design pattern that ensures a class has only one instance and provides a global point of access to it. It is useful when you need to restrict the instantiation of a class to a single object and provide a single point of access to a resource.

Implementation in Rust

A purely safe way to implement a singleton in Rust is to use no global variables at all and pass everything around through function arguments. The longest-living variable is then an object created at the start of main().

fn change(global_state: &mut u32) {
    *global_state += 1;
}

fn main() {
    let mut global_state = 0u32;

    change(&mut global_state);
    println!("Current state: {}", global_state);

    change(&mut global_state);
    println!("Final state: {}", global_state);
}

Using mutex

Starting with Rust 1.63, it can be easier to work with global mutable singletons, although it's still preferable to avoid global variables in most cases.

Now that Mutex::new is const, you can use global static Mutex locks without needing lazy initialization.

use std::sync::Mutex;

static ARRAY: Mutex<Vec<i32>> = Mutex::new(Vec::new());

fn do_a_call() {
    ARRAY.lock().unwrap().push(1);
}

fn main() {
    do_a_call();
    do_a_call();
    do_a_call();

    println!("Called {} times", ARRAY.lock().unwrap().len());
}

Implementation in Go

Using sync.Once

Go's sync.Once type (from the sync package) offers a thread-safe way to initialize a value exactly once. This makes it a natural fit for the Singleton pattern, eliminating the need for explicit locking.

package singleton

import "sync"

type singleton struct{}

var instance *singleton
var once sync.Once

func GetInstance() *singleton {
    once.Do(func() {
        instance = &singleton{}
    })
    return instance
}

Using mutex

The sync.Mutex provides a locking mechanism to ensure that only one goroutine can access the critical section of code at a time, which is useful for creating and managing the singleton instance safely.

package singleton

import (
    "fmt"
    "sync"
)

type singleton struct {}

var instance *singleton
var lock = &sync.Mutex{}

func getInstance() *singleton {
    if instance == nil {
        // Only one goroutine at a time can go next
        lock.Lock()
        defer lock.Unlock()

        if instance == nil {
            fmt.Println("Instance created")
            instance = &singleton{}
        } else {
            fmt.Println("Instance exists")
        }
    } else {
        fmt.Println("Instance exists")
    }
    return instance
}

Note that the unsynchronized outer instance == nil check is a data race under the Go memory model; the sync.Once version above is the safer choice in practice.

Multiton

The multiton pattern is a variation of the singleton pattern, designed to support multiple instances of a single class, each identified by a unique key. This approach is advantageous in scenarios where a singleton's single-instance limitation is too restrictive, but where the total number of instances is finite and predetermined.

Like the singleton, the multiton employs a private constructor and a static method to manage its instances. The key difference lies in the static method, which accepts a unique identifier as an argument. This identifier determines whether a corresponding instance already exists: if it does, that instance is returned; if not, a new instance is created, stored, and then returned.

To implement the multiton, define a class with a private constructor to block direct instantiation, plus a static method that takes a unique identifier. The method either returns the existing instance for that identifier or creates, stores, and returns a new one, guaranteeing a single instance per key. This keeps instance creation controlled and is useful when you need several instances of a class but want to cap their number to prevent excessive resource use.

Example Implementation in Rust

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

#[derive(Debug, Clone)]
struct MyType {
    id: String,
    data: String,
}

struct Multiton {
    instances: Mutex<HashMap<String, Arc<MyType>>>,
}

impl Multiton {
    // Initializes a new Multiton manager
    fn new() -> Multiton {
        Multiton {
            instances: Mutex::new(HashMap::new()),
        }
    }

    // Retrieves an existing instance or creates a new one if it does not exist
    fn get_instance(&self, id: &str) -> Arc<MyType> {
        let mut instances = self.instances.lock().unwrap();
        instances.entry(id.to_string()).or_insert_with(|| {
            Arc::new(MyType {
                id: id.to_string(),
                data: format!("Data for {}", id),
            })
        }).clone()
    }
}
fn main() {
    let multiton = Multiton::new();

    // Retrieve an instance with ID "instance1"
    let instance1 = multiton.get_instance("instance1");
    println!("Retrieved: {:?}", instance1);

    // Retrieve the same instance again to demonstrate that it returns the same object
    let same_instance1 = multiton.get_instance("instance1");
    println!("Retrieved again: {:?}", same_instance1);

    // Retrieve a different instance with ID "instance2"
    let instance2 = multiton.get_instance("instance2");
    println!("Retrieved: {:?}", instance2);
}


Object Pool

Object Pool is a creational design pattern that provides a pool of objects that clients can reuse and recycle. It is useful when the cost of creating an object is high, and the number of objects in use is high.

Object Pool maintains a pool of reusable objects and manages the creation, destruction, and reuse of these objects. When a client requests an object, the Object Pool checks if there are any available objects in the pool. If an object is available, it is returned to the client; otherwise, a new object is created and added to the pool.

Object Pool can improve performance by reducing the overhead of creating and destroying objects. It can also help in managing limited resources efficiently by recycling objects instead of creating new ones.

Example Implementation in Rust

use std::sync::{Arc, Mutex};
use std::collections::VecDeque;

struct PoolObject {
    id: usize, // Example property
}

struct ObjectPool {
    available: Mutex<VecDeque<Arc<PoolObject>>>,
}

impl ObjectPool {
    // Initialize the ObjectPool with a given number of pre-allocated objects
    fn new(size: usize) -> ObjectPool {
        let mut objects = VecDeque::with_capacity(size);
        for i in 0..size {
            objects.push_back(Arc::new(PoolObject { id: i }));
        }
        ObjectPool {
            available: Mutex::new(objects),
        }
    }

    // Retrieve an object from the pool
    fn get(&self) -> Option<Arc<PoolObject>> {
        let mut pool = self.available.lock().unwrap();
        pool.pop_front()
    }

    // Return an object to the pool
    fn put(&self, item: Arc<PoolObject>) {
        let mut pool = self.available.lock().unwrap();
        pool.push_back(item);
    }
}

fn main() {
    let pool = ObjectPool::new(2); // Create a pool with 2 objects

    // Get an object from the pool
    let object1 = pool.get().expect("No objects available");
    println!("Got object with ID: {}", object1.id);

    // Get another object
    let object2 = pool.get().expect("No objects available");
    println!("Got object with ID: {}", object2.id);

    // Return objects to the pool
    pool.put(object1);
    pool.put(object2);

    // Retrieve an object again to demonstrate reuse
    let object3 = pool.get().expect("No objects available");
    println!("Got object with ID: {}", object3.id);
}

Lazy Initialization

Lazy initialization is a design pattern that defers the creation of an object until it is first used. It is a type of creational pattern that is used to improve performance and resource utilization. The lazy initialization pattern is useful when the cost of object creation is high and the object is not always required.

Lazy initialization can be implemented in various ways, such as using a private constructor, a static method, or a closure to create the object when it is first accessed. This pattern ensures that the object is created only when it is needed, reducing unnecessary resource consumption.

Example Implementation in Rust

Using std::sync::OnceLock (stable since Rust 1.70), the value is created on first access and cached for every later call:

use std::sync::OnceLock;

static CONFIG: OnceLock<String> = OnceLock::new();

fn config() -> &'static str {
    // The closure runs only on the first call; later calls return the cached value
    CONFIG.get_or_init(|| {
        println!("Initializing configuration...");
        String::from("debug=false")
    })
}

fn main() {
    println!("{}", config()); // triggers initialization
    println!("{}", config()); // reuses the cached value
}

Behavioral

Behavioral design patterns are design patterns that identify common communication patterns between objects and realize these patterns. By doing so, these patterns increase flexibility in carrying out this communication.

Behavioral patterns are concerned with the assignment of responsibilities between objects and how they communicate. Objects should be able to interact while remaining loosely coupled; this loose coupling is the key to keeping the system maintainable and scalable.

List of Behavioral Design Patterns

  1. Strategy
  2. Observer
  3. Chain of Responsibility
  4. Command
  5. State
  6. Template Method
  7. Visitor
  8. Interpreter
  9. Memento
  10. Mediator
  11. Iterator

Strategy

The strategy pattern is a behavioral design pattern that enables selecting an algorithm at runtime. It defines a family of algorithms, encapsulates each algorithm, and makes the algorithms interchangeable within that family.

The strategy pattern is particularly useful in Rust when you need to define a set of algorithms and choose one of them at runtime. It allows you to encapsulate the algorithm in a separate struct and switch between different algorithms without changing the client code.

Implementation in Rust

In Rust, implementing the strategy pattern involves defining a trait that represents the strategy and implementing this trait for different concrete strategies. Here's an example to demonstrate how to implement the strategy pattern in Rust.

trait SortStrategy {
    fn sort(&self, data: &mut Vec<i32>);
}

struct BubbleSort;
impl SortStrategy for BubbleSort {
    fn sort(&self, data: &mut Vec<i32>) {
        for i in 0..data.len() {
            for j in 0..data.len() - 1 {
                if data[j] > data[j + 1] {
                    data.swap(j, j + 1);
                }
            }
        }
    }
}

struct QuickSort;

impl SortStrategy for QuickSort {
    fn sort(&self, data: &mut Vec<i32>) {
        if data.len() <= 1 {
            return;
        }

        let pivot = data.pop().unwrap();
        let mut left = Vec::new();
        let mut right = Vec::new();

        for &x in data.iter() {
            if x < pivot {
                left.push(x);
            } else {
                right.push(x);
            }
        }

        self.sort(&mut left);
        self.sort(&mut right);

        data.clear();
        data.extend(left);
        data.push(pivot);
        data.extend(right);
    }
}

struct Sorter {
    strategy: Box<dyn SortStrategy>,
}

impl Sorter {
    fn new(strategy: Box<dyn SortStrategy>) -> Self {
        Sorter { strategy }
    }

    fn sort(&self, data: &mut Vec<i32>) {
        self.strategy.sort(data);
    }
}

fn main() {
    let mut data = vec![3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5];
    let bubble_sort = Box::new(BubbleSort);
    let quick_sort = Box::new(QuickSort);

    let sorter = Sorter::new(bubble_sort);
    sorter.sort(&mut data);
    println!("Bubble sorted: {:?}", data);

    let mut data = vec![3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5];
    let sorter = Sorter::new(quick_sort);
    sorter.sort(&mut data);
    println!("Quick sorted: {:?}", data);
}

Further Reading

Strategy Pattern by Immaculate Coder

Observer

The Observer pattern is a behavioral design pattern that allows an object (called the subject) to notify other objects (called observers) when the subject's state changes.

The Observer pattern is particularly useful when you need to notify multiple objects about changes in another object. It allows you to define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.

Chain of Responsibility

Welcome to the Chain of Responsibility pattern. This pattern is a behavioral design pattern that allows an object to pass a request along a chain of handlers. Upon receiving a request, each handler decides either to process the request or to pass it along the chain.

The Chain of Responsibility pattern is particularly useful when the system needs to decouple senders and receivers of requests. It allows multiple objects to handle a request without the sender needing to know which object will process it.

Command

The Command pattern is a behavioral design pattern that turns a request into a stand-alone object. This object contains all the information about the request, including the method to call, the method arguments, and the object that owns the method. This decouples the sender from the receiver, allowing you to parameterize clients with queues, requests, and operations.

The Command pattern is used to encapsulate a request as an object, thereby allowing for parameterization of clients with different requests, queues, and operations. This pattern also supports undoable operations, logging, and transactional systems.

State

The state pattern is a behavioral pattern that allows an object to change its behavior when its internal state changes. The object will appear to change its class.

The state pattern is useful when an object's behavior is dependent on its state and it must change its behavior at runtime depending on that state.

OOP-like way

Rust does not have a built-in state pattern, but it can be implemented using traits.

pub struct Post {
    state: Option<Box<dyn State>>,
    content: String,
}

impl Post {
    pub fn new() -> Post {
        Post {
            state: Some(Box::new(Draft {})),
            content: String::new(),
        }
    }

    // not part of state pattern
    pub fn add_text(&mut self, text: &str) {
        self.content.push_str(text);
    }

    pub fn content(&self) -> &str {
        self.state.as_ref().unwrap().content(&self)
    }

    pub fn request_review(&mut self) {
        if let Some(s) = self.state.take() {
            self.state = Some(s.request_review())
        }
    }

    pub fn approve(&mut self) {
        if let Some(s) = self.state.take() {
            self.state = Some(s.approve())
        }
    }
}

trait State {
    fn request_review(self: Box<Self>) -> Box<dyn State>;
    fn approve(self: Box<Self>) -> Box<dyn State>;
    fn content<'a>(&self, post: &'a Post) -> &'a str {
        ""
    }
}

struct Draft {}

impl State for Draft {
    fn request_review(self: Box<Self>) -> Box<dyn State> {
        Box::new(PendingReview {})
    }

    fn approve(self: Box<Self>) -> Box<dyn State> {
        self
    }
}

struct PendingReview {}

impl State for PendingReview {
    fn request_review(self: Box<Self>) -> Box<dyn State> {
        self
    }

    fn approve(self: Box<Self>) -> Box<dyn State> {
        Box::new(Published {})
    }
}

struct Published {}

impl State for Published {
    fn request_review(self: Box<Self>) -> Box<dyn State> {
        self
    }

    fn approve(self: Box<Self>) -> Box<dyn State> {
        self
    }

    fn content<'a>(&self, post: &'a Post) -> &'a str {
        &post.content
    }
}

fn main() {
    let mut post = Post::new();
    
    post.add_text("I ate a salad for lunch today");
    assert_eq!("", post.content());
    
    post.request_review();
    assert_eq!("", post.content());
    
    post.approve();
    assert_eq!("I ate a salad for lunch today", post.content());
    
    println!("{}", post.content());
}

Functional way

We utilize Rust's ownership system to enforce the state transitions: each state is a separate type, and each transition consumes the old state and returns the new one, so invalid transitions fail to compile.


struct Post {
    content: String,
}

impl Post {
    fn new() -> DraftPost {
        DraftPost {
            content: String::new(),
        }
    }
    
    fn content(&self) -> &str {
        &self.content
    }
}


struct DraftPost {
    content: String,
}


impl DraftPost {
    fn add_text(&mut self, text: &str) {
        self.content.push_str(text);
    }
    
    fn request_review(self) -> PendingReviewPost {
        PendingReviewPost {
            content: self.content,
        }
    }
}

struct PendingReviewPost {
    content: String,
}

impl PendingReviewPost {
    fn approve(self) -> Post {
        Post {
            content: self.content,
        }
    }
}

fn main() {
    let mut post: DraftPost = Post::new();
    
    post.add_text("I ate a salad for lunch today");
    
    let post: PendingReviewPost = post.request_review();
    
    let post: Post = post.approve();
    assert_eq!("I ate a salad for lunch today", post.content());
    
    println!("{}", post.content());
}

Template Method

The Template Method is a behavioral design pattern that defines the program skeleton of an algorithm in the superclass but lets subclasses override specific steps of the algorithm without changing its structure.

The Template Method pattern is particularly useful when you want to define the steps of an algorithm but allow subclasses to provide their own implementations for some of those steps. It allows you to define the structure of an algorithm in the superclass and let subclasses provide concrete implementations for some of the steps.

Visitor

The Visitor pattern is a behavioral design pattern that allows adding new behaviors to existing classes without altering their structure. It is useful when you have to perform operations on a group of similar objects.

The Visitor pattern is particularly useful when you need to perform operations on a group of objects that have different classes. It allows you to define a new operation without changing the classes of the objects on which it operates.

Interpreter

The Interpreter pattern is a behavioral design pattern that defines a grammar for interpreting a language and provides an interpreter to parse the grammar. It is used to define a language grammar and provide a way to interpret sentences in that language.

The Interpreter pattern is particularly useful when you need to interpret a language or define a grammar for a domain-specific language. It allows you to define a language grammar and provide an interpreter to parse and interpret sentences in that language.

Memento

Memento is a behavioral design pattern that lets you save and restore the previous state of an object without revealing the details of its implementation.

Mediator

The Mediator pattern is a behavioral design pattern that defines an object that encapsulates how objects interact with each other. It promotes loose coupling by keeping objects from referring to each other explicitly, and it allows their interaction to vary independently.

The Mediator pattern is used to define an object that encapsulates how a set of objects interact. It promotes loose coupling by keeping objects from referring to each other explicitly, and it allows their interaction to vary independently.

Iterator

The Iterator pattern provides a way to access the elements of an aggregate object sequentially without exposing its underlying representation.

The Iterator pattern is used to provide a standard way to traverse a collection of items without exposing the underlying implementation. It allows you to access the elements of a collection sequentially without needing to know the internal structure of the collection.

// A hand-rolled iterator interface; note this local trait shadows
// std::iter::Iterator within this example
trait Iterator {
    fn has_next(&self) -> bool;
    fn next(&mut self) -> Option<&str>;
}

struct ConcreteIterator {
    index: usize,
    data: Vec<String>,
}

impl ConcreteIterator {
    fn new(data: Vec<String>) -> Self {
        Self { index: 0, data }
    }
}

impl Iterator for ConcreteIterator {
    fn has_next(&self) -> bool {
        self.index < self.data.len()
    }

    fn next(&mut self) -> Option<&str> {
        if self.has_next() {
            let item = &self.data[self.index];
            self.index += 1;
            Some(item)
        } else {
            None
        }
    }
}

fn main() {
    let data = vec!["A".to_string(), "B".to_string(), "C".to_string()];
    let mut iterator = ConcreteIterator::new(data);

    while iterator.has_next() {
        if let Some(item) = iterator.next() {
            println!("{}", item);
        }
    }
}

The same idea is built into the language: implementing the standard library's Iterator trait for a custom type makes it usable with for loops and all iterator adapters.

struct Fibonacci {
    current: u64,
    next: u64,
}

impl Iterator for Fibonacci {
    type Item = u64;

    fn next(&mut self) -> Option<Self::Item> {
        let current = self.current;
        self.current = self.next;
        self.next = current + self.next;
        Some(current)
    }
}

fn fibonacci() -> Fibonacci {
    Fibonacci { current: 0, next: 1 }
}

fn main() {
    let mut fib = fibonacci();
    for i in 0..=10 {
        if let Some(n) = fib.next() {
            println!("{} -> {}", i, n);
        }
    }
}

Structural

Structural design patterns are concerned with how objects are composed to form larger structures. Structural class patterns use inheritance to compose interfaces or implementations. Structural object patterns describe ways to compose objects to realize new functionality.

List of Structural Design Patterns

  1. Adapter
  2. Bridge
  3. Composite
  4. Decorator
  5. Facade
  6. Flyweight
  7. Proxy

Adapter

The Adapter pattern is a structural design pattern that allows objects with incompatible interfaces to collaborate. The adapter wraps an existing object and translates the interface clients expect into calls the wrapped object understands.

trait Target {
    fn request(&self) -> String;
}

struct Adaptee;

impl Adaptee {
    fn specific_request(&self) -> String {
        String::from("Adaptee: specific request")
    }
}

struct Adapter {
    adaptee: Adaptee,
}

impl Adapter {
    fn new(adaptee: Adaptee) -> Self {
        Self { adaptee }
    }
}

impl Target for Adapter {
    fn request(&self) -> String {
        self.adaptee.specific_request()
    }
}

fn client_code(target: &dyn Target) {
    println!("Client: I'm using the Target interface: {}", target.request());
}

fn main() {
    let adapter = Adapter::new(Adaptee);
    client_code(&adapter);
}

Bridge

The Bridge pattern is a structural design pattern that lets you split a large class or a set of closely related classes into two separate hierarchies—abstraction and implementation—which can be developed independently of each other.

trait Implementor {
    fn operation_implementation(&self) -> String;
}

struct ConcreteImplementorA;

impl Implementor for ConcreteImplementorA {
    fn operation_implementation(&self) -> String {
        String::from("ConcreteImplementorA: operation implementation")
    }
}

struct ConcreteImplementorB;

impl Implementor for ConcreteImplementorB {
    fn operation_implementation(&self) -> String {
        String::from("ConcreteImplementorB: operation implementation")
    }
}

trait Abstraction {
    fn operation(&self) -> String;
}

struct RefinedAbstraction {
    implementor: Box<dyn Implementor>,
}

impl Abstraction for RefinedAbstraction {
    fn operation(&self) -> String {
        format!("RefinedAbstraction: operation with {}", self.implementor.operation_implementation())
    }
}

fn client_code(abstraction: &dyn Abstraction) {
    println!("Client: I'm using the Abstraction interface: {}", abstraction.operation());
}

fn main() {
    let concrete_implementor_a = ConcreteImplementorA;
    let concrete_implementor_b = ConcreteImplementorB;

    let refined_abstraction = RefinedAbstraction {
        implementor: Box::new(concrete_implementor_a),
    };
    client_code(&refined_abstraction);

    let refined_abstraction = RefinedAbstraction {
        implementor: Box::new(concrete_implementor_b),
    };
    client_code(&refined_abstraction);
}

Composite

The Composite pattern is a structural design pattern that lets you compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.

trait Component {
    fn operation(&self) -> String;
}

struct Leaf {
    name: String,
}

impl Component for Leaf {
    fn operation(&self) -> String {
        format!("Leaf: {}", self.name)
    }
}

struct Composite {
    children: Vec<Box<dyn Component>>,
}

impl Component for Composite {
    fn operation(&self) -> String {
        let mut result = String::new();
        for child in &self.children {
            if !result.is_empty() {
                result.push_str(", ");
            }
            result.push_str(&child.operation());
        }
        format!("Branch: [{}]", result)
    }
}

fn client_code(component: &dyn Component) {
    println!("Client: I got a composite component:");
    println!("RESULT: {}", component.operation());
}

fn main() {
    let leaf = Leaf {
        name: String::from("Leaf"),
    };
    client_code(&leaf);

    let composite = Composite {
        children: vec![
            Box::new(Leaf {
                name: String::from("Leaf A"),
            }),
            Box::new(Leaf {
                name: String::from("Leaf B"),
            }),
        ],
    };
    client_code(&composite);
}

Decorator

The Decorator pattern is a structural design pattern that allows adding new behaviors to objects dynamically by placing them inside special wrapper objects that contain these behaviors.

trait Component {
    fn operation(&self) -> String;
}

struct ConcreteComponent;

impl Component for ConcreteComponent {
    fn operation(&self) -> String {
        "ConcreteComponent".to_string()
    }
}

struct Decorator {
    component: Box<dyn Component>,
}

impl Decorator {
    fn new(component: Box<dyn Component>) -> Self {
        Self { component }
    }
}

impl Component for Decorator {
    fn operation(&self) -> String {
        format!("Decorator({})", self.component.operation())
    }
}

fn main() {
    let component = Box::new(ConcreteComponent);
    let decorator = Decorator::new(component);
    println!("{}", decorator.operation());
}

The same pattern in PHP:

<?php

interface Component
{
    public function operation(): string;
}

class ConcreteComponent implements Component
{
    public function operation(): string
    {
        return "ConcreteComponent";
    }
}

class Decorator implements Component
{
    private Component $component;

    public function __construct(Component $component)
    {
        $this->component = $component;
    }

    public function operation(): string
    {
        return "Decorator(" . $this->component->operation() . ")";
    }
}

$component = new ConcreteComponent();

$decorator = new Decorator($component);

echo $decorator->operation();

And in Go:

package main

import "fmt"

type Component interface {
    Operation() string
}

type ConcreteComponent struct{}

func (c *ConcreteComponent) Operation() string {
    return "ConcreteComponent"
}

type Decorator struct {
    component Component
}

func (d *Decorator) Operation() string {
    return fmt.Sprintf("Decorator(%s)", d.component.Operation())
}

func NewDecorator(component Component) *Decorator {
    return &Decorator{component: component}
}

func main() {
    component := &ConcreteComponent{}
    decorator := NewDecorator(component)
    fmt.Println(decorator.Operation())
}

Facade

The Facade pattern is a structural design pattern that provides a simplified interface to a complex system of classes, a library, or a framework. It hides the system's complexities behind a single interface through which clients access the functionality they need.

struct Subsystem1;

impl Subsystem1 {
    fn operation1(&self) -> String {
        "Subsystem1: Ready!".to_string()
    }
}

struct Subsystem2;

impl Subsystem2 {
    fn operation1(&self) -> String {
        "Subsystem2: Get ready!".to_string()
    }
}

struct Facade {
    subsystem1: Subsystem1,
    subsystem2: Subsystem2,
}

impl Facade {
    fn new(subsystem1: Subsystem1, subsystem2: Subsystem2) -> Self {
        Self {
            subsystem1,
            subsystem2,
        }
    }

    fn operation(&self) -> String {
        let mut result = "Facade initializes subsystems:\n".to_string();
        result += &self.subsystem1.operation1();
        result += "\n";
        result += &self.subsystem2.operation1();
        result
    }
}

fn main() {
    let subsystem1 = Subsystem1;
    let subsystem2 = Subsystem2;
    let facade = Facade::new(subsystem1, subsystem2);
    println!("{}", facade.operation());
}

Flyweight

Flyweight is a structural design pattern that lets programs support vast quantities of objects by keeping their memory consumption low. It reduces memory usage or computational expense by sharing as much state as possible among similar objects.

use std::collections::HashMap;

struct FlyweightFactory {
    flyweights: HashMap<String, Flyweight>,
}

impl FlyweightFactory {
    fn new() -> Self {
        Self {
            flyweights: HashMap::new(),
        }
    }

    fn get_flyweight(&mut self, key: String) -> Flyweight {
        // Reuse an existing flyweight for this key, or create one on demand.
        self.flyweights
            .entry(key.clone())
            .or_insert_with(|| Flyweight::new(key))
            .clone()
    }
}

#[derive(Clone)]
struct Flyweight {
    key: String,
}

impl Flyweight {
    fn new(key: String) -> Self {
        Self { key }
    }

    fn operation(&self) -> String {
        format!("Flyweight: {}", self.key)
    }
}

fn main() {
    let mut factory = FlyweightFactory::new();
    let flyweight1 = factory.get_flyweight("key1".to_string());
    let flyweight2 = factory.get_flyweight("key2".to_string());
    let flyweight3 = factory.get_flyweight("key1".to_string());

    println!("{}", flyweight1.operation());
    println!("{}", flyweight2.operation());
    println!("{}", flyweight3.operation());
}

Proxy

The Proxy design pattern is a structural pattern that allows you to provide a substitute or placeholder for another object. This proxy manages access to the original object, enabling you to perform actions either before or after the request reaches the original object.

trait Subject {
    fn request(&self) -> String;
}

struct RealSubject;

impl Subject for RealSubject {
    fn request(&self) -> String {
        "RealSubject: Handling request".to_string()
    }
}

struct Proxy {
    real_subject: RealSubject,
}

impl Subject for Proxy {
    fn request(&self) -> String {
        // The proxy can act before and/or after delegating to the real subject.
        format!("Proxy: {}", self.real_subject.request())
    }
}

fn main() {
    let real_subject = RealSubject;
    let proxy = Proxy { real_subject };

    println!("{}", proxy.request());
}

Rust Specific Design Patterns

This section covers design patterns that are specific to the Rust programming language.

Table of Contents

  1. Extension traits
  2. RAII
  3. Builder
  4. Newtype

Extension traits

Extension traits are a way to extend the functionality of a type without modifying the original type. This is useful when you want to add methods to a type that you don't own, such as types from external libraries or the standard library.

An extension trait is declared with the trait keyword like any other trait, and then implemented for the type you want to extend with an impl block. The methods defined in the extension trait can be called on any instance of that type, as long as the trait is in scope.

trait Palindrome {
    fn is_palindrome(&self) -> bool;
}

impl Palindrome for String {
    fn is_palindrome(&self) -> bool {
        let s = self.chars().collect::<Vec<char>>();
        s == s.iter().rev().cloned().collect::<Vec<char>>()
    }
}

fn main() {
    let mut x = String::from("racecar");
    println!("{}", x.is_palindrome());
    x = String::from("hello");
    println!("{}", x.is_palindrome());
}

RAII

RAII stands for Resource Acquisition Is Initialization. It is a programming idiom used in Rust to manage resources. The idiom is based on the idea that resources should be acquired during the initialization of an object and released during the object's destruction. This ensures that resources are always released when they are no longer needed, even in the presence of exceptions.

RAII is a common pattern in Rust and is used to manage resources such as files, network connections, and locks. The idiom is implemented using Rust's ownership system, which ensures that resources are released when they go out of scope.

Example

#![allow(unused)]
fn main() {
use std::ops::Deref;

struct Foo {}

impl Foo {
    fn foo(&self) {}
}

struct Mutex<T> {
    // We keep our data: T here (along with the underlying OS mutex).
    data: T,
    //..
}

struct MutexGuard<'a, T: 'a> {
    data: &'a T,
    //..
}

// Locking the mutex is explicit.
impl<T> Mutex<T> {
    fn lock(&self) -> MutexGuard<T> {
        // Lock the underlying OS mutex.
        //..

        // MutexGuard keeps a reference to the protected data.
        MutexGuard {
            data: &self.data,
            //..
        }
    }
}

// Destructor for unlocking the mutex.
impl<'a, T> Drop for MutexGuard<'a, T> {
    fn drop(&mut self) {
        // Unlock the underlying OS mutex.
        //..
    }
}

// Implementing Deref means we can treat MutexGuard like a pointer to T.
impl<'a, T> Deref for MutexGuard<'a, T> {
    type Target = T;

    fn deref(&self) -> &T {
        self.data
    }
}

fn baz(x: Mutex<Foo>) {
    let xx = x.lock();
    xx.foo(); // foo is a method on Foo.
              // The borrow checker ensures we can't store a reference to the underlying
              // Foo which will outlive the guard xx.

    // x is unlocked when we exit this function and xx's destructor is executed.
}
}

Builder

Builder is a creational design pattern that lets you construct complex objects step by step. The pattern allows you to produce different types and representations of an object using the same construction code.

#[derive(Debug, PartialEq)]
pub struct Foo {
    // Lots of complicated fields.
    bar: String,
}

impl Foo {
    // This method will help users to discover the builder
    pub fn builder() -> FooBuilder {
        FooBuilder::default()
    }
}

#[derive(Default)]
pub struct FooBuilder {
    // Probably lots of optional fields.
    bar: String,
}

impl FooBuilder {
    pub fn new(/* ... */) -> FooBuilder {
        // Set the minimally required fields of Foo.
        FooBuilder {
            bar: String::from("X"),
        }
    }

    pub fn name(mut self, bar: String) -> FooBuilder {
        // Set the name on the builder itself, and return the builder by value.
        self.bar = bar;
        self
    }

    // If we can get away with not consuming the Builder here, that is an
    // advantage. It means we can use the FooBuilder as a template for constructing
    // many Foos.
    pub fn build(self) -> Foo {
        // Create a Foo from the FooBuilder, applying all settings in FooBuilder
        // to Foo.
        Foo { bar: self.bar }
    }
}

#[test]
fn builder_test() {
    let foo = Foo {
        bar: String::from("Y"),
    };
    let foo_from_builder: Foo = FooBuilder::new().name(String::from("Y")).build();
    assert_eq!(foo, foo_from_builder);
}

Newtype

Newtype is a design pattern that wraps an existing type in a new, distinct type. This pattern is useful when you want to add functionality or stronger guarantees to an existing type without modifying it, and without the new type being interchangeable with the original.

use std::fmt::Display;

// Create Newtype Password to override the Display trait for String
struct Password(String);

impl Display for Password {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "****************")
    }
}

fn main() {
    let unsecured_password: String = "ThisIsMyPassword".to_string();
    let secured_password: Password = Password(unsecured_password.clone());
    println!("unsecured_password: {unsecured_password}");
    println!("secured_password: {secured_password}");
}

Advantages

  • Allows you to create a new type that is distinct from its original type.
  • Provides a way to add additional functionality to an existing type without creating a new type from scratch.
  • Helps in type safety by preventing accidental misuse of the original type.
  • Improves code readability by giving a descriptive name to the new type.
  • Enables you to implement traits for the new type without affecting the original type.

Disadvantages

  • Requires additional boilerplate code to define the newtype and implement traits for it.
  • May lead to code duplication if the newtype needs to implement the same traits as the original type.
  • Can introduce confusion if the newtype is not used consistently throughout the codebase.

When to Use

  • When you want to create a new type that is distinct from its original type.
  • When you need to add additional functionality to an existing type without modifying the original type.
  • When you want to improve type safety by preventing accidental misuse of the original type.
  • When you need to implement traits for a specific use case without affecting the original type.

Example Use Cases

  • Creating a new type to represent a secure version of an existing type (e.g., Password).
  • Wrapping a primitive type to provide additional validation or formatting (e.g., Email, PhoneNumber).
  • Defining a new type to represent a specific domain concept (e.g., UserId, ProductId).
  • Implementing traits for a specific use case without affecting the original type (e.g., Display, Debug).
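As a sketch of the validation use case above, a hypothetical Email newtype can refuse to be constructed from an invalid string (the check here is deliberately naive):

```rust
// Hypothetical Email newtype: the inner String can only be obtained
// through a constructor that validates it first.
struct Email(String);

impl Email {
    fn new(s: &str) -> Result<Email, String> {
        // A real implementation would validate properly; this is a sketch.
        if s.contains('@') {
            Ok(Email(s.to_string()))
        } else {
            Err(format!("invalid email: {}", s))
        }
    }

    fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    let email = Email::new("user@example.com").expect("should be valid");
    println!("{}", email.as_str());
    // An invalid address is rejected at construction, so no function that
    // accepts an Email can ever receive an unvalidated string.
    assert!(Email::new("not-an-email").is_err());
}
```

Because the inner String is private, the type system guarantees that every Email in the program passed through the constructor's check.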

Architecture

Hexagonal Architecture

Introduction

Hexagonal Architecture is an architectural pattern that aims to create a flexible and maintainable software system by decoupling the core business logic from the external interfaces. This approach is particularly useful for applications that require a high degree of modularity and testability. In this chapter, we will explore how to implement Hexagonal Architecture in a PHP environment, focusing on practical applications and benefits.

Core Concepts of Hexagonal Architecture

  1. Core Business Logic: The heart of the application, containing the domain-specific rules and operations.
  2. Ports and Adapters: Interfaces that define how the core business logic interacts with the external world.
  3. Primary Ports: Interfaces that expose the core business logic to the external world.
  4. Secondary Ports: Interfaces that allow the core business logic to interact with external services and resources.
  5. Adapters: Implementations of the ports that connect the core business logic to the external world.
               --------------          -----------------
      -------> |  Primary   | -------> |               |
               |  Ports     |          |  Hexagonal    |
      <------- |            | <------- |  Architecture |
               --------------          |               |
               --------------          |               |
      -------> |  Secondary | -------> |               |
               |  Ports     |          |               |
      <------- |            | <------- |               |
               --------------          -----------------

Implementing Hexagonal Architecture in PHP

Defining Primary Ports

Primary ports define the interfaces through which the core business logic interacts with the external world. These interfaces should be technology-agnostic and focused on the domain-specific operations.

interface UserRepository {
    public function findById(int $id): ?User;
    public function save(User $user): void;
}
interface EmailService {
    public function sendEmail(string $to, string $subject, string $body): void;
}

Defining Secondary Ports

Secondary ports define the interfaces through which the core business logic interacts with external services and resources. These interfaces should abstract away the implementation details of external dependencies.

interface UserGateway {
    public function getUserById(int $id): ?User;
    public function saveUser(User $user): void;
}
interface EmailGateway {
    public function send(string $to, string $subject, string $body): void;
}

Implementing Adapters

Adapters are concrete implementations of the ports that connect the core business logic to the external world. These adapters handle the translation between the domain-specific operations and the external interfaces.

class UserRepositoryAdapter implements UserRepository {
    private UserGateway $userGateway;

    public function __construct(UserGateway $userGateway) {
        $this->userGateway = $userGateway;
    }

    public function findById(int $id): ?User
    {
        return $this->userGateway->getUserById($id);
    }

    public function save(User $user): void
    {
        $this->userGateway->saveUser($user);
    }

}
class EmailServiceAdapter implements EmailService {
    private EmailGateway $emailGateway;

    public function __construct(EmailGateway $emailGateway) {
        $this->emailGateway = $emailGateway;
    }

    public function sendEmail(string $to, string $subject, string $body): void
    {
        $this->emailGateway->send($to, $subject, $body);
    }
}

Wiring Everything Together

In the application bootstrap process, we wire the core business logic with the adapters for the primary and secondary ports.

$userGateway = new DatabaseUserGateway();
$userRepository = new UserRepositoryAdapter($userGateway);

$emailGateway = new SmtpEmailGateway();
$emailService = new EmailServiceAdapter($emailGateway);

$application = new Application($userRepository, $emailService);

Benefits of Hexagonal Architecture

  1. Modularity: Separating the core business logic from external interfaces makes maintenance and updates easier.
  2. Testability: Decoupling the core logic from external dependencies makes it easier to write unit and integration tests.
  3. Flexibility: Adapters can be swapped for different implementations without affecting the core logic, leaving the choice of external services and resources open.
  4. Scalability: The modular design makes it easier to scale the application by adding new features or adapting to changing requirements.
  5. Security: The clear separation of concerns limits the exposure of the core logic to external interfaces, reducing the risk of vulnerabilities.
  6. Maintainability: The clean, organized structure keeps the codebase understandable and maintainable over time.
  7. Extensibility: New features and functionality can be added without disrupting existing components.
  8. Isolation: Insulating the core business logic from external dependencies reduces the risk of cascading failures and improves fault tolerance.
  9. Reusability: The modular design promotes code reuse and reduces duplicated logic across the application.
  10. Adaptability: Changing requirements and technologies can be accommodated by swapping out adapters.
  11. Consistency: The uniform structure and separation of concerns lead to a more predictable codebase.
  12. Performance: Adapters can be optimized for efficient interaction with external services and resources.
  13. Documentation: Clear boundaries make the application's design and functionality easier to document and understand.
  14. Collaboration: Clear boundaries between components help team members work on different parts of the application in parallel.

Domain-Driven Design (DDD)

Event Sourcing

CQS and CQRS

Command Query Separation (CQS)

Command Query Separation (CQS) is a design principle that separates methods that modify state (commands) from methods that return results without modifying state (queries). Keeping state-changing operations distinct from read-only operations leads to clearer, more maintainable code and makes the effects of changes on the application more predictable. Applied at the level of individual methods in an object or service, the principle states that a method should either be a command (change the system's state) or a query (retrieve data), but not both. CQS is the foundational principle behind CQRS.
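The principle is language-agnostic even though the examples in this chapter are in PHP; in Rust, for instance, the split is visible in method signatures, since a &self receiver cannot mutate state. A minimal sketch with a hypothetical Counter type:

```rust
// CQS sketch: commands mutate state and return nothing,
// queries read state and cannot mutate it (enforced by &self).
struct Counter {
    value: u64,
}

impl Counter {
    // Command: changes the system's state.
    fn increment(&mut self) {
        self.value += 1;
    }

    // Query: retrieves data without modifying state.
    fn value(&self) -> u64 {
        self.value
    }
}

fn main() {
    let mut counter = Counter { value: 0 };
    counter.increment();
    counter.increment();
    println!("{}", counter.value()); // prints 2
}
```

Because value() takes &self, the compiler itself rejects any attempt to modify the counter inside a query.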

Command Query Responsibility Segregation (CQRS)

Command Query Responsibility Segregation (CQRS) is an architectural pattern that separates the read and write operations of a data model into distinct interfaces. This approach can significantly enhance the performance, scalability, and security of an application. In this chapter, we will explore how to implement CQRS in a PHP environment, focusing on practical applications and benefits.

Core Concepts of CQRS

  1. Commands: Actions that modify state. In PHP, commands are usually implemented as classes that encapsulate all the information needed to perform an action.
  2. Queries: Requests for information. Unlike commands, queries do not modify any data.
  3. Command Handlers: Logic to handle the execution of commands.
  4. Query Handlers: Logic to retrieve data in response to queries.
              --------------          ---------------
     -------> |  Commands  | -------> |             | 
              --------------          |  CQRS       |
              --------------          |  Framework  |
     <------- |  Queries   | <------- |             |
              --------------          ---------------

Implementing Command Query Responsibility Segregation (CQRS) in PHP

class CreateUserCommand {
    public string $username;
    public string $email;
    public string $password;

    public function __construct(string $username, string $email, string $password) {
        $this->username = $username;
        $this->email = $email;
        $this->password = $password;
    }
}
class CreateUserHandler {
    private UserRepository $userRepository;

    public function __construct(UserRepository $repository) {
        $this->userRepository = $repository;
    }

    public function handle(CreateUserCommand $command): void {
        $user = new User($command->username, $command->email, $command->password);
        $this->userRepository->save($user);
    }
}
class FindUserByEmailQuery {
    public string $email;

    public function __construct(string $email) {
        $this->email = $email;
    }
}
class FindUserByEmailHandler {
    private UserRepository $userRepository;

    public function __construct(UserRepository $repository) {
        $this->userRepository = $repository;
    }

    public function handle(FindUserByEmailQuery $query): ?User {
        return $this->userRepository->findByEmail($query->email);
    }
}

Evolutionary Architecture

Evolutionary architecture is a set of practices that support guiding the design of a system in a way that allows it to evolve over time. It is based on the idea that the architecture of a system should be able to adapt to changing requirements and technologies. This is in contrast to traditional approaches to architecture, which often involve making decisions that are difficult to change later on.

Evolutionary architecture is a response to the increasing complexity and uncertainty of modern software systems. It recognizes that it is impossible to predict all of the requirements and constraints that a system will face in the future, and that the architecture of a system should be able to evolve in response to these changes.

The key principles of evolutionary architecture include:

  • Incremental change: Change is introduced gradually rather than all at once, so the architecture can evolve over time without a complete redesign.
  • Feedback loops: Data collected from the system in production guides informed decisions about how to evolve its architecture.
  • Experimentation: Trying out different approaches and observing how they work in practice teaches architects what works and what doesn't, and that knowledge guides the system's evolution.
  • Emergent design: The architecture emerges over time rather than being fully defined up front, so it can respond to changing requirements and constraints.
  • Resilience: Flexible, adaptable systems can absorb changes in requirements and technologies.
  • Antifragility: Systems should not merely tolerate change but benefit from it, becoming stronger as they adapt.
  • Evolutionary pressure: Exposing a system to different environments and conditions reveals how it responds to change, which guides further evolution.
  • Architectural fitness functions: Explicit fitness functions define the properties a system should exhibit and are used to evaluate how well the architecture meets them.
  • Automation: Automated deployment, testing, and monitoring let changes be introduced quickly and safely.
  • Collaboration: Architecture is a collaborative activity; involving the whole development team ensures the design reflects the needs and constraints of all stakeholders.
  • Evolutionary mindset: Architecture is an ongoing process, not a one-time activity.
  • Adaptability: Flexible, modular systems can evolve as requirements and technologies change.
  • Simplicity: A simple, focused architecture is easier to understand and to evolve.
  • Decomposition: Breaking a system into smaller, more manageable components makes its architecture easier to understand and evolve.
  • Evolutionary design: The design of a system changes over time in response to new requirements and technologies, staying relevant and effective.
  • Continuous improvement: Production data reveals areas for improvement and guides the ongoing evolution of the system.
  • Evolutionary testing: Tests that validate the architecture allow changes to be introduced safely and confidently.
  • Evolutionary documentation: Documentation evolves along with the architecture, keeping the design well understood.

Design Data-Intensive Applications

Designing data-intensive applications involves understanding the principles, patterns, and best practices for building systems that manage large volumes of data. These applications often require handling data storage, processing, and retrieval at scale, while ensuring reliability, efficiency, and maintainability.

Introduction

Data-intensive applications are software systems that manage and process large volumes of data. These applications are common in a wide range of domains, including e-commerce, social media, finance, healthcare, and more. Designing data-intensive applications involves addressing various challenges related to data storage, processing, and retrieval, as well as ensuring the reliability, efficiency, and maintainability of the system.

Data-intensive applications typically involve working with different types of data, such as structured, semi-structured, and unstructured data. They often require storing data in databases, caches, and data lakes, processing data using batch and stream processing systems, and retrieving data through query engines and search indexes.

Key Concepts

Data Storage

Data storage is a critical aspect of data-intensive applications. It involves choosing the right storage technologies and data models to store and manage data efficiently. Common data storage technologies include relational databases, NoSQL databases, key-value stores, document stores, column-family stores, and graph databases.

Data Processing

Data processing is another key aspect of data-intensive applications. It involves transforming and analyzing data to derive insights and make informed decisions. Common data processing techniques include batch processing, stream processing, data pipelines, ETL (extract, transform, load) processes, and data warehousing.
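As a toy illustration, the transform step of an ETL-style pipeline can be sketched in Rust over in-memory records (the input lines and the threshold are made up for the example):

```rust
// Toy batch-processing step: an ETL-style transform over in-memory records.
// Each raw line stands in for a record extracted from some source system.
fn transform(raw: &[&str]) -> Vec<(String, u32)> {
    raw.iter()
        .map(|line| {
            // Clean and parse each "name,score" record.
            let mut parts = line.trim().split(',');
            let name = parts.next().unwrap().to_string();
            let score: u32 = parts.next().unwrap().parse().unwrap();
            (name, score)
        })
        .filter(|(_, score)| *score >= 5) // keep only records above a threshold
        .collect()
}

fn main() {
    let raw = vec![" alice,3 ", "bob,7", "  carol,5"];
    println!("{:?}", transform(&raw)); // prints [("bob", 7), ("carol", 5)]
}
```

A real pipeline would stream records from storage and load the results into a warehouse, but the shape — extract, transform, filter, load — is the same.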

Data Retrieval

Data retrieval is the process of querying and retrieving data from storage systems. It involves designing efficient data access patterns, indexing strategies, and caching mechanisms to optimize data retrieval performance. Common data retrieval techniques include SQL queries, NoSQL queries, full-text search, and key-value lookups.

Data Modeling

Data modeling is the process of designing data structures and schemas to represent and organize data effectively. It involves understanding the relationships between different data entities, defining data attributes and constraints, and normalizing or denormalizing data as needed. Common data modeling techniques include entity-relationship modeling, schema design, and data normalization.
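A sketch of entity-relationship modeling in plain Rust types, with hypothetical User and Order entities related through an id, as a normalized schema would express it:

```rust
// Hypothetical entities: an Order references a User by id (normalized form)
// instead of embedding the whole User record (denormalized form).
struct User {
    id: u32,
    name: String,
}

struct Order {
    id: u32,
    user_id: u32, // foreign-key-style reference to User::id
    total_cents: u64,
}

fn main() {
    let user = User { id: 1, name: String::from("Alice") };
    let order = Order { id: 100, user_id: user.id, total_cents: 2499 };
    println!(
        "order {} ({} cents) belongs to user {}",
        order.id, order.total_cents, user.name
    );
    assert_eq!(order.user_id, user.id);
}
```

Denormalizing would mean copying the user's name into each order: faster reads, at the cost of keeping the copies consistent.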

Data Consistency

Data consistency is the property of data that ensures that all copies of the data are in sync and reflect the latest updates. Achieving data consistency in distributed systems is a challenging problem due to network partitions, latency, and failures. Common consistency models include strong consistency, eventual consistency, causal consistency, and monotonic consistency.

Data Availability

Data availability is the property of data that ensures that data is accessible and retrievable when needed. Achieving high data availability in distributed systems requires designing fault-tolerant and resilient architectures that can handle failures gracefully. Common availability patterns include replication, sharding, load balancing, and failover.

Infrastructure as Code

Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure (servers, networks, load balancers, and so on) through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The definition files can be kept in a version control system, so operations teams can manage and provision infrastructure automatically through code instead of manual processes.

IaC allows you to automate the process of managing and provisioning infrastructure, which can help reduce errors, increase efficiency, and improve consistency. It also allows you to treat your infrastructure as code, which means you can apply the same software development practices to your infrastructure that you use for your applications.
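As a sketch of what an IaC definition looks like, here is a minimal Terraform-style file in the spirit of the community Docker provider. Resource and attribute names follow that provider's conventions but vary between provider versions, so treat this as illustrative rather than copy-paste ready:

```hcl
# Declares the desired state; `terraform apply` makes reality match it.
resource "docker_image" "nginx" {
  name = "nginx:latest"
}

resource "docker_container" "web" {
  name  = "web"
  image = docker_image.nginx.image_id

  ports {
    internal = 80
    external = 8080
  }
}
```

Because the file is plain text, it can be reviewed, diffed, and versioned exactly like application code.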

Containers

Containers are a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. Containers isolate software from its environment and ensure that it works uniformly despite differences between environments, for instance between development and staging.

Containers are a form of operating system virtualization. A single container might be used to run anything from a small microservice or software process to a larger application. Containers are designed to be portable, making it easy to move them between environments, such as from a developer's laptop to a test environment, from a staging environment into production, or from a physical machine to a virtual machine in the cloud.

Containers are a key component of modern software development and deployment practices, such as DevOps and continuous integration and delivery (CI/CD). They provide a consistent, reliable environment for applications to run in, making it easier to build, test, and deploy software.

This section covers the basics of containers, including popular containerization tools like Docker and Podman, as well as container orchestration platforms like Kubernetes.

Docker

Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define a multi-container application in a single file, then spin up the entire application with a single command. Docker Compose is particularly useful for development and testing environments, where you need to run multiple containers together.

What is Docker Compose?

Docker Compose is a tool that allows you to define and run multi-container Docker applications. It uses a YAML file to define the services, networks, and volumes that make up your application, then uses that file to create and start all the containers with a single command.

By describing the application's architecture in a single file, Compose makes it simple to bring the whole stack up at once, which is particularly useful in development and testing environments where several containers must run together to exercise the application.

Basic Concepts of Docker Compose

Services

A service in Docker Compose is a container that is part of your application. Each service is defined in the docker-compose.yml file and can have its own configuration options, such as the image to use, environment variables, ports to expose, and volumes to mount.

Networks

Docker Compose creates a default network for your application, allowing the containers to communicate with each other. You can also define custom networks in the docker-compose.yml file to isolate different parts of your application.

Volumes

Volumes in Docker Compose allow you to persist data generated by your containers. You can define volumes in the docker-compose.yml file to mount host directories or named volumes into your containers.

Environment Variables

You can define environment variables for your services in the docker-compose.yml file. These variables can be used to configure your containers at runtime, such as setting database connection strings or API keys.
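Putting these concepts together, a minimal docker-compose.yml for a web server backed by a database might look like the following (image names and the credential are illustrative):

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"      # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustrative credential only
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume persists data

volumes:
  db-data:
```

Running `docker-compose up` in the directory containing this file starts both services on a shared default network, so `web` can reach the database at the hostname `db`.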

Docker Compose Commands

Docker Compose provides a set of commands to manage your multi-container application. Some common commands include:

  • docker-compose up: Create and start all the containers in your application.
  • docker-compose down: Stop and remove all the containers in your application.
  • docker-compose ps: List the containers in your application.
  • docker-compose logs: View the logs of the containers in your application.
  • docker-compose exec: Run a command in a running container.
  • docker-compose build: Build or rebuild the images for your services.
  • docker-compose restart: Restart the containers in your application.
  • docker-compose stop: Stop the containers in your application without removing them.
  • docker-compose rm: Remove stopped containers.
  • docker-compose pull: Pull the latest images for your services.
  • docker-compose push: Push the built images to a registry.
  • docker-compose config: Validate and view the configuration of your docker-compose.yml file.
  • docker-compose scale: Scale your services to multiple instances.
  • docker-compose top: Display the running processes of your services.
  • docker-compose events: Receive real-time events from your containers.
  • docker-compose pause: Pause the containers in your application.
  • docker-compose unpause: Unpause the containers in your application.
  • docker-compose version: Show the Docker Compose version information.
  • docker-compose help: Display help information for Docker Compose.

Docker Swarm

Docker Swarm is a container orchestration tool that allows you to manage a cluster of Docker nodes as a single virtual system. It enables you to deploy and scale containerized applications across multiple machines, and provides features such as service discovery, load balancing, and rolling updates.

What is Docker Swarm?

Docker Swarm is a clustering and orchestration tool for Docker containers. It turns a pool of Docker hosts into a single, virtual Docker host. Docker Swarm provides a simple yet powerful way to manage a cluster of Docker nodes, allowing you to deploy and scale containerized applications across multiple machines.

Docker Swarm follows a manager-worker architecture, where one or more Docker hosts act as manager nodes and the rest as worker nodes. The manager nodes are responsible for orchestrating the deployment and scaling of services, while the worker nodes execute the tasks assigned to them.

Key Features of Docker Swarm

Service Discovery

Docker Swarm provides built-in service discovery, allowing containers to discover and communicate with each other by name. This simplifies the process of connecting containers in a distributed application.

Load Balancing

Docker Swarm automatically load balances traffic across containers in a service, distributing requests evenly to ensure optimal performance and availability.

Rolling Updates

Docker Swarm supports rolling updates, allowing you to update services without downtime. It gradually replaces old containers with new ones, ensuring that the application remains available during the update process.

High Availability

Docker Swarm ensures high availability by distributing containers across multiple nodes. If a node fails, the containers are automatically rescheduled on other nodes to maintain service availability.

Scalability

Docker Swarm enables you to scale services horizontally by adding or removing containers based on demand. It automatically distributes containers across nodes to handle increased load.

Security

Docker Swarm provides built-in security features such as mutual TLS (Transport Layer Security) authentication between nodes, encryption of network traffic, and role-based access control (RBAC) for managing user permissions.

How to Create a Docker Swarm Cluster

To create a Docker Swarm cluster, you need at least one manager node and one or more worker nodes. Here are the general steps to create a Docker Swarm cluster:

  1. Initialize the Swarm on a Manager Node: Run the docker swarm init command on a Docker host to initialize it as a manager node. (Replace <MANAGER_IP> with the IP address of the manager node.)
    docker swarm init --advertise-addr <MANAGER_IP>
  2. Join Worker Nodes to the Swarm: Run the docker swarm join command on each worker node, using the join token and manager address printed by docker swarm init.
    docker swarm join --token <TOKEN> <MANAGER_IP>:2377
  3. Deploy Services to the Swarm: Use the docker service create command to deploy services to the Swarm. You can specify the number of replicas, ports, and other configurations for the service.
    docker service create --replicas 3 --name my-service -p 8080:80 my-image
  4. Scale Services: You can scale services by changing the number of replicas using the docker service scale command.
    docker service scale my-service=5
  5. Update Services: To update a service, use the docker service update command with the desired configurations.
    docker service update --image new-image my-service
  6. Remove Services: You can remove services using the docker service rm command.
    docker service rm my-service
  7. Leave the Swarm: To remove a node from the Swarm, run the docker swarm leave command on the node.
    docker swarm leave

Docker Swarm provides a simple and efficient way to manage containerized applications at scale. By leveraging its features, you can deploy, scale, and update services seamlessly across a cluster of Docker nodes.

KinD

What is KinD?

KinD (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker container “nodes”. It was developed by the Kubernetes SIGs (Special Interest Groups) community and is a lightweight way to run Kubernetes clusters for development and testing.

KinD was designed to be a fast and easy way to create Kubernetes clusters for testing and development. It uses Docker containers to run Kubernetes nodes, which makes it lightweight and easy to set up. KinD is a great tool for developers who want to test their applications on Kubernetes without having to set up a full-scale cluster.

Why Use KinD?

KinD offers several advantages for developers and testers who need to run Kubernetes clusters locally:

  1. Lightweight: KinD uses Docker containers to run Kubernetes nodes, making it lightweight and easy to set up. You can run multiple Kubernetes clusters on a single machine without using a lot of resources.
  2. Fast: KinD is fast to set up and start. You can have a Kubernetes cluster up and running in minutes.
  3. Easy to Use: KinD is easy to use and configure. You can create and manage Kubernetes clusters with a few simple commands.
  4. Great for Testing: KinD is a great tool for testing applications on Kubernetes. You can quickly create and destroy clusters to test different configurations and scenarios.
  5. Community Support: KinD is developed by the Kubernetes community and has good community support. You can find help and resources online if you run into issues.
  6. Integration with CI/CD Pipelines: KinD can be integrated with CI/CD pipelines to test Kubernetes configurations and applications in a controlled environment.

How to Install KinD?

You can install KinD on Linux, macOS, or Windows by following the instructions provided in the official KinD documentation.

Here are the general steps to install KinD:

  1. Install Docker: KinD requires Docker to run Kubernetes nodes. You can install Docker by following the instructions provided on the official Docker website.
  2. Download KinD Binary: Download the KinD binary for your operating system from the KinD releases page on GitHub.
  3. Install KinD: Install KinD by moving the downloaded binary to a directory in your PATH. You can also use a package manager to install KinD on Linux or macOS.
  4. Verify Installation: Verify that KinD is installed correctly by running the kind version command in your terminal. You should see the KinD version displayed.

How to Create a KinD Cluster?

Creating a KinD cluster is a simple process that involves running a few commands in your terminal. Here are the general steps to create a KinD cluster:

  1. Initialize a Cluster: Run the kind create cluster command to create a new KinD cluster. You can specify the number of worker nodes and other configurations in the command.

    kind create cluster --name my-cluster --config config.yaml
    
  2. Verify Cluster: Verify that the KinD cluster is created by running the kubectl get nodes command. You should see the Kubernetes nodes listed in the output.

    kubectl get nodes
    
  3. Interact with the Cluster: You can interact with the KinD cluster using kubectl commands. For example, you can deploy applications, create services, and manage resources in the cluster.

    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --port=80 --type=NodePort
    kubectl get services
    
  4. Delete the Cluster: You can delete the KinD cluster by running the kind delete cluster command. This will remove all the Docker containers used to run the cluster.

    kind delete cluster --name my-cluster
    

KinD provides a simple and efficient way to run Kubernetes clusters locally for testing and development. By using KinD, you can quickly set up and manage Kubernetes clusters on your local machine without the need for a full-scale cluster.

Config.yml Example

# kind-config.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
   - role: control-plane  # This node acts as the master node
   - role: worker         # This node is a worker node
   - role: worker
networking:
   apiServerAddress: "127.0.0.1"  # IP address where the API Server will be accessible
   apiServerPort: 6443            # Port on which the API Server will be accessible
   podSubnet: "10.244.0.0/16"     # Subnet range for the pods
   serviceSubnet: "10.96.0.0/12"  # Subnet range for the services
kubeadmConfigPatches:
   - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
   - |
      kind: KubeProxyConfiguration
      mode: "ipvs"
featureGates:
   EndpointSlice: true
   IPv6DualStack: true

  1. kind & apiVersion: Specifies the resource type and API version of the configuration file.
  2. nodes: Defines the roles of each node in the cluster. You can specify multiple worker nodes or even multiple control planes for high availability.
    • role: control-plane: Indicates this node will act as the master.
    • role: worker: Indicates these nodes will act as workers.
  3. networking:
    • apiServerAddress: The IP address where the API server is exposed. Typically, localhost is used for local development.
    • apiServerPort: The port on which the API server will be accessible.
    • podSubnet: Specifies the subnet range for pods.
    • serviceSubnet: Specifies the subnet range for services.
  4. kubeadmConfigPatches: Allows for customization of the Kubernetes configuration.
    • The first patch adds a label to the kubelet on all nodes.
    • The second patch sets the kube-proxy mode to "ipvs," which can be more efficient under certain conditions.
  5. featureGates: Enables or disables specific Kubernetes features.

Podman

Podman is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers and container images on your Linux system. It provides a Docker-compatible command-line interface (CLI) that follows the OCI specification. Podman is built on the libpod library and belongs to a family of container tools that also includes Buildah, Skopeo, and CRI-O.

Podman is designed to be a drop-in replacement for Docker, providing a similar user experience and feature set. However, Podman does not require a daemon to run containers, making it more lightweight and secure than Docker. Podman can run containers as a non-root user, which improves security by reducing the attack surface of the container runtime.

Podman is a powerful tool for building, running, and managing containers on Linux systems. It is well-suited for developers, system administrators, and anyone who needs to work with containers in a secure and efficient manner.

A key Podman concept is the pod: a group of containers that share the same network and storage namespaces. Pods are useful for running multi-container applications whose containers need to communicate with each other over the network or share storage volumes.

A YAML file is used to define the pod configuration: the containers that belong to the pod, the network and storage volumes they share, and other settings. The file can be used to create, manage, and delete pods through the podman command-line interface. For example, the following command exports an existing pod's definition to YAML:

podman generate kube myapp > myapp.yaml
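The command above emits Kubernetes-compatible pod YAML. A hand-written minimal equivalent looks roughly like this (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80
```

A file like this can be replayed with podman play kube myapp.yaml to recreate the pod, which also makes the definition portable to a real Kubernetes cluster.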

Kubernetes

Networking in Linux

Linux networking tools

Introduction

Linux provides a wide range of networking tools that can be used to manage and troubleshoot network connections. These tools are essential for system administrators and network engineers to monitor and configure network interfaces, troubleshoot connectivity issues, and analyze network traffic. In this chapter, we will explore some of the most commonly used networking tools in Linux and how they can be used to manage network connections effectively.

Common Networking Tools

1. ifconfig - Interface Configuration

2. ip - IP Configuration

# Display information about all network interfaces
ip addr show

# Display information about a specific network interface
ip addr show eth0

# Bring up a network interface
ip link set eth0 up

# Bring down a network interface
ip link set eth0 down

3. netstat - Network Statistics

4. ss - Socket Statistics

5. ping - Packet Internet Groper

# Send ICMP echo requests to a specific host
ping google.com

6. traceroute - Trace Route

# Display the route packets take to a specific host
traceroute google.com

7. tcpdump - TCP Packet Analyzer

8. nmap - Network Mapper

9. iptables - IP Tables Firewall

# Display the current firewall rules
iptables -L

# Allow incoming traffic on a specific port
iptables -A INPUT -p tcp --dport 80 -j ACCEPT

10. nc - Netcat

# Listen for incoming connections on a specific port
nc -l 1234

Virtual Machines

Virtual machines (VMs) rely on host hardware to establish connections with other devices and locations within a network. This section outlines how VM networking functions and explains the default VM network configuration.

Virtual networking is built around the concept of a virtual network switch—a software-based construct running on the host machine. VMs access the network via this virtual switch. Depending on its configuration, the virtual switch enables VMs to connect to an existing virtual network managed by the hypervisor or use an alternative network connection method.

libvirtd is a popular virtualization daemon that provides a virtual network switch for VMs. It is used by hypervisors like KVM and QEMU to manage VM networking. The virsh command-line tool can be used to interact with libvirtd and manage VMs.

The default network interface for VMs is virbr0. This interface is created by libvirtd and acts as a bridge between the VMs and the host network. VMs can communicate with each other and the host machine through this interface. By default, all VMs are connected to the same virtual network, allowing them to communicate with each other without any additional configuration.

Pentesting

Web Application Security

OWASP Top 10

The Open Web Application Security Project (OWASP) is a non-profit organization that provides unbiased and practical information about application security. The OWASP Top 10 is a list of the ten most critical security risks to web applications. The list is updated periodically (roughly every three to four years) by a team of security experts from around the world.

  1. Injection: Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's malicious data can trick the interpreter into executing unintended commands or accessing unauthorized data.
  2. Broken Authentication: Broken authentication occurs when an attacker can compromise passwords, keys, or session tokens to assume the identity of another user. This can lead to unauthorized access to sensitive data or functionality.
  3. Sensitive Data Exposure: Sensitive data exposure occurs when an application fails to adequately protect sensitive data, such as financial information, health records, or personal information. Attackers can exploit this vulnerability to steal or manipulate sensitive data.
  4. XML External Entities (XXE): XXE vulnerabilities occur when an application processes XML input from untrusted sources without disabling external entities. Attackers can exploit this vulnerability to read files, scan internal systems, and perform denial-of-service attacks.
  5. Broken Access Control: Broken access control occurs when an application fails to restrict users from accessing unauthorized functionality or data. Attackers can exploit this vulnerability to view sensitive files, modify other users' data, or change access rights.
  6. Security Misconfiguration: Security misconfiguration occurs when an application is not securely configured, such as default accounts, unnecessary services, or missing security headers. Attackers can exploit this vulnerability to gain unauthorized access, escalate privileges, or execute arbitrary code.
  7. Cross-Site Scripting (XSS): XSS vulnerabilities occur when an application includes untrusted data in a web page without proper validation or escaping. Attackers can exploit this vulnerability to execute malicious scripts in the context of a victim's browser, leading to session hijacking, defacement, or data theft.
  8. Insecure Deserialization: Insecure deserialization occurs when an application deserializes untrusted data without proper validation or integrity checks. Attackers can exploit this vulnerability to execute arbitrary code, escalate privileges, or perform denial-of-service attacks.
  9. Using Components with Known Vulnerabilities: Using components with known vulnerabilities occurs when an application includes outdated or insecure libraries, frameworks, or software components. Attackers can exploit these vulnerabilities to compromise the application, steal sensitive data, or gain unauthorized access.
  10. Insufficient Logging and Monitoring: Insufficient logging and monitoring occur when an application fails to generate, store, or analyze sufficient logs to detect and respond to security incidents. Attackers can exploit this vulnerability to maintain persistence, evade detection, or escalate privileges.

Cross-Site Scripting (XSS)

Cross-Site Scripting (XSS) is a security vulnerability that allows attackers to inject malicious scripts into web pages viewed by other users. XSS attacks can be used to steal sensitive information, hijack user sessions, deface websites, or distribute malware. There are three main types of XSS attacks:

  1. Stored XSS: Stored XSS occurs when an attacker injects a malicious script into a web application, and the script is stored on the server and executed whenever a user accesses the affected page. This type of XSS attack is persistent and can affect multiple users.
  2. Reflected XSS: Reflected XSS occurs when an attacker injects a malicious script into a URL or form input, and the script is reflected back to the user in the server's response. This type of XSS attack is non-persistent and requires the victim to click on a specially crafted link.
  3. DOM-based XSS: DOM-based XSS occurs when an attacker injects a malicious script into the Document Object Model (DOM) of a web page, and the script is executed by the victim's browser. This type of XSS attack is client-side and does not involve server-side processing.

Prevention

To prevent XSS attacks, web developers should follow best practices such as:

  • Input Validation: Validate and sanitize all user input to prevent malicious scripts from being executed.
  • Output Encoding: Encode user input before displaying it in web pages to prevent script injection.
  • Content Security Policy (CSP): Implement a Content Security Policy to restrict the sources of content that can be loaded on a web page.
  • HTTPOnly and Secure Cookies: Use HTTPOnly and Secure flags on cookies to prevent XSS attacks from stealing session tokens.
  • X-XSS-Protection Header: The X-XSS-Protection header enabled the browser's built-in XSS filter; modern browsers have removed this filter, so treat the header as a legacy measure and rely on CSP instead.
  • Use Libraries and Frameworks: Use secure libraries and frameworks that provide built-in protection against XSS attacks.
  • Security Headers: Implement security headers such as X-Content-Type-Options, X-Frame-Options, and Referrer-Policy to enhance web application security.
  • Regular Security Audits: Conduct regular security audits and penetration testing to identify and remediate XSS vulnerabilities.
  • Security Awareness Training: Educate developers, testers, and users about the risks of XSS attacks and how to prevent them.
  • Bug Bounty Programs: Implement bug bounty programs to incentivize security researchers to report XSS vulnerabilities responsibly.
  • Incident Response Plan: Develop an incident response plan to respond to XSS attacks and other security incidents effectively.

Cross-Site Request Forgery (CSRF)

Cross-Site Request Forgery (CSRF) is a security vulnerability that allows attackers to trick users into executing unintended actions on a web application in which they are authenticated. CSRF attacks can be used to perform actions such as changing account settings, transferring funds, or deleting data without the user's consent. Two common defense patterns against CSRF are:

  1. Synchronizer Token Pattern: The Synchronizer Token Pattern involves generating a unique token for each user session and including it in forms or URLs. When the user submits a form or clicks on a link, the server validates the token to ensure that the request is legitimate.
  2. Double Submit Cookie Pattern: The Double Submit Cookie Pattern involves setting a cookie with a random value and including the same value in a hidden form field. When the user submits the form, the server compares the cookie value with the form field value to prevent CSRF attacks.

Prevention

To prevent CSRF attacks, web developers should follow best practices such as:

  • Use Anti-CSRF Tokens: Implement anti-CSRF tokens using the Synchronizer Token Pattern or Double Submit Cookie Pattern to protect against CSRF attacks.
  • SameSite Cookies: Set the SameSite attribute on cookies to restrict cross-origin requests and prevent CSRF attacks.
  • HTTP Referer Header: Validate the HTTP Referer header to ensure that requests originate from the same domain and prevent CSRF attacks.
  • CSRF Protection Libraries: Use CSRF protection libraries and frameworks that provide built-in protection against CSRF attacks.
  • Security Headers: Implement security headers such as X-Content-Type-Options, X-Frame-Options, and Referrer-Policy to enhance web application security.
  • Regular Security Audits: Conduct regular security audits and penetration testing to identify and remediate CSRF vulnerabilities.

SQL Injection

SQL Injection is a security vulnerability that allows attackers to execute malicious SQL queries against a web application's database. SQL Injection attacks can be used to bypass authentication, retrieve sensitive information, modify data, or escalate privileges. There are three main types of SQL Injection attacks:

  1. In-band SQL Injection: In-band SQL Injection occurs when an attacker uses the same communication channel to launch the attack and retrieve the results. This type of SQL Injection is the most common and can be further divided into Error-based SQL Injection and Union-based SQL Injection.
  2. Inferential SQL Injection: Inferential SQL Injection occurs when an attacker is unable to see the results of an attack directly but can infer the results based on the application's behavior. This type of SQL Injection is also known as Blind SQL Injection and can be further divided into Boolean-based SQL Injection and Time-based SQL Injection.
  3. Out-of-band SQL Injection: Out-of-band SQL Injection occurs when an attacker uses a different communication channel to launch the attack and retrieve the results. This type of SQL Injection is less common but can be more effective in certain scenarios.

Prevention

To prevent SQL Injection attacks, web developers should follow best practices such as:

  • Parameterized Queries: Use parameterized queries with prepared statements to separate SQL code from user input and prevent SQL Injection attacks.
  • Input Validation: Validate and sanitize all user input to prevent malicious SQL queries from being executed.
  • Least Privilege Principle: Use the least privilege principle to restrict database users' permissions and prevent unauthorized access.
  • ORMs and Libraries: Use Object-Relational Mapping (ORM) frameworks and secure libraries that provide built-in protection against SQL Injection attacks.
  • Security Headers: Implement security headers such as X-Content-Type-Options, X-Frame-Options, and Referrer-Policy to enhance web application security.

Insecure Deserialization

Insecure Deserialization is a security vulnerability that allows attackers to manipulate serialized objects to execute arbitrary code, escalate privileges, or perform denial-of-service attacks. Insecure Deserialization attacks can be used to bypass authentication, execute remote code, or tamper with data. There are two main types of Insecure Deserialization attacks:

  1. Object Instantiation: Object Instantiation occurs when an attacker manipulates serialized objects to create instances of unauthorized classes or execute arbitrary code.
  2. Data Tampering: Data Tampering occurs when an attacker manipulates serialized objects to modify data, escalate privileges, or perform other malicious actions.

PHP Example

<?php

class Example {
    public $data;

    // __wakeup() runs automatically during unserialize(), so attacker-controlled
    // data reaches eval() without the application ever calling it explicitly.
    public function __wakeup()
    {
        eval($this->data);
    }
}

// Attacker-supplied payload: unserializing it executes phpinfo().
$serialized = 'O:7:"Example":1:{s:4:"data";s:10:"phpinfo();";}';
$object = unserialize($serialized);

National Vulnerability Database

The National Vulnerability Database (NVD) is the U.S. government repository of standards-based vulnerability management data. This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names, and impact metrics.

CVE-2024-2961

The iconv() function in the GNU C Library versions 2.39 and older may overflow the output buffer passed to it by up to 4 bytes when converting strings to the ISO-2022-CN-EXT character set, which may be used to crash an application or overwrite a neighbouring variable.

iconv -l | grep -E 'CN-?EXT'

Solution

Comment out the CN-EXT line in the /usr/lib64/gconv/gconv-modules.d/gconv-modules-extra.conf file and run sudo iconvconfig to regenerate the cache.

Network Security

Wireless Security

Social Engineering

Physical Security

Cryptography

Resources

Great resources to learn Rust programming language:

  1. The Cargo Book
  2. The Rust Programming Language
  3. Rust API Guidelines
  4. CQRS and Event Sourcing in Rust
  5. Rust Design Patterns
  6. Rust Cookbook
  7. Rust by Example
  8. Rust Blog

Great resources to learn Go programming language:

  1. Go by Example
  2. Effective Go
  3. Go Blog

Great resources to learn PHP programming language:

  1. PHP Manual
  2. PHP The Right Way
  3. PHP Best Practices
  4. PHP Internals Book
  5. PHP Security Guide
  6. PHP Design Patterns
  7. PHP FIG
  8. PHP Standards Recommendations
  9. PHP Composer
  10. PHP Packagist