Consistent Hashing, Bloom Filter, Skip Lists, B-Tree, LRU/LFU

1. Consistent Hashing

Consistent hashing is a hashing technique that minimizes remapping when a hash table is resized. With a traditional hash table, changing the number of slots forces nearly all keys to be remapped; the same happens with the naive load-balancing scheme of hashing a blob and taking the result modulo the number of servers to decide where it is placed. Consistent hashing instead places both servers and keys on a hash ring, so that when a slot is added or removed only about n/m keys need to be remapped on average, where n is the number of keys and m is the number of slots. The technique is used in many data storage and distribution systems, including OpenStack's Object Storage service Swift, Amazon's Dynamo storage system, and the Akamai content delivery network.

Consistent hashing is widely used in distributed system design for several use cases, including but not limited to:

  • Load balancing: Consistent hashing makes it easy to distribute load across multiple servers. When a server is added to or removed from the cluster, only a small subset of keys needs to be remapped to the new or remaining servers, minimizing the impact of the change on the overall system.

  • Caching: Consistent hashing determines which cache node stores a particular key-value pair. When cache nodes are added or removed, only a small fraction of keys changes owner, which keeps cache misses low and preserves overall performance.

  • Content delivery: Consistent hashing is used to map content to edge servers in a content delivery network (CDN). This helps ensure that content is served from the server closest to the user, reducing latency and improving the user experience.

  • Database sharding: Consistent hashing is also used to shard data across multiple database servers, allowing for efficient distribution of data and improved performance. In this case, each shard is allocated to a particular server based on its hash value.

Simple Java implementation

import java.util.*;

// Class representing consistent hashing
public class ConsistentHashing {
    private final TreeMap<Integer, String> circle = new TreeMap<>(); // TreeMap to store hashed nodes
    private final int numVirtualNodes; // Number of virtual nodes per physical node

    // Constructor to initialize consistent hashing with the number of virtual nodes
    public ConsistentHashing(int numVirtualNodes) {
        this.numVirtualNodes = numVirtualNodes;
    }

    // Method to add a node to the consistent hashing ring
    public void addNode(String node) {
        // Add virtual nodes for the given physical node
        for (int i = 0; i < numVirtualNodes; i++) {
            String virtualNode = node + "#" + i; // Create a virtual node with an index
            int hash = getHash(virtualNode); // Get hash value for the virtual node
            circle.put(hash, node); // Put the virtual node in the TreeMap with its hash value
        }
    }

    // Method to remove a node from the consistent hashing ring
    public void removeNode(String node) {
        // Remove virtual nodes associated with the given physical node
        for (int i = 0; i < numVirtualNodes; i++) {
            String virtualNode = node + "#" + i; // Create a virtual node with an index
            int hash = getHash(virtualNode); // Get hash value for the virtual node
            circle.remove(hash); // Remove the virtual node from the TreeMap
        }
    }

    // Method to get the node responsible for a given key
    public String getNode(String key) {
        if (circle.isEmpty()) {
            return null; // Return null if the circle is empty
        }
        int hash = getHash(key); // Get hash value for the key
        if (!circle.containsKey(hash)) {
            // If the hash value is not in the TreeMap, find the closest succeeding node
            SortedMap<Integer, String> tailMap = circle.tailMap(hash);
            hash = tailMap.isEmpty() ? circle.firstKey() : tailMap.firstKey();
        }
        return circle.get(hash); // Return the node responsible for the key
    }

    // Method to get the hash value for a string
    private int getHash(String key) {
        // Use a simple hashing algorithm (e.g., Java's hashCode() function)
        return key.hashCode(); // Return the hash value of the key
    }

    // Main method to test the consistent hashing implementation
    public static void main(String[] args) {
        ConsistentHashing consistentHashing = new ConsistentHashing(3); // Create consistent hashing with 3 virtual nodes per physical node
        consistentHashing.addNode("Node1"); // Add physical nodes to the consistent hashing ring
        consistentHashing.addNode("Node2");
        consistentHashing.addNode("Node3");

        // Test keys and get the corresponding nodes
        String[] keys = {"Key1", "Key2", "Key3", "Key4", "Key5"};
        for (String key : keys) {
            String node = consistentHashing.getNode(key); // Get the node responsible for the key
            System.out.println("Key: " + key + ", Node: " + node); // Print the key and corresponding node
        }
    }
}
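
To see the minimal-remapping property described above, the ConsistentHashing class can be exercised before and after removing a node. A rough sketch (a separate demo class assumed to sit alongside the class above; key and node names are arbitrary):

// Sketch: observe how many keys move when a node is removed from the ring
public class RemappingDemo {
    public static void main(String[] args) {
        ConsistentHashing ring = new ConsistentHashing(3);
        ring.addNode("Node1");
        ring.addNode("Node2");
        ring.addNode("Node3");

        String[] keys = {"Key1", "Key2", "Key3", "Key4", "Key5", "Key6", "Key7", "Key8"};
        java.util.Map<String, String> before = new java.util.HashMap<>();
        for (String key : keys) {
            before.put(key, ring.getNode(key)); // Record the original placement of each key
        }

        ring.removeNode("Node2"); // Remove one node from the ring

        int moved = 0;
        for (String key : keys) {
            if (!before.get(key).equals(ring.getNode(key))) {
                moved++; // Count keys whose owner changed
            }
        }
        // Only the keys previously owned by Node2 should have moved (roughly 1/3 on average)
        System.out.println("Keys remapped after removing Node2: " + moved + " of " + keys.length);
    }
}

With the naive hash-mod-N scheme, by contrast, most keys would typically change servers after such a change.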

2. Bloom Filters

A Bloom filter is a probabilistic data structure used to test whether an element is a member of a set. Adding an element hashes it with several hash functions and sets the corresponding bits in a bit array. To test membership, the filter hashes the element the same way and checks whether all of the corresponding bits are set: if any bit is clear, the element is definitely not in the set; if all bits are set, the element may or may not be in the set, because other elements could have set the same bits.

Bloom filters are very space efficient, requiring far less memory than structures such as hash tables or binary search trees, and insertion and lookup both take constant time (proportional only to the number of hash functions). The trade-off is false positives: the filter may report that an element is in the set when it is not. The false positive rate can be tuned by adjusting the size of the bit array and the number of hash functions.
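
As a rough sizing guide (these are the standard Bloom filter formulas, not something stated in the original text): for n expected elements and a target false positive rate p, the bit-array size is about m = -n * ln(p) / (ln 2)^2 and the optimal number of hash functions is about k = (m / n) * ln 2. A small, illustrative Java helper:

// Sketch: compute Bloom filter parameters from the expected element count and target false positive rate
public class BloomFilterSizing {
    // m = -n * ln(p) / (ln 2)^2  -> number of bits
    static long optimalBits(long n, double p) {
        return (long) Math.ceil(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    // k = (m / n) * ln 2  -> number of hash functions
    static int optimalHashes(long m, long n) {
        return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
    }

    public static void main(String[] args) {
        long n = 1_000_000;  // expected number of elements
        double p = 0.01;     // target false positive rate (1%)
        long m = optimalBits(n, p);
        int k = optimalHashes(m, n);
        System.out.println("bits m = " + m + " (~" + (m / 8 / 1024) + " KiB), hash functions k = " + k);
    }
}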

Bloom filters are commonly used in various applications, including:

  • Spell checkers: Bloom filters can be used to quickly check if a word is in a dictionary, reducing the need to search through the entire dictionary word by word.

  • Network routers and firewalls: Bloom filters can be used to store information about the network traffic, such as the IP addresses of known malware or spam sources, and quickly identify if incoming traffic is from a known source.

  • Databases and search engines: Bloom filters can be used to pre-filter candidate records or documents to avoid expensive disk access and CPU processing.

  • Web browsers: Bloom filters can be used to store information about visited URLs for fast phishing and malware detection.

A simple Java implementation

import java.util.BitSet;
import java.util.Random;

// Bloom Filter class
public class BloomFilter {
    private BitSet bitSet; // BitSet to store the filter
    private int size; // Size of the filter
    private int numHashes; // Number of hash functions

    // Constructor to initialize the Bloom Filter with the given size and number of hash functions
    public BloomFilter(int size, int numHashes) {
        this.size = size;
        this.numHashes = numHashes;
        bitSet = new BitSet(size); // Initialize the BitSet with the given size
    }

    // Method to add an element to the Bloom Filter
    public void add(String value) {
        for (int i = 0; i < numHashes; i++) {
            int hash = hash(value, i); // Compute the hash value for the element
            bitSet.set(hash % size, true); // Set the corresponding bit in the BitSet
        }
    }

    // Method to check if an element might be in the Bloom Filter
    public boolean contains(String value) {
        for (int i = 0; i < numHashes; i++) {
            int hash = hash(value, i); // Compute the hash value for the element
            if (!bitSet.get(hash % size)) {
                return false; // If any of the corresponding bits is not set, the element is not in the filter
            }
        }
        return true; // If all corresponding bits are set, the element might be in the filter
    }

    // Method to compute the hash value for an element using a specific hash function
    private int hash(String value, int index) {
        // Derive the i-th hash from hashCode() by mixing in the index, then clear the sign bit
        // so the bit position is never negative (a negative index would crash BitSet.set()).
        // In practice, stronger hashes (e.g., MurmurHash with different seeds) give better independence.
        int h = value.hashCode() ^ (index * 0x5bd1e995);
        return h & 0x7fffffff;
    }

    // Main method to test the Bloom Filter implementation
    public static void main(String[] args) {
        BloomFilter bloomFilter = new BloomFilter(100, 3); // Create a Bloom Filter with size 100 and 3 hash functions
        // Add some elements to the Bloom Filter
        bloomFilter.add("apple");
        bloomFilter.add("banana");
        bloomFilter.add("orange");
        // Check if elements might be in the Bloom Filter
        System.out.println("Contains 'apple': " + bloomFilter.contains("apple")); // true
        System.out.println("Contains 'grape': " + bloomFilter.contains("grape")); // false
    }
}

3. Skip Lists

Skip lists are a type of probabilistic data structure, similar to a multilevel linked list, that can efficiently search, insert, and delete elements in a sorted sequence. They are designed to have an average search, insertion, and deletion time complexity of O(log n), making them a suitable alternative to balanced trees in some cases.

Skip lists achieve efficient searching by adding multiple levels of links above the base sorted list, with the higher levels acting as express lanes that skip over many nodes at once. Level assignment is random: each element appears on a level with progressively smaller probability (commonly halving at each additional level), so higher levels are sparser. This randomness keeps the skip list balanced on average, even as elements are added or removed.

Skip lists are used in a variety of applications, including databases, game programming, and network routing. They are particularly well-suited for applications that require fast search times and frequent insertions, and may be a better choice than balanced trees in situations where the cost of rebalancing becomes prohibitive. However, because they rely on randomness, their performance can be difficult to predict precisely, and they may not be as suitable for applications that require very tight control over performance characteristics.

Simple Java implementation

import java.util.Random;

// Class representing a node in the Skip List
class SkipListNode {
    int value; // Value stored in the node
    SkipListNode[] next; // Array to store pointers to next nodes at different levels

    // Constructor to initialize a node with a given value and number of levels
    public SkipListNode(int value, int level) {
        this.value = value;
        this.next = new SkipListNode[level + 1]; // Initialize the array of pointers
    }
}

// Class representing the Skip List
public class SkipList {
    private final Random random; // Random number generator
    private final int maxHeight; // Maximum height of the Skip List
    private SkipListNode head; // Head node of the Skip List

    // Constructor to initialize the Skip List with a given maximum height
    public SkipList(int maxHeight) {
        this.random = new Random();
        this.maxHeight = maxHeight;
        this.head = new SkipListNode(Integer.MIN_VALUE, maxHeight); // Initialize the head node
    }

    // Method to insert a value into the Skip List, keeping every level sorted
    public void insert(int value) {
        // Generate a random height for the new node
        int height = getRandomHeight();
        // Create a new node with the given value and random height
        SkipListNode newNode = new SkipListNode(value, height);
        // Walk down from the top level, remembering the predecessor at each level
        SkipListNode current = head;
        for (int i = maxHeight; i >= 0; i--) {
            while (current.next[i] != null && current.next[i].value < value) {
                current = current.next[i];
            }
            if (i <= height) {
                // Splice the new node in after its predecessor at this level
                newNode.next[i] = current.next[i];
                current.next[i] = newNode;
            }
        }
    }

    // Method to search for a value in the Skip List
    public boolean search(int value) {
        // Start from the top level and traverse down to find the value
        SkipListNode current = head;
        for (int i = maxHeight; i >= 0; i--) {
            while (current.next[i] != null && current.next[i].value < value) {
                current = current.next[i];
            }
            if (current.next[i] != null && current.next[i].value == value) {
                return true; // Value found
            }
        }
        return false; // Value not found
    }

    // Method to generate a random height for a new node
    private int getRandomHeight() {
        int height = 0;
        // Flip a fair coin; each success adds one level, capped at the maximum height
        while (random.nextDouble() < 0.5 && height < maxHeight) {
            height++;
        }
        return height;
    }

    // Method to print the contents of the Skip List
    public void print() {
        for (int i = maxHeight; i >= 0; i--) {
            SkipListNode current = head.next[i];
            System.out.print("Level " + i + ": ");
            while (current != null) {
                System.out.print(current.value + " ");
                current = current.next[i];
            }
            System.out.println();
        }
    }

    // Main method to test the Skip List implementation
    public static void main(String[] args) {
        SkipList skipList = new SkipList(4); // Create a Skip List with maximum height 4
        // Insert some values into the Skip List
        skipList.insert(3);
        skipList.insert(6);
        skipList.insert(7);
        skipList.insert(9);
        skipList.insert(12);
        skipList.insert(19);
        skipList.insert(17);
        skipList.insert(26);
        skipList.insert(21);
        skipList.insert(25);
        // Print the contents of the Skip List
        skipList.print();

        // Search for values in the Skip List
        System.out.println("Search for 17: " + skipList.search(17)); // true
        System.out.println("Search for 30: " + skipList.search(30)); // false
    }
}
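
The listing above covers insert and search only; deletion follows the same descent as search. A possible delete method that could be added to the SkipList class (not part of the original listing):

    // Remove a value by unlinking its node from every level on which it appears
    public boolean delete(int value) {
        boolean found = false;
        SkipListNode current = head;
        for (int i = maxHeight; i >= 0; i--) {
            while (current.next[i] != null && current.next[i].value < value) {
                current = current.next[i];
            }
            if (current.next[i] != null && current.next[i].value == value) {
                current.next[i] = current.next[i].next[i]; // Bypass the node at this level
                found = true;
            }
        }
        return found;
    }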

4. B-Tree

A B-tree is a self-balancing tree data structure that keeps data sorted and supports search, insertion, and deletion in logarithmic time. Unlike binary search trees, B-tree nodes may have many children, which keeps the tree shallow and makes B-trees well suited to large data sets in storage systems such as databases and file systems. Each node has a fixed maximum size, determined by the tree's order (or minimum degree), which bounds the number of keys a node can hold.

In a B-tree, an internal node can have a variable number of children within a fixed range, and stores keys along with pointers to its child nodes. The keys within a node are kept in sorted order, and the pointers between them separate the key ranges used to navigate the tree during a search. In a plain B-tree, keys (and their associated data) live in internal and leaf nodes alike; the B+ tree variant described later keeps data only in the leaves.

B-trees are designed to maintain balance, which ensures efficient search operations, and they achieve this through a process of node splitting and merging. When a new key is inserted, or a key is removed, the B-tree is rebalanced so that the number of levels in the tree remains relatively constant.

B-trees have logarithmic time complexity for search, insertion, and deletion operations, which makes them a popular choice for applications that require quick access to large data sets. They are also used extensively in modern database systems to provide fast indexing and querying capabilities.

B-tree Java implementation

// Class representing a node in the B-Tree
class BTreeNode {
    int[] keys; // Array to store keys
    int t; // Minimum degree (defines the range for number of keys)
    BTreeNode[] children; // Array to store child pointers
    int numKeys; // Current number of keys in the node
    boolean leaf; // Flag to indicate if the node is a leaf

    // Constructor to initialize a node
    public BTreeNode(int t, boolean leaf) {
        this.t = t;
        this.leaf = leaf;
        this.keys = new int[2 * t - 1];
        this.children = new BTreeNode[2 * t];
        this.numKeys = 0;
    }
}

// Class representing the B-Tree
public class BTree {
    private BTreeNode root; // Root node of the B-Tree
    private int t; // Minimum degree of the B-Tree

    // Constructor to initialize the B-Tree with a given minimum degree
    public BTree(int t) {
        this.t = t;
        root = new BTreeNode(t, true); // Create a new root node
    }

    // Method to insert a key into the B-Tree
    public void insert(int key) {
        if (root.numKeys == 2 * t - 1) {
            BTreeNode newRoot = new BTreeNode(t, false); // Create a new root node
            newRoot.children[0] = root; // Make the old root as child of the new root
            splitChild(newRoot, 0); // Split the old root and move 1 key to the new root
            insertNonFull(newRoot, key); // Insert the key into the B-Tree with the new root
            root = newRoot; // Update the root of the B-Tree
        } else {
            insertNonFull(root, key); // Insert the key into the B-Tree with the current root
        }
    }

    // Method to insert a key into a non-full B-Tree node
    private void insertNonFull(BTreeNode node, int key) {
        int i = node.numKeys - 1; // Start from the rightmost key
        if (node.leaf) {
            // If the node is a leaf, find the correct position for the new key
            while (i >= 0 && key < node.keys[i]) {
                node.keys[i + 1] = node.keys[i];
                i--;
            }
            // Insert the new key at the correct position
            node.keys[i + 1] = key;
            node.numKeys++; // Increase the number of keys in the node
        } else {
            // If the node is not a leaf, find the child to insert the new key
            while (i >= 0 && key < node.keys[i]) {
                i--;
            }
            i++;
            if (node.children[i].numKeys == 2 * t - 1) {
                // If the child is full, split it
                splitChild(node, i);
                if (key > node.keys[i]) {
                    i++;
                }
            }
            // Insert the new key into the appropriate child node
            insertNonFull(node.children[i], key);
        }
    }

    // Method to split a full child of a B-Tree node
    private void splitChild(BTreeNode parent, int childIndex) {
        BTreeNode child = parent.children[childIndex];
        BTreeNode newChild = new BTreeNode(t, child.leaf);
        newChild.numKeys = t - 1;
        for (int i = 0; i < t - 1; i++) {
            newChild.keys[i] = child.keys[i + t];
        }
        if (!child.leaf) {
            for (int i = 0; i < t; i++) {
                newChild.children[i] = child.children[i + t];
            }
        }
        child.numKeys = t - 1;
        for (int i = parent.numKeys; i > childIndex; i--) {
            parent.children[i + 1] = parent.children[i];
        }
        parent.children[childIndex + 1] = newChild;
        for (int i = parent.numKeys - 1; i >= childIndex; i--) {
            parent.keys[i + 1] = parent.keys[i];
        }
        parent.keys[childIndex] = child.keys[t - 1];
        parent.numKeys++;
    }

    // Method to search for a key in the B-Tree
    public boolean search(int key) {
        return search(root, key);
    }

    // Method to search for a key in a B-Tree node
    private boolean search(BTreeNode node, int key) {
        int i = 0;
        while (i < node.numKeys && key > node.keys[i]) {
            i++;
        }
        if (i < node.numKeys && key == node.keys[i]) {
            return true; // Key found in the current node
        }
        if (node.leaf) {
            return false; // Key not found in the leaf node
        }
        // Recursively search for the key in the appropriate child node
        return search(node.children[i], key);
    }

    // Method to print the contents of the B-Tree
    public void print() {
        print(root, "");
    }

    // Method to print the contents of a B-Tree node
    private void print(BTreeNode node, String prefix) {
        System.out.print(prefix); // Print the current prefix
        for (int i = 0; i < node.numKeys; i++) {
            System.out.print(node.keys[i] + " "); // Print the keys in the current node
        }
        System.out.println();
        if (!node.leaf) {
            for (int i = 0; i <= node.numKeys; i++) {
                print(node.children[i], prefix + "  "); // Recursively print the child nodes
            }
        }
    }

    // Main method to test the B-Tree implementation
    public static void main(String[] args) {
        BTree tree = new BTree(3); // Create a B-Tree with minimum degree 3
        // Insert some keys into the B-Tree
        tree.insert(1);
        tree.insert(3);
        tree.insert(7);
        tree.insert(10);
        tree.insert(11);
        tree.insert(13);
        tree.insert(14);
        tree.insert(15);
        tree.insert(18);
        tree.insert(16);
        tree.insert(19);
        tree.insert(24);
        tree.insert(25);
        tree.insert(26);
        tree.insert(21);
        tree.insert(4);
        tree.insert(5);
        tree.insert(20);
        tree.insert(22);
        tree.insert(2);
        tree.insert(17);
        // Print the contents of the B-Tree
        tree.print();

        // Search for keys in the B-Tree
        System.out.println("Search for 4: " + tree.search(4)); // true
        System.out.println("Search for 8: " + tree.search(8)); // false
    }
}

A B+ tree is a variant of the B-tree in which each non-leaf node holds up to m child pointers (an m-way tree) and acts purely as an index: internal nodes store only keys, while all records live in the leaf nodes. The leaf nodes are linked together, so a range query can locate the first matching leaf and then scan sequentially along the leaf chain. Because internal nodes carry only keys, more of them fit in each node, which improves fan-out and space utilization; combined with the linked leaves, this makes B+ trees a common choice for databases and file systems where data resides in external storage. Like the B-tree, the B+ tree is self-balancing, so search, insertion, and deletion remain logarithmic.

B+ tree Java implementation

// Node class representing a node in the B+ tree
class BPlusTreeNode {
    int[] keys; // Array to store keys
    BPlusTreeNode[] children; // Array to store child pointers
    int numKeys; // Current number of keys in the node
    boolean leaf; // Flag to indicate if the node is a leaf
    BPlusTreeNode next; // Pointer to the next leaf node

    // Constructor to initialize a node with a given degree and leaf status
    public BPlusTreeNode(int degree, boolean leaf) {
        this.keys = new int[2 * degree - 1];
        this.children = new BPlusTreeNode[2 * degree];
        this.numKeys = 0;
        this.leaf = leaf;
        this.next = null;
    }
}

// B+ tree class
public class BPlusTree {
    private BPlusTreeNode root; // Root node of the tree
    private int degree; // Degree of the tree

    // Constructor to initialize the B+ tree with a given degree
    public BPlusTree(int degree) {
        this.degree = degree;
        this.root = new BPlusTreeNode(degree, true); // Initialize the root as a leaf node
    }

    // Method to insert a key into the B+ tree
    public void insert(int key) {
        if (root.numKeys == 2 * degree - 1) {
            // If the root is full, split it and create a new root
            BPlusTreeNode newRoot = new BPlusTreeNode(degree, false);
            newRoot.children[0] = root;
            splitChild(newRoot, 0);
            root = newRoot;
        }
        insertNonFull(root, key);
    }

    // Method to recursively insert a key into a non-full node
    private void insertNonFull(BPlusTreeNode node, int key) {
        int i = node.numKeys - 1;
        if (node.leaf) {
            // If the node is a leaf, insert the key into its sorted position
            while (i >= 0 && key < node.keys[i]) {
                node.keys[i + 1] = node.keys[i];
                i--;
            }
            node.keys[i + 1] = key;
            node.numKeys++;
        } else {
            // If the node is not a leaf, find the child to insert the key
            while (i >= 0 && key < node.keys[i]) {
                i--;
            }
            i++;
            if (node.children[i].numKeys == 2 * degree - 1) {
                // If the child is full, split it
                splitChild(node, i);
                if (key > node.keys[i]) {
                    i++;
                }
            }
            insertNonFull(node.children[i], key);
        }
    }

    // Method to split a full child node. B+ tree semantics: when a leaf splits, the middle
    // key is copied up (it stays in the leaves) and the leaf chain is linked; when an
    // internal node splits, the middle key moves up as in a B-tree.
    private void splitChild(BPlusTreeNode parent, int childIndex) {
        BPlusTreeNode child = parent.children[childIndex];
        BPlusTreeNode newChild = new BPlusTreeNode(degree, child.leaf);
        if (child.leaf) {
            // Right half (including the middle key) goes to the new leaf
            newChild.numKeys = degree;
            for (int i = 0; i < degree; i++) {
                newChild.keys[i] = child.keys[i + degree - 1];
            }
            // Maintain the linked list of leaves used for sequential/range access
            newChild.next = child.next;
            child.next = newChild;
        } else {
            // Internal node: right half excluding the middle key goes to the new node
            newChild.numKeys = degree - 1;
            for (int i = 0; i < degree - 1; i++) {
                newChild.keys[i] = child.keys[i + degree];
            }
            for (int i = 0; i < degree; i++) {
                newChild.children[i] = child.children[i + degree];
            }
        }
        child.numKeys = degree - 1;
        // Shift the parent's children and keys to make room for the new child and separator
        for (int i = parent.numKeys; i > childIndex; i--) {
            parent.children[i + 1] = parent.children[i];
        }
        parent.children[childIndex + 1] = newChild;
        for (int i = parent.numKeys - 1; i >= childIndex; i--) {
            parent.keys[i + 1] = parent.keys[i];
        }
        // The separator is the first key of the new right leaf (copied) or the middle key (moved)
        parent.keys[childIndex] = child.keys[degree - 1];
        parent.numKeys++;
    }

    // Method to search for a key in the B+ tree
    public boolean search(int key) {
        return search(root, key);
    }

    // Method to recursively search for a key in the B+ tree
    private boolean search(BPlusTreeNode node, int key) {
        int i = 0;
        while (i < node.numKeys && key > node.keys[i]) {
            i++;
        }
        if (i < node.numKeys && key == node.keys[i]) {
            return true;
        }
        if (node.leaf) {
            return false;
        }
        return search(node.children[i], key);
    }

    // Method to print the B+ tree
    public void print() {
        print(root, "");
    }

    // Method to recursively print a node and its children in the B+ tree
    private void print(BPlusTreeNode node, String prefix) {
        System.out.print(prefix);
        for (int i = 0; i < node.numKeys; i++) {
            System.out.print(node.keys[i] + " ");
        }
        System.out.println();
        if (!node.leaf) {
            for (int i = 0; i <= node.numKeys; i++) {
                print(node.children[i], prefix + "  ");
            }
        }
    }

    // Main method to test the B+ tree implementation
    public static void main(String[] args) {
        BPlusTree bPlusTree = new BPlusTree(3); // Create a B+ tree with degree 3
        // Insert some keys into the B+ tree
        bPlusTree.insert(1);
        bPlusTree.insert(3);
        bPlusTree.insert(7);
        bPlusTree.insert(10);
        bPlusTree.insert(11);
        bPlusTree.insert(13);
        bPlusTree.insert(14);
        bPlusTree.insert(15);
        bPlusTree.insert(18);
        bPlusTree.insert(16);
        bPlusTree.insert(19);
        bPlusTree.insert(24);
        bPlusTree.insert(25);
        bPlusTree.insert(26);
        bPlusTree.insert(21);
        bPlusTree.insert(4);
        bPlusTree.insert(5);
        bPlusTree.insert(20);
        bPlusTree.insert(22);
        bPlusTree.insert(2);
        bPlusTree.insert(17);
        // Print the B+ tree
        bPlusTree.print();

        // Search for keys in the B+ tree
        System.out.println("Search for 4: " + bPlusTree.search(4)); // true
        System.out.println("Search for 8: " + bPlusTree.search(8)); // false
    }
}
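
The linked leaves are what make B+ trees attractive for range queries. As a sketch (this method is not in the original listing and assumes it is added inside the BPlusTree class above, so it can reach the private fields and the leaf next pointers maintained by splitChild):

    // Collect all keys in [fromKey, toKey] by descending to the first candidate leaf
    // and then scanning along the linked leaf chain
    public java.util.List<Integer> rangeSearch(int fromKey, int toKey) {
        java.util.List<Integer> result = new java.util.ArrayList<>();
        BPlusTreeNode node = root;
        // Descend to the leaf that could contain fromKey
        while (!node.leaf) {
            int i = 0;
            while (i < node.numKeys && fromKey >= node.keys[i]) {
                i++;
            }
            node = node.children[i];
        }
        // Walk the leaf chain, collecting keys until we pass toKey
        while (node != null) {
            for (int i = 0; i < node.numKeys; i++) {
                if (node.keys[i] > toKey) {
                    return result;
                }
                if (node.keys[i] >= fromKey) {
                    result.add(node.keys[i]);
                }
            }
            node = node.next;
        }
        return result;
    }

Called as bPlusTree.rangeSearch(10, 20) after the inserts in main, it would return the inserted keys between 10 and 20 in ascending order.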

5. LRU and LFU

LRU (Least Recently Used) evicts the entry that has gone the longest without being accessed. In a typical implementation, you maintain a map of key-value pairs together with a doubly linked list of keys in access order: whenever an item is accessed, its key is moved to the head of the list, and when the cache exceeds capacity the least recently accessed item is evicted from the tail.
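
A minimal sketch of that design, built from a HashMap and a hand-rolled doubly linked list (class and field names here are illustrative; the LinkedHashMap-based listing further below achieves the same behavior with far less code):

import java.util.HashMap;
import java.util.Map;

// Sketch: LRU cache built from a HashMap and a doubly linked list kept in access order
public class SimpleLRUCache<K, V> {
    private static class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity; // Assumed to be >= 1
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // Sentinel: most recently used side
    private final Node<K, V> tail = new Node<>(null, null); // Sentinel: least recently used side

    public SimpleLRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null;
        moveToFront(node); // Accessed: mark as most recently used
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {
            node.value = value;
            moveToFront(node);
            return;
        }
        if (map.size() >= capacity) {
            Node<K, V> lru = tail.prev; // Least recently used entry sits just before the tail sentinel
            unlink(lru);
            map.remove(lru.key);
        }
        Node<K, V> fresh = new Node<>(key, value);
        map.put(key, fresh);
        linkAfterHead(fresh);
    }

    private void moveToFront(Node<K, V> node) { unlink(node); linkAfterHead(node); }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void linkAfterHead(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}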

An LFU (Least Frequently Used) policy evicts the entry with the lowest access count. It can be implemented with a hash map of key-value pairs plus a priority queue of entries ordered by usage frequency: each access increments an entry's count and updates its position in the queue, and when the cache is full the least frequently used entry is removed from the head of the queue. The implementation below takes a different route, bucketing keys by frequency count (a LinkedHashSet per count), which keeps both get and put at O(1).

The choice between LRU and LFU depends on the cache's access pattern. LRU is typically a good choice when there is strong temporal locality, i.e., recently accessed items are likely to be accessed again soon. LFU is a good fit when a relatively small set of items is accessed far more often than the rest and should stay resident. There are many variations of both policies, as well as hybrids such as LRU-K and Adaptive Replacement Cache (ARC) that combine the benefits of LRU and LFU.

LRU cache implementation (using LinkedHashMap)

import java.util.LinkedHashMap;
import java.util.Map;

// LRU Cache class
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity; // Capacity of the cache

    // Constructor to initialize the LRU cache with a given capacity
    public LRUCache(int capacity) {
        super(capacity, 0.75f, true); // Set access order to true for LRU behavior
        this.capacity = capacity;
    }

    // Method to check if the cache contains a key
    @Override
    public boolean containsKey(Object key) {
        return super.containsKey(key);
    }

    // Method to get the value of a key in the cache
    @Override
    public V get(Object key) {
        return super.getOrDefault(key, null);
    }

    // Method to put a key-value pair into the cache
    @Override
    public V put(K key, V value) {
        return super.put(key, value);
    }

    // Method to remove the least recently used entry if the cache is full
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    // Method to print the contents of the cache (for debugging)
    public void printCache() {
        System.out.println("LRU Cache: " + super.toString());
    }

    // Main method to test the LRU cache implementation
    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(2); // Create an LRU cache with capacity 2
        cache.put(1, "One");
        cache.put(2, "Two");
        cache.printCache(); // Output: LRU Cache: {1=One, 2=Two}

        System.out.println("Get key 1: " + cache.get(1)); // Output: Get key 1: One
        cache.printCache(); // Output: LRU Cache: {2=Two, 1=One}

        cache.put(3, "Three");
        cache.printCache(); // Output: LRU Cache: {1=One, 3=Three}
        
        System.out.println("Get key 2: " + cache.get(2)); // Output: Get key 2: null
    }
}

LFU cache implementation

import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// LFU Cache class
public class LFUCache<K, V> {
    private final int capacity; // Capacity of the cache
    private final Map<K, V> cache; // Map to store key-value pairs
    private final Map<K, Integer> counts; // Map to store frequency counts of keys
    private final Map<Integer, Set<K>> frequencyMap; // Map to store keys grouped by frequency
    private int minFrequency; // Minimum frequency among all keys

    // Constructor to initialize the LFU cache with a given capacity
    public LFUCache(int capacity) {
        this.capacity = capacity;
        this.cache = new HashMap<>();
        this.counts = new HashMap<>();
        this.frequencyMap = new HashMap<>();
        this.minFrequency = 0;
    }

    // Method to get the value of a key in the cache
    public V get(K key) {
        if (!cache.containsKey(key)) {
            return null; // Key not found in the cache
        }
        int count = counts.get(key); // Get the current frequency count of the key
        counts.put(key, count + 1); // Increment the frequency count
        frequencyMap.get(count).remove(key); // Remove the key from its current frequency group
        if (count == minFrequency && frequencyMap.get(count).isEmpty()) {
            minFrequency++; // Update the minimum frequency if necessary
        }
        frequencyMap.putIfAbsent(count + 1, new LinkedHashSet<>()); // Create a new frequency group if necessary
        frequencyMap.get(count + 1).add(key); // Add the key to its new frequency group
        return cache.get(key); // Return the value of the key
    }

    // Method to put a key-value pair into the cache
    public void put(K key, V value) {
        if (capacity <= 0) {
            return; // Cache capacity is 0 or negative, do nothing
        }
        if (cache.containsKey(key)) {
            cache.put(key, value); // Update the value of the existing key
            get(key); // Increment the frequency count of the key
            return;
        }
        if (cache.size() >= capacity) {
            K evictKey = frequencyMap.get(minFrequency).iterator().next(); // Get the least frequently used key
            cache.remove(evictKey); // Remove the least frequently used key from the cache
            counts.remove(evictKey); // Remove the corresponding frequency count
            frequencyMap.get(minFrequency).remove(evictKey); // Remove the key from its frequency group
        }
        cache.put(key, value); // Put the new key-value pair into the cache
        counts.put(key, 1); // Set the frequency count of the new key to 1
        minFrequency = 1; // Reset the minimum frequency to 1
        frequencyMap.putIfAbsent(1, new LinkedHashSet<>()); // Create a new frequency group for the new key
        frequencyMap.get(1).add(key); // Add the new key to its frequency group
    }

    // Method to print the contents of the cache (for debugging)
    public void printCache() {
        System.out.println("LFU Cache: " + cache.toString());
    }

    // Main method to test the LFU cache implementation
    public static void main(String[] args) {
        LFUCache<Integer, String> cache = new LFUCache<>(2); // Create an LFU cache with capacity 2
        cache.put(1, "One");
        cache.put(2, "Two");
        cache.printCache(); // Output: LFU Cache: {1=One, 2=Two}

        System.out.println("Get key 1: " + cache.get(1)); // Output: Get key 1: One
        cache.printCache(); // Output: LFU Cache: {1=One, 2=Two} (the backing HashMap does not reorder on access; frequencies are tracked separately)

        cache.put(3, "Three");
        cache.printCache(); // Output: LFU Cache: {1=One, 3=Three}
        
        System.out.println("Get key 2: " + cache.get(2)); // Output: Get key 2: null
    }
}