Implementing LRU Cache in Java Using LinkedHashMap

(The simplest, cleanest way — and why it’s more than enough in real-world systems)

🎯 What is an LRU Cache?

An LRU Cache (Least Recently Used Cache) is a special type of cache that:

  • Keeps a fixed number of elements (say, capacity = 100).
  • Every time you access (read or write) a key, it becomes the most recently used.
  • If you add a new key when at capacity, it automatically evicts the least recently used key.

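For example, with capacity = 2 the contract plays out like this (a hypothetical trace; A, B, C are arbitrary keys):

cache.put(A, ...);   // order: A
cache.put(B, ...);   // order: A -> B
cache.get(A);        // order: B -> A  (A is now most recently used)
cache.put(C, ...);   // at capacity, so B (the least recently used) is evicted; order: A -> C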
🔥 Where is this useful?

✅ Common scenarios:

  • Database or REST API query caching.
  • Keeping recently opened files in memory.
  • Web session storage.

In modern production Java systems, you’d rarely build this from scratch.
Instead, you’d use:

  • Redis: Distributed cache with built-in LRU, LFU, and TTL eviction.
  • Caffeine / Guava: Local JVM in-memory cache with advanced eviction and stats.
  • Spring Boot Cache with Redis / Ehcache: Transparent caching at the method level.
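For instance, here is a minimal sketch of size-bounded caching with Caffeine (assuming the com.github.ben-manes.caffeine:caffeine dependency is on the classpath; the keys and values are placeholders):

import java.time.Duration;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineDemo {
    public static void main(String[] args) {
        Cache<String, String> cache = Caffeine.newBuilder()
                .maximumSize(100)                         // size-based eviction
                .expireAfterWrite(Duration.ofMinutes(5))  // TTL-style expiry
                .recordStats()                            // hit/miss metrics
                .build();

        cache.put("user:42", "Alice");
        System.out.println(cache.getIfPresent("user:42")); // Alice
    }
}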

But for small local scenarios, Java gives you a built-in LRU-capable class: LinkedHashMap.

🚀 ✅ What is LinkedHashMap?

  • LinkedHashMap extends HashMap but maintains a linked list of entries.
  • You can make this linked list maintain access order, meaning:
[Least recently used] -> ... -> [Most recently used]

Whenever you call get() or put() on an existing key, it moves that key to the end.
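Here is a minimal sketch of that reordering in isolation (just a LinkedHashMap with accessOrder = true, no eviction yet):

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument = accessOrder
        Map<Integer, String> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put(1, "A");
        map.put(2, "B");
        map.put(3, "C");
        map.get(1);              // touching key 1 moves it to the end
        System.out.println(map); // {2=B, 3=C, 1=A}
    }
}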

🔥 So how does it become an LRU Cache?

✅ You can simply:

  1. Initialize LinkedHashMap with accessOrder = true to maintain usage order.
  2. Override removeEldestEntry() so the map automatically evicts the eldest entry whenever the size exceeds capacity.

🚀 Full Java class implementing LRU Cache using LinkedHashMap


import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // true = accessOrder, so LinkedHashMap maintains order of access (not just insertion)
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(3);

        cache.put(1, "One");
        cache.put(2, "Two");
        cache.put(3, "Three");
        System.out.println("Cache: " + cache);

        // Access key 1 so it becomes most recently used
        cache.get(1);
        System.out.println("Accessed 1, Cache: " + cache);

        // Insert new key, will evict least recently used (which is 2)
        cache.put(4, "Four");
        System.out.println("Added 4, Cache: " + cache);
    }
}

🔍 How does this work?

  • super(capacity, 0.75f, true): Passes the initial capacity, the load factor, and accessOrder=true, so LinkedHashMap keeps entries in order of usage.
  • removeEldestEntry(): Called by LinkedHashMap after every put(). If it returns true, the eldest entry is evicted.
  • get(key): Moves that entry to the end (most recently used).
  • put(key, val): If the key exists, updates the value and moves it to the end.

🚀 Sample output of the main method


Cache: {1=One, 2=Two, 3=Three}
Accessed 1, Cache: {2=Two, 3=Three, 1=One}
Added 4, Cache: {3=Three, 1=One, 4=Four}

🚀 ✅ Why is this O(1)?

  • get(): O(1). Direct HashMap lookup plus internal pointer moves.
  • put(): O(1). Direct insert plus a move to the end.
  • Eviction: O(1). Removes from the head of the internal linked list.
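One caveat before reaching for this in production: LinkedHashMap is not thread-safe, and with accessOrder = true even get() mutates the internal list. A minimal sketch of wrapping the LRUCache class above for multi-threaded use (Collections.synchronizedMap locks every operation, reads included):

import java.util.Collections;
import java.util.Map;

public class SafeCacheDemo {
    public static void main(String[] args) {
        // Even get() reorders entries when accessOrder=true,
        // so reads must be synchronized too.
        Map<Integer, String> safeCache =
                Collections.synchronizedMap(new LRUCache<>(100));

        safeCache.put(1, "One");
        System.out.println(safeCache.get(1));
    }
}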

🚀 In real-world Java systems

✅ For tiny local caches (like keeping last 100 user IDs in memory), this is perfect.

But for anything distributed or large scale:

  • Redis: Centralized cache shared across services, with built-in eviction and TTL.
  • Caffeine: In-memory local cache with async refresh, advanced LRU/LFU/TTL-style eviction, and hit metrics.
  • Ehcache / Spring Cache: Lets you swap local vs. Redis caching by just changing config.
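For context, a hedged sketch of what that method-level caching looks like with Spring's cache abstraction (assuming spring-boot-starter-cache plus @EnableCaching are configured; UserService, findUserName, and loadFromDatabase are hypothetical names):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    // First call with a given id runs the method; later calls
    // are served from the "users" cache until the entry is evicted.
    @Cacheable("users")
    public String findUserName(long id) {
        return loadFromDatabase(id);
    }

    private String loadFromDatabase(long id) {
        return "user-" + id; // stand-in for a slow query
    }
}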

🚀 ✅ TL;DR Summary

  • LinkedHashMap is your built-in local LRU Cache in Java.
  • Just use accessOrder=true and override removeEldestEntry.
  • ✅ Perfect for interviews to show your understanding of:
    • O(1) operations using hash table + linked list.
    • Eviction policies.

🚀 ✅ Final take

The reason this question is still so popular is that it tests your ability to combine two data structures to meet strict O(1) requirements.
That kind of thinking underlies everything from caching to sliding stream windows to scheduler heaps.
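If you want to show that two-structure composition explicitly in an interview, here is a minimal from-scratch sketch (a hand-rolled equivalent of the LinkedHashMap approach above; ManualLruCache and its members are hypothetical names, not a library API):

import java.util.HashMap;
import java.util.Map;

public class ManualLruCache<K, V> {
    // HashMap gives O(1) lookup; the doubly linked list gives
    // O(1) recency updates and O(1) eviction from the LRU end.
    private final class Node {
        K key; V value; Node prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node> index = new HashMap<>();
    private final Node head = new Node(null, null); // sentinel: least recently used side
    private final Node tail = new Node(null, null); // sentinel: most recently used side

    public ManualLruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node node = index.get(key);
        if (node == null) return null;
        moveToTail(node); // mark as most recently used
        return node.value;
    }

    public void put(K key, V value) {
        Node node = index.get(key);
        if (node != null) {
            node.value = value;
            moveToTail(node);
            return;
        }
        if (index.size() == capacity) {
            Node lru = head.next; // least recently used entry
            unlink(lru);
            index.remove(lru.key);
        }
        node = new Node(key, value);
        index.put(key, node);
        linkBeforeTail(node);
    }

    private void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void linkBeforeTail(Node n) {
        n.prev = tail.prev;
        n.next = tail;
        tail.prev.next = n;
        tail.prev = n;
    }

    private void moveToTail(Node n) {
        unlink(n);
        linkBeforeTail(n);
    }
}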

Hope this helps you master LRU caching in Java using the cleanest possible built-in method!
