{"id":880,"date":"2025-07-13T18:31:14","date_gmt":"2025-07-14T01:31:14","guid":{"rendered":"https:\/\/www.alerainfotech.com\/?p=880"},"modified":"2025-07-13T18:31:14","modified_gmt":"2025-07-14T01:31:14","slug":"implementing-lru-cache-in-java-using-linkedhashmap","status":"publish","type":"post","link":"https:\/\/www.alerainfotech.com\/home\/2025\/07\/13\/implementing-lru-cache-in-java-using-linkedhashmap\/","title":{"rendered":"Implementing LRU Cache in Java Using LinkedHashMap"},"content":{"rendered":"<p><em>(The simplest, cleanest way \u2014 and why it\u2019s more than enough in real-world systems)<\/em><\/p>\n<h3>\ud83c\udfaf What is an LRU Cache?<\/h3>\n<p>An <strong>LRU Cache (Least Recently Used Cache)<\/strong> is a special type of cache that:<\/p>\n<ul>\n<li>Keeps a fixed number of elements (say, capacity = 100).<\/li>\n<li>Every time you access (read or write) a key, it becomes the <em>most recently used<\/em>.<\/li>\n<li>If you add a new key when at capacity, it automatically <strong>evicts the least recently used key<\/strong>.<\/li>\n<\/ul>\n<h3>\ud83d\udd25 Where is this useful?<\/h3>\n<p>\u2705 Common scenarios:<\/p>\n<ul>\n<li>Database or REST API query caching.<\/li>\n<li>Keeping recently opened files in memory.<\/li>\n<li>Web session storage.<\/li>\n<\/ul>\n<p>In modern production Java systems, you\u2019d rarely build this from scratch.<br \/>\nInstead, you\u2019d use:<\/p>\n<table>\n<thead>\n<tr>\n<th>Tool<\/th>\n<th>Why<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\u2705 <strong>Redis<\/strong><\/td>\n<td>Distributed cache with built-in LRU, LFU, TTL eviction.<\/td>\n<\/tr>\n<tr>\n<td>\u2705 <strong>Caffeine \/ Guava<\/strong><\/td>\n<td>Local JVM in-memory cache with advanced eviction, stats.<\/td>\n<\/tr>\n<tr>\n<td>\u2705 <strong>Spring Boot Cache with Redis \/ Ehcache<\/strong><\/td>\n<td>Transparent caching at method level.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>But for small local scenarios, Java gives you a <strong>built-in LRU-capable 
class: LinkedHashMap<\/strong>.<\/p>\n<h3>\ud83d\ude80 \u2705 What is LinkedHashMap?<\/h3>\n<ul>\n<li><code>LinkedHashMap<\/code> extends <code>HashMap<\/code> but additionally maintains a <strong>doubly linked list<\/strong> of entries.<\/li>\n<li>You can make this linked list maintain <strong>access order<\/strong>, meaning:<\/li>\n<\/ul>\n<pre><code>[Least recently used] -> ... -> [Most recently used]<\/code><\/pre>\n<p>Whenever you call <code>get()<\/code> (or <code>put()<\/code> on an existing key), that entry moves to the end of the list.<\/p>\n<h3>\ud83d\udd25 So how does it become an LRU Cache?<\/h3>\n<p>\u2705 You can simply:<\/p>\n<ol>\n<li>Initialize <code>LinkedHashMap<\/code> with <strong><code>accessOrder = true<\/code><\/strong> to maintain usage order.<\/li>\n<li>Override <code>removeEldestEntry()<\/code> so it automatically evicts the oldest entry once the size exceeds <code>capacity<\/code>.<\/li>\n<\/ol>\n<h3>\ud83d\ude80 Full Java class implementing LRU Cache using LinkedHashMap<\/h3>\n<pre><code class=\"language-java\">\nimport java.util.LinkedHashMap;\nimport java.util.Map;\n\npublic class LRUCache&lt;K, V&gt; extends LinkedHashMap&lt;K, V&gt; {\n    private final int capacity;\n\n    public LRUCache(int capacity) {\n        \/\/ true = accessOrder, so LinkedHashMap maintains order of access (not just insertion)\n        super(capacity, 0.75f, true);\n        this.capacity = capacity;\n    }\n\n    @Override\n    protected boolean removeEldestEntry(Map.Entry&lt;K, V&gt; eldest) {\n        \/\/ Called after each insertion; returning true removes the eldest entry\n        return size() > capacity;\n    }\n\n    public static void main(String[] args) {\n        LRUCache&lt;Integer, String&gt; cache = new LRUCache&lt;&gt;(3);\n\n        cache.put(1, \"One\");\n        cache.put(2, \"Two\");\n        cache.put(3, \"Three\");\n        System.out.println(\"Cache: \" + cache);\n\n        \/\/ Access key 1 so it becomes most recently used\n        cache.get(1);\n        System.out.println(\"Accessed 1, Cache: \" + cache);\n\n        \/\/ Insert new key, will evict least recently used 
(which is 2)\n        cache.put(4, \"Four\");\n        System.out.println(\"Added 4, Cache: \" + cache);\n    }\n}\n<\/code><\/pre>\n<h3>\ud83d\udd0d How does this work?<\/h3>\n<table>\n<thead>\n<tr>\n<th>Method<\/th>\n<th>What it does<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><code>super(capacity, 0.75f, true)<\/code><\/td>\n<td>Sets the initial capacity and load factor, plus <code>accessOrder=true<\/code>, so LinkedHashMap keeps entries in order of usage.<\/td>\n<\/tr>\n<tr>\n<td><code>removeEldestEntry<\/code><\/td>\n<td>Called by LinkedHashMap after <code>put()<\/code> (or <code>putAll()<\/code>) inserts a new entry. If it returns <code>true<\/code>, the eldest (least recently used) entry is removed.<\/td>\n<\/tr>\n<tr>\n<td><code>get(key)<\/code><\/td>\n<td>Moves that entry to the end (most recently used).<\/td>\n<\/tr>\n<tr>\n<td><code>put(key, val)<\/code><\/td>\n<td>If the key exists, updates the value and moves the entry to the end.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>\ud83d\ude80 Sample output of the main method<\/h3>\n<pre><code>\nCache: {1=One, 2=Two, 3=Three}\nAccessed 1, Cache: {2=Two, 3=Three, 1=One}\nAdded 4, Cache: {3=Three, 1=One, 4=Four}\n<\/code><\/pre>\n<h3>\ud83d\ude80 \u2705 Why is this O(1)?<\/h3>\n<table>\n<thead>\n<tr>\n<th>Operation<\/th>\n<th>Time<\/th>\n<th>Why<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>get()<\/td>\n<td>O(1)<\/td>\n<td>Direct HashMap lookup + constant-time pointer moves in the linked list.<\/td>\n<\/tr>\n<tr>\n<td>put()<\/td>\n<td>O(1)<\/td>\n<td>Direct insert + move to the end of the list.<\/td>\n<\/tr>\n<tr>\n<td>eviction<\/td>\n<td>O(1)<\/td>\n<td>Removes from the head of the linked list.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>\ud83d\ude80 In real-world Java systems<\/h3>\n<p>\u2705 For tiny local caches (like keeping the last 100 user IDs in memory), this is perfect. Just note that <code>LinkedHashMap<\/code> is not thread-safe, so wrap it with <code>Collections.synchronizedMap()<\/code> if multiple threads share the cache.<\/p>\n<p>But for anything distributed or large scale:<\/p>\n<table>\n<thead>\n<tr>\n<th>Tool<\/th>\n<th>Why you\u2019d pick it<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Redis<\/strong><\/td>\n<td>Centralized cache shared across services, with built-in eviction and 
TTL.<\/td>\n<\/tr>\n<tr>\n<td><strong>Caffeine<\/strong><\/td>\n<td>In-memory local cache with async refresh, LRU\/LFU\/TTL-style eviction, and hit metrics.<\/td>\n<\/tr>\n<tr>\n<td><strong>Ehcache \/ Spring Cache<\/strong><\/td>\n<td>Lets you swap between a local cache and Redis just by changing configuration.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>\ud83d\ude80 \u2705 TL;DR Summary<\/h3>\n<ul>\n<li><code>LinkedHashMap<\/code> is your <strong>built-in local LRU Cache<\/strong> in Java.<\/li>\n<li>Just use <code>accessOrder=true<\/code> and override <code>removeEldestEntry<\/code>.<\/li>\n<li>\u2705 Perfect for interviews to show your understanding of:\n<ul>\n<li>O(1) operations using a hash table + linked list.<\/li>\n<li>Eviction policies.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>\ud83d\ude80 \u2705 Final take<\/h3>\n<p>The reason this question is still so popular is that it <strong>tests your ability to combine two data structures to meet strict O(1) requirements<\/strong>.<br \/>\nThat thinking is what underlies everything from caching to stream windows to scheduler heaps.<\/p>\n<p>\u270d <strong>Hope this helps you master LRU caching in Java using the cleanest possible built-in method!<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>(The simplest, cleanest way \u2014 and why it\u2019s more than enough in real-world systems) \ud83c\udfaf What is an LRU Cache? An LRU Cache (Least Recently Used Cache) is a special type of cache that: Keeps a fixed number of elements (say, capacity = 100). 
Every time you access (read or write) a key, it becomes [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-880","post","type-post","status-publish","format-standard","hentry","category-java"],"_links":{"self":[{"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/posts\/880","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/comments?post=880"}],"version-history":[{"count":1,"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/posts\/880\/revisions"}],"predecessor-version":[{"id":881,"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/posts\/880\/revisions\/881"}],"wp:attachment":[{"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/media?parent=880"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/categories?post=880"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.alerainfotech.com\/home\/wp-json\/wp\/v2\/tags?post=880"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}