
Speeding Up fzf: How a Tiny Change Made the World's Fastest Fuzzy Finder Even Faster

· 1077 words · 6 minute read
Go in production: interesting moments ▹

If you’ve ever used the command line, you’ve probably fallen in love with fzf — the blazing-fast fuzzy finder that lets you search through files, git history, processes, or anything else with a few keystrokes. Type a few letters, and fzf instantly shows you the best matches, ranked intelligently. It’s used by millions of developers every day, and it’s famous for feeling instant.

But behind that instant feel is a lot of clever engineering. Today, we’re diving into a recent under-the-hood improvement: a small change, landed as a direct commit rather than a pull request, that replaced a Go map with a fixed-size array for faster algorithm dispatch.

Commit link: Replace procFun map with fixed-size array for faster algo dispatch
What changed: Just 4 lines of code in src/pattern.go.
Why it matters: It trims overhead from fzf’s core matching loop, which runs on every keystroke, so the gain compounds when you’re typing quickly or searching huge lists.

By the end of this post, you’ll understand exactly what changed, why it was needed, and how it works. No prior Go experience required — we’ll keep it beginner-friendly with clear explanations and code examples.

First: A Quick Refresher on How fzf Works 🔗

fzf doesn’t just do simple string search. It supports smart, flexible query syntax:

  • fuzzy (default) — matches characters anywhere, in order (like typing “abc” to match “apple banana cake”)
  • 'exact — exact substring match
  • ^prefix — must start with the term
  • suffix$ — must end with the term
  • =equal — must be exactly equal

You can even combine them: ^src/ .go$ !test (files starting with “src/”, ending in “.go”, but not containing “test”).
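To make the syntax concrete, here is a minimal sketch (not fzf’s actual parser, which also handles inverse terms with !, OR groups with |, and escaping) of how a query token could be classified by its prefix and suffix:

```go
package main

import (
	"fmt"
	"strings"
)

// classify is a simplified, hypothetical token classifier mirroring
// the query syntax above; the names are illustrative, not fzf's.
func classify(token string) string {
	switch {
	case strings.HasPrefix(token, "="):
		return "equal"
	case strings.HasPrefix(token, "'"):
		return "exact"
	case strings.HasPrefix(token, "^"):
		return "prefix"
	case strings.HasSuffix(token, "$"):
		return "suffix"
	default:
		return "fuzzy"
	}
}

func main() {
	for _, tok := range []string{"^src/", ".go$", "'main", "abc"} {
		fmt.Printf("%-6s %s\n", tok, classify(tok))
	}
}
```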

Internally, fzf parses your query into a Pattern — a structured object that knows how to match each part of your input against every item in your list.

This matching happens thousands of times per second as you type. So every micro-optimization counts.

The Old Way: Using a map for Algorithm Dispatch 🔗

Inside the Pattern struct (in src/pattern.go), there was this field:

procFun map[termType]algo.Algo

termType is a simple integer enum with exactly 6 possible values (defined with iota):

type termType int

const (
    termFuzzy termType = iota // 0
    termExact                 // 1
    termExactBoundary         // 2
    termPrefix                // 3
    termSuffix                // 4
    termEqual                 // 5
)

Each termType maps to a specific matching algorithm (algo.Algo is just a function type that knows how to score a match).
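In Go, a named function type makes this kind of dispatch natural. Here’s a stripped-down sketch: the real algo.Algo signature takes more parameters (case sensitivity, rune slices, reusable slabs), and the matcher below is a toy, not fzf’s scoring algorithm.

```go
package main

import (
	"fmt"
	"strings"
)

// Algo stands in for fzf's algo.Algo function type: it reports whether
// (and in the real code, how well) a term matches a line of input.
type Algo func(text, term string) bool

// fuzzyMatch checks that the term's bytes appear in order in the text.
func fuzzyMatch(text, term string) bool {
	i := 0
	for j := 0; j < len(text) && i < len(term); j++ {
		if text[j] == term[i] {
			i++
		}
	}
	return i == len(term)
}

// exactMatch is the substring analogue of algo.ExactMatchNaive.
func exactMatch(text, term string) bool {
	return strings.Contains(text, term)
}

func main() {
	var pfun Algo = fuzzyMatch // dispatch target chosen at pattern-build time
	fmt.Println(pfun("apple banana cake", "abc")) // a..b..c appear in order
}
```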

In the old code, BuildPattern would create a map like this:

procFun: make(map[termType]algo.Algo)

Then fill it with the right algorithm for each type (fuzzy uses your chosen fuzzy algorithm, exact uses ExactMatchNaive, etc.).

Later, during the actual matching (in extendedMatch), fzf would do:

pfun := p.procFun[term.typ]  // ← map lookup here!
off, score, pos := p.iter(pfun, ...)

This lookup happened for every term, in every candidate item, on every keystroke.
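Put together, the old hot loop looked roughly like this sketch. It is heavily simplified: the real extendedMatch also tracks scores, offsets, and inverse terms, and the names here mirror the post, not the source.

```go
package main

import "fmt"

type termType int

const (
	termFuzzy termType = iota
	termExact
)

// Algo mirrors the role of algo.Algo: match a term against a line.
type Algo func(text, term string) bool

type term struct {
	typ  termType
	text string
}

// fuzzy is a toy in-order matcher standing in for fzf's fuzzy algorithm.
func fuzzy(text, t string) bool {
	i := 0
	for j := 0; j < len(text) && i < len(t); j++ {
		if text[j] == t[i] {
			i++
		}
	}
	return i == len(t)
}

// matchLine is a stripped-down stand-in for extendedMatch: every term
// must match, and each term triggers one procFun lookup.
func matchLine(procFun map[termType]Algo, terms []term, line string) bool {
	for _, t := range terms {
		pfun := procFun[t.typ] // ← hash lookup in the hot path
		if !pfun(line, t.text) {
			return false
		}
	}
	return true
}

func main() {
	procFun := map[termType]Algo{termFuzzy: fuzzy}
	terms := []term{{termFuzzy, "abc"}}
	for _, line := range []string{"apple banana cake", "zzz"} {
		fmt.Println(line, matchLine(procFun, terms, line))
	}
}
```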

The New Way: A Fixed-Size Array (The Optimization) 🔗

The commit changed the field to:

procFun [6]algo.Algo

And removed the make(map...) call. That’s it.

Now, instead of a hash-map lookup, fzf does a direct array index:

pfun := p.procFun[term.typ]  // super fast array access!

Because termType is just an integer from 0 to 5, the array index is guaranteed to be valid and lightning-fast.

Here’s the exact diff (simplified):

-    procFun map[termType]algo.Algo
+    procFun [6]algo.Algo

-        procFun: make(map[termType]algo.Algo)}
+    }  // procFun is zero-initialized and then filled in BuildPattern

In BuildPattern, the code now directly assigns to array slots (e.g. p.procFun[termFuzzy] = fuzzyAlgo).

Why This Matters: Performance in the Hot Path 🔗

In Go:

  • Map lookup = hash the key + handle collisions + memory indirection. Even for tiny maps, there’s overhead.
  • Array access = a single indexed memory load (base_address + index * size), essentially one instruction.
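You can get a rough feel for the gap with Go’s testing.Benchmark helper. This is a toy measurement, not fzf’s own benchmark suite, and the numbers vary by machine, so no figures are quoted here:

```go
package main

import (
	"fmt"
	"testing"
)

type algoFn func(int) int

// buildTables creates the same six dispatch targets twice: once in a
// fixed-size array, once in a map, so the two lookup styles can be
// timed on identical work.
func buildTables() ([6]algoFn, map[int]algoFn) {
	var arr [6]algoFn
	m := make(map[int]algoFn, 6)
	for i := 0; i < 6; i++ {
		i := i // per-iteration copy (required before Go 1.22)
		f := func(x int) int { return x + i }
		arr[i] = f
		m[i] = f
	}
	return arr, m
}

func main() {
	arr, m := buildTables()
	arrRes := testing.Benchmark(func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			_ = arr[n%6](n) // indexed load
		}
	})
	mapRes := testing.Benchmark(func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			_ = m[n%6](n) // hash + bucket probe
		}
	})
	fmt.Printf("array: %d ns/op, map: %d ns/op\n",
		arrRes.NsPerOp(), mapRes.NsPerOp())
}
```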

This change eliminates unnecessary work in fzf’s most performance-critical loop — the part that decides which matching function to call for each term.

fzf author junegunn has always been obsessed with speed (fzf is written in Go for exactly this reason). This tweak is classic fzf: invisible to users, but it makes the tool feel even more responsive.

Before vs After: What You Actually Notice 🔗

You won’t see new flags or UI changes. The goal of this change was pure performance, and it was fully achieved:

  • Matching is now faster (especially on large lists or complex queries with multiple term types).
  • No behavior change — same results, same scoring, same everything.
  • Lower CPU usage during interactive use.

Real-world impact: If you’re using fzf with fd, rg, or inside Vim/Neovim/tmux, you may notice the difference when searching thousands of items or typing rapidly.

Visualizing the Improvement 🔗

Think of it like this:

  • Old (map): You’re looking up a phone number by hashing the name, jumping to the right page, and scanning that page for the entry.
  • New (array): You’re opening a drawer with exactly 6 labeled slots — you know exactly where everything is.

Or, in code terms:

// Old (slower)
algorithm := dispatchMap[termType]

// New (faster)
algorithm := dispatchArray[termType]  // direct memory access

Why Was This Change Made Now? 🔗

fzf has been incredibly stable, but its core matching engine is constantly evolving (fuzzy algorithm v1 vs v2, better scoring, etc.). As the code got more optimized elsewhere, the map lookup became a visible (tiny) bottleneck in profiling.

This commit is a perfect example of mature open-source maintenance: small, focused, well-explained, and laser-targeted at performance without risking correctness.

Wrapping Up: The Goal Was Achieved 🔗

This commit had one clear goal: make algorithm dispatch faster by replacing a map with a fixed-size array.

  • What happened: map[termType]algo.Algo → [6]algo.Algo
  • How it works: Direct array indexing instead of hash lookup
  • Why it was needed: Every matching decision is in the hot path of an interactive tool

The change is live in the latest fzf (merged recently). Update with git pull (or your system package manager), and enjoy an even snappier experience.

fzf continues to prove that the best improvements are often the ones you barely notice — until you realize everything just feels better.

Have you noticed fzf getting faster lately? Drop your favorite fzf workflow in my X/Twitter comments — I’d love to hear how you’re using it! 🚀

Thanks to junegunn for another elegant optimization. fzf remains one of the best examples of thoughtful, high-performance software.

I hope you enjoyed reading this post as much as I enjoyed writing it. If you know someone who could benefit from this information, send them a link to this post. If you want to get notified about new posts, follow me on YouTube, Twitter (X), LinkedIn, and GitHub.
