qdrant-search-strategies


How to Improve Search Results with Advanced Strategies

These strategies complement basic vector search. Use them only after confirming that the embedding model fits the task and the HNSW config is correct: if even exact search returns bad results, revisit the choice of embedding model (retriever) first. If the user deliberately opts for a weaker embedding model because it is small, fast, and cheap, use reranking or relevance feedback to recover search quality.

Missing Obvious Keyword Matches

Use when: pure vector search misses results that contain obvious keyword matches; domain terminology is absent from the embedding model's training data; exact keyword matching is critical (brand names, SKUs); content is acronym-heavy. Skip when: queries are purely semantic, all data is well covered by the training set, or the latency budget is very tight.

  • Dense + sparse with prefetch and fusion (hybrid search); see the sketch after this list
  • Prefer learned sparse models (miniCOIL, SPLADE, GTE) over raw BM25 where applicable, i.e. when the user needs smart keyword matching and the learned sparse model knows the domain vocabulary
  • For non-English content, configure the sparse BM25 parameters for the target language
  • RRF: a good default; weighted RRF is supported from v1.17+
  • DBSF with asymmetric limits (sparse_limit=250, dense_limit=100) can outperform RRF for technical docs
  • Fusion can also be done through reranking
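
A minimal sketch of the prefetch-and-fuse pattern using the Python client, with fastembed producing the query embeddings. The collection name `docs`, the named vectors `dense`/`sparse`, and the embedding models are assumptions; adjust them to match your collection:

```python
from fastembed import SparseTextEmbedding, TextEmbedding
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Illustrative model choices; any dense model plus a learned sparse
# model (miniCOIL, SPLADE, ...) follows the same pattern.
dense_model = TextEmbedding("BAAI/bge-small-en-v1.5")
sparse_model = SparseTextEmbedding("prithivida/Splade_PP_en_v1")

query = "error handling in async pipelines"
dense_vec = next(iter(dense_model.embed([query])))
sparse_vec = next(iter(sparse_model.embed([query])))

results = client.query_points(
    collection_name="docs",  # assumed collection name
    prefetch=[
        # Asymmetric limits: cast a wider net on the sparse side.
        models.Prefetch(
            query=models.SparseVector(
                indices=sparse_vec.indices.tolist(),
                values=sparse_vec.values.tolist(),
            ),
            using="sparse",  # assumed sparse vector name
            limit=250,
        ),
        models.Prefetch(
            query=dense_vec.tolist(),
            using="dense",  # assumed dense vector name
            limit=100,
        ),
    ],
    # Fusion.RRF is the safe default; DBSF with the asymmetric
    # limits above can outperform it on technical docs.
    query=models.FusionQuery(fusion=models.Fusion.DBSF),
    limit=10,
)

for point in results.points:
    print(point.id, point.score)
```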

Right Documents Found But Wrong Order

Use when: recall is good but precision is poor (the right docs are in the top-100, not the top-10).
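
The usual fix is a reranking pass: over-fetch a generous candidate set (recall is already good) and re-order it with a stronger model. A minimal cross-encoder sketch, assuming a `docs` collection with a `dense` named vector and the document text stored under the `text` payload key:

```python
from fastembed import TextEmbedding
from qdrant_client import QdrantClient
from sentence_transformers import CrossEncoder

client = QdrantClient(url="http://localhost:6333")
dense_model = TextEmbedding("BAAI/bge-small-en-v1.5")  # assumed retriever
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query_text = "how to tune hnsw ef for recall"
query_vector = next(iter(dense_model.embed([query_text]))).tolist()

# Over-fetch: the right docs are in the top-100, just not the top-10.
hits = client.query_points(
    collection_name="docs",  # assumed collection name
    query=query_vector,
    using="dense",           # assumed dense vector name
    limit=100,
    with_payload=True,
).points

# Score every (query, document) pair with the cross-encoder, then re-sort.
pairs = [(query_text, hit.payload["text"]) for hit in hits]
scores = reranker.predict(pairs)
reranked = sorted(zip(scores, hits), key=lambda sh: sh[0], reverse=True)
top_10 = [hit for _, hit in reranked[:10]]
```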
