Enable CLIP-powered semantic search to find content by natural language description across your entire library.
StreamStash uses OpenCLIP (ViT-B/32) — a vision-language AI model — to understand what's in your downloaded images and videos. Instead of searching by filename or username, you can search by what you see:
The model extracts visual embeddings from keyframes and matches them against your text query using cosine similarity.
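The matching step described above can be sketched with plain NumPy. This is an illustrative sketch only: the toy vectors are 4-dimensional stand-ins (real ViT-B/32 embeddings are 512-dimensional), and producing the embeddings themselves requires the OpenCLIP model.

```python
import numpy as np

def cosine_scores(text_emb: np.ndarray, frame_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one text embedding and many frame embeddings."""
    text_emb = text_emb / np.linalg.norm(text_emb)
    frame_embs = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    # With unit-length vectors, cosine similarity reduces to a dot product.
    return frame_embs @ text_emb

# Toy embeddings standing in for a text query and two stored keyframes.
query = np.array([1.0, 0.0, 0.0, 0.0])
frames = np.array([
    [0.9, 0.1, 0.0, 0.0],   # visually similar frame
    [0.0, 1.0, 0.0, 0.0],   # unrelated frame
])
scores = cosine_scores(query, frames)
print(int(scores.argmax()))  # → 0 (the similar frame wins)
```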
AI search requires the Power tier. All dependencies (OpenCLIP, PyTorch, NumPy) are installed automatically on first launch — no manual setup needed.
Once the dependencies are installed, StreamStash automatically begins indexing your library in the background:
When you search, your text query is converted into a CLIP embedding and compared against all stored frame embeddings using cosine similarity. The best-matching frame per media item is returned, sorted by confidence score.
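The "best frame per media item" step amounts to a group-by-then-argmax over scored frames. A minimal sketch, assuming matches arrive as `(media_id, frame_id, score)` tuples (hypothetical names, not StreamStash internals):

```python
def best_frame_per_item(matches):
    """Keep only the highest-scoring frame for each media item,
    then sort the results by score, descending."""
    best = {}
    for media_id, frame_id, score in matches:
        if media_id not in best or score > best[media_id][1]:
            best[media_id] = (frame_id, score)
    return sorted(
        ((m, f, s) for m, (f, s) in best.items()),
        key=lambda row: row[2],
        reverse=True,
    )

results = best_frame_per_item([
    ("vid1", "f1", 0.21),
    ("vid1", "f2", 0.34),  # vid1's best frame
    ("vid2", "f9", 0.28),
])
# → [("vid1", "f2", 0.34), ("vid2", "f9", 0.28)]
```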
If you want to reindex all existing content (useful after the initial setup), click the Reindex button on the Search page, or call the API:
POST /api/search/reindex
The search index is stored in a separate SQLite database at:
<RECORDINGS_DIR>/clip_index.db
This does not affect the main app database. You can safely delete this file to reset the index — it will be rebuilt automatically.
AI search indexes content from all platforms:
The automatic dependency installation may have failed. Check Settings → System Status to retry, or check the logs for the exact error.
Indexing speed depends on your hardware. A GPU-enabled PyTorch installation is significantly faster. The model processes in batches and unloads between batches to avoid memory issues.
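The batching behaviour can be pictured as a simple chunked loop. This is a sketch under assumptions: the batch size of 32 and the `model.encode` call in the comment are illustrative, not StreamStash's actual internals.

```python
def batched(items, batch_size=32):
    """Yield successive fixed-size batches; the final batch may be smaller."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Sketch of the indexing loop: encode one batch at a time and release
# intermediates so memory use stays bounded regardless of library size.
frames = [f"frame_{n}.jpg" for n in range(100)]
batch_count = 0
for batch in batched(frames, batch_size=32):
    # embeddings = model.encode(batch); store embeddings; free batch tensors
    batch_count += 1
print(batch_count)  # → 4 (three full batches of 32, one remainder of 4)
```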
CLIP works best with descriptive phrases rather than single words. Try "person standing on a beach" instead of just "beach". The model understands visual concepts, not metadata.
| Endpoint | Method | Description |
|---|---|---|
| `/api/clip-search?q=<query>` | GET | Search with optional `limit` and `platform` params |
| `/api/search/stats` | GET | Index statistics (total embeddings, items by platform) |
| `/api/search/reindex` | POST | Trigger a full background reindex |
| `/api/search/frame/<id>` | GET | Serve a matched keyframe image |
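As a usage sketch, a search request URL can be assembled with the standard library. The base URL here is an assumption (wherever your StreamStash instance is listening), not a fixed value:

```python
from urllib.parse import urlencode

def clip_search_url(base, query, limit=None, platform=None):
    """Build a clip-search request URL; `base` is your StreamStash host (assumed)."""
    params = {"q": query}
    if limit is not None:
        params["limit"] = limit
    if platform is not None:
        params["platform"] = platform
    return f"{base}/api/clip-search?{urlencode(params)}"

url = clip_search_url("http://localhost:8080", "person standing on a beach", limit=10)
print(url)
# → http://localhost:8080/api/clip-search?q=person+standing+on+a+beach&limit=10
```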