Vector Search API

Send text.
Get answers.

Vector search with built-in embeddings. No ML pipeline, no vector wrangling. Hybrid BM25 + vector search across 100+ languages.

Query
# Search in Spanish, find English results
curl -X POST /v1/vectors/query \
  -H "Authorization: Bearer $FV_KEY" \
  -H "Content-Type: application/json" \
  -d '{"collection": "products",
    "text": "zapatos cómodos para correr"}'

# Response
{"results": [
  {"id": "1", "text": "Trail running shoes",
   "score": 0.91},
  {"id": "2", "text": "Waterproof hiking boots",
   "score": 0.84}
], "took_ms": 42}
3 API calls to search · 100+ languages · 10K free vectors · $0 to self-host

Three calls. That's it.

Create a collection, upsert text, query. Embeddings happen server-side. No OpenAI key. No ML pipeline.

Request
# Store English text
curl -X POST .../v1/vectors/upsert \
  -H "Authorization: Bearer $FV_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "products",
    "vectors": [
      {"id": "1", "text": "Trail running shoes"},
      {"id": "2", "text": "Waterproof hiking boots"},
      {"id": "3", "text": "Casual leather sandals"}
    ]
  }'

# Search in Spanish
curl -X POST .../v1/vectors/query \
  -H "Authorization: Bearer $FV_KEY" \
  -H "Content-Type: application/json" \
  -d '{"collection": "products",
    "text": "zapatos cómodos para correr"}'
Response
# Upsert response: embeddings generated server-side,
# no OpenAI key needed
{"upserted": 3}

# Query response
{"results": [
  {"id": "1",
   "text": "Trail running shoes",
   "score": 0.91},
  {"id": "2",
   "text": "Waterproof hiking boots",
   "score": 0.84}
], "took_ms": 42}
Request
import os
import requests

BASE = "https://fluxvector.dev/v1"
KEY = os.environ["FV_KEY"]
H = {"Authorization": f"Bearer {KEY}"}

# Store English text
requests.post(f"{BASE}/vectors/upsert", json={
  "collection": "products",
  "vectors": [
    {"id": "1", "text": "Trail running shoes"},
    {"id": "2", "text": "Waterproof hiking boots"},
    {"id": "3", "text": "Casual leather sandals"},
  ]
}, headers=H)

# Search in Spanish
r = requests.post(f"{BASE}/vectors/query", json={
  "collection": "products",
  "text": "zapatos cómodos para correr",
}, headers=H)
print(r.json())
Response
# Upsert response: embeddings generated server-side,
# no OpenAI key needed
{"upserted": 3}

# Query response
{"results": [
  {"id": "1",
   "text": "Trail running shoes",
   "score": 0.91},
  {"id": "2",
   "text": "Waterproof hiking boots",
   "score": 0.84}
], "took_ms": 42}
Request
const BASE = "https://fluxvector.dev/v1";
const KEY = process.env.FV_KEY;
const h = {
  Authorization: `Bearer ${KEY}`,
  "Content-Type": "application/json",
};

// Store English text
await fetch(`${BASE}/vectors/upsert`, {
  method: "POST", headers: h,
  body: JSON.stringify({
    collection: "products",
    vectors: [
      { id: "1", text: "Trail running shoes" },
      { id: "2", text: "Waterproof hiking boots" },
      { id: "3", text: "Casual leather sandals" },
    ],
  }),
});

// Search in Spanish
const r = await fetch(`${BASE}/vectors/query`, {
  method: "POST", headers: h,
  body: JSON.stringify({
    collection: "products",
    text: "zapatos cómodos para correr",
  }),
});
const data = await r.json();
Response
// Upsert response: embeddings generated server-side,
// no OpenAI key needed
{"upserted": 3}

// Query response
{"results": [
  {"id": "1",
   "text": "Trail running shoes",
   "score": 0.91},
  {"id": "2",
   "text": "Waterproof hiking boots",
   "score": 0.84}
], "took_ms": 42}

Stored in English. Searched in Spanish. Found the right result. No configuration.

What you get

Semantic search without the complexity, the surprise bills, or the vendor lock-in.

Built-in embeddings

Send text, not vectors. We embed with multilingual-e5-large server-side. No OpenAI calls. No ML pipeline.
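The server-side pipeline isn't published beyond the model name, but the ranking step it replaces is easy to picture. A minimal sketch of cosine-similarity ranking, with toy 3-d vectors standing in for the real e5 embeddings (all vector values below are made up for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; the real service produces these server-side
# from the text you upsert, so you never handle vectors yourself.
docs = {
    "1": [0.9, 0.1, 0.2],   # "Trail running shoes"
    "2": [0.7, 0.3, 0.4],   # "Waterproof hiking boots"
}
query = [0.8, 0.2, 0.1]     # "zapatos cómodos para correr"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # ["1", "2"]
```

With built-in embeddings, everything above happens behind the `/vectors/query` endpoint; the client only ever sends and receives text.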

Hybrid search

BM25 full-text + vector similarity + Reciprocal Rank Fusion. Better results than vector-only, out of the box.
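FluxVector's exact fusion parameters aren't documented here, but standard Reciprocal Rank Fusion is simple enough to sketch (the `k = 60` constant is the conventional RRF default, an assumption on our part):

```python
def rrf(bm25_ranking, vector_ranking, k=60):
    # Reciprocal Rank Fusion: each document scores 1 / (k + rank)
    # per list it appears in; sum across lists, sort descending.
    scores = {}
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25   = ["2", "1", "3"]   # keyword (BM25) order
vector = ["1", "3", "4"]   # semantic (vector) order
print(rrf(bm25, vector))   # ["1", "3", "2", "4"]
```

Note how "1" and "3", which appear in both lists, outrank "2", which tops only one: agreement between keyword and semantic signals wins, which is why hybrid beats vector-only out of the box.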

100+ languages

Store in Spanish, query in English. Multilingual model handles cross-lingual retrieval natively.

Flat pricing

One monthly fee. No per-query, per-embedding, or per-dimension charges. Know what you'll pay before you start.

Self-host free

Same Docker image, same API. Your data stays in PostgreSQL, not a proprietary black box. Run it yourself forever.
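What self-hosting might look like: a hypothetical `docker run` sketch. The image name, port, and `DATABASE_URL` variable are illustrative assumptions, not documented values; only the API path comes from the examples above.

```shell
# Hypothetical invocation: image name, port, and env var are assumptions
docker run -p 8080:8080 \
  -e DATABASE_URL=postgres://user:pass@db:5432/fluxvector \
  fluxvector/fluxvector

# Same API, pointed at your own instance
curl -X POST http://localhost:8080/v1/vectors/query \
  -H "Authorization: Bearer $FV_KEY" \
  -H "Content-Type: application/json" \
  -d '{"collection": "docs", "text": "your first query"}'
```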

Zero cold starts

Always warm. No serverless spin-up delays. Sub-second responses from the first query.

How we stack up

No asterisks.

|                      | FluxVector             | Pinecone               | Qdrant                  |
|----------------------|------------------------|------------------------|-------------------------|
| Built-in embeddings  | Included free          | Add-on (extra cost)    | No                      |
| Hybrid BM25 + vector | Yes + RRF              | Basic sparse           | Yes                     |
| Self-hosted          | Free forever           | No                     | Yes                     |
| Cold starts          | None                   | Seconds (serverless)   | None                    |
| 100+ languages       | Native                 | Depends on your model  | Depends on your model   |
| Pricing model        | Flat monthly           | Per-query + storage    | Free cloud + paid tiers |
| Developer console    | Built-in               | Yes                    | Yes                     |
| Free tier            | 10K vectors, no expiry | Limited (single index) | 1GB free cloud          |

No surprises

Start free. Upgrade when you need to. Self-host if you prefer.

Billed monthly or yearly (save 17% on yearly).
Free
$0
forever
  • 10,000 vectors
  • 1 collection
  • 20 req/min
  • Built-in embeddings
  • Hybrid search
  • Developer console
Get Started
Popular
Pro
$29
/month
  • 1,000,000 vectors
  • 10 collections
  • 100 req/min
  • Email support
  • Usage analytics
  • Priority embedding
Start Pro
Scale
$99
/month
  • 10,000,000 vectors
  • Unlimited collections
  • 1,000 req/min
  • Priority support
  • Custom embeddings
  • Dedicated resources
Start Scale

Self-hosted is free forever. Same API, same features, your infra.

curl -X POST https://fluxvector.dev/v1/vectors/query \
  -H "Authorization: Bearer $FV_KEY" \
  -H "Content-Type: application/json" \
  -d '{"collection": "docs", "text": "your first query"}'

Get your API key in 10 seconds. No credit card.

Get API Key