This tutorial adds semantic search to a Next.js app. Your users type natural-language queries like "lightweight laptop for travel" and get results that match by meaning, not just keywords. We'll also add image search so users can upload a photo and find similar items.
Next.js makes this easy because you don't need a separate backend: server actions keep your API key on the server, the search UI lives in a client component, and everything stays in one project.
What We're Building
A search page with two modes:
- Text search - user describes what they're looking for in plain language
- Image search - user uploads a photo to find visually similar items
Both hit the same Vecstore database through a server action.
Prerequisites
- Next.js 14+ (App Router)
- A Vecstore account (free tier works)
- An image database created in the Vecstore dashboard
- Your API key and database ID
Step 1: Environment Variables
Add your credentials to .env.local:
VECSTORE_API_KEY=your_api_key_here
VECSTORE_DB_ID=your_database_id_here
These are read only on the server. Because they aren't prefixed with NEXT_PUBLIC_, Next.js never exposes them to the browser.
Step 2: Server Actions
Create app/actions/search.ts. This is where the Vecstore API calls live. Since these are server actions, the API key stays on the server.
'use server';

const API_KEY = process.env.VECSTORE_API_KEY!;
const DB_ID = process.env.VECSTORE_DB_ID!;
const BASE = 'https://api.vecstore.app/api';

export async function searchByText(query: string) {
  const res = await fetch(`${BASE}/databases/${DB_ID}/search`, {
    method: 'POST',
    headers: {
      'X-API-Key': API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ query, top_k: 12 }),
  });
  return res.json();
}

export async function searchByImage(formData: FormData) {
  const file = formData.get('image') as File;
  const bytes = await file.arrayBuffer();
  const base64 = Buffer.from(bytes).toString('base64');
  const res = await fetch(`${BASE}/databases/${DB_ID}/search`, {
    method: 'POST',
    headers: {
      'X-API-Key': API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ image: base64, top_k: 12 }),
  });
  return res.json();
}

export async function insertImage(imageUrl: string) {
  const res = await fetch(`${BASE}/databases/${DB_ID}/documents`, {
    method: 'POST',
    headers: {
      'X-API-Key': API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ image_url: imageUrl }),
  });
  return res.json();
}
Three functions: text search, image search, and an insert function for populating your database. No Express server, no API routes, no separate backend. Server actions handle all of it.
Step 3: Populate Your Database
Before search works, you need images in your database. You can do this from the Vecstore dashboard, or run a quick script. Create scripts/seed.ts. When running it outside Next.js, load .env.local yourself (for example with dotenv, or Node 20's --env-file flag), since only Next.js reads that file automatically:
import { insertImage } from '../app/actions/search';

const images = [
  'https://example.com/products/jacket-red.jpg',
  'https://example.com/products/jacket-blue.jpg',
  'https://example.com/products/boots-leather.jpg',
  // ... your image URLs
];

for (const url of images) {
  await insertImage(url);
  console.log(`Inserted: ${url}`);
}
Each image gets embedded automatically. No tagging, no preprocessing.
Step 4: Build the Search Component
Create app/components/search.tsx. This is a client component that calls the server actions.
'use client';

import { useState, useCallback } from 'react';
import { searchByText, searchByImage } from '../actions/search';

export default function Search() {
  const [query, setQuery] = useState('');
  const [results, setResults] = useState<any[]>([]);
  const [loading, setLoading] = useState(false);
  const [preview, setPreview] = useState<string | null>(null);
  const [dragging, setDragging] = useState(false);

  const handleTextSearch = async (e: React.FormEvent) => {
    e.preventDefault();
    if (!query.trim()) return;
    setLoading(true);
    setPreview(null);
    try {
      const data = await searchByText(query);
      setResults(data.results || []);
    } finally {
      // Reset loading even if the action throws, so the UI never gets stuck
      setLoading(false);
    }
  };

  const handleImageSearch = async (file: File) => {
    setLoading(true);
    setQuery('');
    setPreview(URL.createObjectURL(file));
    const formData = new FormData();
    formData.append('image', file);
    try {
      const data = await searchByImage(formData);
      setResults(data.results || []);
    } finally {
      setLoading(false);
    }
  };

  const onDrop = useCallback((e: React.DragEvent) => {
    e.preventDefault();
    setDragging(false);
    const file = e.dataTransfer.files[0];
    if (file?.type.startsWith('image/')) {
      handleImageSearch(file);
    }
  }, []);

  return (
    <div
      onDragOver={(e) => { e.preventDefault(); setDragging(true); }}
      onDragLeave={() => setDragging(false)}
      onDrop={onDrop}
      className="max-w-4xl mx-auto p-6"
    >
      <form onSubmit={handleTextSearch} className="flex gap-2">
        <input
          type="text"
          value={query}
          onChange={(e) => setQuery(e.target.value)}
          placeholder="Describe what you're looking for..."
          className="flex-1 px-4 py-2 border rounded-lg"
        />
        <button type="submit" className="px-6 py-2 bg-black text-white rounded-lg">
          Search
        </button>
        <label className="px-4 py-2 border rounded-lg cursor-pointer hover:bg-gray-50">
          Upload
          <input
            type="file"
            accept="image/*"
            onChange={(e) => {
              const file = e.target.files?.[0];
              if (file) handleImageSearch(file);
            }}
            hidden
          />
        </label>
      </form>
      {dragging && (
        <div className="fixed inset-0 bg-black/50 flex items-center justify-center text-white text-2xl z-50 pointer-events-none">
          Drop an image to search
        </div>
      )}
      {preview && (
        <div className="mt-4">
          <p className="text-sm text-gray-500">Searching similar to:</p>
          <img src={preview} alt="Query" className="w-24 h-24 object-cover rounded mt-1" />
        </div>
      )}
      {loading ? (
        <p className="mt-8 text-center text-gray-500">Searching...</p>
      ) : (
        <div className="grid grid-cols-2 sm:grid-cols-3 md:grid-cols-4 gap-4 mt-8">
          {results.map((result) => (
            <div key={result.vector_id} className="group">
              <img
                src={result.metadata?.image_url}
                alt=""
                className="w-full aspect-square object-cover rounded-lg"
              />
              <p className="text-xs text-gray-400 mt-1">
                {(result.score * 100).toFixed(1)}% match
              </p>
            </div>
          ))}
        </div>
      )}
    </div>
  );
}
Step 5: Add the Search Page
Create app/search/page.tsx:
import Search from '../components/search';

export default function SearchPage() {
  return (
    <main className="py-12">
      <h1 className="text-3xl font-bold text-center mb-8">Search</h1>
      <Search />
    </main>
  );
}
That's it. Run npm run dev, go to /search, and start searching.
Why Server Actions Instead of API Routes
You could also do this with Next.js API routes (app/api/search/route.ts). Both work. Server actions are simpler for this use case because:
- No need to define request/response types for an HTTP endpoint
- You import the function directly and call it from your component
- The framework handles the serialization
- Less boilerplate
If you need to expose your search as a public API endpoint (for a mobile app, for example), use API routes instead. Here's the equivalent:
// app/api/search/route.ts
import { NextRequest, NextResponse } from 'next/server';

const API_KEY = process.env.VECSTORE_API_KEY!;
const DB_ID = process.env.VECSTORE_DB_ID!;

export async function POST(req: NextRequest) {
  const { query, top_k = 12 } = await req.json();
  const res = await fetch(
    `https://api.vecstore.app/api/databases/${DB_ID}/search`,
    {
      method: 'POST',
      headers: {
        'X-API-Key': API_KEY,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ query, top_k }),
    }
  );
  const data = await res.json();
  return NextResponse.json(data);
}
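A client could then call that route with a plain fetch. This is a hypothetical sketch assuming the route above is mounted at /api/search in the same app; the `search` helper name is ours:

```typescript
// Hypothetical client-side helper for the API route above.
// Assumes the route is deployed at /api/search in the same app.
async function search(query: string, topK = 12) {
  const res = await fetch('/api/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, top_k: topK }),
  });
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}
```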
How Semantic Search Works
Traditional search matches keywords. If a user types "cheap running shoes" and your product is called "budget athletic sneakers," keyword search returns nothing. No words match.
Semantic search matches meaning. Both phrases mean the same thing, so they match. This works because Vecstore converts both the query and your stored content into vector embeddings that capture meaning, then finds the closest matches.
This also applies to images. Upload a photo of a red backpack, and the search returns other red backpacks from your database even if nobody tagged them as such. The model understands visual content directly.
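"Closest match" here usually means cosine similarity between embedding vectors. A toy sketch with made-up 3-dimensional vectors (real embeddings have hundreds of dimensions, and Vecstore computes this for you):

```typescript
// Cosine similarity: dot product of the vectors divided by the
// product of their lengths. 1 = same direction, 0 = unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical embeddings, invented for illustration:
const query = [0.9, 0.1, 0.2];    // "cheap running shoes"
const sneakers = [0.85, 0.15, 0.25]; // "budget athletic sneakers" — close in meaning
const blender = [0.1, 0.9, 0.3];     // unrelated product — far away
```

Running this, the query scores far higher against the sneakers vector than against the blender vector, which is exactly the ranking the search API returns.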
Things to Keep in Mind
Server action size limits. Next.js server actions have a default request body size limit of 1MB. For image uploads, this is usually fine (most web images are under 1MB), but if you're accepting high-res photos, you may need to resize client-side or increase the limit in next.config.js.
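If you do need larger uploads, the limit can be raised in next.config.js. This is a sketch for Next.js 14, where the option lives under `experimental` (the 4mb value is just an example):

```javascript
// next.config.js — raise the server action request body limit (Next.js 14)
module.exports = {
  experimental: {
    serverActions: {
      bodySizeLimit: '4mb',
    },
  },
};
```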
Caching. By default, Next.js caches fetch results in server components. In server actions, fetch is not cached by default, which is what you want for search. If you move to API routes, add cache: 'no-store' to your fetch calls.
Debouncing. If you want search-as-you-type, debounce the server action calls. Firing a server action on every keystroke will work but it's wasteful.
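A minimal debounce helper is enough for this; nothing here is Vecstore-specific, and the 300 ms delay is an arbitrary choice:

```typescript
// Returns a wrapped function that only fires after `ms` of silence.
// Each new call cancels the previous pending one.
function debounce<T extends (...args: any[]) => void>(fn: T, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Usage sketch: wrap the server action call in the search component.
// const debouncedSearch = debounce((q: string) => searchByText(q), 300);
// <input onChange={(e) => debouncedSearch(e.target.value)} />
```

With this in place, typing "lightweight laptop" fires one action for the final string instead of one per keystroke.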
TypeScript types. The code above uses any for results to keep things simple. For production, define proper types for the Vecstore response:
interface SearchResult {
  vector_id: string;
  score: number;
  metadata?: {
    image_url?: string;
    [key: string]: any;
  };
}

interface SearchResponse {
  results: SearchResult[];
}
What Else You Can Do
The same Vecstore database supports more search types without extra setup:
- Face search - find every photo of a specific person across your database
- OCR search - find images by the text inside them (signs, screenshots, documents)
- NSFW detection - check uploaded images for content safety before displaying them
All through the same API key and database.
Wrapping Up
The full setup: two server actions, one client component, one page, and a Vecstore database with your content in it. No separate backend, no embedding models, no GPU servers.
For production you'd add error handling, pagination, and proper TypeScript types. But the search itself works as shown.
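As a sketch of that error handling, the server actions could route every response through a small helper instead of calling res.json() directly. The helper name is ours, but the fetch/Response shapes match the code in Step 2:

```typescript
// Hypothetical helper: throws on non-2xx responses so callers can show
// an error state instead of rendering whatever body came back.
async function parseOrThrow<T>(res: Response): Promise<T> {
  if (!res.ok) {
    const body = await res.text();
    throw new Error(`Vecstore request failed (${res.status}): ${body}`);
  }
  return res.json() as Promise<T>;
}
```

In searchByText, the last line would become `return parseOrThrow<SearchResponse>(res);`, using the SearchResponse type defined above.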
Get started with Vecstore - free tier includes enough credits to build and test.