7 changes: 7 additions & 0 deletions README.md
@@ -187,6 +187,13 @@ Or with Docker:
docker compose up --build -d
```

**Memory tuning:** The Node.js server uses a default heap of 128MB. For larger memory stores or heavy usage, increase the heap by setting `NODE_OPTIONS` in the Dockerfile's production stage or via an environment variable:

```bash
# In docker-compose.yml environment section:
- NODE_OPTIONS=--max-old-space-size=256
```
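
Alternatively, the limit can be baked into the image itself. A minimal sketch, assuming a multi-stage Dockerfile whose final stage is named `production` (the stage name is an assumption):

```dockerfile
# Hypothetical final stage of the backend Dockerfile.
FROM node:20-alpine AS production
# Raise the V8 old-space heap limit to 256MB for this container.
ENV NODE_OPTIONS="--max-old-space-size=256"
```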

The backend exposes:

- `/api/memory/*` – memory operations
4 changes: 4 additions & 0 deletions dashboard/.env.local.example
@@ -0,0 +1,4 @@
# OpenMemory Dashboard Configuration
NEXT_PUBLIC_API_URL=http://localhost:8080
# Set this if your backend has OM_API_KEY configured for authentication
NEXT_PUBLIC_API_KEY=your

**Copilot AI** commented on Jan 13, 2026:


The API key value 'your' is incomplete and ambiguous. It should either be 'your-api-key-here' or include a comment indicating this is a placeholder that needs to be replaced with an actual API key.

Suggested change:

```diff
-NEXT_PUBLIC_API_KEY=your
+NEXT_PUBLIC_API_KEY=your-api-key-here
```

159 changes: 159 additions & 0 deletions dashboard/CHAT_SETUP.md
@@ -0,0 +1,159 @@
# Chat Interface Setup

The chat interface is now connected to the OpenMemory backend and can query memories in real time.

## Features

✅ **Memory Querying**: Searches your memory database for relevant content
✅ **Salience-based Results**: Shows top memories ranked by relevance
✅ **Memory Reinforcement**: Click the + button to boost memory importance
✅ **Real-time Updates**: Live connection to backend API
✅ **Action Buttons**: Quick actions after assistant responses

## Setup Instructions

### 1. Start the Backend

First, make sure the OpenMemory backend is running:

```bash
cd backend
npm install
npm run dev
```

The backend will start on `http://localhost:8080`.

### 2. Configure Environment (Optional)

The dashboard is pre-configured to connect to `localhost:8080`. If your backend runs on a different port, create a `.env.local` file:

```bash
# dashboard/.env.local
NEXT_PUBLIC_API_URL=http://localhost:8080
```

### 3. Start the Dashboard

```bash
cd dashboard
npm install
npm run dev
```

The dashboard will start on `http://localhost:3000`.

### 4. Add Some Memories

Before chatting, you need to add some memories to your database. You can do this via:

**Option A: API (Recommended for Testing)**

```bash
curl -X POST http://localhost:8080/memory/add \
-H "Content-Type: application/json" \
-d '{
"content": "JavaScript async/await makes asynchronous code more readable",
"tags": ["javascript", "async"],
"metadata": {"source": "learning"}
}'
```

**Option B: Use the SDK**

```javascript
// examples/js-sdk/basic-usage.js
import OpenMemory from '../../sdk-js/src/index.js';

const om = new OpenMemory('http://localhost:8080');

await om.addMemory({
content: 'React hooks revolutionized state management',
tags: ['react', 'hooks'],
});
```

**Option C: Ingest a Document**

```bash
curl -X POST http://localhost:8080/memory/ingest \
-H "Content-Type: application/json" \
-d '{
"content_type": "text",
"data": "Your document content here...",
"metadata": {"source": "document"}
}'
```

## How It Works

### Memory Query Flow

1. **User Input**: You ask a question in the chat
2. **Backend Query**: POST to `/memory/query` with your question
3. **Vector Search**: Backend searches HSG memory graph
4. **Results**: Top 5 memories returned with salience scores
5. **Response**: Chat generates answer based on retrieved memories
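
A minimal sketch of steps 2-5 from the dashboard side, assuming the backend accepts a JSON body with `query` and `top_k` fields and returns a `results` array (the field names are assumptions, not a confirmed API shape):

```typescript
// Hypothetical client-side query; field names are assumptions.
const res = await fetch('http://localhost:8080/memory/query', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'What did I learn about async/await?', top_k: 5 }),
});
if (!res.ok) throw new Error(`Query failed: ${res.status}`);
const { results } = await res.json();
// Each result is expected to carry its content plus a salience score.
```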

### Memory Reinforcement

Clicking the **+** button on a memory card:

- Sends POST to `/memory/reinforce`
- Increases memory salience by 0.1
- Makes it more likely to appear in future queries
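
A hedged sketch of that request, assuming the endpoint takes the memory `id` and a `boost` amount (the payload shape is an assumption):

```typescript
// Hypothetical reinforcement call; the payload shape is an assumption.
const memoryId = 'mem_123'; // id of a memory returned by a previous query
await fetch('http://localhost:8080/memory/reinforce', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ id: memoryId, boost: 0.1 }),
});
```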

## Current Features

✅ Real-time memory querying
✅ Salience-based ranking
✅ Memory reinforcement (boost)
✅ Sector classification display
✅ Error handling with backend status

## Coming Soon

- 🚧 LLM Integration (OpenAI, Ollama, Gemini)
- 🚧 Conversation memory persistence
- 🚧 Export chat to memories
- 🚧 WebSocket streaming responses
- 🚧 Quiz generation from memories
- 🚧 Podcast script generation

## Troubleshooting

### "Failed to query memories"

- Ensure backend is running: `npm run dev` in `backend/`
- Check backend is on port 8080: `curl http://localhost:8080/health`
- Verify CORS is enabled (already configured)

### "No memories found"

- Add memories using the API or SDK (see setup above)
- Try broader search terms
- Check memory content exists: `GET http://localhost:8080/memory/all`

### Connection refused

- Backend not started
- Wrong port in `.env.local`
- Firewall blocking connection

## API Endpoints Used

```typescript
POST /memory/query // Search memories
POST /memory/add // Add new memory
POST /memory/reinforce // Boost memory salience
GET /memory/all // List all memories
GET /memory/:id // Get specific memory
```
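
For illustration, these endpoints could be wrapped in a small typed helper. This is a hypothetical sketch, not a shipped client; the response shapes are assumptions:

```typescript
// Hypothetical wrapper around the endpoints above.
const BASE = process.env.NEXT_PUBLIC_API_URL ?? 'http://localhost:8080';

async function post<T>(path: string, body: unknown): Promise<T> {
  const res = await fetch(`${BASE}${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${path} failed: ${res.status}`);
  return res.json() as Promise<T>;
}

export const queryMemories = (query: string) => post('/memory/query', { query });
export const reinforceMemory = (id: string) => post('/memory/reinforce', { id });
```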

## Next Steps

1. Add LLM integration for intelligent responses
2. Implement conversation memory storage
3. Add streaming response support
4. Create memory export feature
5. Build quiz/podcast generators
36 changes: 36 additions & 0 deletions dashboard/README.md
@@ -0,0 +1,36 @@
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).

## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
109 changes: 109 additions & 0 deletions dashboard/app/api/settings/route.ts
@@ -0,0 +1,109 @@
import { NextResponse } from 'next/server'
import fs from 'fs'
import path from 'path'

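// process.cwd() is the dashboard directory under `next dev`/`next start`,
// so this is assumed to resolve to the repo-root .env shared with the backend.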
const ENV_PATH = path.resolve(process.cwd(), '../.env')

function parseEnvFile(content: string): Record<string, string> {
const result: Record<string, string> = {}
const lines = content.split('\n')

for (const line of lines) {
const trimmed = line.trim()
if (!trimmed || trimmed.startsWith('#')) continue

const equalIndex = trimmed.indexOf('=')
if (equalIndex === -1) continue

const key = trimmed.substring(0, equalIndex).trim()
const value = trimmed.substring(equalIndex + 1).trim()
result[key] = value
}

return result
}

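// Note: only KEY=VALUE pairs are written back; comments and blank
// lines from the original file are not preserved.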
function serializeEnvFile(updates: Record<string, string>): string {
const lines: string[] = []

for (const [key, value] of Object.entries(updates)) {
lines.push(`${key}=${value}`)
}

return lines.join('\n')
}

export async function GET() {
try {
if (!fs.existsSync(ENV_PATH)) {
return NextResponse.json({
exists: false,
settings: {}
})
}

const content = fs.readFileSync(ENV_PATH, 'utf-8')
const settings = parseEnvFile(content)

const masked = { ...settings }
if (masked.OPENAI_API_KEY) masked.OPENAI_API_KEY = '***'
if (masked.GEMINI_API_KEY) masked.GEMINI_API_KEY = '***'
if (masked.AWS_SECRET_ACCESS_KEY) masked.AWS_SECRET_ACCESS_KEY = '***'
if (masked.OM_API_KEY) masked.OM_API_KEY = '***'

return NextResponse.json({
exists: true,
settings: masked
})
} catch (e: any) {
console.error('[Settings API] read error:', e)
return NextResponse.json(
{ error: 'internal', message: e.message },
{ status: 500 }
)
}
}

export async function POST(request: Request) {
try {
const updates = await request.json()

if (!updates || typeof updates !== 'object') {
return NextResponse.json(
{ error: 'invalid_body' },
{ status: 400 }
)
}

let content = ''
let envExists = false

if (fs.existsSync(ENV_PATH)) {
content = fs.readFileSync(ENV_PATH, 'utf-8')
envExists = true
} else {
const examplePath = path.resolve(process.cwd(), '../.env.example')
if (fs.existsSync(examplePath)) {
content = fs.readFileSync(examplePath, 'utf-8')
}
}

const existing = content ? parseEnvFile(content) : {}

// Drop masked placeholders echoed back from GET so real secrets
// in the existing file are not overwritten with '***'.
for (const key of Object.keys(updates)) {
  if (updates[key] === '***') delete updates[key]
}

const merged = { ...existing, ...updates }
const newContent = serializeEnvFile(merged)

fs.writeFileSync(ENV_PATH, newContent, 'utf-8')

return NextResponse.json({
ok: true,
created: !envExists,
message: 'Settings saved. Restart the backend to apply changes.'
})
} catch (e: any) {
console.error('[Settings API] write error:', e)
return NextResponse.json(
{ error: 'internal', message: e.message },
{ status: 500 }
)
}
}