Query
Federated semantic search across all connected data sources.
Query is the federated retrieval layer. Search across all connected sources with a single API call, powered by intent ranking and entropy-aware surfacing.
Overview
```mermaid
graph LR
    Q[Query] --> ORCH[Orchestrator]
    ORCH --> GHOST[Ephemeral Context]
    GHOST --> VEC[Vec2Vec]
    subgraph "Federated Nodes"
        SLACK[(Slack)]
        NOTION[(Notion)]
        GITHUB[(GitHub)]
    end
    VEC --> SLACK
    VEC --> NOTION
    VEC --> GITHUB
    SLACK --> RANK[Ranker]
    NOTION --> RANK
    GITHUB --> RANK
    RANK --> RESULTS[Results]
```
API Endpoint
Federated Query
```http
POST /v1/federate
Content-Type: application/json

{
  "query": "What was the decision on authentication architecture?",
  "user_id": "user-uuid",
  "node_ids": ["slack", "notion", "github"],
  "max_results": 10,
  "intent_signal": {
    "urgency": 0.8,
    "emotional_weight": 0.5,
    "action_likelihood": 0.6
  }
}
```
Response:

```json
{
  "query_hash": "sha256:a1b2c3...",
  "items": [
    {
      "id": "doc-123",
      "content": "Team decided on singleton pattern for auth service...",
      "source_node": "slack",
      "relevance_score": 0.92,
      "semantic_similarity": 0.88,
      "federation_confidence": 0.95,
      "temporal_status": "recent",
      "is_critical": true,
      "suppressed_by_tombstone": false,
      "metadata": {
        "channel": "#engineering",
        "author": "alice@company.com",
        "timestamp": "2026-01-15T10:30:00Z"
      }
    }
  ],
  "nodes_consulted": ["slack", "notion"],
  "latency_ms": 234,
  "user_entropy": 0.45,
  "damping_active": false
}
```
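The request body above can be assembled and sent with a few lines of Python. This is a minimal sketch: the `build_federate_request` helper and the base URL are illustrative, not part of the SDK.

```python
import json

API_URL = "https://api.example.com/v1/federate"  # placeholder; use your deployment's base URL


def build_federate_request(query, user_id, node_ids=None, max_results=10, intent_signal=None):
    """Assemble the JSON body for POST /v1/federate.

    Optional fields are omitted entirely rather than sent as null.
    """
    body = {"query": query, "user_id": user_id, "max_results": max_results}
    if node_ids:
        body["node_ids"] = node_ids
    if intent_signal:
        body["intent_signal"] = intent_signal
    return json.dumps(body)


payload = build_federate_request(
    "What was the decision on authentication architecture?",
    "user-uuid",
    node_ids=["slack", "notion", "github"],
    intent_signal={"urgency": 0.8, "emotional_weight": 0.5, "action_likelihood": 0.6},
)
```

The resulting `payload` string can be posted with any HTTP client, with `Content-Type: application/json` and your API key in the auth header.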
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| query | string | ✓ | Natural language query |
| user_id | string | ✓ | User identifier for context |
| node_ids | string[] | ✗ | Filter to specific sources |
| max_results | int | ✗ | Maximum results (default: 10) |
| intent_signal | object | ✗ | Override intent weighting |
Intent Signal Parameters
Fine-tune ranking with 5D intent signal vectors:
```json
{
  "intent_signal": {
    "urgency": 0.8,             // 0-1: Immediate need vs future consideration
    "emotional_weight": 0.5,    // 0-1: Rational vs emotional importance
    "action_likelihood": 0.6,   // 0-1: Browsing vs intent to act
    "social_dimension": 0.3,    // 0-1: Solo vs collaborative context
    "context_sensitivity": 0.7  // 0-1: How context-dependent
  }
}
```
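One way to picture how an intent vector can influence ranking is to blend it with the base semantic similarity. The blending rule below is purely illustrative (the service's actual ranking is internal); `intent_boost` and `alpha` are hypothetical names.

```python
def intent_boost(semantic_similarity, intent_signal, alpha=0.2):
    """Blend base similarity with the mean of the 5D intent vector.

    alpha controls how much the intent signal shifts the final score.
    Missing dimensions default to 0.0.
    """
    dims = [
        "urgency",
        "emotional_weight",
        "action_likelihood",
        "social_dimension",
        "context_sensitivity",
    ]
    mean_intent = sum(intent_signal.get(d, 0.0) for d in dims) / len(dims)
    return (1 - alpha) * semantic_similarity + alpha * mean_intent


signal = {
    "urgency": 0.8,
    "emotional_weight": 0.5,
    "action_likelihood": 0.6,
    "social_dimension": 0.3,
    "context_sensitivity": 0.7,
}
score = intent_boost(0.88, signal)
```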
Response Fields
| Field | Type | Description |
|---|---|---|
| query_hash | string | SHA-256 hash for audit tracking |
| items | array | Retrieved documents |
| nodes_consulted | string[] | Sources that were queried |
| latency_ms | int | Total query latency |
| user_entropy | float | Current user cognitive load |
| damping_active | bool | Whether homeostasis damping is active |
Item Fields
| Field | Type | Description |
|---|---|---|
| id | string | Document identifier |
| content | string | Document content |
| source_node | string | Origin source (slack, notion, etc.) |
| relevance_score | float | Combined relevance (0-1) |
| semantic_similarity | float | Embedding similarity (0-1) |
| federation_confidence | float | Translation confidence (0-1) |
| temporal_status | string | "recent", "stale", or "historical" |
| is_critical | bool | High-priority signal detected |
| suppressed_by_tombstone | bool | Affected by a forget request |
SDK Usage
TypeScript
```typescript
import { MetalogueClient } from '@metalogue/sdk';

const client = new MetalogueClient({ apiKey: API_KEY });

const { results, reasoning, latencyMs } = await client.query(
  'What was the auth decision from last week?',
  {
    limit: 10,
    includeReasoning: true,
    sources: ['slack', 'notion'],
    filters: {
      dateRange: {
        after: '2026-01-13',
        before: '2026-01-20',
      },
      authors: ['alice@company.com'],
    },
  }
);

for (const result of results) {
  console.log(`[${result.source}] Score: ${result.score}`);
  console.log(result.content);
}
```
Python
```python
from metalogue import MetalogueClient

client = MetalogueClient(api_key=API_KEY)

response = await client.query(
    "What was the auth decision from last week?",
    limit=10,
    include_reasoning=True,
    sources=["slack", "notion"],
    date_after="2026-01-13",
    date_before="2026-01-20",
)

for result in response.results:
    print(f"[{result.source}] Score: {result.score}")
    print(result.content)
```
Filtering
By Source

```typescript
await client.query('...', {
  sources: ['slack', 'notion'],
});
```

By Date Range

```typescript
await client.query('...', {
  filters: {
    dateRange: {
      after: '2026-01-01',
      before: '2026-01-31',
    },
  },
});
```

By Author

```typescript
await client.query('...', {
  filters: {
    authors: ['alice@company.com', 'bob@company.com'],
  },
});
```

By Document Type

```typescript
await client.query('...', {
  filters: {
    types: ['message', 'document', 'ticket'],
  },
});
```
Advanced Features
Entropy-Aware Surfacing
Results are automatically filtered based on the user's cognitive load:
| Entropy Level | Behavior |
|---|---|
| Very Low | Surface everything |
| Low | Be permissive |
| Normal | Be selective |
| High | Filter aggressively |
| Critical | Critical items only (damping active) |
The user_entropy field in the response shows the current entropy score.
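A client can translate the raw `user_entropy` score into one of these levels. The band edges below are illustrative assumptions (only the damping threshold of 2.0 is documented); `entropy_level` is a hypothetical helper, not an SDK function.

```python
def entropy_level(user_entropy, damping_threshold=2.0):
    """Map a raw entropy score to a surfacing level.

    Only the damping threshold (default 2.0) comes from the docs;
    the lower band edges are placeholder values.
    """
    if user_entropy >= damping_threshold:
        return "critical"
    if user_entropy >= 1.5:
        return "high"
    if user_entropy >= 0.75:
        return "normal"
    if user_entropy >= 0.25:
        return "low"
    return "very_low"
```

For example, the sample response above (`"user_entropy": 0.45`) would land in a permissive band, well below the damping threshold.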
Homeostasis Damping
When a user's entropy exceeds the threshold (default: 2.0), homeostasis damping activates:
- Reduces the number of surfaced results
- Prioritizes critical items only
- Coordinates damping across federated nodes
The damping_active field indicates when damping is active.
Temporal Resolution
Results include temporal context:
| Status | Description |
|---|---|
| recent | Within the last 7 days |
| stale | 7-30 days old |
| historical | Over 30 days old |
Temporal decay applies an exponential recency weight:

weight = e^(−λ · age_days)

where λ = 0.05 by default.
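Both the decay weight and the status buckets from the table above are easy to reproduce client-side. A minimal sketch (the exponential-decay form is assumed from the stated λ; `recency_weight` and `temporal_status` are hypothetical helpers):

```python
import math


def recency_weight(age_days, lam=0.05):
    """Exponential temporal decay: weight = e^(-lam * age_days)."""
    return math.exp(-lam * age_days)


def temporal_status(age_days):
    """Classify document age into the buckets from the table above."""
    if age_days <= 7:
        return "recent"
    if age_days <= 30:
        return "stale"
    return "historical"
```

With λ = 0.05, a week-old document keeps about 70% of its recency weight, while a 30-day-old one keeps roughly 22%.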
Tombstone Filtering
Documents affected by Forget Request (semantic unlearning) are automatically filtered or penalized:
```json
{
  "suppressed_by_tombstone": true
}
```
The item is still returned (for transparency) but marked as suppressed.
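Since suppressed items are returned rather than dropped, clients that want to hide them must filter explicitly. A small sketch (`visible_items` is a hypothetical client-side helper):

```python
def visible_items(items, include_suppressed=False):
    """Drop items flagged suppressed_by_tombstone unless explicitly requested.

    Items without the flag are treated as not suppressed.
    """
    if include_suppressed:
        return list(items)
    return [item for item in items if not item.get("suppressed_by_tombstone")]


items = [
    {"id": "doc-123", "suppressed_by_tombstone": False},
    {"id": "doc-456", "suppressed_by_tombstone": True},
]
shown = visible_items(items)
```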
Force Federation
Force results from all nodes (bypass caching):

```http
POST /v1/federate/force
```

Use sparingly: it adds latency and bypasses optimization.
Error Codes
| Code | Description |
|---|---|
| 400 | Invalid query format |
| 401 | Invalid API key |
| 403 | Permission denied |
| 429 | Rate limit exceeded |
| 500 | Internal server error |
| 504 | Federation timeout |
Best Practices
- Use specific queries - "auth decision last week" beats "auth"
- Leverage intent signals - Set urgency for time-sensitive queries
- Filter by source - Reduce latency when you know the source
- Monitor entropy - High entropy means information overload
- Cache results - Use query_hash for client-side caching
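Because every response carries a stable `query_hash`, a client-side cache can key on it directly. A minimal in-memory sketch (`QueryCache` is a hypothetical helper; real code would add TTL-based expiry so stale results are eventually refreshed):

```python
class QueryCache:
    """Tiny in-memory cache keyed by the response's query_hash field."""

    def __init__(self):
        self._store = {}

    def put(self, response):
        """Store a full /v1/federate response under its query_hash."""
        self._store[response["query_hash"]] = response

    def get(self, query_hash):
        """Return a cached response, or None on a miss."""
        return self._store.get(query_hash)


cache = QueryCache()
cache.put({"query_hash": "sha256:a1b2c3", "items": [], "latency_ms": 234})
hit = cache.get("sha256:a1b2c3")
```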
