Query

Federated semantic search across all connected data sources.

Query is the federated retrieval layer. Search across all connected sources with a single API call, powered by intent ranking and entropy-aware surfacing.

Overview

graph LR
    Q[Query] --> ORCH[Orchestrator]
    ORCH --> GHOST[Ephemeral Context]
    GHOST --> VEC[Vec2Vec]
    
    subgraph "Federated Nodes"
        SLACK[(Slack)]
        NOTION[(Notion)]
        GITHUB[(GitHub)]
    end
    
    VEC --> SLACK
    VEC --> NOTION
    VEC --> GITHUB
    
    SLACK --> RANK[Ranker]
    NOTION --> RANK
    GITHUB --> RANK
    
    RANK --> RESULTS[Results]

API Endpoint

Federated Query

POST /v1/federate
Content-Type: application/json

{
  "query": "What was the decision on authentication architecture?",
  "user_id": "user-uuid",
  "node_ids": ["slack", "notion", "github"],
  "max_results": 10,
  "intent_signal": {
    "urgency": 0.8,
    "emotional_weight": 0.5,
    "action_likelihood": 0.6
  }
}

Response:

{
  "query_hash": "sha256:a1b2c3...",
  "items": [
    {
      "id": "doc-123",
      "content": "Team decided on singleton pattern for auth service...",
      "source_node": "slack",
      "relevance_score": 0.92,
      "semantic_similarity": 0.88,
      "federation_confidence": 0.95,
      "temporal_status": "recent",
      "is_critical": true,
      "suppressed_by_tombstone": false,
      "metadata": {
        "channel": "#engineering",
        "author": "alice@company.com",
        "timestamp": "2026-01-15T10:30:00Z"
      }
    }
  ],
  "nodes_consulted": ["slack", "notion"],
  "latency_ms": 234,
  "user_entropy": 0.45,
  "damping_active": false
}

Request Parameters

Parameter      Type      Required  Description
─────────      ────      ────────  ───────────
query          string    Yes       Natural language query
user_id        string    Yes       User identifier for context
node_ids       string[]  No        Filter to specific sources
max_results    int       No        Maximum results (default: 10)
intent_signal  object    No        Override intent weighting
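
If you are calling the endpoint directly rather than through an SDK, the request body can be assembled and validated client-side before the POST. A minimal sketch — the helper name and validation rules are illustrative, not part of the API:

```python
def build_federate_payload(query, user_id, node_ids=None, max_results=10,
                           intent_signal=None):
    """Assemble a request body for POST /v1/federate."""
    if not query or not query.strip():
        raise ValueError("query must be a non-empty string")
    payload = {
        "query": query,
        "user_id": user_id,
        "max_results": max_results,
    }
    # Optional parameters are omitted rather than sent as null.
    if node_ids:
        payload["node_ids"] = node_ids
    if intent_signal:
        payload["intent_signal"] = intent_signal
    return payload
```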

Intent Signal Parameters

Fine-tune ranking with 5D intent signal vectors:

{
  "intent_signal": {
    "urgency": 0.8,           // 0-1: Immediate need vs future consideration
    "emotional_weight": 0.5,  // 0-1: Rational vs emotional importance
    "action_likelihood": 0.6, // 0-1: Browsing vs intent to act
    "social_dimension": 0.3,  // 0-1: Solo vs collaborative context
    "context_sensitivity": 0.7 // 0-1: How context-dependent
  }
}
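
Before sending an override, it can help to normalize the vector client-side. A minimal sketch, assuming out-of-range values should be clamped to [0, 1] and unknown keys dropped (the dimension names come from the example above; the clamping policy is an assumption):

```python
INTENT_DIMENSIONS = (
    "urgency",
    "emotional_weight",
    "action_likelihood",
    "social_dimension",
    "context_sensitivity",
)

def clamp_intent_signal(signal):
    """Clamp each known dimension to [0, 1] and drop unknown keys."""
    return {
        dim: min(1.0, max(0.0, float(signal[dim])))
        for dim in INTENT_DIMENSIONS
        if dim in signal
    }
```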

Response Fields

Field            Type      Description
─────            ────      ───────────
query_hash       string    SHA-256 hash for audit tracking
items            array     Retrieved documents
nodes_consulted  string[]  Sources that were queried
latency_ms       int       Total query latency in milliseconds
user_entropy     float     Current user cognitive load
damping_active   bool      Whether homeostasis damping is active

Item Fields

Field                    Type    Description
─────                    ────    ───────────
id                       string  Document identifier
content                  string  Document content
source_node              string  Origin source (slack, notion, etc.)
relevance_score          float   Combined relevance (0-1)
semantic_similarity      float   Embedding similarity (0-1)
federation_confidence    float   Translation confidence (0-1)
temporal_status          string  "recent", "stale", or "historical"
is_critical              bool    High-priority signal detected
suppressed_by_tombstone  bool    Affected by a forget request
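
These fields support client-side post-processing. A sketch that drops tombstoned items and floats critical ones to the top — a local display policy, not documented server behavior:

```python
def rank_items(items, drop_suppressed=True):
    """Sort response items: critical first, then by combined relevance."""
    visible = [
        item for item in items
        if not (drop_suppressed and item.get("suppressed_by_tombstone"))
    ]
    return sorted(
        visible,
        key=lambda i: (i.get("is_critical", False),
                       i.get("relevance_score", 0.0)),
        reverse=True,
    )
```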

SDK Usage

TypeScript

import { MetalogueClient } from '@metalogue/sdk';

const client = new MetalogueClient({ apiKey: API_KEY });

const { results, reasoning, latencyMs } = await client.query(
  'What was the auth decision from last week?',
  {
    limit: 10,
    includeReasoning: true,
    sources: ['slack', 'notion'],
    filters: {
      dateRange: {
        after: '2026-01-13',
        before: '2026-01-20',
      },
      authors: ['alice@company.com'],
    },
  }
);

for (const result of results) {
  console.log(`[${result.source}] Score: ${result.score}`);
  console.log(result.content);
}

Python

import asyncio

from metalogue import MetalogueClient

client = MetalogueClient(api_key=API_KEY)

async def main():
    response = await client.query(
        "What was the auth decision from last week?",
        limit=10,
        include_reasoning=True,
        sources=["slack", "notion"],
        date_after="2026-01-13",
        date_before="2026-01-20",
    )

    for result in response.results:
        print(f"[{result.source}] Score: {result.score}")
        print(result.content)

asyncio.run(main())

Filtering

By Source

await client.query('...', {
  sources: ['slack', 'notion'],
});

By Date Range

await client.query('...', {
  filters: {
    dateRange: {
      after: '2026-01-01',
      before: '2026-01-31',
    },
  },
});

By Author

await client.query('...', {
  filters: {
    authors: ['alice@company.com', 'bob@company.com'],
  },
});

By Document Type

await client.query('...', {
  filters: {
    types: ['message', 'document', 'ticket'],
  },
});

Advanced Features

Entropy-Aware Surfacing

Results are automatically filtered based on the user's cognitive load:

Entropy Level    Behavior
─────────────    ────────
Very Low         Surface everything
Low              Be permissive
Normal           Be selective
High             Filter aggressively
Critical         Critical items only (damping active)

The user_entropy field in the response shows the current entropy score.
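
The tiers above can be mapped from a user_entropy score client-side. The numeric cut-offs below are illustrative assumptions — the API only documents the damping threshold (2.0 by default):

```python
def entropy_tier(user_entropy, damping_threshold=2.0):
    """Map a user_entropy score to a behavior tier (cut-offs assumed)."""
    if user_entropy >= damping_threshold:
        return "critical"   # damping active, critical items only
    if user_entropy >= 1.0:
        return "high"       # filter aggressively
    if user_entropy >= 0.5:
        return "normal"     # be selective
    if user_entropy >= 0.2:
        return "low"        # be permissive
    return "very_low"       # surface everything
```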

Homeostasis Damping

When a user's entropy exceeds the threshold (default: 2.0), homeostasis damping activates:

  • Reduces the number of surfaced results
  • Prioritizes critical items only
  • Coordinates damping across federated nodes

The damping_active field indicates when damping is active.
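
The same behavior can be approximated client-side when rendering results under load. A sketch — the cap of 3 items is an illustrative assumption:

```python
def apply_damping(items, damping_active, cap=3):
    """When damping is active, keep only critical items, up to `cap`."""
    if not damping_active:
        return items
    critical = [i for i in items if i.get("is_critical")]
    return critical[:cap]
```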

Temporal Resolution

Results include temporal context:

Status      Description
──────      ───────────
recent      Within the last 7 days
stale       7-30 days old
historical  Over 30 days old

Temporal decay applies a recency weight:

weight = base_score × e^(-λ · days)

Where λ = 0.05 by default.
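
A direct implementation of the recency weight above, with the default λ = 0.05:

```python
import math

def recency_weight(base_score, age_days, lam=0.05):
    """weight = base_score * exp(-lam * age_days)"""
    return base_score * math.exp(-lam * age_days)
```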

Tombstone Filtering

Documents affected by Forget Request (semantic unlearning) are automatically filtered or penalized:

{
  "suppressed_by_tombstone": true
}

The item is still returned (for transparency) but marked as suppressed.

Force Federation

Force results from all nodes (bypass caching):

POST /v1/federate/force

Use sparingly—adds latency and bypasses optimization.

Error Codes

Code  Description
────  ───────────
400   Invalid query format
401   Invalid API key
403   Permission denied
429   Rate limit exceeded
500   Internal server error
504   Federation timeout
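
429 and 504 are transient and usually worth retrying with backoff. A sketch — the `send` callable stands in for whatever HTTP client you use, and the retry policy is an assumption, so tune it to your rate limits:

```python
import time

RETRYABLE = {429, 504}  # rate limit, federation timeout

def call_with_retry(send, max_attempts=3, base_delay=0.5, sleep=time.sleep):
    """Call send() until a non-retryable status or attempts are exhausted."""
    for attempt in range(max_attempts):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            sleep(base_delay * (2 ** attempt))  # exponential backoff
    return status, body
```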

Best Practices

  1. Use specific queries - "auth decision last week" beats "auth"
  2. Leverage intent signals - Set urgency for time-sensitive queries
  3. Filter by source - Reduce latency when you know the source
  4. Monitor entropy - High entropy means information overload
  5. Cache results - Use query_hash for client-side caching
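
Practice 5 can be sketched as a small TTL cache keyed on query_hash; the TTL and structure here are illustrative:

```python
import time

class QueryCache:
    """Client-side response cache keyed on query_hash (TTL assumed)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # query_hash -> (expires_at, response)

    def get(self, query_hash):
        entry = self._store.get(query_hash)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        self._store.pop(query_hash, None)  # evict missing/expired entries
        return None

    def put(self, query_hash, response):
        self._store[query_hash] = (time.monotonic() + self.ttl, response)
```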

Next Steps