TimeSeries Ingestion
ArcadeDB TimeSeries supports multiple ingestion methods and integrates with Grafana for visualization. Before ingesting data, create a TimeSeries type that defines the timestamp column, tags (dimensions for filtering), and fields (numeric measurements).

1. Create a TimeSeries Type (SQL)

Define the schema before ingesting. Tags are indexed dimensions for fast filtering; fields are the numeric measurements.

CREATE TIMESERIES TYPE stocks
  TIMESTAMP ts
  TAGS (symbol STRING)
  FIELDS (open DOUBLE, close DOUBLE, high DOUBLE, low DOUBLE, volume LONG)
  SHARDS 4

Optional parameters: RETENTION <ms> for automatic data expiration, COMPACTION_INTERVAL <ms> for time-bucketed compaction, IF NOT EXISTS to avoid errors if the type already exists.

2. InfluxDB Line Protocol (HTTP API) — Recommended for Bulk Ingestion

The fastest way to ingest large volumes of data. Send one or more lines in InfluxDB Line Protocol format. Each line is: measurement,tag1=val1 field1=value1,field2=value2 timestamp

Endpoint
POST /api/v1/ts/{database}/write?precision=ns|us|ms|s
Line Protocol Format
# measurement,tag1=val1,tag2=val2 field1=value1,field2=value2 timestamp
stocks,symbol=TSLA open=250.64,close=252.10,high=253.50,low=249.80,volume=125000i 1700000000000000000
stocks,symbol=AAPL open=195.20,close=196.50,high=197.00,low=194.80,volume=89000i  1700000000000000000

Data type hints: integers require an i suffix (e.g. volume=125000i), floats are bare numbers, strings are quoted ("value"). Timestamp precision defaults to nanoseconds; use ?precision=ms for milliseconds.
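The formatting rules above can be sketched in a few lines of Python. This is an illustrative helper, not part of any ArcadeDB client library; the name make_line is hypothetical, and for simplicity it does not escape spaces or commas inside tag values:

```python
def make_line(measurement, tags, fields, timestamp):
    """Build one InfluxDB Line Protocol line.

    Integers get the required 'i' suffix, floats stay bare,
    strings are double-quoted, booleans are lowercased.
    Note: tag values with spaces or commas would need escaping.
    """
    tag_part = ",".join(f"{k}={v}" for k, v in tags.items())
    field_parts = []
    for k, v in fields.items():
        if isinstance(v, bool):            # check bool before int (bool is an int subclass)
            field_parts.append(f"{k}={str(v).lower()}")
        elif isinstance(v, int):
            field_parts.append(f"{k}={v}i")    # integer hint
        elif isinstance(v, float):
            field_parts.append(f"{k}={v}")     # bare number
        else:
            field_parts.append(f'{k}="{v}"')   # quoted string
    return f"{measurement},{tag_part} {','.join(field_parts)} {timestamp}"

line = make_line("stocks", {"symbol": "TSLA"},
                 {"open": 250.64, "volume": 125000}, 1700000000000)
# -> stocks,symbol=TSLA open=250.64,volume=125000i 1700000000000
```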

curl Example — Single Point
curl -u root:password -X POST \
  "http://localhost:2480/api/v1/ts/mydb/write?precision=ms" \
  -H "Content-Type: text/plain" \
  --data-binary 'stocks,symbol=TSLA open=250.64,close=252.10,high=253.50,low=249.80,volume=125000i 1700000000000'
curl + Python Example — Generate 5,000 Sample Points

Generates realistic stock data for 5 symbols, 1,000 points each at 1-minute intervals:

curl -s -u root:password -X POST \
  "http://localhost:2480/api/v1/ts/mydb/write" \
  -H "Content-Type: text/plain" \
  --data-binary "$(python3 -c "
import random, time

symbols = ['TSLA', 'AAPL', 'GOOGL', 'MSFT', 'AMZN']
bases = {'TSLA': 250, 'AAPL': 195, 'GOOGL': 175, 'MSFT': 420, 'AMZN': 185}

now_ns = int(time.time() * 1e9)
interval = 60 * 1_000_000_000  # 1 min in ns

lines = []
for sym in symbols:
    price = bases[sym]
    for i in range(1000):
        ts = now_ns - (1000 - i) * interval
        o = round(price + random.uniform(-2, 2), 2)
        c = round(o + random.uniform(-3, 3), 2)
        h = round(max(o, c) + random.uniform(0, 2), 2)
        l = round(min(o, c) - random.uniform(0, 2), 2)
        v = random.randint(10000, 500000)
        lines.append(f'stocks,symbol={sym} open={o},close={c},high={h},low={l},volume={v}i {ts}')
        price = c
print('\n'.join(lines))
")"

Returns 204 No Content on success. Unknown measurement names (no matching TimeSeries type) are silently skipped.

3. SQL INSERT (via Command API)

Use standard SQL INSERT statements through the generic command endpoint. Useful for small batches or when integrating with existing SQL-based workflows.

Endpoint
POST /api/v1/command/{database}
SQL Syntax
INSERT INTO stocks (ts, symbol, open, close, high, low, volume)
  VALUES (1700000000000, 'TSLA', 250.64, 252.10, 253.50, 249.80, 125000)
curl Example
curl -u root:password -X POST \
  "http://localhost:2480/api/v1/command/mydb" \
  -H "Content-Type: application/json" \
  -d '{
    "language": "sql",
    "command": "INSERT INTO stocks (ts, symbol, open, close, high, low, volume) VALUES (1700000000000, '\''TSLA'\'', 250.64, 252.10, 253.50, 249.80, 125000)"
  }'

Timestamps are in milliseconds (epoch). Each INSERT runs inside a transaction. For bulk inserts, prefer the Line Protocol endpoint.
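The shell-escaped single quotes in the curl example above are easy to get wrong. Building the JSON body programmatically sidesteps that; a minimal Python sketch (the helper name insert_stock is hypothetical, and real code should prefer parameterized commands where available):

```python
import json

def insert_stock(ts, symbol, o, c, h, l, v):
    """Build the JSON body for POST /api/v1/command/{database}.

    SQL string literals escape embedded single quotes by doubling them.
    """
    sym = symbol.replace("'", "''")
    command = (f"INSERT INTO stocks (ts, symbol, open, close, high, low, volume) "
               f"VALUES ({ts}, '{sym}', {o}, {c}, {h}, {l}, {v})")
    return json.dumps({"language": "sql", "command": command})

body = insert_stock(1700000000000, "TSLA", 250.64, 252.10, 253.50, 249.80, 125000)
```

The resulting string can be sent as-is with any HTTP client using Basic Auth, matching the curl example above.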

4. Java Embedded API

For applications embedding ArcadeDB directly, use the TimeSeriesEngine API for maximum performance with zero network overhead.

// Get the TimeSeries type and engine
LocalTimeSeriesType tsType = (LocalTimeSeriesType) db.getSchema().getType("stocks");
TimeSeriesEngine engine = tsType.getEngine();

// Prepare sample data (batch of N samples)
long[] timestamps = new long[] { System.currentTimeMillis(), System.currentTimeMillis() + 1000 };
Object[][] columns = new Object[][] {
    { "TSLA", "TSLA" },        // symbol (tag)
    { 250.64, 251.30 },        // open
    { 252.10, 253.00 },        // close
    { 253.50, 254.00 },        // high
    { 249.80, 250.50 },        // low
    { 125000L, 130000L }       // volume
};

db.begin();
engine.appendSamples(timestamps, columns);
db.commit();

The columns array must match the order of non-timestamp columns as defined in the type schema. Use tsType.getTsColumns() to inspect the column order.

5. Analytical SQL Functions

Use these aggregate functions in SQL queries via the Query tab for advanced time series analysis.

Function                             Description
ts.rate(value, ts)                   Per-second rate of change
ts.rate(value, ts, true)             Rate with counter reset detection (Prometheus-style)
ts.delta(value, ts)                  Difference between first and last values
ts.percentile(value, 0.95)           Approximate percentile (p50, p95, p99, etc.)
ts.movingAvg(value, 10)              Moving average with configurable window size
ts.interpolate(value, 'linear', ts)  Gap filling with linear interpolation
ts.interpolate(value, 'prev')        Gap filling with previous value
ts.first(value, ts)                  First value ordered by timestamp
ts.last(value, ts)                   Last value ordered by timestamp
ts.correlate(a, b)                   Pearson correlation coefficient between two series
ts.timeBucket(60000, ts)             Time bucketing for GROUP BY (interval in ms)
Example: p95 Latency per Minute
SELECT ts.timeBucket(60000, ts) AS bucket, ts.percentile(latency, 0.95) AS p95
  FROM metrics
  WHERE ts BETWEEN 1700000000000 AND 1700086400000
  GROUP BY bucket ORDER BY bucket
Example: Rate with Counter Reset Detection
SELECT ts.timeBucket(300000, ts) AS bucket, ts.rate(requests_total, ts, true) AS req_per_sec
  FROM counters GROUP BY bucket ORDER BY bucket
6. Downsampling & Retention Policies

Configure automatic data lifecycle management. Retention removes old data; downsampling reduces granularity for historical data. Both policies are enforced automatically by a background scheduler (every 60 seconds).

Set Retention on Create
CREATE TIMESERIES TYPE metrics TIMESTAMP ts
  TAGS (host STRING) FIELDS (cpu DOUBLE, mem DOUBLE)
  RETENTION 30 DAYS
Add Downsampling Policy
ALTER TIMESERIES TYPE metrics ADD DOWNSAMPLING POLICY
  AFTER 7 DAYS GRANULARITY 1 HOURS
  AFTER 30 DAYS GRANULARITY 1 DAYS

You can also manage downsampling policies from the Schema tab using the visual controls.

Method Comparison
Method         Best For                                      Throughput  Batching
Line Protocol  Bulk ingestion, IoT, metrics collection       Highest     Native (multi-line)
SQL INSERT     Small batches, ad-hoc inserts, SQL workflows  Medium      One row per statement
Java API       Embedded applications, maximum control        Highest     Native (array-based)

Grafana Integration

ArcadeDB exposes Grafana DataFrame-compatible endpoints so you can visualize TimeSeries data in Grafana without a custom plugin. Use the Grafana Infinity datasource plugin to connect.

1. Install the Infinity Datasource Plugin

The Infinity plugin is a generic JSON/CSV/XML datasource maintained by the Grafana community. Install it from the Grafana plugin catalog or via CLI:

grafana cli plugins install yesoreyeram-infinity-datasource
# then restart Grafana
2. Configure the Datasource

In Grafana, go to Connections → Data Sources → Add data source, select Infinity, and configure:

Setting          Value
URL              http://<arcadedb-host>:2480
Authentication   Basic Auth
User / Password  Your ArcadeDB credentials

Test the connection using the Health Check URL:

GET /api/v1/ts/{database}/grafana/health

# Response: { "status": "ok", "database": "mydb" }
3. Available Endpoints
Method  Endpoint                          Purpose
GET     /api/v1/ts/{db}/grafana/health    Datasource health check
GET     /api/v1/ts/{db}/grafana/metadata  Discover types, fields, tags, aggregation types
POST    /api/v1/ts/{db}/grafana/query     Query results as Grafana DataFrame JSON
4. Discover Available Metrics (Metadata)

Use the metadata endpoint to discover TimeSeries types and their fields before configuring panels:

curl -u root:password \
  "http://localhost:2480/api/v1/ts/mydb/grafana/metadata"
Response
{
  "types": [
    {
      "name": "weather",
      "fields": [{ "name": "temperature", "dataType": "DOUBLE" }],
      "tags": [{ "name": "location", "dataType": "STRING" }]
    }
  ],
  "aggregationTypes": ["SUM", "AVG", "MIN", "MAX", "COUNT"]
}
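A panel builder typically only needs the type and field names from this payload. A small parsing sketch, using the sample response above as input:

```python
import json

# Sample metadata payload, shaped like the response shown above
metadata = json.loads("""{
  "types": [
    {"name": "weather",
     "fields": [{"name": "temperature", "dataType": "DOUBLE"}],
     "tags": [{"name": "location", "dataType": "STRING"}]}
  ],
  "aggregationTypes": ["SUM", "AVG", "MIN", "MAX", "COUNT"]
}""")

# Map each TimeSeries type to its queryable field names
fields_by_type = {t["name"]: [f["name"] for f in t["fields"]]
                  for t in metadata["types"]}
# -> {'weather': ['temperature']}
```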
5. Build a Panel Query

Configure the Infinity plugin to POST JSON to the query endpoint. Each target maps to a Grafana panel query (refId A, B, C...). The response uses the columnar DataFrame format Grafana expects.

Request
{
  "from": 1700000000000,
  "to": 1700086400000,
  "maxDataPoints": 1000,
  "targets": [
    {
      "refId": "A",
      "type": "weather",
      "fields": ["temperature"],
      "tags": { "location": "us-east" },
      "aggregation": {
        "bucketInterval": 60000,
        "requests": [
          { "field": "temperature", "type": "AVG", "alias": "avg_temp" }
        ]
      }
    }
  ]
}
Response (DataFrame format)
{
  "results": {
    "A": {
      "frames": [{
        "schema": {
          "fields": [
            { "name": "time", "type": "time" },
            { "name": "avg_temp", "type": "number" }
          ]
        },
        "data": {
          "values": [
            [1700000000000, 1700000060000],
            [23.5, 24.1]
          ]
        }
      }]
    }
  }
}

Auto bucket interval: If aggregation is present but bucketInterval is omitted, it is automatically calculated as (to - from) / maxDataPoints. Omit aggregation entirely to get raw data points.
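The auto-calculation can be reproduced client-side when you want to know the effective bucket size before sending a query. This is a sketch of the stated formula, not the server's exact code (it uses integer division, so treat the result as approximate):

```python
def auto_bucket_interval(frm: int, to: int, max_data_points: int) -> int:
    """Approximate the server's auto bucket interval in ms: (to - from) / maxDataPoints."""
    return (to - frm) // max_data_points

# A 24-hour window at 1000 points yields 86400 ms buckets (~86 s each)
interval = auto_bucket_interval(1700000000000, 1700086400000, 1000)
```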

6. curl Example
curl -u root:password -X POST \
  "http://localhost:2480/api/v1/ts/mydb/grafana/query" \
  -H "Content-Type: application/json" \
  -d '{
    "from": 1700000000000,
    "to": 1700086400000,
    "maxDataPoints": 500,
    "targets": [{
      "refId": "A",
      "type": "weather",
      "aggregation": {
        "requests": [
          { "field": "temperature", "type": "AVG", "alias": "avg_temp" }
        ]
      }
    }]
  }'
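Outside Grafana, the columnar DataFrame response is easy to consume directly. A sketch that pairs each timestamp with its values, using the response shape shown in step 5 (the helper name frame_to_rows is hypothetical):

```python
def frame_to_rows(frame):
    """Zip a DataFrame's parallel 'values' columns into per-point row dicts."""
    names = [f["name"] for f in frame["schema"]["fields"]]
    return [dict(zip(names, row)) for row in zip(*frame["data"]["values"])]

# Sample frame, shaped like results.A.frames[0] in the response above
frame = {
    "schema": {"fields": [{"name": "time", "type": "time"},
                          {"name": "avg_temp", "type": "number"}]},
    "data": {"values": [[1700000000000, 1700000060000], [23.5, 24.1]]},
}
rows = frame_to_rows(frame)
# -> [{'time': 1700000000000, 'avg_temp': 23.5}, {'time': 1700000060000, 'avg_temp': 24.1}]
```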
MCP
MCP allows LLM clients (Claude Desktop, Cursor) to interact with ArcadeDB using natural language.
Client configuration snippets are provided in Studio for Claude Desktop (copy into claude_desktop_config.json; requires npx / Node.js) and for Claude Code / Cursor (add with claude mcp add arcadedb or copy into the client's MCP settings).
Query Profiler Recording
Click Start to begin recording queries from all protocols.

All SQL, Cypher, Gremlin, and other queries will be captured with execution plans and timing data.

Users

Manage server users and their database access permissions. Assign groups per database: users in the admin group have full access, and the * database entry applies to all databases not explicitly listed.

User Groups

Manage security groups that define per-type CRUD permissions (Create, Read, Update, Delete, with * as the default type) and database-level access.

API Tokens

Manage long-lived API tokens for programmatic access (MCP, REST API integrations). Leave the expiration empty for tokens that never expire. Save a newly created token immediately: it will not be shown again.

AI Assistant

Get AI-powered database assistance:

  • Schema review and optimization
  • Query optimization and troubleshooting
  • Data modeling recommendations
  • Server profiling analysis
  • Synthetic data generation
  • And more...
Activate your subscription

Enter your subscription key to enable the AI assistant.

Don't have a key? Contact info@arcadedb.com for more information.


HTTP API Reference
Health & Status
GET /api/v1/ready Server readiness check
Authentication
POST /api/v1/login Authenticate with credentials
POST /api/v1/logout End current session
Query Execution
GET /api/v1/query/{db}/{lang}/{cmd} Execute a query via GET
POST /api/v1/query/{db} Execute a query via POST
Commands
POST /api/v1/command/{db} Execute a database command
Transactions
POST /api/v1/begin/{db} Begin a new transaction
POST /api/v1/commit/{db} Commit current transaction
POST /api/v1/rollback/{db} Rollback current transaction
Server Management
GET /api/v1/server Get server information
POST /api/v1/server Execute server command
Database Management
GET /api/v1/databases List all databases
GET /api/v1/exists/{db} Check if database exists
TimeSeries
POST /api/v1/ts/{db}/write Ingest via Line Protocol
POST /api/v1/ts/{db}/query Query timeseries data
GET /api/v1/ts/{db}/latest Get latest value
GET /api/v1/ts/{db}/grafana/health Grafana health check
GET /api/v1/ts/{db}/grafana/metadata Grafana metadata
POST /api/v1/ts/{db}/grafana/query Grafana DataFrame query
Documentation
GET /docs Swagger UI (interactive)
All endpoints use JSON for request and response bodies. Authentication is via the Authorization: Bearer {token} header after login.

ArcadeDB Documentation

The Next Generation Multi-Model Database Management System.

Multi-Model

Graph, Document, Key/Value, Search, Time Series, and Vector in one engine.

Multi-Language

SQL, Cypher, Gremlin, GraphQL, MongoDB QL, and Redis commands.

Multi-Protocol

HTTP/JSON, PostgreSQL, MongoDB, and Redis wire protocols.

Performance

Built for speed with minimal GC pressure and efficient storage.

ACID Transactions

Full transaction support with isolation and durability guarantees.

High Availability

Leader-follower replication with automatic failover.

Materialized Views

Persist SQL query results with manual, incremental, or periodic refresh strategies.

Copyright © 2021-present Arcade Data Ltd · Apache 2.0 License
