| Scan Type | - |
| Scan Bucket | - |
| Iterate Type | - |
| Iterate Bucket | - |
| Count Type | - |
| Count Bucket | - |
No materialized views found.
Loading configuration...
Select a database to view TimeSeries schema.
ArcadeDB TimeSeries supports multiple ingestion methods and integrates with Grafana for visualization. Before ingesting data, create a TimeSeries type that defines the timestamp column, tags (dimensions for filtering), and fields (numeric measurements).
Define the schema before ingesting. Tags are indexed dimensions for fast filtering; fields are the numeric measurements.
CREATE TIMESERIES TYPE stocks
TIMESTAMP ts
TAGS (symbol STRING)
FIELDS (open DOUBLE, close DOUBLE, high DOUBLE, low DOUBLE, volume LONG)
SHARDS 4
Optional parameters: RETENTION <ms> for automatic data expiration,
COMPACTION_INTERVAL <ms> for time-bucketed compaction,
IF NOT EXISTS to avoid errors if the type already exists.
The fastest way to ingest large volumes of data. Send one or more lines in
InfluxDB Line Protocol format.
Each line is: measurement,tag1=val1 field1=value1,field2=value2 timestamp
POST /api/v1/ts/{database}/write?precision=ns|us|ms|s
# measurement,tag1=val1,tag2=val2 field1=value1,field2=value2 timestamp
stocks,symbol=TSLA open=250.64,close=252.10,high=253.50,low=249.80,volume=125000i 1700000000000000000
stocks,symbol=AAPL open=195.20,close=196.50,high=197.00,low=194.80,volume=89000i 1700000000000000000
Data type hints: integers require an i suffix (e.g. volume=125000i), floats are bare numbers,
strings are quoted ("value"). Timestamp precision defaults to nanoseconds; use ?precision=ms for milliseconds.
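The formatting rules above are easy to get wrong by hand. This small Python helper (illustrative only, not part of ArcadeDB) builds one Line Protocol line, applying the `i` suffix for integers and quoting for strings:

```python
def to_line(measurement, tags, fields, ts_ns):
    """Format one InfluxDB Line Protocol line (illustrative helper)."""
    tag_part = ",".join(f"{k}={v}" for k, v in tags.items())

    def fmt(v):
        if isinstance(v, bool):          # check bool before int (bool is an int subclass)
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"               # integers need the i suffix
        if isinstance(v, float):
            return repr(v)               # floats are bare numbers
        return f'"{v}"'                  # strings are quoted

    field_part = ",".join(f"{k}={fmt(v)}" for k, v in fields.items())
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

line = to_line("stocks", {"symbol": "TSLA"},
               {"open": 250.64, "volume": 125000}, 1700000000000000000)
# → stocks,symbol=TSLA open=250.64,volume=125000i 1700000000000000000
```

Note this sketch does not escape commas, spaces, or equals signs in tag values; real Line Protocol requires backslash-escaping those.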
curl -u root:password -X POST \
"http://localhost:2480/api/v1/ts/mydb/write?precision=ms" \
-H "Content-Type: text/plain" \
--data-binary 'stocks,symbol=TSLA open=250.64,close=252.10,high=253.50,low=249.80,volume=125000i 1700000000000'
Generates realistic stock data for 5 symbols, 1,000 points each at 1-minute intervals:
curl -s -u root:password -X POST \
"http://localhost:2480/api/v1/ts/mydb/write" \
-H "Content-Type: text/plain" \
--data-binary "$(python3 -c "
import random, time
symbols = ['TSLA', 'AAPL', 'GOOGL', 'MSFT', 'AMZN']
bases = {'TSLA': 250, 'AAPL': 195, 'GOOGL': 175, 'MSFT': 420, 'AMZN': 185}
now_ns = int(time.time() * 1e9)
interval = 60 * 1_000_000_000 # 1 min in ns
lines = []
for sym in symbols:
    price = bases[sym]
    for i in range(1000):
        ts = now_ns - (1000 - i) * interval
        o = round(price + random.uniform(-2, 2), 2)
        c = round(o + random.uniform(-3, 3), 2)
        h = round(max(o, c) + random.uniform(0, 2), 2)
        l = round(min(o, c) - random.uniform(0, 2), 2)
        v = random.randint(10000, 500000)
        lines.append(f'stocks,symbol={sym} open={o},close={c},high={h},low={l},volume={v}i {ts}')
        price = c

print('\n'.join(lines))
")"
Returns 204 No Content on success. Unknown measurement names (no matching TimeSeries type) are silently skipped.
Use standard SQL INSERT statements through the generic command endpoint. Useful for small batches or when integrating with existing SQL-based workflows.
POST /api/v1/command/{database}
INSERT INTO stocks (ts, symbol, open, close, high, low, volume)
VALUES (1700000000000, 'TSLA', 250.64, 252.10, 253.50, 249.80, 125000)
curl -u root:password -X POST \
"http://localhost:2480/api/v1/command/mydb" \
-H "Content-Type: application/json" \
-d '{
"language": "sql",
"command": "INSERT INTO stocks (ts, symbol, open, close, high, low, volume) VALUES (1700000000000, '\''TSLA'\'', 250.64, 252.10, 253.50, 249.80, 125000)"
}'
Timestamps are in milliseconds (epoch). Each INSERT runs inside a transaction. For bulk inserts, prefer the Line Protocol endpoint.
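When scripting these inserts, passing values as named parameters avoids the shell quote-escaping shown in the curl example. A sketch of the request body, assuming ArcadeDB's command endpoint accepts a `params` map with `:name` placeholders (verify against your server version):

```python
import json

# Hypothetical payload for POST /api/v1/command/mydb; the "params" map and
# :name placeholders are assumptions to check against your ArcadeDB version.
payload = {
    "language": "sql",
    "command": ("INSERT INTO stocks (ts, symbol, open, close, high, low, volume) "
                "VALUES (:ts, :symbol, :open, :close, :high, :low, :volume)"),
    "params": {
        "ts": 1700000000000, "symbol": "TSLA",
        "open": 250.64, "close": 252.10,
        "high": 253.50, "low": 249.80, "volume": 125000,
    },
}
body = json.dumps(payload)  # send with requests.post(...) or curl -d "$body"
```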
For applications embedding ArcadeDB directly, use the TimeSeriesEngine API for maximum performance with zero network overhead.
// Get the TimeSeries type and engine
LocalTimeSeriesType tsType = (LocalTimeSeriesType) db.getSchema().getType("stocks");
TimeSeriesEngine engine = tsType.getEngine();
// Prepare sample data (batch of N samples)
long[] timestamps = new long[] { System.currentTimeMillis(), System.currentTimeMillis() + 1000 };
Object[][] columns = new Object[][] {
    { "TSLA", "TSLA" },   // symbol (tag)
    { 250.64, 251.30 },   // open
    { 252.10, 253.00 },   // close
    { 253.50, 254.00 },   // high
    { 249.80, 250.50 },   // low
    { 125000L, 130000L }  // volume
};
db.begin();
engine.appendSamples(timestamps, columns);
db.commit();
The columns array must match the order of non-timestamp columns as defined in the type schema.
Use tsType.getTsColumns() to inspect the column order.
Use these aggregate functions in SQL queries via the Query tab for advanced time series analysis.
| Function | Description |
|---|---|
| ts.rate(value, ts) | Per-second rate of change |
| ts.rate(value, ts, true) | Rate with counter reset detection (Prometheus-style) |
| ts.delta(value, ts) | Difference between first and last values |
| ts.percentile(value, 0.95) | Approximate percentile (p50, p95, p99, etc.) |
| ts.movingAvg(value, 10) | Moving average with configurable window size |
| ts.interpolate(value, 'linear', ts) | Gap filling with linear interpolation |
| ts.interpolate(value, 'prev') | Gap filling with previous value |
| ts.first(value, ts) | First value ordered by timestamp |
| ts.last(value, ts) | Last value ordered by timestamp |
| ts.correlate(a, b) | Pearson correlation coefficient between two series |
| ts.timeBucket(60000, ts) | Time bucketing for GROUP BY (interval in ms) |
SELECT ts.timeBucket(60000, ts) AS bucket, ts.percentile(latency, 0.95) AS p95
FROM metrics
WHERE ts BETWEEN 1700000000000 AND 1700086400000
GROUP BY bucket ORDER BY bucket
SELECT ts.timeBucket(300000, ts) AS bucket, ts.rate(requests_total, ts, true) AS req_per_sec
FROM counters GROUP BY bucket ORDER BY bucket
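To see what `ts.timeBucket` does to each timestamp, here is a plain-Python equivalent of the bucketing step (a sketch assuming the usual floor-to-bucket-start semantics):

```python
from collections import defaultdict

def time_bucket(interval_ms, ts_ms):
    # Floor the timestamp to the start of its bucket, like ts.timeBucket(interval, ts)
    return ts_ms - (ts_ms % interval_ms)

samples = [(1700000000500, 10.0), (1700000030000, 20.0), (1700000061000, 30.0)]
buckets = defaultdict(list)
for ts, value in samples:
    buckets[time_bucket(60_000, ts)].append(value)
# the first two samples share a one-minute bucket; the third starts a new one
```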
Configure automatic data lifecycle management. Retention removes old data; downsampling reduces granularity for historical data. Both policies are enforced automatically by a background scheduler (every 60 seconds).
CREATE TIMESERIES TYPE metrics TIMESTAMP ts
TAGS (host STRING) FIELDS (cpu DOUBLE, mem DOUBLE)
RETENTION 30 DAYS
ALTER TIMESERIES TYPE metrics ADD DOWNSAMPLING POLICY
AFTER 7 DAYS GRANULARITY 1 HOURS
AFTER 30 DAYS GRANULARITY 1 DAYS
You can also manage downsampling policies from the Schema tab using the visual controls.
| Method | Best For | Throughput | Batching |
|---|---|---|---|
| Line Protocol | Bulk ingestion, IoT, metrics collection | Highest | Native (multi-line) |
| SQL INSERT | Small batches, ad-hoc inserts, SQL workflows | Medium | One row per statement |
| Java API | Embedded applications, maximum control | Highest | Native (array-based) |
ArcadeDB exposes Grafana DataFrame-compatible endpoints so you can visualize TimeSeries data in Grafana without a custom plugin. Use the Grafana Infinity datasource plugin to connect.
The Infinity plugin is a generic JSON/CSV/XML datasource maintained by the Grafana community. Install it from the Grafana plugin catalog or via CLI:
grafana cli plugins install yesoreyeram-infinity-datasource
# then restart Grafana
In Grafana, go to Connections → Data Sources → Add data source, select Infinity, and configure:
| Setting | Value |
|---|---|
| URL | http://<arcadedb-host>:2480 |
| Authentication | Basic Auth |
| User / Password | Your ArcadeDB credentials |
Test the connection using the Health Check URL:
GET /api/v1/ts/{database}/grafana/health
# Response: { "status": "ok", "database": "mydb" }
| Method | Endpoint | Purpose |
|---|---|---|
| GET | /api/v1/ts/{db}/grafana/health | Datasource health check |
| GET | /api/v1/ts/{db}/grafana/metadata | Discover types, fields, tags, aggregation types |
| POST | /api/v1/ts/{db}/grafana/query | Query → Grafana DataFrame JSON |
Use the metadata endpoint to discover TimeSeries types and their fields before configuring panels:
curl -u root:password \
"http://localhost:2480/api/v1/ts/mydb/grafana/metadata"
{
"types": [
{
"name": "weather",
"fields": [{ "name": "temperature", "dataType": "DOUBLE" }],
"tags": [{ "name": "location", "dataType": "STRING" }]
}
],
"aggregationTypes": ["SUM", "AVG", "MIN", "MAX", "COUNT"]
}
Configure the Infinity plugin to POST JSON to the query endpoint. Each target maps to a Grafana panel query (refId A, B, C...). The response uses the columnar DataFrame format Grafana expects.
{
"from": 1700000000000,
"to": 1700086400000,
"maxDataPoints": 1000,
"targets": [
{
"refId": "A",
"type": "weather",
"fields": ["temperature"],
"tags": { "location": "us-east" },
"aggregation": {
"bucketInterval": 60000,
"requests": [
{ "field": "temperature", "type": "AVG", "alias": "avg_temp" }
]
}
}
]
}
{
"results": {
"A": {
"frames": [{
"schema": {
"fields": [
{ "name": "time", "type": "time" },
{ "name": "avg_temp", "type": "number" }
]
},
"data": {
"values": [
[1700000000000, 1700000060000],
[23.5, 24.1]
]
}
}]
}
}
}
Auto bucket interval: If aggregation is present but bucketInterval is omitted,
it is automatically calculated as (to - from) / maxDataPoints.
Omit aggregation entirely to get raw data points.
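The fallback described above is simple to reproduce client-side when sizing panels (integer division here; the server's rounding may differ slightly):

```python
def auto_bucket_interval(frm_ms, to_ms, max_data_points):
    # (to - from) / maxDataPoints, the fallback described above
    return (to_ms - frm_ms) // max_data_points

# The 24-hour window from the examples, at 1000 points: one bucket per 86,400 ms
interval = auto_bucket_interval(1700000000000, 1700086400000, 1000)
```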
curl -u root:password -X POST \
"http://localhost:2480/api/v1/ts/mydb/grafana/query" \
-H "Content-Type: application/json" \
-d '{
"from": 1700000000000,
"to": 1700086400000,
"maxDataPoints": 500,
"targets": [{
"refId": "A",
"type": "weather",
"aggregation": {
"requests": [
{ "field": "temperature", "type": "AVG", "alias": "avg_temp" }
]
}
}]
}'
| Endpoint | Total Requests | Req/Min |
|---|---|---|
| Metric | Total | Req/Min |
|---|---|---|
| Metric | Value |
|---|---|
Copy into claude_desktop_config.json (requires npx / Node.js):
Add with claude mcp add arcadedb or copy into MCP settings:
| Query | Language | Database | Count | Total (ms) | Avg (ms) | Max (ms) | P99 (ms) |
|---|---|---|---|---|---|---|---|
| Step | Count | Total (ms) | Min (ms) | Avg (ms) | Max (ms) | P99 (ms) |
|---|---|---|---|---|---|---|
Click Start to begin recording queries from all protocols.
All SQL, Cypher, Gremlin, and other queries will be captured with execution plans and timing data.
Assign groups per database. Users in the admin group have full access. The * database applies to all databases not explicitly listed.
| Database | Groups | |
|---|---|---|
| * (all databases) | | |
| Type | Create | Read | Update | Delete | |
|---|---|---|---|---|---|
| * (default) | | | | | |
| Type | Create | Read | Update | Delete | |
|---|---|---|---|---|---|
| * (default) | | | | | |
Get AI-powered database assistance:
Enter your subscription key to enable the AI assistant.
Don't have a key? Contact info@arcadedb.com for more information.
Ask me about your database schema, query optimization, data modeling, or synthetic data generation.
Select an endpoint from the sidebar to view its documentation, including request format, parameters, and response examples.
All endpoints use JSON for request and response bodies. Authentication is via Authorization: Bearer {token} header after login.
The Next Generation Multi-Model Database Management System. Click any topic in the sidebar to open its documentation page.
Graph, Document, Key/Value, Search, Time Series, and Vector in one engine.
SQL, Cypher, Gremlin, GraphQL, MongoDB QL, and Redis commands.
HTTP/JSON, PostgreSQL, MongoDB, and Redis wire protocols.
Built for speed with minimal GC pressure and efficient storage.
Full transaction support with isolation and durability guarantees.
Leader-follower replication with automatic failover.
Persist SQL query results with manual, incremental, or periodic refresh strategies.
One database. Every data model. Extreme performance.
Sign in to manage your databases