The latest version of the DuckDB Node.js (Neo) client is 1.3.2.
An API for using DuckDB in Node.js.
The main package, @duckdb/node-api, is a high-level API meant for applications. It depends on low-level bindings that adhere closely to the DuckDB C API, available separately as @duckdb/node-bindings.
Features
Main differences from duckdb-node:
- Native support for Promises; no need for a separate duckdb-async wrapper.
- DuckDB-specific API; not based on the SQLite Node API.
- Lossless and efficient support for values of all DuckDB data types.
- Wraps released DuckDB binaries instead of rebuilding DuckDB.
- Built on the DuckDB C API; exposes more functionality.
Roadmap
Some features are not yet complete:
- Binding and appending the MAP and UNION data types
- Appending default values row-by-row
- User-defined types and functions
- Profiling information
- Table descriptions
- APIs for Arrow
See the issues list on GitHub for the latest roadmap.
Supported platforms
- Linux arm64
- Linux x64
- Mac OS X (Darwin) arm64 (Apple Silicon)
- Mac OS X (Darwin) x64 (Intel)
- Windows (Win32) x64
Examples
Get basic information
import duckdb from '@duckdb/node-api';
console.log(duckdb.version());
console.log(duckdb.configurationOptionDescriptions());
Connect
import { DuckDBConnection } from '@duckdb/node-api';
const connection = await DuckDBConnection.create();
This uses the default instance. For advanced usage, you can create instances explicitly.
Create an instance
import { DuckDBInstance } from '@duckdb/node-api';
Create an in-memory database:
const instance = await DuckDBInstance.create(':memory:');
Equivalent to the above:
const instance = await DuckDBInstance.create();
Read from and write to a database file, creating it if needed:
const instance = await DuckDBInstance.create('my_duckdb.db');
Set configuration options:
const instance = await DuckDBInstance.create('my_duckdb.db', {
threads: '4'
});
Instance cache
Multiple instances in the same process should not attach the same database. To prevent this, use an instance cache:
const instance = await DuckDBInstance.fromCache('my_duckdb.db');
This uses the default instance cache. For advanced usage, you can create instance caches explicitly:
import { DuckDBInstanceCache } from '@duckdb/node-api';
const cache = new DuckDBInstanceCache();
const instance = await cache.getOrCreateInstance('my_duckdb.db');
Connect to an instance
const connection = await instance.connect();
Disconnect
Connections are disconnected automatically soon after their reference is dropped, but you can also disconnect explicitly if and when you want:
connection.disconnectSync();
Or, equivalently:
connection.closeSync();
Run SQL
const result = await connection.run('from test_all_types()');
Parameterize SQL
const prepared = await connection.prepare('select $1, $2, $3');
prepared.bindVarchar(1, 'duck');
prepared.bindInteger(2, 42);
prepared.bindList(3, listValue([10, 11, 12]), LIST(INTEGER));
const result = await prepared.run();
Or:
const prepared = await connection.prepare('select $a, $b, $c');
prepared.bind({
'a': 'duck',
'b': 42,
'c': listValue([10, 11, 12]),
}, {
'a': VARCHAR,
'b': INTEGER,
'c': LIST(INTEGER),
});
const result = await prepared.run();
Or even:
const result = await connection.run('select $a, $b, $c', {
'a': 'duck',
'b': 42,
'c': listValue([10, 11, 12]),
}, {
'a': VARCHAR,
'b': INTEGER,
'c': LIST(INTEGER),
});
Unspecified types will be inferred:
const result = await connection.run('select $a, $b, $c', {
'a': 'duck',
'b': 42,
'c': listValue([10, 11, 12]),
});
Specify values
Values for many data types are represented by one of the JS primitives boolean, number, bigint, or string. Additionally, values of any type can be null.
Values of some data types need to be constructed using special functions. These are:
| Type | Function |
|---|---|
| ARRAY | arrayValue |
| BIT | bitValue |
| BLOB | blobValue |
| DATE | dateValue |
| DECIMAL | decimalValue |
| INTERVAL | intervalValue |
| LIST | listValue |
| MAP | mapValue |
| STRUCT | structValue |
| TIME | timeValue |
| TIMETZ | timeTZValue |
| TIMESTAMP | timestampValue |
| TIMESTAMPTZ | timestampTZValue |
| TIMESTAMP_S | timestampSecondsValue |
| TIMESTAMP_MS | timestampMillisValue |
| TIMESTAMP_NS | timestampNanosValue |
| UNION | unionValue |
| UUID | uuidValue |
Stream results
Streaming results evaluate lazily when rows are read:
const result = await connection.stream('from range(10_000)');
Inspect result metadata
Get column names and types:
const columnNames = result.columnNames();
const columnTypes = result.columnTypes();
Read result data
Run and read all data:
const reader = await connection.runAndReadAll('from test_all_types()');
const rows = reader.getRows();
// OR: const columns = reader.getColumns();
Stream and read up to (at least) a given number of rows:
const reader = await connection.streamAndReadUntil(
'from range(5000)',
1000
);
const rows = reader.getRows();
// rows.length === 2048. (Rows are read in chunks of 2048.)
Read rows incrementally:
const reader = await connection.streamAndRead('from range(5000)');
reader.readUntil(2000);
// reader.currentRowCount === 2048 (Rows are read in chunks of 2048.)
// reader.done === false
reader.readUntil(4000);
// reader.currentRowCount === 4096
// reader.done === false
reader.readUntil(6000);
// reader.currentRowCount === 5000
// reader.done === true
Get result data
Result data can be retrieved in a variety of forms:
const reader = await connection.runAndReadAll(
'from range(3) select range::int as i, 10 + i as n'
);
const rows = reader.getRows();
// [ [0, 10], [1, 11], [2, 12] ]
const rowObjects = reader.getRowObjects();
// [ { i: 0, n: 10 }, { i: 1, n: 11 }, { i: 2, n: 12 } ]
const columns = reader.getColumns();
// [ [0, 1, 2], [10, 11, 12] ]
const columnsObject = reader.getColumnsObject();
// { i: [0, 1, 2], n: [10, 11, 12] }
Convert result data
By default, data values that cannot be represented as JS built-ins are returned as special JS objects; see Inspect data values below.
To retrieve data in a different form (e.g. as JS built-ins, or values that can be losslessly serialized to JSON), use the JS or Json forms of the above result data methods.
Custom converters can be supplied as well. For how to do this, see the implementations of JSDuckDBValueConverter and JsonDuckDBValueConverter.
Example (using the Json forms):
const reader = await connection.runAndReadAll(
'from test_all_types() select bigint, date, interval limit 2'
);
const rows = reader.getRowsJson();
// [
// [
// "-9223372036854775808",
// "5877642-06-25 (BC)",
// { "months": 0, "days": 0, "micros": "0" }
// ],
// [
// "9223372036854775807",
// "5881580-07-10",
// { "months": 999, "days": 999, "micros": "999999999" }
// ]
// ]
const rowObjects = reader.getRowObjectsJson();
// [
// {
// "bigint": "-9223372036854775808",
// "date": "5877642-06-25 (BC)",
// "interval": { "months": 0, "days": 0, "micros": "0" }
// },
// {
// "bigint": "9223372036854775807",
// "date": "5881580-07-10",
// "interval": { "months": 999, "days": 999, "micros": "999999999" }
// }
// ]
const columns = reader.getColumnsJson();
// [
// [ "-9223372036854775808", "9223372036854775807" ],
// [ "5877642-06-25 (BC)", "5881580-07-10" ],
// [
// { "months": 0, "days": 0, "micros": "0" },
// { "months": 999, "days": 999, "micros": "999999999" }
// ]
// ]
const columnsObject = reader.getColumnsObjectJson();
// {
// "bigint": [ "-9223372036854775808", "9223372036854775807" ],
// "date": [ "5877642-06-25 (BC)", "5881580-07-10" ],
// "interval": [
// { "months": 0, "days": 0, "micros": "0" },
// { "months": 999, "days": 999, "micros": "999999999" }
// ]
// }
These methods also support nested types:
const reader = await connection.runAndReadAll(
'from test_all_types() select int_array, struct, map, "union" limit 2'
);
const rows = reader.getRowsJson();
// [
// [
// [],
// { "a": null, "b": null },
// [],
// { "tag": "name", "value": "Frank" }
// ],
// [
// [ 42, 999, null, null, -42],
// { "a": 42, "b": "🦆🦆🦆🦆🦆🦆" },
// [
// { "key": "key1", "value": "🦆🦆🦆🦆🦆🦆" },
// { "key": "key2", "value": "goose" }
// ],
// { "tag": "age", "value": 5 }
// ]
// ]
const rowObjects = reader.getRowObjectsJson();
// [
// {
// "int_array": [],
// "struct": { "a": null, "b": null },
// "map": [],
// "union": { "tag": "name", "value": "Frank" }
// },
// {
// "int_array": [ 42, 999, null, null, -42 ],
// "struct": { "a": 42, "b": "🦆🦆🦆🦆🦆🦆" },
// "map": [
// { "key": "key1", "value": "🦆🦆🦆🦆🦆🦆" },
// { "key": "key2", "value": "goose" }
// ],
// "union": { "tag": "age", "value": 5 }
// }
// ]
const columns = reader.getColumnsJson();
// [
// [
// [],
// [42, 999, null, null, -42]
// ],
// [
// { "a": null, "b": null },
// { "a": 42, "b": "🦆🦆🦆🦆🦆🦆" }
// ],
// [
// [],
// [
// { "key": "key1", "value": "🦆🦆🦆🦆🦆🦆" },
// { "key": "key2", "value": "goose"}
// ]
// ],
// [
// { "tag": "name", "value": "Frank" },
// { "tag": "age", "value": 5 }
// ]
// ]
const columnsObject = reader.getColumnsObjectJson();
// {
// "int_array": [
// [],
// [42, 999, null, null, -42]
// ],
// "struct": [
// { "a": null, "b": null },
// { "a": 42, "b": "🦆🦆🦆🦆🦆🦆" }
// ],
// "map": [
// [],
// [
// { "key": "key1", "value": "🦆🦆🦆🦆🦆🦆" },
// { "key": "key2", "value": "goose" }
// ]
// ],
// "union": [
// { "tag": "name", "value": "Frank" },
// { "tag": "age", "value": 5 }
// ]
// }
Column names and types can also be serialized to JSON:
const columnNamesAndTypes = reader.columnNamesAndTypesJson();
// {
// "columnNames": [
// "int_array",
// "struct",
// "map",
// "union"
// ],
// "columnTypes": [
// {
// "typeId": 24,
// "valueType": {
// "typeId": 4
// }
// },
// {
// "typeId": 25,
// "entryNames": [
// "a",
// "b"
// ],
// "entryTypes": [
// {
// "typeId": 4
// },
// {
// "typeId": 17
// }
// ]
// },
// {
// "typeId": 26,
// "keyType": {
// "typeId": 17
// },
// "valueType": {
// "typeId": 17
// }
// },
// {
// "typeId": 28,
// "memberTags": [
// "name",
// "age"
// ],
// "memberTypes": [
// {
// "typeId": 17
// },
// {
// "typeId": 3
// }
// ]
// }
// ]
// }
const columnNameAndTypeObjects = reader.columnNameAndTypeObjectsJson();
// [
// {
// "columnName": "int_array",
// "columnType": {
// "typeId": 24,
// "valueType": {
// "typeId": 4
// }
// }
// },
// {
// "columnName": "struct",
// "columnType": {
// "typeId": 25,
// "entryNames": [
// "a",
// "b"
// ],
// "entryTypes": [
// {
// "typeId": 4
// },
// {
// "typeId": 17
// }
// ]
// }
// },
// {
// "columnName": "map",
// "columnType": {
// "typeId": 26,
// "keyType": {
// "typeId": 17
// },
// "valueType": {
// "typeId": 17
// }
// }
// },
// {
// "columnName": "union",
// "columnType": {
// "typeId": 28,
// "memberTags": [
// "name",
// "age"
// ],
// "memberTypes": [
// {
// "typeId": 17
// },
// {
// "typeId": 3
// }
// ]
// }
// }
// ]
Fetch chunks
Fetch all chunks:
const chunks = await result.fetchAllChunks();
Fetch one chunk at a time:
const chunks = [];
while (true) {
const chunk = await result.fetchChunk();
// Last chunk will have zero rows.
if (chunk.rowCount === 0) {
break;
}
chunks.push(chunk);
}
For materialized (non-streaming) results, chunks can be read by index:
const rowCount = result.rowCount;
const chunkCount = result.chunkCount;
for (let i = 0; i < chunkCount; i++) {
const chunk = result.getChunk(i);
// ...
}
Get chunk data:
const rows = chunk.getRows();
const rowObjects = chunk.getRowObjects(result.deduplicatedColumnNames());
const columns = chunk.getColumns();
const columnsObject =
chunk.getColumnsObject(result.deduplicatedColumnNames());
Get chunk data (one value at a time):
const columns = [];
const columnCount = chunk.columnCount;
for (let columnIndex = 0; columnIndex < columnCount; columnIndex++) {
const columnValues = [];
const columnVector = chunk.getColumnVector(columnIndex);
const itemCount = columnVector.itemCount;
for (let itemIndex = 0; itemIndex < itemCount; itemIndex++) {
const value = columnVector.getItem(itemIndex);
columnValues.push(value);
}
columns.push(columnValues);
}
Inspect data types
import { DuckDBTypeId } from '@duckdb/node-api';
if (columnType.typeId === DuckDBTypeId.ARRAY) {
const arrayValueType = columnType.valueType;
const arrayLength = columnType.length;
}
if (columnType.typeId === DuckDBTypeId.DECIMAL) {
const decimalWidth = columnType.width;
const decimalScale = columnType.scale;
}
if (columnType.typeId === DuckDBTypeId.ENUM) {
const enumValues = columnType.values;
}
if (columnType.typeId === DuckDBTypeId.LIST) {
const listValueType = columnType.valueType;
}
if (columnType.typeId === DuckDBTypeId.MAP) {
const mapKeyType = columnType.keyType;
const mapValueType = columnType.valueType;
}
if (columnType.typeId === DuckDBTypeId.STRUCT) {
const structEntryNames = columnType.names;
const structEntryTypes = columnType.valueTypes;
}
if (columnType.typeId === DuckDBTypeId.UNION) {
const unionMemberTags = columnType.memberTags;
const unionMemberTypes = columnType.memberTypes;
}
// For the JSON type (https://duckdb.net.cn/docs/data/json/json_type)
if (columnType.alias === 'JSON') {
const json = JSON.parse(columnValue);
}
Every type implements toString. The result is both human-friendly and readable by DuckDB in a suitable expression:
const typeString = columnType.toString();
Inspect data values
import { DuckDBTypeId } from '@duckdb/node-api';
if (columnType.typeId === DuckDBTypeId.ARRAY) {
const arrayItems = columnValue.items; // array of values
const arrayString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.BIT) {
const bools = columnValue.toBools(); // array of booleans
const bits = columnValue.toBits(); // array of 0s and 1s
const bitString = columnValue.toString(); // string of '0's and '1's
}
if (columnType.typeId === DuckDBTypeId.BLOB) {
const blobBytes = columnValue.bytes; // Uint8Array
const blobString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.DATE) {
const dateDays = columnValue.days;
const dateString = columnValue.toString();
const { year, month, day } = columnValue.toParts();
}
if (columnType.typeId === DuckDBTypeId.DECIMAL) {
const decimalWidth = columnValue.width;
const decimalScale = columnValue.scale;
// Scaled-up value. Represented number is value/(10^scale).
const decimalValue = columnValue.value; // bigint
const decimalString = columnValue.toString();
const decimalDouble = columnValue.toDouble();
}
if (columnType.typeId === DuckDBTypeId.INTERVAL) {
const intervalMonths = columnValue.months;
const intervalDays = columnValue.days;
const intervalMicros = columnValue.micros; // bigint
const intervalString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.LIST) {
const listItems = columnValue.items; // array of values
const listString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.MAP) {
const mapEntries = columnValue.entries; // array of { key, value }
const mapString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.STRUCT) {
// { name1: value1, name2: value2, ... }
const structEntries = columnValue.entries;
const structString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.TIMESTAMP_MS) {
const timestampMillis = columnValue.milliseconds; // bigint
const timestampMillisString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.TIMESTAMP_NS) {
const timestampNanos = columnValue.nanoseconds; // bigint
const timestampNanosString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.TIMESTAMP_S) {
const timestampSecs = columnValue.seconds; // bigint
const timestampSecsString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.TIMESTAMP_TZ) {
const timestampTZMicros = columnValue.micros; // bigint
const timestampTZString = columnValue.toString();
const {
date: { year, month, day },
time: { hour, min, sec, micros },
} = columnValue.toParts();
}
if (columnType.typeId === DuckDBTypeId.TIMESTAMP) {
const timestampMicros = columnValue.micros; // bigint
const timestampString = columnValue.toString();
const {
date: { year, month, day },
time: { hour, min, sec, micros },
} = columnValue.toParts();
}
if (columnType.typeId === DuckDBTypeId.TIME_TZ) {
const timeTZMicros = columnValue.micros; // bigint
const timeTZOffset = columnValue.offset;
const timeTZString = columnValue.toString();
const {
time: { hour, min, sec, micros },
offset,
} = columnValue.toParts();
}
if (columnType.typeId === DuckDBTypeId.TIME) {
const timeMicros = columnValue.micros; // bigint
const timeString = columnValue.toString();
const { hour, min, sec, micros } = columnValue.toParts();
}
if (columnType.typeId === DuckDBTypeId.UNION) {
const unionTag = columnValue.tag;
const unionValue = columnValue.value;
const unionValueString = columnValue.toString();
}
if (columnType.typeId === DuckDBTypeId.UUID) {
const uuidHugeint = columnValue.hugeint; // bigint
const uuidString = columnValue.toString();
}
// other possible values are: null, boolean, number, bigint, or string
Display timezones
Converting a TIMESTAMP_TZ value to a string depends on a timezone offset. By default, this is set to the offset of the local timezone when the Node process starts.
To change it, set the timezoneOffsetInMinutes property of DuckDBTimestampTZValue:
DuckDBTimestampTZValue.timezoneOffsetInMinutes = -8 * 60;
const pst = DuckDBTimestampTZValue.Epoch.toString();
// 1969-12-31 16:00:00-08
DuckDBTimestampTZValue.timezoneOffsetInMinutes = +1 * 60;
const cet = DuckDBTimestampTZValue.Epoch.toString();
// 1970-01-01 01:00:00+01
Note that the timezone offset used for this string conversion is distinct from the TimeZone setting of DuckDB.
The following sets this offset to match the TimeZone setting of DuckDB:
const reader = await connection.runAndReadAll(
`select (timezone(current_timestamp) / 60)::int`
);
DuckDBTimestampTZValue.timezoneOffsetInMinutes =
reader.getColumns()[0][0];
Append to table
await connection.run(
`create or replace table target_table(i integer, v varchar)`
);
const appender = await connection.createAppender('target_table');
appender.appendInteger(42);
appender.appendVarchar('duck');
appender.endRow();
appender.appendInteger(123);
appender.appendVarchar('mallard');
appender.endRow();
appender.flushSync();
appender.appendInteger(17);
appender.appendVarchar('goose');
appender.endRow();
appender.closeSync(); // also flushes
Append data chunks
await connection.run(
`create or replace table target_table(i integer, v varchar)`
);
const appender = await connection.createAppender('target_table');
const chunk = DuckDBDataChunk.create([INTEGER, VARCHAR]);
chunk.setColumns([
[42, 123, 17],
['duck', 'mallard', 'goose'],
]);
// OR:
// chunk.setRows([
// [42, 'duck'],
// [123, 'mallard'],
// [17, 'goose'],
// ]);
appender.appendDataChunk(chunk);
appender.flushSync();
See "Specify values" above for how to supply values to the appender.
Extract statements
const extractedStatements = await connection.extractStatements(`
create or replace table numbers as from range(?);
from numbers where range < ?;
drop table numbers;
`);
const parameterValues = [10, 7];
const statementCount = extractedStatements.count;
for (let stmtIndex = 0; stmtIndex < statementCount; stmtIndex++) {
const prepared = await extractedStatements.prepare(stmtIndex);
let parameterCount = prepared.parameterCount;
for (let paramIndex = 1; paramIndex <= parameterCount; paramIndex++) {
prepared.bindInteger(paramIndex, parameterValues.shift());
}
const result = await prepared.run();
// ...
}
Control evaluation of tasks
import { DuckDBPendingResultState } from '@duckdb/node-api';
async function sleep(ms) {
return new Promise((resolve) => {
setTimeout(resolve, ms);
});
}
const prepared = await connection.prepare('from range(10_000_000)');
const pending = prepared.start();
while (pending.runTask() !== DuckDBPendingResultState.RESULT_READY) {
console.log('not ready');
await sleep(1);
}
console.log('ready');
const result = await pending.getResult();
// ...
Ways to run SQL
// Run to completion but don't yet retrieve any rows.
// Optionally take values to bind to SQL parameters,
// and (optionally) types of those parameters,
// either as an array (for positional parameters),
// or an object keyed by parameter name.
const result = await connection.run(sql);
const result = await connection.run(sql, values);
const result = await connection.run(sql, values, types);
// Run to completion but don't yet retrieve any rows.
// Wrap in a DuckDBDataReader for convenient data retrieval.
const reader = await connection.runAndRead(sql);
const reader = await connection.runAndRead(sql, values);
const reader = await connection.runAndRead(sql, values, types);
// Run to completion, wrap in a reader, and read all rows.
const reader = await connection.runAndReadAll(sql);
const reader = await connection.runAndReadAll(sql, values);
const reader = await connection.runAndReadAll(sql, values, types);
// Run to completion, wrap in a reader, and read at least
// the given number of rows. (Rows are read in chunks, so more than
// the target may be read.)
const reader = await connection.runAndReadUntil(sql, targetRowCount);
const reader =
await connection.runAndReadUntil(sql, targetRowCount, values);
const reader =
await connection.runAndReadUntil(sql, targetRowCount, values, types);
// Create a streaming result and don't yet retrieve any rows.
const result = await connection.stream(sql);
const result = await connection.stream(sql, values);
const result = await connection.stream(sql, values, types);
// Create a streaming result and don't yet retrieve any rows.
// Wrap in a DuckDBDataReader for convenient data retrieval.
const reader = await connection.streamAndRead(sql);
const reader = await connection.streamAndRead(sql, values);
const reader = await connection.streamAndRead(sql, values, types);
// Create a streaming result, wrap in a reader, and read all rows.
const reader = await connection.streamAndReadAll(sql);
const reader = await connection.streamAndReadAll(sql, values);
const reader = await connection.streamAndReadAll(sql, values, types);
// Create a streaming result, wrap in a reader, and read at least
// the given number of rows.
const reader = await connection.streamAndReadUntil(sql, targetRowCount);
const reader =
await connection.streamAndReadUntil(sql, targetRowCount, values);
const reader =
await connection.streamAndReadUntil(sql, targetRowCount, values, types);
// Prepared Statements
// Prepare a possibly-parametered SQL statement to run later.
const prepared = await connection.prepare(sql);
// Bind values to the parameters.
prepared.bind(values);
prepared.bind(values, types);
// Run the prepared statement. These mirror the methods on the connection.
const result = await prepared.run();
const reader = await prepared.runAndRead();
const reader = await prepared.runAndReadAll();
const reader = await prepared.runAndReadUntil(targetRowCount);
const result = await prepared.stream();
const reader = await prepared.streamAndRead();
const reader = await prepared.streamAndReadAll();
const reader = await prepared.streamAndReadUntil(targetRowCount);
// Pending Results
// Create a pending result.
const pending = await connection.start(sql);
const pending = await connection.start(sql, values);
const pending = await connection.start(sql, values, types);
// Create a pending, streaming result.
const pending = await connection.startStream(sql);
const pending = await connection.startStream(sql, values);
const pending = await connection.startStream(sql, values, types);
// Create a pending result from a prepared statement.
const pending = await prepared.start();
const pending = await prepared.startStream();
while (pending.runTask() !== DuckDBPendingResultState.RESULT_READY) {
// optionally sleep or do other work between tasks
}
// Retrieve the result. If not yet READY, will run until it is.
const result = await pending.getResult();
const reader = await pending.read();
const reader = await pending.readAll();
const reader = await pending.readUntil(targetRowCount);
Ways to get result data
// From a result
// Asynchronously retrieve data for all rows:
const columns = await result.getColumns();
const columnsJson = await result.getColumnsJson();
const columnsObject = await result.getColumnsObject();
const columnsObjectJson = await result.getColumnsObjectJson();
const rows = await result.getRows();
const rowsJson = await result.getRowsJson();
const rowObjects = await result.getRowObjects();
const rowObjectsJson = await result.getRowObjectsJson();
// From a reader
// First, (asynchronously) read some rows:
await reader.readAll();
// or:
await reader.readUntil(targetRowCount);
// Then, (synchronously) get result data for the rows read:
const columns = reader.getColumns();
const columnsJson = reader.getColumnsJson();
const columnsObject = reader.getColumnsObject();
const columnsObjectJson = reader.getColumnsObjectJson();
const rows = reader.getRows();
const rowsJson = reader.getRowsJson();
const rowObjects = reader.getRowObjects();
const rowObjectsJson = reader.getRowObjectsJson();
// Individual values can also be read directly:
const value = reader.value(columnIndex, rowIndex);
// Using chunks
// If desired, one or more chunks can be fetched from a result:
const chunk = await result.fetchChunk();
const chunks = await result.fetchAllChunks();
// And then data can be retrieved from each chunk:
const columnValues = chunk.getColumnValues(columnIndex);
const columns = chunk.getColumns();
const rowValues = chunk.getRowValues(rowIndex);
const rows = chunk.getRows();
// Or, values can be visited:
chunk.visitColumnValues(columnIndex,
(value, rowIndex, columnIndex, type) => { /* ... */ }
);
chunk.visitColumns((column, columnIndex, type) => { /* ... */ });
chunk.visitColumnMajor(
(value, rowIndex, columnIndex, type) => { /* ... */ }
);
chunk.visitRowValues(rowIndex,
(value, rowIndex, columnIndex, type) => { /* ... */ }
);
chunk.visitRows((row, rowIndex) => { /* ... */ });
chunk.visitRowMajor(
(value, rowIndex, columnIndex, type) => { /* ... */ }
);
// Or converted:
// The `converter` argument implements `DuckDBValueConverter`,
// which has the single method convertValue(value, type).
const columnValues = chunk.convertColumnValues(columnIndex, converter);
const columns = chunk.convertColumns(converter);
const rowValues = chunk.convertRowValues(rowIndex, converter);
const rows = chunk.convertRows(converter);
// The reader abstracts these low-level chunk manipulations
// and is recommended for most cases.