| h1 (string, 12 classes) | h2 (string, 93 classes) | h3 (string, 0–64 chars) | h5 (string, 0–79 chars) | content (string, 24–533k chars) | tokens (int64, 7–158k) | content_embeddings_openai_text_embedding_3_small_512 (sequence, length 512) | content_embeddings_potion_base_8M_256 (sequence, length 256) |
|---|---|---|---|---|---|---|---|
Summary | This document contains [DuckDB's official documentation and guides](https://duckdb.org/) in a single-file easy-to-search form.
If you find any issues, please report them [as a GitHub issue](https://github.com/duckdb/duckdb-web/issues).
Contributions are very welcome in the form of [pull requests](https://github.com/duc... | 184 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Connect | Connect | Connect or Create a Database | To use DuckDB, you must first create a connection to a database. The exact syntax varies between the [client APIs](#docs:api:overview) but it typically involves passing an argument to configure persistence. | 43 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Connect | Connect | Persistence | DuckDB can operate in both persistent mode, where the data is saved to disk, and in in-memory mode, where the entire data set is stored in the main memory.
> **Tip.** Both persistent and in-memory databases use spilling to disk to facilitate larger-than-memory workloads (i.e., out-of-core processing).
#### Persis... | 362 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Connect | Concurrency | Handling Concurrency | DuckDB has two configurable options for concurrency:
1. One process can both read and write to the database.
2. Multiple processes can read from the database, but no processes can write ([`access_mode = 'READ_ONLY'`](#docs:configuration:overview::configuration-reference)).
When using option 1, DuckDB supports multi... | 214 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Connect | Concurrency | Concurrency within a Single Process | DuckDB supports concurrency within a single process according to the following rules. As long as there are no write conflicts, multiple concurrent writes will succeed. Appends will never conflict, even on the same table. Multiple threads can also simultaneously update separate tables or separate subsets of the same tab... | 102 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Connect | Concurrency | Writing to DuckDB from Multiple Processes | Writing to DuckDB from multiple processes is not supported automatically and is not a primary design goal (see [Handling Concurrency](#::handling-concurrency)).
If multiple processes must write to the same file, several design patterns are possible, but would need to be implemented in application logic. For example, ... | 254 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Connect | Concurrency | Optimistic Concurrency Control | DuckDB uses [optimistic concurrency control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control), an approach generally considered to be the best fit for read-intensive analytical database systems, as it speeds up read query processing. As a result, any transactions that modify the same rows at the same time wi... | 108 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Data Import | Importing Data | The first step to using a database system is to insert data into that system.
DuckDB can directly connect to [many popular data sources](#docs:data:data_sources) and offers several data ingestion methods that allow you to easily and efficiently fill up the database.
On this page, we provide an overview of thes... | 79 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Data Import | Importing Data | `INSERT` Statements | `INSERT` statements are the standard way of loading data into a database system. They are suitable for quick prototyping, but should be avoided for bulk loading as they have significant per-row overhead.
```sql
INSERT INTO people VALUES (1, 'Mark');
```
For a more detailed description, see the [page on the `INSERT ... | 77 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Data Import | Importing Data | CSV Loading | Data can be efficiently loaded from CSV files using several methods. The simplest is to use the CSV file's name:
```sql
SELECT * FROM 'test.csv';
```
Alternatively, use the [`read_csv` function](#docs:data:csv:overview) to pass along options:
```sql
SELECT * FROM read_csv('test.csv', header = false);
```
Or use... | 239 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Data Import | Importing Data | Parquet Loading | Parquet files can be efficiently loaded and queried using their filename:
```sql
SELECT * FROM 'test.parquet';
```
Alternatively, use the [`read_parquet` function](#docs:data:parquet:overview):
```sql
SELECT * FROM read_parquet('test.parquet');
```
Or use the [`COPY` statement](#docs:sql:statements:copy::copy--... | 121 | [512-dim embedding, preview truncated] | [256-dim embedding, preview truncated] |
Downloads last month: 9