Tutorial · April 6, 2026

DuckDB Browser Tutorial: Query Parquet, CSV & JSON Files with SQL in Your Browser

A complete guide to the DevToolSets DuckDB Browser — load data files, write SQL queries, explore tables, and export results, all running locally in your browser via WebAssembly.

What You'll Learn

  • How DuckDB runs entirely in your browser via WebAssembly
  • Loading Parquet, CSV, and JSON files for querying
  • Starting with an empty in-memory database
  • Writing and executing SQL queries
  • Browsing tables and viewing column schemas
  • Exporting query results as CSV
  • Using query history to re-run past statements

How It Works

Unlike the Postgres and MySQL tools, the DuckDB Browser doesn't connect to any external server. It runs DuckDB entirely inside your browser using WebAssembly (WASM).

This means:

  • No server, no credentials, no network requests for queries
  • Your data never leaves your machine
  • It works offline once the page is loaded
  • Performance depends on your device's resources

DuckDB is an analytical database engine optimized for OLAP workloads — it's particularly fast at scanning and aggregating columnar data like Parquet files.
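For instance, DuckDB can scan a Parquet file in place with its read_parquet table function, aggregating directly over the columnar data — the filename and columns below are illustrative:

SELECT category, count(*) AS n
FROM read_parquet('events.parquet')
GROUP BY category
ORDER BY n DESC;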

Step 1: Loading Data Files

1. Open the DuckDB Browser

Navigate to the DuckDB Browser tool. You'll see a drop zone on the left and a query editor on the right.

2. Drop or select a file

Drag and drop a file onto the drop zone, or click to browse. Supported formats:

  • .parquet — Apache Parquet columnar files
  • .csv — comma-separated values
  • .json / .jsonl / .ndjson — JSON and newline-delimited JSON

3. Data is loaded automatically

The file is registered with DuckDB and its contents are imported into a table called imported. DuckDB auto-detects the schema, column types, and delimiters.
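Conceptually, the automatic import is similar to running one of DuckDB's reader functions yourself — a rough sketch, assuming a CSV file named data.csv (the tool may do this differently under the hood):

CREATE TABLE imported AS SELECT * FROM read_csv_auto('data.csv');
DESCRIBE imported; -- inspect the detected column names and types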

Don't have a file?

Click "Create empty database" to start with a blank in-memory DuckDB instance. You can create tables and insert data using SQL.
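A minimal sketch of building a table from scratch in an empty database (table and column names are just examples):

CREATE TABLE users (id INTEGER, name VARCHAR, signup_date DATE);
INSERT INTO users VALUES
  (1, 'Ada', DATE '2026-01-15'),
  (2, 'Grace', DATE '2026-02-03');
SELECT * FROM users;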

Step 2: Running SQL Queries

1. Write your SQL

The query editor on the right comes pre-filled with:

SELECT 42 AS answer;

Replace it with any DuckDB-compatible SQL.

2. Execute

Click the Run button. Since DuckDB runs locally, there is no network round-trip — queries execute directly against your in-memory data, with speed limited only by data size and your device.

3. Read the results

Results appear in the Results tab at the bottom in a table format, with execution time displayed above. Errors show the full DuckDB error message.

Example queries to try

SELECT * FROM imported LIMIT 10;
SUMMARIZE imported;
SELECT column_name, data_type FROM information_schema.columns WHERE table_name = 'imported';
CREATE TABLE test AS SELECT range AS id, random() AS value FROM range(1000);
SELECT count(*), avg(value), min(value), max(value) FROM test;

Step 3: Browsing Tables

1. View the Tables tab

The bottom panel shows a Tables tab listing all tables in the main schema. Each entry shows the table name, type, and columns with their data types.

2. Click a table to inspect it

Clicking a table name runs a SELECT query against it and shows the data in the Results tab. This is the quickest way to peek at any table's contents.

Tables are refreshed automatically after each query, so newly created tables appear immediately.

Step 4: Exporting Data

Once you have data in your DuckDB instance, you can export all tables as CSV files. Click the download button in the file panel — each table is exported as a separate CSV file that downloads to your browser's default location.

Tip: Export specific queries

If you only need specific data, use DuckDB's built-in COPY command in your SQL:

COPY (SELECT * FROM imported WHERE value > 100) TO '/tmp/filtered.csv' (HEADER, DELIMITER ',');

Step 5: Using Query History

How it works

  • Every query you run is saved to the History tab (up to 20 entries)
  • Click any past query to re-run it instantly
  • History is stored in your browser's IndexedDB and persists across sessions
  • Older entries are automatically pruned when the limit is reached

Step 6: Customizing the Layout

The interface uses a resizable split layout. Drag the dividers to adjust:

  • Horizontal divider — adjust space between the file panel and query editor
  • Vertical divider — adjust space between the top panels and the bottom results area

Common Use Cases

Exploring Parquet files

Drop a Parquet file and run SUMMARIZE imported; to get instant statistics on every column — count, min, max, mean, and more.

Cleaning CSV data

Load a CSV, run transformation queries, and export the cleaned data as a new CSV.
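A sketch of a typical cleanup pass — the column names are hypothetical, and try_cast quietly turns unparseable values into NULL instead of failing:

CREATE TABLE cleaned AS
SELECT trim(name) AS name,
       try_cast(amount AS DOUBLE) AS amount
FROM imported
WHERE name IS NOT NULL;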

Quick analytics

Run aggregations, window functions, and joins on local data without setting up any database infrastructure.
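For example, a running total computed with a window function — the table and columns here are hypothetical:

SELECT order_date,
       amount,
       sum(amount) OVER (ORDER BY order_date) AS running_total
FROM orders
ORDER BY order_date;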

Learning SQL

Create tables, insert data, and practice SQL in a zero-setup environment that runs entirely in your browser.

Tips & Best Practices

  • Parquet files are the most efficient format for DuckDB — they load faster and use less memory than CSV
  • Use SUMMARIZE tablename; for quick column statistics instead of writing manual aggregations
  • DuckDB supports window functions, CTEs, and most modern SQL features
  • Data stays in memory — refreshing the page resets everything, so export before closing
  • For very large files, performance depends on your device's available RAM
  • Query history persists across page reloads even though the data doesn't
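As an illustration of the CTE and window-function support mentioned above — this assumes the imported table has a timestamp column named ts, which is purely hypothetical:

WITH daily AS (
  SELECT date_trunc('day', ts) AS day, count(*) AS events
  FROM imported
  GROUP BY day
)
SELECT day,
       events,
       avg(events) OVER (ORDER BY day ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS events_7d_avg
FROM daily
ORDER BY day;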

Ready to try it out?

Open DuckDB Browser →