Plan restrictions apply: Bulk export is only available on LangSmith Plus or Enterprise tiers.

This guide covers:
- Setting up a GCS bucket and HMAC credentials for LangSmith
- Creating a bulk export destination and export job
- Creating a BigQuery external table over the exported data
- Example queries and troubleshooting tips
1. Create a GCS bucket
Create a dedicated GCS bucket for LangSmith exports. Using a dedicated bucket makes it easier to grant scoped permissions without affecting other data.
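For example, with the gcloud CLI (the bucket name and location are placeholders):

```bash
# Create a dedicated bucket for LangSmith exports; pick your own name and location
gcloud storage buckets create gs://YOUR_BUCKET_NAME --location=US
```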
2. Create a service account and grant access

Create a GCP service account that LangSmith will use to write data to GCS. The only strictly required permission is storage.objects.create. Granting storage.objects.delete is optional, but recommended: LangSmith uses it to clean up a temporary test file created during destination validation. If this permission is absent, a tmp/ folder may remain in your bucket. A sketch of creating and scoping the account follows the permission list below.
The “Storage Object Admin” predefined role covers all required and recommended permissions:
- storage.objects.create (required)
- storage.objects.delete (optional, for test file cleanup)
- storage.objects.get (optional but recommended, for file size verification)
- storage.multipartUploads.create (optional but recommended, for large file uploads)
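A minimal sketch with the gcloud CLI, assuming a service account named langsmith-exporter (any name works):

```bash
# Create the service account that LangSmith will use to write exports
gcloud iam service-accounts create langsmith-exporter \
  --display-name="LangSmith bulk export"

# Grant Storage Object Admin on the export bucket only, keeping the scope narrow
gcloud storage buckets add-iam-policy-binding gs://YOUR_BUCKET_NAME \
  --member="serviceAccount:langsmith-exporter@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```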
3. Generate HMAC keys
LangSmith connects to GCS using the S3-compatible XML API, which requires HMAC keys rather than a service account JSON key. You can generate HMAC keys in the GCP Console under Cloud Storage → Settings → Interoperability → Create a key for a service account, or with the gcloud CLI; in either case, save the accessId and secret from the output.
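For example:

```bash
# Create HMAC keys tied to the service account.
# The output contains the accessId and the secret (shown only once).
gcloud storage hmac create \
  langsmith-exporter@YOUR_PROJECT_ID.iam.gserviceaccount.com
```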
4. Create a bulk export destination
Create a destination in LangSmith pointing to your GCS bucket. Set endpoint_url to https://storage.googleapis.com to use the GCS S3-compatible API.
You will need your LangSmith API key and workspace ID.
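A sketch of the request. The endpoint path and body shape are assumptions about the bulk export API; endpoint_url and prefix are the fields described in this guide:

```bash
# Create a bulk export destination pointing at the GCS bucket (illustrative shape)
curl --request POST \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_LANGSMITH_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
  --data '{
    "destination_type": "s3",
    "display_name": "GCS export destination",
    "config": {
      "bucket_name": "YOUR_BUCKET_NAME",
      "prefix": "YOUR_PREFIX",
      "endpoint_url": "https://storage.googleapis.com"
    },
    "credentials": {
      "access_key_id": "YOUR_HMAC_ACCESS_ID",
      "secret_access_key": "YOUR_HMAC_SECRET"
    }
  }'
```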
prefix is a path within the bucket where LangSmith will write exported files. For example, langsmith-exports or data/traces. Choose any value that works for your bucket layout.
LangSmith validates the credentials by performing a test write before saving the destination. If the request returns a 400 error, see the Troubleshooting section below.
Save the id from the response; you will need it in the next step.
Temporary validation file
During destination creation (and credential rotation), LangSmith writes a temporary .txt file to YOUR_PREFIX/tmp/ to verify write access, then attempts to delete it. The deletion is best-effort: if the service account lacks storage.objects.delete, the file is not deleted and the tmp/ folder remains in your bucket.
The tmp/ folder is harmless and does not affect exports, but it will be included in broad GCS URI globs (e.g., gs://YOUR_BUCKET_NAME/YOUR_PREFIX/*). See Create a BigQuery external table for how to handle this when pointing BigQuery at your data.
5. Create a bulk export job
Create an export targeting a specific project. Use format_version: v2_beta for BigQuery compatibility; it produces UTC timezone-aware timestamps that BigQuery handles correctly.
You will need the project ID (session_id), which you can copy from the project view in the Tracing Projects list.
One-time export:
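A sketch of the request; session_id and format_version come from this guide, while the destination ID and time window field names are assumptions:

```bash
# Create a one-time export of one project over a fixed time window (illustrative shape)
curl --request POST \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_LANGSMITH_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
  --data '{
    "bulk_export_destination_id": "YOUR_DESTINATION_ID",
    "session_id": "YOUR_PROJECT_ID",
    "format_version": "v2_beta",
    "start_time": "2025-01-01T00:00:00Z",
    "end_time": "2025-02-01T00:00:00Z"
  }'
```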
Output file structure
Exported files land in GCS using a Hive-partitioned path structure, illustrated below. The partition keys (export_id, tenant_id, session_id, resource, year, month, day) are available as queryable columns in BigQuery when Hive partition detection is enabled.
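An illustrative object path (the exact values are examples):

```text
gs://YOUR_BUCKET_NAME/YOUR_PREFIX/export_id=<uuid>/tenant_id=<uuid>/session_id=<uuid>/resource=runs/year=2025/month=1/day=15/part-0.parquet
```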
6. Create a BigQuery external table
Grant BigQuery access to GCS
BigQuery needs read access to your bucket. Find your BigQuery service account in GCP Console → BigQuery → Project Settings, then grant it access:
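For example, granting read access at the bucket level (use the service account email you found in Project Settings):

```bash
# Allow the BigQuery service account to read objects in the export bucket
gcloud storage buckets add-iam-policy-binding gs://YOUR_BUCKET_NAME \
  --member="serviceAccount:YOUR_BIGQUERY_SERVICE_ACCOUNT_EMAIL" \
  --role="roles/storage.objectViewer"
```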
Create the external table

Run the DDL in the BigQuery console or with bq.
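A sketch that assumes Parquet export output; YOUR_DATASET and the table name are placeholders:

```sql
-- External table over the exported files, with Hive partition columns inferred
CREATE OR REPLACE EXTERNAL TABLE `YOUR_DATASET.langsmith_runs`
WITH PARTITION COLUMNS
OPTIONS (
  format = 'PARQUET',
  -- export_id=* scopes the table to export directories, skipping tmp/ (see below)
  uris = ['gs://YOUR_BUCKET_NAME/YOUR_PREFIX/export_id=*'],
  hive_partition_uri_prefix = 'gs://YOUR_BUCKET_NAME/YOUR_PREFIX'
);
```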
Why export_id=* instead of *

LangSmith writes a temporary tmp/ folder to your prefix during destination creation and credential rotation (see Temporary validation file). Using export_id=* in the URI scopes BigQuery to only the Hive-partitioned export directories, avoiding any stray files under tmp/. If you have confirmed that your prefix contains only export data (e.g. you manually deleted the tmp/ folder), you can use * instead.

The DDL above uses WITH PARTITION COLUMNS without explicit column definitions to let BigQuery infer them.
7. Query your data
Once your external table is set up, you can query it directly in BigQuery. For the full list of available columns, see Exportable fields.

Daily LLM cost and token usage:
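A sketch; verify column names such as start_time, run_type, total_tokens, and total_cost against Exportable fields:

```sql
-- Daily LLM cost and token usage across exported runs
SELECT
  DATE(start_time) AS day,
  SUM(total_tokens) AS total_tokens,
  SUM(total_cost) AS total_cost
FROM `YOUR_DATASET.langsmith_runs`
WHERE run_type = 'llm'
GROUP BY day
ORDER BY day DESC;
```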
Credential rotation

To rotate your HMAC keys without interrupting active exports:

- Generate new HMAC keys in GCP for the same service account.
- Call the PATCH endpoint with the new credentials (see the sketch after this list). LangSmith validates the new credentials with a test write before saving; a new tmp/ file may appear in your bucket during this validation (see Temporary validation file).
- Keep the old HMAC keys active until all in-flight export runs complete. Both credential sets are valid simultaneously during the transition window.
- Delete the old HMAC keys in GCP once you have confirmed no in-flight runs are using them.
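A sketch of the PATCH call; the endpoint path and body shape are assumptions based on the destination-creation request above:

```bash
# Swap in the new HMAC credentials on the existing destination (illustrative shape)
curl --request PATCH \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/destinations/YOUR_DESTINATION_ID' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_LANGSMITH_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID' \
  --data '{
    "credentials": {
      "access_key_id": "NEW_HMAC_ACCESS_ID",
      "secret_access_key": "NEW_HMAC_SECRET"
    }
  }'
```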
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| 400 Access denied on destination creation | HMAC credentials lack write permission | Verify the service account has storage.objects.create on the bucket |
| 400 Key ID you provided does not exist | HMAC access ID is invalid | Regenerate HMAC keys in GCP |
| 400 Invalid endpoint | Endpoint URL is malformed | Use exactly https://storage.googleapis.com |
| BigQuery table shows no rows | Export not yet complete | Check export status with GET /api/v1/bulk-exports/{export_id} |
| BigQuery partition pruning not working | Incorrect hive_partition_uri_prefix | Ensure the prefix ends at the directory level before the first partition key, e.g. gs://BUCKET/PREFIX |
| BigQuery picks up tmp/ files | Broad URI glob | Use export_id=* in your uris value instead of * |

