A Batch Query enables you to request queries with long-running CPU processing times. Typically, these kinds of requests raise timeout errors when using the SQL API. To avoid timeouts, you can use Batch Queries to create, read, and cancel queries. You can also run a chained batch query to chain several SQL queries into one job. A Batch Query schedules the incoming jobs and allows you to request the job status for each query.
Batch Queries are not intended to be used for large query payloads that contain over 16384 characters (16kb). For instance, if you are inserting a large number of rows into your table, you still need to use the Import API or SQL API for this type of data management. Batch Queries are specific to queries and CPU usage.
Note: In order to use Batch Queries, you must be authenticated using API keys.
An API Key is required to manage your jobs. The following error message appears if you are not authenticated:
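The response body is similar to the following (the exact wording may vary between API versions):

```json
{
  "error": [
    "permission denied"
  ]
}
```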
In order to get full access, you must use your API Key.
Using cURL tool:
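As a sketch, an authenticated request with cURL looks like the following, assuming the standard Batch Queries endpoint; replace `{username}`, `{api_key}`, and `{query}` with your own values:

```bash
curl -X POST -H "Content-Type: application/json" -d '{
  "query": "{query}"
}' "https://{username}.carto.com/api/v2/sql/job?api_key={api_key}"
```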
Using Node.js request client:
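A matching sketch with the Node.js request package; the placeholders are yours to fill in:

```javascript
var request = require("request");

var options = {
  method: "POST",
  url: "https://{username}.carto.com/api/v2/sql/job",
  qs: { api_key: "{api_key}" },
  headers: { "content-type": "application/json" },
  body: { query: "{query}" },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```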
Batch Queries Job Schema
A Batch Query request to your CARTO account includes the following job schema elements. Only the query element can be modified; all other elements of the job schema are defined by the Batch Query and are read-only.
- job_id: a universally unique identifier (UUID).
- user: user identifier, as displayed by the username.
- status: displays the result of the long-running query. The possible status results are:
  - "pending": job waiting to be executed.
  - "running": indicates that the job is currently running.
  - "done": job executed successfully.
  - "failed": job executed but failed, with errors.
  - "cancelled": job canceled by user request.
  - "unknown": appears when it is not possible to determine what exactly happened with the job.
- query: the SQL statement to be executed in a database. You can modify the select SQL statement to be used in the job schema.
- created_at: the date and time when the job schema was created.
- updated_at: the date and time when the job schema was last updated, or modified.
- failed_reason: displays the database error message, if something went wrong.

Tip: In some scenarios, you may need to retrieve the query results from a finished job. See Fetching Job Results for details.
Create a Job
To create a Batch Query job, make a POST request with the following parameters.
If you are using the Batch Query create operation for a cURL POST request, use the following code:
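A sketch of the create request, assuming the standard Batch Queries endpoint; the sample UPDATE statement and the `{username}`/`{api_key}` placeholders are illustrative:

```bash
curl -X POST -H "Content-Type: application/json" -d '{
  "query": "UPDATE airports SET type = lower(type)"
}' "https://{username}.carto.com/api/v2/sql/job?api_key={api_key}"
```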
If you are using the Batch Query create operation for a Node.js client POST request, use the following code:
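The same create request sketched with the Node.js request package (placeholders and the sample query are illustrative):

```javascript
var request = require("request");

var options = {
  method: "POST",
  url: "https://{username}.carto.com/api/v2/sql/job",
  qs: { api_key: "{api_key}" },
  headers: { "content-type": "application/json" },
  body: { query: "UPDATE airports SET type = lower(type)" },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```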
Read a Job
To read a Batch Query job, make a GET request with the following parameters.
If you are using the Batch Query read operation for a cURL GET request, use the following code:
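A sketch of the read request; `{job_id}` is the identifier returned when the job was created:

```bash
curl -X GET "https://{username}.carto.com/api/v2/sql/job/{job_id}?api_key={api_key}"
```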
If you are using the Batch Query read operation for a Node.js client GET request, use the following code:
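The same read request sketched with the Node.js request package (placeholders are illustrative):

```javascript
var request = require("request");

var options = {
  method: "GET",
  url: "https://{username}.carto.com/api/v2/sql/job/{job_id}",
  qs: { api_key: "{api_key}" },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```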
Cancel a Job
To cancel a Batch Query, make a DELETE request with the following parameters.
Note: Be mindful when canceling a job, depending on its status:

- If the job is "pending", the job will never be executed
- If the job is "running", the job will be terminated immediately
Note: Jobs can only be canceled while their status is "running" or "pending"; for any other status, the cancel operation is not allowed and you will receive an error.
If you are using the Batch Query cancel operation for a cURL DELETE request, use the following code:
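A sketch of the cancel request; `{job_id}` identifies the job to cancel:

```bash
curl -X DELETE "https://{username}.carto.com/api/v2/sql/job/{job_id}?api_key={api_key}"
```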
If you are using the Batch Query cancel operation for a Node.js client DELETE request, use the following code:
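The same cancel request sketched with the Node.js request package (placeholders are illustrative):

```javascript
var request = require("request");

var options = {
  method: "DELETE",
  url: "https://{username}.carto.com/api/v2/sql/job/{job_id}",
  qs: { api_key: "{api_key}" },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```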
Chaining Batch Queries
In some cases, you may need to chain queries into one job. The Chaining Batch Query option enables you to run an array of SQL statements and define the order in which the queries are executed. You can use any of the operations (create, read, list, update, cancel) for the queries in a chained batch query.
Note: The Batch Query returns a job status for both the parent Chained Batch Query request, and for each child query within the request. The order in which each query is executed is guaranteed. Here are the possible status results for Chained Batch Queries:
- If one query within the Chained Batch Query fails, "status": "failed" is returned for both the job and the failed query, and any "pending" queries will not be processed
- If you cancel the Chained Batch Query job, the job status changes to "status": "cancelled". Any running queries within the job will be stopped and changed to "status": "pending", and will not be processed
- Suppose the first query job status is "status": "done", the second query is "status": "running", and the third query is "status": "pending". If the second query fails for some reason, the job status changes to "status": "failed" and the last query will not be processed. The Chained Batch Query job indicates which query failed
Creating several jobs does not guarantee that the jobs are going to be executed in the same order that they were created. If you need to run queries in a specific order, you may want to use Chaining Batch Queries.
If you are using the Chained Batch Query operation for cURL POST request, use the following code:
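A sketch of the chained create request; the query array runs in order, and the sample statements and table names are illustrative:

```bash
curl -X POST -H "Content-Type: application/json" -d '{
  "query": [
    "UPDATE airports SET type = lower(type)",
    "SELECT * INTO airports_by_type FROM airports"
  ]
}' "https://{username}.carto.com/api/v2/sql/job?api_key={api_key}"
```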
If you are using the Chained Batch Query operation for a Node.js client POST request, use the following code:
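The same chained create request sketched with the Node.js request package (sample statements are illustrative):

```javascript
var request = require("request");

var options = {
  method: "POST",
  url: "https://{username}.carto.com/api/v2/sql/job",
  qs: { api_key: "{api_key}" },
  headers: { "content-type": "application/json" },
  body: {
    query: [
      "UPDATE airports SET type = lower(type)",
      "SELECT * INTO airports_by_type FROM airports"
    ]
  },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```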
If you are using the Chained Batch Query operation for cURL PUT request, use the following code:
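A sketch of the update request; PUT targets an existing job by its job_id and resubmits the query array (the added DROP statement and table names are illustrative):

```bash
curl -X PUT -H "Content-Type: application/json" -d '{
  "query": [
    "UPDATE airports SET type = lower(type)",
    "SELECT * INTO airports_by_type FROM airports",
    "DROP TABLE airports_outdated"
  ]
}' "https://{username}.carto.com/api/v2/sql/job/{job_id}?api_key={api_key}"
```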
If you are using the Chained Batch Query operation for a Node.js client PUT request, use the following code:
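The same update request sketched with the Node.js request package (sample statements are illustrative):

```javascript
var request = require("request");

var options = {
  method: "PUT",
  url: "https://{username}.carto.com/api/v2/sql/job/{job_id}",
  qs: { api_key: "{api_key}" },
  headers: { "content-type": "application/json" },
  body: {
    query: [
      "UPDATE airports SET type = lower(type)",
      "SELECT * INTO airports_by_type FROM airports",
      "DROP TABLE airports_outdated"
    ]
  },
  json: true
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
```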
Chaining Batch Queries with fallbacks
When you need to run an extra query based on how a chaining query finished, Batch Queries enable you to define onerror and onsuccess fallbacks. This powerful feature opens a huge range of possibilities, for instance:
- You can create jobs periodically in order to get updated data and create a new table where you can check the status of your tables.
For this example, you can create the following job:
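A sketch of such a job payload; the tables_status bookkeeping table and the statements are illustrative:

```json
{
  "query": {
    "query": [
      {
        "query": "SELECT * INTO airports_updated FROM airports",
        "onsuccess": "UPDATE tables_status SET updated_at = NOW() WHERE table_name = 'airports_updated'"
      }
    ]
  }
}
```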
If a query finishes successfully, then the onsuccess fallback will be fired. Otherwise, onerror will be fired. You can define fallbacks per query:
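A sketch of per-query fallbacks; the statements and table names are illustrative:

```json
{
  "query": {
    "query": [
      {
        "query": "SELECT * INTO airports_updated FROM airports",
        "onsuccess": "UPDATE tables_status SET updated_at = NOW() WHERE table_name = 'airports_updated'",
        "onerror": "UPDATE tables_status SET failed_at = NOW() WHERE table_name = 'airports_updated'"
      }
    ]
  }
}
```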
…and at the job level:
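A sketch of job-level fallbacks, which fire once for the job as a whole (statements are illustrative):

```json
{
  "query": {
    "query": [
      { "query": "SELECT * INTO airports_updated FROM airports" }
    ],
    "onsuccess": "UPDATE tables_status SET updated_at = NOW() WHERE table_name = 'airports_updated'",
    "onerror": "UPDATE tables_status SET failed_at = NOW() WHERE table_name = 'airports_updated'"
  }
}
```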
If a query of a job fails (and onerror fallbacks for that query and job are defined), then Batch Queries runs the first fallback for that query. The job fallback runs next and sets the job as failed. Remaining queries will not be executed. Furthermore, Batch Queries will run the onsuccess fallback at the job level if, and only if, every query has finished successfully.
Batch Queries provide a simple way to get the error message and the job identifier to be used in your fallbacks, by using the following templates:
- <%= error_message %>: will be replaced by the error message raised by the database.
- <%= job_id %>: will be replaced by the job identifier that Batch Queries provides.
This is helpful when you want to save error messages into a table:
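A sketch of a job that logs failures using both templates; the errors_log table and its columns are illustrative:

```json
{
  "query": {
    "query": [
      { "query": "SELECT * INTO airports_updated FROM airports" }
    ],
    "onerror": "INSERT INTO errors_log (job_id, error_message, date) VALUES ('<%= job_id %>', '<%= error_message %>', NOW())"
  }
}
```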
More templates are coming soon.
Fetching Job Results
In some scenarios, you may need to fetch the output of a job. If that is the case, wrap the query with SELECT * INTO, or CREATE TABLE AS. The output is stored in a new table in your database. For example, if using the query SELECT * FROM airports:

1. Wrap the query: SELECT * INTO job_result FROM (SELECT * FROM airports) AS job
2. Create a job, as described previously
3. Once the job is done, fetch the results through the CARTO SQL API: SELECT * FROM job_result
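The steps above can be sketched with cURL; the placeholders, and the SQL API query endpoint used in the last step, are illustrative:

```bash
# Create the job that materializes the results into job_result
curl -X POST -H "Content-Type: application/json" -d '{
  "query": "SELECT * INTO job_result FROM (SELECT * FROM airports) AS job"
}' "https://{username}.carto.com/api/v2/sql/job?api_key={api_key}"

# Once the job status is "done", fetch the results through the SQL API
curl "https://{username}.carto.com/api/v2/sql?q=SELECT%20*%20FROM%20job_result&api_key={api_key}"
```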
Note: If you need to create a map or analysis with the new table, use the CDB_CartodbfyTable function.
For best practices, follow these recommended usage notes when using Batch Queries:
Batch Queries are recommended for INSERT, UPDATE, and CREATE queries that manipulate and create new data, such as creating expensive indexes, applying updates over large tables, and creating tables from complex queries. Batch Queries have no effect for SELECT queries that retrieve data but do not store the results in a table. For example, running a batch query using SELECT * FROM my_dataset will not produce any results.
Batch Queries are not intended for large query payloads (e.g., inserting thousands of rows); use the Import API for this type of data management.
There is a limit of 16kb per job. The following error message appears if your job exceeds this size:
Your payload is too large. Max size allowed is 16384 (16kb)