During the last few months, to create a more compelling platform and provide all the necessary functionality for the new Builder, we’ve made some additions to our Engine. Before we began working on those additions, we learned from our clients, users, and partners how they could best leverage the Engine to accelerate and improve their development workflows.
As most of you have experienced, some geospatial operations, such as spatial intersections or computing a convex hull, involve long-running, CPU-intensive processing. Implementing those methods with the synchronous SQL API had some caveats, like having to block your application’s UI while the query completed.
Today, we’re happy to announce the addition of Batch Queries to our existing SQL API. This will allow you to run queries on your data in CARTO asynchronously.
Creating a Batch job in the SQL API is as simple as:
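A minimal sketch in Python, assuming a CARTO account named `your_username` and an API key with write access (both placeholders): a job is created by POSTing a JSON body with a `query` field to the Batch endpoint.

```python
import json
from urllib import request

# Placeholders -- substitute your own CARTO account and API key.
USERNAME = "your_username"
API_KEY = "your_api_key"
BATCH_URL = f"https://{USERNAME}.carto.com/api/v2/sql/job?api_key={API_KEY}"

# The body is a JSON document with a single "query" field.
# The table and query below are illustrative.
payload = json.dumps({
    "query": "UPDATE airports SET type = 'international' WHERE name LIKE '%Intl%'"
}).encode("utf-8")

req = request.Request(
    BATCH_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually submit the job (requires valid credentials);
# the response includes a job_id you can use to check on the job later:
# with request.urlopen(req) as resp:
#     job = json.load(resp)
#     print(job["job_id"], job["status"])
```

The call returns immediately with a job identifier instead of blocking until the query finishes, which is exactly what keeps your application’s UI responsive.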
After creating a job, you can call the API to get information about its execution: whether the job is waiting to be executed (pending), running, done, failed, or canceled. Getting the information about a job is as simple as:
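Along the same lines, a sketch of polling a job, assuming placeholder credentials and an example `job_id` (the real one comes back in the creation response):

```python
import json
from urllib import request

# Placeholders -- substitute your account, API key, and the job_id
# returned when the job was created.
USERNAME = "your_username"
API_KEY = "your_api_key"
JOB_ID = "de305d54-75b4-431b-adb2-eb6b9e546014"  # illustrative UUID
STATUS_URL = f"https://{USERNAME}.carto.com/api/v2/sql/job/{JOB_ID}?api_key={API_KEY}"

# Uncomment to actually poll the job (requires valid credentials):
# with request.urlopen(STATUS_URL) as resp:
#     job = json.load(resp)

# The response resembles the dict below; "status" is one of the
# states listed above (pending, running, done, failed, canceled).
job = {"job_id": JOB_ID, "status": "done", "query": "UPDATE airports ..."}

TERMINAL_STATES = {"done", "failed", "canceled"}
finished = job["status"] in TERMINAL_STATES
print(finished)
```

A simple loop around this check (with a sleep between iterations) is enough to wait for completion without tying up your UI thread.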
Another thing we noticed is that applying several queries sequentially is a very common practice when working on complex geospatial applications. Part of the magic behind Batch Queries in the SQL API is here to fix that: from now on, you can create chains of queries as part of a single job that will be executed sequentially.
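A sketch of what a chained job’s payload looks like, assuming the same endpoint as before; the only change is that `query` becomes a list of statements (the table names below are illustrative), which the Batch API runs in order:

```python
import json

# A chained job: "query" is a list of SQL statements instead of a
# single string. They run one after another within the same job.
chained_job = {
    "query": [
        "UPDATE airports SET type = 'domestic' WHERE type != 'international'",
        "INSERT INTO airport_reports (report_date) VALUES (now())",
    ]
}

body = json.dumps(chained_job)
print(body)
```

This body would be POSTed to the same `/api/v2/sql/job` endpoint shown earlier; if any statement fails, the statements after it are not executed.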
You’ll also notice in the documentation that you can define ‘on error’ and ‘on success’ fallbacks for each of your queries within a job. This will allow you to specify a new location to store the previously updated data, for example.
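As a rough sketch of that idea, here is what a job with per-query fallbacks might look like; the nested shape and the `onsuccess`/`onerror` key names follow the Batch Queries documentation, and the table names are illustrative:

```python
import json

# A job where each query carries its own fallbacks: "onsuccess" runs
# if the query completes, "onerror" runs if it fails. Key names are
# taken from the Batch Queries documentation; check it for the
# authoritative payload shape.
job_with_fallbacks = {
    "query": {
        "query": [
            {
                "query": "UPDATE flights SET processed = true",
                "onsuccess": "INSERT INTO job_log (msg) VALUES ('flights updated')",
                "onerror": "INSERT INTO job_log (msg) VALUES ('update failed')",
            }
        ]
    }
}

print(json.dumps(job_with_fallbacks, indent=2))
```

The `onerror` hook is a handy place to write a failure marker, or, as mentioned above, to redirect data to a different location.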
Although we use Batch Queries in several areas of the platform, such as the Data-services API or the new Builder, it is the soon-to-be-released Analysis API that will make the most intensive use of Batch Queries. Each analysis workflow in the Builder defines a job in the SQL API. Isn’t that cool?
Now that we are working full throttle on all these new capabilities of the platform, we would love to hear your ideas on how to leverage this new functionality for your projects as well as your feedback on the API itself.