Geospatial & AI at the Databricks Data + AI Summit 2025

Earlier this month, the Databricks Data + AI Summit took San Francisco by storm! At this event, the future of data, AI, and cloud-native technologies comes into sharp focus - and this year, that future is looking a whole lot more spatial.
CARTO was proud to be part of the action as a Databricks partner - and we know this year’s announcements are going to change the game for our Databricks users! From the long-awaited debut of Spatial SQL to major advances in intelligent agents with Agent Bricks, Databricks just set the stage for a new era of cloud-native geospatial analytics and enterprise AI.
Couldn’t make it? We’ve got you covered with our round-up of the top announcements for geospatial, and what they mean for you.
The big story from Summit? Agent Bricks, a powerful new framework that changes how enterprises build intelligent agents grounded in their own data.
It’s hard to overstate what this unlocks: rather than spending weeks fine-tuning prompts, adjusting embeddings, and fiddling with expensive LLM configurations, teams can now describe the task in plain language. Agent Bricks does the rest - evaluating, optimizing, and continuously improving performance in production.

Here’s what stood out to us:
- Auto-evaluation & tuning: Agent Bricks generates custom evaluation suites tailored to your task (even with unstructured data like documents or legislative text).
- Performance without compromise: Get higher accuracy and lower costs, thanks to smart model and prompt optimization under the hood.
- Domain-specific reasoning: Whether it’s structured info extraction, multi-agent orchestration, or semantic search - the system adapts to your use case.
Why does this matter for geospatial? Because many of our hardest spatial workflows live at the intersection of maps and meaning. Permitting documents, planning studies, zoning laws - they’re rich in spatial context but locked away in unstructured formats. Agent Bricks gives us a way to extract that meaning with confidence.
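To make that concrete: once an agent is built and deployed, it sits behind a Mosaic AI Model Serving endpoint that you can query like any other model. Here's a minimal sketch using the Databricks Python SDK - the endpoint name is hypothetical, and the chat-style request shape is our own assumption for illustration, not a documented Agent Bricks schema:

```python
# Sketch: querying a deployed document-extraction agent. The endpoint
# name "zoning-extraction-agent" is hypothetical, and we assume the
# agent accepts a chat-style request via Mosaic AI Model Serving.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import ChatMessage, ChatMessageRole

w = WorkspaceClient()  # reads host/token from your Databricks config

permit_text = (
    "The proposed structure at 123 Main St exceeds the 35 ft height "
    "limit for zone R-2 and requires a variance hearing."
)

response = w.serving_endpoints.query(
    name="zoning-extraction-agent",  # hypothetical endpoint name
    messages=[ChatMessage(
        role=ChatMessageRole.USER,
        content=f"Extract the address, zone code and constraint: {permit_text}",
    )],
)
print(response.choices[0].message.content)
```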
It’s a new model for how we bring Location Intelligence into AI-native workflows, and it’s just the beginning…
The wait is over: native Spatial SQL has officially landed in Databricks in public preview - and it’s a huge moment for the spatial analytics community!
This development brings geospatial processing directly to the Databricks Lakehouse Platform, enabling users to perform complex spatial analysis at scale!
The new features are headlined by the introduction of two new fundamental data types:
- GEOMETRY: designed for planar, or flat-earth, representations of spatial data. This is ideal for scenarios where the curvature of the Earth is not a significant factor, such as analyzing data within a city or a small region.
- GEOGRAPHY: for global-scale analysis where incorporating the Earth's spherical nature is critical, this type correctly handles calculations over a spheroid, ensuring accurate results for distance and area measurements across continents.
The initial release includes essential constructors for creating spatial objects from well-known text (WKT) and well-known binary (WKB) formats, as well as functions for basic spatial analysis.
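Here's a minimal sketch of what the distinction looks like in practice, run from a Databricks notebook where `spark` is predefined. The function names (st_geomfromwkt, st_geogfromwkt, st_distance) are assumptions based on the preview's OGC-style naming - check the function reference for your runtime, and note that we're assuming the GEOGRAPHY result comes back in meters:

```python
# Sketch: the two new spatial types side by side. Function names are
# assumed from the preview's OGC-style naming - verify against your
# runtime's documentation before relying on them.
df = spark.sql("""
    SELECT
      -- GEOMETRY: planar distance, in the coordinates' own units
      -- (decimal degrees here, so not directly meaningful as meters)
      st_distance(
        st_geomfromwkt('POINT(-122.45 37.77)'),  -- San Francisco
        st_geomfromwkt('POINT(-74.00 40.71)')    -- New York
      ) AS planar_distance,
      -- GEOGRAPHY: geodesic distance over the spheroid (assumed to
      -- return meters, as in other GEOGRAPHY implementations)
      st_distance(
        st_geogfromwkt('POINT(-122.45 37.77)'),
        st_geogfromwkt('POINT(-74.00 40.71)')
      ) AS geodesic_distance
""")
df.show(truncate=False)
```

Because the GEOGRAPHY version accounts for the Earth's curvature, it's the one you want for continent-scale distance and area measurements.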
Another new feature here is the introduction of H3 spatial indexing! H3 is a global, hierarchical hexagonal grid system that enables incredibly efficient spatial analysis and rendering by encoding each location as a short cell ID rather than a long, complex geometry. You can learn more about H3 and Spatial Indexes in our FREE ebook Spatial Indexes 101!
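Here's a quick sketch of H3 in action using Databricks' built-in h3_* SQL functions, bucketing raw points into hexagonal cells at resolution 9. The taxi_trips table and its longitude/latitude columns are hypothetical:

```python
# Sketch: aggregating point data into H3 cells with Databricks'
# built-in H3 functions. "taxi_trips" and its columns are hypothetical.
hexed = spark.sql("""
    SELECT
      h3_longlatash3string(pickup_lon, pickup_lat, 9) AS h3_cell,
      count(*) AS pickups
    FROM taxi_trips
    GROUP BY h3_cell
    ORDER BY pickups DESC
""")
hexed.show(5, truncate=False)
```

Because every cell is just a short string ID, downstream joins, aggregations, and map rendering stay fast even at very large scales.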
In addition to these functions, CARTO users can leverage our Analytics Toolbox for Databricks for even greater spatial support! This includes:
- Databricks' native Spatial SQL
- Apache Sedona
- Spark™ UDFs
- CARTO’s own functions.
That flexibility matters. Whether you’re running proximity analysis across 100M+ points or applying geofilters at scale, you get the performance you need without rewriting logic or jumping between tools. And we’re just getting started - sign up for our monthly newsletter to stay up to date with our latest releases as they happen!
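As one example of that flexibility, here's a sketch of a simple point-in-polygon check run through Apache Sedona - assuming the Sedona package is installed on your cluster:

```python
# Sketch: a point-in-polygon predicate via Apache Sedona's SQL
# functions. Assumes Apache Sedona (>= 1.4.1) is installed.
from sedona.spark import SedonaContext

sedona = SedonaContext.create(spark)  # registers Sedona's ST_* functions

sedona.sql("""
    SELECT ST_Contains(
      ST_GeomFromWKT('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'),
      ST_Point(5.0, 5.0)
    ) AS inside
""").show()
```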
Another key topic for spatial? Interoperability! The growing support for - and adoption of - open data formats like Apache Iceberg, GeoParquet, and GeoArrow signals a real change in how users store and process data. GeoParquet and GeoArrow are designed to efficiently store and work with large-scale geospatial vector data in a columnar format, while Iceberg adds open table management on top. Backed by a growing ecosystem of tools, these open standards will further enhance the interoperability and performance of geospatial analytics with CARTO and Databricks.
We talk about all things Apache Iceberg and GeoParquet in a recent blog post: How Iceberg, GeoParquet & CARTO are reshaping geospatial.
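Want to get hands-on with GeoParquet today? GeoPandas can already read and write it. A minimal sketch - the file path and data are illustrative, and pyarrow must be installed:

```python
# Sketch: round-tripping vector data through GeoParquet with GeoPandas.
# Requires geopandas and pyarrow; the path and data are illustrative.
import geopandas as gpd
from shapely.geometry import Point

gdf = gpd.GeoDataFrame(
    {"name": ["HQ", "Depot"]},
    geometry=[Point(-122.45, 37.77), Point(-122.27, 37.80)],
    crs="EPSG:4326",
)
gdf.to_parquet("sites.parquet")            # columnar, CRS-aware storage
round_trip = gpd.read_parquet("sites.parquet")
print(round_trip.crs, len(round_trip))
```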
CARTO’s platform is designed to run natively in Databricks - which means no ETL and no silos.
With the addition of native Spatial SQL and a more modular agent ecosystem, that integration just got deeper. You can build advanced geospatial models, share insights across your BI tools, and now even bring intelligent agents into the loop - all on a single, scalable platform.

We're thrilled to be part of Databricks’ vision for open, intelligent data infrastructure - and even more excited for what it means for the future of spatial.
This year’s Databricks Data + AI Summit made one thing clear: the future of analytics is not just bigger and faster - it’s spatial, intelligent, and seamlessly integrated. With native Spatial SQL and Agent Bricks now in play, we’re entering a new era where geospatial data isn’t siloed and AI isn’t a black box. It’s all part of the same, cloud-native ecosystem - and we’re excited to be right at the center of it.
Want to explore how your organization could benefit from spatial analysis native to Databricks? Request a demo from one of our experts!