How CARTO generates and serves map tiles in the cloud-native era

Tile generation has always been at the heart of what makes CARTO fast and useful. Over the last decade, the way spatial platforms generate and serve map tiles has shifted substantially: from managing dedicated rendering infrastructure and parsing database logs to querying cloud data warehouses directly and generating each tile on demand.

Where the conversation used to be about tuning a shared PostgreSQL instance and a tile renderer, it is now about data warehouse query performance, spatial indexing strategies, and choosing the right tile generation approach for your dataset. The problem is the same: how do you serve map tiles fast, at scale, with accurate data? But the architecture, the tooling, and the answers are entirely different.

This post covers how CARTO's cloud-native platform generates tiles today, how to measure performance, and how to get the best results from it.

From rendering servers to powerful data warehouses

In the previous generation of geospatial platforms, tiles were generated by querying a PostgreSQL/PostGIS database, passing the geometry through a dedicated renderer, and caching the output. Measuring performance meant instrumenting database logs, a fragile, manual process that required custom tooling just to extract basic information like which tiles were being rendered repeatedly and how long each query took.

Today, CARTO connects directly to your cloud data warehouse. There is no intermediate database, no ETL, no data copy. When a tile is requested, the query goes straight to BigQuery, Snowflake, Databricks, Redshift, Oracle, or PostgreSQL, wherever your data already lives. The data warehouse executes the spatial query, and CARTO's Maps API processes the returned data and generates the tile. Your data never moves.

This changes everything about how we think about performance. The bottleneck is no longer a shared rendering instance with opaque log files. It's the query execution time inside your data warehouse, which is both more transparent and more controllable.

How CARTO generates tiles

Depending on your dataset size, update frequency, and performance requirements, CARTO supports two broad approaches: dynamic tiling and pre-generated tilesets. Within pre-generated tilesets, there are three distinct types. Choosing the right combination is the most important performance decision you'll make.

1. Dynamic Tiling

Dynamic Tiling is the default approach for most use cases. When a user requests a tile at coordinates (z/x/y), CARTO's Maps API sends a query to your data warehouse scoped to that tile's bounding box. Only the data that falls within that tile is retrieved and rendered on the fly, every time.

This means your maps always reflect the current state of your data. No regeneration, no pipeline, no lag. If your table in BigQuery is updated, the next tile request reflects that update immediately.

Dynamic tiling architecture: per-tile bounding box queries sent directly to the data warehouse
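The per-tile bounding box comes straight from the standard slippy-map tile scheme. As a minimal Python sketch (illustrative only, not CARTO's actual implementation), the `tile_bounds` helper below converts z/x/y coordinates into the WGS84 envelope that scopes the warehouse query:

```python
import math

def tile_bounds(z: int, x: int, y: int):
    """Return the WGS84 bounding box (west, south, east, north) of the
    slippy-map tile z/x/y under the standard Web Mercator tiling."""
    n = 2 ** z
    west = x / n * 360.0 - 180.0
    east = (x + 1) / n * 360.0 - 180.0
    # Latitude comes from the inverse Web Mercator projection.
    north = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * y / n))))
    south = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * (y + 1) / n))))
    return west, south, east, north
```

The resulting envelope would then parameterize a spatial predicate in the warehouse query, e.g. something like `ST_INTERSECTS(geom, ST_MAKEENVELOPE(west, south, east, north))`; the exact function names vary by warehouse.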

Dynamic tiles are served in multiple formats: MVT (Mapbox Vector Tiles), binary (deck.gl's binary data mode for MVT, optimized for fast client-side rendering), and GeoJSON. The binary format is the default for most connections because of its compact size and fast deserialization on the client side.

Performance is governed by a per-tile query timeout. If a tile query exceeds this timeout, CARTO returns an empty tile rather than stalling the map. Feature limits also apply per tile to keep rendering fast. For large or complex datasets where queries consistently hit those limits at a given zoom level, pre-generated tilesets are a better fit, and we'll cover those below.
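The fallback behavior can be sketched as a hard deadline around the tile query. This is a toy Python illustration of the pattern, assuming a hypothetical `run_query` callable; the actual timeout value and mechanism are internal to CARTO's Maps API:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

EMPTY_TILE = b""  # stand-in; a real server returns a valid empty MVT payload

def serve_tile(run_query, timeout_s: float = 5.0) -> bytes:
    """Run the tile query with a hard deadline; on timeout, return an
    empty tile instead of blocking the map while the query drags on."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(run_query)
    try:
        return future.result(timeout=timeout_s)
    except TimeoutError:
        return EMPTY_TILE
    finally:
        pool.shutdown(wait=False)
```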

As an example, dynamic tiling can render a table with 11.3 million points directly from a cloud data warehouse, with no pre-processing and no pre-generated tiles.

2. Pre-generated Tilesets

For very large datasets with hundreds of millions of rows, complex geometries, or cases where query-time rendering would be too slow, CARTO supports pre-generated tilesets. These are tiles computed in advance using the Analytics Toolbox and stored directly in your data warehouse as tables.

There are three types:

  • Simple Tilesets: pre-render raw vector geometry across a defined zoom range. Best for polygon or line datasets that are too heavy for dynamic rendering at low zoom levels.
  • Aggregation Tilesets: pre-aggregate point data by zoom level, computing counts or metrics for each zoom step during generation. Best for point datasets where zoom-level aggregation is the intended visualization and you want those aggregations pre-computed rather than calculated at query time.
  • Spatial Index Tilesets: pre-computed from data already structured as H3 or Quadbin spatial index cells. The index cells themselves become the tile features, which makes serving fast and the visualization semantically meaningful.

Because the tiles are already computed and stored in your warehouse, the Maps API fetches them with a simple table lookup: no spatial query, no geometry computation. This is the fastest possible path for serving tiles from large datasets.
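Conceptually, a pre-generated tileset is a table keyed by tile coordinates, so serving reduces to a keyed lookup. The in-memory stand-in below is purely illustrative; CARTO's actual storage schema differs:

```python
# Hypothetical stand-in for a tileset table keyed by (z, x, y).
# The point: serving a pre-generated tile is a lookup, not a spatial query.
tileset = {
    (0, 0, 0): b"<mvt bytes for the whole world>",
    (1, 0, 0): b"<mvt bytes for the northwest quadrant>",
}

def fetch_pregenerated(z: int, x: int, y: int) -> bytes:
    # Roughly: SELECT data FROM tileset_table WHERE z = ? AND x = ? AND y = ?
    return tileset.get((z, x, y), b"")  # empty tile if not materialized
```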

The trade-off is freshness: tilesets need to be regenerated when underlying data changes. For datasets that update infrequently (daily, weekly), this is usually acceptable. For real-time or frequently changing data, Dynamic Tiling is the better choice.

A note on spatial index data

If your data is already structured as spatial index cells (H3 hexagons or Quadbin cells), CARTO can serve it efficiently either dynamically or via a pre-generated Spatial Index Tileset. The spatial index defines the zoom-level relationship, so tile generation is a fast lookup rather than a spatial geometry query, regardless of which approach you use.

For large point datasets where you want to show density or aggregated metrics at query time (rather than pre-computing them), aggregate your data to an H3 resolution in your warehouse before visualizing. This is a pattern that works well with dynamic tiling: rather than rendering millions of individual points, you query a pre-aggregated spatial index table and render a few thousand cells.
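The aggregation step can be sketched in pure Python. Quadbin cells follow the same quadtree as Web Mercator tiles, so the simplified Bing-style quadkey below is a reasonable stand-in (it is not CARTO's Quadbin encoding, and in practice you would run this aggregation in SQL inside the warehouse):

```python
import math
from collections import Counter

def quadkey(lng: float, lat: float, level: int) -> str:
    """Map a point to the quadkey of the Web Mercator tile containing it."""
    n = 2 ** level
    x = int((lng + 180.0) / 360.0 * n)
    y = int((1 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2 * n)
    x, y = min(max(x, 0), n - 1), min(max(y, 0), n - 1)
    key = ""
    for i in range(level, 0, -1):
        digit = ((x >> (i - 1)) & 1) | (((y >> (i - 1)) & 1) << 1)
        key += str(digit)
    return key

def aggregate(points, level):
    """Count points per cell: the table you visualize instead of raw points."""
    return Counter(quadkey(lng, lat, level) for lng, lat in points)
```

Instead of millions of rows, the map then renders one feature per occupied cell, each carrying a count or any other pre-computed metric.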

A classic example: 255.5 million global lightning strikes, aggregated to an H3 grid for a fast, visually coherent map. No individual point rendering, no geometry queries: just H3 cell lookups.

The same pattern applies with OSM-derived data at global scale. This map of Overture Maps buildings (built predominantly from OpenStreetMap) aggregates individual building footprints to H3 cells as you zoom out, switching dynamically between spatial index aggregation and raw geometry depending on zoom level.

For insurers and retailers, this kind of zoom-adaptive visualization is directly applicable to risk mitigation and site planning: at a regional level you see aggregate exposure, and as you zoom in you get asset-level detail. Incorporating geospatial pricing models into this workflow (where H3 cells carry enriched risk scores, property values, or footfall metrics) significantly improves risk measurement precision compared to traditional polygon-based approaches.

Measuring performance today

For years, measuring tile generation performance meant building custom log parsers and working backwards from database logs to understand query patterns. Today, CARTO provides a more direct path.

Because CARTO queries your data warehouse directly, the query execution is visible in your warehouse's own observability tools: BigQuery's INFORMATION_SCHEMA, Snowflake's QUERY_HISTORY, or equivalent views in other warehouses. You can see what tile generation cost in time, bytes processed, and compute, without any log parsing, because the warehouse already tracks it.
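Once the rows are in hand, summarizing tile-query cost is straightforward. The sketch below uses hypothetical, simplified rows standing in for what a warehouse history view might return (real column names differ, e.g. `total_bytes_processed` in BigQuery's `INFORMATION_SCHEMA`):

```python
# Hypothetical simplified rows from a warehouse query-history view.
jobs = [
    {"query": "SELECT ... tile 5/15/12 ...", "ms": 420, "bytes": 12_500_000},
    {"query": "SELECT ... tile 5/15/13 ...", "ms": 380, "bytes": 11_900_000},
    {"query": "SELECT * FROM unrelated",     "ms": 90,  "bytes": 1_000},
]

def tile_query_stats(jobs):
    """Aggregate duration and bytes processed for tile-generation queries."""
    tile_jobs = [j for j in jobs if "tile" in j["query"]]
    return {
        "count": len(tile_jobs),
        "avg_ms": sum(j["ms"] for j in tile_jobs) / len(tile_jobs),
        "total_bytes": sum(j["bytes"] for j in tile_jobs),
    }
```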

Caching plays a central role in making that observability useful. When you're benchmarking or testing changes to your data structure, you'll want to ensure you're measuring fresh query execution rather than cached tile results. Check the Maps API documentation for options to control cache behavior during testing.

In production, CARTO uses a multi-layer cache with long TTLs for stable tile content and shorter TTLs for dynamic data. For public maps with infrequently changing data, the CDN layer means most tile requests never reach the data warehouse at all.

Getting better tile performance

The biggest gains come from the data layer, not the tile layer. A few principles that consistently make a difference:

1. Cluster your table on the geo column

BigQuery, Snowflake, and other warehouses support table clustering or partitioning by a geometry or spatial index column. When your table is clustered on an H3 or Quadbin column, the warehouse can skip blocks of data that fall outside a tile's bounding box, rather than scanning the full table. This directly reduces tile query time and warehouse costs, and is one of the highest-leverage changes you can make before touching anything in CARTO.
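Why clustering helps is easy to model: warehouses keep min/max metadata per storage block, and a clustered table lets the planner skip blocks whose cell range cannot intersect the tile. A toy Python simulation of that pruning (block layout and cell values are invented for illustration):

```python
# Toy model: a table clustered on a spatial index column, with per-block
# min/max cell metadata. A tile query only needs overlapping blocks.
blocks = [
    {"min_cell": 100, "max_cell": 199, "rows": 1_000_000},
    {"min_cell": 200, "max_cell": 299, "rows": 1_000_000},
    {"min_cell": 300, "max_cell": 399, "rows": 1_000_000},
]

def blocks_to_scan(blocks, tile_min, tile_max):
    """Keep only blocks whose [min_cell, max_cell] overlaps the tile's range."""
    return [b for b in blocks
            if b["max_cell"] >= tile_min and b["min_cell"] <= tile_max]
```

On an unclustered table, rows with nearby cells are scattered across all blocks, so every block overlaps every tile range and nothing can be pruned.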

2. Match the tile approach to the dataset

Dynamic Tiling works best when data fits comfortably within per-tile feature limits and queries return within the timeout window. If you're consistently hitting timeouts or feature caps at mid-zoom levels, switch to a tileset for those zoom levels. You can mix approaches: use a pre-generated tileset for low zoom levels (views of larger areas, think countries or continents) where data density is highest, and switch to dynamic tiles at higher zoom levels (views of granular areas, think neighborhood or street level) where the viewport is small enough to query quickly.
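The mixed strategy boils down to a zoom threshold. A minimal sketch, assuming a hypothetical per-map `tileset_max_zoom` setting:

```python
def tile_source(z: int, tileset_max_zoom: int = 8) -> str:
    """Serve pre-generated tiles at low zooms (wide, dense views) and
    switch to dynamic queries at higher zooms, where the viewport is
    small enough to query quickly. The threshold is a per-map choice."""
    return "tileset" if z <= tileset_max_zoom else "dynamic"
```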

3. Aggregate before you visualize

For point datasets, avoid rendering every row as an individual point at low zoom levels. Use spatial index aggregation (H3 or Quadbin) to show density or metrics instead. At low zoom levels, aggregating to a spatial index grid enables better pattern recognition across larger geographies: regional clusters, hotspots, and distribution trends that would be invisible under millions of overlapping dots become immediately readable. This is both faster and more informative, and a heatmap of activity at a national or continental scale will always tell a clearer story than raw point rendering.

4. Use the Analytics Toolbox for pre-processing

The CARTO Analytics Toolbox runs natively inside your data warehouse and includes geometry simplification, spatial index generation, and tileset creation functions. Running these as part of a CARTO Workflow on a schedule means your tilesets stay fresh automatically, without any external pipeline. Workflows can be fully automated and scheduled for recurring updates, so your spatial data stays current without manual intervention. That means your team spends less time managing pipelines and more time doing what actually matters: delivering insights.

Conclusion

The tile generation challenge has always been the same: serve accurate spatial data fast, at scale. What has changed over the last decade is where the work happens and how you measure it. The rendering infrastructure that used to sit between your data and your users is gone. Your tile performance is now a direct expression of your data warehouse query performance. It's a well-understood, well-instrumented domain with mature tooling built in.

CARTO's role is to make sure the spatial layer on top of those platforms is as fast as possible: from the query it sends to the warehouse, to the format it returns to the map client. If you want to dig further into how Dynamic Tiling works in practice, see our deep-dive on Dynamic Tiling: the key to highly performant cloud-native maps. And if you're designing a spatial application and want to talk through the right tile strategy for your dataset, we're happy to walk through it.
