How to use Distributed Tiling service


Traditional stand-alone map caching is usually time-consuming, and a tiling task that fails cannot be recovered. SuperMap iServer provides a distributed tiling service that generates tiles in parallel: different computers can be set up as tiling nodes, which improves tiling efficiency.

The distributed tiling service supports all published map services. The source data can be a SuperMap workspace, WMS, WMTS, a remote REST map service, SuperMap cloud service, Bing Maps, Tianditu maps, MBTiles, SMTiles, etc. In addition, it also supports tiling the 3D layers loaded in 3D scenes.

In the distributed tiling service, the server on which the tiling task is created is called the TileMaster, while the other nodes clustered with it are called TileWorkers. The deployment of the tiling environment and the creation and monitoring of tiling tasks are all conducted on the TileMaster; the TileWorkers need no configuration. After a tiling task is created, the data is automatically deployed to the TileWorkers, and any change on the TileMaster is synchronized to them. For the distributed tiling principle and the internal communication mechanism, please refer to Distributed tiling mechanism.

Tile type

The distributed tiling service can produce 2D and 3D tiles of various tile types.

2D tile

2D tiles are map tiles generated in raster image formats. Map tiles can be stored in FastDFS (distributed storage), MongoDB (distributed storage), SMTiles, MBTiles, GeoPackage, or SuperMap UGC format. (SuperMap UGC is the common, traditional format and can be used across all SuperMap products as long as the UGC tile versions are the same; UGCV5 refers to the V5.0 original cache.)
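To illustrate the difference between a distributed store and a file-based store, the sketch below builds two hypothetical tile-store descriptions. The field names (`type`, `serverAddresses`, `database`, `outputPath`) and the `describe` helper are illustrative assumptions, not the exact iServer schema; consult the iServer REST API reference for the actual tile-store structure.

```python
import json

# Hypothetical tile-store descriptions. The field names below are
# illustrative assumptions, not the exact iServer tile-store schema.
mongodb_store = {
    "type": "MONGODB",                         # distributed storage
    "serverAddresses": ["tiledb-host:27017"],  # assumed parameter name
    "database": "tiles",                       # assumed parameter name
}

smtiles_store = {
    "type": "SMTILES",                           # single-file storage
    "outputPath": "./output/world.smtiles",      # assumed parameter name
}

def describe(store):
    """Return a one-line summary of a store description (helper for this sketch)."""
    return "%s -> %s" % (store["type"], store.get("database") or store.get("outputPath"))

print(describe(mongodb_store))
print(describe(smtiles_store))
print(json.dumps(mongodb_store, indent=2))
```

A distributed store such as MongoDB lets many TileWorkers write tiles concurrently, while a single-file store such as SMTiles is convenient for offline distribution.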

Vector layers can be split and stored as vector tiles. iServer supports SVTiles.

The attribute data of vector layers can be stored as attribute tiles. iServer supports UTFGrid.

For more information on tile types, please refer to Map caches types.

3D tile

The distributed tiling service supports tiling the layers in 3D scenes and storing the tiles in MongoDB. The supported 3D tiles include:

Generated by partitioning the image layers in 3D scene.

Generated by partitioning the terrain layers.

Tiling Workflow

In the distributed tiling service, tiling tasks are created on the TileMaster (URL: http://localhost:8090/iserver/manager/tileservice/jobs); the other nodes clustered with it, the TileWorkers, need no configuration. Because the deployment of the tiling environment and the creation and monitoring of tiling tasks are all conducted on the TileMaster, it must be stable and reliable. For this purpose, SuperMap iServer provides complete maintenance functions for the tiling service, including real-time monitoring and version management for the result tiles.
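A tiling job is created by POSTing a job description to the jobs URL above. The sketch below only builds the request; the payload fields (`mapName`, `format`, `tileSize`, `scaleDenominators`) are illustrative assumptions, so check the iServer REST API reference for the exact job schema before use.

```python
import json
import urllib.request

# The jobs URL on the TileMaster, as given in the text above.
JOBS_URL = "http://localhost:8090/iserver/manager/tileservice/jobs"

# Illustrative job description -- field names are assumptions,
# not the exact iServer job schema.
job = {
    "mapName": "World",                    # assumed: name of the map to tile
    "format": "PNG",                       # assumed: raster tile format
    "tileSize": 256,                       # assumed: tile edge length in pixels
    "scaleDenominators": [4e8, 2e8, 1e8],  # assumed: scales to generate
}

request = urllib.request.Request(
    JOBS_URL,
    data=json.dumps(job).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # not executed here: requires a running TileMaster

print(request.get_method(), request.full_url)
```

After the job is created, its progress can be monitored on the TileMaster through the same jobs resource.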

Partitioning 3D tiles follows essentially the same workflow, except for the step of creating the caching task. Please refer to: Creating caching task - 3D tiles.

All tiles generated by the distributed tiling service, except those in UGCV5 format, can be used by map services automatically; no additional configuration is needed. For UGCV5 tiles, you can manually Configure map provider to use them. If you modified the default settings when tiling, such as the default storage path, please refer to Configure to use the cached tiles.

Furthermore, you can Publish map tiles directly as map services, and you can also distribute the cached tile set to share it offline.

Note: After a tiling task is created, the TileMaster pushes all data under the folder of the tiling workspace (*.smwu, *.sxwu) to the TileWorkers, so put the tiling data and other data in different directories.
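A layout that follows this note keeps the tiling workspace in its own folder, separate from everything else. A minimal sketch (the directory and file names are arbitrary examples):

```python
import tempfile
from pathlib import Path

# Keep the tiling workspace (*.smwu/*.sxwu and its data) in its own folder,
# because everything under that folder is pushed to the TileWorkers.
# Directory and file names below are arbitrary examples.
root = Path(tempfile.mkdtemp())

tile_workspace_dir = root / "tile_data"   # only the workspace and its data
other_data_dir = root / "other_data"      # unrelated data: NOT pushed to workers

tile_workspace_dir.mkdir()
other_data_dir.mkdir()
(tile_workspace_dir / "world.smwu").touch()  # the workspace used for tiling

print(sorted(p.name for p in root.iterdir()))  # → ['other_data', 'tile_data']
```

With this separation, only the contents of `tile_data` are copied to the TileWorkers when a job starts.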

Related Concepts