Render images

When you’re satisfied with your scenario, you can insert a render call to mark the end of your notebook.

Declaring the render call
# Run this cell to render using your viewport
# Do not run this cell if you want to send a render job to our servers
scenario.render()

Render different types of images and labels

To render different types of images and labels, pass render parameters to the render() call:

Add render parameters
scenario.render(params=RenderParams.RgbWithThermal()) # RGB + thermal emulation

The following parameters are available:

RenderParams.RgbWithLabels() - Default. Captures RGB images with labels.
RenderParams.RgbWithDepth() - Captures RGB images with labels and depth maps.
RenderParams.RgbWithGeoTIFF() - Captures RGB images (in GeoTIFF format) with labels.
RenderParams.RgbWithNITF() - Captures RGB images (in NITF format) with labels.
RenderParams.RgbWithThermal() - Captures RGB with labels and thermal maps.
RenderParams.Rgb() - Captures only RGB images.
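
For example, to capture depth maps alongside RGB images and labels, pass the corresponding preset from the list above to the same call:

Add depth maps
scenario.render(params=RenderParams.RgbWithDepth()) # RGB + labels + depth maps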

Maximum render quantity

Each scenario.render() call or Send render job > request may produce up to 1000 renders.

If you require a dataset above this limit, consider splitting it into multiple jobs.
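
As a rough sketch of the arithmetic (the dataset size below is hypothetical), you can estimate how many jobs a larger dataset needs before splitting it:

Estimate the number of jobs
import math

total_images = 2500  # hypothetical target dataset size
max_per_job = 1000   # per-job render limit
jobs_needed = math.ceil(total_images / max_per_job)  # 3 jobs of at most 1000 renders each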

There are two main ways to render images:

Option 1: Interactive render

Suitable for small previews. Run the scenario.render() cell directly in the notebook. This takes over the viewport, loops through the keyframes, prepares the scene, and executes each render one at a time.

Option 2: Send render job

For larger datasets (more than 100 images), it's not ideal to use your viewport for rendering, since you'll need to keep the browser tab open. To send your current notebook to our servers for rendering, arrange your cells from top to bottom, ending with a scenario.render() call, but do not run that render call.

Instead, click Send render job > in the top right. This method runs your cells from top to bottom and does not take over the viewport.
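
As a sketch of how such a notebook might end (the GeoTIFF preset is just one example from the list above), the final cell declares the render call and is left un-run:

Final cell of a send render job notebook
# Do not run this cell - click Send render job > instead
scenario.render(params=RenderParams.RgbWithGeoTIFF())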

To avoid downloading large files over the network, it's recommended to split larger datasets into multiple jobs and use a separate notebook for each job.