burn areas. Burn incidents become indistinguishable within 2 days of burning, as shown in Figure 5. However, Sentinel-2 satellite imagery, while freely available, has a revisit interval of 5-6 days. It provides short-wave infrared (SWIR) spectral bands, which are directly correlated with fire.
Daily monitoring of fire incidents is conducted at low spatial resolution using NASA satellite instruments, specifically MODIS and VIIRS. The MODIS instrument captures data four times each day, providing direct detection of fire events at a coarser spatial resolution of 1 kilometer. This data is typically available with a latency of 2 to 3 hours. Similarly, VIIRS collects data once a day, offering direct detection of fires at an improved spatial resolution of 375 meters. Like MODIS, VIIRS data is also available with a latency of 2 to 3 hours. These instruments are critical for consistent and timely monitoring of fire events over large areas.
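As an illustration, active fire detections from both instruments can be retrieved programmatically through NASA's FIRMS area API. The sketch below is a minimal example, assuming a valid FIRMS MAP_KEY and the documented CSV endpoint; the bounding box is an illustrative placeholder.

```python
import pandas as pd

# Minimal sketch: fetch recent active-fire detections from NASA FIRMS.
# MAP_KEY is a free key from https://firms.modaps.eosdis.nasa.gov/api/
# (assumption: a valid key is available). The bounding box is illustrative.
MAP_KEY = "YOUR_FIRMS_MAP_KEY"
BBOX = "73.5,29.5,77.0,32.5"   # west,south,east,north (Punjab region)
DAYS = 2                       # look back two days

def fetch_active_fires(source: str) -> pd.DataFrame:
    """Query the FIRMS area API (CSV output) for one sensor source."""
    url = (f"https://firms.modaps.eosdis.nasa.gov/api/area/csv/"
           f"{MAP_KEY}/{source}/{BBOX}/{DAYS}")
    return pd.read_csv(url)

viirs = fetch_active_fires("VIIRS_SNPP_NRT")   # 375 m resolution
modis = fetch_active_fires("MODIS_NRT")        # 1 km resolution
print(f"VIIRS: {len(viirs)} detections, MODIS: {len(modis)} detections")
```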
Figure 4: Temporal Sentinel imagery of a burn area (May 2023)

Figure 5: Temporal Planet imagery of a burn area (May 2023)
A.1.2 Baseline Method
We conducted an analysis comparing Planet and Sentinel imagery, along with active fire products, to alert authorities and to compute the most recent burn index from Sentinel imagery. Figure 6 illustrates the baseline method for detecting fire-affected areas using remote sensing data. The process begins with two types of input data: active fire detections from MODIS/VIIRS and other remote sensing datasets. These inputs are processed to generate visual representations of the affected regions, marked by red circles. The processed images are then analyzed using various indices, including the Char Index, Burn Area Index, Bare Soil Index, NBR (Normalized Burn Ratio), and others such as MIRBI (Mid-Infrared Burn Index) and BSI (Burn Severity Index), to assess the extent and impact of the burn. The outputs from these indices then feed a time-series change detection analysis to monitor changes over time. The final result is visualized as the spatial distribution of detected changes (marked by red and blue triangles), which aids in identifying patterns and understanding the impact of burning over time.
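To make the index and change-detection steps concrete, the sketch below computes NBR from the NIR and SWIR2 bands (Sentinel-2 B8 and B12) and flags burn candidates from the pre/post-fire difference (dNBR). The arrays and the 0.27 threshold are illustrative assumptions, not values taken from this work.

```python
import numpy as np

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    return (nir - swir2) / (nir + swir2 + 1e-6)

def burn_mask(pre, post, threshold=0.27):
    """Time-series change detection via dNBR = NBR_pre - NBR_post.
    Large positive dNBR indicates vegetation loss consistent with
    burning; the threshold must be tuned per region (0.27 is a
    commonly cited starting point, used here as an assumption)."""
    dnbr = nbr(pre[0], pre[1]) - nbr(post[0], post[1])
    return dnbr > threshold

# Synthetic reflectance stand-ins: [NIR, SWIR2] for two acquisition dates
rng = np.random.default_rng(0)
pre = rng.uniform(0.05, 0.5, size=(2, 256, 256))
post = rng.uniform(0.05, 0.5, size=(2, 256, 256))
mask = burn_mask(pre, post)
print(f"Flagged pixels: {mask.sum()} of {mask.size}")
```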
Figure 6: Workflow of the baseline method using traditional remote sensing indices

A.1.3 Preliminary Results

Combining MODIS, Sentinel, and Planet imagery significantly enhances the accuracy of fire detection, building on methodologies proposed in prior studies. This integrated approach not only improves detection precision but also facilitates the timely issuance of alerts to authorities, enabling prompt action. In Figure 7, the first image, dated October 5, 2022, depicts the area before any burning activity, showing a relatively uniform landscape. The second image, from October 16, 2022, shows the aftermath of stubble burning, with darker patches clearly indicating fire-affected areas. The third image, labeled 'Burn Area Mask' and also dated October 16, 2022, precisely highlights the locations impacted by the burning. The pink mask effectively outlines the burn areas, providing an accurate assessment of the extent of stubble burning. Similarly, the method successfully detected stubble-burnt patches in another region, as shown in Figure 8. This visual analysis is crucial for monitoring agricultural practices and assessing their environmental impact.
Figure 7: Masked burnt area for the burning event of 16 October 2022

Figure 8: Masked burnt area for the burning event of 9 May 2023
A.1.4 Limitations of the baseline method and other techniques
Remote sensing (RS) spectral index-based methods track the spectral differences between two images, one normal and one burned. This is merely a difference of temporal changes at the pixel level, which may arise for reasons unrelated to fire. Some of the RS indices used to detect burning are MNDFI (Modified Normalized Difference Fire Index), BAI (Burned Area Index), and NBR (Normalized Burn Ratio).
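For reference, two of the indices named in this section have simple closed forms in the literature; the sketch below shows BAI and MIRBI (reflectances assumed scaled to 0-1; the Sentinel-2 band mapping in the comments is our assumption).

```python
import numpy as np

def bai(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Burned Area Index: peaks near the charcoal convergence point
    (RED = 0.1, NIR = 0.06) in reflectance space."""
    return 1.0 / ((0.1 - red) ** 2 + (0.06 - nir) ** 2)

def mirbi(swir1: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Mid-Infrared Burn Index: a linear combination of the short
    (~1.6 um, e.g. Sentinel-2 B11) and long (~2.2 um, e.g. B12)
    SWIR bands that is sensitive to char and ash."""
    return 10.0 * swir2 - 9.8 * swir1 + 2.0
```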
Simple models (CNN, R-CNN, etc.) can be used, but they are not promising when training data is limited. They may also fail to capture temporal or positional relationships, and they struggle with data from different sensors, at different resolutions, and under different local conditions. Foundation models, by contrast, are trained on diverse data from different sources and sensors.
A.2 Geospatial Foundation Model
Figure 9: The masked auto-encoder structure for pretraining the Prithvi model on large-scale multi-temporal and multi-spectral satellite images [33].
Foundation models, trained on diverse datasets, are adept at capturing temporal and spatial relationships, making them highly effective for complex tasks like stubble detection. Fine-tuning these models with smaller, labeled datasets further enhances their accuracy. However, many farmers burn stubble at night to avoid detection, making optical imagery alone insufficient. To address this, we need to fine-tune the foundation model on a diverse range of data collected from multiple sensors. This data should include optical and radar imagery, providing robust day/night coverage and mitigating issues like cloud and noise interference. Such multi-modal data fusion cannot be handled effectively by simple deep learning models. Instead, we require a foundation model trained on diverse datasets with varying resolutions and sensor types. For this purpose, we selected the PRITHVI-100M geospatial foundation model, which is trained on three timestamps of Harmonized Landsat Sentinel (HLS) data.
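Concretely, this means the model ingests chips with six spectral channels and three timestamps. The sketch below assembles such an input in the (batch, bands, time, height, width) layout commonly used by temporal ViTs; the band ordering in the comment is our assumption about the HLS convention, not a documented Prithvi requirement.

```python
import numpy as np

# Stack three acquisition dates of six-band HLS chips into a 5-D tensor.
# Assumed band order: Blue, Green, Red, NIR, SWIR1, SWIR2.
T, C, H, W = 3, 6, 224, 224
chips = [np.random.rand(C, H, W).astype(np.float32) for _ in range(T)]
x = np.stack(chips, axis=1)        # (bands, time, H, W)
batch = x[np.newaxis, ...]         # (1, 6, 3, 224, 224)
print(batch.shape)
```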
The PRITHVI-100M model represents a state-of-the-art approach to analyzing high-resolution satellite imagery using advanced machine learning techniques. Built on the Vision Transformer (ViT) architecture, it incorporates 3D patch embedding and 3D positional encoding to process multispectral and temporal satellite data effectively. The model employs a self-supervised learning strategy based on a masked auto-encoder (MAE). During training, multispectral images captured over various time intervals and spectral bands are divided into smaller patches, which are flattened and processed by the model. Its encoder-decoder structure generates a latent representation of the input, which is used to reconstruct the original image. The training process is guided by a Mean Squared Error (MSE) loss function to minimize reconstruction errors (Figure 9). The MAE learning strategy, which involves masking certain patches during training, forces the model to learn underlying data patterns by reconstructing the missing patches. This improves its ability to generalize and enhances its robustness across applications like land cover classification, change detection, and environmental monitoring.
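The sketch below illustrates the MAE training step described here: random patch masking, encoding only the visible patches, decoding a full token sequence with learned mask tokens, and an MSE loss restricted to the masked patches. It is a schematic in PyTorch under assumed dimensions and masking ratio, not the Prithvi implementation, and omits positional encodings for brevity.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Schematic masked auto-encoder over pre-flattened 3D patches."""
    def __init__(self, patch_dim=1536, embed_dim=256, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, embed_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True),
            num_layers=2)
        self.decoder = nn.Linear(embed_dim, patch_dim)  # lightweight decoder
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

    def forward(self, patches):                  # (B, N, patch_dim)
        B, N, D = patches.shape
        n_keep = int(N * (1 - self.mask_ratio))
        perm = torch.rand(B, N, device=patches.device).argsort(dim=1)
        keep, masked = perm[:, :n_keep], perm[:, n_keep:]
        visible = torch.gather(
            patches, 1, keep.unsqueeze(-1).expand(-1, -1, D))
        z = self.encoder(self.embed(visible))    # encode visible patches only
        # Reassemble the full token sequence, mask tokens at masked slots
        tokens = self.mask_token.expand(B, N, -1).clone()
        tokens.scatter_(1, keep.unsqueeze(-1).expand_as(z), z)
        recon = self.decoder(tokens)             # reconstruct every patch
        # MSE loss on masked patches only, as in MAE
        tgt = torch.gather(
            patches, 1, masked.unsqueeze(-1).expand(-1, -1, D))
        pred = torch.gather(
            recon, 1, masked.unsqueeze(-1).expand(-1, -1, D))
        return nn.functional.mse_loss(pred, tgt)

# One synthetic step: e.g. flattened 16x16 patches of a 6-band chip
model = TinyMAE()
loss = model(torch.randn(2, 196, 1536))
loss.backward()
print(float(loss))
```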
In this work, we fine-tune the PRITHVI-100M model on our dataset, using a Swin-B backbone and a state-of-the-art U-Net regressor [49]. Unlike simpler models that struggle to integrate diverse data types effectively, foundation models like PRITHVI-100M can leverage cross-modal learning to capture nuanced relationships between different modalities. This capability significantly enhances the accuracy and robustness of distinguishing stubble burning from other land disturbances.
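A typical fine-tuning setup, sketched below, freezes the pretrained backbone and trains a lightweight decode head on burn/no-burn labels. The stand-in encoder, placeholder head, and hyperparameters are assumptions for illustration; they are not the Swin-B + U-Net configuration used in this work.

```python
import torch
import torch.nn as nn

class SegHead(nn.Module):
    """Placeholder decode head mapping patch tokens to a dense mask."""
    def __init__(self, embed_dim=256, n_classes=2, grid=14):
        super().__init__()
        self.grid = grid
        self.head = nn.Sequential(
            nn.Conv2d(embed_dim, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=16, mode="bilinear"),
            nn.Conv2d(128, n_classes, 1))

    def forward(self, tokens):               # (B, N, D) patch tokens
        B, N, D = tokens.shape
        fmap = tokens.transpose(1, 2).reshape(B, D, self.grid, self.grid)
        return self.head(fmap)               # (B, n_classes, 224, 224)

encoder = nn.Linear(1536, 256)               # stand-in for a pretrained ViT
for p in encoder.parameters():
    p.requires_grad = False                  # freeze the backbone
head = SegHead()
opt = torch.optim.AdamW(head.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

patches = torch.randn(2, 196, 1536)          # flattened image patches
labels = torch.randint(0, 2, (2, 224, 224))  # burn / no-burn mask
logits = head(encoder(patches))
loss = criterion(logits, labels)
loss.backward(); opt.step()
print(float(loss))
```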
PRITHVI-100M is particularly suitable for this task because it is trained on diverse earth observation data across three timestamps, making it well suited to change detection. Additionally, it uses six-band Harmonized Landsat Sentinel (HLS) data, including the SWIR1 and SWIR2 bands, which are highly effective at capturing the burn ratio, as demonstrated in the baseline method.
Overall, PRITHVI-100M represents a significant advancement in geospatial data analysis, combining ViT, 3D positional encoding, and MAE learning to deliver robust and scalable performance on large-scale satellite imagery. After fine-tuning the model, the results will be validated against the baseline method. However, challenges like detecting minor fires persist, underscoring the need for further refinement.