TU Berlin / Faculty of EECS / Remote Sensing Image Analysis Group

Software

This page lists software developed and maintained at RSiM. All code is publicly available for research and private use only.

  • CHNR: Unsupervised Cross-Modal Hashing Method Robust to the Noisy Image-Text Correspondence

    This repository contains the code for our unsupervised cross-modal hashing method robust to noisy image-text correspondences (CHNR). CHNR consists of three modules: 1) a feature extraction module, which extracts feature representations of image-text pairs; 2) a noise detection module, which detects potential noisy correspondences; and 3) a hashing module, which generates cross-modal binary hash codes.

    Repository: CHNR @RSiM-Git

    Accompanying Paper: CHNR: Unsupervised cross-modal hashing method robust to the noisy image-text correspondence
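    The hashing stage common to methods like CHNR can be illustrated with a minimal sketch: continuous embeddings are binarized with the sign function, and retrieval compares codes by Hamming distance. All names and values below are hypothetical, not the repository's API.

```python
# Minimal sketch of a cross-modal hashing stage (illustrative, not CHNR's code):
# real-valued embeddings are binarized with the sign function, and retrieval
# compares the resulting codes by Hamming distance.

def binarize(embedding):
    """Map a real-valued embedding to a {-1, +1} binary hash code."""
    return [1 if x >= 0 else -1 for x in embedding]

def hamming_distance(code_a, code_b):
    """Number of differing bits between two hash codes."""
    return sum(1 for a, b in zip(code_a, code_b) if a != b)

# Toy image/text embeddings (hypothetical values).
image_embedding = [0.7, -0.2, 0.1, -0.9]
text_embedding = [0.5, -0.4, -0.3, -0.8]

image_code = binarize(image_embedding)   # [1, -1, 1, -1]
text_code = binarize(text_embedding)     # [1, -1, -1, -1]
print(hamming_distance(image_code, text_code))  # 1
```

    Hamming distance on binary codes can be computed with bitwise operations, which is what makes hash-based archive search fast at scale.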

  • DUCH: Deep Unsupervised Contrastive Hashing

    This repository contains the code for our deep unsupervised cross-modal contrastive hashing (DUCH) method for RS text-image retrieval. DUCH is made up of two main modules: 1) a feature extraction module, which extracts deep representations of the text and image modalities; and 2) a hashing module, which learns to generate cross-modal binary hash codes from the extracted representations.

    Repository: DUCH @RSiM-Git

    Accompanying Paper: DUCH: Deep Unsupervised Contrastive Hashing

  • Deep Metric Learning-Based Semi-Supervised Regression With Alternate Learning

    This repository contains the code for our deep metric learning-based semi-supervised regression (DML-S2R) method for parameter estimation problems. Our method aims to mitigate the problem of an insufficient number of labeled samples without collecting any additional samples with target values. To this end, DML-S2R is made up of two main steps: i) pairwise similarity modeling with scarce labeled data; and ii) triplet-based metric learning with abundant unlabeled data. The first step estimates the target value differences of labeled samples with a Siamese neural network (SNN) to model pairwise sample similarities. The second step employs the SNN of the first step for triplet-based deep metric learning that exploits not only labeled samples but also unlabeled samples. For the end-to-end training of DML-S2R, an alternate learning strategy is applied for the two steps.

    Repository: DML-S2R @RSiM-Git

    Accompanying Paper: Deep Metric Learning-Based Semi-Supervised Regression With Alternate Learning
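    The two alternating steps can be sketched with their standard loss ingredients: a squared-error loss on predicted target-value differences for labeled pairs, and a margin-based triplet loss on embedding distances for unlabeled samples. The names, the alternation schedule, and the margin value below are illustrative, not the repository's implementation.

```python
# Hedged sketch of DML-S2R-style alternate learning: even epochs use a
# pairwise difference loss on labeled pairs, odd epochs use a triplet
# margin loss on embedding distances. All values are toy examples.

def pairwise_difference_loss(pred_diff, target_a, target_b):
    """Squared error between predicted and true target-value difference."""
    true_diff = target_a - target_b
    return (pred_diff - true_diff) ** 2

def triplet_loss(d_anchor_pos, d_anchor_neg, margin=1.0):
    """Standard margin-based triplet loss on precomputed distances."""
    return max(0.0, d_anchor_pos - d_anchor_neg + margin)

# Alternate learning: switch the objective every epoch.
for epoch in range(4):
    if epoch % 2 == 0:
        loss = pairwise_difference_loss(pred_diff=1.8, target_a=5.0, target_b=3.0)
    else:
        loss = triplet_loss(d_anchor_pos=0.4, d_anchor_neg=1.1)
    print(epoch, round(loss, 4))
```

    In the actual method, both steps update the same SNN backbone, which is why the alternation can be trained end to end.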

  • A Novel Self-Supervised Cross-Modal Image Retrieval Method in Remote Sensing

    This repository contains the code for our self-supervised cross-modal remote sensing image retrieval method. Our method aims to: i) model mutual information between different modalities in a self-supervised manner; ii) keep the distributions of modality-specific feature spaces similar; and iii) identify the most similar images within each modality without requiring any annotated training images. To this end, the objective of our method combines three loss functions that simultaneously: i) maximize the mutual information of different modalities for inter-modal similarity preservation; ii) minimize the angular distance of multi-modal image tuples to eliminate inter-modal discrepancies; and iii) increase the cosine similarity of the most similar images within each modality to characterize intra-modal similarities.

    Repository: SS-CM-RSIR @RSiM-Git

    Accompanying Paper: A Novel Self-Supervised Cross-Modal Image Retrieval Method in Remote Sensing
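    Two of the geometric quantities named above, cosine similarity and angular distance, can be written out directly; the feature vectors below are toy values, and the actual losses in the repository combine these quantities over batches of multi-modal tuples.

```python
import math

# Illustrative sketch of two loss ingredients: cosine similarity (to be
# increased for similar images within a modality) and angular distance
# (to be minimized across modalities). Vectors are hypothetical.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def angular_distance(u, v):
    """Angle (radians) between two feature vectors, in [0, pi]."""
    c = max(-1.0, min(1.0, cosine_similarity(u, v)))  # clamp for acos
    return math.acos(c)

image_feat = [1.0, 0.0]
text_feat = [1.0, 1.0]
print(round(cosine_similarity(image_feat, text_feat), 4))  # 0.7071
print(round(angular_distance(image_feat, text_feat), 4))   # 0.7854
```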

  • Weakly Supervised Semantic Segmentation of Remote Sensing Images for Tree Species Classification Based on Explanation Methods

    This repository contains the code for our comparative study on explanation methods in the context of weakly supervised semantic segmentation for tree species mapping. In this study, four different deep learning explanation methods have been compared in terms of their: 1) segmentation accuracy; 2) model size; and 3) segmentation time. The code is implemented in PyTorch.

    Repository: RS-WSSS @RSiM-Git

    Accompanying Paper: Weakly Supervised Semantic Segmentation of Remote Sensing Images for Tree Species Classification Based on Explanation Methods

  • A Novel Framework to Jointly Compress and Index Remote Sensing Images for Efficient Content-Based Retrieval

    This repository contains the code for our framework to jointly compress and index remote sensing (RS) images for efficient content-based image retrieval (CBIR). RS images are usually stored in compressed format to reduce the storage size of the archives. Thus, CBIR systems in RS require decoding images before applying CBIR, which is computationally demanding in the case of large-scale CBIR problems. To address this problem, we present a joint framework that simultaneously learns RS image compression and indexing, eliminating the need for decoding RS images before applying CBIR. The proposed framework is made up of two modules. The first module effectively compresses RS images based on an auto-encoder architecture. The second module produces hash codes with a high discrimination capability based on a deep hashing method that exploits soft pairwise, bit-balancing and classification loss functions. We also propose a two-stage learning strategy with gradient manipulation techniques to obtain image representations that are compatible with both RS image indexing and compression.

    Repository: RS-JCIF @RSiM-Git

    Accompanying Paper: A Novel Framework to Jointly Compress and Index Remote Sensing Images for Efficient Content-Based Retrieval
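    The bit-balancing objective mentioned above can be sketched in its common deep-hashing formulation: each bit should be +1 for roughly half of the codes in a batch, i.e. the per-bit mean should be near zero. The function and the toy batches below are illustrative, not the repository's loss.

```python
# Hedged sketch of a bit-balancing loss as commonly used in deep hashing:
# penalize the squared per-bit mean over a batch of {-1, +1} codes, so that
# every bit carries close to one bit of information.

def bit_balance_loss(codes):
    """Mean squared per-bit average over a batch of {-1, +1} codes."""
    n_bits = len(codes[0])
    loss = 0.0
    for bit in range(n_bits):
        mean = sum(code[bit] for code in codes) / len(codes)
        loss += mean ** 2
    return loss / n_bits

balanced = [[1, -1], [-1, 1], [1, 1], [-1, -1]]  # every bit: two +1, two -1
skewed = [[1, 1], [1, 1], [1, 1], [1, -1]]       # bits dominated by +1

print(bit_balance_loss(balanced))  # 0.0
print(bit_balance_loss(skewed))    # 0.625
```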

  • A Novel Graph-Theoretic Deep Representation Learning Method for Multi-Label Remote Sensing Image Retrieval

    This repository contains the code for our graph-theoretic deep representation learning method in the context of multi-label remote sensing image retrieval. Our method aims to extract and exploit the multi-label co-occurrence relationships associated with each remote sensing (RS) image in the archive. To this end, each training image is initially represented with a graph structure that provides a region-based image representation combining both local information and the related spatial organization. Unlike other graph-based methods, the proposed method contains a novel learning strategy to train a deep neural network for automatically predicting a graph structure for each RS image in the archive. This strategy employs a region representation learning loss function to characterize the image content based on its multi-label co-occurrence relationships.

    Repository: GT-DRL-CBIR @RSiM-Git

    Accompanying Paper: A Novel Graph-Theoretic Deep Representation Learning Method for Multi-Label Remote Sensing Image Retrieval

  • A Consensual Collaborative Learning Method for Remote Sensing Image Classification Under Noisy Multi-Labels

    This repository contains the code for our multi-label learning method based on the idea of co-training for scene classification of remote sensing (RS) images with noisy labels. Our proposed Consensual Collaborative Multi-Label Learning (CCML) method identifies, ranks and corrects training images with noisy multi-labels through four main modules: 1) discrepancy module; 2) group lasso module; 3) flipping module; and 4) swap module. The discrepancy module ensures that the two networks learn diverse features while obtaining the same predictions. The group lasso module detects potentially noisy labels by estimating the label uncertainty based on the aggregation of the two collaborative networks. The flipping module corrects the identified noisy labels, whereas the swap module exchanges the ranking information between the two networks. The code is written in TensorFlow 2.

    Repository: CCML @RSiM-Git

    Accompanying Paper: A Consensual Collaborative Learning Method for Remote Sensing Image Classification Under Noisy Multi-Labels
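    The detect-and-flip idea behind the noise-handling modules can be sketched as follows: aggregate the per-label probabilities of the two networks, flag labels whose aggregated score strongly contradicts the assigned multi-label vector, and flip the flagged entries. The threshold, names, and values are hypothetical, not the CCML implementation.

```python
# Illustrative detect-and-flip sketch (not the CCML code): flip a label
# when the consensus probability of two networks disagrees with it by
# more than a threshold.

def detect_and_flip(labels, probs_net1, probs_net2, threshold=0.5):
    flipped = list(labels)
    for i, (y, p1, p2) in enumerate(zip(labels, probs_net1, probs_net2)):
        p = (p1 + p2) / 2.0          # consensus of the two networks
        disagreement = abs(y - p)    # high when label contradicts consensus
        if disagreement > threshold:
            flipped[i] = 1 - y       # flip the potentially noisy label
    return flipped

labels = [1, 0, 1, 0]                # hypothetical multi-label vector
probs_net1 = [0.9, 0.1, 0.2, 0.8]    # network 1 per-label probabilities
probs_net2 = [0.8, 0.2, 0.1, 0.9]    # network 2 per-label probabilities
print(detect_and_flip(labels, probs_net1, probs_net2))  # [1, 0, 0, 1]
```

    In CCML itself the uncertainty estimate comes from a group lasso formulation rather than a fixed threshold; the sketch only shows the flow of detection followed by correction.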

  • Informative and Representative Triplet Selection for Multi-Label Remote Sensing Image Retrieval

    This repository contains the code for our informative and representative triplet selection method in the context of multi-label remote sensing image retrieval. Our method selects a small set of the most representative and informative triplets based on two main steps. In the first step, a set of anchors that are diverse to each other in the embedding space is selected from the current mini-batch using an iterative algorithm. In the second step, different sets of positive and negative images are chosen for each anchor by evaluating the relevance, hardness and diversity of the images among each other based on a novel strategy. The selection of the most informative and representative triplets results in: i) a reduction of the computational complexity of the training phase without any significant loss in performance; and ii) an increase in learning speed, since informative triplets allow fast convergence.

    Repository: Image Retrieval from Triplets @RSiM-Git

    Accompanying Paper: Informative and Representative Triplet Selection for Multi-Label Remote Sensing Image Retrieval
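    The first step, selecting anchors that are diverse in the embedding space, can be sketched as a greedy farthest-point strategy: starting from one sample, repeatedly add the mini-batch sample farthest from all anchors chosen so far. The exact iterative algorithm in the repository may differ; the embeddings below are toy values.

```python
import math

# Greedy farthest-point sketch of diverse anchor selection within a
# mini-batch (illustrative; not necessarily the repository's algorithm).

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def select_diverse_anchors(embeddings, n_anchors):
    anchors = [0]  # start from the first sample (arbitrary choice)
    while len(anchors) < n_anchors:
        # pick the sample whose distance to its nearest anchor is largest
        best = max(
            (i for i in range(len(embeddings)) if i not in anchors),
            key=lambda i: min(euclidean(embeddings[i], embeddings[a]) for a in anchors),
        )
        anchors.append(best)
    return anchors

batch = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.0, 0.2], [5.0, 0.0]]
print(select_diverse_anchors(batch, 3))  # [0, 2, 4]
```

    Near-duplicate samples (indices 1 and 3 above) are skipped because they add little diversity, which is what keeps the selected triplet set small.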

  • A Comparative Study of Deep Learning Loss Functions for Multi-Label Remote Sensing Image Classification

    This repository contains the code for our comparative study on deep learning loss functions in the context of multi-label remote sensing image classification. In this study, seven different deep learning loss functions have been compared in terms of their: 1) overall accuracy; 2) class imbalance awareness (for cases in which the number of samples associated with each class varies significantly); 3) convexity and differentiability; and 4) learning efficiency. The code is implemented in TensorFlow.

    Repository: RS-MLC-Losses @RSiM-Git

    Accompanying Paper: A Comparative Study of Deep Learning Loss Functions for Multi-Label Remote Sensing Image Classification

  • SD-RSIC: Summarization Driven Deep Remote Sensing Image Captioning

    This repository contains the code for our Summarization Driven Remote Sensing Image Captioning (SD-RSIC) approach. The SD-RSIC approach consists of three main steps. The first step obtains the standard image captions by jointly exploiting convolutional neural networks (CNNs) with long short-term memory (LSTM) networks. The second step, unlike existing RS image captioning methods, summarizes the ground-truth captions of each training image into a single caption by exploiting sequence-to-sequence neural networks, eliminating the redundancy present in the training set. The third step automatically defines the adaptive weights associated with each RS image to combine the standard captions with the summarized captions based on the semantic content of the image. This is achieved by a novel adaptive weighting strategy defined in the context of LSTM networks. The code is implemented in TensorFlow.

    Repository: SD-RSIC @RSiM-Git

    Accompanying Paper: SD-RSIC: Summarization Driven Deep Remote Sensing Image Captioning
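    The third step's weighted combination can be sketched in its simplest form: a per-image gate is squashed through a sigmoid and used as a convex combination weight between the standard and summarized caption scores. The actual LSTM-based weighting in SD-RSIC is more involved; all names and values here are illustrative.

```python
import math

# Minimal sketch of adaptive weighting: combine two score vectors with an
# image-dependent gate (hypothetical; stands in for the LSTM-based strategy).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def combine(standard_scores, summary_scores, gate):
    """Convex combination controlled by an image-dependent gate value."""
    w = sigmoid(gate)
    return [w * s + (1.0 - w) * t for s, t in zip(standard_scores, summary_scores)]

standard_scores = [0.2, 0.8]  # hypothetical scores from the standard captions
summary_scores = [0.6, 0.4]   # hypothetical scores from the summarized captions
print([round(v, 2) for v in combine(standard_scores, summary_scores, gate=0.0)])  # [0.4, 0.6]
```

    A gate of 0.0 yields equal weighting; a large positive gate favors the standard captions, a large negative gate the summarized ones.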

  • Metric-Learning-Based Deep Hashing Network for Content-Based Retrieval of Remote Sensing Images

    This repository contains the code of our metric-learning based hashing network, which learns: 1) a semantic-based metric space for effective feature representation; and 2) compact binary hash codes for fast archive search. Our network is trained with an interplay of multiple loss functions that jointly learn a metric-based semantic space, in which similar images are clustered together, while producing compact final activations that lose negligible information when binarized.

    Repository: MHCLN @RSiM-Git

    Accompanying Paper: Metric-Learning-Based Deep Hashing Network for Content-Based Retrieval of Remote Sensing Images

  • S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images

    This repository contains the code of the paper S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images. The model has been trained and tested on a WorldView-2 dataset for binary change detection. The model is implemented in PyTorch.

    Repository: S2-cGAN @RSiM-Git

    Accompanying Paper: S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images

  • A Deep Multi-Attention Driven Approach for Multi-Label Remote Sensing Image Classification

    This repository includes the code for our multi-attention driven multi-label scene classification approach. This approach is based on three main steps. The first step describes the complex spatial and spectral content of image local areas by a K-Branch CNN that includes spatial resolution specific CNN branches. The second step initially characterizes the importance scores of the different local areas of each image and then defines a global descriptor for each image based on these scores. This is achieved by a multi-attention strategy that utilizes bidirectional long short-term memory networks. The final step classifies RS image scenes with multi-labels.

    Repository: MAML-RSIC @RSiM-Git

    Accompanying Paper: A Deep Multi-Attention Driven Approach for Multi-Label Remote Sensing Image Classification
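    The second step's attention pooling can be sketched in its basic form: raw importance scores for the local areas are normalized with a softmax, and the local descriptors are pooled into one global descriptor with those weights. The scores and descriptors below are toy values standing in for the bidirectional-LSTM-based strategy.

```python
import math

# Illustrative attention-pooling sketch: softmax over importance scores,
# then a weighted sum of local descriptors into one global descriptor.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def global_descriptor(local_descriptors, scores):
    weights = softmax(scores)
    dim = len(local_descriptors[0])
    return [
        sum(w * d[k] for w, d in zip(weights, local_descriptors))
        for k in range(dim)
    ]

local_descriptors = [[1.0, 0.0], [0.0, 1.0]]  # two toy local areas
scores = [0.0, 0.0]  # equal importance -> plain average
print(global_descriptor(local_descriptors, scores))  # [0.5, 0.5]
```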

  • parallelCollGS: Parallel Download from Sentinel Collaborative Ground Segments

    This repository provides a Python toolchain (parallelCollGS) for parallel queries to download Sentinel-1, -2, and -3 products from a varying number of collaborative ground segments. The toolchain abstracts the sentinelsat Python API client to support parallelized mirror access, and thus provides simultaneous access to both high-speed and high-coverage mirrors while reusing the workflow of the non-parallelized client. While keeping as much of the original client's workflow intact as possible, parallelCollGS includes a fault-tolerant mechanism for accessing multiple mirrors in parallel. In addition, parallelCollGS uses a scheduling strategy for concurrent downloads to ensure optimal utilization of the available bandwidth. The toolchain also provides access to the Hadoop Distributed File System (HDFS) via an Apache Hadoop stack-based interface for the convenient upload of downloaded products.

    Repository: parallelCollGS @RSiM-Git

    Accompanying Paper: An End-to-End Framework for Processing and Analysis of Big Data in Remote Sensing

    Author(s): Viktor Bahr
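    The fault-tolerant parallel idea can be sketched with the standard library alone: each product is fetched concurrently, and on failure the next mirror in the list is tried. The `fetch` callable below stands in for the actual sentinelsat-based download; all names here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of fault-tolerant parallel downloads (illustrative; the
# real toolchain wraps the sentinelsat client and adds bandwidth scheduling).

def download_with_fallback(product_id, mirrors, fetch):
    for mirror in mirrors:
        try:
            return fetch(mirror, product_id)
        except OSError:
            continue  # fault tolerance: try the next mirror
    raise RuntimeError(f"all mirrors failed for {product_id}")

def download_all(product_ids, mirrors, fetch, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(download_with_fallback, p, mirrors, fetch)
                   for p in product_ids]
        return [f.result() for f in futures]

# Toy fetch: the first mirror is "down", the second succeeds.
def toy_fetch(mirror, product_id):
    if mirror == "mirror-a":
        raise OSError("mirror unavailable")
    return f"{product_id}@{mirror}"

print(download_all(["S2A_001", "S1B_002"], ["mirror-a", "mirror-b"], toy_fetch))
# ['S2A_001@mirror-b', 'S1B_002@mirror-b']
```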

  • Deep Learning Models for BigEarthNet-S2 with 43 Classes

    This repository contains: i) code to use the BigEarthNet-S2 archive with the original CORINE Land Cover (CLC) Level-3 class nomenclature for deep learning applications; and ii) model weights for deep learning models that have been pre-trained on BigEarthNet-S2 for scene classification. The code to use the pre-trained deep learning models, to train new models, and to evaluate pre-trained models is implemented based on both TensorFlow and PyTorch.

    Repositories

    Accompanying Paper: BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding

  • Deep Learning Models for BigEarthNet-S2 with 19 Classes

    This repository contains code to use the BigEarthNet Sentinel-2 (denoted as BigEarthNet-S2) archive with the nomenclature of 19 classes for deep learning applications. The nomenclature of 19 classes was defined by interpreting and arranging the CORINE Land Cover (CLC) Level-3 nomenclature based on the properties of Sentinel-2 images. The code to use the pre-trained deep learning models, to train new models, and to evaluate pre-trained models is implemented based on TensorFlow.

    Repositories
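    Converting the CLC Level-3 multi-labels into the aggregated nomenclature amounts to a lookup-table mapping with duplicate removal, which can be sketched as follows. The two mapping entries shown are illustrative; the full mapping is defined in the repository.

```python
# Hedged sketch of aggregating CLC Level-3 multi-labels into a coarser
# nomenclature via a lookup table (entries illustrative, not the full mapping).

CLC_TO_AGGREGATED = {
    "Sea and ocean": "Marine waters",
    "Coastal lagoons": "Marine waters",
    "Non-irrigated arable land": "Arable land",
}

def convert_labels(clc_labels, mapping):
    """Map each CLC Level-3 label, dropping duplicates and unmapped labels."""
    converted = []
    for label in clc_labels:
        target = mapping.get(label)
        if target is not None and target not in converted:
            converted.append(target)
    return converted

print(convert_labels(["Sea and ocean", "Coastal lagoons"], CLC_TO_AGGREGATED))
# ['Marine waters']
```

    Because several Level-3 classes collapse onto one aggregated class, the converted multi-label vectors are shorter than their CLC originals.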

  • Deep Learning Models for BigEarthNet-MM with 19 Classes

    This repository contains code to use the multi-modal BigEarthNet (BigEarthNet-MM) archive with the nomenclature of 19 classes for deep learning applications. The nomenclature of 19 classes was defined by interpreting and arranging the CORINE Land Cover (CLC) Level-3 nomenclature based on the properties of Sentinel-2 images. The tools are implemented as Python scripts.

    Repository: BigEarthNet-MM 19 classes models @RSiM-Git

    Accompanying Paper: BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding

  • BigEarthNet-S2, BigEarthNet-S1 and BigEarthNet-MM Tools

    This repository contains code to use the multi-modal BigEarthNet (BigEarthNet-MM) archive with the nomenclature of 19 classes for deep learning applications. The nomenclature of 19 classes was defined by interpreting and arranging the CORINE Land Cover (CLC) Level-3 nomenclature based on the properties of Sentinel-2 images. The tools are implemented as Python scripts.

    Repositories

    Accompanying Paper: BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding