Software

This page lists software developed and maintained at RSiM. All the code is publicly available for private and research use only.

  • A Consensual Collaborative Learning Method for Remote Sensing Image Classification Under Noisy Multi-Labels

    This repository contains the code for our multi-label learning method based on the idea of co-training for scene classification of remote sensing (RS) images with noisy labels. Our proposed Consensual Collaborative Multi-Label Learning (CCML) method identifies, ranks and corrects training images with noisy multi-labels through four main modules: 1) the discrepancy module; 2) the group lasso module; 3) the flipping module; and 4) the swap module. The discrepancy module ensures that the two networks learn diverse features while producing consistent predictions. The group lasso module detects potentially noisy labels by estimating the label uncertainty based on the aggregated outputs of the two collaborative networks. The flipping module corrects the identified noisy labels, whereas the swap module exchanges the ranking information between the two networks. The code is written in TensorFlow 2. A minimal illustrative sketch of the co-training idea is given below.

    Repository: CCML @RSiM-Git

    Accompanying Paper: A Consensual Collaborative Learning Method for Remote Sensing Image Classification Under Noisy Multi-Labels

    Contact Person: Ahmet Kerem Aksoy
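
    The following is a minimal TensorFlow 2 sketch of the co-training objective behind the discrepancy module (consistent predictions, diverse features). The backbone, loss weights and term definitions are illustrative assumptions, and the group lasso, flipping and swap modules are omitted; this is not the exact CCML implementation.

      import tensorflow as tf

      def make_branch(num_classes=6, image_size=120, bands=3):
          # Hypothetical small CNN branch; the actual CCML backbones differ.
          inputs = tf.keras.Input(shape=(image_size, image_size, bands))
          x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
          feats = tf.keras.layers.GlobalAveragePooling2D()(x)
          logits = tf.keras.layers.Dense(num_classes)(feats)
          return tf.keras.Model(inputs, [feats, logits])

      net_a, net_b = make_branch(), make_branch()
      bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
      optimizer = tf.keras.optimizers.Adam(1e-3)

      @tf.function
      def train_step(images, labels, lam_cons=1.0, lam_div=0.1):  # weights are assumptions
          with tf.GradientTape() as tape:
              feats_a, logits_a = net_a(images, training=True)
              feats_b, logits_b = net_b(images, training=True)
              # Multi-label classification loss for both collaborative networks.
              cls = bce(labels, logits_a) + bce(labels, logits_b)
              # Consistency term: the two networks should agree in prediction space.
              cons = tf.reduce_mean(tf.square(tf.sigmoid(logits_a) - tf.sigmoid(logits_b)))
              # Diversity term: penalize similar feature representations
              # (cosine_similarity returns the negative cosine similarity).
              div = -tf.reduce_mean(tf.keras.losses.cosine_similarity(feats_a, feats_b))
              loss = cls + lam_cons * cons + lam_div * div
          variables = net_a.trainable_variables + net_b.trainable_variables
          optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
          return loss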

  • Informative and Representative Triplet Selection for Multi-Label Remote Sensing Image Retrieval

    This repository contains the code for our informative and representative triplet selection method in the context of multi-label remote sensing image retrieval. Our method selects a small set of the most representative and informative triplets in two main steps. In the first step, a set of anchors that are diverse from each other in the embedding space is selected from the current mini-batch using an iterative algorithm. In the second step, different sets of positive and negative images are chosen for each anchor by evaluating the relevance, hardness and diversity of the images with respect to each other based on a novel strategy. Selecting only the most informative and representative triplets results in: i) a reduction of the computational complexity of the training phase without any significant loss in performance; and ii) an increase in learning speed, since informative triplets allow fast convergence. A small illustrative sketch of such a selection scheme is given below.

    Repository: Image Retrieval from Triplets @RSiM-Git

    Accompanying Paper: Informative and Representative Triplet Selection for Multi-Label Remote Sensing Image Retrieval

    Contact Person: Gencer Sumbul
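
    The following NumPy sketch is in the spirit of the two selection steps: greedy farthest-point anchor selection in the embedding space followed by a simple hard positive / hard negative pick per anchor. The relevance rule, counts and hardness criteria are simplified assumptions and do not reproduce the exact strategy of the paper.

      import numpy as np

      def select_diverse_anchors(embeddings, num_anchors):
          # Greedy farthest-point selection: start from an arbitrary sample and
          # iteratively add the sample farthest from the already chosen anchors.
          chosen = [0]
          min_dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
          while len(chosen) < num_anchors:
              nxt = int(np.argmax(min_dist))
              chosen.append(nxt)
              min_dist = np.minimum(min_dist, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
          return chosen

      def select_triplets(embeddings, labels, num_anchors=8):
          # embeddings: (batch, dim) mini-batch embeddings
          # labels: (batch, num_classes) binary multi-label matrix; two images
          # are treated as relevant if they share at least one label.
          triplets = []
          for a in select_diverse_anchors(embeddings, num_anchors):
              shares_label = (labels @ labels[a]) > 0
              dist = np.linalg.norm(embeddings - embeddings[a], axis=1)
              positives = [i for i in np.where(shares_label)[0] if i != a]
              negatives = [i for i in np.where(~shares_label)[0] if i != a]
              if not positives or not negatives:
                  continue
              # Hard positive: farthest relevant image; hard negative: closest
              # irrelevant image to the anchor.
              p = positives[int(np.argmax(dist[positives]))]
              n = negatives[int(np.argmin(dist[negatives]))]
              triplets.append((a, int(p), int(n)))
          return triplets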

  • A Comparative Study of Deep Learning Loss Functions for Multi-Label Remote Sensing Image Classification

    This repository contains the code for our comparative study on deep learning loss functions in the context of multi-label remote sensing image classification. In this study, seven different deep learning loss functions are compared in terms of their: 1) overall accuracy; 2) class imbalance awareness (i.e., robustness when the number of samples associated with each class varies significantly); 3) convexity and differentiability; and 4) learning efficiency. The code is written in TensorFlow. An illustrative sketch of two common multi-label losses is given below.

    Repository: RS-MLC-Losses @RSiM-Git

    Accompanying Paper: A Comparative Study of Deep Learning Loss Functions for Multi-Label Remote Sensing Image Classification

    Contact Person: Gencer Sumbul
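
    As an illustration of the kind of losses compared, the following TensorFlow sketch implements two common multi-label losses, sigmoid cross-entropy and a focal variant. These are generic formulations and not necessarily identical to the seven losses and settings evaluated in the study, which are documented in the repository.

      import tensorflow as tf

      def sigmoid_cross_entropy(labels, logits):
          # Standard multi-label loss: independent binary cross-entropy per class.
          # labels: multi-hot float tensor; logits: raw class scores.
          return tf.reduce_mean(
              tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))

      def focal_loss(labels, logits, gamma=2.0, alpha=0.25):
          # Focal loss down-weights easy examples, which can help when the
          # number of samples per class is highly imbalanced.
          probs = tf.sigmoid(logits)
          ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
          p_t = labels * probs + (1.0 - labels) * (1.0 - probs)
          alpha_t = labels * alpha + (1.0 - labels) * (1.0 - alpha)
          return tf.reduce_mean(alpha_t * tf.pow(1.0 - p_t, gamma) * ce)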

  • SD-RSIC: Summarization Driven Deep Remote Sensing Image Captioning

    This repository contains the code for our Summarization Driven Remote Sensing Image Captioning (SD-RSIC) approach, which consists of three main steps. The first step obtains the standard image captions by jointly exploiting convolutional neural networks (CNNs) with long short-term memory (LSTM) networks. The second step, unlike existing RS image captioning methods, summarizes the ground-truth captions of each training image into a single caption by exploiting sequence-to-sequence neural networks, eliminating the redundancy present in the training set. The third step automatically defines the adaptive weights associated with each RS image to combine the standard captions with the summarized captions based on the semantic content of the image. This is achieved by a novel adaptive weighting strategy defined in the context of LSTM networks. The code is written in TensorFlow. A sketch of the adaptive weighting step is given below.

    Repository: SD-RSIC @RSiM-Git

    Accompanying Paper: SD-RSIC: Summarization Driven Deep Remote Sensing Image Captioning

    Contact Person: Gencer Sumbul
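
    The following is a minimal TensorFlow sketch of the adaptive weighting idea in the third step, assuming a single scalar gate predicted from the image descriptor blends the per-word scores of the standard and the summarized caption branches. The layer, tensor shapes and class name are assumptions for illustration, not the SD-RSIC implementation.

      import tensorflow as tf

      class AdaptiveCaptionCombiner(tf.keras.layers.Layer):
          """Blends two per-word score tensors with an image-dependent weight."""

          def __init__(self):
              super().__init__()
              self.gate = tf.keras.layers.Dense(1, activation="sigmoid")

          def call(self, image_feature, standard_logits, summarized_logits):
              # image_feature: (batch, feature_dim) CNN descriptor of the RS image
              # *_logits: (batch, seq_len, vocab_size) word scores of the two branches
              w = self.gate(image_feature)[:, :, tf.newaxis]   # (batch, 1, 1)
              return w * standard_logits + (1.0 - w) * summarized_logits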

  • Metric-Learning-Based Deep Hashing Network for Content-Based Retrieval of Remote Sensing Images

    This repository contains the code of our metric-learning-based hashing network, which learns: 1) a semantic-based metric space for effective feature representation; and 2) compact binary hash codes for fast archive search. Our network combines multiple loss functions to jointly learn a metric-based semantic space, in which similar images are clustered together, while producing compact final activations that lose negligible information when binarized. A sketch of the binarization and Hamming-distance search step is given below.

    Repository: MHCLN @RSiM-Git

    Accompanying Paper: Metric-Learning-Based Deep Hashing Network for Content-Based Retrieval of Remote Sensing Images

    Contact Person: Subhankar Roy
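
    The following NumPy sketch illustrates only the retrieval step implied above: binarizing final activations into hash codes and ranking an archive by Hamming distance. The code length and random data are placeholders for illustration; the metric-learning training itself is in the repository.

      import numpy as np

      def to_hash_codes(activations):
          # Threshold the (roughly zero-centred) final activations at 0 to get bits.
          return (activations > 0).astype(np.uint8)

      def hamming_search(query_code, archive_codes, top_k=10):
          # Hamming distance = number of differing bits between query and archive codes.
          distances = np.count_nonzero(archive_codes != query_code, axis=1)
          return np.argsort(distances)[:top_k]

      # Example: 32-bit codes for a query and a small archive of 1000 images.
      rng = np.random.default_rng(0)
      archive = to_hash_codes(rng.standard_normal((1000, 32)))
      query = to_hash_codes(rng.standard_normal(32))
      print(hamming_search(query, archive, top_k=5))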

  • S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images

    This repository contains the code of the paper S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images. The model has been trained and tested on a WorldView-2 dataset for binary change detection. The model is implemented in PyTorch.

    Repository: S2-cGAN @RSiM-Git

    Accompanying Paper: S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images

    Contact People: Jose Luis Holgado Alvarez, Dr. Mahdyar Ravanbakhsh

  • A Deep Multi-Attention Driven Approach for Multi-Label Remote Sensing Image Classification

    This repository includes the code for our multi-attention driven multi-label scene classification approach, which is based on three main steps. The first step describes the complex spatial and spectral content of image local areas with a K-Branch CNN that includes spatial-resolution-specific CNN branches. The second step first characterizes the importance scores of the different local areas of each image and then defines a global descriptor for each image based on these scores. This is achieved by a multi-attention strategy that utilizes bidirectional long short-term memory (LSTM) networks. The final step achieves the classification of RS image scenes with multi-labels. A sketch of the attention pooling step is given below.

    Repository: MAML-RSIC @RSiM-Git

    Accompanying Paper: A Deep Multi-Attention Driven Approach for Multi-Label Remote Sensing Image Classification

    Contact Person: Gencer Sumbul
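
    The following is a minimal TensorFlow sketch of the attention step, assuming precomputed local-area descriptors: a bidirectional LSTM scores the local areas and a weighted sum yields a global descriptor. Layer sizes, dimensions and the class name are assumptions and do not reproduce the K-Branch CNN or the full multi-attention strategy.

      import tensorflow as tf

      class AttentionPooling(tf.keras.layers.Layer):
          """Scores local-area descriptors and pools them into one global descriptor."""

          def __init__(self, units=64):
              super().__init__()
              self.context = tf.keras.layers.Bidirectional(
                  tf.keras.layers.LSTM(units, return_sequences=True))
              self.score = tf.keras.layers.Dense(1)

          def call(self, local_descriptors):
              # local_descriptors: (batch, num_areas, dim) features of image local areas
              context = self.context(local_descriptors)              # (batch, num_areas, 2*units)
              weights = tf.nn.softmax(self.score(context), axis=1)   # importance per local area
              return tf.reduce_sum(weights * local_descriptors, axis=1)  # (batch, dim)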

  • parallelCollGS: Parallel Download from Sentinel Collaborative Ground Segments

    This repository provides the Python toolchain parallelCollGS for running parallel queries to download Sentinel-1, Sentinel-2 and Sentinel-3 products from a varying number of collaborative ground segments. The toolchain abstracts the sentinelsat Python API client to support parallelized mirror access, and thus provides simultaneous access to both high-speed and high-coverage mirrors while reusing the workflow of the non-parallelized client. While keeping as much of the original client's workflow intact as possible, parallelCollGS includes a fault-tolerant mechanism for accessing multiple mirrors in parallel. In addition, it uses a scheduling strategy for concurrent downloads to ensure optimal utilization of the available bandwidth. The toolchain also provides convenient access to the Hadoop Distributed File System (HDFS) via an Apache Hadoop stack-based interface for uploading the obtained products. A small sketch of the parallel-mirror idea is given below.

    Repository: parallelCollGS @RSiM-Git

    Accompanying Paper: An End-to-End Framework for Processing and Analysis of Big Data in Remote Sensing

    Author(s): Viktor Bahr

    Contact Person: Gencer Sumbul
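
    The following sketch only illustrates the basic idea of parallel mirror access on top of the sentinelsat client; it does not include the fault tolerance or scheduling of parallelCollGS. The mirror URLs, credentials and the round-robin assignment are placeholders and assumptions.

      from concurrent.futures import ThreadPoolExecutor
      from sentinelsat import SentinelAPI

      MIRRORS = [
          "https://apihub.copernicus.eu/apihub",        # placeholder mirror URLs
          "https://example-collaborative-segment/dhus",
      ]

      def download_from_mirror(mirror_url, product_id, user, password, out_dir):
          # One sentinelsat client per mirror; each call downloads a single product.
          api = SentinelAPI(user, password, mirror_url)
          return api.download(product_id, directory_path=out_dir)

      def parallel_download(product_ids, user, password, out_dir="."):
          # Round-robin assignment of products to mirrors, downloaded concurrently.
          with ThreadPoolExecutor(max_workers=len(MIRRORS)) as pool:
              futures = [
                  pool.submit(download_from_mirror, MIRRORS[i % len(MIRRORS)],
                              pid, user, password, out_dir)
                  for i, pid in enumerate(product_ids)
              ]
              return [f.result() for f in futures]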

  • Deep Learning Models for BigEarthNet-S2 with 43 Classes

    This repository contains: i) code to use the BigEarthNet-S2 archive with the original CORINE Land Cover (CLC) Level-3 class nomenclature for deep learning applications; and ii) model weights for deep learning models that have been pre-trained on BigEarthNet-S2 for scene classification. The code to use the pre-trained deep learning models, to train new models, and to evaluate pre-trained models is implemented in both TensorFlow and PyTorch. A sketch of a typical multi-label fine-tuning setup is given below.

    Repositories

    Accompanying Paper: BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding

    Contact People: Gencer Sumbul, Tristan Kreuziger
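
    The following PyTorch sketch shows how such pre-trained models are typically used for multi-label scene classification, assuming a ResNet-50 backbone, the 12 Sentinel-2 bands distributed with BigEarthNet stacked as input channels, and a sigmoid/BCE head. The actual model definitions, hyperparameters and weight-loading utilities are provided in the repositories.

      import torch
      import torch.nn as nn
      from torchvision import models

      NUM_CLASSES = 43   # original CLC Level-3 based class nomenclature
      NUM_BANDS = 12     # assumption: all BigEarthNet-S2 bands stacked as channels

      model = models.resnet50(weights=None)   # load the repository weights separately
      model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
      model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

      criterion = nn.BCEWithLogitsLoss()      # independent per-class sigmoid outputs
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

      def train_step(images, multi_hot_labels):
          # images: (batch, NUM_BANDS, H, W); multi_hot_labels: (batch, 43) float 0/1 matrix
          optimizer.zero_grad()
          loss = criterion(model(images), multi_hot_labels)
          loss.backward()
          optimizer.step()
          return loss.item()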

  • Deep Learning Models for BigEarthNet-S2 with 19 Classes

    This repository contains code to use the BigEarthNet Sentinel-2 (denoted as BigEarthNet-S2) archive with the nomenclature of 19 classes for deep learning applications. The nomenclature of 19 classes was defined by interpreting and arranging the CORINE Land Cover (CLC) Level-3 nomenclature based on the properties of Sentinel-2 images. The code to use the pre-trained deep learning models, to train new models, and to evaluate pre-trained models is implemented in TensorFlow.

    Repositories

    Contact People: Gencer Sumbul, Tristan Kreuziger

  • Deep Learning Models for BigEarthNet-MM with 19 Classes

    This repository contains code to use the multi-modal BigEarthNet (BigEarthNet-MM) archive with the nomenclature of 19 classes for deep learning applications. The nomenclature of 19 classes was defined by interpreting and arranging the CORINE Land Cover (CLC) Level-3 nomenclature based on the properties of Sentinel-2 images. The tools are implemented as Python scripts.

    Repository: BigEarthNet-MM 19 classes models @RSiM-Git

    Accompanying Paper: BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding

    Contact Person: Gencer Sumbul

  • BigEarthNet-S2, BigEarthNet-S1 and BigEarthNet-MM Tools

    This repository contains code to use the multi-modal BigEarthNet (BigEarthNet-MM) archive with the nomenclature of 19 classes for deep learning applications. The nomenclature of 19 classes was defined by interpreting and arranging the CORINE Land Cover (CLC) Level-3 nomenclature based on the properties of Sentinel-2 images. The tools are implemented as Python scripts.

    Repositories

    Accompanying Paper: BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding

    Contact People: Gencer Sumbul, Arne De Wall