Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

coding

FTPnet

Published:

Implementation of an FTP client in .NET 3.5 for desktop and portable applications.

Volume Mixer Plus

Published:

Improves the default Windows Volume Mixer by allowing one to change the default playback device using shortcuts.

NNCreator

Published:

GUI for creating network graphs for the CNTK framework, which can afterwards be trained and evaluated in C++.

Data Viewer

Published:

Utility tool to inspect various data formats to verify data integrity in machine learning.

Keras Utility & Layer Collection

Published:

Collection of custom layers for Keras that are missing from the main framework. These layers might be useful for reproducing current state-of-the-art deep learning papers with Keras.
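For illustration, here is a minimal, hypothetical example of what a custom Keras layer looks like; the layer name and its behaviour are assumptions for demonstration only and are not taken from the collection itself.

```python
# Minimal, hypothetical custom Keras layer; illustrative only, not part of the collection.
import tensorflow as tf

class ScaledResidualDense(tf.keras.layers.Layer):
    """Dense block with a learnable, scalar-gated residual connection."""

    def build(self, input_shape):
        # Project back to the input dimensionality so the residual addition is valid.
        self.dense = tf.keras.layers.Dense(input_shape[-1], activation="relu")
        # Scalar gate for the residual branch, initialized to zero.
        self.scale = self.add_weight(name="scale", shape=(), initializer="zeros")

    def call(self, inputs):
        return inputs + self.scale * self.dense(inputs)

# Usage: the layer builds itself on the first call, like any Keras layer.
layer = ScaledResidualDense()
outputs = layer(tf.random.normal((8, 64)))  # shape (8, 64)
```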

Lidar Viewer

Published:

Point cloud viewer with surface reconstruction for LIDAR data using OpenGL.

EmoMatch

Published:

Unsupervised Audio + Video Network Pretraining using PyTorch, based on the correlation between the audio and video signals.

VGGVox for PyTorch

Published:

Implementation of the VGGVox network in PyTorch.

Bayesian Spiking Neural Networks

Published:

Implementation of the paper "Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints" by Habenschuss et al.

portfolio

publications

Observing spatio-temporal dynamics of excitable media using reservoir computing

Published in Chaos: An Interdisciplinary Journal of Nonlinear Science, 2018

Examination of recurrent neural networks (Echo State Networks) for the spatio-temporal cross prediction of chaotic systems; a toy sketch of the reservoir approach follows this entry.

Zimmermann, R. S. and Parlitz, U. (2018). Observing spatio-temporal dynamics of excitable media using reservoir computing. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(4), 043118.

https://aip.scitation.org/doi/abs/10.1063/1.5022276
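A toy sketch of the echo state network idea referenced above: a fixed random reservoir driven by the input, plus a ridge-regression readout. The reservoir size, spectral radius, and the sine-wave data are placeholder assumptions, not the settings of the paper.

```python
# Toy echo state network: random reservoir plus linear (ridge) readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))            # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # rescale to spectral radius 0.9

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)                            # shape (T, n_res)

def fit_readout(states, targets, ridge=1e-6):
    """Ridge regression from reservoir states to the target time series."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ targets)

# Cross prediction on toy data: infer a shifted sine wave from the original signal.
t = np.linspace(0, 50, 2000)
u, y = np.sin(t)[:, None], np.sin(t + 0.5)[:, None]
states = run_reservoir(u)
W_out = fit_readout(states, y)
prediction = states @ W_out
```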

Increasing the robustness of DNNs against image corruptions by playing the Game of Noise

Published in Towards Trustworthy ML: Rethinking Security and Privacy for ML (ICLR 2020 Workshop), 2020

Simple but properly tuned training with additive noise generalizes surprisingly well to unseen corruptions.

Rusak, E., Schott, L., Zimmermann, R. S., Bitterwolf, J., Bringmann, O., Bethge, M. and Brendel, W., Increasing the robustness of DNNs against image corruptions by playing the Game of Noise.

https://trustworthyiclr20.github.io/rusak.pdf

A simple way to make neural networks robust against diverse image corruptions

Published in European Conference on Computer Vision (ECCV) 2020, 2020

Simple but properly tuned training with additive noise generalizes surprisingly well to unseen corruptions; a minimal training sketch follows this entry.

Rusak, E., Schott, L., Zimmermann, R. S., Bitterwolf, J., Bringmann, O., Bethge, M. and Brendel, W., A simple way to make neural networks robust against diverse image corruptions.

https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123480052.pdf
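A minimal sketch of the kind of additive-noise training referred to above, written in PyTorch; the noise level, model, and data are placeholders, not the tuned setup of the paper.

```python
# Training with additive Gaussian noise as data augmentation (placeholder setup).
import torch
import torch.nn as nn

class AdditiveGaussianNoise(nn.Module):
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        # Add zero-mean Gaussian noise and keep pixel values in [0, 1].
        return (x + self.sigma * torch.randn_like(x)).clamp(0.0, 1.0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # placeholder classifier
augment = AdditiveGaussianNoise(sigma=0.1)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a placeholder batch.
images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(augment(images)), labels)
loss.backward()
optimizer.step()
```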

Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX

Published in Journal of Open Source Software, 2020

Foolbox is a popular Python library for benchmarking the robustness of machine learning models against adversarial perturbations; a minimal usage sketch follows this entry.

Rauber, J., Zimmermann, R. S., Bethge, M. and Brendel, W., Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX.

https://joss.theoj.org/papers/10.21105/joss.02607.pdf
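A minimal usage sketch, assuming the Foolbox 3.x API: it measures clean accuracy and robust accuracy of a PyTorch model under an L-infinity PGD attack. The model, preprocessing constants, and perturbation budget below are placeholders.

```python
# Benchmarking adversarial robustness with Foolbox (3.x API); placeholder model and budget.
import foolbox as fb
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# A small batch of sample images bundled with Foolbox.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
print("clean accuracy:", fb.utils.accuracy(fmodel, images, labels))

# Projected gradient descent attack at a fixed L-infinity perturbation budget.
attack = fb.attacks.LinfPGD()
raw, clipped, success = attack(fmodel, images, labels, epsilons=8 / 255)
print("robust accuracy:", 1.0 - success.float().mean().item())
```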

Natural images are more informative for interpreting CNN activations than synthetic feature visualizations

Published in NeurIPS 2020 Workshop: Shared Visual Representations in Human & Machine Intelligence, 2020

Using human psychophysical experiments, we show that natural images can be significantly more informative for interpreting neural network activations than synthetic feature visualizations.

Borowski, J., Zimmermann, R. S., Schepers, J., Geirhos, R., Wallis, T. S. A., Bethge, M. and Brendel, W., Natural images are more informative for interpreting CNN activations than synthetic feature visualizations.

https://openreview.net/forum?id=-vhO2VPjbVa

Contrastive Learning can Identify the Underlying Generative Factors of the Data

Published in NeurIPS 2020 Workshop: Self-Supervised Learning - Theory and Practice, 2020

We rigorously show that models trained on a common contrastive loss can implicitly invert the underlying generative model of the observed data up to affine transformations.

Zimmermann, R. S., Schneider, S., Sharma, Y., Bethge, M. and Brendel, W., Contrastive Learning can Identify the Underlying Generative Factors of the Data.

https://sslneuips20.github.io/files/CameraReadys%203-77/67/CameraReady/Contrastive_Learning_can_Identify_the_Underlying_Generative_Factors_of_the_Data_SSL_NeurIPS_2020.pdf

Contrastive Learning Inverts the Data Generating Process

Published in ICML 2021, 2021

We rigorously show that models trained with a common contrastive loss can implicitly invert the underlying generative model of the observed data up to affine transformations; a sketch of such a contrastive objective follows this entry.

Zimmermann, R. S., Sharma, Y., Schneider, S., Bethge, M. and Brendel, W., Contrastive Learning Inverts the Data Generating Process.

https://sslneuips20.github.io/files/CameraReadys%203-77/67/CameraReady/Contrastive_Learning_can_Identify_the_Underlying_Generative_Factors_of_the_Data_SSL_NeurIPS_2020.pdf
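A minimal, hypothetical sketch of the kind of contrastive (InfoNCE-style) objective the result refers to, in PyTorch; the encoder, the noisy "augmentations", and the temperature are placeholder assumptions.

```python
# InfoNCE-style contrastive loss on two augmented views of the same batch (placeholder setup).
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss for paired embeddings z1, z2 of shape (N, D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))          # positive pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x = torch.rand(16, 3, 32, 32)
view1 = x + 0.05 * torch.randn_like(x)          # two noisy views as stand-in augmentations
view2 = x + 0.05 * torch.randn_like(x)
loss = info_nce_loss(encoder(view1), encoder(view2))
loss.backward()
```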

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization

Published in NeurIPS 2020 Workshop: Shared Visual Representations in Human & Machine Intelligence, 2021

Using human psychophysical experiments, we show that natural images can be significantly more informative for interpreting neural network activations than synthetic feature visualizations.

Borowski, J., Zimmermann, R. S., Schepers, J., Geirhos, R., Wallis, T. S. A., Bethge, M. and Brendel, W., Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization.

https://openreview.net/forum?id=-vhO2VPjbVa

Reconstructing complex cardiac excitation waves from incomplete data using echo state networks and convolutional autoencoders

Published in Frontiers in Applied Mathematics and Statistics, 2021

We use two approaches from machine learning, echo state networks and convolutional autoencoders, to solve two relevant data modelling tasks in cardiac dynamics, bridging the gap between measurable and non-measurable quantities.

Herzog, S., Zimmermann, R. S., Abele, J., Luther, S. and Parlitz, U. (2021). Reconstructing Complex Cardiac Excitation Waves From Incomplete Data Using Echo State Networks and Convolutional Autoencoders. Front. Appl. Math. Stat. 6:616584. doi: 10.3389/fams.2020.616584

https://www.frontiersin.org/articles/10.3389/fams.2020.616584/full

Score-Based Generative Classifiers

Published in arXiv, 2021

We evaluate score-based generative models as classifiers on CIFAR-10 and find that they yield good accuracy and likelihoods but no adversarial robustness; a schematic sketch of generative classification follows this entry.

Zimmermann, R. S., Schott, L., Song, Y., Dunn, B. A. and Klindt, D. A., Score-Based Generative Classifiers.

https://arxiv.org/abs/2110.00473
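A schematic sketch of generative classification via Bayes' rule, the principle behind using a (score-based) generative model as a classifier: with a uniform prior, the class with the highest conditional likelihood log p(x | y) wins. The per-class Gaussian likelihood below is a stand-in assumption, not an actual score-based model.

```python
# Generative classification: pick argmax_y log p(x | y) under a uniform class prior.
import torch

def classify(x, log_likelihood, num_classes=10):
    """Return the class whose conditional likelihood of x is highest."""
    log_px_given_y = torch.stack(
        [log_likelihood(x, y) for y in range(num_classes)], dim=1
    )                                            # shape (batch, num_classes)
    return log_px_given_y.argmax(dim=1)

# Placeholder likelihood: one fixed Gaussian per class (stand-in for a score-based model).
means = torch.randn(10, 3 * 32 * 32)
def log_likelihood(x, y):
    diff = x.flatten(1) - means[y]
    return -0.5 * (diff ** 2).sum(dim=1)

x = torch.rand(4, 3, 32, 32)
predictions = classify(x, log_likelihood)        # shape (4,)
```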

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

Published in NeurIPS 2021, 2021

Using psychophysical experiments, we show that state-of-the-art synthetic feature visualizations do not support causal understanding much better than no visualizations at all, and only about as well as other visualizations such as natural dataset samples.

Zimmermann, R. S., Borowski, J., Geirhos, R. , Bethge, M., Wallis, T. S. A. and Brendel, W., How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

https://openreview.net/forum?id=vLPqnPf9k0

Provably Learning Object-Centric Representations

Published in ICML 2023, 2023

We analyze when object-centric representations can be learned without supervision and introduce two assumptions, compositionality and irreducibility, to prove that ground-truth object representations can be identified.

Brady, J., Zimmermann, R. S., Sharma, Y., Schölkopf, B., von Kügelgen, J. and Brendel, W., Provably Learning Object-Centric Representations.

https://arxiv.org/pdf/2305.14229.pdf

Don’t trust your eyes: on the (un)reliability of feature visualizations

Published in arXiv, 2023

Feature visualizations seek to explain how neural networks process natural images, but as we show both experimentally and analytically, they can be unreliable (for instance, they can be manipulated to show arbitrary patterns).

Geirhos, R., Zimmermann, R. S., Bilodeau, B., Brendel, W. and Kim, B., Don’t trust your eyes: on the (un)reliability of feature visualizations.

https://arxiv.org/pdf/2306.04719.pdf

Scale Alone Does not Improve Mechanistic Interpretability in Vision Models

Published in arXiv, 2023

We compare the mechanistic interpretability of vision models that differ in scale, architecture, training paradigm and dataset size, and find that none of these design choices has a significant effect on the interpretability of individual units. We release a dataset of unit-wise interpretability scores that enables research on automated alignment.

Zimmermann, R. S., Klein, T. and Brendel, W., Scale Alone Does not Improve Mechanistic Interpretability in Vision Models.

https://arxiv.org/abs/2307.05471

talks