Implementation of an FTP client in .NET 3.5 for desktop and portable applications.
Improves the default Windows Volume Mixer by allowing one to change the default playback device using shortcuts.
GUI to create network graphs for the CNTK framework, which can afterwards be trained and evaluated in C++.
Utility tool to inspect various data formats to verify data integrity in machine learning.
Collection of custom layers for Keras that are missing from the main framework. These layers might be useful for reproducing current state-of-the-art deep learning papers with Keras.
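For illustration, this is roughly what defining such a custom layer looks like: a minimal, hypothetical learnable-scaling layer written against the tf.keras API (not one of the repository's actual layers).

# Minimal custom Keras layer: a learnable per-feature scaling.
# Hypothetical example; not an actual layer from the repository.
import tensorflow as tf
from tensorflow import keras

class LearnableScale(keras.layers.Layer):
    def build(self, input_shape):
        # One trainable scale per input feature.
        self.scale = self.add_weight(
            name="scale", shape=(input_shape[-1],),
            initializer="ones", trainable=True)

    def call(self, inputs):
        return inputs * self.scale

# Usage: the layer drops into a model like any built-in one.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    LearnableScale(),
    keras.layers.Dense(1),
])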
Keras implementation of the paper Show, Attend and Tell.
Implementation of the Transformer architecture described by Vaswani et al. in Attention Is All You Need.
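At the heart of that architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A minimal single-head NumPy sketch, without masking or batching:

# Scaled dot-product attention from "Attention Is All You Need".
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # weighted sum of values

# Toy usage: 3 query positions, 4 key/value positions, dimension 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)  # shape (3, 8)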
Point cloud viewer with surface reconstruction for LIDAR data using OpenGL.
Unsupervised audio + video network pretraining using PyTorch, based on the correlation between audio and video signals.
Implementation of the VGGVox network in PyTorch.
Implementation of the paper Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints by Habenschuss et al.
Reference implementation of Faster Training of Mask R-CNN by Focusing on Instance Boundaries.
Published in University of Göttingen, 2017
Examination of recurrent neural networks (Echo State Networks) for multiple prediction tasks in chaotic systems; a minimal reservoir-computing sketch follows the reference below.
Zimmermann, Roland S. (2017). Prediction of spatio-temporal dynamics using reservoir computing.
https://github.com/FlashTek/rcp_spatio_temporal/raw/master/paper/thesis/latex/thesis.pdf
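For context, an echo state network feeds the input through a fixed random recurrent reservoir and trains only a linear readout, typically by ridge regression. A minimal NumPy sketch with illustrative toy values (not the setup used in the thesis):

# Minimal echo state network: fixed random reservoir, ridge-regression readout.
# Sizes, spectral radius, and regularization are toy values.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    # u: (T, n_in) input sequence -> (T, n_res) reservoir states
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x)
    return np.array(states)

# Toy task: predict the next input value from the reservoir state.
u = rng.normal(size=(1000, n_in))
y = np.roll(u, -1, axis=0)
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge regression
pred = X @ W_out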
Published in arXiv, 2018
Improving the training of Mask R-CNN for instance segmentation by introducing an intuitive auxiliary loss; a sketch of the idea follows the reference below.
Zimmermann, R. S. and Siems, J. N. (2018). Faster Training of Mask R-CNN by Focusing on Instance Boundaries. arXiv preprint arXiv:1809.07069.
https://arxiv.org/abs/1809.07069
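The general idea behind such a boundary-focused auxiliary loss can be sketched as follows: compare edge maps of the predicted and ground-truth masks, here with Sobel filters and an L2 penalty. This is a simplified sketch; the paper's actual Edge Agreement Head differs in its details.

# Sketch of a boundary-focused auxiliary loss: compare Sobel edge maps of
# predicted and ground-truth masks with an L2 penalty. Simplified.
import torch
import torch.nn.functional as F

sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
sobel_y = sobel_x.transpose(2, 3)

def edge_map(mask):
    # mask: (N, 1, H, W) soft mask in [0, 1]
    gx = F.conv2d(mask, sobel_x, padding=1)
    gy = F.conv2d(mask, sobel_y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # gradient magnitude

def edge_agreement_loss(pred_mask, gt_mask):
    # Auxiliary term to add to the usual Mask R-CNN losses.
    return F.mse_loss(edge_map(pred_mask), edge_map(gt_mask))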
Published in Chaos: An Interdisciplinary Journal of Nonlinear Science, 2018
Examination of recurrent neural networks (Echo State Networks) for the spatio-temporal cross-prediction of chaotic systems.
Zimmermann, R. S. and Parlitz, U. (2018). Observing spatio-temporal dynamics of excitable media using reservoir computing. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(4), 043118.
https://aip.scitation.org/doi/abs/10.1063/1.5022276
Published in CinC, 2018
Analysis of chaotic dynamics using sequential and recurrent neural networks and classical machine learning methods.
Parlitz, U., Zimmermann, R. S., Herzog, S., Isensee, J. and Datseris, G. (2018). Predicting and Observing Chaotic Dynamics in Excitable Media Using Machine Learning. CinC 2018, Maastricht.
Published in arXiv, 2019
Breaks the proposed adversarial defense method Adv-BNN.
Zimmermann, R. S. (2019). Comment on “Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network”. arXiv preprint arXiv:1907.00895.
https://arxiv.org/abs/1907.00895
Published in University of Göttingen/University of Tübingen, 2020
Analyzing the robustness of computer vision models and comparing it to human performance by developing novel evaluation methods.
Zimmermann, Roland S. (2020). Robust Perception in Humans and Machines.
https://rzimmermann.com/files/Masters_Thesis_Roland_S_Zimmermann.pdf
Published in Towards Trustworthy ML: Rethinking Security and Privacy for ML (ICLR 2020 Workshop), 2020
Simple but properly tuned training with additive noise generalizes surprisingly well to unseen corruptions; a minimal augmentation sketch follows the reference below.
Rusak, E., Schott, L., Zimmermann, R. S., Bitterwolf, J., Bringmann, O., Bethge, M. and Brendel, W., Increasing the robustness of DNNs against image corruptions by playing the Game of Noise.
https://trustworthyiclr20.github.io/rusak.pdf
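The underlying augmentation is deliberately simple: add Gaussian noise of a well-tuned strength to each training image. A minimal PyTorch sketch, where sigma = 0.1 is an illustrative value rather than the tuned one from the paper:

# Additive Gaussian noise augmentation: perturb each training image and
# clip back to the valid range. sigma here is illustrative only.
import torch

def gaussian_noise_augment(images, sigma=0.1):
    # images: float tensor in [0, 1]
    noisy = images + sigma * torch.randn_like(images)
    return noisy.clamp(0.0, 1.0)

# Typical use inside a training loop:
# inputs = gaussian_noise_augment(inputs)
# loss = criterion(model(inputs), targets)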
Published in European Conference on Computer Vision (ECCV) 2020, 2020
Simple but properly tuned training with additive noise generalizes surprisingly well to unseen corruptions.
Rusak, E., Schott, L., Zimmermann, R. S., Bitterwolf, J., Bringmann, O., Bethge, M. and Brendel, W., A simple way to make neural networks robust against diverse image corruptions.
https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123480052.pdf
Published in Journal of Open Source Software, 2020
Foolbox is a popular Python library to benchmark the robustness of machine learning models against adversarial perturbations; a minimal usage sketch follows the reference below.
Rauber, J., Zimmermann, R. S., Bethge, M. and Brendel, W., Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX.
https://joss.theoj.org/papers/10.21105/joss.02607.pdf
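A typical robustness benchmark with Foolbox looks roughly like the sketch below (PGD under an L-infinity budget); model, images and labels are placeholders to be supplied by the user, and the exact API may differ across versions.

# Sketch of benchmarking a PyTorch model with Foolbox ("Foolbox Native", v3).
# `model`, `images`, `labels` are placeholders you must supply.
import foolbox as fb

fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
attack = fb.attacks.LinfPGD()
# Run the attack at one or more perturbation budgets epsilon.
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=[8 / 255])
# Fraction of inputs that stay correctly classified, per epsilon.
robust_accuracy = 1 - is_adv.float().mean(dim=-1)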
Published in NeurIPS 2020 Workshop: Shared Visual Representations in Human & Machine Intelligence, 2020
Using human psychophysical experiments, we show that natural images can be significantly more informative for interpreting neural network activations than synthetic feature visualizations.
Borowski, J., Zimmermann, R. S., Schepers, J., Geirhos, R., Wallis, T. S. A., Bethge, M. and Brendel, W., Natural images are more informative for interpreting CNN activations than synthetic feature visualizations.
https://openreview.net/forum?id=-vhO2VPjbVa
Published in NeurIPS 2020 Workshop: Self-Supervised Learning - Theory and Practice, 2020
We rigorously show that models trained with a common contrastive loss can implicitly invert the underlying generative model of the observed data up to affine transformations; a minimal sketch of the loss follows the reference below.
Zimmermann, R. S., Schneider, S., Sharma, Y., Bethge, M. and Brendel, W., Contrastive Learning can Identify the Underlying Generative Factors of the Data.
https://sslneuips20.github.io/files/CameraReadys%203-77/67/CameraReady/Contrastive_Learning_can_Identify_the_Underlying_Generative_Factors_of_the_Data_SSL_NeurIPS_2020.pdf
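The loss in question is the standard InfoNCE objective used in contrastive learning; a minimal PyTorch sketch for a batch of positive pairs, with an illustrative temperature:

# Minimal InfoNCE contrastive loss for a batch of positive pairs (z1[i], z2[i]).
# Embeddings are L2-normalized; all other pairs in the batch act as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature    # (N, N) similarity matrix
    labels = torch.arange(z1.shape[0])  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)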
Published in ICML 2021, 2021
We rigorously show that models trained with a common contrastive loss can implicitly invert the underlying generative model of the observed data up to affine transformations.
Zimmermann, R. S., Sharma, Y., Schneider, S., Bethge, M. and Brendel, W., Contrastive Learning Inverts the Data Generating Process.
https://sslneuips20.github.io/files/CameraReadys%203-77/67/CameraReady/Contrastive_Learning_can_Identify_the_Underlying_Generative_Factors_of_the_Data_SSL_NeurIPS_2020.pdf
Published in Frontiers in Applied Mathematics and Statistics, 2021
We use two approaches from machine learning, echo state networks and convolutional autoencoders, to solve two relevant data modelling tasks in cardiac dynamics, bridging the gap between measurable and non-measurable quantities.
Herzog, S., Zimmermann, R. S., Abele, J., Luther, S. and Parlitz, U. (2021). Reconstructing Complex Cardiac Excitation Waves From Incomplete Data Using Echo State Networks and Convolutional Autoencoders. Front. Appl. Math. Stat. 6:616584. doi: 10.3389/fams.2020.616584
https://www.frontiersin.org/articles/10.3389/fams.2020.616584/full
Published in arXiv, 2021
We evaluate score-based generative models as classifiers on CIFAR-10 and find that they yield good accuracy and likelihoods but no adversarial robustness; a schematic sketch of generative classification follows the reference below.
Zimmermann, R. S., Schott, L., Song, Y., Dunn, B. A. and Klindt, D. A., Score-Based Generative Classifiers.
https://arxiv.org/abs/2110.00473
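A generative classifier assigns the class whose model explains the input best, i.e. argmax over y of log p(x|y) + log p(y). A schematic sketch in PyTorch, where the per-class likelihood functions are hypothetical placeholders for trained score-based models:

# Schematic generative classifier via the Bayes decision rule.
# `models` is a list of hypothetical per-class log-likelihood functions.
import torch

def generative_classify(x, models, log_prior):
    # models: C callables, each mapping x -> (N,) log p(x|y)
    # log_prior: (C,) tensor of log class priors
    scores = torch.stack([m(x) for m in models], dim=-1)  # (N, C)
    return torch.argmax(scores + log_prior, dim=-1)       # predicted classes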
Published in NeurIPS 2021, 2021
Using psychophysical experiments, we show that state-of-the-art synthetic feature visualizations do not support causal understanding much better than no visualizations, and only about as well as other visualizations such as natural dataset samples.
Zimmermann, R. S., Borowski, J., Geirhos, R., Bethge, M., Wallis, T. S. A. and Brendel, W., How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
https://openreview.net/forum?id=vLPqnPf9k0
Published in NeurIPS 2022, 2022
We propose a test that enables researchers to find flawed adversarial robustness evaluations. Passing our test produces compelling evidence that the attacks used have sufficient power to evaluate the model’s robustness.
Zimmermann, R. S., Brendel, W., Tramer, F., and Carlini, N., Increasing Confidence in Adversarial Robustness Evaluations. NeurIPS 2022.
https://openreview.net/forum?id=NkK4i91VWp
Published in ICML 2023, 2023
We analyze when object-centric representations can be learned without supervision and introduce two assumptions, compositionality and irreducibility, to prove that ground-truth object representations can be identified.
Brady, J., Zimmermann, R. S., Sharma, Y., Schölkopf, B., von Kügelgen, J. and Brendel, W., Provably Learning Object-Centric Representations.
https://arxiv.org/pdf/2305.14229.pdf
Published in arXiv, 2023
We investigate the sensitivity of slot-based object-centric models to their number of slots, identify failures and explore mitigation strategies.
Zimmermann, R. S., van Steenkiste, S., Sajjadi, M. S. M., Kipf, T. and Greff, K., Sensitivity of Slot-Based Object-Centric Models to their Number of Slots.
https://arxiv.org/pdf/2305.18890.pdf
Published in arXiv, 2023
Feature visualizations seek to explain how neural networks process natural images, but as we show both experimentally and analytically, they can be unreliable (for instance, they can be manipulated to show arbitrary patterns); a bare-bones sketch of the underlying technique follows the reference below.
Geirhos, R., Zimmermann, R. S., Bilodeau, B., Brendel, W. and Kim, B., Don't trust your eyes: on the (un)reliability of feature visualizations.
https://arxiv.org/pdf/2306.04719.pdf
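The visualizations under study are typically produced by activation maximization: gradient ascent on the input to maximize a chosen unit's activation. A bare-bones PyTorch sketch, omitting the regularizers practical methods add; layer_activation is a hypothetical hook returning the scalar activation of the unit of interest.

# Bare-bones activation maximization: gradient ascent on an input image to
# maximize one unit's activation. Practical feature visualization adds
# regularizers (e.g., transformations, frequency penalties) omitted here.
import torch

def visualize_unit(layer_activation, steps=256, lr=0.05):
    # layer_activation(x) -> scalar activation of the chosen unit (placeholder).
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -layer_activation(x)  # minimize the negative = ascend
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)  # keep the image in the valid range
    return x.detach()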
Published in arXiv, 2023
We compare the mechanistic interpretability of vision models differing in scale, architecture, training paradigm and dataset size, and find that none of these design choices has a significant effect on the interpretability of individual units. We release a dataset of unit-wise interpretability scores that enables research on automated alignment.
Zimmermann, R. S., Klein, T. and Brendel, W., Scale Alone Does not Improve Mechanistic Interpretability in Vision Models.
https://arxiv.org/abs/2307.05471