Understanding Convolutions on Graphs
Understanding the building blocks and design choices of graph neural networks.
What components are needed for building learning algorithms that leverage the structure and properties of graphs?