OpenAI and Google tackle explainable AI with “activation atlases”

OpenAI and Google researchers have created activation atlases, a new technique for visualizing what interactions between neurons can represent. As AI systems are deployed in increasingly sensitive contexts, a better understanding of their internal decision-making is expected to help identify weaknesses and investigate failures.

Modern neural networks are often criticized as black boxes: despite their success on a wide variety of problems, there is only a limited understanding of how they make decisions internally. Activation atlases are a new way to see some of what goes on inside the box.

Activation atlases build on feature visualization, a technique for studying what the hidden layers of neural networks can represent. Early work in feature visualization focused primarily on individual neurons. By collecting hundreds of thousands of examples of neurons interacting and visualizing them, activation atlases move from individual neurons to the space those neurons jointly represent.
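To make that recipe concrete, here is a minimal sketch of how such an atlas grid can be assembled: record activation vectors from a hidden layer across many images, project them to two dimensions, and average them per grid cell, with each cell then rendered via feature visualization. The specific model (torchvision's pretrained GoogLeNet standing in for InceptionV1), the layer choice, and the use of umap-learn are assumptions for illustration, not the researchers' exact pipeline.

```python
# Illustrative sketch of building an activation atlas grid (not the authors'
# exact pipeline). Assumes torchvision's pretrained GoogLeNet and umap-learn.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
import umap

model = models.googlenet(weights="IMAGENET1K_V1").eval()

# Hook an intermediate layer and record one spatial activation vector per
# image, sampled at a random spatial position.
activations = []
def hook(_module, _inp, out):
    _, c, h, w = out.shape                       # out: (1, C, H, W)
    y, x = np.random.randint(h), np.random.randint(w)
    activations.append(out[0, :, y, x].detach().numpy())

model.inception4c.register_forward_hook(hook)

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406],
                                    [0.229, 0.224, 0.225])])

def collect(image_paths):
    """Run the model over a (hypothetical) list of image files to gather activations."""
    for path in image_paths:
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            model(img)

def build_atlas_grid(acts, grid_size=20):
    """Project activation vectors to 2D and average them per grid cell."""
    acts = np.stack(acts)
    coords = umap.UMAP(n_components=2).fit_transform(acts)
    # Normalize coordinates to [0, 1) and bin into a grid.
    coords = (coords - coords.min(0)) / (np.ptp(coords, 0) + 1e-8)
    cells = np.floor(coords * grid_size).astype(int).clip(0, grid_size - 1)
    grid = {}
    for cell, vec in zip(map(tuple, cells), acts):
        grid.setdefault(cell, []).append(vec)
    # Each cell's averaged activation would then be rendered with feature
    # visualization (optimizing an image to excite that activation pattern).
    return {cell: np.mean(vs, axis=0) for cell, vs in grid.items()}
```

The key design idea is the final averaging step: rather than visualizing each neuron separately, each grid cell summarizes a neighborhood of jointly occurring activations, which is what lets the atlas show the space the neurons represent together.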

Understanding what’s going on inside neural nets isn’t solely a question of scientific curiosity — lack of understanding handicaps the ability to audit neural networks and, in high stakes contexts, ensure they are safe. Normally, deploying a critical piece of software means reviewing all the paths through the code, or doing formal verification, but with neural networks, the ability to do this kind of review has been much more limited.

With activation atlases, humans can discover unanticipated issues in neural networks — for example, places where the network relies on spurious correlations to classify images, or where re-using a feature between two classes leads to strange bugs.

Humans can even use this understanding to “attack” the model, modifying images to fool it. For example, a special kind of activation atlas can be created to show how a network tells apart frying pans and woks. Many of the distinctions it draws are what one would expect: frying pans are more squarish, while woks are rounder and deeper. But the model also seems to have learned that frying pans and woks can be distinguished by the food around them; in particular, the presence of noodles supports a “wok” classification. Adding noodles to the corner of an image fools the model 45% of the time.
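As a rough illustration of this kind of test, the sketch below pastes a small “noodle” patch into the corner of a frying-pan image and checks whether the prediction flips to “wok”. The file names, patch size, model choice, and ImageNet class indices are assumptions for illustration; this is not the researchers' evaluation code.

```python
# Illustrative "noodle patch" test: paste a patch into a corner of a
# frying-pan image and see whether the predicted class flips to wok.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

FRYING_PAN, WOK = 567, 909  # assumed ImageNet class indices

model = models.googlenet(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406],
                                    [0.229, 0.224, 0.225])])

def predict(img):
    """Return the model's top-1 ImageNet class index for a PIL image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return logits.argmax(1).item()

def paste_patch(img, patch, size=60):
    """Paste a small patch into the top-left corner of a copy of img."""
    out = img.copy()
    out.paste(patch.resize((size, size)), (0, 0))
    return out

# Hypothetical input files for illustration.
pan = Image.open("frying_pan.jpg").convert("RGB")
noodles = Image.open("noodle_patch.jpg").convert("RGB")

print("before:", predict(pan))                        # expected: 567 (frying pan)
print("after: ", predict(paste_patch(pan, noodles)))  # sometimes flips to 909 (wok)
```

Repeating this over a set of frying-pan images gives the kind of success-rate figure quoted above.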

Other human-designed attacks based on the network overloading certain feature detectors are often more effective (some succeed as often as 93% of the time). But the noodle example is particularly interesting because it’s a case of the model picking up on something that is correlated, but not causal, with the correct answer. This has structural similarities to types of errors we might be particularly worried about, such as fairness and bias issues.

“Activation atlases worked better than we anticipated and seem to strongly suggest that neural network activations can be meaningful to humans. This gives us increased optimism that it is possible to achieve interpretability in vision models in a strong sense,” OpenAI wrote in a blog post.
