Prasad Chalasani

CEO, Co-Founder @ XaiPient. Previously Los Alamos, Goldman Sachs, Yahoo, MediaMath. PhD/ML/CMU, BTech/CS/IIT. Interests: Causality, Deep (Adversarial) Learning and Explainability, Python.

4 posts · New York, NY

Explainability in Neural Networks, Part 4: Path Methods for Feature Attribution

Prasad Chalasani on deep learning, neural networks, attribution, explainability | 10 Nov 2018

This post will delve deeper into Path Integrated Gradient Methods for Feature Attribution in Neural Networks.…
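As a preview of the method covered in that post, here is a minimal sketch of integrated gradients, assuming a straight-line path from a baseline to the input and a midpoint Riemann-sum approximation of the path integral (the function names and step count here are illustrative, not taken from the post):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    # Approximate the path integral of gradients along the straight
    # line from `baseline` to `x` with a midpoint Riemann sum.
    alphas = (np.arange(steps) + 0.5) / steps   # midpoints in (0, 1)
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    # Scale the averaged gradients by the input-baseline difference.
    return (x - baseline) * total / steps

# Example: f(x) = x0 * x1, whose gradient is (x1, x0).
f = lambda x: x[0] * x[1]
grad_f = lambda x: np.array([x[1], x[0]])

x = np.array([3.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)

# Completeness: attributions sum to f(x) - f(baseline) = 6.
print(attr, attr.sum())  # → [3. 3.] 6.0
```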

Explainability in Neural Networks, Part 3: The Axioms of Attribution

Prasad Chalasani on deep learning, neural networks, attribution, explainability | 01 Nov 2018

In this third post of the series on Explainability in Neural Networks, we present Axioms of Attribution, which are a set of desirable properties that any reasonable feature-attribution method should have.…

Explainability in Neural Networks, Part 2: Limitations of Simple Feature Attribution Methods

Prasad Chalasani on deep learning, explainability, neural networks, attribution, machine learning | 19 Oct 2018

We examine some simple, intuitive methods to explain the output of a neural network (based on perturbations and gradients), and see how they produce nonsensical results for non-linear functions.…
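To illustrate the kind of failure that excerpt refers to, here is a minimal sketch (an assumed example, not taken from the post) of a saturating non-linear function for which gradient-times-input attribution is zero even though the input clearly matters:

```python
# f(x) = 1 - max(0, 1 - x): a ReLU-style function that saturates
# (becomes flat) for all x >= 1.
def f(x):
    return 1.0 - max(0.0, 1.0 - x)

def numerical_gradient(func, x, eps=1e-6):
    # Central finite-difference approximation of df/dx.
    return (func(x + eps) - func(x - eps)) / (2 * eps)

x = 2.0         # input in the saturated (flat) region
baseline = 0.0  # a zero baseline, where f(0) = 0

# Gradient * input attribution is zero, because f is flat at x = 2 ...
grad_attr = numerical_gradient(f, x) * x

# ... even though moving from the baseline to x changes f by 1.
delta = f(x) - f(baseline)

print(grad_attr)  # → 0.0
print(delta)      # → 1.0
```

The gradient at a saturated point says nothing about how the output got there, which is one motivation for the path methods discussed later in the series.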

Explainability in Deep Neural Networks

Prasad Chalasani on deep learning, explainability, neural networks, adversarial, attribution | 04 Oct 2018

The wild success of Deep Neural Network (DNN) models in a variety of domains has created considerable excitement in the machine learning community. Despite this success, a deep understanding of why DNNs perform so well, and whether their performance is somehow brittle, has been lacking.…
