Comparative study of XAI using formal concept lattice and LIME
By: Venkatsubramaniam, Bhaskaran.
Contributor(s): Baruah, Pallav Kumar.
Publisher: Chennai: ICT Academy, 2022
Edition: Vol. 13(1), Oct.
Description: 2782-2791 p.
Subject(s): Computer Engineering
Online resources: Click here
In: ICTACT Journal on Soft Computing (IJSC)
Summary: Local Interpretable Model-Agnostic Explanation (LIME) is a technique for explaining a black-box machine learning model through a surrogate model. Although the technique is very popular, its explanations are, by construction, generated from the surrogate model rather than directly from the black-box model; in sensitive domains such as healthcare, such explanations may not be accepted as trustworthy. LIME also assumes that features are independent and reports the weights of the surrogate linear model as feature importances. In real-life datasets, however, features may be dependent, and a combination of features with specific values, rather than individual feature importances, can be the deciding factor. Furthermore, LIME fits the surrogate model on randomly generated instances around the point of interest; these instances need not belong to the original data and may even be meaningless. In this work, we compare LIME with explanations derived from the formal concept lattice. The lattice approach uses no surrogate model; it is deterministic, generating synthetic data that respects the implications holding in the original dataset rather than sampling at random. It identifies crucial combinations of features and their values as decision factors without presuming either dependence or independence of features. Its explanations cover not only the point of interest but also a global explanation of the model, together with similar and contrastive examples around the point of interest. Because the explanations are textual, they are easier to comprehend than the weights of a surrogate linear model.

Item type | Current location | Call number | Status | Date due | Barcode | Item holds |
---|---|---|---|---|---|---|
Articles Abstract Database | School of Engineering & Technology Archival Section | | Not for loan | | 2023-0515 | |
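The summary describes the LIME procedure it critiques: sample random instances around the point of interest, weight them by proximity, and fit a linear surrogate whose weights are read as feature importances. A minimal sketch of that procedure follows; the black-box model, kernel width, and sample count are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def black_box(X):
    # Hypothetical opaque classifier: nonlinear in its two features.
    return (X[:, 0] * X[:, 1] + X[:, 0] ** 2 > 1.0).astype(float)

def lime_like_weights(x, n_samples=500, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Sample random instances around the point of interest; as the
    #    summary notes, these need not resemble the original data.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    y = black_box(Z)
    # 2. Weight each sample by its proximity to x (exponential kernel).
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 3. Fit a weighted linear surrogate (least squares with intercept).
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # surrogate weights reported as feature importance

print(lime_like_weights(np.array([1.0, 1.0])))
```

Note that steps 1 and 3 are exactly what the lattice approach replaces: the perturbations here are unconstrained Gaussian noise, with no guarantee of respecting implications in the source data.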