
Example 41 with PerturbationContext

Use of org.kie.kogito.explainability.model.PerturbationContext in project kogito-apps by kiegroup.

From the class CounterfactualExplainerTest, the method testNoCounterfactualPossible verifies that the counterfactual search flags its result as invalid when no assignment within the allowed feature domains can satisfy the goal.

@ParameterizedTest
@ValueSource(ints = { 0, 1, 2 })
void testNoCounterfactualPossible(long seed) throws ExecutionException, InterruptedException, TimeoutException {
    Random random = new Random();
    final PerturbationContext perturbationContext = new PerturbationContext(seed, random, 4);
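    // desired outcome: the model should report "inside" = true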
    final List<Output> goal = List.of(new Output("inside", Type.BOOLEAN, new Value(true), 0.0));
    List<Feature> features = new LinkedList<>();
    List<FeatureDomain> featureBoundaries = new LinkedList<>();
    List<Boolean> constraints = new LinkedList<>();
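    // features, constraints and domains are built in lockstep: f-num1 and f-num4 are
    // fixed (constrained, empty domain), while f-num2 and f-num3 may vary within [0, 2]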
    features.add(FeatureFactory.newNumericalFeature("f-num1", 1.0));
    constraints.add(true);
    featureBoundaries.add(EmptyFeatureDomain.create());
    features.add(FeatureFactory.newNumericalFeature("f-num2", 1.0));
    constraints.add(false);
    featureBoundaries.add(NumericalFeatureDomain.create(0.0, 2.0));
    features.add(FeatureFactory.newNumericalFeature("f-num3", 1.0));
    constraints.add(false);
    featureBoundaries.add(NumericalFeatureDomain.create(0.0, 2.0));
    features.add(FeatureFactory.newNumericalFeature("f-num4", 1.0));
    constraints.add(true);
    featureBoundaries.add(EmptyFeatureDomain.create());
    final DataDomain dataDomain = new DataDomain(featureBoundaries);
    final double center = 500.0;
    final double epsilon = 1.0;
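    // the sum-threshold model reports inside=true only when the feature sum lands in
    // [499, 501]; with the free features capped at 2.0 that sum is unreachable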
    List<Feature> perturbedFeatures = DataUtils.perturbFeatures(features, perturbationContext);
    final CounterfactualResult result = runCounterfactualSearch(seed, goal, perturbedFeatures, TestUtils.getSumThresholdModel(center, epsilon), DEFAULT_GOAL_THRESHOLD);
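    // no reachable counterfactual, so the search must flag the result as invalid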
    assertFalse(result.isValid());
}
Also used: PerturbationContext (org.kie.kogito.explainability.model.PerturbationContext), DataDomain (org.kie.kogito.explainability.model.DataDomain), EmptyFeatureDomain (org.kie.kogito.explainability.model.domain.EmptyFeatureDomain), CategoricalFeatureDomain (org.kie.kogito.explainability.model.domain.CategoricalFeatureDomain), NumericalFeatureDomain (org.kie.kogito.explainability.model.domain.NumericalFeatureDomain), FeatureDomain (org.kie.kogito.explainability.model.domain.FeatureDomain), Feature (org.kie.kogito.explainability.model.Feature), LinkedList (java.util.LinkedList), Random (java.util.Random), PredictionOutput (org.kie.kogito.explainability.model.PredictionOutput), Output (org.kie.kogito.explainability.model.Output), Value (org.kie.kogito.explainability.model.Value), ValueSource (org.junit.jupiter.params.provider.ValueSource), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)

Example 42 with PerturbationContext

Use of org.kie.kogito.explainability.model.PerturbationContext in project kogito-apps by kiegroup.

From the class DummyModelsLimeExplainerTest, the method testFixedOutput verifies that a classifier with a fixed output yields zero importance for every feature and perfect local saliency metrics for the constant decision.

@ParameterizedTest
@ValueSource(longs = { 0 })
void testFixedOutput(long seed) throws Exception {
    Random random = new Random();
    List<Feature> features = new LinkedList<>();
    features.add(FeatureFactory.newNumericalFeature("f1", 6));
    features.add(FeatureFactory.newNumericalFeature("f2", 3));
    features.add(FeatureFactory.newNumericalFeature("f3", 5));
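    // a classifier that ignores its inputs and always produces the same fixed output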
    PredictionProvider model = TestUtils.getFixedOutputClassifier();
    PredictionInput input = new PredictionInput(features);
    List<PredictionOutput> outputs = model.predictAsync(List.of(input)).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
    Prediction prediction = new SimplePrediction(input, outputs.get(0));
    LimeConfig limeConfig = new LimeConfig().withSamples(10).withPerturbationContext(new PerturbationContext(seed, random, 1));
    LimeExplainer limeExplainer = new LimeExplainer(limeConfig);
    Map<String, Saliency> saliencyMap = limeExplainer.explainAsync(prediction, model).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
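    // a constant model should assign every feature zero importance and zero impact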
    for (Saliency saliency : saliencyMap.values()) {
        assertNotNull(saliency);
        List<FeatureImportance> topFeatures = saliency.getTopFeatures(3);
        assertEquals(3, topFeatures.size());
        for (FeatureImportance featureImportance : topFeatures) {
            assertEquals(0, featureImportance.getScore());
        }
        assertEquals(0d, ExplainabilityMetrics.impactScore(model, prediction, topFeatures));
    }
    int topK = 1;
    double minimumPositiveStabilityRate = 0.5;
    double minimumNegativeStabilityRate = 0.5;
    TestUtils.assertLimeStability(model, prediction, limeExplainer, topK, minimumPositiveStabilityRate, minimumNegativeStabilityRate);
    List<PredictionInput> inputs = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        List<Feature> fs = new LinkedList<>();
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        inputs.add(new PredictionInput(fs));
    }
    DataDistribution distribution = new PredictionInputsDataDistribution(inputs);
    int k = 2;
    int chunkSize = 10;
    String decision = "class";
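    // for the constant "class" decision, saliency precision, recall and F1 are all expected to be perfect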
    double precision = ExplainabilityMetrics.getLocalSaliencyPrecision(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(precision).isEqualTo(1);
    double recall = ExplainabilityMetrics.getLocalSaliencyRecall(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(recall).isEqualTo(1);
    double f1 = ExplainabilityMetrics.getLocalSaliencyF1(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(f1).isEqualTo(1);
}
Also used: SimplePrediction (org.kie.kogito.explainability.model.SimplePrediction), PerturbationContext (org.kie.kogito.explainability.model.PerturbationContext), PredictionInput (org.kie.kogito.explainability.model.PredictionInput), Prediction (org.kie.kogito.explainability.model.Prediction), ArrayList (java.util.ArrayList), Saliency (org.kie.kogito.explainability.model.Saliency), PredictionProvider (org.kie.kogito.explainability.model.PredictionProvider), Feature (org.kie.kogito.explainability.model.Feature), LinkedList (java.util.LinkedList), Random (java.util.Random), FeatureImportance (org.kie.kogito.explainability.model.FeatureImportance), PredictionInputsDataDistribution (org.kie.kogito.explainability.model.PredictionInputsDataDistribution), DataDistribution (org.kie.kogito.explainability.model.DataDistribution), PredictionOutput (org.kie.kogito.explainability.model.PredictionOutput), ValueSource (org.junit.jupiter.params.provider.ValueSource), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)

Example 43 with PerturbationContext

Use of org.kie.kogito.explainability.model.PerturbationContext in project kogito-apps by kiegroup.

From the class DummyModelsLimeExplainerTest, the method testMapOneFeatureToOutputClassification verifies that LIME correctly attributes the output of a classifier that is driven by a single input feature.

@ParameterizedTest
@ValueSource(longs = { 0 })
void testMapOneFeatureToOutputClassification(long seed) throws Exception {
    Random random = new Random();
    int idx = 1;
    List<Feature> features = new LinkedList<>();
    features.add(FeatureFactory.newNumericalFeature("f1", 1));
    features.add(FeatureFactory.newNumericalFeature("f2", 1));
    features.add(FeatureFactory.newNumericalFeature("f3", 3));
    PredictionInput input = new PredictionInput(features);
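    // model whose output is driven solely by the feature at index idx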
    PredictionProvider model = TestUtils.getEvenFeatureModel(idx);
    List<PredictionOutput> outputs = model.predictAsync(List.of(input)).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
    Prediction prediction = new SimplePrediction(input, outputs.get(0));
    LimeConfig limeConfig = new LimeConfig().withSamples(100).withPerturbationContext(new PerturbationContext(seed, random, 2));
    LimeExplainer limeExplainer = new LimeExplainer(limeConfig);
    Map<String, Saliency> saliencyMap = limeExplainer.explainAsync(prediction, model).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
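    // an impact score of 1 means the prediction flips once the top features are dropped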
    for (Saliency saliency : saliencyMap.values()) {
        assertNotNull(saliency);
        List<FeatureImportance> topFeatures = saliency.getTopFeatures(3);
        assertEquals(3, topFeatures.size());
        assertEquals(1d, ExplainabilityMetrics.impactScore(model, prediction, topFeatures));
    }
    double minimumPositiveStabilityRate = 0.5;
    double minimumNegativeStabilityRate = 0.5;
    int topK = 1;
    TestUtils.assertLimeStability(model, prediction, limeExplainer, topK, minimumPositiveStabilityRate, minimumNegativeStabilityRate);
    List<PredictionInput> inputs = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        List<Feature> fs = new LinkedList<>();
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        inputs.add(new PredictionInput(fs));
    }
    DataDistribution distribution = new PredictionInputsDataDistribution(inputs);
    int k = 2;
    int chunkSize = 10;
    String decision = "feature-" + idx;
    double precision = ExplainabilityMetrics.getLocalSaliencyPrecision(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(precision).isEqualTo(1);
    double recall = ExplainabilityMetrics.getLocalSaliencyRecall(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(recall).isEqualTo(1);
    double f1 = ExplainabilityMetrics.getLocalSaliencyF1(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(f1).isEqualTo(1);
}
Also used: SimplePrediction (org.kie.kogito.explainability.model.SimplePrediction), PerturbationContext (org.kie.kogito.explainability.model.PerturbationContext), PredictionInput (org.kie.kogito.explainability.model.PredictionInput), Prediction (org.kie.kogito.explainability.model.Prediction), ArrayList (java.util.ArrayList), Saliency (org.kie.kogito.explainability.model.Saliency), PredictionProvider (org.kie.kogito.explainability.model.PredictionProvider), Feature (org.kie.kogito.explainability.model.Feature), LinkedList (java.util.LinkedList), Random (java.util.Random), FeatureImportance (org.kie.kogito.explainability.model.FeatureImportance), PredictionInputsDataDistribution (org.kie.kogito.explainability.model.PredictionInputsDataDistribution), DataDistribution (org.kie.kogito.explainability.model.DataDistribution), PredictionOutput (org.kie.kogito.explainability.model.PredictionOutput), ValueSource (org.junit.jupiter.params.provider.ValueSource), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)

Example 44 with PerturbationContext

Use of org.kie.kogito.explainability.model.PerturbationContext in project kogito-apps by kiegroup.

From the class DummyModelsLimeExplainerTest, the method testUnusedFeatureRegression exercises LIME against a regression model that ignores one of its input features.

@ParameterizedTest
@ValueSource(longs = { 0 })
void testUnusedFeatureRegression(long seed) throws Exception {
    Random random = new Random();
    int idx = 2;
    List<Feature> features = new LinkedList<>();
    features.add(TestUtils.getMockedNumericFeature(100));
    features.add(TestUtils.getMockedNumericFeature(20));
    features.add(TestUtils.getMockedNumericFeature(10));
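    // regression model that sums every feature except the one at index idx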
    PredictionProvider model = TestUtils.getSumSkipModel(idx);
    PredictionInput input = new PredictionInput(features);
    List<PredictionOutput> outputs = model.predictAsync(List.of(input)).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
    Prediction prediction = new SimplePrediction(input, outputs.get(0));
    LimeConfig limeConfig = new LimeConfig().withSamples(10).withPerturbationContext(new PerturbationContext(seed, random, 1));
    LimeExplainer limeExplainer = new LimeExplainer(limeConfig);
    Map<String, Saliency> saliencyMap = limeExplainer.explainAsync(prediction, model).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
    for (Saliency saliency : saliencyMap.values()) {
        assertNotNull(saliency);
        List<FeatureImportance> topFeatures = saliency.getTopFeatures(3);
        assertEquals(3, topFeatures.size());
        assertEquals(1d, ExplainabilityMetrics.impactScore(model, prediction, topFeatures));
    }
    int topK = 1;
    double minimumPositiveStabilityRate = 0.5;
    double minimumNegativeStabilityRate = 0.5;
    TestUtils.assertLimeStability(model, prediction, limeExplainer, topK, minimumPositiveStabilityRate, minimumNegativeStabilityRate);
    List<PredictionInput> inputs = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        List<Feature> fs = new LinkedList<>();
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        inputs.add(new PredictionInput(fs));
    }
    DataDistribution distribution = new PredictionInputsDataDistribution(inputs);
    int k = 2;
    int chunkSize = 10;
    String decision = "sum-but" + idx;
    double precision = ExplainabilityMetrics.getLocalSaliencyPrecision(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(precision).isEqualTo(1);
    double recall = ExplainabilityMetrics.getLocalSaliencyRecall(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(recall).isEqualTo(1);
    double f1 = ExplainabilityMetrics.getLocalSaliencyF1(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(f1).isEqualTo(1);
}
Also used: SimplePrediction (org.kie.kogito.explainability.model.SimplePrediction), PerturbationContext (org.kie.kogito.explainability.model.PerturbationContext), PredictionInput (org.kie.kogito.explainability.model.PredictionInput), Prediction (org.kie.kogito.explainability.model.Prediction), ArrayList (java.util.ArrayList), Saliency (org.kie.kogito.explainability.model.Saliency), PredictionProvider (org.kie.kogito.explainability.model.PredictionProvider), Feature (org.kie.kogito.explainability.model.Feature), LinkedList (java.util.LinkedList), Random (java.util.Random), FeatureImportance (org.kie.kogito.explainability.model.FeatureImportance), PredictionInputsDataDistribution (org.kie.kogito.explainability.model.PredictionInputsDataDistribution), DataDistribution (org.kie.kogito.explainability.model.DataDistribution), PredictionOutput (org.kie.kogito.explainability.model.PredictionOutput), ValueSource (org.junit.jupiter.params.provider.ValueSource), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)

Example 45 with PerturbationContext

Use of org.kie.kogito.explainability.model.PerturbationContext in project kogito-apps by kiegroup.

From the class DummyModelsLimeExplainerTest, the method testMapOneFeatureToOutputRegression exercises LIME against a regression model that copies a single input feature to its output; here recall is perfect, but precision and F1 are not.

@ParameterizedTest
@ValueSource(longs = { 0 })
void testMapOneFeatureToOutputRegression(long seed) throws Exception {
    Random random = new Random();
    int idx = 1;
    List<Feature> features = new LinkedList<>();
    features.add(TestUtils.getMockedNumericFeature(100));
    features.add(TestUtils.getMockedNumericFeature(20));
    features.add(TestUtils.getMockedNumericFeature(0.1));
    PredictionInput input = new PredictionInput(features);
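    // regression model that passes the feature at index idx straight through as its output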
    PredictionProvider model = TestUtils.getFeaturePassModel(idx);
    List<PredictionOutput> outputs = model.predictAsync(List.of(input)).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
    Prediction prediction = new SimplePrediction(input, outputs.get(0));
    LimeConfig limeConfig = new LimeConfig().withSamples(100).withPerturbationContext(new PerturbationContext(seed, random, 1));
    LimeExplainer limeExplainer = new LimeExplainer(limeConfig);
    Map<String, Saliency> saliencyMap = limeExplainer.explainAsync(prediction, model).get(Config.INSTANCE.getAsyncTimeout(), Config.INSTANCE.getAsyncTimeUnit());
    for (Saliency saliency : saliencyMap.values()) {
        assertNotNull(saliency);
        List<FeatureImportance> topFeatures = saliency.getTopFeatures(3);
        assertEquals(3, topFeatures.size());
        assertEquals(1d, ExplainabilityMetrics.impactScore(model, prediction, topFeatures));
    }
    int topK = 1;
    double minimumPositiveStabilityRate = 0.5;
    double minimumNegativeStabilityRate = 0.5;
    TestUtils.assertLimeStability(model, prediction, limeExplainer, topK, minimumPositiveStabilityRate, minimumNegativeStabilityRate);
    List<PredictionInput> inputs = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
        List<Feature> fs = new LinkedList<>();
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        fs.add(TestUtils.getMockedNumericFeature());
        inputs.add(new PredictionInput(fs));
    }
    DataDistribution distribution = new PredictionInputsDataDistribution(inputs);
    int k = 2;
    int chunkSize = 10;
    String decision = "feature-" + idx;
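    // on this distribution recall stays perfect while precision, and hence F1, drops to zero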
    double precision = ExplainabilityMetrics.getLocalSaliencyPrecision(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(precision).isZero();
    double recall = ExplainabilityMetrics.getLocalSaliencyRecall(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(recall).isEqualTo(1);
    double f1 = ExplainabilityMetrics.getLocalSaliencyF1(decision, model, limeExplainer, distribution, k, chunkSize);
    assertThat(f1).isZero();
}
Also used: SimplePrediction (org.kie.kogito.explainability.model.SimplePrediction), PerturbationContext (org.kie.kogito.explainability.model.PerturbationContext), PredictionInput (org.kie.kogito.explainability.model.PredictionInput), Prediction (org.kie.kogito.explainability.model.Prediction), ArrayList (java.util.ArrayList), Saliency (org.kie.kogito.explainability.model.Saliency), PredictionProvider (org.kie.kogito.explainability.model.PredictionProvider), Feature (org.kie.kogito.explainability.model.Feature), LinkedList (java.util.LinkedList), Random (java.util.Random), FeatureImportance (org.kie.kogito.explainability.model.FeatureImportance), PredictionInputsDataDistribution (org.kie.kogito.explainability.model.PredictionInputsDataDistribution), DataDistribution (org.kie.kogito.explainability.model.DataDistribution), PredictionOutput (org.kie.kogito.explainability.model.PredictionOutput), ValueSource (org.junit.jupiter.params.provider.ValueSource), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest)

Aggregations

PerturbationContext (org.kie.kogito.explainability.model.PerturbationContext): 73
Random (java.util.Random): 64
PredictionProvider (org.kie.kogito.explainability.model.PredictionProvider): 61
PredictionInput (org.kie.kogito.explainability.model.PredictionInput): 59
Prediction (org.kie.kogito.explainability.model.Prediction): 58
PredictionOutput (org.kie.kogito.explainability.model.PredictionOutput): 58
SimplePrediction (org.kie.kogito.explainability.model.SimplePrediction): 57
Test (org.junit.jupiter.api.Test): 46
LimeConfig (org.kie.kogito.explainability.local.lime.LimeConfig): 45
LimeExplainer (org.kie.kogito.explainability.local.lime.LimeExplainer): 33
Feature (org.kie.kogito.explainability.model.Feature): 30
LimeConfigOptimizer (org.kie.kogito.explainability.local.lime.optim.LimeConfigOptimizer): 28
ArrayList (java.util.ArrayList): 27
Saliency (org.kie.kogito.explainability.model.Saliency): 25
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 24
DataDistribution (org.kie.kogito.explainability.model.DataDistribution): 24
PredictionInputsDataDistribution (org.kie.kogito.explainability.model.PredictionInputsDataDistribution): 20
ValueSource (org.junit.jupiter.params.provider.ValueSource): 17
LinkedList (java.util.LinkedList): 16
FeatureImportance (org.kie.kogito.explainability.model.FeatureImportance): 12
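
Every example above starts the same way: seed a PerturbationContext and hand it, together with the features, to the perturbation or explainer machinery. Below is a minimal, self-contained sketch of that first step in isolation. It only assumes the constructor and DataUtils.perturbFeatures call shapes already shown in Example 41, plus the assumption that DataUtils lives in org.kie.kogito.explainability.utils; treat it as an illustration, not the project's canonical usage.

import java.util.List;
import java.util.Random;

import org.kie.kogito.explainability.model.Feature;
import org.kie.kogito.explainability.model.FeatureFactory;
import org.kie.kogito.explainability.model.PerturbationContext;
import org.kie.kogito.explainability.utils.DataUtils;

public class PerturbationContextSketch {

    public static void main(String[] args) {
        // a fixed seed keeps the perturbations reproducible across runs,
        // which is why the tests above parameterize on the seed
        long seed = 0L;
        PerturbationContext perturbationContext = new PerturbationContext(seed, new Random(), 1);

        List<Feature> features = List.of(
                FeatureFactory.newNumericalFeature("f1", 1.0),
                FeatureFactory.newNumericalFeature("f2", 2.0),
                FeatureFactory.newNumericalFeature("f3", 3.0));

        // returns a perturbed copy of the feature list; the third constructor
        // argument above bounds how many features may be altered at once
        List<Feature> perturbed = DataUtils.perturbFeatures(features, perturbationContext);
        perturbed.forEach(f -> System.out.println(f.getName() + " = " + f.getValue()));
    }
}

The same seeded context is what LimeConfig.withPerturbationContext(...) receives in Examples 42 through 45, so a test that fails for one seed can be replayed deterministically.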