🏭 End-to-end pipeline that builds and runs experiments for assessing explaining methods for Graph Neural Networks.

Explaining framework

PyTorch Geometric add-on for explaining Graph Neural Networks.

How to

  1. Set up your experiment details (dataset, GNN architecture, explaining method, metrics, GPU workload limit, etc.).
    # CN below is yacs' CfgNode, the config system used by PyTorch Geometric's GraphGym
    from yacs.config import CfgNode as CN

    # ----------------------------------------------------------------------- #
    # Basic options
    # ----------------------------------------------------------------------- #

    # Set print destination: stdout / file / both
    explaining_cfg.print = "both"

    # Output directory
    explaining_cfg.out_dir = "./explanations"

    # Config file destination
    explaining_cfg.cfg_dest = "explaining_config.yaml"

    # Random seed
    explaining_cfg.seed = 0

    # ----------------------------------------------------------------------- #
    # Dataset options
    # ----------------------------------------------------------------------- #

    explaining_cfg.dataset = CN()

    # Name of the dataset
    explaining_cfg.dataset.name = "Cora"

    explaining_cfg.dataset.item = []

    # ----------------------------------------------------------------------- #
    # Model options
    # ----------------------------------------------------------------------- #

    explaining_cfg.model = CN()

    # Checkpoint to load: 'best' to use the best model for the given dataset, or a path to a specific checkpoint
    explaining_cfg.model.ckpt = "best"

    # Path to the folder containing the trained models
    explaining_cfg.model.path = "path"

    # ----------------------------------------------------------------------- #
    # Explainer options
    # ----------------------------------------------------------------------- #

    explaining_cfg.explainer = CN()

    # Name of the explaining method
    explaining_cfg.explainer.name = "EiXGNN"

    # Use the explaining method's default configuration ('default') or provide a specific one
    explaining_cfg.explainer.cfg = "default"

    # Whether to recompute explanations if they already exist
    explaining_cfg.explainer.force = False

    # ----------------------------------------------------------------------- #
    # Explaining options
    # ----------------------------------------------------------------------- #

    # Explanation type: 'model' or 'phenomenon'
    explaining_cfg.explanation_type = "model"

    explaining_cfg.model_config = CN()

    # Do not modify; set automatically from the dataset (one dataset = one learning task)
    explaining_cfg.model_config.mode = "regression"

    # Do not modify; set automatically from the dataset (one dataset = one learning task)
    explaining_cfg.model_config.task_level = None

    # Do not modify; model outputs are always assumed to be 'raw'
    explaining_cfg.model_config.return_type = "raw"

    # ----------------------------------------------------------------------- #
    # Thresholding options
    # ----------------------------------------------------------------------- #

    explaining_cfg.threshold = CN()

    explaining_cfg.threshold.config = CN()
    explaining_cfg.threshold.config.type = "all"

    explaining_cfg.threshold.value = CN()
    # Hard thresholds: 0.0, 0.1, ..., 0.9
    explaining_cfg.threshold.value.hard = [(i * 10) / 100 for i in range(10)]
    # Top-k thresholds
    explaining_cfg.threshold.value.topk = [2, 3, 5, 10, 20, 30, 50]

    # Which metrics to compute: 'all' or one in particular, if implemented
    explaining_cfg.metrics = CN()
    explaining_cfg.metrics.sparsity = CN()
    explaining_cfg.metrics.sparsity.name = "all"
    explaining_cfg.metrics.fidelity = CN()
    explaining_cfg.metrics.fidelity.name = "all"
    explaining_cfg.metrics.accuracy = CN()
    explaining_cfg.metrics.accuracy.name = "all"

    # Whether to recompute metrics if they already exist

    explaining_cfg.adjust = CN()
    explaining_cfg.adjust.strategy = "rpns"

    explaining_cfg.attack = CN()
    explaining_cfg.attack.name = "all"

    # Select device: 'cpu', 'cuda', 'auto'
    explaining_cfg.accelerator = "auto"

  2. Provide the generated .yaml file to main.py, or a folder containing several config files for parallel runs (a minimal sketch of writing the config to YAML is given after this list).

  3. Run the pipeline.

  4. Inspect the post-processed results (statistics, plots).
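
The options above are plain yacs CfgNode fields (the configuration system that PyTorch Geometric's GraphGym builds on). As a minimal sketch, assuming explaining_cfg is such a CfgNode filled in as in step 1, the .yaml file for step 2 can be written out with yacs' dump(); the repository's config_gen.py may already automate this:

    from yacs.config import CfgNode as CN

    explaining_cfg = CN()
    explaining_cfg.print = "both"
    explaining_cfg.out_dir = "./explanations"
    explaining_cfg.cfg_dest = "explaining_config.yaml"
    explaining_cfg.seed = 0
    explaining_cfg.dataset = CN()
    explaining_cfg.dataset.name = "Cora"
    # ... remaining options as listed in step 1 ...

    # Serialize the configuration to a YAML file that can be passed to main.py
    with open(explaining_cfg.cfg_dest, "w") as f:
        f.write(explaining_cfg.dump())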