



Explainable COVID-19 Pneumonia Project CS677 Fall 2020

The project contents are present in two separate GitHub repos -- see Github Repos

Project Parts

Part 1

This part of the project is contained in the GitHub repo - https://github.com/cicorias/njit-covid-cxr.

Part 1 of the project and the reproduction of its results produced primarily a trained TensorFlow model and, for the LIME steps, a batch run of predictions using the LIME implementation. The model is too large to persist inside a GitHub repo, and git-lfs triggered a paywall for usage.

  1. Trained model - https://scicoria.blob.core.windows.net/public/model20201115-151427-random-imbalance-NO-fiscore-resnet50v2.h5
  2. LIME batch predictions from model: https://github.com/cicorias/njit-covid-cxr/tree/master/results/predictions/20201106-135747
  3. The review of the LIME results is here: https://github.com/cicorias/njit-deeplearn-explain-shap#initial-lime-implementation-reproduction-of-results

The model file was over 250 megabytes and not advisable for a standard git commit, and the limited git-lfs support required payment - so the model is now in Azure Cloud Storage.

Part 2

  1. The summary of the SHAP approach is here: https://github.com/cicorias/njit-deeplearn-explain-shap/blob/main/README.md#shapley-value-overview

Part 3

The application of the SHAP framework and packages - the top-level repo and discussion are:

  • SHAP explanation on the COVID dataset - https://github.com/cicorias/njit-deeplearn-explain-shap#using-the-shap-method-on-the-covid-data-set
  • SHAP approach Jupyter notebook - https://github.com/cicorias/njit-deeplearn-explain-shap/blob/master/shap-final-run.ipynb
  • Top-Level Repo (containing a + b) https://github.com/cicorias/njit-deeplearn-explain-shap

This repo is the second part of the project, which is provided in two repos

Github Repos

  1. https://github.com/cicorias/njit-covid-cxr - this is a reproduction of the original work to create a trained model -- that model is used in the following project
  2. https://github.com/cicorias/njit-deeplearn-explain-shap - this is the second part of the project and provides the SHAP visualizations and the project writeup.

Authors

name email
David Apolinar Da468 AT njit DOT edu
Shawn Cicoria sc2443 AT njit DOT edu
Ted Moore tm437 AT njit DOT edu

Overview

For this project, the team explored several papers and implementations of techniques for providing explainable machine learning. As articulated in the papers, and as society becomes more dependent upon AI/ML algorithms for predictions or making choices, the pressure is there to always be able to explain "why" an AI/ML model takes a path towards an outcome.

For this project, the team reviewed the existing LIME implementation, then reviewed the papers:

  1. A Unified Approach to Interpreting Model Predictions

              @misc{lundberg2017unified,       title={A Unified Approach to Interpreting Model Predictions},        author={Scott Lundberg and Su-In Lee},       year={2017},       eprint={1705.07874},       archivePrefix={arXiv},       primaryClass={cs.AI} }

  2. AI Explanations Whitepaper - https://storage.googleapis.com/cloud-ai-whitepapers/AI%20Explainability%20Whitepaper.pdf

  3. Interpretable machine learning. A Guide for Making Black Box Models Explainable

              @book{molnar2019,  title      = {Interpretable Machine Learning},   author     = {Christoph Molnar},   note       = {\url{https://christophm.github.io/interpretable-ml-book/}},   year       = {2019},   subtitle   = {A Guide for Making Black Box Models Explainable} }

These references provide methods for explainable machine learning outcomes. They provide a detailed explanation of several approaches, and all of them include Shapley value (https://en.wikipedia.org/wiki/Shapley_value) based approaches. In addition, Lundberg and Lee present a framework called SHAP (SHapley Additive exPlanations), intended to be a unified framework for interpreting predictions, which is used later in this brief.

Part 1

Initial LIME implementation Reproduction of Results

In order to rerun and train the model for similar results, you must follow the initial repo's instructions for your environment and modify the config.yaml to match your file system paths, along with using the settings for training. These steps are clearly explained in the Getting Started section here: https://github.com/aildnont/covid-cxr#getting-started

Team effort and resulting model

As explained later, but to skip ahead to the SHAP repo, the model trained and used by this team is also located here: https://scicoria.blob.core.windows.net/public/model20201115-151427-random-imbalance-NO-fiscore-resnet50v2.h5

The team cloned all the required repos and followed the setup steps as articulated in the README.md file of the Local Interpretable Model-Agnostic Explanations (LIME) repo.

As in the prior project, GPUs are nearly essential for some of the steps. We leveraged a large GPU-enabled virtual machine from Microsoft Azure with the following configuration:

Azure Virtual Machine

  • Standard NC6 (6 vCPUs, 56 GiB memory)

  • 1 or 2 NVIDIA Tesla K80, 24 GiB of GDDR5 memory

  • Windows (Windows Server 2019 Datacenter)

Steps taken - Training, validation, and test data

After cloning the primary GitHub repository, we broke the upstream origin remote and bound it to our own repo at https://github.com/cicorias/njit-covid-cxr. This was done just to make any of our own modifications to the code easier to manage through normal git management of source code.

Dataset configuration

In addition, the supporting data sets were also cloned or pulled locally (Kaggle data). Each was placed in a relative directory alongside the main repo under the RAW_DATA path, and the necessary config.yaml file was updated with the correct paths.

Configuration file

Other than paths, the settings that matter the most for our run are in the TRAIN section; all other settings remained the same as the source repository.

  • MODEL_DEF : resnet50v2

  • EPOCHS : 100

  • IMB_STRATEGY : random oversample

  • NUM_GPUS: 1

Model_def – this setting allows for one of the three supported models in the original codebase. The resnet50v2 is based upon TensorFlow's ResNet50V2 with several layers added for a custom top:

X = GlobalAveragePooling2D()(X)
X = Dropout(dropout)(X)
X = Dense(nodes_dense0, kernel_initializer='he_uniform', activity_regularizer=l2(l2_lambda))(X)
X = LeakyReLU()(X)
X = Dense(n_classes, bias_initializer=output_bias)(X)
Y = Activation('softmax', dtype='float32', name='output')(X)

Epochs – we set this to 100 to keep training time reasonable and balance that against the loss output – going from 100 to 200 epochs did not provide a great reduction in loss for this exercise.

Imb_strategy – this is the method by which the codebase handles the significant imbalance of positive COVID images vs. negative COVID images. The code utilizes generators that supplement the positive image sets by rotation, flipping, etc.

Num_gpus – the number of GPUs, if available, that the code would use.

Preprocess data

The corpus of data is initially split three ways, and includes generation of data if the imbalance strategy "random_oversample" is chosen.

Two example rows from the generated CSV meta-data files are as follows:

              ,filename,label,label_str
              349,rsna/0700bc73-b6e3-412e-9e2f-aa0b83424804.jpg,0,non-COVID-19
              830,rsna/0abbde89-55e8-4b25-ba9b-17a99f84bae0.jpg,0,non-COVID-19

The first column is an index from the originating image dataset and is not used. The second is the source/image filename from that dataset.

Finally, there is an integer label and a text label – where 0 is "negative" for COVID and 1 is "positive" for COVID – a binary classification.
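A minimal sketch of reading these meta-data rows (the two sample rows above are embedded inline here; the real pre-process scripts write and read full CSV files on disk):

```python
import csv
import io

# The two sample rows from the generated meta-data CSVs:
# index, filename, label (0/1), label_str
SAMPLE = """,filename,label,label_str
349,rsna/0700bc73-b6e3-412e-9e2f-aa0b83424804.jpg,0,non-COVID-19
830,rsna/0abbde89-55e8-4b25-ba9b-17a99f84bae0.jpg,0,non-COVID-19
"""

def load_meta(fp):
    """Return (filename, int label) pairs; the leading index column is unused."""
    return [(row["filename"], int(row["label"])) for row in csv.DictReader(fp)]

pairs = load_meta(io.StringIO(SAMPLE))
print(pairs[0])  # ('rsna/0700bc73-b6e3-412e-9e2f-aa0b83424804.jpg', 0)
```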

The three CSV files for meta-data created during pre-process:

The breakdown of these files is as follows – with each line in the file providing a reference to a filename, along with being labeled either COVID or non-COVID.

Training Set

Row Labels Count of label_str
COVID-19 28
non-COVID-19 1461
Grand Total 1489

Validation Set

Row Labels Count of label_str
COVID-19 3
non-COVID-19 143
Grand Total 146

Test Set

Row Labels Count of label_str
COVID-19 4
non-COVID-19 178
Grand Total 182

We see that the training set consists of 1461 non-COVID and only 28 COVID-positive images. We see similar deficiencies in the datasets for validation and test as well.

Given that significant imbalance, for the training process, as documented in the covid-cxr repository (https://github.com/aildnont/covid-cxr), the team chose the option for "random-imbalance", as mentioned before in the configuration section. This option generates training images until each class (in our case just two binary classes) has an equal number of samples. So, from the 28 Training set images, 1433 images are created through additional random picking of the 28 -- this is in the original source code as the train.py random_minority_oversample method. It relies on a method called RandomOverSampler from the imbalanced-learn package (https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.RandomOverSampler.html).
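A stdlib-only sketch of that oversampling idea -- the real code delegates to imbalanced-learn's RandomOverSampler; the function and file names below are illustrative, not from the codebase:

```python
import random

def random_minority_oversample(filenames, labels, seed=42):
    """Duplicate minority-class samples at random until all classes are the
    same size -- a toy stand-in for imblearn's RandomOverSampler."""
    rng = random.Random(seed)
    by_class = {}
    for f, y in zip(filenames, labels):
        by_class.setdefault(y, []).append(f)
    target = max(len(v) for v in by_class.values())
    out = []
    for y, files in by_class.items():
        extra = [rng.choice(files) for _ in range(target - len(files))]
        out += [(f, y) for f in files + extra]
    return out

# 28 COVID-positive vs. 1461 negative, as in our training split
files = [f"pos_{i}.jpg" for i in range(28)] + [f"neg_{i}.jpg" for i in range(1461)]
labels = [1] * 28 + [0] * 1461
balanced = random_minority_oversample(files, labels)
print(sum(1 for _, y in balanced if y == 1))  # 1461
```

Note the 1433 extra positives are pure duplicates of the original 28 files, which is exactly the near-duplication concern discussed below.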

The unfortunate thing is that this could be further improved via image rotation and flipping instead of simply "over-sampling" the 28 images, as this is nearly duplication of every image N times until it reaches 1433, since the ratio is so small. This would be an area of great improvement in building greater trust in the model itself. The other option is to stick with class_weights – which just keeps the imbalance – but as the original authors documented, this is a trade-off that resulted in lower accuracies from training and is left for further experimentation.

During our training runs, since model tuning and perfection were not the focus, and the training accuracy and loss were quite similar between class_weights and random_imbalance, we stuck with the latter option. As we see later, the choice was acceptable for expanding into both the LIME and SHAP explainable techniques.

Training run output

Mostly as expected with the random oversampling, there is quite a bit of overfitting. Again, the focus of this experiment is on explainability, not the performance of this model. The Training/Validation metrics are:

Training / validation

train metric validation metric
loss: 0.0180 val_loss: 0.1581
accuracy: 0.9997 val_accuracy: 0.9795
precision: 0.9993 val_precision: 0.0000e+00
recall: 1.0000 val_recall: 0.0000e+00
auc: 1.0000 val_auc: 0.9762

We can see the overfitting clearly, and again this is easily attributed to the vast imbalance of non-COVID vs. COVID x-rays – which is further compounded, and leads to overfitting, when basic random sampling starts from such a small pool of images (28) for COVID and extrapolates to 1489 images.

From TensorBoard, the accuracy plotted over epochs.

NOTE: Given the inverted articulation of "positive" vs "negative" along with the imbalance, the confusion matrix and associated metrics may look a bit awkward.

Test set results

loss = 0.13487988989800215
accuracy = 0.9945
precision = 0.0
recall = 0.0
auc = 0.9814938
confusion matrix: True (-)ves: 181, False (+)ves: 0, False (-)ves: 1, True (+)ves: 0

Confusion Matrix

The test set result was reasonable, with 1 false negative out of the 4 COVID samples presented.

Measure Value Derivation
Sensitivity 0.0000 TPR = TP / (TP + FN)
Specificity 1.0000 SPC = TN / (FP + TN)
Precision PPV = TP / (TP + FP)
Negative Predictive Value 0.9945 NPV = TN / (TN + FN)
False Positive Rate 0.0000 FPR = FP / (FP + TN)
False Discovery Rate FDR = FP / (FP + TP)
False Negative Rate 1.0000 FNR = FN / (FN + TP)
Accuracy 0.9945 ACC = (TP + TN) / (P + N)
F1 Score 0.0000 F1 = 2TP / (2TP + FP + FN)
Matthews Correlation Coefficient (TP*TN - FP*FN) / sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN))
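The values in the table can be reproduced directly from the test-set confusion matrix (TN=181, FP=0, FN=1, TP=0); ratios with a zero denominator (Precision, FDR, MCC) are undefined and left blank above:

```python
# Derive the table's metrics from the test-set confusion matrix.
TN, FP, FN, TP = 181, 0, 1, 0

def ratio(num, den):
    """Return num/den, or None when the denominator is zero (undefined)."""
    return num / den if den else None

sensitivity = ratio(TP, TP + FN)                 # TPR -> 0.0
specificity = ratio(TN, FP + TN)                 # SPC -> 1.0
precision   = ratio(TP, TP + FP)                 # PPV undefined: no predicted positives
npv         = ratio(TN, TN + FN)                 # ~0.9945
fpr         = ratio(FP, FP + TN)                 # 0.0
fnr         = ratio(FN, FN + TP)                 # 1.0
accuracy    = ratio(TP + TN, TP + TN + FP + FN)  # ~0.9945
f1          = ratio(2 * TP, 2 * TP + FP + FN)    # 0.0

print(round(npv, 4), round(accuracy, 4))  # 0.9945 0.9945
```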

Model output

The resulting model after training is persisted as an H5 file containing the model composition along with trained weights. This H5 model is now portable and usable for subsequent phases. In addition, for the final SHAP step, the model is pushed to Azure Cloud Storage at the following location and is approximately 240 MB in size. The H5 file format and how to work with it are explained here: https://www.tensorflow.org/tutorials/keras/save_and_load.

Model – trained: https://scicoria.blob.core.windows.net/public/model20201115-151427-random-imbalance-NO-fiscore-resnet50v2.h5


LIME Interpretations

For the first part of the project, to recreate the initial results from the GitHub repository, after training the model, updating the config.yaml to use that model, then running the single or batch LIME estimation, we are provided with some visualizations that help identify the features of the source images that push the classification to either COVID or non-COVID.

What is LIME

As explained in the reference paper from the provided GitHub repository, Local Interpretable Model-Agnostic Explanations (i.e., LIME) can be applied for model explainability, which is what the authors initially provided. LIME groups pixels from the images together, forming super-pixels that represent the features. The process uses the contribution (either +/-) of these groupings to the outcome or predicted class. For this project, the explanations are shown as color overlays of the groupings that are primary contributions. In this implementation, green represents a significant positive contribution towards the predicted class; red is the inverse.

LIME with the x-ray dataset

Before running a batch of images, you must first run the script ./src/interpretability/lime_explain.py, which is hardcoded to choose an image from the dataset. It then persists two "pickle" files (persisted Python objects) that are later used by the batch prediction script.
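That pickle round trip can be sketched as follows -- the small dictionaries below stand in for the real LIME explainer object and class-index mapping, and the temp directory stands in for the ./data/interpretability/ path:

```python
import os
import pickle
import tempfile

# Stand-ins for the two objects lime_explain.py persists and predict.py reloads.
explainer_state = {"kernel_width": 1.75, "feature_selection": "lasso_path"}
class_indices = {"non-COVID-19": 0, "COVID-19": 1}

outdir = tempfile.mkdtemp()
for name, obj in [("lime_explainer.pkl", explainer_state),
                  ("output_class_indices.pkl", class_indices)]:
    with open(os.path.join(outdir, name), "wb") as f:
        pickle.dump(obj, f)  # persist the Python object

# Later, the batch prediction step loads the pickles back:
with open(os.path.join(outdir, "output_class_indices.pkl"), "rb") as f:
    print(pickle.load(f))  # {'non-COVID-19': 0, 'COVID-19': 1}
```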

To run a batch of images through, we utilized the provided script ./src/predict.py, which relies on the config.yaml settings below. These obviously include our trained model in addition to the pickle files from above, the source path of x-ray images (BATCH_PRED_IMGS), and where to write the output.

              MODEL_TO_LOAD: 'results\models\model20201110-073719.h5'
              LIME_EXPLAINER: './data/interpretability/lime_explainer.pkl'
              OUTPUT_CLASS_INDICES: './data/interpretability/output_class_indices.pkl'
              BATCH_PRED_IMGS: 'E:/g/njit/deep-learn/cnn/RAW_DATA/Figure1-COVID-chestxray-dataset/images/'
              BATCH_PREDS: 'results/predictions/'

From the output, we are then going to look at six images – 3 that are COVID (positive) and 3 that are non-COVID (negative) – using the output of the LIME explanation.

Batch LIME Predictions and Visualizations

All images used in this step are from the GitHub repository of x-ray image data at https://github.com/ieee8023/covid-chestxray-dataset.

The images we chose out of this batch for discussion are below, and all batch predictions have been added to the GitHub repo at: https://github.com/cicorias/njit-covid-cxr/tree/master/results/predictions/20201106-135747. Normally these would be excluded from being committed via the .gitignore, but for this write-up we chose to add a run to the online GitHub repository.

COVID

  • COVID-00001.jpg

  • COVID-00015a.png

  • COVID-00015b.png

not-COVID

  • COVID-00010.jpg

  • COVID-00013a.jpg

  • COVID-00013b.jpg

Batch Output COVID positive images

As shown below, each image and its annotated LIME image are displayed alongside each other. For the labeling, the COVID images are all labeled as "COVID" or positive. For the LIME coloring, as described before, GREEN pushed the classification GREATER, while RED lowered it. The magnitude and overall coverage are not easily seen, but the visualization certainly aids a human in poking further into the original images to see any features, and provides further trust where that aligns with reality.

COVID

  • COVID-00001.jpg (positive)

  • COVID-00015a.png (positive)

  • COVID-00015b.png (positive)

Batch Output non-COVID negative images

These image pairs are all non-COVID. Once again, the coloring provided is either GREEN or RED as it contributes positively or negatively towards classification.

non-COVID

  • COVID-00010.jpg (negative)

  • COVID-00013a.jpg (negative)

  • COVID-00013b.jpg (negative)

Part 2

Shapley Value Overview

Before discussing the SHAP method, it is important to discuss Shapley values, which the SHAP method relies on. The general idea is to determine how much each feature contributed to the ultimate prediction of the model.

Shapley values and SHAP Overview

As described in Google's AI and Explainability Whitepaper, SHAPLEY is a concept from cooperative game theory. The process was created by Lloyd Shapley in 1953. The Shapley approach leverages a concept of Shapley values, which make it possible to determine what contributions specific inputs have to a given result.

From the online book Interpretable Machine Learning [Molnar, Christoph], Shapley values are defined as follows [https://christophm.github.io/interpretable-ml-book/shapley.html]:

A prediction can be explained by assuming that each feature value of the instance is a "player" in a game where the prediction is the payout. Shapley values -- a method from coalitional game theory -- tell us how to fairly distribute the "payout" among the features.

To illustrate this further, Google's AI explainability whitepaper uses a set of 3 employees to measure what contributions each gave to achieve a specific profit. Assuming we have the profit result of each collective combination of employees (e.g. A, B, C, A and B, B and C, etc.), we then determine the relative contribution of each employee by calculating their incremental contribution in all of the possible permutations of adding employees in sequence to build up to the complete collection of the three employees. The average of these incremental contributions is known as the Shapley value, and the employee with the largest Shapley value is considered to have the highest contribution to the profit.
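That permutation procedure can be computed exactly for a small game. The coalition profits below are made-up numbers for illustration, not figures from the whitepaper:

```python
from itertools import permutations

# Hypothetical profit of every coalition of employees A, B, C.
profit = {
    frozenset(): 0,
    frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}

def shapley(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            # marginal contribution of p when added to the current coalition
            totals[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: t / len(perms) for p, t in totals.items()}

phi = shapley("ABC", profit)
print(phi)               # {'A': 20.0, 'B': 30.0, 'C': 40.0}
print(sum(phi.values())) # 90.0 -- the values sum to the grand coalition's profit
```

The last line shows the efficiency property: the Shapley values exactly split the full coalition's payout among the players.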

In our scenario with the COVID-19 dataset, we can apply the Shapley principle to determine which features contributed the most to a given prediction. Because we are analyzing images, "features" correspond to pixels, and we can calculate the relative importance of each pixel (or the average importance over a cluster of pixels) to a positive or negative prediction and use it to highlight or draw bounding boxes around areas of importance.

This visual indication of importance will enable a human-in-the-loop to make educated decisions about our model, such as figuring out whether the model is skewed or needs to be updated to correct bias that may exist due to limitations in the available dataset. This also makes it possible for a medical doctor who is reviewing the results from a model to determine what the strengths and weaknesses are, and make suitable changes. This is described in the Axiomatic Attribution for Deep Networks whitepaper.

SHAP (SHapley Additive exPlanations)

At a high level, SHAP is considered a process to explain individual predictions. Christoph Molnar describes "the goal of SHAP as explaining the prediction of an instance x by computing each contribution to the prediction."

SHAP has several Model-Agnostic Approximations

  • Kernel SHAP

  • Linear SHAP

  • Low-Order SHAP

  • Max SHAP

  • Deep SHAP (DeepLIFT + Shapley values)

SHAP Effectiveness

Unfortunately, at the scale of high-resolution X-rays, exhaustively calculating the Shapley values for each input feature would be computationally expensive/infeasible (calculations grow exponentially with the number of features), and thus we explore some alternative approaches that can approximate the importance.

Shapley works by leveraging what are called feature attribution methods. As described in Google's AI whitepaper, feature attributions make it possible to assign a score proportional to a feature's contribution to the model's prediction. This makes it possible to provide explainability for AI model predictions. In our scenario, the feature attributions for our COVID predictions can help show a doctor which sections of a specific X-ray instance led to predicting a positive or negative COVID case.

Shapley Value Disadvantages

Shapley values depend on the feature set provided (https://christophm.github.io/interpretable-ml-book/shapley.html#disadvantages-13). The method also leverages all of the features, which explains why the computation takes as long as it does.

Furthermore, as detailed in the interpretable-ml-book (https://christophm.github.io/interpretable-ml-book/shapley.html#the-shapley-value-in-detail), also referenced by Google, Shapley values can be misinterpreted: users can believe that the Shapley value is the difference between the predicted value and the prediction with the feature removed from the model. But as stated above, this is not the case. It is the difference between the actual prediction and the mean prediction.

SHAP Differences From LIME

SHAP differs from LIME in the way the importance is calculated. While LIME fits a linear model to approximate the contribution of each feature to a predicted class, SHAP looks at the relative contribution of all features to the outcome by looking at their average incremental contribution through all permutations of the features. Through this approach, SHAP is able to provide some key benefits in its results that are considered shortcomings of the LIME approach – including stability, consistency, and missingness. Although it is more computationally expensive, these extra benefits have motivated the use of SHAP over LIME in most explainability exercises in practice.

SHAP Use cases

SHAP Feature attribution helps with two key use-cases.

  • Model Debugging – For this method, it is helpful to understand why a specific model failed to classify a certain dataset. This can be due to several factors, and understanding which pixels or pixel areas contributed to a COVID vs. pneumonia classification is helpful in fixing the problem.

  • Optimizing Models – Having the SHAP values for a specific X-ray can help tune the model so that it does not focus on certain areas to make a prediction, e.g. looking at the abdomen area. This helps improve the detection area for the features we are most interested in, e.g. COVID/pneumonia.

Feature Attribution Methods

There are several feature attribution methods, as documented in Google's explainable AI overview (https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/overview#understanding_feature_attribution_methods).

Method Basic explanation Recommended model types Example use cases
Integrated gradients A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value. Differentiable models, such as neural networks. Recommended particularly for models with large feature spaces. Recommended for low-contrast images, such as X-rays. Classification and regression on tabular data; classification on image data
XRAI (eXplanation with Ranked Area Integrals) Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels. Models that accept image inputs. Recommended especially for natural images, which are any real-world scenes that contain multiple objects. Classification on image data
Sampled Shapley Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values. Non-differentiable models, such as ensembles of trees and neural networks. Classification and regression on tabular data

As outlined in the chart above, we chose to leverage Integrated Gradients since it fits our use case, e.g. X-rays.
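To illustrate the mechanics, here is a pure-Python Riemann-sum sketch of integrated gradients on a toy differentiable function (not our TensorFlow model). The property being checked -- attributions summing to F(x) - F(baseline), the completeness axiom -- is what gives the method its Shapley-like axiomatic properties:

```python
def integrated_gradients(grad_fn, x, baseline, steps=1000):
    """Midpoint Riemann-sum approximation of integrated gradients:
    attr_i = (x_i - baseline_i) * integral_0^1 dF/dx_i(baseline + a*(x-baseline)) da
    grad_fn(point) must return the model's gradient at that point."""
    n = len(x)
    attr = [0.0] * n
    for k in range(1, steps + 1):
        alpha = (k - 0.5) / steps                      # midpoint of each sub-interval
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_fn(point)
        for i in range(n):
            attr[i] += g[i]
    return [(x[i] - baseline[i]) * attr[i] / steps for i in range(n)]

# Toy differentiable "model" F(x) = x0^2 + 3*x1 with its analytic gradient.
F = lambda p: p[0] ** 2 + 3 * p[1]
grad_F = lambda p: [2 * p[0], 3.0]

attr = integrated_gradients(grad_F, x=[2.0, 1.0], baseline=[0.0, 0.0])
# Completeness: attributions sum to F(x) - F(baseline) = 7
print(sum(attr))
```

For an image model, x is the pixel tensor, the baseline is typically a black image, and grad_fn is the network's gradient with respect to the input.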

Part 3

Using the SHAP method on the COVID data set

The Jupyter notebook https://github.com/cicorias/njit-deeplearn-explain-shap/blob/main/shap-final-run.ipynb can be run directly in Google Colab by choosing the link at the top. This notebook is within this repository and correlates to the content that follows.

One of the central benefits of SHAP is that it allows us to view the important values that contribute to a model's prediction. As such, as we will show in the model's layers below, we will be able to see the key values that lead to an instance being classified as a COVID or non-COVID case. This will help end-users -- in this specific scenario, medical doctors -- understand what components of their X-rays are contributing to classifying patients as COVID patients. For the explanations below, we leveraged the Python SHAP GradientExplainer (https://github.com/slundberg/shap#gradientexplainer) library, which implements an integrated gradients algorithm to explain how various layers contributed to the model's prediction.

We chose convolutional layers early in the model for our visualization and experimentation with SHAP. Most other layers had not shown any pronounced visuals, and some even crashed the library due to input size issues – and given the time, we did not have an opportunity to further debug and fix those crashes. So, we ended up with the following layers:

Chosen Layers

Layer#: 2 - conv1_conv

Layer#: 7 - conv2_block1_1_conv

Layer#: 14 - conv2_block1_0_conv

Layer 2 Explanation

As mentioned above in the use cases for SHAP, model debugging is a prime example of determining how Shapley values are contributing to a model's prediction. To illustrate, using layer 2 from our model as an example, we can see which SHAP values contributed to the prediction. The red SHAP values contributed most to the predicted class at this layer. The blue values did the reverse. In analyzing the six different cases below, we can see where each SHAP value in the image area was helping predict COVID vs. non-COVID cases.

[Figure: layer 2 SHAP value overlays for the six sample x-rays]

Layer 7 Explanation

As we move to layer 7, we can see how the SHAP values contribute further to the class prediction. For the top three COVID images, the sections in red are more prominent in this layer for predicting the COVID class. In two of the three non-COVID images, we see that we don't get strong SHAP values, either positive or negative, contributing to a prediction class.

[Figure: layer 7 SHAP value overlays for the six sample x-rays]

Layer 14 Explanation

The further we go into the model, the more we can see the areas that are strongly contributing to our final predicted class. In layer 14, we notice a similar pattern to layer 7, but a few more areas with strong SHAP values, both blue and red.

[Figure: layer 14 SHAP value overlays for the six sample x-rays]

SHAP Predictions for COVID

As we can see above, SHAP is helpful in determining what could potentially be causing our predictions to behave a certain way. If we were looking to better optimize our results, or even debug why we are getting a specific prediction for a certain image, we can fully diagnose what is swaying our model on a per-layer basis. In our COVID dataset, a medical doctor can see how our model is behaving and tweak the model, remove artifacts that may be inadvertently causing incorrect predictions, or further strengthen the model with more samples having the attributes that show strong Shapley values.

References

  1. Interpretable machine learning. A Guide for Making Black Box Models Explainable, 2019. Molnar, Christoph.

  2. arXiv:1907.09701v2 [cs.LG] 4 Nov 2019, Benchmarking Attribution Methods with Relative Feature Importance

  3. arXiv:1703.01365v2 [cs.LG] 13 Jun 2017, Axiomatic Attribution for Deep Networks

  4. arXiv:1906.02825v2 [cs.CV] 20 Aug 2019, XRAI: Better Attributions Through Regions

  5. arXiv:1705.07874v2 [cs.AI] 25 Nov 2017, A Unified Approach to Interpreting Model Predictions

  6. AI Explanations Whitepaper, 2019, Google Cloud

  7. Introduction to AI Explanations for AI Platform, 2019, Google Cloud

  8. Attributing a deep network's prediction to its input features, 13 Mar 2017, Mukund Sundararajan, Ankur Taly, Qiqi Yan

Appendices

Appendix 1 – config.yaml used for run

PATHS:
RAW_DATA: 'D:/Documents/Piece of work/covid-cxr/information/' # Path containing all 3 raw datasets (Mila, Figure one, RSNA)
MILA_DATA: 'D:/Documents/Piece of work/covid-cxr/data/covid-chestxray-dataset/' # Path of Mila dataset https://github.com/ieee8023/covid-chestxray-dataset
FIGURE1_DATA: 'D:/Documents/Work/covid-cxr/information/Figure1-COVID-chestxray-dataset/' # Path of Effigy 1 dataset https://github.com/agchung/Figure1-COVID-chestxray-dataset
RSNA_DATA: 'D:/Documents/Piece of work/covid-cxr/data/rsna/' # Path of RSNA dataset https://world wide web.kaggle.com/c/rsna-pneumonia-detection-challenge
PROCESSED_DATA: 'information/candy/'
TRAIN_SET: 'data/processed/train_set.csv'
VAL_SET: 'data/processed/val_set.csv'
TEST_SET: 'data/processed/test_set.csv'
IMAGES: 'documents/generated_images/'
LOGS: 'results\\logs\\'
MODEL_WEIGHTS: 'results/models/'
MODEL_TO_LOAD: 'results/models/model.h5'
LIME_EXPLAINER: './data/interpretability/lime_explainer.pkl'
OUTPUT_CLASS_INDICES: './information/interpretability/output_class_indices.pkl'
BATCH_PRED_IMGS: 'information/processed/test/'
BATCH_PREDS: 'results/predictions/'
DATA:
IMG_DIM: [224, 224]
VIEWS: ['PA', 'AP']
VAL_SPLIT: 0.08
TEST_SPLIT: 0.1
NUM_RSNA_IMGS: 1000
CLASSES: ['non-COVID-19', 'COVID-xix'] # Classes for binary classification
#CLASSES: ['normal', 'COVID-19', 'other_pneumonia'] # Classes for multiclass classification (3 classes)
Railroad train:
CLASS_MODE: 'binary' # One of {'binary', 'multiclass'}
MODEL_DEF: 'dcnn_resnet' # 1 of {'dcnn_resnet', 'resnet50v2', 'resnet101v2'}
CLASS_MULTIPLIER: [0.15, ane.0] # Class multiplier for binary classification
#CLASS_MULTIPLIER: [0.iv, 1.0, 0.4] # Class multiplier for multiclass classification (three classes)
EXPERIMENT_TYPE: 'single_train' # Ane of {'single_train', 'multi_train', 'hparam_search'}
BATCH_SIZE: 32
EPOCHS: 200
THRESHOLDS: 0.5 # Can be changed to listing of values in range [0, 1]
PATIENCE: 7
IMB_STRATEGY: 'class_weight' # One of {'class_weight', 'random_oversample'}
METRIC_PREFERENCE: ['auc', 'call up', 'precision', 'loss']
NUM_RUNS: x
NUM_GPUS: 1
NN:
DCNN_BINARY:
KERNEL_SIZE: (iii,3)
STRIDES: (i,1)
INIT_FILTERS: sixteen
FILTER_EXP_BASE: three
MAXPOOL_SIZE: (two,2)
CONV_BLOCKS: 3
NODES_DENSE0: 128
LR: 0.00001
OPTIMIZER: 'adam'
DROPOUT: 0.4
L2_LAMBDA: 0.0001
DCNN_MULTICLASS:
KERNEL_SIZE: (iii,3)
STRIDES: (1,1)
INIT_FILTERS: 16
FILTER_EXP_BASE: iii
MAXPOOL_SIZE: (2,2)
CONV_BLOCKS: iv
NODES_DENSE0: 128
LR: 0.0002
OPTIMIZER: 'adam'
DROPOUT: 0.40
L2_LAMBDA: 0.0001
LIME:
KERNEL_WIDTH: 1.75
FEATURE_SELECTION: 'lasso_path'
NUM_FEATURES: 1000
NUM_SAMPLES: 1000
COVID_ONLY: false
HP_SEARCH:
METRICS: ['accuracy', 'loss', 'recall', 'precision', 'auc']
COMBINATIONS: 50
REPEATS: 2
RANGES:
KERNEL_SIZE: ['(3,3)', '(5,5)'] # Discrete range
MAXPOOL_SIZE: ['(2,2)', '(3,3)'] # Discrete range
INIT_FILTERS: [8, 16, 32] # Discrete range
FILTER_EXP_BASE: [2, 3] # Int range
NODES_DENSE0: [128, 256, 512, 1024] # Discrete range
CONV_BLOCKS: [3, 8] # Int range
DROPOUT: [0.0, 0.1, 0.2, 0.3, 0.4, 0.5] # Discrete range
LR: [-5.0, -3.0] # Real range on log scale (10^x)
OPTIMIZER: ['adam'] # Discrete range
L2_LAMBDA: [0.0, 0.00001, 0.0001, 0.001] # Discrete range
BATCH_SIZE: [16, 32] # Discrete range
IMB_STRATEGY: ['class_weight'] # Discrete range
PREDICTION:
THRESHOLD: 0.5
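One entry in the configuration above is easy to misread: in HP_SEARCH, `LR: [-5.0, -3.0]` is a range of base-10 exponents, not of learning rates; a hyperparameter draw samples an exponent x uniformly from that interval and uses 10^x as the learning rate. A minimal standard-library sketch of that interpretation (a hypothetical helper for illustration, not the project's code):

```python
import random

def sample_learning_rate(log_range=(-5.0, -3.0), rng=random.Random(0)):
    """Draw a learning rate uniformly on a log10 scale.

    `log_range` holds base-10 exponents, mirroring the HP_SEARCH LR entry;
    the returned learning rate is 10**x for a uniform draw of x.
    """
    x = rng.uniform(*log_range)
    return 10.0 ** x

lr = sample_learning_rate()
assert 1e-5 <= lr <= 1e-3  # exponent range [-5, -3] maps to LR range [1e-5, 1e-3]
```

Sampling on a log scale keeps draws evenly spread across orders of magnitude, which is the usual choice for learning-rate search.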

Appendix 2 – Training run output
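The first lines of the run output below show the class-balancing step: the 1489-row training set is randomly oversampled until both classes hold 1461 rows (2922 total), which implies the minority COVID-19 class started at 28 rows. A minimal standard-library sketch of random oversampling (illustrative only; the project resamples a pandas DataFrame):

```python
import random
from collections import Counter

def random_oversample(rows, label_of, rng=random.Random(0)):
    """Duplicate minority-class rows at random until every class
    matches the majority-class count."""
    by_class = {}
    for row in rows:
        by_class.setdefault(label_of(row), []).append(row)
    target = max(len(members) for members in by_class.values())
    balanced = []
    for members in by_class.values():
        balanced.extend(members)
        # rng.choices samples with replacement; k=0 adds nothing.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Toy rows matching the counts implied by the run output: 1461 + 28 = 1489.
rows = [("img%d" % i, "non-COVID-19") for i in range(1461)] + \
       [("img%d" % i, "COVID-19") for i in range(28)]
balanced = random_oversample(rows, label_of=lambda r: r[1])
assert len(balanced) == 2922
assert Counter(r[1] for r in balanced) == {"non-COVID-19": 1461, "COVID-19": 1461}
```

With the config's other option, IMB_STRATEGY: 'class_weight', the rows would instead be left unbalanced and the loss re-weighted per class.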

Train set shape before oversampling:  (1489, 3)
Train set shape after resampling:  (2922, 3)
Found 2922 validated image filenames belonging to 2 classes.
Found 146 validated image filenames belonging to 2 classes.
Found 182 validated image filenames belonging to 2 classes.
Training distribution:  ['Class COVID-19: 1461. ', 'Class non-COVID-19: 1461. ']
MODEL CONFIG:  {'KERNEL_SIZE': '(3,3)', 'STRIDES': '(1,1)', 'INIT_FILTERS': 16, 'FILTER_EXP_BASE': 3, 'MAXPOOL_SIZE': '(2,2)', 'CONV_BLOCKS': 3, 'NODES_DENSE0': 128, 'LR': 1e-05, 'OPTIMIZER': 'adam', 'DROPOUT': 0.4, 'L2_LAMBDA': 0.0001}
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_img (InputLayer)          [(None, 224, 224, 3) 0
conv1_pad (ZeroPadding2D)       (None, 230, 230, 3)  0           input_img[0][0]
conv1_conv (Conv2D)             (None, 112, 112, 64) 9472        conv1_pad[0][0]
pool1_pad (ZeroPadding2D)       (None, 114, 114, 64) 0           conv1_conv[0][0]
pool1_pool (MaxPooling2D)       (None, 56, 56, 64)   0           pool1_pad[0][0]
conv2_block1_preact_bn (BatchNo (None, 56, 56, 64)   256         pool1_pool[0][0]
conv2_block1_preact_relu (Activ (None, 56, 56, 64)   0           conv2_block1_preact_bn[0][0]
conv2_block1_1_conv (Conv2D)    (None, 56, 56, 64)   4096        conv2_block1_preact_relu[0][0]
conv2_block1_1_bn (BatchNormali (None, 56, 56, 64)   256         conv2_block1_1_conv[0][0]
conv2_block1_1_relu (Activation (None, 56, 56, 64)   0           conv2_block1_1_bn[0][0]
conv2_block1_2_pad (ZeroPadding (None, 58, 58, 64)   0           conv2_block1_1_relu[0][0]
conv2_block1_2_conv (Conv2D)    (None, 56, 56, 64)   36864       conv2_block1_2_pad[0][0]
conv2_block1_2_bn (BatchNormali (None, 56, 56, 64)   256         conv2_block1_2_conv[0][0]
conv2_block1_2_relu (Activation (None, 56, 56, 64)   0           conv2_block1_2_bn[0][0]
conv2_block1_0_conv (Conv2D)    (None, 56, 56, 256)  16640       conv2_block1_preact_relu[0][0]
conv2_block1_3_conv (Conv2D)    (None, 56, 56, 256)  16640       conv2_block1_2_relu[0][0]
conv2_block1_out (Add)          (None, 56, 56, 256)  0           conv2_block1_0_conv[0][0]
                                                                 conv2_block1_3_conv[0][0]
conv2_block2_preact_bn (BatchNo (None, 56, 56, 256)  1024        conv2_block1_out[0][0]
conv2_block2_preact_relu (Activ (None, 56, 56, 256)  0           conv2_block2_preact_bn[0][0]
conv2_block2_1_conv (Conv2D)    (None, 56, 56, 64)   16384       conv2_block2_preact_relu[0][0]
conv2_block2_1_bn (BatchNormali (None, 56, 56, 64)   256         conv2_block2_1_conv[0][0]
conv2_block2_1_relu (Activation (None, 56, 56, 64)   0           conv2_block2_1_bn[0][0]
conv2_block2_2_pad (ZeroPadding (None, 58, 58, 64)   0           conv2_block2_1_relu[0][0]
conv2_block2_2_conv (Conv2D)    (None, 56, 56, 64)   36864       conv2_block2_2_pad[0][0]
conv2_block2_2_bn (BatchNormali (None, 56, 56, 64)   256         conv2_block2_2_conv[0][0]
conv2_block2_2_relu (Activation (None, 56, 56, 64)   0           conv2_block2_2_bn[0][0]
conv2_block2_3_conv (Conv2D)    (None, 56, 56, 256)  16640       conv2_block2_2_relu[0][0]
conv2_block2_out (Add)          (None, 56, 56, 256)  0           conv2_block1_out[0][0]
                                                                 conv2_block2_3_conv[0][0]
conv2_block3_preact_bn (BatchNo (None, 56, 56, 256)  1024        conv2_block2_out[0][0]
conv2_block3_preact_relu (Activ (None, 56, 56, 256)  0           conv2_block3_preact_bn[0][0]
conv2_block3_1_conv (Conv2D)    (None, 56, 56, 64)   16384       conv2_block3_preact_relu[0][0]
conv2_block3_1_bn (BatchNormali (None, 56, 56, 64)   256         conv2_block3_1_conv[0][0]
conv2_block3_1_relu (Activation (None, 56, 56, 64)   0           conv2_block3_1_bn[0][0]
conv2_block3_2_pad (ZeroPadding (None, 58, 58, 64)   0           conv2_block3_1_relu[0][0]
conv2_block3_2_conv (Conv2D)    (None, 28, 28, 64)   36864       conv2_block3_2_pad[0][0]
conv2_block3_2_bn (BatchNormali (None, 28, 28, 64)   256         conv2_block3_2_conv[0][0]
conv2_block3_2_relu (Activation (None, 28, 28, 64)   0           conv2_block3_2_bn[0][0]
max_pooling2d (MaxPooling2D)    (None, 28, 28, 256)  0           conv2_block2_out[0][0]
conv2_block3_3_conv (Conv2D)    (None, 28, 28, 256)  16640       conv2_block3_2_relu[0][0]
conv2_block3_out (Add)          (None, 28, 28, 256)  0           max_pooling2d[0][0]
                                                                 conv2_block3_3_conv[0][0]
conv3_block1_preact_bn (BatchNo (None, 28, 28, 256)  1024        conv2_block3_out[0][0]
conv3_block1_preact_relu (Activ (None, 28, 28, 256)  0           conv3_block1_preact_bn[0][0]
conv3_block1_1_conv (Conv2D)    (None, 28, 28, 128)  32768       conv3_block1_preact_relu[0][0]
conv3_block1_1_bn (BatchNormali (None, 28, 28, 128)  512         conv3_block1_1_conv[0][0]
conv3_block1_1_relu (Activation (None, 28, 28, 128)  0           conv3_block1_1_bn[0][0]
conv3_block1_2_pad (ZeroPadding (None, 30, 30, 128)  0           conv3_block1_1_relu[0][0]
conv3_block1_2_conv (Conv2D)    (None, 28, 28, 128)  147456      conv3_block1_2_pad[0][0]
conv3_block1_2_bn (BatchNormali (None, 28, 28, 128)  512         conv3_block1_2_conv[0][0]
conv3_block1_2_relu (Activation (None, 28, 28, 128)  0           conv3_block1_2_bn[0][0]
conv3_block1_0_conv (Conv2D)    (None, 28, 28, 512)  131584      conv3_block1_preact_relu[0][0]
conv3_block1_3_conv (Conv2D)    (None, 28, 28, 512)  66048       conv3_block1_2_relu[0][0]
conv3_block1_out (Add)          (None, 28, 28, 512)  0           conv3_block1_0_conv[0][0]
                                                                 conv3_block1_3_conv[0][0]
conv3_block2_preact_bn (BatchNo (None, 28, 28, 512)  2048        conv3_block1_out[0][0]
conv3_block2_preact_relu (Activ (None, 28, 28, 512)  0           conv3_block2_preact_bn[0][0]
conv3_block2_1_conv (Conv2D)    (None, 28, 28, 128)  65536       conv3_block2_preact_relu[0][0]
conv3_block2_1_bn (BatchNormali (None, 28, 28, 128)  512         conv3_block2_1_conv[0][0]
conv3_block2_1_relu (Activation (None, 28, 28, 128)  0           conv3_block2_1_bn[0][0]
conv3_block2_2_pad (ZeroPadding (None, 30, 30, 128)  0           conv3_block2_1_relu[0][0]
conv3_block2_2_conv (Conv2D)    (None, 28, 28, 128)  147456      conv3_block2_2_pad[0][0]
conv3_block2_2_bn (BatchNormali (None, 28, 28, 128)  512         conv3_block2_2_conv[0][0]
conv3_block2_2_relu (Activation (None, 28, 28, 128)  0           conv3_block2_2_bn[0][0]
conv3_block2_3_conv (Conv2D)    (None, 28, 28, 512)  66048       conv3_block2_2_relu[0][0]
conv3_block2_out (Add)          (None, 28, 28, 512)  0           conv3_block1_out[0][0]
                                                                 conv3_block2_3_conv[0][0]
conv3_block3_preact_bn (BatchNo (None, 28, 28, 512)  2048        conv3_block2_out[0][0]
conv3_block3_preact_relu (Activ (None, 28, 28, 512)  0           conv3_block3_preact_bn[0][0]
conv3_block3_1_conv (Conv2D)    (None, 28, 28, 128)  65536       conv3_block3_preact_relu[0][0]
conv3_block3_1_bn (BatchNormali (None, 28, 28, 128)  512         conv3_block3_1_conv[0][0]
conv3_block3_1_relu (Activation (None, 28, 28, 128)  0           conv3_block3_1_bn[0][0]
conv3_block3_2_pad (ZeroPadding (None, 30, 30, 128)  0           conv3_block3_1_relu[0][0]
conv3_block3_2_conv (Conv2D)    (None, 28, 28, 128)  147456      conv3_block3_2_pad[0][0]
conv3_block3_2_bn (BatchNormali (None, 28, 28, 128)  512         conv3_block3_2_conv[0][0]
conv3_block3_2_relu (Activation (None, 28, 28, 128)  0           conv3_block3_2_bn[0][0]
conv3_block3_3_conv (Conv2D)    (None, 28, 28, 512)  66048       conv3_block3_2_relu[0][0]
conv3_block3_out (Add)          (None, 28, 28, 512)  0           conv3_block2_out[0][0]
                                                                 conv3_block3_3_conv[0][0]
conv3_block4_preact_bn (BatchNo (None, 28, 28, 512)  2048        conv3_block3_out[0][0]
conv3_block4_preact_relu (Activ (None, 28, 28, 512)  0           conv3_block4_preact_bn[0][0]
conv3_block4_1_conv (Conv2D)    (None, 28, 28, 128)  65536       conv3_block4_preact_relu[0][0]
conv3_block4_1_bn (BatchNormali (None, 28, 28, 128)  512         conv3_block4_1_conv[0][0]
conv3_block4_1_relu (Activation (None, 28, 28, 128)  0           conv3_block4_1_bn[0][0]
conv3_block4_2_pad (ZeroPadding (None, 30, 30, 128)  0           conv3_block4_1_relu[0][0]
conv3_block4_2_conv (Conv2D)    (None, 14, 14, 128)  147456      conv3_block4_2_pad[0][0]
conv3_block4_2_bn (BatchNormali (None, 14, 14, 128)  512         conv3_block4_2_conv[0][0]
conv3_block4_2_relu (Activation (None, 14, 14, 128)  0           conv3_block4_2_bn[0][0]
max_pooling2d_1 (MaxPooling2D)  (None, 14, 14, 512)  0           conv3_block3_out[0][0]
conv3_block4_3_conv (Conv2D)    (None, 14, 14, 512)  66048       conv3_block4_2_relu[0][0]
conv3_block4_out (Add)          (None, 14, 14, 512)  0           max_pooling2d_1[0][0]
                                                                 conv3_block4_3_conv[0][0]
conv4_block1_preact_bn (BatchNo (None, 14, 14, 512)  2048        conv3_block4_out[0][0]
conv4_block1_preact_relu (Activ (None, 14, 14, 512)  0           conv4_block1_preact_bn[0][0]
conv4_block1_1_conv (Conv2D)    (None, 14, 14, 256)  131072      conv4_block1_preact_relu[0][0]
conv4_block1_1_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block1_1_conv[0][0]
conv4_block1_1_relu (Activation (None, 14, 14, 256)  0           conv4_block1_1_bn[0][0]
conv4_block1_2_pad (ZeroPadding (None, 16, 16, 256)  0           conv4_block1_1_relu[0][0]
conv4_block1_2_conv (Conv2D)    (None, 14, 14, 256)  589824      conv4_block1_2_pad[0][0]
conv4_block1_2_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block1_2_conv[0][0]
conv4_block1_2_relu (Activation (None, 14, 14, 256)  0           conv4_block1_2_bn[0][0]
conv4_block1_0_conv (Conv2D)    (None, 14, 14, 1024) 525312      conv4_block1_preact_relu[0][0]
conv4_block1_3_conv (Conv2D)    (None, 14, 14, 1024) 263168      conv4_block1_2_relu[0][0]
conv4_block1_out (Add)          (None, 14, 14, 1024) 0           conv4_block1_0_conv[0][0]
                                                                 conv4_block1_3_conv[0][0]
conv4_block2_preact_bn (BatchNo (None, 14, 14, 1024) 4096        conv4_block1_out[0][0]
conv4_block2_preact_relu (Activ (None, 14, 14, 1024) 0           conv4_block2_preact_bn[0][0]
conv4_block2_1_conv (Conv2D)    (None, 14, 14, 256)  262144      conv4_block2_preact_relu[0][0]
conv4_block2_1_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block2_1_conv[0][0]
conv4_block2_1_relu (Activation (None, 14, 14, 256)  0           conv4_block2_1_bn[0][0]
conv4_block2_2_pad (ZeroPadding (None, 16, 16, 256)  0           conv4_block2_1_relu[0][0]
conv4_block2_2_conv (Conv2D)    (None, 14, 14, 256)  589824      conv4_block2_2_pad[0][0]
conv4_block2_2_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block2_2_conv[0][0]
conv4_block2_2_relu (Activation (None, 14, 14, 256)  0           conv4_block2_2_bn[0][0]
conv4_block2_3_conv (Conv2D)    (None, 14, 14, 1024) 263168      conv4_block2_2_relu[0][0]
conv4_block2_out (Add)          (None, 14, 14, 1024) 0           conv4_block1_out[0][0]
                                                                 conv4_block2_3_conv[0][0]
conv4_block3_preact_bn (BatchNo (None, 14, 14, 1024) 4096        conv4_block2_out[0][0]
conv4_block3_preact_relu (Activ (None, 14, 14, 1024) 0           conv4_block3_preact_bn[0][0]
conv4_block3_1_conv (Conv2D)    (None, 14, 14, 256)  262144      conv4_block3_preact_relu[0][0]
conv4_block3_1_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block3_1_conv[0][0]
conv4_block3_1_relu (Activation (None, 14, 14, 256)  0           conv4_block3_1_bn[0][0]
conv4_block3_2_pad (ZeroPadding (None, 16, 16, 256)  0           conv4_block3_1_relu[0][0]
conv4_block3_2_conv (Conv2D)    (None, 14, 14, 256)  589824      conv4_block3_2_pad[0][0]
conv4_block3_2_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block3_2_conv[0][0]
conv4_block3_2_relu (Activation (None, 14, 14, 256)  0           conv4_block3_2_bn[0][0]
conv4_block3_3_conv (Conv2D)    (None, 14, 14, 1024) 263168      conv4_block3_2_relu[0][0]
conv4_block3_out (Add)          (None, 14, 14, 1024) 0           conv4_block2_out[0][0]
                                                                 conv4_block3_3_conv[0][0]
conv4_block4_preact_bn (BatchNo (None, 14, 14, 1024) 4096        conv4_block3_out[0][0]
conv4_block4_preact_relu (Activ (None, 14, 14, 1024) 0           conv4_block4_preact_bn[0][0]
conv4_block4_1_conv (Conv2D)    (None, 14, 14, 256)  262144      conv4_block4_preact_relu[0][0]
conv4_block4_1_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block4_1_conv[0][0]
conv4_block4_1_relu (Activation (None, 14, 14, 256)  0           conv4_block4_1_bn[0][0]
conv4_block4_2_pad (ZeroPadding (None, 16, 16, 256)  0           conv4_block4_1_relu[0][0]
conv4_block4_2_conv (Conv2D)    (None, 14, 14, 256)  589824      conv4_block4_2_pad[0][0]
conv4_block4_2_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block4_2_conv[0][0]
conv4_block4_2_relu (Activation (None, 14, 14, 256)  0           conv4_block4_2_bn[0][0]
conv4_block4_3_conv (Conv2D)    (None, 14, 14, 1024) 263168      conv4_block4_2_relu[0][0]
conv4_block4_out (Add)          (None, 14, 14, 1024) 0           conv4_block3_out[0][0]
                                                                 conv4_block4_3_conv[0][0]
conv4_block5_preact_bn (BatchNo (None, 14, 14, 1024) 4096        conv4_block4_out[0][0]
conv4_block5_preact_relu (Activ (None, 14, 14, 1024) 0           conv4_block5_preact_bn[0][0]
conv4_block5_1_conv (Conv2D)    (None, 14, 14, 256)  262144      conv4_block5_preact_relu[0][0]
conv4_block5_1_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block5_1_conv[0][0]
conv4_block5_1_relu (Activation (None, 14, 14, 256)  0           conv4_block5_1_bn[0][0]
conv4_block5_2_pad (ZeroPadding (None, 16, 16, 256)  0           conv4_block5_1_relu[0][0]
conv4_block5_2_conv (Conv2D)    (None, 14, 14, 256)  589824      conv4_block5_2_pad[0][0]
conv4_block5_2_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block5_2_conv[0][0]
conv4_block5_2_relu (Activation (None, 14, 14, 256)  0           conv4_block5_2_bn[0][0]
conv4_block5_3_conv (Conv2D)    (None, 14, 14, 1024) 263168      conv4_block5_2_relu[0][0]
conv4_block5_out (Add)          (None, 14, 14, 1024) 0           conv4_block4_out[0][0]
                                                                 conv4_block5_3_conv[0][0]
conv4_block6_preact_bn (BatchNo (None, 14, 14, 1024) 4096        conv4_block5_out[0][0]
conv4_block6_preact_relu (Activ (None, 14, 14, 1024) 0           conv4_block6_preact_bn[0][0]
conv4_block6_1_conv (Conv2D)    (None, 14, 14, 256)  262144      conv4_block6_preact_relu[0][0]
conv4_block6_1_bn (BatchNormali (None, 14, 14, 256)  1024        conv4_block6_1_conv[0][0]
conv4_block6_1_relu (Activation (None, 14, 14, 256)  0           conv4_block6_1_bn[0][0]
conv4_block6_2_pad (ZeroPadding (None, 16, 16, 256)  0           conv4_block6_1_relu[0][0]
conv4_block6_2_conv (Conv2D)    (None, 7, 7, 256)    589824      conv4_block6_2_pad[0][0]
conv4_block6_2_bn (BatchNormali (None, 7, 7, 256)    1024        conv4_block6_2_conv[0][0]
conv4_block6_2_relu (Activation (None, 7, 7, 256)    0           conv4_block6_2_bn[0][0]
max_pooling2d_2 (MaxPooling2D)  (None, 7, 7, 1024)   0           conv4_block5_out[0][0]
conv4_block6_3_conv (Conv2D)    (None, 7, 7, 1024)   263168      conv4_block6_2_relu[0][0]
conv4_block6_out (Add)          (None, 7, 7, 1024)   0           max_pooling2d_2[0][0]
                                                                 conv4_block6_3_conv[0][0]
conv5_block1_preact_bn (BatchNo (None, 7, 7, 1024)   4096        conv4_block6_out[0][0]
conv5_block1_preact_relu (Activ (None, 7, 7, 1024)   0           conv5_block1_preact_bn[0][0]
conv5_block1_1_conv (Conv2D)    (None, 7, 7, 512)    524288      conv5_block1_preact_relu[0][0]
conv5_block1_1_bn (BatchNormali (None, 7, 7, 512)    2048        conv5_block1_1_conv[0][0]
conv5_block1_1_relu (Activation (None, 7, 7, 512)    0           conv5_block1_1_bn[0][0]
conv5_block1_2_pad (ZeroPadding (None, 9, 9, 512)    0           conv5_block1_1_relu[0][0]
conv5_block1_2_conv (Conv2D)    (None, 7, 7, 512)    2359296     conv5_block1_2_pad[0][0]
conv5_block1_2_bn (BatchNormali (None, 7, 7, 512)    2048        conv5_block1_2_conv[0][0]
conv5_block1_2_relu (Activation (None, 7, 7, 512)    0           conv5_block1_2_bn[0][0]
conv5_block1_0_conv (Conv2D)    (None, 7, 7, 2048)   2099200     conv5_block1_preact_relu[0][0]
conv5_block1_3_conv (Conv2D)    (None, 7, 7, 2048)   1050624     conv5_block1_2_relu[0][0]
conv5_block1_out (Add)          (None, 7, 7, 2048)   0           conv5_block1_0_conv[0][0]
                                                                 conv5_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_preact_bn (BatchNo (None, vii, 7, 2048)   8192        conv5_block1_out[0][0]            __________________________________________________________________________________________________ conv5_block2_preact_relu (Activ (None, vii, 7, 2048)   0           conv5_block2_preact_bn[0][0]      __________________________________________________________________________________________________ conv5_block2_1_conv (Conv2D)    (None, vii, 7, 512)    1048576     conv5_block2_preact_relu[0][0]    __________________________________________________________________________________________________ conv5_block2_1_bn (BatchNormali (None, 7, vii, 512)    2048        conv5_block2_1_conv[0][0]         __________________________________________________________________________________________________ conv5_block2_1_relu (Activation (None, seven, 7, 512)    0           conv5_block2_1_bn[0][0]           __________________________________________________________________________________________________ conv5_block2_2_pad (ZeroPadding (None, 9, 9, 512)    0           conv5_block2_1_relu[0][0]         __________________________________________________________________________________________________ conv5_block2_2_conv (Conv2D)    (None, vii, 7, 512)    2359296     conv5_block2_2_pad[0][0]          __________________________________________________________________________________________________ conv5_block2_2_bn (BatchNormali (None, 7, 7, 512)    2048        conv5_block2_2_conv[0][0]         __________________________________________________________________________________________________ conv5_block2_2_relu (Activation (None, 7, 7, 512)    0           conv5_block2_2_bn[0][0]           __________________________________________________________________________________________________ conv5_block2_3_conv (Conv2D)    (None, 7, 7, 2048)   1050624     conv5_block2_2_relu[0][0]         __________________________________________________________________________________________________ 
conv5_block2_out (Add)          (None, 7, vii, 2048)   0           conv5_block1_out[0][0]                                                                             conv5_block2_3_conv[0][0]         __________________________________________________________________________________________________ conv5_block3_preact_bn (BatchNo (None, 7, seven, 2048)   8192        conv5_block2_out[0][0]            __________________________________________________________________________________________________ conv5_block3_preact_relu (Activ (None, 7, 7, 2048)   0           conv5_block3_preact_bn[0][0]      __________________________________________________________________________________________________ conv5_block3_1_conv (Conv2D)    (None, seven, 7, 512)    1048576     conv5_block3_preact_relu[0][0]    __________________________________________________________________________________________________ conv5_block3_1_bn (BatchNormali (None, 7, vii, 512)    2048        conv5_block3_1_conv[0][0]         __________________________________________________________________________________________________ conv5_block3_1_relu (Activation (None, vii, seven, 512)    0           conv5_block3_1_bn[0][0]           __________________________________________________________________________________________________ conv5_block3_2_pad (ZeroPadding (None, nine, 9, 512)    0           conv5_block3_1_relu[0][0]         __________________________________________________________________________________________________ conv5_block3_2_conv (Conv2D)    (None, 7, vii, 512)    2359296     conv5_block3_2_pad[0][0]          __________________________________________________________________________________________________ conv5_block3_2_bn (BatchNormali (None, 7, 7, 512)    2048        conv5_block3_2_conv[0][0]         __________________________________________________________________________________________________ conv5_block3_2_relu (Activation (None, 7, vii, 512)    0           conv5_block3_2_bn[0][0]      
     __________________________________________________________________________________________________ conv5_block3_3_conv (Conv2D)    (None, vii, seven, 2048)   1050624     conv5_block3_2_relu[0][0]         __________________________________________________________________________________________________ conv5_block3_out (Add)          (None, 7, 7, 2048)   0           conv5_block2_out[0][0]                                                                             conv5_block3_3_conv[0][0]         __________________________________________________________________________________________________ post_bn (BatchNormalization)    (None, vii, 7, 2048)   8192        conv5_block3_out[0][0]            __________________________________________________________________________________________________ post_relu (Activation)          (None, 7, 7, 2048)   0           post_bn[0][0]                     __________________________________________________________________________________________________ global_average_pooling2d (Globa (None, 2048)         0           post_relu[0][0]                   __________________________________________________________________________________________________ dropout (Dropout)               (None, 2048)         0           global_average_pooling2d[0][0]    __________________________________________________________________________________________________ dumbo (Dense)                   (None, 128)          262272      dropout[0][0]                     __________________________________________________________________________________________________ leaky_re_lu (LeakyReLU)         (None, 128)          0           dense[0][0]                       __________________________________________________________________________________________________ dense_1 (Dumbo)                 (None, 2)            258         leaky_re_lu[0][0]                 __________________________________________________________________________________________________ output 
(Activation)             (None, 2)            0           dense_1[0][0]                     ================================================================================================== Total params: 23,827,330 Trainable params: 23,781,890 Non-trainable params: 45,440 __________________________________________________________________________________________________ Epoch 1/100   1/92 [..............................] - ETA: 12:l - loss: one.0292 - accuracy: 0.5000 - precision: 0.4167 - recollect: 0.8333 - auc: 0.4287 ....  loss: 0.0180 - accuracy: 0.9997 - precision: 0.9993 - recall: i.0000 - auc: 1.0000 - val_loss: 0.1581 - val_accuracy: 0.9795 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_auc: 0.9762  loss  =  0.13487988989800215 accurateness  =  0.9945 precision  =  0.0 recollect  =  0.0 auc  =  0.9814938 True (-)ves:  181  Fake (+)ves:  0  False (-)ves:  ane  True (+)ves:  0                          
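The custom head of this network (everything after `global_average_pooling2d`) is small enough that its parameter counts can be checked by hand. A minimal sketch of that arithmetic, matching the summary rows above (the helper names are illustrative, not functions from the repo):

```python
# Sanity-check the parameter counts reported for the classifier head
# in the model summary above.

def dense_params(in_dim, out_dim):
    # A Dense layer stores in_dim * out_dim weights plus out_dim biases.
    return in_dim * out_dim + out_dim

def bn_params(channels):
    # BatchNormalization keeps 4 values per channel:
    # gamma, beta, moving mean, moving variance.
    return 4 * channels

assert dense_params(2048, 128) == 262272  # "dense (Dense)" row
assert dense_params(128, 2) == 258        # "dense_1 (Dense)" row, 2-class output
assert bn_params(2048) == 8192            # "post_bn" row
```

The two Dense rows account for the bulk of the 45,440 non-trainable parameters being BatchNormalization statistics rather than head weights; the head itself contributes only 262,272 + 258 trainable parameters on top of the ResNet50V2 backbone.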
| Measure | Value | Derivation |
|---------|-------|------------|
| Sensitivity | 0.0000 | TPR = TP / (TP + FN) |
| Specificity | 1.0000 | SPC = TN / (FP + TN) |
| Precision | | PPV = TP / (TP + FP) |
| Negative Predictive Value | 0.9945 | NPV = TN / (TN + FN) |
| False Positive Rate | 0.0000 | FPR = FP / (FP + TN) |
| False Discovery Rate | | FDR = FP / (FP + TP) |
| False Negative Rate | 1.0000 | FNR = FN / (FN + TP) |
| Accuracy | 0.9945 | ACC = (TP + TN) / (P + N) |
| F1 Score | 0.0000 | F1 = 2TP / (2TP + FP + FN) |
| Matthews Correlation Coefficient | | MCC = (TP\*TN - FP\*FN) / sqrt((TP+FP)\*(TP+FN)\*(TN+FP)\*(TN+FN)) |

Precision, FDR, and MCC are blank because this run produced TP = 0 and FP = 0, which makes their denominators zero.
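The metric values above can be recomputed directly from the reported confusion matrix (TN = 181, FP = 0, FN = 1, TP = 0). A short sketch in plain Python; the `safe_div` helper is illustrative, not from the repo:

```python
# Recompute the summary metrics from the confusion matrix reported
# in the training output: TN=181, FP=0, FN=1, TP=0.
tn, fp, fn, tp = 181, 0, 1, 0

def safe_div(num, den):
    # Return num/den, or None when the denominator is zero
    # (precision and FDR are undefined here because TP + FP == 0).
    return num / den if den else None

sensitivity = safe_div(tp, tp + fn)              # TPR
specificity = safe_div(tn, fp + tn)              # SPC
precision   = safe_div(tp, tp + fp)              # PPV -> None for this run
npv         = safe_div(tn, tn + fn)              # Negative Predictive Value
fpr         = safe_div(fp, fp + tn)              # False Positive Rate
fnr         = safe_div(fn, fn + tp)              # False Negative Rate
accuracy    = (tp + tn) / (tp + tn + fp + fn)    # ACC
f1          = safe_div(2 * tp, 2 * tp + fp + fn) # F1

print(round(npv, 4), round(accuracy, 4))
```

Note that sensitivity, precision, and F1 all collapse to zero (or are undefined) despite the 0.9945 accuracy: with only one positive case in the evaluation set, the model can score high accuracy while never predicting the positive class, which is exactly why the per-class measures above matter for this imbalanced data set.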


Source: https://github.com/cicorias/njit-deeplearn-explain-shap

