# Deep GONet

From the article **Deep GONet: Self-explainable deep neural network using Gene Ontology for phenotype prediction from gene expression data** (submitted to ECCB'20).

---

## Description

Deep GONet is a self-explainable neural network that integrates the Gene Ontology into its hierarchical architecture.

## Get started

The code is implemented in Python using the [Tensorflow](https://www.tensorflow.org/) framework v1.12 (see [requirements.txt](https://forge.ibisc.univ-evry.fr/vbourgeais/DeepGONet/blob/master/requirements.txt) for more details).

### Dataset

The full dataset can be downloaded from the ArrayExpress database under accession [E-MTAB-3732](https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-3732/). The pre-processed training and test sets are available here:

[training set](https://entrepot.ibisc.univ-evry.fr/f/5b57ab5a69de4f6ab26b/?dl=1)

[test set](https://entrepot.ibisc.univ-evry.fr/f/057f1ffa0e6c4aab9bee/?dl=1)

### Usage

The model presented in the article was trained with the $L_{GO}$ regularization and the hyperparameter $\alpha=10^{-2}$.
To replicate it, set the command-line flag *type_training* to LGO (the default value) and the flag *alpha* to $10^{-2}$ (the default value).
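Schematically, $\alpha$ weights the $L_{GO}$ penalty against the prediction loss in the training objective (a generic form shown for illustration; see the article for the exact definition of $L_{GO}$):

$$\mathcal{L}_{total} = \mathcal{L}_{prediction} + \alpha \, L_{GO}$$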

Three modes are available through the *processing* flag: *train* trains the model, *evaluate* evaluates it on the test set, and *predict* outputs the predicted outcomes for the samples of the test set.

#### 1) Train

```bash
python DeepGONet.py --type_training="LGO" --alpha=1e-2 --EPOCHS=600 --is_training=True --display_step=10 --save=True --processing="train"
```

#### 2) Evaluate

```bash
python DeepGONet.py --type_training="LGO" --alpha=1e-2 --EPOCHS=600 --is_training=False --restore=True --processing="evaluate"
```

#### 3) Predict

```bash
python DeepGONet.py --type_training="LGO" --alpha=1e-2 --EPOCHS=600 --is_training=False --restore=True --processing="predict"
```

The predicted outcomes are saved as a NumPy array.
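Such an array can be loaded back and inspected with NumPy. A minimal sketch, assuming one predicted label per test sample; the file name `outcomes.npy` is illustrative, not the actual path used by the script (check the *save_dir* flag):

```python
import numpy as np

# Illustrative only: mimic saving predictions as a NumPy array the way the
# script does, then load the array back and inspect it.
predictions = np.array([0, 1, 1, 0])  # e.g. one predicted label per sample
np.save("outcomes.npy", predictions)  # hypothetical file name
loaded = np.load("outcomes.npy")
print(loaded.shape, loaded.dtype)
```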

#### Help

All the details about the command-line flags can be obtained with the following command:

```bash
python DeepGONet.py --help
```

For most of the flags, the default values can be used. *log_dir* and *save_dir* can be changed to your own directories. Only the flags shown in the command lines above need to be adjusted to achieve the desired objective.

### Comparison with a classical fully-connected network using L2 or L1 regularization

The model can also be trained with an L2 or L1 regularization term instead of $L_{GO}$:

```bash
python DeepGONet.py --type_training="L2" --alpha=1e-2 --EPOCHS=600 --is_training=True --display_step=10 --save=True --processing="train"
```

```bash
python DeepGONet.py --type_training="L1" --alpha=1e-2 --EPOCHS=600 --is_training=True --display_step=10 --save=True --processing="train"
```
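For reference, the two penalties differ only in the norm applied to the weights. A generic NumPy sketch of how $\alpha$ scales each term (an illustration of the standard definitions, not the repository's implementation):

```python
import numpy as np

def l2_penalty(weights, alpha):
    # Ridge-style term: alpha * sum of squared weights (shrinks all weights).
    return alpha * float(np.sum(weights ** 2))

def l1_penalty(weights, alpha):
    # Lasso-style term: alpha * sum of absolute weights (promotes sparsity).
    return alpha * float(np.sum(np.abs(weights)))

w = np.array([0.5, -1.0, 2.0])
print(l2_penalty(w, 1e-2))  # 1e-2 * 5.25 = 0.0525
print(l1_penalty(w, 1e-2))  # 1e-2 * 3.5  = 0.035
```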

Without regularization:

```bash
python DeepGONet.py --alpha=0 --EPOCHS=100 --is_training=True --display_step=5 --save=True --processing="train"
```