This repository contains a script for preparing the data (data_conformer.py) and a script for preprocessing and training with it.
The experiment uses a CNN combined with an LSTM for binary classification, where:
1 = exam positive for keratoconus; 0 = exam with normal characteristics.
The software pulls data from the Galilei G6, an ophthalmologic device used for eye biometrics.
- Clone the repository to your local machine.
- Open the 'projeto_oft' folder in your terminal.
- Create a Python 3 virtualenv.
- Activate the virtualenv with 'source .env/bin/activate'.
- Install the dependencies with 'pip install -r requirements_conform.txt'.
- Once the requirements are installed, run the script as follows:
'python etl_task.py --path <patients_dir> --out <output_csv>'
This script takes two arguments:
- --path: the directory containing all patients' exams.
- --out: the path for the output CSV file.
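As a rough sketch of the CLI described above, the two flags can be declared with argparse; this is a hypothetical reconstruction, and the actual etl_task.py implementation may differ:

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of the etl_task.py CLI; only the flag
    # names (--path, --out) come from this README, the rest is assumed.
    parser = argparse.ArgumentParser(
        description="Export Galilei G6 exam data to CSV")
    parser.add_argument("--path", required=True,
                        help="directory containing all patients")
    parser.add_argument("--out", required=True,
                        help="path for the output CSV file")
    return parser

if __name__ == "__main__":
    # Example invocation with placeholder values.
    args = build_parser().parse_args(
        ["--path", "data/patients", "--out", "exams.csv"])
    print(args.path, args.out)
```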
The output is a single CSV file per examination, containing 15 columns and roughly 18,000 rows of measurements.
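A quick way to sanity-check an exported file is to confirm every row has the expected 15 columns. The helper below is illustrative only and not part of the repository:

```python
import csv
import io

def has_expected_columns(fileobj, expected=15):
    # Returns True when the CSV is non-empty and every row has
    # exactly `expected` columns.
    rows = list(csv.reader(fileobj))
    return bool(rows) and all(len(row) == expected for row in rows)

# Toy 3-row sample with 15 hypothetical measurement columns (m0..m14);
# a real export has ~18,000 rows.
sample = io.StringIO(
    "\n".join(",".join(f"m{i}" for i in range(15)) for _ in range(3)))
print(has_expected_columns(sample))  # True
```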
For training, install "requirements_train.txt" into your environment with pip.
Train the model by running:
"python3 train.py --pos_folder <pos_dir> --neg_folder <neg_dir> --job_dir <job_dir>"
Where:
- pos_folder: the directory containing all positive-class samples.
- neg_folder: the directory containing all negative-class samples.
- job_dir: the directory where the model and TensorBoard logs are saved.
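The two class folders map to the labels defined earlier (1 = keratoconus, 0 = normal). A minimal sketch of that mapping, assuming one CSV per exam; train.py's actual loading code may differ:

```python
import tempfile
from pathlib import Path

def label_samples(pos_folder, neg_folder):
    # Pair each exam file with its binary label: 1 for the positive
    # (keratoconus) folder, 0 for the negative (normal) folder.
    pos = [(p, 1) for p in sorted(Path(pos_folder).glob("*.csv"))]
    neg = [(p, 0) for p in sorted(Path(neg_folder).glob("*.csv"))]
    return pos + neg

# Demo with throwaway directories and empty placeholder files.
with tempfile.TemporaryDirectory() as d:
    pos_dir, neg_dir = Path(d, "pos"), Path(d, "neg")
    pos_dir.mkdir()
    neg_dir.mkdir()
    (pos_dir / "exam_a.csv").touch()
    (neg_dir / "exam_b.csv").touch()
    pairs = label_samples(pos_dir, neg_dir)
    print([(p.name, y) for p, y in pairs])  # [('exam_a.csv', 1), ('exam_b.csv', 0)]
```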
Make predictions with the trained model by running:
"python3 predict.py --test_folder <test_dir> --model_path <model_path>"
Where:
- test_folder: the directory containing your test data.
- model_path: the path to your Keras model (.h5 file).
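For a binary classifier like this one, the model's sigmoid output is typically thresholded to produce the final label. The 0.5 cutoff below is an assumption for illustration; predict.py may apply a different rule:

```python
def to_label(probability, threshold=0.5):
    # Map a sigmoid output to the classes defined at the top of this
    # README: 1 = keratoconus, 0 = normal characteristics.
    # The 0.5 threshold is an assumption, not taken from predict.py.
    return 1 if probability >= threshold else 0

print(to_label(0.83))  # 1 -> exam flagged as keratoconus
print(to_label(0.12))  # 0 -> normal characteristics
```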