Vision Dataset: The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal). Please find your dataset from the link - https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia
Steps For Implementation
1. Import Libraries/Dataset
Import the required libraries and the dataset (use Google Drive if required).
Check the GPU available (recommended- use free GPU provided by Google Colab).
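A minimal sketch of the GPU check, assuming a TensorFlow/Keras workflow (as implied by the `rotation_range`/`height_shift_range` parameters referenced below):

```python
import tensorflow as tf

# List the physical GPUs visible to TensorFlow; on a Colab CPU runtime
# this is an empty list, on a GPU runtime it names the device.
gpus = tf.config.list_physical_devices('GPU')
print(f"GPUs available: {len(gpus)}")
```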
2. Data Visualization and Augmentation
Plot at least two samples from each class of the dataset (use matplotlib/seaborn/any other library).
Apply rotation and height-shift augmentation (rotation_range, height_shift_range) to the dataset separately. Print the augmented image and the original image for each class and each augmentation.
Bring the train and test data in the required format.
Print the shapes of train and test data.
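One way to apply the two augmentations separately is Keras' `ImageDataGenerator` (an assumption — any equivalent augmentation utility works). A random array stands in for a batch of X-ray images here:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

images = np.random.rand(4, 64, 64, 1)  # placeholder batch (N, H, W, C)

rotate_gen = ImageDataGenerator(rotation_range=30)      # rotation only
shift_gen = ImageDataGenerator(height_shift_range=0.2)  # height shift only

# Each generator yields augmented copies of the batch; shapes are unchanged,
# so originals and augmented images can be plotted side by side.
rotated = next(rotate_gen.flow(images, batch_size=4, shuffle=False))
shifted = next(shift_gen.flow(images, batch_size=4, shuffle=False))
print(rotated.shape, shifted.shape)
```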
3. Model Building
Sequential model layers: use AT LEAST 3 hidden layers with appropriate input for each. Choose the best number of hidden units and give reasons.
Add L2 regularization to all the layers.
Add one layer of dropout at the appropriate position and give reasons.
Choose the appropriate activation function for all the layers.
Print the model summary.
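The four points above can be sketched as one dense-only architecture (CNNs are disallowed): three hidden layers with L2 regularization on each, and a single dropout layer placed before the output. The unit counts and the 150x150 input size are assumptions to be tuned for the actual images:

```python
from tensorflow.keras import Sequential, layers, regularizers

model = Sequential([
    layers.Flatten(input_shape=(150, 150, 1)),   # flatten grayscale image
    layers.Dense(256, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(128, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),                         # regularize just before output
    layers.Dense(1, activation='sigmoid'),       # binary: Pneumonia/Normal
])
model.summary()
```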
4. Model Compilation
Compile the model with the appropriate loss function.
Use an appropriate optimizer. Give reasons for the choice of learning rate and its value.
Use accuracy as a metric.
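For a binary Pneumonia/Normal task, binary cross-entropy is the natural loss; Adam with its default learning rate of 1e-3 is a common starting point (both choices are assumptions you should justify in your own words). A compile sketch on a tiny placeholder model:

```python
from tensorflow.keras import Sequential, layers
from tensorflow.keras.optimizers import Adam

model = Sequential([layers.Dense(16, activation='relu', input_shape=(10,)),
                    layers.Dense(1, activation='sigmoid')])
model.compile(optimizer=Adam(learning_rate=1e-3),
              loss='binary_crossentropy',
              metrics=['accuracy'])
```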
5. Model Training
Train the model for an appropriate number of epochs. Print the train and validation accuracy and loss for each epoch. Use the appropriate batch size.
Plot the loss and accuracy history graphs for both train and validation set. Print the total time taken for training.
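A training sketch on random placeholder data: `fit()` returns a History object whose `.history` dict holds the per-epoch loss/accuracy needed for the plots, and `time()` brackets give the total training time. The epoch count and batch size here are placeholders:

```python
import time
import numpy as np
from tensorflow.keras import Sequential, layers

x = np.random.rand(64, 10)                 # placeholder features
y = np.random.randint(0, 2, size=(64,))    # placeholder binary labels

model = Sequential([layers.Dense(16, activation='relu', input_shape=(10,)),
                    layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

start = time.time()
history = model.fit(x, y, validation_split=0.2, epochs=3,
                    batch_size=16, verbose=0)
print(f"Training took {time.time() - start:.1f}s")

# history.history['loss'], ['val_loss'], ['accuracy'], ['val_accuracy']
# can be passed straight to matplotlib for the history graphs.
```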
6. Model Evaluation
Print the final train and validation loss and accuracy. Print the confusion matrix and classification report for the validation dataset. Analyse and report the best- and worst-performing classes.
Print the two most incorrectly classified images for each class in the test dataset.
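An evaluation sketch with scikit-learn on toy labels: `confusion_matrix` and `classification_report` give the per-class view needed to identify the best- and worst-performing class, and the mismatch indices locate misclassified images for display:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

y_true = np.array([0, 0, 1, 1, 1, 0])   # toy ground truth
y_pred = np.array([0, 1, 1, 1, 0, 0])   # toy predictions

cm = confusion_matrix(y_true, y_pred)
print(cm)
print(classification_report(y_true, y_pred,
                            target_names=['Normal', 'Pneumonia']))

# Indices of misclassified samples (use these to print the worst
# predictions per class from the test set):
wrong = np.where(y_true != y_pred)[0]
print(wrong)  # → [1 4]
```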
Build two more additional models by changing the following hyperparameters ONE at a time. Write the code for Model Building, Model Compilation, Model Training and Model Evaluation as given in the instructions above for each additional model.
Optimizer: use a different optimizer with an appropriate learning-rate value.
Network Depth: Change the number of hidden layers and hidden units for each layer.
Write a comparison between each model and give reasons for the difference in results.
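The "change one hyperparameter at a time" idea can be sketched as the same architecture recompiled with SGD in place of Adam (the learning rate of 1e-2 is an assumption you must justify); a network-depth variant would instead change only the layer list in the same way:

```python
from tensorflow.keras import Sequential, layers
from tensorflow.keras.optimizers import SGD

def build_model():
    # Identical architecture for every variant, so only the one
    # hyperparameter under study differs between models.
    return Sequential([layers.Dense(16, activation='relu', input_shape=(10,)),
                       layers.Dense(1, activation='sigmoid')])

model_sgd = build_model()
model_sgd.compile(optimizer=SGD(learning_rate=1e-2),
                  loss='binary_crossentropy', metrics=['accuracy'])
```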
Question No.2. Dataset: (Data set)
Load the attached csv file in python. Each row consists of feature 1, feature 2, feature 3 & class label.
Train two single/double hidden layer deep networks by varying the number of hidden nodes (4, 8, 12, 16) in each layer with 70% training and 30% validation data. Use appropriate learning rate, activation, and loss functions and also mention the reason for choosing the same. Report, compare, and explain the observed accuracy and minimum loss achieved.
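The hidden-node sweep can be sketched as follows; synthetic 3-feature data stands in for the attached CSV, and `validation_split=0.3` gives the 70/30 split:

```python
import numpy as np
from tensorflow.keras import Sequential, layers

x = np.random.rand(100, 3)                 # stand-in for feature 1/2/3
y = np.random.randint(0, 2, size=(100,))   # stand-in for the class label

results = {}
for units in (4, 8, 12, 16):
    model = Sequential([layers.Dense(units, activation='relu',
                                     input_shape=(3,)),
                        layers.Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    h = model.fit(x, y, validation_split=0.3, epochs=2, verbose=0)
    results[units] = min(h.history['loss'])  # minimum loss per model
print(results)
```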
Visually observe the dataset and design an appropriate feature transformation (derived feature) such that after feature transformation, the dataset can be classified using a minimal network architecture (minimum number of parameters). Design, train this minimal network, and report training and validation errors, and trained parameters of the network. Use 70% training and 30% validation data, appropriate learning rate, activation and loss functions. Explain the final results.
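As one hypothetical illustration: IF visual inspection showed the classes were radially separable, a squared-radius derived feature r2 = f1^2 + f2^2 + f3^2 would make them linearly separable, so a single sigmoid unit (2 trainable parameters) would suffice. The actual transformation must come from plotting the attached CSV; the data below is synthetic:

```python
import numpy as np
from tensorflow.keras import Sequential, layers

features = np.random.rand(100, 3)
labels = (np.sum(features ** 2, axis=1) > 1.0).astype(int)  # synthetic stand-in

r2 = np.sum(features ** 2, axis=1, keepdims=True)  # derived feature

# Minimal network: one unit on one derived feature = 1 weight + 1 bias.
tiny = Sequential([layers.Dense(1, activation='sigmoid', input_shape=(1,))])
tiny.compile(optimizer='adam', loss='binary_crossentropy',
             metrics=['accuracy'])
tiny.fit(r2, labels, validation_split=0.3, epochs=2, verbose=0)
print(tiny.count_params())  # → 2
```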
Evaluation Process -
Task Response and Task Completion- All models should be logically sound and achieve decent accuracy. Models exhibiting random guessing, frozen or incorrect accuracy, exploding gradients, etc. will lead to a deduction of marks; please sanity-check your model and results before submission.
There are a lot of subparts, so answer each completely and correctly, as no partial marks will be awarded for partially correct subparts.
Implementation- The model layers, parameters, hyperparameters, evaluation metrics etc. should be properly implemented.
Only fully connected or dense layers are allowed. CNNs/RNNs are strictly not allowed.
Notebooks without output will not be considered for evaluation.