Build, Train, and Evaluate a Deep Neural Network Using the Dogs-Cats-Images Dataset | Deep Learning



Task 1:

Vision Dataset: Please find your dataset from the link -

 https://www.kaggle.com/chetankv/dogs-cats-images

 

1. Import Libraries/Dataset

  • Import the required libraries and the dataset (use Google Drive if required).

  • Check the GPU available (recommended- use free GPU provided by Google Colab).
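As a sketch of the GPU check (assuming the TensorFlow/Keras stack, which this brief implies but does not mandate):

```python
import tensorflow as tf

# List the GPU devices TensorFlow can see; on Colab, enable one via
# Runtime -> Change runtime type -> GPU. An empty list means CPU-only.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", gpus if gpus else "none (running on CPU)")
```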


2. Data Visualization and Augmentation

  • Plot at least two samples from each class of the dataset (use matplotlib/seaborn/any other library).

  • Apply rotation and height-shift augmentation (rotation_range, height_shift_range) to the dataset separately. Print the augmented image and the original image for each class and each augmentation.

  • Bring the train and test data into the required format.

  • Print the shapes of train and test data.
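The two augmentations can be sketched with Keras's `ImageDataGenerator`; the random array here is a stand-in for a real dog/cat image, and the 64×64 size is an illustrative assumption:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in image batch (1 sample, 64x64 RGB); in the assignment this would
# be a real image loaded from the dogs-cats-images directory structure.
image = np.random.rand(1, 64, 64, 3).astype("float32")

# Apply each augmentation separately, as the brief asks.
rotate = ImageDataGenerator(rotation_range=40)       # random rotation up to 40 degrees
shift = ImageDataGenerator(height_shift_range=0.2)   # vertical shift up to 20% of height

rotated = next(rotate.flow(image, batch_size=1))
shifted = next(shift.flow(image, batch_size=1))

print("original:", image.shape, "rotated:", rotated.shape, "shifted:", shifted.shape)
```

Plotting `image[0]`, `rotated[0]`, and `shifted[0]` side by side with matplotlib satisfies the "print the augmented and original image" requirement.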


3. Model Building

  • Sequential Model layers- Use AT LEAST 3 hidden layers with appropriate input for each. Choose the best number for hidden units and give reasons.

  • Add L2 regularization to all the layers.

  • Add one layer of dropout at the appropriate position and give reasons.

  • Choose the appropriate activation function for all the layers.

  • Print the model summary.
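One possible shape for such a model (dense-only, since the evaluation rules forbid CNNs/RNNs; the 64×64 input size, the 256→128→64 funnel, and the 0.3 dropout rate are illustrative choices, not requirements):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_model(input_shape=(64, 64, 3), l2=1e-4):
    # Dense-only network: flatten the image, then three hidden layers that
    # narrow progressively (256 -> 128 -> 64), each with L2 regularization.
    # Dropout sits after the widest layers, where overfitting risk is highest.
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Flatten(),
        layers.Dense(256, activation="relu", kernel_regularizer=regularizers.l2(l2)),
        layers.Dense(128, activation="relu", kernel_regularizer=regularizers.l2(l2)),
        layers.Dropout(0.3),
        layers.Dense(64, activation="relu", kernel_regularizer=regularizers.l2(l2)),
        layers.Dense(1, activation="sigmoid"),  # binary output: dog vs cat
    ])

model = build_model()
model.summary()
```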

 

4. Model Compilation (0.25 mark)

  • Compile the model with the appropriate loss function.

  • Use an appropriate optimizer. Give reasons for the choice of learning rate and its value.

  • Use accuracy as a metric.
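A minimal compilation sketch (the tiny model here is a stand-in for the network built above; Adam at its default 1e-3 learning rate is one defensible choice you would still need to justify in your write-up):

```python
import tensorflow as tf

# Stand-in model; in the assignment, compile the network from Model Building.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# A sigmoid output for a two-class problem pairs with binary cross-entropy.
# Adam's default 1e-3 learning rate is a common starting point: large enough
# to converge in few epochs, small enough to stay stable.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
print("learning rate:", float(model.optimizer.learning_rate.numpy()))
```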


5. Model Training

  • Train the model for an appropriate number of epochs. Print the train and validation accuracy and loss for each epoch. Use the appropriate batch size.

  • Plot the loss and accuracy history graphs for both train and validation set. Print the total time taken for training.
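The training step with per-epoch logging and total timing might look like this; the synthetic arrays stand in for the prepared image data, and the epoch/batch-size values are placeholders:

```python
import time
import numpy as np
import tensorflow as tf

# Tiny synthetic stand-in data; in the assignment, use the prepared arrays.
x_train = np.random.rand(64, 8).astype("float32")
y_train = (x_train.sum(axis=1) > 4).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

start = time.time()
history = model.fit(x_train, y_train, epochs=3, batch_size=16,
                    validation_split=0.25, verbose=2)  # logs loss/acc per epoch
print(f"Total training time: {time.time() - start:.1f}s")

# history.history holds the per-epoch curves needed for the loss/accuracy
# plots: keys 'loss', 'accuracy', 'val_loss', 'val_accuracy'.
print(sorted(history.history.keys()))
```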

6. Model Evaluation (0.5 + 0.5 = 1 mark)

  • Print the final train and validation loss and accuracy. Print confusion matrix and classification report for the validation dataset. Analyse and report the best and worst performing class.

  • Print the two most confidently misclassified images for each class in the test dataset.
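The evaluation step can be sketched with scikit-learn; the hard-coded labels and predictions below are placeholders for thresholded `model.predict` output on the validation set:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Stand-in labels/predictions; in the assignment these come from
# (model.predict(x_val) > 0.5) on the validation data.
y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])

cm = confusion_matrix(y_true, y_pred)
print(cm)
print(classification_report(y_true, y_pred, target_names=["cat", "dog"]))

# Per-class recall (diagonal over row sums) identifies the best- and
# worst-performing class for the analysis the brief asks for.
recall = cm.diagonal() / cm.sum(axis=1)
print("per-class recall:", recall)
```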


Hyperparameter Tuning- 

Build two additional models by changing the following hyperparameters ONE at a time. Write the code for Model Building, Model Compilation, Model Training, and Model Evaluation as given in the instructions above for each additional model.

  1. Optimiser: Use a different optimizer with the appropriate LR value.

  2. Network Depth: Change the number of hidden layers and hidden units for each layer.

Write a comparison between each model and give reasons for the difference in results.


Task 2. (Sentiment Analysis)

NLP Dataset: Sentiment Analysis dataset - 1.6 million tweets. The column 'text' holds the tweet and 'target' gives the sentiment of the text. Please find your dataset from the link -

https://www.kaggle.com/kazanova/sentiment140


Import Libraries/Dataset

  • Import the required libraries and the dataset (use Google Drive if required).

  • Check the GPU available (recommended- use free GPU provided by Google Colab).


Data Visualization

  • Print at least two records from each class of the dataset, for a sanity check that labels match the text.

  • Plot a bar graph of class distribution in the dataset. Each bar depicts the number of records belonging to a particular class in the dataset. (recommended - matplotlib/seaborn libraries)

  • Any other visualizations that seem appropriate for this problem are encouraged, but not necessary for the points.

  • Print the shapes of the train and test data.
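The class-distribution bar graph might be sketched as follows; the tiny DataFrame stands in for the full sentiment140 frame, whose `target` column uses 0 for negative and 4 for positive:

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend; not needed inside Colab
import matplotlib.pyplot as plt
import pandas as pd

# Stand-in for the sentiment140 DataFrame loaded from the Kaggle CSV.
df = pd.DataFrame({"target": [0, 0, 4, 4, 4, 0, 4],
                   "text": ["placeholder tweet"] * 7})

# One bar per class, showing the number of records in that class.
counts = df["target"].value_counts().sort_index()
plt.bar(counts.index.astype(str), counts.values)
plt.xlabel("class")
plt.ylabel("number of records")
plt.title("Class distribution")
plt.savefig("class_distribution.png")
print(counts.to_dict())
```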


Data Pre-processing

  • Need for this step - the models we use cannot accept string inputs, so the text must first be converted to a numeric representation. A discussion of the different ways of handling this step is out of the scope of this assignment.

  • Please use this pre-trained embedding layer from TensorFlow Hub for this assignment. This link also has a code snippet on how to convert a sentence to a vector. Refer to that for further clarity on this subject.

  • Bring the train and test data into the required format.
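The TF Hub link itself is not reproduced in this post, so as a purely local stand-in for the "sentence → fixed-length vector" step, here is a sketch using a trainable Keras embedding; the hub layer would replace the vectorize + embed + pool stack below with a single pre-trained layer that takes raw strings directly:

```python
import tensorflow as tf

# Toy sentences standing in for the 'text' column of sentiment140.
sentences = tf.constant(["i love this", "this is awful", "great day", "so bad"])

# Build a vocabulary from the text, pad/truncate to 8 tokens per sentence.
vectorize = tf.keras.layers.TextVectorization(max_tokens=1000,
                                              output_sequence_length=8)
vectorize.adapt(sentences)

# Map token ids to 16-d vectors, then average into one vector per sentence.
embed = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
])

vectors = embed(sentences)
print(vectors.shape)  # one fixed-length vector per input sentence
```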


Model Building

  • Sequential Model layers- Use AT LEAST 3 hidden layers with appropriate input for each. Choose the best number for hidden units and give reasons.

  • Add L2 regularization to all the layers.

  • Add one layer of dropout at the appropriate position and give reasons.

  • Choose the appropriate activation function for all the layers.

  • Print the model summary.


Model Compilation

  • Compile the model with the appropriate loss function.

  • Use an appropriate optimizer. Give reasons for the choice of learning rate and its value.

  • Use accuracy as a metric.


Model Training

  • Train the model for an appropriate number of epochs. Print the train and validation accuracy and loss for each epoch. Use the appropriate batch size.

  • Plot the loss and accuracy history graphs for both train and validation set. Print the total time taken for training.


Model Evaluation

  • Print the final train and validation loss and accuracy. Print confusion matrix and classification report for the validation dataset. Analyse and report the best and worst performing class.

  • Print the two most confidently misclassified records for each class in the test dataset.

Hyperparameter Tuning-

Build two more models by changing the following hyperparameters one at a time. Write the code for Model Building, Model Compilation, Model Training and Model Evaluation as given in the instructions above for each additional model.


  1. Regularization: Train a model without regularization.

  2. Dropout: Change the position and value of the dropout layer.

Write a comparison between each model and give reasons for the difference in results.



Task 3:

1. Load the attached csv file in python. Each row consists of feature 1, feature 2, class label.

 

2. Train two single/double hidden layer deep networks by varying the number of hidden nodes (4, 8, 12, 16) in each layer with 70% training and 30% validation data. Use appropriate learning rate, activation, and loss functions and also mention the reason for choosing the same. Report, compare, and explain the observed accuracy and minimum loss achieved.
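A sketch of the sweep over hidden-node counts, using a synthetic two-feature dataset in place of the attached CSV (a ring-shaped class boundary is assumed here purely for illustration):

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the attached CSV (feature 1, feature 2, class label).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2)).astype("float32")
y = ((X ** 2).sum(axis=1) < 2).astype("float32")

# 70% training / 30% validation, as the task specifies.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def make_net(hidden):
    # Single hidden layer; sweep `hidden` over (4, 8, 12, 16).
    m = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(hidden, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer=tf.keras.optimizers.Adam(1e-2),
              loss="binary_crossentropy", metrics=["accuracy"])
    return m

results = {}
for hidden in (4, 8, 12, 16):
    model = make_net(hidden)
    model.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0)
    results[hidden] = model.evaluate(X_va, y_va, verbose=0)[1]
print("validation accuracy by hidden-node count:", results)
```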


3. Visually observe the dataset and design an appropriate feature transformation (derived feature) such that after feature transformation, the dataset can be classified using a minimal network architecture (minimum number of parameters). Design, train this minimal network, and report training and validation errors, and trained parameters of the network. Use 70% training and 30% validation data, appropriate learning rate, activation and loss functions. Explain the final results.
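If (and only if) the scatter plot reveals a roughly circular class boundary, the derived feature r² = x1² + x2² makes the data separable in one dimension, and a single sigmoid unit (two trainable parameters) suffices; this sketch reuses the same synthetic stand-in data, since the real CSV is not attached here:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Synthetic stand-in data with an assumed circular boundary.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2)).astype("float32")
y = ((X ** 2).sum(axis=1) < 2).astype("float32")

# Derived feature: squared radius. A circular boundary becomes a
# simple threshold on this single feature.
r2 = (X ** 2).sum(axis=1, keepdims=True)
r2_tr, r2_va, y_tr, y_va = train_test_split(r2, y, test_size=0.3, random_state=0)

# Minimal network: one sigmoid unit on the derived feature (1 weight + 1 bias).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(r2_tr, y_tr, epochs=50, verbose=0)

loss, acc = model.evaluate(r2_va, y_va, verbose=0)
print("trained parameters:", model.layers[0].get_weights())
print("validation loss:", loss, "validation accuracy:", acc)
```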


Evaluation Process -

1. Task Response and Task Completion - All the models should be logically sound and have decent accuracy (models with random guessing, frozen or incorrect accuracy, exploding gradients, etc. will lead to a deduction of marks). Please do a sanity check of your model and results before submission.


2. There are a lot of subparts, so answer each completely and correctly; no partial marks will be awarded for partially correct subparts.


3. Implementation- The model layers, parameters, hyperparameters, evaluation metrics etc. should be properly implemented.


4. Only fully connected or dense layers are allowed. CNNs/RNNs are strictly not allowed.


5. Notebooks without output will not be considered for evaluation.


Additional Tips -

1. Code organization- Please organize your code with correct line spacing and indentation, and add comments to make your code more readable.

2. Try to give explanations or cite references wherever required.

3. Use other combinations of hyperparameters to improve model accuracy.



Contact Us: +91 82 67 81 38 69

Email: realcode4you@gmail.com

And get instant help at affordable prices.



#DeepLearningAssignmentHelp #MachineLearningAssignmentHelp #PythonAssignmentHelp

