According to Khan et al. (2020), although facial-recognition technology is well known as a technique for assessing people, and a contentious one at that given concerns about privacy, accuracy, and prejudice, animal face detection is one of numerous initiatives to adapt it for use with wild and domestic animals. Proponents argue that it is a cheaper, longer-lasting, less intrusive, and less hazardous way of tracking animals than, for example, fitting a collar or piercing an ear to attach an RFID tag.

As per Nada et al. (2018), Animal Faces-HQ (AFHQ) is a dataset comprising 16,130 high-quality photographs of animals. There are three class domains, each with around 5,000 images. With multiple (three) domains and photographs of distinct breeds within each domain, AFHQ poses a challenging image-to-image translation problem. Nada et al. (2018) also define image classification as the process of classifying and labelling groups of pixels or vectors within an image according to predefined criteria. One or more spectral or textural properties may be used to develop the classification rules, and there are two broad approaches: 'supervised' and 'unsupervised'.

As per Khan et al. (2020), the two most commonly used object detection algorithms are HOG and YOLO. HOG is a feature descriptor that has proven effective with SVMs and similar machine-learning methods, while YOLO is used with deep-learning neural networks. Nada et al. (2018) describe object detection's primary task as determining and locating one or more targets of interest within still images or video; it draws on a broad range of methods, including image analysis, pattern recognition, artificial intelligence, and machine learning. Loos and Ernst (2013) note that the objective of binary classification is to divide the members of a collection into two categories using a classification rule.
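To make the HOG idea above concrete, the sketch below computes a toy histogram of gradient orientations for a single image patch in NumPy. It is illustrative only: real HOG divides the image into cells and normalised blocks, and the function name and parameters here are my own, not from the cited papers.

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Toy HOG-style descriptor: a histogram of gradient orientations
    for one image patch, weighted by gradient magnitude.
    (Illustrative only; real HOG adds cells, blocks, and block
    normalisation before feeding the features to an SVM.)"""
    gy, gx = np.gradient(patch.astype(float))      # per-pixel gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-8)              # normalise to unit mass

# A vertical edge puts all its energy into the 0-degree (horizontal
# gradient) bin:
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = orientation_histogram(patch)
```

A full HOG+SVM detector would compute such histograms over a grid of cells, concatenate them into one feature vector, and train a linear SVM on face versus non-face patches.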
Face detection, on the other hand, helps determine which areas of images and videos should be focused on when estimating an individual's age, gender, and emotions from facial expressions. As per Khan et al. (2020), facial recognition is a technology-based method of identifying a human face: a face-recognition system maps facial characteristics from an image or video using biometrics, then compares the data against a database of known individuals to check for a match. Such face-detection technology can be used on farms to differentiate animals and to inform changes in breeding units. The use of sensors to gather biometric data in order to quantify animal expressions is a growing area of interest in agricultural technology, and there are numerous facets to the use of sensing systems for monitoring animal expressions (Viola & Jones, 2001). Because a simple emotion measurement based on an animal's facial features or physiological processes cannot adequately capture a farm animal's emotional changes, more complex expression-recognition measurement is necessary. Khan et al. (2020) suggest several approaches for integrating sensor technology into efficient sensing and data pipelines and for quantifying animals' complex expressions through sensor fusion. Binary classification, implemented in Python, can be used to separate the data and build the face-detection system. This technology can benefit livestock farming (the digitisation of livestock and farming) and can also be used to recognise pets reported missing (Loos & Ernst, 2013).
The aim of this project is to locate and recognise animal faces, using animal-face datasets alongside other animal and human datasets.
The objectives of this project are:
- To review the literature on this topic.
- To gather/select suitable datasets.
- To explore the field of face detection.
- To evaluate theoretical approaches and models to differentiate the datasets.
- To build an animal face / no animal face classifier.
- To apply image-classification techniques to the selected datasets.
Face detection is a form of computer vision that identifies the position and size of a face within a digital picture. Facial features are recognised, while all other objects in the image, such as trees, buildings, and bodies, are disregarded.
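One simple way to see how "position and size" emerge from a classifier is a sliding-window detector, sketched below in plain NumPy. The `score_fn` here is a deliberately crude stand-in (mean brightness) for a trained face/no-face classifier such as Viola-Jones or a CNN; the function names and threshold are assumptions for illustration.

```python
import numpy as np

def sliding_window_detect(image, window=24, stride=12, score_fn=None):
    """Minimal sliding-window detection sketch: slide a fixed-size
    window over the image, score each window with a face/no-face
    classifier, and keep boxes whose score exceeds a threshold.
    `score_fn` stands in for a trained classifier (hypothetical)."""
    h, w = image.shape
    detections = []
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = image[y:y + window, x:x + window]
            score = score_fn(patch)
            if score > 0.5:  # detection threshold (arbitrary here)
                detections.append((x, y, window, window, score))
    return detections

# Stand-in "classifier": mean brightness. A bright square plays the
# role of a face; only the window centred on it scores above 0.5.
img = np.zeros((48, 48))
img[12:36, 12:36] = 1.0
boxes = sliding_window_detect(img, score_fn=lambda p: p.mean())
```

Real detectors add multi-scale windows and non-maximum suppression, and replace the brightness score with a learned classifier.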
Currently, no 'benchmarks' or scientific evaluations exist for evaluating and quantifying farm animal reactions and expressions.
Add some information about the animal faces dataset from Kaggle. I suggest combining it with another dataset (CIFAR-10, which contains animal pictures but not face shots, or even CelebA for human faces) so that you can build an "animal face / no animal face" classifier using a deep-learning model, e.g. a convolutional neural network (CNN). This means framing the project as "does this image contain an animal face or not?", i.e. using the Kaggle dataset for images that contain animal faces and the CIFAR-10 dataset for animal pictures that are not front-on animal faces.
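Combining the two sources amounts to building one labelled index in which every image from the animal-faces folder gets label 1 and every image from the other folder gets label 0. The sketch below assumes a simple on-disk layout (the directory names are placeholders, not the datasets' actual structure):

```python
from pathlib import Path

def build_binary_index(face_dir, no_face_dir, exts=(".jpg", ".png")):
    """Combine two image folders into one labelled index for the
    'animal face / no animal face' task: 1 = animal face, 0 = no face.
    The directory layout is an assumption; adapt it to the actual
    downloaded Kaggle and CIFAR-10 folders."""
    samples = []
    for label, root in ((1, Path(face_dir)), (0, Path(no_face_dir))):
        for p in sorted(root.rglob("*")):
            if p.suffix.lower() in exts:
                samples.append((str(p), label))
    return samples

# Example (hypothetical paths):
# samples = build_binary_index("afhq/train", "cifar10_non_faces")
```

The resulting `(path, label)` list can then be shuffled, split into train/validation sets, and fed to an image loader.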
Explain why you will be doing things in a certain way, and say a little about which Python libraries and tools you would use, e.g. Keras. What software and techniques could be used to develop a solution?
The project implementation could be similar to
But instead of cats and dogs, you'll have "animal face" and "no animal face" images. The animal-face images will come from the Kaggle dataset; the no-animal-face images might come from CIFAR-10, or you might choose a different dataset (CelebA contains human faces).
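A minimal Keras version of such a binary classifier might look like the sketch below. The architecture, input size, and layer widths are assumptions for illustration, not a prescribed design:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_face_classifier(input_shape=(64, 64, 3)):
    """Small CNN sketch for the binary animal-face task
    (layer sizes and input shape are illustrative assumptions)."""
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),             # pixel values -> [0, 1]
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(animal face)
    ])

model = build_face_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Training would then be `model.fit(train_ds, validation_data=val_ds, epochs=...)` on datasets built from the combined image folders.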
For architectures, you might want to look at LeNet-5, AlexNet, ResNet, or even Inception. There are newer ones, too.
- Cover the datasets that you’ve looked at (in a “dataset” subsection)
- The easiest way might be to create a small table giving the dataset name and reference, dataset size (number of images, image dimensions), labels, and a comment on how you might use the dataset in your application. A table will help avoid plagiarism and evidence higher-level learning on your part ("critical appraisal of sources").
- Machine learning: deep learning, and convolutional neural networks (CNNs) in particular. Cover LeNet-5, AlexNet, and ResNet briefly (you can compare the architectures, and maybe pick one that you'd like to implement). You'll probably use "Adam" as an optimiser (there's a paper to reference for that, too).
- Pre-trained networks (pre-trained AlexNet and YOLO networks are available for a variety of applications; you may be able to take a pre-trained network and train it further on your dataset to cut down on training time).
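The transfer-learning idea above can be sketched in Keras as follows. MobileNetV2 is used here simply because it ships with Keras (an assumed choice; the note mentions AlexNet and YOLO, which are distributed elsewhere); the frozen backbone supplies pre-trained features while only a new binary head is trained on the animal-face data.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_transfer_model(input_shape=(96, 96, 3), weights="imagenet"):
    """Transfer-learning sketch: reuse a pre-trained backbone and
    train only a new binary head (backbone choice is an assumption)."""
    base = keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=weights)
    base.trainable = False  # freeze the pre-trained feature extractor
    return keras.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # animal face / no face
    ])
```

After the new head converges, the backbone's top layers can optionally be unfrozen for fine-tuning at a low learning rate.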
- Cover practical ways of implementing CNNs (name-check some of Keras, PyTorch, and maybe scikit-learn). This is where you might reference online tutorials.