
Big Data Spark Assignment Help | Big Data Analytics Using Spark Sample Paper



Introduction

In this project you will gain a basic understanding of big data by implementing a project with PySpark: you will run queries over the dataset using pyspark.sql and learn how to apply machine learning models to it.


This dataset was originally created by the University of New Brunswick for analyzing DDoS data. You can find the full dataset and its description here. The dataset is based on logs of the university's servers, which recorded various DoS attacks throughout the publicly available period, yielding a total of 80 attributes and 6.40 GB of data. We will use about 2.6 GB of the data so that it can be processed on PCs restricted to 4 GB of RAM. Download it from here. When writing machine learning or statistical analysis code for this data, note that the Label column is arguably the most important part of the data, as it determines whether the packets sent are malicious or not.


a) The features are described in the “IDS2018_Features.xlsx” file on the Moodle page.

b) The labels are as follows:

• “Benign”: normal traffic

• any other value in the “Label” column: traffic recorded during a DoS attack


In total, we use more than 8.2 million records with a size of 2.6 GB. As a big data specialist, you should first read and understand the features, then apply modelling techniques. To see a few records of this dataset, you can use [1] Hadoop HDFS and Hive, [2] Spark SQL, or [3] RDDs to print a sample for your understanding; a minimal sketch of option [2] follows.
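For illustration, here is one possible way to load the CSV with Spark and preview a few records. The file name, the driver-memory setting, and the use of inferSchema are assumptions about your local setup, not part of the brief.

# Minimal sketch: load the 2.6 GB CSV and preview a few records.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("CN7031-IDS2018")
         .config("spark.driver.memory", "4g")   # PCs are restricted to 4 GB RAM
         .getOrCreate())

# header picks up the column names; inferSchema types the ~80 attributes
df = spark.read.csv("ids2018_subset.csv", header=True, inferSchema=True)  # assumed file name

df.printSchema()             # inspect the features
df.show(5, truncate=False)   # print a few records

# The Label column separates benign traffic from attack traffic
df.groupBy("Label").count().show()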



Big Data Query & Analysis using Spark SQL

This task uses Spark SQL to convert large volumes of raw data into useful information. Each member of a group should implement two complex SQL queries (refer to the marking scheme). Apply appropriate visualization tools to present your findings numerically and graphically, and briefly interpret them.


See https://spark.apache.org/docs/3.0.0/sql-ref.html for more information.
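As an example of one such query, the sketch below registers the DataFrame df from the loading step above as a temporary view and aggregates flows per label and destination port. The column names “Dst Port” and “Flow Duration” are assumptions; verify them against IDS2018_Features.xlsx.

import matplotlib.pyplot as plt

# Register the DataFrame as a temporary view so it can be queried with SQL
df.createOrReplaceTempView("ids2018")

# Example complex query: flow count and average flow duration per label and
# destination port, ranked by volume (backticks because the assumed column
# names contain spaces)
result = spark.sql("""
    SELECT Label,
           `Dst Port` AS dst_port,
           COUNT(*) AS flows,
           AVG(`Flow Duration`) AS avg_flow_duration
    FROM ids2018
    GROUP BY Label, `Dst Port`
    ORDER BY flows DESC
    LIMIT 10
""")
result.show()

# The aggregated result is tiny, so it is safe to bring it to pandas for plotting
pdf = result.toPandas()
pdf.plot(kind="bar", x="dst_port", y="flows", legend=False,
         title="Top destination ports by flow count")
plt.xlabel("Destination port")
plt.ylabel("Flows")
plt.show()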


What does each student need to put in the HTML report?

  1. At least two Spark SQL queries.

  2. A short explanation of the queries.

  3. The working solution, i.e., a plot or table.


Advanced Analytics using PySpark

In this section, you will conduct advanced analytics using PySpark.


Analyze and Interpret Big Data using PySpark

Every member of a group should analyze the data using three analytical methods (e.g., advanced descriptive statistics, correlation, hypothesis testing, density estimation). Present your work numerically and graphically, applying tooltip text, legends, titles, X-Y labels, etc. accordingly. A sketch of two such methods follows.
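The snippet below sketches descriptive statistics and a Pearson correlation matrix with pyspark.ml.stat. It reuses the DataFrame df from above; the three feature names are assumptions taken from typical IDS2018 exports, so check them against the feature file.

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import Correlation

features = ["Flow Duration", "Tot Fwd Pkts", "Tot Bwd Pkts"]  # assumed column names

# 1) Advanced descriptive statistics: count, mean, stddev, min, quartiles, max
df.select(*features).summary().show()

# 2) Pearson correlation matrix over the same numeric features
clean = df.na.drop(subset=features)  # VectorAssembler cannot handle nulls
vec = VectorAssembler(inputCols=features, outputCol="vec").transform(clean)
corr = Correlation.corr(vec, "vec", "pearson").head()[0]
print(corr.toArray())  # 3x3 matrix, ready to render as an annotated heatmap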


Note: a working solution, free of system or logical errors, is required for a good/full mark.



Design and Build a Machine Learning (ML) technique

Every member of a group should go through https://spark.apache.org/docs/3.0.0/ml-guide.html and apply one ML technique. You can apply one of the following approaches: classification, regression, clustering, dimensionality reduction, feature extraction, frequent pattern mining, or optimization. Explain and evaluate your model and present its results numerically and/or graphically; a classification sketch follows.
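For instance, a classification approach with pyspark.ml might look like the sketch below, which predicts the Label column with logistic regression and reports the test AUC. The feature columns are assumptions, and it presumes the Label column is binary (benign vs. attack) as described above.

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

features = ["Flow Duration", "Tot Fwd Pkts", "Tot Bwd Pkts"]  # assumed column names
pipeline = Pipeline(stages=[
    StringIndexer(inputCol="Label", outputCol="label"),  # string labels -> 0/1
    VectorAssembler(inputCols=features, outputCol="features", handleInvalid="skip"),
    LogisticRegression(maxIter=10),
])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
predictions = model.transform(test)

evaluator = BinaryClassificationEvaluator(metricName="areaUnderROC")
print("Test AUC =", evaluator.evaluate(predictions))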


Documentation

Your final report must follow the “The format of final submission” section below. Your work must demonstrate an appropriate understanding of how to build a user-friendly, efficient, and comprehensive analytics report for a big data project that helps readers navigate to the relevant content.


THE FORMAT OF FINAL SUBMISSION

1- You can use either Google Colab (https://colab.research.google.com/) or an Ubuntu VMware virtual machine for this coursework (CRWK).

2- You have to convert the source code (*.ipynb) to HTML (see the nbconvert sketch after this list). Watch the video on Moodle about “how to submit the report in HTML format”.

3- Upload ONLY one single HTML file per group into Turnitin in Moodle. One member of each group must submit the work, NOT all members. The name of the file must be in the format “Your-Group-ID_CN7031”, such as Group200_CN7031.html if you belong to group 200.

4- The submission link will be available from week 10, and you are free to amend your submitted file several times before the submission deadline. Your last submission will be saved in the Moodle database for marking.
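If you prefer to script the HTML conversion mentioned in step 2 rather than use the notebook menu, one possible route is nbconvert; the sketch below uses its Python API, and the file name is only a placeholder. The equivalent command line is jupyter nbconvert --to html Group200_CN7031.ipynb.

import nbformat
from nbconvert import HTMLExporter

# Read the notebook and render it to a single HTML file
nb = nbformat.read("Group200_CN7031.ipynb", as_version=4)  # placeholder file name
body, _resources = HTMLExporter().from_notebook_node(nb)

with open("Group200_CN7031.html", "w", encoding="utf-8") as out:
    out.write(body)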



Contact us to get the complete solution to this problem, or if you need any other Big Data PySpark assignment help, send your request to realcode4you@gmail.com and get instant help at an affordable price.