With online sales gaining popularity, tech companies are exploring ways to improve sales by analysing customer behaviour and gaining insights into product trends. Furthermore, e-commerce websites make it easier for customers to find the products they need without much searching. Needless to say, big data analyst is among the most sought-after job profiles of this decade. Therefore, as part of this assignment, we will challenge you, as a big data analyst, to extract data and gather insights from a real-life dataset of an e-commerce company.
For this assignment, you will be working with a public clickstream dataset of a cosmetics store. Using this dataset, your job is to extract the kind of valuable insights that data engineers generally come up with in an e-retail company.
You will find the data in the link given below.
You can find the description of the attributes in the dataset given below.
The implementation phase can be divided into the following parts:
1. Copying the data set into HDFS:
   - Launch an EMR cluster that utilizes the Hive services, and
   - Move the data from the S3 bucket into HDFS.
2. Creating the database and launching Hive queries on your EMR cluster:
   - Create the structure of your database,
   - Use optimized techniques to run your queries as efficiently as possible,
   - Show the performance improvement after applying optimization to any single query, and
   - Run Hive queries to answer the questions given below.
3. Drop your database, and
4. Terminate your cluster.
You are required to provide answers to the questions given below.
Find the total revenue generated due to purchases made in October.
Write a query to yield the total sum of purchases per month in a single output.
Write a query to find the change in revenue generated due to purchases from October to November.
Find distinct categories of products. Categories with null category code can be ignored.
Find the total number of products available under each category.
Which brand had the maximum sales in October and November combined?
Which brands increased their sales from October to November?
Your company wants to reward the top 10 users of its website with a Golden Customer plan.
Write a query to generate a list of top 10 users who spend the most.
To write your queries, please make the necessary optimizations, such as selecting an appropriate table format and using partitioned/bucketed tables. You will be awarded marks for enhancing the performance of your queries.
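As one possible optimization, the raw data can be rewritten into an ORC table partitioned by month and bucketed on a high-cardinality column. The sketch below is illustrative only: the table name `sales_optimized`, the partition column `event_month`, and the bucket count are assumptions, not part of the assignment; the source columns match the `sales` schema used later in this walkthrough.

```sql
-- Hypothetical optimized table: ORC format, partitioned by month, bucketed by user_id.
-- Names (sales_optimized, event_month) and the bucket count are illustrative choices.
set hive.exec.dynamic.partition.mode=nonstrict;

create table if not exists sales_optimized(
  event_time timestamp,
  event_type string,
  product_id string,
  category_id string,
  category_code string,
  brand string,
  price float,
  user_id bigint,
  user_session string)
PARTITIONED BY (event_month int)
CLUSTERED BY (user_id) INTO 4 BUCKETS
stored as ORC;

-- Populate from the raw external table, deriving the partition column from event_time.
insert overwrite table sales_optimized partition(event_month)
select event_time, event_type, product_id, category_id, category_code,
       brand, price, user_id, user_session,
       month(event_time) as event_month
from sales;
```

With this layout, a query that filters on `event_month` reads only the matching partition instead of scanning both monthly files, which is one way to demonstrate the before/after performance improvement the assignment asks for.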
Each question should have one query only.
Use a 2-node EMR cluster with both the master and core nodes as m4.large.
Make sure you terminate the cluster when you are done working with it.
Since EMR can only be terminated and cannot be stopped, always have a copy of your queries in a text editor so that you can copy-paste them every time you launch a new cluster.
Do not leave PuTTY idle for too long; perform some activity, such as pressing the space bar at regular intervals. If the terminal does become inactive, you don't have to start a new cluster: reconnect to the master node by opening the PuTTY terminal again, entering the host address, and loading the .ppk key file.
For your information, if you are using an emr-6.x release, certain queries might take longer; we suggest using the emr-5.29.0 release for this case study.
Important Note: For this project, use only the m4 EMR instance types (i.e., m4.large, m4.xlarge, etc.). In AWS Academy, using instance types other than m4 might lead to the deactivation of your account.
There are different options for storing data in an EMR cluster. You can briefly explore them in this link. In your previous module on Hive querying, you copied the data to the local file system, i.e., to the master node's file system, and performed the queries there. Since the dataset in this case study is large, it is good practice to load the data into HDFS rather than into the local file system.
As part of your submission, you are required to submit a PDF document which includes the executed commands, necessary explanations and screenshots of the successfully executed hive queries.
Reference link .
Creating EMR Cluster
We log in to the Nuvepro dashboard, navigate to the Console, and then click Create Cluster on the EMR home page. We choose the emr-5.29.0 release and the services necessary for this case study.
As suggested, we are using a 2-node EMR cluster in this case study, with m4.large nodes serving as both the master and core nodes.
We name the cluster "Hive Assignment".
Next, under Security, we choose the "testing_key" EC2 key pair and then click Create cluster.
Our cluster, Hive Assignment, is created and launched successfully and is now in the "Waiting" state.
Hadoop & Hive Querying
Launch PuTTY and enter "hadoop@" followed by the Master public DNS address from the EMR cluster summary page as the Host Name. After that, load the .ppk key pair file under SSH -> Auth.
Creating a directory – “hiveassignment” :
hadoop fs -mkdir /hiveassignment
Checking the directory:
hadoop fs -ls /
We see that the directory "hiveassignment" has been created.
Due to the size of the data, we will load it from S3 directly into HDFS rather than into the local file system.
hadoop distcp 's3://upgrad-hiveassignment/hiveassignment/2019-Oct.csv' /hiveassignment/2019-Oct.csv
hadoop distcp 's3://upgrad-hiveassignment/hiveassignment/2019-Nov.csv' /hiveassignment/2019-Nov.csv
Checking the loaded files:
hadoop fs -ls /hiveassignment
We can see that both datasets were loaded successfully.
Creating Database “upgrad_assignment”:
create database if not exists upgrad_assignment;
use upgrad_assignment;
Creating an External Table, Sales:
create external table if not exists sales(
  event_time timestamp,
  event_type string,
  product_id string,
  category_id string,
  category_code string,
  brand string,
  price float,
  user_id bigint,
  user_session string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ("separatorChar"=",", "quoteChar"="\"", "escapeChar"="\\")
stored as textfile
Location '/hiveassignment'
TBLPROPERTIES ("skip.header.line.count"="1");
Loading the data into the table:
hive> load data inpath '/hiveassignment/2019-Oct.csv' into table sales;
hive> load data inpath '/hiveassignment/2019-Nov.csv' into table sales;
Q1. Find the total revenue generated due to purchases made in October.
hive> set hive.cli.print.header=true;
hive> select sum(price) from sales where month(event_time)=10 and event_type='purchase';
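The same month(event_time) extraction generalizes to the per-month breakdown asked for in Q2. A sketch, assuming the sales table defined above (column names as in that DDL):

```sql
-- Sketch: total sum of purchases per month in a single output.
select month(event_time) as purchase_month,
       sum(price) as total_revenue
from sales
where event_type = 'purchase'
group by month(event_time);
```

Since the dataset covers only October and November 2019, this returns one row per month; grouping on the derived month value keeps the whole answer in a single query, as the assignment requires.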