ReneWind
Renewable energy sources play an increasingly important role in the global energy mix, as the effort to reduce the environmental impact of energy production increases.
Out of all the renewable energy alternatives, wind energy is one of the most developed technologies worldwide. The U.S. Department of Energy has put together a guide to achieving operational efficiency using predictive maintenance practices.
Predictive maintenance uses sensor information and analysis methods to measure and predict degradation and future component capability. The idea behind predictive maintenance is that failure patterns are predictable and if component failure can be predicted accurately and the component is replaced before it fails, the costs of operation and maintenance will be much lower.
The sensors fitted across different machines involved in the process of energy generation collect data related to various environmental factors (temperature, humidity, wind speed, etc.) and additional features related to various parts of the wind turbine (gearbox, tower, blades, brake, etc.).
Objective
“ReneWind” is a company working on improving the machinery/processes involved in the production of wind energy using machine learning, and it has collected sensor data on generator failures of wind turbines. They have shared a ciphered version of the data, as the data collected through sensors is confidential (the type of data collected varies between companies). The data has 40 predictors, with 20,000 observations in the training set and 5,000 in the test set.
The objective is to build various classification models, tune them, and find the one that best identifies failures, so that generators can be repaired before they fail/break and the overall maintenance cost is reduced. The predictions made by a classification model translate into costs as follows:
True positives (TP) are failures correctly predicted by the model. These will result in repairing costs.
False negatives (FN) are real failures that the model fails to detect. These will result in replacement costs.
False positives (FP) are failure predictions where no failure actually occurs. These will result in inspection costs.
It is given that the cost of repairing a generator is much less than the cost of replacing it, and the cost of inspection is less than the cost of repair. Consequently, minimizing false negatives (i.e., maximizing recall on the failure class) is the priority when evaluating models.
“1” in the target variable should be considered as “failure” and “0” represents “no failure”.
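Since only the ordering of these costs is given, a small worked example helps make the trade-off concrete. The sketch below derives the total maintenance cost implied by a model's predictions from its confusion matrix; the unit costs are hypothetical placeholders that merely respect the stated ordering (inspection < repair < replacement):
# Sketch of the cost trade-off implied by a confusion matrix.
# NOTE: the unit costs below are hypothetical; only their ordering is given.
from sklearn.metrics import confusion_matrix

INSPECTION_COST = 1  # incurred per false positive (assumed value)
REPAIR_COST = 5  # incurred per true positive (assumed value)
REPLACEMENT_COST = 40  # incurred per false negative (assumed value)

def maintenance_cost(y_true, y_pred):
    # confusion_matrix returns counts as tn, fp, fn, tp for binary labels {0, 1}
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp * REPAIR_COST + fn * REPLACEMENT_COST + fp * INSPECTION_COST
Under any costs with this ordering, a model that converts would-be replacements (FN) into repairs (TP) saves far more than it spends on a few extra inspections (FP).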
Data Description
The data provided is a transformed version of the original data, which was collected using sensors.
Train.csv - To be used for training and tuning of models.
Test.csv - To be used only for testing the performance of the final best model.
Both datasets consist of 40 predictor variables and 1 target variable.
Importing libraries
!pip install nb-black
# This will help in making the Python code more structured automatically (good coding practice)
%load_ext nb_black
# Libraries to help with reading and manipulating data
import pandas as pd
import numpy as np
# Libraries to help with data visualization
import matplotlib.pyplot as plt
import seaborn as sns
# To tune model, get different metric scores, and split data
from sklearn.metrics import (
    f1_score,
    accuracy_score,
    recall_score,
    precision_score,
    confusion_matrix,
    roc_auc_score,
    plot_confusion_matrix,
)
from sklearn import metrics
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
# To be used for data scaling and one hot encoding
from sklearn.preprocessing import StandardScaler, MinMaxScaler, OneHotEncoder
# To impute missing values
from sklearn.impute import SimpleImputer
# To oversample and undersample data
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
# To do hyperparameter tuning
from sklearn.model_selection import RandomizedSearchCV
# To be used for creating pipelines and personalizing them
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
# To set the maximum number of rows and columns to be displayed in a dataframe
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
# To suppress scientific notation for a dataframe
pd.set_option("display.float_format", lambda x: "%.3f" % x)
# To help with model building
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (
    AdaBoostClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
    BaggingClassifier,
)
from xgboost import XGBClassifier
# To suppress warnings
import warnings
warnings.filterwarnings("ignore")
Loading Data
df = pd.read_csv("Train.csv")  # reading the training data
df_test = pd.read_csv("Test.csv")  # reading the test data
# Checking the number of rows and columns in the training data
"--" ## Complete the code to view dimensions of the train data
# Checking the number of rows and columns in the test data
"--" ## Complete the code to view dimensions of the test data
Data Overview
# let's create a copy of the training data
data = df.copy()
# let's create a copy of the test data
data_test = df_test.copy()
# let's view the first 5 rows of the data
data."--" ## Complete the code to view top 5 rows of the data
Output:
# let's view the last 5 rows of the data
data."--" ## Complete the code to view last 5 rows of the data
output:
# let's check the data types of the columns in the dataset
data.info()
Output:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20000 entries, 0 to 19999
Data columns (total 41 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   V1      19982 non-null  float64
 1   V2      19982 non-null  float64
 2   V3      20000 non-null  float64
 3   V4      20000 non-null  float64
 4   V5      20000 non-null  float64
 5   V6      20000 non-null  float64
 6   V7      20000 non-null  float64
 7   V8      20000 non-null  float64
 8   V9      20000 non-null  float64
 9   V10     20000 non-null  float64
 10  V11     20000 non-null  float64
...
# let's check for duplicate values in the data
data.duplicated().sum()
# let's check the percentage of missing values in the data
round(data.isnull().sum() / data.isnull().count() * 100, 2)
Output:
V1 0.090
V2 0.090
V3 0.000
V4 0.000
V5 0.000
V6 0.000
V7 0.000
V8 0.000
...
...
# let's check the percentage of missing values in the test data
round(data_test.isnull().sum() / data_test.isnull().count() * 100, 2)
Output:
V1 0.100
V2 0.120
V3 0.000
V4 0.000
V5 0.000
V6 0.000
V7 0.000
V8 0.000
V9 0.000
...
...
# let's view the statistical summary of the numerical columns in the data
"--" ## Complete the code to print the statitical summary of the train data
output:
EDA (Exploratory Data Analysis)
Univariate analysis
# function to plot a boxplot and a histogram along the same scale.
def histogram_boxplot(data, feature, figsize=(12, 7), kde=False, bins=None):
    """
    Boxplot and histogram combined

    data: dataframe
    feature: dataframe column
    figsize: size of figure (default (12, 7))
    kde: whether to show the density curve (default False)
    bins: number of bins for the histogram (default None)
    """
    f2, (ax_box2, ax_hist2) = plt.subplots(
        nrows=2,  # number of rows of the subplot grid = 2
        sharex=True,  # x-axis will be shared among all subplots
        gridspec_kw={"height_ratios": (0.25, 0.75)},
        figsize=figsize,
    )  # creating the 2 subplots
    sns.boxplot(
        data=data, x=feature, ax=ax_box2, showmeans=True, color="violet"
    )  # boxplot will be created and a star will indicate the mean value of the column
    if bins:  # histogram with the requested number of bins
        sns.histplot(data=data, x=feature, kde=kde, ax=ax_hist2, bins=bins)
    else:  # histogram with seaborn's automatic binning
        sns.histplot(data=data, x=feature, kde=kde, ax=ax_hist2)
    ax_hist2.axvline(
        data[feature].mean(), color="green", linestyle="--"
    )  # add the mean to the histogram
    ax_hist2.axvline(
        data[feature].median(), color="black", linestyle="-"
    )  # add the median to the histogram
Plotting histograms and boxplots for all the variables
for feature in data.columns:
    histogram_boxplot(data, feature, figsize=(12, 7), kde=False, bins=None)
Output: (one boxplot-and-histogram figure per column; plots not shown here)
Data Pre-Processing
# Dividing data into X and y
X = data.drop(["Target"], axis=1)
y = data["Target"]
"--" ## Complete the code to drop target variable from test data
"--" ## Complete the code to store target variable in y_test
# Splitting data into training and validation sets in a 75:25 ratio
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=1, stratify=y
)  # random_state=1 is an assumed seed; stratify=y preserves the class ratio in both splits
print(X_train.shape, X_val.shape, X_test.shape)
Output:
(15000, 40) (5000, 40) (5000, 40)
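The missing values observed in V1 and V2 still need handling before model building. Below is a minimal sketch using the SimpleImputer imported above, fit on the training split only to avoid leakage (the median strategy is an assumption; the source does not specify one):
# Sketch: impute missing values with the training-split median (assumed strategy)
imputer = SimpleImputer(strategy="median")
X_train = pd.DataFrame(imputer.fit_transform(X_train), columns=X_train.columns)
X_val = pd.DataFrame(imputer.transform(X_val), columns=X_val.columns)
X_test = pd.DataFrame(imputer.transform(X_test), columns=X_test.columns)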