Notebook cheat sheet for fast data preprocessing

Often, people entering the field of Data Science do not have an entirely realistic picture of what awaits them. Many think they will be writing cool neural networks, building a voice assistant out of Iron Man, or beating everyone in the financial markets.
But the work of a Data Scientist revolves around data, and one of the most important and time-consuming parts is processing the data before it is fed into a neural network or analyzed in some other way.

In this article, our team will walk through how you can process data easily and quickly, with step-by-step instructions and code. We tried to make the code flexible enough to be reused across different datasets.

Many professionals may not find anything extraordinary in this article, but beginners will be able to learn something new, and anyone who has long dreamed of putting together a dedicated notebook for fast, structured data processing can copy the code and adapt it, or download the finished notebook from Github.

You've received a dataset. What to do next?

So, the standard first step: we need to understand what we are dealing with and get the big picture. For this we use pandas, simply to identify the different data types.

import pandas as pd # import pandas
import numpy as np  # import numpy
df = pd.read_csv("AB_NYC_2019.csv") # read the dataset into the variable df

df.head(3) # look at the first 3 rows to see what the values look like

[Output of df.head(3)]

df.info() # show information about the columns

[Output of df.info()]

Let's look at the column values and ask:

  1. Does the number of rows in each column match the total number of rows?
  2. What is the meaning of the data in each column?
  3. Which column do we want to target, i.e. make predictions for?

The answers to these questions will let you analyze the dataset and sketch a rough plan for the next steps.

Also, for a deeper look at the values in each column, we can use the pandas describe() method. Its drawback is that by default it reports nothing about columns with string values; we will deal with those later.

df.describe()

[Output of df.describe()]
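
As a small aside, describe() accepts an include parameter, so the string columns can be summarized in the same table (count, unique, top, freq) if you ask for everything at once:

df.describe(include='all') # also summarizes the object/string columns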

Magic Visualization

Let's look at where we have no values at all:

import seaborn as sns
sns.heatmap(df.isnull(), yticklabels=False, cbar=False, cmap='viridis') # missing cells show up in yellow

[Heatmap of missing values in df]
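
If you prefer exact numbers to a picture, the same check can be done with plain pandas:

print(df.isnull().sum()) # count of missing values per column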

That was a brief look from above; now let's move on to more interesting things.

Let's try to find and, where possible, delete columns that have only one value in all rows (they would not affect the result in any way):

df = df[[c for c
        in list(df)
        if len(df[c].unique()) > 1]] # rewrite the dataset, keeping only the columns that contain more than one unique value

Now we protect ourselves and the success of our project from duplicate rows (rows that contain the same information in the same order as an existing row):

df.drop_duplicates(inplace=True) # Do this if you consider it appropriate.
                                 # In some projects such data should not be removed from the very start.

We split the dataset in two: one with qualitative values and the other with quantitative ones

A small clarification is needed here: if the rows with missing data in the qualitative and the quantitative columns do not strongly correlate with each other, we will have to decide what we are sacrificing: all the rows with missing data, only part of them, or certain columns. If the missing rows do correlate, then we have every right to split the dataset in two. Otherwise, you will first need to deal with the rows where the missing data does not line up across the qualitative and quantitative parts, and only then split the dataset in two.

df_numerical = df.select_dtypes(include = [np.number])  # numeric columns only
df_categorical = df.select_dtypes(exclude = [np.number]) # everything else (object, string, etc.)

We do this to make it easier to process the two different kinds of data separately; later we will see just how much this simplifies our life.
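
One rough way to check whether the gaps in the two parts line up (the correlation discussed above) is a crosstab of row-wise missingness; a quick sketch, not part of the main pipeline:

num_missing = df_numerical.isnull().any(axis=1) # rows with at least one missing quantitative value
cat_missing = df_categorical.isnull().any(axis=1) # rows with at least one missing qualitative value
print(pd.crosstab(num_missing, cat_missing)) # large diagonal counts mean the gaps mostly coincide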

Working with quantitative data

The first thing we should do is check whether there are any "spy columns" among the quantitative data. We call them that because they pretend to be quantitative data while actually behaving as qualitative.

How can we spot them? Of course it all depends on the nature of the data you are analyzing, but in general such columns have few unique values (somewhere in the region of 3-10).

print(df_numerical.nunique())
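
A small helper makes the candidates easy to list; the cut-off of 10 unique values here is just an assumption to tune for your data:

suspects = [c for c in df_numerical.columns if df_numerical[c].nunique() <= 10] # candidate spy columns
print(suspects)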

Once we have identified the spy columns, we move them from the quantitative data to the qualitative:

spy_columns = df_numerical[['column1', 'column2', 'column3']] # move the spy columns into a separate dataframe
df_numerical.drop(labels=['column1', 'column2', 'column3'], axis=1, inplace = True) # cut these columns out of the quantitative data
df_categorical.insert(1, 'column1', spy_columns['column1']) # add the first spy column to the qualitative data
df_categorical.insert(1, 'column2', spy_columns['column2']) # add the second spy column to the qualitative data
df_categorical.insert(1, 'column3', spy_columns['column3']) # add the third spy column to the qualitative data

Finally, we have completely separated the quantitative data from the qualitative and can now work with each of them properly. The first step is to understand where we have empty values (NaN, and in some cases 0 will also be treated as an empty value).

for i in df_numerical.columns:
    print(i, (df_numerical[i] == 0).sum()) # how many zeros each column contains

At this stage it is important to understand in which columns zeros might indicate missing values: is it related to how the data was collected, or to the meaning of the values themselves? These questions have to be answered on a case-by-case basis.

So, if we decide that we may be missing data wherever there are zeros, we should replace the zeros with NaN to make this lost data easier to work with later:

df_numerical[["column1", "column2"]] = df_numerical[["column1", "column2"]].replace(0, np.nan) # turn the zeros into proper missing values

Now let's see where we have missing data:

sns.heatmap(df_numerical.isnull(),yticklabels=False,cbar=False,cmap='viridis') # you can also use df_numerical.info()

[Heatmap of missing values in df_numerical]

The values missing within each column are highlighted in yellow here. And now the fun begins: how should we treat these values? Delete the rows or columns that contain them? Or fill the empty values with something else?

Here is a rough diagram that can help you figure out what you can basically do with empty values:

[Flowchart: what to do with empty values, mirrored by the numbered options below]

0. Remove unnecessary columns

df_numerical.drop(labels=["column1","column2"], axis=1, inplace=True)

1. Is the number of empty values in the column greater than 50%?

print(df_numerical.isnull().sum() / df_numerical.shape[0] * 100) # percentage of empty values in each column

df_numerical.drop(labels=["column1","column2"], axis=1, inplace=True) # drop a column if more than 50% of its values are empty
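
If you would rather not read the percentages off by hand, the same rule can be applied automatically; a sketch assuming the 50% threshold:

half_empty = [c for c in df_numerical.columns if df_numerical[c].isnull().mean() > 0.5] # columns that are more than half empty
df_numerical.drop(columns=half_empty, inplace=True)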

2. Delete rows with empty values

df_numerical.dropna(inplace=True) # drop rows with empty values, if enough data will remain for training afterwards
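
dropna is more flexible than the all-or-nothing call above; both arguments below are standard pandas, and "column1" is just a placeholder:

df_numerical.dropna(subset=["column1"], inplace=True) # only drop rows where column1 is empty
df_numerical.dropna(thresh=len(df_numerical.columns) - 2, inplace=True) # keep rows with at most 2 empty cells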

3.1. Inserting a random value

import random # import random
non_empty = df_numerical["column"].dropna().values # the existing values we can sample from
df_numerical["column"] = df_numerical["column"].apply(
    lambda x: random.choice(non_empty) if pd.isnull(x) else x) # fill each empty cell with a random existing value

3.2. Inserting a constant value

from sklearn.impute import SimpleImputer # import SimpleImputer, which will help us insert the values
imputer = SimpleImputer(strategy='constant', fill_value=<your value here>) # insert a fixed value (use a number for numerical columns)
df_numerical[["new_column1",'new_column2','new_column3']] = imputer.fit_transform(df_numerical[['column1', 'column2', 'column3']]) # apply it to our table
df_numerical.drop(labels = ["column1","column2","column3"], axis = 1, inplace = True) # remove the columns holding the old values
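
For a single column, plain pandas fillna does the same job without sklearn:

df_numerical["column1"] = df_numerical["column1"].fillna(<your value here>) # one-column constant fill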

3.3. Inserting the mean or the most frequent value

from sklearn.impute import SimpleImputer # import SimpleImputer, which will help us insert the values
imputer = SimpleImputer(strategy='mean', missing_values = np.nan) # instead of 'mean' you can also use 'median' or 'most_frequent'
df_numerical[["new_column1",'new_column2','new_column3']] = imputer.fit_transform(df_numerical[['column1', 'column2', 'column3']]) # apply it to our table
df_numerical.drop(labels = ["column1","column2","column3"], axis = 1, inplace = True) # remove the columns holding the old values

3.4. Inserting a value computed by another model

Sometimes missing values can be predicted with regression models, for example using models from the sklearn library or similar libraries. Our team will dedicate a separate article to how this can be done in the near future.
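
Until that article is out, here is a minimal sketch using sklearn's IterativeImputer, which models each column with gaps as a function of the other columns (the API is marked experimental, hence the extra import):

from sklearn.experimental import enable_iterative_imputer # noqa: F401, required to unlock the class
from sklearn.impute import IterativeImputer

iter_imputer = IterativeImputer(max_iter=10, random_state=0)
df_numerical[:] = iter_imputer.fit_transform(df_numerical) # fill NaNs with model-based estimates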

Here we pause the story about quantitative data. There are many other nuances of how best to prepare and preprocess data for different tasks, but the basic things for quantitative data have been covered in this article, and now is the time to return to the qualitative data that we separated from the quantitative a few steps back. You can change this notebook however you like, adjusting it to different tasks, so that data preprocessing goes very quickly!

Qualitative data

Basically, one-hot encoding is used for qualitative data in order to turn strings (or objects) into numbers. Before we get to that, let's use the scheme and code above to deal with the empty values here as well.

df_categorical.nunique() # number of unique values in each qualitative column

sns.heatmap(df_categorical.isnull(),yticklabels=False,cbar=False,cmap='viridis')

[Heatmap of missing values in df_categorical]

0. Remove unnecessary columns

df_categorical.drop(labels=["column1","column2"], axis=1, inplace=True)

1. Is the number of empty values in the column greater than 50%?

print(df_categorical.isnull().sum() / df_categorical.shape[0] * 100) # percentage of empty values in each column

df_categorical.drop(labels=["column1","column2"], axis=1, inplace=True) # drop a column if it has
                                                                        # more than 50% empty values

2. Delete rows with empty values

df_categorical.dropna(inplace=True) # drop rows with empty values,
                                    # if enough data will remain for training afterwards

3.1. Inserting a random value

import random
non_empty = df_categorical["column"].dropna().values # the existing values we can sample from
df_categorical["column"] = df_categorical["column"].apply(
    lambda x: random.choice(non_empty) if pd.isnull(x) else x) # fill each empty cell with a random existing value

3.2. Inserting a constant value

from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='constant', fill_value="<your value here>")
df_categorical[["new_column1",'new_column2','new_column3']] = imputer.fit_transform(df_categorical[['column1', 'column2', 'column3']])
df_categorical.drop(labels = ["column1","column2","column3"], axis = 1, inplace = True)

So, we have finally dealt with the empty values in the qualitative data. Now it's time to one-hot encode the values in your dataframe. This method is used very often so that your algorithm can train on qualitative data.

def encode_and_bind(original_dataframe, feature_to_encode):
    dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
    res = pd.concat([original_dataframe, dummies], axis=1)
    res = res.drop([feature_to_encode], axis=1)
    return res

features_to_encode = ["column1","column2","column3"]
for feature in features_to_encode:
    df_categorical = encode_and_bind(df_categorical, feature)
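
For reference, pandas can do the same loop in a single call, since get_dummies accepts a columns argument and drops the encoded originals itself:

df_categorical = pd.get_dummies(df_categorical, columns=features_to_encode) # equivalent one-liner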

So, we have finally finished processing the qualitative and quantitative data separately; it's time to combine them back:

new_df = pd.concat([df_numerical,df_categorical], axis=1)
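
One caveat: pd.concat with axis=1 aligns on the index, so if rows were dropped from only one of the two dataframes, the combined result will contain fresh NaNs. A defensive sketch that keeps only the rows surviving in both parts:

common_idx = df_numerical.index.intersection(df_categorical.index) # rows present in both halves
new_df = pd.concat([df_numerical.loc[common_idx], df_categorical.loc[common_idx]], axis=1)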

After we have joined the two datasets back into one, we can finally apply a data transformation using MinMaxScaler from the sklearn library. This will squeeze our values into the range between 0 and 1, which can help when training the model later.

from sklearn.preprocessing import MinMaxScaler
min_max_scaler = MinMaxScaler()
new_df = pd.DataFrame(min_max_scaler.fit_transform(new_df),
                      columns=new_df.columns, index=new_df.index) # fit_transform returns a bare array, so wrap it back into a dataframe
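
In a real project, instead of fitting the scaler on the full dataset as above, you would fit it on the training split only and then apply it to the test split, so that test statistics do not leak into training. A sketch assuming a simple 80/20 split:

from sklearn.model_selection import train_test_split

train, test = train_test_split(new_df, test_size=0.2, random_state=42)
scaler = MinMaxScaler().fit(train) # learn min/max from the training rows only
train_scaled, test_scaled = scaler.transform(train), scaler.transform(test)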

This data is now ready for anything: neural networks, standard ML algorithms, and so on!

In this article we have not covered working with time series data, since such data calls for slightly different processing techniques depending on your task. In the future our team will devote a separate article to this topic, and we hope it will bring something interesting, new, and useful into your life, just like this one.

Source: habr.com
