
Hello, this is Kojipsa, the cat butler who codes to buy Churu.

Last year I took part in House Prices, one of the tutorial competitions that Kaggle runs.

My rank was somewhere around 1000th.....?

It has probably slipped further by now.

That's just how Kaggle is: a competition that a huge number of people join from all over the world!

Plenty of people in Korea take part as well.


1. House Prices

House Prices is a Kaggle tutorial competition, much like Titanic.

It seems like a good tutorial for studying machine learning.

While working on House Prices, I did some preprocessing and tried several models.

My preprocessing went roughly like this: handling missing values; adding, modifying, and dropping variables; and converting categorical variables to numeric ones.

The preprocessing steps are commented in the code below, so they should be easy to follow.
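As a quick toy sketch of those three fill strategies (a made-up mini-DataFrame, not the actual competition columns):

```python
import pandas as pd

# hypothetical mini-frame standing in for the real columns
df = pd.DataFrame({
    'Neighborhood': ['A', 'A', 'B', 'B'],
    'PoolQC':       [None, 'Gd', None, 'Ex'],  # categorical NaN means "no pool"
    'GarageArea':   [None, 400, 300, None],    # numeric NaN means 0
    'LotFrontage':  [60.0, None, 80.0, 90.0],  # fill with the neighborhood median
})

df['PoolQC'] = df['PoolQC'].fillna('None')     # absence becomes its own category
df['GarageArea'] = df['GarageArea'].fillna(0)  # absence becomes zero
df['LotFrontage'] = df.groupby('Neighborhood')['LotFrontage'].transform(
    lambda x: x.fillna(x.median()))

print(df['LotFrontage'].tolist())  # [60.0, 60.0, 80.0, 90.0]
```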

 

For models, I used Random Forest, Linear Regression, Ridge, Lasso, Decision Tree, Ada Boosting, KNN, KSVM, and Gradient Boosting.

 

Each model's code is below; I hope it helps when you try the House Prices tutorial yourself.


<Code>

#library
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from os import chdir
from scipy.stats import norm, skew
from scipy import stats
from subprocess import check_output
from collections import Counter
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold, cross_val_score, train_test_split
warnings.filterwarnings(action='ignore')

#load the train and test csvs
chdir('C:\\Users\\rs\\Desktop\\house')
train=pd.read_csv('train.csv')
test=pd.read_csv('test.csv')

#keep an untouched copy of test.csv so the Id column can be pulled out for the submission at the end
o_test = pd.read_csv('test.csv')

#target variable
target = train['SalePrice']

train = train.drop('SalePrice',axis=1)
train['training_set'] = True
test['training_set'] = False

# preprocess train data
train.drop(['Id'],axis=1,inplace=True)
for col in ('PoolQC','MiscFeature','Alley','Fence','FireplaceQu','GarageType','GarageFinish','GarageQual','GarageCond','BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2','MasVnrType'):
    train[col] = train[col].fillna('None')
for col in ('GarageYrBlt','GarageArea','GarageCars','BsmtFinSF1','BsmtFinSF2','BsmtUnfSF','TotalBsmtSF','BsmtFullBath','BsmtHalfBath','MasVnrArea'):
    train[col] = train[col].fillna(0)
train["LotFrontage"] = train.groupby("Neighborhood")["LotFrontage"].transform(
    lambda x: x.fillna(x.median()))
for col in ('MSZoning','Electrical','KitchenQual','Exterior1st','Exterior2nd','SaleType'):
    train[col] =train[col].fillna(train[col].mode()[0])
train["Functional"] = train["Functional"].fillna("Typ")
train['YrSold'] = train['YrSold'].astype(str)
train['MoSold'] = train['MoSold'].astype(str)

train = train.replace({'FireplaceQu': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'BsmtQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'BsmtCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'GarageQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'GarageCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'ExterQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'ExterCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'HeatingQC': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'PoolQC': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \

                 'KitchenQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'BsmtFinType1': {'GLQ': 6, 'ALQ': 5, 'BLQ': 4, 'Rec': 3, 'LwQ': 2, 'Unf': 1, 'None': 0}, \
                 'BsmtFinType2': {'GLQ': 6, 'ALQ': 5, 'BLQ': 4, 'Rec': 3, 'LwQ': 2, 'Unf': 1, 'None': 0}, \
                 'Functional': {'Sal': 6, 'Sev': 5, 'Maj2': 4, 'Maj1': 3, 'Mod': 2, 'Min1': 1, 'Min2': 1, 'Typ': 0}, \
                 'BsmtExposure': {'Gd': 3, 'Av': 2, 'Mn': 1, 'No': 0, 'None': 0}, \
                 'Fence': {'GdPrv': 2, 'GdWo': 2, 'MnPrv': 1, 'MnWw': 1, 'None': 0}, \
                 'GarageFinish': {'Fin': 3, 'Unf': 2, 'RFn': 1, 'None': 0}, \
                 'LandSlope': {'Gtl': 2, 'Mod': 1, 'Sev': 0}, \
                 'LotShape': {'Reg': 3, 'IR1': 2, 'IR2': 1, 'IR3': 0}, \
                 'Street': {'Pave': 1, 'Grvl': 0}, \
                 'Alley': {'Pave': 2, 'Grvl': 1, 'None': 0}})

#preprocess test data
test.drop(['Id'],axis=1,inplace=True)
for col in ('PoolQC','MiscFeature','Alley','Fence','FireplaceQu','GarageType','GarageFinish','GarageQual','GarageCond','BsmtQual','BsmtCond','BsmtExposure','BsmtFinType1','BsmtFinType2','MasVnrType'):
    test[col] = test[col].fillna('None')
for col in ('GarageYrBlt','GarageArea','GarageCars','BsmtFinSF1','BsmtFinSF2','BsmtUnfSF','TotalBsmtSF','BsmtFullBath','BsmtHalfBath','MasVnrArea'):
    test[col] = test[col].fillna(0)
test["LotFrontage"] = test.groupby("Neighborhood")["LotFrontage"].transform(
    lambda x: x.fillna(x.median()))
for col in ('MSZoning','Electrical','KitchenQual','Exterior1st','Exterior2nd','SaleType'):
    test[col] = test[col].fillna(test[col].mode()[0])
test["Functional"] = test["Functional"].fillna("Typ")
test['YrSold'] = test['YrSold'].astype(str)
test['MoSold'] = test['MoSold'].astype(str)

test = test.replace({'FireplaceQu': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'BsmtQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'BsmtCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'GarageQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'GarageCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'ExterQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'ExterCond': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'HeatingQC': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'PoolQC': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \

                 'KitchenQual': {'Ex': 5, 'Gd': 4, 'TA': 3, 'Fa': 2, 'Po': 1, 'None': 0}, \
                 'BsmtFinType1': {'GLQ': 6, 'ALQ': 5, 'BLQ': 4, 'Rec': 3, 'LwQ': 2, 'Unf': 1, 'None': 0}, \
                 'BsmtFinType2': {'GLQ': 6, 'ALQ': 5, 'BLQ': 4, 'Rec': 3, 'LwQ': 2, 'Unf': 1, 'None': 0}, \
                 'Functional': {'Sal': 6, 'Sev': 5, 'Maj2': 4, 'Maj1': 3, 'Mod': 2, 'Min1': 1, 'Min2': 1, 'Typ': 0}, \
                 'BsmtExposure': {'Gd': 3, 'Av': 2, 'Mn': 1, 'No': 0, 'None': 0}, \
                 'Fence': {'GdPrv': 2, 'GdWo': 2, 'MnPrv': 1, 'MnWw': 1, 'None': 0}, \
                 'GarageFinish': {'Fin': 3, 'Unf': 2, 'RFn': 1, 'None': 0}, \
                 'LandSlope': {'Gtl': 2, 'Mod': 1, 'Sev': 0}, \
                 'LotShape': {'Reg': 3, 'IR1': 2, 'IR2': 1, 'IR3': 0}, \
                 'Street': {'Pave': 1, 'Grvl': 0}, \
                 'Alley': {'Pave': 2, 'Grvl': 1, 'None': 0}})

#concat train and test so the dummy encoding is built on both together
df_full = pd.concat([train, test])

#dummies
df_full = pd.get_dummies(df_full)

train = df_full[df_full['training_set']==True]
train = train.drop('training_set', axis=1)
test = df_full[df_full['training_set']==False]
test = test.drop('training_set', axis=1)
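Concatenating before pd.get_dummies matters: if train and test are encoded separately, a category that appears in only one of them produces mismatched columns, and the model can't score the test frame. A minimal sketch with made-up data:

```python
import pandas as pd

tr = pd.DataFrame({'Street': ['Pave', 'Pave']})
te = pd.DataFrame({'Street': ['Pave', 'Grvl']})  # 'Grvl' shows up only in test

# encoded separately, the two column sets disagree
sep_tr, sep_te = pd.get_dummies(tr), pd.get_dummies(te)
print(sep_tr.columns.tolist())  # ['Street_Pave']
print(sep_te.columns.tolist())  # ['Street_Grvl', 'Street_Pave']

# encoded together, both halves share one schema
both = pd.get_dummies(pd.concat([tr, te]))
print(both.columns.tolist())    # ['Street_Grvl', 'Street_Pave']
```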

#split
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# save the dummy-encoded result to csv
data = pd.DataFrame(df_full)
data.to_csv('dum.csv')

# reload the dummy-encoded data with the test rows removed (prepared separately from dum.csv)
df_full = pd.read_csv('dum900.csv')

# reload the test-row part of the dummy-encoded data
test2 = pd.read_csv('dum901.csv')

#train/test split using df_full
X_train, X_test, y_train, y_test = train_test_split(df_full, target, random_state=42)


#Random Forest
rf_model = RandomForestRegressor(n_estimators=500,n_jobs=-1)

#Random Forest fit
rf_model.fit(X_train, y_train)
rf_y_predict = rf_model.predict(X_test)

#cv: 5-fold cross-validated RMSE (the scorer negates MSE, so flip the sign back before sqrt)
from sklearn.metrics import make_scorer, mean_squared_error
scorer = make_scorer(mean_squared_error, greater_is_better=False)
cv_score = np.sqrt(-cross_val_score(estimator=rf_model, X=X_train, y=y_train, cv=5, scoring=scorer))
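The hand-built scorer does the same thing as scikit-learn's built-in 'neg_mean_squared_error' string: cross_val_score returns negated MSE either way, so flipping the sign and taking the square root gives per-fold RMSE. A self-contained sketch on synthetic data (not the house-price frames):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + rng.rand(100) * 0.1

# built-in string scorer: returns negated MSE per fold
neg_mse = cross_val_score(LinearRegression(), X, y, cv=5,
                          scoring='neg_mean_squared_error')
rmse = np.sqrt(-neg_mse)  # flip the sign, then sqrt -> per-fold RMSE
print(rmse)
```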


#Random Forest score
print(rf_model.score(X_train,y_train))
print(rf_model.score(X_test,y_test))
print(cv_score)
#Random Forest Visualization
plt.plot(rf_y_predict)

plt.figure(figsize=(10,5))
plt.bar(range(len(cv_score)), cv_score)
plt.plot(range(len(cv_score)+1), [cv_score.mean()]*(len(cv_score)+1))
plt.tight_layout()

# apply Random Forest to the unseen test data
rf_test_predict = rf_model.predict(test2)

# Random Forest output (np.expm1 would only be needed if the target had been trained on a log1p scale)
print(rf_test_predict)

# Random Forest Visualization
plt.plot(rf_test_predict)

#submission
my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': rf_test_predict})
my_submission.to_csv('submission-181212.csv', index=False)
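A note on the np.expm1 calls that appear after the predictions: expm1 is the inverse of log1p, so it only belongs in the pipeline if the model was trained on a log1p-transformed target, and the result has to be assigned to take effect. A sketch of the intended round trip, with hypothetical prices:

```python
import numpy as np

prices = np.array([120000.0, 250000.0, 435000.0])  # hypothetical SalePrice values

log_target = np.log1p(prices)   # what the model would be fitted on
preds_log = log_target          # stand-in for the model's predictions
preds = np.expm1(preds_log)     # back to the dollar scale; must be assigned!

print(np.allclose(preds, prices))  # True: expm1 exactly inverts log1p
```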

#Linear Regression
from sklearn.linear_model import LinearRegression
reg = LinearRegression()

#Linear Regression fit
reg.fit(X_train, y_train)
reg_y_predict = reg.predict(X_test)

#Linear Regression score
print("linear training set score :{:.4f}".format(reg.score(X_train, y_train)))
print("linear test set score :{:.4f}".format(reg.score(X_test, y_test)))

#Visualization
plt.plot(reg_y_predict)

# apply Linear Regression to the unseen test data
Linear_Regression_test_predict = reg.predict(test2)

# Linear Regression output
print(Linear_Regression_test_predict)

# Linear Regression Visualization
plt.plot(Linear_Regression_test_predict)

my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': Linear_Regression_test_predict})
my_submission.to_csv('linear.csv', index=False)

#Ridge
from sklearn.linear_model import Ridge
ridgealpha = Ridge(alpha =0.0001).fit(X_train,y_train)
Ridge_y_predict = ridgealpha.predict(X_test)

#Ridge Regression score
print("ridge training set score :{:.4f}".format(ridgealpha.score(X_train, y_train)))
print("ridge test set score :{:.4f}".format(ridgealpha.score(X_test, y_test)))

#Visualization
plt.plot(Ridge_y_predict)

# apply Ridge to the unseen test data
Ridge_test_predict = ridgealpha.predict(test2)

# Ridge output
print(Ridge_test_predict)

# Ridge Visualization
plt.plot(Ridge_test_predict)

my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': Ridge_test_predict})
my_submission.to_csv('Ridge.csv', index=False)

#Lasso
from sklearn.linear_model import Lasso
lassoalpha = Lasso(alpha=0.0001,max_iter=100000).fit(X_train, y_train)
lasso_y_predict = lassoalpha.predict(X_test)

#Lasso score
print("lasso training set score :{:.5f}".format(lassoalpha.score(X_train, y_train)))
print("lasso test set score :{:.5f}".format(lassoalpha.score(X_test, y_test)))

#visualization
plt.plot(lasso_y_predict)

# apply Lasso to the unseen test data
Lasso_test_predict = lassoalpha.predict(test2)

# Lasso output
print(Lasso_test_predict)

# Lasso Visualization
plt.plot(Lasso_test_predict)

my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': Lasso_test_predict})
my_submission.to_csv('lasso.csv', index=False)
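One practical difference between Ridge and Lasso worth keeping in mind while tuning alpha: Lasso drives uninformative coefficients exactly to zero, while Ridge only shrinks them. A toy sketch on synthetic data (not the competition features):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
X = rng.randn(200, 20)
y = X[:, 0] * 3 + X[:, 1] * 2 + rng.randn(200) * 0.1  # only 2 informative features

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

print((lasso.coef_ == 0).sum())  # Lasso zeroes out most of the 18 noise features
print((ridge.coef_ == 0).sum())  # Ridge shrinks but keeps every coefficient
```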

#Decision Tree
from sklearn.tree import DecisionTreeRegressor
reg_1 = DecisionTreeRegressor(max_depth=7)
reg_1.fit(X_train,y_train)
reg_1_y_predict = reg_1.predict(X_test)

#Decision Tree score
print("decisiontree training set score :{:.5f}".format(reg_1.score(X_train, y_train)))
print("decisiontree test set score :{:.5f}".format(reg_1.score(X_test, y_test)))

#visualization
plt.plot(reg_1_y_predict)

# apply Decision Tree to the unseen test data
Decision_Tree_test_predict = reg_1.predict(test2)

# Decision Tree output
print(Decision_Tree_test_predict)

# Decision_Tree Visualization
plt.plot(Decision_Tree_test_predict)

my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': Decision_Tree_test_predict})
my_submission.to_csv('DT.csv', index=False)

#Ada Boosting
from sklearn.ensemble import AdaBoostRegressor
reg_2 = AdaBoostRegressor()
reg_2.fit(X_train, y_train)
Ada_Boosting_y_predict = reg_2.predict(X_test)

#Ada Boosting score
print("adaboosting training set score :{:.5f}".format(reg_2.score(X_train, y_train)))
print("adaboosting test set score :{:.5f}".format(reg_2.score(X_test, y_test)))

#Visualization
plt.plot(Ada_Boosting_y_predict)


# apply Ada Boosting to the unseen test data
Ada_Boosting_test_predict = reg_2.predict(test2)

# Ada Boosting output
print(Ada_Boosting_test_predict)

# Ada Boosting Visualization
plt.plot(Ada_Boosting_test_predict)

my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': Ada_Boosting_test_predict})
my_submission.to_csv('ab.csv', index=False)

#Knn Regression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
#register a Korean-capable font (only needed if you want Hangul plot labels)
from matplotlib import font_manager, rc
import matplotlib
font_name = font_manager.FontProperties(fname="c:/Windows/Fonts/malgun.ttf").get_name()
rc('font', family=font_name)
matplotlib.rcParams['axes.unicode_minus'] = False
training_accuracy=[]
test_accuracy=[]
neighbors_settings = range(1,10)
for n in neighbors_settings:
    knnR = KNeighborsRegressor(n_neighbors=n)
    knnR.fit(X_train, y_train)
    training_accuracy.append(knnR.score(X_train,y_train))
    test_accuracy.append(knnR.score(X_test,y_test))
   
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("accuracy")
plt.xlabel("n_neighbors")
plt.legend()
print(max(test_accuracy))

###GradientBoosting#####
from sklearn.ensemble import GradientBoostingRegressor

rate1=[0.07,0.08,0.09,0.01]
depth1=range(1,10)
for rate in rate1:
    training_accuracy=[]
    test_accuracy=[]
    for depth in depth1:
        gbrt = GradientBoostingRegressor(random_state=0,max_depth=depth,learning_rate=rate)
        gbrt.fit(X_train,y_train)
        training_accuracy.append(gbrt.score(X_train,y_train))
        test_accuracy.append(gbrt.score(X_test,y_test))
        print("depth=",depth,"rate=",rate, " training=",gbrt.score(X_train,y_train),"  test=",(gbrt.score(X_test,y_test)))
   
    plt.plot(depth1, training_accuracy, label="training accuracy")
    plt.plot(depth1, test_accuracy, label="test accuracy")
    plt.ylabel("accuracy")
    plt.xlabel("depth")
    plt.legend()
    plt.title("rate = {} ".format(rate))
   
    plt.show()
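For reference, the double loop above can also be written with scikit-learn's GridSearchCV, which picks parameters by cross-validation instead of a single train/test split. A sketch on synthetic data (the real run would pass X_train and y_train instead):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X = rng.rand(120, 4)
y = X @ np.array([1.0, 2.0, 3.0, 4.0]) + rng.rand(120) * 0.1

param_grid = {'max_depth': [1, 3, 5], 'learning_rate': [0.07, 0.1]}
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```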

from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(random_state=0,max_depth=5,learning_rate=0.07)
gbrt.fit(X_train,y_train)


gbrt_predict = gbrt.predict(test2)

my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': gbrt_predict})
my_submission.to_csv('gb350.csv', index=False)


#####KSVM#####
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score

svr_rbf = SVR(kernel='rbf', C=10000000, gamma=1e-08)
svr_rbf.fit(X_train, y_train)
print("svr_rbf training set score :{:.5f}".format(svr_rbf.score(X_train, y_train)))
print("svr_rbf test set score :{:.5f}".format(svr_rbf.score(X_test, y_test)))
scores = cross_val_score(svr_rbf, train, target, cv=5)
print(scores)
svr_y_predict=svr_rbf.predict(X_test)

ksvm_predict = svr_rbf.predict(test2)

my_submission = pd.DataFrame({'Id': o_test.Id, 'SalePrice': ksvm_predict})
my_submission.to_csv('ksvm.csv', index=False)
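The extreme C and gamma values above suggest the features were fed to the SVR unscaled; an RBF-kernel SVR is very sensitive to feature scale, so standardizing first usually lets much more ordinary hyperparameters work. A sketch with made-up data and hypothetical parameters:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = rng.rand(150, 3) * 1000                 # wildly scaled features, like raw areas
y = X @ np.array([1.0, 0.5, 2.0]) / 1000.0  # synthetic target

# scaling inside a pipeline lets an ordinary C work instead of C=10000000
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=100))
model.fit(X, y)
print(model.score(X, y))
```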

