I want to use the "information gain" metric ("mutual information" in scikit-learn) to identify the 10 best features of a data frame, then display them in a table, sorted in ascending order by their information-gain score.
In this example, the `features`
data frame contains all the training data that may indicate whether a restaurant has closed.
# Initialization of data and labels
x = features.copy()   # "x" contains all training data
y = x["closed"]       # "y" contains the labels of the records in "x"
# Drop the class column (closed) from the features
x = x.drop('closed', axis=1)
# this is x.columns; sorry for the mix of French and English
features_columns = ['moyenne_etoiles', 'ville', 'zone', 'nb_restaurants_zone',
'zone_categories_intersection', 'ville_categories_intersection',
'nb_restaurant_meme_annee', 'ecart_type_etoiles', 'tendance_etoiles',
'nb_avis', 'nb_avis_favorables', 'nb_avis_defavorables',
'ratio_avis_favorables', 'ratio_avis_defavorables',
'nb_avis_favorables_mention', 'nb_avis_defavorables_mention',
'nb_avis_favorables_elites', 'nb_avis_defavorables_elites',
'nb_conseils', 'nb_conseils_compliment', 'nb_conseils_elites',
'nb_checkin', 'moyenne_checkin', 'annual_std', 'chaine',
'nb_heures_ouverture_semaine', 'ouvert_samedi', 'ouvert_dimanche',
'ouvert_lundi', 'ouvert_vendredi', 'emporter', 'livraison',
'bon_pour_groupes', 'bon_pour_enfants', 'reservation', 'prix',
'terrasse']
# normalization
std_scale = preprocessing.StandardScaler().fit(features[features_columns])
normalized_data = std_scale.transform(features[features_columns])
labels = np.array(features['closed'])
# split the data
train_features, test_features, train_labels, test_labels = train_test_split(normalized_data, labels, test_size=0.2, random_state=42)
labels_true = ?
labels_pred = ?
# I don't really know how to use this function to achieve what I want
from sklearn.feature_selection import mutual_info_classif
from sklearn.datasets import make_classification
# Get the mutual information coefficients and convert them to a data frame
coeff_df = pd.DataFrame(features,
                        columns=['Coefficient'], index=x.columns)
coeff_df.head()
What is the correct syntax to achieve this with the mutual information score?
`adjusted_mutual_info_score` compares ground-truth labels with the labels predicted by a clustering, so it is not what you want here. Both label arrays must have the same shape, `(n_samples,)`.
What you need is scikit-learn's `mutual_info_classif`. Pass the feature array and the corresponding labels to `mutual_info_classif` to get the estimated mutual information between each feature and the target.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from sklearn.datasets import make_classification
# Generate a sample data frame
X, y = make_classification(n_samples=1000, n_features=4,
n_informative=2, n_redundant=2,
random_state=0, shuffle=False)
feature_columns = ['A', 'B', 'C', 'D']
features = pd.DataFrame(X, columns=feature_columns)
# Get the mutual information coefficients and convert them to a data frame
coeff_df = pd.DataFrame(mutual_info_classif(X, y).reshape(-1, 1),
                        columns=['Coefficient'], index=feature_columns)
Output
features.head(3)
Out[43]:
A B C D
0 -1.668532 -1.299013 0.799353 -1.559985
1 -2.972883 -1.088783 1.953804 -1.891656
2 -0.596141 -1.370070 -0.105818 -1.213570
# Displaying only the top two features. Adjust the number as required.
coeff_df.sort_values(by='Coefficient', ascending=False)[:2]
Out[44]:
Coefficient
B 0.523911
D 0.366884
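To get exactly what the question asked for (the 10 best features, displayed in ascending order of score), the same call can be applied to your own `x` and `y`. Here is a minimal self-contained sketch; the `make_classification` data and the `f0`…`f14` column names are only stand-ins for your `features` frame:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Stand-in for the asker's "x"/"y": 15 features, binary target.
X, y = make_classification(n_samples=500, n_features=15, n_informative=5,
                           random_state=0)
x = pd.DataFrame(X, columns=[f'f{i}' for i in range(15)])

# Estimated mutual information between each feature and the target.
scores = pd.Series(mutual_info_classif(x, y, random_state=0),
                   index=x.columns, name='Coefficient')

# Keep the 10 highest-scoring features, then display them in ascending order.
top10 = scores.nlargest(10).sort_values(ascending=True)
print(top10.to_frame())
```

With your data you would simply replace the synthetic `x` and `y` with the ones built from `features` above.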