iFLYTEK 2023 Enterprise Operations Dataset and Health Assessment

Abstract:

Collection: AI Cases - ML - Pan-Finance
Competition: Enterprise Operational Health Assessment Challenge
Organizer: iFLYTEK (科大讯飞)
Homepage: https://challenge.xfyun.cn/topic/info?type=management-health
AI problem: classification
Dataset: company operating indicators such as solvency, profitability, cash-flow ratios, capital-structure ratios, turnover ratios, and growth.
Dataset value: assessing a company's operational health.
Solution: LightGBM framework; adversarial validation is used to keep only low-AUC features and improve the model's generalization.

1. Challenge Description

Background

A poorly run business can slide into bankruptcy or closure, with negative consequences for the company itself and for the wider economy. Assessing corporate health is therefore an important task for many financial institutions, which need effective predictive models to make sound lending decisions. In general, the input variables (features), such as financial ratios, and the prediction technique are the two most important factors in such an assessment.

Task

The task is to use company operating-indicator data (solvency, profitability, cash-flow ratios, capital-structure ratios, turnover ratios, growth, and related fields) to train a model on the training data and predict the health status of every company in the test set, that is, whether the enterprise is on the verge of bankruptcy.

2. Dataset Description

The dataset consists of 60,000 training records and 40,000 test records with 95 fields in total. id is the unique sample identifier and target is the prediction target, i.e. whether the company is near bankruptcy. The remaining fields, x1 through x93, hold company operating indicators such as solvency, profitability, cash-flow ratios, capital-structure ratios, turnover ratios, and growth. Note that, to keep the competition fair, fields x1-x93 are anonymized.

Field definitions:

id: unique sample identifier
target: prediction target, i.e. whether the company is near bankruptcy
x1 to x93: company operating-indicator data

Dataset license

GPL2 license

3. Sample Solution

Solution

Source code: baseline.ipynb

The code implements a LightGBM-based bankruptcy-prediction model and expands the original feature set through feature engineering. The core idea is to predict a company's health status from its anonymized operating indicators, a typical application of a gradient-boosted tree classifier.

Install the development libraries

See the article "Installing Traditional Machine Learning Development Packages".

Import the libraries

import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import lightgbm as lgb
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import auc, roc_auc_score

Source code walkthrough:

1. Data preparation

Reading the data

  • Read the training set (train.csv) and the test set (test.csv) from CSV files
  • The training set holds 60,000 records and the test set 40,000
  • There are 95 fields: id is the unique sample identifier, target is the prediction target (near bankruptcy or not), and x1-x93 are anonymized operating indicators
# widen the column display to 200 characters (default is 50)
pd.set_option('display.max_colwidth', 200)

# show all columns
pd.set_option('display.max_columns', None)

# show all rows
pd.set_option('display.max_rows', None)

train = pd.read_csv('./data/train.csv')
test = pd.read_csv('./data/test.csv')

Feature engineering

  • A cross_features() function brute-force combines selected feature columns:
    • for each pair it computes the sum, difference, product, and quotient
    • e.g. x20 + x82, x20 - x82, x20 * x82, x20 / x82
  • It is applied to the columns ['x20', 'x82', 'x35', 'x32', 'x65', 'x80', 'x45']
  • This expands the 93 original features to 177 (93 original + C(7,2) × 4 = 84 cross features); counting id and target, the frame has 179 columns
def cross_features(data, features):
    # brute-force pairwise feature crosses
    loc_f = features
    for i in range(len(loc_f)):
        for j in range(i + 1, len(loc_f)):
            data[f'{loc_f[i]} + {loc_f[j]}'] = data[loc_f[i]] + data[loc_f[j]]
            data[f'{loc_f[i]} - {loc_f[j]}'] = data[loc_f[i]] - data[loc_f[j]]
            data[f'{loc_f[i]} * {loc_f[j]}'] = data[loc_f[i]] * data[loc_f[j]]
            data[f'{loc_f[i]} / {loc_f[j]}'] = data[loc_f[i]] / data[loc_f[j]]
    return data

Apply it to the training and test sets:

train = cross_features(train, ['x20', 'x82', 'x35', 'x32', 'x65', 'x80', 'x45'])
test = cross_features(test, ['x20', 'x82', 'x35', 'x32', 'x65', 'x80', 'x45'])
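The column-count arithmetic can be verified on a toy frame (a sketch with placeholder values; the real columns are the anonymized competition fields): 7 columns give C(7,2) = 21 pairs, and 4 operations per pair give 84 new columns.

```python
import numpy as np
import pandas as pd

def cross_features(data, features):
    # same brute-force pairing as the baseline: +, -, *, / for every pair
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            a, b = features[i], features[j]
            data[f'{a} + {b}'] = data[a] + data[b]
            data[f'{a} - {b}'] = data[a] - data[b]
            data[f'{a} * {b}'] = data[a] * data[b]
            data[f'{a} / {b}'] = data[a] / data[b]
    return data

cols = ['x20', 'x82', 'x35', 'x32', 'x65', 'x80', 'x45']
toy = pd.DataFrame(np.ones((3, 7)), columns=cols)  # placeholder data
toy = cross_features(toy, cols)
print(toy.shape[1] - 7)  # 21 pairs x 4 operations = 84 new columns
```

Note that the quotient columns can produce inf or NaN when the denominator feature contains zeros; tree models such as LightGBM tolerate this, but it is worth keeping in mind.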

2. Data analysis

Data overview

  • train.head() shows the first 5 rows
  • train.shape reports the dimensions (60,000 rows × 179 columns)
  • train.describe() summarizes the numeric features
train.head()
train.shape  # (60000, 179)
train.describe()
columns = [i for i in test.columns if i != 'id']
for col in columns:
    print('==================', col, train[col].nunique())

Output: each feature field and its number of unique values

================== x1 59842
================== x2 58699
================== x3 58699
================== x4 59842
================== x5 59842
================== x6 59839
================== x7 59842
================== x8 59841
================== x9 59842
================== x10 59842
================== x11 59842
================== x12 59842
================== x13 59842
================== x14 59842
================== x15 49723
================== x16 59842
================== x17 59662
================== x18 59837
================== x19 59842
================== x20 57808
================== x21 59842
================== x22 58694
================== x23 59439
================== x24 59842
================== x25 59842
...
================== x80 + x45 59728
================== x80 - x45 59728
================== x80 * x45 57437
================== x80 / x45 57431

3. Model building

Model selection

  • LightGBM, a gradient-boosting framework built on decision trees
  • Stratified K-fold cross-validation (StratifiedKFold) to evaluate performance
  • AUC (area under the ROC curve) as the evaluation metric
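A short sketch of why the stratified splitter matters here (the labels below are synthetic, assuming an imbalanced bankruptcy target): StratifiedKFold keeps the positive rate identical across folds, so each fold's AUC is computed on a representative sample.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# toy labels: 10% positives, as a bankruptcy target is likely to be imbalanced
y = np.array([1] * 10 + [0] * 90)
X = np.zeros((100, 1))  # placeholder features; stratification only looks at y

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=2222)
for fold, (trn_idx, val_idx) in enumerate(skf.split(X, y)):
    # every validation fold keeps the overall 10% positive rate: 2 of 20
    print(fold, int(y[val_idx].sum()), len(val_idx))
```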

The function below performs adversarial feature selection: it identifies and filters out features that reveal whether a sample comes from the training or the test distribution ("adversarial" features). Such features can make a model score well on the training set yet generalize poorly to the test set.

3-1. Data preparation

  • Create a new binary task: label whether a sample comes from the training set (1) or the test set (0)
  • Concatenate the training and test sets into a single frame df for uniform processing

3-2. LightGBM parameters

  • Binary-classification objective
  • AUC as the evaluation metric
  • A fixed random seed for reproducibility
  • Logging disabled (verbose=-1)

3-3. Cross-validation setup

  • 5-fold stratified cross-validation
  • A single random seed
  • An empty list to collect the features that pass the filter

3-4. Core feature-evaluation logic

  1. Single-feature models: train one LightGBM model per feature
  2. Criterion: the AUC with which that model separates training samples from test samples
    • AUC > 0.85: the feature separates the two sets well (a likely leaky feature)
    • AUC ≤ 0.85: the feature cannot tell the sets apart (a safe feature)
  3. Early stopping: early_stopping(100) guards against overfitting
  4. Cross-validation: 5 folds for a more reliable estimate
from sklearn.metrics import roc_auc_score  # note: sklearn's auc(x, y) integrates a curve; roc_auc_score takes labels and scores

def get_adv_feats(df_train, df_test, feats):
    # label each sample's origin: training set = 1, test set = 0
    df_train['adv'] = 1
    df_test['adv'] = 0
    df = pd.concat([df_train, df_test]).reset_index(drop = True)

    params = {
        'learning_rate': 0.1,
        'boosting_type': 'gbdt',
        'objective': 'binary',
        'metric': 'auc',
        'seed': 2222,
        'n_jobs': 4,
        'verbose': -1,
    }

    fold_num = 5    # number of cross-validation folds
    seeds = [2222]  # fixed seed(s) so the splits are reproducible from run to run
    new_feats = []  # collects the features that pass the filter

    for f in feats:
        oof = np.zeros(len(df))
        for seed in seeds:
            # StratifiedKFold keeps the train/test label ratio identical in every fold
            kf = StratifiedKFold(n_splits = fold_num, shuffle = True, random_state = seed)
            for fold, (train_idx, val_idx) in enumerate(kf.split(df[[f]], df['adv'])):
                trn_set = lgb.Dataset(df.loc[train_idx, [f]], df.loc[train_idx, 'adv'])
                val_set = lgb.Dataset(df.loc[val_idx, [f]], df.loc[val_idx, 'adv'])
                # train for up to 10,000 rounds, stopping early once the validation AUC
                # has not improved for 100 consecutive rounds;
                # log_evaluation(-1) disables the per-round log output
                model = lgb.train(params, trn_set, valid_sets = [val_set], num_boost_round = 10000,
                                  callbacks = [lgb.early_stopping(100), lgb.log_evaluation(-1)])
                # accumulate out-of-fold predictions, averaged over the seeds
                oof[val_idx] += model.predict(df.loc[val_idx, [f]]) / len(seeds)

        # score each feature once, on its full out-of-fold predictions
        score = roc_auc_score(df['adv'], oof)
        if score > 0.85:
            # the feature separates train from test too well: drop it
            print('--------------------------------------', f, score)
        else:
            # AUC <= 0.85: similar distributions in train and test, keep the feature
            new_feats.append(f)

    return new_feats

Invocation:

feats = [i for i in test.columns if i != 'id']
feats = get_adv_feats(train, test, feats)

len(feats)  # number of features that survived the filter (AUC <= 0.85)

3-5. The adversarial validation idea

This approach rests on adversarial validation:

  • Good features are distributed consistently across the training and test sets, so a model cannot tell the two apart
  • Bad features differ in distribution between the sets, so a model separates them easily

A low-AUC feature (≤ 0.85) is distributed similarly in the training and test data: a "safe" feature that is normally kept. Selecting only low-AUC features:

  1. removes data-leakage risk
  2. improves the model's generalization
  3. keeps the feature distributions consistent across the train/test split
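The idea can be illustrated on synthetic data (a sketch: rank_auc is a small helper defined here, not part of the baseline, and the two features are simulated). A feature drawn from the same distribution in both sets scores near 0.5, while a feature whose distribution shifts between the sets scores well above the 0.85 cut-off.

```python
import numpy as np

def rank_auc(y, scores):
    # rank-based AUC (Mann-Whitney U), computed without extra dependencies
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
adv = np.concatenate([np.ones(1000), np.zeros(1000)])  # train = 1, test = 0

# "good" feature: identical distribution in both sets
safe = rng.normal(0.0, 1.0, 2000)
# "bad" feature: mean shifted by 2 standard deviations in the training half
leaky = np.concatenate([rng.normal(2.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000)])

print(rank_auc(adv, safe))   # close to 0.5: cannot separate the sets
print(rank_auc(adv, leaky))  # well above the 0.85 cut-off
```

Here the raw feature value stands in for the single-feature model's score; in the baseline, a LightGBM model plays that role and can also pick up non-monotonic shifts.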

Run output

Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[10] valid_0's auc: 0.576528
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[20] valid_0's auc: 0.577056
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[11] valid_0's auc: 0.577792
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[13] valid_0's auc: 0.576346
Training until validation scores don't improve for 100 rounds
Early stopping, best iteration is:
[7] valid_0's auc: 0.577707
...
(similar early-stopping reports repeat for each remaining feature and fold; output truncated)

Source code license

GPL-3.0 license
https://github.com/datawhalechina/competition-baseline/blob/master/LICENSE
