How to Choose a Machine Learning Algorithm

How do you know what machine learning algorithm to choose for your classification problem? Of course, if you really care about accuracy, your best bet is to test out a couple different ones (making sure to try different parameters within each algorithm as well), and select the best one by cross-validation. But if you’re simply looking for a “good enough” algorithm for your problem, or a place to start, here are some general guidelines I’ve found to work well over the years.

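To make that bake-off concrete, here's a minimal sketch of the test-and-cross-validate approach; scikit-learn, the synthetic data, and the particular candidate list are my own assumptions for illustration, not part of any recipe:

```python
# A minimal sketch of the bake-off approach: score several candidate
# classifiers by cross-validation and keep the best one.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Placeholder data; swap in your own problem here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "naive_bayes": GaussianNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```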

How large is your training set?


If your training set is small, high bias/low variance classifiers (e.g., Naive Bayes) have an advantage over low bias/high variance classifiers (e.g., kNN), since the latter will overfit. But low bias/high variance classifiers start to win out as your training set grows (they have lower asymptotic error), since high bias classifiers aren’t powerful enough to provide accurate models.

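If you want to watch that crossover happen, here's a rough sketch (scikit-learn and the synthetic data are my assumptions) that trains a high bias classifier and a low bias/high variance one on growing slices of the same training set:

```python
# Train a high bias classifier (Naive Bayes) and a low bias/high variance
# one (kNN) on growing slices of the same data and compare test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for n in (50, 200, 1000, 2500):
    nb = GaussianNB().fit(X_train[:n], y_train[:n])
    knn = KNeighborsClassifier().fit(X_train[:n], y_train[:n])
    print(f"n={n:4d}  NB={nb.score(X_test, y_test):.3f}  "
          f"kNN={knn.score(X_test, y_test):.3f}")
```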

You can also think of this as a generative model vs. discriminative model distinction (Naive Bayes is a generative model; logistic regression is its discriminative counterpart).


Advantages of some particular algorithms


Advantages of Naive Bayes: Super simple, you’re just doing a bunch of counts. If the NB conditional independence assumption actually holds, a Naive Bayes classifier will converge quicker than discriminative models like logistic regression, so you need less training data. And even if the NB assumption doesn’t hold, an NB classifier still often does a great job in practice. A good bet if you want something fast and easy that performs pretty well. Its main disadvantage is that it can’t learn interactions between features (e.g., it can’t learn that although you love movies with Brad Pitt and Tom Cruise, you hate movies where they’re together).

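To show what “just doing a bunch of counts” means, here’s a toy multinomial Naive Bayes written from scratch; the spam/ham data and the add-one smoothing are my own illustrative choices:

```python
# A toy multinomial Naive Bayes built from nothing but counts.
import math
from collections import Counter, defaultdict

# Made-up training data: (label, document) pairs.
train = [
    ("spam", "cheap pills cheap"),
    ("spam", "cheap watches"),
    ("ham", "meeting at noon"),
    ("ham", "lunch meeting tomorrow"),
]

class_counts = Counter(label for label, _ in train)  # count classes
word_counts = defaultdict(Counter)                   # count words per class
for label, text in train:
    word_counts[label].update(text.split())
vocab = {w for counts in word_counts.values() for w in counts}

def log_posterior(label, text):
    # log P(label) plus sum of log P(word | label), with add-one smoothing
    total = sum(word_counts[label].values())
    lp = math.log(class_counts[label] / len(train))
    for w in text.split():
        lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return lp

print(max(class_counts, key=lambda c: log_posterior(c, "cheap meeting")))
```

Everything the classifier needs lives in class_counts and word_counts, which is exactly why training is so cheap.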

Advantages of Logistic Regression: Lots of ways to regularize your model, and you don’t have to worry as much about your features being correlated, like you do in Naive Bayes. You also have a nice probabilistic interpretation, unlike decision trees or SVMs, and you can easily update your model to take in new data (using an online gradient descent method), again unlike decision trees or SVMs. Use it if you want a probabilistic framework (e.g., to easily adjust classification thresholds, to say when you’re unsure, or to get confidence intervals) or if you expect to receive more training data in the future that you want to be able to quickly incorporate into your model.

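Here’s a sketch of both of those perks, assuming scikit-learn’s SGDClassifier as one way to get logistic regression with online updates:

```python
# Logistic regression trained by online (stochastic) gradient descent,
# so new data can be folded in without retraining from scratch, plus
# probabilities you can threshold however you like.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

clf = SGDClassifier(loss="log_loss", random_state=0)  # log loss = logistic regression
clf.partial_fit(X[:1000], y[:1000], classes=np.array([0, 1]))

# More training data arrives later: just update the existing model.
clf.partial_fit(X[1000:], y[1000:])

# Probabilistic output lets you move the classification threshold.
proba = clf.predict_proba(X[:5])[:, 1]
print((proba > 0.8).astype(int))  # stricter than the default 0.5 cutoff
```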

Advantages of Decision Trees: Easy to interpret and explain (for some people, though I’m not sure I fall into this camp). They easily handle feature interactions and they’re non-parametric, so you don’t have to worry about outliers or whether the data is linearly separable (e.g., decision trees easily take care of cases where you have class A at the low end of some feature x, class B in the mid-range of feature x, and A again at the high end; see the sketch below). One disadvantage is that they don’t support online learning, so you have to rebuild your tree when new examples come in. Another disadvantage is that they easily overfit, but that’s where ensemble methods like random forests (or boosted trees) come in. Plus, random forests are often the winner for lots of problems in classification (usually slightly ahead of SVMs, I believe), they’re fast and scalable, and you don’t have to worry about tuning a bunch of parameters like you do with SVMs, so they seem to be quite popular these days.
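
To see the “class A, then B, then A again” point in action, here’s a quick sketch (scikit-learn and the one-feature synthetic data are my assumptions):

```python
# One feature where class B lives only in the mid-range: trees split the
# range with no trouble, while a single linear threshold can't.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 3, size=(2000, 1))
y = ((x[:, 0] > 1) & (x[:, 0] < 2)).astype(int)  # B in the middle, A outside

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(x, y)
linear = LogisticRegression().fit(x, y)

print("random forest:", forest.score(x, y))        # near 1.0
print("logistic regression:", linear.score(x, y))  # stuck near "all A"
```

The forest can carve the feature range into as many pieces as it needs, while the linear model can do no better than predicting the majority class.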