Grant for Taiwanese Students Abroad Attending International Conferences

Tuesday, November 20, 2007

Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent

Presenter: Hsuan-Tien Lin (林軒田), Ph.D. student, Computer Science, California Institute of Technology

http://www.ijcnn2007.org


The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms to directly minimize the 0/1 loss for perceptrons, and prove their convergence. Our algorithms are computationally efficient, and usually achieve the lowest 0/1 loss compared with other algorithms. Such advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and can achieve the lowest test error on many complex data sets when coupled with AdaBoost.
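To make the idea concrete, here is a minimal sketch (not the paper's exact procedure; all function names are made up for illustration) of descent on the 0/1 loss along random directions. It exploits the key property the abstract relies on: along any search direction, the 0/1 loss is piecewise constant in the step size, changing only at the breakpoints where an example's prediction flips, so an exact line search only needs to test candidate steps on either side of each breakpoint.

```python
import numpy as np

def zero_one_loss(w, X, y):
    """Fraction of examples misclassified by the perceptron sign(X @ w)."""
    return float(np.mean(np.sign(X @ w) != y))

def rcd_perceptron(X, y, n_iters=200, seed=0):
    """Illustrative random-direction descent on the 0/1 loss.

    Each step picks a random direction d and does an exact line search:
    the 0/1 loss of w + a*d is piecewise constant in a, changing only at
    the breakpoints a_i = -(w . x_i) / (d . x_i), so it suffices to test
    candidate steps just on either side of each breakpoint.
    """
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    w = rng.standard_normal(dim)
    best = zero_one_loss(w, X, y)
    for _ in range(n_iters):
        d = rng.standard_normal(dim)
        dx = X @ d
        ok = np.abs(dx) > 1e-12                 # avoid division by ~0
        breaks = -(X[ok] @ w) / dx[ok]
        cands = np.concatenate([breaks + 1e-6, breaks - 1e-6])
        if cands.size == 0:
            continue
        losses = np.array([zero_one_loss(w + a * d, X, y) for a in cands])
        i = int(np.argmin(losses))
        if losses[i] < best:                    # accept only improving steps:
            best = losses[i]                    # the training 0/1 loss never
            w = w + cands[i] * d                # increases, so it converges
    return w, best
```

Because only improving steps are accepted, the training 0/1 loss is non-increasing and bounded below, which is the intuition behind the convergence claim in the abstract.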