
A Stochastic Sub-gradient Method for Low Rank Matrix Completion of Collaborative Recommendation

Volume 13, Number 5, September 2017 - Paper 9 - pp. 643-656
DOI: 10.23940/ijpe.17.05.p9.643656

Weihua Yuan a,b, Hong Wang a,*, Baofang Hu a, Qian Sun b

a School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
b School of Computer Science and Technology, Shandong Jianzhu University, Jinan, 250101, China

(Submitted on April 14, 2017; Revised on June 28, 2017; Accepted on August 12, 2017)

Abstract:

In this paper, we focus on the nuclear norm regularized matrix completion model for large matrices and propose a new model, the stochastic sub-gradient method for low rank matrix completion (SS-LRMC). The traditional SVT algorithm shrinks all singular values with a single fixed threshold at every iteration and incurs an enormous computational burden on large matrices. To address these problems, we define an adaptive singular value thresholding operator and put forward a matrix completion model suited to the user-item rating matrix of collaborative filtering. During the iterations, we combine stochastic sub-gradient descent with the adaptive singular value thresholding operator to obtain low rank intermediate solutions. Empirical results confirm that the proposed model and algorithm outperform several state-of-the-art matrix completion algorithms, and that applying them to collaborative filtering recommendation effectively alleviates the sparsity of the user-item rating matrix and significantly improves recommendation accuracy.
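The model described above builds on the standard nuclear norm regularized completion objective min_X (1/2)||P_Omega(X - M)||_F^2 + lambda ||X||_*, where P_Omega keeps only the observed ratings. The sketch below is only an illustration of that general recipe, a stochastic gradient step on the observed entries followed by singular value shrinkage; the step size eta, the batch size, the fixed shrinkage amount eta * lam, and all function names are assumptions made for illustration and do not reproduce the paper's adaptive thresholding operator or the SS-LRMC algorithm itself.

```python
import numpy as np

def svt(X, tau):
    """Shrink the singular values of X by tau (soft thresholding, as in classical SVT)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def sgd_svt_complete(M, mask, lam=1.0, eta=0.05, batch=512, iters=300, seed=0):
    """Alternate a stochastic (sub-)gradient step on the observed-entry squared error
    with singular value shrinkage. The shrinkage amount eta * lam is a simple fixed
    rule standing in for the paper's adaptive threshold, which is not reproduced here."""
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(mask)
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        idx = rng.choice(rows.size, size=min(batch, rows.size), replace=False)
        r, c = rows[idx], cols[idx]
        G = np.zeros_like(X)
        G[r, c] = X[r, c] - M[r, c]       # stochastic gradient of the data-fit term
        X = svt(X - eta * G, eta * lam)   # gradient step followed by shrinkage
    return X

# Toy usage: recover a small user-item rating matrix from its observed entries (zeros are missing).
M = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4],
              [0, 0, 4, 0]], dtype=float)
mask = M > 0
print(np.round(sgd_svt_complete(M, mask), 2))
```

On realistic collaborative filtering data the mask would come from the observed user-item ratings, and the threshold schedule, rather than the fixed eta * lam used here, is where an adaptive rule such as the paper's would plug in.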

 

References: 21

    1. H. Avron, S. Kale, S. Kasiviswanathan, et al., “Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm Regularization,” in Proceedings of the 29th International Conference on Machine Learning, pp. 1231-1238, 2012
    2. N. Boumal and P. Absil, “RTRMC: A Riemannian Trust Region Method for Matrix Completion,” in Proceedings of the 25th Annual Conference on Neural Information Processing Systems, Granada, Spain, pp. 406-414, 2011
    3. M. Brand, “Fast Low-rank Modifications of the Thin Singular Value Decomposition,” Linear Algebra and its Applications, 415(1), pp. 20-30, 2006
    4. J. F. Cai, E. J. Candès, and Z. Shen, “A Singular Value Thresholding Algorithm for Matrix Completion,” SIAM Journal on Optimization, 20(4), pp. 1956-1982, 2010
    5. E. J. Candès and T. Tao, “The Power of Convex Relaxation: Near Optimal Matrix Completion,” IEEE Transactions on Information Theory, 56(5), pp. 2053-2080, 2010
    6. M. Chen, A. Ganesh, Z. Lin, Y. Ma, et al., “Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-rank Matrix,” Journal of the Marine Biological Association of the UK, 56(3), pp. 707-722, 2015
    7. E. Hazan, “Sparse Approximate Solutions to Semidefinite Programs,” in Latin American Conference on Theoretical Informatics, Springer-Verlag, pp. 306-316, 2008
    8. M. Jaggi and M. Sulovský, “A Simple Algorithm for Nuclear Norm Regularized Problems,” in International Conference on Machine Learning, pp. 471-478, June 2010
    9. M. Jamali and M. Ester, “TrustWalker: A Random Walk Model for Combining Trust-based and Item-based Recommendation,” in ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 397-406, 2009
    10. F. K. Liu and H. J. Lee, “Use of Social Network Information to Enhance Collaborative Filtering Performance,” Expert Systems with Applications, 37(7), pp. 4772-4778, 2010
    11. Y. Koren, “Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model,” in ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, Nevada, USA, pp. 426-434, August 2008
    12. L. Lü, M. Medo, C. H. Yeung, Y. C. Zhang, Z. K. Zhang, and T. Zhou, “Recommender Systems,” Physics Reports, 519(1), pp. 1-49, 2012
    13. S. Ma, D. Goldfarb, and L. Chen, “Fixed Point and Bregman Iterative Methods for Matrix Rank Minimization,” Mathematical Programming, 128(1), pp. 321-353, 2011
    14. A. Mnih and R. Salakhutdinov, “Probabilistic Matrix Factorization,” in Advances in Neural Information Processing Systems, pp. 1257-1264, 2007
    15. Y. G. Peng, J. L. Suo, Q. H. Dai, and W. L. Xu, “From Compressed Sensing to Low-rank Matrix Recovery: Theory and Applications,” Acta Automatica Sinica, 39(7), pp. 981-994, 2013
    16. N. Srebro and A. Tewari, “Stochastic Optimization for Machine Learning,” ICML Tutorial, 2010
    17. G. A. Watson, “Characterization of the Subdifferential of Some Matrix Norms,” Linear Algebra and its Applications, 170, pp. 33-35, 1992
    18. Z. W. Wen, W. T. Yin, and Y. Zhang, “Solving a Low-rank Factorization Model for Matrix Completion by a Nonlinear Successive Over-relaxation Algorithm,” Rice University, Technical Report, pp. 1-24, 2010
    19. J. Yang and X. Yuan, “Linearized Augmented Lagrangian and Alternating Direction Methods for Nuclear Norm Minimization,” Mathematics of Computation, pp. 301-329, 2013
    20. S. Zhang, W. Wang, and J. Ford, “Using Singular Value Decomposition Approximation for Collaborative Filtering,” in E-Commerce Technology, pp. 257-264, 2005
    21. Y. J. Zhao, B. Y. Zheng, and S. N. Chen, “Matrix Completion and its Application in Signal Processing,” Journal of Signal Processing, pp. 423-436, 2015

     

       
