|Title:||Non-sparse linear representations for visual tracking with online reservoir metric learning|
|Author(s):||Xi Li, Chunhua Shen, Qinfeng Shi, Anthony Dick, Anton van den Hengel|
|Citation:||Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition, held in Providence, Rhode Island, 16-21 June, 2012: pp. 1760-1767|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||IEEE Conference on Computer Vision and Pattern Recognition (25th : 2012 : Providence, Rhode Island)|
|Abstract:||Most sparse linear representation-based trackers need to solve a computationally expensive ℓ₁-regularized optimization problem. To avoid this cost, we propose a visual tracker based on non-sparse linear representations, which admit an efficient closed-form solution without sacrificing accuracy. Moreover, to capture the correlation between different feature dimensions, we learn a Mahalanobis distance metric in an online fashion and incorporate the learned metric into the optimization problem for obtaining the linear representation. We show that online metric learning using proximity comparisons significantly improves the robustness of the tracker, especially on sequences exhibiting drastic appearance changes. Furthermore, to prevent unbounded growth in the number of training samples for metric learning, we design a time-weighted reservoir sampling method that maintains and updates limited-size foreground and background sample buffers, balancing sample diversity and adaptability. Experimental results on challenging videos demonstrate the effectiveness and robustness of the proposed tracker.|
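The abstract describes two computational components: a non-sparse (ℓ₂-regularized) linear representation with a closed-form solution under a learned Mahalanobis metric, and a time-weighted reservoir sampling scheme that keeps fixed-size foreground and background sample buffers. The Python sketch below illustrates both ideas under stated assumptions; it is not the authors' implementation. The ridge regularizer lam, the decay parameter, and the key-based weighted reservoir update (Efraimidis-Spirakis style) are illustrative choices, and the online triplet-based metric update itself is omitted.

```python
# Minimal sketch (not the paper's reference code) of a metric-weighted,
# closed-form non-sparse representation and a time-weighted reservoir buffer.
import numpy as np

def nonsparse_representation(T, y, M, lam=0.1):
    """Closed-form ℓ2-regularized representation of observation y (d,)
    over the template matrix T (d x n) under a positive semidefinite
    Mahalanobis metric M (d x d):
        c* = argmin_c (y - T c)^T M (y - T c) + lam * ||c||_2^2
           = (T^T M T + lam I)^{-1} T^T M y
    """
    n = T.shape[1]
    A = T.T @ M @ T + lam * np.eye(n)
    b = T.T @ M @ y
    return np.linalg.solve(A, b)

class TimeWeightedReservoir:
    """Fixed-capacity sample buffer in which newer samples receive larger
    weights, so the buffer adapts to recent appearance while retaining
    some older, diverse samples (weighted reservoir sampling via random
    keys; the decay schedule here is an illustrative assumption)."""

    def __init__(self, capacity, decay=1.05, rng=None):
        self.capacity = capacity
        self.decay = decay              # > 1: weight grows with time step
        self.rng = rng or np.random.default_rng()
        self.t = 0
        self.buffer = []                # list of (key, sample)

    def add(self, sample):
        self.t += 1
        weight = self.decay ** self.t   # time-weighted importance
        key = self.rng.random() ** (1.0 / weight)
        if len(self.buffer) < self.capacity:
            self.buffer.append((key, sample))
        else:
            # Replace the smallest-key entry if the new key is larger.
            i_min = min(range(self.capacity), key=lambda i: self.buffer[i][0])
            if key > self.buffer[i_min][0]:
                self.buffer[i_min] = (key, sample)

    def samples(self):
        return [s for _, s in self.buffer]

# Example usage with illustrative dimensions: 10 templates of dimension 64.
d, n = 64, 10
T = np.random.rand(d, n)
y = np.random.rand(d)
M = np.eye(d)                           # placeholder for a learned metric
c = nonsparse_representation(T, y, M)
```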
|Rights:||© 2012 IEEE|
|Appears in Collections:||Aurora harvest 5|
Computer Science publications
Files in This Item:
|hdl_70244.pdf||Accepted version||712.41 kB||Adobe PDF|