A Novel Image Annotation Method Using Tag Ranking System

K. Sushma, N. Umadevi

Abstract


The number of digital images available in online media is growing rapidly, making image annotation increasingly important for image matching and retrieval applications. Existing approaches such as content-based image retrieval and tag-based image retrieval require considerable manual effort to label images and suffer from other limitations. Multi-tag classification is also a central problem: learning a reliable model for tag prediction requires a very large number of images with clean and complete annotations. This paper proposes a novel framework of tag ranking with matrix recovery, which ranks the tags of a given image and orders them in descending order of relevance. For tag prediction, a Ranking-based Multi-association Tensor Factorization model is proposed, in which the matrix is formed by aggregating the prediction models of the different tags. The proposed framework is well suited to tag ranking and overcomes the problems of multi-tag classification.
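To make the core idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): tags are scored by a relevance model and returned in descending order. Here the scores come from a toy low-rank factorization, standing in for the ranking-based tensor factorization model described above; the tag names, latent dimensions, and numbers are hypothetical.

```python
# Illustrative sketch of tag ranking by predicted relevance.
# A toy low-rank model scores each tag for an image as the dot
# product of latent factors; tags are then sorted in descending
# order of relevance, as in the tag-ranking framework above.
import numpy as np

def rank_tags(image_vec, tag_factors, tag_names):
    """Score each tag against the image's latent vector, sort descending."""
    scores = tag_factors @ image_vec          # one relevance score per tag
    order = np.argsort(-scores)               # indices of tags, best first
    return [(tag_names[i], float(scores[i])) for i in order]

# Hypothetical latent factors: 3 candidate tags, 2 latent dimensions.
tags = ["beach", "sunset", "car"]
V = np.array([[0.9, 0.1],
              [0.7, 0.6],
              [0.1, 0.8]])
u = np.array([1.0, 0.2])                      # latent vector for one image

ranked = rank_tags(u, V, tags)
print(ranked)  # tags in descending order of predicted relevance
```

In the actual method, the score matrix would be recovered by factorizing the aggregated tag-prediction models rather than taken from fixed toy factors, but the final ranking step is the same descending sort over per-tag relevance scores.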


References


R. Datta, D. Joshi, J. Li, and J. Wang, “Image retrieval: ideas, influences, and trends of the new age,” ACM Computing Surveys, vol. 40, no. 2, 2008.

J. Wu, H. Shen, Y. Li, Z. Xiao, M. Lu, and C. Wang, “Learning a hybrid similarity measure for image retrieval,” Pattern Recognition, vol. 46, no. 11, pp. 2927–2939, 2013.

L. Wu, R. Jin, and A. Jain, “Tag completion for image retrieval,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 35, no. 3, pp. 716–727, 2013.

A. Makadia, V. Pavlovic, and S. Kumar, “Baselines for image annotation,” International Journal of Computer Vision, vol. 90, no. 1, pp. 88–105, 2010.

J. Tang, R. Hong, S. Yan, and T. Chua, “Image annotation by knn-sparse graph-based tag propagation over noisily-tagged web images,” ACM Trans. on Intelligent Systems and Technology, vol. 2, no. 2, pp. 1–16, 2011.

J. Tang, S. Yan, R. Hong, G. Qi, and T. Chua, “Inferring semantic concepts from community-contributed images and noisy tags,” in ACM Int. Conf. on Multimedia, 2009, pp. 223–232.

M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid, “Tagprop: discriminative metric learning in nearest neighbor models for image auto-annotation,” in IEEE Int. Conf. on Computer Vision, 2009, pp. 309–316.

W. Liu and D. Tao, “Multiview hessian regularization for image annotation,” IEEE Trans. on Image Processing, vol. 22, no. 7, pp. 2676–2687, 2013.

S. Zhang, J. Huang, Y. Huang, Y. Yu, H. Li, and D. Metaxas, “Automatic image annotation using group sparsity,” in IEEE Int. Conf. on Computer Vision and Pattern Recognition, 2010, pp. 3312–3319.

Y. Verma and C. Jawahar, “Image annotation using metric learning in semantic neighbourhoods,” in European Conf. on Computer Vision, 2012, pp. 836–849.

Z. Feng, R. Jin, and A. Jain, “Large-scale image annotation by efficient and robust kernel metric learning,” in IEEE Int. Conf. on Computer Vision, 2013, pp. 4321–4328.

G. Carneiro, A. Chan, P. Moreno, and N. Vasconcelos, “Supervised learning of semantic classes for image annotation and retrieval,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 3, pp. 394–410, 2007.

C. Yang, M. Dong, and F. Fotouhi, “Region-based image annotation through multiple-instance learning,” in ACM Int. Conf. on Multimedia, 2005.

C. Wang, S. Yan, L. Zhang, and H. Zhang, “Multi-tag sparse coding for automatic image annotation,” in IEEE Int. Conf. on Computer Vision and Pattern Recognition, 2009, pp. 1643–1650.

R. Datta, D. Joshi, J. Li, and J. Wang, “Web and personal image annotation by mining tag correlation with relaxed visual graph embedding,” IEEE Trans. on Image Processing, vol. 21, no. 3, 2012.





Copyright © 2013, All rights reserved. | ijseat.com

International Journal of Science Engineering and Advance Technology is licensed under a Creative Commons Attribution 3.0 Unported License, based on a work at IJSEat. Permissions beyond the scope of this license may be available at http://creativecommons.org/licenses/by/3.0/deed.en_GB.