Security Assessment of Pattern Classifiers under Attack

D Srujan Chandra Reddy, S Ajay Kumar

Abstract


Pattern classification systems are commonly used in adversarial applications such as biometric authentication, network intrusion detection, and spam filtering, in which data can be deliberately manipulated by humans to undermine their operation. Since this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation can severely degrade their performance and consequently limit their practical utility. Several works have addressed the problem of designing classifiers that are robust against these threats, although mostly focusing on specific applications and types of attacks. In this paper we address one of the main open issues: evaluating, at the design phase, the security of pattern classifiers, that is, the performance degradation they may incur under potential attacks during operation. We propose a framework for the empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature.

Network security consists of the provisions and policies adopted by a network administrator to prevent and monitor unauthorized access. Email is now the main communication medium: almost everyone has a mail account, and official correspondence is routinely carried over mail. This traffic includes spam, and spam emails often contain URLs to websites or webpages that lead to viruses or hacking. Existing systems for identifying spam cannot recognize all spam mails. Spamming is the use of electronic messaging to send or receive unsolicited bulk messages, especially indiscriminate advertising. In the method proposed here, we aim to detect all spam by scanning email before it is read by the client, combining several steps: blocking a domain independently of the client's email ID; keyword-based blocking that checks subject lines; distinguishing between public and private domains before blocking; and password security based on biometrics, in which identity verification through pattern identification (face scanning) and recognition offers a unique way to identify each person. We use a brute-force string matching algorithm. Our results indicate that intruder images in the face-scanning recognition system can be recognized efficiently by exploiting the interdependence of pixels arising from the facial codes of the images.
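As a concrete illustration of the kind of empirical security evaluation described above, the following sketch trains a spam classifier and measures how much its accuracy degrades when spam samples are manipulated at test time. The synthetic data, the choice of scikit-learn's Bernoulli naive Bayes, and the "good word" evasion attack model are all assumptions made for this sketch; the paper itself does not fix them.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic bag-of-words data: 200 messages, 50 binary word features.
X = rng.integers(0, 2, size=(200, 50))
y = rng.integers(0, 2, size=200)              # 1 = spam, 0 = legitimate

clf = BernoulliNB().fit(X, y)
# Performance with no attack (evaluated on the training set for brevity).
baseline = accuracy_score(y, clf.predict(X))

# Hypothetical "good word" evasion: the attacker adds to every spam
# message the five words most strongly indicative of legitimate mail.
good_words = np.argsort(clf.feature_log_prob_[0])[-5:]
spam_rows = np.where(y == 1)[0]
X_attacked = X.copy()
X_attacked[np.ix_(spam_rows, good_words)] = 1

attacked = accuracy_score(y, clf.predict(X_attacked))
print(f"accuracy clean: {baseline:.2f}  under attack: {attacked:.2f}")
```

The gap between the two accuracy figures is exactly the quantity the proposed framework asks a designer to estimate before deployment: performance degradation under a simulated attack.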
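The domain- and keyword-based blocking steps can be pictured with a toy filter that inspects the sender's domain and the subject line before a message reaches the client's inbox. The blocklists and addresses below are invented for illustration and are not taken from the paper.

```python
# Hypothetical blocklists; a real deployment would maintain these
# per the public/private domain distinction described in the abstract.
BLOCKED_DOMAINS = {"spamhost.example", "bulkmail.example"}
BLOCKED_KEYWORDS = {"lottery", "winner", "free money"}

def is_spam(sender: str, subject: str) -> bool:
    """Flag a message if its sender domain or subject matches a blocklist."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in BLOCKED_DOMAINS:
        return True
    subject_lower = subject.lower()
    return any(word in subject_lower for word in BLOCKED_KEYWORDS)

print(is_spam("offers@spamhost.example", "You are a WINNER"))  # True
print(is_spam("alice@corp.example", "Meeting notes"))          # False
```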
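The brute-force string matching algorithm named in the abstract is the textbook one: slide the pattern across the text and compare character by character at each offset. A minimal version:

```python
def brute_force_match(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):        # every candidate starting offset
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                    # the whole pattern matched here
            return i
    return -1

print(brute_force_match("free money inside", "money"))  # 5
```

Its worst-case cost is O(nm), which is acceptable for short keyword lists; filters scanning large volumes of mail usually prefer algorithms such as Knuth-Morris-Pratt or Boyer-Moore.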



