Crowdsourcing: A New Approach for Non-Domain Experts to Provide Predictive Behavioral Results

Yalamanchili Pavan

Abstract


Generating models from large data sets, and determining which subsets of data to mine, is becoming increasingly automated. However, choosing what data to collect in the first place still requires human intuition or experience, usually supplied by a domain expert. In this paper we describe a new approach to machine science which demonstrates, for the first time, that non-domain experts can collectively formulate features and provide values for those features such that they are predictive of some behavioral outcome of interest. This was accomplished by building a web platform in which human groups interact both to respond to questions likely to help predict a behavioral outcome and to pose new questions to their peers. The result is a dynamically growing online survey, and this cooperative behavior also yields models that can predict users' outcomes based on their answers to the user-generated survey questions.
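
To make the modelling step concrete, the following is a minimal sketch, not taken from the paper, of how answers to user-generated survey questions could be assembled into a feature matrix and used to fit a predictive model. The variable names, the invented answer data, and the use of scikit-learn's LogisticRegression are all assumptions for illustration only.

# Minimal sketch: predicting a behavioral outcome from answers to
# user-generated survey questions. The data below are invented for
# illustration; the actual platform collects responses from participants.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row is one respondent; each column is one user-posed question
# (answers encoded numerically, e.g. Likert-scale responses 1-5).
answers = np.array([
    [5, 1, 3, 2],
    [2, 4, 1, 5],
    [4, 2, 3, 1],
    [1, 5, 2, 4],
    [5, 2, 4, 1],
    [2, 5, 1, 4],
])

# Self-reported behavioral outcome for each respondent (1 = yes, 0 = no).
outcomes = np.array([1, 0, 1, 0, 1, 0])

# Fit a simple model and estimate how predictive the crowd-sourced
# questions are of the outcome.
model = LogisticRegression()
scores = cross_val_score(model, answers, outcomes, cv=3)
print("Cross-validated accuracy:", scores.mean())

model.fit(answers, outcomes)
print("Predicted outcome for a new respondent:",
      model.predict([[4, 1, 3, 2]])[0])

In the approach described above, the set of columns itself grows over time as participants pose new questions, so the feature matrix would be rebuilt and the model refit as the survey evolves.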


Keywords


Crowdsourcing, machine science, surveys, social media, human behaviour modelling.





