  • DocumentCode
    264120
  • Title
    Is affective crowdsourcing reliable?
  • Author
    Hupont, Isabelle; Lebreton, P.; Maki, Toni; Skodras, E.; Hirth, Matthias
  • Author_Institution
    Aragon Institute of Technology, Zaragoza, Spain
  • fYear
    2014
  • fDate
    July 30 - Aug. 1, 2014
  • Firstpage
    516
  • Lastpage
    521
  • Abstract
    Affective content annotations are typically acquired through subjective manual assessments by experts in supervised laboratory tests. While easily manageable, such campaigns are expensive and time-consuming, and their results may not generalize to larger audiences. Crowdsourcing constitutes a promising approach for quickly collecting data with a wide demographic scope at reasonable cost. Affective crowdsourcing is, however, particularly challenging in that it attempts to collect subjective perceptions from humans with different cultures, languages, and knowledge backgrounds. In this study, we analyze the validity of well-known affective user scales in a crowdsourcing context by comparing the results with those obtained in laboratory tests. Experimental results demonstrate that pictorial scales possess promising features for affective crowdsourcing.
  • Keywords
    behavioural sciences computing; human computer interaction; human factors; affective content annotations; affective crowdsourcing; crowdsourcing context; demographic scope; subjective manual assessments; supervised laboratory test; user affective scales; Crowdsourcing; Laboratories; Reliability; Silicon
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2014 IEEE Fifth International Conference on Communications and Electronics (ICCE)
  • Conference_Location
    Da Nang, Vietnam
  • Print_ISBN
    978-1-4799-5049-2
  • Type
    conf
  • DOI
    10.1109/CCE.2014.6916757
  • Filename
    6916757