

SLIDE 1

Crowdsourcing semantic data management: challenges and opportunities

Elena Simperl, Karlsruhe Institute of Technology, Germany

Talk at WIMS 2012: International Conference on Web Intelligence, Mining and Semantics; Craiova, Romania, June 2012

SLIDE 2

Semantic technologies are all about automation

  • Many tasks in semantic data management fundamentally rely on human input
    – Modeling a domain
    – Integrating data sources originating from different contexts
    – Producing semantic markup for various types of digital artifacts
    – ...

SLIDE 3

Great challenges

  • Understand what drives users to participate in semantic data management tasks
  • Design semantic systems reflecting this understanding to reach critical mass and sustained engagement

SLIDE 4

Great opportunities

SLIDE 5

Incentives and motivators

  • What motivates people to engage with an application?
  • Which rewards are effective, and when?
  • Motivation is the driving force that makes humans achieve their goals
  • Incentives are ‘rewards’ assigned by an external ‘judge’ to a performer for undertaking a specific task
    – Common belief (among economists): incentives can be translated into a sum of money for all practical purposes
  • Incentives can be related to extrinsic and intrinsic motivations

SLIDE 6

Incentives and motivators (2)

  • Successful volunteer crowdsourcing is difficult to predict or replicate
    – Highly context-specific
    – Not applicable to arbitrary tasks
  • Reward models are often easier to study and control (if performance can be reliably measured)
    – Different models: pay-per-time, pay-per-unit, winner-takes-all, …
    – Not always easy to abstract from social aspects (free-riding, social pressure)
    – May undermine intrinsic motivation

SLIDE 7

TURN WORK INTO PLAY

SLIDE 8

GWAPs and gamification

  • GWAPs: human computation disguised as casual games (a scoring sketch follows below)
  • Gamification/game mechanics: integrating game elements into applications
    – Accelerated feedback cycles
      • Annual performance appraisals vs immediate feedback to maintain engagement
    – Clear goals and rules of play
      • Players feel empowered to achieve goals vs the fuzzy, complex systems of rules in the real world
    – Compelling narrative
      • Gamification builds a narrative that engages players to participate and achieve the goals of the activity
    – But in the end it’s about what tasks users want to get better at

SLIDE 9

Examples


SLIDE 10

Example: ontology building


SLIDE 11

Example: relationship finding

SLIDE 12

Example: ontology alignment


SLIDE 13

Example: video annotation


SLIDE 14

Challenges

  • Not all tasks are amenable to gamification
    – The work must be decomposable into simpler (nested) tasks
    – Performance must be measurable according to an obvious rewarding scheme
    – Skills must be arrangeable in a smooth learning curve
    – Player retention vs repetitive tasks
  • Not all domains are equally appealing
    – The application domain needs to attract a large user base
    – The knowledge corpus has to be large enough to avoid repetitions
    – The quality of automatically computed input may hamper the game experience
  • Attracting and retaining players
    – You need a critical mass of players to validate the results
    – Advertisement, building upon an existing user base
    – Continuous development

SLIDE 15

OUTSOURCING TO THE CROWD

SLIDE 16

Microtask crowdsourcing

  • Work is decomposed into small Human Intelligence Tasks (HITs), executed independently and in parallel in return for a monetary reward
  • Successfully applied to transcription, classification, content generation, data collection, image tagging, website feedback, usability tests, …
  • Increasingly used by academia for evaluation purposes
  • Extensions for quality assurance, complex workflows, resource management, vertical domains, …
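To make the HIT mechanics concrete, below is a minimal sketch of posting one yes/no HIT programmatically. It uses the boto3 client for Amazon's Mechanical Turk requester API, a present-day client that postdates this talk; the question content, reward and sandbox endpoint are illustrative assumptions:

    # Sketch: posting one Human Intelligence Task (HIT) to Mechanical
    # Turk via boto3 (not mentioned in the talk). The question, reward
    # and redundancy level are illustrative.
    import boto3

    QUESTION_XML = """<?xml version="1.0" encoding="UTF-8"?>
    <QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
      <Question>
        <QuestionIdentifier>is_commercial</QuestionIdentifier>
        <QuestionContent><Text>Is "Stuttgart Airport" a commercial airport?</Text></QuestionContent>
        <AnswerSpecification><SelectionAnswer><Selections>
          <Selection><SelectionIdentifier>yes</SelectionIdentifier><Text>Yes</Text></Selection>
          <Selection><SelectionIdentifier>no</SelectionIdentifier><Text>No</Text></Selection>
        </Selections></SelectionAnswer></AnswerSpecification>
      </Question>
    </QuestionForm>"""

    # Sandbox endpoint, so experiments cost nothing
    mturk = boto3.client(
        "mturk",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com")

    hit = mturk.create_hit(
        Title="Classify an airport",
        Description="Answer one yes/no question about an airport.",
        Keywords="classification, airports",
        Reward="0.05",                    # monetary reward per assignment
        MaxAssignments=5,                 # redundancy enables majority voting
        LifetimeInSeconds=24 * 3600,      # how long the HIT stays on the market
        AssignmentDurationInSeconds=120,  # time budget per worker
        Question=QUESTION_XML)
    print(hit["HIT"]["HITId"])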

SLIDE 17

Examples

Mason & Watts: Financial incentives and the performance of the crowds, HCOMP 2009.

SLIDE 18

Crowdsourcing ontology alignment

  • Experiments using Amazon's Mechanical Turk and CrowdFlower, and established benchmarks
  • Enhancing the results of automatic techniques
  • Fast, accurate and cost-effective

SLIDE 19

Challenges

  • Not all tasks can be addressed by microtask platforms
    – Routine work requiring common knowledge, decomposable into simpler, independent sub-tasks, with easily measurable performance
  • Ongoing research in task design, quality assurance (spam; a standard measure is sketched below), estimated time of completion, …
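The quality-assurance measure referenced above combines, in its most common form, gold questions (control HITs with known answers that unmask spammers) with redundant assignments aggregated by majority vote. A minimal sketch over hypothetical answer data:

    # Two standard quality-assurance measures for microtasks: gold
    # questions with known answers filter out spammers, and redundant
    # assignments are aggregated by majority vote. Data is hypothetical.
    from collections import Counter

    GOLD = {"hit_gold_1": "yes"}  # control question with a known answer

    def trustworthy(worker_answers: dict[str, str]) -> bool:
        """Keep a worker only if every gold question is answered correctly."""
        return all(worker_answers.get(h) == a for h, a in GOLD.items())

    def majority_vote(answers_by_worker: dict[str, dict[str, str]],
                      hit_id: str):
        """Aggregate trusted workers' answers for one HIT."""
        votes = Counter(ans[hit_id] for ans in answers_by_worker.values()
                        if trustworthy(ans) and hit_id in ans)
        return votes.most_common(1)[0][0] if votes else None

    workers = {
        "w1": {"hit_gold_1": "yes", "hit_42": "yes"},
        "w2": {"hit_gold_1": "yes", "hit_42": "no"},
        "w3": {"hit_gold_1": "no",  "hit_42": "no"},   # fails gold, ignored
        "w4": {"hit_gold_1": "yes", "hit_42": "yes"},
    }
    print(majority_vote(workers, "hit_42"))  # -> yes (2 vs 1 among trusted workers)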

SLIDE 20

Crowdsourcing query processing

Give me the German names of all commercial airports in Baden-Württemberg, ordered by their most informative description.

„Retrieve the labels in German of commercial airports located in Baden-Württemberg, ordered by the better human-readable description of the airport given in the comment.“

  • This query cannot be optimally answered automatically
    – Incorrect/missing classification of entities (e.g. classification as airports instead of commercial airports)
    – Missing information in data sets (e.g. German labels)
    – It is not possible to optimally perform subjective operations (e.g. comparisons of pictures or NL comments)

SLIDE 21

An integrated solution

  • An integral part of Linked Data management platforms
    – At design time, the application developer specifies which data portions workers can process and via which types of HITs
    – At run time:
      • the system materializes the data
      • workers process it
      • data and application are updated to reflect the crowdsourcing results
  • Formal, declarative description of the data and tasks using SPARQL patterns as a basis for the automatic design of HITs (a sketch of such a description follows below)
  • Reducing the number of tasks through automatic reasoning
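The slides do not show what the declarative description looks like; purely as an illustration of the idea, a design-time specification might pair a SPARQL pattern with a HIT type, a prompt template and payment parameters. Every field name below is invented:

    # Hypothetical design-time declaration in the spirit of this slide:
    # the developer pins a SPARQL pattern to a HIT type and a price, and
    # the platform derives concrete HITs at run time. Names are invented.
    from dataclasses import dataclass

    @dataclass
    class CrowdTaskSpec:
        data_pattern: str   # which data portion workers may process
        hit_type: str       # e.g. "classification" or "ordering"
        prompt: str         # template rendered into each HIT
        reward_usd: float   # payment per assignment
        assignments: int    # redundancy for majority voting

    AIRPORT_CLASSIFICATION = CrowdTaskSpec(
        data_pattern="{ ?station a metar:Station ; rdfs:label ?label }",
        hit_type="classification",
        prompt="Is the station '%label' a commercial airport?",
        reward_usd=0.05,
        assignments=5)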

SLIDE 22

Example using SPARQL

„Retrieve the labels in German of commercial airports located in Baden-Württemberg, ordered by the better human-readable description of the airport given in the comment.“

SPARQL query (the slide's callouts 1-4 mark where the crowd steps in):

    SELECT ?label
    WHERE {
      ?x a metar:CommercialHubAirport ;                                # (1) Classification
         rdfs:label ?label ;
         rdfs:comment ?comment .
      ?x geonames:parentFeature ?z .
      ?z owl:sameAs <http://dbpedia.org/resource/Baden-Wuerttemberg> . # (2) Identity resolution
      FILTER (LANG(?label) = "de")                                     # (3) Missing information
    }
    ORDER BY CROWD(?comment, "Better description of %x")               # (4) Ordering
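The slides imply, but do not spell out in code, how such a query executes: the machine-processable part is evaluated against a SPARQL endpoint, and the ORDER BY CROWD clause is realized as pairwise-comparison HITs whose majority answers drive the sort. A hypothetical sketch in Python; the endpoint URL, the metar: namespace and the ask_crowd_to_compare() helper are all assumptions, not part of the talk:

    # Hypothetical decomposition of the CROWD-extended query: run the
    # standard SPARQL part on an endpoint, delegate the ordering to
    # pairwise-comparison HITs. Endpoint, prefixes and the crowd helper
    # are assumptions.
    from functools import cmp_to_key
    from SPARQLWrapper import SPARQLWrapper, JSON

    MACHINE_PART = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    PREFIX geonames: <http://www.geonames.org/ontology#>
    PREFIX metar: <http://example.org/metar#>   # assumed namespace
    SELECT ?label ?comment WHERE {
      ?x a metar:CommercialHubAirport ;
         rdfs:label ?label ; rdfs:comment ?comment .
      ?x geonames:parentFeature ?z .
      ?z owl:sameAs <http://dbpedia.org/resource/Baden-Wuerttemberg> .
      FILTER (LANG(?label) = "de")
    }"""

    def ask_crowd_to_compare(comment_a: str, comment_b: str) -> int:
        """Stub for one pairwise-comparison HIT: -1 if workers judge
        comment_a the better description, +1 otherwise."""
        raise NotImplementedError  # would post a HIT and await the majority vote

    endpoint = SPARQLWrapper("http://localhost:8890/sparql")  # assumed endpoint
    endpoint.setQuery(MACHINE_PART)
    endpoint.setReturnFormat(JSON)
    rows = endpoint.query().convert()["results"]["bindings"]

    # ORDER BY CROWD(?comment, ...) becomes a sort whose comparator is human
    ordered = sorted(rows, key=cmp_to_key(
        lambda a, b: ask_crowd_to_compare(a["comment"]["value"],
                                          b["comment"]["value"])))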

SLIDE 23

HITs design: Classification

  • It is not always possible to automatically infer a classification from the properties.
  • Example: retrieve the names (labels) of METAR stations that correspond to commercial airports.

    SELECT ?label
    WHERE { ?station a metar:CommercialHubAirport ; rdfs:label ?label . }

Input:  { ?station a metar:Station ; rdfs:label ?label ; wgs84:lat ?lat ; wgs84:long ?long }
Output: { ?station a ?type . ?type rdfs:subClassOf metar:Station }
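One plausible way (not shown on the slide) to derive HITs from this input/output pair: generate a yes/no question per station and candidate subclass, and assert the output triple only where the majority answers "yes". The subclass list, identifiers and data below are invented for illustration:

    # Generating classification HITs from the input pattern: each
    # station known only as a metar:Station yields one yes/no question
    # per candidate subclass. All names and data are invented.

    CANDIDATE_TYPES = ["metar:CommercialHubAirport",
                       "metar:MilitaryAirport"]  # assumed subclasses

    def classification_hits(stations):
        """stations: bindings matching the Input pattern above. Yields
        (question, candidate_triple) pairs; a 'yes' majority lets the
        system materialize the Output pattern {?station a ?type}."""
        for s in stations:
            for t in CANDIDATE_TYPES:
                question = (f"Is the METAR station '{s['label']}' "
                            f"(lat {s['lat']}, long {s['long']}) a {t}?")
                yield question, (s["station"], "rdf:type", t)

    stations = [{"station": "metar:EDDS", "label": "Stuttgart",
                 "lat": 48.69, "long": 9.22}]
    for q, triple in classification_hits(stations):
        print(q, "->", triple)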

SLIDE 24

HITs design: Ordering

  • Orderings defined via less straightforward built-ins; for instance, the ordering of pictorial representations of entities.
  • SPARQL extension: ORDER BY CROWD
  • Example: retrieve all airports and their pictures, with the pictures ordered according to the most representative image of the given airport.

    SELECT ?airport ?picture
    WHERE { ?airport a metar:Airport ; foaf:depiction ?picture . }
    ORDER BY CROWD(?picture, "Most representative image for %airport")

Input:  { ?airport foaf:depiction ?x, ?y }
Output: { { (?x ?y) a rdf:List } UNION { (?y ?x) a rdf:List } }
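Each answered HIT yields one rdf:List pair, i.e. one pairwise preference; these still have to be folded into a single ordering. A simple win-count heuristic (my illustration, not an algorithm from the talk) suffices for small result sets:

    # Folding crowd-answered pairwise HITs into a total order by
    # counting wins. A simple heuristic; ties break arbitrarily.
    from collections import Counter

    def rank_by_wins(pairwise_winners):
        """pairwise_winners: (winner, loser) pairs, one per majority-voted
        HIT, i.e. the (?x ?y) rdf:List outputs above. Returns items from
        most- to least-preferred."""
        wins = Counter(w for w, _ in pairwise_winners)
        items = {i for pair in pairwise_winners for i in pair}
        return sorted(items, key=lambda i: wins[i], reverse=True)

    judgments = [("img_front.jpg", "img_blurry.jpg"),
                 ("img_front.jpg", "img_aerial.jpg"),
                 ("img_aerial.jpg", "img_blurry.jpg")]
    print(rank_by_wins(judgments))
    # -> ['img_front.jpg', 'img_aerial.jpg', 'img_blurry.jpg']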

SLIDE 25

Challenges

  • Decomposition of queries
    – Query optimisation obscures which data is used, and should factor in the costs of human tasks
  • Query execution and caching
    – Naively, we can materialise HIT results into datasets
    – How to deal with partial coverage and dynamic datasets?
  • Appropriate level of granularity of HIT design for specific SPARQL constructs and the typical functionality of Linked Data management components
  • Optimal user interfaces for graph-like content
    – (Contextual) rendering of LOD entities and tasks
  • Pricing and worker assignment
    – Can we connect the end users of an application, and their wish for specific data to be consumed, with the payment of workers and the prioritization of HITs?
    – Dealing with spam/gaming

SLIDE 26

Thank you

e: elena.simperl@kit.edu, t: @esimperl

Publications available at www.insemtives.org

Team: Maribel Acosta, Barry Norton, Katharina Siorpaes, Stefan Thaler, Stephan Wölger and many others

SLIDE 27


Realizing the Semantic Web by encouraging millions of end‐users to create semantic content.