  1. Aspects and Objects in Sentiment Analysis. Jared Kramer and Clara Gordon, April 29, 2014

  2. The Problem
● Most online reviews don't just offer a single opinion on a product: "… I liked the food, but the service was terrible …"
● Users are interested in finer-grained information about product features
● Other sentiment tasks, like automatic summarization, rely on this fine-grained information
● Aspect grouping is a subjective task
○ The grouping task benefits from seed user input

  3. Aspect Extraction (Mukherjee & Liu, 2012)
● Semi-supervised method for extracting aspects (features of the product being reviewed)
○ User provides seed aspect categories
● Two subtasks:
○ Extracting aspect terms from reviews
○ Clustering synonymous aspect terms
● Parallels with:
○ Topic modeling
○ Joint sentiment and aspect models
○ The DF-LDA model (Andrzejewski, 2009): must-link and cannot-link constraints
● Novel contribution: two semi-supervised ASMs that both extract aspects and perform grouping, while jointly modeling aspect and sentiment

  4. Previous Approaches
● Latent Dirichlet Allocation (LDA)
○ Topic model that assigns a Dirichlet prior to:
■ the distribution of topics in a document
■ the distribution of words in a topic
○ Determines topics using "higher-order co-occurrence"
■ co-occurrence of the same terms in different contexts (a minimal example follows below)
[Figure: LDA plate diagram, labeling the topic of the current word, the per-document topic distribution, the document, and the document collection. Image credit: http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation]
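To make LDA concrete, here is a minimal sketch using gensim; the toy corpus, topic count, and hyperparameters are invented for illustration and are not from the slides or the papers:

    # Minimal LDA sketch with gensim; the toy corpus and settings are illustrative.
    from gensim import corpora, models

    docs = [
        ["food", "service", "waiter", "menu"],
        ["battery", "screen", "charger", "screen"],
        ["food", "menu", "dessert", "service"],
        ["battery", "charger", "laptop", "screen"],
    ]

    dictionary = corpora.Dictionary(docs)           # word <-> id mapping
    corpus = [dictionary.doc2bow(d) for d in docs]  # bag-of-words vectors

    # Dirichlet priors over document-topic (alpha) and topic-word distributions
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                          alpha="auto", passes=20, random_state=0)

    for topic_id, topic in lda.print_topics(num_words=4):
        print(topic_id, topic)

Even on this toy corpus, the "higher-order co-occurrence" effect shows up: food/service terms and electronics terms separate into two topics because they co-occur within, but not across, documents.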

  5. Motivation and Intuition
● Unsupervised methods for extracting and grouping aspects are, well, unsupervised. By adding seeds, you can tap into human intuition and guide the construction of the statistical model.

  6. The Two Flavors
Flavor 1:
● Extract aspects without grouping them
● Grouping can be done in a later step
Flavor 2:
● Extract and group in a single step, using a sentiment switch
● Usually unsupervised
● Their approach falls into this category, more or less

  7. Seeded Aspect and Sentiment (SAS) Model: Notation
Components:
● v = 1…V: non-seed terms in the vocabulary
● Q_l (l = 1…C): seed sets
● Sent_{d,s}: sentence s of document d
● w_{d,s,j}: the jth term of Sent_{d,s}
● r_{d,s,j}: switch variable for w_{d,s,j}
Counts:
● V non-seed terms
● C seed sets
● T aspect models
Distributions:
● Ψ^A_t (t = 1…T): aspect distributions
● Ψ^O_t (t = 1…T): sentiment distributions
● Ω_{t,l}: distribution of the seeds in set Q_l
● ψ_{d,s}: aspect and sentiment terms in Sent_{d,s}

  8. Algorithm Overview
Generative process (a toy sketch follows below):
● For each aspect t, draw a Dirichlet distribution over:
○ sentiment terms → Ψ^O_t
○ each non-seed term and seed set → Ψ^A_t
■ each term in a seed set → Ω_{t,l}
● For each document d:
○ draw various distributions over the sentiment and aspect terms
● For each word w_{d,s,j}:
○ draw a Bernoulli distribution for the switch variable r_{d,s,j}
Discussion:
● The authors assume that a review sentence usually talks about one aspect.
○ True?
○ Is a sentence with two aspects only able to yield one?
ME-SAS variant:
● Intuition: "aspect and sentiment terms play different syntactic roles in a sentence"
● Uses Max-Ent priors to model the aspect-sentiment switching (instead of the switch variable r_{d,s,j})
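A toy numpy rendering of this generative story may help. It is simplified on purpose: fixed switch probability, no seed sets, one flat vocabulary, and an aspect drawn uniformly per sentence; all sizes and priors are invented, and this is not the authors' implementation:

    # Toy sketch of the SAS generative story; sizes, priors, and the flat
    # vocabulary are illustrative assumptions, not the paper's exact model.
    import numpy as np

    rng = np.random.default_rng(0)
    T, V = 3, 50                      # aspects, vocabulary size
    n_docs, n_sents, n_words = 2, 4, 6

    psi_A = rng.dirichlet(np.full(V, 0.1), size=T)  # per-aspect aspect-term distributions
    psi_O = rng.dirichlet(np.full(V, 0.1), size=T)  # per-aspect sentiment-term distributions

    for d in range(n_docs):
        for s in range(n_sents):
            t = rng.integers(T)           # one aspect per sentence (the authors' assumption)
            for j in range(n_words):
                r = rng.binomial(1, 0.5)  # switch: 0 = aspect term, 1 = sentiment term
                dist = psi_A[t] if r == 0 else psi_O[t]
                w = rng.choice(V, p=dist)
                print(f"doc {d} sent {s} word {j}: aspect={t} switch={r} term={w}")

Inference then runs this story in reverse: given the observed words, estimate the distributions and switch variables, with the seed sets constraining which terms can land in which aspect.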

  9. Results
● Qualitative and quantitative results (presented as tables and figures in the original slides)

  10. Critiques
Pros:
● Sentiment analysis is highly domain specific
○ Just a small amount of user-provided, domain-specific seed input goes a long way to improve performance
Cons:
● More explanation of the intuitions behind the distributions used in the model would be helpful

  11. Brainstorming Session
● If we had this model available to build an application, what would it look like?

  12. Who are the users?
● From the paper:
○ "asking users to provide some seeds is easy as they are normally experts in their trades and have a good knowledge what are important in their domains"
● Is this true?
● Who are the users the authors have in mind?

  13. This is about joint sentiment and aspect discovery, right?
● We don't know how the sentiment side performs, because the authors don't report an evaluation of it
● They actually count sentiment words that end up in aspect categories as errors in their evaluation
● The model described in this paper uses seed words to discover aspects:
○ Does this defeat the purpose?
○ Potential for bootstrapping?

  14. Do we believe the results?
● Despite these criticisms, for the most part we do believe these results.

  15. Matching Reviews to Objects using a LM (Dalvi et al., 2009)
● Problem: determine the entity (object) described by an online review, using text only
● "IR in reverse": the review is the query, and the objects are the "documents" in the collection
● Advantage: expands the range of sources when aggregating user opinions: blogs, message boards, etc.
[Figure: a restaurant review being matched against candidate restaurants such as Casablanca, Marrakech, and Tagine]

  16. Context
● Information Retrieval: a query is matched to documents
● Entity Matching: objects are matched to objects
● Our Task: a document (the review) is matched to an object, where the object is structured

  17. Problems with Traditional IR
● IR methods are incompatible with this problem
○ tf-idf example: a restaurant named "Food" will have a high idf score, so a review like "… the food was great … when we finished with our food …" gets matched to it rather than to the restaurant it actually describes (e.g., The Soup Sandwich Shop)
● Long queries, short documents
○ predictable language in the query, structured "documents"
● Innovation: a "mixture" language model that assumes two different types of language in a review:
○ generic review language
○ object-specific language

  18. Model Notation
General intuition behind the generative model: state a model for each document, and select the document whose model is most likely to have generated the query.
● Reviews: r ∈ R; objects: e ∈ E, each with text attributes text(e)
● r_e = r ∩ text(e)
● P_e(w): probability that a word in the review describes object e
● P(w): probability that a word is generic review language
● Parameter α: a word is drawn from P_e(w) with probability α, and from P(w) with probability 1 − α
● Z(r): normalizing function based on review length and word counts

  19. Model Definition
● Estimating the review probability: schematically, P(r|e) = Z(r) · ∏_{w ∈ r_e} ( α·P_e(w) + (1 − α)·P(w) )
● Matching an object to a review: e* = argmax_{e ∈ E} P(r|e)
● ** the uniform assumption for review language allows us to ignore words outside r_e
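A minimal sketch of this matching, assuming the mixture form written above; the helper names, data structures, and toy data are invented, and Z(r) is dropped since it is constant per review and does not affect the argmax:

    # Sketch of mixture-LM review-to-object matching (illustrative, not the
    # authors' code). Unmatched words contribute the same (1 - alpha) * P(w)
    # for every object, so only words in r_e = r ∩ text(e) affect the ranking.
    import math
    from collections import Counter

    ALPHA = 0.002  # the development-set estimate reported on the next slide

    def background_lm(reviews):
        """P(w): MLE over review text (ideally with object language removed)."""
        counts = Counter(w for r in reviews for w in r)
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    def log_score(review, object_text, p_bg, alpha=ALPHA):
        """log P(r|e) up to an e-independent constant, restricted to r_e."""
        obj = Counter(object_text)
        obj_total = sum(obj.values())
        total = 0.0
        for w in review:
            if w in obj:
                p_e = obj[w] / obj_total      # object-specific language model
                p_w = p_bg.get(w, 1e-9)       # generic review language model
                mixed = alpha * p_e + (1 - alpha) * p_w
                total += math.log(mixed) - math.log((1 - alpha) * p_w)
        return total

    # usage: pick the object most likely to have generated the review
    objects = {"Casablanca": "casablanca moroccan restaurant".split(),
               "Tagine": "tagine moroccan food".split()}
    review = "great moroccan food at tagine".split()
    p_bg = background_lm([review])
    print(max(objects, key=lambda e: log_score(review, objects[e], p_bg)))

Each matched word contributes the log-ratio of the mixed probability to the generic-only probability, so rare object-specific words (here "tagine") pull the score up far more than common review words.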

  20. Parameter Estimation
● Similar to a traditional LM, but requires estimation because total term frequency counts aren't available:
○ g(w) = log(1 / freq(w))
● P(w) is calculated using reviews with all object-related language removed
● α is estimated using a development set: α = 0.002
○ experiments showed performance is not sensitive to this parameter

  21. Dataset
● ~300K Yelp reviews, describing 12K restaurants
● Processing: removed reviews with no mention of the restaurant
● Expanded the object set to 681K restaurants from Yahoo! Local
● Final dataset: 25K reviews, describing 6K restaurants
● Evenly divided test and training sets, with 1K reviews reserved as development data

  22. Results
● Baseline algorithm: TFIDF+
○ treats objects as queries and reviews as documents
● Weighting functions: TFIDF+ uses f(w) = N/df(w); RLM uses the model-based weighting, g(w), defined above (a toy comparison follows below)
● RLM outperforms TFIDF+, particularly for longer reviews
● Longer reviews are more difficult to categorize in general: more confounding proper-noun mentions
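For contrast, a toy computation of the two term weightings; reading the RLM weighting as the g(w) = log(1/freq(w)) from the previous slide is an inference here, and the three-document corpus is invented:

    # Toy comparison of the two weighting functions from the slides.
    import math
    from collections import Counter

    docs = [["food", "great"], ["food", "ok"], ["tagine", "great"]]
    N = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency
    tf = Counter(w for d in docs for w in d)       # total term frequency
    total = sum(tf.values())

    for w in ("food", "tagine"):
        tfidf_plus = N / df[w]                     # TFIDF+: f(w) = N / df(w)
        g = math.log(1 / (tf[w] / total))          # g(w) = log(1 / freq(w))
        print(w, round(tfidf_plus, 2), round(g, 2))

Both schemes weight the rare name "tagine" above the common word "food"; the difference is that g(w) works from total term frequency, which the paper must estimate, rather than from document frequency.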

  23. Critiques
Pros:
● Good example of using relatively simple LM techniques to gain a significant advantage over tf-idf
● Methods could be expanded to other IR tasks with long queries and short "documents"
○ e.g., identifying the topic of customer emails
Cons:
● Data processing removed ~11/12 of the original Yelp review set
○ suggests only a small fraction of reviews are suitable for object classification
● Proliferation of structured review sites calls into question the usefulness of the method
● Questionable assumption: uniform distribution of review language

  24. Aspect Ranking: Identifying Important Product Aspects from Online Consumer Reviews (Yu, Zha, Wang, Chua, 2011)
Main RQ:
● Beyond identifying aspects, can we rank them according to importance?
Defining importance:
● The important aspects are those that most influence a consumer's opinion about a product.
Building on previous work:
● Frequency alone has been used as an indicator of importance
● Is frequency enough? Is frequency a good idea at all?

  25. Aspect Ranking: Assumptions
Central idea: "we assume that consumer's overall opinion rating on a product is generated based on a weighted sum of his/her specific opinions on multiple aspects of the product, where the weights essentially measure the degree of importance of the aspects" (p. 1497)
● Do we agree with this assumption? (a toy illustration follows below)
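The assumption can be made tangible with a toy least-squares fit: if overall ratings really are a weighted sum of aspect opinions, the weights (the importances) can be recovered by regression. All numbers below are fabricated for illustration and are not from the paper:

    # Toy illustration of the weighted-sum assumption: recover aspect
    # "importance" weights from overall ratings via least squares.
    import numpy as np

    # rows = reviews; columns = opinion scores on aspects (food, service, price)
    X = np.array([[5, 2, 3],
                  [4, 4, 2],
                  [2, 5, 4],
                  [5, 5, 5],
                  [1, 2, 2]], dtype=float)
    true_w = np.array([0.6, 0.3, 0.1])  # hypothetical importances
    y = X @ true_w                      # overall rating = weighted sum of aspect opinions

    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w_hat)                        # recovers ~[0.6, 0.3, 0.1]

Real ratings are noisy and aspect opinions are themselves estimated, so the recovery is never this clean; the toy only shows why the weighted-sum assumption makes importance learnable at all.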

  26. Aspect Ranking: Data
● 11 products in 4 domains:
○ all electronics products
● 2 types of reviews, crawled from 4 web sites:
○ Pros + Cons
○ free text
● Manually annotated by several people for aspect importance and sentiment (importance = the average of the gold-standard annotations)

  27. Aspect Ranking: Methodology Overview (a rough sketch of step 1 follows below)
1. Extract aspects via dependency parsing
● Take frequent NPs from Pros/Cons reviews and use them to train an SVM for the free-text reviews
● Expand via synonymy (thesaurus.com)
● Problems?
2. Classify the sentiment of these aspects
● Train an SVM (again) on Pros/Cons; classify the sentiment expressions in free text closest to each aspect
● Problems?
● This seemed almost unrelated to the core goals of the paper
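A rough scikit-learn sketch of the spirit of step 1: train a phrase classifier on Pros/Cons-style data and apply it to candidate NPs from free text. The phrases, labels, and bag-of-ngrams features are invented; the paper's actual features come from dependency parses:

    # Rough sketch of training a phrase classifier on Pros/Cons-style data
    # (illustrative only; not the paper's feature set or pipeline).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    # invented training phrases: 1 = aspect term, 0 = not an aspect term
    phrases = ["battery life", "screen quality", "really enjoyed", "price",
               "would recommend", "customer service"]
    labels = [1, 1, 0, 1, 0, 1]

    clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(phrases, labels)

    # candidate NPs pulled from free-text reviews (also invented)
    print(clf.predict(["shipping speed", "very happy"]))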
