Authors:
McAuley, Julian ; Leskovec, Jure ; Jurafsky, Dan
Abstract:
Most online reviews consist of plain-text feedback together with a single numeric score. However, understanding the multiple 'aspects' that contribute to users' ratings may help us to better understand their individual preferences. For example, a user's impression of an audio book presumably depends on aspects such as the story and the narrator, and knowing their opinions on these aspects may help us to recommend better products. In this paper, we build models for rating systems in which such dimensions are explicit, in the sense that users leave separate ratings for each aspect of a product. By introducing new corpora consisting of five million reviews, rated with between three and six aspects, we evaluate our models on three prediction tasks: First, we uncover which parts of a review discuss which of the rated aspects. Second, we summarize reviews by finding the sentences that best explain a user's rating. Finally, since aspect ratings are optional in many of the datasets we consider, we recover ratings that are missing from a user's evaluation. Our model matches state-of-the-art approaches on existing small-scale datasets, while scaling to the real-world datasets we introduce. Moreover, our model is able to 'disentangle' content and sentiment words: we automatically learn content words that are indicative of a particular aspect as well as the aspect-specific sentiment words that are indicative of a particular rating.
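To make the first prediction task concrete, the following minimal Python sketch assigns each sentence of an audio-book review to the aspect whose keywords it matches best. This is only an illustration of the segmentation task; it is not the authors' model, and the aspect names and keyword weights below are hypothetical.

    # Toy aspect segmentation: score each sentence against per-aspect word
    # weights and keep the highest-scoring aspect. Illustrative only; the
    # aspects and weights are hypothetical, not learned from data.
    ASPECT_WEIGHTS = {
        "story":    {"plot": 2.0, "story": 2.0, "characters": 1.5, "ending": 1.0},
        "narrator": {"narrator": 2.0, "voice": 1.5, "reading": 1.0, "accent": 1.0},
    }

    def assign_aspects(review_sentences):
        """Return (sentence, best_aspect) pairs by summing per-aspect word weights."""
        assignments = []
        for sentence in review_sentences:
            words = sentence.lower().split()
            scores = {aspect: sum(weights.get(w, 0.0) for w in words)
                      for aspect, weights in ASPECT_WEIGHTS.items()}
            assignments.append((sentence, max(scores, key=scores.get)))
        return assignments

    if __name__ == "__main__":
        review = ["The plot dragged but the characters were great",
                  "The narrator's voice made the reading a pleasure"]
        for sentence, aspect in assign_aspects(review):
            print(f"{aspect:>8}: {sentence}")

The paper's model learns such aspect-indicative content words (and aspect-specific sentiment words) automatically rather than relying on a fixed keyword list as this sketch does.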
Keywords:
information services; learning (artificial intelligence); user interfaces; aspect rating; content word; learning attitude; learning attribute; multiaspect review; numeric score; online review; plain-text feedback; rating recovery; rating system; sentiment word; user audio book impression; user evaluation; Bipartite graph; Correlation; Data models; Optimization; Predictive models; Training; Unsupervised learning; machine learning; segmentation; sentiment analysis; summarization