Current Research

Google Scholar Page here

Paul Nulty, Yannis Theocharis, Sebastian Adrian Popa, Olivier Parnet, and Kenneth Benoit. 2015. “Social Media and Political Communication in the 2014 Elections to the European Parliament.” Version: October 12, 2015.

Social media play an increasingly important part in the communication strategies of political campaigns by reflecting information about the policy preferences and opinions of political actors and their public followers. In addition, the content of the messages provides rich information about the political issues and the framing of those issues during elections, such as whether contested issues concern Europe or rather extend pre-existing national debates. In this study, we survey the European landscape of social media using tweets originating from and referring to political actors during the 2014 European Parliament election campaign. We describe the language and national distribution of the messages, the relative volume of different types of communications, and the factors that determine the adoption and use of social media by the candidates. We also analyze the dynamics of the volume and content of the communications over the duration of the campaign with reference to both the EU integration dimension of the debate and the prominence of the most visible list-leading candidates. Our findings indicate that the lead candidates and their televised debate had a prominent influence on the volume and content of communications, and that the content and emotional tone of communications reflect preferences along the EU dimension of political contestation more than classic national issues relating to left-right differences.
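The volume analysis described above reduces, at its core, to aggregating time-stamped multilingual messages. As a minimal sketch (not the authors’ pipeline; the column names, dates, and counts are invented for illustration), the daily-volume-by-language aggregation might look like this:

```python
# Minimal sketch: daily tweet volume broken down by language during a
# campaign window. The DataFrame columns and values are hypothetical.
import pandas as pd

tweets = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2014-05-12", "2014-05-12", "2014-05-13",
        "2014-05-15", "2014-05-15", "2014-05-15",
    ]),
    "lang": ["en", "de", "fr", "en", "en", "de"],
})

# Count tweets within each (day, language) cell.
daily = (
    tweets.groupby([pd.Grouper(key="created_at", freq="D"), "lang"])
          .size()
          .unstack(fill_value=0)
)
print(daily)
```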

Schwarz, Daniel, Denise Traber, and Kenneth Benoit. “Estimating Intra-Party Preferences: Comparing Speeches to Votes.” Conditionally accepted at Political Science Research and Methods.

Kenneth Benoit and Thomas Däubler. May 8, 2015. “Putting Text in Context: How to Estimate Better Left-Right Positions by Scaling Party Manifesto Data.”

Hand-coded party manifestos have formed the largest source of comparative, over-time data for estimating party policy positions and emphases, based on the fundamental assumption that left-right ideological positions can be measured by comparing the relative emphasis of predefined policy categories. We critically challenge this approach by showing that left-right ideology can be better measured from specific policy emphasis using an inductive approach, and by demonstrating that there is no single a priori definition of left-right policy that outperforms the inductive approach across contexts. To estimate party positions, we apply a Bayesian measurement model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on political theory, exploratory analysis, or guesswork. We also demonstrate that the IRT approach can work even when the items are not specifically designed to measure ideological positions. A key advantage of our framework lies in its flexibility: here, we specifically show how to infer policy positions in two dimensions, but there are numerous extensions for future research, such as examining coder effects or adding covariates to predict the model parameters.
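To make the item-response structure concrete: category counts y[i, j] for manifesto i and coding category j can be modelled as Poisson draws whose rate depends on a latent position theta[i] through a category-specific “discrimination” beta[j]. The paper’s model is Bayesian; the penalized maximum-likelihood sketch below, on simulated data, is only a stand-in that shows the structure of such a model, not the authors’ estimator:

```python
# Rough illustration: y[i, j] ~ Poisson(exp(alpha[i] + psi[j] + beta[j] * theta[i])),
# where theta[i] is the latent position and beta[j] is how category j loads on it.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_docs, n_cats = 20, 8
theta_true = rng.normal(size=n_docs)             # latent positions
beta_true = rng.normal(size=n_cats)              # category "discriminations"
alpha_true = rng.normal(scale=0.5, size=n_docs)  # document verbosity
psi_true = rng.normal(scale=0.5, size=n_cats)    # category base rates
mu = np.exp(alpha_true[:, None] + psi_true[None, :]
            + np.outer(theta_true, beta_true))
y = rng.poisson(mu)

def unpack(par):
    alpha = par[:n_docs]
    psi = par[n_docs:n_docs + n_cats]
    beta = par[n_docs + n_cats:n_docs + 2 * n_cats]
    theta = par[n_docs + 2 * n_cats:]
    return alpha, psi, beta, theta

def objective(par):
    alpha, psi, beta, theta = unpack(par)
    eta = alpha[:, None] + psi[None, :] + np.outer(theta, beta)
    nll = np.sum(np.exp(eta) - y * eta)   # Poisson NLL up to a constant
    return nll + 0.01 * np.sum(par ** 2)  # weak ridge penalty (a crude "prior")

fit = minimize(objective, rng.normal(scale=0.1, size=2 * (n_docs + n_cats)),
               method="L-BFGS-B")
*_, theta_hat = unpack(fit.x)
theta_hat = (theta_hat - theta_hat.mean()) / theta_hat.std()  # identification

r = np.corrcoef(theta_hat, theta_true)[0, 1]
print(f"correlation with simulated positions: {abs(r):.2f}")  # sign not identified
```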

Kenneth Benoit and Paul Nulty. April 8, 2013. “Classification Methods for Scaling Latent Political Traits.” Paper prepared for presentation at the Annual Meeting of the Midwest Political Science Association, April 11–14, 2013, Chicago.

Quantitative methods for scaling latent political traits have much in common with supervised machine learning methods commonly applied to tasks such as email spam detection and product recommender systems. Despite commonalities, however, the research goals and philosophical underpinnings are quite different: machine learning is usually concerned with predicting a knowable or known class, most often with a practical application in mind. Estimating political traits through text, by contrast, involves measuring latent quantities that are inherently unobservable through direct means, and where human “verification” is unreliable, prohibitively costly, or otherwise unavailable. In this paper we show not only that the Naive Bayes classifier, one of the most widely used machine learning classification methods, can be successfully adapted to measuring latent traits, but also that it is equivalent in general form to Laver, Benoit, and Garry’s (2003) “Wordscores” algorithm for measuring policy positions. We revisit several prominent applications of Wordscores reformulated as Naive Bayes, demonstrating the equivalence but also revealing areas where the original Wordscores algorithm can be substantially improved using standard techniques from machine learning. From this we offer concrete recommendations for future applications of supervised machine learning to scale latent political traits.
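The Wordscores algorithm can be written in a few lines, which makes the equivalence easy to see: in the sketch below (with invented toy counts), the quantity P(r | w) is exactly the class posterior that a multinomial Naive Bayes classifier with flat class priors would assign to reference text r on observing the single word w.

```python
# The "Wordscores" algorithm of Laver, Benoit, and Garry (2003), with the
# Naive Bayes reading in the comments. Toy counts invented for illustration.
import numpy as np

# Rows: reference texts with known positions; columns: vocabulary counts.
ref_counts = np.array([[10, 2, 1, 0],    # a "left" reference text
                       [1, 3, 4, 12]])   # a "right" reference text
ref_scores = np.array([-1.0, 1.0])       # a priori positions of the references

F = ref_counts / ref_counts.sum(axis=1, keepdims=True)  # P(w | r), the NB likelihood
P_r_given_w = F / F.sum(axis=0, keepdims=True)          # NB posterior with flat priors
word_scores = ref_scores @ P_r_given_w                  # s_w = sum_r P(r | w) * a_r

# Score a "virgin" text: frequency-weighted mean of its word scores.
virgin = np.array([3, 1, 2, 5])
text_score = (virgin / virgin.sum()) @ word_scores
print("word scores:", word_scores.round(2))
print("virgin text score:", round(text_score, 2))
```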

Older papers:

William Lowe and Kenneth Benoit. 2011. “Estimating Uncertainty in Quantitative Text Analysis.” Paper prepared for the 2011 Midwest Political Science Association. Version: 30 March 2011.

Several methods have now become popular in political science for scaling latent traits—usually left-right policy positions—from political texts. Following a great deal of development, application, and replication, we now have a fairly good understanding of the estimates produced by scaling models such as “Wordscores”, “Wordfish”, and other variants (i.e., Monroe and Maeda’s two-dimensional estimates). Less well understood, however, are the appropriate methods for estimating uncertainty around these estimates, which are based on untested assumptions about the stochastic processes that generate text. In this paper we address this gap in our understanding on three fronts. First, we lay out the model assumptions of scaling models and how to generate uncertainty estimates that would be appropriate if all assumptions are correct. Second, we examine a set of real texts to see where and to what extent these assumptions fail. Finally, we introduce a sequence of bootstrap methods to deal with assumption failure and demonstrate their application using a series of simulated and real political texts.
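To make the bootstrap idea concrete: one simple scheme treats the words of a scored text as draws from an underlying token distribution, resamples them with replacement, and recomputes the position each time, yielding a percentile interval. This is an illustration only (the paper develops a sequence of such methods), and the word scores and text below are invented:

```python
# Token-resampling bootstrap around a simple word-score position estimate.
import numpy as np

rng = np.random.default_rng(7)
word_scores = {"tax": -0.8, "market": 0.9, "welfare": -0.6, "reform": 0.3}
tokens = ["tax", "market", "tax", "welfare", "reform", "market", "market"]

def scale(tok_list):
    """Position estimate: mean word score of the scored tokens."""
    return np.mean([word_scores[t] for t in tok_list])

# Resample the text's tokens with replacement and rescore each replicate.
boot = np.array([
    scale(rng.choice(tokens, size=len(tokens), replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate: {scale(tokens):+.2f}, 95% CI: [{lo:+.2f}, {hi:+.2f}]")
```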