Graduate Student Posters

Graduate student posters will be presented during the community reception and poster session, Thursday, March 1, 2018, from 5:30-7:00pm in the Moss Arts Center Atrium.


The Source of Judicial Error Matters for Public Support of Self-Help Conflict Management

Leanna Ireland, Dept. of Sociology

People turn to forms of self-help conflict management when formal processes of the state deteriorate and become inaccessible. This deterioration often includes real or alleged inaccurate verdicts that lower the perceived justice of the judicial system. Increasingly, the potential source of error is algorithmic, as more and more facets of the system incorporate predictive assessments. In other domains, people have expressed a preference for human over algorithmic forecasters after the occurrence of error. Does this algorithmic aversion apply in the criminal justice system? Will people exposed to algorithmic error be more likely to circumvent the system than those exposed to human error? In this paper, I examine whether the source of judicial error matters for declared support of self-help conflict management. In the experiment, respondents read about an identical judicial error made by either a human or an algorithm before indicating their attitudes toward the judicial system. The findings suggest that the source of judicial error matters for support of self-help. Respondents who read the algorithm-error scenario had greater odds of believing that revenge and naming and shaming someone on social media were extremely right, but lower odds for protest. The paper discusses potential mechanisms behind the differences between the human-error and algorithm-error groups in self-help behaviors.


Characterizing the evolution of misinformation in social media and news media

Rongrong Tao, Dept. of Computer Science

Online communication platforms, including traditional news media and rapidly growing social media, have accelerated the spread of information. However, they also increase the possibility of misinformation spreading before the truth can get its boots on. While existing studies focus on characterizing and detecting various forms of misinformation in social media and news media, comparatively few try to reveal the interaction between the evolution of misinformation in social media and in news media.


Supporting Co-located Decision-making and Sense-making with Large Twitter Data

Shuo Niu, Dept. of Computer Science

This work investigates a system to support awareness of topics from large Twitter datasets in co-located sense-making and decision-making activities. A large multi-touch tabletop presents a collaborative interface on which users can search, view, and organize tweets, performing manipulations such as moving, categorizing, and highlighting them. The system interprets users’ semantic understanding of the topics and dynamically updates the parameters of machine learning models to generate topic metadata for visualization. A secondary display visualizes the topics so that users can assess, discuss, and exchange ideas about the topics they created. Visualization parameters are tailored toward higher awareness of the topics and higher-quality topic information. We evaluate how co-located users perceive and trust the visualized tweets, and probe design guidelines for dynamically mining tweet topics to support co-located work.
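
As a rough illustration of the kind of topic mining the abstract describes, the sketch below extracts topic metadata from a handful of tweets with scikit-learn's latent Dirichlet allocation. It is a minimal stand-in, not the authors' system; the tweet texts and the choice of two topics are invented for the example.

```python
# Minimal sketch (not the authors' implementation): extracting topic
# metadata from a set of tweets with scikit-learn's LDA, one plausible
# way a tabletop system could regenerate topics after users regroup tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "city council votes on new transit budget",
    "transit delays frustrate morning commuters",
    "local team wins championship game in overtime",
    "fans celebrate championship downtown",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(tweets)

# n_components could be driven by how many groups users have formed on
# the tabletop; here it is fixed at 2 purely for illustration.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-3:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```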


Students, Statistics, and Gun Violence

Nick Bolin, Dept. of History

As the debate over gun control is carried out in public discourse, statistics are often employed by both sides to explain and argue their points. By involving students in assignments that require them to analyze how this data is created and used, including its definitions, applications, and design, we equip them with the tools to analyze arguments and think critically about facts, not only around the gun control debate but in public discourse more generally. I draw on student feedback and the contents of their assignments (anonymized) to demonstrate how these assignments changed the way students think about facts in conversation.


Where Should We Protect? Identifying Potential Targets of Terrorist Attack Plans via Crowdsourced Text Analysis

Tianyi Li, Dept. of Computer Science

The increasing volume of text datasets is challenging the cognitive capabilities of expert analysts to produce meaningful insights. Large-scale distributed agents such as machine learning algorithms and crowd workers present new opportunities to make sense of big data. However, we must first overcome the challenge of modeling and guiding the overall process so that many distributed agents can meaningfully contribute to suitable components. Inspired by the sensemaking loop, collaboration models, and investigation techniques used in the intelligence analysis community, we propose a pipeline to better enable collaboration among expert analysts, crowds, and algorithms. We modularize and clarify the components of the sensemaking loop so that they are connected via clearly defined inputs and outputs, pass intermediate analysis results along the pipeline, and can be assigned to different agents with appropriate techniques. We instantiate the pipeline with a location-based investigation strategy and experiment with crowd workers on Amazon Mechanical Turk. Our results show that the pipeline can successfully guide crowd workers to contribute meaningful insights that help solve complicated sensemaking challenges. This allows us to imagine broader possibilities for how each component could be executed: with individual experts, crowds, or algorithms, as well as new combinations of these, with each assigned where it is best suited.
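
One way to picture the modular pipeline the abstract describes is as a sequence of stages, each with a declared agent and explicit input/output. The sketch below is purely illustrative; all names, stages, and placeholder functions are hypothetical, not the authors' actual system.

```python
# Illustrative sketch only: modularizing a sensemaking pipeline into
# stages with explicit inputs/outputs, so each stage can be assigned to
# an expert, a crowd, or an algorithm. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    agent: str                       # "expert", "crowd", or "algorithm"
    run: Callable[[List[str]], List[str]]

def extract_locations(docs):
    # Placeholder: a crowd task or an NER model would go here.
    return [d for d in docs if "airport" in d or "station" in d]

def rank_targets(candidates):
    # Placeholder: experts or an algorithm could score candidates.
    return sorted(candidates)

pipeline = [
    Stage("extract location mentions", "crowd", extract_locations),
    Stage("rank candidate targets", "expert", rank_targets),
]

data = ["meet at the airport at noon", "buy groceries", "watch the station"]
for stage in pipeline:
    data = stage.run(data)  # each stage's output feeds the next stage
    print(f"{stage.name} ({stage.agent}): {data}")
```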


Evaluating Experts’ Performance in Image Geolocation with GroundTruth

Rifat Sabbir Mansur, Dept. of Computer Science

The process of geolocating images to verify the validity of a news story is long and tedious. GroundTruth is a crowdsourcing-based geolocation system that allows a user to identify the location of an image with feedback from the crowd. In this paper, we propose an experimental design to evaluate experts' impressions of the system. To that end, we introduce guidelines for measuring the quality and difficulty of an image. We also introduce real-time crowdsourcing by integrating LegionTools with our system. Finally, we propose a result analysis by which we can quantify the performance difference between two geolocation systems.


Trust and trustworthiness in social recommender systems

Taha Hassan, Dept. of Computer Science

The prevalence of misinformation on online social media has tangible empirical connections to increasing political polarization and partisan antipathy in the United States. Social recommender systems are a collection of complex strategies designed to assess a social network user’s preferences and habits, often with the goal of curating the most statistically sound recommendations on what to read or listen to, what to buy, what and where to eat, and so on. The ranking algorithms driving these decisions often encode broad assumptions about network structure (e.g., homophily) and group cognition (e.g., that social action is largely imitative). Such assumptions can be naïve in an era of fake news and ideological uniformity toward the political poles; we therefore re-examine them with the aid of the user-centric notion of trustworthiness in social news recommendation. With Facebook’s Trending feed as a case study, the constituent dimensions of trustworthiness (diversity, transparency, explainability, disruption) highlight new opportunities for discouraging dogmatization and building decision-aware, transparent news recommender systems.
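
To make the "diversity" dimension concrete, here is a toy sketch of a greedy re-ranking that trades relevance against source novelty. It is purely illustrative: the items, scores, and the specific trade-off are invented for this example, not drawn from the poster.

```python
# Illustrative sketch, not the paper's method: a greedy re-ranking that
# balances relevance against source diversity, one plausible way to
# operationalize the "diversity" dimension of a trustworthy news feed.
items = [  # (headline, source, relevance score) -- toy data
    ("story A", "outlet1", 0.9),
    ("story B", "outlet1", 0.8),
    ("story C", "outlet2", 0.7),
    ("story D", "outlet3", 0.6),
]

def rerank(items, diversity_weight=0.5):
    ranked, seen_sources = [], set()
    pool = list(items)
    while pool:
        def score(it):
            # Reward items from sources not yet shown in the feed.
            novelty = 0.0 if it[1] in seen_sources else 1.0
            return (1 - diversity_weight) * it[2] + diversity_weight * novelty
        best = max(pool, key=score)
        ranked.append(best)
        seen_sources.add(best[1])
        pool.remove(best)
    return ranked

for headline, source, rel in rerank(items):
    print(headline, source, rel)
```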


The (Absence of) Racial Bias in Content Evaluation on Amazon Mechanical Turk

Sukrit Venkatagiri, Dept. of Computer Science

Fake news poses a danger to today’s society. It has negatively affected public health and safety, reduced public trust in news media, and even has the potential to destabilize governments. Currently, social media and news aggregator services – such as Facebook, Twitter, and Google – employ algorithmic methods to moderate online content and prevent the spread of fake news, but often fail to do so. These corporations have thus resorted to manual verification by crowds of humans, who, in their evaluations, may not be free from their own inherent forms of bias. This paper explores one aspect of content moderation: how perceptions of a journalist’s social media profile affect the evaluation of content created by them. We perform a 2×2 between-subjects study on Amazon Mechanical Turk and find that raters’ content evaluations are not correlated with the journalist’s social media profile quality. We also find that their evaluations are free from racial bias. We discuss possible interpretations of our findings and future work toward better understanding these results.
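
For readers unfamiliar with 2×2 between-subjects designs, the sketch below shows one common way such data could be analyzed, a two-way ANOVA in statsmodels. The factor names and all ratings are fabricated placeholders; they are not the study's data, and the poster does not state that this was its analysis method.

```python
# Hypothetical sketch of analyzing a 2x2 between-subjects design
# (profile quality x journalist race) with a two-way ANOVA. The data
# below are fabricated placeholders, not the study's results.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "rating":  [4, 5, 3, 4, 4, 3, 5, 4, 3, 4, 4, 5],
    "quality": ["high", "high", "low", "low", "high", "low"] * 2,
    "race":    ["a"] * 6 + ["b"] * 6,
})

# Main effects of each factor plus their interaction.
model = ols("rating ~ C(quality) * C(race)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```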


A Person in a Haystack: Using Crowdsourcing and Computer Vision to Identify a Person in Historical Photographs

Vikram Mohanty, Dept. of Computer Science

In the age of surveillance cameras, and at a time when Apple’s latest iPhone unlocks with Face ID, it is only a matter of time before face recognition becomes ubiquitous. The intricacies of the problem surface when we weigh the normal human tendency to demand perfect (or realistic, near-perfect) accuracy against our thirst for automating everything that can be automated. Identifying a person in a stack of photographs is a needle-in-a-haystack problem, which gives a measure of the search scale. With state-of-the-art face recognition methods, the size of the haystack is significantly reduced, but only to a pool of very similar-looking people, not to the one person in question. This necessitates some form of human intervention. In a time when citizen journalism is finding increasing acceptance by the day, a large volume of user-generated content (tweets, posts, blogs, etc.) circulates on the Internet to support reporting, and the issue of identifying a certain person frequently arises, with high stakes for accuracy. In this project, we address the same problem in the much lower-risk context of identifying American Civil War soldiers. We discuss how the expertise of Civil War historians can be leveraged, in conjunction with the complementary computational power of crowd workers and an AI designed for face recognition, to solve this complex task.
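
As a rough sketch of the "shrink the haystack" step the abstract describes, the snippet below ranks a gallery of face embeddings by cosine similarity to a query face and keeps a small candidate pool for historians and crowd workers to review. The embeddings are random stand-ins; a real pipeline would obtain them from a trained face-recognition model, and the pool size of 10 is an arbitrary choice for the example.

```python
# Minimal sketch of reducing a photo archive to a pool of look-alikes.
# Embeddings are random placeholders standing in for the output of a
# face-recognition model; this is not the project's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)
query = rng.normal(size=128)             # embedding of the unknown soldier
gallery = rng.normal(size=(1000, 128))   # embeddings of archive portraits

def cosine_sim(a, b):
    # Similarity of one query vector against every row of the gallery.
    return (b @ a) / (np.linalg.norm(b, axis=1) * np.linalg.norm(a))

scores = cosine_sim(query, gallery)
top_k = np.argsort(scores)[-10:][::-1]   # pool of similar-looking people
print("candidates for historian/crowd review:", top_k.tolist())
```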