Language and Conflict Panel
Join the Language Science Center, Baha'i Chair for World Peace, and College of Behavioral and Social Sciences for a hybrid panel on language and conflict, featuring panelists Philip Resnik (LING), Julia Mendelsohn (INFO), and Erik Nesse (ARLIS). This panel will be moderated by Hoda Mahmoudi, Director of The Baha'i Chair for World Peace.
To attend virtually, register for the panel Zoom.
In their respective presentations, panelists will address the following questions: How do language and communication barriers contribute to societal conflict? What are the indicators of good versus bad communication in high-stakes settings? How does this knowledge inform our strategies for communicating across individuals and social communities? What makes people with conflicting views more receptive?
Panelists
Philip Resnik, Department of Linguistics | Political Framing
We don’t just take in information; we filter it through what we already know and believe. Two people can look at the same story and see something totally different, a phenomenon at the root of a lot of conflict. Even factually accurate news can reinforce misinformation through framing or headlines. Some scientists call perception “controlled hallucination,” and in political discourse, it’s not always well controlled.
Julia Mendelsohn, College of Information | Internet Discourse
What is the role of common ground in the way language drives conflict? Dogwhistles rely on a lack of shared knowledge across the full audience, making them hard to detect or moderate. Metaphorical dehumanization works in the opposite way, drawing on common ground we share about source domains like animals or water. Learn more about how these dynamics shape and define internet discourse.
Erik Nesse, Applied Research Laboratory for Intelligence and Security (ARLIS) | Multilingualism and Machine Translation
Understanding each other across linguistic and cultural barriers is valuable and reduces the chances of conflict, but it is genuinely difficult to do. Computational linguistics has produced tools to help bridge this gap, and they work well enough to have an impact in reducing the potential for conflict, but this success poses a problem. When a machine produces a convincing, seemingly high-quality translation, people assume it is accurate and stop questioning it. Real communication across languages involves culture, history, and context that tools don’t fully capture. By trusting these tools too much, do we stop training people to think critically about what’s being said, and actually increase the risk of miscommunication and conflict?