Archived: Our toolkit for people and teams tackling misinformation online

This is a simplified archive of the page at https://misinfocon.com/our-toolkit-for-people-and-teams-tackling-misinformation-online-9e6d240f3136


Credibility Coalition Workshop #1 @ MisinfoCon DC

connie moon sehat

MisinfoCon

Welcome to our workshop!

This is the first in a series of blog posts from the Credibility Coalition reflecting on workshops we led at MisinfoCon DC, held at the Newseum. We conducted these workshops over the course of six hours across two days, with some 50 attendees from government, research, advocacy, journalism, and academic organizations.

When it comes to the issue of “information disorder,” to use Claire Wardle and Hossein Derakhshan’s term describing the amalgam of false and harmful information polluting the internet, there are many concerns. For example, is the main problem an issue of platform filtering, or of active disinformation campaigns from foreign countries? The problem at hand is not well understood, and people are approaching it in a number of different ways.

This is why we’ve spent a lot of time developing frameworks at the Credibility Coalition: the breadth and complexity of internet information disorder demands collaborative, interdisciplinary work. The more people who can rigorously research these issues together, the better the standards we can establish for credible content online, and the more thoughtfully those standards can be applied.

(This by the way is a test of our new tagline: rigorous research, better standards, thoughtful application. Feedback welcome.)

At MisinfoCon DC, we employed two of our frameworks — MisinfoMap and indicator development — to help us think through information issues related to politics and elections.

MisinfoMap: How can a map help us navigate the territory more effectively?

Last month, we introduced our MisinfoMap effort during a working session at the CUNY Graduate School of Journalism and during the W3C Credible Web Community Group F2F (“face to face”) meeting in San Francisco. We continued to develop the map with workshop participants at the Newseum during Day 1 of MisinfoCon DC.

The goal of the MisinfoMap exercise is to help situate the various information disorder efforts strategically. Say you have a project that you are working on. By placing your effort on the map, you can ask:

  • How does my work relate to others? Am I duplicating efforts or complementing them?
  • How could my work grow? Which other projects could I collaborate with?
  • How well is my work addressing known information disorder problems? Where are the gaps?
The Credibility Coalition’s stylized MisinfoMap.

Placing your work on the map involves considering three dimensions:

1. Theory to Practice

Where you place yourself on the spectrum between theory and practice depends on how you answer these questions:

  • How much does the project deal with theory and empirical information?
  • How much does the project seek to shape the quality of information encountered by audiences? (e.g. fact checking, ad tech, manufactured amplification, hate speech policies)

2. Infrastructure to Content (aka Medium to Message)

Questions here include:

  • How much does the project focus upon the structures of information exchange that condition how information disorder manifests itself? (e.g. standards, algorithms, search indices)
  • How much does the project focus upon the information flowing through networked spaces?

One lovely suggestion we received during this session: infrastructure may also include institutions, policies, and legal frameworks, not just technical platforms and algorithms.

3. Dimensions of Diversity

In this case, questions help you think through how robust your project is across different contexts:

  • What language does the project focus on?
  • What region/country does the project focus on?
  • What news topic does the project focus on (e.g. science, health, politics)?
Our sticky-driven working process related to MisinfoMap.

Knowing from the outset, for instance, that your project focuses only on English helps you either think through that design choice more clearly (if a solution is English-only, is that sufficient given the many languages at work in the United States?) or consider future growth directions and potential partner projects.

This exercise will work optimally once we have the resources to share a dynamically growing map with folks. We are currently developing an Airtable backend for this effort, and we look forward to adding the new stickies we gathered during the Monday session.
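For a sense of what one record in such a backend might look like, here is a minimal sketch that posts a single map entry to Airtable's REST API. The base ID, table name, and field names are hypothetical placeholders for illustration, not our actual schema.

```python
# Minimal sketch: push one MisinfoMap entry to an Airtable base.
# The base ID, table name, and field names are hypothetical
# placeholders, not the Credibility Coalition's real schema.
import os
import requests

AIRTABLE_BASE_ID = "appXXXXXXXXXXXXXX"  # hypothetical base ID
AIRTABLE_TABLE = "MisinfoMap"           # hypothetical table name
API_KEY = os.environ["AIRTABLE_API_KEY"]

entry = {
    "fields": {
        "Project": "Example fact-checking effort",
        # Dimension 1: 0.0 = pure theory, 1.0 = pure practice
        "TheoryToPractice": 0.8,
        # Dimension 2: 0.0 = infrastructure/medium, 1.0 = content/message
        "InfrastructureToContent": 0.9,
        # Dimension 3: diversity of contexts covered
        "Languages": ["English"],
        "Regions": ["United States"],
        "Topics": ["politics", "elections"],
    }
}

resp = requests.post(
    f"https://api.airtable.com/v0/{AIRTABLE_BASE_ID}/{AIRTABLE_TABLE}",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=entry,
    timeout=10,
)
resp.raise_for_status()
print("Created record:", resp.json()["id"])
```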

Examples from the MisinfoMap session. We love stickies.

What’s exciting is thinking about the many ways this map could be useful. Since we were in the capital city of the United States, a natural point of conversation was the effect of information disorder on US politics and elections. Evidence of that disorder has lately surfaced at the highest levels: consider the recent July hearing in which the House Judiciary Committee took Facebook, Google, and Twitter to task; those companies may be called to testify before the Senate Intelligence Committee in September as well.

We received feedback from attendees that a map can help folks understand the range of state and non-state efforts around information disorder, even within a single national context like the United States.

What are the indicators of credible content relevant to elections?

Taking advantage of the array of state and non-state folks at MisinfoCon, we workshopped another framework: indicators of content credibility, with a special focus on elections. In our inaugural study, members of the Credibility Coalition examined indicators, or signals, of credibility related to science and health information. How would these hold up when it comes to politics?

We think it’s a bit more complicated. To spur people’s thinking, we offered a few starter indicators and posed questions.

For example, consider the possible signal of aggressive advertisements. In our first study, we found that lower credibility evaluations of an article by experts and other reviewers correlated with more aggressive placement of advertisements.

But in the realm of US politics, where there are few limitations on campaign spending, aggressive advertising has been part of the landscape for quite some time; the millions spent in the last presidential election alone muddy this signal. So more thinking about the context of this indicator is needed.
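For intuition about how such an indicator can be tested against ratings, here is a small illustrative sketch using a rank correlation. The Spearman statistic and the toy annotation data are stand-ins, not our study's actual method or numbers.

```python
# Illustrative sketch (not the study's actual data or method):
# check whether an ad-aggressiveness score moves inversely with
# reviewers' credibility ratings, using a rank correlation.
from scipy.stats import spearmanr

# Hypothetical per-article annotations on 1-5 scales.
ad_aggressiveness = [5, 4, 4, 3, 2, 2, 1, 1]   # annotator score per article
credibility_rating = [1, 2, 2, 3, 4, 3, 5, 4]  # expert/reviewer rating

rho, p_value = spearmanr(ad_aggressiveness, credibility_rating)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A clearly negative rho would mirror the pattern described above:
# more aggressive ad placement, lower perceived credibility.
```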

Lively brainstorming and discussion led to the indicators described in the image below.

Or, take another signal related to information credibility: hyperpartisan language. There is a good deal of ongoing work in Natural Language Processing to understand the relationship between false information and hyperpartisan or extremely polarizing language.¹
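To make the NLP side concrete, here is a minimal sketch of the kind of classifier that work builds on: TF-IDF features feeding a logistic regression. The handful of inline examples are toy data; published studies train on large labeled corpora.

```python
# Minimal sketch of a hyperpartisan-language classifier of the kind
# used in the NLP work cited in the footnote. The inline examples
# are toy data; real studies train on large labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The senator's proposal will be debated in committee next week.",
    "Officials released the budget figures on Tuesday morning.",
    "These traitors are destroying everything we hold dear!",
    "Only a complete fool would believe the other party's lies.",
]
labels = [0, 0, 1, 1]  # 0 = neutral, 1 = hyperpartisan (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_text = ["The radical opposition wants to ruin this country!"]
print(model.predict_proba(new_text))  # class probabilities for the new text
```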

Brainstorming indicators of credible content around elections.

Yet unlike discussions in science forums, during an election we would expect to see partisan language: it is the very essence of citizens taking positions on issues aligned with their parties. So we already know that understanding the relationship of this indicator to mis-, dis-, or malinformation will need to be approached with nuance, and in relation to a nexus of other indicators.

We came up with a host of other indicators during the workshop, which we look forward to adding to our model!

Those are some of the main takeaways from this workshop. In our next post, we’ll talk about our workshop on private messaging apps, co-hosted with Oren Levine from the International Center for Journalists.

— —

¹Just to point out a couple of examples, because there are many: https://arxiv.org/pdf/1702.05638.pdf, or http://www.niemanlab.org/2017/11/even-automating-just-parts-of-journalists-fact-checking-efforts-can-speed-up-their-work-can-this-tool-help-on-that-front/