Of the eight polarization variables measured (including affective polarization, extreme ideological views, and respect for election norms), none changed in a statistically significant way. This is pretty good evidence against the most straightforward version of the "algorithmic filter bubbles cause polarization" thesis.

But this is not the end of the story, because filter bubbles aren't the only way of thinking about the relationship between media, algorithms, and democracy. A review of hundreds of studies has found a positive correlation between general "digital media" use and polarization, worldwide, as well as a positive correlation with political knowledge and participation. Social media use has many effects, both good and bad. For example, there's evidence that engagement-based algorithms amplify divisive content, and tools to reach targeted audiences can also be used for propaganda or harassment.

We need to ask not just how to prevent harm, but what part platforms should play in helping to make societal conflicts healthier. It's a deep question, and scholars have explored how different theories of democracy might call for different types of recommender algorithms. We don't want to eliminate all political conflict or enforce conformity, but there's no denying that the way Americans are fighting each other now, sometimes called pernicious polarization, is destructive, escalatory, and unhealthy.

Meta's results notwithstanding, we know that content can have effects on polarization, because of the Strengthening Democracy Challenge, a series of experiments that tried to change how people approach political conflict. It's also possible to algorithmically identify political content that garners agreement across societal divides, a strategy known as bridging-based ranking, and prioritizing such content is thought to reduce polarization (a minimal sketch of the idea appears at the end of this post). Such a ranking system is already in use to select Twitter's community notes. There have even been experiments showing that a carefully designed AI chatbot can help mediate divisive conversations.

Many people will be looking to the current batch of experiments to either crucify or exonerate Facebook. That's not what they do. This is bigger than Facebook, and these studies are early results in a new field. Meta should be commended for undertaking open research on these significant topics. Yet this is the culmination of work announced three years ago. In the face of layoffs and criticism, the appetite for open science on hard questions may be waning across the industry. I'm aware of at least one large research project Meta recently canceled, and the company said it "does not have plans to allow" another wave of election research in 2024. Many in the research community support a bill called PATA, which would give the National Science Foundation authority to vet and prioritize research projects which platforms would be obligated to support.

Simultaneously, the AI era is dawning, and our information ecosystem is about to get a lot weirder. We're going to need a lot more open science on the frontiers of media, machines, and conflict.
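To make the bridging-based ranking idea concrete, here is a minimal sketch in Python. Everything in it (the `Item` structure, the group labels, the scoring functions) is invented for illustration; the production system behind Twitter's community notes uses a more involved matrix-factorization model rather than this toy rule. The core intuition is the same, though: rank content by its support in the group that likes it *least*, rather than by total engagement.

```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    # Hypothetical approval rate (0..1) among raters from each
    # self-identified group; in practice these would be estimated
    # from rating data, not observed directly.
    approval_by_group: dict[str, float]

def engagement_score(item: Item) -> float:
    # Engagement-style baseline: reward total approval,
    # no matter which side it comes from.
    return sum(item.approval_by_group.values())

def bridging_score(item: Item) -> float:
    # Bridging-style ranking: an item scores only as high as its
    # support in the least-approving group, so it must garner
    # agreement across societal divides to rank well.
    return min(item.approval_by_group.values())

items = [
    Item("partisan zinger", {"left": 0.9, "right": 0.1}),
    Item("shared local news", {"left": 0.6, "right": 0.7}),
]

# Sorting by bridging score surfaces cross-divide content first,
# even though the partisan item "wins" on raw engagement.
for item in sorted(items, key=bridging_score, reverse=True):
    print(f"{bridging_score(item):.2f}  {item.text}")
```

Under engagement scoring the partisan item ranks first (1.0 vs. 1.3 total approval is close, but divisive content often dominates on raw reactions); under the bridging score it drops to the bottom because one group barely approves of it. That inversion is the hoped-for mechanism by which prioritizing cross-divide content might reduce polarization.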