Advocacy in Technology and Society/Building Informed Critique

Topic Summary

Summarize what was covered in this topic through the lecture, discussion, and the readings assigned this week.

The overarching theme for this week is accountability. Whether misinformation is being spread or news sources and social media posts go without fact-checking, there is a corresponding lack of accountability and transparency in the algorithms being applied in the public sector.

Rise in Usage of Social Media as a News Platform

According to a Pew Research Center study, about half of Americans get news on social media at least sometimes, down slightly from 2020 (Walker & Matsa, 2021). As social media becomes more prevalent in people's lives, the question of whether it serves as a reliable news platform continues to be raised. Given that roughly a third of Americans regularly get news on Facebook, we explore how the platform's fact-checking procedures affect an audience that is largely White, making up about 60% of the site's regular news consumers.

Accountability in Media, Misinformation, and International Relations

Following Russia's invasion of Ukraine, Facebook has been conducting independent fact-checking and labeling of content, which the Russian government demanded come to a halt (Al Jazeera, 2022). When Facebook refused, Russia announced that it would place restrictions on people using Facebook in the country. Facebook felt compelled to continue its fact-checking and labeling work to provide an outlet for ordinary Russians who oppose the war and want to make their voices heard.

Facebook Fact-Checking

Facebook partnered with third-party fact-checkers certified by the International Fact-Checking Network (IFCN) to achieve two goals: 1. empower users to access reliable information, and 2. fight the spread of misinformation. Facebook and its IFCN partners work through a multi-step process to achieve these goals. First, potentially false content shared on the platform is surfaced through signals such as the site where it first took off, associated keywords, and tips from the community. Fact-checkers then review the material's veracity and either rate the information as acceptably accurate or tag it with a warning. One unique point about this partnership is that fact-checkers do not remove content; they only "apply strong warning labels" and limit its further dissemination to other sources. This approach aims to strike a balance among free speech, "safety, authenticity, privacy, and dignity," so that contributors and users can communicate in a shared space with confidence.
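
As a rough mental model of this flag-review-label workflow, here is a minimal sketch in Python. It is hypothetical: the class names, rating categories, signal thresholds, and demotion factor are all invented for illustration and are not Facebook's actual system.

```python
# Hypothetical sketch of a third-party fact-checking pipeline, loosely
# modeled on the workflow described above. All names, ratings, and
# numbers are illustrative; this is not Facebook's actual system.
from dataclasses import dataclass

RATINGS_NEEDING_LABEL = {"False", "Altered", "Partly False", "Missing Context"}

@dataclass
class Post:
    text: str
    rating: str | None = None      # set by an IFCN fact-checker
    label: str | None = None       # warning label shown to users
    distribution: float = 1.0      # 1.0 = normal reach in the feed

def flag_for_review(keyword_hits: int, community_tips: int) -> bool:
    """Surface content for review via simple signals (keywords, community tips)."""
    return keyword_hits > 0 or community_tips >= 3

def apply_fact_check(post: Post, rating: str) -> None:
    """Label and demote rated content instead of removing it."""
    post.rating = rating
    if rating in RATINGS_NEEDING_LABEL:
        post.label = f"Fact-checked: {rating}"
        post.distribution *= 0.2   # reduce, but do not zero out, its reach

post = Post("Video shows fighter jets over Kyiv")
if flag_for_review(keyword_hits=2, community_tips=5):
    apply_fact_check(post, rating="Altered")
print(post.label, post.distribution)   # Fact-checked: Altered 0.2
```

Note how the key design choice from the text is encoded: the content object is never deleted, only labeled and demoted.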

Media & Misinformation

An example discussed in class was a fake video in which combat appeared to be occurring in Ukraine when it in fact was not: a video game clip had been stitched together with actual footage from the Russian invasion of Ukraine (Subramaniam, 2022). In other instances, images are mirrored or even rebroadcast to depict war in other regions of the world. Videos of the conflict in Palestine, for example, were presented as part of the war in Ukraine, without ever raising awareness of the conflict in Palestine itself.

Algorithmic Accountability in the Public Sector

A global study was conducted by the Ada Lovelace Institute (Ada), the AI Now Institute (AI Now), and the Open Government Partnership (OGP) to analyze the first wave of algorithmic accountability policy in the public sector. According to the study, an algorithm is a series of steps through which particular inputs are turned into outputs (Ada Lovelace Institute, AI Now Institute and Open Government Partnership, 2021). An algorithmic system consists of one or more algorithms, often used to produce outputs that inform or replace decisions that would otherwise be made by humans. The report calls for algorithmic accountability policies to ensure that those who build, procure, and use these algorithms in the public sector are held responsible. It is written to be comprehensible to policymakers, researchers, and civil society members. It is important to note that the public sector comprises government agencies and institutions, and that the contexts and systems into which these policies are designed and implemented shape how they work.
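
To make the report's definition concrete, here is a minimal, hypothetical example of a public-sector algorithmic system: a rule-based score that turns an applicant's inputs into an eligibility decision. The inputs, weights, and threshold are invented for illustration only.

```python
# Hypothetical public-sector algorithm: a series of steps that turns
# particular inputs (an applicant's data) into an output (a decision).
# All inputs, weights, and thresholds are invented for illustration.

def eligibility_score(income: float, household_size: int, has_dependents: bool) -> float:
    """Step-by-step scoring: each rule adjusts the score."""
    score = 0.0
    if income < 25_000:
        score += 0.5
    score += 0.1 * min(household_size, 5)
    if has_dependents:
        score += 0.2
    return score

def decide(income: float, household_size: int, has_dependents: bool) -> str:
    """The algorithmic system: its output replaces a human caseworker's call."""
    score = eligibility_score(income, household_size, has_dependents)
    return "eligible" if score >= 0.6 else "ineligible"

print(decide(income=22_000, household_size=3, has_dependents=True))  # eligible
```

Even a system this simple raises the report's accountability questions: who chose the weights and threshold, who can inspect them, and how can an applicant appeal the output?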

The report discusses eight policy mechanisms and aims that have previously been used in attempts to achieve algorithmic accountability in the public sector:

  1. Principles and Guidelines
  2. Prohibitions and Moratoria
  3. Public Transparency
  4. Impact Assessments
  5. Audits and Regulatory Inspection
  6. External/Independent Oversight Bodies
  7. Rights to Hearings and Appeal
  8. Procurement Conditions

Six Lessons Learned from the Implementation of Effective Algorithmic Accountability Policies

  1. Clear institutional incentives and binding legal frameworks can support consistent and effective implementation of accountability mechanisms, supported by reputational pressure from media coverage and civil society activism.
  2. Algorithmic accountability policies need to clearly define the objects of governance as well as establish shared terminologies across government departments.
  3. Setting the appropriate scope of policy application supports their adoption. Existing approaches for determining scope, such as risk-based tiering, will need to evolve to prevent under- and over-inclusive application.
  4. Policy mechanisms that focus on transparency must be detailed and audience appropriate to underpin accountability.
  5. Public participation supports policies that meet the needs of affected communities. Policies should prioritize public participation as a core policy goal, supported by appropriate resources and formal public engagement strategies.
  6. Policies benefit from institutional coordination across sectors and levels of governance to create consistency in application and leverage diverse expertise.

Insights

Go beyond your summary: synthesize what you learned on this topic, drawing on personal insights and those of your peers, the content, and what you read in class and in the media.

Overall, the main takeaway from the topic is that accountability and transparency are becoming increasingly important. With the rise of social media as a news platform, the onus is on the public to decipher what is or is not "fake news". While this may be difficult enough to do on a daily basis, it becomes even harder for individuals in the United States whose first language is not English. With the "death of the expert," no one trusts the news, and even facts appear subjective.

Talk show host John Oliver critiques the state of digital misinformation and pointedly observes:

“There needs to be public pressure on platforms to do more about misinformation whether they are in English or not because until they do, if you are a member of one of these diaspora communities you may have to prepare yourself with more difficult conversations with your least favourite uncles" (Guardian News and Media, 2021).

Classmates have shared that the seemingly passive acts of mindless scrolling and liking on social media actively feed the algorithms embedded within the technology. It is also important to note that certain media platforms continue to perpetuate divisive content because they profit from doing so, yet they are not held accountable for their actions. Algorithms on social media platforms amplify divisive content, and the key point is this: "This is not about free speech and what individuals post on these platforms. It is about what the platforms choose to do with that content, which voices they decide to amplify, which groups are allowed to thrive and even grow at the hand of the platforms’ own algorithmic help" (Harvard Business Review, 2021).
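
To illustrate the amplification mechanism in the simplest possible terms, here is a hedged sketch of engagement-based feed ranking. The weights and example posts are invented; real platform ranking systems are vastly more complex, but the core dynamic is the same: every like and share, however mindless, raises a post's rank.

```python
# Minimal sketch of engagement-based feed ranking. The weights and the
# example posts are invented for illustration.

def engagement_score(likes: int, shares: int, comments: int) -> float:
    # Shares and comments are weighted more heavily than likes because
    # they spread content to new audiences.
    return 1.0 * likes + 5.0 * shares + 3.0 * comments

posts = [
    {"text": "Local library extends weekend hours", "likes": 120, "shares": 4, "comments": 10},
    {"text": "Outrage bait: THEY are lying to you!", "likes": 90, "shares": 60, "comments": 200},
]

# The feed shows the highest-scoring post first: divisive content that
# provokes shares and comments outranks calmer, more informative posts.
feed = sorted(
    posts,
    key=lambda p: engagement_score(p["likes"], p["shares"], p["comments"]),
    reverse=True,
)
for post in feed:
    print(post["text"])
```

Nothing in this sketch checks whether a post is true; the ranking rewards reaction, which is precisely the accountability gap the Harvard Business Review piece describes.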

Social media evolves so quickly that it's difficult to anticipate what the next new app or trend will be. TikTok, Instagram, and YouTube are a few examples of "fast media," where information is passed from one source to another, more often than not with insufficient citation. It is therefore critical to read data "laterally," observing what related sources are reporting. It's interesting to consider how social work blends with professions like law and journalism to create a multi-faceted way to digest and disseminate data. While journalism revolves around the newsroom and widely sharing "factual information," social workers abide by an ethical code of confidentiality, transparency, and informed consent. At this intersection of social work and technology, advocates can develop language around "informed critique" and empower all communities by meeting their needs as they are. Oftentimes these are communities without internet access or a personal device; however, that should not diminish their contribution to discussions on accountability and misinformation in technology.

What does advocacy look like here?

Considering your topic, respond to the following: 1. How can advocates use technology and data-based practice to further their advocacy efforts? 2. How are current and emergent technologies presenting challenges in society, and how can advocates work in pursuit of better, fairer, and more just technology?

Social workers in technology are critical to examining data dissemination, misinformation, and informed critique because they bring in voices and experiences from marginalized and overlooked communities. With their unique blend of roles as community organizer, case manager, and policy advocate, among other hats, they can serve as liaisons between the public and Big Tech. Advocates can use social media and technology to their advantage by holding space for communities that have been hurt by misinformation or misrepresentation, and by bridging the gap through public hearings, data analysis workshops, or a technology wellness toolkit with guidelines on how to recover after seeing a triggering news headline. Social workers bring a myriad of skills to any workspace, but even more so in technology and data-based practice, because they center the user as a person, as a human being, whereas Big Tech sees users as numbers, digits, and followers. It's important for social workers to humanize the technology experience and bear witness to both the harms and the benefits technology offers.


Current and Emergent Technologies

A deepfake is an artificial intelligence practice in which an individual in an existing picture or video is scanned, matched, and replaced with another individual of similar likeness. This practice threatens the ethical framework of technology, under which users and developers should agree to disseminate accurate information. As technology becomes increasingly advanced, it has become more and more difficult to pinpoint misinformation and hold the responsible parties accountable, presenting a challenge to credibility and accountability in society. Social work advocates in pursuit of fair tech that works for the people are encouraged to consider where the technology and information gap lies between current and emerging technologies, and where marginalized populations stand in the midst of it. Questions to consider include: "Who is the target population, and why? How is this technology informative, empowering, or transparent? Does the technology overlook or isolate a particular population, and how?"
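
To demystify the scan-match-replace idea, here is a minimal sketch of a crude face swap using OpenCV. This is classical computer vision, not the neural networks that real deepfakes use, and the file names are placeholders; it only illustrates the basic mechanics.

```python
# Crude face-swap sketch with OpenCV, illustrating the scan/match/replace
# idea behind deepfakes. Real deepfakes use neural networks; the file
# names here are placeholders.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Scan: return the first detected face rectangle (x, y, width, height)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    return faces[0]

source = cv2.imread("source.jpg")   # face to insert
target = cv2.imread("target.jpg")   # image being altered
sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Match: crop the source face and resize it to the target face's dimensions.
face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Replace: blend the source face into the target so the seams disappear.
mask = 255 * np.ones(face.shape, face.dtype)
center = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(face, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", swapped)
```

Even this toy version hints at why detection is hard: the blending step erases the obvious seams a viewer might otherwise notice, which is exactly the credibility challenge described above.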

Annotated References

Cite what we covered in class and include four additional resources you find on the topic. Cite resources that are related to the topic; they can agree with the topic, extend it, disagree with it, or present a conflict with it.

Ada Lovelace Institute, AI Now Institute and Open Government Partnership. (2021). Algorithmic Accountability for the Public Sector. Available at: https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/

Al Jazeera. (2022, February 25). Russia limiting access to Facebook over fact-checking row. Retrieved April 26, 2022, from https://www.aljazeera.com/news/2022/2/25/russia-limiting-access-to-facebook-over-fact-checking-row

Guardian News and Media. (2021, October 11). John Oliver on digital misinformation: 'there needs to be more public pressure on platforms'. The Guardian. Retrieved April 26, 2022, from https://www.theguardian.com/tv-and-radio/2021/oct/11/john-oliver-last-week-tonight-digital-misinformation

How Facebook’s third-party fact-checking program works. (2021, June 1). Facebook Journalism Project. Retrieved April 26, 2022, from https://www.facebook.com/journalismproject/programs/third-party-fact-checking/how-it-works

How to hold social media accountable for undermining democracy. Harvard Business Review. (2021, January 12). Retrieved April 26, 2022, from https://hbr.org/2021/01/how-to-hold-social-media-accountable-for-undermining-democracy

Myers, S. L., & Kang, C. (2022, April 21). Barack Obama Takes On a New Role: Fighting Disinformation. The New York Times. https://www.nytimes.com/2022/04/20/technology/barack-obama-disinformation.html

Subramaniam, T. (2022, February 26). Fact-checking fake videos of Ukraine conflict. CNN. Retrieved April 26, 2022, from https://www.cnn.com/2022/02/26/politics/fake-ukraine-videos-fact-check/index.html

The Associated Press. (2022, April 23). EU law targets Big Tech over hate speech, disinformation. NPR. https://www.npr.org/2022/04/23/1094485542/eu-law-big-tech-hate-speech-disinformation

Walker, M., & Matsa, K. E. (2021, September 20). News consumption across social media in 2021. Pew Research Center's Journalism Project. Retrieved April 26, 2022, from https://www.pewresearch.org/journalism/2021/09/20/news-consumption-across-social-media-in-2021/