Smartphones and the numerous technologies that continue to make them smarter have had a profound effect on how we communicate, organize, and mobilize. If you have ever led, attended, or even considered joining a demonstration, you may have accessed the information you needed on your smartphone or smartwatch, but you may also have been advised to leave those devices at home to avoid being identified.
Smartphones have broadened access to knowledge and educational resources through online learning technologies, especially in situations where in-person learning is impractical or difficult. Mobile phones and the internet have become integral to the enjoyment of certain rights and liberties, such as the rights to free speech, free expression, and protest.
However, technologies such as facial recognition and geolocation, which allow you to operate your mobile phone and some of its apps, are also employed outside your devices by systems such as traffic and security cameras, and by public and commercial entities seeking data. This became clear in Hong Kong when it was revealed that police were identifying protesters using data collected from surveillance cameras and social media.
Given the growing use and capabilities of artificial intelligence (AI), there is a fresh need for research into the effects of these technologies on civic space and civil rights, as well as on society at large.
Dr. Mona Sloane is a senior research scientist at the Center for Responsible AI at New York University. She is a sociologist who studies the interaction of design, technology, and society in the context of AI design and policy. Sloane’s work shows that most AI systems are designed to facilitate and accelerate decision-making in everyday life, but that the data underlying their construction is often flawed.
Sloane told Global Citizen that “entities that create and use AI frequently have a vested interest in not sharing the assumptions that underpin a model, as well as the data it was built on and the code that reflects it.”
“AI systems often require huge quantities of data to function adequately. The procedures of data extraction can compromise privacy. Data is always historical and will always reflect injustices and unfairness from the past. Using it as a foundation for determining what should occur in the future further entrenches existing injustices.”
Researchers like Sloane are focusing on the ways these powerful technologies operate in the real world and make it nearly impossible to overcome systemic restrictions.
Amnesty International began its global Ban the Scan campaign in January 2021, with the goal of eliminating “the use of face recognition technologies, a kind of mass monitoring that exacerbates racial policing and undermines the freedom to demonstrate.”
The campaign emphasized that algorithmic technologies, such as facial recognition scanners, “are a type of mass monitoring that violates the right to privacy and endangers the rights to peaceful assembly and speech.”
In 2019, the world watched as demonstrators in Hong Kong concealed their faces and toppled lampposts fitted with facial recognition cameras to evade detection, or sought ways to use artificial intelligence to their own advantage.
AI affects us not simply through the data it can collect, but also by shaping our perceptions of what is true.
Algorithms may be used to infer a person’s preferences, including their political views, which can then shape the kind of political messaging that person encounters on social media. According to the New York Times, one noteworthy misuse of data collection in this area involved the consulting firm Cambridge Analytica, which harvested private data from the Facebook accounts of more than 50 million users while working for former US President Donald Trump’s 2016 campaign.
According to Jamie Susskind, author of Future Politics: Living Together in a World Transformed by Technology, not only does it become easier to control individuals the more you know about them, but when people recognize they are being observed, they tend to alter their behavior.
By exposing individuals to material that aligns with their existing political beliefs, algorithms can leave people with differing interpretations of reality.