Should we be worried about how technology is changing the human condition?


Synopsis: Fears about algorithms designed for addiction and about advances in AI are grounded in recent revelations about corporate greed and government surveillance.

Why is it absurd to expect social media corporations to self-regulate?

Inaction of social media corporations: Social media corporations are well aware of the moral uncertainty surrounding the consequences of their products and of the agnosticism built into the design of their algorithms (an agnostic approach is one that is interoperable across systems, with no preference for any specific technology, model, methodology or data).

Take, for instance, the effect of Instagram on the mental health of adolescent girls, or the role WhatsApp and Facebook have played in promoting ethnic violence in places as diverse as Myanmar, parts of Africa and India.

The corporation that runs all three apps was well aware of these consequences, and yet it did little to stop them.

Large-scale use of social media: The apps are so deeply intertwined with how we live and work that a competitor is likely to fill the space vacated by any one company.

Finally, social media’s entire architecture is built on maximising screen time and the data collected from it. The algorithm’s job is to find what will keep people hooked the most, and for the longest duration. Expecting social media giants to regulate the very thing their profits depend on is absurd.

If self-regulation is out, is government regulation the answer?

Unfortunately, the actions of even democratically-elected governments often inspire little confidence.

Take just two recent examples: the Pegasus snooping scandal and the Arsenal Consulting findings. From both, it seems clear that for many governments, the use of technology to breach individual rights is an intrinsic part of how they function.

Governments can now deploy “zero-click” spyware that can easily bypass security mechanisms. Such capabilities have been deployed against journalists, political friends and opponents, defence personnel and businessmen, all citizens with an inalienable right to privacy and dignity.

Unfortunately, the Pegasus scandal is only the tip of the iceberg. Using voice-cloning technology and advanced robotics, it will soon be possible to create a simulacrum (an image or representation of someone or something) of deceased loved ones.

In future, as technology develops, it will be possible to use a doctored video to jail activists or to “establish” the chanting of “anti-national” or “seditious” slogans.

In this context, the dangers flagged by Arsenal Consulting, whose findings suggest that evidence was likely planted on the computers of academics, lawyers and activists in the Bhima Koregaon case, become all the more frightening.

Given that governments have at least as much interest in maintaining power as corporations do in making profits, they can hardly be expected to be impartial arbiters of the limits of technology.

Source: This post is based on the article “Should we be worried about how technology is changing the human condition?” published in Indian Express on 13th October 2021.
