
Taking a transparent and consistent approach to algorithms is a conscious choice that every social media platform must make, and is “the right way to go” to establish trust with users, homegrown microblogging platform Koo’s co-founder and CEO Aprameya Radhakrishna has said.


While self-regulation should be the first step, separate instructions are needed in cases where platforms are not living up to the expectations of users or are found to be violating norms, Radhakrishna said.

Allegations that Facebook’s systems and algorithms fuel hate speech and fake news have led to widespread concern over the role of algorithms and tools in amplifying harmful content and misinformation.

Following recent revelations by whistleblower Frances Haugen, Facebook drew flak for allegedly putting profit before public good and for not doing enough to shed the ‘growth at all costs’ culture that propelled its rise to 2.91 billion monthly active users globally, including over 400 million in India.

Minister of State for IT Rajeev Chandrasekhar has categorically stated that no ‘algorithm’ of any social media platform should violate the fundamental rights of Indians, and that laws and jurisprudence would need to evolve continuously to keep pace with the changing nature of the internet.

The IT ministry is also planning a massive outreach next year in the form of a dialogue with the public, consumer forums, academia, industry and others on the fast-evolving online space and what more needs to be done to ensure that the internet remains open, safe and trusted.

Koo’s Radhakrishna said making algorithms transparent is a conscious choice for companies, and an easy one for all platforms, irrespective of whether they are into microblogging or other forms of social media.

“If you want to be opaque about it, then nobody will understand why I’m seeing a particular type of content and hence accusations will be made…but as long as it’s transparent and consistent, it is the right way to go and more trust is built with the user as well as individual governments of every country,” Radhakrishna elaborated.

India, the world’s second-largest telecom market and the biggest consumer of data, is a key market for internet companies such as Facebook, WhatsApp and Twitter, given its large population base, burgeoning internet and smartphone adoption, and explosive user growth.

Amid rising instances of user harm and dangerous behaviour on digital platforms, India enforced new IT intermediary rules earlier this year, aiming to bring greater accountability for big tech companies, including Twitter and Facebook.

The new rules require social media platforms to remove any content flagged by authorities within 36 hours and to set up a robust complaint redressal mechanism with an officer based in the country. Social media companies are required to take down posts depicting nudity or morphed photos within 24 hours of receiving a complaint.

Significant social media companies – those with over 50 lakh users – also have to publish a monthly compliance report disclosing details of complaints received and action taken, as well as details of content removed proactively.

Last month, the government released Frequently Asked Questions (FAQs) on the intermediary guidelines, seeking to address queries that internet and social media users may have about the scope of the new rules, the major changes they bring over past provisions, how the rules enhance the safety of women and children, and the due diligence to be done by an intermediary, among other things.

The much-awaited standard operating procedure (SOP) around the IT rules and intermediary norms, which will detail the appropriate agencies with the authority to issue takedown notices to platforms, is in the works.

In May this year, Koo was among the first social media platforms to declare that it had met the compliance requirements of the new guidelines for digital platforms.
