Enhancing Transparency of Social Media Platforms: Multi-Stakeholder Perspectives

Information

Time: 2022/10/25, 2:00-4:00 PM
Venue: IEAT International Conference Center, 8F Meeting Room 2

Moderator:
• Prof. Jason Ho, Digital Convergence Development Association
Panelists:
• Allen Lee, Manager of Corporate Affairs, LINE Taiwan
• Kuan-Ju Chou, Digital Rights Specialist, Taiwan Association for Human Rights
• Ken-Ying Tseng, Attorney-at-Law, Lee and Li
• Chih-Liang Yeh, Assistant Professor, Department of Information Communication, Yuan Ze University

Session Highlights

Taiwan School of Internet Governance hosted a panel discussion titled ‘Enhancing Transparency of Social Media Platforms: Multi-Stakeholder Perspectives’ on 25 October. The panel was moderated by Prof. Jason Ho of the Digital Convergence Development Association. Panelists included Allen Lee (Corporate Affairs Manager, LINE Taiwan), Kuan-Ju Chou (Taiwan Association for Human Rights), Ken-Ying Tseng (Partner, Lee and Li), and Chih-Liang Yeh (Assistant Professor, Department of Information Communication, Yuan Ze University). Panelists discussed what information social media platform providers should disclose, to whom, and how. Speaking from different perspectives, they also touched on key issues and concerns in platform regulation, including the balance between user privacy and free speech, and the mitigation of polarization on social media platforms.

Prof. Ho referenced the US Platform Accountability and Transparency Act (PATA) in his opening remarks, noting that the audience for ‘information disclosure’ as defined in PATA is not limited to the government; instead, PATA requires certain information to be made available only to “qualified researchers” at academic or certain nonprofit institutions. As for expectations of transparency regulation, Ho suggested that what people care about most is what information the platforms hold about their users, how the algorithms work, and whether information made available to the government raises human rights concerns. He looked forward to a fruitful discussion with the panel.

Lee shared LINE’s work on improving transparency. All messages transmitted via LINE are end-to-end encrypted and are not accessible even to LINE employees; this is also why LINE cannot add warnings to messages that are potentially spam or disinformation. Lee noted that lawmakers should pay attention to, and distinguish between, the distinct natures of different service types. Citing survey data, Lee pointed out that people are more opposed to the government interfering in private conversations on messaging services than on public social media platforms. Levels of acceptance regarding the government’s access to or restrictions on data, and whether to apply warning labels, also vary. Lee added that proportionality must be considered as well.

In terms of content censorship, Chou argued that platform providers should (1) disclose how much content was censored in a given timeframe and why; (2) inform users why their content was censored; and (3) have effective appeal mechanisms in place. The subject of a complaint could be a human reviewer’s decision or an AI review standard. Cultural sensitivity is also important: for example, Mexican content should not be moderated by Spanish speakers who are not Mexican. According to the second iteration of the Santa Clara Principles, a set of content moderation principles voluntarily endorsed and practiced by Internet companies, ‘governments and other state actors should themselves report their involvement in content moderation decisions, including data on demands or requests for content to be actioned or an account suspended, broken down by the legal basis for the request.’ The Taiwanese government has not fulfilled such requirements.

Chou introduced Australia’s Digital Industry Group Inc. (DIGI), the nonprofit organization that developed the Australian Code of Practice on Disinformation and Misinformation, a voluntary code of practice on disinformation. Beyond the minimum commitments, signatories of the code can opt into a range of measures and objectives according to their needs. All signatories are obliged to provide DIGI with an annual report setting out their progress towards achieving the outcomes contained in the code, which is published on the DIGI website. Chou suggested Taiwan could learn from DIGI and its voluntary-code model.

In fact, Taiwan used to have a similar industry-developed code: the Taipei Computer Association (TCA), together with a number of companies including a Taiwanese platform, proposed a set of practice guidelines on disinformation. Unfortunately, the guidelines were never followed up with regular reports detailing the participating organizations’ practices and progress, so their effect could not be assessed. No local platform in Taiwan has ever published a transparency report. Chou noted that TAHR does not advocate mandating publication by law; it is better for the industry to do so voluntarily. From a privacy perspective, the risk of platforms exploiting users can only be reduced when users know exactly how their data is processed.

Tseng introduced five aspects of transparency. The first and most fundamental is the terms of service on the websites. Second, platform providers should disclose information about their content moderation procedures. The third aspect is information regarding government requests to access or remove content. The fourth is business information, including basic company information and information about advertisements, the latter far more controversial than the former: it is sometimes difficult to tell whether certain content is an advertisement, and adding warning labels is not viable in practice. The fifth is the algorithms the platform uses to target and deliver ads. The purpose of information disclosure is to protect users’ personal data and privacy through public oversight. Different stakeholders hold different views on what should be disclosed and how it should be supervised; even consumers, as a single stakeholder group, hold various opinions. As for the government, what it expects most is to use the disclosed information to catch criminals.

Tseng also discussed PATA, which requires platform providers to share information with ‘qualified researchers.’ The law, once passed, will detail the specific types of information to be made available. Under PATA, companies are allowed to review reports before publication to ensure no confidential information is included. PATA also establishes an independent oversight mechanism and sets a timeframe for action at each stage, making it a valuable reference for procedural design.

The online platform market evolves at great speed. Tseng noted that when it comes to digital platforms, market forces and voluntary industry efforts are faster and more effective than government regulation. However, the government should also abide by transparency principles to prevent regulatory authorities from abusing their power. In Taiwan, it seems that only the “Communication Security and Surveillance Act” has similar requirements.

Professor Yeh outlined the types of digital platforms, including internet service providers, cloud services, search engines, marketplaces, online transaction services, and social networks. The regulatory mindset for online platforms has evolved from holding platform providers and intermediaries jointly liable, to recognizing their legal obligations to cooperate while allowing exemptions, to the current stage of imposing direct legal duties on online platforms and intermediaries. The industry is also moving towards self-regulation; the most referenced frameworks are the Manila Principles and the Santa Clara Principles. Yeh argued that large platforms’ self-regulation is not reliable; on the other hand, government regulation reaches only providers within the border, so it is not fully effective either.

Transparency requirements can come from voluntary industry efforts or from regulation, and there should also be oversight. Content on platforms could be moderated using governance mechanisms similar to those in corporations: internal control measures such as independent board directors seem feasible, but oversight by a third party is not as simple. Transparency requirements must not be mere lip service; unilaterally designating specific platforms to provide reports justifying their content moderation practices could falsely inflate users’ trust in those platforms. Prof. Yeh then introduced the six ‘Principles for Enhancing Competition and Tech Platform Accountability’ recently released by the White House, arguing that the cost of implementing some of the principles is too high to be practical.

Prof. Yeh pointed out that there is no single answer to the problem. Whether motivated by commercial or broader public interests, self-disclosure by the platforms must be complemented by third-party verification. In terms of content moderation, platform providers should provide the legal basis or rationale for labeling content illegal or inappropriate, and users should be able to file complaints through an appropriate appeal mechanism.

Prof. Ho concluded that transparency is best practiced in a bottom-up, voluntary model; companies might comply only superficially if mandated by government regulation.