Democracy in the Digital Age: Threats and Solutions

Agenda

14:00-14:05  Event Introduction
14:05-15:45  Panel Discussion

  • Moderator: Wang Ting-Yu, Editor-in-Chief (Plain Law Movement website)
  • Panelists:
    – Chou Yu-Siou, Lawyer (Executive Committee Member and Treasurer, Taiwan Association for Human Rights; Attorney, CHEN & CHOU Law Firm)
    – Liu Jia-Wei, Professor (Department of Public Administration and Policy, National Taipei University) – attending online
    – Lo Chi-chun, Chief Operating Officer (Pearson Data)
    (listed by stroke count of surnames)

15:45-16:00  Q&A

Meeting Minutes

On Tuesday, November 21, 台灣網路講堂 held the seminar "Democracy in the Digital Age: Threats and Solutions", moderated by Wang Ting-Yu, Editor-in-Chief of the Plain Law Movement website. He was joined by lawyer Chou Yu-Siou, Executive Committee Member of the Taiwan Association for Human Rights; Professor Liu Jia-Wei of the Department of Public Administration and Policy at National Taipei University; and Lo Chi-chun, Chief Operating Officer of Pearson Data, for an in-depth discussion of the challenges Taiwan's democracy faces in the digital age and a search for innovative solutions that can keep democratic values alive both now and in the future.

In his opening remarks, moderator Wang Ting-Yu explained what democracy means: democracy is the people governing themselves, and where government power is concerned, a democratic policy is one that meets the public's views and expectations. When democracy meets the digital age, however, new threats emerge; for example, social platform algorithms collect personal data and use it to target users with disinformation.

Lo Chi-chun argued that in a democracy where the people are in charge, it is essential to understand who the people are and what they think. Online surveys and market research show how society is actually operating and, as the democratic system runs its course, help the public form a broad consensus. Taking the threat of online disinformation as an example, one can first gauge which age groups are most exposed by looking at average time spent online: Taiwanese users average four to eight hours a day, and the younger the age group, the longer the usage. Analyzing the social platforms people use most often shows that each platform has its own discussion culture and modes of dissemination, so the same issue tends to spark very different exchanges; LINE, YouTube, and Facebook are currently the platforms Taiwanese users rely on most. Examining people's daily routines together with the public-affairs ecosystem likewise reveals the logic by which public affairs and news circulate. Lo added that when disinformation is used to undermine democracy, these three kinds of analysis show where our weaknesses lie and help us strengthen our own awareness and defenses.
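As a purely illustrative sketch of the kind of exposure analysis Lo describes, the snippet below groups a small, made-up survey table by age group and platform. The column names and figures are assumptions for demonstration only, not Pearson Data's actual data or methodology.

```python
import pandas as pd

# Hypothetical survey responses, one row per respondent (illustration only).
survey = pd.DataFrame({
    "age_group":     ["18-29", "18-29", "30-44", "30-44", "45-59", "60+"],
    "hours_online":  [7.5, 8.0, 6.0, 5.5, 4.5, 3.0],
    "main_platform": ["YouTube", "Facebook", "LINE", "YouTube", "LINE", "LINE"],
})

# 1) Which age groups spend the most time online, i.e. are most exposed?
exposure = survey.groupby("age_group")["hours_online"].mean().sort_values(ascending=False)
print(exposure)

# 2) Which platforms dominate, and therefore shape how topics spread?
platform_share = survey["main_platform"].value_counts(normalize=True)
print(platform_share)
```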

Lo then used an AI image generator to depict democracy. Prompted with the three keywords government power, capital power, and social power, the tool produced an emblem of the three forces clashing; once he added technological power, the previously intertwined pattern separated, which he attributed to technology's emphasis on independent reasoning and independent workflows. He went on to discuss the upkeep of democracy in terms of cost, including the costs of building and maintaining the legal system, political participation, education and communication, preserving social stability through transformation, and foreign policy. From a data analyst's point of view, the eventual shape of human development is not the concern; the process of development, however, can be tracked through data collection, and when digital technology becomes entangled in social change, its coverage rate and substitution rate should also be weighed. Lo said the public should be able to notice changes in the social order on their own: when the force of emerging technologies begins to affect society and even individual lives, people should look for ways to adapt and to participate.

Lawyer Chou Yu-Siou discussed technology's impact on democracy from the angle of fake news. The rise of the internet has amplified the effect of fake news and raised the cost of verification; the Taiwan Association for Human Rights holds that regulating disinformation must also safeguard freedom of speech. He first introduced Judicial Yuan Interpretation No. 613, which holds that the constitutional guarantee of free speech includes the freedom of communication, that is, the freedom to obtain information and express opinions through communication facilities, while also ensuring public discussion in which diverse opinions are heard. In Constitutional Court Judgment 112-Hsien-Pan-8, however, the Justices held that although people enjoy freedom of speech, both the media and ordinary individuals bear a certain duty to verify the truth of the information they provide, so as not to fuel the spread of disinformation. Chou takes a different view. Likening speech to a commodity, he pointed out that if a consumer buys an expired product at a store, the common understanding is that the store is responsible for mismanaging its goods; consumers are not normally expected to check the expiration dates themselves. For the spread of disinformation, the real question is whether the mechanisms of the marketplace of ideas need improvement or government intervention, rather than simply demanding that audiences verify everything on their own.

Taiwanese law currently has no clear definition of "disinformation". Related terms such as "rumors" and "untrue matters" are scattered across statutes including the Social Order Maintenance Act, the Public Officials Election and Recall Act, the Communicable Disease Control Act, and the Disaster Prevention and Protection Act. As long as the definition and scope of disinformation remain unsettled, the legitimacy of enforcement is open to question. The Social Order Maintenance Act, for instance, lets the police open cases on their own initiative; applied to online speech, this may not only fail to stop harm but also invite abuse of the law by the authorities and deepen public distrust of government. Moreover, establishing that a piece of disinformation actually caused a given harm requires a sufficient causal link, so a clause as vague and sweeping as the Act's "spreading rumors sufficient to affect public tranquility" should not keep being used to intervene in speech.

Drawing on economics, Chou asked whether the marketplace of speech should be left to intense competition among many voices, with audiences choosing information for themselves, or placed under unified government regulation. He cited the analysis of Dr. Lin Chun-yuan of Academia Sinica: the information explosion brought by digital technology can flood listeners with so much disinformation that it impairs their ability to reason and even obstructs democratic deliberation, so moderate regulation of speech can in fact help the public receive meaningful information more efficiently.

Finally, he named three practical difficulties in dealing with fake news. The first is communication: in a digital democracy, not everyone has the same technological skills or equipment. The second is polarization, which makes discussion of public issues even harder. The third is homogeneity: Taiwan has plenty of news outlets, yet they all produce much the same news, perhaps because market pressure forces media to drop relatively marginal stories and viewpoints. The available remedies are free market competition and government regulation, but the real task is to narrow the information asymmetry; the Digital Intermediary Services Act previously proposed by the NCC pointed in a good direction, although there is still no domestic consensus on the details of implementation.

Professor Liu Jia-Wei approached the topic from the perspective of media and policy. Taiwan's democracy, she said, does face digital threats: traditional media have declined, the rise of the internet has made information ever harder to verify, and the COVID-19 pandemic has pushed people to spend even more time online, increasing their exposure to large volumes of disinformation. In politics, armies of fake accounts and bots that manipulate the direction of public debate have had a major impact. And since the government itself must use the internet for political communication, to promote public policy and aggregate interests, the uneven quality of online content makes truth hard to tell from falsehood, and platform algorithms make people even less likely to question disinformation.

As to whether today's much-discussed AI could be used to counter the threats brought by digital technology, Professor Liu suggested applying weak artificial intelligence (weak AI). Weak AI can support human decision-making: it only carries out specific tasks based on the information people feed it, and people retain the right to reject the policy choices it recommends. She stressed that weak AI is meant to assist, not replace, human decision-making. As for putting weak AI to work on government administrative efficiency, she suggested starting with opinion gathering, using AI to untangle the complex relationships among different views and having experts confirm the results manually. Its use must nonetheless remain transparent and open to oversight: the methods by which AI collects data should be made public and subjected to strict scrutiny by scholars, experts, and civil-society groups, and government should intervene when necessary to require that vendors' AI services meet social expectations. She also recommended developing AI to analyze disinformation and strengthen information security as a defense against digitally authoritarian states.
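One way to picture the "AI groups the opinions, experts confirm them" workflow Liu describes is a minimal clustering sketch like the one below, written with scikit-learn over invented comments. It is an assumption for illustration only, not a system discussed at the seminar.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical public comments on a policy question (illustration only).
comments = [
    "The platform should label unverified claims",
    "Labelling claims helps readers judge reliability",
    "Government regulation of speech goes too far",
    "I worry regulation will chill legitimate speech",
    "Fact-checking should be independent of government",
    "Independent fact-checkers are more trustworthy",
]

# Turn free-text opinions into vectors, then group similar opinions together.
vectors = TfidfVectorizer().fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# The AI only proposes groupings; a human expert reviews each cluster
# and decides what, if anything, to act on.
for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for text, label in zip(comments, labels):
        if label == cluster:
            print("  -", text)
```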

Looking at AI's potential to uphold democratic values, Professor Liu highlighted two points worth watching: AI's capacity for collaboration and the egalitarian fact that anyone can use and take part in it. On AI's future impact on digital democracy she raised several open questions: should AI development lean toward commercial value or toward the public interest; should government regulate AI, and if so can regulation still protect privacy; and if AI moves toward open-source models, broad participation will help it grow but may also invite abuse for malicious ends. She closed with a reminder that people are most susceptible to online troll armies and disinformation when they are angry or afraid.

Wrapping up, Wang Ting-Yu concluded that when we talk about democracy in the digital age, pairing it with the right technological tools matters greatly; if we value democracy and want to build a shared community, we must not let society drift into ever-deeper polarization, and the remedy begins with more horizontal communication and listening.

**Presentation Downloads <provided with the speakers' consent>**

– Chou Yu-Siou, Lawyer (Executive Committee Member and Treasurer, Taiwan Association for Human Rights; Attorney, CHEN & CHOU Law Firm) – Presentation Download
– Liu Jia-Wei, Professor (Department of Public Administration and Policy, National Taipei University) – Presentation Download
– Lo Chi-chun, Chief Operating Officer (Pearson Data) – Presentation Download
