The Attention Economy Keeps Heating Up: How Should We View Algorithms?

Event Information

Date: August 26, 2021 (Thu.), 14:00-16:00

Moderator: Chen-Chao Tao, Chair, Taiwan Academy of Information Society

Panelists:

  • Cheng-Te Li, Associate Professor, Institute of Data Science, National Cheng Kung University
  • Shao-Man Lee, Assistant Professor, Miin Wu School of Computing, National Cheng Kung University
  • Ming-Syuan Ho, Project Manager, Research Center for Information Technology Innovation, Academia Sinica

Cheng-Te Li, Associate Professor [Slides]

Social media has become inseparable from everyday life. When we open YouTube, recommended videos immediately appear; when we log in to Facebook, nearly every click feeds an AI algorithm that infers our interests and recommends content accordingly. Through algorithms, social media platforms can compute individual preferences and capture user attention with precision.

Attention research has a long history, but the Internet changed its depth and scope. In the past, individuals' preferences for different products existed only in their minds. Traditional sociological research relied on questionnaire surveys: data collection was slow, users defined their preferences inconsistently, making responses hard to quantify and compare, and preferences expressed in different domains were difficult to collect across platforms. Once users flocked to the Internet and social media platforms, however, data collection became far easier, giving rise to algorithms that capture user preferences effectively.

The attention economy is built on users' digital footprints (online shopping, online card payments, and so on). AI algorithms learn rules from massive amounts of data, use digital footprints to capture user preferences, and finally win user attention through precise recommendations. Whether or not a user buys anything, a single click on a page or a swipe past a post leaves a record that becomes part of the digital footprint. On YouTube, for example, recommended videos account for 60% of click traffic, which shows how vital the attention economy is to social media. Platforms can use it for precision marketing to lift purchase rates, while users are continually drawn in by tailor-made marketing and ads and keep consuming, forming a two-way cycle.
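To make this mechanism concrete, the following is a minimal sketch, in Python, of how a click log (a digital footprint) might be turned into recommendations. The user names, items, and simple co-occurrence heuristic are illustrative assumptions, far simpler than any platform's actual system.

```python
from collections import defaultdict
from itertools import combinations

# Each user's digital footprint: items they clicked, bought, or scrolled past.
footprints = {
    "user_a": ["running shoes", "water bottle", "fitness tracker"],
    "user_b": ["running shoes", "fitness tracker", "yoga mat"],
    "user_c": ["water bottle", "yoga mat"],
}

# Learn a simple rule from the data: items clicked together are related.
co_clicks = defaultdict(int)
for items in footprints.values():
    for a, b in combinations(sorted(set(items)), 2):
        co_clicks[(a, b)] += 1

def recommend(user, k=2):
    """Rank unseen items by how often they co-occur with the user's clicks."""
    seen = set(footprints[user])
    scores = defaultdict(int)
    for (a, b), n in co_clicks.items():
        if a in seen and b not in seen:
            scores[b] += n
        elif b in seen and a not in seen:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("user_c"))  # e.g. ['fitness tracker', 'running shoes']
```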

How should we face the attention economy? Li proposed three points. First, users must watch over their most valuable online assets, such as personal data and digital footprints, and be aware that their online behavior leaves records. Second, users must recognize that algorithms produce biases, including echo chambers, bandwagon effects, and the spread of disinformation, so they should gather information from multiple sources rather than rely on algorithms alone. Finally, attention does not equal action: seeing a recommended product or video does not mean one has to buy or click. Before deciding, verify and compare, and weigh whether one actually has the money and time to spend.

The attention economy is a double-edged sword. It brings convenience but also confrontation and hatred (for example, attacks between opposing camps during elections). It also helps users follow international affairs: without extensive searching, users can obtain a wealth of current information, and system-recommended material can make learning far more efficient. Going forward, we should develop self-restraining, bias-mitigating algorithms so that AI built on an ethical framework reduces these harms. Finally, Li noted that the negative effects of algorithms may never be fully eliminated, but popular science education can help the public understand them, and transparent, explainable AI can underpin trust-assessment mechanisms for algorithms. The risks of using AI include the deliberate recommendation of untrue or divisive content, which deserves close attention in the future.

Shao-Man Lee, Assistant Professor [Slides]

Technology has never been neutral. We must ask: who is driving the development of AI systems? In the torrent of technological development, who plays which role, and how should individuals position themselves?

Social networking sites and media offer convenient, free services, but their business model depends on collecting user data at scale, and any significant form of contemporary competition now involves algorithms. A key question follows: does the attention economy, as it operates in the market, conflict with personal data protection?

Lee argued that the attention economy has dismantled the traditional notion of sovereignty. Deliberative democracy once built consensus through public discussion and the exchange of views, but the attention economy has changed the ecology of democratic politics: candidates package themselves as products voters like in order to capture their attention, with political advertising as a prominent example. AI can predict and even intervene in voting behavior, deploying targeted ads to consolidate supporters, and this already affects the transparency and fairness of democratic elections. How can algorithmic ranking be held accountable? Does it affect the right to vote and the freedom of referendums? Are minority voters overlooked by algorithms? These questions demand careful thought, and political advertising must be regulated.

How, then, should political ads be regulated? One approach treats them as online speech. Possible regulatory models include holding the speaker responsible, monitoring by administrative agencies, intervention by online platforms, and shared responsibility among the public. Yet the Internet is transnational and anonymous, so holding individuals accountable is difficult; the sheer volume of online speech makes state monitoring costly, and intervention risks abuse of power; platforms and states view content removal differently, making platform intervention hard; and for the public, more investment in media literacy education is needed. Another approach relies on the platforms' own policies. Many platforms now have channels for reporting inappropriate content, usually reviewed by AI together with human moderators, but whether this practice harms otherwise lawful content is a question to consider next. During last year's election, Google also proposed temporarily suspending political ads. Many current remedies thus depend on social media platforms, so we may need to ask whether solutions centered on information technology are appropriate, and, if we step outside the algorithm, what other values must be taken into account.

Ming-Syuan Ho, Project Manager

Algorithms affect how Facebook posts circulate and how much reach politicians get, and thereby shape the flow of information. Because social media platforms use algorithms so extensively, they have become a major contemporary issue; social media companies and ISPs alike are algorithm users. Where the attention industry makes its money, and how much influence it wields, are questions we must watch. Google filed a similar patent as early as 2003, and over the past decade algorithms have developed far more vigorously.

Algorithms also play an important role in content moderation. Google publishes reports on safety, privacy, and content removal: most removed videos are flagged by automated detection systems, and most deleted comments likewise come from automated flagging and removal. The 2018 Santa Clara Principles ask platforms to disclose removal statistics, the reasons for removal, the types of content removed, user appeal mechanisms, and the outcomes of user remedies. In Taiwan, TAHR found in 2015 that the government had reviewed more than 70,000 pieces of online content within two years, but the number alone cannot reflect the quality of individual decisions or how the system operates as a whole. We should work to raise decision quality; the recently introduced General Data Protection Regulation (GDPR) and the EU's Digital Services Act and Digital Markets Act all contain similar provisions.

Roundtable Discussion

Chen-Chao Tao, Professor

Information diversity was once seen as empowering the public, but today it creates a dilemma of choice for users. People rely on a handful of media outlets or algorithms to form their preferences. Social media should provide the information users need to know, not merely what they want to see; yet platforms now provide information on one hand and censor it on the other. What role should the media actually play? I would like to hear the panelists' views.

Cheng-Te Li, Associate Professor

I believe social media should let users choose what content they want to see, but the paradox is that algorithms already filter content for them. We can frame this as active versus passive information acquisition: active means content users are interested in and subscribe to or retrieve themselves; passive means algorithm-recommended content that users merely permit.

Social media should play the role of providing diverse information, and it is comparatively less subject to the influence of partisan outlets, but diversity should be spelled out in law or specific regulations. As for how to define diversity, experts should be asked to offer different perspectives. Because algorithms learn user preferences from data, they may end up serving only what the majority prefers, thereby limiting diversity. I believe algorithms should be moderately regulated in a way that preserves pluralistic values.

Shao-Man Lee, Assistant Professor

I agree with Prof. Cheng-Te Li's support for diversity, but the question is how we maintain it. I believe traditional media can filter useful information effectively and thus achieve better diversity, so the media's role should be taken seriously. Should the state step in and provide platforms itself? An official platform's algorithmic logic might differ from profit-driven logic; what effect would that have on diversity? Another possible path to diversity is breaking the tech giants' monopoly: if Facebook could be split into smaller units, might the current dominance of its algorithm change? These are all angles worth thinking about.

Ming-Syuan Ho, Project Manager

We should ask what the original ideal of the Internet was. At the beginning, that ideal was the connection of diverse information, entirely free and without restriction. Given the path contemporary social media has taken, we must reconsider whether that original vision still holds. Moreover, people make value judgments from their own diverse backgrounds; algorithms have not freed us from our existing circumstances but rather reinforced them.

Chen-Chao Tao, Professor

Next, a question for Prof. Cheng-Te Li: is ethical AI feasible?

Cheng-Te Li, Associate Professor

The current AI framework is built to process data quickly and efficiently, so it serves short or visual information. To achieve explainable or transparent AI, systems would likely need to explain the reasons behind their recommendations. That is technically feasible, but users' limited time could make it hard to put into practice.
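As a rough illustration of the idea (hypothetical, not Li's proposal or any platform's implementation), an explainable recommender could attach the evidence behind each suggestion so users can see why an item appeared:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    score: float
    reasons: list  # human-readable evidence behind the score

# Assumed candidate items and the (made-up) evidence supporting each one.
candidates = {
    "go generics explained": ["you watched 'intro to go'",
                              "popular with similar viewers"],
    "cooking pasta basics": ["trending in your region"],
}

def explainable_recommend():
    # Crude scoring assumption: more pieces of evidence -> higher score.
    recs = [Recommendation(item, float(len(ev)), ev)
            for item, ev in candidates.items()]
    return sorted(recs, key=lambda r: r.score, reverse=True)

for r in explainable_recommend():
    print(f"{r.item} (score {r.score}): because " + "; ".join(r.reasons))
```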

Chen-Chao Tao, Professor

This question is for Prof. Shao-Man Lee: what is your view on government regulation of speech?

Shao-Man Lee, Assistant Professor

There is much debate over government regulation of speech, but political advertising has already distorted democracy, so I believe it is something the government must step in and regulate.

Chen-Chao Tao, Professor

A question for Mr. Ho: how should the government help civil society groups oversee powerful corporations? Please also tell us about Taiwan's policies or laws governing online speech.

Ming-Syuan Ho, Project Manager

Donations are the most direct way to help civil society groups! There are generally two approaches to overseeing large corporations: first, regulation through law and policy; second, oversight by civil society. The government must invest more resources to help civic groups recruit talent so that they can grow stronger. Taiwan once proposed a draft Digital Communications Act and is now preparing to introduce a new draft; the earliest version contained provisions on online content review, for instance on what content should be taken down and what information should be disclosed.

Q&A

Q1. Audience: Prof. Shao-Man Lee mentioned that speech harmful to democracy must be regulated. That reminds me of Trump's accounts being temporarily suspended during the U.S. election, which caused controversy at the time. How should we judge whether speech meets the standard of being harmful to democracy?

Panelists' Responses

Shao-Man Lee, Assistant Professor

Whether speech should be regulated depends on the context in which it occurred. In Trump's case, I support the decision to ban his speech: when the essential institutional conditions of democracy are being distorted or damaged, regulatory intervention becomes necessary. Note also that when Twitter and Facebook made their suspension decisions, each followed its own review procedure. Speech regulation as a whole is not a unilateral, simple decision; different stakeholders should still participate in it together.

Cheng-Te Li, Associate Professor

If the tech giants were split into multiple smaller communities with different orientations, speech banned on platform A would not necessarily be banned on platform B.

How do we deal with algorithms in the attention economy?

Information

Date: August 26, 2021 (Thu.), 14:00-16:00

Moderator:

  • Chen-Chao Tao, Chair, Taiwan Academy of Information Society

Panelists:

  • Cheng-Te Li, Associate Professor, Institute of Data Science, National Cheng Kung University  
  • Shao-Man Lee, Visiting Assistant Professor, Miin Wu School of Computing, National Cheng Kung University 
  • Ming-Syuan Ho, Project Manager, Research Center for Information Technology Innovation, Academia Sinica

Session details

Famous for coining the term ‘net neutrality’, Tim Wu, an author, lawyer, activist, and professor at Columbia Law School, published his book ‘The Attention Merchants: The Epic Scramble to Get Inside Our Heads’ in 2016. In it, Wu delineated how tech companies became the newest generation of attention merchants, cultivating and harvesting our attention with the help of algorithms and customized advertisements. On this argument, users constantly trade their attention for free Internet services. People have argued that the newest form of the attention economy, in which tech companies and social media platforms cultivate our attention with personalized content and sell it to other businesses, is what leads to today’s divided society and growing hate crimes. How do we address this issue? American technology ethicist Tristan Harris advocates that the economic principle of maximizing net profits should evolve along with the impact technology has on humans and our environment. In other words, the tech giants should rethink what it means to ‘grow the business.’ Should Harris’s idea be realized? Or would it slow technological innovation?

Minutes:

Economics is the study of how scarce resources are allocated, whether housing, food, or money. In an era of endless information at our fingertips, however, what is the scarce resource? Unlike those three examples, which can be empirically quantified and measured, our intangible yet extremely valuable attention is the limiting factor: we are in the age of the attention economy[1].

The term “attention economy” was coined by psychologist, economist, and Nobel laureate Herbert A. Simon. According to Simon, attention is the “bottleneck of human thought,” limiting both what we can perceive in stimulating environments and what we can do. He also noted that “a wealth of information creates a poverty of attention.” In 1997, theoretical physicist Michael Goldhaber warned that the international economy was shifting from a material-based economy to an attention-based economy, pointing to the many online services offered for free.

The session’s moderator, Chen-Chao Tao, opened by echoing Goldhaber’s sentiment. Online services such as search engines, social media platforms, and instant messaging apps have drastically changed the way people receive information. Tao pointed out that the tech companies are no longer mere information and communication technology service providers: some of the tech giants, particularly Google and Facebook, have effectively been playing the role of media while consistently denying it.

Tao then introduced the panelists. The first was Cheng-Te Li, Associate Professor at the Institute of Data Science, National Cheng Kung University. From a data scientist’s point of view, Li shared his perspective on how we should deal with algorithms in the age of the attention economy.

Using YouTube as an example, Li illustrated how fully our daily lives are immersed in the attention economy. The recommended videos on our YouTube homepage are the calculated results of our viewing history on YouTube and our browsing history from other websites. The same applies to Facebook: most of the content on our personal feeds, though ostensibly from those we follow, consists of recommendations pushed to us by Facebook’s algorithm. We are surrounded by recommendations in our daily lives, with hardly any way to escape.

The American Psychological Association defines attention as “a state in which cognitive resources are focused on certain aspects of the environment rather than on others.” Although attention is theoretically unquantifiable, many derive its value from how much time we focus on a particular thing. We face attention’s scarcity every day: while “paying attention” to one thing, we ignore others.

Academics have been conducting research on attention for a long time, Li explained; economists, sociologists, and psychologists are all interested in how our attention works and how it can be, and has been, manipulated. One of the most prominent challenges researchers faced in the past was collecting data. Data on what people choose to pay attention to and how, that is, user preferences, were mainly collected through hardcopy surveys. The problem with surveys is that the data collected were heavily influenced by respondent bias. Another downside is that it is difficult to collect user data on different topics with a single survey.

The paradigm shift in attention studies happened when the Internet transformed our daily lives. Tech companies can easily collect abundant user data by following users’ digital footprints across the Internet and analyzing the data with machine learning. On the one hand, they use the analysis themselves to profile and categorize users. On the other hand, they make money by selling advertising products based on that analysis to businesses that want to better target their potential customers.

Li identified four types of business models, depending on how companies use user data. The first is e-commerce, where companies collect data not only on purchases but also on clicks. The second is content-sharing platforms such as YouTube, Pinterest, and Instagram, which recommend channels and people to follow based on users’ subscriptions and searches. The third is social media platforms, which compose our feeds by algorithmically picking and implicitly recommending content from the people we follow. Finally, online forums such as IMDb and TripAdvisor also rely heavily on user data.
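As a toy illustration of the recommendation logic shared by several of these models, the sketch below scores unseen items by the tastes of the most similar users (user-based collaborative filtering). The interaction matrix, names, and cosine-similarity heuristic are assumptions for demonstration only, far simpler than any production recommender.

```python
import numpy as np

users = ["u1", "u2", "u3"]
items = ["video_a", "video_b", "video_c", "video_d"]

# Implicit-feedback matrix: 1 = watched/clicked/subscribed, 0 = no interaction.
X = np.array([
    [1, 1, 0, 0],   # u1
    [1, 1, 1, 0],   # u2
    [0, 1, 0, 1],   # u3
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, k=1):
    sims = np.array([cosine(X[user_idx], X[j]) for j in range(len(users))])
    sims[user_idx] = 0.0               # ignore self-similarity
    scores = sims @ X                  # weight items by user similarity
    scores[X[user_idx] > 0] = -np.inf  # drop items already seen
    return [items[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))  # ['video_c']: u1 shares tastes with u2, who watched it
```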

Research has shown that microtargeting is extremely lucrative, so it is no wonder companies keep collecting data despite growing awareness of personal data protection and privacy. How can users counter this situation?

Li offered three suggestions. First, users have to understand that their personal data is extremely valuable. In other words, personal data, including non-private and non-sensitive data, has become important personal property. Users should be more aware of this fact and know the rights they have over their data.

Second, we need to learn about and keep reminding ourselves of our own cognitive biases and the unwanted consequences of algorithms, whether the echo chamber, in which we hear only the opinions we want to hear, or the bandwagon effect, in which the crowd hops on and then abandons yet another ‘trend.’ The best way to get information is from multiple and varied channels; fact-checking and better media literacy are also critical in guarding ourselves against the excesses of the attention economy.

Last but not least, Li noted that attention does not have to lead to action. Companies and merchants may easily catch our attention by showing us things we are interested in, but that does not mean we have to act on every trigger. Li suggested that users can always do their own research, compare, and evaluate before purchasing. Thinking hard about whether you really need a particular product is also a good way to prevent impulse purchases provoked by targeted ads.

In the end, Li reiterated that the algorithm is a double-edged sword. In the attention economy, we are essentially trading our privacy for convenience. As suggested earlier, as long as users stay conscious of the negative aspects of the algorithms behind most online services and can filter out the pseudo-recommendations, they can still appreciate and fully enjoy the benefits of the Internet, connecting with people and accessing information from faraway places that would otherwise be unreachable.

The second panelist, Shao-Man Lee, also teaches at NCKU. Coming from a legal background, Lee shed light on the attention economy from a very different angle than the previous panelist. Lee’s argument rested on two premises: technology is never neutral, and no discussion of individual behavior can be carried out without taking into account the broader historical, social, and economic context.

One of the most important points Lee raised during her presentation was that we should no longer ignore the harm the attention economy inflicts on democracy and free speech.

When Michael Goldhaber warned that the economy was changing from a material base to an attention base, he also rejected ‘information economy’ as the label, insisting on ‘attention economy.’ After all, attention, not information, is the scarce resource.

Swiss economist Josef Falkinger argues that the scarcity of attention is a function of an information-rich economy; in other words, attentional scarcity arises when too much information competes for our limited attention. Another unwanted consequence of attentional scarcity is ‘reverse censorship,’ a technique of speech control in which a regime distorts or drowns out disfavored speech through the creation and dissemination of fake news, payments to fake commentators, and the deployment of propaganda robots.

We also have to learn to distinguish between consumer sovereignty and political sovereignty. Lee argued that consumer sovereignty is a myth in the attention economy: consumers think they are the ones who ‘like’ and ‘decide’ to buy things, while a considerable part of their preferences and purchases is actually shaped by the recommendations they receive from the web.

What is worse, in the attention economy our political choices are increasingly influenced by this ostensible consumer sovereignty. Politicians have begun to imitate what merchants do to attract consumer attention: they collect user data to understand voters’ preferences and tailor their talking points and campaigns to the target audience. A healthy political environment is one in which all citizens make informed choices with their votes and reach compromises through lively, open public discussion. That has changed as political sovereignty is jeopardized by twisted consumer sovereignty.

Some solutions have been proposed and implemented. Twitter, for example, stopped accepting political ads in 2019. Google’s move was not as decisive, but the company did stop giving advertisers the ability to target election ads using data such as public voter records and general political affiliations. Some measures were taken only around election time: Facebook suspended all ads concerning social issues, elections, or politics during the American presidential election, and Google barred all election ads during Taiwan’s presidential election, from 15 Nov. 2019 to 17 Jan. 2020.

Although it is encouraging to see the tech companies finally taking action, Lee contended that there is still a void when it comes to both an ethical code of conduct and regulation. She was not in favor of letting the tech companies self-regulate, arguing that regulatory developments are also critical to achieving a more open, democratic, and accountable public space.

The last panelist was Ming-Syuan Ho, who recently joined the Research Center for Information Technology Innovation at Academia Sinica after working for six years at the Taiwan Association for Human Rights (TAHR).

During his time at TAHR, Ho managed Taiwan’s Internet Transparency Report project, which surveyed and tracked whether and how the government censors content hosted by Internet service providers (ISPs) and how often the government requests personal information from these providers.

To avoid repeating what the previous speakers had already discussed thoroughly, Ho focused his presentation on how to develop more transparent algorithms.

The Santa Clara Principles are the result of a collective effort by NGOs and digital rights groups to increase the transparency and accountability of content moderation. They propose three principles (see the sketch after this list):

  1. Numbers: companies should publish the number of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines.
  2. Notice: companies should notify each user whose content is taken down or whose account is suspended of the reason for the removal or suspension.
  3. Appeal: companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.
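To see how the three principles might map onto a platform's data model, here is a hypothetical Python sketch; the class and field names are illustrative assumptions, not drawn from any real platform's systems.

```python
from dataclasses import dataclass, field
from collections import Counter
from typing import Optional

@dataclass
class ModerationAction:
    user_id: str
    content_id: str
    action: str         # "removed" or "suspended"
    rule_violated: str  # the specific guideline cited in the user notice
    appealed: bool = False
    appeal_outcome: Optional[str] = None

@dataclass
class TransparencyLog:
    actions: list = field(default_factory=list)

    def notify(self, act: ModerationAction) -> str:
        # Principle 2 (notice): tell the user what was removed and why.
        return (f"To {act.user_id}: your content {act.content_id} was "
                f"{act.action} for violating '{act.rule_violated}'. "
                "You may appeal this decision.")

    def appeal(self, act: ModerationAction, outcome: str) -> None:
        # Principle 3 (appeal): record a timely review and its outcome.
        act.appealed, act.appeal_outcome = True, outcome

    def report_numbers(self) -> dict:
        # Principle 1 (numbers): publish counts per action and per rule.
        return dict(Counter((a.action, a.rule_violated) for a in self.actions))

log = TransparencyLog()
act = ModerationAction("u42", "post_17", "removed", "hate speech policy")
log.actions.append(act)
print(log.notify(act))
log.appeal(act, "reinstated")
print(log.report_numbers())
```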

Ho believed these principles would help build more meaningful transparency around algorithms. He quoted Fung, Graham, and Weil to illustrate the vision of meaningful transparency: ‘for transparency to be meaningful, it has to be targeted—not just increasing information, but communicating in a way that can be used to help hold decision-makers to account’.

He also touched on the transparency of data for research, referencing the news that Facebook had threatened to sue the advocacy group Algorithm Watch unless it stopped using tracking tools on Instagram to monitor politicians’ activities. Matthias Spielkamp, executive director of Algorithm Watch, had no choice but to take down the tracking tool, even though the data it had been collecting was purely for research purposes.

It is indeed ironic that Facebook, so rapacious when collecting user data on its own platforms, suddenly started to care about ‘protecting user data.’ Due to time constraints, Ho could not dive deeper into the legitimacy and accountability of data collection for research purposes. He did, however, make the final point that the discussion of algorithms and how we use data deserves far more attention from Taiwanese society.


[1] Ally Mintzer, ‘Paying Attention: The Attention Economy,’ Berkeley Economic Review. https://econreview.berkeley.edu/paying-attention-the-attention-economy/