Using Facial Recognition Technology in Warfare

Information

Date: June 21, 2022 14:00-16:00

Moderator:

  • Jen-Ran Chen, Chairman of Digital Transformation Association

Panelists:

  • Hsin-Hsuan Lin, Professor, Department of Law, Chinese Culture University
  • Kuan-Ju Chou, Digital Rights Specialist, Taiwan Association for Human Rights
  • YJ Hsu, Professor, Department of Computer Science & Information Engineering, National Taiwan University
  • JH Hua, Chairman of CyberLink Corp.

Session details

The Ukrainian government is using facial recognition software to help identify the bodies of Russian soldiers killed in combat and to track down their families to inform them of the deaths.  Clearview AI's free offer of its facial recognition technology seems a righteous move; however, the company, which claims to hold the largest facial recognition database in the world, has faced a string of legal challenges.  In this panel, experts in the field are invited to discuss questions such as: can we ignore the controversy when facial recognition technology is applied in wartime?  What are the ethical issues in its use?  Is it possible to use facial recognition technology without privacy and other risks?

 

Session Highlights

The moderator, Mr. Jen-Ran Chen, opened the session by noting that digital technology has developed to the point that it is closely integrated with our daily lives, even changing or transforming people's lifestyles.  Among these developments, the use of biometric data in business, security, and access control is becoming increasingly diversified.

Mr. JH Hua then described facial recognition technology as a double-edged sword: highly accurate, but carrying a high risk of violating privacy.  When the government uses facial recognition technology, he suggested, it should be mandatory to obtain people's consent in advance.  Mr. Hua believes facial recognition does help increase efficiency in applications such as customs and immigration control at airports, identification of lost elders, and insurance sales.  However, he reminded the audience that the controversy around Clearview AI stems not from the use of the technology itself, but from combining the results with other personal information found on social media.  That is also the main reason the company has been sued in many countries.

Professor Hsin-Hsuan Lin discussed the issue from the perspective of international law.  She began by mentioning the humanitarian and human rights crises caused by applying facial recognition technology and AI weapons in armed conflict, violent extremism, and counter-terrorism.  UN Security Council Resolution 2396 (2017) and the Madrid Guiding Principles were mentioned as legal instruments addressing these crises; however, the former is not detailed enough, and the latter proposes only a bottom line.  Although international regulations develop slowly, they provide a normative basis.  Professor Lin suggested that private companies should suspend their relationships with countries that may violate human rights.  Further, when companies have doubts about a government's request to hand over biometric data, they should be able to seek judicial remedies.  She believes that establishing local law to regulate AI applications is urgent and suggested referencing Illinois's Biometric Information Privacy Act (BIPA).

Ms. Kuan-Ju Chou put forward her views as a human rights activist.  By reviewing the open records of government procurements between 2006 and 2021, she found at least 107 cases related to acquiring facial recognition solutions in Taiwan.  The buyers include libraries, schools, police departments, and others.  She also cited several facial recognition projects initiated by the public sector but cancelled due to high controversy.  In one example, a university applied eye-tracking and facial expression technologies in class to catch cheating students.  The Taiwan Railways Administration once installed surveillance cameras with facial recognition in train stations, attempting to identify suspects and people in need.

She further argued that these AI surveillance systems are prone to abuse or misuse, which is why human rights groups advocate banning facial recognition technology in the public sector or in public spaces.  In addition, there are currently no laws in the country to resolve potential disputes.  The application of the technology in schools is even worse, as students may get used to living in an environment without privacy.  Finally, Ms. Chou suggested that biometric data be used only under the premise of protecting the data owners.  It may take a while for society to come to trust the technology.

Following Ms. Chou's comments, Professor YJ Hsu explained that it is not easy to regulate the use of facial recognition technology, because different stakeholders hold different views and values.  Most importantly, no one's rights should be determined by an automated decision-making system.  Data subjects should also be clearly informed about how their personal data is collected, processed, stored, and used, and, beyond that, they should be aware of the impact of that usage.

 

Professor Hsu noted that the UK had imposed a £7.5 million fine on Clearview AI just a few days earlier.  There are still concerns that even a high penalty cannot stop Clearview AI from continuing its practices.  She suggested that all AI providers take privacy into consideration when designing their services or products and keep in mind that AI cannot be 100% accurate.  Another focus should be raising public awareness and education.  It is hoped that multistakeholder discussion will urge lawmakers to produce well-rounded legislation.  Professor Hsu reminded the audience that AI is a crucial technology for national economic development, so forbidding its use may not be the best answer for the good of the country.  She suggested that engineering students learn the concepts of ethics and human rights.

In the end, the panel moderator concluded that AI and other emerging technologies will clearly change people's way of life.  Through law, people may make good use of technology rather than be manipulated by it.

A question was raised from the floor about the right of data usage and the possibility of ignoring data regulations in a state of emergency.

Professor Lin first made a conceptual clarification about the state of emergency.  In international law, the "state of emergency" refers to the launch of lethal attacks in an armed conflict.  She believed the question from the floor was about whether a pandemic or a severe disaster counts as a state of emergency.  She further explained that the term may broadly cover acts of war, but in international law two different systems apply: when two countries have officially declared war and entered a state of conflict, international humanitarian law is applicable; when there is no war, international human rights law is more applicable.

In cases of pandemic or severe disaster, it is suggested to refer to the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) as the basis for determining applicability.  In Taiwan during the pandemic, allowing the use of such data was legally accepted, as collective public health may take higher priority.

Serving Justice or Violating Human Rights?  Starting from the Ukrainian Government's Use of Facial Recognition

Agenda

14:00–14:05     Event introduction

14:05–15:45     Panel discussion

Moderator: Jen-Ran Chen (Chairman, Digital Economy and Industry Development Association)

Panelists:

  • Hsin-Hsuan Lin (Professor, Department of Law, Chinese Culture University)
  • Kuan-Ju Chou (Specialist, Taiwan Association for Human Rights)
  • YJ Hsu (Professor, Department of Computer Science & Information Engineering, National Taiwan University)
  • JH Hua (Chairman, CyberLink Corp.)

15:45–16:00        Q&A

Meeting Notes

Jen-Ran Chen (Chairman, Digital Economy and Industry Development Association)

Chairman Chen opened by noting that no technology has ever developed to integrate so closely with our daily lives as digital technology, which has even changed or transformed existing lifestyles.  Among these developments, the capture of biometric features for business, security, and access-control applications will become increasingly diversified.

JH Hua (Chairman, CyberLink Corp.)

Chairman JH Hua described facial recognition technology as a double-edged sword: technically it can be very accurate, but it also risks violating privacy.  He believes the government must obtain people's consent in advance when applying the technology.  In his view, good facial recognition applications can make existing processes more efficient, including airport immigration control, identifying lost elders, and replacing handwritten signatures in insurance transactions.  He also reminded the audience that the Clearview AI controversy, and in particular the reason many countries have fined the company, is not that it used facial recognition technology, but that after identification it also provided the subjects' social media account information.

Hsin-Hsuan Lin (Professor, Department of Law, Chinese Culture University)

Professor Hsin-Hsuan Lin spoke from the broader framework of international law.  She first mentioned the humanitarian and human rights crises caused by facial recognition technology and unmanned AI weapons in armed conflict, violent extremism, and counter-terrorism.  She referred to UN Security Council Resolution 2396 and the Madrid Guiding Principles as proposed regulatory approaches for these scenarios: the former is not detailed enough, while the latter sets only a bottom line for applications.  Although these international regulations develop at a snail's pace, she argued, they at least provide a normative basis, albeit a thin one, and there are still no international standards for balancing interests.  Professor Lin's observations and suggestions include: companies should end informal cooperation with states that may interfere with human rights, and when companies question the human rights compliance of a state's request for biometric data, a legal remedy mechanism should be available.  She also suggested referring to the Montreux Document, jointly initiated by the Swiss government and the Red Cross, and promoting compliance with rights through interest groups and public-private partnerships.  Professor Lin believes that establishing relevant domestic legislation brooks no delay, and that Illinois's BIPA (Biometric Information Privacy Act) is a good reference.

On legal regulation, Chairman Chen further remarked that in emergencies such as a pandemic or war, regulations are easily bypassed; whether the international norms above could evolve into binding controls remains to be explored, especially given the constant renewal of technology and the emergence of new applications.

Kuan-Ju Chou (Specialist, Taiwan Association for Human Rights)

Specialist Kuan-Ju Chou offered the perspective of civil society groups.  Searching open government procurement records, she found that between 2006 and 2021 Taiwan had at least 107 procurement cases related to facial recognition, mostly involving attendance tracking or add-on temperature-detection features; the purchasing units included libraries, schools, and police agencies.  She also cited several domestic attempts to use facial recognition that were not implemented because of controversy, including a university using facial recognition and eye tracking to catch cheaters, the Taiwan Railways Administration's station facial recognition intended to find people needing assistance or wanted criminals, and the M-Police project.  These AI surveillance systems are easily abused or misused, which Ms. Chou identified as the main reason human rights groups mostly advocate banning the technology in the public sector or in public spaces.  She also cautioned that current domestic rules mostly sit at the level of operational guidelines or directives and cannot handle the disputes that may arise.  She quoted a parent's "boiling a frog" metaphor: students continuously exposed to facial recognition throughout their schooling may slowly forget that privacy exists and matters.  She argued that such data should be used only under the premise of protecting the data subjects, so that society can come to trust these technologies.

YJ Hsu (Professor, Department of Computer Science & Information Engineering, National Taiwan University)

Professor YJ Hsu explained that it is not easy to regulate facial recognition applications well through law, because different stakeholders may have different viewpoints and positions.  She stressed that, whether for facial recognition or other technologies, no one's rights should be decided by an automated decision-making system; more importantly, data subjects should be clearly informed how their data is collected, processed, stored, and used, and what impact those uses may have on them.  The UK had fined Clearview AI £7.5 million just a few days earlier, yet some worry that even heavy fines cannot stop Clearview AI from continuing to operate.  Professor Hsu also suggested that companies consider these issues from the very start of designing services or products; since the technology cannot be 100% accurate, they must also be able to propose remedies to reduce harm when errors occur.  She emphasized the importance of public awareness and education, reminded the audience that domestic legislation must keep up with international developments, and hoped discussions like this would urge legislators toward more complete planning.  Moreover, AI is a key technology that every country is competing over and one that concerns national economic development, so an outright ban should not be the only answer.  Professor Hsu suggested that educators address ethics and human rights so that students think about these questions, rather than letting technology and human rights end in mutual incomprehension.

Finally, Chairman Chen concluded that these new technological applications will clearly shape how we live.  The question is how to intervene actively and, through the rule of law, make these technologies serve people rather than manipulate our lives.

Questions and Responses

Q1. In a state of emergency, may databases or software obtained legally or illegally be used?  Or must one still comply with the relevant norms, including international human rights law, international humanitarian law, or the standards of the Montreux Document?  How urgent must a situation be before regulations can be bypassed?

Professor Lin first clarified the concept of a "state of emergency."  In international law, the armed-conflict regime is limited to situations involving lethal attacks or armed exchanges of fire; since the 1960s, the law-of-war regime within international humanitarian law has covered only military armed engagement and did not anticipate the situations we face today.  The "state of emergency" raised here likely refers to improper access to large databases in the current post-pandemic era, or to emergencies involving major natural disasters (typhoons, floods, and the like) where citizens' personal data must be obtained for disaster prevention and relief.

    Broadly speaking, a "state of emergency" includes acts of war, but international law applies two different regimes: when two countries have formally declared war and entered armed conflict, international humanitarian law applies; in the absence of war, international human rights law is more applicable.

    For uses aimed at epidemic or disaster prevention, whether databases or software are obtained legally or illegally should, internationally, still be examined against the relevant provisions of the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR).  Whether Article 7 of the ICCPR or the European Convention on Human Rights is violated should likewise be explored in the context of international human rights law.

    Domestically, as earlier speakers mentioned, whether big-data epidemic prevention or the electronic-fence quarantine system can be lawfully accepted under Taiwan's current democratic system can be assessed with reference to Judicial Yuan Interpretation No. 690, which ruled on the use of data and restrictions on personal liberty during the SARS period.  The Interpretation holds that where collective security and civil liberties conflict, the needs of collective public health must still take priority.

Q2. At the end of her presentation, Ms. Chou suggested that the public sector refrain from developing or investing in biometric recognition.  Amnesty International is making a similar advocacy, arguing that because human ethics and legal systems currently cannot keep pace with each other, the development of biometric and facial recognition technology should be banned entirely.  Technology is neutral, but once placed within human behavior and society it carries many value judgments.  What is Chairman Hua's view?

Chairman Hua responded that, from the standpoint of technology R&D and development, "technology" can improve efficiency and be put to good use; it depends on how humans use it.  We should use technology to assist decision-making rather than rely on it for final decisions.  For example, in a case abroad mentioned earlier, facial recognition made an error and the wrong person was arrested, so the eyewitness's testimony was relied on instead; yet the data show that the technology's error rate is lower than that of the human eye.  Perhaps eyewitness testimony could be supplemented with facial recognition results to strengthen its accuracy.  In any case, he suggested that domestic regulations be made complete first, so that people's rights are fully protected and "technology" can play a positive role.