Using Facial Recognition Technology in Warfare


Date: June 21, 2022 14:00-16:00


  • Jen-Ran Chen, Chairman of Digital Transformation Association (Moderator)


  • Hsin-Hsuan Lin, Professor of Department of Law, Chinese Culture University
  • Kuan-Ju Chou, Digital Rights Specialist of Taiwan Association for Human Rights
  • YJ Hsu, Professor of Department of Computer Science & Information Engineering, National Taiwan University
  • JH Hua, Chairman of CyberLink Corp.

Session details

The Ukrainian government is using facial recognition software to help identify the bodies of Russian soldiers killed in combat and to track down their families to inform them of the deaths. The free offer of a facial recognition solution from Clearview AI seems a righteous move; however, the company, which claims to have the largest facial recognition database in the world, has faced a string of legal challenges. In this panel, experts in the field were invited to discuss questions such as: can we ignore the controversy when facial recognition technology is applied in wartime? What are the ethical issues in its use? Is it possible to use facial recognition technology without privacy and other risks?


Session Highlights

The moderator, Mr. Jen-Ran Chen, opened the session by noting that digital technology has developed to the point where it is closely integrated with our daily lives, even changing or transforming people's lifestyles. Among these developments, the use of biometric data in areas such as business, security, and access control is becoming more and more diversified.

Mr. JH Hua then described facial recognition technology as a double-edged sword: highly accurate, but carrying a high risk of violating privacy. When the government uses facial recognition technology, he suggested, it should be mandatory to obtain people's consent in advance. Mr. Hua believes face recognition technology does increase efficiency in applications such as customs entry control at airports, identification of lost elders, and insurance sales. However, he reminded the audience that the Clearview AI controversy is not about the use of the technology itself, but about combining the recognition results with other personal information found on social media. That is also the main reason the company has been sued in many countries.

Professor Hsin-Hsuan Lin discussed the issue from an international law perspective. She began with the humanitarian and human rights crises caused by applying face recognition technology and AI weapons in armed conflicts, violent extremism, and counter-terrorism operations. UN Security Council Resolution 2396 (2017) and the Madrid Guiding Principles were mentioned as legal instruments addressing these crises; however, the former is not detailed enough, and the latter sets only a bottom line. Although international regulations develop slowly, they provide a normative basis. Professor Lin suggested that private companies should suspend their relationships with countries that may violate human rights. Further, when companies have doubts about a government's request to hand over biometric data, they should seek judicial remedies. She believes the establishment of local law to regulate AI applications is urgent and suggested referencing Illinois's Biometric Information Privacy Act (BIPA).

Ms. Kuan-Ju Chou put forward her views as a human rights activist. By reviewing the open records of all government procurements between 2006 and 2021, she found at least 107 cases of acquiring face recognition solutions in Taiwan. The buyers include libraries, schools, and police departments. She also cited several facial recognition projects initiated by the public sector but cancelled due to high controversy. In one example, a university applied eye-tracking and facial-expression analysis in class to catch cheating students. The Taiwan Railway Administration once installed surveillance cameras with a facial recognition function in train stations, trying to identify suspects and people in need.

She further argued that these AI surveillance systems are prone to abuse or misuse, which is why human rights groups advocate banning facial recognition technology in the public sector and in public spaces. In addition, there are currently no laws in the country to resolve the possible disputes. Applying the technology in schools is even worse, as students may get used to living in an environment without privacy. Finally, Ms. Chou suggested that biometric data be used only under the premise of protecting the data owners. It may take a while for society to come to trust the technology.

Following Ms. Chou's comments, Professor YJ Hsu explained that it is not easy to regulate the use of face recognition technology, because different stakeholders hold different views and values. Most importantly, no one's rights should be determined by an automated decision-making system. Data subjects should also be clearly informed about how their personal data is collected, processed, stored, and used, and, more importantly, they should be aware of the impact of that use.


Professor Hsu noted that the UK had imposed a £7.5 million fine on Clearview AI just a few days earlier, though there are concerns that even a high penalty cannot stop Clearview AI from continuing what it is doing. She suggested that all AI providers take privacy into consideration when designing their services or products, bearing in mind that AI cannot be 100% accurate. Another focus should be raising public awareness and education. It is hoped that multistakeholder discussion will urge lawmakers to produce better-rounded legislation. Professor Hsu reminded the audience that AI is a crucial technology for national economic development, so forbidding its use may not be the best answer for the good of the country. She suggested that engineering students learn the concepts of ethics and human rights.

In the end, the panel moderator concluded that AI and other emerging technologies will obviously change people's way of life. Through the law, people may make better use of technology rather than be manipulated by it.

A question was raised from the floor about the right of data usage and the possibility of ignoring data regulations in a state of emergency.

Professor Lin first made a conceptual clarification of the state of emergency. In international law, a "state of emergency" refers to the launch of lethal attacks in an armed conflict. She believed the question from the floor was whether a pandemic or a severe disaster counts as a state of emergency. She further explained that the term may broadly cover acts of war, but in international law two different regimes apply: when two countries have officially declared war and entered a state of conflict, international humanitarian law applies; when there is no war, international human rights law is more applicable.

In cases of pandemic or severe disaster, she suggested referring to the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) as the basis for determining applicability. During the pandemic period in Taiwan, it was legally accepted to allow the use of data, as collective public health may take higher priority.



14:00–14:05     Introduction

14:05–15:45     Panel Discussion

Moderator: Jen-Ran Chen (Chairman, Digital Transformation Association)


  • Hsin-Hsuan Lin (Professor, Department of Law, Chinese Culture University)
  • Kuan-Ju Chou (Digital Rights Specialist, Taiwan Association for Human Rights)
  • YJ Hsu (Professor, Department of Computer Science & Information Engineering, National Taiwan University)
  • JH Hua (Chairman, CyberLink Corp.)

15:45–16:00        Q&A


Jen-Ran Chen (Chairman, Digital Transformation Association)


JH Hua (Chairman, CyberLink Corp.)

Chairman JH Hua described facial recognition technology as a double-edged sword: technically it can be very accurate, but it also risks violating privacy. He believes the government must obtain people's consent in advance when applying the technology. Good facial recognition applications can make existing processes more efficient, including airport entry and exit control, identifying lost elders, and replacing handwritten signatures in insurance transactions. He also reminded the audience that the reason the Clearview AI controversy has led many countries to fine the company is not its use of facial recognition technology, but the social media account information of the identified persons that the company provides along with the recognition results.

Hsin-Hsuan Lin (Professor, Department of Law, Chinese Culture University)

Professor Hsin-Hsuan Lin spoke from the broader framework of international law. She first mentioned the humanitarian and human rights crises caused by facial recognition technology and unmanned AI weapons in armed conflicts, violent extremism, and counter-terrorism. She cited UN Security Council Resolution 2396 and the Madrid Guiding Principles as the regulatory approaches suggested for these situations: the former is not detailed, while the latter sets a bottom line for application. She believes that although these international regulations develop at a snail's pace, they at least provide a normative basis, albeit a thin one, with no international standard for balancing interests. Professor Lin's observations and suggestions include: companies should end informal cooperation with countries that may interfere with human rights, and when companies question the human rights compliance of a state's request for biometric data, there should be a remedial mechanism for seeking a legal resolution. She also suggested referring to the Montreux Document, jointly initiated by the Swiss government and the International Committee of the Red Cross, and promoting compliance with rights through interest groups and public-private partnerships. Professor Lin believes that establishing relevant domestic legislation is urgent, and that Illinois's Biometric Information Privacy Act (BIPA) is a good reference.


Kuan-Ju Chou (Digital Rights Specialist, Taiwan Association for Human Rights)


YJ Hsu (Professor, Department of Computer Science & Information Engineering, National Taiwan University)

Professor YJ Hsu explained that it is not easy to properly regulate facial recognition applications through law, because different stakeholders may hold different views and positions. She emphasized that, whether for facial recognition or any other technology, no one's rights should be determined by an automated decision-making system; more importantly, data subjects should be clearly informed how their data is collected, processed, stored, and used, and what impact those uses may have on them. The UK fined Clearview AI £7.5 million just a few days ago, but some worry that even a high fine cannot stop Clearview AI from continuing to operate. Professor Hsu also suggested that companies take these issues into account from the very start when designing related services or products; since the technology cannot be 100% accurate, they should also have plans to mitigate harm when errors occur. She stressed the importance of public awareness and education, and reminded the audience that domestic legislation should keep pace with international developments, hoping that such discussions will urge legislators to make more complete plans. Moreover, AI is a key technology that countries are competing over, and it concerns national economic development, so an outright ban should not be the only answer. Professor Hsu suggested that educational institutions address ethics and human rights so that students can think through these issues, rather than letting technology and human rights end in mutual incomprehension.






As for whether acquiring databases or software for pandemic prevention or disaster relief is lawful, internationally this may still need to be examined against the relevant provisions of the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR). Whether there is a violation of Article 7 of the ICCPR or of the European Convention on Human Rights may likewise need to be examined in the context of international human rights law.