Information
Time: 2022/11/8, 2:00–4:00 PM
Venue: IEAT International Conference Center, 8F Meeting Room 2
Keynote Speeches:
14:05–14:45 Keynote 1. EU’s Human-Centric Approach to AI
14:45–15:25 Keynote 2. US’ Governance Principles for AI
15:25–16:05 Keynote 3. China’s AI Governance: for Social Surveillance
Keynote 1. EU’s Human-Centric Approach to AI by Kuo-Lien Hsieh, Professor, Department of Economic and Financial Law, NUK
The European Commission published the Ethics Guidelines for Trustworthy AI in 2019, proposing a human-centric approach to AI that emphasizes that AI systems must serve human beings and promote their common good and well-being. These concepts are consistent with the Charter of Fundamental Rights of the European Union. According to the Guidelines, trustworthy AI should be lawful, ethical, and robust: it must comply with all applicable laws and regulations, respect ethical principles and values, and be robust both from a technical perspective and with regard to its social environment. The Guidelines also define seven key requirements for trustworthy AI systems, such as human agency and oversight; technical robustness and safety; privacy and data governance; and accountability. In the speaker’s view, the human-centric Guidelines carry profound meaning, even though they may appear highbrow.
Keynote 2. US’ Governance Principles for AI by Jung-Chin Kuo, Assistant Professor, Institute of Financial & Economic Law (IFEL), Southern Taiwan University of Science & Technology (STUST)
Although several states have enacted their own AI regulations, such as the Artificial Intelligence Video Interview Act of Illinois, there is no fundamental federal law on Artificial Intelligence in the United States. The White House issued the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government in 2020, proposing principles for the use of AI in government. These principles emphasize that AI use by Federal agencies must be lawful; purposeful and performance-driven; accurate, reliable, and effective; safe, secure, and resilient; understandable; responsible and traceable; regularly monitored; transparent; and accountable. In addition, the White House Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights in October 2022 to guide the design, use, and deployment of automated systems and to protect the rights of the American public. The Blueprint identifies five principles to incorporate these protections into policy and practice: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. It remains to be seen whether this Blueprint will become national law in the future. Beyond the White House, the National Institute of Standards and Technology (NIST) is expected to release the Artificial Intelligence Risk Management Framework (AI RMF) next year to address issues of accuracy, explainability, and discrimination in the design, development, and use of AI.
Keynote 3. China’s AI Governance: for Social Surveillance by Yisuo Tzeng, Assistant Research Fellow, Institute for National Defense and Security Research
Although the wording of China’s AI regulations looks similar to that of the EU and the US, the intentions and impacts are completely different. China’s governance model is top-down, and AI is regarded as a national strategic tool. To build massive databases for machine learning, the Chinese government has enacted several laws, such as the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law (PIPL), to ensure that the personal data of a billion citizens are collected, used, and controlled by the authorities. This is why China is considered the country most likely to achieve AI hegemony. The most famous applications of AI in China are its ubiquitous surveillance networks, such as the Social Credit System, the Skynet Project, and AI emotion-recognition systems. These touch on fundamental human rights and have raised concerns in the EU and the US. In terms of the three basic elements of AI development, the characteristics of China’s AI regulation can be summarized as follows: (1) computing power: focusing on the layout strategy of data centers while taking into account factors such as CO2 emissions, power supply, and cloud resilience; (2) algorithms: improving the government’s algorithmic capabilities to conduct surveillance at the national or even global level by overseeing the algorithm technology of the private sector; (3) data: requiring the private sector to protect personal data while allowing the authorities to abuse it for social control.