
Guidelines for proper use of AI explored

By Chang Jun in San Francisco | China Daily | Updated: 2019-11-12 08:54

Experts at a conference discuss ways to monitor technology and minimize risks

Artificial intelligence, the revolutionary, disruptive and diffuse technology that has been sparking controversy and awe since its inception more than 50 years ago, has entered a stage that requires the global community - academia, civil society, government and industry - to coordinate regulations that guide it toward serving the common good.

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) drew hundreds of experts from around the world to its two-day conference on AI ethics, policy and governance in late October to discuss how major stakeholders can work together to supervise AI research, minimize risks and prohibit unethical AI-enhanced practices.

Attendees unanimously agreed that AI has transformed society profoundly. Major progress has been made thanks to the availability of massive data, powerful computing architectures and advances in machine learning, and AI is playing an increasing role in domains such as healthcare, education, mobility and smart homes.

However, AI has also caused concern around the world, mainly because of a lack of ethical awareness and intrusions into individual privacy - facial recognition applications being a prominent example.

Joy Buolamwini, a computer scientist at the MIT Media Lab, a research laboratory at the Massachusetts Institute of Technology, presented findings from her research on intersectional accuracy disparities in commercial gender classification. In her research, Buolamwini showed facial recognition systems developed by tech companies such as Amazon, Microsoft and Google 1,000 faces and asked them to identify gender. The algorithms misidentified Michelle Obama, Oprah Winfrey and Serena Williams - three iconic dark-skinned women - as male.
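At its core, the disparity Buolamwini measured is a difference in error rates between demographic subgroups. The minimal sketch below is illustrative only - the labels and predictions are hypothetical and are not data from her study - but it shows how such an intersectional audit can be tallied.

```python
# Illustrative sketch only: hypothetical audit records, not data from the Gender Shades study.
from collections import defaultdict

# Each record: (skin_type, true_gender, predicted_gender)
predictions = [
    ("darker", "female", "male"),
    ("darker", "female", "female"),
    ("darker", "male", "male"),
    ("lighter", "female", "female"),
    ("lighter", "male", "male"),
    ("lighter", "female", "female"),
]

# subgroup -> [misclassified, total]
errors = defaultdict(lambda: [0, 0])
for skin_type, true_gender, predicted in predictions:
    subgroup = (skin_type, true_gender)
    errors[subgroup][1] += 1
    if predicted != true_gender:
        errors[subgroup][0] += 1

# Report the per-subgroup error rate; large gaps between subgroups indicate disparity.
for (skin_type, gender), (wrong, total) in sorted(errors.items()):
    print(f"{skin_type} {gender}: error rate {wrong / total:.0%} ({wrong}/{total})")
```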

The bias in code can lead to discrimination against underrepresented groups and the most vulnerable individuals, Buolamwini said.

She also founded the Algorithmic Justice League, a program through which she aims to highlight the collective and individual harms that AI can cause - loss of opportunities, social stigmatization, workplace discrimination and inequality - and to advocate for regulation of big tech companies and scrutiny of governments' use of AI.

One of the key questions around AI governance and ethics, as a majority of attendees agreed, is how to regulate big tech companies.

This "nascent technology will help us build powerful new materials, understand the climate in new ways and generate far more efficient energy - it could even cure cancer," said Eric Schmidt, former Google CEO and current technical advisor to Alphabet Inc.

This is all good, he continued. "I don't want us, in these complicated debates about what we are doing, to forget that the scientists here at Stanford and other places are making progress on problems which were thought to be unsolvable ... because (without AI) they couldn't do the math at scale."

However, Marietje Schaake, an HAI International Policy Fellow and former Dutch member of the European Parliament who worked to pass the European Union's General Data Protection Regulation, argued that AI's promise shouldn't obscure its potential harms, which the law can help mitigate.

Large technology companies have a lot of power, Schaake said. "And with great power should come great responsibility, or at least modesty. Some of the outcomes of pattern recognition or machine learning are reason for such serious concerns that pauses are justified. I don't think that everything that's possible should also be put in the wild or into society as part of this often quoted 'race for dominance'. We need to actually answer the question, collectively, 'How much risk are we willing to take?'"

Like it or not, the age of AI is coming, and fast, and there is plenty to be concerned about, wrote Stanford HAI co-directors Fei-Fei Li and John Etchemendy.

The two believe the real threat lies in the fact that "Most of the world, including the United States, is unprepared to reap many of the economic and societal benefits offered by AI or mitigate the inevitable risks".

Getting there will take decades, they said. "Yet, AI applications are advancing faster than our policies or institutions at a time in which science and technology are being underfunded, under-supported and even challenged. It's a national emergency in the making."

They called on the US government to commit $120 billion over the next decade to research, data and computing resources, education and startup capital in support of a bold human-centered AI framework, in order to maintain America's competitiveness and leadership in the field.

Open dialogue and collaboration among nations on AI research and governance are important, attendees said. Given the cultural differences and varying motivations among international stakeholders, however, it is unrealistic to expect the whole world to settle on a single AI vision or a once-and-for-all solution to the problems and issues.

Nevertheless, governments across continents are taking action.

In Europe, the European Union issued its first draft of ethical guidelines for the development, deployment and use of AI in December 2018, an important step toward innovative and trustworthy AI "made in Europe".

In February, the US president signed an executive order setting out the country's plan for US leadership in AI development. "Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States," he said.

In China, the National New Generation Artificial Intelligence Governance Committee, which is under the Ministry of Science and Technology, in June released the New Generation AI Governance Principles - Developing Responsible AI.

The first official document of its kind on AI governance ethics issued in China, the principles cover harmony and friendliness, fairness and justice, inclusiveness and sharing, privacy protection, safety and controllability, shared responsibility, open collaboration and agile governance.

"We want to ensure the reliability and safety of AI while promoting economic, social and ecological sustainable development," said Zhang Xu, deputy director of the strategic planning department under the Ministry of Science and Technology.

"AI is advancing rapidly, but we still have time to get it right - if we act now," said Fei-fei Li.
