Posted by: icemessenger, 2023-05-28 00:57

Regulating artificial intelligence is a 4D challenge



Experts warn of existential threats that require a global competition for good ideas.

Compared with nuclear technology, AI is relatively cheap, invisible, ubiquitous and has an infinite number of use cases. It presents a four-dimensional challenge that demands more flexible responses from the world.




The leaders of the G7 nations addressed plenty of global concerns over sake-steamed Nomi oysters in Hiroshima last weekend: war in Ukraine, economic resilience, clean energy and food security, among others. But they also threw one extra item into their parting swag bag of good intentions: the promotion of inclusive and trustworthy artificial intelligence.

While recognising AI’s innovative potential, the leaders worried about the damage it might cause to public safety and human rights. Launching the Hiroshima AI process, the G7 commissioned a working group to analyse the impact of generative AI models, such as ChatGPT, and prime the leaders’ discussions by the end of this year.

The initial challenges will be how best to define AI, categorise its dangers and frame an appropriate response. Is regulation best left to existing national agencies? Or is the technology so consequential that it demands new international institutions? Do we need the modern-day equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and deter its military use?

One can debate how effectively the UN body has fulfilled that mission. Besides, nuclear technology involves radioactive material and massive infrastructure that is physically easy to spot. AI, on the other hand, is comparatively cheap, invisible, pervasive and has infinite use cases. At the very least, it presents a four-dimensional challenge that must be addressed in more flexible ways.

The first dimension is discrimination. Machine learning systems are designed to discriminate, to spot outliers in patterns. That’s good for spotting cancerous cells in radiology scans. But it’s bad if black box systems trained on flawed data sets are used to hire and fire workers or authorise bank loans. Bias in, bias out, as they say. Banning these systems in unacceptably high-risk areas, as the EU’s forthcoming AI Act proposes, is one strict, precautionary approach. Creating independent, expert auditors might be a more adaptable way to go.
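The "bias in, bias out" dynamic can be shown with a deliberately simplified sketch. The data and the memorising "model" below are hypothetical toys, not any real hiring system: a black box trained on historically skewed decisions faithfully reproduces the skew for equally qualified candidates.

```python
from collections import defaultdict

# Toy historical hiring data: (qualification_score, group) -> past decision.
# The record is flawed: group "B" candidates were rejected even when
# just as qualified as group "A" candidates ("bias in").
history = [
    ((8, "A"), "hire"), ((8, "A"), "hire"),
    ((8, "B"), "reject"), ((8, "B"), "reject"),
    ((3, "A"), "reject"), ((3, "B"), "reject"),
]

# A naive "black box": memorise the majority past decision per feature pair.
model = defaultdict(list)
for features, decision in history:
    model[features].append(decision)

def predict(features):
    decisions = model[features]
    return max(set(decisions), key=decisions.count)

# Two equally qualified candidates: the learned rule reproduces the
# historical discrimination ("bias out").
print(predict((8, "A")))  # hire
print(predict((8, "B")))  # reject
```

Nothing in the training step is malicious; the model simply optimises fidelity to a flawed data set, which is exactly why auditing the data matters as much as auditing the algorithm.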

Second, disinformation. As the academic expert Gary Marcus warned the US Congress last week, generative AI might endanger democracy itself. Such models can generate plausible lies and counterfeit humans at lightning speed and industrial scale.

The onus should be on the technology companies themselves to watermark content and minimise disinformation, much as they suppressed email spam. Failure to do so will only amplify calls for more drastic intervention. The precedent may have been set in China, where a draft law places responsibility for misuse of AI models on the producer rather than the user.

Third, dislocation. No one can accurately forecast what economic impact AI is going to have overall. But it seems pretty certain that it is going to lead to the “deprofessionalisation” of swaths of white-collar jobs, as the entrepreneur Vivienne Ming told the FT Weekend festival in DC.

Computer programmers have broadly embraced generative AI as a productivity-enhancing tool. By contrast, striking Hollywood scriptwriters may be the first of many trades to fear their core skills will be automated. This messy story defies simple solutions. Nations will have to adjust to the societal challenges in their own ways.

Fourth, devastation. Incorporating AI into lethal autonomous weapons systems (LAWS), or killer robots, is a terrifying prospect. The principle that humans should always remain in the decision-making loop can only be established and enforced through international treaties. The same applies to discussion around artificial general intelligence, the (possibly fictional) day when AI surpasses human intelligence across every domain. Some campaigners dismiss this scenario as a distracting fantasy. But it is surely worth heeding those experts who warn of potential existential risks and call for international research collaboration.

Others may argue that trying to regulate AI is as futile as praying for the sun not to set. Laws only ever evolve incrementally whereas AI is developing exponentially. But Marcus says he was heartened by the bipartisan consensus for action in the US Congress. Fearful perhaps that EU regulators might establish global norms for AI, as they did five years ago with data protection, US tech companies are also publicly backing regulation.

G7 leaders should encourage a competition for good ideas. They now need to trigger a regulatory race to the top, rather than presiding over a scary slide to the bottom.
