Source: Tech Monitor
“AI will fundamentally change every software category,” said Microsoft CEO Satya Nadella on Tuesday as he announced that OpenAI’s generative AI technology is coming to the Bing search engine, offering users what Microsoft hopes will be a richer search experience.
But how to regulate tools, such as OpenAI’s chatbot ChatGPT, that can generate any type of content from a few words, and are trained on the world’s knowledge, is a question that is puzzling policymakers around the world. The solution will involve assessing risk, one expert told Tech Monitor, and certain types of content will need to be more closely monitored than others.
Within two months of launch, the AI chatbot ChatGPT became the fastest-growing consumer product in history, reaching more than 100 million monthly active users in January alone. It has prompted some of the world’s largest companies to pivot to, or speed up, AI rollout plans and has given a new lease of life to the conversational AI sector.
Microsoft is embedding conversational AI in its browser, search engine and broader product range, while Google is planning to do the same with the chatbot Bard and other integrations into Gmail and Google Cloud, several of which it showcased at an event in Paris today.
Other tech giants such as China’s Baidu are also getting in on the act with chatbots of their own, while start-ups and smaller companies including Jasper and Quora are bringing generative and conversational AI to mainstream consumer and enterprise markets.
This comes with real risks, from widespread misinformation and harder-to-spot phishing emails to misdiagnosis and malpractice if the technology is used for medical information. There is also a high risk of bias if the data used to train a model isn’t diverse. While Microsoft has a retrained model that is more accurate, and other providers such as AI21 are working on verifying generated content against live data, the risk of “real-looking but completely inaccurate” responses from generative AI remains high.
Last week, Thierry Breton, the EU commissioner for the internal market, said the upcoming EU AI Act would include provisions targeted at generative AI systems such as ChatGPT and Bard. “As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks,” Breton told Reuters. “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data.”
Breton and his colleagues will have to act fast, as new AI rules drawn up in the EU and elsewhere may not be ready to cope with the challenges posed by these advanced chatbots.
Analytics software provider SAS outlined some of the risks posed by AI in a recent report, AI & Responsible Innovation. Its author, Dr Kirk Borne, said: “AI has become so powerful, and so pervasive, that it’s increasingly difficult to tell what’s real or not, and what’s good or bad,” adding that the technology is being adopted faster than it can be regulated.
Dr Iain Brown, head of data science at SAS UK & Ireland, said governments and industry both have a role to play in ensuring AI is used for good, not harm. This includes the use of ethical frameworks to guide the development of AI models and strict governance to ensure fair, transparent and equitable decisions from those models. “We test our AI models against challenger models and optimise them as new data becomes available,” Brown explained.
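The “challenger model” testing Brown describes is often pictured as a champion/challenger loop: the deployed model is re-scored against a candidate whenever new labelled data arrives, and replaced only if the candidate clearly wins. The sketch below is a minimal illustration of that pattern, not SAS’s actual tooling; the toy models, the plain accuracy metric and the promotion margin are all assumptions made for the example.

```python
# Minimal champion/challenger sketch (illustrative assumptions throughout).
from typing import Callable, Sequence

Model = Callable[[float], int]  # toy: a model maps one feature to a class label

def accuracy(model: Model, xs: Sequence[float], ys: Sequence[int]) -> float:
    """Fraction of examples the model labels correctly."""
    return sum(model(x) == y for x, y in zip(xs, ys)) / len(xs)

def select_champion(champion: Model, challenger: Model,
                    xs: Sequence[float], ys: Sequence[int],
                    margin: float = 0.02) -> Model:
    """Keep the deployed champion unless the challenger beats it by a
    clear margin on newly arrived data (guards against evaluation noise)."""
    if accuracy(challenger, xs, ys) > accuracy(champion, xs, ys) + margin:
        return challenger
    return champion

# Toy usage: fresh labelled data arrives and the two models are compared.
champion: Model = lambda x: int(x > 0.5)
challenger: Model = lambda x: int(x > 0.4)
new_xs, new_ys = [0.1, 0.45, 0.7, 0.9], [0, 1, 1, 1]
deployed = select_champion(champion, challenger, new_xs, new_ys)
```

In governance terms, the important property is the one Brown points to: the comparison re-runs every time new data becomes available, so a model that drifts out of line with reality is displaced rather than quietly retained.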
Other experts believe companies producing the software will be charged with mitigating the risk the software represents, with only the highest-risk activities facing tighter regulation.
Edward Machin, data, privacy and cybersecurity associate at law firm Ropes & Gray, told Tech Monitor it is inevitable that technology like ChatGPT, which seemingly appeared overnight, will move faster than regulation, especially in an area like AI that is already difficult to regulate. “Although regulation of these models is going to happen, whether it is the right regulation, or at the right time, remains to be seen,” he says.
“Providers of AI systems will bear the brunt of the legislation, but importers and distributors – in the EU at least – will also be subject to potentially onerous obligations,” Machin adds. This could put some developers of open-source software in a difficult position. “There is also the thorny question of how liability will be handled for open-source developers and other downstream parties, which may have a chilling effect on willingness of those folks to innovate and conduct research,” Machin says.
Aside from the overall regulation of AI, there are also questions around the copyright of generated content and around privacy, Machin continues. “For example, it’s not clear whether developers can easily – if at all – address individuals’ deletion or rectification requests, nor how they get comfortable with scraping large volumes of data from third-party websites in a way that likely breaches those sites’ terms of service,” he says.
Lilian Edwards, Professor of Law, Innovation and Society at Newcastle University, who works on the regulation of AI and with the Alan Turing Institute, said some of these models will come under GDPR, and this could lead to orders being issued to delete training data or even the algorithms themselves. It may also spell the end of the wide-scale scraping of the internet that currently powers search engines like Google, if website owners lose traffic to AI searches.
The big problem, says Edwards, is the general-purpose nature of these models. This makes them difficult to regulate under the EU AI Act, which has been drafted around levels of risk, because it is hard to judge what end users will do with a technology designed for multiple use cases. She said the European Commission is trying to add rules to govern this type of technology, but is likely to do so after the act becomes law, which could happen this year.
Enforcing algorithmic transparency could be one solution. “Big Tech will start lobbying to say ‘you can’t put these obligations on us as we can’t imagine every future risk or use’,” says Dr Edwards. “There are ways of dealing with this that are less or more helpful to Big Tech, including making the underlying algorithms more transparent. We are in a head-in-the-sand moment. Incentives ought to be towards openness and transparency to better understand how AI makes decisions and generates content.”
“It is the same problem you get with much more boring technology: tech is global, bad actors are global and enforcement is incredibly difficult,” she said. “General-purpose AI doesn’t match the structure of the AI Act, which is what the fight is over now.”
Adam Leon Smith, CTO of AI consultancy DragonFly, has worked in technical AI standardisation with UK and international standards development organisations and acted as the UK industry representative to the EU AI standards group. “Regulators globally are increasingly realising that it is very difficult to regulate technology without consideration of how it is actually being used,” he says.
He told Tech Monitor that accuracy and bias requirements can only be considered in the context of use, and that risk and rights-and-freedoms requirements are likewise difficult to assess before a technology reaches widescale adoption. The problem, he says, is that large language models are general-purpose AI.
“Regulators can force transparency and logging requirements on the technology providers,” Leon Smith says. “However, only the user – the company that operates and deploys the LLM system for a particular purpose – can understand the risks and implement mitigations like humans in the loop or ongoing monitoring.”
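As an illustration of the deployer-side mitigations Leon Smith mentions, the sketch below pairs an audit log with a human-in-the-loop gate. It is a hypothetical outline, not any vendor’s real API: generate, flagged and review_by_human stand in for whatever model call, policy checks and review queue a real deployment would use.

```python
# Hypothetical deployer-side wrapper: log every exchange, and hold
# low-confidence outputs for human review before they reach the user.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-audit")

@dataclass
class Response:
    text: str
    confidence: float  # assumed score from the model or a separate classifier

def generate(prompt: str) -> Response:
    """Placeholder for the actual LLM call."""
    return Response(text=f"Answer to: {prompt}", confidence=0.6)

def flagged(resp: Response, threshold: float = 0.8) -> bool:
    """Placeholder policy check: low confidence triggers human review."""
    return resp.confidence < threshold

def review_by_human(prompt: str, resp: Response) -> str:
    """In production this would enqueue the item for a reviewer;
    here it simply marks the output as held."""
    return f"[held for human review] {resp.text}"

def answer(prompt: str) -> str:
    resp = generate(prompt)
    # Audit trail: every prompt/response pair is logged for later review.
    log.info("prompt=%r response=%r confidence=%.2f",
             prompt, resp.text, resp.confidence)
    if flagged(resp):
        log.info("low-confidence output routed to a human reviewer")
        return review_by_human(prompt, resp)
    return resp.text

print(answer("What are the side effects of this medication?"))
```

The split mirrors Leon Smith’s point: the provider can be made to expose logging hooks, but only the deploying company knows which prompts are risky enough to warrant a human in the loop.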
It is a large-scale debate that is looming over the European Commission and hasn’t even started in the UK, but one that regulators such as data watchdog the Information Commissioner’s Office and its counterpart for financial markets, the Financial Conduct Authority, will have to tackle. Eventually, Leon Smith believes, as regulators increase their focus on the issue, AI providers will start to list the purposes for which the technology “must not be used”, including issuing legal disclaimers before a user signs in to put them outside the scope of “risk-based regulatory action”.
Current best practices for managing AI systems “barely touch on LLMs, it is a nascent field that is moving extremely quickly,” Leon Smith says. “A lot of work is necessary in this space and the firms providing such technologies are not stepping up to help define them.”
OpenAI’s CTO Mira Murati said this week that generative AI tools will need to be regulated. “It is important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” she said in an interview with Time.
But beyond the AI vendors, she said, “a tonne more input into the system” is needed, including from regulators and governments, and it is important the issue is considered quickly. “It’s not too early,” Murati said. “It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”
Topics in this article: AI, Google, Microsoft