Months after the chatbot ChatGPT wowed the world with its uncanny ability to write essays and answer questions like a human, artificial intelligence (AI) is coming to Internet search.
Three of the world’s biggest search engines — Google, Bing and Baidu — last week said they will be integrating ChatGPT or similar technology into their search products, allowing people to get direct answers or engage in a conversation, rather than merely receiving a list of links after typing in a word or question. How will this change the way people relate to search engines? Are there risks to this form of human–machine interaction?
Microsoft’s Bing uses the same technology as ChatGPT, which was developed by OpenAI of San Francisco, California. But all three companies are using large language models (LLMs). LLMs create convincing sentences by echoing the statistical patterns of text they encounter in a large database. Google’s AI-powered search engine, Bard, announced on 6 February, is currently in use by a small group of testers. Microsoft’s version is widely available now, although there is a waiting list for unfettered access. Baidu’s ERNIE Bot will be available in March.
Before these announcements, a few smaller companies had already released AI-powered search engines. “Search engines are evolving into this new state, where you can actually start talking to them, and converse with them like you would talk to a friend,” says Aravind Srinivas, a computer scientist in San Francisco who last August co-founded Perplexity — an LLM-based search engine that provides answers in conversational English.
The intensely personal nature of a conversation — compared with a classic Internet search — might help to sway perceptions of search results. People might inherently trust the answers from a chatbot that engages in conversation more than those from a detached search engine, says Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland.
A 2022 study by a team based at the University of Florida in Gainesville found that for participants interacting with chatbots used by companies such as Amazon and Best Buy, the more they perceived the conversation to be human-like, the more they trusted the organization.
That could be beneficial, making searching faster and smoother. But an enhanced sense of trust could be problematic given that AI chatbots make mistakes. Google’s Bard flubbed a question about the James Webb Space Telescope in its own tech demo, confidently answering incorrectly. And ChatGPT has a tendency to create fictional answers to questions to which it doesn’t know the answer — known by those in the field as hallucinating.
A Google spokesperson said Bard’s error “highlights the importance of a rigorous testing process, something that we’re kicking off this week with our trusted-tester programme”. But some speculate that, rather than increasing trust, such errors, assuming they are discovered, could cause users to lose confidence in chat-based search. “Early perception can have a very large impact,” says Sridhar Ramaswamy, a computer scientist based in Mountain View, California, and chief executive of Neeva, an LLM-powered search engine launched in January. The mistake wiped $100 billion from Google’s value as investors worried about the future and sold stock.
Compounding the problem of inaccuracy is a comparative lack of transparency. Typically, search engines present users with their sources — a list of links — and leave them to decide what they trust. By contrast, it’s rarely known what data an LLM trained on — is it Encyclopaedia Britannica or a gossip blog?
“It’s completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation,” says Urman.
If search bots make enough errors, then, rather than increasing trust with their conversational ability, they have the potential to unseat users’ perceptions of search engines as impartial arbiters of truth, Urman says.
She has conducted as-yet unpublished research that suggests current trust is high. She examined how people perceive existing features that Google uses to enhance the search experience, known as ‘featured snippets’, in which an extract from a page that is deemed particularly relevant to the search appears above the link, and ‘knowledge panels’ — summaries that Google automatically generates in response to searches about, for example, a person or organization. Almost 80% of people Urman surveyed deemed these features accurate, and around 70% thought they were objective.
Chatbot-powered search blurs the distinction between machines and humans, says Giada Pistilli, principal ethicist at Hugging Face, a data-science platform in Paris that promotes the responsible use of AI. She worries about how quickly companies are adopting AI advances: “We always have these new technologies thrown at us without any control or an educational framework to know how to use them.”
This article is reproduced with permission and was first published on February 13, 2023.