
Latest Publications in Computers in Human Behavior: Artificial Humans

Avatar or human, who is experiencing it? Impact of social interaction in virtual gaming worlds on personal space
Pub Date : 2025-12-01 Epub Date: 2025-11-17 DOI: 10.1016/j.chbah.2025.100237
Ruoyu Niu, Mengzhu Huang, Rixin Tang
Virtual gaming worlds support rich social interaction in which players use avatars to collaborate, compete, and communicate across distance. Motivated by the increasing reliance on mediated social contact, this research examined whether virtual shared space and avatar properties shape personal space regulation in ways that parallel face-to-face encounters. Three experiments tested how virtual shared space, avatar agency, and avatar anthropomorphism influence interpersonal distance. Across studies, virtual comfort distance and psychological distance served as complementary indicators of changes in personal space, and physical comfort distance was additionally assessed in a subset of conditions with a physically present human partner. Experiment 1 showed that, when interacting with a human-driven partner in the laboratory, occupying a shared virtual space reliably reduced comfort distance and increased psychological closeness compared with interacting in separate virtual spaces, even after controlling for physical shared space. Experiment 2 replicated the virtual shared space effect with computer-driven partners in an online virtual gaming world setting, indicating that reduced interpersonal distance does not depend on human agency alone. Experiment 3 revealed that anthropomorphic avatars increased comfort toward computer-driven partners, whereas avatar form had little impact when the partner was known to be human. Together, the findings indicate that virtual shared space, perceived agency, and avatar appearance jointly shape personal space regulation in digital environments and offer actionable guidance for designing avatars and virtual spaces that foster approach-oriented, prosocial interaction.
Culturally responsive AI chatbots: From framework to field evidence
Pub Date : 2025-12-01 Epub Date: 2025-10-28 DOI: 10.1016/j.chbah.2025.100224
Vik Naidoo, Karman Kaur Chadha
As AI systems become part of everyday life around the world, their failure to recognise and respond to cultural differences can erode trust, reduce engagement, and undermine legitimacy. This paper introduces the Culturally Responsive Artificial Intelligence (Chatbot) Framework (CRAIF-C), a practical, modular approach to building AI chatbots that understand and respect cultural diversity. CRAIF-C is novel in that it operationalises cultural responsiveness across the entire AI lifecycle, combining domain-specific technical methods with validated measurement tools and multi-context empirical testing. It addresses persistent limitations of earlier approaches, such as Value-Sensitive Design or Participatory AI, which often remain conceptual, sector-bound, or late-stage interventions. CRAIF-C works across four key domains: Enculturation, Adaptive Interaction, Explainability & Transparency, and Governance & Accountability. The framework's effectiveness is demonstrated through four complementary studies, which consistently show that AI chatbot systems using CRAIF-C achieve meaningful gains in cultural fit, natural communication, clear explanations, user trust, and sustained engagement. By incorporating cultural sensitivity into the core of AI chatbot design, CRAIF-C provides a roadmap for creating technology that is at once technically capable, socially aware, ethically robust, and globally adaptable.
“What is the latest news, Avatar Pavel?” - AI assistants in transformation processes of metaverse
Pub Date : 2025-12-01 Epub Date: 2025-11-01 DOI: 10.1016/j.chbah.2025.100225
Vaclav Moravec, Beata Gavurova, Martin Rigelsky
The main goal of the study was to examine and evaluate the relationships between public attitudes towards AI avatars, selected socio-demographic characteristics, fields of media consumption, and ideological attitudes, in order to reveal as-yet-unexplored adoption perspectives of AI avatars and their strong economic and social potential in the metaverse. Data were collected from a sample of 1250 respondents aged 18 and over between 2 April and 9 April 2025. The research used an AI avatar experimentally developed by the start-up company The MAMA AI.
The descriptive analysis confirmed that the AI news avatar Pavel was perceived neutrally to slightly positively, but as impersonal, with respondents showing a low willingness to accept him as a guide across media. Respondents also evaluated the use of AI assistants most favorably in technical-service fields, but significantly more negatively in sensitive domains such as psychology or politics. Differences between these groups were most noticeable in the perception of the AI avatar as more or less human and intimate, especially between men and women. By contrast, media habits played a much larger role. The study confirmed the importance of investigating specific adoption factors related to media consumption, media habits, and ideological attitudes alongside socio-demographic factors, allowing us to understand the new adoption potential of AI avatars and the possibilities for its expansion.
Attractive synthetic voices
Pub Date : 2025-12-01 Epub Date: 2025-10-03 DOI: 10.1016/j.chbah.2025.100211
Camila Bruder, Pamela Breda, Pauline Larrouy-Maestri
With recent advances in Artificial Intelligence (AI), synthetic voices have become increasingly prevalent in our everyday soundscape. This study examined listeners’ perception of human and neural Text-To-Speech (TTS) voices. In an online experiment, 75 participants listened to different versions of a short utterance spoken by eight different voices (half human, half TTS), each presented in four expressed emotions (neutral, happy, sad, angry). For each stimulus, participants rated voice attractiveness and willingness to interact, and selected the perceived emotion from a forced-choice list. In a second part, participants were asked to classify each voice as human or AI-generated. Results revealed that participants were often “fooled” by the TTS voices, misidentifying them as human. Voice ratings were influenced by the perceived emotion regardless of the voice type, with happy-sounding voices rated more positively than those perceived as sad or angry. However, TTS voices were rated as less attractive and socially appealing overall, though with large individual differences. These findings indicate that TTS voices are approaching human ones in how they are perceived by listeners, highlighting progress in their naturalness.
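The human-vs-AI classification task in the study's second part reduces to tallying listener judgements against the true voice type. A minimal sketch, using hypothetical responses rather than the study's data:

```python
# Hypothetical human-vs-AI judgements for voices of each true type,
# illustrating how misidentification rates could be tallied.
# (Labels and response counts are invented for illustration only.)
judgements = {
    "human": ["human", "human", "ai", "human"],
    "tts":   ["human", "human", "ai", "human"],  # TTS voices often "fool" listeners
}

# Proportion of TTS stimuli misidentified as human.
tts_misid = judgements["tts"].count("human") / len(judgements["tts"])

# Overall classification accuracy across both voice types.
correct = (judgements["human"].count("human")
           + judgements["tts"].count("ai"))
total = sum(len(v) for v in judgements.values())
accuracy = correct / total

print(f"TTS misidentified as human: {tts_misid:.0%}")   # 75%
print(f"overall classification accuracy: {accuracy:.0%}")  # 50%
```

With these made-up responses, accuracy sits at chance (50%), which is the pattern the abstract describes when listeners are frequently fooled.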
Why human mistakes hurt more? Emotional responses in human-AI errors
Pub Date : 2025-12-01 Epub Date: 2025-11-19 DOI: 10.1016/j.chbah.2025.100238
Ying Qin, Wanhui Zhou, Bu Zhong
Understanding user responses to AI versus human errors is crucial, as these responses shape trust, acceptance, and interaction outcomes. This study investigates the emotional dynamics of human-AI interactions by examining how agent identity (human vs. AI) and error severity (low vs. high) influence negative emotional reactions. Using a 2 × 2 factorial design (N = 250), the findings reveal that human agents consistently elicit stronger negative emotions than AI agents, regardless of error severity. Moreover, perceived experience moderates this relationship under specific conditions: individuals who view AI as less experienced than humans exhibit stronger negative emotions toward human errors, while this effect diminishes when AI is perceived as having higher experience. However, perceived agency does not significantly influence emotional responses. These findings highlight the critical role of agent identity and perceived experience in shaping emotional reactions to errors, adding insights into the dynamics of human-AI interactions. This research shows that developing effective AI systems requires managing user emotional responses and trust, in which perceived experience and competency play pivotal roles in adoption. The findings can guide the design of AI systems that adjust user expectations and emotional responses in accordance with the AI's perceived level of experience.
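A 2 × 2 between-subjects design like the one above (agent identity × error severity) is typically summarised by its four cell means, a main-effect contrast, and an interaction contrast. A minimal sketch with hypothetical ratings (not the study's data):

```python
from statistics import mean

# Hypothetical negative-emotion ratings (1-7 scale) per cell of the
# 2 x 2 design: agent identity (human vs. AI) x error severity (low vs. high).
# All values are invented for illustration.
cells = {
    ("human", "low"):  [4.1, 4.5, 4.3, 4.7],
    ("human", "high"): [5.6, 5.9, 5.4, 5.7],
    ("ai", "low"):     [3.0, 3.4, 3.2, 3.1],
    ("ai", "high"):    [4.2, 4.6, 4.4, 4.3],
}
cell_means = {cell: mean(scores) for cell, scores in cells.items()}

# Main effect of agent identity: average over error severity.
human_mean = mean([cell_means[("human", "low")], cell_means[("human", "high")]])
ai_mean = mean([cell_means[("ai", "low")], cell_means[("ai", "high")]])
identity_effect = human_mean - ai_mean

# Interaction contrast: does the severity effect differ by agent?
severity_effect_human = cell_means[("human", "high")] - cell_means[("human", "low")]
severity_effect_ai = cell_means[("ai", "high")] - cell_means[("ai", "low")]
interaction = severity_effect_human - severity_effect_ai

print(f"identity main effect: {identity_effect:.2f}")
print(f"interaction contrast: {interaction:.2f}")
```

In this toy data the identity main effect is large while the interaction contrast is near zero, mirroring the reported pattern of human agents eliciting stronger negative emotions regardless of error severity; a full analysis would test these contrasts with a two-way ANOVA.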
Whose agent are you? Relational norms shape expectation from algorithmic and human advisors in social decisions
Pub Date : 2025-12-01 Epub Date: 2025-10-10 DOI: 10.1016/j.chbah.2025.100218
Lior Gazit, Ofer Arazy, Uri Hertz
As technology companies develop AI agents designed to function as friends, therapists, and personal advisors, a fundamental question arises: can algorithms fulfill these intimate social roles? Relational Models Theory (RMT) suggests that relationships shape normative expectations in social decisions. Our research examines the perceived relationship between human/algorithmic advisors and advisee. Across two experiments (N = 492), participants reported their expectations from advisors that recommended splitting money between the advisee and an unknown other. Participants expected algorithmic advisors to exhibit higher consistency and higher sensitivity to others' payoffs, even when this resulted in smaller gains for the advisee, reflecting expectations of institutional fairness rather than personal favoritism. In contrast, participants anticipated that human advisors would prioritize their own welfare, consistent with personal relational norms. Seeking to validate that relational norms indeed drive expectations, in a follow-up experiment, we framed advisors as either "Institutional" or "Personal". Participants expected both human and algorithmic advisors to show higher sensitivity to others' payoffs and greater consistency when framed as Institutional, in line with RMT. However, regardless of framing, participants expected algorithmic advisors to exhibit higher sensitivity to others’ payoffs and greater consistency than the expectations from human advisors. Our findings extend Human-AI interaction literature by showing that people apply different normative standards to algorithmic versus human advisors. Results suggest that while relational framing can influence perceptions, attempts to position AI as replacements for humans must account for the persistent tendency to view algorithms through an institutional lens.
The early wave of ChatGPT research: A review and future agenda
Pub Date : 2025-12-01 Epub Date: 2025-10-04 DOI: 10.1016/j.chbah.2025.100213
Peter André Busch, Geir Inge Hausvik, Jeppe Agger Nielsen
Researchers and practitioners are increasingly engaged in discussions about the hopes and fears of artificial intelligence (AI). In this article, we critically examine the early scholarly response to one prominent form of generative and conversational AI: ChatGPT. The launch of ChatGPT has sparked a surge in research, resulting in a fast-growing but fragmented body of literature. Against this backdrop, we undertook a systematic literature review of 192 empirical articles about ChatGPT to examine, synthesize, and evaluate the foci and gaps in this early wave of research, capturing the dominant and immediate scholarly reactions to ChatGPT's release. Our analysis covered the following main aspects: perspectives on the purpose, usage, attitudes, and impacts of ChatGPT, as well as the theories and methods scholars apply in studying ChatGPT. Most studies in our sample focus on performance tests of ChatGPT, highlighting its strengths in remembering, understanding, and analyzing content, while revealing limitations in its capacity to generate novel ideas and its tendency to hallucinate. Although the initial wave of ChatGPT research has generated valuable first insights, much of this early research remains atheoretical, descriptive, and narrowly scoped, with limited attention to broader social, ethical, and institutional implications. These patterns reflect both the rapid publication pace and the early stage of scholarly engagement with this emerging technology. In response, we propose a conceptual model that maps key focus areas of ChatGPT research and suggest ways of strengthening it through a research agenda aimed at advancing more theoretically informed, contextually grounded, and socially responsive studies of generative and conversational AI.
The early wave of ChatGPT research: A review and future agenda
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100213
Peter André Busch, Geir Inge Hausvik, Jeppe Agger Nielsen
Researchers and practitioners are increasingly engaged in discussions about the hopes and fears of artificial intelligence (AI). In this article, we critically examine the early scholarly response to one prominent form of generative and conversational AI: ChatGPT. The launch of ChatGPT has sparked a surge in research, resulting in a fast-growing but fragmented body of literature. Against this backdrop, we undertook a systematic literature review of 192 empirical articles about ChatGPT to examine, synthesize, and evaluate the foci and gaps in this early wave of research and to capture the dominant, immediate scholarly reactions to ChatGPT's release. Our analysis covered the following main aspects: perspectives on the purpose, usage, attitudes, and impacts of ChatGPT, as well as the theories and methods scholars apply in studying ChatGPT. Most studies in our sample focus on performance tests of ChatGPT, highlighting its strengths in remembering, understanding, and analyzing content while revealing limitations in its capacity to generate novel ideas and its habit of hallucinating. Although the initial wave of ChatGPT research has generated valuable first insights, much of this early research remains a-theoretical, descriptive, and narrowly scoped, with limited attention to broader social, ethical, and institutional implications. These patterns reflect both the rapid publication pace and the early stage of scholarly engagement with this emerging technology. In response, we propose a conceptual model that maps key focus areas of ChatGPT research and suggest ways of strengthening it, proposing a research agenda aimed at advancing more theoretically informed, contextually grounded, and socially responsive studies of generative and conversational AI.
Citations: 0
The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music
Pub Date : 2025-12-01 Epub Date: 2025-09-05 DOI: 10.1016/j.chbah.2025.100205
Rohan L. Dunham, Gerben A. van Kleef, Eftychia Stamkou
People generally evaluate music less favourably if they believe it is created by artificial intelligence (AI) rather than humans. But the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports that are vulnerable to bias. This leaves open the question as to whether negative reactions are a reflection of motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness manifested physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed either as having AI or human origins. Study 1 (N = 50) revealed that individuals in the AI condition appreciated music less, reported less intense emotions, and experienced decreased parasympathetic nervous system activity as compared to those in the human condition. Study 2 (N = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions in response to creative AI. They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.
Citations: 0
Homogenizing effect of large language models (LLMs) on creative diversity: An empirical comparison of human and ChatGPT writing
Pub Date : 2025-12-01 Epub Date: 2025-09-15 DOI: 10.1016/j.chbah.2025.100207
Kibum Moon, Adam E. Green, Kostadin Kushlev
Generative AI systems, especially Large Language Models (LLMs) such as ChatGPT, have recently emerged as significant contributors to creative processes. While LLMs can produce creative content that might be as good as or even better than human-created content, their widespread use risks reducing creative diversity across groups of people. In the present research, we aimed to quantify this homogenizing effect of LLMs on creative diversity, not only at the individual level but also at the collective level. Across three preregistered studies, we analyzed 2,200 college admissions essays. Using a novel measure—the diversity growth rate—we showed that each additional human-written essay contributed more new ideas than did each additional GPT-4 essay. Notably, this difference became more pronounced as more essays were included in the analysis and persisted despite efforts to enhance AI-generated content through both prompt and parameter modifications. Overall, our findings suggest that, despite their potential to enhance individual creativity, the widespread use of LLMs could diminish the collective diversity of creative ideas.
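The diversity growth rate described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's measure: the study's exact operationalization is not given here, so each essay is treated as a set of discrete idea labels, and we count how many previously unseen labels each additional essay contributes.

```python
def diversity_growth(essays):
    """For each additional essay, count how many ideas it adds
    that no earlier essay in the sequence contained."""
    seen = set()
    new_per_essay = []
    for ideas in essays:  # each essay = iterable of idea labels
        fresh = set(ideas) - seen
        new_per_essay.append(len(fresh))
        seen |= fresh
    return new_per_essay

# Toy comparison: the "human" sequence keeps contributing new ideas,
# while the homogenized (LLM-like) sequence quickly stops adding any.
human = [{"a", "b"}, {"c"}, {"d", "e"}, {"f"}]
llm = [{"a", "b"}, {"a", "c"}, {"a", "b"}, {"c"}]
print(diversity_growth(human))  # [2, 1, 2, 1]
print(diversity_growth(llm))    # [2, 1, 0, 0]
```

In this toy, the LLM-like sequence's growth rate collapses to zero as more essays accumulate, mirroring the reported pattern that the human-vs-GPT-4 difference grows as the analysis includes more essays.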
Citations: 0
Determinants of self-reported and behavioral trust in an AI advisor within a cooperative problem-solving game
Pub Date : 2025-12-01 Epub Date: 2025-10-30 DOI: 10.1016/j.chbah.2025.100235
Simon Schreibelmayr, Martina Mara
The widespread adoption of artificially intelligent advisory systems in everyday decision-making situations draws attention to the topic of user trust. Based on psychological theories of trust formation, several key determinants of Trust in Automation (TiA) have been proposed, though systematic empirical validation remains limited. To test them under highly controlled conditions, we implemented an immersive Virtual Reality trust game in which 165 participants solved riddles together with a voice-based AI assistant, evaluated it along multiple theoretically derived dimensions, and indicated how much they would rely on its advice. Largely consistent with the TiA model by Körber (2019), we found perceived system competence, understandability, assumed intentions of developers, and participants’ individual trust propensity to significantly predict user trust in the AI advisor, with the first having the largest influence. Additionally, familiarity moderated the relation between perceived system competence and trust. This model, derived from subjective trust measures (self-report scales), was then re-evaluated using behavioral reliance (i.e., the number of accepted in-game AI recommendations) as the outcome variable. Theoretical, empirical, and practical implications of the results are discussed.
Citations: 0
Journal
Computers in Human Behavior: Artificial Humans