
Gaze-In '12: Latest Publications

Gaze and conversational engagement in multiparty video conversation: an annotation scheme and classification of high and low levels of engagement
Pub Date: 2012-10-26 | DOI: 10.1145/2401836.2401846
R. Bednarik, Shahram Eivazi, Michal Hradiš
When using a multiparty video-mediated system, interacting participants assume a range of roles and exhibit behaviors according to how engaged they are in the communication. In this paper we focus on estimating conversational engagement from the gaze signal. In particular, we present an annotation scheme for conversational engagement and a statistical analysis of gaze behavior across varying levels of engagement, and we classify vectors of computed eye-tracking measures. The results show that in 74% of cases the level of engagement can be correctly classified as either high or low. In addition, we describe the nuances of gaze at distinct levels of engagement.
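The abstract does not specify the classifier, so the following is a minimal, hedged sketch of the classification step: vectors of gaze features (synthetic stand-ins here for measures such as fixation duration and count) fed to an SVM with cross-validation. The feature layout and the choice of `SVC` are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: classify gaze-feature vectors into high/low engagement.
# The paper's actual classifier and features are not given in the abstract.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Each row is one annotated conversation segment; columns are placeholder
# gaze features (e.g., mean fixation duration, fixation count, saccade
# amplitude, gaze-at-partner ratio). Replace with real annotated data.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 0 = low engagement, 1 = high engagement

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~chance on random data
```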
Citations: 53
Sensing visual attention using an interactive bidirectional HMD
Pub Date: 2012-10-26 | DOI: 10.1145/2401836.2401852
T. Schuchert, Sascha Voth, Judith Baumgarten
This paper presents a novel system for sensing attentional behavior in Augmented Reality (AR) environments by analyzing eye movement. The system is based on lightweight, head-mounted optical see-through glasses containing bidirectional microdisplays, which allow image display and eye tracking on a single chip. The sensing and interaction application has been developed in the European project ARtSENSE in order to (1) detect museum visitors' attention to and interest in artworks as well as in presented AR content, (2) present appropriate personalized information as augmented overlays based on the detected attention, and (3) allow museum visitors gaze-based interaction with the system and the AR content. In this paper we present a novel algorithm for pupil estimation in low-resolution eye-tracking images and show first results on attention estimation through eye-movement analysis and on gaze-based interaction with the system.
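Since the pupil-estimation algorithm itself is not detailed in the abstract, here is a common threshold-and-centroid baseline for low-resolution eye images, useful as a point of comparison; it is not the authors' method.

```python
# Baseline pupil-center estimate for a low-resolution grayscale eye image
# (uint8). NOT the paper's algorithm; a standard darkest-blob baseline.
import cv2
import numpy as np

def estimate_pupil_center(eye_gray: np.ndarray):
    """Return (x, y) of the darkest blob's centroid, or None if not found."""
    blur = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    # The pupil is usually the darkest region; keep the lowest 5% of intensities.
    thresh_val = float(np.percentile(blur, 5))
    _, mask = cv2.threshold(blur, thresh_val, 255, cv2.THRESH_BINARY_INV)
    # Morphological opening removes isolated dark pixels (eyelashes, noise).
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```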
Citations: 12
Eye gaze assisted human-computer interaction in a hand gesture controlled multi-display environment
Pub Date: 2012-10-26 | DOI: 10.1145/2401836.2401849
Tong Cha, S. Maier
A human-computer interaction (HCI) framework that processes user input in a multi-display environment is able to detect and interpret dynamic hand-gesture input. In an environment equipped with large displays, this system makes fully contactless application control possible. The framework was extended with a new input modality that brings human gaze into the interaction. The main contribution of this work is the ability to unite arbitrary types of computer input and obtain a detailed view of the behaviour of every modality. Information is then available as high-speed data samples received in real time. The framework is designed with particular regard to gaze and hand-gesture input in multi-display environments with large-area screens.
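The framework's actual API is not published in the abstract, so the sketch below only illustrates the general idea of uniting modalities: a shared timestamped sample record plus a helper that pairs a gesture event with the nearest-in-time gaze sample. All field names and the 50 ms pairing window are assumptions.

```python
# Illustrative unified-input record and gaze/gesture pairing (assumed design).
from dataclasses import dataclass

@dataclass
class InputSample:
    timestamp_ms: float  # capture time on a clock shared by all modalities
    modality: str        # e.g. "gaze" or "hand_gesture"
    payload: dict        # modality-specific data (screen coords, gesture id, ...)

def pair_gaze_with_gesture(gesture: InputSample,
                           gaze_buffer: list[InputSample],
                           max_lag_ms: float = 50.0):
    """Return the gaze sample closest in time to a gesture event, or None."""
    candidates = [g for g in gaze_buffer
                  if abs(g.timestamp_ms - gesture.timestamp_ms) <= max_lag_ms]
    if not candidates:
        return None
    return min(candidates,
               key=lambda g: abs(g.timestamp_ms - gesture.timestamp_ms))
```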
Citations: 8
Multimodal corpus of conversations in mother tongue and second language by same interlocutors
Pub Date: 2012-10-26 | DOI: 10.1145/2401836.2401845
Kosuke Kabashima, Kristiina Jokinen, M. Nishida, Seiichi Yamamoto
In this paper we describe multi-modal data collected from conversations held both in the mother tongue and in a second language. We also compare eye movements and utterance styles between communication in the mother tongue and in the second language. The results obtained from analyzing eye movements and utterance styles are presented.
Citations: 10
Brain-enhanced synergistic attention (BESA)
Pub Date: 2012-10-26 | DOI: 10.1145/2401836.2401837
D. Khosla, Matthew S. Keegan, Lei Zhang, K. Martin, Darrel J. VanBuer, David J. Huber
In this paper, we describe a hybrid human-machine system for searching for and detecting Objects of Interest (OI) in imagery. Automated methods for OI detection based on models of human visual attention have received much interest, but are inherently bottom-up and feature-driven. Humans fixate on regions of imagery based on a much stronger top-down component. While it may be possible to include some aspects of top-down cognition in these methods, it is difficult to fully capture all aspects of human cognition in an automated algorithm. Our hypothesis is that a combination of automated methods with human fixations will provide a better solution than either alone. In this work, we describe a Brain-Enhanced Synergistic Attention (BESA) system that combines models of visual attention with real-time eye fixations from a human for accurate search and detection of OI. We describe two different BESA schemes and provide implementation details. Preliminary studies were conducted to determine the efficacy of the system, and initial results are promising. Typical applications of this technology are in surveillance, reconnaissance, and intelligence analysis.
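One plausible way to realize the hybrid idea, assumed here since the two BESA schemes are not detailed in the abstract, is to blend a bottom-up saliency map with a Gaussian density map built from human fixations; the weighting scheme and parameters below are illustrative only.

```python
# Assumed fusion of machine saliency and human fixations (not the paper's
# published scheme): a weighted blend of two normalized attention maps.
import numpy as np

def fuse_attention_maps(saliency: np.ndarray,
                        fixations: list[tuple[int, int]],
                        alpha: float = 0.5, sigma: float = 15.0) -> np.ndarray:
    """saliency: (H, W) bottom-up map; fixations: (x, y) pixel coordinates."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros((h, w))
    for fx, fy in fixations:
        # Place a Gaussian bump at each human fixation point.
        density += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma ** 2))

    def norm(m: np.ndarray) -> np.ndarray:
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else m

    # alpha trades off machine saliency against human fixation evidence.
    return alpha * norm(saliency) + (1 - alpha) * norm(density)
```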
Citations: 1
A framework of personal assistant for computer users by analyzing video stream
Pub Date: 2012-10-26 | DOI: 10.1145/2401836.2401850
Zixuan Wang, Jinyun Yan, H. Aghajan
Time spent at the computer is increasing steadily with the rapid development of the Internet. During long periods in front of the computer, bad posture and habits pose health risks, and unnoticed fatigue impairs work efficiency. We use a camera to investigate how users behave in front of the computer, considering face pose, eye gaze, eye blinking, and yawn frequency. These visual cues are then used to suggest that users correct poor posture or take a break. We propose a novel personal-assistant framework for users who spend long stretches at the computer: the camera produces a video stream that records user behavior, and the assistant system automatically analyzes the visual input and offers suggestions at the right time. Our experiments show that the system detects visual cues with high accuracy and makes reasonable suggestions to users. This work opens up the area of assistant systems for individuals who use computers frequently.
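For one of the visual cues, eye blinking, a common detection technique is the eye aspect ratio (EAR) over six eye landmarks; note that this particular metric postdates the paper, so it stands in here only as a hedged illustration of blink detection, not the authors' method.

```python
# Illustrative blink detection via the eye aspect ratio (EAR); assumes a
# facial-landmark detector already supplies six points around each eye.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical eyelid distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical eyelid distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye-corner distance
    return (v1 + v2) / (2.0 * h)

def is_blinking(eye: np.ndarray, threshold: float = 0.2) -> bool:
    # EAR drops sharply when the eyelid closes; the threshold is empirical.
    return eye_aspect_ratio(eye) < threshold
```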
Citations: 9
Multi-modal object of interest detection using eye gaze and RGB-D cameras
Pub Date: 2012-10-26 | DOI: 10.1145/2401836.2401838
Christopher D. McMurrough, Jonathan Rich, C. Conly, V. Athitsos, F. Makedon
This paper presents a low-cost, wearable headset for mobile 3D Point of Gaze (PoG) estimation in assistive applications. The device consists of an eye-tracking camera and a forward-facing RGB-D scene camera, which together provide an estimate of the user's gaze vector and its intersection with a 3D point in space. A computational approach that considers object 3D information and visual appearance together with the user's visual gaze interactions is also given to demonstrate the utility of the device. The resulting system is able to identify, in real time, known objects within a scene that intersect the user's gaze vector.
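The geometric core, intersecting the gaze ray with the RGB-D point cloud, can be sketched as a nearest-point-to-ray query; this formulation and the 2 cm tolerance are assumptions, as the device's actual pipeline is not specified in the abstract.

```python
# Assumed sketch: where does the gaze ray meet the RGB-D point cloud?
# Pick the cloud point closest to the ray, within a distance tolerance.
import numpy as np

def gaze_ray_hit(origin: np.ndarray, direction: np.ndarray,
                 cloud: np.ndarray, max_dist: float = 0.02):
    """origin: (3,) eye position; direction: (3,) gaze vector;
    cloud: (N, 3) points in metres. Returns the hit point or None."""
    d = direction / np.linalg.norm(direction)
    rel = cloud - origin                 # vectors from the eye to each point
    t = np.clip(rel @ d, 0, None)        # projection length; drop points behind
    perp = np.linalg.norm(rel - np.outer(t, d), axis=1)  # distance to the ray
    i = int(np.argmin(perp))
    return cloud[i] if perp[i] <= max_dist else None
```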
Citations: 4
From the eye to the heart: eye contact triggers emotion simulation
Pub Date: 2012-10-22 | DOI: 10.1145/2401836.2401841
Magdalena Rychlowska, Leah Zinner, Serban C. Musca, P. Niedenthal
Smiles are complex facial expressions that carry multiple meanings. Recent literature suggests that deep processing of smiles via embodied simulation can be triggered by achieved eye contact. Three studies supported this prediction. In Study 1, participants rated the emotional impact of portraits, which varied in eye contact and smiling. Smiling portraits that achieved eye contact were more emotionally impactful than smiling portraits that did not. In Study 2, participants saw photographs of smiles in which eye contact was manipulated. The same smile by the same individual elicited more positive emotion and higher ratings of authenticity when eye contact was achieved than when it was not. In Study 3, participants' facial EMG was recorded. Activity over the zygomaticus major (i.e., smiling) muscle was greater when participants observed smiles that achieved eye contact than smiles that did not. These results support the role of eye contact as a trigger of embodied simulation. Implications for human-machine interactions are discussed.
Citations: 23