
2022 26th International Conference Information Visualisation (IV): Latest Publications

Visualization and Visual Knowledge Discovery from Big Uncertain Data
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00062
C. Leung, Evan W. R. Madill, Adam G. M. Pazdor
In the current uncertain world, data keep growing bigger. Big data refers to data flows of huge volume, high velocity, wide variety, and varying levels of veracity (e.g., precise data, imprecise/uncertain data). Embedded in these big data are implicit, previously unknown, but valuable information and knowledge. With huge volumes of information and knowledge discoverable by techniques like data mining, a key challenge is to validate and visualize the mining results. To validate data for better data aggregation in estimation and prediction, and for establishing trustworthy artificial intelligence, the synergy of visualization models and data mining strategies is needed. Hence, in this paper, we present a solution for visualization and visual knowledge discovery from big uncertain data. Our solution discovers knowledge in the form of frequently co-occurring patterns in big uncertain data and visualizes the discovered knowledge; in particular, it shows the upper and lower bounds on the frequency of these patterns. Evaluation with real-life Coronavirus disease 2019 (COVID-19) data demonstrates the effectiveness and practicality of our solution for visualization and visual knowledge discovery from big health informatics data collected from the current uncertain world.
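The key computation behind such bounds is estimating how often a pattern can occur when item occurrences are only probabilistic. A minimal sketch, assuming independent item existence probabilities (the `support_bounds` function and the toy symptom records below are illustrative, not the authors' implementation):

```python
def support_bounds(db, pattern):
    """Return (lower, expected, upper) support counts of an itemset
    over an uncertain transaction database.

    db: list of dicts mapping item -> existence probability in [0, 1].
    pattern: iterable of items.
    """
    lower = expected = upper = 0.0
    for txn in db:
        probs = [txn.get(item, 0.0) for item in pattern]
        p = 1.0
        for q in probs:
            p *= q                          # items assumed independent
        expected += p
        if all(q > 0.0 for q in probs):
            upper += 1                      # pattern possibly present
        if all(q == 1.0 for q in probs):
            lower += 1                      # pattern certainly present
    return lower, expected, upper

# Toy uncertain database, e.g. probabilistic symptom records.
db = [
    {"fever": 1.0, "cough": 0.8},
    {"fever": 0.6, "cough": 0.6, "fatigue": 1.0},
    {"cough": 1.0, "fatigue": 0.3},
]
print(support_bounds(db, ("fever", "cough")))   # -> (0.0, 1.16, 2.0)
```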
Citations: 3
Code-Space Quality Evaluation for Information Visualization
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00029
Ying Zhu
Quality evaluation is essential to creating effective data visualization designs. The data visualization research community has produced many quality metrics for evaluating data visualizations. However, these metrics are rarely integrated into popular data visualization tools. As a result, most visualization creators are either unaware of these quality metrics or do not know how to apply them during the creation process. In this paper, we propose a novel quality evaluation method that integrates quality metrics into popular data visualization programming tools. Our main contribution is a code-space quality evaluation method, distinct from traditional image-space or data-space quality evaluation. Using our method, a visualization programmer passes a coded data visualization design to a quality evaluation function that generates warnings, comments, and design recommendations, allowing quality checks to be integrated directly into the design process.
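A rough sketch of what such a code-space evaluation function could look like; the rule set, spec fields, and severity levels below are hypothetical, loosely modeled on a declarative chart specification such as Vega-Lite:

```python
def evaluate_chart_spec(spec):
    """Run simple quality heuristics over a declarative chart spec.

    spec: a dict loosely modeled on a Vega-Lite specification
    (hypothetical field names; the paper targets real tools).
    Returns a list of (severity, message) pairs.
    """
    findings = []
    mark = spec.get("mark")
    enc = spec.get("encoding", {})

    if mark == "arc" and spec.get("category_count", 0) > 6:
        findings.append(("warning",
            "Pie charts with more than ~6 slices are hard to compare; "
            "consider a sorted bar chart."))
    if enc.get("color", {}).get("scheme") == "rainbow":
        findings.append(("warning",
            "Rainbow color schemes distort perceived magnitude; "
            "prefer a perceptually uniform scheme such as 'viridis'."))
    for channel in ("x", "y"):
        if channel in enc and "title" not in enc[channel]:
            findings.append(("comment",
                f"Axis '{channel}' has no title; add one for readability."))
    return findings

spec = {"mark": "arc", "category_count": 9,
        "encoding": {"color": {"scheme": "rainbow"}, "x": {"field": "share"}}}
for severity, msg in evaluate_chart_spec(spec):
    print(f"[{severity}] {msg}")
```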
Citations: 0
Affective Color Palette Recommendations with Non-negative Tensor Factorization
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00016
Ikuya Morita, Shigeo Takahashi, Satoshi Nishimura, Kazuo Misue
Color is an essential factor influencing human perception; thus, the proper selection of color sets is crucial in creating informative and appealing visual content. Furthermore, the choice of such color palettes often reflects the underlying emotional intention of creators, especially when they want to introduce specific affective styles. This paper presents a color palette recommendation system that facilitates preferred colors and affective expressions in visual content. This is accomplished by introducing non-negative tensor factorization (NTF), which extends conventional matrix-based collaborative filtering for recommending items through the ratings of multiple users. In our approach, we composed a rating tensor whose entries are the scores that user-study participants assigned to colors with respect to affective factors. With this rating tensor, we explored the meaningful relation between affective expression and color preference. Our experiments showed that, by predicting the underlying emotional intentions in visual content design, a tensor-based approach can successfully recommend convincing sets of colors in several possible cases.
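The core machinery is a rank-constrained, non-negative CP decomposition of a users x colors x affective-factors rating tensor. A minimal sketch using TensorLy, where the tensor shape, the rank, and the affective-factor index are assumptions for illustration rather than the paper's actual setup:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Hypothetical rating tensor: users x colors x affective factors,
# holding Likert-style scores (the paper's exact layout is assumed here).
rng = np.random.default_rng(0)
ratings = tl.tensor(rng.uniform(1, 5, size=(20, 12, 6)))

# Rank-4 non-negative CP decomposition: one factor matrix per mode.
weights, factors = non_negative_parafac(ratings, rank=4)
users, colors, affects = factors

# Rank colors for one target affective factor by reconstructing the
# tensor and averaging the predicted ratings over users.
recon = tl.cp_to_tensor((weights, [users, colors, affects]))
target_affect = 2                     # e.g. "calm" (hypothetical index)
color_scores = recon[:, :, target_affect].mean(axis=0)
print("recommended colors:", np.argsort(color_scores)[::-1][:3])
```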
Citations: 0
How originality looks like. Integrating visualization and meta-heuristics to dissect music plagiarism
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00052
N. Lettieri, R. De Prisco, Delfina Malandrino, R. Zaccagnino, Alfonso Guarino
Plagiarism is a debated and controversial topic in several fields: in law, where the subjectivity of the judges who must rule on a suspicious case often leads to long and frequently unresolved proceedings, and in music, where huge amounts of money are invested every year to confront and try to settle suspicious cases. In this scenario, automatic detection of music plagiarism is fundamental: it offers useful support for judges during their rulings, and it helps musicians spend less time in court and more time composing. This paper shows how combining visual analytics with adaptive meta-heuristics can assist domain experts in judging suspicious cases. The solutions are presented as part of PlagiarismDetection, a cross-platform tool that leverages text-similarity algorithms, computational intelligence, optimization methods, and visualization techniques to enable new critical approaches to music plagiarism analysis.
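One building block the abstract names is text-similarity computation over music. As an illustrative sketch (not the tool's actual algorithm), melodies can be compared by running edit distance over transposition-invariant pitch-interval sequences:

```python
def intervals(notes):
    """Convert MIDI pitches to successive intervals, making the
    comparison transposition-invariant."""
    return [b - a for a, b in zip(notes, notes[1:])]

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + cost)
    return dp[m][n]

def melody_similarity(notes_a, notes_b):
    """Similarity in [0, 1]; 1.0 means identical melodic contour."""
    ia, ib = intervals(notes_a), intervals(notes_b)
    return 1.0 - edit_distance(ia, ib) / max(len(ia), len(ib), 1)

suspect  = [60, 62, 64, 65, 67, 69]          # C D E F G A
original = [62, 64, 66, 67, 69, 71]          # same contour, transposed
print(melody_similarity(suspect, original))  # -> 1.0
```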
Citations: 0
Observation and Visualization of Subjectivity-based Annotation Tasks
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00023
Rika Miura, Ami Tochigi, T. Itoh
Annotation is an upstream process for constructing training data for machine learning tasks, and the reliability of annotation is crucial to the reliability of the resulting models. Annotations vary from worker to worker, and differences in these tendencies may impair the reliability of the data; this is especially relevant for tasks that depend on the workers' subjectivity. This study aims to realize reliable annotation by observing workers' annotation results. As a specific example, we analyzed the annotations of three workers who rated the facial expressions of 977 face images on a Likert scale, and we verified the reliability of the annotations from the visualization results.
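A standard way to quantify how much multiple workers' ratings agree is an inter-rater agreement coefficient. The sketch below computes Fleiss' kappa on a tiny hypothetical slice of such data; the paper itself relies on visualization rather than this particular statistic:

```python
import numpy as np

def fleiss_kappa(ratings, n_categories):
    """Fleiss' kappa for N items each rated by the same number of raters.

    ratings: (n_items, n_raters) array of labels in {0..n_categories-1}.
    """
    n_items, n_raters = ratings.shape
    # Count how many raters chose each category per item.
    counts = np.zeros((n_items, n_categories))
    for j in range(n_categories):
        counts[:, j] = (ratings == j).sum(axis=1)

    p_j = counts.sum(axis=0) / (n_items * n_raters)    # category shares
    P_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical slice of the study: 6 face images, 3 workers,
# 5-point Likert scale coded 0-4.
ratings = np.array([
    [4, 4, 3],
    [0, 1, 0],
    [2, 2, 2],
    [3, 4, 4],
    [1, 1, 2],
    [0, 0, 0],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings, 5):.3f}")
```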
Citations: 0
Estimation of Older Driver's Cognitive Performance and Workload Using Features of Eye movement and Pupil Response on Test Routes
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00033
M. Nakayama, Q. Sun, J. Xia
To evaluate the cognitive performance and task mental workload of older drivers, oculo-motor features such as eye movements and pupillary responses were extracted while participants drove a car on a public road. The individual cognitive performance of 11 selected older drivers was measured in advance using conventional tests, including the Manoeuvre test and the useful field of view (UFOV). Changes in the extracted features deviated along the test route, which was classified into five groups according to route type. Regression relationships between the oculo-motor features and the cognitive test scores were created using the LASSO technique, with fitness and feature selection evaluated. The predicted scores for drivers' cognitive performance and their dependency on route group were evaluated, confirming the overall feasibility of estimating a driver's condition. These results suggest that older drivers' eye movements during driving reflect their cognitive abilities and level of mental workload.
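LASSO serves double duty here: it fits the regression and selects which oculo-motor features matter. A minimal sketch with scikit-learn on synthetic stand-in data; the feature count and the comment's feature names are illustrative, not the study's actual variables:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the study's data: per-driver oculo-motor
# features (e.g. fixation duration, saccade amplitude, pupil diameter)
# and a cognitive test score such as UFOV.
rng = np.random.default_rng(1)
X = rng.normal(size=(11, 8))              # 11 drivers x 8 eye features
true_w = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8, 0.0, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=11)

# LassoCV picks the regularization strength by cross-validation;
# scaling first so the L1 penalty treats features comparably.
model = make_pipeline(StandardScaler(), LassoCV(cv=3))
model.fit(X, y)

lasso = model.named_steps["lassocv"]
print("selected feature indices:", np.flatnonzero(lasso.coef_))
print("R^2 on training data:", model.score(X, y))
```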
Citations: 0
Glyph-Based Visual Analysis of Q-Learning Based Action Policy Ensembles on Racetrack
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00011
David Groß, M. Klauck, Timo P. Gros, Marcel Steinmetz, Jörg Hoffmann, S. Gumhold
Recently, deep reinforcement learning has become very successful at making complex decisions, achieving super-human performance in Go, chess, and challenging video games. When applied to safety-critical applications, however, such as controlling cyber-physical systems with a learned action policy, the need for certification arises. To empower domain experts to decide whether to trust a learned action policy, we propose visualization methods for the detailed assessment of action policies implemented as neural networks trained with Q-learning. We propose a highly responsive visual analysis tool that fosters efficient analysis of Q-learning-based action policies over the complete state space of the system, which is essential for verification and for gaining detailed insights into policy quality. For efficient visual inspection of the per-action Q-value rating over the state space, we designed three glyphs that provide different levels of detail. In particular, we introduce the two-dimensional Q-Glyph, which visually encodes Q-values in a compact manner while preserving the directional information of the actions. Placing glyphs in ordered stacks allows simultaneous inspection of policy ensembles that result, for example, from Q-learning meta-parameter studies. Further analysis of a policy is supported by inspecting individual traces generated from a chosen start state. A user study was conducted to evaluate the effectiveness of our tool on the Racetrack case study, a commonly used benchmark in the AI community that abstracts driving control.
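The central idea of encoding one state's per-action Q-values in a compact directional glyph can be sketched with matplotlib. This is a polar-bar approximation of the Q-Glyph concept, not the paper's exact visual design, and the action set and Q-values are hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_q_glyph(ax, q_values, labels):
    """Draw one directional glyph: a polar bar per action whose radius
    and color encode a normalized Q-value."""
    n = len(q_values)
    q = np.asarray(q_values, dtype=float)
    q_norm = (q - q.min()) / (np.ptp(q) + 1e-9)     # map to [0, 1]
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ax.bar(angles, q_norm, width=2 * np.pi / n, bottom=0.1,
           color=plt.cm.viridis(q_norm), edgecolor="black", linewidth=0.5)
    ax.set_xticks(angles)
    ax.set_xticklabels(labels)
    ax.set_yticks([])

# Hypothetical Q-values for 8 acceleration directions in one state.
q_values = [0.1, 0.4, 0.9, 0.6, 0.2, -0.1, -0.3, 0.0]
labels = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
draw_q_glyph(ax, q_values, labels)
plt.show()
```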
Citations: 1
In-Place Collaboration in Extended Reality Data Visualization
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00044
Heidi Abdelhamed, Nourhan El-Faransawy, Nada Sharaf
Technologies have evolved over the past centuries, and we are always looking for ways to make people's lives easier. Analyzing datasets consumes a great deal of time, so interacting with real-life data visualizations is an innovative way to analyze any dataset. Augmented reality is a young technology with vast potential, and collaborating within an augmented reality experience adds a new way of using AR to increase human interaction with data. This paper is about integrating a collaborative augmented reality experience with dataset visualization. Its scope includes the design, implementation, and evaluation of the AR experience, which comprises three main modules: image tracking, dataset visualization, and collaboration. After implementation, surveys were taken from participants to measure the experience's usability and the participants' task load.
Citations: 0
DeepFingerPCANet: Automatic Fingerprint Classification Using Deep Learning
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00081
M. Hussain, Fahman Saeed, Hatim Aboalsamh, Abdul Wadood
Fingerprints are growing in popularity, and fingerprint datasets are becoming increasingly large; they are recorded using a range of sensors embedded in smart devices such as mobile phones and personal computers. The difficulty faced by fingerprint recognition systems worsens when fingerprints are obtained using different sensors, which is one of the main challenges. Fingerprints can be categorized in a database to reduce the search space and speed up the query response; however, classifying cross-sensor fingerprints is a challenging problem. An efficient and robust solution is to use a convolutional neural network (CNN), but designing its architecture is time-consuming. To design a CNN model for fingerprint classification automatically, we developed a strategy that uses pyramidal clustering, principal component analysis (PCA), and the ratio of between-class scatter to within-class scatter to determine the number of filters and the number of layers in the model automatically. This aids in building lightweight CNN models that are efficient and speed up fingerprint classification. We validated the proposed method on two benchmark datasets, FingerPass and FVC2004, which feature noisy, low-quality fingerprints obtained via live-scan devices and various sensors. Compared to existing fingerprint classification methods and well-known pre-trained models, the newly developed models perform noticeably better.
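The scatter-ratio criterion the abstract mentions is straightforward to compute. The sketch below scores candidate layer widths by the between-class/within-class scatter ratio of PCA-projected features; it is a simplified proxy for the paper's procedure, and the candidate set and data are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

def scatter_ratio(X, y):
    """Ratio of between-class to within-class scatter (trace form)."""
    overall = X.mean(axis=0)
    s_b = s_w = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        s_b += len(Xc) * np.sum((mu - overall) ** 2)
        s_w += np.sum((Xc - mu) ** 2)
    return s_b / s_w

def pick_n_filters(X, y, candidates=(4, 8, 16, 32)):
    """Choose the number of PCA components (a stand-in for the number
    of CNN filters in a layer) that maximizes class separability."""
    return max(candidates,
               key=lambda k: scatter_ratio(
                   PCA(n_components=min(k, X.shape[1])).fit_transform(X), y))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 64)), rng.normal(1, 1, (50, 64))])
y = np.repeat([0, 1], 50)
print("chosen filter count:", pick_n_filters(X, y))
```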
Citations: 0
Interactive Web-based 3D Viewer for Multidimensional Microscope Imaging Modalities
Pub Date: 2022-07-01 | DOI: 10.1109/IV56949.2022.00069
Yubraj Gupta, R. E. D. Guerrero, C. Costa, Rui Jesus, Eduardo Pinho, Luís Bastião
Recent advances in the acquisition of digital imaging modalities with high-throughput technologies, such as confocal laser scanning microscopy (CLSM) and focused ion beam scanning electron microscopy (FIB-SEM), are providing researchers with unprecedented opportunities to collect massive multidimensional datasets. These data can be used to visualize the internal structure of tiny specimens (mostly cells and organisms) or to develop analytic algorithms. Visualizing newly obtained multidimensional microscope imaging data is beyond the capabilities of traditional 3D visualization packages, as much of the information lives in additional dimensions, typically corresponding to space, time, and channels, which has driven the development of new visualization applications. In this article, we describe the design and implementation of an interactive web-based multidimensional 3D visualization tool for CLSM and FIB-SEM microscope imaging modalities. The proposed 3D visualization application accepts DICOM files as input and provides a variety of visualization choices, ranging from 3D volume/surface rendering to multiplanar reconstruction approaches. The solution's performance was tested by uploading and rendering microscopy images of distinct modalities.
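Two of the viewer's ingredients, stacking a DICOM series into a volume and cutting multiplanar-reconstruction slices, can be sketched with pydicom and NumPy. The directory path is hypothetical, and the real viewer is a web application rather than this script:

```python
import glob
import numpy as np
import pydicom

def load_volume(dicom_dir):
    """Stack a DICOM series into a (z, y, x) volume, sorted by
    InstanceNumber (assumes a single-frame, single-series directory)."""
    slices = [pydicom.dcmread(p) for p in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: int(s.InstanceNumber))
    return np.stack([s.pixel_array for s in slices])

def mpr_slices(vol):
    """Return the three central multiplanar-reconstruction slices."""
    z, y, x = (d // 2 for d in vol.shape)
    return vol[z], vol[:, y, :], vol[:, :, x]   # axial, coronal, sagittal

vol = load_volume("series_dir")                 # hypothetical path
axial, coronal, sagittal = mpr_slices(vol)
print(vol.shape, axial.shape, coronal.shape, sagittal.shape)
```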
Citations: 0