
Li Qi

Published: 2022-06-22

Basic Information

Name: Li Qi (李奇)
Gender:
Professional title: Professor
Staff category: Teacher
Supervisor status: Doctoral supervisor
Education: Doctoral degree
Office phone: 0431-85583546
Email: liqi@cust.edu.cn


Biography

Li Qi: Ph.D., doctoral supervisor, member of the China Computer Federation (CCF), and council member of the International Society of Complex Medical Engineering. He received his Ph.D. from Okayama University, Japan, in 2010, and was a visiting scholar at Florida International University, USA, from 2016 to 2017. His research interests include intelligent human-computer interaction and brain-inspired intelligence, with in-depth work on brain-computer interface (BCI) technology and on the analysis and application of cognitive mechanisms. His current research focuses on: exploration of new stimulus paradigms for BCI systems and optimization of EEG signal decoding methods; brain mechanisms of audiovisual integration and construction of brain-inspired theoretical models; machine-learning-based characterization of abnormalities and intelligent diagnosis of neurodegenerative diseases; and neurofeedback-based rehabilitation techniques. Some of this work has reached an internationally advanced level and has produced good social and economic benefits.

He has led a General Program project of the National Natural Science Foundation of China, projects under the Jilin Provincial Science and Technology Development Plan, a project of the Scientific Research Foundation for Returned Overseas Scholars of the Ministry of Education, and 13th Five-Year Science and Technology Research Projects of the Jilin Provincial Department of Education, and has participated as a core member in several other national, provincial, and ministerial projects, including National Natural Science Foundation of China grants. He has published more than 60 academic papers and one monograph. He serves as a reviewer for journals including IEEE Access, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Brain Imaging and Behavior, Frontiers in Neuroscience, and Cerebral Cortex. He has applied for four national invention patents and one software copyright, and has received one Third Prize of the Jilin Province Natural Science Academic Achievement Award. He currently teaches the undergraduate course Operating Systems.

Research Areas

Brain Information and Intelligence Science

Research Projects

National Natural Science Foundation of China: Brain Mechanisms of Audiovisual Integration and Attentional Modulation at Different Processing Stages (2018-2021);

National Natural Science Foundation of China: Brain Mechanisms of Audiovisual Spatiotemporal Information Integration (2013-2015);

Jilin Provincial Science and Technology Development Plan (Key Technology R&D): Audiovisual Bimodal Brain-Computer Interface Intelligent Interaction System (2019-2021);

Jilin Provincial Science and Technology Development Plan (International Science and Technology Cooperation): Neural Mechanisms Underlying Factors Affecting Stereoscopic Viewing Comfort (2012-2014);

Jilin Provincial Science and Technology Development Plan (Youth Fund): Construction of a Theoretical Model of Audiovisual Spatiotemporal Information Integration in the Human Brain (2010-2012);

Scientific Research Foundation for Returned Overseas Scholars, Ministry of Education: Integration Mechanisms of Spatiotemporal Elements of Audiovisual Information in the Human Brain (2012-2014);

13th Five-Year Science and Technology Research Project, Jilin Provincial Department of Education: Spelling Paradigm Exploration and Classification Algorithms for an Audiovisual Bimodal Brain-Computer Interface System (2019-2020);

13th Five-Year Science and Technology Research Project, Jilin Provincial Department of Education: Exploration of New Stimulus Paradigms and Classification Algorithms for Brain-Computer Interface Systems (2016-2017).

Awards and Honors

Seventh Batch of Jilin Province Top Innovative Talents, Third Level (2019);

Third Prize, Jilin Province Natural Science Academic Achievement Award, for research on brain mechanisms of audiovisual information integration (2018);

Changchun University of Science and Technology: Advanced Individual in "San Yu Ren" (education through teaching, management, and service) (2013);

Changchun University of Science and Technology: Advanced Worker (2012).

Publications

[1]. Y. Xi, Q. Li*, N. Gao, G. J. Li, W. H. Lin, J. L. Wu. Co-stimulation-removed audiovisual semantic integration and modulation of attention: An event-related potential study. International Journal of Psychophysiology, 151: 7-17, 2020. (SCI, IF: 2.407)

[2]. Z. H. Lu, Q. Li*, N. Gao and J. L. Yang. The Self-Face Paradigm Improves the Performance of the P300-Speller System. Frontiers in Computational Neuroscience 13: 93, 2020. (SCI, IF: 2.323)

[3]. Z. H. Lu, Q. Li*, N. Gao, J. J. Yang and O. Bai. Happy emotion cognition of bimodal audiovisual stimuli optimizes the performance of the P300 speller. Brain and Behavior 00:e01479, 2019. (SCI, IF: 2.072)

[4]. Y. Xi, Q. Li*, M. C. Zhang, L. Liu*, G. J. Li, W. H. Lin and J. L. Wu. Optimized Configuration of Functional Brain Network for Processing Semantic Audiovisual Stimuli Underlying the Modulation of Attention: A Graph-Based Study. Frontiers in Integrative Neuroscience 13: 67, 2019. (SCI, IF: 2.81)

[5]. Z. H. Lu, Q. Li*, N. Gao, J. J. Yang and O. Bai. A Novel Audiovisual P300-Speller Paradigm Based on Cross-Modal Spatial and Semantic Congruence. Frontiers in Neuroscience 13:1040, 2019. (SCI, IF: 3.648)

[6]. Q. Li*, Y. Xi, M. C. Zhang, L. Liu* and X. Y. Tang. Distinct Mechanism of Audiovisual Integration with Informative and Uninformative Sound in a Visual Detection Task: A DCM Study. Frontiers in Computational Neuroscience 13: 59, 2019. (SCI, IF: 2.323)

[7]. Y. Xi, Q. Li*, N. Gao, S. Y. He and X. Y. Tang. Cortical network underlying audiovisual semantic integration and modulation of attention: An fMRI and graph-based study. PLoS ONE 14(8): e0221185, 2019. (SCI, IF: 2.776)

[8]. Q. Li*, Z. H. Lu, N. Gao and J. J. Yang. Optimizing the performance of the visual P300-speller through active mental tasks based on color distinction and modulation of task difficulty, Frontiers in Human Neuroscience 13: 130, 2019. (SCI, IF: 2.871)

[9]. Q. Li*, K. Y. Shi, N. Gao, J. Li and O. Bai. Training set extension for SVM ensemble in P300-speller with familiar face paradigm. Technology and Health Care 26 (3): 469-482, 2018. (SCI, IF: 0.717)

[10]. Q. Li, H. T. Yu, X. J. Li, H. Z. Sun, J. J. Yang and C. L. Li. The informativity of sound modulates crossmodal facilitation of visual discrimination: a fMRI study. Neuroreport 28(2): 63-68, 2017. (SCI, IF: 1.343)

[11]. Q. Li, H. T. Yu, Y. Wu and N. Gao. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: an event-related potential study. Neuroscience Letters 629: 149-154, 2016. (SCI, IF: 2.06)

[12]. Q. Li, H. M. Yang, F. Sun, J. L. Wu. Spatiotemporal relationships among audiovisual stimuli modulate auditory facilitation of visual target discrimination. Perception 44(3): 232-242, 2015. (SCI, IF: 1.11)

[13]. Q. Li, Y. Wu, J. J. Yang, J. L. Wu and T. Touge. The temporal reliability of sound modulates visual detection: An event-related potential study. Neuroscience Letters 584: 202-207, 2015. (SCI, IF: 2.06)

[14]. Q. Li, S. Liu, J. Li and O. Bai. Use of a green familiar faces paradigm improves P300-speller brain-computer interface performance. PLoS ONE 10(6): e0130325, 2015. (SCI, IF: 3.534)

[15]. Q. Li, J. J. Yang, Y. Wu, X. J. Li, H. M. Yang. Research on the Brain Mechanisms of Audiovisual Information Integration (视听觉信息整合脑机制研究). National Defense Industry Press, Beijing, 2014.

[16]. Y. L. Gao #, Q. Li #, W. P. Yang, J. J. Yang, X. Y. Tang and J. L. Wu. Effects of ipsilateral and bilateral auditory stimuli on audiovisual integration: a behavioral and event-related potential study. Neuroreport, 25(9): 668-675, 2014. (SCI, IF: 1.52)

[17]. Q. Li, J. L. Wu*, and T. Touge. Audiovisual interaction enhances auditory detection in late stage: an ERP study. Neuroreport, 21(3): 173-178, 2010. (SCI, IF: 1.904)

[18]. Q. Li, J. J. Yang, N. Kakura and J. L. Wu. Multimodal audiovisual integration at early and late processing stages in humans: An event-related potential study. Information: An International Interdisciplinary Journal, 13(3)A: 807-816, 2010. (SCI)

[19]. J. L. Wu, Q. Li, O. Bai and T. Touge. Multisensory interactions elicited by audiovisual stimuli presented peripherally in a visual attention task: A behavioral and event-related potential study in humans. Journal of Clinical Neurophysiology, 26(6): 407-413, 2009. (SCI, IF: 1.74)

[20]. Q. Li and J. L. Wu. Multisensory interactions of audiovisual stimuli presented at different locations in visual-attention tasks. Information: An International Interdisciplinary Journal, 12(6): 1311-1320, 2009. (SCI)





