Review Credibility Assessment
Building robust multimodal models that detect both human-written and AI-generated fake reviews using textual, visual, temporal, and relational cues.
About
I work on multimodal learning, trustworthy NLP, and code understanding, with current projects spanning fake review detection and multimodal large models for software engineering.
My recent work includes MDCFN for robust multimodal review credibility assessment and CodeOCR, a study of vision-language models for code understanding.
I am currently an M.Sc. student in Library and Information Studies at Hohai University, and I also work with the LLM for Software Engineering Lab at Shanghai Jiao Tong University.
I am particularly interested in combining rigorous empirical evaluation with practical, well-engineered systems.
My work sits at the intersection of multimodal machine learning, information credibility, and software engineering.
Developing robust multimodal detectors for both human-written and AI-generated fake reviews by fusing textual, visual, temporal, and relational cues.
Exploring whether rendered code images and visual cues can let multimodal large models match or exceed text-based baselines on software engineering tasks.
Using machine learning and information retrieval methods to study practical information systems problems with an emphasis on reliable evaluation and strong engineering execution.
Selected publications and ongoing work.
Proceedings of ISSTA 2026 · 2026
Studies code-as-image representations for multimodal code understanding and shows how visual encoding can improve efficiency while remaining competitive on downstream tasks.
ACM AIWare 2026 Data and Benchmark Track Submission · 2026
Introduces ClassEval-Pro, a benchmark of 300 class-level code generation tasks across 11 domains, built through an automated three-stage pipeline with complexity enhancement, cross-domain class composition, and real-world GitHub code integration. Each task is validated by an LLM Judge Ensemble and test suites with over 90% line coverage. Experiments on five frontier LLMs under five generation strategies show that the best model reaches only 45.6% class-level Pass@1, while error analysis highlights logic and dependency errors as the main bottlenecks.
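For context, the class-level Pass@1 figure reported above is the standard pass@k metric for code generation. A minimal sketch of the usual unbiased estimator is below; this is an illustrative implementation of the general metric, not the paper's exact evaluation harness.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total completions sampled per task
    c: number of those completions that pass the task's test suite
    k: sample budget being scored (k = 1 for Pass@1)

    Returns the probability that at least one of k completions
    drawn from the n samples is correct.
    """
    if n - c < k:
        return 1.0  # too few failures to fill a k-sample without a pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per task, 5 pass -> Pass@1 = 0.5 for that task;
# a benchmark score averages this value over all tasks.
```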
International Journal of Intelligent Systems · 2026
Presents MDCFN, a multimodal architecture for robust review credibility assessment across textual, visual, and relational signals.
Research-first, with industry experience that informs implementation and systems thinking.
LLM for Software Engineering Lab (LLMSE), Shanghai Jiao Tong University
Advisor: Prof. Xiaodong Gu
Institute of Management Science, Hohai University
Hohai University
Hohai University
Inspur Morning Cloud Technologies Co., Ltd.
A few implementation-heavy projects that reflect both experimentation and execution.