Hi, I'm Chen Liu (刘晨).

A self-driven, quick-starting, and passionate researcher with a curious mind who enjoys solving challenging and interesting problems in AI for healthcare.

Acknowledgements: Many thanks to Varad Bhogayata for kindly providing this cool website template.

About

I am currently a Research Assistant at the Smart Health Center at Hong Kong Polytechnic University, where I have the privilege of working closely with Dr. Wenfang Yao, Prof. Kejing Yin, and Prof. Harry Qin on advancing multi-modal learning applications in healthcare. My research journey began with a Bachelor's degree in Software Engineering from South China University of Technology.

I am also honored to be an intern at the Computational Cognition, Vision, and Learning (CCVL@JHU) Lab at Johns Hopkins University, where I work with Dr. Zongwei Zhou and Prof. Alan L. Yuille on exploring the application of generative AI in healthcare.

My passion lies in Multi-modality Learning and Medical Image Analysis. Recently, I have been particularly interested in modeling the time-varying dynamics of image sequences and irregular time-series data to enhance our understanding of disease progression.

Outside of research, I am an enthusiastic runner and a dedicated volleyball player. I love problem-solving and coding, always striving to bring my best effort to every project.

I am looking for a PhD opportunity in a challenging position that combines my skills in Machine Learning, Programming, and Data Analysis, and that offers interesting experiences and personal growth.

Education

South China University of Technology

Guangzhou, China

Degree: Bachelor of Engineering in Software Engineering
GPA: 3.85/4.0

    Relevant Coursework:

    • Linear Algebra and Analytic Geometry (98)
    • Probability and Mathematical Statistics (97)
    • Database Systems (95)
    • Operating System (94)
    • Machine Learning (95)

Publications

Addressing Asynchronicity in Clinical Multimodal Fusion via Individualized Chest X-ray Generation

Wenfang Yao*, Chen Liu*, Kejing Yin, William K. Cheung, Jing Qin (*These authors contributed equally.)

NeurIPS 2024

[Paper] [PDF] [Code]

Experience

Research Assistant | Smart Health Center, The Hong Kong Polytechnic University
  • Generated personalized CXR images with latent diffusion models, integrating asynchronous and heterogeneous CXR and EHR data to capture anatomical and disease-progression information.
  • Developed a multi-task EHR encoder to extract imaging-relevant and disease-progression information for dynamic image generation.
  • Conducted extensive experiments and qualitative analyses showing that our model generates high-quality CXR images and outperforms existing methods on multi-modal clinical prediction tasks.
  • Key Words: Multi-modal Fusion, Diffusion Model, PyTorch
Dec 2023 - Present | Hong Kong, China
Intern | CCVL Lab, Johns Hopkins University
  • Analyzed hard cases in AI segmentation of pancreatic tumors, identifying patterns to guide more realistic tumor synthesis.
  • Developed synthesis models for pancreatic tumors with varied characteristics using conditional diffusion models, improving the diagnostic performance of current AI models across different pancreatic tumor types.
  • Participated in the construction of a large, publicly accessible lesion dataset with per-voxel annotations for 10,136 CT scans, including 60,038 lesions across six organs.
  • Key Words: Synthetic Tumor, Tumor Segmentation, Benchmark Building
Jun 2024 - Nov 2024 | Remote
Research Assistant
  • Extracted semantic masks using MedSAM and integrated them with the input data as prior knowledge to facilitate report generation.
  • Converted LaBERT from non-autoregressive to autoregressive decoding and optimized inference with beam search, achieving comparable results.
  • Developed a clinical loss function for image classification to improve finding awareness and proposed a method for extracting topic-related finding knowledge from pre-trained report models.
  • Key Words: Radiology Report Generation, Controllable Image Captioning
May 2022 - Jul 2024 | Guangzhou, China
Intern
  • Developed a near-infrared retro-reflective patch for rapid normal vector measurement with optical navigation systems.
  • Created an accurate robot arm positioning algorithm, integrating robot, marker, and optical tracking coordinates for precise injection angle control.
  • Integrated coordinate acquisition, hand-eye calibration, and automatic injection into a C++/Qt-based robot control platform.
  • Key Words: Medical Robot Control, Hand-eye Calibration
May 2022 - Jan 2023 | Guangzhou, China

Skills

  • Programming Languages: Python, C++ (Qt), Go, R
  • Technologies: Artificial Neural Networks/Machine Learning (PyTorch, scikit-learn, NumPy, Pandas), Data Processing, SQL
  • English Proficiency: IELTS 7.0
  • Tools: Git, LaTeX

Contact