Projects

Trustworthy Weakly Supervised Breast Cancer Detection in Ultrasound Imaging

Abstract: Breast cancer remains a significant health issue, being the most frequently diagnosed cancer among women in the United States. Early-stage breast cancer often lacks apparent symptoms; thus, many patients miss the best window for treatment. Breast Ultrasound (BUS) imaging has emerged as a crucial diagnostic tool. Yet inherent challenges in BUS imaging, such as low contrast and noise, impede accurate breast cancer diagnosis. Computer-aided diagnosis (CAD) systems have been proposed to help radiologists interpret BUS images, make more precise diagnoses, and reduce their workload. Breast cancer detection plays a crucial role in such CAD systems, but training a fully supervised detection model requires many manual annotations. Several weakly supervised object-detection frameworks have been developed to minimize this annotation requirement, and during the SpF 2023 award period the research team developed two such frameworks for natural images and breast cancer detection. However, trustworthiness is the most critical aspect of smart-health applications. Constructing a trustworthy deep-learning model requires extensive datasets, and previous methods do not offer a quantitative assessment of a model's trustworthiness. To overcome these hurdles, the researchers propose a new trustworthy, weakly supervised framework for breast cancer detection named BUSwiNet. Using only image-level labels (cancer/non-cancer), the framework identifies the bounding boxes of breast tumors in BUS images and determines whether a tumor is cancerous. Moreover, the research aims to develop a method based on Bayesian Neural Networks to evaluate the trustworthiness of breast cancer detection models quantitatively.
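The abstract does not specify how the Bayesian Neural Network evaluation would work; as a minimal illustrative sketch (not the authors' BUSwiNet design), one common Bayesian approximation is Monte Carlo dropout, where dropout stays active at inference time and the spread of repeated stochastic predictions serves as an uncertainty estimate. All names, the toy two-layer network, and the dropout rate below are assumptions for illustration only.

```python
import numpy as np

def mc_dropout_predict(x, w1, w2, n_samples=100, drop_p=0.5, rng=None):
    """Approximate a Bayesian predictive distribution via Monte Carlo dropout.

    Runs n_samples stochastic forward passes of a toy two-layer network,
    keeping dropout active at inference time. Returns the mean predicted
    probability (the prediction) and its standard deviation (an
    uncertainty score: high spread = low trust in the prediction).
    """
    rng = np.random.default_rng(rng)
    probs = []
    for _ in range(n_samples):
        h = np.maximum(x @ w1, 0.0)                 # hidden layer with ReLU
        mask = rng.random(h.shape) >= drop_p        # random dropout mask
        h = h * mask / (1.0 - drop_p)               # inverted dropout scaling
        logit = h @ w2
        probs.append(1.0 / (1.0 + np.exp(-logit)))  # sigmoid: P(cancer)
    probs = np.array(probs)
    return probs.mean(axis=0), probs.std(axis=0)

# Toy usage: one 4-feature input and random weights (placeholders for
# real image features and trained parameters).
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=(8, 1))
mean_p, std_p = mc_dropout_predict(x, w1, w2, n_samples=200, rng=1)
print(mean_p.shape, std_p.shape)  # both (1, 1)
```

In practice such a per-prediction uncertainty could be thresholded to flag low-confidence detections for radiologist review, which is one way a quantitative trustworthiness measure can be used clinically.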

Building a Trustworthy Deep Learning Model for Urban-Scene Image Segmentation: Robustness and Uncertainty Analysis

Abstract: Urban-scene image segmentation identifies components such as pedestrians, sidewalks, streets, and bus stops within city environments. This technology is crucial for understanding and modeling complex urban systems, aiding city planners and policymakers in designing inclusive policies and infrastructure to support underrepresented groups, such as individuals with vision impairments. It also plays a critical role in autonomous vehicles, traffic management, environmental planning, and public safety. Urban-scene image segmentation is a well-studied field with many publicly available datasets, and deep learning techniques perform well on them. However, these segmentation models are vulnerable to adversarial perturbations, minor input modifications deliberately designed to deceive the model. In particular, the ability of models trained on one dataset to generalize to adversarial or noise-disturbed data is crucial but often insufficient, so it is critical to develop robust models capable of withstanding such attacks. In this project, we aim to assess the robustness and generalization capabilities of cutting-edge segmentation models by evaluating their uncertainty and segmentation performance under adversarial perturbations. Furthermore, we will develop a novel contrastive learning diffusion model, ContraDiff, for trustworthy urban-scene image segmentation. The proposed method will combine a contrastive learning mechanism with latent diffusion approaches to improve the models' robustness and generalization under adversarial attacks.
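To make "minor input modifications deliberately designed to deceive the model" concrete, the sketch below shows one standard attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression classifier. This is a generic illustration of the threat model, not the attack or the models evaluated in this project; the function names and toy weights are assumptions.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Fast Gradient Sign Method against a logistic-regression 'model'.

    Computes the gradient of the binary cross-entropy loss with respect
    to the input and takes one eps-sized step in its sign direction,
    producing an input that differs from x by at most eps per feature
    yet increases the model's loss.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability
    grad_x = (p - y) * w           # d(BCE loss)/dx for sigmoid output
    return x + eps * np.sign(grad_x)

# Toy usage: perturb a correctly labeled input and compare losses.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.2)

def bce_loss(x_):
    # Binary cross-entropy for the true label y = 1
    return -np.log(1.0 / (1.0 + np.exp(-(x_ @ w + b))))

print(bce_loss(x_adv) > bce_loss(x))  # True: the tiny step raises the loss
```

The same one-step principle extends to segmentation networks, where the per-pixel loss is back-propagated to the image; robustness evaluations then measure how much segmentation quality and uncertainty degrade as eps grows.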