Jiaru Zhang
Stealthy Backdoor Attack in Federated Learning via Adaptive Layer-wise Gradient Alignment
Qingqian Yang, Peishen Yan, Xiaoyu Wu, Jiaru Zhang, Tao Song, Yang Hua, Hao Wang, Liangliang Wang, Haibing Guan
Leveraging Model Guidance to Extract Training Data from Personalized Diffusion Models
Diffusion Models (DMs) have evolved into advanced image generation tools, especially for few-shot fine-tuning where a pretrained DM is …
Xiaoyu Wu, Jiaru Zhang, Steven Wu
PDF · Code
Learning Identifiable Structures Helps Avoid Bias in DNN-based Supervised Causal Learning
In this paper, we propose a novel DNN-based method for supervised causal learning that addresses systematic biases in existing methods, with a newly designed pairwise encoder serving as the core architecture.
Jiaru Zhang, Rui Ding, Qiang Fu, Bojun Huang, Zizhen Deng, Yang Hua, Haibing Guan, Shi Han, Dongmei Zhang
PDF · Cite · Code · Poster
CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion
Xiaoyu Wu, Yang Hua, Chumeng Liang, Jiaru Zhang, Hao Wang, Tao Song, Haibing Guan
SPOT: Harnessing Differentiable Causal Discovery in the Presence of Latent Confounders with Skeleton Posterior
Pingchuan Ma, Rui Ding, Qiang Fu, Jiaru Zhang, Shuai Wang, Shi Han, Dongmei Zhang
Information Bound and its Applications in Bayesian Neural Networks
In this paper, we propose the Information Bound as a metric of the amount of information in Bayesian neural networks. Unlike mutual information on deterministic neural networks, which usually requires modifying the network structure or using specific input data, the Information Bound can be easily estimated on current Bayesian neural networks without any modification of network structures or training processes.
Jiaru Zhang, Yang Hua, Tao Song, Hao Wang, Zhengui Xue, Ruhui Ma, Haibing Guan
PDF · Cite · Code · Poster
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples
In this paper, we are the first to explore and propose utilizing adversarial examples for DMs to protect human-created artworks. Specifically, we first build a theoretical framework to define and evaluate adversarial examples for DMs. Then, based on this framework, we design a novel algorithm, named AdvDM, which exploits a Monte Carlo estimation of adversarial examples for DMs by optimizing over different latent variables sampled from the reverse process of DMs.
Chumeng Liang, Xiaoyu Wu, Yang Hua, Jiaru Zhang, Yiming Xue, Tao Song, Zhengui Xue, Ruhui Ma, Haibing Guan
PDF · Cite · Code
Improving Bayesian Neural Networks by Adversarial Sampling
In this paper, we argue that the randomness of sampling in Bayesian neural networks causes errors in the updating of model parameters during training and yields some sampled models with poor performance in testing. We propose training Bayesian neural networks with an Adversarial Distribution as a theoretical solution, and further present the Adversarial Sampling method as a practical approximation.
Jiaru Zhang, Yang Hua, Tao Song, Hao Wang, Zhengui Xue, Ruhui Ma, Haibing Guan
PDF · Cite · Code · Poster · Slides · Video
Robust Bayesian Neural Networks by Spectral Expectation Bound Regularization
We propose Spectral Expectation Bound Regularization (SEBR) to enhance the robustness of Bayesian neural networks. Our theoretical analysis reveals that training with SEBR improves robustness to adversarial noise. We also prove that training with SEBR reduces the epistemic uncertainty of the model.
Jiaru Zhang, Yang Hua, Zhengui Xue, Tao Song, Chengyu Zheng, Ruhui Ma, Haibing Guan
PDF · Cite · Code · Poster · Slides