Conference Paper

SDMG: Smoothing Your Diffusion Models for Powerful Graph Representation Learning

Authors

Zhu, Junyou
Potsdam Institute for Climate Impact Research;

He, Langzhou
External Organizations;

Gao, Chao
External Organizations;

Hou, Dongpeng
External Organizations;

Su, Zhen
Potsdam Institute for Climate Impact Research;

Yu, Philip S.
External Organizations;

Kurths, Jürgen
Potsdam Institute for Climate Impact Research;

Hellmann, Frank
Potsdam Institute for Climate Impact Research;

Fulltext (public)

zhu25g.pdf
(Publisher version), 2MB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Zhu, J., He, L., Gao, C., Hou, D., Su, Z., Yu, P. S., Kurths, J., Hellmann, F. (2025): SDMG: Smoothing Your Diffusion Models for Powerful Graph Representation Learning - Proceedings of Machine Learning Research, International Conference on Machine Learning (Vancouver, Canada 2025), 21 p.


Cite as: https://publications.pik-potsdam.de/pubman/item/item_33418
Abstract
Diffusion probabilistic models (DPMs) have recently demonstrated impressive generative capabilities. There is emerging evidence that their sample reconstruction ability can yield meaningful representations for recognition tasks. In this paper, we demonstrate that the objectives underlying generation and representation learning are not perfectly aligned. Through a spectral analysis, we find that minimizing the mean squared error (MSE) between the original graph and its reconstructed counterpart does not necessarily optimize representations for downstream tasks. Instead, focusing on reconstructing a small subset of features, specifically those capturing global information, proves to be more effective for learning powerful representations. Motivated by these insights, we propose a novel framework, the Smooth Diffusion Model for Graphs (SDMG), which introduces a multi-scale smoothing loss and low-frequency information encoders to promote the recovery of global, low-frequency details, while suppressing irrelevant high-frequency noise. Extensive experiments validate the effectiveness of our method, suggesting a promising direction for advancing diffusion models in graph representation learning.
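
Illustrative sketch (not from the paper): the abstract does not give the multi-scale smoothing loss in closed form, so the short PyTorch example below shows one plausible reading of it, comparing original and reconstructed node features after several steps of low-pass smoothing with the symmetrically normalized adjacency matrix, so that the error is dominated by global, low-frequency structure. The function names, the smoothing operator, and the choice of scales are all assumptions made for illustration.

import torch

def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    # Symmetric normalization with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}.
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

def multiscale_smoothing_loss(x, x_rec, adj, scales=(1, 2, 4)):
    # Hypothetical reading of a "multi-scale smoothing loss": MSE between
    # original and reconstructed node features after k smoothing steps, for
    # several k. Each multiplication by A_hat acts as a low-pass graph filter,
    # so larger k weights the loss toward global, low-frequency content.
    a_hat = normalized_adjacency(adj)
    loss, xs, xr, k_prev = 0.0, x, x_rec, 0
    for k in scales:
        for _ in range(k - k_prev):        # smooth incrementally up to scale k
            xs, xr = a_hat @ xs, a_hat @ xr
        k_prev = k
        loss = loss + torch.mean((xs - xr) ** 2)
    return loss / len(scales)

# Toy usage: a 4-node path graph with random 8-dimensional features.
adj = torch.tensor([[0., 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
x = torch.randn(4, 8)
x_rec = x + 0.1 * torch.randn(4, 8)  # stand-in for a diffusion reconstruction
print(multiscale_smoothing_loss(x, x_rec, adj).item())

Repeated multiplication by the normalized adjacency attenuates rapidly varying (high-frequency) components of a graph signal, which mirrors the abstract's emphasis on recovering global, low-frequency information while suppressing high-frequency noise.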