
Released

Conference Paper

SDMG: Smoothing Your Diffusion Models for Powerful Graph Representation Learning

Authors

Zhu, Junyou
Potsdam Institute for Climate Impact Research

He, Langzhou
External Organizations

Gao, Chao
External Organizations

Hou, Dongpeng
External Organizations

Su, Zhen
Potsdam Institute for Climate Impact Research

Yu, Philip S.
External Organizations

Kurths, Jürgen
Potsdam Institute for Climate Impact Research

Hellmann, Frank
Potsdam Institute for Climate Impact Research

External Resources

Full texts (freely accessible)

zhu25g.pdf
(publisher version), 2 MB

Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Zhu, J., He, L., Gao, C., Hou, D., Su, Z., Yu, P. S., Kurths, J., Hellmann, F. (2025): SDMG: Smoothing Your Diffusion Models for Powerful Graph Representation Learning - Proceedings of Machine Learning Research, International Conference on Machine Learning (Vancouver, Canada 2025), 21 p.


Citation link: https://publications.pik-potsdam.de/pubman/item/item_33418
Abstract
Diffusion probabilistic models (DPMs) have recently demonstrated impressive generative capabilities. There is emerging evidence that their sample reconstruction ability can yield meaningful representations for recognition tasks. In this paper, we demonstrate that the objectives underlying generation and representation learning are not perfectly aligned. Through a spectral analysis, we find that minimizing the mean squared error (MSE) between the original graph and its reconstructed counterpart does not necessarily optimize representations for downstream tasks. Instead, focusing on reconstructing a small subset of features, specifically those capturing global information, proves to be more effective for learning powerful representations. Motivated by these insights, we propose a novel framework, the Smooth Diffusion Model for Graphs (SDMG), which introduces a multi-scale smoothing loss and low-frequency information encoders to promote the recovery of global, low-frequency details, while suppressing irrelevant high-frequency noise. Extensive experiments validate the effectiveness of our method, suggesting a promising direction for advancing diffusion models in graph representation learning.
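The multi-scale smoothing loss described in the abstract can be illustrated with a minimal NumPy sketch. This is a toy reading of the idea, not the paper's implementation: the function name, the scale set, and the use of a symmetrically normalized adjacency as the low-pass filter are all illustrative assumptions. The intuition it captures is that repeated multiplication by the normalized adjacency damps high-frequency graph signal components, so comparing smoothed versions of the original and reconstructed features at several scales weights the reconstruction toward global, low-frequency information.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_scale_smoothing_loss(A, X, X_hat, scales=(1, 2, 4)):
    """MSE between original features X and reconstruction X_hat
    after k low-pass smoothing steps, averaged over several scales k.

    Each multiplication by the normalized adjacency attenuates
    high-frequency components, so the loss emphasizes agreement on
    low-frequency (global) structure. The scale set is an assumption.
    """
    A_norm = normalized_adjacency(A)
    loss = 0.0
    for k in scales:
        S_x, S_xh = X.copy(), X_hat.copy()
        for _ in range(k):
            S_x = A_norm @ S_x    # one smoothing (low-pass) step
            S_xh = A_norm @ S_xh
        loss += np.mean((S_x - S_xh) ** 2)
    return loss / len(scales)
```

In this reading, a plain MSE on the raw features would penalize high-frequency discrepancies equally, whereas the smoothed comparison suppresses them, consistent with the abstract's claim that recovering a small, global subset of the signal is what matters for representation quality.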