OpenAlex
GraSAME: Injecting Token-Level Structural Information to Pretrained Language Models via Graph-guided Self-Attention Mechanism
Year: 2024
Type: preprint
Abstract (truncated): Pretrained Language Models (PLMs) benefit from external knowledge stored in graph structures for various downstream tasks. However, bridging the modality gap between graph structures and text remains … (a sketch of the general idea of graph-guided attention follows this record)
Related to: 10
Open Access status: green
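
The record stops at a title-level description of the method. For orientation only: one common way to inject token-level graph structure into a Transformer's self-attention, which the title suggests GraSAME relates to, is to mask attention scores with a token-level adjacency matrix. The PyTorch sketch below illustrates that general idea; it is an assumption-based illustration, not GraSAME's actual mechanism, and the function name, shapes, and masking scheme are hypothetical.

```python
import torch
import torch.nn.functional as F

def graph_masked_attention(q, k, v, adjacency):
    """Single-head self-attention restricted by a token-level graph.

    q, k, v   : (seq_len, d) query/key/value matrices
    adjacency : (seq_len, seq_len) boolean mask; adjacency[i, j] is True
                when token j is a graph neighbour of token i. Include
                self-loops so every row has at least one attendable token.
    """
    d = q.size(-1)
    # Scaled dot-product scores, as in standard Transformer attention.
    scores = q @ k.transpose(-2, -1) / d ** 0.5        # (seq_len, seq_len)
    # Block attention between tokens that are not connected in the graph.
    scores = scores.masked_fill(~adjacency, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v                                  # (seq_len, d)

# Tiny usage example with a hypothetical 5-token sequence.
seq_len, d = 5, 16
q = k = v = torch.randn(seq_len, d)
adj = torch.eye(seq_len, dtype=torch.bool)  # self-loops only
adj[0, 1] = adj[1, 0] = True                # one illustrative edge
out = graph_masked_attention(q, k, v, adj)
print(out.shape)  # torch.Size([5, 16])
```

Hard masking is only one way to guide attention with a graph; other approaches instead add a learned bias to the attention scores per edge or edge type rather than blocking disconnected pairs outright.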