GraSAME: Injecting Token-Level Structural Information to Pretrained Language Models via Graph-guided Self-Attention Mechanism
Year: 2024
Type: preprint
Abstract: Pretrained Language Models (PLMs) benefit from external knowledge stored in graph structures for various downstream tasks. However, bridging the modality gap between graph structures and text remains …
Source: arXiv (Cornell University)
Authors: Shuzhou Yuan, Michael Färber
Related to: 10
Topic: Topic Modeling
Subfield: Artificial Intelligence
Field: Computer Science
Domain: Physical Sciences
Open Access status: green