1 | How does the pre-training objective affect what large language models learn about linguistic properties? ...

BASE
2 | Automatic Identification and Classification of Bragging in Social Media ...
7 | Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
9 | Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification ...
10 | Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience ...
13 | In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering ...
14 | Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ...
16 | Machine Extraction of Tax Laws from Legislative Texts
In: Proceedings of the Natural Legal Language Processing Workshop 2021 (2021)
17 | Point-of-Interest Type Prediction using Text and Images ...
18 | Point-of-Interest Type Prediction using Text and Images ...
19 | An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction ...

Anthology paper link: https://aclanthology.org/2021.emnlp-main.722/

Abstract:
Target-oriented opinion words extraction (TOWE) (Fan et al., 2019b) is a new subtask of target-oriented sentiment analysis that aims to extract opinion words for a given aspect in text. Current state-of-the-art methods leverage position embeddings to capture the relative position of a word to the target. However, the performance of these methods depends on the ability to incorporate this information into word representations. In this paper, we explore a variety of text encoders based on pretrained word embeddings or language models that leverage part-of-speech and position embeddings, aiming to examine the actual contribution of each component in TOWE. We also adapt a graph convolutional network (GCN) to enhance word representations by incorporating syntactic information. Our experimental results demonstrate that BiLSTM-based models can effectively encode position information into word representations while using a GCN only ...

Keywords:
Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing

URL: https://dx.doi.org/10.48448/gvhf-5432
https://underline.io/lecture/37375-an-empirical-study-on-leveraging-position-embeddings-for-target-oriented-opinion-words-extraction