Platform-Invariant Topic Modeling via Contrastive Learning to Mitigate Platform-Induced Bias

Koo, Minseo / Kim, Doeun / Han, Sungwon / Park, Sungkyu

Abstract

Cross-platform topic dissemination is a recurring subject in media analysis, yet analyses often fail to recover authentic topics because of platform-induced biases that arise when documents aggregated from multiple platforms are run through an existing topic model. This work examines how unique platform characteristics affect the performance of topic models and proposes a new approach to improve topic modeling. The data used in this study consist of 1.5 million posts collected with the keyword "ChatGPT" from three social media platforms. The devised model reduces platform influence in topic models by introducing a platform-invariant contrastive learning algorithm and removing platform-specific jargon word sets. The proposed approach was thoroughly validated through quantitative and qualitative experiments against standard and state-of-the-art topic models and demonstrated superior performance. This method can mitigate biases arising from platform influences when modeling topics in texts collected across multiple platforms.
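The abstract names a platform-invariant contrastive learning objective but gives no implementation details. The sketch below shows a generic InfoNCE-style contrastive loss over paired embeddings of the same posts viewed from two platform contexts; the function name, temperature value, and pairing scheme are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative, not the paper's exact method).

    z1, z2: (N, d) arrays of embeddings; row i of z1 and row i of z2 are
    two views of the same post (a positive pair), and all other rows in
    the batch serve as negatives.
    """
    # Cosine similarity via L2-normalized dot products.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature  # (N, N) similarity matrix

    # Row-wise log-softmax; the positive pair sits on the diagonal.
    m = logits.max(axis=1, keepdims=True)
    log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls matched cross-platform views of a post together while pushing apart unrelated posts, which encourages embeddings that carry topical rather than platform-specific signal.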

Issue Date
2024-11-14
Publisher
Association for Computational Linguistics (ACL)
URI
https://archives.kdischool.ac.kr/handle/11125/59036
URL
https://aclanthology.org/2024.findings-emnlp.650/
DOI
https://doi.org/10.18653/v1/2024.findings-emnlp.650
Conf. Name
The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
Place
Hyatt Regency Miami Hotel, US
Conference Date
2024-11-12
