Bag of Lies: Robustness in Continuous Pre-training BERT

Ine Gevers, Walter Daelemans

University of Antwerp

This study aims to gain more insight into the continuous pre-training phase of BERT with regard to entity knowledge, using the COVID-19 pandemic as a case study. Since the pandemic emerged after the last update of BERT’s pre-training data, the model has little to no entity knowledge about COVID-19. Using continuous pre-training, we control what entity knowledge is available to the model. We compare the baseline BERT model with its further pre-trained variants on the fact-checking benchmark Check-COVID. To test the robustness of continuous pre-training, we experiment with several adversarial methods of manipulating the input data, such as training on misinformation, shuffling the word order until the input becomes nonsensical, and using questionable information sources like Reddit. Surprisingly, our findings reveal that these methods do not degrade, and sometimes even improve, the model’s downstream performance. This suggests that continuous pre-training of BERT is robust against misinformation. While prior work has explored the impact of altering the word order or using nonsensical data in continuous pre-training, to the best of our knowledge, our work is the first to investigate the effects of deliberately using false data as a form of adversarial attack. Furthermore, we release a new dataset consisting of original texts from academic publications in the LitCovid repository and their AI-generated false counterparts.
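For readers unfamiliar with the setup, the sketch below shows what continuous (domain-adaptive) pre-training of BERT with masked language modelling typically looks like, including an optional word-order-shuffling manipulation of the kind described above. It is a minimal illustration only: the corpus file name, hyperparameters, and the shuffling option are assumptions for the example, not the exact configuration used in the paper.

```python
# Minimal sketch of continuous pre-training of BERT with masked language
# modelling (Hugging Face Transformers). Corpus path, hyperparameters, and
# the SHUFFLE_WORDS option are illustrative assumptions.
import random

from datasets import load_dataset
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

SHUFFLE_WORDS = False  # set True to mimic the word-order-shuffling manipulation

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# One pre-training document per line, e.g. LitCovid abstracts (hypothetical file).
corpus = load_dataset("text", data_files={"train": "covid_corpus.txt"})["train"]

def preprocess(batch):
    texts = batch["text"]
    if SHUFFLE_WORDS:
        # Adversarial manipulation: randomly permute the words of each document.
        texts = [" ".join(random.sample(t.split(), len(t.split()))) for t in texts]
    return tokenizer(texts, truncation=True, max_length=128)

tokenized = corpus.map(preprocess, batched=True, remove_columns=["text"])

# Standard BERT objective: mask 15% of the input tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-covid-continued",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

The resulting checkpoint can then be fine-tuned and evaluated on a downstream benchmark such as Check-COVID in the same way as the original bert-base model, which is what allows the pre-training variants to be compared.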