This repository contains the source code and datasets associated with the paper titled "Exploring Cognitive and Aesthetic Causality for Multimodal Aspect-Based Sentiment Analysis."
- conda env create -f Chimera.yaml
- Constructed datasets: Twitter2015 (`twitter2015`), Twitter2017 (`twitter2017`), and Political Twitter (`political_twitter`).
- Image features can be downloaded from Google Drive. Place the downloaded files in `data/twitter2015` and `data/twitter2017`, respectively. For the `political_twitter` dataset, move the contents of `data/twitter2015` and `data/twitter2017` into `data/political_twitter` and extract the two `.zip` files into the same directory.
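The `political_twitter` preparation step above can be scripted. The helper below is a hypothetical sketch (not part of this repository) that copies, rather than moves, the two Twitter datasets so they remain usable on their own, then extracts any `.zip` archives found in the target directory; the exact archive names depend on what was downloaded.

```python
import shutil
import zipfile
from pathlib import Path

def prepare_political_twitter(data_root="data"):
    """Copy twitter2015/twitter2017 contents into political_twitter
    and extract any .zip archives placed there (hypothetical helper)."""
    root = Path(data_root)
    target = root / "political_twitter"
    target.mkdir(parents=True, exist_ok=True)
    # Copy (instead of move) so the original datasets stay intact.
    for src in ("twitter2015", "twitter2017"):
        for item in (root / src).iterdir():
            dest = target / item.name
            if item.is_dir():
                shutil.copytree(item, dest, dirs_exist_ok=True)
            else:
                shutil.copy2(item, dest)
    # Extract the downloaded archives into the same directory.
    for archive in target.glob("*.zip"):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
```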
- The Flan-T5 model is used as the backbone. Download the pre-trained `google/flan-t5-base` model and save it in `pretrained/flan-t5-base`.
python run_chimera_15.py
python run_chimera_17.py
python run_chimera_political.py
If you find this repository beneficial, we kindly encourage you to cite our related papers and consider starring the repository.
@article{xiao2025exploring,
  title={Exploring Cognitive and Aesthetic Causality for Multimodal Aspect-Based Sentiment Analysis},
  author={Xiao, Luwei and Mao, Rui and Zhao, Shuai and Lin, Qika and Jia, Yanhao and He, Liang and Cambria, Erik},
  journal={arXiv preprint arXiv:2504.15848},
  year={2025}
}

@article{xiao2024atlantis,
  title={Atlantis: Aesthetic-oriented multiple granularities fusion network for joint multimodal aspect-based sentiment analysis},
  author={Xiao, Luwei and Wu, Xingjiao and Xu, Junjie and Li, Weijie and Jin, Cheng and He, Liang},
  journal={Information Fusion},
  volume={106},
  pages={102304},
  year={2024},
  publisher={Elsevier}
}

@inproceedings{xiao2024vanessa,
  title={Vanessa: Visual connotation and aesthetic attributes understanding network for multimodal aspect-based sentiment analysis},
  author={Xiao, Luwei and Mao, Rui and Zhang, Xulang and He, Liang and Cambria, Erik},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  pages={11486--11500},
  year={2024}
}

This work is primarily built upon the repositories of MDCA and LAPS. We extend sincere gratitude to everyone who contributed to this project for their invaluable support and dedication.
