PH.D DEFENCE - PUBLIC SEMINAR

Causal Recommender Systems

Speaker
Mr. Wang Wenjie
Advisor
Dr Chua Tat-Seng, KITHCT Chair Professor, School of Computing


09 Mar 2023 Thursday, 10:30 AM to 12:00 PM

MR20, COM3-02-59

Abstract:

Recommender systems have been widely deployed to alleviate information overload in applications such as e-commerce and social networks. Technically, recommender models learn personalized user preferences from users' historical interactions (e.g., clicks). However, many interference factors (e.g., items' deceptive titles) affect the interaction process, injecting bias into the interaction data. Such bias prevents the historical interactions from being an ideal representation of user preference, hindering accurate preference learning. To alleviate these bias issues, we propose a causal recommender framework, which first studies how the biases are generated and then mitigates them through causal modeling.

First, recommender models inevitably inherit the data bias and carry it into future recommendations. For example, users usually have a higher probability of clicking items with more attractive exposure features (e.g., titles and cover images). Trained on such biased interactions, models frequently recommend clickbait items with deceptive exposure features. To mitigate this clickbait bias, we first estimate the causal effect of exposure features on recommendations during the recommender learning procedure, and then remove the harmful effect via counterfactual inference.
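The counterfactual-inference step can be sketched as follows. This is a minimal illustration under an assumed multiplicative fusion of a content-match score and an exposure-appeal score; the fusion form, the reference constant `c`, and the function names are assumptions for illustration, not the thesis's exact model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def counterfactual_rank(match, exposure, c=0.5):
    # Factual prediction: content-match score modulated by exposure appeal.
    factual = match * sigmoid(exposure)
    # Counterfactual prediction: hold the match score at a reference
    # constant c, so that only the exposure features drive the click.
    counterfactual = c * sigmoid(exposure)
    # Subtracting removes the exposure-only (clickbait) direct effect,
    # so items cannot rank highly on attractive exposure features alone.
    return factual - counterfactual
```

With this adjustment, an item whose appeal comes mostly from its exposure features (high `exposure`, mediocre `match`) is demoted relative to an item with genuine content match.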

In addition to inheriting bias, the models often suffer from bias amplification. Users' interaction distributions over item groups are naturally imbalanced, and models trained on such imbalanced data amplify the imbalance by over-recommending items from the majority groups. By inspecting the cause-effect factors in the recommender learning procedure, we find that the reason lies in the confounding effect of the biased interaction distribution. We therefore adopt causal intervention to achieve deconfounded training, significantly alleviating bias amplification in the resulting recommendation lists.
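The causal intervention above can be illustrated with a backdoor adjustment. The sketch below is an assumption-laden simplification: it treats the biased interaction distribution as a discrete confounder d and averages predictions over an assumed population prior P(d), rather than conditioning on each user's own imbalanced history.

```python
import numpy as np

def deconfounded_score(cond_preds, confounder_prior):
    # Backdoor adjustment over an assumed confounder d:
    #   P(Y | do(U, I)) = sum_d P(Y | U, I, d) * P(d)
    # cond_preds[i, k] is the prediction for item i when the confounder
    # takes its k-th value; confounder_prior[k] is the prior P(d = k).
    return cond_preds @ confounder_prior
```

Intervening in this way prevents a user's skewed historical group distribution from dominating the prediction, which is what drives bias amplification in the conditional model.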

Although deconfounded training alleviates bias amplification, the historical majority groups still occupy most of the recommendation lists. Over time, the homogeneous items from these groups isolate users from diverse content, leading to filter bubbles. We argue that users should have the right to control filter bubbles, and we conceptually propose a new user-controllable recommender system. The key to achieving user control is mitigating the effect of historical interactions that are inconsistent with the user's desired controls to escape from filter bubbles. To this end, we contribute a user-controllable inference strategy that reduces the causal effect of out-of-date user representations via counterfactual inference and dynamically reranks the recommendations according to user controls.
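A controllable reranking step of this flavour might look like the sketch below. The two weights `alpha` (how strongly to discount the out-of-date representation) and `beta` (how strongly to honour the user's "show me less of this group" control) are assumed hyperparameters, and the linear score decomposition is an illustrative simplification.

```python
import numpy as np

def controllable_rerank(full_scores, outdated_scores, item_groups,
                        avoided_group, alpha=0.5, beta=0.2):
    # Counterfactual step: subtract the part of each score attributable
    # to the out-of-date user representation (alpha is an assumed weight).
    adjusted = full_scores - alpha * outdated_scores
    # Control step: penalise items from the group the user asked to see
    # less of (beta is an assumed penalty strength).
    adjusted = adjusted - beta * (item_groups == avoided_group)
    # Return item indices ranked best-first by the adjusted score.
    return np.argsort(-adjusted)
```

Because the control is applied at inference time, the user can change `avoided_group` interactively without retraining the underlying model.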

Lastly, the data bias in the interactions can also drift over time because of changes in user features (e.g., an income increase), causing out-of-distribution (OOD) bias. The proposed user-controllable recommender system can partly handle OOD bias, but it ignores the causal relationships among user features, preference, and interactions. To bridge this gap, we introduce causal representation learning to model these causal relationships during training. We again adopt counterfactual inference to mitigate the effect of out-of-date interactions, and leverage post-intervention inference to predict the latest preference from the latest user features.
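The combination of post-intervention inference and counterfactual down-weighting can be sketched as below. The linear mapping `W` stands in for a learned structural equation from user features to preference, and `gamma` is an assumed mixing weight; both are illustrative assumptions rather than the thesis's actual parameterization.

```python
import numpy as np

def ood_preference(old_pref, new_features, W, gamma=0.8):
    # Post-intervention step: re-run the (assumed linear) structural
    # equation from user features to preference with the *new* features,
    # i.e., an intervention setting the features to their latest values.
    feature_pref = new_features @ W
    # Counterfactual step: down-weight the preference representation
    # inferred from out-of-date interactions (gamma is an assumed weight).
    return gamma * feature_pref + (1.0 - gamma) * old_pref
```

When user features shift (e.g., after an income increase), the prediction tracks the feature-driven preference rather than being anchored to stale interactions.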

To summarize, we propose a causal recommender framework for debiasing, which advances traditional correlation learning to causal modeling. Extensive experiments on real-world datasets demonstrate the effectiveness of our causal framework.