Text generation [3]: explainable recommendation
Current approaches to generating sentence-level explanations are either limited to predefined templates, which restrict expressiveness, or opt for free-style generation, which makes quality control difficult. The NEural TEmplate (NETE) explanation generation framework [Li 20] brings the best of both worlds by learning templates from data and generating template-controlled sentences that comment on specific features, resulting in diverse yet controllable explanations.
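To make "template-controlled, feature-conditioned" concrete, here is a deliberately minimal sketch of the idea, not NETE's actual model (which learns a neural decoder jointly with rating prediction rather than picking from a fixed list): templates carry a feature slot, and generation is conditioned on the feature the explanation should comment on. All template strings and function names below are illustrative.

```python
import random

# Hypothetical templates mined from review data; <feature> is the slot
# the model is asked to comment on.
TEMPLATES = [
    "the <feature> is great and well worth the price",
    "you will love the <feature> of this item",
    "the <feature> could be better, but overall it works",
]

def explain(user_id: str, item_id: str, feature: str) -> str:
    """Pick a template and fill the feature slot. NETE learns this choice
    (and the wording) neurally per user-item pair; the random pick here
    just stands in for that learned selection."""
    template = random.choice(TEMPLATES)
    return template.replace("<feature>", feature)

print(explain("u1", "i9", "battery life"))
# e.g. "the battery life is great and well worth the price"
```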
Instead of merely correlating preference prediction and explanation generation through shared user and item embeddings, DualPC [Sun 20] casts the two tasks in dual forms: the input of the primal preference prediction task p(R|C) is exactly the output of the dual review generation task p(C|R), where R and C denote the preference-value and review spaces. The probabilistic correlation between the two dual tasks can therefore be modeled explicitly via p(R,C) = p(R|C)p(C) = p(C|R)p(R). A unified dual framework injects this probabilistic duality into the training stage. Furthermore, since detailed preference and review information is unavailable for each user-item pair at test time, a transfer-learning-based model is proposed for both tasks.
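In log space the duality identity becomes log p(R|C) + log p(C) = log p(C|R) + log p(R), which suggests a regularizer penalizing its violation during joint training. A minimal PyTorch sketch of such a term follows; it assumes the two task models and marginal estimators (e.g., a language model for log p(C)) expose per-example log-probabilities, and the weighting name `lambda_dual` is illustrative.

```python
import torch

def duality_loss(log_p_r_given_c: torch.Tensor,   # primal: rating given review
                 log_p_c_given_r: torch.Tensor,   # dual: review given rating
                 log_p_r: torch.Tensor,           # marginal over ratings
                 log_p_c: torch.Tensor) -> torch.Tensor:
    """Penalize violations of log p(R|C) + log p(C) = log p(C|R) + log p(R)."""
    gap = (log_p_r_given_c + log_p_c) - (log_p_c_given_r + log_p_r)
    return (gap ** 2).mean()

# Illustrative training objective:
#   total = primal_nll + dual_nll + lambda_dual * duality_loss(...)
```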
Existing approaches take considerable time to train the underlying language model for text generation. ReXPlug [Hada 21] is an end-to-end framework that explains recommendations in a plug-and-play manner: a sentiment classifier is trained to control a pretrained language model during generation, bypassing the need to train the language model from scratch. Generated reviews are personalized via a jointly trained cross-attention network. Not only are the reviews closer to the ground truth, but the rating prediction also outperforms earlier approaches.
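The essence of plug-and-play control is that the language model stays frozen while a small external classifier steers its output. The sketch below renders this in its simplest form, reranking sampled candidates by classifier score; ReXPlug itself couples the two more tightly (PPLM-style gradient control plus the cross-attention personalization network), so treat this only as an illustration of the division of labor.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # frozen pretrained LM
classifier = pipeline("sentiment-analysis")             # small plug-in control

def plug_and_play_review(prompt: str, target: str = "POSITIVE",
                         n_candidates: int = 8) -> str:
    candidates = generator(prompt, max_new_tokens=40, do_sample=True,
                           num_return_sequences=n_candidates)
    texts = [c["generated_text"] for c in candidates]
    # Keep the candidate the classifier scores highest for the target
    # sentiment; the LM weights are never updated.
    scored = classifier(texts)
    best = max(range(len(texts)),
               key=lambda i: scored[i]["score"] if scored[i]["label"] == target
               else -1.0)
    return texts[best]

print(plug_and_play_review("This camera"))
```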
Recent models can generate fluent and grammatical synthetic reviews while accurately predicting user ratings. The generated reviews, expressing users’ estimated opinions towards related products, are often viewed as natural language ‘rationales’ for the jointly predicted rating. However, previous studies found that existing models often generate repetitive, universally applicable, and generic explanations, resulting in uninformative rationales and factual hallucinations. Inspired by recent success in using retrieved content in addition to parametric knowledge for generation, PRAG augments a generator with the output of a personalized retriever as external knowledge [Xie 22].
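The retrieval-augmented recipe can be summarized in a few lines: a personalized retriever fetches relevant review snippets, which are prepended to the generator's input as non-parametric knowledge. The sketch below uses plain cosine-similarity retrieval over precomputed embeddings; the retriever, corpus, and prompt format are illustrative stand-ins, not PRAG's actual components.

```python
import numpy as np

def retrieve(query_vec: np.ndarray, doc_vecs: np.ndarray,
             docs: list[str], k: int = 3) -> list[str]:
    """Cosine-similarity retrieval of the k most relevant review snippets."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    return [docs[i] for i in np.argsort(-sims)[:k]]

def build_prompt(user_profile: str, item: str, snippets: list[str]) -> str:
    """Prepend retrieved evidence so the generator can ground its rationale."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (f"Reviews relevant to {user_profile} and {item}:\n{context}\n"
            f"Explain why {item} is recommended:")

# The prompt is then passed to any seq2seq/causal LM; grounding generation
# in retrieved text is what counters generic, hallucinated rationales.
```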
Variational autoencoders (VAEs) have been widely applied in recommendation thanks to their amortized inference, which helps overcome data sparsity. A VAE can generate acceptable explanations even for users with few relevant training samples; however, for users with relatively sufficient samples, its explanations are less personalized than those of autoencoders (AEs), because information shared across users in the VAE disturbs the information specific to each user. To address this, PErsonalized VAE (PEVAE) introduces two novel mechanisms [Cai 22]: 1) Self-Adaption Fusion (SAF) manipulates the latent space to control the influence of shared information, overcoming data sparsity while generating more personalized explanations for users with relatively sufficient training samples; 2) DEpendence Maximization (DEM) strengthens the dependence between recommendations and explanations by maximizing their mutual information, making explanations more specific to the input user-item pair.
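A hedged PyTorch sketch of both mechanisms follows. SAF is rendered as a learned gate that mixes a user-specific latent code with the shared one; DEM as an InfoNCE-style lower bound on the mutual information between recommendation and explanation representations (a common MI estimator, assumed here rather than taken from the paper). Module names and shapes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAdaptionFusion(nn.Module):
    """Gate in [0,1] deciding how much shared (cross-user) information
    to admit into a specific user's latent code."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, z_user: torch.Tensor, z_shared: torch.Tensor):
        g = torch.sigmoid(self.gate(torch.cat([z_user, z_shared], dim=-1)))
        return g * z_user + (1 - g) * z_shared

def infonce_mi(rec: torch.Tensor, expl: torch.Tensor, temp: float = 0.1):
    """InfoNCE lower bound on MI between paired rec/expl vectors in a batch;
    maximizing it ties each explanation to its own user-item pair."""
    rec = F.normalize(rec, dim=-1)
    expl = F.normalize(expl, dim=-1)
    logits = rec @ expl.t() / temp               # (B, B) similarity matrix
    labels = torch.arange(rec.size(0))           # positives on the diagonal
    return -F.cross_entropy(logits, labels)      # negate: a bound to maximize
```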