Substitutable/complementary product recommendation

When a user has browsed a t-shirt, it is reasonable to retrieve similar t-shirts, i.e., substitutes; whereas if the user has already purchased one, it would be better to retrieve trousers, hats, or shoes as complements of the t-shirt. In [Wang 18], each product's embedding representation is first learned in a general semantic space. The embedding vectors are then projected into two separate spaces, one for substitutes and one for complements, via a novel mapping function. Finally, path constraints are incorporated into the embeddings to further boost the discriminative ability of the model.
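The two-space idea can be sketched as follows. This is a minimal toy, not the trained model of [Wang 18]: the item names and the random projection matrices `W_sub` and `W_comp` are hypothetical placeholders standing in for the learned mappings, and the path constraints are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared d-dimensional item embeddings, learned in a
# general semantic space (random here for illustration).
d, k = 8, 4
items = {"tshirt_a": rng.normal(size=d),
         "tshirt_b": rng.normal(size=d),
         "trousers": rng.normal(size=d)}

# Two separate projection matrices, one per relation type; in [Wang 18]
# these mappings are learned jointly with path constraints.
W_sub = rng.normal(size=(k, d))   # projects into the substitute space
W_comp = rng.normal(size=(k, d))  # projects into the complement space

def score(a, b, W):
    """Cosine similarity of two items after projecting into one space."""
    u, v = W @ items[a], W @ items[b]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The same pair of items gets different scores depending on which
# relation-specific space the comparison happens in.
sub_score = score("tshirt_a", "tshirt_b", W_sub)
comp_score = score("tshirt_a", "trousers", W_comp)
```

Because substitution and complementarity are scored in different projected spaces, one shared embedding per item can still support two distinct relation types.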

Given a network of items, the objective in [Rakesh 19] is to learn content features that explain the relationships between items (e.g., an Xbox) and their substitutes (e.g., a PS4) and supplements (game controllers, a surround system, a travel case). To this end, a generative deep learning model links two variational autoencoders through a connector neural network, forming a Linked Variational Autoencoder (LVA). The LVA is in turn extended with collaborative filtering (CF) to create CLVA, which captures the implicit relationships between users and items.
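The wiring of the LVA can be illustrated with a minimal forward pass: one VAE encodes a source item, a connector network maps its latent code to the latent space of a second VAE, which decodes the linked item. All weights below are random placeholders, not the trained parameters of [Rakesh 19], and the training objective (ELBO plus link loss) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_z = 6, 3

# Random placeholder weights; in the LVA all of these are trained jointly.
W_enc = rng.normal(size=(d_z * 2, d_in))  # encoder of VAE-A -> (mu, logvar)
W_con = rng.normal(size=(d_z, d_z))       # connector network: z_A -> z_B
W_dec = rng.normal(size=(d_in, d_z))      # decoder of VAE-B

def encode(x):
    """Encode features into a latent sample via the reparameterization trick."""
    h = W_enc @ x
    mu, logvar = h[:d_z], h[d_z:]
    eps = rng.normal(size=d_z)
    return mu + np.exp(0.5 * logvar) * eps

x_a = rng.normal(size=d_in)   # content features of the source item
z_a = encode(x_a)             # latent code from VAE-A
z_b = np.tanh(W_con @ z_a)    # connector conditions VAE-B on VAE-A's latent
x_b_hat = W_dec @ z_b         # VAE-B reconstructs the linked item's features
```

The connector is what makes the autoencoders "linked": the latent code of the second item is generated conditioned on the first, rather than independently.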

Attribute-aware collaborative filtering (A2CF) [Chen 20], instead of directly modeling user-item interactions, extracts explicit, polarized item attributes from user reviews via sentiment analysis, after which the representations of attributes, users, and items are learned simultaneously. Then, treating attributes as the bridge between users and items, user-item preferences (i.e., personalization) and item-item relationships (i.e., substitution) are modeled for recommendation. In addition, A2CF can generate intuitive interpretations by analyzing which attributes a user currently cares about most and comparing the recommended substitutes with the currently browsed items at the attribute level.
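The attribute-as-bridge idea can be sketched with toy numbers. The item names, attribute scores, and the particular scoring formula below are hypothetical simplifications (A2CF learns these representations from review sentiment); the sketch only shows how attributes jointly drive substitution scoring and attribute-level explanation.

```python
import numpy as np

attrs = ["battery", "screen", "price"]
# Hypothetical polarized attribute scores per item (+1 praised in reviews,
# -1 criticized); in A2CF these come from sentiment analysis of reviews.
item_attrs = {"phone_x": np.array([0.9, 0.2, -0.5]),
              "phone_y": np.array([0.8, 0.4, 0.1]),
              "tablet_z": np.array([-0.3, 0.9, 0.2])}
user_pref = np.array([0.7, 0.1, 0.2])  # how much the user cares per attribute

def substitute_score(browsed, candidate):
    # Attributes bridge items and users: combine item-item substitutability
    # with the user's attribute-level preference (personalization).
    sim = item_attrs[browsed] @ item_attrs[candidate]                 # substitution
    gain = user_pref @ (item_attrs[candidate] - item_attrs[browsed])  # personal gain
    return sim + gain

ranked = sorted(["phone_y", "tablet_z"],
                key=lambda c: substitute_score("phone_x", c), reverse=True)
# Attribute-level interpretation: the attribute this user cares about most.
top_attr = attrs[int(np.argmax(user_pref))]
```

Here `phone_y` outranks `tablet_z` as a substitute for `phone_x`, and the same attribute vectors that produced the ranking also yield the explanation ("this user cares most about battery").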

Unlike simple relationships such as similarity, complementariness is asymmetric and non-transitive. Standard representation learning uses only a single set of embeddings, which makes such properties hard to model. In [Xu 20], contextual knowledge is encoded into the product representations via multi-task learning to alleviate sparsity. By explicitly modeling user bias terms, the noise of customer-specific preferences is separated from complementariness. Furthermore, a dual-embedding framework is adopted to capture the intrinsic properties of complementariness and to provide a geometric interpretation motivated by the classic separating-hyperplane theory. Finally, a Bayesian network structure unifies all the components.
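Why a single embedding set fails here, and how dual embeddings help, can be shown in a few lines. Each item gets a "source" and a "target" vector, so the directed score s(i → j) = src[i] · tgt[j] need not equal s(j → i). The item names and random vectors are illustrative only, not the trained model of [Xu 20].

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
# Dual embeddings: an independent "source" and "target" vector per item.
# With one shared embedding set, the dot-product score would be symmetric.
src = {it: rng.normal(size=d) for it in ("printer", "ink", "paper")}
tgt = {it: rng.normal(size=d) for it in ("printer", "ink", "paper")}

def comp_score(i, j):
    """Directed complementariness: how well item j complements item i."""
    return float(src[i] @ tgt[j])

forward = comp_score("printer", "ink")   # ink complements a printer...
backward = comp_score("ink", "printer")  # ...generally more than the reverse
```

Because the two directions use different vector pairs, the scores differ, matching the asymmetric and non-transitive nature of complementariness that a single embedding set cannot express.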

In [Yan 22], product relations and user preferences are modeled with a graph attention network and a sequential behavior transformer, respectively. The two networks are combined through personalized re-ranking and contrastive learning, with the user and product embeddings learned jointly in an end-to-end fashion. The system recognizes different customer interests by learning from purchase histories and the correlations among customers and products.
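The re-ranking step can be sketched with stand-in vectors. The random embeddings below take the place of the graph-attention product vectors and the transformer-encoded behavior sequence of [Yan 22], and the recency-weighted attention pooling is a deliberate simplification of the sequential model; only the blending of relation score and user affinity is illustrated.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
# Toy product embeddings (stand-ins for graph-attention outputs).
prod = {p: rng.normal(size=d) for p in ("case", "charger", "tripod")}
history = [rng.normal(size=d) for _ in range(3)]  # encoded purchase sequence

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def user_vector(hist):
    # Simple attention pooling over the behavior sequence (a stand-in for
    # the sequential transformer): later purchases get larger logits.
    w = softmax(np.arange(len(hist), dtype=float))
    return sum(wi * hi for wi, hi in zip(w, hist))

u = user_vector(history)
anchor = rng.normal(size=d)  # embedding of the product just purchased

# Personalized re-ranking: product-relation score blended with user affinity.
scores = {p: float(anchor @ v + u @ v) for p, v in prod.items()}
reranked = sorted(scores, key=scores.get, reverse=True)
```

The same candidate set yields different orderings for different users, because the user vector shifts the blended score, which is the point of personalizing complementary recommendations.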

[Wang 18] A Path-constrained Framework for Discriminating Substitutable and Complementary Products in E-commerce

[Rakesh 19] Linked Variational AutoEncoders for Inferring Substitutable and Supplementary Items

[Chen 20] Try This Instead: Personalized and Interpretable Substitute Recommendation

[Xu 20] Knowledge-aware Complementary Product Representation Learning

[Yan 22] Personalized complementary product recommendation
