
One Person, One Model--Learning Compound Router for Sequential Recommendation

Published 5 Nov 2022 in cs.IR (arXiv:2211.02824v2)

Abstract: Deep learning has brought significant breakthroughs in sequential recommendation (SR) by capturing dynamic user interests. A series of recent studies revealed that models with more parameters usually achieve better performance on SR tasks, which inevitably creates great challenges for deploying them in real systems. Following the simple assumption that light networks might already suffice for certain users, in this work we propose CANet, a conceptually simple yet very scalable framework that assigns an adaptive network architecture in an input-dependent manner to reduce unnecessary computation. The core idea of CANet is to route the input user behaviors with a lightweight router module. Specifically, we first construct the routing space with various submodels parameterized along multiple model dimensions, such as the number of layers, the hidden size, and the embedding size. To avoid extra storage overhead for the routing space, we employ a weight-slicing scheme that maintains all the submodels within exactly one network. Furthermore, we leverage several solutions to the discrete optimization issues caused by the router module. These allow CANet to adaptively adjust its network architecture for each input in an end-to-end manner, so that user preferences can be effectively captured. To evaluate our work, we conduct extensive experiments on benchmark datasets. Experimental results show that CANet reduces computation by 55-65% while preserving the accuracy of the original model. Our code is available at https://github.com/icantnamemyself/CANet.
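The weight-slicing idea described in the abstract can be illustrated with a minimal sketch: all submodels share one full weight matrix, and a smaller submodel is obtained by slicing a corner of that matrix, so no extra storage is needed per submodel. The class names, the toy argmax router, and the candidate hidden sizes below are illustrative assumptions, not the authors' implementation (the paper learns the router end-to-end with techniques for discrete optimization).

```python
import torch
import torch.nn as nn

class SlicedLinear(nn.Module):
    """One full weight matrix; submodels are slices of it (weight-slicing)."""
    def __init__(self, max_in, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, out_dim):
        # Use only the first `out_dim` output units: a smaller submodel
        # carved out of the shared parameters, with no duplicate storage.
        w = self.weight[:out_dim, : x.shape[-1]]
        b = self.bias[:out_dim]
        return x @ w.t() + b

class ToyRouter(nn.Module):
    """Picks a hidden size per input. A hard argmax is used here for
    simplicity; a trainable version would need a relaxation such as
    Gumbel-softmax or a straight-through estimator."""
    def __init__(self, in_dim, choices=(16, 32, 64)):
        super().__init__()
        self.choices = choices
        self.score = nn.Linear(in_dim, len(choices))

    def forward(self, x):
        idx = self.score(x).argmax(dim=-1)  # one architecture choice per example
        return [self.choices[i] for i in idx.tolist()]

layer = SlicedLinear(max_in=64, max_out=64)
router = ToyRouter(in_dim=64)
x = torch.randn(2, 64)
dims = router(x)                                  # e.g. [32, 64]
outs = [layer(x[i], dims[i]) for i in range(x.shape[0])]
```

Routing each input to a narrower slice is what yields the reported compute savings: easy inputs take the small submodel, hard inputs the full one.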

Citations (9)
