Cornac is a comparative framework for multimodal recommender systems. It focuses on making it convenient to work with models leveraging auxiliary data (e.g., item descriptive text and images, social networks, etc.). Cornac enables fast experiments and straightforward implementations of new models. It is highly compatible with existing machine learning libraries (e.g., TensorFlow, PyTorch).
Cornac is one of the frameworks recommended by ACM RecSys 2023 for the evaluation and reproducibility of recommendation algorithms. In addition, the implementation of the BPR model in Cornac has been recommended as a trustworthy baseline for RecSys comparisons by independent research.
Additional dependencies required by models are listed here.
Some algorithm implementations use OpenMP to support multi-threading. For macOS users, in order to run those algorithms efficiently, you might need to install gcc from Homebrew to get an OpenMP-capable compiler:
```shell
brew install gcc && brew link gcc
```
Getting started: your first Cornac experiment
Flow of an Experiment in Cornac
```python
import cornac
from cornac.eval_methods import RatioSplit
from cornac.models import MF, PMF, BPR
from cornac.metrics import MAE, RMSE, Precision, Recall, NDCG, AUC, MAP

# load the built-in MovieLens 100K dataset and split the data based on ratio
ml_100k = cornac.datasets.movielens.load_feedback()
rs = RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)

# initialize models, here we are comparing: Biased MF, PMF, and BPR
mf = MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123)
pmf = PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123)
bpr = BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123)
models = [mf, pmf, bpr]

# define metrics to evaluate the models
metrics = [MAE(), RMSE(), Precision(k=10), Recall(k=10), NDCG(k=10), AUC(), MAP()]

# put it together in an experiment, voilà!
cornac.Experiment(eval_method=rs, models=models, metrics=metrics, user_based=True).run()
```
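To build intuition for the ranking metrics used above, here is a minimal sketch (plain Python, not Cornac's internal implementation) of what Precision@k and Recall@k measure for a single user; Cornac averages such per-user scores over the test set.

```python
# Illustrative only: per-user Precision@k and Recall@k, defined the standard way.
def precision_recall_at_k(ranked_items, relevant_items, k):
    """Return (precision@k, recall@k) for one user's ranked recommendation list."""
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    precision = hits / k
    recall = hits / len(relevant_items) if relevant_items else 0.0
    return precision, recall

# Toy example: 3 of the top-5 recommendations are relevant,
# and the user has 4 relevant items in total.
p, r = precision_recall_at_k(["a", "b", "c", "d", "e"], {"a", "c", "e", "z"}, k=5)
print(p, r)  # 0.6 0.75
```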
Here, we provide a simple way to serve a Cornac model by launching a standalone web service with Flask. It is very handy for testing or creating a demo application. First, we install the dependency:
```shell
$ pip3 install Flask
```
Suppose we want to serve the trained BPR model from the previous example; first, we need to save it:
```python
bpr.save("save_dir", save_trainset=True)
```
After that, the model can be deployed easily by running the Cornac serving app as follows:

```shell
$ FLASK_APP='cornac.serving.app' \
  MODEL_PATH='save_dir/BPR' \
  MODEL_CLASS='cornac.models.BPR' \
  flask run --host localhost --port 8080

# Running on http://localhost:8080
```
Here we go, our model service is now ready. Let's get top-5 item recommendations for user "63":
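A request against the service could look like the following (the `/recommend` endpoint and its `uid`/`k` query parameters are our reading of the Cornac serving API; please verify against the serving documentation):

```shell
$ curl -X GET "http://localhost:8080/recommend?uid=63&k=5"
```

The response is a JSON payload containing the recommended item IDs for that user.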
If we want to exclude items a user has already interacted with in the training data, we need to provide TRAIN_SET, which was saved with the model earlier, when starting the serving app. We can also leverage a WSGI server for model deployment in production. Please refer to this guide for more details.
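As one illustrative WSGI setup (not an official recipe: the `cornac.serving.app:app` module path is an assumption based on the FLASK_APP value above, so check the deployment guide), the app could be served with Gunicorn:

```shell
$ pip3 install gunicorn
$ MODEL_PATH='save_dir/BPR' MODEL_CLASS='cornac.models.BPR' \
  gunicorn --workers 4 --bind localhost:8080 'cornac.serving.app:app'
```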
Model A/B testing
Cornac-AB is an extension of Cornac using the Cornac Serving API. Easily create and manage A/B testing experiments to further understand your model performance with online users.
User Interaction Solution
Recommendations Dashboard
Feedback Dashboard
Efficient retrieval with ANN search
One important aspect of deploying a recommender model is efficient retrieval via Approximate Nearest Neighbor (ANN) search in vector space. Cornac integrates several vector similarity search frameworks for ease of deployment. This example demonstrates how ANN search works seamlessly with any recommender model that supports it (e.g., matrix factorization).
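To make the retrieval problem concrete, here is a minimal sketch (plain Python, not Cornac's API) of exhaustive maximum inner product search, which is exactly what an ANN index approximates: for a factorization model, a user's score for an item is the dot product of their latent vectors, and serving means finding the top-k items by that score without scanning the whole catalog.

```python
# Illustrative baseline: exact top-k retrieval by inner product.
# ANN frameworks (e.g., HNSW-based indexes) trade a tiny accuracy loss
# for much faster lookups on large item catalogs.
def exact_top_k(user_vec, item_vecs, k):
    """Exhaustively score every item and return the k best item IDs."""
    scores = [
        (item_id, sum(u * v for u, v in zip(user_vec, vec)))
        for item_id, vec in item_vecs.items()
    ]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [item_id for item_id, _ in scores[:k]]

user = [0.5, 1.0]
items = {"i1": [1.0, 0.0], "i2": [0.0, 1.0], "i3": [1.0, 1.0]}
print(exact_top_k(user, items, k=2))  # ['i3', 'i2']
```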
The table below lists the recommendation models/algorithms featured in Cornac. Examples are provided as quick-starts, showcasing an easy-to-run script, or as deep dives, explaining the math and intuition behind each model. Why don't you join us to lengthen the list?
```
@article{truong2021exploring,
  title={Exploring Cross-Modality Utilization in Recommender Systems},
  author={Truong, Quoc-Tuan and Salah, Aghiles and Tran, Thanh-Binh and Guo, Jingyao and Lauw, Hady W},
  journal={IEEE Internet Computing},
  year={2021},
  publisher={IEEE}
}
```