
Accuracy improvements for cold-start recommendation problem using indirect relations in social networks

Abstract

Recent advances in Internet applications have facilitated information spreading and, thanks to a wide variety of mobile devices and the burgeoning 5G networks, users gain access to information easily and quickly. The enormous amount of digital information has also contributed to the emergence of recommender systems, which help filter information. As mobile networks have pushed forward the growth of social media and users have become accustomed to posting whatever they do and wherever they visit on the Web, these rapid social media updates make it difficult for users to locate historical data. For this reason, this paper presents a social network-based recommender system. Our purpose is to build a user-centered recommender system that excludes the products users are not interested in, based on user preferences and their friends’ shopping experiences, so as to make recommendations effective. Since a new product or service may have no corresponding reference value, we use indirect relations between friends and “friends’ friends” as well as sentinel friends to improve recommendation accuracy. The simulation results show that the proposed mechanism effectively enhances recommendation accuracy.

Introduction

With the development of modern technology and the widespread use of mobile devices, people nowadays have convenient access to information, and recommender systems are therefore extensively adopted in various commercial and educational fields. Whenever users are interested in a specific product or service, they may first ask their friends about their shopping experiences and then search for more information on the Internet [1]. Nevertheless, it is very time-consuming to read the customer reviews of every item in a search results list. For example, to find the best restaurants in a designated area, users may browse websites like Google Maps and check the comments and star ratings one by one. Reviews that carry star ratings but no comments, however, are of doubtful value.

Recommender systems have been applied in various fields, such as the tourism, food, and film industries [2, 3]. There are many restaurants and movies to choose from, but not every choice meets a given user's needs, because each person has different preferences, whether for prices, brands, or specifications. In such a context, we began to consider whether we could build a user-centered, personalized recommender system.

Modern recommender systems are becoming more and more complex as technology improves; they demand considerable resources [4, 5] and raise privacy issues. Different phases of the recommendation process may also put private data at risk of exposure [6]. For example, in the user modeling phase, every user's information is accessed to confirm his or her identity.

In this paper, we propose a novel recommender system that integrates with social networking websites, such as Facebook, Twitter and Yelp, to deal specifically with the cold-start problem. When users share information with friends, they not only facilitate the spread of messages, help promote products or services, and attract more consumers, but also allow the recommender system to learn user preferences from the users' or their friends' most recent posts and relevant shopping experiences [7, 8]. With such information, even in a cold-start condition where new products or services have no ratings or reviews, the recommender system can still make follow-up recommendations, and users save time in finding ideal targets. Through the newly established review procedure, users can find suitable products or services more quickly and accurately [9,10,11].

This paper is organized as follows. “Background and related work” section describes the background and related work, including recommender systems, Web 2.0 and social networks. “Methods” section states the problem and how to solve it. “Analysis” section gives the experiment and methodology, and results and discussion are presented in “Result and discussion” section. The conclusion is given in “Conclusion” section.

Background and related work

This section introduces recommender systems, Web 2.0 and social networks.

Recommender systems

According to Resnick and Varian [12], recommender systems filter information based on user preferences and, for user convenience, provide information that users might be interested in. Schafer et al. [13] further mentioned three advantages of recommender systems: converting browsers into buyers, increasing cross-sell, and building loyalty. Traditional recommender systems generally require explicit or implicit user interactions: explicit methods rely on explicit user ratings, while implicit methods are based on observations of user behavior. Table 1 compares their differences.

Table 1 Recommender system comparison

Figure 1 displays the three most popular recommendation approaches: collaborative filtering, content-based filtering and hybrid recommenders. Collaborative filtering, first introduced by Goldberg et al. [3] in 1992, was presented in their email filtering system, Tapestry. Given the known preferences of a group of users, the system could help other users perform filtering and make recommendations. The collaborative filtering approach collects user ratings on items to predict user preferences: based on the opinions of other users who share similar interests, it filters items and makes recommendations, so people help each other perform filtering [14, 15]. In today's e-commerce recommender systems, collaborative filtering, which makes recommendations according to the shopping experiences of similar users, is the most important and widely used approach. Table 2 compares the advantages and disadvantages of the three recommendation approaches mentioned above.

Fig. 1 Recommendation approaches

Table 2 Comparison of three recommendation approaches

Cold-start problem

The term cold start originally referred to starting a computer from a powered-off state; in recommendation, it extends to the state of a new product or user before the system has established an effective relationship with the user and begun to accumulate content and interactions. The cold-start problem is a common research question in recommender systems [16,17,18]: its core difficulty is that the system has not collected enough user information and therefore cannot make reliable inferences about items.

Figure 2 shows three categories of cold-start problems:

  • New community: when a new system launches, there may be many items in the catalog but little user interaction, making it difficult to provide reliable recommendations.

  • New items: newly added items may have relevant content information but no user interactions yet.

  • New users: new users may enter the system without any interaction history, so the system cannot make personalized recommendations for them.

Fig. 2 Cold-start problems

In recent years, methods like transfer learning [19], active learning [20], and zero-shot learning [21] are often used to solve cold-start problems.

Brief intro to Web 2.0

Rather than being defined by a software standard, Web 2.0 is considered a platform: user-centered web applications that enable information sharing and collaborative work on the Internet. Typical Web 2.0 applications include RSS, blogs, wikis, social networking websites and so on. The differences between Web 1.0 and Web 2.0 are shown in Table 3.

Table 3 Differences between Web 1.0 and Web 2.0.

While Web 1.0 was a one-way information provider with little interaction between the user and the website, Web 2.0 is user-oriented.

Really simple syndication (RSS)

RSS is a format for delivering regularly changing web content, such as blog posts and news headlines. Users or applications that subscribe to RSS feeds receive the most recent updates.

Blog

Blogs, one of the typical Web 2.0 applications, allow users to run their own blogs or websites and become content sources, effectively turning them into self-media. Each blog may contain text, pictures, graphics, audio or video. Well-known blog service providers in Taiwan have included the “Wretch” and “PIXNET” blogs.

Wiki

A wiki is an open, collaborative system in which anyone can publish, edit and share content. All types of users can contribute knowledge, and peers can edit and help improve it.

Link: https://en.wikipedia.org/wiki/Wiki.

Social network

Social networks, which rely on human-to-human interaction, have become a new form of communication; Facebook is a typical example. Social networks are also the basis of our proposed recommender system and are further described in the following section.

Social network

A social network is composed of a group of people who share similar personal interests and can be a way to stay connected or befriend others. Based on this idea, social networking websites are online platforms that people use to build social networks. Using the Internet, users can interact and share information with each other in real time without face-to-face communication. In this paper, social networking websites are used for the simulation because each has a great number of users and a massive amount of personal data.

Take Facebook as an example: it recommends new friends to users and helps them reconnect with long-lost friends. Games that users can play on Facebook, such as Happy Farm, bring families and friends together and strengthen their relationships.

Facebook: in addition to text messages, Facebook users can send others information such as images, photos and voice messages. Users can also add others as friends, connect with them, and receive automatic notifications when friends edit personal information or post status updates. The advantages and disadvantages of social media are presented in Table 4.

Table 4 Advantages and disadvantages of social media

EdgeRank is the algorithm that Facebook originally used to decide which posts to show first in each user’s News Feed. Boring stories are hidden by the algorithm. “So, if your story doesn’t score well, no one will see it.”

Facebook, at the 2010 F8 Conference, revealed that they used three metrics to calculate EdgeRank:

$$\sum\limits_{{edges\;e}} {u_{e} w_{e} d_{e} } ,$$
(1)
  • \(u_{e}\) (Affinity Score): how “connected” is a particular user to the edge?

  • \(w_{e}\) (Edge Weight): what actions were taken by the user on the content?

  • \(d_{e}\) (Time Decay): how old is the post?
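To make Eq. (1) concrete, the following is a minimal sketch of an EdgeRank-style score in Python. The affinity and weight values, the exponential half-life decay, and the constants are illustrative assumptions for this sketch; Facebook never published its actual parameters.

```python
import time

def edgerank(edges, now=None, half_life_hours=24.0):
    """Toy EdgeRank-style score: sum over edges of affinity * weight * time decay.

    Each edge is a dict with 'affinity' (u_e), 'weight' (w_e) and 'created_at'
    (Unix timestamp). The half-life decay is an illustrative choice only.
    """
    now = now or time.time()
    score = 0.0
    for e in edges:
        age_hours = max(now - e["created_at"], 0.0) / 3600.0
        decay = 0.5 ** (age_hours / half_life_hours)   # d_e
        score += e["affinity"] * e["weight"] * decay   # u_e * w_e * d_e
    return score

# Example: a recent like from a close friend plus an older comment from an acquaintance
post_edges = [
    {"affinity": 0.9, "weight": 1.0, "created_at": time.time() - 2 * 3600},
    {"affinity": 0.4, "weight": 3.0, "created_at": time.time() - 48 * 3600},
]
print(round(edgerank(post_edges), 3))
```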

In 2014, Facebook CEO Mark Zuckerberg declared in a press conference that “our goal is to build the perfect personalized newspaper for every person in the world.” This newspaper would “show you the stuff that’s going to be most interesting to you”.

In 2015, Facebook made adjustments to its News Feed algorithm and decided what to prioritize according to the equation: News Feed Visibility = \(I \times P \times C \times T \times R.\)

  • \(I\) (Interest) = interest of the user in the creator.

  • P (Post) = the post’s performance amongst other users.

  • C (Creator) = performance of past posts by the content creator amongst other users.

  • T (Type) = types of posts (status, photos, links) that user prefers.

  • R (Recency) = how new the post is.

Facebook, in January 2018, took another move: prioritizing the posts from users’ friends and family, and de-prioritizing content from businesses, brands, and media.

Yelp

Yelp was initially an email-based system with which users could email their friends restaurant recommendations; however, friends might receive too many emails. Yelp later added a review system in which business owners cannot edit or remove the content on their business pages. To encourage users to write reviews, the company built the Yelp Elite Squad to recognize people who are active in the Yelp community and serve as role models on and off the Yelp site. Yelp Elite members are invited to local events and meet-ups, inspiring others to contribute their own opinions. Most review sites focus on a single product or service, such as hotels or restaurants, and offer no social networking functions; Yelp has both.

Yelp has three major functions: (1) Check-In: this function synchronizes with Facebook so that users and their friends can see the ranking of check-in locations by number of check-ins; (2) Tips: a way to post short remarks separately from long reviews; (3) Compliment: users can send a compliment about a review or to a reviewer.

Methods

Based on the intimacy and friendships on Facebook, this paper uses indirect relations on Yelp as the data source to deal with the cold-start problem [23, 24].

Problem statement

People may search the Internet for reviews of a shop, a location or a product. However, even once the search is narrowed to a particular range, users need to check the reviews one by one, which is enormously time-consuming; they cannot find what they need quickly.

Generally speaking, most people favor reviews written by their friends or coworkers over those written by strangers. A cold-start problem means that the recommender system cannot make recommendations for a new user with no history. For example, without enough user history, Facebook would rely on friends with similar interests to alleviate the cold-start problem; but if such information is scarce, the system is still unable to make recommendations [24,25,27].

To cope with the above-mentioned problems, we use Yelp check-ins and reviews to determine the relationships between users. Although this is not a novel method, it lets us find a user's interests and the interest similarities with his/her friends (see Figs. 3 and 4). Moreover, unlike other systems, we use sentinel user selection [9] (see Fig. 5) as the basis of the recommender system; the sentinel user selection algorithm follows Miao et al. [9].

Fig. 3 Direct relation

Fig. 4 Indirect relation

Fig. 5 Sentinel user on top layer

System framework

The system framework (Fig. 6) consists of three parts: (1) data acquisition, including user interest analysis and product analysis; (2) personalized recommender system, including direct and indirect relations; and (3) list of recommendations for users.

Fig. 6 System framework

Procedures

Our method includes the following steps: (1) gather user data and user reviews, (2) select the business category (Food), (3) select the food types (five types of cuisine), (4) test the recommendation data, and (5) analyze the recommendation data. The flowchart of the procedure is presented in Fig. 7.

Fig. 7 Flowchart of the procedure

Analysis

The simulation is based on the crowd-sourced reviews from the Yelp Dataset (Link: https://www.yelp.com/dataset/).

Simulation design

Yelp collects reviews not only for food but for businesses in general; the Yelp Dataset currently contains 1293 kinds of businesses in total. Restaurants in the “Food” category account for most of the reviews, and the roughly 140 restaurant labels defined by Yelp are the focus of our simulation.
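As a preview of steps 1–4 listed below, here is a minimal data-preparation sketch. The file names follow the public Yelp Dataset JSON files, and the exact field handling (e.g., treating the dataset's useful/funny/cool vote fields as the paper's “useful, interesting, or cool”) is our assumption rather than the authors' released code.

```python
import json

# The five cuisine types used in the simulation (label text may differ slightly
# from the dataset's exact category strings; matching is done case-insensitively).
TOP5 = {"american (traditional)", "italian", "chinese", "american (new)", "mexican"}

def load_restaurants(path="yelp_academic_dataset_business.json"):
    """Steps 1-2: keep businesses whose category list intersects the five cuisine types."""
    keep = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            b = json.loads(line)
            cats = {c.strip().lower() for c in (b.get("categories") or "").split(",")}
            if cats & TOP5:
                keep[b["business_id"]] = cats & TOP5
    return keep

def load_reviews(restaurants, path="yelp_academic_dataset_review.json"):
    """Steps 3-4: keep one review per (user, business, day) and drop reviews
    that received no useful/funny/cool votes."""
    seen, reviews = set(), []
    with open(path, encoding="utf-8") as f:
        for line in f:
            r = json.loads(line)
            if r["business_id"] not in restaurants:
                continue
            if r.get("useful", 0) + r.get("funny", 0) + r.get("cool", 0) == 0:
                continue
            key = (r["user_id"], r["business_id"], r["date"][:10])
            if key in seen:
                continue
            seen.add(key)
            reviews.append(r)
    return reviews
```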

Steps

  1. Extract the restaurant categories, because the Yelp Dataset provides not only restaurant categories but also entertainment categories.

  2. Classify the restaurant categories and retrieve the data of 13,273 restaurants in the top 5 food types: American (traditional), Italian, Chinese, American (new) and Mexican.

  3. Load the reviews. If a user wrote more than one review for the same restaurant on the same day, keep only one.

  4. Remove the reviews that are not voted useful, interesting, or cool.

  5. Compute the interest value \(I_u(c)\).

     • With user interest information,

       $$I_{u}(c) = \frac{n_{c}}{\sum\nolimits_{i=1}^{j} n_{i}},\quad 1 \le c \le j,$$
       (2)

       • \(n_{c}\): number of the user's comments in category c.

       • \(n_{i}\): number of the user's comments in category i.

       • \(j\): number of categories.

     • Without user interest information,

       $$I_{u}(c) = \frac{\sum\nolimits_{i=1}^{n} F_{i}(u) \times I_{f_i}(c)}{\sum\nolimits_{i=1}^{n} F_{i}(u)},\quad 1 \le i \le n,$$
       (3)

       • \(I_{f_i}(c)\): interest level of the user's i-th friend in category c.

       • \(n\): number of the user's friends on Yelp.

       • \(F_{i}(u)\): user-to-friend interaction score based on the facebook-friend-rank algorithm [28].

  6. Calculate the recommendation ratio for the user (a code sketch of steps 5 and 6 follows this list):

     $$R(c) = \frac{I_{u}(c)}{\sum\nolimits_{k=1}^{j} I(k)},\quad 1 \le c \le j,$$
     (4)

     • \(I(k)\): the user's interest level in category k.
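The following is a minimal Python sketch of Eqs. (2)–(4); the dictionaries, friend identifiers and interaction scores are illustrative stand-ins for values extracted from the Yelp Dataset and the facebook-friend-rank scores, not the authors' actual implementation.

```python
def interest_from_history(category_counts):
    """Eq. (2): I_u(c) = n_c / sum_i n_i, from the user's own review counts per category."""
    total = sum(category_counts.values())
    return {c: n / total for c, n in category_counts.items()} if total else {}

def interest_from_friends(friend_scores, friend_interests):
    """Eq. (3): interaction-weighted average of friends' interest values.

    friend_scores:    {friend_id: F_i(u)}, interaction score per friend
    friend_interests: {friend_id: {category: I_fi(c)}}
    """
    categories = {c for prefs in friend_interests.values() for c in prefs}
    denom = sum(friend_scores.values())
    if not denom:
        return {}
    return {c: sum(friend_scores[f] * friend_interests[f].get(c, 0.0)
                   for f in friend_scores) / denom
            for c in categories}

def recommendation_ratio(interest):
    """Eq. (4): R(c) = I_u(c) / sum_k I(k)."""
    denom = sum(interest.values())
    return {c: v / denom for c, v in interest.items()} if denom else {}

# Cold-start example: the user has no history, so Eq. (3) falls back on two friends
friends = {"f1": 0.7, "f2": 0.3}                       # F_i(u) interaction scores
friend_prefs = {"f1": {"Italian": 0.6, "Chinese": 0.4},
                "f2": {"Mexican": 0.5, "Italian": 0.5}}
print(recommendation_ratio(interest_from_friends(friends, friend_prefs)))
```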

Test steps

All the parameters in each category are weighted to compute the popularity weight so that users receive the items with the highest R(c).

There are two weighting variables: (1) popularity of place, P(p): users' desired restaurants are usually popular attractions or iconic spots; and (2) places that have been visited by friends, F(p): users are also interested in the places their friends have already visited. A code sketch of the following steps is given after the list. The other parameters include:

  • \(W(p)\): weight value of popularity.

  • \(\alpha\): weight value of acceptance.

  • \(P(p)\): popularity of the place.

  • \(\beta\): weight value of friends' reviews.

  • \(F(p)\): places that have been visited by friends.

  1. Calculate the popularity of a place, \(P(p)\), and the friend-visit score, \(F(p)\), described above:

     $$P(p) = \frac{n_{ch,p}}{\max(n_{ch,p})} + \frac{n_{li,p}}{\max(n_{li,p})},$$
     (5)

     • \(n_{ch,p}\): number of reviews for the location.

     • \(n_{li,p}\): rating of the location.

     $$F(p) = \frac{n_{c}}{n_{s}},$$
     (6)

     • \(n_{s}\): number of reviews for the location.

     • \(n_{c}\): rating of the location.

  2. Calculate the sentinel user's interest in a specific item, \(A(p)\). We design the behavior weights according to the Rocchio algorithm: \(SC = 0.5\) when the sentinel user has reviews in this category rated 4 or 5, \(SC = 0.2\) for a rating of 3, and \(SC = -0.5\) for ratings of 1 or 2.

     $$A(p) = \frac{\sum\nolimits_{i=1}^{n_{p}} SC_{i}}{n_{p}}.$$
     (7)
  3. Calculate the weight value \(W'(p)\). A positive \(A(p)\) means that the sentinel user is interested and a negative value means that he or she is not, so we fine-tune \(W(p)\) into the new \(W'(p)\):

     $$W'(p) = W(p) + A(p).$$
     (8)
  4. Calculate the new interest value \(I'_{u}(c)\):

     $$I'_{u}(c) = I_{u}(c) + \alpha_{c} A(p).$$
     (9)
  5. Adjust \(R(c)\) with the new interest value \(I'_{u}(c)\):

     $$R(c) = \frac{I'_{u}(c)}{\sum\nolimits_{k=1}^{j} I(k)},\quad 1 \le c \le j.$$
     (10)
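A compact sketch of the test steps above. The linear combination W(p) = α·P(p) + β·F(p) is our reading of how the two weighting variables are combined (the text lists α and β but does not write the combination explicitly), and all numeric values are illustrative.

```python
def place_popularity(n_reviews, max_reviews, rating, max_rating):
    """Eq. (5): normalized review count plus normalized rating."""
    return n_reviews / max_reviews + rating / max_rating

def friend_visit_score(n_c, n_s):
    """Eq. (6): F(p) = n_c / n_s."""
    return n_c / n_s if n_s else 0.0

def sentinel_interest(ratings):
    """Eq. (7): average Rocchio-style behavior weight over the sentinel's reviews."""
    def sc(r):
        return 0.5 if r >= 4 else 0.2 if r == 3 else -0.5
    return sum(sc(r) for r in ratings) / len(ratings) if ratings else 0.0

def adjusted_weight(p_p, f_p, a_p, alpha=0.5, beta=0.5):
    """Eq. (8): W'(p) = W(p) + A(p), with W(p) assumed to be alpha*P(p) + beta*F(p)."""
    return alpha * p_p + beta * f_p + a_p

def adjusted_interest(i_u_c, a_p, alpha_c=0.5):
    """Eq. (9): I'_u(c) = I_u(c) + alpha_c * A(p)."""
    return i_u_c + alpha_c * a_p

# Example: a moderately popular place that friends like and the sentinel rates highly
a_p = sentinel_interest([5, 4, 3])
print(adjusted_weight(place_popularity(120, 400, 4.2, 5.0),
                      friend_visit_score(3, 10), a_p))
print(adjusted_interest(0.35, a_p))
```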

Evaluation metrics

Mean Reciprocal Rank (MRR): MRR measures accuracy in terms of average rank. The following equation shows that MRR is the average of the reciprocal ranks of the results over a sample of n queries:

$${\text{MRR}} = \frac{1}{n}\sum\limits_{i=1}^{n} \frac{1}{seq_{i}},$$
(11)

where \(seq_{i}\) is the rank position of the first relevant item for the i-th query. According to the predicted probability, the top k items are compared. When a relevant item is predicted correctly, a score is assigned: the earlier the item appears, the higher the score, up to RR = 1 for the first position. If no correct item appears, RR = 0. The mean value over n experiments is the MRR.
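A minimal sketch of Eq. (11), assuming each query is represented by a ranked list of recommended categories and a single relevant (ground-truth) category; the example data are illustrative.

```python
def mean_reciprocal_rank(ranked_lists, relevant_items):
    """Eq. (11): average of 1/rank of the first relevant item; 0 if it never appears."""
    total = 0.0
    for ranking, relevant in zip(ranked_lists, relevant_items):
        rr = 0.0
        for rank, item in enumerate(ranking, start=1):
            if item == relevant:
                rr = 1.0 / rank  # seq_i is the position of the first hit
                break
        total += rr
    return total / len(ranked_lists)

# Example: three users, each with a 5-category recommendation list
rankings = [
    ["Italian", "Chinese", "Mexican", "American (new)", "American (traditional)"],
    ["Chinese", "Italian", "Mexican", "American (traditional)", "American (new)"],
    ["Mexican", "American (new)", "Italian", "Chinese", "American (traditional)"],
]
truth = ["Italian", "Mexican", "Chinese"]
print(mean_reciprocal_rank(rankings, truth))  # (1 + 1/3 + 1/4) / 3
```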

Simulation data and results

  • Scenario 1: reviews submitted to Yelp from January 1, 2016 to December 31, 2016 were taken for the simulation. Among them, we retrieved 20 reviews written by users who also wrote reviews between January 1, 2017 and December 22, 2017, to estimate the accuracy of recommendations for new items.

  • Scenario 2: using Yelp review data from January 1, 2017 to December 22, 2017, 20 users who submitted reviews during that period were selected to estimate the recommendation accuracy for new users.

Result and discussion

The experimental results of the small sample were evaluated, and the largest review category of each user was taken as the preference type to verify the accuracy rate of each category. When recommendation ratios are equal, the system prioritizes categories by the highest percentage of merchants: 2 American (traditional) → 4 Italian → 1 American (new) → 5 Mexican → 3 Chinese. A short sketch of this tie-breaking rule is given below.
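A small sketch of the tie-breaking rule, assuming the R(c) values and the merchant share of each category are available as dictionaries; the numeric values below are illustrative, not the dataset's actual proportions.

```python
def rank_categories(r, merchant_share):
    """Sort by recommendation ratio R(c); break ties by the category's share of merchants."""
    return sorted(r, key=lambda c: (r[c], merchant_share[c]), reverse=True)

R = {"American (traditional)": 0.25, "Italian": 0.25, "Chinese": 0.15,
     "American (new)": 0.20, "Mexican": 0.15}
share = {"American (traditional)": 0.30, "Italian": 0.22, "Chinese": 0.12,
         "American (new)": 0.21, "Mexican": 0.15}
print(rank_categories(R, share))
```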

Scenario 1

Table 5 shows the MRR based on user history only, which is used to find user interests, and Table 6 reveals that the recommendation results based on friends' reviews were slightly better.

Table 5 MRR based on user history only = 57%
Table 6 MRR based on friend recommendation = 57.25%

Table 7 shows that adding a sentinel friend improved the recommendation accuracy. Figure 8 compares the MRR results based on user history only, friend recommendation, and friend recommendation with A(P).

Table 7 MRR based on friend recommendation and A(P) = 58.5%
Fig. 8 Comparison of the MRR results: user history only, friend recommendation, and friend recommendation with A(P)

Table 8 shows the MRR result based on the sentinel user's interest in a specific item, A(P), and the popularity of place, P(P). Table 9 displays the MRR result based on A(P) and F(P), the score for places visited by friends. Figure 9 shows that the recommendation accuracy based on F(P) was better than that based on P(P).

Table 8 MRR based on A(P) and P(P) review category = 56.8333%
Table 9 MRR based on A(P) and F(P) review category = 61%
Fig. 9 Comparison of the A(P)-based MRR results for new items: P(P) review category vs. F(P) review category

Scenario 2

Classical probability is the statistical concept that assumes all outcomes of an experiment are equally likely to occur. The probability of an event equals the ratio of the number of favorable outcomes to the total number of possible outcomes of the experiment.

$$P\left( A \right) = \frac{{\text{n}}}{{\text{t}}},$$
(14)
  • \(P\left( A \right)\): probability of an event,

  • \(n\): number of favorable outcomes,

  • \(t\): total number of possible outcomes.

Scenario 2 focuses on new users who did not submit reviews in 2016. With five cuisine categories to recommend, the classical probability baseline for the experiment is 1/5 = 20%.

Tables 10 and 11 show that the recommendation results based on friends' reviews were better than the 20% baseline, but Table 11 also reveals that adding a sentinel friend did not further improve the recommendation accuracy. Figure 10 compares the MRR results of the new users' friendships.

Table 10 MRR based on friend recommendation = 34.25%
Table 11 MRR based on friend recommendation and A(P) = 34.25%
Fig. 10 Comparison of the results of new users' friendships

Tables 12 and 13 both show the outcomes based on A(P), but Table 12 uses the P(P) review category while Table 13 uses F(P). Figure 11 compares the sentinel mechanism results based on A(P) with P(P) and with F(P).

Table 12 MRR based on A(P) and P(P) review category = 31.0833%
Table 13 MRR based on A(P) and F(P) review category = 33.8333%
Fig. 11 New users: popularity and friend popularity based on A(P)

Conclusion

In this paper, we propose a personalized recommender system based on the review data from the Yelp dataset. To deal with the cold-start problem for new users, we use indirect relations between friends and “friends' friends”: if a friend has no related data, the system turns to the friend's friends to make recommendations as accurate as possible. The experimental results show that when the sentinel mechanism is used to calculate the recommendation data, our proposed method clearly outperforms the baselines, demonstrating that the use of indirect relations in social networks can improve recommendation accuracy.

References

  1. Zhang KZK, Xu H, Zhao S, Yu Y. Online reviews and impulse buying behavior: the role of browsing and impulsiveness. Internet Res. 2018;28:522–43.


  2. Abel F, Herder E, Houben G-J, Henze N, Krause D. Cross-system user modeling and personalization on the social web. User Model User Adapt Interact. 2012;23:169–209.


  3. Goldberg D, Nichols D, Oki BM, Terry D. Using collaborative filtering to weave an information tapestry. Commun ACM. 1992;35:61–70.


  4. Chu W-T, Tsai Y-L. A hybrid recommendation system considering visual information for predicting favorite restaurants. World Wide Web. 2017;20:1313–31.


  5. Hassan AKA, Abdulwahhab ABA. Reviews sentiment analysis for collaborative recommender system. Kurd J Appl Res. 2017;2:87–91.


  6. Huang W, Liu B, Tang H. Privacy protection for recommendation system: a survey. J Phys Conf Ser. 2019;1325:012087.


  7. Li P, Zhang G, Chao L, Xie Z. Personalized recommendation system for offline shopping. In: 2018 International conference on audio, language and image processing (ICALIP); 2018.

  8. Tareq SU, Noor MH, Bepery C. Framework of dynamic recommendation system for e-shopping. Int J Inf Technol. 2019;12:135–40.


  9. Miao Z, Yan J, Chen K, Yang X, Zha H, Zhang W. Joint prediction of rating and popularity for cold-start item by sentinel user selection. IEEE Access. 2016;4:8500–13.


  10. Kesorn K, Juraphanthong W, Salaiwarakul A. Personalized attraction recommendation system for tourists through check-in data. IEEE Access. 2017;5:26703–21.


  11. Uyangoda L, Ahangama S, Ranasinghe T. User profile feature-based approach to address the cold start problem in collaborative filtering for personalized movie recommendation. In: 2018 Thirteenth international conference on digital information management (ICDIM); 2018.

  12. Resnick P, Varian HR. Recommender systems. Commun ACM. 1997;40(3):56–8. https://doi.org/10.1145/245108.245121.


  13. Schafer JB, Konstan JA, Riedl J. E-Commerce recommendation applications. Data Min Knowl Discov. 2001;5(1/2):115–53. https://doi.org/10.1023/A:1009804230409.


  14. Cui Z, Xu X, Xue F, Cai X, Cao Y, Zhang W, Chen J. Personalized recommendation system based on collaborative filtering for IoT scenarios. IEEE Trans Serv Comput. 2020;13:685–95.


  15. Ramakrishnan G, Saicharan V, Chandrasekaran K, Rathnamma MV, Ramana VV. Collaborative filtering for book recommendation system. In: Advances in intelligent systems and computing. Singapore: Springer; 2019. p. 325–38.


  16. Bi Y, Song L, Yao M, Wu Z, Wang J, Xiao J. DCDIR: a deep cross-domain recommendation system for cold start users in insurance domain. In: Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval; 2020.

  17. Natarajan S, Vairavasundaram S, Natarajan S, Gandomi AH. Resolving data sparsity and cold start problem in collaborative filtering recommender system using linked open data. Expert Syst Appl. 2020;149:113248.


  18. Paleti L, Krishna PR, Murthy JVR. Approaching the cold-start problem using community detection based alternating least square factorization in recommendation systems. Evol Intell. 2020;14:835–49.


  19. Sun B, Ma Q, Zhang S, Liu K, Liu Y. iSelf: towards cold-start emotion labeling using transfer learning with smartphones. ACM Trans Sens Netw. 2017;13:1–22.


  20. Brangbour E, Bruneau P, Tamisier T, Marchand-Maillet S. Active learning with crowdsourcing for the cold start of imbalanced classifiers. In: Lecture notes in computer science. Springer International Publishing; 2020. p. 192–201.


  21. Li J, Jing M, Lu K, Zhu L, Yang Y, Huang Z. From zero-shot learning to cold-start recommendation. In: Proceedings of the AAAI conference on artificial intelligence, vol. 33; 2019. p. 4189–96.

  22. O'Reilly T. What is Web 2.0: design patterns and business models for the next generation of software. O'Reilly; 2005. https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html.

  23. Gaspar P, Kompan M, Koncal M, Bielikova M. Improving the personalized recommendation in the cold-start scenarios. In: 2019 IEEE international conference on data science and advanced analytics (DSAA); 2019.

  24. Jin Y, Dong S, Cai Y, Hu J. RACRec: review aware cross-domain recommendation for fully-cold-start user. IEEE Access. 2020;8:55032–41.


  25. Zhou Y, Tang Z, Qi L, Zhang X, Dou W, Wan S. Intelligent service recommendation for cold-start problems in edge computing. IEEE Access. 2019;7:46637–45.


  26. Kumar Y, Sharma A, Khaund A, Kumar A, Kumaraguru P, Shah RR, Zimmermann R. IceBreaker: solving cold start problem for video recommendation engines. In: 2018 IEEE international symposium on multimedia (ISM); 2018.

  27. Zhu Y, Lin J, He S, Wang B, Guan Z, Liu H, Cai D. Addressing the item cold-start problem by attribute-driven active learning. IEEE Trans Knowl Data Eng. 2020;32:631–44.


  28. Kuizinas G. Facebook-friend-rank. 2012. https://github.com/gajus/facebook-friend-rank.


Acknowledgements

Our gratitude goes to the Ministry of Science and Technology for the financial support of this study.

Funding

This study was supported by the Ministry of Science and Technology, Taiwan, R.O.C., under Grant Nos. MOST 107-2221-E-197-007-MY3, MOST 108-2321-B-197-004 and MOST 108-2622-E-197-007-CC3.

Author information


Contributions

FJT is responsible for the research process and model design. TYW is responsible for experimental design. CLL is responsible for data analysis. JLC is responsible for related literature survey. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Tin-Yu Wu.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent to participate

Not applicable.

Competing interests

The authors have declared that no competing interests exist.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tey, F.J., Wu, TY., Lin, CL. et al. Accuracy improvements for cold-start recommendation problem using indirect relations in social networks. J Big Data 8, 98 (2021). https://doi.org/10.1186/s40537-021-00484-0


Keywords