1. Introduction
Big data is generated by social media on social networking sites (Bello-Orgaz, Jung & Camacho, 2016). Recommender systems reduce the large information space generated by this social Big data: they are information-filtering tools that provide suggestions to users based on their interests. Recommender systems are applied in various domains, such as book, movie, or other product recommendations on e-commerce sites, friend recommendations on social networking sites, and project recommendations on GitHub.
Collaborative filtering, content-based, and hybrid approaches are the main techniques of recommender systems (Eirinaki et al., 2018; Resnick & Varian, 1997; Su & Khoshgoftaar, 2009). In these techniques, users provide ratings to products, which results in a user-item matrix. This matrix is important for analyzing users' interests. Sparsity, cold start, and scalability are the limitations of conventional recommender systems. Sparsity and cold start have been addressed by several researchers (Guo, Zhang & Yorke-Smith, 2015; Jamali & Ester, 2010; Yang et al., 2013; Fang, Bao & Zhang, 2014). The main remaining concern is scalability, which must be addressed for large-scale data. Traditional recommender systems work well only for social data of limited scale, and their algorithms are designed for a centralized approach. If these systems are deployed on large-scale data, throughput degrades significantly, which reduces users' interest in them. The key motivation of this paper is to improve recommendation accuracy even for a large number of nodes in the social graph.
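To make the user-item matrix and its sparsity concrete, consider the following toy sketch. The users, ratings, and the choice of cosine similarity are purely illustrative, not taken from this paper; it only shows how a rating matrix supports user-user similarity and how sparsity (the fraction of missing ratings) is measured:

```python
# Toy user-item rating matrix; 0 denotes "not rated" (a missing entry).
# Data and similarity measure (cosine) are illustrative assumptions.
import math

ratings = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 0, 0, 1],
    "carol": [1, 1, 0, 5],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sparsity(matrix):
    """Fraction of missing (zero) entries in the user-item matrix."""
    cells = [r for row in matrix.values() for r in row]
    return cells.count(0) / len(cells)

print(round(cosine(ratings["alice"], ratings["bob"]), 3))  # 0.861
print(sparsity(ratings))  # 4 missing entries out of 12
```

In a collaborative-filtering system, similar users (here, alice and bob) would be used to predict each other's missing ratings; as the matrix grows sparser, such neighbors become harder to find, which is exactly the sparsity limitation noted above.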
Recommendation systems leverage Big data in the form of large-scale social graphs, so efficient graph algorithms are important for these systems. A large-scale social graph cannot be processed on a centralized system; there is a need for a distributed approach in which sub-graphs can be processed in parallel. Large-scale recommender systems have leveraged distributed algorithms for computing recommendations (Sardianos, Tsirakis & Varlamis, 2018). Graph partitioning is a technique that can address the scalability issue. Large-scale graph partitioning in traditional recommendation models uses random walks, Fork-Join (Mateos, Zunino & Hirsch, 2013), or hash partitioning to divide the graph into sub-graphs. In our proposed approach, ScaleRec, a direct and indirect trust-based walk is used to partition the graph so that each sub-graph contains only relevant nodes, which improves locality. The social graph is partitioned based on social trust among nodes, so as to minimize communication between sub-graphs and maximize communication within sub-graphs. Improved locality minimizes communication overhead, which results in improved scalability (Lumsdaine, 2008).
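A minimal sketch of the trust-based partitioning idea, under assumptions of our own (the function name, the pairwise trust scores, and the fixed threshold are all hypothetical, not the paper's actual algorithm), grows a sub-graph from a seed node by following only sufficiently trusted edges, so that frequently communicating nodes land in the same partition:

```python
# Hypothetical sketch of trust-based sub-graph growth; names, trust
# values, and the 0.5 threshold are illustrative assumptions.
from collections import deque

def trust_partition(graph, trust, seed, threshold=0.5):
    """Breadth-first walk from `seed` that crosses an edge only when
    its direct trust score meets `threshold`; the visited set is one
    trust-coherent sub-graph (partition)."""
    part = {seed}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in part and trust.get((node, neighbor), 0.0) >= threshold:
                part.add(neighbor)
                queue.append(neighbor)
    return part

# Toy social graph with direct trust scores on edges.
graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
trust = {("a", "b"): 0.9, ("a", "c"): 0.2, ("b", "d"): 0.8}
print(trust_partition(graph, trust, "a"))  # a, b, d (c's edge is untrusted)
```

Because the low-trust edge to `c` is never crossed, `c` would fall into a different partition, and the cross-partition communication it represents is kept off the frequent path, which is the locality effect described above.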
Conventional data analytics technologies based on a centralized approach cannot store and process large-scale data. Big data frameworks such as Hadoop, MapReduce (Dean & Ghemawat, 2008), Pregel (Malewicz et al., 2010), GraphLab (Low et al., 2012), Mahout (Owen et al., 2011), Giraph, PowerGraph (Gonzalez et al., 2012), GraphX (Xin et al., 2013), and CUDA-based GPU computing (W3) are used by many researchers to deal with large-scale data. We have used Giraph and Pregel in our approach, as they can effectively process large-scale social graphs. The social graph is distributed across multiple machines with some vertex replication (Chen et al., 2014), which is efficiently implemented using the Giraph API.
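Giraph's actual API is Java-based; the following is only a conceptual Python toy of the Pregel computation model it implements. In each superstep, every active vertex processes its incoming messages, optionally updates its value, and sends messages to neighbors; a vertex effectively votes to halt when it has nothing new to say, and the computation ends when no messages remain in flight. Maximum-value propagation is used here purely as a standard illustrative example:

```python
# Conceptual toy of Pregel-style supersteps with message passing.
# Not Giraph's real (Java) API; the example computation is max-value
# propagation over an undirected toy graph.

def pregel_max(graph, values):
    active = set(graph)                     # all vertices start active
    inbox = {v: [] for v in graph}
    while active:                           # one loop iteration = one superstep
        outbox = {v: [] for v in graph}
        for v in active:
            new_val = max([values[v]] + inbox[v])
            # Send in the first superstep (no messages yet) or when the
            # value improved; otherwise the vertex stays silent ("halts").
            if not inbox[v] or new_val != values[v]:
                values[v] = new_val
                for nbr in graph[v]:
                    outbox[nbr].append(new_val)
        # Only vertices that received messages are active next superstep.
        active = {v for v, msgs in outbox.items() if msgs}
        inbox = outbox
    return values

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(pregel_max(graph, {"a": 3, "b": 6, "c": 1}))  # {'a': 6, 'b': 6, 'c': 6}
```

In Giraph this per-vertex logic would live in a `compute()` method, and the framework, rather than a loop, handles superstep barriers, message delivery, and distributing vertices (with replication) across machines.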