Recommender systems based on multi-dimensional matrices, that is, tensors, have recently attracted significant attention, as they can find more latent factors and patterns than conventional matrix-based systems. However, real-world tensor datasets are sparse and large, and tensors are more complicated to analyze than matrices. Therefore, tensor decomposition algorithms, such as parallel factor analysis (PARAFAC) decomposition, Tucker decomposition, and higher-order singular value decomposition (HOSVD), are used to analyze tensors (Yang & Yong, 2019b). These algorithms can find latent patterns and factors in tensors, but they are limited to static tensor datasets. Specifically, in the case of dynamic tensor datasets or tensor streaming, that is, when new tensors are added to an existing tensor, these algorithms must re-decompose the entire tensor because the original decomposition results change. Unfortunately, tensor decomposition is expensive owing to its computational complexity and the large number of iterations involved. Recently, incremental tensor decomposition algorithms, such as online CANDECOMP/PARAFAC (CP) (Zhou, Erfani, & Bailey, 2018) and SamBaTen (Gujral, Pasricha, & Papalexakis, 2018), have been developed; these algorithms assume that only the time axis increases. For example, if a three-dimensional tensor is composed of users, items, time indexes, and rating values, the user and item indexes do not increase. However, the numbers of users and items generally increase in the real world. Few incremental tensor decomposition methods that consider this multiaxial growth have been proposed. One of these algorithms is a multi-aspect dynamic tensor completion method called MAST (Song et al., 2017). In MAST, new tensors are partitioned into sub-tensors, which are subsequently decomposed. Partitioning into sub-tensors affects the results of adjacent parts, and therefore sub-tensor decomposition must be performed sequentially.
Therefore, we propose a method that reduces re-decomposition cost when all dimensions increase, without partitioning new tensors into sub-tensors. The computation uses only the indexes of the added tensor and the existing decomposition results. To this end, we propose InParTen2, a distributed multi-aspect incremental PARAFAC decomposition algorithm for three-dimensional tensors. PARAFAC is a widely used tensor decomposition algorithm, which, in this study, was transformed into an incremental decomposition algorithm. Most existing tensor decomposition algorithms are based on MATLAB or Hadoop. MATLAB-based algorithms run on a single machine and therefore cannot handle large tensors; the MATLAB method by Song et al. (2017) cannot handle large data either. Hadoop-based algorithms have long execution times because Hadoop incurs high disk I/O cost when performing iterative operations such as tensor decomposition. We use Apache Spark, a distributed in-memory big-data system that is faster than Hadoop for iterative operations (Zaharia et al., 2012). However, Apache Spark operates within limited memory; therefore, we combined it with an improved form of the PARAFAC decomposition algorithm.
The proposed algorithm decomposes incoming tensors using existing decomposition results without generating sub-tensors. In addition, we propose a memory-efficient method to reduce the intermediate data explosion that occurs during PARAFAC decomposition. This method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.
We evaluated the performance of InParTen2 by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods based on Apache Spark using various tensor datasets. The results confirm that InParTen2 reduces execution time, and its accuracy is similar to that of the other methods.
The rest of the paper is organized as follows. Section 2 provides relevant notations and definitions. Existing PARAFAC tensor decomposition and incremental tensor decomposition algorithms are also discussed in this section. The proposed algorithm is described in Section 3, and the experimental results are presented in Section 4. Section 5 concludes the paper.
In this section, we present relevant notation and tensor matricization concepts. Existing PARAFAC tensor decomposition and incremental tensor decomposition algorithms are also discussed in this section. Table 1 lists the symbols used in this paper.
Table of symbols.
Symbols | Definitions |
---|---|
𝒳 | Tensor |
X_(n) | Mode-n unfolding matrix of tensor 𝒳 |
‖𝒳‖ | Frobenius norm of tensor 𝒳 |
nnz(𝒳) | Number of non-zero elements in tensor 𝒳 |
⊗ | Kronecker product |
⊙ | Khatri–Rao product |
* | Hadamard product (element-wise product) |
† | Pseudo-inverse |
Before describing PARAFAC decomposition, we present the relevant notation. The Kronecker product is denoted as ⊗. Given two matrices A ∈ ℝ^(I×J) and B ∈ ℝ^(K×L), the Kronecker product A ⊗ B yields a matrix of size IK × JL. The Khatri–Rao product A ⊙ B is the column-wise Kronecker product of two matrices A ∈ ℝ^(I×R) and B ∈ ℝ^(J×R) with the same number of columns, yielding a matrix of size IJ × R. The Hadamard product A * B is the element-wise product of two matrices of the same size.
We will now describe tensor matricization, which unfolds a tensor into matrix format. Figure 1 shows a representative example, where a three-dimensional tensor 𝒳 ∈ ℝ^(I×J×K) is unfolded into the mode-1, mode-2, and mode-3 matrices X_(1) ∈ ℝ^(I×JK), X_(2) ∈ ℝ^(J×IK), and X_(3) ∈ ℝ^(K×IJ).
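For illustration, mode-n unfolding can be sketched in NumPy as follows; note that the exact column ordering of the unfolded matrix depends on the convention used (C-order flattening here):

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move the chosen axis to the front and flatten the rest.
    The column ordering follows NumPy's C-order convention."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

X = np.arange(24).reshape(2, 3, 4)  # a 2 x 3 x 4 tensor (I=2, J=3, K=4)
X1 = unfold(X, 0)  # mode-1 unfolding: 2 x 12
X2 = unfold(X, 1)  # mode-2 unfolding: 3 x 8
X3 = unfold(X, 2)  # mode-3 unfolding: 4 x 6
```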
However, when a new tensor is added (i.e., streaming or dynamic tensors), the total size of the tensor changes and the results of tensor matricization also change. Figure 2 shows the effect on matricization of the addition of a new tensor.
As tensors generally grow over time, in Figure 2 it is assumed that the K axis (time axis) increases, and the other axes can also increase. The gray regions represent the new values corresponding to the arrival of a new tensor. When the entire tensor is unfolded, the new tensor is located behind or under the existing tensor. Therefore, a method is needed in which only the newly added tensors are decomposed and the existing decomposition results are updated. The proposed method decomposes the new values without decomposing the entire tensor; thus, existing tensors are not unfolded, and the newly unfolded tensors are appended to the existing matrix. The proposed method uses tensor indexing to concatenate new tensors with the existing results.
PARAFAC decomposition, also known as CP decomposition, is one of the most widely used tensor decomposition algorithms. It decomposes a tensor into a sum of rank-one tensors (Gujral et al., 2018; Jeon et al., 2015; Kang et al., 2012; Yang & Yong, 2019a). A given three-dimensional tensor 𝒳 ∈ ℝ^(I×J×K) is decomposed into R components to yield three factor matrices A ∈ ℝ^(I×R), B ∈ ℝ^(J×R), and C ∈ ℝ^(K×R). This can be expressed as follows:

𝒳 ≈ Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r, (1)

where a_r, b_r, and c_r denote the r-th columns of A, B, and C, and ∘ denotes the outer product.
The alternating least squares (ALS) method is used for computing the factor matrices in PARAFAC decomposition. In the case of a three-dimensional tensor, this method fixes two factor matrices to obtain the third factor matrix (Yang & Yong, 2019a; 2019b). The PARAFAC-ALS method for computing the factor matrices is expressed as follows:

A ← X_(1)(C ⊙ B)(C^T C * B^T B)^†
B ← X_(2)(C ⊙ A)(C^T C * A^T A)^†
C ← X_(3)(B ⊙ A)(B^T B * A^T A)^† (2)
In Equation (2), X_(n) denotes the mode-n unfolding matrix of 𝒳, and † denotes the pseudo-inverse. The three updates are repeated until the factor matrices converge or a maximum number of iterations is reached.
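The PARAFAC-ALS iteration described above can be sketched in NumPy for a small dense tensor. This is an illustrative sketch only: the paper's implementation is distributed and operates on sparse tensors in Apache Spark, and the unfolding/Khatri–Rao ordering below follows NumPy's C-order convention.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J) x R."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(X, mode):
    """Mode-n unfolding with C-order flattening of the remaining axes."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def parafac_als(X, R, n_iter=50, seed=0):
    """Plain PARAFAC-ALS for a dense 3-way tensor (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    for _ in range(n_iter):
        # Each step fixes two factors and solves a least-squares problem; with
        # C-order unfolding, X_(1) = A (B (KR) C)^T, hence the operand order below.
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```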
Incremental tensor decomposition algorithms are used to decompose dynamic tensors and can reduce recalculation time. Most incremental tensor decomposition algorithms assume that only one axis increases over time. The most representative example is online CP (Zhou et al., 2018), an incremental CP decomposition algorithm that assumes that when new tensors are added to an existing tensor, only the time axis increases. Ma, Yang, and Wang (2018) proposed the randomized online CP decomposition algorithm, which is based on random sampling; it avoids calculating Khatri–Rao products and reduces memory usage. However, the algorithms mentioned above are based on MATLAB routines running on a single machine. Recently, distributed incremental tensor decomposition methods have been introduced. Gujral et al. (2018) proposed a sampling-based batch incremental CP tensor decomposition algorithm, which has been implemented in both MATLAB (Gujral et al., 2018) and Apache Spark.
Recently, a few incremental tensor decomposition methods that consider multiaxial growth have been developed. Song et al. (2017) proposed a multi-aspect dynamic tensor completion framework called MAST, which is based on CP decomposition. This method decomposes added tensors by partitioning them into sub-tensors, which affects the results of adjacent parts and implies that sub-tensor decomposition must be performed sequentially. Najafi, He, and Yu (2019) proposed an outlier-robust multi-aspect streaming tensor completion and factorization algorithm called OR-MSTC (Najafi et al., 2019); its incremental tensor decomposition also generates sub-tensors from incoming tensors. Nimishakavi et al. (2018) proposed a side-information-infused incremental tensor analysis method called SIITA, which incorporates side information and can process general incremental tensors by Tucker decomposition. However, SIITA cannot handle large-scale tensor datasets because it does not address scalability (Nimishakavi et al., 2018). Additionally, most multi-aspect tensor decomposition algorithms either are based on MATLAB or focus on tensor completion.
In this study, we developed an incremental tensor decomposition algorithm called InParTen2. It handles both dynamic and static tensor datasets. Furthermore, the proposed method is implemented in Apache Spark, a distributed in-memory big-data system, and decomposes incoming tensors without dividing them into sub-tensors: it processes new tensors based on their indexes and the results of previous tensor decompositions. Additionally, InParTen2 avoids the explicit calculation of Khatri–Rao products and minimizes shuffling on the Apache Spark platform. Table 2 provides a summary of various incremental tensor decomposition algorithms.
Summary of different incremental PARAFAC tensor decomposition algorithms.
Method | Static tensor decomposition | Incremental (Single-aspect) | Incremental (Multi-aspect) | Distributed | Scalability |
---|---|---|---|---|---|
Zhou et al. (2018) | √ | ||||
Ma et al. (2018) | √ | ||||
Gujral et al. (2018) | √ | √ | |||
Yang and Yong (2019b) | √ | √ | √ | ||
Song et al. (2017) | √ | √ | |||
Najafi et al. (2019) | √ | √ | |||
InParTen2 | √ | √ | √ | √ | √ |
In this section, we describe InParTen2. We assume that an existing tensor increases in all dimensions over time. Figure 3 shows a schematic representation of the multi-aspect incremental tensor decomposition process.
Here, the size of the existing tensor is I × J × K, and the arrival of a new tensor increases the dimensions to (I + I′) × (J + J′) × (K + K′), where I′, J′, and K′ denote the growth along each axis.
InParTen2 decomposes new tensors without generating sub-tensors. We first perform initialization prior to executing incremental PARAFAC-ALS. In the decomposition of the added tensors, certain parts affect the existing factor matrices; therefore, we add the existing factor matrices when initializing the factor matrices for the added parts.
The initial values of P and Q are set to P = X_(1)(C ⊙ B) and Q = (C^T C) * (B^T B), computed from the existing factor matrices, so that the existing tensor does not have to be revisited.
To calculate the updated factor matrices, we accumulate the contributions of the newly added tensor entries into P and Q instead of recomputing them from the entire tensor.
Subsequently, we solve for A using the unfolded tensors of the newly added parts together with the accumulated P and Q.
As mentioned previously, we update the factor matrices using the existing decomposition results rather than re-decomposing the entire tensor.
Following this initialization, we use the incremental PARAFAC-ALS process to update the factor matrices A, B, and C.
First, to update the factor matrix A, we fix the factor matrices B and C and solve the corresponding least-squares problem over the newly added entries.
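The update of the existing factor matrices via the auxiliary matrices P and Q can be sketched as follows, in the style of online CP. For brevity, this sketch assumes that only the time axis grows and keeps B fixed; the full InParTen2 method also handles growth in the other modes and updates all three factor matrices. Function names are introduced here for illustration.

```python
import numpy as np

def khatri_rao(A, B):
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def init_pq(X, A, B, C):
    """P and Q summarize the old data, so the old tensor is never revisited."""
    P = unfold(X, 0) @ khatri_rao(B, C)
    Q = (B.T @ B) * (C.T @ C)
    return P, Q

def incremental_step(X_new, A, B, C, P, Q):
    """Fold K' new frontal slices into the decomposition without re-decomposing."""
    # New rows of C for the K' added slices: solve with A and B fixed.
    C_new = unfold(X_new, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    # Accumulate the contribution of the new slices into P and Q.
    P = P + unfold(X_new, 0) @ khatri_rao(B, C_new)
    Q = Q + (B.T @ B) * (C_new.T @ C_new)
    A = P @ np.linalg.pinv(Q)   # refresh A from the accumulated summaries
    C = np.vstack([C, C_new])   # append the new time rows
    return A, B, C, P, Q
```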
In this section, we describe the implementation of InParTen2 in Apache Spark to achieve fast and memory-efficient execution. First, we use the coordinate format to load a new tensor as input because real-world tensors are generally sparse. The coordinate format is a resilient distributed dataset (RDD) consisting of three indexes and a value. The same format is then used to load the three existing factor matrices, which are constructed from row indexes and row vectors. Owing to the limited memory space, only the non-zero values of the tensor and the three factor matrices are loaded. Prior to the execution of PARAFAC-ALS, we initialize the factor matrices corresponding to the newly added dimensions.
Here, the non-zero tensor values are multiplied by the rows of the two fixed factor matrices corresponding to the non-zero indexes in the tensor. For example, if there is a non-zero value at index (i, j, k), it is multiplied only by the j-th row of B and the k-th row of C, and the product is accumulated into the i-th row of the result.
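This non-zero-wise computation can be sketched in NumPy over coordinate-format triples (an illustration of the idea, not the actual Spark implementation; `mttkrp_coo` is a name introduced here):

```python
import numpy as np

def mttkrp_coo(rows, cols, tubes, vals, B, C, I):
    """Matricized-tensor-times-Khatri-Rao-product for mode 1, computed
    non-zero-wise: each entry x[i,j,k] adds x[i,j,k] * (B[j,:] * C[k,:])
    to row i of the result, so the Khatri-Rao product is never materialized."""
    R = B.shape[1]
    M = np.zeros((I, R))
    contrib = vals[:, None] * B[cols] * C[tubes]  # one R-vector per non-zero
    np.add.at(M, rows, contrib)                   # scatter-add into rows of M
    return M
```

Because only `nnz` rows of the factor matrices are touched, the intermediate data stays proportional to the number of non-zero entries rather than to the dense tensor size.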
Overall, the time and space complexities of InParTen2 are O(6·nnz(𝒳_new) + R²((I+I′) + (J+J′) + K′)) and O(R(I + J + K′) + 2·nnz(𝒳_new)), respectively, where nnz(𝒳_new) is the number of non-zero elements in the new tensor and R is the decomposition rank.
In this section, we evaluate the performance of InParTen2 by comparing its execution time and accuracy with those of existing tensor decomposition methods on various tensor datasets. We consider execution time and relative error under the following scenarios: varying density with synthetic tensors, increasing data size with synthetic tensors, and real-world datasets.
Our experimental environment comprised Hadoop v2.7 and Apache Spark v2.2. The experiments were conducted using six worker nodes with a total memory of 48 GB.
We used synthetic and real-world datasets to evaluate InParTen2, with 10 types of synthetic datasets for each tensor size. For the synthetic datasets, we generated random tensors of dimension I = J = K, where I ∈ {100, 500, 800}, with initial densities of 20% and 60%. Further, we generated incoming tensors with densities of 5%, 10%, and 20%; the initial tensors were set to half the size of the total tensor. In addition, we varied the initial size and density.
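A sketch of such a synthetic generator is shown below. The value distribution (uniform in [1, 5], rating-like) is an assumption for illustration, as the paper does not specify how the random values were drawn:

```python
import numpy as np

def random_sparse_tensor(dim, density, seed=0):
    """Sample coordinate-format triples (i, j, k, value) for a random
    dim x dim x dim tensor with the given fraction of non-zero entries."""
    rng = np.random.default_rng(seed)
    total = dim ** 3
    nnz = int(total * density)
    flat = rng.choice(total, size=nnz, replace=False)  # distinct positions
    i, j, k = np.unravel_index(flat, (dim, dim, dim))
    vals = rng.uniform(1.0, 5.0, size=nnz)  # assumed rating-like values
    return i, j, k, vals
```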
We used three real-world datasets containing sparse three-dimensional tensors: YELP (Jeon et al., 2016), MovieLens (Jeon et al., 2016), and Netflix (Bennett & Lanning, 2007); Table 3 lists these datasets. YELP contains users, locations, time indexes, and rating values from 1 to 5. MovieLens and Netflix contain users, movies, time indexes, and rating values from 1 to 5. For the incremental tensor decomposition by InParTen2, it is assumed that all three dimensions increase from the fixed initial dimensions. For the SamBaTen (Gujral et al., 2018) algorithm, which considers growth along a single dimension, only the time axis of the initial dimensions was set to be equal to that of InParTen2.
Real-world tensor datasets.
Data Name | Data type | Total Dimension | Initial Dimension | Non-zero | File size |
---|---|---|---|---|---|
YELP | User×Location×Time | 70,817×15,579×108 | 70,815×15,572×20 | 334,166 | 5.6MB |
MovieLens | User×Movie×Time | 71,567×10,681×157 | 71,559×9,717×5 | 10,000,054 | 187MB |
Netflix | User×Movie×Time | 2,649,429×17,770 ×2,182 | 2,648,623×17,764×50 | 98,754,343 | 1.8GB |
We compared the execution time and accuracy of InParTen2 with those of existing tensor decomposition methods based on Apache Spark. Execution time is an important measure because one of our goals is to reduce time cost. We compared the execution time using synthetic and real-world datasets, performing several measurements in the former case. In the first measurement, the initial tensors were set to densities of 20% and 60%, and the incoming tensors had densities of 5%, 10%, and 20%; this experiment studies the effect of the density of the incoming data on performance. In the second measurement, the sizes of the initial and incoming tensors were varied. Here, the density of the incoming tensor was set to 20%, and that of the initial tensor to 20% and 60%; the incoming data thus had the same density, and performance was evaluated in terms of the size of the incoming tensor data. For the real-world datasets, we set the initial sizes as listed in Table 3. We used 10 iterations and rank 10 for each dataset.
The second comparison is based on accuracy. We reconstructed tensors from the decomposition results and used the relative error to measure their similarity to the original tensors. We compared these results with those obtained by existing tensor decomposition methods, particularly static tensor decomposition. The relative error is defined as follows:

Relative Error = ‖𝒳 − 𝒳̂‖ / ‖𝒳‖,

where 𝒳̂ is the tensor reconstructed from the factor matrices and ‖·‖ denotes the Frobenius norm.
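The relative error can be computed from the factor matrices as follows (a minimal dense sketch):

```python
import numpy as np

def relative_error(X, A, B, C):
    """Relative error ||X - X_hat|| / ||X|| (Frobenius norms), where X_hat
    is reconstructed from the factor matrices as a sum of rank-one tensors."""
    X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```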
We compared against Spark-based tensor decomposition methods. There are several Hadoop-based distributed tensor decomposition algorithms, such as HaTen2 (Jeon et al., 2015) and BigTensor (Park et al., 2016), as well as MATLAB-based decomposition tools. However, Hadoop-based decomposition tools require longer iteration times, and therefore their execution time is not comparable with that of Spark-based algorithms; MATLAB-based algorithms run on a single machine, limiting their ability to handle large data. Therefore, we used IM-PARAFAC (Yang & Yong, 2019a) and SamBaTen (Gujral et al., 2018), which run in the Apache Spark framework, for the comparison. IM-PARAFAC can decompose static tensors and handle large datasets. SamBaTen can decompose both static and incremental tensors; its incremental decomposition considers growth along the time axis only, and accordingly, the initial size of the tensor datasets was set only for the time axis. The incremental tensor decomposition of SamBaTen is based on sampling; however, for the static CP-ALS provided by SamBaTen, the HaTen2-Naive algorithm was re-implemented instead of the sampling algorithm. As InParTen2 also performs both static and incremental decomposition, both variants were tested; in the tests, the static tensor decomposition of InParTen2 is denoted as InParTen2-Static.
First, we considered the case in which the initial tensors had densities of 20% and 60%, and the incoming tensors had densities of 5%, 10%, and 20%. This test demonstrates the effect of density on computational cost for a fixed size; the size of the initial tensor was set to half of the total tensor size. Table 4 and Figure 5 compare the execution times for different densities of the incoming tensors, with the best results highlighted. Overall, sparser datasets exhibited shorter execution times, and InParTen2 was the fastest in every case. InParTen2 efficiently handled both static and incremental tensor decomposition; its incremental decomposition was at least twice as fast as that of SamBaTen.
Comparison of execution time (min) for different densities of the incoming tensor.
Tensor Synthetic Datasets | 100×100×100 | 500×500×500 | 800×800×800 | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|
Density(%) | 5% | 10% | 20% | 5% | 10% | 20% | 5% | 10% | 20% | ||
Initial Density 60% | Static | IM-PARAFAC | 1.19 | 1.37 | 1.62 | 78.5 | 95.5 | 131.5 | 318.9 | 402.3 | 543.6 |
SamBaTen (CP-ALS) | 1.55 | 1.74 | 1.98 | 134.9 | 183.8 | 211.9 | 657.6 | 836.2 | 1323.9 | ||
InParTen2-Static | |||||||||||
Dynamic | SamBaTen | 1.07 | 1.07 | 1.13 | 35.5 | 45.7 | 66.5 | 301.5 | 354.2 | 477.2 | |
InParTen2 | |||||||||||
Initial Density 20% | Static | IM-PARAFAC | 0.87 | 0.99 | 1.27 | 40.9 | 58.5 | 98.03 | 167.9 | 246.3 | 397.8 |
SamBaTen (CP-ALS) | 1.42 | 1.48 | 1.98 | 41.9 | 116.2 | 194.66 | 345.6 | 554.3 | 973.1 | ||
InParTen2-Static | |||||||||||
Dynamic | SamBaTen | 0.92 | 0.96 | 1.04 | 18.3 | 24.9 | 35.2 | 87.7 | 125.9 | 250.7 | |
InParTen2 |
Table 5 compares the relative errors for different densities of the incoming tensor datasets. InParTen2 exhibits relative errors similar to those of the other methods. In general, datasets with an initial density of 60% exhibit lower relative error than those with an initial density of 20%: if the original datasets are sparse, the reconstructed tensor fills in values where the original had none, thus increasing the relative error.
Comparison of relative errors for different densities of the incoming tensor.
Tensor Synthetic Datasets | 100×100×100 | 500×500×500 | 800×800×800 | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|
Density | 5% | 10% | 20% | 5% | 10% | 20% | 5% | 10% | 20% | ||
Initial Density 60% | Static | IM-PARAFAC | 0.63 | 0.67 | 0.69 | 0.64 | 0.72 | 0.73 | 0.62 | 0.70 | 0.70 |
SamBaTen (CP-ALS) | 0.65 | 0.70 | 0.69 | 0.66 | 0.71 | 0.72 | 0.66 | 0.71 | 0.72 | ||
InParTen2-Static | 0.63 | 0.67 | 0.71 | 0.60 | 0.73 | 0.72 | 0.61 | 0.69 | 0.72 | ||
Dynamic | SamBaTen (Incremental) | 0.67 | 0.69 | 0.78 | 0.67 | 0.75 | 0.80 | 0.67 | 0.74 | 0.79 | |
InParTen2 | 0.61 | 0.67 | 0.69 | 0.65 | 0.73 | 0.71 | 0.63 | 0.72 | 0.72 | ||
Initial Density 20% | Static | IM-PARAFAC | 0.87 | 0.87 | 0.82 | 0.90 | 0.89 | 0.83 | 0.89 | 0.88 | 0.83 |
SamBaTen (CP-ALS) | 0.88 | 0.87 | 0.82 | 0.89 | 0.89 | 0.83 | 0.89 | 0.89 | 0.83 | ||
InParTen2-Static | 0.87 | 0.87 | 0.82 | 0.87 | 0.89 | 0.83 | 0.90 | 0.88 | 0.83 | ||
Dynamic | SamBaTen (Incremental) | 0.88 | 0.89 | 0.85 | 0.91 | 0.93 | 0.98 | 0.89 | 0.93 | 0.88 | |
InParTen2 | 0.86 | 0.89 | 0.82 | 0.91 | 0.90 | 0.84 | 0.87 | 0.90 | 0.86 |
Subsequently, we compared the execution time and relative error as the initial data size increased. We considered initial tensors of different sizes for the same total tensor size; the density of the initial tensor was set to 20% and 60%, and that of the incoming tensors to 20%. Table 6 and Figure 6 compare the execution times for different initial sizes, with the best results highlighted. When the initial density was 60%, the execution time of the static decomposition tools increased gradually, whereas the datasets with an initial density of 20% exhibited the best performance when the initial size was half of the total size. The incremental decomposition of SamBaTen exhibited increasing execution time with the initial dimension when the initial density was 60%, but decreasing execution time when the initial density was 20%. In contrast, the incremental tensor decomposition of InParTen2 was faster regardless of the initial density when the initial dimension was larger and the amount of added data was small.
Comparison of execution time (min) for different initial sizes.
Tensor Synthetic Datasets | 100×100×100 | 500×500×500 | 800×800×800 | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|
Initial dimension size (I=J=K) | 10 | 50 | 80 | 100 | 250 | 400 | 100 | 400 | 640 | ||
Initial Density 60% | Static | IM-PARAFAC | 1.76 | 1.62 | 2.66 | 126.84 | 131.49 | 270.04 | 540.79 | 543.60 | 1050.21 |
SamBaTen (CP-ALS) | 1.97 | 1.98 | 2.77 | 235.55 | 211.94 | 553.88 | 1398.33 | 1323.99 | 2416.36 | ||
InParTen2-Static | 0.74 | 0.77 | 1.08 | 36.94 | 48.28 | 72.14 | 180.63 | 177.64 | 357.48 | ||
Dynamic | SamBaTen (Incremental) | 1.16 | 1.13 | 1.21 | 42.97 | 66.47 | 89.53 | 351.21 | 477.21 | 652.03 | |
InParTen2 | |||||||||||
Initial Density 20% | Static | IM-PARAFAC | 1.76 | 1.27 | 1.39 | 125.80 | 98.03 | 110.03 | 570.94 | 397.80 | 459.99 |
SamBaTen (CP-ALS) | 1.97 | 1.98 | 2.72 | 264.09 | 194.66 | 265.96 | 1403.27 | 973.12 | 990.81 | ||
InParTen2-Static | 0.97 | 0.86 | 0.88 | 46.03 | 35.58 | 39.55 | 146.53 | 108.07 | 119.39 | ||
Dynamic | SamBaTen (Incremental) | 1.19 | 1.04 | 0.99 | 39.69 | 35.18 | 30.91 | 293.2 | 250.71 | 182.81 | |
InParTen2 |
Table 7 compares the relative errors for different initial dimensions. The relative errors of InParTen2 and the other tensor decomposition tools are similar. Overall, the relative error decreased as the overall density of the dataset increased. Datasets with an initial density of 20% were sparse overall, and therefore they had similar accuracy regardless of the initial size.
Comparison of relative errors for different initial sizes.
Tensor Synthetic Datasets | 100×100×100 | 500×500×500 | 800×800×800 | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|
Initial size (I=J=K) | 10 | 50 | 80 | 100 | 250 | 400 | 100 | 400 | 640 | ||
Initial Density 60% | Static | IM-PARAFAC | 0.82 | 0.69 | 0.57 | 0.82 | 0.73 | 0.57 | 0.83 | 0.70 | 0.57 |
SamBaTen (CP-ALS) | 0.82 | 0.71 | 0.56 | 0.82 | 0.72 | 0.57 | 0.83 | 0.72 | 0.57 | ||
InParTen2-Static | 0.82 | 0.69 | 0.56 | 0.82 | 0.72 | 0.57 | 0.82 | 0.72 | 0.56 | ||
Dynamic | SamBaTen (Incremental) | 0.85 | 0.78 | 0.57 | 0.83 | 0.80 | 0.57 | 0.83 | 0.79 | 0.58 | |
InParTen2 | 0.82 | 0.69 | 0.56 | 0.82 | 0.71 | 0.57 | 0.83 | 0.72 | 0.56 | ||
Initial Density 20% | Static | IM-PARAFAC | 0.82 | 0.82 | 0.82 | 0.83 | 0.83 | 0.83 | 0.83 | 0.83 | 0.83 |
SamBaTen (CP-ALS) | 0.82 | 0.82 | 0.82 | 0.82 | 0.83 | 0.83 | 0.83 | 0.83 | 0.83 | ||
InParTen2-Static | 0.82 | 0.82 | 0.82 | 0.83 | 0.83 | 0.83 | 0.83 | 0.83 | 0.83 | ||
Dynamic | SamBaTen (Incremental) | 0.84 | 0.85 | 0.83 | 0.85 | 0.98 | 0.87 | 1.02 | 0.88 | 0.86 | |
InParTen2 | 0.82 | 0.82 | 0.83 | 0.82 | 0.84 | 0.84 | 0.84 | 0.84 | 0.84 |
Finally, we compared the execution time and relative error using real-world datasets; Table 8 shows the results. InParTen2 was overall faster: InParTen2-Static was approximately 1.03 to 3.6 times as fast as IM-PARAFAC and up to 17 times as fast as the CP-ALS of SamBaTen. The incremental InParTen2 had a shorter execution time than the static tensor decompositions; in addition, it was approximately 1.5 to 18 times as fast as SamBaTen, and unlike SamBaTen, it could handle large tensors. Furthermore, InParTen2 exhibited relative errors similar to those of the other methods. The CP-ALS and incremental versions of SamBaTen exhibited larger relative errors than InParTen2 because they sample the additional data. However, on the MovieLens dataset, InParTen2 differed by 0.04 to 0.05 from InParTen2-Static and IM-PARAFAC; the relative error apparently increases because the number of tensor entries grows substantially.
Comparison of execution time and relative error using real-world datasets.
Execution Time(min) | Relative Error | ||||||
---|---|---|---|---|---|---|---|
YELP | MovieLens | Netflix | YELP | MovieLens | Netflix | ||
Static | IM-PARAFAC | 7.44 | 76.03 | 924.91 | 0.98 | 0.75 | 0.79 |
SamBaTen (CP-ALS) | 44.71 | 518.73 | N.A. | 0.98 | 0.99 | N.A. | |
InParTen2-Static | 2.047 | 29.03 | 893.33 | 0.96 | 0.76 | 0.79 | |
Dynamic | SamBaTen (incremental) | 36.42 | 36.65 | N.A. | 0.98 | 0.91 | N.A. |
InParTen2 | 1.94 | 23.50 | 246.63 | 0.97 | 0.80 | 0.80 |
In this paper, we proposed InParTen2, a distributed incremental PARAFAC decomposition algorithm for multi-dimensional tensors. InParTen2 reduces the re-calculation cost associated with the addition of new tensors, and its accuracy is similar to that obtained by decomposing the entire tensor. The proposed method considers that all dimensions may grow over time: when new tensors are added, InParTen2 decomposes only the new tensors based on the existing factor matrices. We implemented InParTen2 on the Apache Spark platform, using an incremental tensor decomposition method suited to Apache Spark so that fast execution can be achieved and large tensors can be handled. Moreover, InParTen2 alleviates the memory overflow and intermediate data explosion problems that tensor decomposition algorithms cause in in-memory big-data systems. The performance of InParTen2 was evaluated by comparing its execution time and relative error with those of existing tensor decomposition methods. The results confirmed that InParTen2 can process large tensors and reduce re-calculation cost. In particular, InParTen2 is considerably faster than existing incremental tensor tools and is memory-efficient.
In the future, we will study Tucker-based incremental tensor decomposition based on Apache Spark. We will extend the proposed method to tensor completion, which can handle missing tensor entries. Furthermore, we will consider real-world applications, such as recommendation systems and social network analysis.