{"title": "Towards a general independent subspace analysis", "book": "Advances in Neural Information Processing Systems", "page_first": 1361, "page_last": 1368, "abstract": null, "full_text": "Towards a general independent subspace analysis\n\nFabian J. Theis Max Planck Institute for Dynamics and Self-Organisation & Bernstein Center for Computational Neuroscience Bunsenstr. 10, 37073 Gottingen, Germany fabian@theis.name\n\nAbstract\nThe increasingly popular independent component analysis (ICA) may only be applied to data following the generative ICA model in order to guarantee algorithmindependent and theoretically valid results. Subspace ICA models generalize the assumption of component independence to independence between groups of components. They are attractive candidates for dimensionality reduction methods, however are currently limited by the assumption of equal group sizes or less general semi-parametric models. By introducing the concept of irreducible independent subspaces or components, we present a generalization to a parameter-free mixture model. Moreover, we relieve the condition of at-most-one-Gaussian by including previous results on non-Gaussian component analysis. After introducing this general model, we discuss joint block diagonalization with unknown block sizes, on which we base a simple extension of JADE to algorithmically perform the subspace analysis. Simulations confirm the feasibility of the algorithm.\n\n1 Independent subspace analysis\nA random vector Y is called an independent component of the random vector X, if there exists an invertible matrix A and a decomposition X = A(Y, Z) such that Y and Z are stochastically independent. The goal of a general independent subspace analysis (ISA) or multidimensional independent component analysis is the decomposition of an arbitrary random vector X into independent components. 
If X is to be decomposed into one-dimensional components, this coincides with ordinary independent component analysis (ICA). Similarly, if the independent components are required to be of the same dimension k, then this is denoted by multidimensional ICA of fixed group size k or simply k-ISA. So 1-ISA is equivalent to ICA.\n\n1.1 Why extend ICA?\nAn important structural aspect in the search for decompositions is the knowledge of the number of solutions, i.e. the indeterminacies of the problem. Without it, the result of any ICA or ISA algorithm cannot be compared with other solutions, so for instance blind source separation (BSS) would be impossible. Clearly, given an ISA solution, invertible transforms in each component (scaling matrices L) as well as permutations of components of the same dimension (permutation matrices P) give again an ISA of X. And indeed, in the special case of ICA, scaling and permutation are already all indeterminacies, given that at most one Gaussian is contained in X [6]. This is one of the key theoretical results in ICA, allowing the usage of ICA for solving BSS problems and hence stimulating many applications. It has been shown that also for k-ISA, scalings and permutations as above are the only indeterminacies [11], given some additional rather weak restrictions on the model. However, a serious drawback of k-ISA (and hence of ICA) lies in the fact that the requirement of a fixed group size k does not allow us to apply this analysis to an arbitrary random vector. Indeed,\n\n[Figure: boxplots of the crosstalking error for FastICA, JADE and Extended Infomax]\n\nFigure 1: Applying ICA to a random vector X = AS that does not fulfill the ICA model; here S is chosen to consist of a two-dimensional and a one-dimensional irreducible component. Shown are the statistics over 100 runs of the Amari error of the random original and the reconstructed mixing matrix using the three ICA-algorithms FastICA, JADE and Extended Infomax. 
Clearly, the original mixing matrix could not be reconstructed in any of the experiments. However, interestingly, the latter two algorithms do indeed find an ISA up to permutation, which will be explained in section 3.\n\ntheoretically speaking, it may only be applied to random vectors following the k-ISA blind source separation model, which means that they have to be mixtures of a random vector that consists of independent groups of size k. If this is the case, uniqueness up to permutation and scaling holds as noted above; however, if k-ISA is applied to an arbitrary random vector, a decomposition into groups that are only `as independent as possible' cannot be unique and depends on the contrast and the algorithm. In the literature, ICA is often applied to find representations fulfilling the independence condition as well as possible; however, care has to be taken: the strong uniqueness result is no longer valid, and the results may depend on the algorithm, as illustrated in figure 1. This work aims at finding an ISA model that allows applicability to any random vector. After reviewing previous approaches, we will provide such a model together with a corresponding uniqueness result and a preliminary algorithm.\n\n1.2 Previous approaches to ISA for dependent component analysis\nGeneralizations of the ICA model that include dependencies between multiple one-dimensional components have been studied for quite some time. ISA in the terminology of multidimensional ICA was first introduced by Cardoso [4] using geometrical motivations. His model, as well as the related but independently proposed factorization of multivariate function classes [9], is quite general; however, no identifiability results were presented, and applicability to an arbitrary random vector was unclear. Later, in the special case of equal group sizes (k-ISA), uniqueness results were extended from the ICA theory [11]. Algorithmic enhancements in this setting have recently been studied by [10]. 
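The crosstalking (Amari) error used in figure 1 quantifies how far the composed mixing-separating matrix P = Â⁻¹A is from a scaled permutation. A minimal sketch of one common normalization of this index (the function name and the particular normalization are our choices, not taken from the paper):

```python
import numpy as np

def amari_error(P):
    """Crosstalking (Amari) index of P; zero iff P is a scaled permutation."""
    Q = np.abs(np.asarray(P, dtype=float))
    n = Q.shape[0]
    # each row/column should be dominated by a single entry
    row = (Q / Q.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    col = (Q / Q.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return float(row.sum() + col.sum()) / (2.0 * n)
```

For a scaled permutation matrix the index is exactly zero; it grows as cross-talk between components increases.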
Moreover, if the observations contain additional structures such as spatial or temporal structures, these may be used for the multidimensional separation [13]. Hyvärinen and Hoyer presented a special case of k-ISA by combining it with invariant feature subspace analysis [7]. They model the dependence within a k-tuple explicitly and are therefore able to propose more efficient algorithms without having to resort to the problematic multidimensional density estimation. A related relaxation of the ICA assumption is given by topographic ICA [8], where dependencies between all components are assumed and modelled along a topographic structure (e.g. a 2-dimensional grid). Bach and Jordan [2] formulate ISA as a component clustering problem, which necessitates a model for inter-cluster independence and intra-cluster dependence. For the latter, they propose to use a tree structure as employed by their tree-dependent component analysis. Together with inter-cluster independence, this implies a search for a transformation of the mixtures into a forest, i.e. a set of disjoint trees. However, the above models are all semi-parametric and hence not fully blind. In the following, no additional structures are necessary for the separation.\n\n1.3 General ISA\nDefinition 1.1. A random vector S is said to be irreducible if it contains no lower-dimensional independent component. An invertible matrix W is called a (general) independent subspace analysis of X if WX = (S1, . . . , Sk) with pairwise independent, irreducible random vectors Si. Note that in this case, the Si are independent components of X. The idea behind this definition is that in contrast to ICA and k-ISA, we do not fix the size of the groups Si in advance. Of course, some restriction is necessary, otherwise no decomposition would be enforced at all. This restriction is realized by allowing only irreducible components. 
The advantage of this formulation now is that it can clearly be applied to any random vector, although of course a trivial decomposition might be the result in the case of an irreducible random vector. Obvious indeterminacies of an ISA of X are, as mentioned above, scalings, i.e. invertible transformations within each Si, and permutation of Si of the same dimension¹. These are already all indeterminacies, as shown by the following theorem, which extends previous results in the case of ICA [6] and k-ISA [11], where also the additional slight assumptions on square-integrability, i.e. on existing covariance, have been made. Theorem 1.2. Given a random vector X with existing covariance and no Gaussian independent component, then an ISA of X exists and is unique except for scaling and permutation. Existence holds trivially, but uniqueness is not obvious. Due to the limited space, we only give a short sketch of the proof in the following. The uniqueness result can easily be formulated as a subspace extraction problem, and theorem 1.2 follows readily from Lemma 1.3. Let S = (S1, . . . , Sk) be a square-integrable decomposition of S into irreducible independent components Si. If X is an irreducible component of S, then X ∼ Si for some i. Here the equivalence relation ∼ denotes equality except for an invertible transformation. The following two lemmata each give a simplification of lemma 1.3 by ordering the components Si according to their dimensions. Some care has to be taken when showing that lemma 1.5 implies lemma 1.4. Lemma 1.4. Let S and X be defined as in lemma 1.3. In addition assume that dim Si = dim X for i ≤ l and dim Si < dim X for i > l. Then X ∼ Si for some i ≤ l. Lemma 1.5. Let S and X be defined as in lemma 1.4, and let l = 1 and k = 2. Then X ∼ S1. In order to prove lemma 1.5 (and hence the theorem), it is sufficient to show the following lemma: Lemma 1.6. Let S = (S1, S2) with S1 irreducible and m := dim S1 > dim S2 =: n. 
If X = AS is again irreducible for some m × (m + n)-matrix A, then (i) the left m × m-submatrix of A is invertible, and (ii) if X is an independent component of S, the right m × n-submatrix of A vanishes. (i) follows after some linear algebra, and is necessary to show the more difficult part (ii). For this, we follow the ideas presented in [12] using factorization of the joint characteristic function of S.\n\n1.4 Dealing with Gaussians\nIn the previous section, Gaussians had to be excluded (or at most one was allowed) in order to avoid additional indeterminacies. Indeed, any orthogonal transformation of two decorrelated, hence independent Gaussians is again independent, so clearly such a strong identification result would not be possible. Recently, a general decomposition model dealing with Gaussians was proposed in the form of the so-called non-Gaussian subspace analysis (NGSA) [3]. It tries to detect a whole non-Gaussian subspace within the data, and no assumption of independence within the subspace is made. More precisely, given a random vector X, a factorization X = AS with an invertible matrix A, S = (SN, SG) and SN a square-integrable m-dimensional random vector is called an m-decomposition of X if SN and SG are stochastically independent and SG is Gaussian. In this case, X is said to be m-decomposable. X is denoted to be minimally n-decomposable if X is not (n − 1)-decomposable. According to our previous notation, SN and SG are independent components of X. It has been shown that the subspaces of such decompositions are unique [12]:\n\n¹ Note that scaling here implies a basis change in the component Si, so for example in the case of a two-dimensional source component, this might be rotation and shearing. In the example later in figure 3, these indeterminacies can easily be seen by comparing true and estimated sources.\n\nTheorem 1.7 (Uniqueness of NGSA). The mixing matrix A of a minimal decomposition is unique except for transformations in each of the two subspaces. 
Moreover, explicit algorithms can be constructed for identifying the subspaces [3]. This result enables us to generalize theorem 1.2 and to get a general decomposition theorem, which characterizes solutions of ISA. Theorem 1.8 (Existence and Uniqueness of ISA). Given a random vector X with existing covariance, an ISA of X exists and is unique except for permutation of components of the same dimension and invertible transformations within each independent component and within the Gaussian part. Proof. Existence is obvious. Uniqueness follows after first applying theorem 1.7 to X and then theorem 1.2 to the non-Gaussian part.\n\n2 Joint block diagonalization with unknown block-sizes\nJoint diagonalization has become an important tool in ICA-based BSS (used for example in JADE) or in BSS relying on second-order temporal decorrelation. The task of (real) joint diagonalization (JD) of a set of symmetric real n × n matrices M := {M1, . . . , MK} is to find an orthogonal matrix Ê such that Ê^T Mk Ê is diagonal for all k = 1, . . . , K, i.e. to minimize f(Ê) := Σ_{k=1}^K ||Ê^T Mk Ê − diagM(Ê^T Mk Ê)||_F^2 with respect to the orthogonal matrix Ê, where diagM(M) produces a matrix where all off-diagonal elements of M have been set to zero, and ||M||_F^2 := tr(MM^T) denotes the squared Frobenius norm. The Frobenius norm is invariant under conjugation by an orthogonal matrix, so minimizing f is equivalent to maximizing g(Ê) := Σ_{k=1}^K ||diag(Ê^T Mk Ê)||^2, where now diag(M) := (m_ii)_i denotes the diagonal of M. For the actual minimization of f, respectively maximization of g, we will use the common approach of Jacobi-like optimization by iterative applications of Givens rotations in two coordinates [5].\n\n2.1 Generalization to blocks\nIn the following we will use a generalization of JD in order to solve ISA problems. Instead of fully diagonalizing all n × n matrices Mk ∈ M, in joint block diagonalization (JBD) of M we want to determine E such that E^T Mk E is block-diagonal. 
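Since ||Ê^T Mk Ê||_F = ||Mk||_F for orthogonal Ê, the sum f + g is constant over O(n), which is why minimizing f and maximizing g are equivalent. The two costs and the Jacobi-like sweep over Givens rotations can be sketched as follows (a sketch, not the exact implementation of [5]; for each pair (i, j) the optimal rotation angle is obtained in closed form from a 2 × 2 eigenproblem, as in Jacobi-angle joint diagonalization):

```python
import numpy as np

def jd_costs(E, Ms):
    """Off-diagonal cost f and diagonal gain g; f + g = sum_k ||M_k||_F^2."""
    f = g = 0.0
    for M in Ms:
        T = E.T @ M @ E
        d = np.diag(T)
        g += float(d @ d)                       # ||diag(T)||^2
        f += float(np.sum(T * T)) - float(d @ d)  # ||T - diagM(T)||_F^2
    return f, g

def joint_diagonalize(Ms, sweeps=100, tol=1e-12):
    """Jacobi-like JD of symmetric matrices via Givens rotations."""
    Ms = [np.array(M, dtype=float) for M in Ms]
    n = Ms[0].shape[0]
    E = np.eye(n)
    for _ in range(sweeps):
        changed = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                # rows h_k = (on_k, off_k); rotating by theta turns
                # on into on*cos(2t) + off*sin(2t)
                h = np.array([[M[i, i] - M[j, j], 2.0 * M[i, j]] for M in Ms])
                G = h.T @ h
                if G[0, 0] + G[1, 1] <= tol:
                    continue
                _, V = np.linalg.eigh(G)
                x, y = V[:, -1]          # dominant eigenvector = (cos 2t, sin 2t)
                if x < 0.0:
                    x, y = -x, -y
                c = np.sqrt((1.0 + x) / 2.0)
                s = y / (2.0 * c)
                if abs(s) > tol:
                    changed = True
                    R = np.eye(n)
                    R[i, i] = R[j, j] = c
                    R[i, j], R[j, i] = -s, s
                    Ms = [R.T @ M @ R for M in Ms]
                    E = E @ R
        if not changed:
            break
    return E, Ms
```

For a set of jointly diagonalizable matrices this recovers a common diagonalizer up to scaling and permutation of columns.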
Depending on the application, we fix the block structure in advance or try to determine it from M. We are not interested in the order of the blocks, so the block structure is uniquely specified by fixing a partition of n, i.e. a way of writing n as a sum of positive integers, where the order of the addends is not significant. So let² n = m1 + . . . + mr with m1 ≤ m2 ≤ . . . ≤ mr and set m := (m1, . . . , mr) ∈ N^r. An n × n matrix is said to be m-block diagonal if it is of the form\n\n  [ D1        0 ]\n  [     ...     ]\n  [ 0        Dr ]\n\nwith arbitrary mi × mi matrices Di. As generalization of JD in the case of known block structure, we can formulate the joint m-block diagonalization (m-JBD) problem as the minimization of f^m(Ê) := Σ_{k=1}^K ||Ê^T Mk Ê − diagM^m(Ê^T Mk Ê)||_F^2 with respect to the orthogonal matrix Ê, where diagM^m(M) produces an m-block diagonal matrix by setting all other elements of M to zero. In practice, due to estimation errors, such an E will not exist, so we speak of approximate JBD and imply minimizing some error measure on non-block-diagonality. Indeterminacies of any m-JBD are m-scaling, i.e. multiplication by an m-block diagonal matrix from the right, and m-permutation, defined by a permutation matrix that only swaps blocks of the same size. Finally, we speak of general JBD if we search for a JBD but no block structure is given; instead it is to be determined from the matrix set. For this it is necessary to require a block structure of maximal length, otherwise trivial solutions or `in-between' solutions could exist (and obviously contain high indeterminacies). Formally, E is said to be a (general) JBD of M if (E, m) = argmax_{m | ∃E: f^m(E)=0} |m|.\n\n² We do not use the convention from Ferrers graphs of specifying partitions in decreasing order, as a visualization of increasing block-sizes seems to be preferable in our setting. 
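The projection diagM^m simply keeps the entries inside the blocks prescribed by the partition m, so f^m sums the squared entries outside them. A small numpy sketch (function names ours):

```python
import numpy as np

def block_mask(m):
    """0/1 mask of the m-block-diagonal pattern for partition m = (m1, ..., mr)."""
    n = sum(m)
    mask = np.zeros((n, n))
    start = 0
    for size in m:
        mask[start:start + size, start:start + size] = 1.0
        start += size
    return mask

def f_m(E, Ms, m):
    """Joint m-block-diagonality cost: squared entries outside the blocks."""
    off = 1.0 - block_mask(m)
    return sum(float(np.sum(((E.T @ M @ E) * off) ** 2)) for M in Ms)
```

For an already m-block-diagonal matrix and E = I the cost is exactly zero, while a finer partition counts the off-block entries.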
In practice, due to errors, a true JBD would always result in the trivial decomposition m = (n), so we define an approximate general JBD by requiring f^m(E) < ε for some fixed constant ε > 0 instead of f^m(E) = 0.\n\n2.2 JBD by JD\nA few algorithms to actually perform JBD have been proposed, see [1] and references therein. In the following we will simply perform joint diagonalization and then permute the columns of E to achieve block-diagonality -- in experiments this turns out to be an efficient solution to JBD [1]. This idea has been formulated in a conjecture [1], essentially claiming that a minimum of the JD cost function f already is a JBD, i.e. a minimum of the function f^m up to a permutation matrix. Indeed, the conjecture requires use of the Jacobi-update algorithm from [5], but this is not necessary, and we can prove the conjecture partially: We want to show that JD implies JBD up to permutation, i.e. if E is a minimum of f, then there exists a permutation P such that f^m(EP) = 0 (given existence of a JBD of M). But of course f(EP) = f(E), so we will show why (certain) JBD solutions are minima of f. However, JD might have additional minima. First note that clearly not every JBD minimizes f, only those such that in each block of size mk, f(E) when restricted to the block is maximal over E ∈ O(mk). We will call such a JBD block-optimal in the following. Theorem 2.1. Any block-optimal JBD of M (zero of f^m) is a local minimum of f. Proof. Let E ∈ O(n) be block-optimal with f^m(E) = 0. We have to show that E is a local minimum of f, or equivalently a local maximum of the squared diagonal sum g. After substituting each Mk by E^T Mk E, we may already assume that Mk is m-block diagonal, so we have to show that E = I is a local maximum of g. 
Consider the elementary Givens rotation Gij(ε), defined for i < j and ε ∈ (−1, 1) as the orthogonal matrix whose diagonal elements are all 1 except for the two elements √(1 − ε²) in rows i and j, and whose off-diagonal elements are all 0 except for the two elements ε and −ε at (i, j) and (j, i), respectively. It can be used to construct local coordinates (ε12, ε13, . . . , εn−1,n) of the d := n(n − 1)/2-dimensional manifold O(n) at I. Expanding g in these coordinates shows that the second derivative h of g at 0 is negative definite in each direction εij. Altogether we get a negative definite h at 0 except for `trivial directions', and hence a local maximum at 0.\n\n2.3 Recovering the permutation\nIn order to perform JBD, we therefore only have to find a JD E of M. What is left, according to the above theorem, is to find a permutation matrix P such that EP block-diagonalizes M. In the case of known block-order m, we can employ similar techniques as used in [1, 10], which essentially find P by some combinatorial optimization.\n\n[Figure: 40 × 40 matrix images, panels (a) (unknown) block diagonal M1, (b) Ê⁻¹E without recovered permutation, (c) Ê⁻¹E]\n\nFigure 2: Performance of the proposed general JBD algorithm in the case of the (unknown) block-partition 40 = 1 + 2 + 2 + 3 + 3 + 5 + 6 + 6 + 6 + 6 in the presence of noise with SNR of 5dB. The product Ê⁻¹E of the inverse of the estimated block diagonalizer and the original one is an m-block diagonal matrix except for permutation within groups of the same sizes, as claimed in section 2.2.\n\nIn the case of unknown block-size, we propose to use the following simple permutation-recovery algorithm: consider the mean diagonalized matrix D := K⁻¹ Σ_{k=1}^K Ê^T Mk Ê. 
Due to the assumption that M is m-block-diagonalizable (with unknown m), each Ê^T Mk Ê and hence also D must be m-block-diagonal except for a permutation P, so it must have the corresponding number of zeros in each column and row. In the approximate JBD case, thresholding with a threshold ε is necessary, whose choice is non-trivial. We propose using algorithm 1 to recover the permutation; we denote its resulting permuted matrix by P(D) when applied to the input D. P(D) is constructed from the possibly thresholded D by iteratively permuting columns and rows in order to guarantee that all non-zeros of D are clustered along the diagonal as closely as possible. This recovers the permutation as well as the partition m of n.\n\nAlgorithm 1: Block-diagonality permutation finder\nInput: (n × n)-matrix D\nOutput: block-diagonal matrix P(D) := D̂ such that D̂ = PDP^T for a permutation matrix P\nD̂ ← D\nfor i ← 1 to n do\n  repeat\n    if j0 := min{j | j ≥ i and d̂_ij = 0 and d̂_ji = 0} exists then\n      if k0 := min{k | k > j0 and (d̂_ik ≠ 0 or d̂_ki ≠ 0)} exists then\n        swap column j0 of D̂ with column k0\n        swap row j0 of D̂ with row k0\n  until no swap has occurred\n\nWe illustrate the performance of the proposed JBD algorithm as follows: we generate a set of K = 100 m-block-diagonal matrices Dk of dimension 40 × 40 with m = (1, 2, 2, 3, 3, 5, 6, 6, 6, 6). They have been generated in blocks of size m with coefficients chosen uniformly at random from [−1, 1], and symmetrized by Dk ← (Dk + Dk^T)/2. After that, they have been mixed by a random orthogonal mixing matrix E ∈ O(40), i.e. Mk := E Dk E^T + N, where N is a noise matrix with independent Gaussian entries such that the resulting signal-to-noise ratio is 5dB. Application of the JBD algorithm from above to {M1, . . .
, MK} with threshold ε = 0.1 correctly recovers the block sizes, and the estimated block diagonalizer Ê equals E up to m-scaling and permutation, as illustrated in figure 2.\n\n3 SJADE -- a simple algorithm for general ISA\nAs usual, by preprocessing of the observations X by whitening we may assume that Cov(X) = I. The indeterminacies allow scaling transformations in the sources, so without loss of generality let also Cov(S) = I. Then I = Cov(X) = A Cov(S) A^T = AA^T, so A is orthogonal. Due to the ISA assumptions, the fourth-order cross cumulants of the sources have to be trivial between different groups, and within the Gaussians. In order to find transformations of the mixtures fulfilling this property, we follow the idea of the JADE algorithm, but now in the ISA setting.\n\n[Figure: scatter plots, panels (a) S2, (b) S3, (c) S4, (d) S5, (e) Â⁻¹A, (f) (Ŝ1, Ŝ2), (g) histogram of Ŝ3, (h) Ŝ4, (i) Ŝ5, (j) Ŝ6]\n\nFigure 3: Example application of general ISA for unknown sizes m = (1, 2, 2, 2, 3). Shown are the scatter plots, i.e. densities, of the source components and the mixing-separating map Â⁻¹A. 
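Algorithm 1 from section 2.3 can be transcribed directly into code; a sketch with an explicit zero-threshold eps (function name ours):

```python
import numpy as np

def block_permutation(D, eps=1e-9):
    """Algorithm 1: permute rows/columns of D symmetrically so that its
    non-zeros cluster along the diagonal; returns (permuted D, permutation)."""
    D = np.array(D, dtype=float)
    n = D.shape[0]
    perm = list(range(n))
    for i in range(n):
        while True:
            # first index j >= i decoupled from i in both row and column
            j0 = next((j for j in range(i, n)
                       if abs(D[i, j]) <= eps and abs(D[j, i]) <= eps), None)
            if j0 is None:
                break
            # first later index still coupled to i
            k0 = next((k for k in range(j0 + 1, n)
                       if abs(D[i, k]) > eps or abs(D[k, i]) > eps), None)
            if k0 is None:
                break  # no swap occurred
            D[:, [j0, k0]] = D[:, [k0, j0]]
            D[[j0, k0], :] = D[[k0, j0], :]
            perm[j0], perm[k0] = perm[k0], perm[j0]
    return D, perm
```

Applied to a symmetrically scrambled block-diagonal matrix, this recovers a block-diagonal form; the resulting block boundaries give the partition m.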
We perform JBD of the (whitened) contracted quadricovariance matrices, defined by Cij(X) := E(X^T Eij X · XX^T) − Eij − Eij^T − tr(Eij) RX. Here RX := Cov(X) = I due to the whitening, and Eij, 1 ≤ i, j ≤ n, is a set of eigen-matrices of Cij. One simple choice is to use n² matrices Eij with zeros everywhere except a 1 at index (i, j). More elaborate choices of eigen-matrices (with only n(n + 1)/2 or even n entries) are possible. The resulting algorithm, subspace-JADE (SJADE), not only performs NGCA by grouping Gaussians as one-dimensional components with trivial Cii's, but also automatically finds the subspace partition m using the general JBD algorithm from section 2.3.\n\n4 Experimental results\nIn a first example, we consider a general ISA problem in dimension n = 10 with the unknown partition m = (1, 2, 2, 2, 3). In order to generate 2- and 3-dimensional irreducible random vectors, we decided to follow the nice visual ideas from [10] and to draw samples from a density following a known shape -- in our case 2d-letters or 3d-geometrical shapes. The chosen source densities are shown in figure 3(a-d). Another 1-dimensional source following a uniform distribution was constructed. Altogether 10^4 samples were used. The sources S were mixed by a mixing matrix A with coefficients uniformly randomly sampled from [−1, 1] to give mixtures X = AS. The recovered mixing matrix Â was then estimated using the above block-JADE algorithm with unknown block size; we observed that the method is quite sensitive to the choice of the threshold (here ε = 0.015). Figure 3(e) shows the composed mixing-separating system Â⁻¹A; clearly the matrices are equal except for block permutation and scaling, which experimentally confirms theorem 1.8. The algorithm found a partition m̂ = (1, 1, 1, 2, 2, 3), so one 2d-source was misinterpreted as two 1d-sources, but by using previous knowledge, combination of the correct two 1d-sources yields the original 2d-source. 
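For the simple eigen-matrix choice Eij = ei ej^T, the contracted quadricovariance matrices can be estimated from samples as follows (a sketch assuming zero-mean, whitened data; the function name is ours):

```python
import numpy as np

def quadricovariance_matrices(X):
    """Estimate C_ij = E[(x^T E_ij x) x x^T] - E_ij - E_ij^T - tr(E_ij) I
    for E_ij = e_i e_j^T from whitened, zero-mean samples X of shape (n, T)."""
    n, T = X.shape
    I = np.eye(n)
    Cs = []
    for i in range(n):
        for j in range(n):
            w = X[i] * X[j]            # x^T E_ij x = x_i x_j, per sample
            C = (X * w) @ X.T / T      # empirical E[x_i x_j x x^T]
            Eij = np.outer(I[i], I[j])
            Cs.append(C - Eij - Eij.T - (1.0 if i == j else 0.0) * I)
    return Cs
```

Each Cij is symmetric; for jointly Gaussian whitened data all Cij vanish asymptotically, which is why the Gaussian part shows up as trivial blocks in the JBD.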
The resulting recovered sources Ŝ := Â⁻¹X, figures 3(f-j), then equal the original sources except for permutation and scaling within the sources -- which in the higher-dimensional cases implies transformations such as rotation of the underlying images or shapes. When applying ICA (1-ISA) to the above mixtures, we cannot expect to recover the original sources, as explained in figure 1; however, some algorithms might recover the sources up to permutation. Indeed, SJADE equals JADE with additional permutation recovery, because the joint block diagonalization is performed using joint diagonalization. This explains why JADE retrieves meaningful components even in this non-ICA setting, as observed in [4]. In a second example, we illustrate how the algorithm deals with Gaussian sources, i.e. how the subspace JADE also includes NGCA. For this we consider the case n = 5, m = (1, 1, 1, 2) and sources with two Gaussians, one uniform and a 2-dimensional irreducible component as before; 10^5 samples were drawn. We perform 100 Monte-Carlo simulations with random mixing matrix A, and apply SJADE with ε = 0.01. The recovered mixing matrix Â is compared with A by taking the ad-hoc measure δ(P) := Σ_{i=1}^3 Σ_{j=1}^2 (p_{i,3+j}² + p_{3+j,i}²) for P := Â⁻¹A, which sums the squared cross-talk between the non-Gaussian and the Gaussian coordinates. Indeed, we get nearly perfect recovery in 99 out of 100 runs; the median of δ(P) is very low at 0.0083. A single run diverges with δ(P) = 3.48. In order to show that the algorithm really separates the Gaussian part from the other components, we compare the recovered source kurtoses. The median kurtoses are −0.0006 ± 0.02, −0.003 ± 0.3, −1.2 ± 0.3, −1.2 ± 0.2 and −1.6 ± 0.2. The first two components have kurtoses close to zero, so they are the two Gaussians, whereas the third component has a kurtosis of around −1.2, which equals the kurtosis of a uniform density. 
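The source-type check above relies on the excess kurtosis, which is 0 for a Gaussian and −6/5 = −1.2 for a uniform density. A minimal sketch of the sample estimator:

```python
import numpy as np

def excess_kurtosis(s):
    """Sample excess kurtosis E[(s - mu)^4] / sigma^4 - 3 (zero for a Gaussian)."""
    s = np.asarray(s, dtype=float)
    m = s.mean()
    v = ((s - m) ** 2).mean()
    return float(((s - m) ** 4).mean() / v ** 2 - 3.0)
```

Estimating this on the recovered components is what distinguishes the Gaussian subspace (kurtosis near 0) from, e.g., uniform sources (kurtosis near −1.2).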
This confirms the applicability of the algorithm in the general, noisy ISA setting.\n\n5 Conclusion\nPrevious approaches for independent subspace analysis were restricted either to fixed group sizes or to semi-parametric models. In neither case was general applicability to any kind of mixture data set guaranteed, so blind source separation might fail. In the present contribution we introduce the concept of irreducible independent components and give an identifiability result for this general, parameter-free model, together with a novel arbitrary-subspace-size algorithm based on joint block diagonalization. As in ICA, the main uniqueness theorem is an asymptotic result (but includes the noisy case via NGCA). In practice, however, in the finite-sample case the general joint block diagonality only holds approximately due to estimation errors. Our simple solution in this contribution was to choose appropriate thresholds. But this choice is non-trivial, and adaptive methods are to be developed in future work.\n\nReferences\n[1] K. Abed-Meraim and A. Belouchrani. Algorithms for joint block diagonalization. In Proc. EUSIPCO 2004, pages 209-212, Vienna, Austria, 2004.\n[2] F.R. Bach and M.I. Jordan. Finding clusters in independent component analysis. In Proc. ICA 2003, pages 891-896, 2003.\n[3] G. Blanchard, M. Kawanabe, M. Sugiyama, V. Spokoiny, and K.-R. Müller. In search of non-Gaussian components of a high-dimensional distribution. JMLR, 7:247-282, 2006.\n[4] J.F. Cardoso. Multidimensional independent component analysis. In Proc. of ICASSP '98, Seattle, 1998.\n[5] J.F. Cardoso and A. Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM J. Mat. Anal. Appl., 17(1):161-164, January 1995.\n[6] P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287-314, 1994.\n[7] A. Hyvärinen and P.O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705-1720, 2000. 
[8] A. Hyvärinen, P.O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1525-1558, 2001.\n[9] J.K. Lin. Factorizing multivariate function classes. In Advances in Neural Information Processing Systems, volume 10, pages 563-569, 1998.\n[10] B. Póczos and A. Lőrincz. Independent subspace analysis using k-nearest neighborhood distances. In Proc. ICANN 2005, volume 3696 of LNCS, pages 163-168, Warsaw, Poland, 2005. Springer.\n[11] F.J. Theis. Uniqueness of complex and multidimensional independent component analysis. Signal Processing, 84(5):951-956, 2004.\n[12] F.J. Theis and M. Kawanabe. Uniqueness of non-Gaussian subspace analysis. In Proc. ICA 2006, pages 917-925, Charleston, USA, 2006.\n[13] R. Vollgraf and K. Obermayer. Multi-dimensional ICA to separate correlated sources. In Proc. NIPS 2001, pages 993-1000, 2001.\n", "award": [], "sourceid": 3004, "authors": [{"given_name": "Fabian", "family_name": "Theis", "institution": null}]}