DelayedTensor 1.7.0
Authors: Koki Tsuyuzaki [aut, cre]
Last modified: 2023-04-20 00:28:48.691296
Compiled: Wed May 3 16:26:36 2023
einsum
einsum is an easy and intuitive way to write tensor operations.
It was originally introduced by the NumPy package of Python
(https://numpy.org/doc/stable/reference/generated/numpy.einsum.html),
but similar tools inspired by NumPy have been implemented in other
languages (e.g. R, Julia).
In this vignette, we will first use the CRAN einsum package.
einsum is named after the
Einstein summation (https://en.wikipedia.org/wiki/Einstein_notation)
introduced by Albert Einstein,
which is a notational convention that implies summation over
a set of indexed terms in a formula.
Here, we consider a simple example of einsum: matrix multiplication.
A naive implementation of matrix multiplication
would look like the following for loop.
A <- matrix(runif(3*4), nrow=3, ncol=4)
B <- matrix(runif(4*5), nrow=4, ncol=5)
C <- matrix(0, nrow=3, ncol=5)
I <- nrow(A)
J <- ncol(A)
K <- ncol(B)
for(i in 1:I){
    for(j in 1:J){
        for(k in 1:K){
            C[i,k] = C[i,k] + A[i,j] * B[j,k]
        }
    }
}
Therefore, any programming language can implement this. However, when analyzing tensor data, such operations tend to become more complicated and bug-prone, because the order of the tensors is higher or more tensors are handled simultaneously. In addition, several programming languages, especially R, are known to slow down significantly when the code is written with for loops.
Obviously, in the case of R, matrix multiplication should be performed with the built-in operator (%*%), as shown below.
C <- A %*% B
However, more complex operations than matrix multiplication are not always provided by programming languages as standard.
einsum is a function that solves such a problem.
To put it simply, einsum is a wrapper for the for loop above.
Like the Einstein summation, it omits much of the notation, such as the for statements,
the array sizes (e.g. I, J, and K), the brackets (e.g. {}, (), and []),
and even the addition operator (+), and
keeps only the array subscripts (e.g. i, j, and k)
to express the tensor operation concisely, as follows.
suppressPackageStartupMessages(library("einsum"))
C <- einsum('ij,jk->ik', A, B)
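As a quick check (added here for illustration), the einsum result coincides with the built-in matrix multiplication.
# The two approaches should agree up to floating-point error
all.equal(C, A %*% B)  # expected: TRUE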
DelayedTensor
CRAN einsum is easy to use because the syntax is almost
the same as that of NumPy's einsum,
except that it prohibits the implicit mode that omits '->'.
It is extremely fast because the internal calculation
is actually performed in C++.
When the input tensor is huge, however,
it is not scalable because it assumes that the input is R's standard array.
Using einsum of DelayedTensor,
we can augment the CRAN einsum's functionality;
in DelayedTensor,
the input DelayedArray objects are divided into
multiple block tensors and the CRAN einsum
is incrementally applied to each block (block processing).
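As a side note, the size of the blocks used in this block processing is governed by the DelayedArray settings; the following is a minimal sketch of how to inspect and adjust it (assuming the standard DelayedArray block-size utilities apply here).
# Inspect and temporarily change the automatic block size (in bytes)
# used by DelayedArray-style block processing; these knobs are an
# assumption about the machinery involved, not DelayedTensor-specific API.
current <- DelayedArray::getAutoBlockSize()   # current block size in bytes
DelayedArray::setAutoBlockSize(1e8)           # e.g. use 100 MB blocks
DelayedArray::setAutoBlockSize(current)       # restore the previous setting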
A surprisingly large number of tensor operations can be handled
uniformly in einsum.
In more detail, einsum is capable of performing any tensor operation
that can be described by a combination of the following
three operations (https://ajcr.net/Basic-guide-to-einsum/):
multiplication, summation, and permutation.
Some typical operations are introduced below, using the following arrays and DelayedArray objects.
suppressPackageStartupMessages(library("DelayedTensor"))
suppressPackageStartupMessages(library("DelayedArray"))
arrA <- array(runif(3), dim=c(3))
arrB <- array(runif(3*3), dim=c(3,3))
arrC <- array(runif(3*4), dim=c(3,4))
arrD <- array(runif(3*3*3), dim=c(3,3,3))
arrE <- array(runif(3*4*5), dim=c(3,4,5))
darrA <- DelayedArray(arrA)
darrB <- DelayedArray(arrB)
darrC <- DelayedArray(arrC)
darrD <- DelayedArray(arrD)
darrE <- DelayedArray(arrE)
If the same subscript is written on both sides of ->,
einsum will simply output the object without any calculation.
einsum::einsum('i->i', arrA)
## [1] 0.9048788 0.3407412 0.1628881
DelayedTensor::einsum('i->i', darrA)
## <3> DelayedArray object of type "double":
## [1] [2] [3]
## 0.9048788 0.3407412 0.1628881
einsum::einsum('ij->ij', arrC)
## [,1] [,2] [,3] [,4]
## [1,] 0.6284030 0.57349160 0.9562938 0.2021698
## [2,] 0.8446129 0.06575913 0.3829822 0.0297084
## [3,] 0.2009556 0.46412640 0.9118148 0.5600719
DelayedTensor::einsum('ij->ij', darrC)
## <3 x 4> DelayedArray object of type "double":
## [,1] [,2] [,3] [,4]
## [1,] 0.62840304 0.57349160 0.95629384 0.20216980
## [2,] 0.84461286 0.06575913 0.38298223 0.02970840
## [3,] 0.20095560 0.46412640 0.91181483 0.56007192
einsum::einsum('ijk->ijk', arrE)
## , , 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.8401728 0.6593265 0.8032228 0.7125663
## [2,] 0.8551251 0.3558006 0.4584391 0.5898153
## [3,] 0.4971833 0.5759473 0.7299171 0.3684410
##
## , , 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.2352250 0.6065988 0.49921325 0.6781636
## [2,] 0.6980215 0.5327041 0.04333164 0.6593056
## [3,] 0.7780080 0.3101937 0.11860005 0.7735610
##
## , , 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5751239 0.5714769 0.5673325 0.1540379
## [2,] 0.7336729 0.7467977 0.9147919 0.9370798
## [3,] 0.7275806 0.7561579 0.2361029 0.1543299
##
## , , 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.01503281 0.8592549 0.8338968 0.1107274
## [2,] 0.42197272 0.9668403 0.7867080 0.2704990
## [3,] 0.58978791 0.4159721 0.9155250 0.8079606
##
## , , 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5838941 0.2760958 0.3331356 0.79371598
## [2,] 0.3098482 0.7449713 0.8095966 0.57584197
## [3,] 0.6185572 0.8659947 0.8342986 0.06295469
DelayedTensor::einsum('ijk->ijk', darrE)
## <3 x 4 x 5> DelayedArray object of type "double":
## ,,1
## [,1] [,2] [,3] [,4]
## [1,] 0.8401728 0.6593265 0.8032228 0.7125663
## [2,] 0.8551251 0.3558006 0.4584391 0.5898153
## [3,] 0.4971833 0.5759473 0.7299171 0.3684410
##
## ,,2
## [,1] [,2] [,3] [,4]
## [1,] 0.23522504 0.60659884 0.49921325 0.67816362
## [2,] 0.69802153 0.53270411 0.04333164 0.65930565
## [3,] 0.77800803 0.31019373 0.11860005 0.77356097
##
## ,,3
## [,1] [,2] [,3] [,4]
## [1,] 0.5751239 0.5714769 0.5673325 0.1540379
## [2,] 0.7336729 0.7467977 0.9147919 0.9370798
## [3,] 0.7275806 0.7561579 0.2361029 0.1543299
##
## ,,4
## [,1] [,2] [,3] [,4]
## [1,] 0.01503281 0.85925487 0.83389683 0.11072744
## [2,] 0.42197272 0.96684032 0.78670796 0.27049905
## [3,] 0.58978791 0.41597206 0.91552498 0.80796062
##
## ,,5
## [,1] [,2] [,3] [,4]
## [1,] 0.58389413 0.27609576 0.33313563 0.79371598
## [2,] 0.30984820 0.74497131 0.80959661 0.57584197
## [3,] 0.61855721 0.86599468 0.83429859 0.06295469
We can also extract the diagonal elements as follows.
einsum::einsum('ii->i', arrB)
## [1] 0.1863305 0.3527597 0.6460557
DelayedTensor::einsum('ii->i', darrB)
## <3> HDF5Array object of type "double":
## [1] [2] [3]
## 0.1863305 0.3527597 0.6460557
einsum::einsum('iii->i', arrD)
## [1] 0.5973563 0.7573835 0.9070781
DelayedTensor::einsum('iii->i', darrD)
## <3> HDF5Array object of type "double":
## [1] [2] [3]
## 0.5973563 0.7573835 0.9070781
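For an ordinary matrix, diagonal extraction is equivalent to base R's diag() (a check added for illustration).
# Difference between the einsum result and diag(); expected to be 0
max(abs(einsum::einsum('ii->i', arrB) - diag(arrB)))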
By using multiple arrays or DelayedArray objects as input and separating their subscripts with "," on the left side of ->, multiplication will be performed.
The Hadamard product, the element-wise product of the inputs,
can also be implemented in einsum.
einsum::einsum('i,i->i', arrA, arrA)
## [1] 0.81880561 0.11610453 0.02653252
DelayedTensor::einsum('i,i->i', darrA, darrA)
## <3> HDF5Array object of type "double":
## [1] [2] [3]
## 0.81880561 0.11610453 0.02653252
einsum::einsum('ij,ij->ij', arrC, arrC)
## [,1] [,2] [,3] [,4]
## [1,] 0.39489038 0.328892614 0.9144979 0.0408726264
## [2,] 0.71337088 0.004324263 0.1466754 0.0008825888
## [3,] 0.04038315 0.215413312 0.8314063 0.3136805551
DelayedTensor::einsum('ij,ij->ij', darrC, darrC)
## <3 x 4> HDF5Matrix object of type "double":
## [,1] [,2] [,3] [,4]
## [1,] 0.3948903782 0.3288926138 0.9144979075 0.0408726264
## [2,] 0.7133708811 0.0043242629 0.1466753912 0.0008825888
## [3,] 0.0403831532 0.2154133119 0.8314062804 0.3136805551
einsum::einsum('ijk,ijk->ijk', arrE, arrE)
## , , 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.7058904 0.4347115 0.6451668 0.5077508
## [2,] 0.7312389 0.1265941 0.2101664 0.3478821
## [3,] 0.2471912 0.3317153 0.5327789 0.1357488
##
## , , 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.05533082 0.36796215 0.249213872 0.4599059
## [2,] 0.48723406 0.28377367 0.001877631 0.4346839
## [3,] 0.60529650 0.09622015 0.014065971 0.5983966
##
## , , 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3307675 0.3265858 0.32186618 0.02372768
## [2,] 0.5382759 0.5577068 0.83684418 0.87811858
## [3,] 0.5293736 0.5717748 0.05574458 0.02381771
##
## , , 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.0002259854 0.7383189 0.6953839 0.01226057
## [2,] 0.1780609723 0.9347802 0.6189094 0.07316973
## [3,] 0.3478497790 0.1730328 0.8381860 0.65280037
##
## , , 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3409324 0.07622887 0.1109793 0.629985056
## [2,] 0.0960059 0.55498226 0.6554467 0.331593980
## [3,] 0.3826130 0.74994679 0.6960541 0.003963293
DelayedTensor::einsum('ijk,ijk->ijk', darrE, darrE)
## <3 x 4 x 5> HDF5Array object of type "double":
## ,,1
## [,1] [,2] [,3] [,4]
## [1,] 0.7058904 0.4347115 0.6451668 0.5077508
## [2,] 0.7312389 0.1265941 0.2101664 0.3478821
## [3,] 0.2471912 0.3317153 0.5327789 0.1357488
##
## ,,2
## [,1] [,2] [,3] [,4]
## [1,] 0.055330819 0.367962153 0.249213872 0.459905891
## [2,] 0.487234058 0.283773673 0.001877631 0.434683935
## [3,] 0.605296501 0.096220149 0.014065971 0.598396568
##
## ,,3
## [,1] [,2] [,3] [,4]
## [1,] 0.33076749 0.32658581 0.32186618 0.02372768
## [2,] 0.53827588 0.55770675 0.83684418 0.87811858
## [3,] 0.52937356 0.57177477 0.05574458 0.02381771
##
## ,,4
## [,1] [,2] [,3] [,4]
## [1,] 0.0002259854 0.7383189402 0.6953839265 0.0122605655
## [2,] 0.1780609723 0.9347802058 0.6189094181 0.0731697339
## [3,] 0.3478497790 0.1730327577 0.8381859819 0.6528003712
##
## ,,5
## [,1] [,2] [,3] [,4]
## [1,] 0.340932361 0.076228871 0.110979346 0.629985056
## [2,] 0.096005905 0.554982257 0.655446678 0.331593980
## [3,] 0.382613022 0.749946786 0.696054141 0.003963293
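These results are identical to ordinary element-wise multiplication with *, as a quick illustrative check shows.
# Hadamard product via einsum vs. element-wise `*`; expected to be 0
max(abs(einsum::einsum('ijk,ijk->ijk', arrE, arrE) - arrE * arrE))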
The outer product can also be implemented in einsum;
here the subscripts of the input arrays are all different,
and all of them are kept in the output.
einsum::einsum('i,j->ij', arrA, arrA)
## [,1] [,2] [,3]
## [1,] 0.8188056 0.30832944 0.14739395
## [2,] 0.3083294 0.11610453 0.05550267
## [3,] 0.1473940 0.05550267 0.02653252
DelayedTensor::einsum('i,j->ij', darrA, darrA)
## <3 x 3> HDF5Matrix object of type "double":
## [,1] [,2] [,3]
## [1,] 0.81880561 0.30832944 0.14739395
## [2,] 0.30832944 0.11610453 0.05550267
## [3,] 0.14739395 0.05550267 0.02653252
einsum::einsum('ij,klm->ijklm', arrC, arrE)
## , , 1, 1, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5279672 0.48183206 0.8034521 0.16985757
## [2,] 0.7096208 0.05524903 0.3217713 0.02496019
## [3,] 0.1688374 0.38994639 0.7660820 0.47055721
##
## , , 2, 1, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5373632 0.49040704 0.8177508 0.17288046
## [2,] 0.7222496 0.05623228 0.3274977 0.02540439
## [3,] 0.1718422 0.39688611 0.7797157 0.47893153
##
## , , 3, 1, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.31243149 0.28513044 0.4754533 0.10051545
## [2,] 0.41992741 0.03269434 0.1904124 0.01477052
## [3,] 0.09991177 0.23075589 0.4533391 0.27845840
##
## , , 1, 2, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4143228 0.37811822 0.6305099 0.13329591
## [2,] 0.5568757 0.04335674 0.2525103 0.01958753
## [3,] 0.1324954 0.30601084 0.6011837 0.36927027
##
## , , 2, 2, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.22358620 0.20404868 0.3402500 0.07193214
## [2,] 0.30051379 0.02339714 0.1362653 0.01057027
## [3,] 0.07150013 0.16513647 0.3244243 0.19927394
##
## , , 3, 2, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3619270 0.33030093 0.5507749 0.11643915
## [2,] 0.4864525 0.03787379 0.2205776 0.01711047
## [3,] 0.1157398 0.26731234 0.5251573 0.32257191
##
## , , 1, 3, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5047476 0.46064152 0.7681170 0.16238739
## [2,] 0.6784123 0.05281923 0.3076201 0.02386246
## [3,] 0.1614121 0.37279690 0.7323904 0.44986253
##
## , , 2, 3, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.2880845 0.26291096 0.4384025 0.09268253
## [2,] 0.3872035 0.03014655 0.1755740 0.01361949
## [3,] 0.0921259 0.21277367 0.4180115 0.25675885
##
## , , 3, 3, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4586821 0.41860130 0.6980152 0.14756718
## [2,] 0.6164973 0.04799871 0.2795453 0.02168467
## [3,] 0.1466809 0.33877378 0.6655492 0.40880605
##
## , , 1, 4, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4477788 0.40865080 0.6814228 0.1440594
## [2,] 0.6018427 0.04685774 0.2729002 0.0211692
## [3,] 0.1431942 0.33072084 0.6497285 0.3990884
##
## , , 2, 4, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3706417 0.33825411 0.5640367 0.11924283
## [2,] 0.4981656 0.03878574 0.2258888 0.01752247
## [3,] 0.1185267 0.27374884 0.5378023 0.33033898
##
## , , 3, 4, 1
##
## [,1] [,2] [,3] [,4]
## [1,] 0.23152944 0.21129781 0.3523379 0.07448764
## [2,] 0.31119000 0.02422836 0.1411064 0.01094579
## [3,] 0.07404028 0.17100319 0.3359500 0.20635345
##
## , , 1, 1, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.14781613 0.13489958 0.22494426 0.047555398
## [2,] 0.19867409 0.01546819 0.09008701 0.006988159
## [3,] 0.04726979 0.10917415 0.21448168 0.131742940
##
## , , 2, 1, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4386389 0.40030948 0.6675137 0.1411189
## [2,] 0.5895580 0.04590129 0.2673298 0.0207371
## [3,] 0.1402713 0.32397022 0.6364664 0.3909423
##
## , , 3, 1, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4889026 0.44618107 0.7440043 0.15728973
## [2,] 0.6571156 0.05116113 0.2979633 0.02311337
## [3,] 0.1563451 0.36109407 0.7093993 0.43574045
##
## , , 1, 2, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3811886 0.34787934 0.5800867 0.12263596
## [2,] 0.5123412 0.03988941 0.2323166 0.01802108
## [3,] 0.1218994 0.28153853 0.5531058 0.33973898
##
## , , 2, 2, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3347529 0.30550133 0.5094217 0.10769668
## [2,] 0.4499287 0.03503016 0.2040162 0.01582578
## [3,] 0.1070499 0.24724204 0.4857275 0.29835262
##
## , , 3, 2, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.19492668 0.17789350 0.2966364 0.062711803
## [2,] 0.26199361 0.02039807 0.1187987 0.009215358
## [3,] 0.06233517 0.14396910 0.2828392 0.173730797
##
## , , 1, 3, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3137071 0.28629461 0.4773946 0.10092584
## [2,] 0.4216419 0.03282783 0.1911898 0.01483083
## [3,] 0.1003197 0.23169805 0.4551900 0.27959532
##
## , , 2, 3, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.027229734 0.024850331 0.04143778 0.008760349
## [2,] 0.036598460 0.002849451 0.01659525 0.001287314
## [3,] 0.008707736 0.020111358 0.03951043 0.024268835
##
## , , 3, 3, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.07452863 0.068016131 0.11341650 0.023977348
## [2,] 0.10017113 0.007799036 0.04542171 0.003523417
## [3,] 0.02383334 0.055045413 0.10814128 0.066424557
##
## , , 1, 4, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4261601 0.38892114 0.6485237 0.13710420
## [2,] 0.5727857 0.04459545 0.2597246 0.02014715
## [3,] 0.1362808 0.31475364 0.6183596 0.37982040
##
## , , 2, 4, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4143097 0.37810625 0.6304899 0.13329169
## [2,] 0.5568580 0.04335536 0.2525023 0.01958691
## [3,] 0.1324912 0.30600115 0.6011647 0.36925858
##
## , , 3, 4, 2
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4861081 0.44363071 0.7397516 0.15639066
## [2,] 0.6533595 0.05086869 0.2962601 0.02298126
## [3,] 0.1554514 0.35903006 0.7053444 0.43324977
##
## , , 1, 1, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3614096 0.32982872 0.5499874 0.11627268
## [2,] 0.4857570 0.03781965 0.2202622 0.01708601
## [3,] 0.1155744 0.26693018 0.5244065 0.32211074
##
## , , 2, 1, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4610423 0.42075523 0.7016068 0.14832649
## [2,] 0.6196695 0.04824569 0.2809837 0.02179624
## [3,] 0.1474357 0.34051695 0.6689738 0.41090957
##
## , , 3, 1, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4572139 0.41726137 0.6957809 0.14709483
## [2,] 0.6145239 0.04784507 0.2786505 0.02161525
## [3,] 0.1462114 0.33768937 0.6634188 0.40749748
##
## , , 1, 2, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3591178 0.32773718 0.5464998 0.11553536
## [2,] 0.4826767 0.03757982 0.2188655 0.01697766
## [3,] 0.1148415 0.26523750 0.5210811 0.32006815
##
## , , 2, 2, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4692899 0.42828219 0.7141580 0.15097993
## [2,] 0.6307549 0.04910876 0.2860102 0.02218616
## [3,] 0.1500732 0.34660851 0.6809412 0.41826040
##
## , , 3, 2, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4751719 0.43365020 0.7231091 0.15287229
## [2,] 0.6386607 0.04972428 0.2895950 0.02246424
## [3,] 0.1519542 0.35095284 0.6894760 0.42350280
##
## , , 1, 3, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3565135 0.32536043 0.5425366 0.11469750
## [2,] 0.4791763 0.03730729 0.2172783 0.01685454
## [3,] 0.1140086 0.26331400 0.5173022 0.31774701
##
## , , 2, 3, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5748580 0.52462546 0.8748098 0.1849433
## [2,] 0.7726450 0.06015592 0.3503490 0.0271770
## [3,] 0.1838326 0.42457906 0.8341208 0.5123492
##
## , , 3, 3, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.1483678 0.13540303 0.22578375 0.047732876
## [2,] 0.1994155 0.01552592 0.09042322 0.007014239
## [3,] 0.0474462 0.10958159 0.21528213 0.132234607
##
## , , 1, 4, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.09679790 0.08833946 0.14730552 0.03114182
## [2,] 0.13010242 0.01012940 0.05899379 0.00457622
## [3,] 0.03095478 0.07149307 0.14045407 0.08627232
##
## , , 2, 4, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5888638 0.53740740 0.8961237 0.18944923
## [2,] 0.7914697 0.06162155 0.3588849 0.02783914
## [3,] 0.1883114 0.43492348 0.8544433 0.52483209
##
## , , 3, 4, 3
##
## [,1] [,2] [,3] [,4]
## [1,] 0.09698137 0.08850689 0.1475847 0.031200842
## [2,] 0.13034901 0.01014860 0.0591056 0.004584893
## [3,] 0.03101346 0.07162857 0.1407203 0.086435836
##
## , , 1, 1, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.009446664 0.0086211907 0.014375784 0.0030391803
## [2,] 0.012696905 0.0009885445 0.005757299 0.0004466007
## [3,] 0.003020928 0.0069771243 0.013707140 0.0084194552
##
## , , 2, 1, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.26516894 0.24199781 0.4035299 0.08531014
## [2,] 0.35640358 0.02774856 0.1616081 0.01253613
## [3,] 0.08479778 0.19584868 0.3847610 0.23633507
##
## , , 3, 1, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3706245 0.33823841 0.5640105 0.11923730
## [2,] 0.4981425 0.03878394 0.2258783 0.01752165
## [3,] 0.1185212 0.27373614 0.5377774 0.33032365
##
## , , 1, 2, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5399584 0.49277545 0.8217001 0.17371538
## [2,] 0.7257377 0.05650385 0.3290794 0.02552708
## [3,] 0.1726721 0.39880287 0.7834813 0.48124453
##
## , , 2, 2, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.6075654 0.55447480 0.9245834 0.19546591
## [2,] 0.8166058 0.06357858 0.3702827 0.02872328
## [3,] 0.1942920 0.44873611 0.8815793 0.54150011
##
## , , 3, 2, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.26139811 0.23855648 0.3977915 0.08409699
## [2,] 0.35133535 0.02735396 0.1593099 0.01235786
## [3,] 0.08359192 0.19306361 0.3792895 0.23297427
##
## , , 1, 3, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5240233 0.47823283 0.7974504 0.16858875
## [2,] 0.7043200 0.05483633 0.3193677 0.02477374
## [3,] 0.1675762 0.38703353 0.7603595 0.46704220
##
## , , 2, 3, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4943697 0.45117041 0.7523240 0.15904859
## [2,] 0.6644637 0.05173323 0.3012952 0.02337183
## [3,] 0.1580934 0.36513193 0.7173320 0.44061304
##
## , , 3, 3, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5753187 0.52504588 0.8755109 0.18509150
## [2,] 0.7732642 0.06020412 0.3506298 0.02719878
## [3,] 0.1839799 0.42491931 0.8347892 0.51275983
##
## , , 1, 4, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.06958146 0.06350126 0.10588797 0.022385744
## [2,] 0.09352182 0.00728134 0.04240664 0.003289535
## [3,] 0.02225130 0.05139153 0.10096292 0.062015329
##
## , , 2, 4, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.1699824 0.15512893 0.2586766 0.054686737
## [2,] 0.2284670 0.01778778 0.1035963 0.008036093
## [3,] 0.0543583 0.12554575 0.2466450 0.151498920
##
## , , 3, 4, 4
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5077249 0.46335863 0.7726478 0.16334523
## [2,] 0.6824139 0.05313079 0.3094346 0.02400321
## [3,] 0.1623642 0.37499585 0.7367105 0.45251606
##
## , , 1, 1, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3669208 0.33485838 0.5583744 0.11804576
## [2,] 0.4931645 0.03839637 0.2236211 0.01734656
## [3,] 0.1173368 0.27100068 0.5324033 0.32702271
##
## , , 2, 1, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.19470955 0.17769534 0.2963059 0.062641947
## [2,] 0.26170177 0.02037535 0.1186664 0.009205093
## [3,] 0.06226573 0.14380873 0.2825242 0.173537274
##
## , , 3, 1, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3887032 0.35473736 0.5915224 0.12505358
## [2,] 0.5224414 0.04067578 0.2368964 0.01837634
## [3,] 0.1243025 0.28708873 0.5640096 0.34643652
##
## , , 1, 2, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.17349942 0.15833860 0.2640287 0.055818224
## [2,] 0.23319403 0.01815582 0.1057398 0.008202362
## [3,] 0.05548299 0.12814333 0.2517482 0.154633484
##
## , , 2, 2, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4681422 0.42723479 0.7124115 0.1506107
## [2,] 0.6292124 0.04898866 0.2853108 0.0221319
## [3,] 0.1497062 0.34576085 0.6792759 0.4172375
##
## , , 3, 2, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5441937 0.49664067 0.8281454 0.17507797
## [2,] 0.7314302 0.05694705 0.3316606 0.02572731
## [3,] 0.1740265 0.40193099 0.7896268 0.48501930
##
## , , 1, 3, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.20934344 0.19105048 0.3185755 0.067349962
## [2,] 0.28137063 0.02190671 0.1275850 0.009896925
## [3,] 0.06694547 0.15461704 0.3037580 0.186579910
##
## , , 2, 3, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5087530 0.46429686 0.7742123 0.16367598
## [2,] 0.6837957 0.05323837 0.3100611 0.02405182
## [3,] 0.1626930 0.37575516 0.7382022 0.45343233
##
## , , 3, 3, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.5242758 0.47846323 0.7978346 0.16866998
## [2,] 0.7046593 0.05486275 0.3195215 0.02478567
## [3,] 0.1676570 0.38722000 0.7607258 0.46726721
##
## , , 1, 4, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.4987735 0.45518945 0.7590257 0.16046540
## [2,] 0.6703827 0.05219407 0.3039791 0.02358003
## [3,] 0.1595017 0.36838454 0.7237220 0.44453803
##
## , , 2, 4, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.3618608 0.33024053 0.5506741 0.11641785
## [2,] 0.4863635 0.03786687 0.2205372 0.01710734
## [3,] 0.1157187 0.26726346 0.5250613 0.32251292
##
## , , 3, 4, 5
##
## [,1] [,2] [,3] [,4]
## [1,] 0.03956092 0.036103987 0.06020318 0.012727537
## [2,] 0.05317234 0.004139846 0.02411053 0.001870283
## [3,] 0.01265110 0.029218934 0.05740302 0.035259155
DelayedTensor::einsum('ij,klm->ijklm', darrC, darrE)
## <3 x 4 x 3 x 4 x 5> HDF5Array object of type "double":
## ,,1,1,1
## [,1] [,2] [,3] [,4]
## [1,] 0.52796716 0.48183206 0.80345210 0.16985757
## [2,] 0.70962078 0.05524903 0.32177127 0.02496019
## [3,] 0.16883744 0.38994639 0.76608205 0.47055721
##
## ,,2,1,1
## [,1] [,2] [,3] [,4]
## [1,] 0.53736318 0.49040704 0.81775082 0.17288046
## [2,] 0.72224962 0.05623228 0.32749770 0.02540439
## [3,] 0.17184217 0.39688611 0.77971571 0.47893153
##
## ,,3,1,1
## [,1] [,2] [,3] [,4]
## [1,] 0.31243149 0.28513044 0.47545332 0.10051545
## [2,] 0.41992741 0.03269434 0.19041237 0.01477052
## [3,] 0.09991177 0.23075589 0.45333910 0.27845840
##
## ...
##
## ,,1,4,5
## [,1] [,2] [,3] [,4]
## [1,] 0.49877353 0.45518945 0.75902570 0.16046540
## [2,] 0.67038272 0.05219407 0.30397912 0.02358003
## [3,] 0.15950167 0.36838454 0.72372200 0.44453803
##
## ,,2,4,5
## [,1] [,2] [,3] [,4]
## [1,] 0.36186085 0.33024053 0.55067413 0.11641785
## [2,] 0.48636354 0.03786687 0.22053725 0.01710734
## [3,] 0.11571867 0.26726346 0.52506125 0.32251292
##
## ,,3,4,5
## [,1] [,2] [,3] [,4]
## [1,] 0.039560920 0.036103987 0.060203184 0.012727537
## [2,] 0.053172342 0.004139846 0.024110528 0.001870283
## [3,] 0.012651098 0.029218934 0.057403021 0.035259155
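For the simple two-vector case above, the result agrees with base R's outer() (an illustrative check, not part of the original examples).
# Outer product via einsum vs. base R outer(); expected to be 0
max(abs(einsum::einsum('i,j->ij', arrA, arrA) - outer(arrA, arrA)))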
If a subscript that appears on the left side of -> is omitted on the right side, summation is performed over that subscript.
einsum::einsum('i->', arrA)
## [1] 1.408508
DelayedTensor::einsum('i->', darrA)
## <1> HDF5Array object of type "double":
## [1]
## 1.408508
einsum::einsum('ij->', arrC)
## [1] 5.82039
DelayedTensor::einsum('ij->', darrC)
## <1> HDF5Array object of type "double":
## [1]
## 5.82039
einsum::einsum('ijk->', arrE)
## [1] 34.25645
DelayedTensor::einsum('ijk->', darrE)
## <1> HDF5Array object of type "double":
## [1]
## 34.25645
einsum::einsum('ij->i', arrC)
## [1] 2.360358 1.323063 2.136969
DelayedTensor::einsum('ij->i', darrC)
## <3> HDF5Array object of type "double":
## [1] [2] [3]
## 2.360358 1.323063 2.136969
einsum::einsum('ij->j', arrC)
## [1] 1.6739715 1.1033771 2.2510909 0.7919501
DelayedTensor::einsum('ij->j', darrC)
## <4> HDF5Array object of type "double":
## [1] [2] [3] [4]
## 1.6739715 1.1033771 2.2510909 0.7919501
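For matrices, these single-index sums correspond to rowSums() and colSums() (a check added for illustration).
# 'ij->i' is a row sum and 'ij->j' is a column sum;
# both differences are expected to be 0
max(abs(einsum::einsum('ij->i', arrC) - rowSums(arrC)))
max(abs(einsum::einsum('ij->j', arrC) - colSums(arrC)))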
einsum::einsum('ijk->i', arrE)
## [1] 10.70821 12.41116 11.13707
DelayedTensor::einsum('ijk->i', darrE)
## <3> HDF5Array object of type "double":
## [1] [2] [3]
## 10.70821 12.41116 11.13707
einsum::einsum('ijk->j', arrE)
## [1] 8.479206 9.244133 8.884112 7.649000
DelayedTensor::einsum('ijk->j', darrE)
## <4> HDF5Array object of type "double":
## [1] [2] [3] [4]
## 8.479206 9.244133 8.884112 7.649000
einsum::einsum('ijk->k', arrE)
## [1] 7.445957 5.932926 7.074485 6.994178 6.808905
DelayedTensor::einsum('ijk->k', darrE)
## <5> HDF5Array object of type "double":
## [1] [2] [3] [4] [5]
## 7.445957 5.932926 7.074485 6.994178 6.808905
These are the same as what the modeSum function does.
einsum::einsum('ijk->ij', arrE)
## [,1] [,2] [,3] [,4]
## [1,] 2.249449 2.972753 3.036801 2.449211
## [2,] 3.018640 3.347114 3.012867 3.032542
## [3,] 3.211117 2.924266 2.834444 2.167247
DelayedTensor::einsum('ijk->ij', darrE)
## <3 x 4> HDF5Matrix object of type "double":
## [,1] [,2] [,3] [,4]
## [1,] 2.249449 2.972753 3.036801 2.449211
## [2,] 3.018640 3.347114 3.012867 3.032542
## [3,] 3.211117 2.924266 2.834444 2.167247
einsum::einsum('ijk->jk', arrE)
## [,1] [,2] [,3] [,4] [,5]
## [1,] 2.192481 1.7112546 2.036377 1.026793 1.512300
## [2,] 1.591074 1.4494967 2.074432 2.242067 1.887062
## [3,] 1.991579 0.6611449 1.718227 2.536130 1.977031
## [4,] 1.670823 2.1110302 1.245448 1.189187 1.432513
DelayedTensor::einsum('ijk->jk', darrE)
## <4 x 5> HDF5Matrix object of type "double":
## [,1] [,2] [,3] [,4] [,5]
## [1,] 2.1924812 1.7112546 2.0363774 1.0267934 1.5122995
## [2,] 1.5910744 1.4494967 2.0744324 2.2420673 1.8870618
## [3,] 1.9915789 0.6611449 1.7182273 2.5361298 1.9770308
## [4,] 1.6708226 2.1110302 1.2454476 1.1891871 1.4325126
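As an illustrative check of the modeSum correspondence mentioned above (assuming DelayedTensor::modeSum() follows the rTensor-style interface with arguments m and drop), summing across mode 1 should reproduce the 'ijk->jk' result.
# Sum across mode 1 (the 'i' index); drop() removes the length-1 mode
# so the result can be compared with the einsum output.
all.equal(
    as.array(DelayedTensor::einsum('ijk->jk', darrE)),
    drop(as.array(DelayedTensor::modeSum(darrE, m=1)))
)  # expected: TRUE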
If we take the diagonal elements of a matrix
and add them together, we get the trace.
einsum::einsum('ii->', arrB)
## [1] 1.185146
DelayedTensor::einsum('ii->', darrB)
## <1> HDF5Array object of type "double":
## [1]
## 1.185146
By changing the order of the indices between the left and right side of ->, we can get a permuted (transposed) array or DelayedArray.
einsum::einsum('ij->ji', arrB)
## [,1] [,2] [,3]
## [1,] 0.1863305 0.2288500 0.8731255
## [2,] 0.7071327 0.3527597 0.4763693
## [3,] 0.7893546 0.5878199 0.6460557
DelayedTensor::einsum('ij->ji', darrB)
## <3 x 3> DelayedArray object of type "double":
## [,1] [,2] [,3]
## [1,] 0.1863305 0.2288500 0.8731255
## [2,] 0.7071327 0.3527597 0.4763693
## [3,] 0.7893546 0.5878199 0.6460557
einsum::einsum('ijk->jki', arrD)
## , , 1
##
## [,1] [,2] [,3]
## [1,] 0.5973563 0.2671087 0.6087849
## [2,] 0.4332599 0.3405228 0.8869449
## [3,] 0.5684519 0.7480580 0.3320226
##
## , , 2
##
## [,1] [,2] [,3]
## [1,] 0.1916480 0.1417817 0.02704907
## [2,] 0.7804638 0.7573835 0.55758776
## [3,] 0.7216900 0.7657663 0.70023782
##
## , , 3
##
## [,1] [,2] [,3]
## [1,] 0.08253671 0.5035572 0.9239351
## [2,] 0.16549764 0.6974154 0.7528586
## [3,] 0.72407176 0.7490639 0.9070781
DelayedTensor::einsum('ijk->jki', darrD)
## <3 x 3 x 3> DelayedArray object of type "double":
## ,,1
## [,1] [,2] [,3]
## [1,] 0.5973563 0.2671087 0.6087849
## [2,] 0.4332599 0.3405228 0.8869449
## [3,] 0.5684519 0.7480580 0.3320226
##
## ,,2
## [,1] [,2] [,3]
## [1,] 0.19164802 0.14178167 0.02704907
## [2,] 0.78046383 0.75738350 0.55758776
## [3,] 0.72169004 0.76576631 0.70023782
##
## ,,3
## [,1] [,2] [,3]
## [1,] 0.08253671 0.50355724 0.92393510
## [2,] 0.16549764 0.69741541 0.75285861
## [3,] 0.72407176 0.74906390 0.90707808
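For ordinary arrays, this is the same as base R's t() and aperm() (an illustrative check).
# Transposition and general axis permutation;
# both differences are expected to be 0
max(abs(einsum::einsum('ij->ji', arrB) - t(arrB)))
max(abs(einsum::einsum('ijk->jki', arrD) - aperm(arrD, c(2, 3, 1))))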
Some examples of combining Multiplication and Summation are shown below.
The inner product first calculates the Hadamard product and then collapses it to a 0D tensor (a scalar); with identical inputs, this gives the squared norm.
einsum::einsum('i,i->', arrA, arrA)
## [1] 0.9614427
DelayedTensor::einsum('i,i->', darrA, darrA)
## <1> HDF5Array object of type "double":
## [1]
## 0.9614427
einsum::einsum('ij,ij->', arrC, arrC)
## [1] 3.94529
DelayedTensor::einsum('ij,ij->', darrC, darrC)
## <1> HDF5Array object of type "double":
## [1]
## 3.94529
einsum::einsum('ijk,ijk->', arrE, arrE)
## [1] 23.49711
DelayedTensor::einsum('ijk,ijk->', darrE, darrE)
## <1> HDF5Array object of type "double":
## [1]
## 23.49711
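For ordinary arrays, the inner product is simply the sum of the element-wise products (an illustrative check).
# Inner product via einsum vs. a plain sum of squares; expected: TRUE
all.equal(as.numeric(einsum::einsum('ijk,ijk->', arrE, arrE)),
          sum(arrE * arrE))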
The inner product is an operation that eliminates all subscripts, while the outer product is an operation that leaves all subscripts intact. In between the two, the operation that eliminates some subscripts while keeping others by summing over them is called the contracted product.
einsum::einsum('ijk,ijk->jk', arrE, arrE)
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1.6843205 1.1478614 1.398417 0.5261367 0.8195513
## [2,] 0.8930208 0.7479560 1.456067 1.8461319 1.3811579
## [3,] 1.3881121 0.2651575 1.214455 2.1524793 1.4624802
## [4,] 0.9913816 1.4929864 0.925664 0.7382307 0.9655423
DelayedTensor::einsum('ijk,ijk->jk', darrE, darrE)
## <4 x 5> HDF5Matrix object of type "double":
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1.6843205 1.1478614 1.3984169 0.5261367 0.8195513
## [2,] 0.8930208 0.7479560 1.4560673 1.8461319 1.3811579
## [3,] 1.3881121 0.2651575 1.2144549 2.1524793 1.4624802
## [4,] 0.9913816 1.4929864 0.9256640 0.7382307 0.9655423
Matrix multiplication can be regarded as a contracted product.
einsum::einsum('ij,jk->ik', arrC, t(arrC))
## [,1] [,2] [,3]
## [1,] 1.6791535 0.9407193 1.3776462
## [2,] 0.9407193 0.8652531 0.5660979
## [3,] 1.3776462 0.5660979 1.4008833
DelayedTensor::einsum('ij,jk->ik', darrC, t(darrC))
## <3 x 3> HDF5Matrix object of type "double":
## [,1] [,2] [,3]
## [1,] 1.6791535 0.9407193 1.3776462
## [2,] 0.9407193 0.8652531 0.5660979
## [3,] 1.3776462 0.5660979 1.4008833
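The result above coincides with ordinary matrix multiplication (a check added for illustration).
# Contracted product 'ij,jk->ik' vs. the built-in %*% operator;
# the difference is expected to be 0
max(abs(einsum::einsum('ij,jk->ik', arrC, t(arrC)) - arrC %*% t(arrC)))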
Some examples of combining Multiplication and Permutation are shown below.
einsum::einsum('ij,ij->ji', arrC, arrC)
## [,1] [,2] [,3]
## [1,] 0.39489038 0.7133708811 0.04038315
## [2,] 0.32889261 0.0043242629 0.21541331
## [3,] 0.91449791 0.1466753912 0.83140628
## [4,] 0.04087263 0.0008825888 0.31368056
DelayedTensor::einsum('ij,ij->ji', darrC, darrC)
## <4 x 3> HDF5Matrix object of type "double":
## [,1] [,2] [,3]
## [1,] 0.3948903782 0.7133708811 0.0403831532
## [2,] 0.3288926138 0.0043242629 0.2154133119
## [3,] 0.9144979075 0.1466753912 0.8314062804
## [4,] 0.0408726264 0.0008825888 0.3136805551
einsum::einsum('ijk,ijk->jki', arrE, arrE)
## , , 1
##
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.7058904 0.05533082 0.33076749 0.0002259854 0.34093236
## [2,] 0.4347115 0.36796215 0.32658581 0.7383189402 0.07622887
## [3,] 0.6451668 0.24921387 0.32186618 0.6953839265 0.11097935
## [4,] 0.5077508 0.45990589 0.02372768 0.0122605655 0.62998506
##
## , , 2
##
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.7312389 0.487234058 0.5382759 0.17806097 0.0960059
## [2,] 0.1265941 0.283773673 0.5577068 0.93478021 0.5549823
## [3,] 0.2101664 0.001877631 0.8368442 0.61890942 0.6554467
## [4,] 0.3478821 0.434683935 0.8781186 0.07316973 0.3315940
##
## , , 3
##
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.2471912 0.60529650 0.52937356 0.3478498 0.382613022
## [2,] 0.3317153 0.09622015 0.57177477 0.1730328 0.749946786
## [3,] 0.5327789 0.01406597 0.05574458 0.8381860 0.696054141
## [4,] 0.1357488 0.59839657 0.02381771 0.6528004 0.003963293
DelayedTensor::einsum('ijk,ijk->jki', darrE, darrE)
## <4 x 5 x 3> HDF5Array object of type "double":
## ,,1
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.7058903898 0.0553308194 0.3307674913 0.0002259854 0.3409323606
## [2,] 0.4347114514 0.3679621528 0.3265858092 0.7383189402 0.0762288705
## [3,] 0.6451668476 0.2492138717 0.3218661847 0.6953839265 0.1109793458
## [4,] 0.5077507632 0.4599058910 0.0237276842 0.0122605655 0.6299850564
##
## ,,2
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.731238864 0.487234058 0.538275880 0.178060972 0.096005905
## [2,] 0.126594092 0.283773673 0.557706755 0.934780206 0.554982257
## [3,] 0.210166382 0.001877631 0.836844177 0.618909418 0.655446678
## [4,] 0.347882063 0.434683935 0.878118578 0.073169734 0.331593980
##
## ,,3
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.247191232 0.605296501 0.529373562 0.347849779 0.382613022
## [2,] 0.331715287 0.096220149 0.571774765 0.173032758 0.749946786
## [3,] 0.532778919 0.014065971 0.055744581 0.838185982 0.696054141
## [4,] 0.135748764 0.598396568 0.023817714 0.652800371 0.003963293
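For ordinary arrays, the first of these examples is just the transposed Hadamard product (an illustrative check).
# 'ij,ij->ji' equals t(arrC * arrC); the difference is expected to be 0
max(abs(einsum::einsum('ij,ij->ji', arrC, arrC) - t(arrC * arrC)))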
Some examples of combining Summation and Permutation are shown below.
einsum::einsum('ijk->ki', arrE)
## [,1] [,2] [,3]
## [1,] 3.015288 2.259180 2.171489
## [2,] 2.019201 1.933363 1.980363
## [3,] 1.867971 3.332342 1.874171
## [4,] 1.818912 2.446020 2.729246
## [5,] 1.986842 2.440258 2.381805
DelayedTensor::einsum('ijk->ki', darrE)
## <5 x 3> HDF5Matrix object of type "double":
## [,1] [,2] [,3]
## [1,] 3.015288 2.259180 2.171489
## [2,] 2.019201 1.933363 1.980363
## [3,] 1.867971 3.332342 1.874171
## [4,] 1.818912 2.446020 2.729246
## [5,] 1.986842 2.440258 2.381805
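For an ordinary array, the same result can be obtained with apply() followed by a transpose (an illustrative check).
# 'ijk->ki' sums over j and swaps the remaining indices;
# the difference is expected to be 0
max(abs(einsum::einsum('ijk->ki', arrE) - t(apply(arrE, c(1, 3), sum))))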
Finally, we will show a more complex example, combining Multiplication, Summation, and Permutation.
einsum::einsum('i,ij,ijk,ijk,ji->jki',
arrA, arrC, arrE, arrE, t(arrC))
## , , 1
##
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.25223435 0.01977125 0.118192461 8.075089e-05 0.12182465
## [2,] 0.12937358 0.10950846 0.097194532 2.197296e-01 0.02268632
## [3,] 0.53388186 0.20622691 0.266347406 5.754370e-01 0.09183649
## [4,] 0.01877905 0.01700951 0.000877563 4.534542e-04 0.02329985
##
## , , 2
##
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.1777457525 1.184343e-01 0.1308413105 4.328214e-02 2.333662e-02
## [2,] 0.0001865306 4.181275e-04 0.0008217554 1.377356e-03 8.177410e-04
## [3,] 0.0105037673 9.384088e-05 0.0418240843 3.093207e-02 3.275814e-02
## [4,] 0.0001046201 1.307244e-04 0.0002640804 2.200465e-05 9.972169e-05
##
## , , 3
##
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.001626008 0.003981600 0.003482184 0.002288133 0.0025168031
## [2,] 0.011639311 0.003376197 0.020062580 0.006071418 0.0263143261
## [3,] 0.072152204 0.001904901 0.007549275 0.113512310 0.0942639401
## [4,] 0.006936059 0.030574964 0.001216962 0.033354717 0.0002025037
DelayedTensor::einsum('i,ij,ijk,ijk,ji->jki',
darrA, darrC, darrE, darrE, t(darrC))
## <4 x 5 x 3> HDF5Array object of type "double":
## ,,1
## [,1] [,2] [,3] [,4] [,5]
## [1,] 2.522343e-01 1.977125e-02 1.181925e-01 8.075089e-05 1.218247e-01
## [2,] 1.293736e-01 1.095085e-01 9.719453e-02 2.197296e-01 2.268632e-02
## [3,] 5.338819e-01 2.062269e-01 2.663474e-01 5.754370e-01 9.183649e-02
## [4,] 1.877905e-02 1.700951e-02 8.775630e-04 4.534542e-04 2.329985e-02
##
## ,,2
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1.777458e-01 1.184343e-01 1.308413e-01 4.328214e-02 2.333662e-02
## [2,] 1.865306e-04 4.181275e-04 8.217554e-04 1.377356e-03 8.177410e-04
## [3,] 1.050377e-02 9.384088e-05 4.182408e-02 3.093207e-02 3.275814e-02
## [4,] 1.046201e-04 1.307244e-04 2.640804e-04 2.200465e-05 9.972169e-05
##
## ,,3
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.0016260075 0.0039816003 0.0034821842 0.0022881328 0.0025168031
## [2,] 0.0116393115 0.0033761974 0.0200625803 0.0060714180 0.0263143261
## [3,] 0.0721522037 0.0019049005 0.0075492746 0.1135123097 0.0942639401
## [4,] 0.0069360585 0.0305749643 0.0012169618 0.0333547168 0.0002025037
einsum
By using einsum and other DelayedTensor functions,
it is possible to implement your own tensor calculation functions.
They are intended to be applied to DelayedArray objects,
and can scale to large data
since the calculation is performed internally by block processing.
For example, kronecker can easily be implemented with einsum
and other DelayedTensor functions
(https://stackoverflow.com/questions/56067643/speeding-up-kronecker-products-numpy),
although the kronecker function inside DelayedTensor
has a more efficient implementation.
darr1 <- DelayedArray(array(1:6, dim=c(2,3)))
darr2 <- DelayedArray(array(20:1, dim=c(4,5)))
mykronecker <- function(darr1, darr2){
stopifnot((length(dim(darr1)) == 2) && (length(dim(darr2)) == 2))
# Outer Product
tmpdarr <- DelayedTensor::einsum('ij,kl->ikjl', darr1, darr2)
# Reshape
DelayedTensor::unfold(tmpdarr, row_idx=c(2,1), col_idx=c(4,3))
}
identical(as.array(DelayedTensor::kronecker(darr1, darr2)),
as.array(mykronecker(darr1, darr2)))
## [1] TRUE
## R version 4.3.0 RC (2023-04-18 r84287)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 22.04.2 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.18-bioc/R/lib/libRblas.so
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_GB LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## time zone: America/New_York
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] einsum_0.1.0 DelayedRandomArray_1.9.0 HDF5Array_1.29.1
## [4] rhdf5_2.45.0 DelayedArray_0.27.1 S4Arrays_1.1.1
## [7] IRanges_2.35.1 S4Vectors_0.39.1 MatrixGenerics_1.13.0
## [10] matrixStats_0.63.0 BiocGenerics_0.47.0 Matrix_1.5-4
## [13] DelayedTensor_1.7.0 BiocStyle_2.29.0
##
## loaded via a namespace (and not attached):
## [1] jsonlite_1.8.4 compiler_4.3.0 BiocManager_1.30.20
## [4] crayon_1.5.2 rsvd_1.0.5 Rcpp_1.0.10
## [7] rhdf5filters_1.13.2 parallel_4.3.0 jquerylib_0.1.4
## [10] BiocParallel_1.35.0 yaml_2.3.7 fastmap_1.1.1
## [13] lattice_0.21-8 R6_2.5.1 ScaledMatrix_1.9.1
## [16] knitr_1.42 bookdown_0.33 bslib_0.4.2
## [19] rlang_1.1.1 cachem_1.0.8 xfun_0.39
## [22] sass_0.4.5 cli_3.6.1 Rhdf5lib_1.23.0
## [25] BiocSingular_1.17.0 digest_0.6.31 grid_4.3.0
## [28] irlba_2.3.5.1 rTensor_1.4.8 dqrng_0.3.0
## [31] evaluate_0.20 codetools_0.2-19 beachmat_2.17.0
## [34] rmarkdown_2.21 tools_4.3.0 htmltools_0.5.5