Coordinate transformation of a 4th-order tensor: converting np.einsum to np.tensordot

$C'_{ijkl} = Q_{im} Q_{jn} C_{mnop} (Q^{-1})_{ok} (Q^{-1})_{pl}$ 

I am able to use

np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv) 

to do the job, and also expected

np.tensordot(np.tensordot(np.tensordot(Q, np.tensordot(Q, C, axes=[1,1]), axes=[1,0]), Q_inv, axes=[2,0]), Q_inv, axes=[3,0]) 

to work, but it doesn't.

The specifics:

C is a fourth-order elastic tensor:

array([[[[ 552.62389047, -0.28689554, -0.32194701], 
     [ -0.28689554, 118.89168597, -0.65559912], 
     [ -0.32194701, -0.65559912, 130.21758722]], 

     [[ -0.28689554, 166.02923119, -0.00000123], 
     [ 166.02923119, 0.49494431, -0.00000127], 
     [ -0.00000123, -0.00000127, -0.57156702]], 

     [[ -0.32194701, -0.00000123, 165.99413061], 
     [ -0.00000123, -0.64666809, -0.0000013 ], 
     [ 165.99413061, -0.0000013 , 0.42997465]]], 


     [[[ -0.28689554, 166.02923119, -0.00000123], 
     [ 166.02923119, 0.49494431, -0.00000127], 
     [ -0.00000123, -0.00000127, -0.57156702]], 

     [[ 118.89168597, 0.49494431, -0.64666809], 
     [ 0.49494431, 516.15898907, -0.33132485], 
     [ -0.64666809, -0.33132485, 140.09010389]], 

     [[ -0.65559912, -0.00000127, -0.0000013 ], 
     [ -0.00000127, -0.33132485, 165.98553869], 
     [ -0.0000013 , 165.98553869, 0.41913346]]], 


     [[[ -0.32194701, -0.00000123, 165.99413061], 
     [ -0.00000123, -0.64666809, -0.0000013 ], 
     [ 165.99413061, -0.0000013 , 0.42997465]], 

     [[ -0.65559912, -0.00000127, -0.0000013 ], 
     [ -0.00000127, -0.33132485, 165.98553869], 
     [ -0.0000013 , 165.98553869, 0.41913346]], 

     [[ 130.21758722, -0.57156702, 0.42997465], 
     [ -0.57156702, 140.09010389, 0.41913346], 
     [ 0.42997465, 0.41913346, 486.62412063]]]]) 

Q is a rotation matrix that changes the x and y coords.

array([[ 0, 1, 0], 
     [-1, 0, 0], 
     [ 0, 0, 1]]) 

Q_inv is

array([[-0., -1., -0.], 
     [ 1., 0., 0.], 
     [ 0., 0., 1.]]) 

np.einsum results in

array([[[[ 516.15898907, -0.49494431, -0.33132485], 
     [ -0.49494431, 118.89168597, 0.64666809], 
     [ -0.33132485, 0.64666809, 140.09010389]], 

     [[ -0.49494431, 166.02923119, 0.00000127], 
     [ 166.02923119, 0.28689554, -0.00000123], 
     [ 0.00000127, -0.00000123, 0.57156702]], 

     [[ -0.33132485, 0.00000127, 165.98553869], 
     [ 0.00000127, -0.65559912, 0.0000013 ], 
     [ 165.98553869, 0.0000013 , 0.41913346]]], 


     [[[ -0.49494431, 166.02923119, 0.00000127], 
     [ 166.02923119, 0.28689554, -0.00000123], 
     [ 0.00000127, -0.00000123, 0.57156702]], 

     [[ 118.89168597, 0.28689554, -0.65559912], 
     [ 0.28689554, 552.62389047, 0.32194701], 
     [ -0.65559912, 0.32194701, 130.21758722]], 

     [[ 0.64666809, -0.00000123, 0.0000013 ], 
     [ -0.00000123, 0.32194701, 165.99413061], 
     [ 0.0000013 , 165.99413061, -0.42997465]]], 


     [[[ -0.33132485, 0.00000127, 165.98553869], 
     [ 0.00000127, -0.65559912, 0.0000013 ], 
     [ 165.98553869, 0.0000013 , 0.41913346]], 

     [[ 0.64666809, -0.00000123, 0.0000013 ], 
     [ -0.00000123, 0.32194701, 165.99413061], 
     [ 0.0000013 , 165.99413061, -0.42997465]], 

     [[ 140.09010389, 0.57156702, 0.41913346], 
     [ 0.57156702, 130.21758722, -0.42997465], 
     [ 0.41913346, -0.42997465, 486.62412063]]]]) 

which I believe is correct, while the four nested np.tensordot calls result in

array([[[[ 552.62389047, -0.28689554, 0.32194701], 
     [ -0.28689554, 118.89168597, 0.65559912], 
     [ -0.32194701, -0.65559912, -130.21758722]], 

     [[ -0.28689554, 166.02923119, 0.00000123], 
     [ 166.02923119, 0.49494431, 0.00000127], 
     [ -0.00000123, -0.00000127, 0.57156702]], 

     [[ -0.32194701, -0.00000123, -165.99413061], 
     [ -0.00000123, -0.64666809, 0.0000013 ], 
     [ 165.99413061, -0.0000013 , -0.42997465]]], 


     [[[ -0.28689554, 166.02923119, 0.00000123], 
     [ 166.02923119, 0.49494431, 0.00000127], 
     [ -0.00000123, -0.00000127, 0.57156702]], 

     [[ 118.89168597, 0.49494431, 0.64666809], 
     [ 0.49494431, 516.15898907, 0.33132485], 
     [ -0.64666809, -0.33132485, -140.09010389]], 

     [[ -0.65559912, -0.00000127, 0.0000013 ], 
     [ -0.00000127, -0.33132485, -165.98553869], 
     [ -0.0000013 , 165.98553869, -0.41913346]]], 


     [[[ 0.32194701, 0.00000123, 165.99413061], 
     [ 0.00000123, 0.64666809, -0.0000013 ], 
     [-165.99413061, 0.0000013 , 0.42997465]], 

     [[ 0.65559912, 0.00000127, -0.0000013 ], 
     [ 0.00000127, 0.33132485, 165.98553869], 
     [ 0.0000013 , -165.98553869, 0.41913346]], 

     [[-130.21758722, 0.57156702, 0.42997465], 
     [ 0.57156702, -140.09010389, 0.41913346], 
     [ -0.42997465, -0.41913346, 486.62412063]]]]) 

Notice the large entries with flipped signs.
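
For concreteness, a quick check of the mismatch (a sketch, assuming numpy is imported as np and the arrays above are in scope):

import numpy as np

via_einsum = np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv)
via_tensordot = np.tensordot(np.tensordot(np.tensordot(Q, np.tensordot(Q, C, axes=[1,1]), axes=[1,0]), Q_inv, axes=[2,0]), Q_inv, axes=[3,0])
print(np.allclose(via_einsum, via_tensordot))  # False: several entries come out with flipped signs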

Your question also "doesn't work"; it is incomplete. Show us where/how 'tensordot' does not work - the error and/or the wrong answers (especially the shapes). – hpaulj

Sorry. I assumed there must be some obvious mistake in the calls to 'np.tensordot' that would be easy to spot just by looking. It has been updated with an example. – terencezl

You (or someone) need to debug this step by step. In other words, make sure each 'tensordot' matches the equivalent 'einsum'. It is hard to wrap my mind around four nested 'tensordot' expressions. – hpaulj

Answer

Approach #1

One way would be to use np.tensordot to get the same result as np.einsum, though not in a single step, and with some help from broadcasting -

# Get broadcasted elementwise multiplication between two versions of Q.
# This corresponds to "np.einsum('im,jn->ijmn', Q, Q)" : an 'ijmn'
# broadcasted version of elementwise multiplications between the Q's.
Q_ext = Q[:,None,:,None]*Q[:,None,:]

# Similarly for Q_inv : "np.einsum('ok,pl->opkl', Q_inv, Q_inv)" gives an 'opkl'
# broadcasted version of elementwise multiplications between the Q_inv's.
Q_inv_ext = Q_inv[:,None,:,None]*Q_inv[:,None,:]

# Perform "np.einsum('im,jn,mnop,ok,pl', Q, Q, C)" with "np.tensordot". 
# Notice that we are using the last two axes from 'Q_ext', so "axes=[2,3]" 
# and first two from 'C', so "axes=[0,1]" for it. 
# These axes would be reduced by the dot-product, leaving us with 'ijop'. 
parte1 = np.tensordot(Q_ext,C,axes=([2,3],[0,1])) 

# Do it one more time to perform "np.einsum('ijop,ok,pl', parte1,Q_inv,Q_inv)" 
# to reduce dimensions represented by 'o,p', leaving us with 'ijkl'. 
# To confirm, compare the following against original einsum approach : 
# "np.einsum('im,jn,mnop,ok,pl->ijkl', Q, Q, C, Q_inv, Q_inv)" 
out = np.tensordot(parte1,Q_inv_ext,axes=([2,3],[0,1])) 
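
As a quick sanity check (a sketch, assuming np and the arrays from the question are in scope), out can be compared against the single-call einsum result:

# Should print True if the two-step tensordot route matches einsum.
expected = np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv)
print(np.allclose(out, expected))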

Approach #2

If you would rather avoid broadcasting in favor of using two more instances of np.tensordot, you could do -

# Perform "np.einsum('jn,mnop', Q, C). Notice how, Q is represented by 'jn' 
# and C by 'mnop'. We need to reduce the 'm' dimension, i.e. reduce 'axes=1' 
# from Q and `axes=1` from C corresponding to `n' in each of the inputs. 
# Thus, 'jn' + 'mnop' => 'jmop' after 'n' is reduced and order is maintained. 
Q_C1 = np.tensordot(Q,C,axes=([1],[1])) 

# Perform "np.einsum('im,jn,mnop', Q, Q, C). We need to use Q and Q_C1. 
# Q is 'im' and Q_C1 is 'jmop'. Thus, again we need to reduce 'axes=1' 
# from Q and `axes=1` from Q_C1 corresponding to `m' in each of the inputs. 
# Thus, 'im' + 'jmop' => 'ijop' after 'm' is reduced and order is maintained. 
parte1 = np.tensordot(Q,Q_C1,axes=([1],[1])) 

# Use the same philosophy to get the rest of the einsum equivalent:
# contract 'o' ('ijop' + 'ok' => 'ijpk'), then 'p' ('ijpk' + 'pl' => 'ijkl').
out = np.tensordot(np.tensordot(parte1,Q_inv,axes=([2],[0])),Q_inv,axes=([2],[0]))

The trick with np.tensordot is to keep track of the dimensions that are reduced by the axes argument, and of how the leftover dimensions of each input line up in the output.
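
To illustrate the axis-order point (a sketch, assuming np and the arrays from the question are in scope): tensordot places the uncontracted axes of its first argument before those of its second, and losing track of that ordering is what breaks the original four-call chain. Tracking the order at every step gives a working single expression:

# tensordot(Q, C, axes=[1,1]) contracts 'n', leaving 'jmop' (not 'mjop'):
T1 = np.tensordot(Q, C, axes=[1,1])
print(np.allclose(T1, np.einsum('jn,mnop->jmop', Q, C)))  # True

# Corrected chain, contracting the right axis of each intermediate:
out = np.tensordot(
          np.tensordot(
              np.tensordot(Q, T1, axes=[1,1]),   # 'im' + 'jmop' => 'ijop'
              Q_inv, axes=[2,0]),                # 'ijop' + 'ok' => 'ijpk'
          Q_inv, axes=[2,0])                     # 'ijpk' + 'pl' => 'ijkl'
print(np.allclose(out, np.einsum('im,jn,mnop,ok,pl', Q, Q, C, Q_inv, Q_inv)))  # True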

It works. Is Q[:,None,:,None]*Q[:,None,:] the same as Q[:,None,:,None]*Q[None,:,None,:]? – terencezl

I am still curious why the four-'tensordot' solution does not work. Could you shed some light on it? – terencezl

@terencezl That's right. The leading 'None's were omitted for brevity, but you might want to keep them for better readability. – Divakar
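
For reference, the equivalence from that exchange is easy to confirm (a sketch, assuming Q from the question):

# Q[:,None,:] right-aligns to shape (1,3,1,3), the same as Q[None,:,None,:],
# so both products put 'i','m' on axes 0,2 and 'j','n' on axes 1,3.
print(np.allclose(Q[:,None,:,None]*Q[:,None,:],
                  Q[:,None,:,None]*Q[None,:,None,:]))  # True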