The assignment for Week 1 of Course 4, section 5.2.3 "Putting it together: Pooling backward"
Changed part:
```python
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += np.multiply(mask, dA[i, h, w, c])

# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += distribute_value(da, shape)
```
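For context, the first corrected line is the `mode == "max"` branch and the last two are the `mode == "average"` branch. Both rely on helpers defined earlier in the notebook; below are minimal sketches consistent with the assignment's definitions (illustrative, not the graded code):

```python
import numpy as np

def create_mask_from_window(x):
    # True exactly where x attains its maximum, so the upstream
    # scalar gradient is routed only to the max entry of the window.
    return x == np.max(x)

def distribute_value(dz, shape):
    # Spread the scalar gradient dz evenly over a window of the given
    # (n_H, n_W) shape: average pooling weights every entry equally
    # in the forward pass, so each entry gets an equal share of dz.
    n_H, n_W = shape
    return np.ones(shape) * (dz / (n_H * n_W))
```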
The Reason:
In a pooling layer's forward pass, each slice maps to a single scalar, so the backward pass must map each scalar back to a slice. For max pooling, multiplying the mask by the scalar and adding the product to the corresponding slice of dA_prev passes the error exactly to the maximum entry; for average pooling, adding the output of distribute_value to the corresponding slice spreads the error evenly over every entry of the slice. Together these two cases propagate the error back precisely.
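To see where the two branches sit, here is a minimal, self-contained sketch of the whole backward loop, assuming the helper sketches above and the usual (A_prev, hparameters) cache from the forward pass. The variable names follow the notebook, but this is an illustration rather than the graded solution:

```python
import numpy as np

def pool_backward(dA, cache, mode="max"):
    """Backward pass for a pooling layer.

    dA    -- gradient of the cost w.r.t. the pooling output, shape (m, n_H, n_W, n_C)
    cache -- (A_prev, hparameters) stored during the forward pass
    mode  -- "max" or "average"
    """
    A_prev, hparameters = cache
    stride = hparameters["stride"]
    f = hparameters["f"]

    m, n_H, n_W, n_C = dA.shape
    dA_prev = np.zeros_like(A_prev)

    for i in range(m):                      # loop over examples
        a_prev = A_prev[i]
        for h in range(n_H):                # loop over output rows
            for w in range(n_W):            # loop over output columns
                for c in range(n_C):        # loop over channels
                    vert_start, vert_end = h * stride, h * stride + f
                    horiz_start, horiz_end = w * stride, w * stride + f

                    if mode == "max":
                        # One scalar of dA goes only to the max entry of its window.
                        a_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
                        mask = create_mask_from_window(a_slice)  # sketched above
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += \
                            np.multiply(mask, dA[i, h, w, c])
                    elif mode == "average":
                        # One scalar of dA is spread evenly over its window.
                        da = dA[i, h, w, c]
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += \
                            distribute_value(da, (f, f))

    return dA_prev
```

The `+=` accumulation is what makes this correct when pooling windows overlap (stride < f): every input position collects gradient from each window that covered it.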
I'm not sure whether I've described this clearly. @marsggbo, thanks for sharing the assignment!!!