
Question about attention mechanism #31

Open · smayru opened this issue Jan 18, 2021 · 3 comments
@smayru commented Jan 18, 2021

[Screenshot: Fig. 5 from the paper]

Thank you for providing the code.
I have a question about Fig. 5 in your paper.
In my understanding, the attention value is obtained for each triple (s, e, t) based on Eq. (3).
In Fig. 5, however, you seem to report an attention value for each relation e alone, independent of s and t.
For example, the attention value for is_published_at is 0.970 in the right panel of Fig. 5.
Could you explain how you obtain the attention value for each relation?

@dujiaxin commented

I ran into the same question, but I think I figured it out by reading the code.

Besides referring to issue #27, I'm pasting my script here.

For example, if you want to visualize the weight on target type 1, the weight from source type 2 would be:

att = torch.matmul(self.att, self.att.T)
mask = torch.logical_and(node_type_j == 1, node_type_i == 2)
weight = att[mask].sum() / att[node_type_j == 1].sum()

self.att is saved in conv.py.
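
For reference, here is a self-contained version of that computation with random stand-in tensors. The shapes (per-edge attention [num_edges, n_heads], per-edge source/target types) are my assumptions, not taken from the repo, and the head-average is a simpler variant of the att @ att.T product above:

import torch

# Stand-in data (assumed shapes, for illustration only).
num_edges, n_heads = 1000, 8
att = torch.rand(num_edges, n_heads)             # stand-in for self.att
node_type_i = torch.randint(0, 3, (num_edges,))  # source node type per edge
node_type_j = torch.randint(0, 3, (num_edges,))  # target node type per edge

# Share of attention into target type 1 that comes from source type 2.
edge_att = att.mean(dim=1)                       # average over heads
mask_t = node_type_j == 1
mask_st = torch.logical_and(mask_t, node_type_i == 2)
weight = edge_att[mask_st].sum() / edge_att[mask_t].sum()
print(float(weight))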

Maybe the author can double-check if I am right.

@acbull (Owner) commented Jan 18, 2021

> I ran into the same question, but I think I figured it out by reading the code. […]

Hi,

Yes, your answer is similar to how we get the average attention weight.

We don't directly use the Relation_weight to plot this tree; instead, we average the attention calculated for each <s, e, t> triple over the nodes within several batches, as this snippet does.
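
To make that batch-averaging concrete, here is a minimal sketch, not the repo's actual snippet; get_batch_attention is a hypothetical stand-in for pulling self.att and the per-edge relation ids out of one sampled batch:

import torch

def get_batch_attention():
    # Hypothetical stand-in: random per-edge attention ([E, n_heads]) and a
    # relation id per edge ([E]). Replace with self.att and the edge types
    # from conv.py when reproducing Fig. 5.
    E, n_heads, n_rel = 500, 8, 4
    return torch.rand(E, n_heads), torch.randint(0, n_rel, (E,))

total, count = {}, {}              # per-relation attention sum / edge count
for _ in range(10):                # "several batches"
    att, edge_type = get_batch_attention()
    edge_att = att.mean(dim=1)     # average over heads
    for r in edge_type.unique().tolist():
        m = edge_type == r
        total[r] = total.get(r, 0.0) + edge_att[m].sum().item()
        count[r] = count.get(r, 0) + int(m.sum())

avg_att = {r: total[r] / count[r] for r in total}  # average attention per relation
print(avg_att)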

@smayru (Author) commented Jan 19, 2021

Thank you for your prompt reply!
