
[tests] enable test_weight_qbits_tensor_linear_cuda on xpu devices #345

Closed
wants to merge 4 commits into from

Conversation

@faaany (Contributor) commented Nov 1, 2024

What does this PR do?

This is a follow-up to PR #344 and should be merged after it.
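Enabling a CUDA-only test on XPU typically comes down to selecting the target device based on what accelerator is available. The helper below is a minimal, hypothetical sketch of that pattern; the function name `select_accelerator` and its flag-based signature are illustrative and do not come from the optimum-quanto codebase (in a real test suite the flags would be fed from `torch.cuda.is_available()` and `torch.xpu.is_available()`).

```python
def select_accelerator(cuda_available: bool, xpu_available: bool) -> str:
    """Return the device string a test should target.

    Prefers CUDA, falls back to Intel XPU, then CPU. Hypothetical
    helper for illustration only.
    """
    if cuda_available:
        return "cuda"
    if xpu_available:
        return "xpu"
    return "cpu"


if __name__ == "__main__":
    # On a machine with an Intel GPU but no NVIDIA GPU:
    print(select_accelerator(cuda_available=False, xpu_available=True))  # xpu
```

With a helper like this, a test previously hard-coded to `"cuda"` can run unchanged on XPU machines, which is the spirit of this PR.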

Below are the test results:

PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-32-1-bf16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-32-2-fp16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-32-2-bf16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-48-1-fp16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-48-1-bf16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-48-2-fp16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-48-2-bf16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-64-1-fp16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-64-1-bf16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-64-2-fp16]
PASSED test/tensor/weights/test_weight_qbits_tensor_dispatch.py::test_weight_qbits_tensor_linear_gpu[no-bias-4096-16384-64-2-bf16]
================================================ 288 passed, 161 deselected in 4.56s =================================================

Before submitting

  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you run all tests locally and make sure they pass?
  • Did you write any new necessary tests?

@faaany faaany requested a review from dacorvo as a code owner November 1, 2024 06:59
@faaany faaany changed the title [tests] enable test_weight_qbits_tensor_linear_gpu on xpu devices [tests] enable test_weight_qbits_tensor_linear_cuda on xpu devices Nov 1, 2024
@dacorvo (Collaborator) commented:


Rebased and merged as #350

@dacorvo dacorvo closed this Nov 12, 2024
@faaany (Contributor, Author) commented Nov 13, 2024

Thanks for the rebase and merge!

2 participants