
Merging Representation and Ansatz #487

Closed
wants to merge 33 commits into from

Conversation

apchytr
Collaborator

@apchytr apchytr commented Sep 10, 2024

User description

Opening for discussion

Context: It is becoming apparent that the separation between Representation and Ansatz may not be necessary: questions keep coming up about where certain properties should live and which responsibilities belong to the representation versus the ansatz, and many ansatz properties and methods end up being re-exposed on the representation.

Description of the Change: Ansatz, ArrayAnsatz, PolyExpBase and PolyExpAnsatz are removed; their underlying logic is moved into Fock and Bargmann. bargmann.py and fock.py are renamed to bargmann_utils.py and fock_utils.py.

Benefits: We now maintain 3 classes instead of 7. A lot of duplicate code and property/method re-exposure is removed, and the responsibilities of Representation are much clearer.

Possible Drawbacks: bargmann.py is massive (we could move many of the methods into utils?). If a use case arises for keeping representations and ansatze separate, we no longer support it.


PR Type

enhancement, refactoring, tests, documentation, bug fix, configuration changes


Description

  • Refactored the codebase to merge Representation and Ansatz classes, simplifying the structure by removing Ansatz, ArrayAnsatz, PolyExpBase, and PolyExpAnsatz.
  • Introduced new Bargmann and Fock classes for quantum state representation with enhanced mathematical operations and transformations.
  • Updated and added unit tests to cover new representation classes and ensure functionality.
  • Refactored various modules to align with the new representation structure, including imports and method adjustments.
  • Updated documentation and comments to reflect changes in module references to bargmann_utils and fock_utils.
  • Fixed a typo in the circuit equality method.
  • Updated GitHub workflow to remove outdated tests related to ansatze.
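For orientation, the merged structure described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual mrmustard API: names, signatures, and the batch conventions are assumptions.

```python
from abc import ABC, abstractmethod
import numpy as np

# Rough, illustrative sketch of the merged structure (not the actual
# mrmustard API): one Representation base class whose two concrete
# subclasses absorb the former ansatz logic directly.
class Representation(ABC):
    @abstractmethod
    def __add__(self, other): ...

class Fock(Representation):
    """Array-backed representation (absorbs the former ArrayAnsatz logic)."""
    def __init__(self, array):
        self.array = np.asarray(array)

    def __add__(self, other):
        return Fock(self.array + other.array)

class Bargmann(Representation):
    """(A, b, c)-parametrized representation (absorbs the former PolyExpAnsatz logic)."""
    def __init__(self, A, b, c):
        # Assumed shapes: A (batch, n, n), b (batch, n), c (batch,)
        self.A, self.b, self.c = (np.asarray(x) for x in (A, b, c))

    def __add__(self, other):
        # Addition concatenates the two batches of exponential terms.
        return Bargmann(np.concatenate([self.A, other.A]),
                        np.concatenate([self.b, other.b]),
                        np.concatenate([self.c, other.c]))
```

Under this layout there is no separate Ansatz hierarchy to keep in sync: each concrete representation owns its own data and operations.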

Changes walkthrough 📝

Relevant files

Enhancement (9 files)

bargmann.py (mrmustard/physics/representations/bargmann.py, +968/-0)
Added Bargmann representation class with mathematical operations.
  • Introduced a new Bargmann class for quantum state representation.
  • Implemented methods for mathematical operations and transformations on Bargmann objects.
  • Added properties and methods for handling batch operations and polynomial shapes.
  • Included plotting functionality for visualizing the Bargmann function.

fock.py (mrmustard/physics/representations/fock.py, +360/-0)
Added Fock representation class with mathematical operations.
  • Introduced a new Fock class for quantum state representation.
  • Implemented methods for mathematical operations and transformations on Fock objects.
  • Added properties and methods for handling batch operations and array manipulations.
  • Included functionality for reducing and reordering Fock representations.

state.py (mrmustard/lab/abstract/state.py, +25/-25)
Refactored state module to use new utils modules.
  • Updated imports to use bargmann_utils and fock_utils.
  • Replaced references from fock to fock_utils for various operations.
  • Adjusted methods to align with the new representation classes.

circuit_components.py (mrmustard/lab_dev/circuit_components.py, +12/-11)
Refactored circuit components to use new representation classes.
  • Updated imports to use fock_utils.
  • Adjusted methods to align with the new Bargmann and Fock classes.
  • Fixed method calls to match the updated representation interfaces.

dgate.py (mrmustard/lab_dev/transformations/dgate.py, +5/-5)
Refactored Dgate transformation to use new Fock class.
  • Updated imports to use fock_utils.
  • Adjusted methods to align with the new Fock class.
  • Fixed method calls to match the updated representation interfaces.

base.py (mrmustard/physics/representations/base.py, +247/-0)
Introduced `Representation` base class with abstract methods.
  • Added a new Representation class as a base for representations.
  • Defined abstract methods for properties and operations on representations.
  • Implemented methods for serialization and mathematical operations.

bargmann_utils.py (mrmustard/physics/bargmann_utils.py, +46/-1)
Added utility function for Bargmann to phase space conversion.
  • Added bargmann_Abc_to_phasespace_cov_means function.
  • Updated imports and utility functions.

__init__.py (mrmustard/widgets/__init__.py, +2/-14)
Updated widgets to remove `ansatz` references.
  • Removed references to ansatz in widget displays.

__init__.py (mrmustard/physics/representations/__init__.py, +21/-0)
Added new representations module with licensing.
  • Added new module for representations with licensing information.

Tests (15 files)

test_bargmann.py (tests/test_physics/test_representations/test_bargmann.py, +428/-0)
Introduced unit tests for Bargmann representation class.
  • Added unit tests for the Bargmann class.
  • Tested initialization, mathematical operations, and transformations.
  • Verified functionality of batch operations and polynomial shape handling.
  • Included tests for plotting and IPython display features.

test_fock_utils.py (tests/test_physics/test_fock_utils.py, +49/-31)
Updated tests for fock_utils module functions.
  • Updated test descriptions to match the new fock_utils module.
  • Replaced references from fock to fock_utils.
  • Verified functionality of Fock state and density matrix operations.

test_fock.py (tests/test_physics/test_representations/test_fock.py, +258/-0)
Introduced unit tests for Fock representation class.
  • Added unit tests for the Fock class.
  • Tested initialization, mathematical operations, and transformations.
  • Verified functionality of batch operations and array manipulations.
  • Included tests for reducing and reordering Fock representations.

test_gates_fock.py (tests/test_lab/test_gates_fock.py, +14/-12)
Updated Fock gate tests to use `fock_utils`.
  • Updated imports to use fock_utils.
  • Adjusted test logic to align with new representation structure.

test_states_base.py (tests/test_lab_dev/test_states/test_states_base.py, +54/-1)
Added and updated tests for Bargmann utilities.
  • Added tests for bargmann_Abc_to_phasespace_cov_means.
  • Updated imports to use bargmann_utils.

test_circuit_components.py (tests/test_lab_dev/test_circuit_components.py, +4/-4)
Updated circuit component tests for new representations.
  • Updated tests to align with new representation structure.
  • Removed references to ansatz.

test_transformations_base.py (tests/test_lab_dev/test_transformations/test_transformations_base.py, +2/-2)
Updated transformation tests for new representations.
  • Updated tests to align with new representation structure.

test_circuit_components_utils.py (tests/test_lab_dev/test_circuit_components_utils.py, +2/-2)
Updated circuit component utility tests for new representations.
  • Updated imports to use bargmann_utils.
  • Adjusted test logic to align with new representation structure.

test_bargmann_utils.py (tests/test_physics/test_bargmann_utils.py, +17/-1)
Added license and updated imports in Bargmann utils tests.
  • Added license header to the test file.
  • Updated imports to use bargmann_utils.

test_coherent.py (tests/test_lab_dev/test_states/test_coherent.py, +3/-3)
Updated coherent state tests for new representations.
  • Updated tests to align with new representation structure.

test_cft.py (tests/test_lab_dev/test_transformations/test_cft.py, +1/-1)
Updated CFT transformation tests for new representations.
  • Updated tests to align with new representation structure.

test_lattice.py (tests/test_math/test_lattice.py, +1/-1)
Updated lattice tests to use `bargmann_utils`.
  • Updated imports to use bargmann_utils.

test_compactFock.py (tests/test_math/test_compactFock.py, +1/-1)
Updated compact Fock tests to use `bargmann_utils`.
  • Updated imports to use bargmann_utils.

test_number.py (tests/test_lab_dev/test_states/test_number.py, +1/-1)
Updated number state tests to use `fock_utils`.
  • Updated imports to use fock_utils.

test_fidelity.py (tests/test_physics/test_fidelity.py, +1/-1)
Updated fidelity tests to use `fock_utils`.
  • Updated imports to use fock_utils.

Refactoring (9 files)

base.py (mrmustard/lab_dev/states/base.py, +14/-14)
Refactored state base to use new representation structure.
  • Updated imports to use fock_utils and bargmann_utils.
  • Removed references to ansatz and updated logic to use representation.
  • Adjusted checks and conditions to align with new representation structure.

transformation.py (mrmustard/lab/abstract/transformation.py, +15/-11)
Refactored transformation logic to use updated representations.
  • Updated imports to use fock_utils and bargmann_utils.
  • Refactored methods to align with new representation structure.

gates.py (mrmustard/lab/gates.py, +9/-9)
Refactored gate logic to use `fock_utils`.
  • Updated imports to use fock_utils.
  • Refactored gate logic to align with new representation structure.

__init__.py (mrmustard/physics/__init__.py, +7/-7)
Updated physics utilities to use `fock_utils`.
  • Updated imports to use fock_utils.
  • Refactored utility functions to align with new representation structure.

base.py (mrmustard/lab_dev/transformations/base.py, +6/-6)
Refactored transformation base to use `bargmann_utils`.
  • Updated imports to use bargmann_utils.
  • Refactored transformation logic to align with new representation structure.

detectors.py (mrmustard/lab/detectors.py, +3/-3)
Refactored detector logic to use `fock_utils`.
  • Updated imports to use fock_utils.
  • Refactored detector logic to align with new representation structure.

states.py (mrmustard/lab/states.py, +3/-3)
Refactored state logic to use `fock_utils`.
  • Updated imports to use fock_utils.
  • Refactored state logic to align with new representation structure.

trace_out.py (mrmustard/lab_dev/circuit_components_utils/trace_out.py, +1/-1)
Updated trace out utility for new representations.
  • Updated logic to align with new representation structure.

number.py (mrmustard/lab_dev/states/number.py, +1/-1)
Updated number state logic to use `fock_utils`.
  • Updated imports to use fock_utils.

Documentation (9 files)

diagonal_grad.py (mrmustard/math/lattice/strategies/compactFock/diagonal_grad.py, +4/-4)
  • Updated comments to reference fock_utils instead of fock.

diagonal_amps.py (mrmustard/math/lattice/strategies/compactFock/diagonal_amps.py, +3/-3)
  • Updated comments to reference fock_utils instead of fock.

singleLeftoverMode_grad.py (mrmustard/math/lattice/strategies/compactFock/singleLeftoverMode_grad.py, +3/-3)
  • Updated comments to reference fock_utils instead of fock.

singleLeftoverMode_amps.py (mrmustard/math/lattice/strategies/compactFock/singleLeftoverMode_amps.py, +3/-3)
  • Updated comments to reference fock_utils instead of fock.

backend_tensorflow.py (mrmustard/math/backend_tensorflow.py, +1/-1)
  • Updated comments to reference bargmann_utils instead of bargmann.

backend_numpy.py (mrmustard/math/backend_numpy.py, +1/-1)
  • Updated comments to reference bargmann_utils instead of bargmann.

diagonal_grad.jl (mrmustard/math/lattice/strategies/julia/compactFock/diagonal_grad.jl, +1/-1)
  • Updated comments to reference fock_utils instead of fock.

bargmann_calculations.rst (doc/code/physics/utils/bargmann_calculations.rst, +2/-2)
  • Updated module reference to bargmann_utils.

fock_calculations.rst (doc/code/physics/utils/fock_calculations.rst, +2/-2)
  • Updated module reference to fock_utils.

Error handling (1 file)

fock_utils.py (mrmustard/physics/fock_utils.py, +4/-4)
Added error handling in `fock_state` function.
  • Added error handling for index errors in fock_state.

Bug fix (1 file)

circuits.py (mrmustard/lab_dev/circuits.py, +1/-1)
  • Fixed a typo in the circuit equality method.

Configuration changes (1 file)

tests_docs.yml (.github/workflows/tests_docs.yml, +1/-2)
Updated GitHub workflow to remove outdated tests.
  • Removed outdated test commands for ansatze.
  • Updated test command for representations.


@apchytr added the `no changelog` label (pull request does not require a CHANGELOG entry) on Sep 10, 2024

    codecov bot commented Sep 10, 2024

    Codecov Report

    Attention: Patch coverage is 98.61111% with 8 lines in your changes missing coverage. Please review.

    Project coverage is 89.84%. Comparing base (242c806) to head (4fd828a).

    Files with missing lines Patch % Lines
    mrmustard/physics/__init__.py 57.14% 3 Missing ⚠️
    mrmustard/physics/representations/fock.py 98.76% 2 Missing ⚠️
    mrmustard/lab/abstract/state.py 95.23% 1 Missing ⚠️
    mrmustard/lab/abstract/transformation.py 90.90% 1 Missing ⚠️
    mrmustard/physics/representations/bargmann.py 99.56% 1 Missing ⚠️
    Additional details and impacted files
    @@             Coverage Diff             @@
    ##           develop     #487      +/-   ##
    ===========================================
    - Coverage    89.85%   89.84%   -0.01%     
    ===========================================
      Files          104      106       +2     
      Lines         7698     7644      -54     
    ===========================================
    - Hits          6917     6868      -49     
    + Misses         781      776       -5     
    Files with missing lines Coverage Δ
    mrmustard/lab/detectors.py 79.86% <100.00%> (ø)
    mrmustard/lab/gates.py 97.25% <100.00%> (ø)
    mrmustard/lab/states.py 100.00% <100.00%> (ø)
    mrmustard/lab_dev/circuit_components.py 99.09% <100.00%> (ø)
    ...ustard/lab_dev/circuit_components_utils/b_to_ps.py 100.00% <100.00%> (ø)
    ...mustard/lab_dev/circuit_components_utils/b_to_q.py 100.00% <100.00%> (ø)
    ...tard/lab_dev/circuit_components_utils/trace_out.py 100.00% <100.00%> (ø)
    mrmustard/lab_dev/states/base.py 96.51% <100.00%> (ø)
    mrmustard/lab_dev/states/number.py 100.00% <100.00%> (ø)
    mrmustard/lab_dev/transformations/base.py 90.54% <100.00%> (ø)
    ... and 17 more


Legend
    Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 242c806...4fd828a.

    @apchytr apchytr marked this pull request as ready for review October 7, 2024 19:09
    Contributor

    PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🏅 Score: 85
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Performance Concern
    The _order_batch method uses Python's itertools and a custom argsort function, which may be inefficient for large batch sizes. Consider using numpy's argsort for better performance.
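A hypothetical sketch of the numpy-based alternative hinted at here: ordering a batch of (A, b, c) triples with np.lexsort over flattened keys instead of a Python-level custom argsort. The real `_order_batch` signature is not shown on this page, so the function name, argument shapes, and key choice below are assumptions.

```python
import numpy as np

def order_batch(A, b, c):
    """Sort a batch of (A, b, c) triples by a lexicographic key built
    from the flattened b and A entries, using np.lexsort instead of a
    Python-level argsort. Illustrative only; shapes are assumed to be
    A: (batch, n, n), b: (batch, n), c: (batch,)."""
    # One row of sort keys per batch entry: b columns first (primary), then A.
    keys = np.concatenate([b.reshape(len(b), -1), A.reshape(len(A), -1)], axis=1)
    # np.lexsort treats its *last* key as primary, so feed the columns reversed.
    order = np.lexsort(keys.T[::-1])
    return A[order], b[order], c[order]
```

For large batches this pushes the sorting loop into numpy's C implementation rather than Python-level comparisons.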

    Code Complexity
    The call method is quite complex and handles multiple cases. Consider refactoring it into smaller, more focused methods to improve readability and maintainability.
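As a generic illustration of the kind of split the reviewer suggests, a multi-case __call__ can dispatch to small, single-purpose helpers. This is a sketch under assumed semantics (a simple Gaussian-exponential evaluation), not the actual Bargmann.__call__:

```python
import numpy as np

class BargmannSketch:
    """Illustrative only: each input case gets its own named helper
    instead of inline branches inside one long __call__."""
    def __init__(self, A, b, c):
        self.A, self.b, self.c = np.asarray(A), np.asarray(b), c

    def __call__(self, z):
        z = np.asarray(z)
        # Dispatch: single point vs. a batch of points.
        return self._eval_single(z) if z.ndim == 1 else self._eval_batch(z)

    def _eval_single(self, z):
        # c * exp(z^T A z / 2 + b^T z) for one point z.
        return self.c * np.exp(0.5 * z @ self.A @ z + self.b @ z)

    def _eval_batch(self, zs):
        return np.array([self._eval_single(z) for z in zs])
```

Each helper can then be tested and documented in isolation, which is the readability gain the review is after.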

    Contributor

    codiumai-pr-agent-pro bot commented Oct 7, 2024


    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category / Suggestion / Score
    Performance
    Use vectorized operations for simplification to improve performance with large batch sizes

    Consider using vectorized operations instead of explicit loops in methods like
    simplify to improve performance for large batch sizes.

    mrmustard/physics/representations/bargmann.py [355-379]

     def simplify(self) -> None:
         r"""
         Simplifies the representation by combining together terms that have the same
         exponential part, i.e. two terms along the batch are considered equal if their
         matrix and vector are equal. In this case only one is kept and the arrays are added.
     
         Does not run if the representation has already been simplified, so it is safe to call.
         """
         if self._simplified:
             return
    -    indices_to_check = set(range(self.batch_size))
    -    removed = []
    -    while indices_to_check:
    -        i = indices_to_check.pop()
    -        for j in indices_to_check.copy():
    -            if np.allclose(self.A[i], self.A[j]) and np.allclose(self.b[i], self.b[j]):
    -                self.c = math.update_add_tensor(self.c, [[i]], [self.c[j]])
    -                indices_to_check.remove(j)
    -                removed.append(j)
    -    to_keep = [i for i in range(self.batch_size) if i not in removed]
    -    self.A = math.gather(self.A, to_keep, axis=0)
    -    self.b = math.gather(self.b, to_keep, axis=0)
    -    self.c = math.gather(self.c, to_keep, axis=0)
    +    
    +    # Vectorized comparison
    +    A_flat = self.A.reshape(self.batch_size, -1)
    +    b_flat = self.b.reshape(self.batch_size, -1)
    +    combined = np.concatenate([A_flat, b_flat], axis=1)
    +    
    +    # Find unique rows
    +    _, unique_indices, inverse = np.unique(combined, axis=0, return_index=True, return_inverse=True)
    +    
    +    # Sum c values for duplicate entries
    +    new_c = np.zeros_like(self.c[unique_indices])
    +    np.add.at(new_c, inverse, self.c)
    +    
    +    self.A = self.A[unique_indices]
    +    self.b = self.b[unique_indices]
    +    self.c = new_c
         self._simplified = True
    Suggestion importance[1-10]: 8

    Why: The suggestion to use vectorized operations in the simplify method can significantly enhance performance for large batch sizes by reducing the overhead of explicit loops, making it a valuable optimization.

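One caveat worth checking before applying the vectorized version above: np.unique compares rows exactly, whereas the loop it replaces merged terms with np.allclose, so nearly-equal floating-point terms would no longer be combined. A quick demonstration of the difference:

```python
import numpy as np

# Two batch entries that differ only by floating-point noise.
a = np.array([[1.0], [1.0 + 1e-12]])

exact_rows = len(np.unique(a, axis=0))  # np.unique keeps both rows (exact match)
tolerant = np.allclose(a[0], a[1])      # np.allclose treats them as equal
```

If the tolerance-based merging behavior matters, the vectorized path would need a rounding or quantization step on the keys before calling np.unique.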
    Use sparse matrices for improved performance and memory efficiency in large-scale computations

    Consider using a more efficient data structure for storing and manipulating sparse
    matrices, such as scipy.sparse, to improve performance and memory usage for
    large-scale computations.

    mrmustard/physics/representations/bargmann.py [768-799]

    +from scipy import sparse
    +
     def andA(A1, A2, dim_alpha1, dim_alpha2, dim_beta1, dim_beta2):
    -    A3 = math.block(
    -        [
    -            [
    -                A1[:dim_alpha1, :dim_alpha1],
    -                math.zeros((dim_alpha1, dim_alpha2), dtype=math.complex128),
    -                A1[:dim_alpha1, dim_alpha1:],
    -                math.zeros((dim_alpha1, dim_beta2), dtype=math.complex128),
    -            ],
    -            [
    -                math.zeros((dim_alpha2, dim_alpha1), dtype=math.complex128),
    -                A2[:dim_alpha2:, :dim_alpha2],
    -                math.zeros((dim_alpha2, dim_beta1), dtype=math.complex128),
    -                A2[:dim_alpha2, dim_alpha2:],
    -            ],
    -            [
    -                A1[dim_alpha1:, :dim_alpha1],
    -                math.zeros((dim_beta1, dim_alpha2), dtype=math.complex128),
    -                A1[dim_alpha1:, dim_alpha1:],
    -                math.zeros((dim_beta1, dim_beta2), dtype=math.complex128),
    -            ],
    -            [
    -                math.zeros((dim_beta2, dim_alpha1), dtype=math.complex128),
    -                A2[dim_alpha2:, :dim_alpha2],
    -                math.zeros((dim_beta2, dim_beta1), dtype=math.complex128),
    -                A2[dim_alpha2:, dim_alpha2:],
    -            ],
    -        ]
    -    )
    +    total_dim = dim_alpha1 + dim_alpha2 + dim_beta1 + dim_beta2
    +    A3 = sparse.lil_matrix((total_dim, total_dim), dtype=np.complex128)
    +    
    +    A3[:dim_alpha1, :dim_alpha1] = A1[:dim_alpha1, :dim_alpha1]
    +    A3[:dim_alpha1, dim_alpha1+dim_alpha2:dim_alpha1+dim_alpha2+dim_beta1] = A1[:dim_alpha1, dim_alpha1:]
    +    
    +    A3[dim_alpha1:dim_alpha1+dim_alpha2, dim_alpha1:dim_alpha1+dim_alpha2] = A2[:dim_alpha2, :dim_alpha2]
    +    A3[dim_alpha1:dim_alpha1+dim_alpha2, -dim_beta2:] = A2[:dim_alpha2, dim_alpha2:]
    +    
    +    A3[dim_alpha1+dim_alpha2:dim_alpha1+dim_alpha2+dim_beta1, :dim_alpha1] = A1[dim_alpha1:, :dim_alpha1]
    +    A3[dim_alpha1+dim_alpha2:dim_alpha1+dim_alpha2+dim_beta1, dim_alpha1+dim_alpha2:dim_alpha1+dim_alpha2+dim_beta1] = A1[dim_alpha1:, dim_alpha1:]
    +    
    +    A3[-dim_beta2:, dim_alpha1:dim_alpha1+dim_alpha2] = A2[dim_alpha2:, :dim_alpha2]
    +    A3[-dim_beta2:, -dim_beta2:] = A2[dim_alpha2:, dim_alpha2:]
    +    
         return A3
    Suggestion importance[1-10]: 7

    Why: The suggestion to use sparse matrices can significantly improve performance and memory usage for large-scale computations, which is relevant for the operations in the andA function. However, the actual benefit depends on the sparsity of the matrices involved.

    Implement parallel processing for batch operations to improve performance on multi-core systems

    Implement parallel processing for batch operations to leverage multi-core systems
    and improve performance for large-scale computations.

    mrmustard/physics/representations/bargmann.py [696-781]

    +from concurrent.futures import ProcessPoolExecutor
    +import multiprocessing
    +
     def __and__(self, other: Bargmann) -> Bargmann:
         r"""
         Tensor product of this Bargmann with another Bargmann.
         Equivalent to :math:`F(a) * G(b)` (with different arguments, that is).
         As it distributes over addition on both self and other,
         the batch size of the result is the product of the batch
         size of this representation and the other one.
     
         Args:
        other: Another Bargmann.
     
         Returns:
             The tensor product of this Bargmann and other.
         """
     
         def andA(A1, A2, dim_alpha1, dim_alpha2, dim_beta1, dim_beta2):
    -        A3 = math.block(
    -            [
    -                [
    -                    A1[:dim_alpha1, :dim_alpha1],
    -                    math.zeros((dim_alpha1, dim_alpha2), dtype=math.complex128),
    -                    A1[:dim_alpha1, dim_alpha1:],
    -                    math.zeros((dim_alpha1, dim_beta2), dtype=math.complex128),
    -                ],
    -                [
    -                    math.zeros((dim_alpha2, dim_alpha1), dtype=math.complex128),
    -                    A2[:dim_alpha2:, :dim_alpha2],
    -                    math.zeros((dim_alpha2, dim_beta1), dtype=math.complex128),
    -                    A2[:dim_alpha2, dim_alpha2:],
    -                ],
    -                [
    -                    A1[dim_alpha1:, :dim_alpha1],
    -                    math.zeros((dim_beta1, dim_alpha2), dtype=math.complex128),
    -                    A1[dim_alpha1:, dim_alpha1:],
    -                    math.zeros((dim_beta1, dim_beta2), dtype=math.complex128),
    -                ],
    -                [
    -                    math.zeros((dim_beta2, dim_alpha1), dtype=math.complex128),
    -                    A2[dim_alpha2:, :dim_alpha2],
    -                    math.zeros((dim_beta2, dim_beta1), dtype=math.complex128),
    -                    A2[dim_alpha2:, dim_alpha2:],
    -                ],
    -            ]
    -        )
    -        return A3
    +        # ... (same as before)
     
         def andb(b1, b2, dim_alpha1, dim_alpha2):
    -        b3 = math.reshape(
    -            math.block(
    -                [
    -                    [
    -                        b1[:dim_alpha1],
    -                        b2[:dim_alpha2],
    -                        b1[dim_alpha1:],
    -                        b2[dim_alpha2:],
    -                    ]
    -                ]
    -            ),
    -            -1,
    -        )
    -        return b3
    +        # ... (same as before)
     
         def andc(c1, c2):
    -        c3 = math.reshape(math.outer(c1, c2), (c1.shape + c2.shape))
    -        return c3
    +        # ... (same as before)
     
         dim_beta1, _ = self.polynomial_shape
         dim_beta2, _ = other.polynomial_shape
     
         dim_alpha1 = self.A.shape[-1] - dim_beta1
         dim_alpha2 = other.A.shape[-1] - dim_beta2
     
    -    As = [
    -        andA(
    +    def process_batch(args):
    +        A1, A2, b1, b2, c1, c2 = args
    +        A = andA(
                 math.cast(A1, "complex128"),
                 math.cast(A2, "complex128"),
                 dim_alpha1,
                 dim_alpha2,
                 dim_beta1,
                 dim_beta2,
             )
    -        for A1, A2 in itertools.product(self.A, other.A)
    +        b = andb(b1, b2, dim_alpha1, dim_alpha2)
    +        c = andc(c1, c2)
    +        return A, b, c
    +
    +    batch_args = [
    +        (A1, A2, b1, b2, c1, c2)
    +        for (A1, b1, c1), (A2, b2, c2) in itertools.product(
    +            zip(self.A, self.b, self.c), zip(other.A, other.b, other.c)
    +        )
         ]
    -    bs = [andb(b1, b2, dim_alpha1, dim_alpha2) for b1, b2 in itertools.product(self.b, other.b)]
    -    cs = [andc(c1, c2) for c1, c2 in itertools.product(self.c, other.c)]
    +
    +    with ProcessPoolExecutor(max_workers=multiprocessing.cpu_count()) as executor:
    +        results = list(executor.map(process_batch, batch_args))
    +
    +    As, bs, cs = zip(*results)
         return Bargmann(As, bs, cs)
    Suggestion importance[1-10]: 7

    Why: Introducing parallel processing for batch operations can leverage multi-core systems to improve performance. However, the complexity of managing parallel execution and potential overhead should be considered.

    Implement caching for frequently accessed properties to improve performance

    Implement caching for frequently computed values to avoid redundant calculations and
    improve overall performance.

    mrmustard/physics/representations/bargmann.py [162-175]

    -@property
    -def A(self) -> Batch[ComplexMatrix]:
    -    r"""
    -    The batch of quadratic coefficient :math:`A_i`.
    -    """
    -    self._generate_ansatz()
    -    if not self._backends[0]:
    -        self._A = math.atleast_3d(self._A)
    -        self._backends[0] = True
    -    return self._A
    +from functools import lru_cache
     
    -@A.setter
    -def A(self, value):
    -    self._A = value
    -    self._backends[0] = False
    +class Bargmann(Representation):
    +    def __init__(self, ...):
    +        ...
    +        self._A_cache = None
     
    +    @property
    +    @lru_cache(maxsize=None)
    +    def A(self) -> Batch[ComplexMatrix]:
    +        r"""
    +        The batch of quadratic coefficient :math:`A_i`.
    +        """
    +        self._generate_ansatz()
    +        if self._A_cache is None:
    +            self._A_cache = math.atleast_3d(self._A)
    +        return self._A_cache
    +
    +    @A.setter
    +    def A(self, value):
    +        self._A = value
    +        self._A_cache = None
    +        self.A.cache_clear()
    +
    Suggestion importance[1-10]: 6

    Why: Caching the property A can reduce redundant calculations and improve performance, especially if the property is accessed frequently. However, the implementation using lru_cache might not be directly applicable due to the mutable nature of the class.

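A variant of the caching idea above that avoids the pitfall the review notes (lru_cache on a property interacts badly with setters on a mutable object) is plain manual caching with explicit invalidation. Class and attribute names below are illustrative only, not the actual mrmustard code:

```python
import numpy as np

class CachedA:
    """Manual cache for a derived property, invalidated on write.
    Sketch only: stands in for the A property discussed above."""
    def __init__(self, A):
        self._A = A
        self._A_cache = None

    @property
    def A(self):
        if self._A_cache is None:
            self._A_cache = np.atleast_3d(self._A)  # computed once per write
        return self._A_cache

    @A.setter
    def A(self, value):
        self._A = value
        self._A_cache = None  # invalidate so the next read recomputes
```

This keeps reads cheap after the first access while remaining correct when the underlying data is reassigned, which functools.lru_cache alone does not guarantee here.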
    Enhancement
    Expand the density matrix normalization test to include a case with off-diagonal elements

    In the test_normalize_dm function, consider adding a test case for a non-normalized
    density matrix with off-diagonal elements to ensure the normalization works
    correctly in more general cases.

    tests/test_physics/test_fock_utils.py [495-498]

     def test_normalize_dm():
         """Tests normalizing a DM."""
    -    dm = np.array([[0.2, 0], [0, 0.2]])
    -    assert np.allclose(fock_utils.normalize(dm, True), np.array([[0.5, 0], [0, 0.5]]))
    +    dm1 = np.array([[0.2, 0], [0, 0.2]])
    +    assert np.allclose(fock_utils.normalize(dm1, True), np.array([[0.5, 0], [0, 0.5]]))
    +    
    +    dm2 = np.array([[0.3, 0.1], [0.1, 0.3]])
    +    normalized_dm2 = fock_utils.normalize(dm2, True)
    +    assert np.allclose(np.trace(normalized_dm2), 1.0)
    +    assert np.allclose(normalized_dm2, dm2 / np.trace(dm2))
    Suggestion importance[1-10]: 8

    Why: Adding a test case for a density matrix with off-diagonal elements ensures that the normalization function handles more complex cases, improving test coverage and reliability.

    Enhance test coverage by parameterizing the Fock state test for number variances

    Consider using a parametrized test for the test_number_variances_fock function to
    test multiple Fock states, not just n=1. This would provide more comprehensive
    coverage of the number_variances function's behavior with Fock states.

    tests/test_physics/test_fock_utils.py [489-492]

    -def test_number_variances_fock():
    -    """Tests the variance of the number operator in Fock."""
    -    assert np.allclose(fock_utils.number_variances(Fock(n=1).ket(), False), 0)
    -    assert np.allclose(fock_utils.number_variances(Fock(n=1).dm(), True), 0)
    +@pytest.mark.parametrize("n", [0, 1, 2, 5])
    +def test_number_variances_fock(n):
    +    """Tests the variance of the number operator in Fock states."""
    +    assert np.allclose(fock_utils.number_variances(Fock(n=n).ket(), False), 0)
    +    assert np.allclose(fock_utils.number_variances(Fock(n=n).dm(), True), 0)
    Suggestion importance[1-10]: 7

    Why: Parameterizing the test for Fock states provides broader coverage and ensures the function behaves correctly for different Fock states, enhancing the robustness of the test suite.

    Add checks to ensure that calculated number variances are non-negative

    In the test_number_variances_coh function, consider adding assertions to verify that
    the variances are non-negative, which is a fundamental property of variances.

    tests/test_physics/test_fock_utils.py [480-486]

     @given(x=st.floats(-1, 1), y=st.floats(-1, 1))
     def test_number_variances_coh(x, y):
         """Tests the variance of the number operator."""
    -    assert np.allclose(
    -        fock_utils.number_variances(Coherent(x, y).ket([80]), False)[0], x * x + y * y
    -    )
    -    assert np.allclose(fock_utils.number_variances(Coherent(x, y).dm([80]), True)[0], x * x + y * y)
    +    ket_variance = fock_utils.number_variances(Coherent(x, y).ket([80]), False)[0]
    +    dm_variance = fock_utils.number_variances(Coherent(x, y).dm([80]), True)[0]
    +    
    +    assert np.allclose(ket_variance, x * x + y * y)
    +    assert np.allclose(dm_variance, x * x + y * y)
    +    assert ket_variance >= 0
    +    assert dm_variance >= 0
    Suggestion importance[1-10]: 6

    Why: Verifying that variances are non-negative is a fundamental property check that adds value to the test by ensuring the function's correctness in adhering to mathematical principles.

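The non-negativity being asserted follows directly from the definition of a variance. A self-contained numpy sketch (independent of mrmustard) of the photon-number variance for a pure-state ket with Fock amplitudes `c_n`:

```python
import numpy as np

def number_variance(ket):
    """Variance <n^2> - <n>^2 of the number operator for a normalized ket."""
    probs = np.abs(ket) ** 2
    n = np.arange(len(ket))
    mean = np.sum(n * probs)
    return np.sum(n**2 * probs) - mean**2

fock_1 = np.array([0.0, 1.0, 0.0])            # |1>: variance is exactly 0
superpos = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0>+|1>)/sqrt(2): variance 1/4
```

A number state has zero variance, while any genuine superposition of number states has a strictly positive one, so the non-negativity check is a cheap sanity guard.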
    Use more descriptive variable names to enhance code readability

    Consider using a more descriptive variable name instead of '_fock' to improve code
    readability.

    mrmustard/lab/abstract/state.py [226-229]

    -_fock = fock_utils.wigner_to_fock_state(
    +fock_state = fock_utils.wigner_to_fock_state(
         self.cov,
         self.means,
         shape=self.shape,
    Suggestion importance[1-10]: 6

    Why: Using a more descriptive variable name like 'fock_state' instead of '_fock' can improve code readability and maintainability by making the variable's purpose clearer.

    Use more descriptive method names to improve code clarity

    Consider using a more explicit method name for '_project_onto_fock' to better
    describe its functionality, such as '_calculate_fock_projection'.

    mrmustard/lab/abstract/state.py [429]

    -def _project_onto_fock(self, other: State) -> State | float:
    +def _calculate_fock_projection(self, other: State) -> State | float:
    Suggestion importance[1-10]: 6

    Why: Renaming the method to '_calculate_fock_projection' provides a clearer understanding of its functionality, enhancing code clarity and maintainability.

    Use more descriptive variable names to enhance code clarity

    Consider using a more descriptive variable name instead of 'out_fock' to improve
    code readability and understanding of its purpose.

    mrmustard/lab/abstract/state.py [447-450]

    -out_fock = fock_utils.contract_states(
    +contracted_state = fock_utils.contract_states(
         stateA=(other.ket(other_cutoffs) if other.is_pure else other.dm(other_cutoffs)),
         stateB=(self.ket(self_cutoffs) if self.is_pure else self.dm(self_cutoffs)),
         a_is_dm=other.is_mixed,
    Suggestion importance[1-10]: 6

    Why: Changing 'out_fock' to 'contracted_state' makes the variable's purpose more explicit, improving code readability and understanding.

    Best practice
    Standardize module naming conventions for consistency and readability

    Consider using a consistent naming convention for utility modules. For example,
    rename 'bargmann_utils' to 'bargmann' to match the 'fock' module naming.

    mrmustard/lab/abstract/state.py [31]

    -from mrmustard.physics import bargmann_utils, fock_utils, gaussian
    +from mrmustard.physics import bargmann, fock, gaussian
    Suggestion importance[1-10]: 5

    Why: The suggestion to standardize module naming conventions could improve code consistency and readability. However, it may require changes across multiple files, which could introduce unnecessary complexity if not carefully managed.



    @apchytr apchytr closed this Oct 21, 2024
@apchytr (Collaborator, Author) commented Oct 21, 2024

    Replaced with #498

    apchytr added a commit that referenced this pull request Oct 29, 2024
    **Context:** One bottleneck currently in Mr Mustard is that
    `Representation` and `Wires` are treated separately when they are in
    fact very closely related. Here is presented an alternative
    implementation based on the [Merging Representation and
    Ansatz](#487) PR. In this, we
    now have a single `Representation` class that contains the ansatz, the
    wires and a dictionary mapping wires to their respective representation.
    
    To consider: Remove ArrayAnsatz and only use PolyExp? 
    
    **Description of the Change:**  
    - Introduced the new `Representation` class. 
    - Introduced `RepEnum` class. 
    - What is currently on develop as Ansatz and Representation is replaced
    with just Ansatz.
    - All `CircuitComponents` are initialized with a `Representation` and a
    name. Exposed classes are still initialized with modes where they make
    use of the new `from_modes` class method.
    - Renamed bargmann.py and fock.py to bargmann_utils.py and fock_utils.py
    - Renamed `fock` to `fock_array` 
    
**Benefits:** We now have an object responsible for managing multiple
representations and keeping track of which representation each wire
carries. Fewer classes to maintain (6 -> 4). Clearer separation of
responsibilities. A much cleaner codebase.
    
    ---------
    
    Co-authored-by: Filippo Miatto <miatto@gmail.com>
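The structure described in this commit message could be sketched as follows. The class and enum names (`Representation`, `RepEnum`) follow the commit text, but the internals are assumptions for illustration, not the merged implementation:

```python
from enum import Enum

class RepEnum(Enum):
    """Tag for the representation carried by a wire (names assumed)."""
    BARGMANN = "bargmann"
    FOCK = "fock"

class Representation:
    """Sketch: holds an ansatz, the wires, and a per-wire representation map."""

    def __init__(self, ansatz, wires, wire_reps=None):
        self.ansatz = ansatz
        self.wires = tuple(wires)
        # default: every wire carries the Bargmann representation
        self.wire_reps = dict(wire_reps or {w: RepEnum.BARGMANN for w in self.wires})

rep = Representation(ansatz=None, wires=(0, 1))
```

With a single object owning both the ansatz and the wires, a component that is in Fock on one wire and Bargmann on another only needs to update the `wire_reps` mapping, which is the multi-representation bookkeeping the commit describes.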
Labels: Bug fix · configuration changes · documentation · enhancement · no changelog · refactoring · Review effort [1-5]: 4 · tests