
autoaccept group shares may create too much load and not finish all accepts #10591

Open

butonic opened this issue Nov 18, 2024 · 4 comments

butonic (Member) commented Nov 18, 2024

The current implementation for autoaccept shares needlessly amplifies writes: on a group share with 5k members, 5k writes are made to update the received-share status, at least one per member.

Furthermore, when the autoaccept logic iterates over the members and is killed by kubernetes, it may not have accepted all shares. The work is never picked up again, because the event was fetched from the queue and ACKed automatically.
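To make the failure mode concrete, here is a minimal Go sketch. The `Queue` and `ShareCreatedEvent` types and the `acceptShare` callback are invented for illustration; they are not the actual go-micro/NATS API used here:

```go
package sketch

import "fmt"

// Hypothetical stand-ins for the real event types, for illustration only.
type ShareCreatedEvent struct {
	ShareID string
	Members []string // group members resolved when the share was created
}

type Queue interface {
	// Fetch returns the next event and ACKs it immediately (at-most-once delivery).
	Fetch() ShareCreatedEvent
}

// autoAccept mirrors the current event-driven flow: one write per member,
// with no redelivery if the process dies mid-loop.
func autoAccept(q Queue, acceptShare func(userID, shareID string) error) {
	ev := q.Fetch() // the event is already ACKed here; the queue will never redeliver it
	for _, member := range ev.Members {
		// If kubernetes kills the pod anywhere in this loop, the remaining
		// members never get their received-share state updated, and because
		// the event is gone from the queue, nothing retries the work.
		if err := acceptShare(member, ev.ShareID); err != nil {
			fmt.Printf("accept failed for %s: %v\n", member, err)
		}
	}
}
```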

A better solution would be to move the autoaccept code into the share manager implementation itself: whenever a list received shares call is made, we already fetch all group shares the user is a member of before fetching the received state for each share (which contains the mountpoint and the pending / accepted / rejected state). We can update any pending shares to accepted on the fly; a sketch follows below the list.

This would

  • (PRO) reduce the number of writes to the number of active users
  • (PRO) move the functionality to the share manager implementation, where it belongs
  • (PRO) get rid of an event-based mechanism that, AFAICT, may overload the underlying storage
  • (PRO) solve the data inconsistency that occurs when a group share accept operation is killed by kubernetes while still iterating over the members
  • (CON) force every share manager implementation to implement autoaccept on its own

IMO a reasonable tradeoff.
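A rough sketch of the proposed accept-on-read behavior; all type and function names here are invented for illustration and do not match the actual CS3/reva share manager types:

```go
package sketch

import "fmt"

// Invented received-share model for illustration; the real share manager
// uses the CS3 API types.
type State int

const (
	StatePending State = iota
	StateAccepted
	StateRejected
)

type ReceivedShare struct {
	ID           string
	IsGroupShare bool
	State        State
}

// ListReceivedShares flips pending group shares to accepted while the list
// is being assembled, so a write only ever happens for users who are active
// enough to list their shares.
func ListReceivedShares(userID string, shares []*ReceivedShare, persist func(userID string, s *ReceivedShare) error) ([]*ReceivedShare, error) {
	for _, s := range shares {
		if s.IsGroupShare && s.State == StatePending {
			s.State = StateAccepted // auto-accept on the fly
			if err := persist(userID, s); err != nil {
				return nil, fmt.Errorf("persisting accepted state for share %s: %w", s.ID, err)
			}
		}
	}
	return shares, nil
}
```

Because the accept happens lazily on the next list call, a killed pod loses nothing: the share simply stays pending until the user lists again, and members added to the group later are accepted the same way.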

rhafer (Contributor) commented Nov 18, 2024

Quoting the pro/con list above and adding one item:

  • (PRO) autoaccept also works for new group members who are added after the share was created

micbar (Contributor) commented Nov 18, 2024

@kobergj @wkloucek I would strongly vote for taking that into account sooner rather than later.

IOPS are generally in short supply; this proposal reduces them significantly.

And it fixes some bad behavior affecting group members who are added after the share was created.

wkloucek (Contributor) commented

> (PRO) autoaccept also works for new group members who are added after the share was created

> And it fixes some bad behavior affecting group members who are added after the share was created.

Could you please elaborate? How is this related to https://github.com/owncloud/enterprise/issues/6958#issuecomment-2464126518?

micbar (Contributor) commented Nov 18, 2024

Group shares are auto-accepted for all the members of a group at the point in time of the share creation. Members who are added later will see the share, but it will not be synced to the desktop.

NOTE: accepting shares is nowadays called "Sync" and only has an effect on the native clients. In the web client, you do not need to sync a share to work with it.
