Check for noble readiness and tell journalists/admins #7322
Comments
Overall I don't think this poses much of a security risk. The set of changes an attacker might be able to infer is smaller than it seems, though: two of the four changes above (ufw, ssh) could be inferred by looking at the version number, as they'd be enforced by a release's postinst. The other ones might be useful to an attacker looking to, say, use up disk space, but they could just put in lots of submissions. That said, I'm of two minds about this approach: because it is a public API, this is extra non-public information (albeit collapsed into a boolean). One alternative, in line with what was done on previous updates, would be a banner message in the JI. That can be a lot more verbose, and while we won't be able to see it, if it's sufficiently "alerty" it will enlist the help of users in getting admins to contact us rather than the other way round. The other advantage of an alert within the JI is that for folks running an instance without contact with FPF, the API change is unlikely to be noticed, while the JI banner will be visible to anyone using the system. (Obviously, the two are not mutually exclusive.)
I'll start working on the banner message. I'm also thinking about an OSSEC alert, since that can check the mon server as well.
Sounds good; the OSSEC alert would hopefully also get some folks' attention!
Here's my first draft of the check script: d601c6c. My idea is that it would run on a daily timer and write its output to a JSON file. The JI will read from that JSON file and display the banner message if needed.
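To make the JI side of that idea concrete, here is a minimal sketch of how the Journalist Interface could read such a results file and decide whether to show a banner. The file path and the "is_ready" field name are assumptions for illustration, not the actual implementation.

```python
# Hypothetical sketch: the JI reads the results JSON written by the check
# script and shows a banner when the system is not ready. The path and the
# "is_ready" field name are assumptions.
import json
from pathlib import Path
from typing import Optional

RESULTS_PATH = Path("/etc/noble-upgrade-results.json")  # assumed location


def noble_readiness_banner() -> Optional[str]:
    """Return a warning message for the JI banner, or None if no banner is needed."""
    try:
        results = json.loads(RESULTS_PATH.read_text())
    except (OSError, ValueError):
        # Missing or unreadable results file: fail open and show no banner.
        return None
    if results.get("is_ready", True):
        return None
    return (
        "This server is not ready for the Ubuntu Noble upgrade. "
        "Please contact your administrator."
    )
```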
Perform a number of checks to ensure the system is ready for the noble migration. The results are written to a JSON file in /etc/ that other things like the JI and the upgrade script itself can read from. The script is run hourly on a systemd timer but can also be run interactively for administrators who want slightly more details. Refs #7322.
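As a rough illustration of that flow (not the actual SecureDrop code), a check script along these lines could run its checks, write the collapsed result to a JSON file under /etc/, and print per-check details when run interactively. The file name, the check set, and the disk-space threshold below are assumptions.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a noble readiness check script.

The file name, check set, and threshold are illustrative assumptions,
not the actual SecureDrop implementation.
"""
import json
import shutil
import sys

RESULTS_PATH = "/etc/noble-upgrade-results.json"  # assumed output location


def check_free_space() -> bool:
    """Require a minimum amount of free space on / for the release upgrade."""
    free_gib = shutil.disk_usage("/").free / (1024 ** 3)
    return free_gib >= 10  # threshold is an assumption


def main() -> int:
    checks = {"free_space": check_free_space()}
    results = {"is_ready": all(checks.values()), "checks": checks}
    with open(RESULTS_PATH, "w") as f:
        json.dump(results, f)
    # When run interactively, print per-check details for administrators.
    if sys.stdout.isatty():
        for name, ok in checks.items():
            print(f"{name}: {'OK' if ok else 'FAILED'}")
    return 0 if results["is_ready"] else 1


if __name__ == "__main__":
    sys.exit(main())
```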
Description
Update: This is an old proposal; we're not immediately planning to do the API check, and will instead focus on a JI banner and other things.
The SI's public /metadata endpoint outputs some information about a SecureDrop install that we use for various ecosystem observation purposes (primarily checking that auto-updates have succeeded). With the upcoming noble migration, we are looking at performing the upgrade automatically, but there will still be some issues that cannot be fixed ahead of time (e.g. insufficient free disk space).
My idea is that we add a flag to the API output like noble_readiness: true/false. It only outputs a boolean, which we can use to find instances that are not ready and reach out to their admins, who can log in and run a script that outputs more details on what's wrong. Example checks that would trigger a false response:
The main argument against this is that we are exposing information about an instance's internal health; we mitigate that by outputting only a boolean and by checking enough things that a potential attacker cannot figure out which check is triggering the failure.
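For illustration only (per the update above, this API change is not currently planned), the flag might look something like the sketch below, which collapses the detailed results into a single boolean in the metadata response. The route internals, field names, and results path are assumptions, not SecureDrop's actual source code.

```python
# Hypothetical Flask sketch of exposing a single collapsed boolean in the
# public metadata endpoint. Field names and the results path are assumptions.
import json

from flask import Blueprint, jsonify

api = Blueprint("api", __name__)


def noble_ready() -> bool:
    """Collapse the detailed check results into one boolean."""
    try:
        with open("/etc/noble-upgrade-results.json") as f:  # assumed path
            return bool(json.load(f).get("is_ready", False))
    except (OSError, ValueError):
        return False


@api.route("/metadata")
def metadata():
    return jsonify(
        {
            # ...existing metadata fields would remain here...
            "noble_readiness": noble_ready(),
        }
    )
```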
Also, because this runs from the app server, I don't think we can handle the mon server.