
Sourcery refactored master branch #19

Open — wants to merge 1 commit into `master` from `sourcery/master`
Conversation

@sourcery-ai sourcery-ai bot commented Oct 20, 2022

Branch master refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin:

Review changes via command line

To manually merge these changes, make sure you're on the master branch, then run:

```shell
git fetch origin sourcery/master
git merge --ff-only FETCH_HEAD
git reset HEAD^
```

Help us improve this pull request!

@sourcery-ai sourcery-ai bot requested a review from anantshri October 20, 2022 20:51
sourcery-ai bot commented Oct 20, 2022

Sourcery Code Quality Report

✅  Merging this PR will increase code quality in the affected files by 0.60%.

| Quality metrics | Before | After | Change |
|---|---|---|---|
| Complexity | 33.43 ⛔ | 30.50 😞 | -2.93 👍 |
| Method Length | 165.89 😞 | 166.00 😞 | 0.11 👎 |
| Working memory | 13.63 😞 | 14.06 😞 | 0.43 👎 |
| Quality | 28.66% 😞 | 29.26% 😞 | 0.60% 👍 |

| Other metrics | Before | After | Change |
|---|---|---|---|
| Lines | 240 | 255 | 15 |

| Changed files | Quality Before | Quality After | Quality Change |
|---|---|---|---|
| svn_extractor.py | 28.66% 😞 | 29.26% 😞 | 0.60% 👍 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
|---|---|---|---|---|---|---|
| svn_extractor.py | main | 47 ⛔ | 579 ⛔ | 17 ⛔ | 10.39% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| svn_extractor.py | readsvn | 30 😞 | 245 ⛔ | 19 ⛔ | 20.04% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| svn_extractor.py | save_url_wc | 20 😞 | 216 ⛔ | 10 😞 | 39.37% 😞 | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| svn_extractor.py | readwc | 13 🙂 | 214 ⛔ | 10 😞 | 45.03% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| svn_extractor.py | save_url_svn | 5 ⭐ | 125 😞 | 7 🙂 | 67.42% 🙂 | Try splitting into smaller methods |

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!

Comment on lines -29 to +57
```diff
-    urli = urli + "/"
+    urli = f"{urli}/"
     for a in data.text.splitlines():
-        # below functionality will find all usernames from svn entries file
-        if a == "has-props":
-            author_list.append(old_line)
-        if a == "file":
-            if not pattern.search(old_line):
-                continue
-            ignore = getext(old_line) in excludes
-            if ignore:
-                print('{}{}(not extracted)'.format(urli, old_line))
-            else:
-                print('{}{}'.format(urli, old_line))
-            if no_extract and not ignore:
-                save_url_svn(urli, old_line, proxy_dict)
-            file_list = file_list + ";" + old_line
         if a == "dir":
             if old_line != "":
                 folder_path = os.path.join("output", urli.replace("http://", "").replace("https://", "").replace("/", os.path.sep), old_line)
-                if not os.path.exists(folder_path):
-                    if no_extract:
-                        os.makedirs(folder_path)
-                dir_list = dir_list + ";" + old_line
-                print('{}{}'.format(urli, old_line))
+                if not os.path.exists(folder_path) and no_extract:
+                    os.makedirs(folder_path)
+                dir_list = f"{dir_list};{old_line}"
+                print(f'{urli}{old_line}')
                 try:
                     d = requests.get(urli + old_line + "/.svn/entries", verify=False, proxies=(proxy_dict))
                     readsvn(d, urli + old_line, match, proxy_dict)
                 except Exception:
-                    print("Error Reading {}{}/.svn/entries so killing".format(urli, old_line))
+                    print(f"Error Reading {urli}{old_line}/.svn/entries so killing")
+        elif a == "file":
+            if not pattern.search(old_line):
+                continue
+            ignore = getext(old_line) in excludes
+            if ignore:
+                print(f'{urli}{old_line}(not extracted)')
+            else:
+                print(f'{urli}{old_line}')
+            if no_extract and not ignore:
+                save_url_svn(urli, old_line, proxy_dict)
+            file_list = f"{file_list};{old_line}"
+        elif a == "has-props":
+            author_list.append(old_line)
```
Function readsvn refactored with the following changes:
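The heart of this change is mechanical: concatenation and `.format()` calls are swapped for f-strings, which produce byte-for-byte identical output. A minimal sketch with hypothetical values (the real ones come from the crawled SVN entries file):

```python
# Hypothetical stand-ins for the variables readsvn works with.
urli = "http://example.com/repo"
old_line = "config.php"

# The refactored forms are equivalent to the originals:
assert urli + "/" == f"{urli}/"
assert '{}{}'.format(urli, old_line) == f'{urli}{old_line}'
assert ";" + old_line == f";{old_line}"
```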

Comment on lines -70 to +71
```diff
-    with open(folder + "wc.db", "wb") as f:
+    with open(f"{folder}wc.db", "wb") as f:
         f.write(data.content)
-    conn = sqlite3.connect(folder + "wc.db")
+    conn = sqlite3.connect(f"{folder}wc.db")
```
Function readwc refactored with the following changes:
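For context, a self-contained sketch of the two-step pattern these lines implement: write the fetched `wc.db` to disk, then open it with `sqlite3`. The folder path and content bytes here are stand-ins, not values from the tool:

```python
import os
import sqlite3
import tempfile

# Stand-ins: in svn_extractor, `folder` is the output directory and
# `content` is data.content from the requests response.
folder = tempfile.mkdtemp() + os.sep
content = b""

# Write the downloaded database to disk, then open it with sqlite3 --
# the same two steps the refactored f-string lines perform.
with open(f"{folder}wc.db", "wb") as f:
    f.write(content)
conn = sqlite3.connect(f"{folder}wc.db")
conn.close()
```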

Comment on lines -104 to +103
```diff
-        print("{} : {}".format(cnt, x))
+        print(f"{cnt} : {x}")
```
Function show_list refactored with the following changes:

Comment on lines -126 to +125
```diff
-        print("Error while accessing : {}{}".format(url, svn_path))
+        print(f"Error while accessing : {url}{svn_path}")
```
Function save_url_wc refactored with the following changes:

Comment on lines -139 to +143
```diff
-    r = requests.get(url + "/.svn/text-base/" + filename + ".svn-base", verify=False, proxies=(proxy_dict))
+    r = requests.get(
+        f"{url}/.svn/text-base/{filename}.svn-base",
+        verify=False,
+        proxies=(proxy_dict),
+    )
```
Function save_url_svn refactored with the following changes:

Comment on lines -190 to +195
```diff
-    print("Only downloading matches to {}".format(match))
-    match = "("+match+"|entries$|wc.db$)"  # need to allow entries$ and wc.db too
+    print(f"Only downloading matches to {match}")
+    match = f"({match}|entries$|wc.db$)"
```
Function main refactored with the following changes:

This removes the following comments (why?):

# need to allow entries$ and wc.db too
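The dropped comment documented real behavior: the user's filter regex is widened so the SVN metadata files `entries` and `wc.db` always match and stay downloadable. A sketch of that wrapping with a hypothetical user-supplied filter:

```python
import re

# Hypothetical user-supplied filter; main() wraps it so that the SVN
# metadata files `entries` and `wc.db` still match.
match = r"\.php$"
match = f"({match}|entries$|wc.db$)"
pattern = re.compile(match)

assert pattern.search("index.php")
assert pattern.search(".svn/entries")
assert pattern.search(".svn/wc.db")
assert not pattern.search("readme.txt")
```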
