Add example script and update pip installation method
salaniz committed Nov 18, 2020
1 parent 14f4a1a commit ad63453
Showing 5 changed files with 44 additions and 6 deletions.
10 changes: 6 additions & 4 deletions README.md
@@ -7,19 +7,21 @@ Evaluation codes for MS COCO caption generation.
This repository provides Python 3 support for the caption evaluation metrics used for the MS COCO dataset.

The code is derived from the original repository that supports Python 2.7: https://github.com/tylin/coco-caption.
-Caption evaluation depends on the COCO API that natively supports Python 3 (see Requirements).
+Caption evaluation depends on the COCO API that natively supports Python 3.

## Requirements ##
- Java 1.8.0
- Python 3
- pycocotools (COCO Python API): https://github.com/cocodataset/cocoapi

## Installation ##
-To install pycocoevalcap and the pycocotools dependency, run:
+To install pycocoevalcap and the pycocotools dependency (https://github.com/cocodataset/cocoapi), run:
```
-pip install git+https://github.com/salaniz/pycocoevalcap
+pip install pycocoevalcap
```

+## Usage ##
+See the example script: [example/coco_eval_example.py](example/coco_eval_example.py)

## Files ##
./
- eval.py: This file includes the COCOEvalCap class that can be used to evaluate results on COCO.
1 change: 1 addition & 0 deletions example/captions_val2014.json

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions example/captions_val2014_fakecap_results.json

Large diffs are not rendered by default.

25 changes: 25 additions & 0 deletions example/coco_eval_example.py
@@ -0,0 +1,25 @@
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

annotation_file = 'captions_val2014.json'
results_file = 'captions_val2014_fakecap_results.json'

# create coco object and coco_result object
coco = COCO(annotation_file)
coco_result = coco.loadRes(results_file)

# create coco_eval object by taking coco and coco_result
coco_eval = COCOEvalCap(coco, coco_result)

# restrict evaluation to the images present in the results file
# (useful for quick tests on a subset); remove this line to
# evaluate on the full validation set
coco_eval.params['image_id'] = coco_result.getImgIds()

# evaluate results
# SPICE will take a few minutes the first time, but speeds up due to caching
coco_eval.evaluate()

# print output evaluation scores
for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')
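For reference, the results file that `coco.loadRes()` reads in the example above follows the standard COCO caption results layout: a JSON array of objects with `image_id` and `caption` fields. A minimal sketch of that layout (the image IDs and captions here are made up for illustration, not taken from the actual `captions_val2014_fakecap_results.json`):

```python
import json

# Hypothetical entries in the COCO caption results format:
# a JSON array of {"image_id": <int>, "caption": <str>} objects.
fake_results = [
    {"image_id": 404464, "caption": "a black and white photo of a street"},
    {"image_id": 380932, "caption": "a group of people standing on a beach"},
]

# Serialize the list the same way a results file stores it on disk.
payload = json.dumps(fake_results)
print(payload)
```

Writing such a list to a `.json` file produces input that `loadRes()` can consume, provided every `image_id` exists in the annotation file being evaluated against.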
13 changes: 11 additions & 2 deletions setup.py
@@ -3,11 +3,20 @@
# Prepend pycocoevalcap to package names
package_names = ['pycocoevalcap.'+p for p in find_namespace_packages()]

+with open("README.md", "r") as fh:
+    readme = fh.read()
+
setup(
    name='pycocoevalcap',
-    version=1.1,
+    version=1.2,
+    maintainer='salaniz',
+    description="MS-COCO Caption Evaluation for Python 3",
+    long_description=readme,
+    long_description_content_type="text/markdown",
+    url="https://github.com/salaniz/pycocoevalcap",
    packages=['pycocoevalcap']+package_names,
    package_dir={'pycocoevalcap': '.'},
    package_data={'': ['*.jar', '*.gz']},
-    install_requires=['pycocotools>=2.0.0']
+    install_requires=['pycocotools>=2.0.2'],
+    python_requires='>=3'
)
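The `package_names` comprehension in the setup.py context above prefixes every package discovered by `find_namespace_packages()` with `pycocoevalcap.`, so the whole source tree installs under one namespace even though `package_dir` maps that namespace to the repository root. A quick sketch of the prefixing step, using hypothetical subpackage names rather than the real output of `find_namespace_packages()`:

```python
# Hypothetical subpackage names standing in for find_namespace_packages(),
# which would actually scan the repository root for packages.
discovered = ["bleu", "cider", "tokenizer"]

# Same prefixing expression as in setup.py.
package_names = ["pycocoevalcap." + p for p in discovered]
print(package_names)
# ['pycocoevalcap.bleu', 'pycocoevalcap.cider', 'pycocoevalcap.tokenizer']
```

This is why users import `pycocoevalcap.eval` after installation even though `eval.py` sits at the top level of the repository.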
