
Commit c1afa54

committed
Added the Tongyi Qianwen (Qwen) finals validation set, added time-cost metrics, and optimized the framework
1 parent 3bec60f commit c1afa54

15 files changed: +900 -1121 lines changed

README.md

Lines changed: 117 additions & 97 deletions
@@ -15,88 +15,7 @@ CodeFuse-13B: Python 3.8 or above,PyTorch 1.12 or above, with a recommendation f

CodeFuse-CodeLlama-34B: python>=3.8, pytorch>=2.0.0, transformers==4.32.0, Sentencepiece, CUDA 11.

-### Generation Processor:
-We designed an infrastructure called Processor. Its main purpose is to handle the differences between models. A processor needs to implement three abstract functions:
-* ``load_model_tokenizer``: Because models differ in loading parameters and tokenizer terminators, each model must be loaded and adapted with its own settings. This function helps users load and adapt different models.
-* ``process_before``: Prompts must be adapted to different prompt styles depending on the evaluation task type and the model selected by the user. The ``process_before`` function is extracted mainly to help users preprocess prompts.
-* ``process_after``: Model outputs vary widely, so the generated results must be spliced into suitable test cases before the evaluation framework can run them automatically. This function post-processes the generated results to fit the evaluation dataset, based on the task type and dataset conditions.
-
-We also updated the relevant configuration in ckpt_config for the evaluation. For example:
-```commandline
-{
-  "CodeFuse-13B": {
-    "path": "/mnt/model/CodeFuse13B-evol-instruction-4K/", // model path
-    "processor_class": "codefuseEval.process.codefuse13b.Codefuse13BProcessor", // processor path (please create the file in "codefuseEval.process")
-    "tokenizer": {
-      "truncation": true,
-      "padding": true,
-      "max_length": 600
-    }, // params for the tokenizer to encode input prompts
-    "generation_config": { // combine with the `decode_mode` param to set your own decoding; use a JSON object to define each decode mode. Non-object values are read directly into the generation config
-      "greedy": {
-        "do_sample": false,
-        "num_beams": 1,
-        "max_new_tokens": 512
-      },
-      "beams": {
-        "do_sample": false,
-        "num_beams": 5,
-        "max_new_tokens": 600,
-        "num_return_sequences": 1
-      },
-      "dosample": {
-        "do_sample": true
-      },
-      "temperature": 0.2,
-      "max_new_tokens": 600,
-      "num_return_sequences": 1,
-      "top_p": 0.9,
-      "num_beams": 1,
-      "do_sample": true
-    },
-    "task_mode": "code_completion", // currently supports four kinds: [code_completion, nl2code, code_trans, codescience]; if your eval dataset supports several tasks, set the task mode to get suitable processing
-    "batch_size": 1,
-    "sample_num": 1,
-    "decode_mode": "beams" // the configuration of the chosen decode mode is merged into the generation config
-  }
-}
-```
-
-## Generation Command:
-
-```
-bash codefuseEval/script/generation.sh MODELNAME EVALDATASET OUTFILE LANGUAGE
-
-eg:
-bash codefuseEval/script/generation.sh CodeFuse-13B humaneval_python result/test.jsonl python
-```
-
-If you want to test code translation, the language is the source language. For example, to translate C++ code into Python:
-
-```bash
-bash codefuseEval/script/generation.sh CodeFuse-CodeLlama-34B codeTrans_cpp_to_python result/test.jsonl cpp
-```
-
-## How to use CodeFuseEval
-
-### Evaluation Data
-Data are stored in ``codefuseEval/data`` in JSON Lines format. We first integrated the humaneval-X dataset.
-
-* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
-* ``prompt``: the function declaration and docstring, used for code generation.
-* ``declaration``: only the function declaration, used for code translation.
-* ``canonical_solution``: human-crafted example solutions.
-* ``test``: hidden test samples, used for evaluation.
-* ``example_test``: public test samples (appearing in the prompt), used for evaluation.
-* ``prompt_text``: prompt text.
-* ``prompt_explain``: prompt explanation.
-* ``func_title``: code function title.
-* ``prompt_text_chinese``: Chinese prompt.
-
-### Evaluation Environment
-
+
+## Evaluation Environment
The evaluation of the generated codes involves compiling and running in multiple programming languages. The versions of the programming language environments and packages we use are as follows:

| Dependency | Version |
@@ -128,19 +47,34 @@ After obtaining the image, you can build a container using the following command

```bash
docker run -it --gpus all --mount type=bind,source=<LOCAL PATH>,target=<PATH IN CONTAINER> [OPTIONS] <IMAGE NAME:TAG>
+```
+
+## Check result Command:
+We provide scripts to check the results for the provided code LLMs. Please use the following scripts to check the corresponding results and the environment.
+
+```bash
+bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-CodeLlama-34B/humaneval_result_python.jsonl humaneval_python
+bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-13B/humaneval_result_python.jsonl humaneval_python
```

-### Evaluation Metrics
-In addition to the unbiased pass@k metric provided in [Codex](https://arxiv.org/abs/2107.03374), we also integrate the relevant open-source Hugging Face metrics together with [CodeBLEU](https://arxiv.org/abs/2009.10297).
-The main metrics currently recommended for users are as follows:
-* ``codebleu``
-* ``pass@k``
-* ``bleu``
-* ``bleurt``
+## How to use CodeFuseEval
+1. Download the model and update the model information in ckpt_config.json, mainly the `path` parameter for the corresponding model and version.
+2. Run the following generation command to generate results.
+```
+bash codefuseEval/script/generation.sh MODELNAME MODELVERSION EVALDATASET OUTFILE
+
+eg:
+bash codefuseEval/script/generation.sh CodeFuse-13B v1 humaneval_python result/test.jsonl
+```
+3. Run the following evaluation command to evaluate the generated results for the corresponding model and version.
+```
+bash codefuseEval/script/evaluation.sh <RESULT_FILE> <METRIC> <PROBLEM_FILE>
+eg:
+bash codefuseEval/script/evaluation.sh codefuseEval/result/test.jsonl pass@k humaneval_python
+```

-For other related metrics, you can check the metric code or the evaluation code to meet your requirements.

-### Evaluation
+## Evaluation

We recommend evaluating in [the provided image](#evaluation-environment). To evaluate the generated samples, save generated codes in the following JSON list format:

@@ -152,6 +86,36 @@ We recommend evaluating in [the provided image](#evaluation-environment). To eva

and evaluate them using the following script under the root directory of the repository (<font color='red'>please execute with caution, the generated codes might have unexpected behaviours, though with very low probability. See the warnings in [execution.py](execution.py) and uncomment the execution lines at your own risk</font>):


89+
### Evaluation Data
90+
Data are stored in ``codefuseEval/data``, using JSON list format. We first integrated humaneval-X dataset.
91+
92+
* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
93+
* ``prompt``: the function declaration and docstring, used for code generation.
94+
* ``declaration``: only the function declaration, used for code translation.
95+
* ``canonical_solution``: human-crafted example solutions.
96+
* ``test``: hidden test samples, used for evaluation
97+
* ``example_test``: public test samples (appeared in prompt), used for evaluation.
98+
* ``prompt_text``: prompt text
99+
* ``prompt_explain``: prompt explanation
100+
* ``func_title``: code function title
101+
* ``prompt_text_chinese``: Chinese prompt
102+
103+
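For illustration only, a single record in such a jsonl file might look like the sketch below. The field names follow the list above, while every value is a hypothetical placeholder rather than actual dataset content.

```python
# Hypothetical record shape for a codefuseEval/data jsonl file.
# Field names follow the documented schema above; values are made-up placeholders.
record = {
    "task_id": "Python/0",
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "declaration": "def add(a, b):\n",
    "canonical_solution": "    return a + b\n",
    "test": "assert add(1, 2) == 3\n",            # hidden tests used for evaluation
    "example_test": "assert add(0, 0) == 0\n",    # public tests shown in the prompt
    "prompt_text": "Write a function that returns the sum of two numbers.",
    "prompt_explain": "Implement integer addition.",
    "func_title": "add",
    "prompt_text_chinese": "编写一个返回两数之和的函数。",
}
print(record["task_id"])
```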
+### Evaluation Metrics
+In addition to the unbiased pass@k metric provided in [Codex](https://arxiv.org/abs/2107.03374), we also integrate the relevant open-source Hugging Face metrics together with [CodeBLEU](https://arxiv.org/abs/2009.10297).
+The main metrics currently recommended for users are as follows:
+* ``codebleu``
+* ``pass@k``
+* ``bleu``
+* ``bleurt``
+
+For other related metrics, you can check the metric code or the evaluation code to meet your requirements.
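For reference, the unbiased pass@k estimator from the Codex paper can be computed as in the following sketch; this is the published formula, not necessarily the exact implementation used by this framework.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Codex paper): n samples per problem, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 10 samples generated for a problem, 3 of them pass the tests.
print(pass_at_k(n=10, c=3, k=1))  # 0.3
```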
+
+At the same time, we added two time-cost indicators, `total_time_cost` and `Average time cost`, which report the total and average generation time of the model on the dataset.
+
+They are output during each generation run, making it convenient for users to measure the generation performance of models in the same environment. These indicators are produced automatically and are printed on every generation run.
+
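Conceptually, these indicators correspond to timing the generation loop, roughly as in the sketch below; the `generate` function and the dataset here are placeholders, not the framework's actual code.

```python
import time

def generate(sample: str) -> str:
    """Placeholder for a model generation call."""
    time.sleep(0.01)
    return sample

dataset = ["problem_1", "problem_2", "problem_3"]  # placeholder dataset

start = time.time()
outputs = [generate(s) for s in dataset]
total_time_cost = time.time() - start
average_time_cost = total_time_cost / max(len(dataset), 1)
print(f"total_time_cost={total_time_cost:.2f}s, average_time_cost={average_time_cost:.3f}s")
```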
+### Evaluation Command:
```
bash codefuseEval/script/evaluation.sh <RESULT_FILE> <METRIC> <PROBLEM_FILE> <TEST_GROUDTRUTH>
eg:
@@ -166,16 +130,70 @@ When TEST_GROUDTRUTH is True, the self-test mode is turned on, PROBLEM_FILE will

When TEST_GROUDTRUTH is False, the evaluation mode is turned on: RESULT_FILE and PROBLEM_FILE are read, and the generated answers are substituted in for testing.

-# Check result Command:
-We provide the script to check the results for the provided code LLMs. Please use the following scripts to check the corresponding results and the environment.

-```bash
-bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-CodeLlama-34B/humaneval_result_python.jsonl humaneval_python
-bash codefuseEval/script/check_reference.sh codefuseEval/result/CodeFuse-13B/humaneval_result_python.jsonl humaneval_python
+## More Information
+
+### Evaluating your own model and dataset
+
+1. Register your evaluation dataset.
+* Download the evaluation dataset and store it in `codefuseEval/data` or another directory. The dataset must be in jsonl format.
+* Set up the dataset information `EVAL_DATASET`, `DATASET_SUPPORT` and `DATASET_LANGUAGE` in `codefuseEval/util.py` for the dataset path, the dataset task_mode and the generated code language.
+2. Register your evaluation model.
+* Download the evaluation model and store it in `codefuseEval/model` or another directory.
+* Write your evaluation model processor code in the `codefuseEval/processor` package.
+
+We designed an infrastructure called Processor. Its main purpose is to handle the differences between models. A processor needs to implement three abstract functions:
+* ``load_model_tokenizer``: Because models differ in loading parameters and tokenizer terminators, each model must be loaded and adapted with its own settings. This function helps users load and adapt different models.
+* ``process_before``: Prompts must be adapted to different prompt styles depending on the evaluation task type and the model selected by the user. The ``process_before`` function is extracted mainly to help users preprocess prompts.
+* ``process_after``: Model outputs vary widely, so the generated results must be spliced into suitable test cases before the evaluation framework can run them automatically. This function post-processes the generated results to fit the evaluation dataset, based on the task type and dataset conditions.
+
+You can extend the `BaseProcessor` in `codefuseEval/processor/base.py` and implement the above functions, for example as in the sketch below.
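A minimal sketch of such a processor follows. It assumes the three abstract methods described above; the real `BaseProcessor` signatures in `codefuseEval/processor/base.py` may differ, so treat this as an illustration rather than the framework's actual interface.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class MyModelProcessor:  # in practice, extend BaseProcessor from codefuseEval/processor/base.py
    def load_model_tokenizer(self, model_path: str):
        """Load the model and tokenizer with model-specific parameters."""
        tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
        return model, tokenizer

    def process_before(self, prompt: str, task_mode: str) -> str:
        """Adapt the raw prompt to the style this model and task expect."""
        if task_mode == "code_completion":
            return prompt
        return f"Human: {prompt}\nAssistant: "

    def process_after(self, generation: str, prompt: str) -> str:
        """Post-process the raw generation so it can be spliced into test cases."""
        return generation[len(prompt):] if generation.startswith(prompt) else generation
```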

+* Set up the model information in `ckpt_config.json`. For example:
+```
+{
+  "CodeFuse-13B": { // model name
+    "v1": { // model version
+      "path": "/mnt/model/CodeFuse13B-evol-instruction-4K/", // model path
+      "processor_class": "codefuseEval.process.codefuse13b.Codefuse13BProcessor", // model processor
+      "tokenizer": { // tokenizer params used to encode the input string
+        "truncation": true,
+        "padding": true,
+        "max_length": 600
+      },
+      "generation_config": { // generation config params
+        "greedy": { // if the value is a JSON object, it defines a decode mode; set the `decode_mode` param to load the params defined in that mode
+          "do_sample": false,
+          "num_beams": 1,
+          "max_new_tokens": 512
+        },
+        "beams": {
+          "do_sample": false,
+          "num_beams": 5,
+          "max_new_tokens": 600,
+          "num_return_sequences": 1
+        },
+        "dosample": {
+          "do_sample": true
+        },
+        "temperature": 0.2, // if the value is not a JSON object, it is a default param set directly in the generation config; a param of the same name inside the chosen decode mode overrides it
+        "max_new_tokens": 600,
+        "num_return_sequences": 1,
+        "top_p": 0.9,
+        "num_beams": 1,
+        "do_sample": true
+      },
+      "batch_size": 1, // batch size for generation
+      "sample_num": 1, // number of samples generated per data item
+      "decode_mode": "beams" // choose a decode mode defined in generation_config
+    }
+  }
+}
```
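Based on the comments in the example above, the top-level scalar entries in `generation_config` act as defaults, and the object selected by `decode_mode` overrides any params of the same name. The sketch below illustrates that merge; it is an assumption about the behaviour described here, not the framework's actual code.

```python
# Illustrative resolution of generation params from the config above (assumed behaviour).
generation_config = {
    "greedy": {"do_sample": False, "num_beams": 1, "max_new_tokens": 512},
    "beams": {"do_sample": False, "num_beams": 5, "max_new_tokens": 600, "num_return_sequences": 1},
    "dosample": {"do_sample": True},
    "temperature": 0.2, "max_new_tokens": 600, "num_return_sequences": 1,
    "top_p": 0.9, "num_beams": 1, "do_sample": True,
}
decode_mode = "beams"

defaults = {k: v for k, v in generation_config.items() if not isinstance(v, dict)}
effective = {**defaults, **generation_config[decode_mode]}  # decode-mode params override same-name defaults
print(effective)  # e.g. num_beams=5, do_sample=False, temperature=0.2, top_p=0.9, ...
```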

-# Check dataset Command:
+### Check dataset Command:
+To check whether the reference values provided by the evaluation dataset are correct, we provide the following command to check the dataset.
+
CodeCompletion
```bash
bash codefuseEval/script/check_dataset.sh humaneval_python
@@ -240,3 +258,5 @@ bash codefuseEval/script/check_dataset.sh codeInsertion_tensorflow
```


+
+