text_simplification
-
### Argument instructions
- bsize: batch size
- out: the output folder; it will contain the log, the best model, and the result report
- tie_embedding: `all` ties the encoder/decoder/projection weights to the embedding; we found this can speed up training
- bert_mode: the mode for using BERT. `bert_token` indicates we use the subtoken vocabulary from BERT; `bertbase` indicates we use the BERT base version (due to memory limits, we have not tried BERT large yet)
- environment: the path configuration of the experiment.
Please change it in model/model_config.py to fit your system
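As a rough illustration of how the arguments above fit together, here is a minimal `argparse` sketch. This is a hypothetical example, not the repository's actual entry point: the flag names follow the list above, but the defaults and help strings are assumptions, and the real definitions live in the project's own scripts (e.g. model/model_config.py).

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the documented arguments;
    # the real project defines its own flags and defaults.
    parser = argparse.ArgumentParser(description="Text simplification training (sketch)")
    parser.add_argument("-bsize", type=int, default=32,
                        help="batch size")
    parser.add_argument("-out", type=str, default="out",
                        help="output folder for the log, best model, and result report")
    parser.add_argument("-tie_embedding", type=str, default="all",
                        help="'all' ties encoder/decoder/projection weights to the embedding")
    parser.add_argument("-bert_mode", type=str, default="bert_token:bertbase",
                        help="e.g. bert_token (BERT subtoken vocab) and bertbase (BERT base)")
    parser.add_argument("-environment", type=str, default="local",
                        help="path configuration name; see model/model_config.py")
    return parser

if __name__ == "__main__":
    # Example invocation with explicit values for two flags.
    args = build_parser().parse_args(["-bsize", "64", "-out", "exp1"])
    print(args.bsize, args.out, args.tie_embedding)
```

A concrete command line under this sketch would look like `python train.py -bsize 64 -out exp1 -tie_embedding all -bert_mode bert_token:bertbase` (script name assumed).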