
g2pW: Mandarin Grapheme-to-Phoneme Converter


Authors: Yi-Chang Chen, Yu-Chuan Chang, Yen-Cheng Chang and Yi-Ren Yeh

This is the official repository of our paper g2pW: A Conditional Weighted Softmax BERT for Polyphone Disambiguation in Mandarin (INTERSPEECH 2022).

Getting Started

Dependency / Install

(This work was tested with PyTorch 1.7.0, CUDA 10.1, Python 3.6, and Ubuntu 16.04.)

  • Install PyTorch

  • $ pip install g2pw

Quick Demo


>>> from g2pw import G2PWConverter
>>> conv = G2PWConverter()
>>> sentence = '上校請技術人員校正FN儀器'
>>> conv(sentence)
[['ㄕㄤ4', 'ㄒㄧㄠ4', 'ㄑㄧㄥ3', 'ㄐㄧ4', 'ㄕㄨ4', 'ㄖㄣ2', 'ㄩㄢ2', 'ㄐㄧㄠ4', 'ㄓㄥ4', None, None, 'ㄧ2', 'ㄑㄧ4']]
>>> sentences = ['銀行', '行動']
>>> conv(sentences)
[['ㄧㄣ2', 'ㄏㄤ2'], ['ㄒㄧㄥ2', 'ㄉㄨㄥ4']]
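The converter returns one phone per input character, with `None` for characters it passes through unchanged (here, the Latin letters F and N). A minimal post-processing sketch that aligns the demo output above with the input characters (this only reuses the printed result, so no model download is needed to run it):

```python
# Demo sentence and the converter output shown above.
# `None` marks characters g2pW leaves untouched (the Latin letters F and N).
sentence = '上校請技術人員校正FN儀器'
phones = ['ㄕㄤ4', 'ㄒㄧㄠ4', 'ㄑㄧㄥ3', 'ㄐㄧ4', 'ㄕㄨ4', 'ㄖㄣ2', 'ㄩㄢ2',
          'ㄐㄧㄠ4', 'ㄓㄥ4', None, None, 'ㄧ2', 'ㄑㄧ4']

# The output list is aligned one-to-one with the input characters,
# so zipping recovers a (character, phone) pair per position; for
# `None` entries we keep the original character.
aligned = [(ch, ph if ph is not None else ch) for ch, ph in zip(sentence, phones)]
print(aligned)
```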

Load Offline Model

conv = G2PWConverter(model_dir='./G2PWModel-v2-onnx/', model_source='./path-to/bert-base-chinese/')

Support Simplified Chinese and Pinyin

>>> from g2pw import G2PWConverter
>>> conv = G2PWConverter(style='pinyin', enable_non_tradional_chinese=True)
>>> conv('然而他红了20年以后他竟退出了大家的视线。')
[['ran2', 'er2', None, 'ta1', 'hong2', 'le5', None, None, 'nian2', 'yi3', 'hou4', None, 'ta1', 'jing4', 'tui4', 'chu1', 'le5', 'da4', 'jia1', 'de5', 'shi4', 'xian4', None]]

Scripts

$ git clone https://github.com/GitYCC/g2pW.git

Train Model

For example, we train models on the CPP dataset as follows:

$ bash cpp_dataset/download.sh
$ python scripts/train_g2p_bert.py --config configs/config_cpp.py

Testing

$ python scripts/test_g2p_bert.py \
    --config saved_models/CPP_BERT_M_DescWS-Sec-cLin-B_POSw01/config.py \
    --checkpoint saved_models/CPP_BERT_M_DescWS-Sec-cLin-B_POSw01/best_accuracy.pth \
    --sent_path cpp_dataset/test.sent \
    --output_path output_pred.txt
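Once `output_pred.txt` is written, a quick accuracy check against the gold labels might look like the following (a minimal sketch assuming both files hold one label per line; the actual file format of `test.lb` and the prediction output is an assumption, not documented here):

```python
# Hedged sketch: score predictions against gold labels line by line,
# assuming one label per line in both files (format assumption).
def label_accuracy(pred_path, gold_path):
    with open(pred_path, encoding='utf-8') as f:
        preds = [line.strip() for line in f if line.strip()]
    with open(gold_path, encoding='utf-8') as f:
        golds = [line.strip() for line in f if line.strip()]
    if len(preds) != len(golds):
        raise ValueError('prediction/label count mismatch')
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)
```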

Prediction

$ python scripts/predict_g2p_bert.py \
    --config saved_models/CPP_BERT_M_DescWS-Sec-cLin-B_POSw01/config.py \
    --checkpoint saved_models/CPP_BERT_M_DescWS-Sec-cLin-B_POSw01/best_accuracy.pth \
    --sent_path cpp_dataset/test.sent \
    --lb_path cpp_dataset/test.lb

Checkpoints

Citation

To cite the code, data, or paper, please use the following BibTeX entry:

@inproceedings{chen22d_interspeech,
  title     = {g2pW: A Conditional Weighted Softmax BERT for Polyphone Disambiguation in Mandarin},
  author    = {Yi-Chang Chen and Yu-Chuan Chang and Yen-Cheng Chang and Yi-Ren Yeh},
  year      = {2022},
  booktitle = {Interspeech 2022},
  pages     = {1926--1930},
  doi       = {10.21437/Interspeech.2022-216},
  issn      = {2958-1796},
}
