
tasks: voice-activity-detection
domain: audio
model-type: VAD model
frameworks: pytorch
backbone: fsmn
metrics: f1_score
license: Apache License 2.0
language: cn
tags: FunASR, FSMN, Alibaba, Online
datasets:
  train: 20,000-hour industrial Mandarin task
  test: 20,000-hour industrial Mandarin task
widgets:
  task: voice-activity-detection
  model_revision: v2.0.4
  inputs: audio input (title: Audio)
  examples: Example 1 (input: git://example/vad_example.wav)
  inferencespec: cpu: 1, memory: 4096

Introduction to the FSMN-Monophone VAD Model

Highlights

  • The 16 kHz general-purpose Chinese VAD model detects the start and end times of valid speech within long audio.

About the FunASR Open-Source Project

FunASR aims to build a bridge between academic research and industrial applications of speech recognition. By releasing the training and fine-tuning recipes of industrial-grade speech recognition models, it lets researchers and developers study and productionize ASR models more conveniently, and helps the speech recognition ecosystem grow. Make speech recognition fun!

GitHub repository | Latest news | Installation | Service deployment | Model zoo | Contact us

Model Principles

FSMN-Monophone VAD, proposed by the speech team at DAMO Academy, is an efficient voice activity detection model. It detects the start and end times of valid speech in the input audio so that only the detected speech segments are passed to the recognition engine, reducing recognition errors caused by non-speech audio.

(Figure: VAD model structure)

The FSMN-Monophone VAD architecture is shown in the figure above. On the structural side, the FSMN topology models contextual information, trains and infers quickly, and keeps latency controllable; the FSMN network structure and the number of lookahead (right-context) frames were adapted to the model-size and low-latency requirements. On the modeling-unit side, speech is too rich to be represented well by a single class, so the single "speech" class was upgraded to monophone units. Finer-grained modeling units avoid parameter averaging, strengthen the model's abstraction capacity, and improve discrimination.
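The memory-block idea behind FSMN can be illustrated with a toy sketch (the filter orders and the uniform weights below are purely illustrative, not the model's actual learned parameters): each output frame adds weighted copies of neighboring hidden frames, which lets the model see context without recurrence, and capping `lookahead` is what keeps latency controllable.

```python
import numpy as np

def fsmn_memory_block(h, lookback=4, lookahead=1):
    """Toy FSMN memory block: each frame mixes its own hidden vector with
    weighted averages of past (lookback) and future (lookahead) frames.
    Weights are fixed here for illustration; a real model learns them."""
    T, D = h.shape
    past_w = np.full(lookback, 1.0 / lookback)    # illustrative past-filter weights
    future_w = np.full(lookahead, 1.0 / lookahead)  # illustrative future-filter weights
    out = h.copy()
    for t in range(T):
        for i, w in enumerate(past_w, start=1):
            if t - i >= 0:
                out[t] += w * h[t - i]
        for j, w in enumerate(future_w, start=1):
            if t + j < T:
                out[t] += w * h[t + j]
    return out

h = np.random.randn(10, 8).astype(np.float32)  # 10 frames of 8-dim features
y = fsmn_memory_block(h, lookback=4, lookahead=1)
print(y.shape)  # (10, 8)
```

Increasing `lookahead` improves context at the cost of latency, which is the trade-off the paragraph above describes.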

Inference with ModelScope

  • Supported audio input formats:
    • wav file path, e.g. data/test/audios/vad_example.wav
    • wav file URL, e.g. https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav
    • wav data as bytes, e.g. read directly from a file or captured from a microphone.
    • Decoded audio, e.g. audio, rate = soundfile.read("vad_example_zh.wav"), of type numpy.ndarray or torch.Tensor.
    • A wav.scp file, which must follow this layout:
cat wav.scp
vad_example1  data/test/audios/vad_example1.wav
vad_example2  data/test/audios/vad_example2.wav
...
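The Kaldi-style wav.scp layout above (one `<utt_id> <wav_path>` pair per line) can be loaded with a small helper; the function name `parse_wav_scp` is hypothetical, not part of the toolkit:

```python
def parse_wav_scp(path):
    """Parse a Kaldi-style wav.scp into a {utt_id: wav_path} dict.
    Each non-empty line holds an id and a path separated by whitespace."""
    entries = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            utt_id, wav_path = line.split(maxsplit=1)
            entries[utt_id] = wav_path
    return entries
```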
  • For a wav file URL as input, the API can be called as in the following example:
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch',
    model_revision="v2.0.4",
)

segments_result = inference_pipeline(input='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav')
print(segments_result)
  • If the input audio is in pcm format, pass the sampling-rate parameter fs when calling the API, e.g.:
segments_result = inference_pipeline(input='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.pcm', fs=16000)
  • For a wav.scp file as input (note: the file name must end in .scp), add the output_dir parameter to write the results to files; see the example below:
inference_pipeline(input="wav.scp", output_dir='./output_dir')

The output directory is structured as follows:

tree output_dir/
output_dir/
└── 1best_recog
    └── text

1 directory, 1 file

text: result file containing the VAD-detected speech start/end times (in ms)
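The detected segments are start/end pairs in milliseconds; converting them to sample indices (so segments can be sliced out of the waveform) is simple arithmetic. A sketch, assuming a `[[start_ms, end_ms], ...]` segment list like the one in the result file; the segment values below are made up for illustration:

```python
def segments_ms_to_samples(segments_ms, sample_rate=16000):
    """Convert [[start_ms, end_ms], ...] VAD segments into sample-index
    ranges for the given sample rate (samples = ms * rate / 1000)."""
    return [
        [int(start * sample_rate / 1000), int(end * sample_rate / 1000)]
        for start, end in segments_ms
    ]

# Hypothetical VAD output for a 16 kHz file: speech at 70-2340 ms and 2620-6200 ms.
segments = [[70, 2340], [2620, 6200]]
print(segments_ms_to_samples(segments))  # [[1120, 37440], [41920, 99200]]
```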

  • If the input is decoded audio, the API can be called as follows:
import soundfile

waveform, sample_rate = soundfile.read("vad_example_zh.wav")
segments_result = inference_pipeline(input=waveform)
print(segments_result)
  • Commonly tuned VAD parameters (see the vad.yaml file):
    • max_end_silence_time: how long a trailing silence must last before the endpoint is declared. Range: 500 ms to 6000 ms; default 800 ms (too low a value tends to truncate speech early).
    • speech_noise_thres: a frame is judged to be speech when its speech score minus its noise score exceeds this value. Range: (-1, 1).
      • The closer to -1, the more likely noise is misjudged as speech (higher FA).
      • The closer to +1, the more likely speech is misjudged as noise (higher Pmiss).
      • In practice the value is chosen to balance the two, based on the current model's results on a long-audio test set.
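The effect of speech_noise_thres can be illustrated with a toy frame classifier (the scores below are made up; the real model produces per-frame speech and noise scores internally): lowering the threshold accepts more frames as speech (higher FA), raising it rejects more (higher Pmiss).

```python
def classify_frames(speech_scores, noise_scores, speech_noise_thres=0.0):
    """Label each frame as speech (True) when the speech score exceeds
    the noise score by more than speech_noise_thres, mirroring the
    decision rule described above."""
    return [
        (s - n) > speech_noise_thres
        for s, n in zip(speech_scores, noise_scores)
    ]

speech = [0.9, 0.6, 0.4, 0.2]  # illustrative per-frame speech scores
noise = [0.1, 0.4, 0.6, 0.8]   # illustrative per-frame noise scores
# A permissive threshold keeps more frames as speech than a strict one.
print(classify_frames(speech, noise, -0.5))  # [True, True, True, False]
print(classify_frames(speech, noise, 0.5))   # [True, False, False, False]
```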

Inference with FunASR

A quick-start tutorial follows (test audio: Chinese, English).

Command line

Run in a terminal:

funasr ++model=paraformer-zh ++vad_model="fsmn-vad" ++punc_model="ct-punc" ++input=vad_example.wav

Both single audio files and file lists are supported; a list uses the Kaldi-style wav.scp format (wav_id wav_path).

Python examples

Speech recognition (non-streaming)

from funasr import AutoModel
# paraformer-zh is a multi-functional asr model
# use vad, punc, spk or not as you need
model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
                  vad_model="fsmn-vad", vad_model_revision="v2.0.4",
                  punc_model="ct-punc-c", punc_model_revision="v2.0.4",
                  # spk_model="cam++", spk_model_revision="v2.0.2",
                  )
res = model.generate(input=f"{model.model_path}/example/asr_example.wav", 
            batch_size_s=300, 
            hotword='魔搭')
print(res)

Note: model_hub selects the model repository: ms downloads from ModelScope, hf from Hugging Face.

Speech recognition (streaming)

from funasr import AutoModel

chunk_size = [0, 10, 5]  # [0, 10, 5] = 600 ms chunks, [0, 8, 4] = 480 ms
encoder_chunk_look_back = 4  # number of past chunks the encoder self-attention can see
decoder_chunk_look_back = 1  # number of encoder chunks the decoder cross-attention can see

model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")

import soundfile
import os

wav_file = os.path.join(model.model_path, "example/asr_example.wav")
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = chunk_size[1] * 960 # 600ms

cache = {}
total_chunk_num = (len(speech) - 1) // chunk_stride + 1  # ceil-style chunk count
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
    print(res)

Note: chunk_size configures the streaming latency. [0, 10, 5] means text is emitted at a granularity of 10 * 60 = 600 ms, with 5 * 60 = 300 ms of lookahead. Each inference call consumes 600 ms of audio (16000 * 0.6 = 9600 samples) and outputs the corresponding text; for the final audio chunk, set is_final=True to force the last word out.
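The stride arithmetic in the note can be checked directly: at 16 kHz one chunk unit of 60 ms is 960 samples, so chunk_size[1] = 10 units gives 600 ms = 9600 samples per inference step (MS_PER_UNIT is a name introduced here for the sketch):

```python
SAMPLE_RATE = 16000
MS_PER_UNIT = 60  # one chunk unit of the [0, 10, 5] configuration, in ms

chunk_size = [0, 10, 5]
chunk_ms = chunk_size[1] * MS_PER_UNIT      # audio consumed per inference step, in ms
lookahead_ms = chunk_size[2] * MS_PER_UNIT  # future context, in ms
chunk_stride = chunk_size[1] * SAMPLE_RATE * MS_PER_UNIT // 1000  # samples per chunk

print(chunk_ms, lookahead_ms, chunk_stride)  # 600 300 9600
```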

Voice activity detection (non-streaming)

from funasr import AutoModel

model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
res = model.generate(input=wav_file)
print(res)

Voice activity detection (streaming)

from funasr import AutoModel

chunk_size = 200 # ms
model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

import soundfile

wav_file = f"{model.model_path}/example/vad_example.wav"
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = int(chunk_size * sample_rate / 1000)

cache = {}
total_chunk_num = (len(speech) - 1) // chunk_stride + 1  # ceil-style chunk count
for i in range(total_chunk_num):
    speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
    if len(res[0]["value"]):
        print(res)

Punctuation restoration

from funasr import AutoModel

model = AutoModel(model="ct-punc", model_revision="v2.0.4")

res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
print(res)

Timestamp prediction

from funasr import AutoModel

model = AutoModel(model="fa-zh", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)

More detailed usage (examples)

Fine-tuning

Detailed usage (examples)

Usage and scope

Supported platforms

  • Runs on Linux x86_64, macOS, and Windows.

Usage

  • Direct inference: compute the start/end times (in ms) of valid speech segments directly from long audio.
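A common follow-up to direct inference is keeping only the detected speech regions of the waveform, e.g. before feeding it to an ASR engine. A minimal sketch, assuming VAD segments in the `[[start_ms, end_ms], ...]` form described above (the function name is hypothetical):

```python
import numpy as np

def keep_speech_only(waveform, segments_ms, sample_rate=16000):
    """Concatenate only the detected speech regions of a waveform,
    given VAD segments as [[start_ms, end_ms], ...] in milliseconds."""
    pieces = []
    for start_ms, end_ms in segments_ms:
        start = int(start_ms * sample_rate / 1000)
        end = int(end_ms * sample_rate / 1000)
        pieces.append(waveform[start:end])
    return np.concatenate(pieces) if pieces else waveform[:0]

wav = np.zeros(16000 * 2, dtype=np.float32)  # 2 s of silence as a stand-in signal
# Two made-up 500 ms speech segments -> 1 s (16000 samples) of audio kept.
print(len(keep_speech_only(wav, [[100, 600], [1000, 1500]])))  # 16000
```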

Related papers and citation

@inproceedings{zhang2018deep,
  title={Deep-FSMN for large vocabulary continuous speech recognition},
  author={Zhang, Shiliang and Lei, Ming and Yan, Zhijie and Dai, Lirong},
  booktitle={2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={5869--5873},
  year={2018},
  organization={IEEE}
}