Mirror of https://github.com/labring/FastGPT.git (synced 2026-05-07 01:02:55 +08:00)
c0072fabbc
* fix(embedding): decode base64 embedding responses before vector processing

  When a model's extra body config includes `encoding_format: "base64"`, the embedding API returns a base64-encoded IEEE 754 little-endian float32 array instead of a `number[]`. The previous code passed this raw string directly to `formatVectors`, which called `.reduce()` on it and threw:

      TypeError: a.reduce is not a function

  Add `decodeEmbedding()` that detects base64 strings and decodes them to `number[]` via `Buffer → Float32Array → Array.from()`, then use it in `getVectorsByText` before calling `formatVectors`.

  Fixes #6769

* perf: test

---------

Co-authored-by: octo-patch <octo-patch@github.com>
Co-authored-by: archer <545436317@qq.com>
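The `Buffer → Float32Array → Array.from()` pipeline described in the commit can be sketched as below. This is a minimal illustration of the technique, not FastGPT's actual source; the function name matches the commit message, but the exact signature and placement in `getVectorsByText` are assumptions.

```typescript
// Sketch of a decodeEmbedding() helper for the case the commit describes:
// when the request sets encoding_format: "base64", the API returns the
// embedding as a base64 string whose decoded bytes are little-endian
// IEEE 754 float32 values, rather than a plain number[].
export function decodeEmbedding(embedding: number[] | string): number[] {
  // Already a numeric array: nothing to decode.
  if (Array.isArray(embedding)) return embedding;

  // base64 string → raw bytes.
  const buf = Buffer.from(embedding, 'base64');

  // View the bytes as float32 values. byteOffset matters because Node may
  // allocate small Buffers inside a shared, pooled ArrayBuffer.
  const floats = new Float32Array(
    buf.buffer,
    buf.byteOffset,
    buf.byteLength / Float32Array.BYTES_PER_ELEMENT
  );

  // Convert the typed array to a plain number[] so downstream code
  // (e.g. formatVectors calling .reduce()) works unchanged.
  return Array.from(floats);
}
```

A quick round-trip check: encoding `Float32Array([0.25, -1.5, 3])` to base64 and passing the string through `decodeEmbedding` yields `[0.25, -1.5, 3]` again (these values are exactly representable in float32, so no rounding is lost).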