Replies: 1 comment
This release of the model does not have a very long context, so it supports full-duplex use for sessions of a few minutes. In the next version of the model we will add longer-context capability and a KV cache mechanism to support longer sessions.
The model demo is very impressive, great work! I'm curious, though: how long can a full-duplex call like this run continuously? If the session runs especially long, are there dedicated mechanisms for KV cache eviction, compression, or memorization?
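The KV cache eviction asked about above can take many forms; one common one is a sliding window that also pins a few initial "attention sink" entries. This is only an illustrative sketch, not the model's actual mechanism; the class name, parameters, and policy here are hypothetical:

```python
from collections import deque

class SlidingWindowKVCache:
    """Hypothetical sketch of sliding-window KV cache eviction:
    pin the first `n_sink` entries (attention sinks) and keep only
    the most recent `window` entries after them, so memory stays
    bounded no matter how long the conversation runs."""

    def __init__(self, n_sink: int = 4, window: int = 512):
        self.n_sink = n_sink
        self.sink = []                        # never evicted
        self.recent = deque(maxlen=window)    # oldest evicted automatically

    def append(self, kv):
        # kv stands in for one step's key/value tensors
        if len(self.sink) < self.n_sink:
            self.sink.append(kv)
        else:
            self.recent.append(kv)

    def entries(self):
        return self.sink + list(self.recent)

cache = SlidingWindowKVCache(n_sink=2, window=3)
for step in range(10):
    cache.append(f"kv{step}")
print(cache.entries())  # → ['kv0', 'kv1', 'kv7', 'kv8', 'kv9']
```

Total cache size is bounded by `n_sink + window`, which trades recall of the distant past for constant memory; compression or memorization schemes would instead summarize the evicted middle rather than drop it.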