Abstract: Autoregressive decoding is bottlenecked by its sequential nature. Speculative decoding has become a standard way to accelerate inference: a fast draft model predicts upcoming tokens for a slower target model, which then verifies them in parallel with a single forward pass. However, speculative decoding itself relies on a sequential dependence between speculation and verification. We introduce speculative speculative decoding (SSD) to parallelize these operations. While a verification is ongoing, the draft model predicts likely verification outcomes and preemptively prepares speculations for them. If the actual verification outcome falls in the predicted set, a speculation can be returned immediately, eliminating drafting overhead entirely. We identify three key challenges presented by speculative speculative decoding and propose principled methods to solve each. The result is Saguaro, an optimized SSD algorithm. Our implementation is up to 2x faster than optimized speculative decoding baselines and up to 5x faster than autoregressive decoding with open-source inference engines.
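The core SSD idea above can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: the "models" are stand-in functions, and `verify` simulates the target model accepting a prefix of the draft tokens. The key point is that while verification is in flight, drafts are prepared for every plausible verification outcome, so a hit incurs no drafting latency.

```python
# Hypothetical sketch of speculative speculative decoding (SSD); all names
# and the toy "models" are illustrative assumptions, not the paper's code.
from concurrent.futures import ThreadPoolExecutor


def verify(tokens):
    # Stand-in for the target model's parallel verification pass:
    # accept draft tokens until the first rejection (here: an odd token).
    accepted = []
    for t in tokens:
        if t % 2 == 0:
            accepted.append(t)
        else:
            break
    return tuple(accepted)


def draft(prefix, k=3):
    # Stand-in for the fast draft model: propose k continuation tokens.
    last = prefix[-1] if prefix else 0
    return [last + 2 * i for i in range(1, k + 1)]


def ssd_step(prefix, candidates):
    """One SSD step: while verification of `candidates` is in flight,
    pre-draft speculations for the likely verification outcomes."""
    with ThreadPoolExecutor() as pool:
        future = pool.submit(verify, candidates)
        # Predict likely outcomes: here, every accepted prefix of the draft.
        predicted = {}
        for i in range(len(candidates) + 1):
            outcome = tuple(candidates[:i])
            predicted[outcome] = draft(prefix + list(outcome))
        outcome = future.result()
    if outcome in predicted:
        # Hit: a speculation was prepared pre-emptively, return it at once.
        return prefix + list(outcome), predicted[outcome]
    # Miss: fall back to drafting sequentially, as in plain speculative decoding.
    return prefix + list(outcome), draft(prefix + list(outcome))


prefix, cands = [0], [2, 4, 5, 6]
new_prefix, next_draft = ssd_step(prefix, cands)
print(new_prefix)  # verification accepts 2 and 4, rejects at 5: [0, 2, 4]
```

In this toy setting the predicted set covers all outcomes, so the hit is guaranteed; in practice the draft model would rank outcomes and prepare speculations only for the most likely ones, trading pre-drafting compute against hit rate.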