Photo: U.S. Marine Corps / Lance Cpl. Fabian Ortiz
While OpenAI has stepped into Anthropic's shoes after agreeing to a deal with the Department of Defense, its CEO, Sam Altman, still offered up some thoughts about the debacle during an AMA on X. Even though Claude is a competing model, Altman said that Anthropic's supply-chain risk designation was "a very bad decision" that he hopes gets reversed. He went further, calling Anthropic's blacklisting "an extremely scary precedent," though he remains "still hopeful for a much better resolution."
The problem compounds in pipelines. Each TransformStream adds another layer of promise machinery between source and sink. The spec doesn't define synchronous fast paths, so even when data is available immediately, the promise machinery still runs.
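A minimal sketch of that compounding, using no-op TransformStreams stacked with `pipeThrough` (requires Node 18+ for the global Web Streams classes; the helper names `identityTransform` and `drain` are mine, not from any library). Even though the source enqueues every chunk synchronously in `start()`, each `read()` still resolves through a microtask, because the spec defines no synchronous fast path:

```javascript
// Each TransformStream in a pipeline adds another promise hop between
// source and sink, even when the data is available immediately.
// Requires Node 18+ (global ReadableStream/TransformStream).

function identityTransform() {
  // A transform that does no work -- it only forwards each chunk.
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk);
    },
  });
}

async function drain(stream) {
  // Pull every chunk out of the stream; each read() awaits a promise.
  const chunks = [];
  const reader = stream.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  return chunks;
}

async function main() {
  // Source whose data is all available synchronously at construction.
  let source = new ReadableStream({
    start(controller) {
      for (const n of [1, 2, 3]) controller.enqueue(n);
      controller.close();
    },
  });

  // Stack several no-op transforms; each layer adds promise machinery
  // even though no transform does any actual work.
  for (let i = 0; i < 5; i++) {
    source = source.pipeThrough(identityTransform());
  }

  const out = await drain(source);
  console.log(out.join(","));
  return out;
}

main();
```

The data still arrives intact, but every chunk crosses five extra layers of promise resolution on its way from source to sink.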
The really annoying thing about Opus 4.6/Codex 5.3 is that it’s impossible to publicly say “Opus 4.5 (and the models that came after it) is an order of magnitude better than coding LLMs released just months before it” without sounding like an AI hype booster clickbaiting, but to my personal frustration it’s the counterintuitive truth. I have been trying to break this damn model by giving it complex tasks that would take me months to do myself despite my coding pedigree, but Opus and Codex keep completing them correctly. On Hacker News I was accused of exactly that clickbaiting when I made a similar statement, with replies amounting to “I haven’t had success with Opus 4.5, so you must be lying.” The remedy for this skepticism is to provide more evidence along with greater checks and balances, but what can you do if people refuse to believe your evidence?