Anthropic raises eyebrows with Haiku price hike, citing increased “intelligence”

As for Opus, Claude 3.5 Opus is nowhere to be seen, as AI researcher Simon Willison noted to Ars Technica in an interview. “All references to 3.5 Opus have vanished without a trace, and the price of 3.5 Haiku was increased the day it was released,” he said. “Claude 3.5 Haiku is significantly more expensive than both Gemini 1.5 Flash and GPT-4o mini—the excellent low-cost models from Anthropic’s competitors.”

Cheaper over time?

So far in the AI industry, newer versions of AI language models have typically kept pricing similar to or cheaper than their predecessors. The company had initially indicated that Claude 3.5 Haiku would cost the same as the previous version before announcing the higher rates.

“I was expecting this to be a complete replacement for their existing Claude 3 Haiku model, in the same way that Claude 3.5 Sonnet eclipsed the existing Claude 3 Sonnet while maintaining the same pricing,” Willison wrote on his blog. “Given that Anthropic claim that their new Haiku out-performs their older Claude 3 Opus, this price isn’t disappointing, but it’s a small surprise nonetheless.”

Claude 3.5 Haiku arrives with some trade-offs. While the model produces longer text outputs and has more recent training data, it cannot analyze images like its predecessor. Alex Albert, who leads developer relations at Anthropic, wrote on X that the earlier version, Claude 3 Haiku, will remain available for users who need image-processing capabilities and lower costs.

The new model is not yet available in the Claude.ai web interface or app. Instead, it runs on Anthropic’s API and third-party platforms, including AWS Bedrock. Anthropic markets the model for tasks like coding suggestions, data extraction and labeling, and content moderation, though, like any LLM, it can readily make things up with confidence.
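For developers who want to evaluate the model themselves, a call through Anthropic’s API is the most direct route. Below is a minimal sketch using Anthropic’s Python SDK; the model identifier string is an assumption based on Anthropic’s naming convention and should be checked against the current model list in Anthropic’s documentation.

```python
# Minimal sketch of calling Claude 3.5 Haiku via Anthropic's Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-haiku-20241022",  # assumed model ID; confirm against Anthropic's model list
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Label the sentiment of this review: 'The price went up.'"}
    ],
)

print(response.content[0].text)
```

Because the price increase applies per token, running the same prompt against a competitor such as Gemini 1.5 Flash or GPT-4o mini is a straightforward way to compare cost for a given workload.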

“Is it good enough to justify the extra spend? It’s going to be difficult to figure that out,” Willison told Ars. “Teams with robust automated evals against their use-cases will be in a good place to answer that question, but those remain rare.”
