Amazon unveils Trainium3 chip as it accelerates push into AI hardware


by Jai Hamid
for CryptoPolitan

Amazon rolled out its newest AI training chip, Trainium3, this week, taking direct aim at the hardware grip held by Nvidia and Google.

The accelerator is already running inside a small group of AWS data centers and opens to customers on Tuesday, according to an interview with Dave Brown, vice president at Amazon Web Services. Brown said the company is not easing into this.

“As we get into early next year, we’ll start to scale out very, very quickly,” he said. The goal is simple. Sell more compute directly from Amazon racks instead of watching developers send that spend elsewhere.

AWS still leads global cloud by rented compute and storage. That lead has not carried over cleanly into large-scale AI training. Some builders lean on Microsoft because of its link with OpenAI.

Others go to Google and its in‑house chips. Amazon is now using Trainium3 to pull price‑sensitive teams back under its roof. The basic pitch is lower cost per unit of work while keeping everything inside AWS.

Amazon pushes Trainium3 at cloud scale

Trainium3 lands about one year after Amazon deployed the previous version, a pace at the fast end by chip-industry standards. When the chip first powered on in August, one AWS engineer joked, “The main thing we’re gonna be hoping for here is just that we don’t see any kind of smoke or fire.” The fast upgrade rhythm also mirrors Nvidia’s public plan to ship a new chip every year.

Amazon says Trainium chips run the heavy compute behind AI models at lower cost and with better power efficiency than Nvidia’s top GPUs. Brown said, “We’ve been very pleased with our ability to get the right price performance with Trainium.” The company is leaning hard on that price angle as model sizes rise and training bills keep climbing.

There is still a limit. Amazon’s chips do not carry the deep software libraries that let teams move fast on Nvidia hardware. Bedrock Robotics, which uses AI to drive construction equipment without human control, runs its main systems on AWS servers. When it trains models to guide an excavator, it still uses Nvidia chips. Kevin Peterson, chief technology officer at Bedrock Robotics, said, “We need it to be performant and easy to use. That’s Nvidia.”

Most Trainium capacity right now flows to Anthropic. The chips run inside data centers in Indiana, Mississippi, and Pennsylvania. Earlier this year, AWS said it linked more than 500,000 Trainium chips to train Anthropic’s latest models. Amazon plans to raise that to 1 million chips by the end of the year.

Amazon is tying Trainium’s future to Anthropic’s growth and to its own AI services. Outside of Anthropic, the company has named very few large customers so far. That leaves analysts with limited data to judge how well Trainium performs in wider use.

Anthropic also spreads its own compute risk. It still uses Google’s Tensor Processing Units and signed a deal this year with Google that provides access to tens of billions of dollars in computing power.

Amazon revealed Trainium3 during re:Invent, its annual user conference. The event has shifted into a nonstop display of AI tools and infrastructure aimed at developers who build new models and companies willing to pay for access at scale.

Amazon rolls out Nova updates and opens Nova Forge

On Tuesday, Amazon also updated its main AI model family, known as Nova. The new Nova 2 line includes a version called Omni.

Omni accepts text, images, speech, or video as input. It can respond with both text and images. Amazon is selling a mix of input types and model cost as a package designed for daily use at scale.
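
For developers, Amazon already serves its Nova models through the Bedrock Converse API, so a mixed text-and-image request to an Omni-style model would plausibly look like the hedged Python sketch below. The call shape follows Bedrock's existing Converse pattern; the Nova 2 Omni model identifier is an assumption and should be checked against the Bedrock model catalog before use.

```python
import boto3

# Hedged sketch: sending mixed text + image input to a Nova-family model via
# Amazon Bedrock's Converse API. The model ID below is a placeholder guess for
# Nova 2 Omni -- confirm the real identifier in the Bedrock model catalog.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("warehouse-photo.jpg", "rb") as f:
    image_bytes = f.read()

response = client.converse(
    modelId="amazon.nova-2-omni-v1:0",  # hypothetical identifier
    messages=[{
        "role": "user",
        "content": [
            {"text": "Summarize what is happening in this photo."},
            {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
        ],
    }],
)

# The Converse API returns a list of content blocks; print the text portion.
print(response["output"]["message"]["content"][0]["text"])
```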

Amazon continues to price its models around performance per dollar. Past Nova models did not place near the top of standard benchmark rankings, which score answers to fixed sets of questions. The company is leaning on live use instead of benchmark charts.

Rohit Prasad, who leads much of Amazon’s model work and its Artificial General Intelligence team, said, “The real benchmark is the real world,” and added that he expects the new models to compete in live settings.

Amazon is also opening deeper model control to advanced users through a new product called Nova Forge, which lets teams pull versions of Nova models before training is complete and shape them with their own data.

Reddit already uses Nova Forge to build a model that checks whether a post breaks safety rules. Chris Slowe, Reddit’s chief technology officer, said many AI users reach for the biggest possible model for every task instead of training one with narrow focus. “The fact that we can make it an expert in our specific area is where the value comes from,” he said.

With Trainium3 now active in data centers and Nova models updated at the same time, Amazon is pushing on two fronts at once. The hardware fight plays out against Nvidia. The model push runs against Microsoft‑backed OpenAI and Google. The next phase now moves into hands‑on customer use at full cloud scale.

