Code Velocity
AI for Business

Generative AI Inference: Accelerating on SageMaker with G7e Instances

4 min read · AWS · Original source
Amazon SageMaker AI G7e instances accelerate generative AI inference with NVIDIA RTX PRO 6000 Blackwell GPUs.

G7e Instances: A New Era for AI Inference on SageMaker

The generative AI landscape is evolving at an unprecedented pace, driving sustained demand for infrastructure that is more powerful, more flexible, and more cost-efficient. Today, Code Velocity is pleased to report a significant development from AWS: the general availability of G7e instances on Amazon SageMaker AI. Powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, these new instances are set to reset the benchmarks for generative AI inference, offering developers and enterprises unprecedented performance and memory capacity.

Amazon SageMaker AI is a fully managed service that gives developers and data scientists the tools to build, train, and deploy machine learning models at scale. The introduction of G7e instances marks a pivotal moment for generative AI workloads on the platform. These instances use state-of-the-art NVIDIA RTX PRO 6000 Blackwell GPUs, each with 96 GB of GDDR7 memory. This substantial increase in memory allows much larger foundation models (FMs) to be deployed directly on SageMaker AI, addressing a critical need of advanced AI applications.

Organizations can now deploy models such as GPT-OSS-120B, Nemotron-3-Super-120B-A12B (NVFP4 variant), and Qwen3.5-35B-A3B with remarkable efficiency. The single-GPU G7e.2xlarge instance can host 35B-parameter models, while the eight-GPU G7e.48xlarge scales up to 300B-parameter models. This flexibility translates into tangible benefits: reduced operational complexity, lower latency, and significant cost savings for inference workloads.
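As a rough illustration of why these sizing figures work out, a model's weight footprint can be estimated from its parameter count and precision. The helper below is a back-of-the-envelope sketch, not a sizing tool; real deployments also need headroom for the KV cache and activations:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GPU memory needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 35B-parameter model in BF16 (2 bytes/param) needs ~70 GB of weights,
# which fits on a single 96 GB RTX PRO 6000 Blackwell GPU (g7e.2xlarge).
print(weight_memory_gb(35, 2.0))   # 70.0

# A 300B-parameter model quantized to FP4 (0.5 bytes/param) needs ~150 GB,
# comfortably within the 768 GB aggregate of an 8-GPU g7e.48xlarge.
print(weight_memory_gb(300, 0.5))  # 150.0
```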

Defining the G7e Generation's Performance Leap

G7e instances represent a major leap over the preceding G6e and G5 generations, delivering up to 2.3x the inference performance of G6e. The technical specifications illustrate this generational progress. Each G7e GPU provides 1,597 GB/s of memory bandwidth and 96 GB of memory, effectively doubling the per-GPU memory of G6e and quadrupling that of G5. In addition, networking capacity has improved dramatically, scaling up to 1,600 Gbps with EFA on the largest G7e size. This 4x jump over G6e and 16x over G5 unlocks low-latency multi-node inference and large-scale optimization scenarios previously considered impractical.

Here is a comparison showing the generational progress at the 8-GPU scale:

| Attribute | G5 (g5.48xlarge) | G6e (g6e.48xlarge) | G7e (g7e.48xlarge) |
| --- | --- | --- | --- |
| GPUs | 8x NVIDIA A10G | 8x NVIDIA L40S | 8x NVIDIA RTX PRO 6000 Blackwell |
| GPU memory per GPU | 24 GB GDDR6 | 48 GB GDDR6 | 96 GB GDDR7 |
| Total GPU memory | 192 GB | 384 GB | 768 GB |
| GPU memory bandwidth | 600 GB/s per GPU | 864 GB/s per GPU | 1,597 GB/s per GPU |
| vCPUs | 192 | 192 | 192 |
| System memory | 768 GiB | 1,536 GiB | 2,048 GiB |
| Network bandwidth | 100 Gbps | 400 Gbps | 1,600 Gbps (EFA) |
| Local NVMe storage | 7.6 TB | 7.6 TB | 15.2 TB |
| Inference vs. G6e | Baseline | ~1x | Up to 2.3x |

With a massive 768 GB of total GPU memory in a single G7e instance, models that previously required complex multi-node setups on older instances can now be deployed with remarkable ease. This dramatically reduces inter-node latency and operational overhead. Combined with FP4 precision support via fifth-generation Tensor Cores and NVIDIA GPUDirect RDMA over EFAv4, G7e instances are unmistakably built for the demands of LLMs, multimodal AI, and advanced agentic inference workflows on AWS.
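In practice, a model that spans all eight GPUs of one instance is served with tensor parallelism inside a single node rather than a multi-node setup. With vLLM's OpenAI-compatible server, for example, that is a single flag; the invocation below is illustrative rather than a tuned production configuration:

```shell
# Shard one large model across the 8 GPUs of a single g7e.48xlarge
# with tensor parallelism — no cross-node communication required.
vllm serve openai/gpt-oss-120b --tensor-parallel-size 8
```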

Diverse Generative AI Use Cases Elevated by G7e

The potent combination of memory density, bandwidth, and advanced networking makes G7e instances ideal for a broad range of modern generative AI workloads. From powering conversational AI to driving complex physical simulations, G7e delivers tangible benefits:

  • Chatbots and Conversational AI: The G7e instances' low Time To First Token (TTFT) and high throughput ensure responsive, seamless interactive experiences, even under heavy concurrent user load. This is essential for sustaining user engagement and satisfaction in real-time AI interactions.
  • Agentic and Tool-Calling Workflows: For Retrieval-Augmented Generation (RAG) pipelines and agentic systems, fast context injection from retrieval stores is critical. The 4x improvement in CPU-to-GPU bandwidth within G7e instances makes them exceptionally efficient for these operations, enabling smarter, more capable AI agents.
  • Text Generation, Summarization, and Long-Context Inference: With 96 GB of memory per GPU, G7e instances comfortably accommodate large Key-Value (KV) caches. This allows extended document contexts, greatly reducing the need for truncation and enabling richer, deeper reasoning over large inputs.
  • Image Generation and Vision Models: Where previous-generation instances frequently hit out-of-memory errors with large multimodal models, G7e's doubled memory capacity removes these constraints, paving the way for more complex, higher-resolution image and vision AI applications.
  • Physical AI and Scientific Computing: Beyond traditional generative AI, G7e's Blackwell-generation compute, FP4 support, and spatial computing capabilities (including DLSS 4.0 and 4th-generation RT cores) extend its reach to digital twins, 3D simulation, and high-end inference for physical AI models, opening new frontiers in scientific research and industrial applications.
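The long-context point can be made concrete with a rough KV-cache estimate. The formula below is the standard back-of-the-envelope calculation (two tensors, K and V, per layer per sequence); the model dimensions are hypothetical values for a 32B-class model with grouped-query attention, not specifications of any model named above:

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, batch: int, bytes_per_val: int = 2) -> float:
    """Approximate KV-cache size: 2 tensors (K and V) per layer, each of
    shape [kv_heads, context_len, head_dim], for every sequence in the batch."""
    return 2 * layers * kv_heads * head_dim * context_len * batch * bytes_per_val / 1e9

# Hypothetical 32B-class model: 64 layers, 8 KV heads (GQA), head_dim 128,
# serving 32 concurrent 32k-token contexts in FP16.
print(round(kv_cache_gb(64, 8, 128, 32_768, 32), 1))
# ≈ 275 GB of cache — far beyond older instances, but within the
# 768 GB aggregate GPU memory of a single g7e.48xlarge.
```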

Simplified Deployment and Performance Benchmarking

Deploying generative AI models on G7e instances through Amazon SageMaker AI is designed to be straightforward. Users can find an example notebook here that simplifies the process. The prerequisites typically include an AWS account, an IAM role with SageMaker access, and either Amazon SageMaker Studio or a SageMaker notebook instance as a development environment. Crucially, users should request an appropriate quota for ml.g7e.2xlarge or larger instances for SageMaker AI endpoint usage through the Service Quotas console.
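For readers who prefer the raw API over the notebook, standing up a real-time endpoint boils down to three SageMaker calls: create_model, create_endpoint_config (where the G7e instance type is selected), and create_endpoint. The sketch below only assembles the request payloads; the model name, container URI, S3 path, and role ARN are placeholders you would substitute with your own values:

```python
import json

def build_endpoint_configs(model_name: str, image_uri: str, model_data_url: str,
                           role_arn: str, instance_type: str = "ml.g7e.2xlarge"):
    """Build the kwargs for the three boto3 SageMaker calls that create
    a real-time inference endpoint."""
    return {
        "create_model": {
            "ModelName": model_name,
            "ExecutionRoleArn": role_arn,
            "PrimaryContainer": {"Image": image_uri, "ModelDataUrl": model_data_url},
        },
        "create_endpoint_config": {
            "EndpointConfigName": f"{model_name}-config",
            "ProductionVariants": [{
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "InstanceType": instance_type,   # requires an approved G7e quota
                "InitialInstanceCount": 1,
            }],
        },
        "create_endpoint": {
            "EndpointName": f"{model_name}-endpoint",
            "EndpointConfigName": f"{model_name}-config",
        },
    }

# The kwargs map directly onto boto3's "sagemaker" client, e.g.:
#   sm = boto3.client("sagemaker")
#   sm.create_model(**cfg["create_model"])
cfg = build_endpoint_configs("qwen3-32b", "<container-uri>", "s3://<bucket>/model/", "<role-arn>")
print(json.dumps(cfg["create_endpoint_config"], indent=2))
```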

To demonstrate the performance gains, AWS benchmarked Qwen3-32B (BF16) on G6e and G7e instances. The workload comprised roughly 1,000 input tokens and 560 output tokens per request, simulating typical document-summarization tasks. Both setups used the native vLLM container with prefix caching enabled, ensuring a direct comparison.

The results are compelling. While the G6e baseline (ml.g6e.12xlarge with 4x L40S GPUs at $13.12/hr) showed good per-request performance, the G7e (ml.g7e.2xlarge with 1x RTX PRO 6000 Blackwell at $4.20/hr) tells a very different cost story. At production concurrency (C=32), G7e achieved a remarkable $0.79 per million output tokens. That represents a 2.6x cost reduction compared to G6e's $2.06, driven by G7e's lower hourly rate and its ability to sustain consistent throughput under load, proving that higher performance need not come at a premium.
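The per-token economics reduce to simple arithmetic: cost per million output tokens is the hourly rate divided by the tokens generated per hour. The throughput figures in the sketch below are back-solved from the quoted costs to make the arithmetic check out; they are not measured values from the benchmark:

```python
def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Dollars per one million generated tokens at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Illustrative throughputs (assumed): $4.20/hr at ~1,477 tok/s for G7e,
# $13.12/hr at ~1,769 tok/s for G6e.
g7e = cost_per_million_tokens(4.20, 1477)    # ≈ $0.79 / M tokens
g6e = cost_per_million_tokens(13.12, 1769)   # ≈ $2.06 / M tokens
print(round(g7e, 2), round(g6e, 2), round(g6e / g7e, 1))  # 0.79 2.06 2.6
```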

The Future of Cost-Efficient Generative AI Inference

The introduction of G7e instances on Amazon SageMaker AI is more than an incremental upgrade; it is a strategic move by AWS to democratize access to high-performance generative AI. By pairing the raw power of NVIDIA RTX PRO 6000 Blackwell GPUs with SageMaker's scaling and management capabilities, AWS enables organizations of every size to deploy larger, more complex AI models with unprecedented efficiency and cost-effectiveness. This advance ensures that progress in generative AI can be translated into practical, production-ready applications across a wide range of industries, reinforcing SageMaker AI's position as a leading platform for AI innovation.

Frequently Asked Questions

What are G7e instances and how do they benefit generative AI inference?
G7e instances are the latest generation of GPU-accelerated computing instances available on Amazon SageMaker AI, specifically designed to accelerate generative AI inference workloads. They are powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, offering significant advancements in memory capacity, bandwidth, and overall inference performance. For generative AI, G7e instances mean faster Time To First Token (TTFT), higher throughput, and the ability to host much larger foundation models (FMs) within a single instance, or even on a single GPU. This translates into more responsive AI applications, reduced operational complexity, and substantial cost savings for deploying and running large language models (LLMs), multimodal AI, and agentic workflows. Their enhanced capabilities make them ideal for interactive applications requiring high-performance, cost-effective inference.
Which NVIDIA GPU powers the new G7e instances, and what are its key features?
The new G7e instances on Amazon SageMaker AI are powered by the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Each of these cutting-edge GPUs provides an impressive 96 GB of GDDR7 memory, which is double the memory capacity per GPU compared to the previous G6e instances. Key features also include 1,597 GB/s of GPU memory bandwidth per GPU, support for FP4 precision through fifth-generation Tensor Cores, and NVIDIA GPUDirect RDMA over EFAv4. These features collectively contribute to the G7e instances' superior inference performance, memory density, and low-latency networking, making them exceptionally capable for demanding generative AI tasks.
How do G7e instances compare to previous generations (G6e, G5) in terms of performance and memory?
G7e instances demonstrate a significant generational leap over G6e and G5. They deliver up to 2.3x inference performance compared to G6e instances. In terms of memory, each G7e GPU offers 96 GB of GDDR7 memory, effectively doubling the per-GPU memory of G6e and quadrupling that of G5. A top-tier G7e.48xlarge instance provides an aggregate of 768 GB total GPU memory. Furthermore, networking bandwidth scales up to 1,600 Gbps with EFA on the largest G7e size, a 4x jump over G6e and 16x over G5. This vast improvement in memory, bandwidth, and networking allows G7e instances to host models that previously required multi-node setups on older instances, simplifying deployment and reducing latency.
What types of generative AI workloads are best suited for deployment on G7e instances?
G7e instances are exceptionally well-suited for a broad range of modern generative AI workloads due to their high memory density, bandwidth, and advanced networking. These include: Chatbots and Conversational AI, ensuring low Time To First Token (TTFT) and high throughput for responsive interactive experiences; Agentic and Tool-Calling Workflows, benefiting from 4x improved CPU-to-GPU bandwidth for fast context injection in RAG pipelines; Text Generation, Summarization, and Long-Context Inference, accommodating large KV caches for extended document contexts with 96 GB per-GPU memory; Image Generation and Vision Models, overcoming out-of-memory errors for larger multimodal models that struggled on previous instances; and Physical AI and Scientific Computing, leveraging Blackwell-generation compute, FP4 support, and spatial computing capabilities for digital twins and 3D simulation.
What is the cost efficiency of G7e instances compared to G6e for generative AI inference?
G7e instances offer significantly improved cost efficiency for generative AI inference compared to G6e instances. Benchmarks deploying Qwen3-32B showed that G7e achieved $0.79 per million output tokens at production concurrency (C=32). This represents a remarkable 2.6x cost reduction compared to G6e’s $2.06 per million output tokens for a similar workload. This cost saving is primarily driven by G7e’s substantially lower hourly rate (e.g., $4.20/hr for ml.g7e.2xlarge vs. $13.12/hr for ml.g6e.12xlarge) combined with its ability to maintain consistent and high throughput under load, making it a more economical choice for large-scale deployments.
What are the memory capacities for deploying LLMs on single and multi-GPU G7e instances?
G7e instances offer substantial memory capacities for deploying large language models (LLMs). A single-GPU instance, specifically the G7e.2xlarge, can effectively host foundation models with up to 35 billion parameters in FP16 precision. For larger models, scaling across multiple GPUs within a single instance dramatically increases capacity: a 4-GPU node (G7e.24xlarge) can deploy models up to 150 billion parameters, while an 8-GPU node (G7e.48xlarge) can handle models as large as 300 billion parameters. This scalability gives organizations the flexibility to deploy a wide range of LLMs without the complexities of multi-instance distributed setups.
What are the prerequisites for deploying solutions using G7e instances on Amazon SageMaker AI?
To deploy generative AI solutions using G7e instances on Amazon SageMaker AI, several prerequisites must be met. You need an active AWS account to host your resources and an AWS Identity and Access Management (IAM) role configured with appropriate permissions to access Amazon SageMaker AI services. For development and deployment, access to Amazon SageMaker Studio or a SageMaker notebook instance is recommended, though other interactive development environments like PyCharm or Visual Studio Code are also viable. Crucially, you must request a quota for at least one `ml.g7e.2xlarge` instance (or a larger G7e instance type) for Amazon SageMaker AI endpoint usage through the AWS Service Quotas console, as these are new and specialized instance types.
