
Rack-Scale AI Supercomputers: From Hardware to Topology-Aware Scheduling

[Figure: NVIDIA Grace Blackwell NVL72 rack showing the NVLink and IMEX domains of a rack-scale AI supercomputer]



The artificial intelligence landscape is evolving rapidly, demanding ever more powerful and efficient compute infrastructure. At the forefront of this evolution are rack-scale supercomputers, built to accelerate the most demanding AI and high-performance computing (HPC) workloads. The NVIDIA GB200 NVL72 and GB300 NVL72 systems, built on the innovative Blackwell architecture, represent a major step in this direction, combining massive GPU fabrics and high-speed networking into cohesive, powerful units.

Deploying such advanced hardware, however, raises a distinct challenge: how do you translate this intricate physical topology into a resource that is manageable, performant, and accessible to AI developers and researchers? The fundamental mismatch between the hierarchical nature of rack-scale hardware and the typically flat abstractions of conventional workload schedulers creates a bottleneck. This is where a validated software stack such as NVIDIA Mission Control comes in, closing the gap and turning raw compute power into a seamless, topology-aware AI factory.

A Next-Generation Rack-Scale AI Supercomputer with NVIDIA Blackwell

The NVIDIA GB200 NVL72 and GB300 NVL72 systems, powered by the advanced NVIDIA Blackwell architecture, are not merely collections of powerful GPUs; they are unified, rack-scale supercomputers designed for the future of AI. Each system comprises 18 tightly coupled compute trays, forming a large GPU fabric linked by advanced NVLink switches. These systems support NVIDIA Multi-Node NVLink (MNNVL), enabling extremely high-bandwidth communication within the rack, and include IMEX-capable compute trays that enable shared GPU memory across nodes. This architecture provides an unmatched foundation for training and deploying very large AI models, pushing the boundaries of what is possible in fields ranging from scientific discovery to enterprise AI applications.

The design philosophy behind these Blackwell-based systems centers on maximizing data throughput and minimizing latency between interconnected GPUs. This is achieved through a tightly integrated hardware stack in which every component is optimized for collective performance, ensuring that AI workloads can scale efficiently without running into communication bottlenecks.

Bridging Hardware Topology and AI Scheduler Abstractions

For AI architects and HPC platform operators, the real challenge is not simply acquiring and assembling this advanced hardware, but making it available as a resource that is 'secure, performant, and easy to consume'. Traditional schedulers often operate on the assumption of a flat, homogeneous pool of compute resources. That model is a poor fit for rack-scale supercomputers, where the hierarchical, topology-aware structure of NVLink fabrics and IMEX domains is critical to performance. Without the right integration, schedulers can inadvertently place jobs in suboptimal locations, resulting in lower efficiency and unpredictable performance.

This is the gap NVIDIA Mission Control is designed to close. As a unified, rack-scale control plane for NVIDIA Grace Blackwell NVL72 systems, Mission Control has a native understanding of the underlying NVIDIA NVLink and NVIDIA IMEX domains. That deep awareness lets it integrate intelligently with popular workload management platforms such as Slurm and NVIDIA Run:ai. By translating intricate hardware topologies into actionable scheduling intelligence, Mission Control ensures that the advanced capabilities of the Blackwell architecture are fully utilized, turning a collection of sophisticated hardware into a fully functioning AI factory. These capabilities will extend to the upcoming NVIDIA Vera Rubin platform, including NVIDIA Rubin NVL8, further validating this unified approach to high-performance AI infrastructure.

At the heart of topology-aware orchestration for Blackwell systems are the concepts of NVLink domains and NVLink partitions, which are exposed through system-level identifiers: the cluster UUID and the clique ID. These identifiers matter because they provide a logical map of the physical NVLink fabric, allowing system software and schedulers to reason about GPU placement and connectivity.

The mapping is simple but powerful:

  • Cluster UUID corresponds to the NVLink domain. A shared cluster UUID indicates that systems, and their GPUs, belong to the same overarching NVLink domain and are connected by a common NVLink fabric. On Grace Blackwell NVL72, this UUID is consistent across the entire rack, reflecting physical proximity and a shared high-bandwidth interconnect.
  • Clique ID corresponds to the NVLink partition. The clique ID provides a finer-grained distinction, identifying groups of GPUs that share an NVLink partition within the larger domain. When a rack is logically divided into multiple NVLink partitions, the cluster UUID stays the same, but the clique IDs distinguish these smaller, isolated high-bandwidth groups.

This distinction matters operationally:

  • The cluster UUID answers the question: which GPUs physically share a rack and are capable of the highest-bandwidth NVLink communication?
  • The clique ID answers: which GPUs share an NVLink partition and are intended to communicate together for a given job or service tier, ensuring optimal performance for tightly coupled workloads?

These identifiers are the connective backbone that lets platforms such as Slurm, Kubernetes, and NVIDIA Run:ai align job placement, isolation, and performance guarantees with the actual layout of the NVLink fabric, all without exposing the underlying hardware complexity directly to end users. NVIDIA Mission Control provides a central view of these identifiers, simplifying administration; the sketch after the table below shows one way system software can read and group them.

Hardware concept | Software identifier | Description
NVLink domain | Cluster UUID | Identifies GPUs that physically share a rack and can communicate over NVLink across the entire rack.
NVLink partition | Clique ID | Distinguishes GPUs intended to communicate together within an NVLink domain for a specific job or service tier.
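
To make the mapping concrete, here is a minimal sketch of how system software could read and group these identifiers. It assumes that `nvidia-smi -q` on an MNNVL-capable system reports a per-GPU Fabric section containing a cluster UUID and a clique ID (exact field names vary by driver version, so the parser normalizes them). The grouping key it builds, (cluster UUID, clique ID), is exactly the granularity a scheduler needs to reason about NVLink partitions.

```python
import subprocess
from collections import defaultdict

def read_fabric_identifiers():
    """Return {gpu_id: (cluster_uuid, clique_id)} parsed from `nvidia-smi -q`.

    Assumption: on MNNVL-capable systems the per-GPU "Fabric" section
    reports a cluster UUID and a clique ID; field names ("ClusterUUID" /
    "Cluster UUID", "CliqueId" / "Clique Id") vary by driver version,
    so keys are normalized before matching.
    """
    output = subprocess.run(["nvidia-smi", "-q"], capture_output=True,
                            text=True, check=True).stdout
    gpus, gpu_id, cluster, clique = {}, None, None, None
    for line in output.splitlines():
        if line.startswith("GPU "):          # top-level section header, e.g. "GPU 00000000:09:00.0"
            if gpu_id is not None:
                gpus[gpu_id] = (cluster, clique)
            gpu_id, cluster, clique = line.split()[1], None, None
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip().replace(" ", "").lower()
            if key == "clusteruuid":
                cluster = value.strip()
            elif key == "cliqueid":
                clique = value.strip()
    if gpu_id is not None:
        gpus[gpu_id] = (cluster, clique)
    return gpus

def group_by_nvlink_partition(gpus):
    """Group GPUs by (cluster UUID, clique ID), i.e. by NVLink partition."""
    groups = defaultdict(list)
    for gpu_id, key in gpus.items():
        groups[key].append(gpu_id)
    return dict(groups)

if __name__ == "__main__":
    for (cluster, clique), members in group_by_nvlink_partition(read_fabric_identifiers()).items():
        print(f"NVLink domain {cluster}, partition {clique}: {members}")
```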

Topology-Aware AI Scheduling with Slurm

For multi-node workloads running on Blackwell-based NVL72 systems, placement matters as much as the raw number of GPUs allocated. An AI training job that needs 16 GPUs, for example, will behave very differently if it is scattered across loosely connected nodes than if it is packed inside a single high-bandwidth NVLink fabric. This is where Slurm's topology/block plugin proves its value, allowing Slurm to recognize that connectivity between nodes is not uniform.

On Grace Blackwell NVL72 systems, blocks of nodes with lower-latency connections map directly to NVLink partitions, the groups of GPUs joined by a dedicated high-bandwidth NVLink fabric. By enabling the topology/block plugin and exposing these NVLink partitions as distinct blocks, Slurm gains the contextual intelligence it needs to make better scheduling decisions. By default, jobs are intelligently packed within a single NVLink partition (or block), preserving critical Multi-Node NVLink (MNNVL) performance. Larger jobs can still span multiple blocks when necessary, but this approach makes the performance consequences explicit rather than accidental.
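
The packing policy itself is easy to reason about. The following is a small, scheduler-agnostic sketch of the "fill one block before spilling over" heuristic that topology/block applies; the node names, block layout, and job size are purely illustrative, not taken from a real cluster or from Slurm's source.

```python
from typing import Optional

def place_job(blocks: dict[str, list[str]], free: set[str],
              nodes_needed: int) -> Optional[list[str]]:
    """Pick nodes for a job, preferring a single NVLink partition (block).

    blocks: block name -> all node names in that block (one NVLink partition)
    free:   node names that are currently idle
    Returns the chosen node list, or None if the job cannot be placed.
    """
    # 1. Prefer the tightest single block that can hold the whole job,
    #    so MNNVL traffic never has to leave one NVLink partition.
    candidates = []
    for members in blocks.values():
        idle = [n for n in members if n in free]
        if len(idle) >= nodes_needed:
            candidates.append((len(idle), idle))
    if candidates:
        _, idle = min(candidates)            # tightest fit first
        return idle[:nodes_needed]

    # 2. Otherwise span blocks explicitly: the performance cost becomes a
    #    deliberate, visible choice rather than an accident of placement.
    spanning = []
    for members in blocks.values():
        spanning += [n for n in members if n in free]
        if len(spanning) >= nodes_needed:
            return spanning[:nodes_needed]
    return None

# Illustrative example: one rack split into two NVLink partitions of 9 trays.
blocks = {
    "block1": [f"rack1-tray{i:02d}" for i in range(1, 10)],
    "block2": [f"rack1-tray{i:02d}" for i in range(10, 19)],
}
free = set(blocks["block1"][2:]) | set(blocks["block2"])
print(place_job(blocks, free, nodes_needed=4))
```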

In practical terms, this enables straightforward placement strategies:

  • One block/node group per rack: This configuration lets Slurm Quality of Service (QoS) govern access to a shared, rack-wide partition, ideal for consolidated resource management.
  • Multiple blocks/node groups per rack: This approach is well suited to carving out smaller, isolated groups of high-bandwidth GPUs. Here, each block/node group maps to a dedicated Slurm partition, offering a distinct tier of service. Users can then submit to the appropriate Slurm partition and land their jobs inside the intended NVLink partition automatically, without needing to understand the complexity of the underlying fabric (a configuration sketch follows this list). This higher-level resource management is essential for organizations looking to scale their AI initiatives, in line with the broader goal of scaling AI for everyone.
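
As a sketch of the second strategy, the snippet below turns a node-to-clique mapping into one Slurm partition definition per NVLink partition. It is only an illustration under assumptions: the partition and node names are hypothetical, and real slurm.conf entries would typically carry additional site-specific options (QoS attachments, limits, defaults) beyond what is shown here.

```python
from collections import defaultdict

def slurm_partitions_for_cliques(node_to_clique: dict[str, str]) -> str:
    """Emit one PartitionName= stanza per NVLink partition (clique).

    node_to_clique maps a Slurm node name to the clique ID of the NVLink
    partition it belongs to, as discovered on that node. Output is a
    sketch of slurm.conf lines; adapt names and options to your site.
    """
    cliques = defaultdict(list)
    for node, clique in sorted(node_to_clique.items()):
        cliques[clique].append(node)
    lines = []
    for i, (clique, nodes) in enumerate(sorted(cliques.items()), start=1):
        lines.append(f"# NVLink partition for clique {clique}")
        lines.append(f"PartitionName=nvl_part{i} Nodes={','.join(nodes)} State=UP")
    return "\n".join(lines)

# Hypothetical 18-tray rack split into two NVLink partitions (cliques 7 and 8).
example = {f"rack1-tray{i:02d}": ("7" if i <= 9 else "8") for i in range(1, 19)}
print(slurm_partitions_for_cliques(example))
```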

Optimizing MNNVL and IMEX Workloads with Mission Control

Multi-node NVIDIA CUDA workloads often rely on MNNVL to reach peak performance, allowing GPUs on different compute trays to participate in a shared-memory programming model. From a developer's perspective, using MNNVL can look deceptively simple, but the coordination underneath is complex.

This is where NVIDIA Mission Control plays a critical role. It ensures that the key components line up correctly when MNNVL jobs are run with Slurm. Specifically, Mission Control ensures that the IMEX service, which enables shared GPU memory across nodes, runs on exactly the set of compute trays participating in an MNNVL job. It also ensures that the relevant NVSwitches are configured correctly to establish and maintain these high-bandwidth MNNVL connections. This coordination is essential for delivering consistent, predictable performance across the rack. Without Mission Control's intelligent orchestration, the benefits of MNNVL and IMEX would be difficult to realize and manage at scale, underscoring NVIDIA's commitment to delivering complete solutions for its advanced GPUs and their surrounding ecosystem.
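
The kind of consistency being enforced here can be illustrated with a small pre-flight check, for instance in a job prolog. This sketch assumes each participating tray can report its cluster UUID and clique ID (for example via the parsing helper shown earlier) and that IMEX channel devices appear under /dev/nvidia-caps-imex-channels once the IMEX service is provisioned; both are assumptions about your driver and IMEX setup, not a description of what Mission Control itself does internally.

```python
import os

def preflight_mnnvl(node_reports: dict[str, tuple[str, str]]) -> list[str]:
    """Return a list of problems that would break an MNNVL job.

    node_reports maps hostname -> (cluster_uuid, clique_id) as reported by
    each compute tray selected for the job.
    """
    problems = []
    clusters = {uuid for uuid, _ in node_reports.values()}
    if len(clusters) != 1:
        problems.append(f"nodes span {len(clusters)} NVLink domains: {sorted(clusters)}")
    cliques = {clique for _, clique in node_reports.values()}
    if len(cliques) != 1:
        problems.append(f"nodes span {len(cliques)} NVLink partitions: {sorted(cliques)}")
    return problems

def imex_channels_present(path: str = "/dev/nvidia-caps-imex-channels") -> bool:
    """Local check: IMEX channel device nodes exist on this tray.

    Assumption: the IMEX service exposes channel devices under this path;
    verify the exact location against your driver/IMEX documentation.
    """
    return os.path.isdir(path) and len(os.listdir(path)) > 0

if __name__ == "__main__":
    # Hypothetical reports gathered from the trays allocated to one job.
    reports = {
        "rack1-tray01": ("cluster-aaaa", "7"),
        "rack1-tray02": ("cluster-aaaa", "7"),
        "rack1-tray03": ("cluster-aaaa", "8"),   # wrong partition -> flagged
    }
    for problem in preflight_mnnvl(reports):
        print("MNNVL pre-flight:", problem)
    print("IMEX channels on this node:", imex_channels_present())
```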

Toward Autonomous, Scalable AI Infrastructure

The combination of NVIDIA's Blackwell architecture with modern software layers such as Mission Control and Topograph marks an important step toward fully autonomous, scalable AI infrastructure. NVIDIA Topograph automates the discovery of the intricate NVLink and interconnect hierarchy, exposing this critical information to schedulers such as Slurm, Kubernetes (through NVIDIA DRA and ComputeDomains), and NVIDIA Run:ai. This removes the manual burden of topology management, allowing organizations to deploy and scale AI workloads with unprecedented efficiency.
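
For Slurm, for example, automated discovery ultimately boils down to turning a node-to-clique mapping into a block topology description. The sketch below is not how Topograph is implemented; it only illustrates the shape of the output, assuming a topology/block style topology.conf with BlockName/Nodes/BlockSizes entries (check the Slurm documentation for the exact keywords your version supports).

```python
from collections import defaultdict

def block_topology_conf(node_to_clique: dict[str, str], block_size: int = 9) -> str:
    """Render a topology/block style topology.conf from discovered cliques.

    Each clique (NVLink partition) becomes one block. BlockSizes is the
    planning unit the scheduler packs jobs into; adjust both it and the
    keyword spelling to match your rack layout and Slurm version.
    """
    cliques = defaultdict(list)
    for node, clique in sorted(node_to_clique.items()):
        cliques[clique].append(node)
    lines = [
        f"BlockName=block{i} Nodes={','.join(nodes)}"
        for i, (_, nodes) in enumerate(sorted(cliques.items()), start=1)
    ]
    lines.append(f"BlockSizes={block_size}")
    return "\n".join(lines)

# Hypothetical rack: 18 trays split across two NVLink partitions.
nodes = {f"rack1-tray{i:02d}": ("7" if i <= 9 else "8") for i in range(1, 19)}
print(block_topology_conf(nodes))
```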

By giving schedulers a deep, real-time understanding of the hardware topology, this integrated approach ensures that AI applications run on the best-suited resources, minimizing communication latency and maximizing throughput. The result is a high-performance, resilient, and easy-to-manage AI factory capable of handling the most demanding AI training and inference workloads. As AI models continue to grow in complexity and scale, the ability to manage and orchestrate workloads efficiently on rack-scale supercomputers will be essential for driving innovation and maintaining a competitive edge. This holistic strategy underpins the future of enterprise AI, turning raw compute power into an intelligent, responsive, and highly efficient AI supercomputer.

Frequently Asked Questions

What are NVIDIA GB200 and GB300 NVL72 systems, and what role does the Blackwell architecture play?
NVIDIA GB200 and GB300 NVL72 systems represent a new generation of rack-scale supercomputers specifically engineered for demanding AI and HPC workloads. These systems leverage the groundbreaking NVIDIA Blackwell architecture, which integrates massive GPU fabrics with high-bandwidth networking into a single, tightly coupled unit. The Blackwell architecture is designed to deliver unprecedented performance and efficiency for training and inference, featuring advanced NVLink switches, Multi-Node NVLink (MNNVL) for inter-GPU communication, and IMEX-capable compute trays that facilitate shared GPU memory across multiple nodes within the rack. This integrated design aims to overcome the limitations of traditional server-bound GPU deployments, providing a seamless, scalable platform for complex AI models.
What is the primary challenge in scheduling AI workloads on these advanced rack-scale supercomputers?
The core challenge lies in the significant mismatch between the intricate, hierarchical physical topology of rack-scale supercomputers and the often simplistic abstractions presented by conventional workload schedulers. While systems like the NVIDIA GB200/GB300 NVL72 boast sophisticated NVLink fabrics and IMEX domains, schedulers typically perceive a flat pool of GPUs and nodes. This can lead to inefficient resource allocation, sub-optimal performance due to poor data locality or communication bottlenecks, and increased operational complexity for platform operators. Without topology-aware scheduling, the inherent advantages of rack-scale integration, such as high-bandwidth interconnections, cannot be fully leveraged for AI workloads.
How does NVIDIA Mission Control address the operational complexities of rack-scale AI scheduling?
NVIDIA Mission Control acts as a crucial control plane that bridges the gap between the complex hardware topology of NVIDIA Grace Blackwell NVL72 systems and the needs of workload management platforms like Slurm and NVIDIA Run:ai. It provides a native, deep understanding of NVLink and IMEX domains, translating physical hardware relationships into logical identifiers that schedulers can interpret. By centralizing the view of cluster UUIDs and clique IDs, Mission Control enables precise, topology-aware job placement, ensures proper workload isolation, and guarantees consistent performance by aligning computations with the optimal underlying hardware fabric. This effectively transforms raw infrastructure into an efficient, manageable AI factory.
Explain the concepts of Cluster UUID and Clique ID in the context of NVLink topology and their operational significance.
Cluster UUID and Clique ID are system-level identifiers that encode a GPU's position within the NVLink fabric, making the complex topology understandable to system software and schedulers. The Cluster UUID corresponds to the NVLink domain, indicating that systems and their GPUs belong to the same physical rack and share a common NVLink fabric. For Grace Blackwell NVL72, this UUID is consistent across the entire rack. The Clique ID provides a finer distinction, corresponding to an NVLink Partition. GPUs sharing a Clique ID belong to the same logical partition within that domain. Operationally, the Cluster UUID answers which GPUs physically share a rack and can communicate via NVLink, while the Clique ID answers which GPUs share an NVLink Partition and are intended to communicate together for a specific workload, enabling finer-grained resource allocation and performance optimization.
How does Slurm's topology/block plugin enhance AI workload placement on NVL72 systems?
Slurm's topology/block plugin is essential for efficient AI workload placement on NVIDIA NVL72 systems by making Slurm aware that not all nodes (or GPUs) are equal in terms of connectivity and performance. On Grace Blackwell NVL72 systems, blocks of nodes with lower-latency connections directly map to NVLink partitions, which are groups of GPUs sharing a high-bandwidth NVLink fabric. By enabling this plugin and exposing NVLink partitions as 'blocks,' Slurm gains the necessary context to make intelligent placement decisions. This ensures that multi-GPU jobs are preferentially allocated within a single NVLink partition to preserve MNNVL performance, preventing performance degradation that could occur if jobs were spread indiscriminately across different, less-connected segments of the supercomputer. It allows for optimized resource utilization and predictable performance for demanding AI tasks.
What is Multi-Node NVLink (MNNVL), and how does IMEX facilitate it for shared GPU memory?
Multi-Node NVLink (MNNVL) is a key technology that allows GPUs across different compute nodes within a rack-scale system to communicate directly with high bandwidth and low latency, essential for scaling large AI models. MNNVL enables a shared-memory programming model across these distributed GPUs, making them appear to applications as a single, massive GPU fabric. IMEX, the NVIDIA Internode Memory Exchange service, is the underlying service that facilitates this memory sharing. IMEX-capable compute trays enable shared GPU memory across nodes by coordinating memory export and import over the NVLink fabric. While MNNVL simplifies the programming model for developers, Mission Control plays a crucial role behind the scenes to ensure that IMEX services are correctly provisioned and synchronized with MNNVL jobs, guaranteeing that the benefits of shared GPU memory are fully realized without exposing the underlying complexities to the end user.
What are the key benefits of implementing topology-aware scheduling for AI workloads on rack-scale supercomputers?
Implementing topology-aware scheduling offers several significant benefits for AI workloads on rack-scale supercomputers. Firstly, it ensures optimal performance by intelligently placing jobs on GPUs that have the highest bandwidth and lowest latency connections, minimizing communication overheads inherent in distributed AI training. Secondly, it enhances resource utilization by preventing inefficient spreading of jobs across disparate hardware segments, leading to more predictable performance and better throughput. Thirdly, it simplifies management for platform operators by abstracting hardware complexities while providing clear isolation boundaries between workloads, improving system stability and security. Ultimately, topology-aware scheduling transforms complex hardware into a highly efficient, scalable, and manageable 'AI factory,' accelerating research and development while reducing operational burden.
How does NVIDIA Topograph contribute to the automated discovery and scheduling of supercomputer topologies?
NVIDIA Topograph is a critical component that automates the discovery of the intricate NVLink and interconnect hierarchy within rack-scale supercomputers. This automated discovery is essential because manually configuring and maintaining detailed topology information for large-scale systems would be prone to errors and highly time-consuming. Topograph exposes this detailed fabric information to workload schedulers, including Slurm and Kubernetes (through NVIDIA DRA and ComputeDomains), as well as NVIDIA Run:ai. By providing schedulers with an accurate and real-time view of the hardware topology, Topograph enables them to make intelligent, automated placement decisions. This ensures that AI workloads are scheduled in a topology-aware manner from the outset, optimizing performance, resource allocation, and overall system efficiency, which is crucial for building and operating scalable AI factories.
