
Multimodal Embeddings at Scale: An AI Data Lake for Media and Entertainment

5 min read · AWS · Original source
Diagram of the AWS multimodal-embedding AI data lake architecture for video search, showing the data flow from S3 to OpenSearch via Nova and Bedrock.

Pushing the Boundaries of Video Search with Multimodal Embeddings

The media and entertainment industry is awash in vast oceans of video content. From archival footage to daily uploads, the sheer volume renders traditional content-discovery methods, namely manual tagging and keyword-based search, inefficient and often inaccurate. These legacy approaches struggle to capture the full richness and deep context embedded within video, leading to missed opportunities for content reuse, faster production, and better viewer experiences.

Enter the era of multimodal embeddings. AWS is pioneering a solution that breaks through these limits, enabling semantic search across massive video repositories. By harnessing Amazon Nova models and Amazon OpenSearch Service, content creators and distributors can move far beyond surface-level keywords to truly understand and access their media libraries. This approach lets natural-language queries probe the depths of visual and auditory information, bringing unprecedented precision to content discovery.

To demonstrate this capability at an impressive scale, AWS processed 792,270 videos from the Registry of Open Data on AWS, comprising 8,480 hours of video content. This ambitious project, which took just 41 hours to process more than 30.5 million seconds of video, highlights the scalability and efficiency of the AI-driven approach. The estimated first-year cost, including one-time ingestion and a year of OpenSearch Service, was a highly competitive $23,632 (with OpenSearch Service Reserved Instances) to $27,328 (on-demand). A solution like this fundamentally changes how media companies interact with their digital assets, opening new avenues for content monetization and production workflows. This paradigm shift toward semantic understanding is a significant advance for AI for Business in media.
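The headline figures are easy to sanity-check with back-of-the-envelope arithmetic; all input numbers below come from the post itself:

```python
# Sanity-check the headline scale figures (all inputs are from the article).
HOURS_OF_VIDEO = 8_480
VIDEO_COUNT = 792_270
WALL_CLOCK_HOURS = 41

total_seconds = HOURS_OF_VIDEO * 3600             # total video processed
videos_per_hour = VIDEO_COUNT / WALL_CLOCK_HOURS  # sustained throughput

print(f"{total_seconds:,} seconds of video")      # 30,528,000 (~30.5 million)
print(f"~{videos_per_hour:,.0f} videos/hour")     # ~19,324; the article quotes 19,400/hour
```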

Understanding the Architecture of a Scalable Multimodal AI Data Lake

At its core, this powerful multimodal video search system is built on two interconnected workflows: video ingestion and search. These components fit together seamlessly to form an AI data lake that understands the intricate details of video content and makes them searchable.

The Video Ingestion Pipeline

The ingestion pipeline is built for parallelism and efficiency. It runs on four Amazon EC2 c7i.48xlarge instances, orchestrating up to 600 parallel workers to reach a throughput of 19,400 videos per hour. Videos previously staged in Amazon S3 are then processed by the asynchronous Amazon Nova Multimodal Embeddings API. The API intelligently segments each video into optimal 15-second chunks, a balance between capturing meaningful scene changes and keeping the volume of generated embeddings manageable. Each chunk is then converted into a 1024-dimensional embedding representing its combined audio and visual features. Although 3072-dimensional embeddings offer higher fidelity, the 1024-dimension option delivers a 3x storage cost saving with minimal impact on accuracy for this application, making it the sensible choice at scale.
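The segmentation itself happens server-side inside the Nova API, but the arithmetic behind chunk counts is easy to illustrate. A minimal sketch (the function name and the truncated final chunk are illustrative assumptions, not the API's documented contract):

```python
def chunk_boundaries(duration_s: float, chunk_s: float = 15.0) -> list[tuple[float, float]]:
    """Split a video of duration_s seconds into fixed 15-second chunks,
    mirroring the segmentation described above. Each chunk would map to
    one 1024-dimensional embedding."""
    bounds, start = [], 0.0
    while start < duration_s:
        end = min(start + chunk_s, duration_s)
        bounds.append((start, end))
        start = end
    return bounds

# A 100-second clip yields seven chunks, the last one shorter:
segments = chunk_boundaries(100)
print(len(segments), segments[-1])  # 7 (90.0, 100.0)
```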

To further boost searchability, Amazon Nova Pro (or the newer, more cost-effective Nova 2 Lite) is used to generate 10-15 descriptive tags per video from a predefined taxonomy. This dual approach ensures content can be discovered through both semantic similarity and traditional keyword matching. The embeddings are stored in an OpenSearch k-NN index optimized for vector similarity search, while the descriptive tags go into a separate text index. This separation keeps queries simple and efficient. The pipeline manages Bedrock's concurrency limits (30 concurrent jobs per account) through a robust job queue and polling mechanism, ensuring continuous, steady processing.
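The queue-and-poll pattern for staying under the 30-job concurrency ceiling can be sketched as follows. This is an illustrative sketch, not the article's actual implementation: the `submit` and `poll` callables stand in for Bedrock's StartAsyncInvoke and GetAsyncInvoke calls, and are injected so the scheduler stays testable on its own.

```python
import time
from collections import deque

MAX_CONCURRENT_JOBS = 30  # Bedrock async quota per account, per the article

def run_jobs(job_ids, submit, poll, max_concurrent=MAX_CONCURRENT_JOBS,
             poll_interval=0.0):
    """Drain job_ids through at most max_concurrent in-flight async jobs.

    submit(job) starts a job and returns a handle; poll(handle) returns
    True once that job has completed. Returns jobs in completion order.
    """
    pending, in_flight, done = deque(job_ids), {}, []
    while pending or in_flight:
        # Top the pool up to the concurrency ceiling.
        while pending and len(in_flight) < max_concurrent:
            job = pending.popleft()
            in_flight[job] = submit(job)
        # Poll in-flight jobs and retire any that have finished.
        for job, handle in list(in_flight.items()):
            if poll(handle):
                done.append(job)
                del in_flight[job]
        time.sleep(poll_interval)
    return done
```

In the real pipeline, `submit` would wrap a `start_async_invoke` call on the bedrock-runtime client and `poll` would check `get_async_invoke` status; retries and error handling are omitted here.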

Below is a visual representation of this ingestion process:

Figure 1: The video ingestion pipeline showing the flow from S3 video storage through Nova Multimodal Embeddings and Nova Pro into the two OpenSearch indices

Powering Versatile Video Search Capabilities

The search architecture is built for versatility, offering multiple paths to content discovery:

  1. Text-to-video Search: Users can enter natural-language queries such as "drone shot of a busy city at night" or "close-up of a chef preparing a gourmet meal." The system converts these queries into embeddings, then uses the OpenSearch k-NN index to find video segments or whole videos that semantically match the description, even when none of the exact words appear in any metadata. This is ideal for intuitive content discovery and story development.

  2. Video-to-video Search: For scenarios where a user has a video clip and wants to find similar content, this mode excels. By comparing the input video's embeddings directly against those in the OpenSearch k-NN index, the system can identify visually and aurally similar content. This is especially valuable for finding B-roll footage, ensuring content consistency, or detecting derivative works.

  3. Hybrid Search: Combining the best of both worlds, hybrid search merges vector similarity with traditional keyword matching. The proposed solution uses a weighted approach (e.g., 70% vector similarity and 30% keyword matching). This delivers high precision and relevance, letting specific metadata steer the search while semantic understanding provides broader contextual matches. The approach is particularly well suited to complex queries that benefit from both precise tags and conceptual understanding.
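The weighted blend in hybrid search can be sketched as a simple scoring function. BM25 scores are unbounded, so this sketch normalises them against the best keyword hit before mixing; the article does not specify the exact normalisation, so treat that detail as an assumption:

```python
def hybrid_score(vector_sim: float, bm25: float, best_bm25: float,
                 w_vector: float = 0.7, w_keyword: float = 0.3) -> float:
    """70/30 blend of semantic similarity and keyword relevance,
    as in the weighted approach described above."""
    keyword = bm25 / best_bm25 if best_bm25 > 0 else 0.0
    return w_vector * vector_sim + w_keyword * keyword

# A semantically strong match with a weak keyword hit still outranks
# a keyword-only match:
a = hybrid_score(vector_sim=0.92, bm25=2.0, best_bm25=10.0)   # 0.704
b = hybrid_score(vector_sim=0.40, bm25=10.0, best_bm25=10.0)  # 0.58
print(a > b)  # True
```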

Figure 2: The video search architecture showing the three search modes: text-to-video, video-to-video, and hybrid search combining k-NN and BM25

Cost-Effective Deployment and Prerequisites

Deploying such a sophisticated AI data lake requires careful attention to infrastructure and cost, both of which AWS has optimized effectively. Processing the full dataset, roughly 8,480 hours of video content, came to an estimated first-year total of $27,328 (with OpenSearch on-demand) or $23,632 (with OpenSearch Service Reserved Instances).

The ingestion breakdown highlights the main cost drivers:

  • Amazon EC2 processing: $421 (4x c7i.48xlarge spot instances for 41 hours)
  • Amazon Bedrock Nova Multimodal Embeddings: $17,096 (30.5 million seconds at $0.00056/second batch pricing)
  • Nova Pro tagging: $571 (792K videos, roughly 600 tokens/video on average)
  • Amazon OpenSearch Service: $9,240 (annual on-demand) or $5,544 (annual Reserved)
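The line items above reconcile with the quoted totals, and the embedding charge follows directly from the per-second batch price; a quick check:

```python
# Reproduce the first-year totals from the cost breakdown (figures from the article).
ec2 = 421
nova_embeddings = 17_096
nova_pro_tagging = 571
opensearch_on_demand = 9_240
opensearch_reserved = 5_544

# 8,480 hours = 30,528,000 seconds at $0.00056/second (batch) ≈ $17,096
assert round(8_480 * 3600 * 0.00056) == nova_embeddings

ingestion = ec2 + nova_embeddings + nova_pro_tagging     # one-time: $18,088
print(ingestion + opensearch_on_demand)  # 27328 (on-demand)
print(ingestion + opensearch_reserved)   # 23632 (Reserved Instances)
```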

Prerequisites for Implementation: To reproduce or adapt this solution, you will need:

  1. An AWS account with Amazon Bedrock access in us-east-1.
  2. Python 3.9 or later.
  3. The AWS Command Line Interface (AWS CLI) configured with appropriate credentials.
  4. An Amazon OpenSearch Service domain (r6g.large or larger recommended), version 2.11 or later, with the k-NN plugin enabled.
  5. An Amazon S3 bucket for video storage and embedding outputs.
  6. AWS Identity and Access Management (IAM) permissions for Amazon Bedrock, OpenSearch Service, and Amazon S3.

The solution uses specific AWS services and models:

  • Amazon Bedrock with amazon.nova-2-multimodal-embeddings-v1:0 for embeddings.
  • Amazon Bedrock with us.amazon.nova-pro-v1:0 or us.amazon.nova-2-lite-v1:0 for tagging.
  • Amazon OpenSearch Service 2.11+ with the k-NN plugin.
  • Amazon S3 for storage.

Implementing the Multimodal Video Search Solution

Getting started with this architecture involves a structured approach to configuring your AWS environment. The essential first step is establishing the required permissions.

Step 1: Create IAM Roles and Policies

You will need to create an IAM role that authorizes your application or service to interact with the various AWS components. The role must include permissions to invoke Amazon Bedrock models (for embedding generation and tagging), write data to the OpenSearch indices, and perform read/write operations on the Amazon S3 buckets where your video content and processed outputs reside.

Here is an example of a basic IAM policy structure:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:StartAsyncInvoke",
        "bedrock:GetAsyncInvoke",
        "bedrock:List"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-video-bucket/*",
        "arn:aws:s3:::your-video-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpPost",
        "es:ESHttpPut",
        "es:ESHttpDelete",
        "es:ESHttpGet"
      ],
      "Resource": "arn:aws:es:us-east-1:*:domain/your-opensearch-domain/*"
    }
  ]
}

This policy grants the specific permissions the pipeline needs to operate. Remember to replace placeholders such as your-video-bucket and your-opensearch-domain with your actual resource names. After the IAM setup, you will move on to configuring your S3 buckets, setting up your OpenSearch Service domain with k-NN enabled, and developing the orchestration logic that drives ingestion through the Bedrock APIs. This robust framework ensures that media and entertainment companies can efficiently manage, discover, and monetize their ever-growing content libraries, marking a major step forward in content intelligence. The solution exemplifies how modern AI capabilities, particularly in multimodal understanding, are redefining industry standards for content management and accessibility. It is a testament to the power of combining state-of-the-art AI models with scalable cloud infrastructure to solve real-world AI for Business challenges, advancing alongside developments such as agentic AI workflows.
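On the OpenSearch side, the setup reduces to one k-NN index for embeddings and one text index for tags. The bodies below use standard OpenSearch k-NN mapping and query syntax; field names such as `embedding` and `tags` are illustrative assumptions, since the article does not name them:

```python
# Index mapping for the 1024-dimensional embedding vectors (k-NN index).
knn_index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {"type": "knn_vector", "dimension": 1024},
            "video_id": {"type": "keyword"},
            "segment_start_s": {"type": "float"},
        }
    },
}

# Companion text index for the 10-15 descriptive tags per video.
tag_index_body = {
    "mappings": {
        "properties": {
            "video_id": {"type": "keyword"},
            "tags": {"type": "text"},
        }
    },
}

def knn_query(query_embedding: list[float], k: int = 10) -> dict:
    """Top-k semantic search body for the k-NN index."""
    return {
        "size": k,
        "query": {"knn": {"embedding": {"vector": query_embedding, "k": k}}},
    }
```

With the opensearch-py client, these bodies would be passed to `client.indices.create(...)` and `client.search(...)` respectively.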

Frequently Asked Questions

What is a multimodal AI data lake for media and entertainment workloads?
A multimodal AI data lake for media and entertainment is an advanced system designed to store, process, and enable intelligent search across vast collections of video content. Unlike traditional keyword-based systems, it leverages AI models, specifically multimodal embeddings, to understand the nuanced meaning and context within audio and visual data. This allows for semantic search capabilities, where users can query content using natural language descriptions or by providing another video, moving beyond simple tags to find relevant moments or entire videos based on their actual content. AWS's solution utilizes services like Amazon Nova for embedding generation and Amazon OpenSearch Service for efficient storage and retrieval of these high-dimensional vectors, making it ideal for large-scale content libraries.
How does the video ingestion pipeline handle large-scale datasets?
The video ingestion pipeline detailed in the article is engineered for massive scale, demonstrating processing of nearly 800,000 videos totaling over 8,480 hours of content. It employs a distributed architecture using multiple Amazon EC2 instances (e.g., c7i.48xlarge) to parallelize video processing. Key to its efficiency is the asynchronous API of Amazon Nova Multimodal Embeddings, which segments videos into optimal chunks (e.g., 15-second segments) and generates 1024-dimensional embeddings. To manage Bedrock's concurrency limits, the pipeline implements a job queue with polling, ensuring continuous processing. Additionally, Amazon Nova Pro (or Nova Lite) is used to generate descriptive tags, further enriching the metadata. These embeddings and tags are then efficiently indexed into Amazon OpenSearch Service's k-NN and text indices respectively, preparing the data for rapid search.
What types of video search capabilities does this solution enable?
This multimodal AI data lake solution provides three powerful video search capabilities, significantly enhancing content discovery. First, **Text-to-video Search** allows users to input natural language queries (e.g., 'a person surfing at sunset') which are then converted into embeddings and matched semantically against video content, going beyond exact keyword matches. Second, **Video-to-video Search** enables users to find similar video segments or entire videos by comparing their embeddings directly, useful for content recommendations or identifying duplicates. Third, **Hybrid Search** combines the strengths of both semantic vector similarity and traditional keyword matching (e.g., 70% vector, 30% keyword) for maximum accuracy and relevance, especially when dealing with complex queries that benefit from both contextual understanding and specific metadata.
Which AWS services are critical for building this multimodal embedding solution?
Several core AWS services are critical for constructing this scalable multimodal embedding solution. At its heart are **Amazon Bedrock** and its **Nova Multimodal Embeddings** for generating high-dimensional vector representations from video and audio, and **Nova Pro** (or **Nova Lite**) for intelligent tagging. **Amazon OpenSearch Service** (specifically with its k-NN plugin) serves as the scalable vector database to store and query these embeddings, alongside a traditional text index for metadata. **Amazon S3** (Simple Storage Service) is essential for storing the raw video files and the outputs of the embedding process. **Amazon EC2** provides the compute power for orchestrating the ingestion pipeline and managing the large-scale processing of video data. Additionally, **AWS IAM** is vital for securing access and permissions across these integrated services.
What are the cost considerations for deploying such a large-scale multimodal video search system?
Deploying a large-scale multimodal video search system, as demonstrated by the processing of over 8,000 hours of video, involves significant but manageable costs. The article provides a detailed breakdown, estimating a first-year total cost of approximately $23,632 to $27,328. This cost is primarily divided into two components: one-time ingestion costs and ongoing annual Amazon OpenSearch Service costs. Ingestion is dominated by Amazon Bedrock Nova Multimodal Embeddings usage, charged per second of processed video, and Nova Pro tagging. Amazon EC2 compute for orchestration also contributes but is comparatively smaller. OpenSearch Service costs can be optimized by using Reserved Instances over on-demand pricing. Careful planning and monitoring of resource usage, especially Bedrock API calls and OpenSearch cluster sizing, are key to managing and optimizing these expenditures.
Why is semantic search using multimodal embeddings superior to traditional keyword search for video content?
Semantic search, powered by multimodal embeddings, offers a profound advantage over traditional keyword search for video content by enabling a deeper, contextual understanding. Keyword search is limited to exact matches of words and phrases, often failing to capture synonyms, related concepts, or the visual and auditory nuances of video. For instance, searching for 'people talking' might miss a scene where individuals are silently communicating through gestures. Multimodal embeddings, however, convert the rich information from both audio and video into dense numerical vectors. These vectors capture the meaning, style, and context, allowing for queries based on conceptual similarity rather than just lexical matches. This means users can find relevant content even if the exact keywords aren't present, or describe a visual scene using natural language, significantly improving content discovery and relevance in large video archives.
How does the Amazon Nova family of models contribute to this solution?
The Amazon Nova family of models plays a central role in enabling this advanced multimodal video search solution. Specifically, **Amazon Nova Multimodal Embeddings** is the backbone for transforming raw video and audio into actionable high-dimensional vectors (embeddings). It intelligently segments videos and extracts combined audio-visual features, allowing for sophisticated semantic comparisons. This model is crucial for both text-to-video and video-to-video search functionalities. Additionally, **Amazon Nova Pro** (or the more cost-effective **Nova Lite**) is utilized for generating descriptive tags. These tags enrich the video metadata, enabling hybrid search scenarios where both conceptual similarity and specific keywords can be used to refine search results. Together, these Nova models empower the system to understand, categorize, and make searchable the complex information contained within video content.
What are the benefits of using OpenSearch Service's k-NN index in this architecture?
Amazon OpenSearch Service's k-NN (k-Nearest Neighbor) index is a cornerstone of this multimodal video search architecture, providing the capability to efficiently store and query high-dimensional vector embeddings. The primary benefit is enabling rapid and accurate semantic search. When a query (text or video) is converted into an embedding, the k-NN index can quickly find the 'k' most similar video embeddings within the vast dataset. This is far more efficient than traditional database lookups for vector similarity. It allows for real-time semantic search across millions of video segments. By integrating seamlessly with other OpenSearch capabilities, it also facilitates hybrid search, combining vector similarity with traditional text-based filtering and scoring, ensuring a powerful and flexible search experience that scales with the size of the media library.

