 

Introducing GMI Cloud: New On-Demand Instances Accelerate Access to NVIDIA GPUs

GMI Cloud, with roots in Taiwan, uses its supply chain advantage to give companies instant, affordable access to NVIDIA GPU compute power as they navigate the race to adopt AI
Date: 2024-05-23

SANTA CLARA, CALIF. -- GMI Cloud, the emerging GPU cloud platform designed for AI and ML workloads, is accelerating access to NVIDIA GPUs. Its new On-Demand cloud compute offering, available today, is built for companies that are serious about leveraging AI and moving from prototyping to production. Users can access GMI Cloud’s On-Demand GPU computing resources almost instantly.

The Surge in Demand for Compute

The current surge in demand for AI compute power requires companies to be strategic in their approach. In a fast-evolving landscape, organizations are being asked to pay a 25-50% down payment and sign up for a 3-year contract with the promise of getting access to GPU infrastructure in 6-12 months. The shift to AI has left companies needing more flexible computing power.
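As a rough illustration of what those terms mean in practice, the sketch below compares the capital locked up by a reserved contract with pay-as-you-go spending. The reserved rate, fleet size, and utilization figures are hypothetical placeholders; only the 25-50% down payment, the 3-year term, and the $4.39/hour on-demand H100 rate quoted later in this release come from the text.

```python
# Back-of-envelope comparison of a reserved GPU contract vs. on-demand usage.
# All figures marked "hypothetical" are placeholders for illustration only.
GPUS = 16                      # hypothetical fleet size
HOURS_PER_YEAR = 24 * 365
CONTRACT_YEARS = 3             # contract length cited above

# Reserved contract: hypothetical $2.50 per GPU-hour committed rate.
reserved_rate = 2.50
contract_total = GPUS * reserved_rate * HOURS_PER_YEAR * CONTRACT_YEARS
down_payment_low = 0.25 * contract_total   # 25% down payment cited above
down_payment_high = 0.50 * contract_total  # 50% down payment cited above

# On-demand: $4.39 per GPU-hour (the H100 rate quoted later in this release),
# paid only for the hours actually used (hypothetical utilization).
on_demand_rate = 4.39
hours_used_per_gpu = 1_000
on_demand_year_one = GPUS * on_demand_rate * hours_used_per_gpu

print(f"3-year reserved contract total:  ${contract_total:,.0f}")
print(f"Up-front down payment (25-50%):  ${down_payment_low:,.0f} to ${down_payment_high:,.0f}")
print(f"On-demand spend in year one:     ${on_demand_year_one:,.0f}")
```

With these placeholder numbers, the down payment alone ties up several hundred thousand dollars months before any GPU is delivered, which is the flexibility gap the on-demand model targets.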

Instant GPUs, Infinite AI

Leveraging its ties to Realtek Semiconductors (TPE: 2379) and GMI Technologies (TPE: 3312), as well as Taiwan’s robust supply chain ecosystem, GMI Cloud is able to ensure quicker deployment and higher operational efficiency. Its physical presence in Taiwan cuts GPU delivery times from months to days compared with non-Taiwanese GPU providers. GMI Cloud is poised to become the most competitive new entrant in this market.

“Our mission is to empower humanity’s AI ambitions with instant, efficient GPU cloud,” said Alex Yeh, Founder and CEO of GMI Cloud. “We’re not just building a cloud—we’re creating the backbone of the AI era. GMI Cloud is dedicated to transforming how developers and data scientists leverage NVIDIA GPUs and how all humans benefit from AI.”

Why It Matters

Technology leaders are seizing the opportunities presented by the growing AI wave, yet organizations of all sizes are hitting walls when it comes to accessing compute power.

Startups, for example, don’t have the budget or long-term forecasting to pay a down payment for a large GPU installation. They need the flexibility to scale up or down based on their traction, which means paying for GPUs as an operating expense rather than locking in capital that could be spent on hiring competitive AI talent. On-Demand access provides an instant, cost-effective, and scalable option for teams that need GPU compute without requiring special skills to set up the infrastructure.

Large enterprises face hurdles as well. Enterprise data science teams, for example, require the flexibility to experiment, prototype, and evaluate AI applications to get ahead of competitors before the AI wave passes them by. However, not every enterprise is ready to commit to the long-term contracts and unproven capital expenditures required for larger compute reserves. The flexibility of instant GPU access allows those data science teams to run several prototyping projects that require processing large datasets or fine-tuning models without taking significant investment risks.

Get Started Now

GMI Cloud is a GPU cloud platform, powered by NVIDIA, with a rich Kubernetes-managed, preloaded software stack designed specifically for AI and ML workloads. The stack includes prebuilt images with NVIDIA TensorRT and will soon support all NVIDIA prebuilt containers, including inference servers such as NVIDIA Triton. At $4.39/hour for NVIDIA H100 Tensor Core GPUs, GMI Cloud offers affordable on-demand access compared to larger organizations that charge up to 4X that rate for on-demand capacity in an effort to lock users into larger reserve packages. The instance types and sizes are purposefully designed to efficiently deploy, fine-tune, and run inference on models ranging from Llama 3 8B and 70B to Mixtral 8x7B, Google Gemma, Stable Diffusion, and more.
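To make the Kubernetes-managed stack and Triton support described above concrete, here is a minimal sketch of launching NVIDIA's public Triton Inference Server container on a Kubernetes cluster with GPU support, using the standard Kubernetes Python client. The pod name, namespace, image tag, and model-repository path are illustrative assumptions, not a GMI Cloud-specific API.

```python
# Minimal sketch: run NVIDIA's public Triton Inference Server container on a
# Kubernetes cluster that exposes GPUs via the NVIDIA device plugin. Pod name,
# namespace, image tag, and paths are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="triton-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="triton",
                # Public Triton image from NVIDIA's NGC catalog (tag assumed).
                image="nvcr.io/nvidia/tritonserver:24.05-py3",
                args=["tritonserver", "--model-repository=/models"],
                resources=client.V1ResourceRequirements(
                    # Request a single GPU (e.g., one H100) from the device plugin.
                    limits={"nvidia.com/gpu": "1"},
                ),
                # HTTP inference port; gRPC (8001) and metrics (8002) omitted.
                ports=[client.V1ContainerPort(container_port=8000)],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

A real deployment would mount a model repository volume at /models and typically use a Deployment plus Service rather than a bare pod; the point here is only that a GPU-backed Triton container is an ordinary Kubernetes workload once the instance is provisioned.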


