In a new partnership, Samsung Electronics and NAVER will collaborate to develop semiconductor solutions optimized for hyperscale artificial intelligence.
The two companies plan to combine their respective strengths in semiconductor design and manufacturing with their established AI capabilities to maximize the speed and power efficiency of large-scale AI models.
Samsung Electronics, the world leader in advanced memory technology, and NAVER Corporation, a global internet company with top-tier AI technology, announced today a wide-ranging collaboration to develop semiconductor solutions tailored for hyperscale artificial intelligence (AI) models. The companies intend to pool their hardware and software resources to significantly accelerate the handling of massive AI workloads, leveraging Samsung’s next-generation memory technologies such as computational storage, processing-in-memory (PIM), processing-near-memory (PNM), and Compute Express Link (CXL).
Recent advances in hyperscale AI have caused an explosion in the amount of data that must be processed. However, the performance and efficiency limitations of today’s computing systems make it difficult to meet these heavy computational requirements, fueling the need for new AI-optimized semiconductor solutions.
Developing such solutions requires a deep convergence of AI and semiconductor technologies. To take the performance and power efficiency of large-scale AI to a new level, Samsung is combining its expertise in semiconductor design and manufacturing with NAVER’s experience in developing and verifying AI algorithms and AI-driven services.
Over many years, Samsung has introduced memory and storage that support high-speed data processing in AI applications, ranging from computational storage (SmartSSD) and PIM-enabled high-bandwidth memory (HBM-PIM) to next-generation memory supporting the Compute Express Link (CXL) interface. Working with NAVER, Samsung will now refine these memory technologies to advance large-scale AI systems.
NAVER will continue to improve HyperCLOVA, its hyperscale language model with more than 200 billion parameters, while refining its compression algorithms to create a simpler model that is significantly more computationally efficient.
“Through our collaboration with NAVER, we will develop cutting-edge semiconductor solutions to solve the memory bottleneck in large-scale AI systems,” said Jinman Han, Executive Vice President of Memory Global Sales & Marketing at Samsung Electronics. “With tailored solutions that reflect the most pressing needs of AI service providers and users, we are committed to broadening our market-leading memory lineup, including computational storage, PIM, and more, in order to fully accommodate the ever-increasing scale of data.”
“By combining our acquired knowledge and know-how from HyperCLOVA with Samsung’s prowess in semiconductor manufacturing, we believe we can create an entirely new class of solutions that can better tackle the challenges of today’s AI technologies,” said Suk Geun Chung, Head of NAVER CLOVA CIC. “Through this strategic partnership, we look forward to strengthening our AI capabilities and gaining a further edge over our competitors in this field.”