Artificial intelligence (AI), once a futuristic concept, has now penetrated deep into our daily lives. From home appliances to smart devices, the Internet, healthcare, and autonomous driving, the list of AI applications influencing industry and lifestyle is virtually endless. As AI technology expands its coverage, semiconductors, notably AI semiconductors, are also undergoing dramatic changes. Not only traditional suppliers entrenched in the existing semiconductor industry but also global big-tech companies are turning up the heat on AI chip development programs, investing astronomical sums and engaging in a variety of M&A deals. Following is a list of questions and answers on AI semiconductors, expected to be a pillar of the future semiconductor industry.
What is the difference between ordinary semiconductors and AI semiconductors?
AI learns from enormous amounts of data and infers conclusions from it. Accommodating and processing learning data in a short period of time requires an AI semiconductor, a special processor. An AI semiconductor is a non-memory chip specialized in performing the enormous number of computations required to implement AI services at super-fast rates with extreme power efficiency. It is regarded as the brain of AI applications.
Before AI semiconductors were developed, a combination of CPUs (central processing units) and GPUs (graphics processing units) played the role of the brain. Although such processing units were capable of supporting AI applications, their combined performance was compromised by non-AI computations, and they suffered inefficiencies in cost and power consumption because they were not developed specifically for AI systems from the ground up. Processing enormous amounts of data the way the human brain does involves significant power consumption and requires super-fast processing rates. This is why AI-specific semiconductors began to emerge, optimized to process AI algorithms at the cost of reduced versatility in comparison with CPUs or GPUs. Since they are specifically designed to support the deep learning algorithms of AI systems, AI semiconductors are also referred to as neural processing units (NPUs).
How can AI semiconductors perform better than a combination of CPU and GPU?
A CPU is the brain of a computer, processing all inputs, outputs, and commands. As they are designed to process data sequentially, in series, CPUs are not optimized for AI applications that need to perform a massive number of computations in parallel. GPUs were considered an alternative to overcome this drawback. Although initially developed to process high-quality graphics in 3D games, GPUs have evolved into a kind of AI semiconductor because they process data in parallel.
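The sequential-versus-parallel distinction above can be illustrated with a toy sketch. A neural-network layer boils down to a matrix-vector multiply: a serial processor walks through the products one at a time, while parallel hardware (GPU or NPU) computes many of them at once. This is an illustrative simplification, not actual chip behavior; the function names are made up for the example.

```python
import numpy as np

def layer_sequential(weights, inputs):
    """Element-by-element computation, the way a serial processor works through it."""
    rows, cols = weights.shape
    out = [0.0] * rows
    for i in range(rows):
        for j in range(cols):
            out[i] += weights[i][j] * inputs[j]
    return out

def layer_parallel(weights, inputs):
    """The same computation expressed as one bulk operation, which parallel
    hardware can spread across many processing units at once."""
    return weights @ inputs

# Both paths produce the same result; only the execution strategy differs.
W = np.random.rand(4, 3)
x = np.random.rand(3)
assert np.allclose(layer_sequential(W, x), layer_parallel(W, x))
```

The bulk form is what AI-specific hardware accelerates: the work per element is trivial, but there are enormously many elements, so throughput comes from doing them simultaneously rather than faster one by one.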
As GPUs were not originally made for AI computations, a new breed of semiconductors was born: NPUs in FPGA or ASIC* format, specially aligned to the needs of AI and designed to maintain the parallel processing mechanism of GPUs. Optimized to perform parallel computations, GPUs are effective in processing the big learning data required by AI, but further optimization is needed because AI algorithms must still infer conclusions from the analyzed data. In addition, the interface with the memory that stores intermediate data during learning and inference has a significant impact on system performance and energy consumption. AI-specific NPUs factor all those issues into their design, resulting in high performance and excellent energy efficiency. When we take a picture with a smartphone, the NPU recognizes the person so that the image around the person can be blurred, while the GPU superimposes a fuzzy filtering layer over the picture.
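The smartphone portrait-mode example can be sketched in a few lines. This is a hypothetical, heavily simplified pipeline: in reality the NPU runs a segmentation model to produce the person mask and the GPU applies a sophisticated filter, but the division of labor looks roughly like this (all names here are invented for illustration).

```python
import numpy as np

def box_blur(image, k=3):
    """Very simple blur: average each pixel over a k x k neighborhood
    (standing in for the GPU's filtering layer)."""
    padded = np.pad(image, k // 2, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def portrait_blur(image, person_mask):
    """Keep pixels inside the person mask sharp; blur the background.
    The mask stands in for the NPU's person-recognition output."""
    blurred = box_blur(image)
    return np.where(person_mask, image, blurred)

# Toy 4x4 grayscale "photo" with a 2x2 "person" in one corner.
img = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True  # pretend the NPU detected a person here
result = portrait_blur(img, mask)
assert np.array_equal(result[:2, :2], img[:2, :2])  # person stays sharp
```

The point of the example is the split: the recognition step (producing `person_mask`) is the irregular, inference-heavy work suited to an NPU, while the uniform per-pixel filtering is the classic parallel workload a GPU handles well.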
FPGAs (Field Programmable Gate Arrays) are highly flexible, as the hardware inside the chip can be reprogrammed to suit the intended purpose, whereas ASICs (Application Specific Integrated Circuits) are customized to specific purposes, featuring high efficiency. AI chips in ASIC format are primarily developed by global IT companies. Neuromorphic chips are modeled after the neuron (nerve cell) and synapse (neuronal junction) architecture of the human brain. They are next-generation AI semiconductors under development that feature higher performance and efficiency but lower versatility than their predecessors.
* Application Specific Integrated Circuit: a semiconductor customized to implement a system specific to the requirements of the applicable application.
Who makes AI semiconductors and how big is the AI semiconductor market?
As AI has emerged as a game changer for the future IT industry, the AI semiconductor market is riding on the boom. To secure vantage points in the AI semiconductor market, not only traditional chipmakers including Qualcomm, Intel, and NVIDIA but also global big-tech companies such as SK telecom, Google, Amazon, Apple and Tesla are jumping on the AI chip bandwagon. Announcing a transition toward an AI service company, SK telecom is spearheading the AI semiconductor industry after commercializing its AI chip SAPEON X220 in 2020.
Some other big-tech companies are also considering developing proprietary AI chips from the ground up, as it is more advisable to develop AI semiconductors specific to their own service offerings. For example, Tesla is developing proprietary AI chips specific to autonomous driving applications.
Market research firm Gartner projected that the AI semiconductor market will grow to 34.3 billion dollars (approximately 40 trillion won) by 2023, and account for 31.3% of the entire system semiconductor market by 2030.
Among the semiconductors used for AI applications, highly versatile CPU and GPU technologies have already entered a mature stage, and market growth is now being driven by optimized low-power, high-efficiency ASIC solutions. Market demand for AI chips is expected to come not only from high-performance servers used in data centers but also from devices such as automobiles and smartphones, and to shift from learning applications to inference systems. Initially, demand for learning applications such as machine learning algorithms will be significant, but inference chips that implement AI services based on learned data are expected to drive market growth in the long run.
What has prompted SK telecom to develop proprietary AI chips?
SK telecom unveiled SAPEON X220, Korea’s first AI semiconductor, in November 2020. AI services are one of the five business lines of SK telecom, and many SK telecom services have incorporated deep learning, vision AI, and voice/chat AI technologies. In recognition of the market growth dynamics and rapidly increasing internal demand for 5G MEC and machine learning servers, SK telecom ventured into this future-oriented semiconductor market.
SAPEON is a compound of SAPiens, meaning mankind, and aEON, indicating eternity, signifying SK telecom’s will to provide the sustainable benefits of AI chip-powered innovation to the human race. SAPEON performs deep learning computations 1.5 times faster than a GPU, consumes 80% less energy, and sells for half the price of a GPU. SK telecom unveiled SAPEON to global audiences at this year’s CES 2022 and MWC22. Next year, the company plans to release a successor to the X220 optimized for inference, the X330, which adds a real-time learning feature.
SK telecom will also make a greater dent in the global market by launching an AIaaS (AI as a Service) strategy to provide an integrated package of solutions required for AI service delivery, ranging from AI chip-based hardware to AI algorithms and APIs (application programming interfaces).
How will SK telecom tap into the global market?
To venture into the global AI semiconductor market this year, SK telecom organized the SK ICT Alliance with SK square and SK hynix and incorporated a US subsidiary specialized in the AI chip business, SAPEON Inc. In addition, the company established SAPEON Korea to cover South Korea and the rest of Asia.
SAPEON Inc. will target global big-tech companies based in the United States as primary customers and will serve as an outpost from which SK telecom will drive the entry of SAPEON into the global market and seek to expand the coverage of its AI chip business.
This content has been created on the basis of the contents in SK telecom Newsroom.