Article Source: AI Monkey

Image Source: Generated by Wujie AI
Apple's recently launched M3 chips give artificial intelligence developers the ability to work with large models with billions of parameters directly on a MacBook. As Apple stated in its announcement, "Support for up to 128GB of memory unlocks workflows that were previously impossible on a notebook."
Currently, the 14-inch MacBook Pro is available with M3, M3 Pro, and M3 Max chips, while the 16-inch MacBook Pro comes only in M3 Pro and M3 Max configurations. Apple also claims that its enhanced Neural Engine accelerates powerful machine learning (ML) models while preserving privacy.
Developers can now run the largest open-source LLM (Falcon with 180 billion parameters) with minimal quality loss on the 14-inch laptop.
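To see why 128GB of memory matters for a model of Falcon 180B's size, a back-of-the-envelope estimate helps. The sketch below is a rule of thumb, not anything from the article: the quantization level (4-bit, a common choice for local inference with minimal quality loss) and the 20% runtime overhead are both assumptions.

```python
# Rough memory estimate for running a quantized LLM locally.
# Assumptions (not from the article): ~4-bit quantization and ~20%
# overhead for the KV cache and runtime buffers.

def model_memory_gb(params_billions: float,
                    bits_per_param: float = 4.0,
                    overhead: float = 1.2) -> float:
    """Approximate RAM (GiB) needed to hold the weights plus overhead."""
    bytes_per_param = bits_per_param / 8
    return params_billions * 1e9 * bytes_per_param * overhead / 2**30

# Falcon 180B at 4-bit lands around 100 GB -- inside a 128GB unified
# memory budget, but far beyond the 32-36GB earlier chips topped out at.
print(f"{model_memory_gb(180):.0f} GB")
```

By the same arithmetic, a 16-bit (unquantized) copy of the weights alone would need roughly four times as much memory, which is why quantization is what makes a model this size feasible on a laptop at all.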
However, running open-source LLMs on a laptop is nothing new; AI practitioners have previously experimented with the M1. Anshul Khandelwal, co-founder and CTO of Invideo, ran a 65-billion-parameter open-source LLM on his M1-powered MacBook. He said, "The day when every techie runs a local LLM is not far off."
Aravind Srinivas, co-founder and CEO of Perplexity.ai, joked that once MacBooks become powerful enough in FLOPs per chip, large organizations running fleets of MacBooks over high-speed interconnects would be regulated and required to report their existence to the government.
M3 for AI Workloads
Apple claims that the M3 series chips are up to 15% faster than the M2 series and up to 60% faster than the M1 series. Still, the M2 and M3 differ meaningfully in performance and other specifications. The M3 Pro has the same total core count as its M2 Pro predecessor but a different balance of performance and efficiency cores (6 of each, versus 8 performance and 4 efficiency), and it supports up to 36GB of memory instead of 32GB.
The M3 generation supports up to 128GB of unified memory, double the 64GB ceiling of the M1 generation. This expanded capacity is particularly important for AI/ML workloads, which need large amounts of memory to train and run large language models and complex algorithms.
In addition to the enhanced neural engine and expanded memory support, the M3 chip also adopts a redesigned GPU architecture.
This architecture is built for outstanding performance and efficiency, combining Dynamic Caching, hardware-accelerated mesh shading, and ray tracing. These advances are designed to accelerate AI/ML workloads and improve overall computing efficiency.
Unlike traditional GPUs, the M3's GPU features "Dynamic Caching," which allocates local memory in hardware in real time to raise GPU utilization and significantly boost performance in demanding professional applications and games.
Users of graphics-intensive applications, from game development to photo-focused AI tools such as Photoshop, will benefit from the GPU's capabilities. Apple claims speeds of up to 2.5 times the M1 series, with hardware-accelerated mesh shading and improved performance at lower power consumption.
Apple and the World
Apple is not alone, as other manufacturers such as AMD, Intel, Qualcomm, and NVIDIA are also heavily investing in enhancing edge capabilities to enable running large AI workloads on laptops and personal computers.
For example, AMD recently introduced Ryzen AI, which it bills as the first and only integrated AI engine in an x86 Windows laptop.
Intel, on the other hand, is pinning its hopes on the 14th-generation Meteor Lake, its first processor with a tiled architecture that mixes and matches different core types, such as high-performance and low-power cores, to balance performance and power efficiency.
Qualcomm also recently launched the Snapdragon X Elite. CEO Cristiano Amon claimed it matches the peak performance of Apple's M2 Max chip while consuming 30% less power. Meanwhile, NVIDIA is investing in edge use cases as well, quietly designing Arm-based CPUs compatible with the Microsoft Windows operating system.
AI developers are increasingly running and experimenting with language models locally, and the developments in this field are truly fascinating. Given the latest advancements in this field, Apple is slowly but surely becoming the preferred choice for AI developers.