Apple Denies Limiting AI Functionality to iPhone 15 Pro Series for Profit, Citing Insufficient Computing Power on Older Devices
Apple has announced that its Apple Intelligence (AI) feature will ship with iOS 18 and later, but only on devices equipped with an Apple A17 Pro or Apple M-series chip. In practice, that means only the iPhone 15 Pro, iPhone 15 Pro Max, and the iPad and Mac models with M1–M4 chips will be able to use the feature.
Why don't other devices support Apple Intelligence? Normally this would be read as a ploy to push users toward buying new devices, as Apple has been accused of doing with previous new features. However, Apple has responded to these allegations, denying that it is restricting the feature to the latest iPhone models for profit.
According to the head of Apple's AI team and its marketing chief, the limitation exists because older iPhone models lack the computing power to run Apple Intelligence smoothly. They noted that any MacBook or iPad with an Apple M-series chip can run Apple Intelligence, and that the A17 Pro chip in the iPhone 15 Pro series has a 16-core Neural Engine (NPU) delivering 35 TOPS of computing power, roughly twice that of the A16 chip.
The AI models demand extremely high computing power during inference, and while it is theoretically possible to run them on older devices, they would be too slow to be usable. Apple has pushed back on the notion that it is limiting the feature to sell more new iPhones, saying that if this were a scheme to sell new devices, it would be "smart" enough to exclude older Mac and iPad models from supporting the feature as well.
It's worth noting that Apple also said memory capacity plays a role in determining whether AI models are usable on a device, but did not specify how much of a difference 6 GB versus 8 GB of memory makes when running its AI models.
The iPhone 14 Pro series has 6 GB of memory, while the iPhone 15 Pro series has 8 GB. In terms of computing power, the A16 chip offers 17 TOPS, well below the 35 TOPS of the A17 Pro.
Therefore, the main factor still appears to be the chip's computing power, which determines the speed of AI model inference. The 2 GB memory difference may have less impact, but it is still a factor.
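The arithmetic behind this comparison can be sketched roughly as follows. Note that everything here beyond the article's NPU figures is an illustrative assumption: the model size (~3 billion parameters) and 4-bit quantization are hypothetical stand-ins for whatever Apple actually ships on-device, and real inference speed does not scale purely with peak TOPS.

```python
# Back-of-envelope sketch: why NPU throughput and RAM both gate on-device inference.
# The TOPS figures are those cited in the article; the model size and
# quantization level are hypothetical assumptions, not Apple-published specs.

A16_TOPS = 17      # Apple A16 Neural Engine (iPhone 14 Pro series)
A17_PRO_TOPS = 35  # Apple A17 Pro Neural Engine (iPhone 15 Pro series)

# Hypothetical on-device model: ~3 billion parameters at 4-bit quantization.
params = 3e9
bytes_per_param = 0.5  # 4 bits = half a byte per weight
model_ram_gb = params * bytes_per_param / 1e9

# If token generation scaled linearly with NPU throughput, the A16 would
# run the same workload at roughly half the A17 Pro's speed.
slowdown = A17_PRO_TOPS / A16_TOPS

print(f"Model weights alone: ~{model_ram_gb:.1f} GB of RAM")
print(f"A16 vs A17 Pro: ~{slowdown:.1f}x slower on the same workload")
```

Even under these generous assumptions, the model's weights alone consume a sizeable share of a 6 GB phone's memory once the OS and foreground apps are accounted for, which is consistent with Apple citing both compute and memory as constraints.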
Additionally, some Apple Intelligence features run entirely on the device and depend on local chip performance, while others call the OpenAI API directly, sending data to cloud servers for processing. Should those cloud-based features, then, be opened to all users?