The Future of Apple Intelligence

Apple's announcement of Artificial Intelligence features in its iOS and macOS operating systems at WWDC this year signaled its unique technological perspective. Instead of dazzling users with cutting-edge features, the company identified specific elements of generative AI that haven't achieved product-market fit and built compelling user experiences around them. Apple's greatest strength is delighting users by finding creative ways to turn technology into an effective product, but its rollout strategy fails to take advantage of its main competitive differentiator.

Apple is the only company I will allow to have my personal data. It isn't because I trust them—it's because Apple's current business strategy doesn't provide a financial incentive to share my information. I wouldn't dream of adding pages of personal information to ChatGPT in a prompt. OpenAI is an immature, volatile company under pressure from activist investors to monetize everything they can.

I was excited when Craig Federighi emphasized on-device processing and later unveiled Private Cloud Compute—a secure way to offload more computationally intensive requests to a data center—as it signaled Apple was playing to their strengths. Apple has all of my data: contacts, notes, reminders, appointments, and text messages. They know when I go to sleep, how I've slept, and when I wake up. Every aspect of my daily routine, including when and where I go, is in a massive database about me. Why haven't any of the Apple Intelligence features released in iOS 18.1 or 18.2 taken advantage of this information?

Why don't Siri Suggestions in Messages sound like I wrote them? I've been using Messages for over a decade; surely enough data exists to replicate my unique voice. Why hasn't Siri become my personal assistant, surfacing the right information at the right time based on my past behavior? Why can't I receive personal insights into how I can improve the efficiency of my schedule? In short, why isn't Apple Intelligence helping me become a happier, more productive person when all of the requisite technological framework exists?

Perhaps Apple is iterating its way toward this goal, but it isn't adequately signaling the future it envisions. As Apple continues to expand its AI capabilities, it must leverage the wealth of user data it already has to truly enhance personalized experiences. The company's cautious approach to data privacy has won it loyalty, but this very asset—deep user trust—could also be the key to transforming its AI from functional to indispensable. Now is the time for Apple to double down on its unique strengths—delivering privacy-conscious, contextually aware intelligence that doesn't just react to commands but anticipates needs. This would not only secure Apple's leadership in the AI space but also set a new standard for how technology can enrich our lives.

The Tyranny of Expectations

Over the last decade, we’ve witnessed significant consolidation among technology companies. What was once a landscape full of small upstarts vying for dominance has amalgamated into a few immovable pillars, setting the direction for the entire tech sector. These companies, namely Google, Microsoft, and Apple, have used their power to prevent disruption in the industry. The dominance of tech giants has often stifled competition and innovation, as their vast resources allow them to acquire potential competitors or replicate their products swiftly. This consolidation has led to a tech ecosystem where a handful of companies control vast swathes of data, infrastructure, and consumer attention. Their extensive user bases and integrated ecosystems make it challenging for new entrants to gain a foothold, as the barriers to entry are extraordinarily high. It’s difficult to remember a time when the technology landscape favored the new upstart over these powerful incumbents; however, AI, with its transformative potential and rapid pace of advancement, represents a unique challenge to this status quo.

Startups like OpenAI benefit from a clean slate, unencumbered by legacy products and consumer expectations. This freedom allows them to push boundaries and take risks established companies might avoid. The agility and willingness to embrace failure in AI experimentation can lead to breakthroughs that tech giants, focused on stability and reliability, may miss. Today’s tech giants are tethered to existing consumer expectations built from years of using their products. Consumers don’t have these same baked-in expectations for upstarts like OpenAI, giving them far more leeway to experiment with an immature technology where results are often unpredictable. Users shrug when ChatGPT produces gibberish, but a result from Google Gemini instructing people to eat rocks sparks outrage. Ironically, the giants’ long track record of polished user experiences creates a tyranny of expectations that hampers their ability to innovate with immature, unproven technology.

The long-term future of the tech industry rests on the adaptability of these giants to the AI-driven paradigm shift. Will they leverage their resources to innovate and stay ahead, or will they become victims of their success, unable to move swiftly enough to embrace the new possibilities AI offers? I am confident that Google, Apple, and Microsoft, with their vast resources and established positions, are not at immediate risk of losing dominance. However, artificial intelligence presents an opening for smaller, more nimble competitors in a way we haven’t seen in years. The key is for these giants to recognize the potential of AI and use it to their advantage, ensuring their continued relevance and dominance in the industry.

Apple, the DOJ, and the DMA

While technology evolves at a breakneck pace, regulatory bodies designed in a bygone era of slow, incremental progress often find themselves in a perpetual game of catch-up. This dynamic is particularly evident in the United States Department of Justice’s (DOJ’s) scrutiny of Apple, a company known for its stringent control over its ecosystem. The complaint centers on accusations that Apple “suppress[ed] technologies that would have increased competition among smartphones.”

The Department of Justice cites five primary examples:

  • Suppressing Third-party Super Apps: By limiting the capabilities and integration of third-party applications, Apple effectively restricts the potential for all-in-one solutions that could compete with its services.

  • Blocking Cloud-streaming Apps: Apple's App Store policies have been criticized for limiting the functionality of cloud-streaming services, potentially stifling innovation and competition.

  • Preventing Third-party Messaging Apps from Achieving Quality Parity with Apple Messages: This practice allegedly undermines consumer choice by disadvantaging alternative messaging platforms.

  • Artificially Limiting Connectivity of Third-party Smartwatches: By doing so, Apple ensures that its own smartwatch remains the most compatible and feature-rich option for iPhone users.

  • Denying Access to Third-party, Cross-platform Digital Wallets: This restriction potentially limits the financial services ecosystem available to Apple device users, keeping them within Apple's proprietary Wallet app.

These allegations suggest a strategic effort by Apple to maintain its dominance in the smartphone and related markets by hindering competitors' ability to offer viable alternatives to consumers.

A Misguided Comparison to Microsoft's Antitrust Case

The complaint also makes a controversial comparison to the 1998 antitrust case against Microsoft, implying that the ruling against Microsoft paved the way for Apple's success in the smartphone era. This oversimplification overlooks the seismic shifts in technology, particularly the advent of mobile computing and smartphones, areas where Microsoft initially lagged. Apple's rise was less about Microsoft's constraints and more about seizing the opportunities presented by new technologies and consumer demands.

A Constructive Path Forward

The challenges and controversies surrounding the DOJ's approach to regulating Apple underscore a fundamental truth: litigation and investigations alone are not sufficient to foster a healthy, competitive tech ecosystem. What is needed is a clear set of industry expectations, codified into law by Congress, that balances innovation with fair competition. This legislative approach should be informed by a deep understanding of technology, a commitment to consumer welfare, and a nuanced appreciation of the global competitive landscape.

We must strive for a regulatory framework that is as dynamic and innovative as the technology it seeks to govern, ensuring a future where competition thrives, and consumers benefit from a wealth of choices.

Beginning a Spatial Journey

Composing this article from Mt. Hood on Apple Vision Pro

My mind was racing as I assertively tugged on the large glass door of my local Apple Store. Though I arrived ten minutes early for my appointment, a friendly employee welcomed me straight to a demonstration area. The store was nearly empty, and the early morning light beamed through the south-facing glass facade onto tables full of Apple hardware. Once I was seated, a second employee set a small tray containing my Apple Vision Pro on the table in front of me. The purpose-built tray satisfyingly cradled the contours of the headset. After learning how to pick it up without smudging the glass front, I gently lowered the headset into place. It was oddly anticlimactic, as if nothing had happened. I could see the store just as it was a moment ago. Then I remembered that I had a computer in front of my face. I wasn't seeing the room – I was seeing a real-time representation of the room. The illusion worked!

Over 30 minutes, I littered the store with windows, browsed the web on Mt. Hood, and swam with sharks. At one point, a butterfly gracefully flew in front of me and briefly landed on my outstretched hand. As it did, I thought I felt the gentle tickle of its spindly legs. Of course, I didn't actually feel it. It's all a clever illusion, but it's convincing enough that my brain failed to register a difference. As expected for a first-generation product, there were occasionally small cracks in the illusion, but I left impressed.

The demo was a tightly curated experience that matched the device's strengths. How does Apple Vision Pro feel in actual use? Is it a productivity tool or merely a fancy media consumption device?

An Infinite Canvas

In a previous article, I mentioned that Apple Vision Pro did not appear to have a “killer app” at launch; however, I’m starting to think the killer app may be the visionOS interface itself. The ability to effortlessly scale work beyond the boundaries of traditional displays is profoundly liberating. Apple Vision Pro frees the user from the constraints of the physical world. An infinite canvas is more than just a spatial characteristic; it's a metaphor for the seemingly boundless potential of spatial computing. Apple Vision Pro forces us to fundamentally rethink our relationship with computers and breaks new ground in workflow management.

Retina Resolution?

One of my primary concerns was resolution. Can Apple Vision Pro resolve text well enough to read for significant periods? While text doesn't appear quite as razor-sharp as it does on the 5K Studio Display, it’s more than sharp enough for a strain-free reading experience.

It's too early to determine how successful this product will be, but I can't help but feel that we're at the beginning of another pivotal moment in tech, where we fundamentally rethink its role in our lives. Today’s Vision Pro is a preview – a window into the future. The device is slightly heavy, the battery life isn’t what we’ve become accustomed to with Apple products, and visionOS contains a few irritating bugs. Despite these flaws, Apple Vision Pro offers a unique, visceral experience. My excitement for this product transcends the enthusiasm I felt at the release of the original iPhone. Apple Vision Pro reminds me of the wonder I experienced using my first computer, our IBM Personal System/2, as a child. My, how far we’ve come.

Preparing the Way

AI-generated image of a futuristic VR/AR headset using DALL-E 3.

In Fall 2015, Tim Cook proclaimed, "We believe the future of TV is apps." As it turns out, the future of television wasn't apps. Will the future of computing be spatial? On Friday, February 2nd, we will begin to find out when Apple Vision Pro, the company's first augmented reality headset, goes on sale.

Like most new platforms, Apple Vision Pro does not appear to have a "killer app." The future of Apple's new headset is far from assured. Apple needs third-party developers' cooperation to help secure the success of its new platform; however, the company's introduction of a Core Technology Fee (CTF) in response to the European Union's Digital Markets Act alienated a large swath of the developer community. Many popular streaming services like Netflix, YouTube, and Spotify appear uninterested in developing a native visionOS app.

Apple Vision Pro is the first headset that excites me as a user and developer. Despite the uncertainty, I ordered the new device and am actively developing a visionOS app. Its emphasis on augmenting reality rather than escaping it provides a compelling user story and ultimate flexibility for developing productivity-focused apps.

I'm looking forward to documenting my initial experience this Friday and can't wait to see what we collectively do with it.

Scaling Silicon

The Apple M1 Ultra - Two M1 Max dies connected via an interposer

Apple raised eyebrows in 2020 when the company announced plans to transition from Intel processors to chips designed in-house, marking the end of a 15-year partnership with Intel.1 For long-time followers of technology, it was reminiscent of Steve Jobs' announcement at the 2005 Worldwide Developers Conference (WWDC), where he revealed Apple's plan to transition from PowerPC to Intel's x86 architecture. Like that transition fifteen years earlier, the rollout of Apple silicon went astonishingly smoothly despite the fundamental incompatibility between the x86 and ARM instruction sets.

For the first time in recent memory, Intel, Advanced Micro Devices (AMD), and Apple have taken divergent strategies in microarchitecture design. Each strategy has its own strengths and weaknesses, so it will be fascinating to see how well each approach scales to the cost, efficiency, and performance demands of the future. AMD's chiplet design offers pricing advantages over Intel at the expense of bandwidth constraints and increased latency. Apple's system-on-a-chip (SoC) strategy requires larger dies but offers complete integration; however, we may be seeing the first cracks in that strategy after Apple scaled back plans for a high-end Mac Pro.2 According to Mark Gurman's reporting, a Mac Pro with an SoC larger than the M1 Ultra would likely have a starting cost of $10,000. To better understand the pricing challenges Apple may face when designing an SoC for the Mac Pro, let's explore how yield and cost change as die size increases.

For example, we can calculate the number of rectangular dies per circular wafer for Apple's basic M1 SoC and the M1 Max using basic geometry:

M1
  Die Dimensions:   10.9 mm x 10.9 mm
  Die Size:         118.81 mm²
  Scribe Width:     200 µm
  Wafer Diameter:   300 mm
  Edge Loss:        5.00 mm
  Dies Per Wafer:   478


M1 Max
  Die Dimensions:   22 mm x 20 mm
  Die Size:         440 mm²
  Scribe Width:     200 µm
  Wafer Diameter:   300 mm
  Edge Loss:        5.00 mm
  Dies Per Wafer:   117
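The per-wafer figures above can be reproduced with the standard dies-per-wafer approximation: divide the usable wafer area by the die-plus-scribe footprint, then subtract a correction term for partial dies lost along the circular edge. This is a sketch of that geometry, not TSMC's actual placement math; exact counts depend on how dies are laid out on the wafer.

```python
import math

def dies_per_wafer(die_w_mm, die_h_mm, scribe_mm=0.2,
                   wafer_d_mm=300.0, edge_loss_mm=5.0):
    """Approximate whole dies per circular wafer.

    Common approximation: (usable wafer area / die footprint)
    minus an edge-loss term proportional to the circumference.
    """
    usable_d = wafer_d_mm - 2 * edge_loss_mm              # exclude edge-exclusion ring
    footprint = (die_w_mm + scribe_mm) * (die_h_mm + scribe_mm)
    area_term = math.pi * (usable_d / 2) ** 2 / footprint
    edge_term = math.pi * usable_d / math.sqrt(2 * footprint)
    return round(area_term - edge_term)

print(dies_per_wafer(10.9, 10.9))  # M1: 478
print(dies_per_wafer(22.0, 20.0))  # M1 Max: 117
```

With the table's parameters (200 µm scribe, 300 mm wafer, 5 mm edge loss), this reproduces both counts above.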

The smaller M1 dies give us four times the quantity per wafer over the M1 Max. This is one factor influencing the cost of physical materials, but things get really interesting once we begin calculating yields. The process of fabricating working silicon wafers is delicate and rife with opportunities for imperfection. Defects can be caused by contamination, design margin, process variation, photolithography errors, and various other factors. Yield is a quantitative measure of the quality of the semiconductor process and is one of the most important factors in wafer cost. Defect density is measured in defects per square centimeter. Assuming a standard defect density of 0.1/cm² using a variable defect size yield model for TSMC's N5 node, our two wafers possess vastly different yields:3

M1
  Die Dimensions:       10.9 mm x 10.9 mm
  Die Size:             118.81 mm²
  Scribe Width:         200 µm
  Wafer Diameter:       300 mm
  Edge Loss:            5.00 mm
  Dies Per Wafer:       478
  Defect Density³:      0.1/cm²
  Yield:                88.9%
  Good Dies Per Wafer:  425

M1 Max
  Die Dimensions:       22 mm x 20 mm
  Die Size:             440 mm²
  Scribe Width:         200 µm
  Wafer Diameter:       300 mm
  Edge Loss:            5.00 mm
  Dies Per Wafer:       117
  Defect Density³:      0.1/cm²
  Yield:                65.4%
  Good Dies Per Wafer:  76

The disparity in good dies per wafer, widening from 4x to just over 5.5x, further inflates the larger die's already higher manufacturing cost. Just how much of a price difference? Extrapolating data from the Center for Security and Emerging Technology, we can estimate that a 300 mm wafer created using TSMC's N5 node costs just under $17,000.4 Therefore, our M1 has a theoretical materials cost of $40, while our M1 Max has a cost of $223. Given that an M1 Ultra is two M1 Max dies connected via an interposer, the raw silicon cost of the Ultra is likely around $450. While all these figures are nothing more than conjecture, they clearly illustrate how quickly costs skyrocket and yields shrink as die size increases.
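The per-die cost estimates follow directly: divide the wafer cost by the number of good dies it produces. A quick sketch of that arithmetic, using the roughly $17,000 N5 wafer estimate from the text (itself an extrapolation, so these are ballpark figures only):

```python
WAFER_COST_USD = 17_000  # estimated TSMC N5 wafer cost (extrapolated, per the text)

def cost_per_good_die(wafer_cost, good_dies_per_wafer):
    """Raw silicon cost per working die: wafer cost spread over good dies."""
    return wafer_cost / good_dies_per_wafer

m1_cost = cost_per_good_die(WAFER_COST_USD, 425)      # ~$40
m1_max_cost = cost_per_good_die(WAFER_COST_USD, 76)   # ~$223-224
m1_ultra_cost = 2 * m1_max_cost                       # two M1 Max dies: ~$450
print(f"M1 ${m1_cost:.0f}, M1 Max ${m1_max_cost:.0f}, M1 Ultra ~${m1_ultra_cost:.0f}")
```

Note this covers raw silicon only; packaging, the interposer, testing, and binning all add further cost.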

Where does this leave the Mac Pro and the Apple silicon roadmap? Cost-effective silicon capable of performance significantly higher than the current top-tier SoC will likely require a more advanced lithography process to decrease transistor size. A likely candidate is TSMC's N3 node, which is where Apple is headed over the next few product cycles. However, the rate at which manufacturers can decrease transistor size is slowing rapidly, as the chart below shows, so a more fundamental rethinking of chip manufacturing is on the horizon.

TSMC. (2021, June). 3nm Technology. https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_3nm

One certainty is that we are entering an exciting period of technological advancement that is beginning to disrupt the market. The ability of technology companies to adapt quickly is shifting from a mere competitive advantage to a requirement for survival. The future belongs to those who dare to think without boundaries.

References:

1 Gurman, M., & King, I. (2020, June 22). Apple-made computer chips coming to Mac, in split from Intel. Bloomberg.com. Retrieved December 26, 2022, from https://www.bloomberg.com/news/articles/2020-06-22/apple-made-computer-chips-are-coming-to-macs-in-split-from-intel?sref=9hGJlFio

2 Gurman, M. (2022, December 18). Apple scales back high-end Mac Pro plans, weighs production move to Asia. Bloomberg.com. Retrieved December 30, 2022, from https://www.bloomberg.com/news/newsletters/2022-12-18/when-will-apple-aapl-release-the-apple-silicon-mac-pro-with-m2-ultra-chip-lbthco9u

3 Cutress, I. (2020, August 25). ‘Better yield on 5nm than 7nm’: TSMC update on defect rates for N5. AnandTech. https://www.anandtech.com/show/16028/better-yield-on-5nm-than-7nm-tsmc-update-on-defect-rates-for-n5

4 Khan, S., & Mann, A. (2022, June 13). AI chips: What they are and why they matter. Center for Security and Emerging Technology. Retrieved December 30, 2022, from https://cset.georgetown.edu/publication/ai-chips-what-they-are-and-why-they-matter

Further Reading:

Agrawal, V. D. (1994). A tale of two designs: the cheapest and the most economic. Journal of Electronic Testing, 5(2–3), 131–135. https://doi.org/10.1007/bf00972074