The Future of Apple Intelligence

Apple's announcement of artificial intelligence features in its iOS and macOS operating systems at WWDC this year signaled its unique technological perspective. Instead of dazzling users with cutting-edge features, Apple identified specific elements of generative AI that haven't achieved product-market fit and built compelling user experiences around them. Apple's greatest strength is delighting users by finding creative ways to turn technology into an effective product, but its rollout strategy fails to take advantage of its main competitive differentiator.

Apple is the only company I will allow to have my personal data. It isn't because I trust them—it's because Apple's current business strategy doesn't provide a financial incentive to share my information. I wouldn't dream of adding pages of personal information to ChatGPT in a prompt. OpenAI is an immature, volatile company under pressure from activist investors to monetize everything they can.

I was excited when Craig Federighi emphasized on-device processing and later unveiled Private Cloud Compute—a secure way to offload more computationally intensive requests to a data center—as it signaled Apple was playing to their strengths. Apple has all of my data: contacts, notes, reminders, appointments, and text messages. They know when I go to sleep, how I've slept, and when I wake up. Every aspect of my daily routine, including when and where I go, is in a massive database about me. Why haven't any of the Apple Intelligence features released in iOS 18.1 or 18.2 taken advantage of this information?

Why don't Siri Suggestions in Messages sound like I wrote them? I've been using Messages for over a decade; surely enough data exists to replicate my unique voice. Why hasn't Siri become my personal assistant, surfacing the right information at the right time based on my past behavior? Why can't I receive personal insights into how I can improve the efficiency of my schedule? In short, why isn't Apple Intelligence helping me become a happier, more productive person when all of the requisite technological framework exists?

Perhaps Apple is iterating its way toward this goal, but it isn't adequately signaling the future it envisions. As Apple continues to expand its AI capabilities, it must leverage the wealth of user data it already has to truly enhance personalized experiences. The company's cautious approach to data privacy has won them loyalty, but this very asset—deep user trust—could also be the key to transforming their AI from functional to indispensable. Now is the time for Apple to double down on its unique strengths—delivering privacy-conscious, contextually aware intelligence that doesn't just react to commands but anticipates needs. This would not only secure Apple's leadership in the AI space but also set a new standard for how technology can enrich our lives.

The Meaning of Art

Since ChatGPT burst onto the scene in the Fall of 2022, the concept of "AI art" has bothered me. I couldn't quite understand why, but I knew it rubbed me the wrong way. After much thought, I now understand why I do not consider AI-generated content art.

My fundamental apprehension about AI-generated art isn't the quality – it's that it fails to address the purpose of art. Art speaks to the infinite depths of the human experience, the endless color palette of our emotions, our greatest fears, and our inexhaustible aspirations. Art communicates an idea. The creation of art is often a long journey of self-reflection. Art is as much a cathartic learning experience for the artist as it is an intellectual journey for the viewer. Short-circuiting the creative process with artificial intelligence yields no such insight. We learn nothing from the process, and our ability to grow from the experience has been taken from us.

Art is a uniquely human experience. When Vladimir Horowitz returned to Moscow in 1986, it was the 81-year-old pianist's first recital in the Soviet Union since he had left his homeland 61 years earlier to make a career in the West. He was well past his prime, yet many in the audience cried unabashedly during portions of the recital. Horowitz returned to the stage for six curtain calls after playing three encores. Listen to his interpretation of Liszt's Sonetto 104 del Petrarca (No. 5 from the Deuxième année of the Années de pèlerinage) or Scriabin's Etude Op. 8, No. 10. It is the sound of a man who has lived a full life, who is openly struggling in front of the audience, proof that youth is an aberration and wisdom only comes with time. Unfortunately, that hard-earned wisdom almost always comes too late to be fully realized by unencumbered virtuosity. This performance is a reflection of the complexities and ironies of life. It has taken on a deeper meaning than the notes on the page.

Wladyslaw Szpilman spent the last 56 years of his life without his family after they were all murdered by the Nazis in World War II. Though a simple piece, Chopin's Nocturne in C-sharp minor takes on a haunting, melancholy atmosphere under Mr. Szpilman's fingers.

These small fragments of beauty, sparkling against the dark backdrop of an otherwise ugly world, can never be replicated by artificial intelligence. Art is a uniquely human celebration of ambition, resilience, and creativity that artificial intelligence can never match.

The Tyranny of Expectations

Over the last decade, we’ve witnessed significant consolidation among technology companies. What was once a landscape full of small upstarts vying for dominance has amalgamated into a few immovable pillars, setting the direction for the entire tech sector. These companies, namely Google, Microsoft, and Apple, have used their power to prevent disruption in the industry. The dominance of tech giants has often stifled competition and innovation, as their vast resources allow them to acquire potential competitors or replicate their products swiftly. This consolidation has led to a tech ecosystem where a handful of companies control vast swathes of data, infrastructure, and consumer attention. Their extensive user bases and integrated ecosystems make it challenging for new entrants to gain a foothold, as the barriers to entry are extraordinarily high. It’s difficult to remember a time when the technology landscape favored the new upstart over these powerful incumbents; however, AI, with its transformative potential and rapid pace of advancement, represents a unique challenge to this status quo.

Startups like OpenAI benefit from a clean slate, unencumbered by legacy products and consumer expectations. This freedom allows them to push boundaries and take risks established companies might avoid. The agility and willingness to embrace failure in AI experimentation can lead to breakthroughs that tech giants, focusing on stability and reliability, may miss. Today’s tech giants are tethered to existing consumer expectations built from years of using their products. Consumers don’t have these same baked-in expectations for upstarts like OpenAI, giving them far more leeway to experiment with an immature technology where results are often unpredictable. Users shrug when ChatGPT provides a result containing gibberish, but a result from Google Gemini instructing people to eat rocks sparks outrage. Ironically, the giants' long track record of creating polished user experiences creates a tyranny of expectations that hurts their ability to innovate with immature, unproven technology.

The long-term future of the tech industry rests on the adaptability of these giants to the AI-driven paradigm shift. Will they leverage their resources to innovate and stay ahead, or will they become victims of their success, unable to move swiftly enough to embrace the new possibilities AI offers? I am confident that Google, Apple, and Microsoft, with their vast resources and established positions, are not at immediate risk of losing dominance. However, artificial intelligence presents an opening for smaller, more nimble competitors in a way we haven’t seen in years. The key is for these giants to recognize the potential of AI and use it to their advantage, ensuring their continued relevance and dominance in the industry.

The Road to AGI is Longer Than You Think

In 1965, Time Magazine made bold projections about the wonders awaiting us from the burgeoning field of technology. While we have seen technological wonders in the decades since, almost none of the predictions featured in the magazine came to pass:

"Men such as IBM Economist Joseph Froomkin feel that automation will eventually bring about a 20-hour work week, perhaps within a century, thus creating a mass leisure class. Some of the more radical prophets foresee the time when as little as 2% of the work force will be employed, warn that the whole concept of people as producers of goods and services will become obsolete as automation advances. Even the most moderate estimates of automation's progress show that millions of people will have to adjust to leisurely, 'nonfunctional' lives, a switch that will entail both an economic wrench and a severe test of the deeply ingrained ethic that work is the good and necessary calling of man."

Technology experts continue to overestimate the positive impact of technological advances on the average person. In fact, many recent pronouncements have a very similar ring to the quote above. This line of thought is particularly pervasive in the artificial intelligence space today. It's understandable that these sweeping claims are appearing anew – perhaps no technology has advanced so rapidly since the dawn of the information era. While these accomplishments are remarkable, the fantastical claims that artificial general intelligence (AGI) is just around the corner are incorrect for several reasons.

"The Last Mile"

Nearly every seasoned engineer is familiar with the 90/10 rule, which states that the first 90% of the work required to finish a project will take roughly 10% of the timeline, while the last 10% of the work will consume the remaining 90%. While this rule of thumb isn't always a perfect indicator, we see this pattern play out repeatedly.

Five years ago, Tesla appeared poised to deliver a fully autonomous Level 5 vehicle within a few years; however, the cars manufactured today remain at humble partial automation (Level 2). Microprocessor design is another example. Transistor size has decreased far more slowly over the last decade than in previous decades, with each successive generation shrinking more slowly than the one before. As it turns out, Moore's Law has a limit. This slowed progress mainly stems from significantly more difficult engineering problems as density increases beyond a certain point. Quantum effects such as electron tunneling, where electrons pass through an extremely thin gate, suddenly become major roadblocks.

Challenge Parity

The trajectory of progress is uncertain and often veers off in unexpected directions. For instance, while the digital age promised enhanced connectivity and access to information, it also gave rise to issues like misinformation, cyberbullying, and digital addiction – challenges that were scarcely anticipated as we heralded the arrival of the internet age. This tendency to overlook potential pitfalls in the face of new technology underscores a common shortfall in our predictive mental models: they often mirror the current zeitgeist and neglect the nuanced complexities of the future.

Systematic Underestimation of Inequality and Corporate Greed

The predominance of Silicon Valley as a hub for technological innovation and prediction can create a skewed perspective on the future of technology. The region's unique ecosystem of venture capital, start-ups, and cutting-edge research tends to foster an echo chamber of ideas and optimism, primarily driven by those who benefit most from technological advances. This demographic, often composed of affluent, technologically savvy individuals, may not fully grasp the broader social and economic challenges faced by less privileged communities worldwide. Consequently, predictions from this vantage point can overlook crucial issues such as digital divides, access to technology, and the varying impacts of automation on different socio-economic groups.

While the advancements in technology we have seen in recent years are impressive, we must approach predictions about the future of technology with caution. The road to AGI is longer than we think, and we must be mindful of the potential pitfalls and challenges that may arise along the way. It is important to consider the impact of technology on all members of society, especially those who may be less privileged. By taking a more nuanced and inclusive approach to technological progress, we can ensure that the benefits of these advancements are more widely shared, and that we are better prepared to address the challenges that lie ahead.

Interpretability in Machine Learning

Since OpenAI released its large language model (LLM) chatbot, ChatGPT, machine learning and artificial intelligence have entered mainstream discourse. The reaction has been a mix of skepticism, trepidation, and panic as the public comes to terms with how this technology will shape our future. Many fail to realize that machine learning already shapes the present, and many developers have been grappling with introducing this technology into products and services for years. Machine learning models are used to make increasingly important decisions – from aiding physicians in diagnosing serious health issues to making financial decisions for customers.

How it Works

I strongly dislike the term "artificial intelligence" because what the phrase describes is a mirage. There is no complex thought process at work – the model doesn't even understand the information it is processing. In a nutshell, OpenAI's model powering ChatGPT calculates the statistically most probable next word given the immediately surrounding context based on the enormous amount of information developers used to train the model.
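
To make that concrete, here is a toy sketch (purely illustrative, with made-up probabilities, and nothing like the scale or architecture of OpenAI's actual model): given the last couple of words of context, look up which words tended to follow that context and pick the most probable one.

```python
import random

# A toy "language model": for each two-word context, the relative frequency
# of the words that followed it in some imagined training text. The numbers
# are made up for illustration; a real LLM learns billions of parameters.
next_word_probs = {
    ("the", "cat"): {"sat": 0.60, "slept": 0.30, "ran": 0.10},
    ("cat", "sat"): {"on": 0.85, "quietly": 0.15},
}

def predict_next(context, greedy=True):
    """Pick the statistically most probable next word for the given context."""
    probs = next_word_probs[tuple(context[-2:])]
    if greedy:
        return max(probs, key=probs.get)              # always take the top word
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]  # or sample proportionally

print(predict_next(["the", "cat"]))         # -> sat
print(predict_next(["the", "cat", "sat"]))  # -> on
```

The point of the sketch is the mechanism: there is no understanding anywhere in it, only a lookup of what word is statistically likely to come next.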

A Model?

Let's say we compiled an accurate dataset containing the time it takes for an object to fall from specific heights:

Height (m)    Time (s)
100           4.51
200           6.39
300           7.82
400           9.03
500           10.10

What if we need to determine the time it takes for that object to fall from a distance we don't have data for? We build a model representing our data and either interpolate or extrapolate to find the answer:

t = √(2h / g), where h is the height and g ≈ 9.81 m/s² is the acceleration due to gravity.
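
As a minimal sketch of what "building a model" means here, the snippet below fits the single constant c in t = c·√h to the five measurements above using least squares, then predicts fall times for heights we never measured. (For ideal free fall, physics gives c = √(2/g) ≈ 0.4515, so the fit should land close to that.)

```python
import math

# The measurements from the table above: (height in m, time in s).
data = [(100, 4.51), (200, 6.39), (300, 7.82), (400, 9.03), (500, 10.10)]

# Model the relationship as t = c * sqrt(h) and estimate c with least squares:
# c = sum(sqrt(h) * t) / sum(h).
c = sum(math.sqrt(h) * t for h, t in data) / sum(h for h, _ in data)

def fall_time(height_m):
    """Use the fitted model to interpolate or extrapolate an unmeasured height."""
    return c * math.sqrt(height_m)

print(round(c, 4))               # ~0.4516
print(round(fall_time(250), 2))  # interpolation: ~7.14 s
print(round(fall_time(800), 2))  # extrapolation: ~12.77 s
```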

Models for more complex calculations are often created with neural networks, mathematical systems that learn skills by analyzing vast amounts of data. A vast collection of nodes each evaluates a specific function and passes the result to the next node. Simple neural networks can be expressed as mathematical functions, but as the number of variables and nodes increases, the model can become opaque to human comprehension.
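
Here is a minimal sketch of that idea: a two-input, two-hidden-node network written out as plain arithmetic, with arbitrary weights chosen only to show the structure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny network: 2 inputs -> 2 hidden nodes -> 1 output.
# The weights are arbitrary and exist only to illustrate the structure.
w_hidden = [[0.8, -0.4], [0.3, 0.9]]  # one row of input weights per hidden node
b_hidden = [0.1, -0.2]
w_out = [1.2, -0.7]
b_out = 0.05

def predict(x1, x2):
    # Each hidden node: a weighted sum of the inputs plus a bias, squashed by sigmoid.
    h1 = sigmoid(w_hidden[0][0] * x1 + w_hidden[0][1] * x2 + b_hidden[0])
    h2 = sigmoid(w_hidden[1][0] * x1 + w_hidden[1][1] * x2 + b_hidden[1])
    # The output node does the same thing to the hidden activations.
    return sigmoid(w_out[0] * h1 + w_out[1] * h2 + b_out)

print(round(predict(0.5, 1.0), 4))
```

With two hidden nodes, the whole composition can still be written out and read. Multiply the node count by a few million and learn the weights from data, and the same nested arithmetic is no longer something a person can follow.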

The Interpretability Problem

Unfortunately, for many complex models, opening them up and providing a precise mathematical explanation for a given decision is impossible. In other words, models often lack human interpretability and accountability. We often can't say, mathematically speaking, exactly how the network makes the distinction it does; we only know that its decisions align with those of a human. It doesn't require a keen imagination to see how this presents a problem in regulated, high-stakes decision-making.

Let's say John visits a lender and applies for a $37,000 small business loan. The lender needs to determine the probability that John will default on the loan, so they feed John's information into an algorithm, which computes a low score, resulting in a denial. By law, the lender must provide John with a statement of the specific reasons for the denial. In this scenario, what do we tell John? Today, we can reverse engineer the model and provide a detailed answer, but even the simple models of tomorrow will quickly test the limits of human understanding as computing resources become more powerful and less expensive. So how do we design accountable, transparent systems in the face of exponentially growing complexity?
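
To illustrate what "reverse engineering the model" can look like today, here is a sketch built on a deliberately transparent linear scoring model with hypothetical features and weights (not any real lender's model). Each feature's contribution to John's score relative to an average applicant can be ranked, and the most negative contributions become the specific reasons reported to him.

```python
# A deliberately transparent linear scoring model with hypothetical features
# and weights; real credit models are far larger, but linearity keeps the
# reasons legible.
weights = {
    "years_in_business": 0.6,
    "annual_revenue_per_10k": 0.02,
    "late_payments_last_year": -0.9,
    "debt_to_income_ratio": -1.5,
}
baseline = {  # an "average" applicant, used as the point of comparison
    "years_in_business": 5,
    "annual_revenue_per_10k": 12,
    "late_payments_last_year": 1,
    "debt_to_income_ratio": 0.35,
}
john = {
    "years_in_business": 1,
    "annual_revenue_per_10k": 6,
    "late_payments_last_year": 4,
    "debt_to_income_ratio": 0.62,
}

# Each feature's contribution to John's score relative to the baseline.
contributions = {f: weights[f] * (john[f] - baseline[f]) for f in weights}

# The most negative contributions become the "specific reasons" for the denial.
for feature, impact in sorted(contributions.items(), key=lambda kv: kv[1])[:3]:
    print(f"{feature}: {impact:+.2f}")
```

This only works because the model is a transparent sum of weighted features; once the score comes out of a deep network, there is no equally faithful way to rank the reasons.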

Solutions?

Proponents of interpretable models suggest limiting the number of variables used in a model. The problem with this approach becomes apparent after considering how neural networks weigh variables. Models multiply results by coefficients that determine the relative importance of each variable or calculation before passing them to the next node. These coefficients and variables are often 20 to 50 decimal places long and can be positive or negative. While understanding the data underpinning a decision is essential, it is not enough on its own to produce a clear explanation. We can partially solve this problem by building tooling to abstract implementation details and provide a more intelligible overview of the model; however, this still only provides an approximation of the decision-making process.

Other thought leaders in machine learning argue that the most viable long-term solutions may not involve futile attempts to explain the model but should instead focus on auditing and regulating performance. Do large volumes of test data reveal statistical trends of bias? Does analyzing the training data show any gaps or irregularities that could result in harm? Unfortunately, this does not solve the issue in my hypothetical scenario above. I can't conclusively prove that my current decision was correct by pointing to past performance.
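
As a sketch of what auditing performance might look like, the snippet below compares approval rates across two hypothetical groups on held-out test data and flags a large gap for review. The records and the threshold are made up; the point is that this kind of audit evaluates the model in aggregate rather than explaining any single decision.

```python
from collections import defaultdict

# Hypothetical audit records from a held-out test set: (group, model_approved).
test_results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, was_approved in test_results:
    total[group] += 1
    approved[group] += was_approved

rates = {g: approved[g] / total[g] for g in total}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap in approval rates flags the model for closer review, even though
# it says nothing about whether any single decision was right or wrong.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # the threshold is a policy choice, not a statistical law
    print(f"Audit flag: approval-rate gap of {gap:.0%} across groups")
```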

Technology is simply moving too rapidly to rely on regulations, which are, at best, a lagging remedy. We must pre-emptively work to build explainability into our models, but doing this in an understandable and actionable way will require rethinking our current AI architectures. We need forward-looking solutions that address bias at every stage of the development lifecycle with strong internal governance. Existing systems should undergo regular audits to ensure small changes haven't caused disparate impacts.

I can't help but feel very lucky to live in this transformative sliver of time, from the birth of the personal computer to the beginning of the internet age and the machine learning revolution. Today's developers and system architects have a massive responsibility to consider the impact of the technology they create. The future adoption of AI heavily depends on the trust we build in our systems today.