For years, Apple was the trillion-dollar company that couldn’t say “AI” out loud. While OpenAI, Google, and Anthropic sprinted toward frontier models, Apple clung to “machine learning” like a security blanket — a term that by 2025 felt about as current as “World Wide Web.”

That era just ended.

The Rebrand That Says Everything

According to Bloomberg’s Mark Gurman, Apple will unveil Core AI at WWDC 2026 this June. It replaces Core ML, the machine learning framework that’s been powering on-device inference since 2017. Two letters change. The entire signal shifts.

Core ML was built for a simpler world — image classifiers, NLP models, recommendation engines running locally on Apple silicon. Solid stuff for its era. But we’re not in that era anymore. We’re in the era of LLMs, diffusion models, multimodal systems, and autonomous agents. “Machine learning” doesn’t cover it.
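
For context, the entire Core ML contract fits in a few lines: load a compiled model from the app bundle, hand it a feature provider, get a prediction back. The API below is the real, shipping one; the "Classifier" model name and "image" feature are placeholders.

```swift
import Foundation
import CoreML
import CoreVideo

// Minimal sketch of Core ML's original mandate: load a bundled,
// pre-trained model and run inference entirely on device.
// "Classifier" and the "image" feature name are placeholders.
func classify(_ pixelBuffer: CVPixelBuffer) throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // CPU, GPU, or Neural Engine, Core ML's choice

    let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: url, configuration: config)

    let input = try MLDictionaryFeatureProvider(dictionary: ["image": pixelBuffer])
    return try model.prediction(from: input)
}
```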

Apple knows this. Gurman puts it bluntly: Apple recognizes “machine learning” is a dated term that no longer resonates with developers or consumers. Core AI is the flag planted in new ground.

What Core AI Actually Changes

Details are thin — this is Apple — but the reporting from Mashdigi suggests Core AI expands the mission significantly beyond Core ML’s “run pre-trained models locally” mandate.

Third-party model integration is the headline feature. Core AI will reportedly make it easier for developers to plug in models from Google, OpenAI, or other providers through a standardized interface. There’s even speculation Apple could adopt something resembling Model Context Protocol (MCP), the open standard for connecting AI models to external tools. If that happens, it’s a philosophical earthquake for a company that treats “openness” like a communicable disease.
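
For the unfamiliar, MCP itself is just JSON-RPC 2.0 over a transport. A tool call on the wire looks roughly like the sketch below; the "tools/call" shape comes from the open spec, the Swift types are illustrative, and Apple adopting any of this remains pure speculation.

```swift
import Foundation

// The "tools/call" shape is from the open MCP spec (JSON-RPC 2.0).
// These Swift types are illustrative only. Real MCP arguments are
// arbitrary JSON; [String: String] is a simplification.
struct MCPToolCall: Encodable {
    let jsonrpc = "2.0"
    let id = 1
    let method = "tools/call"
    let params: Params

    struct Params: Encodable {
        let name: String                // which tool the model wants
        let arguments: [String: String] // tool-specific arguments
    }
}

let call = MCPToolCall(params: .init(name: "search_notes",
                                     arguments: ["query": "WWDC keynote"]))
let body = try! JSONEncoder().encode(call)  // POSTed to the tool server
```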

The framework launches alongside iOS 27. Core ML won’t vanish overnight — expect a transition period — but the direction is unmistakable. Core ML is legacy code. Core AI is the platform.

The Google Gemini Architecture

You can’t understand Core AI without understanding the Google deal. Since late 2025, Apple has been using Google’s Gemini models as the backbone for its next-gen AI capabilities, including a dramatically upgraded Siri.

Tim Cook laid out the architecture on Apple’s Q1 2026 earnings call: Apple Foundation Models handle basic on-device tasks. Private Cloud Compute handles privacy-sensitive cloud processing. Google Gemini powers the heavy reasoning and conversation. OpenAI’s ChatGPT remains available for broad knowledge queries.

Core AI is the developer-facing glue that ties all of this together. Instead of forcing developers to manage multiple AI backends, it abstracts the complexity — one unified interface for on-device models, cloud models, and third-party models.
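
No API exists yet, so take the following as a sketch of the idea rather than the interface. Every name below is invented; the point is simply that app code talks to one protocol and the framework decides where the request runs.

```swift
// Every name here is invented; this sketches the idea, not Apple's API.
protocol ModelBackend {
    func respond(to prompt: String) async throws -> String
}

struct OnDeviceModel: ModelBackend {   // Apple Foundation Models
    func respond(to prompt: String) async throws -> String { "local stub" }
}

struct GeminiBackend: ModelBackend {   // heavy reasoning in the cloud
    func respond(to prompt: String) async throws -> String { "cloud stub" }
}

// The promised win: the app asks one interface, and the framework
// routes the request based on task weight and privacy requirements.
func answer(_ prompt: String, heavyReasoning: Bool) async throws -> String {
    let backend: any ModelBackend = heavyReasoning ? GeminiBackend() : OnDeviceModel()
    return try await backend.respond(to: prompt)
}
```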

If Apple executes, it’s actually compelling. Developers get simplicity. Users get powerful AI with privacy. Apple stays competitive without having built a frontier model from scratch.

The Catch-Up Question

Let’s not sugarcoat it: Apple has been playing catch-up for two years. While competitors shipped ChatGPT, Gemini, and Claude, Apple delivered Siri improvements that were — charitably — incremental. Key AI talent left. Apple Intelligence launched to mixed reviews. PYMNTS CEO Karen Webster called “Apple Intelligence” an oxymoron for a $4 trillion company.

But here’s the counterargument that’s aging better by the week.

Look at the landscape right now. OpenAI just signed a Pentagon deal that has the AI safety community in full alarm mode. Anthropic got banned from federal use for refusing to strip its guardrails around autonomous weapons. The industry is moving at breakneck speed, and not all of that speed is pointed somewhere good.

Apple’s bet — partner with Google for the hard research, focus on privacy-preserving on-device inference, give developers clean tools — is less sexy but potentially more durable. Core AI is the toolkit for that strategy.

Whether “sustainable and private” can outrun “powerful and fast” in a market addicted to ChatGPT-level interactions remains the trillion-dollar question. Apple’s answer: we’ll get there, but we’ll do it our way.

What Developers Should Watch For

If you build for Apple platforms, Core AI is the most important WWDC announcement to track. Here’s what’s likely coming:

Expanded generative AI APIs. Core ML already supports on-device LLMs and diffusion models. Core AI will push this further — better support for modern architectures, possibly streaming inference.
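
If streaming lands, the natural Swift surface is an AsyncSequence of tokens. The sketch below is guesswork about the shape, not the API; only the AsyncStream machinery is real, shipping Swift.

```swift
// Guesswork about shape, not API: only AsyncStream is real Swift here.
func tokenStream(for prompt: String) -> AsyncStream<String> {
    AsyncStream { continuation in
        // A real backend would yield tokens as the model produces them.
        for token in ["Streaming", " looks", " like", " this."] {
            continuation.yield(token)
        }
        continuation.finish()
    }
}

// The payoff: the UI renders tokens as they arrive instead of
// blocking on the full completion.
Task {
    for await token in tokenStream(for: "demo") {
        print(token, terminator: "")
    }
}
```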

Standardized third-party model access. One interface to tap Google, OpenAI, or custom models. This alone could save developers months of integration work.

Better tooling. Updated Xcode integration, model profiling, possibly a model registry. Apple invests heavily in developer experience, and Core AI won't be an exception.

Privacy baked in at the framework level. Differential privacy, on-device processing preferences, transparent data handling — all built into the primitives, not bolted on after.
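
None of this is announced API, but the differential privacy piece is a well-understood primitive Apple already uses in telemetry. As a generic illustration (not Apple's implementation), the classic move is Laplace noise scaled to sensitivity over epsilon:

```swift
import Foundation

// Generic differential privacy illustration, not Apple API.
// A Laplace sample is the difference of two exponential samples.
func laplaceNoise(scale b: Double) -> Double {
    let e1 = -b * log(Double.random(in: .ulpOfOne..<1))
    let e2 = -b * log(Double.random(in: .ulpOfOne..<1))
    return e1 - e2
}

// Noise calibrated to sensitivity/epsilon makes an aggregate count
// safe to report: one user changes the count by at most 1.
func privatizedCount(_ trueCount: Int, epsilon: Double) -> Double {
    let sensitivity = 1.0
    return Double(trueCount) + laplaceNoise(scale: sensitivity / epsilon)
}
```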

The Trajectory Tells the Story

Zoom out. In 2023, Apple wouldn’t say “AI” on stage at WWDC. In 2024, they introduced “Apple Intelligence,” claiming the acronym without quite saying the words. In 2025, they partnered with Google to fill gaps they couldn’t fill alone. In 2026, they’re renaming their foundational developer framework to put AI front and center.

This is Apple admitting, in the most Apple way possible, that the AI revolution is real and permanent. They’ve arrived late to technology shifts before — smartphones, tablets, smartwatches, wireless earbuds — and executed so well that everyone forgot they were late.

Whether they can pull that off with AI, arguably the most consequential technology shift since the internet, is the question that matters. But with Core AI, at least they’re finally speaking the right language.