AI Coding Digest


Between Microsoft Build and Google I/O, this was a big week for AI coding and for AI’s transformative role in software engineering.

According to our Head of Research, Nicholas Arcolano, Microsoft’s AI-focused announcements at Build (which, let’s be honest, dominated the event) add up to a pretty strong ecosystem play – from MCP to a big Copilot update. “It definitely looks like Microsoft is going all in on an open, interoperable agent ecosystem, making it a more attractive platform to build your agents on over AWS or Google.”

But all that was before Google dropped its own AI news at I/O, where it doubled (tripled?) down on AI: embracing Anthropic’s MCP in its own right, launching AI Ultra, giving users access to Gemini 2.5 Pro’s new enhanced reasoning mode, Deep Think, for the handsome fee of $249.99 a month, AND releasing a public beta for Jules, Google’s answer to GitHub Copilot.

But maybe the biggest news of all? The introduction of Claude 4 from Anthropic. Much more on that below.

Here’s more on a busy week in AI coding news:

Introducing Claude 4
By Anthropic

Anthropic released Claude 4 to close out a huge week for AI coding. And our Head of Research, Nicholas Arcolano, was… let’s just say, psyched. Here’s what he had to say about the release:

“Okay, this is cool stuff – very interesting and specific play by Anthropic. Rather than focus on raw reasoning or multimodal capabilities, this is very agent-focused. They’ve addressed the right things here to help Claude continue to be the model of choice for coding assistance and agentic workflows – the new models are better at tool use, get less distracted, and can pause reasoning to access external information, meaning they can be more successful at longer tasks.”

He adds the move is “very in the zeitgeist of Microsoft’s very agent-focused stuff this week. If it performs as advertised, it means Claude continues to be the default for things like Replit, etc. Although right now it’s pretty expensive – cost will have to come down.”

And one last thing (for now): “One big implication of these things is that you can now use tools with Deep Research, i.e. you can ask Claude to do a Deep Research project that requires it to get data from Jellyfish MCP (more on that below), Atlassian, etc.,” Nick adds.
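
To make the tool-use point concrete, here’s a minimal sketch of what handing a Claude 4 model a custom tool looks like through Anthropic’s Messages API. The model ID is our best guess at a Claude 4 identifier, and the get_team_metrics tool is a hypothetical stand-in for illustration – it is not a real Jellyfish or Anthropic integration.

    # Minimal sketch: giving a Claude 4 model a custom tool via the Messages API.
    # Assumes the official anthropic Python SDK; the model ID and the
    # get_team_metrics tool are illustrative, not a real integration.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed Claude 4 model ID
        max_tokens=1024,
        tools=[
            {
                "name": "get_team_metrics",  # hypothetical tool
                "description": "Fetch engineering metrics for a given team.",
                "input_schema": {
                    "type": "object",
                    "properties": {
                        "team": {"type": "string", "description": "Team name"},
                    },
                    "required": ["team"],
                },
            }
        ],
        messages=[{"role": "user", "content": "How is the platform team trending?"}],
    )

    # If the model decides to call the tool, it returns a tool_use block;
    # your code runs the tool and sends the result back in a follow-up turn.
    for block in response.content:
        if block.type == "tool_use":
            print(block.name, block.input)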

From Theory to Practice: Early Lessons from our Jellyfish MCP Explorations
By Nicholas Arcolano

At Jellyfish, we’re continuing to explore the potential of fast-moving AI technologies, including the real-world applications (and implications) of Model Context Protocol (MCP).

“AI tool use has a lot of potential, but a LONG way to go to be ready for primetime,” Arcolano adds. “Particularly interesting in light of Microsoft’s announcements at Build 2025 is that MCP will be a core part of their agent ecosystem strategy – and natively supported in Windows 11.”

So what has the Jellyfish team learned since we launched our own MCP server back in April? Read Nick’s thoughts here.
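
For readers wondering what building an MCP server actually involves, here’s a minimal sketch using the FastMCP helper from the official MCP Python SDK. The server name and the deployment_frequency tool below are hypothetical examples for illustration – they are not the Jellyfish MCP server’s real interface.

    # Minimal MCP server sketch using the official MCP Python SDK (FastMCP).
    # The server name and tool are hypothetical, not the actual
    # Jellyfish MCP server interface.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("example-metrics")  # hypothetical server name

    @mcp.tool()
    def deployment_frequency(team: str, weeks: int = 4) -> str:
        """Return a (stubbed) deployment-frequency summary for a team."""
        # A real server would query a data source here; this stub just
        # shows how a function gets exposed as a tool to MCP clients.
        return f"{team}: 12 deploys over the last {weeks} weeks (stub data)"

    if __name__ == "__main__":
        # Serves over stdio so MCP-aware clients (e.g. Claude Desktop) can connect.
        mcp.run()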

Also this week:

Google Embraces MCP
By Frederic Lardinois

Windows is getting support for the ‘USB-C of AI apps’
By Tom Warren

Inside Gemini Code Assist: Google’s Copilot Alternative
By Richard MacManus

GitHub’s new AI coding agent can fix bugs for you
By Emma Roth

Google I/O 2025 live blog: Gemini takes center stage
By Alex Heath, Nilay Patel, Victoria Song, and Allison Johnson

OpenAI launches Codex, an AI coding agent, in ChatGPT
By Maxwell Zeff

Something we missed? Find us on LinkedIn to let us know! See you next week.
