
ChatGPT coming to Siri & Apple Intelligence

Including the latest AI news of the week


Hello, AI Enthusiasts!

Welcome to FavTutor’s AI Recap! We’ve gathered all the latest and important AI developments from the past week in one place, just for you.

In Today’s Newsletter: 😀 

  • ChatGPT coming to Siri & Apple Intelligence

  • Zoom Launches AI Companion 2.0

  • Ideogram launches new website and AI image editor

  • Microsoft's DIFF Transformer promises LLMs with fewer hallucinations

Apple
💡 ChatGPT coming to Siri & Apple Intelligence

The latest Apple Intelligence developer beta includes ChatGPT integration with Siri, announced at the company’s developer conference. ChatGPT will also be used as part of a feature that Apple calls Visual Intelligence, where the phone’s camera can identify text or objects and even translate signs in real time.

Insights for you:

  • Apple says Siri will automatically detect when it needs help answering a question and, each time, will ask whether you’d like to use ChatGPT before handing the query to the chatbot.

  • Apple is also integrating ChatGPT into its system-wide Writing Tools, so you’ll be able to ask the bot to help you compose text.

  • The company also clarifies that ChatGPT won’t store your data, and your data won’t be used to train OpenAI’s models.

Unlock Windsurf Editor, by Codeium.

Introducing the Windsurf Editor, the first agentic IDE. All the features you know and love from Codeium’s extensions, plus new capabilities such as Cascade, a collaborative AI agent that combines the best of copilot and agent systems. This flow state of working with AI creates a step-change in AI capability that results in truly magical moments.

Zoom
👋 Zoom Launches AI Companion 2.0

As part of its evolution into an AI-first work platform for human connection, Zoom has launched Zoom AI Companion 2.0. Users can find AI Companion in a convenient side panel within Zoom Workplace, and version 2.0 arrives with several new capabilities.

New Features:

  • Expanded Context: AI Companion 2.0 can understand the user’s work context, remember prior interactions, and provide relevant suggestions.

  • Synthesis of Information: It gathers information from Zoom Workplace apps like Zoom Meetings, Team Chat, Docs, and Mail, as well as external sources such as Microsoft Outlook, Google Calendar, and more, to provide clear summaries and answers to questions.

  • Action-Taking: AI Companion 2.0 can help users identify and complete the next steps and action items. It will also integrate with Zoom Tasks by late 2024 to assist with tracking and completing tasks.

Ideogram
🖌️ Ideogram launches new website and AI image editor

Ideogram has completely redesigned its website and introduced a new feature that puts it ahead of competitor Midjourney. The new interface maintains a clean design with a central prompt input box, community-generated inspiration images displayed below, and a minimalist sidebar on the left.

Insights for you:

  • The standout feature of this update is Canvas, a new image editing tool that Ideogram says provides users with an "infinite creative canvas" for image manipulation.

  • Canvas provides AI-powered editing features like Magic Fill for inpainting, which allows users to add or modify elements within an existing image, and outpainting, which allows users to expand images beyond their original boundaries.

  • Some Canvas features are available for free, including the ability to remix images in up to two Canvas screens.

Microsoft
😇 Microsoft's DIFF Transformer promises LLMs with fewer hallucinations

Microsoft Research has created a new AI architecture called the "Differential Transformer" (DIFF Transformer) designed to enhance focus on relevant context while reducing interference. According to the researchers, this approach shows improvements in various areas of language processing.

Insights for you:

  • By amplifying attention to the relevant context and canceling out attention noise, the architecture aims to reduce the hallucinations that affect conventional LLMs.

  • The core of the DIFF Transformer is "differential attention": the model computes two separate softmax attention maps and subtracts one from the other, canceling out the noise they have in common, much like noise-canceling headphones (see the sketch after this list).

  • In tests, the DIFF Transformer matched the performance of conventional transformers while using only around 65% of the model size or training data, and it showed clear advantages on longer contexts of up to 64,000 tokens.
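
To make the "two maps, then subtract" idea concrete, here is a minimal NumPy sketch of differential attention. This is an illustration only, not Microsoft's implementation: the single head, the weight shapes, and the fixed `lam` scaling are simplifying assumptions (in the paper, the subtraction weight is a learnable, re-parameterized λ).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def diff_attention(X, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Toy single-head differential attention over token embeddings X."""
    d = Wq1.shape[1]
    # Project inputs into two query/key groups and one value matrix.
    Q1, K1 = X @ Wq1, X @ Wk1
    Q2, K2 = X @ Wq2, X @ Wk2
    V = X @ Wv
    # Two independent softmax attention maps over the same tokens.
    A1 = softmax(Q1 @ K1.T / np.sqrt(d))
    A2 = softmax(Q2 @ K2.T / np.sqrt(d))
    # Subtracting the second map cancels the attention "noise" the two maps
    # share, analogous to noise-canceling headphones.
    return (A1 - lam * A2) @ V

# Toy usage with random weights (hypothetical shapes for illustration).
rng = np.random.default_rng(0)
n, d_model, d_head = 6, 16, 8
X = rng.standard_normal((n, d_model))
Wq1, Wk1, Wq2, Wk2 = (0.1 * rng.standard_normal((d_model, d_head)) for _ in range(4))
Wv = 0.1 * rng.standard_normal((d_model, d_model))
print(diff_attention(X, Wq1, Wk1, Wq2, Wk2, Wv).shape)  # (6, 16)
```

Because both maps attend over the same tokens, attention mass they both assign to irrelevant positions largely cancels in the difference, which is the mechanism the researchers credit for sharper focus on relevant context.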