ChatGPT Advanced Voice Mode gains vision capabilities
Including the latest AI news of the week
Hello, AI Enthusiasts!
Welcome to FavTutor’s AI Recap! We’ve gathered all the latest and important AI developments for the past 24 hours in one place, just for you.
In Today’s Newsletter: 😀
ChatGPT Advanced Voice Mode gains vision capabilities
Anthropic’s Claude 3.5 Haiku is now generally available
OpenAI
🔍️ ChatGPT Advanced Voice Mode gains vision capabilities
OpenAI is expanding ChatGPT's Advanced Voice Mode with live video and screen-sharing capabilities, and is rolling the new features out gradually starting now.
According to OpenAI's announcement, users can now share visual context with ChatGPT in real time, making conversations more natural and useful. The company first demonstrated the feature during the GPT-4o launch, and the mode supports more than 50 languages.
Insights for you:
With live video and screen sharing, users can point the AI model at what they are looking at instead of describing it, which makes voice conversations faster and more precise.
The new features are rolling out in phases: Teams users and most Plus and Pro subscribers get access immediately, with availability in Europe to follow at a later date. Business and education users will follow early next year.
As a seasonal extra, OpenAI is also introducing a Santa Chat, available for the rest of December and built on the same speech technology as Advanced Voice Mode. OpenAI will reset usage limits one time so that everyone can have a first conversation with the AI Santa, even if they have already hit their cap.
Need a personal assistant? We do too, that’s why we use AI.
Ready to embrace a new era of task delegation?
HubSpot’s highly anticipated AI Task Delegation Playbook is your key to supercharging your productivity and saving precious time.
Learn how to integrate AI into your own processes, allowing you to optimize your time and resources, while maximizing your output with ease.
Anthropic
😁 Anthropic’s Claude 3.5 Haiku is now generally available
Anthropic has quietly rolled out Claude 3.5 Haiku, its fastest AI model, to all Claude users on web and mobile, expanding beyond its previous API-only availability. The company has not made an official announcement.
Insights for you:
Claude 3.5 Haiku was released in November alongside Claude's computer-use feature, and it beats Anthropic's previous flagship, Claude 3 Opus, on key benchmarks.
The model excels at coding tasks and data processing, offering impressive speed with high accuracy.
Haiku features a 200K-token context window, larger than those of many competing models, and integrates with Artifacts to provide a real-time content workspace.
The initial release drew criticism over API pricing, which was raised to four times that of Claude 3 Haiku: $1 per million input tokens and $5 per million output tokens.
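To put those per-token prices in perspective, here is a minimal sketch of how a request's cost works out under the listed rates. The function name and the example token counts are illustrative, not from Anthropic's documentation:

```python
# Illustrative cost estimate for Claude 3.5 Haiku API usage,
# based on the listed rates: $1 per 1M input tokens, $5 per 1M output tokens.
INPUT_PRICE_PER_MTOK = 1.00   # USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 5.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return (input_tokens * INPUT_PRICE_PER_MTOK
            + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply
print(f"${estimate_cost(2_000, 500):.4f}")  # prints "$0.0045"
```

Even at the quadrupled rates, a typical short exchange costs well under a cent, which is why the backlash centered less on absolute cost and more on the jump relative to Claude 3 Haiku.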