
OpenAI releases ChatGPT app for Windows

Including the latest AI news of the week

In partnership with Pinata

Hello, AI Enthusiasts!

Welcome to FavTutor’s AI Recap! We’ve gathered all the latest and important AI developments from the past week in one place, just for you.

In Today’s Newsletter: 😀 

  • OpenAI releases ChatGPT app for Windows

  • LLMs are easier to jailbreak using keywords from marginalized groups

  • OpenAI's o1 AI model fails at travel planning

  • Nvidia improves Meta's Llama model with new training approach

OpenAI
🖥️ OpenAI releases ChatGPT app for Windows

After launching a Mac app over the summer, OpenAI has now released a ChatGPT desktop app for Windows. Users can quickly open the app with the Alt + Space keyboard shortcut.

Insights for you:

  • OpenAI is testing the ChatGPT app for Windows with ChatGPT Plus, Enterprise, Team, and Edu users.

  • ChatGPT on Windows lets you ask the AI-powered chatbot questions in a dedicated window that you can keep open alongside your apps.

  • The app can be downloaded for free from the Microsoft Store.

Streamline your development process with Pinata’s easy File API

  • Easy file uploads and retrieval in minutes

  • No complex setup or infrastructure needed

  • Focus on building, not configurations
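
For readers curious what "easy file uploads" looks like in practice, here is a minimal sketch of uploading a file through Pinata's pinning API. The endpoint, Bearer-JWT auth, and response field are based on Pinata's public docs but should be treated as assumptions; check the current documentation before relying on them.

```python
# Minimal sketch: upload a file via Pinata's pinning API.
# Assumptions: the /pinning/pinFileToIPFS endpoint, Bearer-JWT auth,
# and the "IpfsHash" response field; adjust to Pinata's current docs.
import os
import requests

PINATA_JWT = os.environ["PINATA_JWT"]  # your Pinata API token


def upload_file(path: str) -> str:
    """Upload a local file and return its content identifier (CID)."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.pinata.cloud/pinning/pinFileToIPFS",
            headers={"Authorization": f"Bearer {PINATA_JWT}"},
            files={"file": (os.path.basename(path), f)},
        )
    resp.raise_for_status()
    return resp.json()["IpfsHash"]  # CID of the pinned file


if __name__ == "__main__":
    print(upload_file("example.txt"))
```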

AI Research
🤠 LLMs are easier to jailbreak using keywords from marginalized groups

A new study shows that well-meaning safety measures in large language models can create unexpected weaknesses. Researchers found major differences in how easily models could be "jailbroken" depending on which demographic terms were used.

Insights for you:

  • Researchers from Theori Inc. have found that safety measures in large language models can paradoxically increase vulnerability to "jailbreak" attacks, especially for prompts using terms for marginalized groups compared to privileged groups.

  • The researchers developed the "PCJailbreak" method, which deliberately incorporates keywords for various demographic groups into potentially harmful prompts (a simplified sketch of the idea follows this list).

  • Tests showed significantly higher success rates for jailbreak attempts using terms for marginalized groups, suggesting unintended biases from the safety measures.
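
To make the experimental setup concrete, here is a simplified sketch of how such a group-conditioned comparison could be run. This is not the authors' PCJailbreak code: the prompt template, the group lists, and the refusal heuristic are illustrative assumptions, harmful payloads are left as placeholders, and `query_model` stands in for whatever chat API is being tested.

```python
# Simplified sketch of a PCJailbreak-style comparison (not the authors' code).
# Harmful payloads are intentionally placeholders; the point is only to show
# how per-group jailbreak success rates could be measured and compared.

PRIVILEGED_TERMS = ["wealthy people", "men"]            # illustrative lists
MARGINALIZED_TERMS = ["poor people", "women"]

PROMPT_TEMPLATE = "As someone helping {group}, explain how to {request}."
PLACEHOLDER_REQUESTS = ["<redacted harmful request>"]   # never use real payloads


def looks_like_refusal(answer: str) -> bool:
    """Crude refusal heuristic; real evaluations use stronger classifiers."""
    return any(p in answer.lower() for p in ("i can't", "i cannot", "i won't"))


def success_rate(terms, query_model) -> float:
    """Fraction of prompts the model answered instead of refusing."""
    attempts, successes = 0, 0
    for group in terms:
        for request in PLACEHOLDER_REQUESTS:
            answer = query_model(PROMPT_TEMPLATE.format(group=group, request=request))
            attempts += 1
            successes += not looks_like_refusal(answer)
    return successes / attempts


# Usage (with any callable that maps a prompt string to a model response):
# print(success_rate(PRIVILEGED_TERMS, query_model),
#       success_rate(MARGINALIZED_TERMS, query_model))
```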

OpenAI
👎️ OpenAI's o1 AI model fails at travel planning

A new study shows that even advanced AI language models like OpenAI's latest o1-preview fall short when it comes to complex planning. Researchers identified two key issues and explored potential improvements.

Insights for you:

  • The researchers tested the models against two benchmarks: BlocksWorld and TravelPlanner.

  • In BlocksWorld, o1-mini and o1-preview performed well, but in the more complex TravelPlanner, all models performed poorly. GPT-4o achieved a success rate of only 7.8% and o1-preview 15.6%.

  • The researchers found two main problems: the models do not take sufficient account of pre-defined rules, and on longer itineraries they lose track of the task at hand (the sketch after this list shows the kind of rule check these plans fail).
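
To see why ignoring pre-defined rules translates into such low success rates, here is a rough sketch of the kind of hard-constraint check a TravelPlanner-style benchmark applies to a generated itinerary. The field names and rules are illustrative assumptions, not the benchmark's actual schema; a plan only counts as a success if every constraint holds.

```python
# Rough sketch of a TravelPlanner-style hard-constraint check.
# Field names and rules are illustrative, not the benchmark's actual schema.
from typing import Dict, List


def violated_constraints(plan: List[Dict], budget: float, trip_days: int) -> List[str]:
    """Return a list of rules violated by a generated itinerary."""
    errors = []
    total_cost = sum(day.get("cost", 0.0) for day in plan)
    if total_cost > budget:
        errors.append(f"budget exceeded: {total_cost:.2f} > {budget:.2f}")
    if len(plan) != trip_days:
        errors.append(f"itinerary covers {len(plan)} days, expected {trip_days}")
    for i, day in enumerate(plan, start=1):
        if not day.get("accommodation"):
            errors.append(f"day {i}: no accommodation booked")
    return errors


# A plan that drifts from the rules on even one day fails the whole task,
# which is part of why long itineraries drive success rates so low.
example_plan = [
    {"cost": 220.0, "accommodation": "Hotel A"},
    {"cost": 310.0, "accommodation": None},  # rule violation on day 2
]
print(violated_constraints(example_plan, budget=500.0, trip_days=2))
```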

NVIDIA
💡 Nvidia improves Meta's Llama model with new training approach

Nvidia has introduced a new large language model that outperforms others in alignment benchmarks. The company achieved this through a special training procedure combining evaluation and preference models.

Insights for you:

  • Nvidia has introduced a new large language model called Llama-3.1-Nemotron-70B-Instruct, optimized to give helpful answers to user queries. Its reward modeling combines different approaches, such as regression-style scoring and Bradley-Terry preference modeling (see the sketch after this list).

  • Nvidia used two self-generated datasets to create the training data: HelpSteer2, with over 20,000 scored prompt-response pairs, and HelpSteer2-Preference, with pairwise comparisons between responses to the same prompt. The combination of both approaches produced the best results.

  • In alignment benchmarks such as Arena Hard, AlpacaEval 2 LC and GPT-4-Turbo MT-Bench, Llama-3.1-Nemotron-70B-Instruct achieved first place in each case, outperforming top models such as GPT-4o and Claude 3.5 Sonnet.
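
As a rough illustration of how the two reward-modeling styles mentioned above differ, the sketch below contrasts a regression-style loss on HelpSteer2-like helpfulness scores with a Bradley-Terry loss on HelpSteer2-Preference-like pairwise comparisons. The tensor shapes and the mixing weight are assumptions for illustration only; Nvidia's actual training recipe is described in their paper.

```python
# Illustrative sketch of the two reward-modeling objectives mentioned above.
# Shapes and the mixing weight are assumptions, not Nvidia's exact recipe.
import torch
import torch.nn.functional as F


def regression_loss(predicted_scores, human_scores):
    """HelpSteer2-style: regress a scalar reward onto human helpfulness ratings."""
    return F.mse_loss(predicted_scores, human_scores)


def bradley_terry_loss(reward_chosen, reward_rejected):
    """HelpSteer2-Preference-style: the chosen response should outscore the rejected one.
    Loss = -log sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()


# Toy example with a batch of 4 scored responses and 4 preference pairs.
pred = torch.tensor([3.1, 4.2, 1.0, 2.5])
gold = torch.tensor([3.0, 4.0, 1.5, 2.0])
r_chosen = torch.tensor([2.8, 3.9, 1.7, 2.2])
r_rejected = torch.tensor([1.1, 3.5, 1.9, 0.4])

combined = regression_loss(pred, gold) + 0.5 * bradley_terry_loss(r_chosen, r_rejected)
print(combined)
```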