DeepSeek AI is now Uncensored

Including the latest AI news of the week

Hello, AI Enthusiasts!

Welcome to FavTutor’s AI Recap! We’ve gathered all the latest and important AI developments for the past 24 hours in one place, just for you.

In Today’s Newsletter: 😀

  • DeepSeek AI is now Uncensored

  • Google's New AI Helps You Explore New Career Paths

  • LLMs favor other LLMs when it comes to mistakes

Perplexity
🤭 DeepSeek AI is now Uncensored

DeepSeek’s R1 AI model became the talk of the town when it showed performance similar to OpenAI’s flagship model. However, it is also known for censoring topics sensitive to the Chinese government. Now, Perplexity AI has unveiled R1 1776, a modified version of the DeepSeek-R1 language model specifically designed to overcome this censorship.

Insights for you:

  • Perplexity AI has released R1 1776, a version of the open-source Chinese language model DeepSeek-R1 post-trained to remove Chinese censorship.

  • To remove the censorship, Perplexity identified approximately 300 topics censored in China and collected 40,000 multilingual user prompts that produced censored responses (a toy sketch of this filtering step follows this list).

  • According to Perplexity, evaluations and benchmarks show that R1 1776 covers formerly censored topics comprehensively and without bias, and that the decensoring did not affect the model’s math and reasoning skills.
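
For intuition, here is a minimal, hypothetical Python sketch of how one might flag prompts that trigger canned, evasive responses when building such a dataset. The refusal patterns, function names, and the `query_model` helper are illustrative assumptions, not Perplexity’s actual pipeline.

```python
import re

# Hypothetical canned-refusal patterns; illustrative only, not the
# actual strings or detection method Perplexity used.
REFUSAL_PATTERNS = [
    r"let'?s talk about something else",
    r"beyond my current scope",
    r"i cannot discuss this topic",
]

def looks_censored(response: str) -> bool:
    """Heuristically flag a canned, evasive answer."""
    text = response.lower()
    return any(re.search(pattern, text) for pattern in REFUSAL_PATTERNS)

def collect_censored_prompts(prompts, query_model):
    """Keep prompts whose responses look censored, so they can later be
    paired with uncensored target answers for post-training.

    query_model: any callable mapping a prompt string to a response
    string (assumed here; e.g. a wrapper around a chat API).
    """
    return [p for p in prompts if looks_censored(query_model(p))]
```

In practice, a pipeline like this would be run over multilingual prompts and combined with human review, since simple pattern matching misses paraphrased refusals.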

Google
👔 Google's New AI Helps You Explore New Career Paths

Google has announced Career Dreamer, a new experimental AI tool that finds patterns and connects the dots between your unique experiences, educational background, skills, and interests.

Insights for you:

  • Google is rolling out an experimental career exploration tool called Career Dreamer.

  • The AI tool works by finding patterns between your experiences, educational background, skills, and interests.

  • Career Dreamer can help point out your unique skills, find relevant career options, and draft cover letters and resumes.

AI Research
👉️ LLMs favor other LLMs when it comes to mistakes

A new study of how language models evaluate each other has uncovered a troubling pattern: as these systems become more sophisticated, they're increasingly likely to share the same blind spots.

Insights for you:

  • Researchers developed a new metric called CAPA (Chance Adjusted Probabilistic Agreement) to measure how similar the errors made by two language models are (a simplified sketch follows this list). This metric matters when AI systems evaluate and supervise other AIs.

  • Experiments show that AI models acting as "judges" tend to favor models that make similar errors, while more powerful models learn more during training from data provided by dissimilar, weaker models.

  • As language models become more advanced, their errors become more similar, which raises safety concerns when AI systems are used to supervise other AI systems.
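
To illustrate the idea, here is a minimal Python sketch of a chance-adjusted agreement score in the spirit of CAPA. The function name, the toy data, and the simple marginal-frequency chance term (as in Cohen’s kappa) are our assumptions for illustration; the paper’s exact definition of CAPA differs in how it adjusts for chance.

```python
import numpy as np

def chance_adjusted_agreement(p_model_a, p_model_b):
    """Simplified, kappa-style chance-adjusted probabilistic agreement.

    p_model_a, p_model_b: arrays of shape (n_questions, n_options)
    holding each model's probability distribution over answer options.
    Returns a score where 0 means agreement no better than chance and
    1 means the models always agree.
    """
    p_a = np.asarray(p_model_a, dtype=float)
    p_b = np.asarray(p_model_b, dtype=float)

    # Observed agreement: probability both models pick the same option,
    # averaged over questions.
    observed = np.mean(np.sum(p_a * p_b, axis=1))

    # Chance agreement: agreement expected if each model's marginal
    # option distribution were independent of the question.
    chance = np.sum(p_a.mean(axis=0) * p_b.mean(axis=0))

    return (observed - chance) / (1.0 - chance)

# Two hypothetical models answering three four-option questions.
model_a = [[0.7, 0.1, 0.1, 0.1], [0.2, 0.6, 0.1, 0.1], [0.1, 0.1, 0.7, 0.1]]
model_b = [[0.6, 0.2, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1], [0.2, 0.1, 0.6, 0.1]]
print(chance_adjusted_agreement(model_a, model_b))
```

Because the score is computed from output probabilities rather than single sampled answers, two models that hedge toward the same wrong options score as more similar, which is exactly the blind-spot overlap the study warns about.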