
OpenAI now makes its own AI chips

Including the latest AI news of the week

Hello, AI Enthusiasts!

Welcome to FavTutor’s AI Recap! We’ve gathered all the latest and important AI developments from the past 24 hours in one place, just for you.

In Today’s Newsletter: 😀

  • OpenAI now makes its own AI chips

  • AI is bad at news, BBC finds

  • DeepSeek-R1 and OpenAI o1 have an 'underthinking' problem

OpenAI
💿️ OpenAI now makes its own AI chips

According to Reuters, OpenAI is developing its first generation of in-house AI chips. The project aims to create a chip capable of both training and running AI models, although initial deployment may be limited in scale. OpenAI expects to begin mass production in 2026.

Insights for you:

  • OpenAI is designing its first AI chip, with plans to partner with TSMC for manufacturing using its advanced 3-nanometer process.

  • They see the custom chip primarily as a way to gain leverage in negotiations with other suppliers, given the surging demand for AI chips.

  • While Nvidia currently dominates the AI chip market, tech giants like Amazon, Microsoft, and Meta have been trying to develop their own hardware for years.


BBC
📰 AI is bad at news, BBC finds

In a new BBC study, researchers submitted 100 questions about current news to leading AI chatbots and asked them to cite BBC articles as sources. They found that 51% of the answers had significant issues of some form, while 19% introduced factual errors such as incorrect statements, numbers, and dates.

Insights for you:

  • BBC study finds leading AI chatbots (ChatGPT, Copilot, Gemini, and Perplexity) consistently distort news content, raising concerns about information accuracy.

  • AI inaccuracies include false statements about health recommendations, current officeholders, and global events.

  • Also, they noted that Copilot and Gemini "had more significant issues" than ChatGPT or Perplexity.

AI Research
🧠 DeepSeek-R1 and OpenAI o1 have an 'underthinking' problem

Chinese researchers have discovered why AI reasoning models often struggle with complex tasks: they tend to abandon promising lines of reasoning too quickly, leading to wasted computing power and lower accuracy.

Insights for you:

  • Researchers from China have discovered that reasoning models like OpenAI's o1 often struggle with complex tasks due to "underthinking."

  • OpenAI's o1 frequently jumps between different problem-solving approaches, often starting over with phrases like "Alternatively…". This constant switching means the model burns more computing power on exactly the problems it ends up getting wrong (a rough way to quantify this is sketched after this list).

  • The study found that when models gave wrong answers, they used 225% more computing tokens and changed strategies 418% more often compared to correct solutions.
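For readers who want to poke at this idea themselves, here is a minimal, hypothetical Python sketch of how 'underthinking' could be quantified on a set of reasoning traces: it counts approach-switch phrases such as "Alternatively" (the marker mentioned above) and compares rough token usage between correct and incorrect answers. The extra marker phrases, the whitespace token count, and the toy traces are our own illustrative assumptions, not the researchers' actual methodology.

```python
# Hypothetical sketch (not the study's actual code): quantify "underthinking"
# by counting approach-switch phrases and rough token usage in reasoning traces.

# "Alternatively" is the marker mentioned in the article; the other phrases,
# the whitespace tokenizer, and the toy traces below are illustrative assumptions.
SWITCH_MARKERS = ("Alternatively", "Let's try another approach", "Wait, instead")


def trace_stats(trace: str) -> dict:
    """Return a crude token count and the number of strategy switches in one trace."""
    return {
        "tokens": len(trace.split()),  # whitespace split as a rough token proxy
        "switches": sum(trace.count(marker) for marker in SWITCH_MARKERS),
    }


def compare(correct_traces: list[str], incorrect_traces: list[str]) -> None:
    """Print how much more compute/switching incorrect answers used vs correct ones."""
    def avg(traces: list[str], key: str) -> float:
        return sum(trace_stats(t)[key] for t in traces) / max(len(traces), 1)

    for key in ("tokens", "switches"):
        good, bad = avg(correct_traces, key), avg(incorrect_traces, key)
        delta = f"{100 * (bad - good) / good:.0f}% more" if good > 0 else "n/a"
        print(f"{key}: correct avg {good:.1f}, incorrect avg {bad:.1f} ({delta})")


# Toy usage with made-up traces:
compare(
    correct_traces=["First, factor the expression. Then substitute and simplify."],
    incorrect_traces=[
        "Try factoring. Alternatively, expand everything. "
        "Alternatively, just test each option one by one."
    ],
)
```

On real data, the two averages would be computed over many traces grouped by whether the final answer was right, which is how a gap like the reported 225% extra tokens and 418% extra strategy switches would show up.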