
AI Trick to Improve Code 100x & It’s So Simple

Including the latest AI news of the week

In partnership with

Hello, AI Enthusiasts!

Welcome to FavTutor’s AI Recap! We’ve gathered the most important AI developments from the past 24 hours in one place, just for you.

In Today’s Newsletter: 😀

  • AI Trick to Improve Code 100x & It’s So Simple

  • Nvidia's new Cosmos models can understand physics through video

  • Training AI models using smaller models can be better

AI Research
🖥️ AI Trick to Improve Code 100x & It’s So Simple

A simple request in the prompt helped Claude 3.5 Sonnet create code that runs 100 times faster than its first attempt. BuzzFeed Senior Data Scientist Max Woolf reported the results of an experiment to see what would happen if you repeatedly asked an AI to write better code.

Insights for you:

  • A simple experiment with Claude 3.5 Sonnet demonstrates that repeatedly prompting the model with "write better code" can lead to a roughly 100-fold improvement in the execution time of Python code.

  • The LLM was asked to write Python code to find the difference between the largest and smallest numbers with a digit sum of 30 in a million random numbers between 1 and 100,000. The original code took 657 milliseconds to run. After getting this first solution, Max kept prompting it to "write better code." By the final iteration, runtime was down to just 6 milliseconds, a roughly 100x speedup.

  • However, from the fourth iteration onwards, Claude 3.5 Sonnet started incorporating enterprise-style features into the code without being explicitly instructed to.
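To make the benchmark concrete, here is a minimal sketch of the task described above, with a naive first attempt alongside a vectorized version of the kind that iteration tends to produce. The random seed and the exact optimization strategy are assumptions; this is not Woolf's actual code.

```python
import numpy as np

# Assumed setup: one million random integers in [1, 100_000].
rng = np.random.default_rng(0)
nums = rng.integers(1, 100_001, size=1_000_000)

def digit_sum_is_30(n: int) -> bool:
    """Naive per-number check, similar to a typical first LLM attempt."""
    return sum(int(d) for d in str(n)) == 30

# Naive approach: scan every number with a Python-level loop.
matching = [n for n in nums if digit_sum_is_30(n)]
naive_result = max(matching) - min(matching)

# Optimized approach: precompute digit sums for every possible value
# once (only 100_001 entries), then use NumPy fancy indexing to filter
# the million numbers without a Python loop.
table = np.array([sum(map(int, str(i))) for i in range(100_001)])
mask = table[nums] == 30
fast_result = int(nums[mask].max() - nums[mask].min())

assert naive_result == fast_result
print(fast_result)
```

The speedup comes from moving the per-number work out of the interpreter: the lookup table is built once, and the filtering over a million elements happens inside NumPy.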

Start learning AI in 2025

Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.

It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

Nvidia
☄️ Nvidia's new Cosmos models can understand physics through video

Nvidia has unveiled its video-based approach to generating training data for robots and self-driving cars using what it calls "world models" on its Cosmos platform. According to Nvidia CEO Jensen Huang, robotics may be approaching its "ChatGPT moment" thanks to these world models.

Insights for you:

  • Nvidia has introduced World Foundation Models on its Cosmos platform, which can generate photorealistic training data for robots and autonomous vehicles, potentially reducing the need for costly real-world testing.

  • The models were trained using an extensive dataset consisting of 9,000 trillion tokens extracted from 20 million hours of video material, enabling them to generate physics-based videos from various inputs, including text, images, video, and robot sensor or motion data.

  • They will be available under an open model license to accelerate the work of the robotics and AV community.

AI Research
👼 Training AI models using smaller models can be better

A joint team from Google Research and DeepMind has developed a training method called SALT (Small model Aided Large model Training) that significantly cuts training time. The approach is counterintuitive: it lets smaller AI models teach the bigger ones.

Insights for you:

  • Google researchers have developed a new method called SALT that speeds up the training of LLMs by 28% using smaller AI models as assistant teachers. The method works in two stages:

  • First, the large model learns from the smaller model through knowledge distillation, with the smaller model acting as a teacher on examples it already predicts well. In the second stage, the large model switches to conventional training.

  • The smaller model proves especially helpful on examples where it already makes solid predictions. On these simpler cases the larger model learns more quickly and reliably, before traditional training takes over for the more complex ones.
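The two-stage idea above can be sketched with toy loss functions: stage 1 uses a distillation loss that pulls the large "student" model toward the small teacher's softened output distribution, and stage 2 uses ordinary cross-entropy on hard labels. This is a minimal illustration of knowledge distillation in general, not Google's SALT implementation; the logits and temperature below are made up.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Stage 1: cross-entropy of the student against the teacher's
    temperature-softened distribution (soft targets)."""
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T) + 1e-12)
    return -(p_t * log_p_s).sum(axis=-1).mean()

def ce_loss(student_logits, labels):
    """Stage 2: conventional cross-entropy on hard labels."""
    log_p = np.log(softmax(student_logits) + 1e-12)
    return -log_p[np.arange(len(labels)), labels].mean()

# Hypothetical teacher logits for a batch of 4 examples, 3 classes.
teacher = np.array([[2.0, 0.1, -1.0], [1.5, 1.4, -2.0],
                    [0.2, 2.2, 0.1], [-1.0, 0.5, 2.5]])
student = np.zeros_like(teacher)  # untrained student: uniform output
labels = np.array([0, 0, 1, 2])

# For a uniform student, both losses equal log(3) ≈ 1.0986.
print(distill_loss(student, teacher))
print(ce_loss(student, labels))
```

In a real training loop the student would minimize `distill_loss` early on (where the teacher's predictions are reliable) and then switch to `ce_loss`, which mirrors the two-stage schedule described in the bullets above.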