DeepSeek AI will be Banned in the US
Including the latest AI news of the week
Hello, AI Enthusiasts!
Welcome to FavTutor’s AI Recap! We’ve gathered all the latest and most important AI developments from the past 24 hours in one place, just for you.
In Today’s Newsletter: 😀
China’s DeepSeek AI will be Banned in the US
Mistral AI Chatbot can now provide answers at a super fast speed
Anthropic's new AI system can block Jailbreak attempts
DeepSeek
⛔️ China’s DeepSeek AI will be Banned in the US
A new bipartisan bill seeks to ban Chinese AI chatbot DeepSeek from US government-owned devices to “prevent our enemy from getting information from our government.” Lawmakers said DeepSeek’s generative AI program could acquire data from U.S. users and store it for unspecified use by Chinese authorities.
Insights for you:
A bipartisan congressional bill is being introduced to ban China's DeepSeek artificial intelligence software from government devices.
Australia, Italy, Taiwan, and South Korea have already enacted similar bans over security concerns.
Texas has already become the first state in the United States to ban DeepSeek, citing escalating concerns about national security.
Mistral
⚡️ Mistral AI Chatbot can now provide answers at a super fast speed
Mistral has rolled out a feature upgrade to its AI assistant called “Flash Answers,” available to users on all tiers, which generates answers at up to 1,000 words per second. The company says this makes Le Chat the fastest AI assistant currently available.
Insights for you:
Mistral AI has introduced new features to Le Chat, including Flash Answers, which can generate responses of up to 1,000 words per second.
Flash Answers applies only to text-based requests; if an image is included, response times may be longer.
Other new features include a code interpreter for code execution, scientific analysis and visualization, and the integration of Black Forest Labs' Flux Ultra image generation model.
Anthropic
🔒️ Anthropic's new AI system can block Jailbreak attempts
Anthropic’s new security system, called Constitutional Classifiers, prevents people from tricking AI models into giving harmful responses. In testing, an unprotected Claude model let 86% of manipulation attempts through, while the protected version blocked more than 95%.
Insights for you:
Anthropic has developed a new security technology called "Constitutional Classifiers" which is designed to protect AI language models from manipulation attempts by detecting and blocking unauthorized input.
Across 3,000 hours of testing by 183 participants, no one managed to bypass all of the prototype's security measures.
Despite its robustness to jailbreaks, the prototype refused too many harmless queries and was computationally expensive to run.
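Conceptually, the approach wraps a base model with classifiers that screen both the incoming prompt and the outgoing response against a written set of rules. The sketch below illustrates that wrapper pattern only; all names are hypothetical stand-ins, not Anthropic's actual implementation, which uses trained classifier models rather than keyword matching.

```python
# Hypothetical sketch of the classifier-wrapper idea behind
# Constitutional Classifiers. The keyword check stands in for a
# trained classifier; the names are illustrative, not Anthropic's API.

def looks_harmful(text: str) -> bool:
    # Stand-in for a classifier scoring text against a "constitution"
    # of disallowed content categories.
    blocked_topics = ("synthesize nerve agent", "build a bomb")
    return any(topic in text.lower() for topic in blocked_topics)

def base_model(prompt: str) -> str:
    # Stand-in for the underlying, unprotected language model.
    return f"Here is a helpful answer to: {prompt}"

def guarded_model(prompt: str) -> str:
    # Input classifier: block manipulation attempts before generation.
    if looks_harmful(prompt):
        return "Request blocked by input classifier."
    response = base_model(prompt)
    # Output classifier: block harmful content that slipped through.
    if looks_harmful(response):
        return "Response blocked by output classifier."
    return response
```

Screening both sides is what makes the design robust: a jailbreak must fool the input filter, the model's own training, and the output filter at once. It also explains the reported downsides, since every query pays for two extra classifier passes and an over-cautious classifier rejects harmless requests.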