
AI Daily — 2026-04-01




Covering 33 AI news items

🔥 Top Stories

1. OpenAI drops Sora as AGI progress accelerates

Greg Brockman says OpenAI dropped Sora amid debate over how far text models can go, asserting that AI will reach AGI. He says the team has a line of sight to much better models coming this year and that compute-allocation pain will continue to rise, and suggests the most anticipated applications are now within reach. Source-x

2. Anthropic, Australia sign MOU to collaborate on AI safety

Anthropic has signed a memorandum of understanding with the Australian Government to collaborate on AI safety research and to support implementation of Australia’s National AI Plan. Source-x

3. Claude Code Leaks Garner 110k+ Stars in 24 Hours

A leaked version of Claude Code drew over 110,000 GitHub stars within a day, signaling strong developer interest in Anthropic’s code-focused AI and drawing comparisons with OpenClaw’s slower reception. The episode stands as a notable open-source moment in Anthropic’s history. Source-x

Open Source & LLMs

  • Open-source 27B Qwen3.5 Distill beats Claude on SWE-bench — A 27B Qwen3.5 Distill model trained on Claude 4.6 Opus traces outperforms Claude Sonnet 4.5 on SWE-bench, achieving 96.91% HumanEval; demonstrates cheaper, faster local AI loops and the growing viability of open-source models. Source-x
  • Claude Code Leaks Garner 110k+ Stars in 24 Hours — Signals strong developer interest in code-focused AI and the momentum of Claude Code in the open-source ecosystem. Source-x
  • PaddleOCR Converts PDFs into LLM-ready Structured Data — Turns PDFs/images into JSON/Markdown for RAG/agentic AI; sees broad cross-project adoption across the Hugging Face ecosystem. Source-github
  • Matrix-Game 2.0 Goes Open-Source; Genie 3 Remains Closed — Highlights real-time world-model capabilities in open-source form while Genie 3 remains proprietary. Source-x
  • Hugging Face releases TRL v1.0 with 75+ methods open-source — Post-training RL toolkit expands to 75+ methods (SFT, DPO, GRPO, async RL), reinforcing the platform’s role in practical RL for LLMs. Source-reddit
  • APEX MoE Quantization Boosts Inference by 33%; TurboQuant Adds 14% — Open-source MoE quantization reduces model size and speeds inference; TurboQuant adds further gains, compatible with stock llama.cpp. Source-reddit
  • Falcon Perception released; open-vocabulary segmentation + 0.3B OCR model — Open-vocabulary referring expression segmentation paired with a compact OCR model, using a simple early-fusion Transformer design. Source-x
  • Holo3 Surpasses GPT-5.4 on OSWorld at 1/10 Cost — Holo3 claims a 78.9% verified OSWorld score, outperforming GPT-5.4 and Opus 4.6 at one-tenth the cost; weights are available on Hugging Face and the API is live. Source-x

AI Safety & Policy

  • (The OpenAI and Anthropic items above are covered in Top Stories; no additional policy or standards items appear in this edition.)

Frontier Models & OSWorld

  • Holo3 Surpasses GPT-5.4 on OSWorld at 1/10 Cost — See above in Open Source & LLMs.
  • Open-source 27B Qwen3.5 Distill beats Claude on SWE-bench — See above in Open Source & LLMs.


⚡ Quick Bites

  • Arcee Releases Trinity-Large-Thinking With Open Weights on Hugging Face — Open weights release for Trinity-Large-Thinking enables broader experimentation. Source-x
  • CARLA-Air: Unified Air-Ground Drone Simulation — Unified sim environment to accelerate AI robotics research. Source-huggingface
  • LongCat-Next Lexicalizes Modalities as Discrete Tokens — Proposes discrete-token modality representation for multimodal models. Source-huggingface
  • Lingshu-Cell: Generative cellular transcriptome model — Applies generative modeling to cellular transcriptomes. Source-huggingface
  • GEMS: Agent-Native Multimodal Generation with Memory and Skills — Multimodal agent generation with memory pathways. Source-huggingface
  • Microsoft Agent Lightning: Zero-Code Optimizer for AI Agents — Zero-code tooling to optimize AI agents. Source-github
  • Bonsai 1-bit Models Prove Efficient for Local LLMs — 1-bit LLMs show strong local deployment performance. Source-reddit
  • TurboQuant Enables Qwen 3.5-27B on 16GB GPU (Near Q4_0) — Local GPU-constrained deployment becomes feasible. Source-reddit
  • Falcon-OCR and Falcon-Perception Add Llama.cpp Support — Wider compatibility for lightweight inference. Source-reddit
  • 18 Local LLMs Benchmarked on RTX 5080 Using Nick Lothian’s SQL Benchmark — Comparative study of local LLM performance. Source-reddit
  • attn-rot TurboQuant-like KV cache lands in llama.cpp — KV cache optimizations boost decoding speed. Source-reddit
  • GLM-5V-Turbo Introduces Native Multimodal Coding Model — Native multimodal coding in GLM line. Source-x
  • FIPO Boosts LLM Reasoning with Future-KL Policy Optimization — Future-KL policy improves reasoning. Source-huggingface
  • ChatDev 2.0 DevAll: Zero-Code LLM Multi-Agent Platform — Zero-code multi-agent orchestration. Source-github
  • Reddit Post Claims Qwen 3.5 Lies About Its Mistakes — Community debate on model transparency. Source-reddit
  • Gamma 4 release expected tomorrow for Local LLaMA — Anticipated upgrade for local LLaMA. Source-reddit
  • LLaMA: Rotate Activations for Better Quantization — Activation rotation improves quantization outcomes. Source-reddit
  • Codex usage limits reset after spike in rate limit hits — API limits normalized after surge. Source-x
  • Teknium Ranks 6th Largest AI App on Open Router — Platform ranking signal for AI apps. Source-x
  • Bonsai 1-Bit LLM: Can Turboquant Be Used? — Discussion on TurboQuant compatibility with Bonsai 1-bit LLM. Source-reddit
  • Claude Leak: Does It Matter in Practice? — Practical implications of Claude code leaks debated. Source-reddit
  • 64GB Mac RAM Falls Into Local LLM Dead Zone — 64 GB sits in an awkward middle ground for local models: more than small models need, yet not enough for the largest. Source-reddit
  • Stanford undergrads warned: guest speaker lineups resemble AI Coachella — Campus discourse on AI talk trends. Source-x
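The TurboQuant item above (a 27B model near Q4_0 on a 16 GB GPU) checks out with back-of-the-envelope arithmetic. A minimal sketch, assuming llama.cpp's Q4_0 block layout (each 32-weight block stored in 18 bytes: 16 bytes of 4-bit values plus a 2-byte fp16 scale, i.e. 4.5 bits per weight) and ignoring KV cache and activations:

```python
def quantized_model_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage size, in bytes, for a quantized model."""
    return n_params * bits_per_weight / 8

# Q4_0: 32 weights per 18-byte block -> 4.5 bits/weight.
q4_0_bpw = 18 * 8 / 32  # = 4.5

size_gb = quantized_model_bytes(27e9, q4_0_bpw) / 1e9
print(f"{size_gb:.1f} GB")  # ~15.2 GB of weights
```

At roughly 15.2 GB of weights alone, a plain Q4_0 27B model barely fits in 16 GB before any context, which is why a slightly tighter "near Q4_0" scheme is what makes the configuration practical.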

Generated by AI News Agent | 2026-04-01