Qwen News & Updates
Your central hub for AI news and updates on Qwen. We're tracking the latest articles, discussions, tools, and videos from the last 7 days.
All (22): 0 news · 21 posts · 1 tool · 0 videos
Trending AI Repos & Tools
* qwen3-omni-moe working (vision + audio input)
* qwen3-asr working: https://huggingface.co/ggml-org/Qwen3-Omni-30B-A3B-Thinking-GGUF
Community talk
My fresh experience with the new Qwen 3.6 35B A3B started on a long note.
My thoughts on Qwen and Gemma
Gemma 4 and Qwen 3.5 GGUFs: Detailed Analysis by oobabooga
Qwen3.6-35B-A3B solved coding problems Qwen3.5-27B couldn’t
Qwen 3.6 35B A3B Q4_K_M quant evaluation
RTX 5070 Ti + 9800X3D running Qwen3.6-35B-A3B at 79 t/s with 128K context, the --n-cpu-moe flag is the most important part.
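For readers unfamiliar with the flag mentioned in that post: a minimal llama.cpp launch along those lines might look like the sketch below. The model filename and flag values here are illustrative assumptions, not the poster's exact command; `--n-cpu-moe` is the real llama.cpp option that keeps MoE expert tensors for the first N layers in CPU RAM while the rest of the model runs on the GPU.

```shell
# -c 131072 gives the 128K context from the post; -ngl 99 offloads all
# layers to the GPU, while --n-cpu-moe 24 (value is a guess) pins the
# expert weights of the first 24 layers to CPU RAM so the attention and
# shared tensors still fit in consumer-class VRAM.
llama-server -m Qwen3.6-35B-A3B-Q4_K_M.gguf -c 131072 -ngl 99 --n-cpu-moe 24
```

The reason this flag matters for MoE models: only a few experts are active per token, so expert weights tolerate slower CPU memory far better than the always-hot attention layers do.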
qwen3.6 performance jump is real, just make sure you have it properly configured
Qwen 3.6 vs 6 other models across 5 agent frameworks on M3 Ultra
Qwen3.6 GGUF Benchmarks
Qwen 3.6 is the first local model that actually feels worth the effort for me
Qwen3.5-35B running well on RTX4060 Ti 16GB at 60 tok/s
Gemma4 26b & E4B are crazy good, and replaced Qwen for me!
Qwen3.5 35b is surely still one of the best local models (pulling above its weight) - More Details
Hot Experts in your VRAM! Dynamic expert cache in llama.cpp for 27% faster CPU +GPU token generation with Qwen3.5-122B-A10B compared to layer-based single-GPU partial offload
Updated Qwen3.5-9B Quantization Comparison
PSA: Having issues with Qwen3.5 overthinking? Give it a tool, and it can help dramatically.
DFlash speculative decoding on Apple Silicon: 4.1x on Qwen3.5-9B, now open source (MLX, M5 Max)
MiniMax-M2.7 vs Qwen3.5-122B-A10B for 96GB VRAM full offload?!
Qwen3.5-4B|Gemma4-E2B/E4B uncensored models comparison
Abliterlitics: Benchmark and Tensor Analysis Comparing Qwen 3/3.5 with HauhauCS / Heretic / Huihui models
When is Qwen 3.6 27B dropping? Didn’t it win the vote?