Qwen News & Updates
Your central hub for AI news and updates on Qwen. We're tracking the latest articles, discussions, tools, and videos from the last 7 days.
Community talk
My fresh experience with the new Qwen 3.6 35B A3B started on a long note.
My thoughts on Qwen and Gemma
Gemma 4 and Qwen 3.5 GGUFs: Detailed Analysis by oobabooga
Waiting for Qwen3.6-27B, I have no nails left...
QWEN3.6 + ik_llama is fast af
Qwen3.6 agent + Cisco switch: local NetOps AI actually works!
For chat and Q&A: which MoE model is better, Qwen 3.6 35B or Gemma 4 26B? (no coding or agents)
Qwen 3.6 + vLLM + Docker + 2x RTX 3090 setup, working great!
Qwen3.6-35B-A3B solved coding problems Qwen3.5-27B couldn’t
Qwen 3.6 35B A3B Q4_K_M quant evaluation
RTX 5070 Ti + 9800X3D running Qwen3.6-35B-A3B at 79 t/s with 128K context, the --n-cpu-moe flag is the most important part.
qwen3.6 performance jump is real, just make sure you have it properly configured
Qwen 3.6 vs 6 other models across 5 agent frameworks on M3 Ultra
Qwen3.6 GGUF Benchmarks
Qwen 3.6 is the first local model that actually feels worth the effort for me
Qwen3.5-35B running well on RTX4060 Ti 16GB at 60 tok/s
Gemma4 26b & E4B are crazy good, and replaced Qwen for me!
Qwen3.5 35b is sure still one the best local model (pulling above its weight) - More Details
Hot Experts in your VRAM! Dynamic expert cache in llama.cpp for 27% faster CPU+GPU token generation with Qwen3.5-122B-A10B compared to layer-based single-GPU partial offload
Updated Qwen3.5-9B Quantization Comparison
PSA: Having issues with Qwen3.5 overthinking? Give it a tool, and that can help dramatically.
Is anyone getting real coding work done with Qwen3.6-35B-A3B-UD-Q4_K_M on a 32GB Mac in opencode, claude code or similar?
Qwen3.5-4B|Gemma4-E2B/E4B uncensored models comparison
Abliterlitics: Benchmark and Tensor Analysis Comparing Qwen 3/3.5 with HauhauCS / Heretic / Huihui models
When is Qwen 3.6 27B dropping? Didn’t it win the vote?