Topic: Qwen
I just released a big update for my AI research agent, MAESTRO, with a new docs site showing example reports from Qwen 72B, GPT-OSS 120B, and more.
Qwen / Tongyi Lab launches GUI-Owl & Mobile-Agent-v3
After DeepSeek-V3, I feel like other MoE architectures are outdated. Why did Qwen choose a simple MoE architecture with softmax routing and an aux loss for their Qwen3 models when better architectures have been available for a while?
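For readers unfamiliar with the terms in the question: "softmax routing with an aux loss" refers to a router that softmaxes token-expert scores, dispatches each token to its top-k experts, and adds a load-balancing penalty so no expert is starved. Below is a minimal PyTorch sketch of that general pattern. The shapes and hyperparameters (128 experts, k=8, d_model=1024) are illustrative assumptions, not a claim about Qwen3's actual implementation.

```python
import torch
import torch.nn.functional as F

def topk_softmax_router(hidden, gate_weight, k=8):
    """Softmax routing: score every token against every expert, keep the top-k.

    hidden:      [num_tokens, d_model] token representations
    gate_weight: [num_experts, d_model] router projection
    """
    logits = hidden @ gate_weight.T                    # [num_tokens, num_experts]
    probs = F.softmax(logits, dim=-1)                  # full routing distribution
    topk_probs, topk_idx = torch.topk(probs, k, dim=-1)
    topk_probs = topk_probs / topk_probs.sum(-1, keepdim=True)  # renormalize over the k winners
    return probs, topk_probs, topk_idx

def load_balance_aux_loss(probs, topk_idx, num_experts):
    """Switch-Transformer-style auxiliary loss that discourages expert collapse.

    f = fraction of token-expert assignments each expert received
    p = mean router probability each expert received
    num_experts * sum(f * p) is minimized when both are uniform.
    """
    assign = F.one_hot(topk_idx, num_experts).float()  # [num_tokens, k, num_experts]
    f = assign.sum(dim=(0, 1)) / assign.sum()          # [num_experts], sums to 1
    p = probs.mean(dim=0)                              # [num_experts], sums to 1
    return num_experts * torch.sum(f * p)

# Toy usage: 16 tokens, d_model=1024, 128 experts, 8 active per token.
tokens = torch.randn(16, 1024)
gate = torch.randn(128, 1024)
probs, topk_probs, topk_idx = topk_softmax_router(tokens, gate, k=8)
aux = load_balance_aux_loss(probs, topk_idx, num_experts=128)
print(aux)  # ~1.0 when assignments are near-uniform; grows as routing collapses
```

The contrast the poster is drawing: DeepSeek-V3 replaces the softmax gate with sigmoid scoring and balances load with a learned per-expert bias instead of an auxiliary loss term, which is why the softmax-plus-aux-loss recipe reads as older-generation to them.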