Gen AI News & Updates
Your central hub for Gen AI news and updates. We're tracking the latest articles, discussions, tools, and videos from the last 7 days.
This week: 3 news articles, 0 community posts, 0 tools, 0 videos.

Intel Foundry secures contract to build Microsoft's Maia 2 next-gen AI processor on 18A/18A-P node, claims report — could be first step in ongoing partnership - Tom's Hardware

Microsoft has reportedly partnered with Intel Foundry to produce its next-generation AI processor, Maia 2, on Intel's 18A node.
Key Takeaways:
- The deal would give Microsoft access to a US-based chip supply chain, reducing its reliance on TSMC.
- Intel's 18A production starts before TSMC's competing N2 tech, potentially giving Intel a performance advantage.
- Microsoft's next-gen AI processor will likely use near-reticle-sized compute dies, which Intel's 18A process is on track to support.

Vals AI Report Shows Gen AI Tools Outperforming Lawyers on Legal Research Tasks - Law.com

Unlock Faster, Smarter Edge Models with 7x Gen AI Performance on NVIDIA Jetson AGX Thor
A defining strength of the NVIDIA software ecosystem is its commitment to continuous optimization. In August, NVIDIA Jetson AGX Thor launched, with up...

NVIDIA Jetson AGX Thor achieves up to a 7x increase in generative AI performance with recent software updates, enabling developers to run the latest AI models on the edge.
Key Takeaways:
- NVIDIA Jetson AGX Thor now offers up to a 7x increase in generative AI performance with recent software updates.
- The platform provides day 0 support for the latest generative AI models, including GPT-oss and multiple Nemotron models.
- Quantization and speculative decoding techniques can significantly accelerate LLM and VLM inference on Jetson Thor, with W4A16 providing the highest inference speeds and lowest memory footprint.
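To illustrate the W4A16 idea mentioned above, here is a minimal sketch (not NVIDIA's actual implementation, and `quantize_w4` is a hypothetical helper): weights are stored as 4-bit integers with one scale per output row, while activations stay at higher precision. Storing weights in 4 bits roughly quarters weight memory versus FP16, which is where the memory-footprint and bandwidth savings come from.

```python
# Hypothetical sketch of W4A16-style weight-only quantization:
# weights -> int4 with a per-row scale; activations remain full precision.
import numpy as np

def quantize_w4(weights: np.ndarray):
    """Symmetric per-row 4-bit quantization (int4 range [-8, 7])."""
    # One scale per output row, chosen so the largest weight maps to 7.
    scales = np.abs(weights).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(weights / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate weights for use in a matmul with FP activations."""
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, s = quantize_w4(w)
w_hat = dequantize(q, s)
# Rounding error per element is at most half a quantization step (scale / 2).
err = np.abs(w - w_hat).max()
```

In practice, production kernels keep the weights packed in 4-bit form and fuse dequantization into the matmul rather than materializing `w_hat`; this sketch only shows the numerical scheme.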