The FLUX.1 Kontext [dev] NIM microservice is now available for download, enabling faster generative AI workflows and optimized performance on NVIDIA RTX AI PCs.
Key Takeaways:
- The FLUX.1 Kontext [dev] NIM microservice simplifies model deployment, curation, and adaptation for AI applications.
- Quantization reduces the model size from 24GB to 12GB (FP8) and 7GB (FP4), delivering significant performance gains (see the rough size estimate after this list).
- The NIM microservice is optimized for RTX AI PCs and can be downloaded and run with one click through the ComfyUI NIM nodes.
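
The quoted sizes line up with simple bytes-per-weight arithmetic. Below is a minimal sketch, assuming a roughly 12-billion-parameter model (that parameter count is an assumption for illustration, not a figure from this post), showing how the precision of the weights maps to the approximate on-disk footprint:

```python
# Back-of-the-envelope estimate of how quantization shrinks the weight footprint.
# PARAMS is an assumed parameter count for FLUX.1 Kontext [dev], used only for
# illustration; actual file sizes also depend on which layers stay in higher precision.
PARAMS = 12e9  # assumed ~12B parameters (hypothetical)

def weights_gb(bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes for a given precision."""
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bits in [("FP16/BF16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{name:>9}: ~{weights_gb(bits):.0f} GB")

# Prints roughly 24, 12, and 6 GB. The 7GB FP4 figure quoted above is slightly
# larger because quantized checkpoints typically keep some layers at higher precision.
```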