Play. Create. Manipulate.
FLUX.1 Kontext models go beyond text-to-image. Unlike previous flow models that only allow for purely text-based generation, FLUX.1 Kontext models also understand and can create from existing images. With FLUX.1 Kontext you can modify an input image via simple text instructions, enabling flexible and instant image editing - no need for finetuning or complex editing workflows. The core capabilities of the FLUX.1 Kontext suite are:
Character consistency
Preserve unique elements of an image, such as a reference character or object, across multiple scenes and environments.
Local editing
Make targeted modifications of specific elements in an image without affecting the rest.
Style reference
Generate novel scenes while preserving unique styles from a reference image, directed by text prompts.
Interactive speed
Iterate at minimal latency for both image generation and editing.
FLUX.1 Kontext allows you to iteratively add more instructions and build on previous edits, refining your creation step by step with minimal latency while preserving image quality and character consistency, as sketched in the example below.
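The sketch below shows what such an iterative editing loop could look like against an HTTP image-editing API: submit an input image plus a text instruction, poll for the result, and feed each output back in as the next input. The base URL, endpoint paths, header names, and response fields here are assumptions for illustration only; consult the official FLUX.1 Kontext API documentation for the actual interface.

```python
import base64
import time
import requests

API_URL = "https://api.bfl.ai"   # assumed base URL, check the official docs
API_KEY = "your-api-key-here"    # placeholder credential

def edit_image(image_bytes: bytes, instruction: str) -> bytes:
    """Submit one Kontext-style edit: an input image plus a text instruction."""
    response = requests.post(
        f"{API_URL}/v1/flux-kontext-pro",   # assumed endpoint name
        headers={"x-key": API_KEY},         # assumed auth header
        json={
            "prompt": instruction,
            "input_image": base64.b64encode(image_bytes).decode("utf-8"),
        },
    )
    response.raise_for_status()
    task_id = response.json()["id"]

    # Poll until the edit is ready, then download the resulting image.
    while True:
        result = requests.get(
            f"{API_URL}/v1/get_result",     # assumed polling endpoint
            headers={"x-key": API_KEY},
            params={"id": task_id},
        ).json()
        if result.get("status") == "Ready":
            return requests.get(result["result"]["sample"]).content
        time.sleep(1)

# Iterative editing: each output becomes the next input, so edits build on
# one another while the character and style stay consistent.
image = open("portrait.png", "rb").read()
image = edit_image(image, "Put the character in a snowy mountain scene")
image = edit_image(image, "Make it nighttime with an aurora in the sky")
open("edited.png", "wb").write(image)
```

Because each call returns a full image, the same loop works for a single targeted edit or a long chain of refinements.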
Get started with FLUX.1 Kontext
Redefine what's possible with consistent, context-aware image generation
FLUX.1 Kontext [max]
Maximum Performance at High Speed
Our new premium model brings maximum performance across the board: greatly improved prompt adherence and typography generation meet premium editing consistency, without compromising on speed.
FLUX.1 Kontext [pro]
A pioneer for fast, iterative image editing
A unified model delivering local editing, generative modifications, and text-to-image generation in FLUX.1 quality. It processes text and image inputs for precise regional edits or full scene transformations at high speed, enabling iterative workflows that maintain character consistency across multiple editing turns.
FLUX.1 Kontext [dev]
An open-weights, distilled variant of FLUX.1 Kontext, our most advanced generative image editing model.