
FLUX.1 Kontext

238 points | 9 hours ago | bfl.ai
minimaxir7 hours ago

Currently am testing this out (using the Replicate endpoint: https://replicate.com/black-forest-labs/flux-kontext-pro). Replicate also hosts "apps" with examples using FLUX Kontext for some common use cases of image editing: https://replicate.com/flux-kontext-apps

It's pretty good: the quality of the generated images is similar to that of GPT-4o image generation if you're using it for simple image-to-image generations. Generation is speedy at ~4 seconds per generation.

Prompt engineering outside of the examples used on this page is a little fussy and I suspect will evolve over time. Changing styles or specific aspects does indeed work, but the more specific you get, the more it tends to ignore the specifics.
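
For reference, calling it through the Replicate Python client looks roughly like the sketch below; the input field names ("input_image", "aspect_ratio") are my guess from the model page, so check the schema before copying:

  # Rough sketch: edit an existing image with flux-kontext-pro via Replicate.
  # Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
  # The field names inside `input` are assumptions from the model page.
  import replicate

  output = replicate.run(
      "black-forest-labs/flux-kontext-pro",
      input={
          "prompt": "Change the car to red, keep everything else the same",
          "input_image": "https://example.com/source.jpg",  # image to edit (assumed field name)
          "aspect_ratio": "match_input_image",              # assumed option name
      },
  )
  print(output)  # URL / file handle of the edited image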

a2128 6 hours ago

It seems more accurate than 4o image generation in terms of preserving original details. If I give it my 3D animal character and ask for a minor change like adjusting the lighting, 4o will completely mangle the face of my character and subtly change the body and other details. This Flux model keeps the visible geometry almost perfectly the same even when asked to significantly change the pose or lighting.

echelon3 hours ago

gpt-image-1 (aka "4o") is still the most useful general purpose image model, but damn does this come close.

I'm deep in this space and feel really good about FLUX.1 Kontext. It fills a gap that badly needed filling, and it makes sure that OpenAI / Google aren't the runaway victors of images and video.

Prior to gpt-image-1, the biggest problems in images were:

  - prompt adherence
  - generation quality
  - instructiveness (e.g. "put the sign above the second door")
  - consistency of styles, characters, settings, etc. 
  - deliberate and exact intentional posing of characters and set pieces
  - compositing different images or layers together
  - relighting
Fine tunes, LoRAs, and IPAdapters fixed a lot of this, but they were a real pain in the ass. ControlNets solved for pose, but it was still awkward and ugly. ComfyUI was an orchestrator of this layer of hacks that kind of got the job done, but it was hacky and unmaintainable glue. It always felt like a fly-by-night solution.

OpenAI's gpt-image-1 solved all of these things with a single multimodal model. You could throw out ComfyUI and all the other pre-AI garbage and work directly with the model itself. It was magic.

Unfortunately, gpt-image-1 is ridiculously slow, insanely expensive, and highly censored (you can't use a lot of copyrighted characters or celebrities, and a lot of totally SFW prompts are blocked). It can't be fine-tuned, so you're stuck with the "ChatGPT style" and what the community calls the "piss filter" (perpetually yellowish images).

And the biggest problem with gpt-image-1: because it puts image and text tokens in the same space to manipulate, it can't retain the exact pixel-level structure of reference images. Because of that, it cannot function as an inpainting/outpainting model whatsoever. You can't use it to edit existing images if the original image matters.

Even with those flaws, gpt-image-1 was a million times better than Flux, ComfyUI, and all the other ball of wax hacks we've built up. Given the expense of training gpt-image-1, I was worried that nobody else would be able to afford to train the competition and that OpenAI would win the space forever. We'd be left with only hyperscalers of AI building these models. And it would suck if Google and OpenAI were the only providers of tools for artists.

Black Forest Labs just proved that wrong in a big way! While this model doesn't do everything as well as gpt-image-1, it's within the same order of magnitude. And it's ridiculously fast (10x faster) and cheap (10x cheaper).

Kontext isn't as instructive as gpt-image-1. You can't give it multiple pictures and ask it to copy characters from one image into the pose of another image. You can't have it follow complex compositing requests. But it's close, and that makes it immediately useful. It fills a real gap in the space.

Black Forest Labs did the right thing by developing this instead of a video model. We need much more innovation in the image model space, and we need more gaps to be filled:

  - Fast
  - Truly multimodal like gpt-image-1
  - Instructive 
  - Posing built into the model. No ControlNet hacks. 
  - References built into the model. No IPAdapter, no required character/style LoRAs, etc. 
  - Ability to address objects, characters, mannequins, etc. for deletion / insertion. 
  - Ability to pull sources from across multiple images with or without "innovation" / change to their pixels.
  - Fine-tunable (so we can get higher quality and precision) 
 
Something like this that works in real time would literally change the game forever.

Please build it, Black Forest Labs.

All of those feature requests aside, Kontext is a great model. I'm going to be learning it over the next few weeks.

Keep at it, BFL. Don't let OpenAI win. This model rocks.

Now let's hope Kling or Runway (or, better, someone who does open weights -- BFL!) develops a Veo 3 competitor.

I need my AI actors to "Meisner", and so far only Veo 3 comes close.

cuuupid7 hours ago

Honestly love Replicate for always being up to date. It's amazing that not only do we live in a time of rapid AI advancement, but that every new research-grade model is immediately available via API and can be used in prod, at scale, no questions asked.

There's something to be said for distributors like Replicate etc. that are adding an exponent to the impact of these model releases.

meowface4 hours ago

I have no affiliation with either company but from using both a bunch as a customer: Replicate has a competitor at https://fal.ai/models and FAL's generation speed is consistently faster across every model I've tried. They have some sub-100 ms image gen models, too.

Replicate has a much bigger model selection. But for every model that's on both, FAL is pretty much "Replicate but faster". I believe pricing is pretty similar.

bfirsh3 hours ago

Founder of Replicate here. We should be on par or faster for all the top models, e.g. we have the fastest FLUX [dev]: https://artificialanalysis.ai/text-to-image/model-family/flu...

If something's not as fast let me know and we can fix it. ben@replicate.com

echelon3 hours ago

Hey Ben, thanks for participating in this thread. And certainly also for all you and your team have built.

Totally frank and possibly awkward question, you don't have to answer: how do you feel about a16z investing in everyone in this space?

They invested in you.

They're investing in your direct competitors (Fal, et al.)

They're picking your downmarket and upmarket (Krea, et al.)

They're picking consumer (Viggle, et al.), which could lift away the value.

They're picking the foundation models you consume. (Black Forest Labs, Hedra, et al.)

They're even picking the actual consumers themselves. (Promise, et al.)

They're doing this at Series A and beyond.

Do you think they'll try to encourage dog-fooding or consolidation?

The reason I ask is because I'm building adjacent or at a tangent to some of this, and I wonder if a16z is "all full up" or competitive within the portfolio. (If you can answer in private, my email is [my username] at gmail, and I'd be incredibly grateful to hear your thoughts.)

Beyond that, how are you feeling? This is a whirlwind of a sector to be in. There's a new model every week it seems.

Kudos on keeping up the pace! Keep at it!

echelon4 hours ago

A16Z invested in both. It's wild. They've been absolutely flooding the GenAI market for images and videos with investments.

They'll have one of the victors, whoever it is. Maybe multiple.

minimaxir6 hours ago

That's less about the downstream distributors and more about the model developers themselves realizing that day-one accessibility of the models is important for getting community traction. Locking a model exclusively behind their own API won't work anymore.

Llama 4 was another recent case where the developers (Meta) explicitly worked with downstream distributors to get it working on day 1.

skipants7 hours ago

> Generation is speedy at ~4 seconds per generation

May I ask on which GPU & VRAM?

edit: oh unless you just meant through huggingface's UI

zamadatix5 hours ago

The open weights variant is "coming soon" so the only option is hosted right now.

minimaxir7 hours ago

It is through the Replicate UI listed above, which goes through Black Forest Labs's infra, so you would likely get the same results from their API.
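
For the curious, going through BFL's API directly is a submit-then-poll flow. The sketch below is reconstructed from memory of their docs, so treat the endpoint paths, the "x-key" header, and the JSON field names as assumptions and verify against the official reference:

  # Sketch of calling BFL's hosted API directly: submit a job, then poll for the result.
  # Endpoint paths, header name, and field names are assumptions; check the API docs.
  import base64
  import time
  import requests

  API = "https://api.bfl.ai"
  HEADERS = {"x-key": "YOUR_BFL_API_KEY"}

  with open("input.jpg", "rb") as f:
      image_b64 = base64.b64encode(f.read()).decode()

  job = requests.post(
      f"{API}/v1/flux-kontext-pro",
      headers=HEADERS,
      json={"prompt": "Make it a watercolor painting", "input_image": image_b64},
  ).json()

  while True:
      res = requests.get(f"{API}/v1/get_result", headers=HEADERS,
                         params={"id": job["id"]}).json()
      if res.get("status") == "Ready":
          print(res["result"]["sample"])  # URL of the edited image
          break
      time.sleep(1)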

mdp20214 hours ago

Is input restricted to a single image? If you could use more images as input, you could do prompts like "Place the item in image A inside image B" (e.g. "put the character of image A in the scenery of image B"), etc.

carlosdp4 hours ago

There's an experimental "multi" mode that you can input multiple images to.

echelon4 hours ago

Fal has the multi-image interface to test against (Replicate might as well; I haven't checked yet).
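
Roughly, the multi-image call through fal's Python client looks like the sketch below; the endpoint id and argument names are my guess from their model pages, so double-check before using:

  # Sketch of fal's multi-image Kontext mode via fal_client
  # (`pip install fal-client`, FAL_KEY in the environment).
  # The endpoint id and argument names below are guesses, not confirmed.
  import fal_client

  result = fal_client.subscribe(
      "fal-ai/flux-pro/kontext/max/multi",  # assumed endpoint id for the "multi" mode
      arguments={
          "prompt": "Put the character from the first image into the scene from the second",
          "image_urls": [
              "https://example.com/character.png",
              "https://example.com/scenery.png",
          ],
      },
  )
  print(result)  # typically a dict containing the generated image URL(s)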

THIS MODEL ROCKS!

It's no gpt-image-1, but it's ridiculously close.

There isn't going to be a moat in images or video. I was so worried Google and OpenAI would win creative forever. Not so. Anyone can build these.

vunderba7 hours ago

I'm debating whether to add the FLUX Kontext model to my GenAI image comparison site. The Max variant definitely scores higher in prompt adherence, nearly doubling Flux 1.dev's score, but it still falls short of OpenAI's gpt-image-1, which (visual fidelity aside) is sitting at the top of the leaderboard.

I liked keeping Flux 1.D around just to have a nice baseline for local GenAI capabilities.

https://genai-showdown.specr.net

Incidentally, we did add the newest release of Hunyuan's Image 2.0 model, but as expected of a real-time model, it scores rather poorly.

EDIT: In fairness to Black Forest Labs, this model definitely seems to be more focused on editing capabilities to refine and iterate on existing images rather than on strict text-to-image creation.

Klaus23 6 hours ago

Nice site! I have a suggestion for a prompt that I could never get to work properly. It's been a while since I tried it, and the models have probably improved enough that it should be possible now.

  A knight with a sword in hand stands with his back to us, facing down an army. He holds his shield above his head to protect himself from the rain of arrows shot by archers visible in the rear.
I was surprised at how badly the models performed. It's a fairly iconic scene, and there's more than enough training data.

vunderba6 hours ago

Some of these samples are rather cherry-picked. Has anyone actually tried the professional headshot app from the "Kontext Apps"?

https://replicate.com/flux-kontext-apps

I've thrown half a dozen pictures of myself at it and it just completely replaced me with somebody else. To be fair, the final headshot does look very professional.

mac-mc 2 hours ago

I tried a professional headshot prompt on the Flux playground with a tired gym selfie, and it kept me as myself: same expression, sweat, skin tone and all. It was like a background swap. Then I expanded the prompt to "make a professional headshot version of this image that would be good for social media, make the person smile, have a good pose and clothing, clean non-sweaty skin, etc." and it stayed pretty similar, except that it swapped the clothing and gave me an awkward smile, which may be accurate for those kinds of things if you think about it.

minimaxir6 hours ago

Is the input image aspect ratio the same as the output aspect ratio? In some testing I've noticed weirdness when there is a forced shift.

doctorpangloss4 hours ago

Nobody has solved the scientific problem of identity preservation for faces in one shot. Nobody has even solved hands.

emmelaich3 hours ago

I tried making a realistic image from a cartoon character but aged. It did very well, definitely recognisable as the same 'person'.

liuliu7 hours ago

The implementation seems straightforward (very similar to everyone else's: HiDream-E1, ICEdit, DreamO, etc.); the magic is in the data curation, the details of which are only lightly shared.

krackers5 hours ago

I haven't been following image generation models closely. At a high level, is this new Flux model still diffusion-based, or have they moved to block autoregressive generation (possibly with diffusion for upscaling) similar to 4o?

liuliu4 hours ago

Diffusion-based. There is no point in moving to autoregressive if you are not also training a multimodal LLM, which these companies are not doing.

rvz8 hours ago

Unfortunately, nobody wants to read the report; what they are really after is downloading the open-weight model.

So they can take it and run with it. (No contributing back either).

anjneymidha7 hours ago

"FLUX.1 Kontext [dev]

Open-weights, distilled variant of Kontext, our most advanced generative image editing model. Coming soon" is what they say on https://bfl.ai/models/flux-kontext

sigmoid10 7 hours ago

Distilled is a real downer, but I guess those AI startup CEOs still gotta eat.

dragonwriter6 hours ago

The open community has done a lot with the open-weights distilled models from Black Forest Labs already, one of the more radical efforts being Chroma: https://huggingface.co/lodestones/Chroma

refulgentis7 hours ago

I agree that the gooning crew drives a lot of open-model downloads.

On HN, generally, people are more into technical discussion and/or productizing this stuff. Here it seems déclassé to mention the gooner angle; it's usually euphemized as intense reactions about refusing to download a model, involving the word "censor".

ttoinou7 hours ago

How knowledgeable do you need to be to tweak and train this locally?

I spent two days trying to train a LoRA customization on top of Flux 1 dev on Windows with my RTX 4090 but couldn't make it work, and I don't know how deep into this topic and the Python libraries I need to go. Are there script kiddies in this game, or only experts?

minimaxir7 hours ago

The open-source model is not released yet, but it definitely won't be any easier than training a LoRA on Flux 1 Dev.

ttoinou7 hours ago

Damn, I’m just too lazy to learn skills that will be outdated in 6 months

johnnyApplePRNG7 hours ago

And yet you're not too lazy to explain your laziness in more than the 5 words it required on a social media site. Hm.

throwaway675117 7 hours ago

Just use https://github.com/bghira/SimpleTuner

I was able to run this script to train a LoRA myself without spending any time learning the underlying Python libraries.

ttoinou7 hours ago

Well thank you I will test that

dagaci4 hours ago

SimpleTuner is dependent on Microsoft's DeepSpeed, which doesn't work on Windows :)

So you're probably better off using AI Toolkit: https://github.com/ostris/ai-toolkit

Flemlo7 hours ago

It's normally easy to find it preconfigured through ComfyUI.

Sometimes it's behind a Patreon if it's from some YouTuber.

3abiton 7 hours ago

> I spent two days trying to train a LoRa customization on top of Flux 1 dev on Windows with my RTX 4090 but can’t make

Windows is mostly the issue; to really take advantage, you will need Linux.

ttoinou7 hours ago

Even using WSL2 with Ubuntu isn't good enough?

fagerhult4 hours ago

I vibed up a little chat interface https://kontext-chat.vercel.app/

ilaksh6 hours ago

Anyone have a guess as to when the open dev version gets released? More like a week or a month or two I wonder.

amazingamazing7 hours ago

Don't understand the remove-from-face example. Without other pictures showing the person's face, it's just using some stereotypical image, no?

sharkjacobs7 hours ago

There's no "truth" it's uncovering, no real face; these are all just generated images, yes.

amazingamazing7 hours ago

I get that, but usually you would have two inputs: the reference "truth" and the target that is to be manipulated.

nine_k7 hours ago

Not necessarily. "As you may see, this is a Chinese lady. You have seen a number of Chinese ladies in your training set. Imagine the face of this lady so that it won't contradict the fragment visible on the image with the snowflake". (Damn, it's a pseudocode prompt.)

ilaksh6 hours ago

Look more closely at the example. Clearly there is an opportunity for inference with objects that only partially obscure the face.

vessenes7 hours ago

Mm, depends on the underlying model and where it is in the pipeline; identity models are pretty sophisticated at interpolating faces from partial geometry.

Scaevolus7 hours ago

The slideshow appears to be glitched on that first example. The input image has a snowflake covering most of her face.

jorgemf7 hours ago

I think they are doing that because with real images the model changes the face. That problem is removed if the initial image doesn't show the face.

pkkkzip5 hours ago

They chose Asian traits that Western beauty standards fetishize but that in Asia wouldn't be taken seriously at all.

I notice American text2image models tend to generate less attractive and darker-skinned humans, whereas Chinese text2image models generate more attractive and lighter-skinned humans.

I think this is another area where Chinese AI models shine.

viraptor4 hours ago

> They chose Asian traits that Western beauty standards fetishize but that in Asia wouldn't be taken seriously at all.

> whereas Chinese text2image models generate more attractive and lighter-skinned humans.

Are you saying they have chosen Asian traits that Asian beauty standards fetishize but that in the West wouldn't be taken seriously at all? ;) There is no ground truth here that would be more correct one way or the other.

throwaway314155 5 hours ago

> I notice American text2image models tend to generate less attractive and darker-skinned humans, whereas Chinese text2image models generate more attractive and lighter-skinned humans

This seems entirely subjective to me.

turnsout3 hours ago

Wow, that is some straight-up overt racism. You should be ashamed.

nullbyte8 hours ago

Hopefully they list this on Hugging Face for the open-source community. It looks like a great model!

vunderba7 hours ago

From their site, they will be releasing the [dev] version - which is a distilled variant - so quality and adherence will unfortunately suffer.

minimaxir7 hours ago

The original open-source Flux releases were also on Hugging Face.

vessenes7 hours ago

Pretty good!

I like that they are testing face and scene coherence with iterated edits -- major pain point for 4o and other models.

jnettome7 hours ago

I'm trying to log in to evaluate this, but the Google auth redirects me back to localhost:3000.

andybak7 hours ago

Nobody tested that page on mobile.

layer8 6 hours ago

> show me a closeup of…

Investigators will love this for “enhance”. ;)

bossyTeacher4 hours ago

I wonder if this is using a foundation model or a fine-tuned one.

fortran77 8 hours ago

It still has no idea what a Model F keyboard looks like. I tried prompts and editing, and got things that weren't even close.

stephen37 5 hours ago

I got it working when I provided an image of a Model F keyboard. This is the strength of the model: provide it an input image and it will do some magic.

Disclaimer: I work for BFL

yorwba8 hours ago

You mean when you edit a picture of a Model F keyboard and tell it to use it in a scene, it still produces a different keyboard?

refulgentis7 hours ago

Interesting, would you mind sharing? (imgur allows free image uploads, quick drag and drop)

I do have a "works on my machine"* :) -- prompt "Model F keyboard", all settings disabled, on the smaller model, seems to have substantially more than no idea: https://imgur.com/a/32pV6Sp

(Google Images comparison included to show in-the-wild "Model F keyboard", which may differ from my/your expected distribution)

* my machine, being, https://playground.bfl.ai/ (I have no affiliation with BFL)

jsheard7 hours ago

Your generated examples just look like generic modern-day mechanical keyboards; they don't have any of the Model F's defining features.

AStonesThrow7 hours ago

Your Google Images search indicates the original problem of models training on junk misinformation online. If AI scrapers are downloading every photo that's associated with "Model F Keyboard" like that, the models have no idea what is an IBM Model F, or its distinguishing characteristics, and what is some other company's, and what is misidentified.

https://commons.wikimedia.org/wiki/Category:IBM_Model_F_Keyb...

Specifying "IBM Model F keyboard" and placing it in quotation marks improves the search. But the front page of the search is tip-of-the-iceberg compared to whatever the model's scrapers ingested.

Eventually you may hit trademark protections. Reproducing a brand-name keyboard may be as difficult as simulating a celebrity's likeness.

I'm not even sure what my friends look like on Facebook, so it's not clear how an AI model would reproduce a brand-name keyboard design on request.

refulgentis6 hours ago

I agree with you vehemently.

Another way of looking at it: insistence on complete verisimilitude in an image generator is fundamentally in error.

I would argue it's even undesirable. I don't want to live in a world where a 45-year-old keyboard that was only out for 4 years is readily imitated in every microscopic detail.

I also find myself frustrated, and asking myself why.

First thought that jumps in: it's very clear that it is in error to say the model has no idea, modulo some independent run that's dramatically different from the only one offered in this thread.

Second thought: if we're doing "the image generators don't get details right", there would seem to be a lot simpler examples than OP's, and it is better expressed that way - I assume it wasn't expressed that way because it sounds like dull conversation, but it doesn't have to be!

Third thought as to why I feel frustrated: I feel like I wasted time here - no other demos showing it's anywhere close to "no idea", it's completely unclear to me what's distinctive about an IBM Model F keyboard, and the Wikipedia images are worse than Google's, AFAICT.

SV_BubbleTime8 hours ago

A single-shot LoRA effect, if it works as well as their cherry-picked examples suggest, will be a game changer for editing.

As with almost any AI release though, unless it’s open weights, I don’t care. The strengths and weaknesses of these models are apparent when you run them locally.

ttoinou7 hours ago

They're not apparent when you run them online?