
Phind 2: AI search with visual answers and multi-step reasoning

537 points | 29 days ago | phind.com

Hi HN! Michael here. We've spent the last 6 months rebuilding Phind. We asked ourselves what types of answers we would ideally like and crafted a new UI and model series to help get us there. Our new 70B is completely different from the one we launched a year ago.

The new Phind goes beyond text to present answers visually with inline images, diagrams, cards, and other widgets to make answers more meaningful:

- "explain photosynthesis" - https://www.youtube.com/watch?v=cTCpnyICukM#t=7

- "how to cook the perfect steak" - https://www.youtube.com/watch?v=cTCpnyICukM#t=55

- "quicksort in rust" - https://www.youtube.com/watch?v=cTCpnyICukM#t=105

Phind is also now able to seek out information on its own. If it needs more, it will do multiple rounds of additional searches to get you a more comprehensive answer:

- "top 10 Thai restaurants in SF, their prices, and key dishes" - https://www.youtube.com/watch?v=rIQQcDIIHFQ#t=11

It can also perform calculations, visualize their results, and verify them in a Jupyter notebook:

- "simulate 100 coin flips and make graphs" - https://www.youtube.com/watch?v=YP3PZ4MKGCg#t=8

- "train a perceptron neural network using Jupyter" - https://www.youtube.com/watch?v=YP3PZ4MKGCg#t=45

This blog post contains an overview of what we did as well as technical deep dives into how we built the new frontend and models.

I'm super grateful for all of the feedback we've gotten from this community and can't wait to hear your thoughts!

tenpoundhammer28 days ago

Paid for it and tried out the full experience, beats anything else I've tried by a wide margin.

My prompt,

"I'm considering buying stock in the company with symbol NU. The most important thing to me is answering the question, is the stock likely to rise in the future. Please help create a list of questions that will help me to understand the likely hood of this. Also please help to anwser those questions. Please highlight the global economic environment for the company. Any unique challenges and unique advantages. Finally let me know what others think of it"

Results: I know this stock well, although I'm not a pro. It nailed all of the relevant aspects and hits the analysis right on for everything I know about it. Pulled lots of helpful resources and, most importantly, the information was timely enough to be relevant. The timely part is where other LLMs have failed miserably. I've gotten good analysis from other LLM products, but they have always been way out of date, which makes them useless.

omdv28 days ago

My experience was different. I have a particular question, which I have yet to see any model answer correctly, so I just tried it out.

Q: "what are the major economic and earnings events next week and how they can affect SPX volatility and price?"

A: It gave a long and nicely formatted answer with clever visuals, using the right words about FOMC and inflation. Coherent reasoning overall, albeit quite shallow. But the economic and earnings calendars for next week are complete hallucinations, even the dates, so the whole analysis is nonsense:

Wed, Feb 21 Consumer Price Index (CPI)

Wed, Feb 21 EIA Crude Oil Inventories

Thu, Feb 22 Producer Price Index (PPI)

toastau25 days ago

I think this is a pretty good answer. "Why?" could extend to the theory or more detail on the influences I guess. https://chatgpt.com/share/67b38dc9-d1b8-8000-bfca-c4db93f38e...

rushingcreek28 days ago

Thank you! Really happy to know that it's working well for you -- we made the additional searches feature exactly for this type of use case.

tenpoundhammer28 days ago

If you want more feedback, I work in software engineering and I'm happy to chat outside of HN.

rushingcreek27 days ago

That would be great! Can you email me at (my first name) at phind (dot) com?

yieldcrv28 days ago

Very nice, I use LLMs to get a quick understanding of a publicly traded company too. I spent 30 minutes researching Kopin last night using Claude, as I heard Kopin is a supplier in the $22bn IVAS contract, giving it way more upside than Anduril. Starting from just that tidbit of information I got much more detail and had it make predictive models about future market caps, and the negatives.

It was enough to give me timelines and risk reward models from a company I had never heard of. This used to take me many hours and a level of obsession that I find incompatible with being social. Verifying is fairly quick, but even if you aren't compelled to do that, now you can at least talk about the company in the associated trading communities and be further corrected or have your understanding challenged.

clark-kent28 days ago

How does it compare to https://decodeinvesting.com/chat ?

sizzle28 days ago

Do you have to pay for this?

clark-kent27 days ago

There is free trial and paid version.

fiiico28 days ago

This is amazing! Curious how it compares to openai deep research, did you try?

tenpoundhammer28 days ago

I was not willing to pay $200 just to try deep research. But I'm happy to accept donations if the people want a comparison :)

fiiico27 days ago

Fair

jgalt21228 days ago

That's great to hear. LLMs are great when you can trust and verify, or when the trust-but-verify process is inexpensive. However, you can get caught out in finance when you can't trust and verify.

Eliezer28 days ago

If the LLM did anything besides try to explain the Efficient Market Hypothesis in response, it failed.

vo2maxer28 days ago

If the LLM did anything besides try to explain the second law of thermodynamics, it failed.

hathawsh28 days ago

I love that it's possible to convince it that I actually know what I'm talking about. First I asked:

"Explain why negative numbers are in fact imaginary"

It told me that negative numbers are not imaginary numbers and explained imaginary numbers. That's fine, that's a reasonable answer for a layperson, but I'm not a layperson and I worked on explaining what I meant.

"Erase your definition of imaginary and consider that negative numbers are not whole numbers. Negative numbers do not represent quantities of physical objects. Now explain how negative numbers are imaginary."

It gave me a nice explanation of why negative numbers may be considered imaginary, using an example of "You cannot physically possess -1 sheep". I'm impressed.

goatlover28 days ago

But electrons do possess negative charge. A decelerating car has negative velocity. You can say those are just labels, but they are labels for physical things that have opposite values. Things in the physical world do gain and lose values in various properties over time.

hathawsh28 days ago

Correct, and that (intentional) misunderstanding was part of the point. I had a 6th grade teacher who struggled with the idea that multiplying two negative numbers produces a positive number. I imagine someone like her asking a question based on their misunderstanding. They'll get a corrective answer that may not help them much. I'll show them how to improve their question and I hope the response will be enlightening and informative.

The response to my second question included a link to an article that suggests all numbers (including natural and whole numbers) are in fact human constructs and may be considered imaginary. That is an enlightening insight that would help us both stop thinking of the words "negative" and "imaginary" as perfectly well defined in our heads. Those words are just tools that can help us convey the most appropriate meaning for the context.

Without the link to the article, that hypothetical conversation probably would not have worked out as well.

cess1128 days ago

Kind of weird splitting hairs over this with a machine, don't you think?

I think it should have told you it's called complex numbers, because they are composites.

pyinstallwoes28 days ago

+1

nurettin28 days ago

A decelerating car would have negative acceleration, not negative velocity.

placebo28 days ago

They are indeed labels, just like complex numbers are labels and just like natural numbers are labels. All of them can be regarded as imaginary if one wants to nitpick, but all are very useful imaginary models.

Spivak28 days ago

This is also why imaginary numbers aren't really imaginary either, because real things in the physical world are well-modeled by the operations in the complex plane. When you're in R2 you do some hairy trig or switch to polar coords to express rotations, or you switch to the complex plane and multiply by i.
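
To make the rotation bit concrete (my worked example, not the commenter's): multiplying by i is exactly a quarter turn, since

  (x + iy) \cdot i = -y + ix

so the point (x, y) maps to (-y, x), a 90-degree counterclockwise rotation about the origin, with no trig required.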

keerthiko28 days ago

> A decelerating car has negative velocity.

not really your point, but ??

a decelerating car has negative acceleration, and until it starts reversing relative to its start, it has velocity in whichever direction it started in -- presumably positive if that was your initial frame of reference. of course if you decided positive was the opposite direction from which the car was already going in, well, it started with negative velocity.

also to the GP, if you owe someone a sheep but don't have any, you really do have -1 sheep.

pyinstallwoes28 days ago

Yeah, but whether a speaker moving in vs. out is labeled positive or negative doesn’t mean having imaginary things; it’s position relative to the displacement of a thing. Sine waves: +1, 0, -1. -1 is exactly like +1 but the other way. So it goes for electrons.

refulgentis28 days ago

Category error created via multiple mathematical and physics misconceptions.

- A decelerating car does not have negative velocity

- "negative velocity" is assuredly nonphysical, which rips the middle out of an argument based on physicality

- Velocity is a vector quantity, as is acceleration. (the steelman'd version of this argument is s/velocity/acceleration)

- The negative isn't physical, it's pidgin for algebra so late middle schoolers / early high schoolers can imitate physics without learning any of the above

d0mine26 days ago

There is no one true theory of everything. All models are false -- some are useful.

Many different math concepts are used successfully in physics. Even if they may be outside everyday intuition for a layman, that doesn't make them less "real"

refulgentis26 days ago

+1

toxik28 days ago

Newton’s third law? Pidgin for high schoolers. Got it.

refulgentis27 days ago

+1

d0mine26 days ago

You can own -1 sheep. It is called debt.

fosterfriends29 days ago

I used the first version of Phind for some time and loved it. As Perplexity and ChatGPT got better, I started shifting more of my traffic back to them. Excited to see y’all still in the race and giving competitors a run for their money. I appreciate your focus on developers as an audience; it might give you an edge over tools serving a broader base.

CSMastermind28 days ago

I'd agree with this. I tried Phind just now and found it still behind Perplexity for the product search use cases I tried it out for. Glad there's competition in the space though.

rushingcreek28 days ago

We have a products UI on the way :)

iszomer27 days ago

Pretty cool, visual learners are gonna' be thrilled. Also sorta' ties into r/FUI quite neatly too..

ruffered29 days ago

I use phind and find the new features to be overly verbose.

The flow chart diagrams rarely give me any insight and often only confuse the point, or just clutter the answer, drowning out the pertinent details.

The code editor actually makes it so you are unable to even see or copy the code. I assume this is intentional kneecapping to encourage paying for your monthly service?

Instead, I now just have to prepend to every question I ask:

“Only answer using plaintext, avoid using your code editor and diagram features:”.

(Hilariously this prepend prompt method was suggested by phind itself when I angrily asked “how do I shut off all of these new features?!”)

Which is an additional hassle for me, but so be it.

When I ask it to write me a SELECT statement it upsets me that it is burning unnecessary fossil fuels to give me a flow chart of reasoning through SQL querying pipelines.

Perhaps the feature is meant for people who are unsure what they want, but for me, I just want the answer with links to sources in the least verbose way possible.

I’d appreciate a checkbox that I could click to just get a straightforward answer.

(Also, side note: I only use the free tier, and there is a limited number of free uses for some larger models. When you use those freebies it shows a countdown “until uses refresh”, but when that countdown finishes the uses fail to reset; only the countdown itself resets. Which is fine, I accept that I only use the freely offered model, previously “instant”, currently “70B”, with its clear flaws, but it’s just another frustrating UI feature that seems to fail to live up to its promises, so I am, again, just confused about why it’s there.)

rushingcreek29 days ago

Thanks for the feedback. Have you tried setting your answer profile in https://www.phind.com/settings/profile?

You can tell it to "only answer using plaintext" there and it will be automatically applied across your searches.

ruffered29 days ago

That would require me to make an account, which requires providing you my email, and I am uninterested in doing either of those things.

do_not_redeem29 days ago

So the product has a builtin feature where you can tell it what you want, but instead of using that feature you want it to read your mind?

rufferedd28 days ago

+1

econ28 days ago

Why bring up the email if you are not making an account?

I gave up on account creation for some projects and store the user preferences in local storage. It is an amazingly annoying feature in that it is very hard for the user to erase the data but you can't smoke your cigar and have it too.

rufferedd28 days ago

+1

evrenesat28 days ago

You can create an address bar search entry for your browser, like this:

  "https://www.phind.com/search/?q=%s Only answer using plaintext."

https://github.com/evilpie/add-custom-search-engine works nicely for Firefox.

omega___28 days ago

You don't have to go through the trouble of setting up a custom search engine for that, you can just use keyword bookmarks

https://support.mozilla.org/en-US/kb/bookmarks-firefox#w_how... https://www.reddit.com/r/firefox/comments/o3yfeo/just_discov...

cholantesh29 days ago

>Perhaps the feature is meant for people who are unsure what they want, but for me, I just want the answer with links to sources in the least verbose way possible.

Did you try including that in your prompt?

rufferedd28 days ago

I mentioned that I do exactly that in the comment you are replying to.

Is it a sad state for any tool when one has to specify only wanting the thing they asked for with less verbosity?

Especially when said tool is costly to run, both financially for the service provider and environmentally.

To me, it is, but hey, opinions, ya know?

cholantesh28 days ago

If it's costly to run, maybe it's reasonable to expect that persisting settings, which is also costly, has some level of gatekeeping? I'm as ambivalent about LLMs as the next guy, but these are frankly nonsense concerns.

rufffered28 days ago

Either you are confused or maybe I am?

The ‘costly’ bit refers to the fact that when I ask for a SELECT statement, instead of just receiving the SELECT statement I am given pages of overly verbose flowery text, reminders, and custom-generated flow chart diagrams.

My understanding is most of these things charge per token so I am assuming generating more tokens incurs more costs, and I’m saying it’s generating needless tokens that frustrate me, so cui bono these additional costs?

Also more token generation means more energy resources consumed, so it’s burning the planet to frustrate someone it is trying to convince to become a paying customer.

How are these nonsense concerns?

It is unnecessary to persist any settings if I can just click a “just answer the question” checkbox before clicking send on the prompt.

albert_e28 days ago

Settings > Profile > Preferences

New Threads Are Public [Enabled By Default]

Do Not Train on My Data [Disabled by Default]

The first one seems problematic -- does it mean what it seems to suggest? Are all conversations "public"?? Do users know this going in? I am afraid they could be pasting a lot of private or confidential stuff without realizing this.

Sn0wCoder29 days ago

Hi Michael, thank you for all the hard work that goes into the Phind models and congratulations on the new UI. Been a paying customer since first finding Phind here on HN in September.

Question: are there any plans to allow access via an API to integrate the Phind models with the Continue plug-in (I would also love to integrate them into my personal RAGs)? I'm mostly using IntelliJ, and integration would be awesome. I do have the VS Code plugin set up and use that when needed. I'm also running the smaller Phind models locally to use with Continue, but that only gets me so far before needing to open the real UI. If the API exposed both the 405B for chat and the 70B for autocomplete, it would be a big step toward gaining more paying customers (IMO). No need to open the other models, as those can be accessed with other API keys (if one wanted).

If there are no plans to open the models via API are there plans to support other IDEs (IntelliJ) with the chat feature?

Please let us know!

rushingcreek29 days ago

Thank you! We do plan to support an API this year. We have deprecated our VS Code extension, however, as we're going all-in on search.

mikestorrent28 days ago

I think it's a great move. I use Phind daily because I can ask it a question like "hey, what should I use for X" or "how do I connect Y to Z" and with refinement I can hone in on serious answers in ways that I cannot with Google searches.

I think building comparison tables is one of my favourite things to do here. Saves me considerable amounts of time and saves me from my biases to some extent.

I think the new Mermaid support is a great idea. It sure is handy that, before LLMs were even a thing, we were already collectively working on so many textual, readable languages to describe things like this! I am going to try to use it to create some architectural diagrams by adding requirements one by one.

pcchristie28 days ago

VS Code is the main way I use Phind. I love it as a way to assist my learning to code.

WhitneyLand29 days ago

On a positive note, this is a nice look at the future and a direction that existing experiences seem likely to evolve towards.

However, I did find myself wondering how crucial the model changes really were.

Imagine trying to implement these features as a wrapper around frontier APIs, with no bespoke 70B model.

Starting with a user query, we could ask for relevant SVG diagrams, fit instruction steps or guides into layout templates, or even filter photos as the demo shows.

How much less would this simple approach leave us with?

rushingcreek29 days ago

Ah, great question! We tried using off-the-shelf models and found that they are incredibly bad at generating these visual components reliably, let alone with the nuance (such as color, placement, details, etc.) that would be expected in a great answer.

We write more about this in the technical model blog post: https://www.phind.com/blog/phind-2-model-creation.

anon37383928 days ago

That's a great blog post, really informative and interesting. Do you recommend any other helpful resources on creating and curating datasets for post-training?

Xmd5a28 days ago

I'd love to read an in-depth explanation of how you improved your LLM abilities to lay out diagrams.

rushingcreek28 days ago

We used a system of LLM critics to generate a high-quality dataset for this. More in the blog post linked in the answer above.
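
For readers unfamiliar with the pattern, here is a minimal sketch of what an LLM-critic filtering loop generally looks like. This is a generic illustration, not Phind's actual pipeline; both helper functions are hypothetical placeholders for whatever generator and critic models are used.

  # Generic "generate, then filter with a critic" loop (illustrative sketch only).

  def generate_candidate(prompt: str) -> str:
      """Placeholder: a generator model drafts an answer containing a diagram."""
      return f"Answer with Mermaid diagram for: {prompt}"

  def score_with_critic(prompt: str, candidate: str) -> int:
      """Placeholder: a critic model rates the candidate 1-10 against a rubric
      (e.g. valid Mermaid syntax, sensible placement, readable contrast)."""
      return 10

  def build_dataset(prompts: list[str], threshold: int = 8) -> list[dict]:
      kept = []
      for prompt in prompts:
          candidate = generate_candidate(prompt)
          score = score_with_critic(prompt, candidate)
          if score >= threshold:  # keep only answers the critic rates highly
              kept.append({"prompt": prompt, "answer": candidate, "score": score})
      return kept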

schmorptron28 days ago

Oh hey, I used Phind a lot a while back and really enjoyed it. Sort of fizzled out of it for other stuff a few months back, but this looks pretty exciting! The visual explanations are great. I've been getting Gemini to output ASCII diagrams to explain things; I will definitely be checking this out.

srameshc29 days ago

I asked "What is Atproto that bluesky uses" and it does a Technical Architecture illustration and it is wonderful. I think that is probably your new value and might be a good idea to highlight it somewhere at the top rather than showing the whitepaper.

pdq29 days ago

It just summarized this page and included a few of the images.

[1] https://docs.bsky.app/docs/advanced-guides/atproto

asadm29 days ago

probably shown in the photosynthesis video example but yes it seems it uses mermaid or something similar to express diagrams?

rushingcreek29 days ago

Yep, we trained the model to produce Mermaid when a diagram might be helpful. There are a lot of nuances that went into this, such as where in the answer to place the diagram and how to ensure a good contrast within the diagram for maximum readability.

We have some more details on this in the model technical deep dive blog post :)

sddhrthrt28 days ago

I still don't understand why these models can't be more "trustworthy", but I also don't understand the theory, I'd love to hear what you all think about this.

I asked it a question that's pretty subjective and culturally specific, and I appreciate that I got a reasonable answer back. The question was "should I?" and the answer was "definitely, don't miss it" in three different ways. However, I found that the literal sources it quoted didn't have the same opinion it expressed to me so convincingly. I asked a clarifying question and it goes "okay so I read the material, and it actually says it's optional".

So why not read the material? I wonder if it could even embed the website in the results, giving the website the traffic and ad space. I wonder if a meta browser is a better product for these tools.

https://screenshot.click/13-56-232ze-p3nzf.png

brap28 days ago

I think one factor is that all these LLMs are tuned to be ridiculously agreeable, almost everything you say will be met with some variation of “you’re absolutely right!”.

It’s like, look, I’m definitely not “absolutely right” 90% of the time, so how the hell am I supposed to trust what you’re saying?

I would prefer a model that’s tuned to prefix answers with “no, dumbass. Here’s why you’re an idiot:”. And yes, you can prompt them to answer this way, but they’re simply not wired to challenge you except for very trivial things.

real_jiakai17 days ago

Congratulations on the impressive Phind 2 rebuild! The visual answers and multi-step reasoning capabilities look fantastic. I noticed that currently only the Phind series models support image uploads, while Claude 3.7 Sonnet in Phind 2 doesn't have this functionality yet. Do you have plans to enable image upload support for Claude 3.7 Sonnet in the near future? Being able to use Claude's multimodal capabilities within Phind would be a great addition, especially considering how powerful Claude's latest updates have been. Thanks!

Gabriel_Martin28 days ago

Hi Michael, the range of unique interactions and little pieces of visual feedback for users in your demos is super impressive, and as a designer, I hope you don't mind if I shoot my shot.

I'm finishing an MIT xPro course on designing AI products in about a week, and I've been looking for places to put all my learnings to work. Might your team have some room for a coding/prototyping UX developer/designer who has spent a lot of time thinking about AI-oriented HCI?

I think in 8 weeks on a co-op or even part-time basis, I could build out a ton of ideas you haven't gotten around to defining yet but want to get a feel for the UX of. I'd love to hear from you (my email is in my profile).

jadbox29 days ago

Well done. After running a few sample queries I seem to get about the same quality answers as I'd get from perplexity, although with more images included. Overall great progress and a solid competitor to other AI search engines I'd say.

rushingcreek29 days ago

Thank you :)

SubiculumCode28 days ago

I liked Phind and subscribed for quite a few months. I used it for both search and for coding. IMO, ChatGPT search is terrible... it constantly relies on spammy sources. Phind results back then tended to find the quality sources, letting it give you quality answers.

But I started wondering if Phind was a nearly dead company: almost no presence in terms of news, tweets, etc. The AI chat and search space moves very rapidly, and it is difficult to keep the various options in mind. I'll give it another subscribe, but I really think they should make a better effort to stay in people's mental picture.

rushingcreek28 days ago

Thank you! We went quiet while we rebuilt the entire product, but now we're back and will do everything we can to stay at the forefront.

SubiculumCode28 days ago

I would really appreciate a way to make the information output denser. I know a lot of people like space, but a dense layout really works better for some people, so that I can see more of the output on a single screen at the same time.

SubiculumCode28 days ago

Awesome. Subscribed again.

(Your lucky day cuz my openai subscription ended last night)

xtracto28 days ago

I just cancelled my subscription last week. I found that I used it less and less, and for the odd time I needed an LLM answer, DeepSeek was enough for me.

prabal9729 days ago

This is amazing. I paid for Phind many months ago for coding QnA. However, this is a much better product and even beats Perplexity, ChatGPT premium, etc. Congratulations to the Phind team!

rushingcreek29 days ago

Thank you :)

garganzol28 days ago

As far as search engines go, Phind beats Google 10x in programming topics.

Google Search is extremely dumb nowadays as it promotes ad-banner-ized results and Reddit-style drama posts. Those things have close to zero value, but Google shows them in top results to earn money and imaginary "engagement". The trend is getting worse every year. I predict that one day AI assistants like Phind will blow conventional search engines out of the water in every imaginable area.

jwpapi28 days ago

I remember when I was in school and our religion teacher told us that 200 years ago there were still people that knew almost everything that was known to humanity back then. This wasn’t possible anymore.

However now I feel like it’s possible again.

https://www.phind.com/search/cm73yk4wc0002336u6brx4jp1

It helps greatly to learn new topics.

oniuyung28 days ago

> It helps greatly to learn new topics.

I guess when you want to learn 33% of something, but be told you learned all of it.

The link used to say there are 2 quarks, though it is now simply a display of poor security management.

qingcharles28 days ago

When I click on that link it gives me this meme image for some reason as the first image in the response. What did you see when you made the query?

https://www.citizen.org/wp-content/uploads/these-new-ai-frie...

weberer28 days ago

There seems to be some mixup with cached images.

https://i.imgur.com/Bw9DlxW.png

shiandow28 days ago

I think those people 200 years ago mostly succeeded by having a very narrow definition of what was considered 'known' to 'humanity'.

genewitch28 days ago

For anyone arriving: Phind has an issue where anyone can edit those links, and the edits persist.

I'll tag whoever was claiming to be from Phind up above.

I verified this by copying the link in the OP to my computer and opening it there, and it had my edits, not the 1=2 edits, and assuredly not what jwpapi wanted to link.

kekebo28 days ago

While partly debated, the first part sounds like a prime candidate for Dunning Kruger effect in (historic) action

kristopolous28 days ago

Having studied science history basically my entire adult life, I'm pretty sure you'd have to go back many thousands of years for this to be true.

kekebo28 days ago

Such confident claims about unavailable/lost/missing data seem like a luxury from where I'm standing (science-wise). Does historical analysis come with heuristic privileges over other fields in that regard?

kristopolous28 days ago

There was certainly a time when discrete knowledge didn't collectively aggregate over generations.

But even in prehistoric times there were things like boat making, midwifery, metallurgy, sea navigating, animal husbandry, farming and seed cultivation, tool and weapon making, navigational shortcuts, the wide variety of spoken languages, medicinal knowledge, knowledge of fresh water sources and hunting techniques ...

Squaring the number to 40,000 years isn't far back enough. 200,000 years ago, for instance, people wove grass beddings out of insect-repellent plants, wore clothes, had shoes... Seafaring goes back at least 130,000 years.

Specialization and collaboration are a core part of being human.

modeless28 days ago

Phind seemed like the best AI+search product last time I tried it. I don't know why I don't use it more, honestly, maybe I should.

diegoperini28 days ago

Phind does one thing and it does it really well. I use it because it has a real, positive impact on my growth. I love the fact that it is not Jarvis, but just a really helpful library assistant. Please don't lose the focus! Stay awesome.

ck_one28 days ago

It's better than perplexity on certain tasks! Pretty impressive!

Feedback: the mermaid charts are a bit annoying when they are not needed

Out of curiosity, what web index do you guys use right now, and does it make sense to build your own index at some point?

Blau12328 days ago

Gotta respect the hustle of the guys. Some work colleague recommended it to me over a year ago but I stopped using it after a couple searches, then never heard from them again after perplexity exploded. Hope it works out for them!

mvieira3829 days ago

Wow, this could conquer the "normie" crowd that likes using ChatGPT to answer search-like prompts. The UI is so much cleaner than any other alternative and the outputs are like I'm reading an article about the subject.

rushingcreek29 days ago

Thank you!

hiccuphippo29 days ago

I asked something on phind yesterday and got pleasantly surprised with the images going along with the response. Very nice!

putlake29 days ago

In your "how to cook the perfect steak" video [1] there's a picture of various doneness levels of a steak. It's a fantastic picture. The creator of that picture will get jackshit from this. Phind gets value, the user gets value but the creator does not.

You're hyperlinking to the source, which is nice. But there's no reason for the user to click through, so it won't really help the creator. The upshot of all this is that the open web will have fewer incentives for content creators. Someone's got to create new data for the AI borg. In the future, these creators are less likely to be independent bloggers/photographers. Perhaps biased media outlets will be the only ones with the incentives to create.

[1] https://www.youtube.com/watch?v=cTCpnyICukM#t=55

mvieira3829 days ago

Couldn't you say the same for something like Google Images, though? I might click on the link to an image once in a blue moon if I find it funny, but I usually don't even click to maximize it...

Maxion29 days ago

Yeah and they got sued and lost.

rushingcreek29 days ago

We'd love to partner directly with content creators. This is something we're thinking about a lot and are having conversations on how to go about it.

I will also personally do everything in my power to keep us from running ads on Phind. I want to keep Phind honest and authentic, and will do everything I can to make it a net positive for the internet.

Maxion29 days ago

By displaying other peoples images that you do not have a license for you are breaking copyright law and are open to lawsuits.

LouisSayers28 days ago

I'm no lawyer, but the law is nuanced when it comes to copyright.

Fair use exists which permits usage of copyrighted content in certain situations.

I'm not saying that they aren't in violation, just that copyright doesn't automatically ban others from using your material - it typically (laws are different in every country) depends on how (and how much of) your material is used.

https://en.wikipedia.org/wiki/Fair_use

dougb528 days ago

Could you change your product to only show images that have Creative Commons licenses? https://daminion.net/articles/tips/how-to-use-images-legally...

viraptor28 days ago

I'd love it if the BAT / Flattr / Coil style idea actually started working. Flattr failed to get attention from the actual users, but it could be revived to pay the website creators from "indirect visits" like using their content in phind or perplexity. (in addition to continuing the direct payments)

dingnuts28 days ago

>I want to keep Phind honest and authentic

Choosing to violate the copyright of content creators sure is a funny way to be honest or authentic.

econ29 days ago

If such a place exists you could serve an ad only where it is extremely useful given the current context.

I've only seen this once.

amelius29 days ago

Sounds like: asking for forgiveness is better than asking for permission ...

codetrotter29 days ago

> Someone's got to create new data for the AI borg

AI companies could always go the route that the creators of Pokemon Go went:

> The model uses geolocation information from scans players submit of real-world locations while playing Pokémon Go.

https://www.usatoday.com/story/tech/2024/11/23/niantic-pokem...

Make a game, have people submit pictures and video to you as part of the game.

Where Niantic were using Pokemon Go for collecting scans of locations, other games could get players to take other kinds of pictures and video too.

"Oh, look! There's a 500XP bonus if I can shoot a picture of a perfectly grilled steak within the next 45 minutes!"

ewams28 days ago

I've been trying to use tools like these for data aggregation, analysis, and finding trends. One major issue I experience with almost every product is I will tell it to "sort by column x" and it either doesn't do it, or will state that it did it when in fact it did not do the sorting. Check out this query: https://www.phind.com/search/cm7444is700011y6rtzk6l5th

monort28 days ago

Can you add the ability to ask questions via GET requests? This is needed to add a site search keyword into the browser. Make it work only when logged in, if you are afraid of bot requests.

There is also a standard to add it automatically ( https://stackoverflow.com/questions/38670851/whats-a-script-... )

rushingcreek28 days ago

This is supported! You can use the https://www.phind.com/search?q=my+question syntax.
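
For anyone who wants to script that, a tiny sketch of building such a URL with the question URL-encoded (assuming only the q parameter mentioned above):

  from urllib.parse import urlencode

  def phind_search_url(question: str) -> str:
      # Builds a URL of the form https://www.phind.com/search?q=<encoded question>
      return "https://www.phind.com/search?" + urlencode({"q": question})

  print(phind_search_url("how do I profile a slow SQL query?"))
  # https://www.phind.com/search?q=how+do+I+profile+a+slow+SQL+query%3F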

monort28 days ago

Thank you!

genewitch28 days ago

when someone gives a link: https://www.phind.com/search/cm73yk4wc0002336u6brx4jp1

anyone else can edit those links and have it persist.

Like, when I clicked that link it was 3 prompts trying to get Phind to prove 1=2. Now it's about the TV show Weeds and down quarks.

I don't want to expend any more effort reporting this than that, because LinkedIn wanted me to pay money to send a message to whoever was claiming to be from Phind in the comments.

So I sent that link to a friend. It showed them the 1=2 convo, they edited it, I hit refresh, and it showed their edits.

This isn't just some "oh it's the same IP" or "cookies" or whatever. My cellphone isn't on my wifi, and my friend lives 1100 miles from me on a different provider.

Furthermore, I am unsure of what the OP originally had in that link, because they said it showed that AI could help teach new concepts, and this looks like joke meme prompting. So who knows what's going on.

A guess? The links are SESSION links instead of CONTENT links. A session link allows edits, and a content link is just a hotlink to render the content that the session created.

ewoodrich27 days ago

The original link was a question about quarks/atoms when I checked it last night, I noticed all the images had some strange caching issues at the time where they were replaced by ads. But now it's a completely different search! So it looks like you're right, that's a pretty major issue...

mbar8428 days ago

The pricing on these products is frustrating. To even try out the Phind-405B model I have to pay $20, and then I get unlimited queries, which I never asked for? For a product I don't even know if I'll ever use again, give me usage-based pricing.

rushingcreek28 days ago

Thank you, I hear you. We'll find a way to make it so you can try some pro features, like automatic additional searches or Phind-405B, for free.

dr_kiszonka28 days ago

I know it is not ideal but could you do a one-time $5 week trial? Or simply a $5.99 per week plan?

goingtosleep28 days ago

I'd like to try the premium models too, but I've already spent $20 on other services, so I'm a little hesitant. Hopefully, there will be a smaller plan or a trial option.

jaetee28 days ago

Unlimited queries is a great differentiator and maybe the strongest selling point for me. I didn’t realize that was the case and now I’m considering paying. Agree it would be great to be able to have a short test drive of a few queries when you sign up. Reducing friction to find fit would absolutely accelerate adoption of their paid plan.

esperent28 days ago

There's no need to be an early adopter, wait a few weeks and if it's worth $20, lots of people on HN will tell you, just as they currently do with Kagi search.

maz1b28 days ago

Good to see non-mainstream companies like Phind find interesting and creative ways to be competitive. If I were the founder of a company that wasn't an OpenAI wrapper (I mean this as matter-of-fact, not in a negative way), I would look to Phind's launch of this product as inspiration, especially considering how massive the war chests of my mainstream competitors are.

iandanforth28 days ago

Your product launch video deserves more than an "um" filled single take.

Moosdijk28 days ago

Since the homepage is a prompt to interact with the bot, I directed my questions about possible plans to it.

It said: "Up to 10 uses of best model per day"

When asked what one use means, it kept checking the docs and told me: "Each interaction with Phind's models counts as a usage".

When asked to clarify, it ultimately said: "I cannot confirm exactly whether this would count as your third usage today, as I don't have direct access to your usage statistics. To check your current daily usage count, you can visit your settings page at https://phind.com/settings. This will show you your exact usage count for the day and remaining available uses based on your subscription tier."

Since I'm not a user and am inquiring about details before subscribing, I had hoped the bot had some more information on that.

tacone28 days ago

The new version feels great.

What I always liked the most about it though, is how fast the webpage loads up when I reach out for it. The entry barrier feels lower than the competition, so I usually ask Phind first, then eventually try with other engines.

Keep it up guys, I'm a big fan.

voytec28 days ago

These graphs were available to me for some time now but I have not really found them all that interesting. For some reason, I had to wait for the diagram to render and then for the text to slowly render.

To my poor understanding of LLMs, when the diagram was slowly created the text behind it should already "be there" and should have been displayed immediately after the diagram, but this was not the case. Also, the slowly drawn diagram often just restated my rather clear (for an LLM) prompt: "I did this, when the situation was this and that, and then this happened; question: why was the result A and not B?"

I found myself falling back to Claude more often than not over using Phind 70B and 405B models. I found it kind of... more gimmicky than useful.

rushingcreek28 days ago

Thanks for the feedback. We're working on making the answers a lot faster for the Phind models. I can see how it would be frustrating to be waiting for the diagram to generate when you're waiting for text or for an example.

Oras28 days ago

This is awesome. I was thinking of creating an agent to generate spec and sequence diagrams (using Mermaid), but this takes it to a new level by being generic. I tried a question about how LLM post-training works and it made a nice workflow diagram about it.

ZeroCool2u29 days ago

Whoa, okay I really like how it can generate diagrams on the fly to explain complex workflows.

EVa5I7bHFq9mnYK28 days ago

Asked a tricky tax question concerning AMT on LTCG. Phind 70B answered quickly and correctly. ChatGPT-4o thought longer and answered incorrectly. o3-mini gave the correct answer, but spent a lot of time thinking. Impressive.

syntaxing29 days ago

Probably a loaded question, but how has your 70B model stood the test of time? I would expect models like Qwen2-72B to outperform yours noticeably? (I get why you can’t simply fine-tune that, due to the restrictions of its license.)

rushingcreek29 days ago

Our new 70B is actually completely different from the one we launched a year ago -- we should probably update the naming to make that more clear!

What do you think about its performance on your queries?

syntaxing29 days ago

Oh interesting, I'll have to give it a go. I used to use Phind pretty often when it first came out, and I told a ton of people about it! I remember the good old days when you offered pretty much unlimited access to ChatGPT/GPT-3. But between your (understandable) quota restrictions and there ending up being so many different services, I haven’t used your service in a while. I had so many subscriptions, and the $20 didn’t provide the value I needed at the time. With all these new features, I might revisit it again after my complimentary Perplexity Pro (from my ISP) expires in like 6 months.

dang29 days ago

(I added that sentence to your text at the top)

bsenftner28 days ago

I'm on Phind's free plan, and I have an OpenAI API account, which gives me access to the suite of OpenAI's models in their playground. I go to Phind's free option first, and it gets me what I need 80% of the time.

mariopt29 days ago

I have been using Phind for building SQL queries, and it's actually great for learning more about SQL; the results are mostly correct. Just try asking it to build some advanced query and you'll discover you don't truly know SQL.

sgc28 days ago

I actually did just the same thing the other day, and it managed to make a fantastically complex query (by my standards) with relatively little fuss. It made quick work of building an entire php/slim api quite nicely as well. You need to know what you are doing and correct some mistakes related to specific implementations (in my case SQLite specifics and the Psr library I was using in Slim - not sql or php in general), but overall a pleasant experience.

SalmoShalazar28 days ago

Was curious about how this would perform for something I was actually interested in this morning. I live in Canada and want to sell my PS5 to GameStop, and I was wondering what accessories to include. Overall it performed quite poorly answering this. First, it claimed GameStop doesn’t operate in Canada (wrong, EB Games rebranded a while back); second, it thought I had to “charge” my PS5 first. It then listed some components to bring in but was missing the HDMI cable requirement. Overall not great for this particular query.

golergka28 days ago

So I decided to run a simple comparison with OpenAI's mainstream model, based on a vague question that would require follow-ups.

----

# Phind 50b

User: how do I a whole coca-cola bottle

Assistant: I notice your question isn't quite clear. Could you please specify what you'd like to know about Coca-Cola bottles? Are you interested in:

How to identify/distinguish different types of Coca-Cola bottles? The history and design of Coca-Cola bottles? Something specific about collecting Coca-Cola bottles? Or perhaps something else entirely? Once you clarify your interest, I'll be happy to provide detailed information tailored to your specific question!

User: I just accidentally a whole coca-cola bottle and how I need to it

Assistant: I notice your message isn't quite clear. Could you please rephrase what happened with the Coca-Cola bottle? Are you saying you:

Accidentally dropped/broke a bottle? Spilled the contents? Found an old bottle somewhere? Something else? Once you clarify what happened, I'll be able to provide specific guidance on how to handle the situation safely and effectively.

----

# gpt-4o with a custom eigenprompt

User: how do I a whole coca-cola bottle

Assistant: do you mean: 1. consume an entire coca-cola bottle in one go? (drink it fast, chug it, etc.) 2. acquire a whole bottle? (buy, steal, manifest from the void) 3. destroy a whole bottle? (smash, melt, vaporize) 4. become a whole coca-cola bottle? (metaphysical transformation, brand loyalty taken too far)

clarify and i will give you the optimal strat.

User: I just accidentally a whole coca-cola bottle and how I need to it

Assistant: okay so if you accidentally the whole coca-cola bottle, and now you need to it, then we need to determine: 1. did you drink it too fast and now your stomach is rebelling? (solution: burp aggressively, lie down, accept fate) 2. did you drop it and now there’s sticky chaos? (solution: paper towels, baking soda, maybe a flamethrower) 3. did you somehow ingest the actual bottle? (solution: seek medical assistance bc that’s wild) 4. did you turn it into an existential problem? (solution: embrace the absurdity, become the bottle)

gimme specifics, and i’ll troubleshoot this disaster with you.

----

Overall, I think the Phind model is clearly inferior — it shows less imagination in the face of uncertainty and certainly much less creativity and humour.

ewams28 days ago

I'm surprised neither of them picked up that this is a meme from, what, 20 years ago?

sfjailbird28 days ago

LOL. I don't much care for OpenAI, but ChatGPT as a product is very well done.

nextworddev28 days ago

Phind used to be focused on developer questions. Did you guys pivot?

rushingcreek28 days ago

With this release we are aiming to be simultaneously better for developer questions while also competing as a broad AI search engine for just about any question.

nextworddev28 days ago

thanks for the clarification!

aargh_aargh28 days ago

I asked one question where I expected a nice non-trivial visual diagram and my expectations were met and exceeded:

> How to make an enchanting table in Minecraft from scratch?

remipch28 days ago

As an occasional user, what I really liked in your previous version was that Phind always gave the list of sources.

So my workflow was :

- ask a question

- quickly read the given answer

- go to the main sources to decide if they are trustworthy or not

I noticed that the new version does not always give the sources, even when I ask. Is this intentional or do you plan to revert this in the near future ?

It also addresses the problem of reduced traffic to the content creators whose work you use.

Anyway, congratulations for making this tool and offering a free plan to test it.

rushingcreek28 days ago

Interesting, do you mind sharing some of the queries you tried? We avoid doing automatic searches for certain followups if the existing sources already contain the correct information, but we should be giving improved sources compared to the original version of Phind.

remipch26 days ago

Hi Michael,

[0] is a thread reproducing what I described. In that case I would like Phind's last answer to provide me with a link to the documentation or source code of 'NormalizedLandmarkList'.

I hope you find it useful.

[0] https://www.phind.com/search/cm77vnj8e0000206g4mgp66br

remipch28 days ago

Unfortunately, I don't remember them anymore and I regularly clear my browser cache.

Maybe it's just the case you're describing but I'm pretty sure I was really looking for appropriate sources and they weren't provided.

If you want, I can send you my queries if I see this again.

krishadi28 days ago

The UI is brilliant !

1. I like the way sources are listed in the right panel; that makes them easily visible as I scroll through the answer.

2. The rendering of the answers using rich visual components is nice to look at. I personally hate the information-dense reports that ChatGPT spews, which aren't easy to consume quickly.

What I miss though is iterations on the answer, as in, how do I specifically zoom into one aspect of the answer, and then easily zoom out and continue.

rushingcreek28 days ago

Thank you! We actually support "zooming into" different parts of an answer. The chat conversation is now stored as a tree, so you can ask a followup, then revise the followup, and switch between the revisions below the answer when doing a third followup.
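
One way to picture that tree (a toy sketch of the general idea, not Phind's actual data model):

  from dataclasses import dataclass, field

  @dataclass
  class Turn:
      """One question/answer pair; children are follow-ups, and sibling
      children of the same parent act as revisions you can switch between."""
      question: str
      answer: str = ""
      children: list["Turn"] = field(default_factory=list)

  root = Turn("quicksort in rust", "...")
  followup_v1 = Turn("make it generic", "...")
  followup_v2 = Turn("make it generic and iterative", "...")
  root.children.extend([followup_v1, followup_v2])  # two revisions of one follow-up

  # A third follow-up branches from whichever revision is currently selected.
  followup_v2.children.append(Turn("now add benchmarks", "..."))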

krishadi28 days ago

How do I point it to follow up or zoom into a particular part of the answer?

lastdong28 days ago

Super neat UI, it really takes me back to when Flickr came out and looked so polished compared to anything else at the time. Love the flow charts too.

lbeurerkellner28 days ago

The diagrams are really cool. Congrats on the launch.

inerte28 days ago

This has also happened with Perplexity and R1 to me, but it was just one word. Suddenly the text goes into another language. I just clicked homepage’s suggestion pill “Ukraine war latest”

https://paste.pics/STGU6

midnitewarrior28 days ago

Thank you for innovating. I am a subscriber and use phind daily and I've noticed the improvements.

Is there any way I can provide feedback or contribute feature ideas/requests? I've got some things I'd like to share if your team is prepared for the feedback.

dnpls28 days ago

I asked for a map of Paris with all the arrondissements (since there's no way to get that from google maps) and it showed me a diagram instead of a map. Chatgpt at least gives me a link to the wikipedia page with the map image.

rushingcreek28 days ago

Thanks for sharing. It just worked well for me: https://www.phind.com/search/cm753jbx800013b60odargu2h.

ipsum228 days ago

I haven't tried Phind, but the steak example has a lot of errors. A good half of the tips are wrong; for example, it says not to flip too often. Actually, flipping frequently improves even heat distribution in the steak.

nostrebored28 days ago

But ruins the sear? Seems like an odd nit

yawnxyz28 days ago

You can still sear it at the end with a tiny bit of butter

ipsum228 days ago

No, you still get a dark brown sear.

behnamoh28 days ago

Wish we could have PowerPoint slides generated by phind. It already can generate the SVGs/PNGs. It's such a no-brainer to extend that to slides!

Maybe one day we'll have short clips generated by AI, something like 3Blue1Brown videos...

tarkin228 days ago

I was a big fan of phind.com for a while and I loved it. But I've been using DuckDuckGo's AI offering since its UI is a little smoother/faster/easier to use. Has anyone found any good reason to use Phind over DDG?

reacharavindh28 days ago

I wish there was a way to use this via an API instead of paying for yet another subscription…

I’m going to pass on the niceties shown here until I can try them out paying only for what I use, via an API.

rubymamis28 days ago

Their model seems to be fine-tuned on top of Llama 70B. I wonder if they tried fine-tuning other base models and compared their performance.

cincauhangus25 days ago

I don't understand why the service has been unavailable in my region for more than 6 months now.

suraci28 days ago

it's really impressive.

Phind is the first model I've ever seen that attempts to interact with users in a more user-friendly way.

abhimanyue199828 days ago

Love this, been an early user of Phind since the VS Code days, and this is a significant improvement. Cheers!

albert_e28 days ago

How is this available for free without sign-up?

Does it not cost a ton to run this service and maintain the infra?

saikatsg29 days ago

Love the new user experience, kudos!

joshSzep29 days ago

I am very impressed and I am getting great results for queries like "Show me how to use langchain in python".

However, I am disappointed that when I provide a URL it cannot read the page. Given that this is a search engine, I would expect it to be able to read any public URL I provide. For example, I attached a PDF of my resume, provided a link to a public job description, and asked it to generate a cover letter tailored to my experience for the position. This is something I have done with easy success with ChatGPT GPT-4o, but Phind throws its hands up. :(

rushingcreek29 days ago

Thank you! Appreciate the feedback. Custom links are something we should be able to do -- we'll run some checks and make sure that feature is working properly. Thanks for letting me know.

joshSzep29 days ago

barrenko28 days ago

To beat alternatives, I still think you guys would need to go something super-niche.

NelsonMinar28 days ago

Phind continues to impress me. It is my favorite LLM application, I use it daily.

nedsma28 days ago

Super impressive. Definitely gonna play around with this. Awesome work!

jumpyfrog27 days ago

> Service is unavailable in this region.

That's a pity.

dangoodmanUT28 days ago

Gotta normalize the audio on the recordings

stainablesteel28 days ago

really awesome results, this has been great since day 1 and i'm glad to see you guys keeping up

esafak28 days ago

Can someone who has used either You.com or Perplexity recently tell me how they compare? The Phind landing page is uninformative. I would encourage them to use it to explain their value proposition relative to the alternatives, which are well-funded. I could not find pricing details; is there no paid plan?

From their videos above I gathered that they format their results and incorporate media better?

wordpad2528 days ago

the diagrams are accurate and amazing!

what do you use to draw them?

rushingcreek28 days ago

We use Mermaid!

android52128 days ago

any plan to release an API?

weird-eye-issue28 days ago

"Error streaming answer. We have alerts for this and are working on fixing it."

oidar28 days ago

Is this a fine tune of another model, or is it trained from scratch?

TeeMassive29 days ago

I just want a summary of search results with linked sources.