> Morgan: Literally Danny said he sat with an engineer team with examples of people in the room and said why aren’t they showing up and they did their “debugging process” and couldn’t figure it out.
Meanwhile a single Swede with a single desktop-class machine in his living room created a search engine so good that I would often switch to it when Google failed.
These days I use Kagi, which has prioritization and block lists (which I don't use because the results are good out of the box).
Wanna know what is really interesting about the Kagi story?
While Kagi is building its own index, for a long time they were essentially reselling a wrapped version of Google + Bing results, yet the results were still much better, IMO.
I have two theories:
- either Kagi has some seriously smart systems that read in the first tens of results and reshuffle them
- or, more likely in my opinion, the results have been so good because Kagi has API access that bypasses the "query expander and stupidifier"[1] on the way into Google and the personalization layer on the way out. That way they interact directly with the core of Google search, which somehow still works.
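The first theory is easy to picture as a meta-search layer: pull the top N results from each upstream API, drop anything on a block list, and fuse the rankings locally. The sketch below is purely hypothetical (the function name, scoring scheme, and data shapes are all my invention, not anything Kagi has documented); it uses simple reciprocal-rank fusion plus per-domain boosts to show how even a naive reshuffle of upstream results can change the final ordering.

```python
# Hypothetical meta-search re-ranker. Not Kagi's actual implementation --
# just an illustration of "read in the first tens of results and reshuffle".

def rerank(upstream_results, blocked_domains=frozenset(), boosts=None):
    """Merge ranked result lists from several upstream engines.

    upstream_results: list of lists of (url, title) tuples, best first.
    blocked_domains: domains to drop entirely (a user block list).
    boosts: optional {domain: weight} map for user prioritization.
    """
    boosts = boosts or {}
    fused = {}    # url -> reciprocal-rank fusion score
    titles = {}
    for results in upstream_results:
        for rank, (url, title) in enumerate(results):
            # Items ranked highly by several engines accumulate score
            # and float to the top of the merged list.
            fused[url] = fused.get(url, 0.0) + 1.0 / (rank + 1)
            titles.setdefault(url, title)

    def score(url):
        domain = url.split("/")[2]
        return fused[url] + boosts.get(domain, 0.0)

    kept = [u for u in fused if u.split("/")[2] not in blocked_domains]
    return [(u, titles[u]) for u in sorted(kept, key=score, reverse=True)]
```

A layer like this never needs to touch the upstream engine's query rewriting at all, which is why the two theories aren't mutually exclusive.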
[1]: The "stupidifier": the thing in the Google pipeline that rewrites "obscure-js-lib" (think one that a previous dev used, that I now need to debug) into "well-known-js-lib-with-kind-of-similar-name". Or decides that when I search for Angular "mat-table" I probably want some tables with mats on them, even if they have nothing to do with Angular.
The web is just much more hostile now. It's not a conspiracy by Google; over the years the bad actors have been chipping away at it. Now, with LLM content, it's going to be even harder to pick out the actually useful content, since even humans will have a hard time discerning it.
ML experts at Google (El-Mahdi El-Mhamdi at least, if I recall correctly), who have since left, warned that LLMs should be avoided because they make products chaotic and hard to control...