Universal Summarizer (kagi.com)
477 points by simonebrunozzi on Feb 3, 2023 | hide | past | favorite | 164 comments


I had it summarize a recipe for me. It did a good job, but definitely editorialized. The URL I sent included just the recipe for split pea soup, but the summary added that "Split pea soup is a hearty and comforting dish that is easy to make" (which is true, but wasn't in the original) and said that the soup "is sure to be a hit with the whole family" (again, true, but wasn't in the original). So I feel like "summary" isn't the right word exactly. Still amazing. I tested it with a bunch of different blog posts that I was very familiar with and the summary always turned out accurate.


It's not "editorialized" in the sense that there are no humans involved.

There are two types of automatic document summarization methods, abstractive and extractive.

Abstractive methods analyze the original and then synthesize a summary, whereas extractive summaries are concatenated projections (sequences of text spans) from the original text.

The phenomenon you observed suggests this was produced by an abstractive summarizer. When pre-trained neural language models for abstractive summarization create pieces of new text that are untrue or nonsensical, the result is often referred to as a "hallucinating" model. In my research, I found extractive summaries more suitable for professional applications; extractive models never hallucinate, and their worst mistakes are (1) leaving out an important part and (2) creating misunderstandings through left-out context.
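To make the distinction concrete, here is a deliberately naive extractive sketch that ranks sentences by word frequency (a toy for illustration only; it has nothing to do with Kagi's actual system):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Toy extractive summarizer: rank sentences by the corpus
    frequency of their words, then return the top-ranked sentences
    in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r'\w+', sentences[i].lower())),
    )
    keep = sorted(ranked[:n_sentences])
    # Every output sentence is a verbatim span of the input,
    # so this kind of summarizer cannot hallucinate.
    return ' '.join(sentences[i] for i in keep)
```

Its failure modes are exactly the two listed above: dropping an important sentence, or keeping one whose meaning depends on dropped context.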

The leaders in the field of summarization research do not believe there can be such a thing as a "universal summarizer," because the same document can be summarized for very different purposes, even by the same person on different occasions. For example, a lawyer may read a case summary once to get up to speed on what her client was accused of in a lower-court trial, and ten years later she may refer to the same case again for a procedural detail that recurred in another case on a completely different subject matter.


This should be more visible.

Both in terms of "everyone should see this comment" and "maybe Kagi should be labelling this more truthfully"


To be fair, it’s also under the `labs` subdomain, which is their experimental stuff.


“ Rafal Pastuszak is available throughout the week for a coffee and a chat about ideas big and small. He also has a bi-weekly 2h slot which he uses for pair programming and mentoring sessions with friends, old and new. He also offers a 3 minute rant session, where he listens to the other person rant and then they spend the remaining 7 minutes contemplating the pointlessness of their endeavors.

Rafal is interested in a wide range of topics, from product development to mental health, and from photography to emerging technologies. He is also open to pair programming with anyone on their own problem. His goal is to get to know people and what they have been up to.

It's an interesting offer for anyone looking for someone to bounce off an idea or learn something new.”

Fair enough, there’s a bit of editorialising but the main gist is fine.


> So I feel like "summary" isn't the right word exactly.

Might be a close cousin of hallucination.


This is much better than I expected; it's given a good answer for everything I've tried so far.

https://labs.kagi.com/ai/sum?url=https://timecube.2enp.com

> Dr. Gene Ray is a scientist and philosopher who has discovered the Time Cube, a mathematical proof that there are four simultaneous 24 hour days in a single rotation of the Earth. He believes that the academic and religious teaching of singularity is boring and evil, and that it is destroying the analytical brain of humanity. He has offered a $10,000 reward to Wikipedia to disprove his math, and has been interviewed on Air America Radio. He believes that the nuclear waste burial in Nevada is an example of evil stewardship, and that the only way to save humanity is to acknowledge the Cubic Creation Principle of opposites. He believes that educators are lying bastards who suppress Cubic Creation debate, and that students should demand free speech to discuss it. He believes that belief in a nonvalue god equates to nonacceptance of Cubic knowledge, and that word has no inherent value.


About the author Gene Ray:

> Dr. Otis Eugene "Gene" Ray was an electrician and inventor who created the website for Time Cube in 1997. He proclaimed himself a Doctor of Cubism and Wisest Human in 2002 and was known for his anti-Semitic, homophobic, and racist views. He ran four separate websites, all revolving around his apparent insanity, and promised $10,000 to anyone who could refute the Time Cube theory. He died in 2015 at the age of 87, but his domain registration for timecube.com was renewed in 2015 under his name. His most coherent sentence was "God is the supreme excuse for human adults to absolve themselves of any obligation to preserve natural resources for their own children during their lifetime." His most famous quote is "1 x 1 = +1 is stupid and evil."

https://labs.kagi.com/ai/sum?url=https://rationalwiki.org/wi...


Oops, the summarizer lost some minus signs there! The original quote is actually

> −1 x −1 = +1 is stupid and evil.


Wow, it made a succinct and understandable summary of one of the most incomprehensible websites ever in, like, seconds.


So far pretty good. I am a Kagi Search subscriber, and they keep surprising me to the upside.

Jerome Powell’s press conference https://www.federalreserve.gov/mediacenter/files/FOMCprescon...

At a press conference on February 1, 2023, Chair Powell of the Federal Open Market Committee (FOMC) discussed the FOMC's decision to raise the policy interest rate by 25 basis points. Powell noted that recent indicators point to modest growth of spending and production this quarter, and that job gains have been robust, with employment rising by an average of 247,000 jobs per month over the last three months. He also noted that total PCE prices rose 5.0 percent over the 12 months ending in December, and that core PCE prices rose 4.0 percent. Powell emphasized that the FOMC is strongly committed to returning inflation to its 2 percent objective, and that the Committee decided to raise interest rates by 25 basis points today in order to assess the economy's progress toward its goals. He also noted that the FOMC is carefully watching the economy and inflation, and that it is important that overall financial conditions continue to reflect the policy restraint that is being put in place in order to bring inflation down to 2 percent. Powell concluded that his base case is that the economy can return to 2 percent inflation without a significant downturn or increase in unemployment.


This is quite a bit better than all of the other attempts at building AI summarizers I've seen. One thing that's really nice about it is that it seems to pick out surprising or unique information, rather than just trying to provide the most general abstraction or summary of the article. My team has a journal club where we go through ML papers, and I feel like this will be a good tool to familiarize myself with academic papers before going into the details.


It's very good at that... how does it manage to do that?


Things that are unsurprising are probably mentioned frequently in the training data. Based on that alone, you'd expect language models to have an easy time judging how novel a piece of information is; you just have to get the model to use that signal.
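That intuition can be illustrated with a toy unigram surprisal score. A real LLM would use full contextual probabilities, but the principle — rare under the model means surprising — is the same (the function below is made up for the sketch):

```python
import math
from collections import Counter

def surprisal_scores(reference_corpus, sentence):
    """Score each word of `sentence` by its surprisal, -log2 p(word),
    under an add-one-smoothed unigram model of `reference_corpus`.
    Words the model has rarely (or never) seen score highest."""
    counts = Counter(reference_corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen words
    def p(word):
        return (counts[word] + 1) / (total + vocab)
    return {w: -math.log2(p(w)) for w in sentence.lower().split()}
```

Keeping the highest-surprisal sentences would be one crude way to bias a summary toward novel information rather than boilerplate.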

I would expect GPT-3 to be able to do that, but I don't think this is GPT.


Impressive.

URL: https://qntm.org/responsibility

Summary:

> This story follows Tim and Diane, two scientists who have successfully built a quantum computer. They use it to simulate a universe, and eventually discover that they are in fact living in a simulation themselves. They realize that any changes they make to the simulation will be reflected in their own universe, and must be careful not to do anything that could be harmful. They also discover that the simulation is part of an infinite tower of quantum computers, with their own universe being near the bottom. This realization is both exciting and daunting, as they now have the power to play God, but must be careful not to do anything that could have disastrous consequences.

Could it be using ChatGPT behind the curtains?


OpenAI charges per token of input; a summarizing service where you feed it large amounts of input would have lousy economics.


Isn't the economics of all ChatGPT-related products just charging more than they pay for usage?


Imagine the following scenario:

If you're a wrapper around ChatGPT, summarizing ten thousand tokens costs $100.

If you use an in-house net, it costs you $1 of GPU time.

Is $100 larger than $1? Think it through step by step. Show your work.


The Summarizer uses extra knowledge beyond what's given by the URL you submit. I tried giving it a link to a synopsis of "Terminator 2: Judgment Day" from IMDB. I then copied and pasted the synopsis into a pastebin online content-hosting site, and gave the Summarizer the URL of the pastebin. Both results are impressive but quite different:

When fed https://www.imdb.com/title/tt0103064/plotsummary/?ref_=tt_st... :

Terminator 2: Judgment Day is a 1991 science fiction action film that follows the story of John Connor and his mother Sarah as they attempt to prevent a nuclear war called Judgment Day. The film follows John and Sarah as they are pursued by a T-1000, a liquid metal android sent back in time to kill Sarah. John and Sarah enlist the help of a reprogrammed Terminator to protect them from the T-1000. In the end, John and Sarah manage to destroy the T-1000 and the chip that would have caused Judgment Day. The film ends with John and Sarah embracing, relieved that they have prevented the nuclear war. An interesting aspect of the film is the Terminator's ability to learn and understand human emotions, which is demonstrated when it refuses to kill John's foster parents and when it grasps Sarah's hand in a gesture of understanding.

Versus https://pastebin.com/Bk183Jky :

The movie opens with shots of traffic and children in playgrounds, followed by a flash-forward to Los Angeles in 2029 AD. The survivors of the nuclear fire called the war Judgment Day are fighting against robotic tanks and androids. John Connor and his mother Sarah Connor are on the run from the T-1000, a liquid metal android sent to kill them. They enlist the help of the Terminator, a cyborg sent from the future to protect them. After a series of intense battles, the Terminator and Sarah manage to destroy the T-1000 in a pool of molten metal. However, the Terminator reminds them that there is still one more chip which must be destroyed; it points at its own skull.

Notice that the first summary mentions "1991 science fiction", but none of the words science, fiction, or 1991 are to be found at that URL.


Did you try running both examples multiple times? In my experience, these LLMs are not deterministic so even the same prompt can give some pretty different responses.
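That non-determinism typically comes from decoding with a temperature above zero rather than always taking the most likely token. A minimal sketch of temperature-scaled sampling over hypothetical next-token logits:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Softmax-sample an index from `logits` after scaling by
    1/temperature. Low temperature approaches greedy (deterministic)
    decoding; higher temperature flattens the distribution, which is
    why the same prompt can yield different outputs."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(logits) - 1
```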


I asked it to summarize the transcript for a podcast about an episode of Star Trek. One minor issue in grammar, but otherwise, it does a remarkable job of making a summary:

> In the Star Trek episode "The Cage", Captain Pike is held captive by a race of advanced televisions. He is offered a life of luxury in a cage, but he realizes that this is not the life he wants and manages to escape. The televisions watch everything on TV and have become too voyeuristic and passive. At the end of the episode, Pike makes the right choice and is able to see the difference between the falsehood of the fantasy and the difficult but more acceptable reality. This episode highlights the importance of exploration and being better than oneself, and is a reminder of the dangers of becoming too passive and voyeuristic. It also shows how technology can be used to create illusions and how these illusions can be used to manipulate people.


Impressed with their long document summarization, any ideas how they do this? Seems beyond normal GPT limitations; either they have a more powerful model (doubt it) or hacked around the limitations?

e.g. good summary for a very long text https://labs.kagi.com/ai/sum?url=http://localroger.com/prime...
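Pure speculation on my part, but a common way to get past a fixed context window is recursive map-reduce summarization: split the document into chunks that fit, summarize each chunk, then summarize the concatenated partial summaries until the result is short enough. A sketch, where `summarize` stands in for whatever model is actually used:

```python
def summarize_long(text, summarize, max_chars=8_000):
    """Map-reduce summarization sketch. `summarize` is any function
    that maps a short text to a shorter one (e.g. one model call);
    this wrapper only handles the chunking and recursion."""
    if len(text) <= max_chars:
        return summarize(text)
    chunks = [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
    partials = " ".join(summarize(chunk) for chunk in chunks)
    # Recurse: the joined partial summaries may still be too long.
    return summarize_long(partials, summarize, max_chars)
```

A real system would split on sentence or section boundaries rather than raw character offsets, but the shape is the same.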


Thanks for noticing! (dev here)

We have an in-house model we've been developing since 2019, just for summarization of long documents in real time. We'll try to find some time to blog about the high level design.


I'd pay for an API to this


I second this, API please.


Third this! Firms serving the financial world, tracking PR statements, etc. would likely pay high rates for this.


Jealous. I have been trying to 'configure' my own local summarizer, as I find ChatGPT's length limitations frustrating. For example, I wanted to take a transcription of a troubleshooting session between a user and a support agent (me) and get a summary of what we did, overall. But doing this with ChatGPT takes so. many. copy/paste actions. I just wanted to give it the whole thing and ask for a summary of every 5 minutes.
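The chunking half of that is easy to script yourself, assuming the transcript has lines prefixed with `MM:SS` or `HH:MM:SS` timestamps (the summarization call itself is left out; each chunk would be fed to whatever model you have):

```python
import re

def chunk_by_minutes(transcript, window_minutes=5):
    """Group timestamped transcript lines ("MM:SS text" or
    "HH:MM:SS text") into fixed time windows, each ready to be
    summarized independently."""
    windows = {}
    for line in transcript.splitlines():
        m = re.match(r'(?:(\d+):)?(\d+):(\d+)\s+(.*)', line)
        if not m:
            continue  # skip lines without a leading timestamp
        hours = int(m.group(1) or 0)
        minute = hours * 60 + int(m.group(2))
        windows.setdefault(minute // window_minutes, []).append(m.group(4))
    return [" ".join(lines) for _, lines in sorted(windows.items())]
```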

Secondly, this lab needs to stick around. It makes me want to subscribe, just for access to this.


> an in-house model we've been developing since 2019

We will be very interested in how you tackled the problem of "understanding" the input text - to differentiate it from current "fakeries of actual speakers".

I.e. how you implemented the "intelligent" parts - or, which simulation of actual intelligent processing (if any) it contains.


Is there an API for this? I'd love to make an Alexa skill that reads out summarized Wikipedia articles for me.


Looking forward to a technical writeup on what you all have done, looks impressive!


I tried this with a random Magic the Gathering card: https://labs.kagi.com/ai/sum?url=https://scryfall.com/card/m...

> Jubilant Mascot is a 1/1 creature with an ability that allows you to pay {3}{W} at the beginning of combat on your turn to support two other target creatures. This means that you can put a +1/+1 counter on each of up to two other target creatures. Support can target a creature another player controls, and multiple abilities can target the same creature. If some, but not all, targets for a spell become illegal, the remaining targets are affected as appropriate. It is interesting to note that Jubilant Mascot is legal in Legacy, but not in Modern or Historic.

Pretty neat how it picked out an interesting fact about the card's legality. And it seems to handle parsing what a keyword does.

Very cool stuff!


> Pretty neat how it picked out an interesting fact about the cards legality

Based on my experiments plus some of the comments here, it looks as if the intro "It is interesting to note" shows up quite often.

My guess is that it finds a main thread, keeps it in the summarization, and then adds the nearest outlier as "interesting fact".


Tried it with code:

https://bellard.org/quickjs/pi_bigdecimal.js

This code uses the QuickJS bigdecimal type to calculate the value of pi to a given precision. It does this by using the Chudnovsky algorithm, which is a series of calculations that can be used to approximate pi. The code is written in Javascript and uses BigInt and BigDecimal to perform the calculations. It is interesting to note that the code also takes into account the possibility of bad rounding for the last digits, and adds extra digits to reduce the probability of this happening.


no way! this can also summarize / explain code!? wow!


It can't. It can _look up_ code.


Summarized a little-known record label, and it even seems to have inferred some of the label's ethics. There are a couple of goofs from parsing the Bandcamp page (the label supposedly offers "download help," and some album titles are listed as artists), but it's generally quite impressive:

Fire Is Free is a transient private press bedroom label for psychedelic space-noise drones and other music. It features a variety of artists, such as Hlmn, Agreeable Goldmines, Versioning, A Beamed Gelling Rose, and more. The label offers streaming and download help, and all of its music is copyright-free. It is an interesting example of a DIY label that is dedicated to creating and sharing music without any restrictions. The label's catalogue includes a variety of genres, from ambient and dub to acid and noise, making it a great resource for fans of experimental music.


I'm interested in what the opposite might look like. "What would this look like if it were a short book rather than a longform article?"

On a different note, the rise of these types of language models mirrors what I understand to be the rise of human consciousness--a handy heuristic to lean upon that gradually becomes too handy to actually put down. Over time, the user is overshadowed by the tool and loses sight that any other cognitive world existed. It's hard for me not to see Julian Jaynes' idea of bicamerality and "verbal delusions" piloting the brain being repeated with computers.

In fact, I think I'll use this weekend to go read The Origin of Consciousness in the Breakdown of the Bicameral Mind with this idea in mind.


Awesome - I went to Project Gutenberg, picked a title that I was curious about but for which the Bibliographic Record does not have a description. Gave the Kagi Summarizer the URL to the HTML version of the book and got a good description. Finally a way to make my book browsing easier!!!

I wonder if the summarizer's summary can be added to the Project Gutenberg Bibliographic Record?


When I link it to a well known piece of text, does it go to its wikipedia page (or something of the sort) and summarize that, instead of analyzing the actual text?

For example I linked to Thus Spoke Zarathustra by Nietzsche - I got a summary that highly resembles something a wiki page would say about the text and quite unlike what I'd expect from reading the text itself.

I've gotten a very similar feeling giving it Plato's Republic.

Update: I've linked it to The Idiot by Dostoevsky and it didn't do well at all. Strange piece of technology - reminiscent of ChatGPT.


I just tested it on the full text of a legal case in which I was involved (as a lawyer). It prepared a superb summary. But then I tested it on a second, more complex, case, and it did an average job.


I just tried it with a blogpost of my own that's written in a somewhat localized Spanish and it summarized it pretty well in English, which is quite surprising. Good job team kagi!


Tried it with Arthur C. Clarke's "La Estrella" ("The Star", Spanish version), and it did quite well too, although it somewhat downplays the faith crisis at the end.

> Arthur C. Clarke's "La Estrella" tells the story of a group of scientists who are exploring a distant star system and discover a planet that was once home to an advanced civilization. The planet is now a desolate wasteland, but the scientists find evidence of the civilization's existence in the form of a giant monument and thousands of visual records. Through their research, they are able to determine that the civilization was destroyed by a supernova that occurred in the same year as the birth of Jesus Christ. This discovery leads the scientists to question the existence of God and the purpose of the universe. The story highlights the fragility of life and the power of the universe, and serves as a reminder of the importance of cherishing the time we have on Earth.

The key moments are in Spanish for some reason, and half of them are hilarious (in a cute naive AI way). Translated:

- Arthur C. Clarke's faith was put to the test when he discovered a star that had exploded thousands of years ago.

- Chinese astronomers detected a supernova in 1054 without knowing what it was.

- The Phoenix Nebula is a pinhead, a faint shell of gas surrounding a single star.

- The inhabitants of the star had built a mile-high monument to attract the attention of other races.

- The star's explosion had destroyed all the planets that orbited it, except one.

- The inhabitants of the star had left thousands of visual records, and machines to project them, along with graphic instructions.

- The star's civilization had been destroyed in full flower, leaving no survivors.

- Arthur C. Clarke wondered whether the destruction of the star could be reconciled with the mercy of God.

- Arthur C. Clarke came to the conclusion that the Universe does not [truncated]


This is nothing short of remarkable! The video summary in particular feels like magic. I am exceptionally eager to learn more about their datasets, model architecture, and training process.


Based on a few samples and experiments on it:

This tool should avoid trying to mimic structured speech, as real understanding does not show in the outputs, and the "coherent" speech is consequently a dubious travesty.

It should instead, if based on recognizing salient points in text, do that as well as possible and simply highlight textual extracts from the original, leaving intelligent contextualization of the highlighted chunks to the intelligent skimmer.


I signed up for Kagi last month, and haven't used another search engine since.

I've already gotten my money back in time savings from good search results; I didn't realize they were also working on other valuable things like this.


Not sure what to make of its summary of Moby Dick (as presented in the Project Gutenberg text version) -- https://labs.kagi.com/ai/sum?url=https://www.gutenberg.org/f.... The summary is full of platitudes. I'm wondering how they were generated. Did they actually arise directly from the text itself, or from outside commentary on Moby Dick that Kagi had ingested?

> The story follows the adventures of a whaling crew led by Captain Ahab, who is obsessed with hunting down a white whale named Moby Dick. The crew includes Queequeg, a harpooner from a distant island, and Starbuck, the first mate. Along the way, they encounter many dangers, including storms, sea monsters, and the wrath of Moby Dick himself. The story is a meditation on the power of nature, the dangers of obsession, and the importance of friendship and loyalty. The message of the story is that even in the face of great danger, it is possible to find courage and strength in friendship and loyalty.


Compare the summary of the book to the summary of the Wikipedia article on Moby Dick (https://labs.kagi.com/ai/sum?url=https://en.wikipedia.org/wi...)

> Moby-Dick is a novel by Herman Melville about the obsessive quest of Ahab, captain of the whaling ship Pequod, for revenge on the white sperm whale Moby Dick, which crippled him on the ship's previous voyage. The novel follows Ishmael, a sailor on the Pequod, as he narrates the story of Ahab's monomaniacal pursuit of Moby Dick. Along the way, Ishmael and his fellow crew members encounter a variety of characters and situations, including the mysterious Fedallah, a harpooner who prophesies Ahab's death; Pip, a young African American cabin-boy who jumps in panic from Stubb's whale boat; and the Delight, a whaling ship that is badly damaged and with five of her crew left dead by Moby Dick. The novel also contains a variety of themes, including the nature of evil, the power of fate, and the search for identity. Ultimately, the novel suggests that while revenge may be a powerful motivator, it can also lead to destruction and death.


Bit meta, but asking it to summarise this comment thread gives an interesting result…

> Jessica Yamada is a therapist working with a patient, Victoria, who is struggling with the possibility of uncontrollable, incurable plagues. Jessica is also trying to help a young woman, Lily, who is torn between fatalism and wanting to help her friends stick together.

Not sure why it picked this particular post as representative.


This works very well! The Key Moments option seems to contain a bit more "meat and potatoes" than the normal summary at this time. I had it summarize a textbook chapter on Cartesian space in 3D game development: https://labs.kagi.com/ai/sum?url=https://gamemath.com/book/c...

The final sentence of the summary gave me a chuckle: "Interestingly, 360 can be divided evenly in a large number of cases without needing fractions, which was apparently a good thing to early civilizations."

As well, the Key Moments seemed to grab items that are probably not very important: "In 3DS Max, the default orientation of the axes is for +x to point right/east, +y to point forward/north, and +z to point up."

Cannot wait for this to get tied into Kagi search and for the possibility of an API.


Seems pretty solid for technical documentation! Great job folks at kagi!

I tried to break it with the 'Tao of Programming'[0].

It was not particularly outstanding, but arguably many real humans would also fail to process the allegories/humor and distill that text.

[0] https://www.mit.edu/~xela/tao.html


I fed it my own blog post: https://maheepk.net/posts/nothing-to-display/

> This week, a person experienced the consequences of a broken MacBook Pro display after a fluke of probability. They had to resort to using Duet to connect their iPad as a display, and Apple's Sidecar was found to be completely useless. The person learned lessons about Apple's products, such as the need for more durable displays, and the usefulness of Duet. They also learned lessons about software development, such as the importance of testing accessibility for blind users. Despite the unfortunate incident, the person is still looking for a way to make the best of the situation.

Compliment aside, seems like a good feature for Orion browser. Great work.



I threw a few different things at it and it did pretty well IMHO: https://twitter.com/bradneuberg/status/1621619127545905152?s...


I am truly amazed at how fast and good it is at summarizing videos, it is (or will be) definitely an amazing tool for the Kagi search engine. I guess they have trained/finetuned a specific model to summarize video transcripts, since the content is delivered so differently.


Quite useful IMHO to summarize some Hacker News posts, here's an example [0].

Funny story: I tried to build something like this years ago, called MNMN, guessing that AI would eventually be able to do the summaries well enough. [1]

The name was a play on the song "Manamanà".

[0]: https://labs.kagi.com/ai/sum?url=https://plei.one/blog/flutt...

[1]: https://github.com/simonebrunozzi/MNMN


The song being https://m.youtube.com/watch?v=8N_tupPBtWQ ?

It surely reminded me of that one!


Yes! Even better, this [0]. Elio is a famous Italian singer.

[0]: https://www.youtube.com/watch?v=5dN0EYphtA0


Great! Thank you!


It's very nice. Apparently it doesn't follow pages of an HN thread. It summarized this https://news.ycombinator.com/item?id=34579175 as being mostly about staying warm. See https://labs.kagi.com/ai/sum?url=https://news.ycombinator.co...

This is how it summarized Yishan's thread on content moderation: https://labs.kagi.com/ai/sum?url=https://twitter.com/yishan/... It obviously got confused when he side-stepped into his CO2 mantra in the middle of the thread.

From these two links, it feels like this engine mostly focuses on the beginning of a page; things that come later are not given the same priority when summarizing.


I got a really hilarious response; the bot apparently thought the noscript tag was particularly important (Article source[1]):

ChatGPT is a technology that can help writers with their projects, but it is not a replacement for real writing. Despite feeling overwhelmed and stupid at first, experienced writers know that these feelings will eventually pass and they will be able to create an argument. This article by Christopher Grobe discusses why he is not scared of ChatGPT and how it can help him with his writing. He explains that it can help him to see the outline of an argument and to address the quibbles and confusions of an audience. He also highlights the importance of allowing javascript and access to certain URLs in order to view the content of the page. Finally, he encourages readers to contact him with their thoughts and questions about the article.

[emphasis mine]

[1] Source: https://www.chronicle.com/article/why-im-not-scared-of-chatg...


This from a competing search engine summarizer for that link.

```ChatGPT can be a useful tool for helping students to develop their writing skills and think more critically, but it should be used to complement and enhance traditional methods of instruction, not replace them. By recognizing the limits of ChatGPT and pushing themselves beyond those limits, students can learn to identify and address the weaknesses in their own writing, and they can learn to structure their arguments in a more logical and coherent way.```


This is hilarious indeed. I expect LLM technology to get better but I don't expect the hilarious and nonsensical bits to completely disappear. Interesting future...


Caused by this, likely what the crawler faced:

https://imgur.com/a/0SqkPM4

Summarizing content is still dependent on pages on the messy web being actually read properly.


Anybody else's first instinct was to get a summary of the Christian Bible? It isn't able to complete the task, in case you're interested; here is the source[0] I submitted.

[0]http://triggs.djvu.org/djvu-editions.com/BIBLES/DRV/Download...


It can! We set the timeout lower when this got on HN, to make sure it worked for everyone. I ran this document in the backend directly and it worked. It is 1,600 pages of text and took about 3 minutes, but it did produce a summary! To see it, check back tomorrow as we lift the timeout restriction :)


Hopefully it will work for: THE HISTORY OF TOM JONES, A FOUNDLING.

That's another long book I'm curious about ;)

Currently it's just stuck.


OK, now we're talking.


Here is a quick bookmarklet that uses this to open the summary for the current page in a new tab.

Highlight the following and drag it to your bookmarks bar.

    javascript:Object.assign(document.createElement("a"), {href: `https://labs.kagi.com/ai/sum?url=${encodeURIComponent(window.location)}`, target: "_blank"}).click();


Thank you! Very helpful.


http://www.kibo.com/kibopost/happyweb/index.shtml

> HappyWeb is a new project that aims to make the Web a better place by introducing new features such as more font sizes, more colors, alphabetized bookmarks, faster <BLINK>, Y2K problem solved, edible eye candy, no viruses, easier on the eyes, new HTML tags, and rules and punishments to make everyone happy. It also introduces new type styles such as <BW>, <BOLD&SPICY>, <FLORIDSCRIPT>, <DISTRESSEDTYPEWRITER>, and <UGLY>. Interesting features include the <MAGNET> tag which produces an invisible field to drag the surfer's arrow-pointer towards it, and the <SMILEY> tag which simplifies the use of smileys. Punishments for those who don't follow the rules include being framed, charged a nickel, and being forced to watch Fox television programming with HTML tags added.


Just tried my own blog article and it's pretty accurate.

https://labs.kagi.com/ai/sum?url=https://langsoul.com/blog/l...

This blog post explores the idea that reading can help us understand the perspectives of others and shape our own views. It also discusses the limitations of reading, such as not being able to experience the calming effects of the sea or the grandeur of nature. The author then shares a personal experience of a 2km walk that turned out to be a path of suffering, with multiple inclines and hills. He emphasizes that only through experiencing the pain and exhaustion of the journey can one truly understand the path of suffering. This post highlights the importance of experiencing life in order to gain a deeper understanding of it.


I fed it The Last Question. While the resulting summary is impressive, I suspect there may be some cheating/plagiarism going on because it includes commentary with no origin in the text.

The Last Question is a science fiction short story by Isaac Asimov about two attendants of Multivac, a giant computer, who make a bet over highballs. The question they ask is whether mankind will ever be able to restore the sun to its full youthfulness even after it had died of old age. The story follows the question through the centuries as mankind develops interstellar travel and builds a better and more intricate computer, the Universal AC. In the end, the Universal AC is unable to answer the question due to insufficient data, but it is able to demonstrate the answer, restoring the Universe from chaos. This story is a fascinating exploration of the power of technology and the limits of human knowledge.


It's interesting. Feels like an answer that someone who didn't know about, or didn't understand, the concepts of the story (mostly entropy) would give.


Like other LLM-backed models, the Kagi summarizer can process text in other languages quite well. I had it summarize the SPIEGEL interview with Weizenbaum and Heafner from 45 years ago, and it did a splendid job. The summary was presented in English, but the highlights were captured in German.

Of course, it missed some (subjectively) important points, like the discussion about the principal transferability of human experience (and knowledge): we humans have inner values and processes which we cannot (easily) express in our languages. Therefore, it's also impossible to transfer such human experience into a computer.

https://www.spiegel.de/politik/es-ist-eine-explosion-des-qua...


Tried it with one of my blog posts: https://learnbyexample.github.io/python-regex-surprises/

>This post provides a few examples of regular expressions that may surprise the reader. It covers topics such as the difference between the $ and \Z anchors, the use of start and end arguments for compiled methods, the behavior of undefined escape sequences, the use of octal and hexadecimal escapes in the replacement section, the behavior of quantifiers, and the difference between greedy and possessive quantifiers. It also highlights the importance of using keyword arguments when using the sub() function, and the need to use the regex module's constants when using the regex module. Overall, this post provides a great overview of some of the nuances of regular expressions.

At first I thought it was just making use of subheadings, but on closer look it does pick up points from the content. I think this could be very useful for coming up with short descriptions for meta tags.

Edit: just saw that there's also a "Key Moments" button too!

>- The $ and \Z anchors are not equivalent. The $ anchor can match both the end of string and just before \n if it is the last character, while \Z will only match the end of string.

>- You can specify start and end index arguments for compiled methods.

>- When using the re.MULTILINE flag, the ^ and $ anchors will match at the start and end of every input line, and also after the last newline character.

>- Undefined escape sequences will raise an exception.

>- Only octal escapes are allowed inside raw strings in the replacement section.

>- Referring to the text matched by a capture group with a quantifier will give only the last match, not the entire match.

>- \b[a-z](on|no)[a-z]\b is not the same as \b[a-z][on]{2}[a-z]\b.

>- (a*|b*) is not the same as (a|b)*.

>- You should always pass flags as a keyword argument for the sub() function.

>- When using the regex module, the constants should also be used from the regex module.
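The first key moment (the `$` vs `\Z` anchors) is easy to verify with Python's `re` module:

```python
import re

# "$" also matches just before a trailing newline...
assert re.search(r"foo$", "foo\n") is not None
# ...while "\Z" matches only at the absolute end of the string.
assert re.search(r"foo\Z", "foo\n") is None
assert re.search(r"foo\Z", "foo") is not None
print("ok")
```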


Idea: create a website that hosts the questions and responses from summarizers and language models on a webpage like StackOverflow. If Google implemented this it could save their business from ChatGPT.


Amazing job! I tried summarizing a Lithuanian article. It did a pretty good job, but for some reason letters such as ąčęėįšž that are unique to the Lithuanian alphabet were missing from the output.

https://labs.kagi.com/ai/sum?url=https://www.delfi.lt/news/d...


Here's a small API wrapper in Go: https://github.com/sashabaranov/kagi-summarizer-api


Oh, reminds me I had an email address @kagi.com, a long, long time ago :-)


I was wondering whether this is still the same company. I remember getting cheques from Kagi and German bank employees eyeing them suspiciously.



I asked them one time and they said they have no relation to Kee (who ran the shareware processing site), and just bought the domain when it was available. No connection to the old site. (I also used to get the occasional check from the old Kagi!)


I know, for quite a while I got quite a bit of income from them, good old days of Mac shareware :-)


It works on Amazon product pages and news home pages. Wow.


I was trying to build something similar using GPT-3 but I realized it was not reliable. I tried Kagi to summarize the link for this YC thread https://news.ycombinator.com/item?id=34646389

This was the summary:

Jessica Yamada is a therapist working with a patient, Victoria, who is struggling with the possibility of uncontrollable, incurable plagues. Jessica is also trying to help a young woman, Lily, who is torn between fatalism and wanting to help her friends stick together. Finally, Jessica is talking to Eidolon, a powerful man who is losing his powers and is trying to fight a monster to the death. Jessica is trying to help her patients in a variety of ways, from providing coping mechanisms to offering moral support. She is also encouraging them to push beyond their limits and to structure their arguments in a more logical and coherent way. An interesting detail is that Ratz, a character in the bar in the novel Catch 22, has a prosthetic arm, which adds to the atmosphere of the bar. The story highlights the fragility of life and the power of the universe, and serves as a reminder of the importance of cherishing the time we have on Earth. The message of the story is that even in the face of great danger, it is possible to find courage and strength in friendship and loyalty. Thoreau encourages readers to explore their own inner depths and to strive for truth and simplicity. He argues that material wealth is not as important as the wealth of the mind, and that true happiness comes from within.


I imagine a combination of: RSS reader (something like Feedly) + some kind of embedded "semantic preview" for the long articles (Summarizer) + any kind of offline "read it later" service for the articles distilled from web 2.x or web 3.x or web N.x artifacts.

I'll pay for such a service each month and will be happy.

I donate the idea to the author of the Summarizer :)


Kagi's search blows Google out of the water too


Tried to get it to summarize this comment page on hacker news, but the output seems to refer to another document.

https://labs.kagi.com/ai/sum?url=https://news.ycombinator.co...


I asked it to summarize this HN comment page, and all it talks about are the Turner Diaries. Any idea why it's fixated on impalallama's comment?

https://labs.kagi.com/ai/sum?url=https://archive.ph/3JDtf


It is because of the cache, as someone probably summarized the page very early on.

Current summary straight from the API:

"Universal Summarizer is a tool developed by Kagi that is able to quickly and accurately summarize articles, videos, and other texts. It is able to pick out unique and interesting details from the text, such as the fact that a character in a book has a prosthetic arm, or that a certain card is legal in Legacy but not in Modern or Historic. It is also able to provide summaries of complex texts, such as Herman Melville's Moby Dick, and even provide summaries of technical documentation. It is also able to summarize texts in different languages, such as Spanish. The tool is also able to provide summaries of videos, and is able to provide summaries of texts that are difficult to summarize, such as the allegories and humor in the Tao of Programming. It is an impressive tool that is sure to be a great asset to the Kagi search engine, and could be used to improve the results and UI of the main search product."


Nope, I took an archive.ph snapshot and that's what I pointed it to - totally new URL.


I guess it is a good way to crowdsource a search index.

The thing about Google search and its byproducts is that I'm starting to think it is deliberately crippled by US government power structures. I cannot explain any other way why it sucks so much and why there is zero-to-no innovation, at least in what is available to end users.


Not bad with a single out-of-context chapter from a genre web fiction serial that is hard to understand without context. The language and writing style sound a LOT like ChatGPT. It feels like the model was asked a bunch of different questions (what is interesting to note, what's important, etc.)

>Jessica Yamada is a therapist working with a patient, Victoria, who is struggling with the possibility of uncontrollable, incurable plagues. Jessica is also dealing with a malfunctioning suit that has interrupted her sessions with her patient. She is also dealing with a colleague who is making her out to be a freak. Jessica is also trying to help a young woman, Lily, who is torn between fatalism and wanting to help her friends stick together. Finally, Jessica is talking to Eidolon, a powerful man who is losing his powers and is trying to fight a monster to the death. It is interesting to note that Jessica is trying to help her patients in a variety of ways, from providing coping mechanisms to offering moral support.

https://parahumans.wordpress.com/category/stories-arcs-11/ar...

Also, unintentionally got this line perfect (given later story stuff...)

'a powerful man who is losing his powers and is trying to fight a monster to the death'


If it's an LLM, it's probably read a lot of web fiction.


A while back I recorded an unscripted youtube video with some of my thoughts about automation and economics. In the end I felt the video was long winded and unfocused, and that I should certainly focus on scripted content in the future (which I have subsequently done on my other channel). However this summarizer took seconds and produced what I thought was a decent summary of my video.

Video since someone will ask: https://youtu.be/Sx_2-xwWJUk

Summary: "Taylor Alexander is a robotics enthusiast who is working on a four-axis 3D printed robot arm. He is interested in the economic analysis of Karl Marx and Richard Wolff, and believes that collective ownership of businesses has certain benefits. He suggests that people should form cooperatives to create a more equitable economy. He also talks about the potential of robots to automate production and eliminate complexity in the economy, and suggests that if people own the machines that produce what they need, they will receive the benefits of their production. He also talks about the potential of open source hardware, and how it could be used to create robots that produce goods at a low marginal cost. Finally, he talks about the importance of nutrition and how it can help people lead healthier lives."

Not to attract weird comments, but I find it interesting that it chose to use he/him pronouns for me, even though I specifically introduce myself in the video saying I use they/them pronouns. The summary of the video is otherwise pretty great.


I just tried the same thing with an unscripted 23-minute video I recorded and put on YouTube in December. If I were to use Kagi’s summary myself, I would want to make minor changes to the wording in a couple of places; otherwise it is perfect.

Here’s the video and the summary:

https://youtu.be/KwNhAzQN-Nc

“Open AI's release of Chat GPT two weeks ago has caused shockwaves around the world. This AI has the potential to revolutionize language learning and teaching, as it can provide human-like interaction and adapt to the learner's needs. Previous attempts to use technology for language learning have largely failed due to the lack of human interaction and motivation. Chat GPT could provide a solution to this, as it could be used to create conversations, explain language, and build vocabulary. It could also be used to create virtual characters that are attractive to users. While there is a concern that this technology could replace language teachers, it is likely that their role will become more like a coach, encouraging and motivating students.”



I got that error when trying to summarize a dumb two-paragraph article I once wrote. If I had to guess I'd say that's what you get when the backend cannot identify what the main text is (which is a surprisingly hard problem).


First chapter of Catch 22 - comically low quality

URL: https://thefreeonlinenovel.com/con/catch-22_chapter-1---the-...

Summary

Yossarian is in the hospital with a pain in his liver that falls just short of being jaundice. He is comfortable in the hospital and censors letters to pass the time. One day, the chaplain arrives and Yossarian is surprised to find that they have a mutual friend. The chaplain is shy and apologetic, but Yossarian is friendly and warm. He warns the chaplain of the craziness of the other wards, and the chaplain promises to be careful. In the ward, there is a soldier in white with two useless legs and two useless arms, and a colonel with a vortex of specialists trying to determine what is wrong with him. The woman with curly ash-blond hair visits him every day. Eventually, the patients start to leave the hospital, and Yossarian tells the doctors that his pain has gone away.


Ouch! Horribly wrong on https://www.agedfish.com/blog/meiwaku-and-the-brompton-bell....

Maybe the Star Wars reference threw it.


This is a great tool for survival in the face of the popularity of independent journalism.

Perfect. Thanks!


Fwiw I use AndiSearch and they already have a great summarizer integrated directly into search results that they launched last week.

There is a Summarize button on every result card that can show a summary. The summaries are much faster. And they are clearly much higher quality.

Unlike Kagi, they don't spam every HN post that mentions search with shill comments. And unlike Kagi, it is free and anyone can use it. It does not require login, so you can use it anonymously. This looks like a rushed attempt to copy the AndiSearch summary feature.

I already find that the summaries on AndiSearch are so good that I can use it to filter which articles I need to read. The Reader mode is also very good.

If you like this, I'd recommend comparing it. You can paste in a url and get a summary and reader mode for it. I find searching for the title works better. I haven't seen it posted here yet.

https://andisearch.com/


I have never used AndiSearch, how can I summarize a video, book or pdf with it like Kagi's "universal" summarizer?


I find it does well with ebooks. I haven't tried with pdfs or videos. The quality of the AndiSearch summaries for the things I use every day like articles, Wikipedia and technical documentation is very good.

The Summarize button is on the search results, therefore you don't need to visit a separate site and paste in the url. Additionally, you already know when a summary is available as the button only shows when it is. It is new but already I feel that it changes how you use a search engine having this feature available right there on the results.

The Kagi one said 'No summary available at this moment, please try again later.' for many of the urls I tried.


>I haven't tried with pdfs or videos

It does seem it's not able to do that.

>The Kagi one said 'No summary available at this moment, please try again later.' for many of the urls I tried.

I honestly find AndiSearch more finicky in not showing the "Summarize" button or giving error.


It is reliable for me. Here is a test to compare. I did a search for 'google chatgpt clone'.

AndiSearch generates a good summary for every result.

https://andisearch.com/?q=google+chatgpt+clone

Maybe Kagi has HN hug of death but it returns the 'no summary available' message for me now.


>Maybe Kagi has HN hug of death but it returns the 'no summary available' message for me now.

what are you entering? It's not a search engine, you need to insert a link to an article, book, pdf, video etc... Look at the examples.


I am (obviously) copying and pasting the same urls that appear in the search results I shared individually into the url entry form on the Kagi summarizer page.


If this is the case then it all works for me on the Kagi's summarizer (using your example).


Also, being realistic, I do not believe that many normal people try to summarize the full text of old public domain ebooks. If you search for Moby Dick, much better resources come up, and that is what you want a summary for. People want a summary when they are searching to know if they should read more.


Summarizing long articles, videos, etc. is why these summarizers are so useful, you talk about web search but what OP linked is not a search engine (the Kagi search engine is, but this demo is not, it is just a summarizer that you use if you already have, for example, an article found on HN), you are talking about two different tools.


This is really awesome, especially for people who are dyslexic or who can't concentrate long enough to read longer texts. At least they will know what the article is about. I also like the "key points" button.


This is really good. Well done!


I tried it on some of my own blog posts; it is pretty good. Sometimes the summary makes the same point several times, and its tendency to insert content not in the source material makes it hard to trust.


Strange! Sometimes, when I pasted the link directly, it just showed "no healthy upstream". Clicking a second time on this article has the same effect. And there is no way to go back to its home page.


For some articles, it showed "No summary available at this moment, please try again later." Does that mean the engine is momentarily overloaded, or that the summaries are not instant? Perhaps Kagi runs the AI summary service in the background, feeds it through a human pipeline, then publishes the result? Otherwise, it's hard to explain the curation found by other users and now the non-availability.


Same here. I tried from different networks and devices; right now all I get is "no healthy upstream".


Bluntly, it doesn't work. I handed it this: https://labs.kagi.com/ai/sum?url=https://www.gutenberg.org/c...

It correctly identified it as War and Peace, then gave a standard spiel about War and Peace, along with a bunch of bullet points, none of which were "Key Moments". But... it didn't actually read it. If it had, it might have noticed that I only linked to Book 1.

But I could have figured out it was War and Peace just by reading the title, and half of the rest from the Wikipedia page.


I am looking for an AI that can say what a company does, and even put it in a category. Sadly, this solution incorporated the marketing jargon when I tested it: "Leading," etc.


I provided a very subjective book in pdf format, and it has returned a decent summary for such a difficult task, also in impressive time, and in Portuguese. Amazing.


This is incredibly well done. Congrats to the team at Kagi, I was genuinely blown away by the speed and accuracy of the results.


My first thought is that it grabs the text from the website (or the transcript from a video) and asks GPT-3 to summarize via the API. No?


They said they have their own model.


Well.

> Case is in a bar called the Chatsubo, a place for professional expatriates. He is talking to the bartender, Ratz, who has a prosthetic arm. Ratz and Case joke around and then a prostitute who was sitting next to Case leaves. Suddenly, a drunken Australian starts talking about how the Chinese invented nerve-splicing. This causes Case to become bitter and he expresses his feelings to his glass. This passage paints a vivid picture of the atmosphere in the bar and the characters that inhabit it. The interesting detail is that Ratz has a prosthetic arm, which is a unique detail that adds to the atmosphere of the bar

The results are interesting. The in-production use case, where the tech would be most reliable, is not evident.

Edit:

It seems (also from the "Key Moments" section) that its elemental attempts to understand the details (and structure) of the text should output reliability values, and weight the final output by them.

Edit2:

It seems from further tests that the "understanding" may be very limited, and that the "trick" is more a matter of trying to identify salient parts and present them reformulated, without attempting to understand them.

Edit3:

It seems that identification of salient points is confirmed as the main mechanism - but, in the absence of understanding, the reformulation just degrades information. For example: the original being, «Bronze is the first metal that gets its own age, which began around 3300 BCE in Mesopotamia. Other metals were certainly in use before it — especially copper — but the addition of a small amount of tin to existing copper technology changed everything. Bronze was a step up in hardness, durability, and resistance to corrosion [...]», the summary «Bronze is the first metal to have its own age, beginning around 3300 BCE in Mesopotamia. It was a step up in hardness, durability, and resistance to corrosion» betrays faults in changing the initial 'gets' to 'have', and in missing that those qualities of bronze hold in comparison to copper.

--

...I see we have a sniper here, as is so common: well, do not forget to make your criticism explicit (assuming it will just take some time to elaborate). Edit: still need more time?


PLEASE make this work with question answering


Funny that you mention it, as that is exactly what we have been working on :)

It is not publicly linkable but we demoed this in our Discord few days ago (check #ai channel)

https://kagi.com/discord


Really cool. I was thinking of hacking together something similar for personal use; thankfully I discovered this.


Awesome. Does this use only the raw text, or does it get help from other summarizers online?


I grabbed a random article in my to read list and it didn’t help me understand the article.

https://tasshin.com/blog/strategy-101/

Lots of words I was unfamiliar with, because they were domain-specific terms and they were not defined.


I was an early paying Kagi user but started using !g more and more over the last few months. I am not sure what they changed. Recently I decided to switch my default search back to Google, as in basically 90% of all search queries I found the result I was searching for on Google instead of Kagi.


I'm the opposite; I use Google less and less.


Very impressive. Not 100%, but I did give it a hard text to summarize.


I tried 5 different URLs to mainly text documents, and it said "no summary available" for all of them. Is it the hug of death, or am I doing something wrong? I'm a Kagi subscriber, logged in and everything.


I tried an article and it said article not ready yet.


> No matter how hard you try, you will never see any ads in Kagi!

famous last words. :-/


This is really cool. Do you have an API? :)


We do, and we thought about opening this as a part of other APIs we already have.

How would you price this?


I have a few thousand things to summarize, and am a paying subscriber. What happens to my account if I automate your page?

I'm happy to pay. I think it's too much but I'd probably unhappily pay $0.05 per summary when they're this good. I'd rather pay $0.005 or less, then I'd plug it into KB tools like Obsidian as well as our bookmarking / link logging workflow firm-wide.

Which reminds me, I want to be able to "gift" Kagi search to all employees. I actively don't want them to have to use Google. Is there a way for an automatic company @domain x per seat billing, or should I use gifting?

Finally, what would it take for kagi.com to be part of the Apple One / iCloud+ offering? That needs to happen.



Probably by usage similar to OpenAI since I assume your costs are correlated (compute etc).

You could do a hobby plan which is free up to a certain number of requests/tokens per day or hour, so developers can start building without any friction (I think this is important and sort of expected, as most beloved dev tools do it). You can minimize your costs by offering this on shared resources, so inference is a little slow and the API can go down at times of high usage, but it's free so users won't mind.

And then have pro plans for higher usage / 99.9999% SLA / fast inference etc. (maybe a minimum subscription plus pay-as-you-go per 100k tokens over the limit... similar to Vercel with bandwidth and serverless functions).
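A scheme like that (free allowance, then pay-as-you-go per token) might be sketched as follows; the function name and all numbers are purely illustrative, not any actual Kagi or OpenAI pricing:

```python
def monthly_cost(tokens_used: int,
                 free_tokens: int = 100_000,
                 price_per_100k: float = 0.50) -> float:
    """Hypothetical usage-based pricing: a free tier, then per-token billing."""
    billable = max(0, tokens_used - free_tokens)  # only tokens over the allowance
    return billable / 100_000 * price_per_100k

print(monthly_cost(100_000))  # within the free tier -> 0.0
print(monthly_cost(300_000))  # 200k billable tokens at $0.50/100k -> 1.0
```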


Cool, pricing per tokens processed makes sense, thanks for brainstorming this.


Would love to use an API, per token pricing is a good approach (with use limits like OpenAI). If you need testers, I have some use case (long form non-fiction content). LMK at ml[at]summarity[dot]com


GPT cannot currently do this (for long documents), and you have a commonly requested use case. For now you can charge as much as they do and have a decent business, I'm guessing, but the future depends on LLMs from OpenAI and others getting into this space.

I would also like to see controllable length output if that's feasible with your model.


Does it rely on OpenAI under the hood?


Unlikely, it does not seem to handle foreign languages very well.


Here is the summary it gives for Walden: Conclusion https://en.wikisource.org/wiki/Walden_(1893)_Thoreau/Chapter...

In Walden, Thoreau encourages readers to explore their own inner depths and to strive for truth and simplicity. He argues that material wealth is not as important as the wealth of the mind, and that true happiness comes from within. He also encourages readers to be independent and to take risks, and to not be afraid to explore the unknown. He compares the courage of a soldier to that of a footpad, and suggests that it is nobler to explore one's own inner world than to chase after material possessions. He also suggests that it is better to be content with what one has than to strive for superfluous wealth. He concludes by encouraging readers to strive for truth and to not be afraid to explore the unknown.

I think this shows the limits of these kinds of statistical word approaches. They aren't necessarily "wrong", but they fail to give any sense of the spirit of optimism in the original. I would have been far more impressed if it had just quoted the ending:

The life in us is like the water in the river. It may rise this year higher than man has ever known it, and flood the parched uplands; even this may be the eventful year, which will drown out all our muskrats. It was not always dry land where we dwell. I see far inland the banks which the stream anciently washed, before science began to record its freshets. Every one has heard the story which has gone the rounds of New England, of a strong and beautiful bug which came out of the dry leaf of an old table of apple-tree wood, which had stood in a farmer's kitchen for sixty years, first in Connecticut, and afterward in Massachusetts -- from an egg deposited in the living tree many years earlier still, as appeared by counting the annual layers beyond it; which was heard gnawing out for several weeks, hatched perchance by the heat of an urn. Who does not feel his faith in a resurrection and immortality strengthened by hearing of this? Who knows what beautiful and winged life, whose egg has been buried for ages under many concentric layers of woodenness in the dead dry life of society, deposited at first in the alburnum of the green and living tree, which has been gradually converted into the semblance of its well-seasoned tomb -- heard perchance gnawing out now for years by the astonished family of man, as they sat round the festive board -- may unexpectedly come forth from amidst society's most trivial and handselled furniture, to enjoy its perfect summer life at last!

I do not say that John or Jonathan will realize all this; but such is the character of that morrow which mere lapse of time can never make to dawn. The light which puts out our eyes is darkness to us. Only that day dawns to which we are awake. There is more day to dawn. The sun is but a morning star.


doesn't work on my articles


no healthy upstream


One of the things I find myself doing a lot is trying to make sense of a given HN discussion -- so I fed an archived version of this discussion to the summarizer: (https://labs.kagi.com/ai/sum?url=https://archive.is/HdWx7):

> Universal Summarizer is a tool developed by Kagi that is able to quickly and accurately summarize articles, videos, and other texts. It is able to pick out unique and interesting details from the text, such as Ratz having a prosthetic arm in the bar in the novel Catch 22. It is also able to summarize technical documentation and Moby Dick, as well as provide a summary of the themes in Walden. It is able to summarize videos as well, and is a useful tool for developers to familiarize themselves with academic papers. It is also able to provide summaries in multiple languages, and is able to detect jokes and humorous elements in the text. It is an impressive tool that is sure to be a hit with developers and researchers alike.


I pay for Kagi search. This doesn't interest me in the least. They have been breaking down the costs of running their business and justifying their prices, which I'm cool with. If it turns out I'm paying to subsidize this kind of thing, it's not going to work for me anymore.

(I don't mean to say it's bad or anything, just I don't care about it. It's an interesting downside of a paid business model. Google can do whatever because it's all "free" but presumably paying customers care more about not paying for a bunch of stuff they don't want)


Seems weird as a customer of a search engine to not care about an innovation that's so tightly related to search. The ability to accurately synthesize information is something that will almost definitely be used to improve results and UI of the main search product.


I do not pay for the Kagi search engine, but it is quite obvious to me that this powerful summarizer will greatly help the search engine, which will soon be able to easily index videos, pdfs, books, etc. and extract the most relevant content from it with ease.


Figuring out what a document is about is one of the most central problems in building a search engine. Being able to regurgitate it in a way that makes sense to humans is just a nice benefit.


I don’t know why. Most businesses work on projects that are not strictly related to the core product. Imagine how much cheaper your iPhone would be if you weren’t subsidizing apple car, augmented reality research, etc.


I imagine an iPhone wouldn't exist if its research wasn't subsidized with Macs or iPads earlier.


I'm on the opposite side: currently, I do not use Kagi. Such features would convince me to switch (and therefore, pay).


I also pay for Kagi search, have a usage value of ~55% of what I pay, and I'm fine with experiments.


Right, maybe I should have framed my comment a bit differently: this is not core functionality I would pay for.

Experiments are cool and it would be dumb to make suggestions about how they do R&D. I was more reacting to the thought that this would become part of the product offering.


Oof, that inspired me to check and mine's at 17%. I'm fine with experiments, too. (It reminds me of the old Google Labs part of search.)


Would you recommend Kagi search to look up software dev?


Yes.


I put in a (mostly) transcript of the Randy Pausch lecture on Time Management [1] and the summarizer gave me:

Time management is an important skill to have in order to lead a happier and more productive life. Julie A. Zelenski, a professor at UVA, gave a talk on time management, drawing from two books and her own experiences. She recommends having a filing system, using speaker phones, and standing during phone calls to save time. She also suggests doing the ugliest task first, having a timer on the phone, and leaving thank-you notes on the desk. Zelenski also recommends scheduling meetings, phone calls, and mundane tasks, and treating people with respect and dignity. An interesting point she made was to use a power drill to carve pumpkins instead of a knife!

Julie Zelenski, ha. AI be dreamin'.

I liked the examples and you're always going to have some weird loss of content when you summarize but the AI taking the joke by Randy and then imagining a surname is still amusing.

[1] https://jamesclear.com/great-speeches/time-management-by-ran...


> The Turner Diaries is a novel by Andrew Mac Donald (William Pierce) that follows the story of a man named Turner, a member of a revolutionary organization that is fighting against the oppressive System. Turner and his comrades use acts of terror, sabotage, and violence to fight against the System, and they eventually succeed in overthrowing it. The novel follows Turner's journey as he and his comrades struggle to survive and fight against the System, and it also explores the themes of racism, civil rights, and the power of the government. The novel ultimately conveys the message that violence and terror can be used to fight against oppressive systems, but that it should be done with caution and with a clear understanding of the consequences.

Does an interesting and good job are presenting the problem with summarization. You read this and you might think The Turner Diaries where a Star Wars adventure story about fighting an evil empire.


This is a completely reasonable and insightful comment.Along with any comments critical of Kagi, it is downvoted to oblivion. I have to say the way Kagi shills overwhelm any discussion on HN is incredibly frustrating. They hijack every thread. They even post about HN being their main source of customers. They have hacked HN, no two ways about it.

For anyone who might feel I am being unreasonable or that this post was somehow wrong-headed, this is Wikipedia's description of The Turner Diaries.

"The Turner Diaries is a 1978 novel by William Luther Pierce, published under the pseudonym Andrew Macdonald. It depicts a violent revolution in the United States which leads to the overthrow of the federal government, a nuclear war, and, ultimately, a race war which leads to the systematic extermination of non-whites. All groups opposed by the novel's protagonist, Earl Turner—including Jews, non-whites, "liberal actors", and politicians—are murdered en masse."

The commenter was reasonable.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: