Authors are from Northeastern University, Shenyang, China, not the Northeastern University in Boston. I don't understand why the two Chinese professors wrote an LLM book in English; definitely not from experience, probably under pressure to publish.
On the contrary, arXiv is for preprints, i.e. not (yet) peer-reviewed. Off the top of my head, it was initially used by physicists, who often have huge collaborations and long review times. Then the ML community invaded the space later on. This does not mean a peer-reviewed paper cannot go there, of course.
As an academic, I always thought of arXiv as where you put your papers first, before they are peer-reviewed. Before that we used our web pages, but they kept breaking.
The book, too, is self-aware, though you do have to make it to page ii.
> In writing this book, we have gradually realized that it is more like a compilation of "notes" we
> have taken while learning about large language models. Through this note-taking writing style, we
> hope to offer readers a flexible learning path. Whether they wish to dive deep into a specific area
> or gain a comprehensive understanding of large language models, they will find the knowledge
> and insights they need within these "notes".
Assume you are a college instructor for a freshman Computer Science course.
Your job is to take a PDF file from the internet and teach the topics to your students.
You will do this by writing paragraphs or bullet points about any and all key concepts in the PDF necessary to cover the topic in 2 hours of lectures.
The pdf file is at https://arxiv.org/pdf/2501.09223
Build the lecture for me.