A long time ago, due to an improbable sequence of events, I found myself teaching Computer Graphics at my alma mater, a year after I took the class myself. Over the following five years or so, my approach to the subject evolved, and I really got the hang of it.
After I stopped teaching, I took my notes, handouts, and slides and made them into a series of articles that I put on my website, where they remained in relative obscurity. Hacker News managed to find them every once in a while, and they were generally well received, but nothing came of it. Until April 2019, when it made the HN front page again [0] and this time caught the attention of an editor at No Starch Press [1].
Long story short, today I’m incredibly grateful and excited to tell you that CGFS is coming out as a real book, with pages and all [2]. The folks at NSP graciously agreed to let me publish the updated contents, the product of almost two years of hard editing and proofreading work, for free on my website [3]. But if you’d like to preorder the printed or ebook version, you can use the coupon code MAKE3DMAGIC to get a 35% discount at https://nostarch.com/computer-graphics-scratch.
I’m still somewhat in disbelief that my work is getting published as a book, and this genuinely wouldn’t have happened without your support. So once again, THANK YOU :)
I'm sure you don't remember me, but we used to work together at Improbable – you sent me such a wonderful email when I left to go back to Sweden, thank you!
Anyway, just wanted to say congrats and hope you're doing great buddy. All the best! :o)
Of course I remember you! Could never forget such an impeccably dressed coworker :) I think we were super close to meeting in Miami that one time I had a long layover?
That's right, I was in Florida visiting family. Would love to reconnect, don't hesitate to ping me an email at [email protected] if the feeling is mutual! :o)
It's ok, my spam filter seems to do its job (knock on wood!). Either way, I'm past the edit time limit now anyway...
What matters is I got the opportunity to reconnect with an old pal, I'll gladly pay the spam tax if it comes to that. Worst case scenario I guess I'm getting a new email address eventually. :o)
My favorite chapters are those about drawing raster lines and triangles. One of my first programs (written in 16-bit 386 asm) was a routine to draw a triangle, and I love these algorithms dearly.
An important and beautiful thing that is missing in the book is the drawing of anti-aliased lines (easy) and triangles (not trivial!). I find that rendering carefully shaded smooth objects with a pixelized boundary loses a big part of the magic.
Yep, good point. The reason is that the objective of the book is to cover as much material as possible, in the shortest time possible, and in the simplest way possible; so I had to build the shortest path between putPixel() and filtered textures, and that leaves things like antialiasing behind.
I explain a bit of this in the introduction: most of the time I present not the best/fastest algorithm, but the one that's simplest to understand. For example, I don't even mention Bresenham's algorithm; instead I present a super simple but inefficient one, which achieves two critical things: (1) you're drawing lines in no time, (2) it motivates the linear interpolation method that is then used for shading, z-buffering and texturing.
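For a taste, here's a sketch of that kind of interpolation-based line drawer, assuming the book's putPixel(x, y, color) primitive and a line that's more horizontal than vertical (swap the roles of x and y otherwise):

```javascript
function drawLine(x0, y0, x1, y1, color) {
  if (x0 > x1) { [x0, y0, x1, y1] = [x1, y1, x0, y0]; }  // draw left to right
  const a = (y1 - y0) / (x1 - x0);  // slope: how much y changes per unit of x
  let y = y0;
  for (let x = x0; x <= x1; x++) {
    putPixel(x, Math.round(y), color);
    y += a;  // linear interpolation - the same idea later reused for shading etc.
  }
}
```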
I wanted to learn about rendering and graphics for a while.
Almost every modern source, however, uses engines or big frameworks and skips the parts I actually want to look into. Compiling a reading list or figuring things out myself would of course have been an option, but I guess others can relate to the problem of having too little time for hobby projects :)
Big bonus points also for trying to remain programming language agnostic!
I immediately preordered your book and find it hard to express my excitement, can't wait.
It's a classic in the computer graphics field, like Knuth is for algorithms. I'd recommend it alongside the OP's new book.
EDIT: But for anyone who is reading this wanting to learn about practical computer graphics, CG:P&P is NOT the resource to use! The low-level rasterization algorithms used in computer graphics are an important thing to learn at some point, just like learning assembly language provides valuable insights even if you never touch a line of assembler again. But if you actually want to write graphics code on modern hardware with GPUs, I'd highly recommend Real Time Rendering instead: https://www.amazon.com/Real-Time-Rendering-Fourth-Tomas-Aken...
Again, I read an ancient edition which predated the existence of consumer GPUs. But the edition I read was more than half on the topic of 2D graphics and what you might consider to be mundane things like font rasterization.
You might think this has nothing to do with 3D graphics, but in fact there is a big overlap. The graphics driver + GPU takes a description of a 3D scene and projects it into 2D primitives, which are rasterized onto a bitmap (the display) just as one might render a font glyph.
So IIRC the book begins with the basic theory and techniques of 2D graphics, and then shows how projective geometry can be used to render 3D scenes onto 2D displays, and progressively adds various corrections which we take for granted, such as perspective-correct interpolation of texture values, which these days is done automatically by the hardware.
I don't think there are any prerequisites other than a basic understanding of beginner computer science, coding, and algorithms. If you can read Knuth, you can read CG:P&P.
Hi Gabriel, is there a European site from which we can order? The delivery costs from the US are quite punchy, and it makes no sense to send the paper copy when a printer could do as good a job locally. Too many paper-miles? Looking forward to getting hold of it, great work! Mike
Hi Mike! I have nothing to do with distribution, but the book is also available from Amazon [0], B&N [1], Bookshop [2], and Penguin [3], maybe one of these has better shipping to Europe?
Hey, thanks for this. I did go on to find it on Amazon.co.uk, but of course the e-book + paper-book bundle is different. I'll check out the other sites as well. Thanks again.
Congrats on this book! I did a computer graphics class for my undergraduate degree, and we used this fat OpenGL bible of a book to try to implement some functions.
Things got very hard with ray tracing and later u,v texture mapping, etc. Making a note here so I can try your book out sometime!
This is awesome, but as a kid back in the Pong era, I always wondered how the basic squares of pixels worked. In your travels writing this book, did you ever come across one that explained that in a way friendly to 12 year olds?
The idea of a pixel is that it's the smallest area of the screen that can be independently controlled, thus the designation "pixel" for "picture element".
In most modern phone screens or monitors, each pixel is formed by a group of three smaller elements with fixed colors (usually red, green, and blue) but _variable brightness_. By controlling the brightness of these sub-elements, we can control what the overall color of the pixel appears to be - once you get more than an inch or two away from the screen, the light from the element group blends into what we see as a single color - so 100% green + 100% red + 0% blue looks like bright yellow. 50% each for red, green, and blue looks like a middling grey. You can usually see the structure of the pixel with a magnifying glass, though this is easier with an old TV or monitor than a modern phone.
These pixels are laid out on a regular rectangular grid, and your display controller will offer some way to set the color of each pixel and to then update them all at some (usually) regular interval, for example 60 times per second. In computers, it's common to keep a "frame buffer" around that stores separate values for the red, green, and blue components of every pixel on the screen. If these are 8-bit values, that implies that each R/G/B component can have 256 different levels of brightness, and in combination they allow each pixel to take on one of about 16.8 million (256 × 256 × 256) possible colors. So, a program can change pixel colors by writing different values into this buffer and waiting for the updated buffer to be processed by the display controller to change what the display is showing.
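As a concrete illustration, here's what poking such a frame buffer might look like; the names here are made up for the example, not any real API:

```javascript
// A toy frame buffer: 3 bytes (R, G, B) per pixel, laid out row by row.
const width = 640, height = 480;
const frameBuffer = new Uint8ClampedArray(width * height * 3);

function setPixel(x, y, r, g, b) {
  const offset = (y * width + x) * 3;  // skip y full rows, then x pixels
  frameBuffer[offset]     = r;  // each component is 0..255, as described above
  frameBuffer[offset + 1] = g;
  frameBuffer[offset + 2] = b;
}

setPixel(100, 50, 255, 255, 0);  // a bright yellow pixel (full red + full green)
```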
Of course, the electronics, physics, chemistry, timing, and logic of display generation have changed quite a bit since Pong. And not all displays even have pixels - vector displays used to be a thing, and are still used in some very specialized applications.
I once bought a schematic for a DIY Pong. IIRC it used monostables and integrators to control the timings: scan frequencies, yes, but no strict pixels needed.
I appreciate the explanation, but it's not for me. I'm trying to find a good book to recommend to my grandkids. I've had fun teaching them how to use old retro computers (c64, Apple II, and such). It's been a fun experience, but I'd like to give them some references that aren't me! I've personally come a long way in my computing skills since I was staring at shiny new Pong screens :)
That's trickier - somewhere along the way I stopped buying books, and now I just do tactical forays onto the net when I need a new bit of domain knowledge.
Details like how and why pixels work are much easier to understand and retain if they're in some kind of context. Gamers learn about pixels to better understand the games or their gear. Devs learn about them in order to control them. Artists learn about them to better understand why computers mangle their art. So maybe choose an area the grandkids are engaged with and look for something there that also touches on display tech.
Thinking of when I was 12 (2005?), microscopes and magnifying glasses helped me understand. How well that works depends on device DPI (cell phones have an insane number of pixels per inch; desktop screens are usually less dense).
If you can zoom in enough to see subpixel elements, and pull up a color wheel, it's very intuitive to see that the screen is made of pixels, and the pixels are made of three elements.
I don't know about books, though. I remember reading the binary/computers chapter of How Things Work and being thoroughly confused at that age. There was some extended allegory about white mammoths and black mammoths.
Did you have The Way Things Work or The New Way Things Work? I had the latter around the same time. I thought the extended computer section (Bill's Gates!) was very helpful. I still really like the visual in which the image of a mammoth is turned into a series of pumpkin/no pumpkin signals being launched through the air, then becomes a picture of a mammoth on the other side of the field. I think it got a bit too technical for me when it started showing how transistors are constructed-- I guess I should've stuck with the pumpkins.
Now I have both editions and have looked through them side-by-side. It's a bit unfortunate that some pages were dropped to make room for the expanded digital section in the newer edition, but well worth the trade-off for more/better explanations in the digital realm.
Just took a look over my bookshelf, but they haven't made it with me through moves.
I do still have Incredible Cross-Sections. That was another of my favorites to flip through. It looks like there are reprints and new versions. Would definitely recommend for kids.
Graphics was somewhat more complicated back in the day. In the days before we had enough memory to hold a full screen framebuffer, there was a lot of magic going on with cathode ray tubes, video signals, dedicated hardware sprite engines, memory access clock cycles, using interrupt handling to "race the beam" and so on.
It's not like modern display panels and HDMI signals and digital protocols are simple, but you don't have to go into the details to understand how pixels and framebuffers work.
If you're trying to explain how pixels work, going back to 1980s era technology is a detour into arcane technical details that aren't really relevant any more. They're pretty cool to read about if you're into retro computer technology but they don't have much educational value or relevance to the modern day.
If they're interested in retro computers, you can always try finding some books from the 1980s. Many titles were written for children. The Apple II was also a fun playground for computer graphics, since it was supported in BASIC.
To some extent, I guess that's what I've tried to do here. The linear algebra might be too advanced for a 12 year old (I didn't pick it up until much later!), but on the other hand you don't need to follow all the derivations - a 12 year old can learn a lot just by following the results and the resulting algorithms.
There's also a linear algebra appendix [0] that presents the operations, explains how to use them and how they can be interpreted, without going into any theoretical depth about why these things are the way they are.
Can you be more specific about the context you're interested in?
On modern machines, a pixel is just a set of three numbers indicating how much red, green and blue light should be shown at a particular point. Ex: {0.75 red, 0.5 green, 0.0 blue} for a kinda-dark, orange pixel. The GPU keeps a big 2D grid of these number-triples in memory and on a regular schedule sends out a copy over the DVI cable to your monitor. The monitor has a bit of memory to hold its copy. And, it has hardware to scan over the grid of numbers to produce a sequence of voltage levels that are used to change the color of the points on the LCD.
There's a bit of math involved in how to do a good job representing colors with numbers and how to convert those numbers to voltages. But, at the most basic level, an image is just a big 2D grid of numbers. If you want to change the image, poke the grid. People want to change images a whole lot, so we've developed pretty sophisticated hardware and software around poking 2D grids... But, that's a whole other topic.
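In browser terms, canvas ImageData is literally such a pokeable grid (RGBA bytes). A minimal sketch, assuming a <canvas id="screen"> element exists on the page:

```javascript
const ctx = document.getElementById('screen').getContext('2d');
const image = ctx.createImageData(ctx.canvas.width, ctx.canvas.height);

// The triple from the example above, scaled to bytes:
const x = 10, y = 20;
const i = (y * image.width + x) * 4;   // 4 bytes per pixel: R, G, B, A
image.data[i]     = 0.75 * 255;  // red
image.data[i + 1] = 0.5  * 255;  // green
image.data[i + 2] = 0.0  * 255;  // blue
image.data[i + 3] = 255;         // alpha (fully opaque)

ctx.putImageData(image, 0, 0);   // hand the grid over to the display path
```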
Great news! Please make sure the links to demos work for a while or you have some archival/preservation in mind. I've read through older technical books whose examples went offline and depending on the subject matter it can be quite a loss.
At first I mostly followed the lecture structure of the previous professor, but over the years I streamlined it a lot, emphasized a "motivated" approach to each topic (e.g. "last chapter we did this, now we try to do this thing, it doesn't quite work, how can we fix it?"), and generally refined and simplified the explanations of everything. Teaching the same material year after year was a fantastic opportunity to identify the recurring sticking points and find ways around them.
Coincidentally, just last night I was reading an issue of Byte magazine from 1976. It was telling people how to do just this: Computer graphics from scratch.
Except, back then "from scratch" meant soldering together wires and transistors and such.
The big prize was displaying a vector Starship Enterprise on an oscilloscope.
Wow, that sounds pretty hardcore! I suppose what's considered "from scratch" evolves over time. I'm old enough to have written bytes to 0xA000 (and the pain of doing page-switching in VESA modes), so even this "putPixel() on a canvas" is not exactly "from scratch" from that perspective!
The 2019 thread is what led No Starch Press to contact me in the first place, so I'm very grateful to the HN community :) Keep up the incredible work, dang!
It's super gratifying when feedback loops that end up producing real-world benefits pass through HN threads. I love hearing about those. Congratulations on the book. To judge by HN's consistent interest in this material, it must be awesome.
There are several ways to approach this subject. This is a bottom-up approach - how do we get pixels on screen?
Other approaches start from "how do we represent a scene", in the sense of a scene graph or at least lists of vertices, triangle indices, and textures. That cuts the problem into "represent scene" and "render scene". That's a practical division, because that's the interface between what you put into a renderer and what the renderer does with it. Today, you're either using someone else's rendering system, or you're building the rendering system. Probably not both, except as an exercise.
This division is actually a bit above the OpenGL/Vulkan level. Vulkan is complicated because it's mostly about setting up the GPU to run a rendering pipeline, talk to displays, and other housekeeping. And memory management. If you have a library to manage that part, it's not so bad.
It's like old computer books, where you started out learning what the arithmetic/logic unit (the ALU) did, how it interacted with memory, what the instruction decoder did, and so on. This prepared you for assembly language programming. Few courses start out there today.
Incidentally, the caption for figure 12-1 is missing some symbols. It reads "Using instead of doesn’t produce the results we expect."
Thanks for bringing up the issue with the caption. This is a bug in my own pipeline to convert the book to website format; it looks just fine in the book itself. It should read "Using DrawFilledTriangle instead of DrawWireframeTriangle doesn't produce the results we expect." Will fix.
Maybe this is a really dumb question, but as someone who really knows nothing about computer graphics, how can you actually run these examples?
I realize that I could Google and research and find a list of "canvas drawing tools" in a variety of languages that I could then evaluate (despite, again, knowing nothing about graphics), but it'd be SUPER-SUPER-awesome to have a quick note "there are lots of ways to go about making these things happen in the real world, but here's one I recommend if you have no other priors." :-)
Wow thank you! I totally missed the "Source code and live demo" button during my first cursory glance through the chapters, but indeed, there it is. :-)
CG fundamentals was my favourite course in college. Going from drawing lines with the Bresenham algorithm to drawing polygons, filling polygons with Scanline, implementing ZBuffer, texture mapping and so on. Was a lot of fun even though I never ended up working with CG except for some hobby projects.
Especially raytracing! What I used to do when teaching was taking a few weeks to develop the rasterizer, and then say "OK, 4 weeks to go, time to present your final project - you're making this <image of a raytraced scene with reflections and shadows>, which I'm going to explain within the next 2 hours". Some groups genuinely thought I was joking. But then every one of them managed to write a raytracer, their minds were blown, and they used their renders as desktop backgrounds. I loved that :)
I'd also highly recommend Ray Tracing in One Weekend[1], which starts you out with literally just a c++ program to dump bytes into a file which can be opened with most standard image/document viewers. Then you don't have to think about or set up any shaders before you can start to experiment with the rendering algorithms.
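In case it helps picture it, here's the same trick sketched in JavaScript (Node.js) rather than the book's C++; the file format (plain-text PPM) is the real one, the gradient is just a placeholder for where a raytracer would compute a ray color:

```javascript
const fs = require('fs');

const width = 256, height = 256;
let ppm = `P3\n${width} ${height}\n255\n`;  // PPM header: ASCII RGB, max value 255

for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    ppm += `${x % 256} ${y % 256} 64\n`;  // one "R G B" triple per pixel
  }
}

fs.writeFileSync('out.ppm', ppm);  // open out.ppm with most standard image viewers
```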
I've only had a quick glance but this book looks fantastic, saved for later reading and will definitely consider buying the ebook.
I've been teaching myself 3D/CG programming in my spare time using free web resources. I've been trying to learn it "the hard way" by doing everything relatively from scratch: instead of using a high-level engine, I'm using low-level libraries, writing my own shaders, implementing my own scene graph, etc. This book really seems right up my alley and looks like it'll teach me a lot.
Excellent resource! I really like the fact that everything is centred around the idea of just a single 'PutPixel' function. Makes things much more approachable.
I took the liberty of rewriting the 'Perspective projection' demo (1) in more up-to-date JavaScript, using ES6 classes and all the other niceties that modern browsers have. I also used the regular canvas line drawing methods to focus on just the vertex-to-line conversion, and updated the code so that it will work at any resolution. https://codepen.io/hay/pen/gOLazpm
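For readers skimming the thread, the heart of that vertex-to-canvas conversion is tiny. A sketch along the lines of the book's presentation (constants here are illustrative, not the demo's actual values):

```javascript
const d = 1;                  // distance from the camera to the projection plane
const vw = 1, vh = 1;         // viewport size in scene units
const cw = 600, ch = 600;     // canvas size in pixels

function projectVertex(v) {   // v = {x, y, z}, camera at the origin
  const px = v.x * d / v.z;   // similar triangles: things shrink with distance
  const py = v.y * d / v.z;
  return {                    // viewport -> canvas, origin at the canvas center
    x: px * cw / vw + cw / 2,
    y: ch / 2 - py * ch / vh, // flip y: canvas y grows downward
  };
}
```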
> "The choice of axes is arbitrary, so we’ll pick something useful for our purposes. We’ll say that Y is up and X and Z are horizontal, and all three axes are perpendicular to each other. Think of the plane XZ as the “floor,” while XY and YZ are vertical “walls” in a square room."
As someone who works with CAD/CNC machines on a daily basis, this is torment. XY is the floor and XZ, YZ are "walls". Same would go for a 3d printer. They're not that arbitrary.
Sorry about that :) In CG and gamedev I've seen "Y is up" a lot more, to the extent that the "depth buffer" is usually called the "Z buffer".
I don't know how this came to be. You make a very good point that XY as the floor makes a lot of sense, especially if you're coming from the context of "I'm drawing a floorplan on a horizontal table" - the floorplan matches the orientation of the actual floor so XY is the most natural choice.
I suppose for a similar reason CG adopted the "XY is a wall" convention, since you start with XY on the screen (a vertical plane in front of you) and only later add depth?
I can see how using Z as the depth axis for the 2d coordinate system of the screen is a natural way to go for a programmer. Maybe I can lay my monitor flat on a glass desk and lay on the floor ;-)
I've been reading the book and enjoying it so far. Thank you.
Y up is very common in computer graphics. As for CAD, I know at least SolidWorks has the option to use Y up.
I think the choice of 3D up axis is related to the expected look direction. 3D printer control software evolved from CNC control software, which is looked at from the top down, so "up" lies in the build plate plane. Video games are commonly looked at from the side, so up is up. Up is then often associated with Y.
And then there's the argument about right-handed vs left-handed coordinate systems...
Computer graphics basically follows what math describes (x, y, z) or what is on the screen or blackboard. On the screen, the x-axis is left and right; the y-axis is up and down. When the z-axis was introduced, it just came in and out of the screen. The convention is that negative z values go into the screen and positive z values come out of the screen.
I break the negative-Z convention in the book. The math and the diagrams feel a lot more natural to me if Z goes into the screen. I suppose negative-Z comes from measuring Z as "distance from the screen to me", but then you have a bunch of weird negative values in the equations.
Left-handed vs right-handed coordinate systems. I prefer right-handed, but for some reason most of computer graphics seems to have adopted left-handed.
That was the opposite of fun when I was noodling with writing an OpenGL-based rendering engine for a "fly a spaceship around a 3D maze" game, a decade and a half ago.
As to why I prefer right-handed? The lin-alg textbook I had at uni uses a right-handed coordinate system for what it calls "the perceivable room" (yes, it is "the book with the cow on", for those in the know).
When you get to game networking, Gaffer On Games is a must read. Efficient game networking layers are all about that UDP [2] and reliable UDP [3]. When web games were huge, it would have been pure gold to have WebRTC with built-in UDP, NAT punchthrough, etc.
I also wrote about client-side prediction and server reconciliation [0], I suppose you're referencing that; but just to be clear, this book is just about graphics, and has nothing to do with that (other than the fact that multiplayer games generally have graphics)
I was just saying that for people interested in games, it's good to have a base-level-to-advanced guide to every part, including rendering, networking, gameplay, etc., that comes from experience and explains things clearly. Yours looks like a great source for networking as well, thanks for adding it.
I see you reference Gaffer on Games and the Valve latency compensation article; both are great for understanding game networking and helped me greatly in shipped titles, especially prior to Unity/Unreal, when we used stuff like enet [2], RakNet [3], or custom stacks. Lots of networking libs inherited the best ideas from those, like reliable UDP, channels, and dealing with NAT, and lots of game engines' networking layers are based on them. Both are excellent libraries to help understand the full picture and get started.
Instant buy. This is so awesome. I was just looking into this subject, so this comes at a perfect time for me. Thank you for the coupon :)
I got a good chuckle out of your technical reviewer: "Alejandro currently works in the GPU Software group at a leading consumer electronics company based in Cupertino, California." I wonder where he works? ;)
One of the things that I really like about this course is that it teaches both raytracing and rasterization. Usually, computer graphics tutorials are either the practical, higher-level, rasterization-focused kind (oftentimes around half "how do opengl"), or how the business card raytracer works.
Having a single course written by a single person that presents both major rendering techniques is amazing.
Yeah, I find this method way easier to understand than scan conversion. Fabian Giesen has a great series of articles from 2011 that explain it well [0].
Also, the original 1988 paper by Pineda is a surprisingly easy read, and it's only 4 pages! [1]
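For anyone curious, the whole method fits in a few lines. A minimal sketch in JavaScript, assuming a putPixel(x, y, color) primitive like the one the book builds on (coverage only, no attribute interpolation):

```javascript
// A point is inside the triangle iff it's on the same side of all three edges.
function edge(ax, ay, bx, by, px, py) {
  // Signed quantity; the sign tells which side of edge A->B the point is on.
  return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

function fillTriangle(x0, y0, x1, y1, x2, y2, color) {
  const minX = Math.floor(Math.min(x0, x1, x2));
  const maxX = Math.ceil(Math.max(x0, x1, x2));
  const minY = Math.floor(Math.min(y0, y1, y2));
  const maxY = Math.ceil(Math.max(y0, y1, y2));
  for (let y = minY; y <= maxY; y++) {      // walk the bounding box...
    for (let x = minX; x <= maxX; x++) {
      const w0 = edge(x1, y1, x2, y2, x, y);
      const w1 = edge(x2, y2, x0, y0, x, y);
      const w2 = edge(x0, y0, x1, y1, x, y);
      // ...and keep pixels where all three signs agree (either winding).
      if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0)) {
        putPixel(x, y, color);
      }
    }
  }
}
```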
Rendering (forward or backward) of algebraic curves (like Bézier and NURBS) never gets mentioned in any such collections, despite being essential to any 2D or GUI framework. And if they are mentioned, it's only as a short aside: just subdivide/tessellate and put it through the polygon pipeline! However, there are far better ways to handle them, which even people in CG seem to be oblivious of.
And of course not to forget all the other approaches to modeling geometry like implicit surfaces, meta-balls, signed distance fields, constructive solid geometry, point clouds, splats, voxels, etc ...
Constructive solid geometry I do mention as an extension to the raytracer [0], but it doesn't have a dedicated chapter. Perhaps for the 2nd Edition? :P
I did some research a year ago. Found a few alternatives, yet none of them was good enough. Here are the issues I remember.
1. It's complicated. Cubic segments can self-intersect or contain singularities.
2. Stroked curves are used a lot. To build the stroke edges from the curve, you need to offset the curve by half the stroke width. When you offset a polyline you get another polyline, yet the offset of a Bezier spline is not generally representable as another Bezier spline (one common approximation is sketched after this list).
3. In some use cases, hardware-implemented MSAA is the best way of doing AA. The polygon pipeline gives you that for free. Producing SV_Coverage in pixel shaders to achieve the same effect for midpoints of triangles is hard and inefficient.
4. Games have been pushing GPUs toward high polygon counts for a couple of decades now, and GPUs became really good at that. GPUs also have early Z rejection that drops pixels or even larger blocks before the pixel shader stage. You can't have that if you're sending large primitives and computing curves in pixel shaders. The polygon pipeline is not necessarily slower.
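Regarding point 2, a minimal sketch of the usual flatten-and-offset workaround: sample the cubic Bezier, rotate the normalized tangent 90 degrees to get a normal, and push each sample out by half the stroke width. All names here are illustrative:

```javascript
function cubicPoint(p0, p1, p2, p3, t) {
  const u = 1 - t;  // standard cubic Bezier evaluation
  return {
    x: u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
    y: u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y,
  };
}

function cubicTangent(p0, p1, p2, p3, t) {
  const u = 1 - t;  // derivative of the cubic Bezier
  return {
    x: 3*u*u*(p1.x - p0.x) + 6*u*t*(p2.x - p1.x) + 3*t*t*(p3.x - p2.x),
    y: 3*u*u*(p1.y - p0.y) + 6*u*t*(p2.y - p1.y) + 3*t*t*(p3.y - p2.y),
  };
}

function offsetPolyline(p0, p1, p2, p3, halfWidth, steps = 32) {
  const points = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    const p = cubicPoint(p0, p1, p2, p3, t);
    const d = cubicTangent(p0, p1, p2, p3, t);
    const len = Math.hypot(d.x, d.y) || 1;  // guard against zero-length tangents
    // Normal = tangent rotated 90 degrees, normalized, scaled by half the width.
    points.push({ x: p.x - d.y / len * halfWidth,
                  y: p.y + d.x / len * halfWidth });
  }
  return points;  // one side of the stroke; negate halfWidth for the other side
}
```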
I remember as a kid in the 80s trying to do simple things like drawing diagonal lines... painful... then wanting to draw a circle. That was even more painful. My dad was a physicist and taught me the math needed... I can remember watching with awe as my circles rendered slowly. Fun times. I was super impressed when I saw programs draw circles quickly. Now it's all trivial! But I'm sure many would be lost if the only thing they could do is set a single pixel!
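(Presumably the math in question was the parametric form of a circle; with nothing but a set-pixel primitive, it looks something like the sketch below, and on 80s hardware you really could watch it crawl around the circumference.)

```javascript
// Trace a circle one angle step at a time: x = cx + r*cos(t), y = cy + r*sin(t).
function drawCircle(cx, cy, r, color) {
  for (let theta = 0; theta < 2 * Math.PI; theta += 0.01) {
    const x = Math.round(cx + r * Math.cos(theta));
    const y = Math.round(cy + r * Math.sin(theta));
    putPixel(x, y, color);
  }
}
```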
Drawing diagonal lines as a kid in the 80s is exactly how I started :) My very first program (or at least the first one I have evidence of) was about drawing a bunch of lines on a ZX Spectrum [0]. I should make one of these "how it started / how it's going" things.
I studied "Computer Graphics" as an elective subject for my Electrical Engineering degree. It was taught by one of the maths lecturers, I think (Peter Castle at University of Wollongong, maybe 1984). I do remember him playing (on VHS, I guess) some scenes of what was state of the art at the time (Star Trek II: The Wrath of Khan, I think). Anyway, it certainly got me interested in ray tracing, and in DKBTrace and POV-Ray later.
Hey, thanks for the book as well as the discount code. Quick question: do you think this will be beneficial to me if, say, I want to write a server-side charting library?
Hmmm, probably not. The book doesn't really go much into drawing 2D shapes - just the minimum necessary to be able to draw filled triangles, at which point it jumps into 3D stuff.
There are no exercises per se, but the whole book is one big programming assignment (or rather, two): you build a raytracer and a rasterizer by the end of it.
I had the same perspective on this. I spent some time building up a 2d graphics library from zero, primarily for drawing very simple UIs on resource-constrained devices.
I initially tried using D3D/Win32 APIs to perform the high-level drawing operations for me, but found that for the scope of functionality that I required, these interfaces were far too heavy-handed. These also have lots of platform-specific requirements and mountains of arcane frustrations to fight off.
I didn't need to interface with some complex geometry or shader pipeline in hardware. Raytracing is hilariously out of scope. I really just needed a dead simple way to draw basic primitives to a 2D array of RGB bytes and then get that to the display device as quickly as possible. What I ended up with is something that isn't capable of very much, but can run on any platform and without a dedicated GPU. I also feel like this was a much better learning experience than if I had slammed my head into the D3D/OpenGL/et al. wall.
This is true, but DirectX 11, Metal and modern OpenGL (without cutting edge extensions) are still very accessible to novices, not to mention that you can transfer knowledge between the three of them, so there's little cost in learning a second/third API.
Vulkan and DX12 however are the work of the devil.
Vulkan is soulless, like UEFI: all functions have more-or-less the same interface. But I wouldn't say it's the work of the devil. Its greatest sin is boilerplate.
I understand that Vulkan adds a lot of complexity, but OpenGL and Direct3D are surprisingly finite. Once you learn the graphics pipeline, it's not too hard to begin drawing things: init window and device, fill vertex and index buffers, load textures, load vertex and pixel shaders, set each of these things as the active thing, draw. Of course you can also explore the extents of all these entities until the sun engulfs the Earth. Even though I've been at it for 30 years, I realized long ago I would never keep up with, learn, and implement every notable technique even if I did nothing else for the rest of my life. But drawing nice-looking, animated 3D things is very tractable.
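For what it's worth, that sequence maps almost one-to-one onto WebGL, where the browser handles the "init window and device" step. A minimal sketch, assuming a <canvas id="gl"> element on the page:

```javascript
const gl = document.getElementById('gl').getContext('webgl');

function compile(type, src) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  return shader;
}

// Load vertex and pixel (fragment) shaders.
const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, `
  attribute vec2 pos;
  void main() { gl_Position = vec4(pos, 0.0, 1.0); }`));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, `
  void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }`));
gl.linkProgram(program);

// Fill a vertex buffer with one triangle.
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([-0.5, -0.5,  0.5, -0.5,  0.0, 0.5]), gl.STATIC_DRAW);

// Set each of these things as the active thing, then draw.
gl.useProgram(program);
const loc = gl.getAttribLocation(program, 'pos');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 3);
```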
[0] https://news.ycombinator.com/item?id=19584921
[1] http://nostarch.com
[2] https://nostarch.com/computer-graphics-scratch
[3] http://gabrielgambetta.com/computer-graphics-from-scratch