In case you haven't looked at the article, this is looking specifically at the Edge Coloring problem and not the more commonly known Vertex Coloring problem. Vertex Coloring is NP-complete, unfortunately.
Hrm... right. It's been a while. And it looks like Vertex Coloring and Edge Coloring are both NP-complete (because of the O(n) procedure you're talking about and the ability to reduce both problems to 3-SAT). I've started looking closer at the actual paper to try to figure out what's going on here. Thanks for the reminder; I miss getting to regularly work on this stuff.
Edit: thanks sibling reply for pointing out that it's not a bidirectional transform.
For the edge-coloring problem, the optimal number of colors needed to properly color the edges of G is always either Delta(G) (the maximum degree of G) or Delta(G) + 1, but deciding which one is the true optimum is an NP-complete problem.
Nevertheless, you can always properly edge-color a graph with Delta(G) + 1 colors. Finding such a coloring could in principle be slow, though: the original proof that Delta(G) + 1 colors is always doable amounted to an O(e(G) * v(G)) algorithm, where e(G) and v(G) denote the number of edges and vertices of G, respectively. This is polynomial, but nowhere near linear. What the paper in question shows is how, given any graph G, to find an edge coloring using Delta(G) + 1 colors in O(e(G) * log(Delta(G))) time, which is linear time if the maximum degree is a constant.
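To make concrete where the hard part lives, here's a toy Python sketch (mine, not the paper's algorithm): the obvious greedy that gives each edge the smallest color not already used at either endpoint. It's simple and fast, but it only guarantees 2*Delta(G) - 1 colors, since each endpoint can block at most Delta(G) - 1 of them; getting all the way down to Vizing's Delta(G) + 1 is the part that needs the clever recoloring machinery.

    from collections import defaultdict

    def greedy_edge_coloring(edges):
        # Give each edge the smallest color unused at both endpoints.
        # This guarantees at most 2*Delta - 1 colors, not Vizing's Delta + 1.
        used = defaultdict(set)   # vertex -> colors already on its incident edges
        coloring = {}
        for u, v in edges:
            c = 0
            while c in used[u] or c in used[v]:
                c += 1
            coloring[(u, v)] = c
            used[u].add(c)
            used[v].add(c)
        return coloring

    # A triangle has Delta = 2 but genuinely needs 3 = Delta + 1 colors.
    print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0)]))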
Yes. The article ran through this point as follows:
"In 1964, a mathematician named Vadim Vizing proved a shocking result: No matter how large a graph is, it’s easy to figure out how many colors you’ll need to color it. Simply look for the maximum number of lines (or edges) connected to a single point (or vertex), and add 1."
I keep wondering why I ever read Quanta Magazine. It takes a pretty generous reading of "need" to make this a correct statement.
Not really. Coloring a graph is almost always talking about proper coloring, meaning that objects that are related receive different colors.
If you read the introduction, you'll also read that the goal is to "color each of your lines and require that for every point, no two lines connected to it have the same color."
PS: "How many colors a graph needs" is a very well-established term in computer science and graph theory.
I think the comment referred to the phrase "a graph needs X (colors or whatever)". For me, this can be read two ways: 1. "a graph always needs at least X colors" or 2. "a graph always needs at most X colors".
Personally, I would interpret this as option 1 (and so did the comment above, I assume). In that case, the statement is wrong. But I'd prefer to specify "at most" / "at least" anyway.
Or even better, use actual vocabulary: "For every graph there exists a coloring with X colors" or "any graph can be colored using X colors".
PS: I also agree with the sentiment about quanta magazine. It’s hard to get some actual information from their articles if you know the topic.
Parent's point is that sometimes (but not always) the store is perfectly fine selling you a car for $1 less than what the "price tag" of Delta(G)+1 dollars asks for, so "need" is a bit inaccurate.
> I'm sure we can all appreciate not climbing 30 flights of stairs, even if we are physically capable of it.
Totally. And people seem to forget that you can temporarily go from "no disabilities" to "have a disability" to "no disabilities" very quickly. Slip of a knife while cooking can take a hand out of commission for a few days. Stepping on your glasses can make you visually-impaired for a few days. Ear infection can seriously affect your hearing until it's healed.
And there's tech issues that can come up too! A couple of weeks ago I needed to get an embedded Linux device set up with SSH and could only find a spare mouse in the office, no random USB keyboards kicking around. Trying to use the Gnome on-screen keyboard was an exercise in frustration. Some symbols were missing that I needed to type into a shell, for example.
I agree with you wholeheartedly and just want to add one more aspect to this: it also allows you to handle the case where the subject is moving slowly relative to the camera. An easy example is taking long exposures of the moon from a tripod. If you just open the shutter for 30 seconds, the moon itself is going to move enough to cause motion blur; if instead you take a series of much faster photos and use image processing techniques to stack the subject (instead of just naively stacking all of the pixels 1:1), you can get much better results.
For bright stuff like the moon, it's my understanding the best way is to take really high-speed video, hundreds of frames per second, then pick out the frames which have the least amount of atmospheric distortion and stack those.
So not only can you compensate for unwanted motion of the camera rig, but also for external factors like the atmosphere.
For faint deep-sky objects, IIRC you really do want long exposures, to overcome sensor noise. At least in the comparisons I've seen using the same total integration time, a few long exposures had much more detail and color compared to lots of short exposures.
That said, lots of short exposures might be all you can do if you're limited by equipment or such, and is certainly way better than nothing.
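If anyone's curious what "pick the sharpest frames and stack them" looks like in practice, here's a rough numpy sketch (my own toy version, not any real astro tool; the gradient-variance sharpness score and the 10% keep fraction are arbitrary choices): score each frame, keep the best ones, register them against the sharpest frame with phase correlation, and average.

    import numpy as np

    def sharpness(frame):
        # Variance of the image gradients: a crude "how sharp is this frame" score.
        gx = np.diff(frame, axis=1)
        gy = np.diff(frame, axis=0)
        return gx.var() + gy.var()

    def align(ref, frame):
        # Estimate the integer-pixel shift between frame and ref via phase
        # correlation, then roll the frame back into registration.
        f1 = np.fft.fft2(ref)
        f2 = np.fft.fft2(frame)
        cross = f1 * np.conj(f2)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = ref.shape
        dy -= h if dy > h // 2 else 0   # wrap shifts into a signed range
        dx -= w if dx > w // 2 else 0
        return np.roll(frame, (dy, dx), axis=(0, 1))

    def lucky_stack(frames, keep=0.1):
        # Keep the sharpest fraction of frames, register them, and average.
        ranked = sorted(frames, key=sharpness, reverse=True)
        best = ranked[: max(1, int(len(ranked) * keep))]
        return np.mean([align(best[0], f) for f in best], axis=0)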
Yeah, Uber has pretty much got it dialled in. I remember the early days when “taking an Uber” was a weird mysterious thing. Now it is, for me, the most normal thing in the world. I’ve had one mediocre experience: I had a 60 mile one-way trip from an airport and multiple drivers accepted and then cancelled. I went and talked to one of the taxi drivers at the taxi stand and they were asking double what Uber was… so I waited another 15 minutes or so and found a driver. Otherwise it’s been a completely satisfying experience.
Airbnb has definitely gone the opposite way. My first Airbnb experience involved getting woken up by the daughter of the family that lived downstairs asking me if I wanted breakfast for 5€. I was getting whole apartments for 30€/night. Now it’s just as expensive as regular hotels, half of them expect you to wash all of the linen before you leave, and it’s totally unpredictable what you’re going to get. I just book with Hilton instead. There’s free bottled water and snacks waiting for me when I get there, it’s a pretty consistent experience, and free good breakfast at most of them.
Lots of haters today. Love it, great opportunity to build something that solves a problem for you for cheap and gives you an opportunity to play with some tech.
The backend programming language usually isn't a significant bottleneck; running dozens of database queries in sequence is the usual bottleneck, often compounded by inefficient queries, inappropriate indexing, and the like.
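The classic shape of that bottleneck is the N+1 query pattern: one query to get the list, then one more query per item. A toy sqlite3 illustration (the tables and numbers are made up; the point is the round trips, not the schema):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
        INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
        INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 12.0), (3, 2, 3.25);
    """)

    # The slow pattern: one extra query per user, i.e. N+1 round trips.
    for user_id, name in conn.execute("SELECT id, name FROM users"):
        total = conn.execute(
            "SELECT SUM(total) FROM orders WHERE user_id = ?", (user_id,)
        ).fetchone()[0]
        print(name, total)

    # The same answer in a single round trip.
    rows = conn.execute("""
        SELECT u.name, SUM(o.total)
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """).fetchall()
    print(rows)

With a real network between the app and the database, the loop pays a round-trip latency per row, and no choice of backend language makes that cost go away.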
Yep. I’m a DBRE and can confirm: it’s almost always the DB, with the explicit caveat that it’s also rarely the fault of the DB itself, but rather of poor schema and query design.
Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is because they have accumulated billions of rows that are all bloated with JSON and low-cardinality strings, and short of, at a minimum, table partitioning (with the concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.
The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
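For what it's worth, the row-lock pattern usually looks something like this; a sketch assuming mysql-connector-python and a made-up accounts table, with the slow external call standing in for the "doing other stuff" part:

    import time
    import mysql.connector   # assumes a reachable MySQL server and schema

    def call_slow_external_api():
        time.sleep(5)         # stand-in for the unrelated slow work

    conn = mysql.connector.connect(user="app", password="secret", database="shop")
    cur = conn.cursor()

    # Anti-pattern: SELECT ... FOR UPDATE, then sit on the lock. Without a
    # UNIQUE/indexed predicate, InnoDB can also gap-lock neighbouring rows.
    cur.execute("SELECT balance FROM accounts WHERE email = %s FOR UPDATE",
                ("a@example.com",))
    (balance,) = cur.fetchone()
    call_slow_external_api()   # every other writer on this row now waits
    cur.execute("UPDATE accounts SET balance = %s WHERE email = %s",
                (balance - 10, "a@example.com"))
    conn.commit()              # locks are only released here

The fix is normally just ordering: do the slow work before (or outside) the transaction, and keep the SELECT ... FOR UPDATE / UPDATE / COMMIT window as tight as possible.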
It seems to me most developers don't want to learn much about the database and would prefer to hide it behind the abstractions used by their language of choice. I can relate to a degree; I was particularly put off by SQL's syntax (and still dislike it), but eventually came to see the value of leaning into the database's capabilities.
Those are decent options but you can still run into really ugly issues if you try to go back too far in time. An example I ran into in the last year or two was a Python library that linked against the system OpenSSL. A chain of dependencies ultimately required a super old version of this library and it failed to compile against the current system OpenSSL. Had to use virtualenv inside a Docker container that was based on either Ubuntu 18.04 or 20.04 to get it all to work.
To a certain extent. The big difference being how difficult it would be to work around or update. If my C code is linking against a library whose API has changed, I can update my code to use the new API so that it builds. In the case I ran into… it wasn’t one of my dependencies that failed to build, it was a dependency’s dependency’s dependency with no clear road out of the mess.
I could have done a whole series of pip install attempts to try to figure out the minimal version bump on my direct dependency that would use a version of the sub-dep that would compile and then adapt my Python code to use that version instead of the version it was pinned to originally.
Capsela was really cool! In the end I feel like Lego with the Technic and Mindstorms stuff caught up, but for a while Capsela was some of the coolest stuff you could get for making mechanical/electrical systems.