Hacker News | chriswarbo's comments

EDE is a standalone WM/DE with a Windows 9x look

XPde was a similar project, but with a Windows XP look. Their site seems dead, though I'm sure the software could be found out there somewhere https://web.archive.org/web/20070825005617/http://www.xpde.c...


How does it compare to using icewm or fvwm95?


> I have some idea of what the way forward is going to look like but I don't want to accelerate the development of such a dangerous technology so I haven't told anyone about it.

Ever since "AI" was named at Dartmouth, there have been very smart people thinking that their idea will be the thing which makes it work this time. Usually, those ideas work really well in-the-small (ELIZA, SHRDLU, Automated Mathematician, etc.), but don't scale to useful problem sizes.

So, unless you've built a full-scale implementation of your ideas, I wouldn't put too much faith in them if I were you.


Far more common are ideas that don't work on any scale at all.

If you have something that gives a sticky +5% at 250M scale, you might have an actual winner. Almost all new ML ideas fall well short of that.


If someone else comes along and makes the exact claim I just made, I won't believe it either.


Did you try any of your shit at any scale at all?


999999 times out of a million you'd be right.

But, I shouldn't have said anything.


I prefer bevels for that. In particular, I grew up with the Amiga's "3-D look" where embossed = interactive and recessed = informative https://archive.org/details/amiga-user-interface-style-guide...

On a similar note, fuck the "flat" designs which make buttons indistinguishable.

I've even seen UIs which do use bevels on buttons; but only when hovered-over! I don't want to scan my pointer across the screen hoping to find something interactive, like I'm struggling on Monkey Island!


Bevels are indeed very helpful for that. Mystery-meat navigation has been a UI design problem for a long time.


Border-radius makes it easier to implement rounded rectangles, etc., compared to the tables of offset image sprites that were needed back in the day.

That says nothing about whether rounded rectangles are "good" or "bad", though.


That's circular reasoning (which is ironic, given the subject at hand).


The point is that it's rare to see them in nature, so our minds tend to regard them as unnatural.


> Just doing this as an afterthought by playing lottery and trying to come up with smart properties after the fact is not going to get you the best outcome.

This sounds backwards to me. How could you write any tests, or indeed implement any functions, if you don't know any relationships between the arguments/return-value, or the state before/after, or how it relates to other functions, etc.?

For the addition example, let's say the implementation includes a line like `if (x == 0) return y`; would you seriously suggest that somebody writing that line doesn't know a property like `0 + a == a`? Would that only be "an afterthought" when "trying to come up with smart properties after the fact"? On the contrary, I would say that property came first, and steered how the code was written.

Incidentally, that property would also catch your "always returns 0" counterexample.
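As a quick sketch of that (hypothetical names, plain Python asserts standing in for a property-checker):

```python
def broken_add(x, y):
    # The "always returns 0" counterexample mentioned above.
    return 0

def left_identity_holds(add, a):
    # The property 0 + a == a, phrased against any candidate `add`.
    return add(0, a) == a

# Left identity immediately rejects broken_add for any non-zero input:
assert left_identity_holds(broken_add, 0)
assert not left_identity_holds(broken_add, 7)
```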

I also don't buy your distinction that "real life code" makes things much harder. For example, here's another simple property:

    delete(key); assert lookup(key) == []
This is pretty similar to the `a + (-a) == 0` example, but it applies to basically any database; from in-memory assoc-lists and HashMaps, all the way to high-performance, massively-engineered CloudScale™ distributed systems. Again, I struggle to imagine anybody implementing a `delete` function who doesn't know this property; indeed, I would say that's what deletion means. It's backwards to characterise such things as "com[ing] up with smart properties after the fact".
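Here's a minimal Python sketch of that deletion property, with a hypothetical dict-backed `Store` standing in for the database, and a hand-rolled random-input loop standing in for a property-checker like Hypothesis:

```python
import random

class Store:
    """A hypothetical in-memory multi-map, standing in for any database."""
    def __init__(self):
        self.data = {}
    def insert(self, key, value):
        self.data.setdefault(key, []).append(value)
    def delete(self, key):
        self.data.pop(key, None)
    def lookup(self, key):
        return self.data.get(key, [])

def delete_then_lookup_is_empty(entries, key):
    # The property: after delete(key), lookup(key) must return [].
    store = Store()
    for k, v in entries:
        store.insert(k, v)
    store.delete(key)
    return store.lookup(key) == []

# Poor man's property check: random entries, deleting a random key.
rng = random.Random(0)
for _ in range(100):
    keys = ["a", "b", "c"]
    entries = [(rng.choice(keys), rng.randint(0, 9))
               for _ in range(rng.randint(0, 5))]
    assert delete_then_lookup_is_empty(entries, rng.choice(keys))
```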


I agree with you. However the grand-parent comment has a point that it's not easy to extract testable properties from code that's already written.

It's much easier to proceed explicitly like you suggest: have some properties in mind, co-develop property tests and code.

Often when I try to solve a problem, I start with a few simple properties and let them guide my way to the solution. Like your deletion example. Or, I already know that the order of inputs shouldn't matter, or that adding more constraints to an optimiser shouldn't increase the maximum, etc.

And I can write down some of these properties before I have any clue about how to solve the problem in question.
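The "order of inputs shouldn't matter" example might be sketched like this (with a hypothetical `total` function as the code under test, and plain `random` in place of a real property-checker):

```python
import random

def total(xs):
    # Hypothetical aggregation whose result shouldn't depend on input order.
    t = 0
    for x in xs:
        t += x
    return t

def order_is_irrelevant(xs):
    # The property: shuffling the inputs must not change the result.
    shuffled = xs[:]
    random.shuffle(shuffled)
    return total(xs) == total(shuffled)

for _ in range(100):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
    assert order_is_irrelevant(xs)
```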

---

However, writing properties even after the fact is a learnable skill. You just need to practice, similarly to any other skill.


> However the grand-parent comment has a point that it's not easy to extract testable properties from code that's already written.

True, property-based testing does not solve the problem of deriving the intended behavior of code where behavior is not documented (either by requirements documents, or code comments, or tests that aren't just examples but clearly indicate the general behavior they are confirming, or...)

OTOH, PBT can be used to rapidly test hypotheses about the behavior (though intent is another question) of legacy code, which you are going to need to develop and validate to turn it into code that is maintainable (or even to replace it with something new, if you need to generally be compatible.) Determining whether deviations from a hypothesized behavior are intentional or bugs is still an exercise for the user, though whether the deviations are highly general or narrow to specific cases can help to illuminate that decision, and a library like Hypothesis will help determine that.
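For example, one such hypothesis about legacy behaviour is idempotence; a minimal sketch (where `legacy_normalize` is a made-up stand-in for undocumented legacy code):

```python
import random

def legacy_normalize(s):
    # Made-up stand-in for some undocumented legacy function.
    return s.strip().lower()

def is_idempotent(f, x):
    # Hypothesis about the legacy behaviour: applying f twice == applying once.
    return f(f(x)) == f(x)

# Let random inputs (including awkward whitespace) try to refute it.
alphabet = "aA bB\t"
for _ in range(200):
    s = "".join(random.choice(alphabet) for _ in range(random.randint(0, 8)))
    assert is_idempotent(legacy_normalize, s)
```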


QuickCheck won't preserve invariants, since its shrinkers are separate from its generators. For example:

    data Rat = Rat Int Nat deriving (Eq, Show)

    genRat = do
      (num, den) <- arbitrary
      pure (Rat num (1 + den))
`genRat` is a QuickCheck generator. It cannot do shrinking, because that's a completely separate thing in QuickCheck.

We can write a shrinker for `Rat`, but it will have nothing to do with our generator, e.g.

    shrinkRat (Rat num den) = do
      (num', den') <- shrink (num, den)
      pure (Rat num' den')
Sure, we can stick these in an `Arbitrary` instance, but they're still independent values. The generation process is essentially state-passing with a random number generator; it has nothing to do with the shrinking process, which is a form of search without backtracking.

    instance Arbitrary Rat where
      arbitrary = genRat
      shrink = shrinkRat
In particular, `genRat` satisfies the invariant that values will have non-zero denominator; whereas `shrinkRat` does not satisfy that invariant (since it shrinks the denominator as an ordinary `Nat`, which could give 0). In fact, we can't even think about QuickCheck's generators and shrinkers as different interpretations of the same syntax. For example, here's a shrinker that follows the syntax of `genRat` more closely:

    shrinkRat2 (Rat n d) = do
      (num, den) <- shrink (n, d)
      pure (Rat num (1 + den))
This does have the invariant that its outputs have non-zero denominators; however, it will get stuck in an infinite loop! That's because the incoming `d` will be non-zero, so when `shrink` tries to shrink `(n, d)`, one of the outputs it tries will be `(n, 0)`; that will lead to `Rat n 1`, which will also shrink to `Rat n 1`, and so on.

In contrast, in Hypothesis, Hedgehog, falsify, etc. a "generator" is just a parser from numbers to values; and shrinking is applied to those numbers, not to the output of a generator. Not only does this not require separate shrinkers, but it also guarantees that the generator's invariants hold for all of the shrunken values; since those shrunken values have also been outputted by the generator (when it was given smaller inputs).
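The "parser from numbers to values" view can be sketched in Python: a toy generator reads from a list of integers, and "shrinking" just proposes smaller input lists and re-runs the generator, so the non-zero-denominator invariant holds by construction (all names here are illustrative, not any library's actual API):

```python
def gen_rat(nums):
    # "Parse" two numbers into a (num, den) pair.
    num = nums[0]
    den = 1 + abs(nums[1])  # invariant: den >= 1, by construction
    return (num, den)

def shrink_nums(nums):
    # Shrink the *inputs*, not the generated value: zero out or halve each.
    for i, n in enumerate(nums):
        if n != 0:
            yield nums[:i] + [0] + nums[i + 1:]
            yield nums[:i] + [n // 2] + nums[i + 1:]

# Every shrunken candidate is itself re-run through gen_rat, so the
# non-zero-denominator invariant holds for all shrunken values too.
for candidate in shrink_nums([7, 3]):
    num, den = gen_rat(candidate)
    assert den >= 1
```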


Yeah, reimplementing the solution just to have something to check against is a bad idea.

I find that most tutorials talk about "properties of function `foo`", whereas I prefer to think about "how is function `foo` related to other functions". Those relationships can be expressed as code, by plugging outputs of one function into arguments of another, or by sequencing calls in a particular order, etc. and ultimately making assertions. However, there will usually be gaps; filling in those gaps is what a property's inputs are for.

Another good source of properties is trying to think of ways to change an expression/block which are irrelevant. For example, when we perform a deletion, any edits made beforehand should be irrelevant; boom, that's a property. If something would filter out negative values, then it's a property that sprinkling negative values all over the place has no effect. And so on.
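The "sprinkling negative values has no effect" idea, as a minimal Python sketch (`keep_positive` is a hypothetical function under test):

```python
import random

def keep_positive(xs):
    # Hypothetical function under test: drops non-positive values.
    return [x for x in xs if x > 0]

def sprinkling_negatives_has_no_effect(xs, negs):
    # Metamorphic property: inserting negative values anywhere must not
    # change the result.
    noisy = xs[:]
    for n in negs:
        noisy.insert(random.randrange(len(noisy) + 1), n)
    return keep_positive(noisy) == keep_positive(xs)

for _ in range(100):
    xs = [random.randint(1, 9) for _ in range(random.randint(0, 6))]
    negs = [random.randint(-9, -1) for _ in range(random.randint(0, 4))]
    assert sprinkling_negatives_has_no_effect(xs, negs)
```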


Here's a property check I wrote yesterday, which found a couple of bugs in a large, decade-old codebase.

I'd just changed a data structure with three components, and made sure the test suite was still passing. I happened to notice a function in the same file, for parsing strings into that data structure, which had a docstring saying that it ignores whitespace at the start/end of the string, and in-between the components. It had tests for the happy-path, like "foo123x" -> ("foo", 123, 'x'), as well as checking that optional components could be left out, and it even checked some failure cases. Yet none of the tests used any whitespace.

I thought it would be good to test that, given that somebody had gone to the effort of documenting it. Yet I didn't want to write a bunch of combinations like " foo123x", "foo 123x", "foo123 x", " foo 123x", "foo 123 x", and so on. Instead, I wrote a property which adds some amount of whitespace (possibly none) to each of those places, and asserts that it gets the same result as with no whitespace (regardless of whether the parse succeeds or not). I wasn't using Python, but it was something like this:

    def whitespace_is_ignored(b1: bool, b2: bool, b3: bool, s1: int, s2: int, s3: int, s4: int):
      v1 = "foo" if b1 else ""
      v2 = "123" if b2 else ""
      v3 = "x" if b3 else ""

      spaces = lambda n: " " * n
      spaced = "".join([spaces(s1), v1, spaces(s2), v2, spaces(s3), v3, spaces(s4)])
      assert parser(v1 + v2 + v3) == parser(spaced)
The property-checker immediately found that "foo123x " (with two spaces at the end) will fail to parse. When I fixed that, it found that spaces after the first component will end up in the result, like "foo 123x" -> ("foo ", 123, 'x').

Of course, we could make this property more general (e.g. by taking the components as inputs, instead of hard-coding those particular values); but this was really quick to write, and managed to find multiple issues!

If I had written a bunch of explicit examples instead, then it's pretty likely I would have found the "foo 123x" issue; but I don't think I would have bothered writing combinations with multiple consecutive spaces, and hence would not have found the "foo123x " issue.
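For illustration, here's a runnable toy version of that property; the regex-based `parser` is a made-up stand-in for the real code, and small space counts are checked exhaustively rather than sampled:

```python
import itertools
import re

def parser(s):
    # Made-up stand-in for the real parser: optional word, number and
    # single letter, with whitespace ignored around each component.
    m = re.fullmatch(r"\s*([a-z]*?)\s*(\d*)\s*([a-z]?)\s*", s)
    if m is None:
        return None
    word, num, letter = m.groups()
    return (word, int(num) if num else None, letter or None)

def whitespace_is_ignored(b1, b2, b3, s1, s2, s3, s4):
    v1 = "foo" if b1 else ""
    v2 = "123" if b2 else ""
    v3 = "x" if b3 else ""
    spaces = lambda n: " " * n
    spaced = "".join([spaces(s1), v1, spaces(s2), v2, spaces(s3), v3, spaces(s4)])
    assert parser(v1 + v2 + v3) == parser(spaced)

# Exhaustively check all component combinations with 0..2 spaces per slot.
for b1, b2, b3 in itertools.product([False, True], repeat=3):
    for s1, s2, s3, s4 in itertools.product(range(3), repeat=4):
        whitespace_is_ignored(b1, b2, b3, s1, s2, s3, s4)
```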


When I'm "up to my armpits in trying to understand the on-the-ground problem", I find PBT great for quickly find mistakes in the assumptions/intuitions I'm making about surrounding code and helper functions.

Whenever I find myself thinking "WTF? Surely ABC does XYZ?", and the code for ABC isn't immediately obvious, then I'll bang-out an `ABC_does_XYZ` property and see if I'm wrong. This can be much faster than trying to think up "good" examples to check, especially when I'm not familiar with the domain model, and the relevant values would be giant nested things. I'll let the computer have a go first.

