Hacker News | vaporstun's comments

There is no entropy generation here; it is based on the Mersenne Twister, so it is pseudo-random only at this point. If no seed is provided, the current timestamp is used.

I discuss this a bit more in the docs: http://chancejs.com/#todo

This is good for repeatability, bad if "true random" is needed.

Problem is, this is very tricky. I'd like to add it at some point, but finding a way to gather entropy that works both in the browser and in Node would be quite difficult. For example, I could use mouse/keyboard input in the browser, but not in a server app running on Node.

I could use an external service (like random.org) as the underlying random (I built it in such a way that this was easily swappable/replaceable), but then I'm adding network latency. I show an example like this in the docs: http://chancejs.com/#browser

So for now I'm punting on it, with a plan to revisit later.
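To illustrate the repeatability point, here's a minimal standalone seeded PRNG (mulberry32, used purely as a stand-in for illustration -- it is not Chance's actual Mersenne Twister):

```javascript
// Tiny seeded PRNG (mulberry32) -- a stand-in for illustration only,
// NOT the Mersenne Twister that chancejs actually uses.
function mulberry32(seed) {
  let a = seed | 0;
  return function () {
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

// Same seed -> same sequence: good for repeatable tests.
const g1 = mulberry32(42);
const g2 = mulberry32(42);
console.log(g1() === g2()); // true

// No seed handy? Fall back to the current timestamp, as described above.
const unseeded = mulberry32(Date.now());
console.log(unseeded() >= 0 && unseeded() < 1); // true
```

Seeding buys repeatability; it's exactly why true entropy has to come from somewhere else (mouse/keyboard, an external service, etc.) if you need it.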


The other downside of Features in Drupal: if we export something as a Feature (say, a View), our site builders then tweak that View for our clients, and we later update some core element of the View and re-export the Feature, the re-export will overwrite any custom changes that were made to that View, trashing them. This is a real problem with Features, and I'm surprised there has not yet been a good solution.


Same problem as other frameworks: you need to know the rules. If you use Rails and the designer changes the CSS without committing the change to git, the next code push will erase their change.


Drupal developer with 4+ years of working with it under my belt.

The nitty gritty is that we use it in an environment where we basically have 2 teams: one that builds custom code/modules that can be installed and enabled in Drupal to add additional functionality, and another that actually deploys sites.

The team that deploys sites is savvy, but its members are not hardcore developers. The conclusion I've come to over all these years is that Drupal's pluggability still makes it the most powerful tool for what most people use it for -- deploying websites.

I have often surveyed the landscape and thought, "why not just build this in another tool like Rails or Django" and I always come back to it in much the same way the author of this article does -- while those are much more beautiful development frameworks, they just don't have the mojo that Drupal does.

This is likely best illustrated by example. This may or may not be based on a true story ;) Let's say I develop a custom application to interact with my company's backend database products which, let's say, are for events management. Now I pass that custom application off to the site builders who will actually customize that application for each of our hundreds of clients.

(1) The client wants a Google map of all their events

(2) The client wants the Events displayed on a calendar

(3) The client wants Events to filter by Event type

(4) The client wants a tag cloud or something else totally unrelated to the Events system

Built in Rails (Total additional developer time to add these features: 400+ hours)

(1) Damn, back to the drawing board, we need 100 more hours of dev time to develop a custom component to convert the locations into actual location data (lat/long) and display it on a map. We need to interact with the Google Maps API and write the code that will communicate with them via an API key and handle everything.

(2) Damn, back to the drawing board, we need 250 more hours of dev time to build a calendar and ensure we get the layout correct, handle a bunch of special cases (what if there are too many events to display on a single day, some months have 4 rows of weeks, some have 5, etc.)

(3) Developer has to add a form element to take that event type and modify a query to enact that filtering and load the results on a new page. Not a ton of dev time, but dev time nonetheless.

(4) Cross fingers that there is a Ruby Gem for this. Otherwise damn, custom development.

Built in Drupal (Total additional developer time to add these features: 0 hours)

(1) We never heard back from the site builders, because they installed the Drupal modules Views, GMap, and Location and were able to create the map themselves from our custom Event data by clicking the right buttons

(2) Again, the developers never heard back from the site builders, because they installed the Drupal modules Views and Calendar and were able to show those events on a calendar.

(3) No developer time because the site builder went into the Views settings and added an Exposed Filter

(4) Site builders install the tag cloud module or the random feature X module; custom developer time is rarely needed.

Now let's scale it up: instead of 1 client you have 500, and each wants slightly different things. One wants to filter by Event type, one wants to sort reverse-chronologically by date. One wants a calendar with a month view, one wants a week view. One has a ton of events and wants a day view. With Drupal, these are all just settings on existing, already-built modules. With Rails/Django/Node.js/etc. these all require more dev time. Now of course things could be designed intelligently and parameterized to limit some of the repetition, but there is still dev time required to implement all these different permutations, or up-front complicated system design to create an architecture that can be configured as richly as a Drupal module developed over years by a community.

I work in Drupal professionally, but play in Rails, Django, Node.js, even Meteor every chance I get because they're so fun and beautiful but at the end of the day I still think Drupal is the right tool for our job in spite of its long list of flaws including its base language, the always horrifying PHP. But it freaking works.

tl;dr This article rang more true than I could have imagined it would. Thanks mcrittenden!


At the risk of giving offense, I have to say some of your comments seem to be written in ignorance of Rails. This whole bit is surprising to me:

"Built in Rails (Total additional developer time to add these features: 400 hours)

(1) Damn, back to the drawing board, we need 100 more hours of dev time to develop a custom component ...

(2) Damn, back to the drawing board, we need 250 more hours of dev time to build a calendar and ensure we get the layout correct...

(3) Developer has to add a form element to take that event type...

Built in Drupal (Total additional developer time to add these features: 0 hours)"

The thing about a programming language that allows for meta-programming is how easy it makes it to stitch together the code you need from components. Ruby is very good in this regard, and also any Lisp would be good in this regard, and my new favorite is Clojure, which is exceedingly excellent in this regard.

I had worked with Rails in 2006, then taken a break from it, then come back to it. I just worked on a very big Rails project in 2011. One thing that surprised me was how little code I had to write. All of the functionality that we needed was in a gem, and we only had to write a few lines of code to customize the operation of each gem. Need to integrate events with a map? There is a gem for that. Need to add slugs to all articles and have them become the id that appears in the URL? There is a gem for that. I would write maybe 10 or 20 lines of code for each gem, telling it how to interact with our application.

To be productive at Rails, you have to know what gems are out there. You need to keep up with the gems, because they really are central to the productivity boost you can get from Rails. There is a gem for almost any bit of functionality you need, you only have to know which gems are good. If you find yourself writing large amounts of custom code in Rails, then either you are truly tackling a novel problem that no one has ever dealt with before, or you are simply unaware of the gem that you should be using.

Your comments comparing Rails and Drupal surprise me. I feel like you are writing without realizing how Rails development is done.


> Your comments comparing Rails and Drupal surprise me. I feel like you are writing without realizing how Rails development is done.

First, no offense taken.

Next, I do have a fair amount of experience developing in Rails (it's my go-to hobby framework, and I MUCH prefer developing in it to Drupal, as I thought I imparted in my original comment), and yes, there are gems that get you most of the way there with some of the tasks I described. But our site builders (read: non-developers) are unfamiliar with the command line and generally could not install gems, let alone extend them, without developer intervention. That is what I'm trying to describe.

With Drupal, people who interact only with a web browser can install rich components from a library of modules developed by others with integration with the Content Types (basically the equivalent of Rails Models in Drupal) with zero development.

Looking back at my original comment, my time estimates for development were arguably exaggerated to some extent and could be shortened with the use of gems, but the fact remains that there is, at present, no way for users, through only a web browser, to enable that kind of rich functionality in Rails. That was the point I was trying to get across.

You don't have to write much code but you still have to write code. This makes it a non-starter for a particular class of user.


"To be productive at Rails, you have to know what gems are out there. You need to keep up with the gems, because they really are central to the productivity boost you can get from Rails. There is a gem for almost any bit of functionality you need, you only have to know which gems are good. If you find yourself writing large amounts of custom code in Rails, then either you are truly tackling a novel problem that no one has ever dealt with before, or you are simply unaware of the gem that you should be using."

This made me smile:

:%s/Rails/Drupal/g

:%s/gem/module/g

"To be productive at Drupal, you have to know what modules are out there. You need to keep up with the modules, because they really are central to the productivity boost you can get from Drupal. There is a module for almost any bit of functionality you need, you only have to know which modules are good. If you find yourself writing large amounts of custom code in Drupal, then either you are truly tackling a novel problem that no one has ever dealt with before, or you are simply unaware of the module that you should be using."


  > If you find yourself writing large amounts of custom 
  > code in Rails, then either you are truly tackling a 
  > novel problem that no one has ever dealt with before,
  > or you are simply unaware of the gem that you should
  > be using.
Sounds like Perl - or any other mature language, though I have never seen one with that many libraries for everything I can think of.

The language that has the libraries I need is always the one I end up using.


The problem with this approach comes later, when the user wants things which don't fit in with the Drupal system: modifications can be a real pain, and the site can become more and more brittle over time. Let's take another real-world example:

1. The client wants a google map of all their content

2. The client wants the content displayed elsewhere too

3. The client wants to filter the content by type

4. The client wants a tag cloud

5. The client gets what they want quickly by using drupal and is happy.

...

99. The client wants to change the google maps module to act slightly differently, but this turns out to be very difficult, and their developer takes days to make very simple changes.

100. The client is complaining that their website is slow because it loads over 200 drupal modules, but they need them all.

101. Some code is in the db, and some on disk, which makes it very difficult to trace execution.

102. The client can no longer update their drupal as half of their modules are no longer maintained, and they're at the mercy of the module developers to do so; this causes problems using new modules which expect a newer drupal.

103. New developers can't get up to speed because the code-base is so convoluted and the database contains over 200 tables with a byzantine structure caused by drupal's over-abstraction of content types - the client runs through a series of several drupal developers and bleeds money with no significant changes to their site and lots of bugs left unfixed.

...

200. The client gives up and starts again with a site rewrite using another framework, and curses the day they were introduced to drupal.

The up-front development cost may sometimes be lower with drupal (though for many sites I would dispute that; most frameworks make it a matter of minutes, or at most hours, of work to add a tag cloud or a map, with the advantage that you don't have to use a module to do it), but unless you are very careful and stick with one developer, the huge technical debt you're taking on by using it doesn't justify that. The fact that the developers ever thought it was a good idea to store code and views in the db is a huge red flag, and a huge temptation for some developers and/or clients to produce a huge, complicated, unmaintainable mess. Even if you try to control this, it doesn't lend itself easily to separation of concerns.

I'm sure it's quite possible to produce clean websites with drupal and keep them under control long term, but it's not the pattern of the sites I've seen, and I don't think it really gives you so much more in the longer term over developing in simpler frameworks which don't try to prescribe as closely how the content is structured.


The same can be applied to Rails (let's compare to it) too:

1) You're at the mercy of the gem developers, and Rails changes versions faster than Drupal, breaking backwards compatibility.

2) The number of gems also has an influence on Rails app performance, at least on an app's startup time.


I started writing a response to this, but then it became its own blog post:

http://news.ycombinator.com/item?id=4607052


I disagree. I think companies ought to develop for every platform, companies ought to create a market, but developers ought to specialize generally.

I don't think most developers are capable of being excellent at iOS development, web development, Android development, desktop development, etc.

Certainly it's not advantageous to specialize only in a particular language (particularly a dying or dead one), but that's quite different from a specializing in a particular platform.

If they have worked with developing in these many areas, they'd likely have some experience in each, but not be experts in any -- there are just too many.


If, as a developer or startup founder, you see yourself as someone capable of creating interesting applications, specializing in one particular client side technology isn't going to get you very far. Apart from games, I don't see many interesting applications built on iOS alone.

But the title of this thread is about "career advice", so if taken to mean "what's going to make me employable in the coming years?", Gruber may have a point. There will be a large number of jobs at ad agencies making cookie cutter "branded experiences" on top of iOS (unless it's swept away by HTML5). It may be a good career choice for people who used to specialize in Flash.

Personally, I don't find it particularly appealing to say the least.


As far as I'm concerned, UNIX is the only platform worth building on for the long term. Anything else is just a temporary ripple in the ocean of technology on which everyone cashes in and leaves.


A Google search is hardly the way to resolve that. A search for OS X matches anything containing OS or X, whereas OSX only matches things containing OSX, so of course the former has more hits.

I mean you're right that it's technically "OS X", but your methodology for proof is flawed.


a search for:

os x

does what you say, but a search for

"os x"

does what the GP says, and matches with the numbers cited, too.


While I think this may be true at first, if they gain any critical mass, the "Bank" in BankSimple would become redundant and wasteful.

I think it is good and forward-looking to rebrand it Simple before launch so that it becomes synonymous with banking. Imagine if Google were instead called GoogleSearch. Google became synonymous with Search, and therefore the Search portion of their name is superfluous.

I for one applaud this move and think the branding is nice, clean, and, of course, simple ;)


Like how the "bank" in "Bank of America" is redundant and wasteful?

To clarify, I don't really care either way- I'm sure they had other reasons for dropping the "Bank" from their name, especially since they're not really a bank.


Wells Fargo and Suntrust are well-known banks, too.


Official name "SunTrust Bank", owned by "SunTrust Banks, Inc."

Official name "Wells Fargo Bank, N.A." owned by "Wells Fargo & Company".


That's precisely the point vaporstun was making — we commonly omit the "Bank" from those names because it's seen as redundant.


But I was replying to tdoggette, not vaporstun, and tdoggette was making (I think) the opposite point.

So my response was to point out that actually the banks we think of as not having "bank" in the name do, and it hasn't stopped us from referring to them without it. So if they stuck with Bank Simple, it would help people initially understand what they do, then once (if) they become a household name, people could refer to them as "Simple".

Of course, this is all ignoring the whole legislation stuff.


Many banks in the UK are referred to, and understood, in everyday language without the "bank": Barclays, Halifax, Natwest, Santander, Lloyds to name a few.


I think an important difference is that those are relatively unique words, unlike Simple, which is common and ambiguous.

Telling someone that you have your money in a simple account doesn't convey as much meaning as telling someone you have your money in a Barclay's account.


That's a very good point well put. "Simple account" might even be a description of the level of account you have, causing further confusion.


Colloquially, they're "B of A".


That's true in the Bay Area at least, but not necessarily everywhere. Back when I was their customer I recall visiting NYC, and asking several random strangers if they knew where I could find a "B of A", and none of them knew what I was talking about, until I clarified, "Bank of America".


I'm a New Yorker and I know what you mean by "B of A," as do many of the people I know. I think we're both dealing with small anecdotal data sets here, though.


That could also depend on when he visited. NYC is still ruled by Chase - there's one almost every other block, whereas I struggle to find BofA ATMs.


And I call it "bowfah" myself.


"America" sounds so much better!


No more redundant than Bank of America or Citibank...


I disagree. Even Citibank doesn't use the "bank" part on their main website, instead going by just Citi: http://citi.com

There are also many other large banks that omit "Bank" from their name such as Wells Fargo.

Keep in mind this is not a traditional bank, so following the rules of traditional banks would be inappropriate. They are better off following the rules of progressive startups which have been using simplicity in their names quite successfully as of late (e.g. Square).

As for Bank of America, you couldn't remove the Bank from Bank of America or it'd just be America, which doesn't make a whole lot of sense. I don't think Bank of America, aside from controlling most of the banks in this country, is an objectively great brand; rather, I think it always tried to piggyback on and sound like a federal entity, which it's not, and I've always found its name disingenuous.


Actually, http://citi.com redirects to citibank.com for me.


They use both; https://creditcards.citi.com/ is the 'official' domain name for the Credit Card side of things.


Thanks for getting it :)


For me at the moment, googling 'simple' produces nothing which looks like this company. It is going to be a hard term to reach the top of.

Of course they are top for 'banksimple' or 'simple bank'.


> _foo_bar=Something&baz=Else

> could parse as either of:

> {foo:{bar:"Something",baz:"Else"}}

> {foo:{bar:"Something"}, baz:"Else"}

I don't think this is right, or at least, it shouldn't be right. I think _foo_bar=Something&baz=Else should only parse as {foo:{bar:"Something"}, baz:"Else"}.

{foo:{bar:"Something",baz:"Else"}} should be _foo_bar=Something&_foo_baz=Else

It should be possible for the writer of the article to fix this without major changes.
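A sketch of the parsing rule being described. This assumes a hypothetical convention where a leading `_` marks a nested key and `_` separates path segments; it is illustration only, not the article author's actual implementation:

```javascript
// "_foo_bar=Something&baz=Else" -> { foo: { bar: "Something" }, baz: "Else" }
// Keys starting with "_" are treated as nested paths; plain keys stay top-level.
function parseNested(qs) {
  const out = {};
  for (const pair of qs.split("&")) {
    const [key, value] = pair.split("=");
    const path = key.startsWith("_") ? key.slice(1).split("_") : [key];
    let node = out;
    for (const seg of path.slice(0, -1)) {
      node = node[seg] = node[seg] || {}; // descend, creating objects as needed
    }
    node[path[path.length - 1]] = value;
  }
  return out;
}

console.log(JSON.stringify(parseNested("_foo_bar=Something&baz=Else")));
// {"foo":{"bar":"Something"},"baz":"Else"}

console.log(JSON.stringify(parseNested("_foo_bar=Something&_foo_baz=Else")));
// {"foo":{"bar":"Something","baz":"Else"}}
```

Under this rule the first query string parses only one way, which is the unambiguous behavior being argued for.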


> {foo:{bar:"Something",baz:"Else"}} should be _foo_bar=Something&_foo_baz=Else

This makes sense, but it is not what I or the author was suggesting. Note that it is somewhat repetitive (foo appears twice). In contrast, the new suggestion simply has a possible ';' to disambiguate these possibilities.


I am a little turned off by the fact that I have to log in to view this information. What is the point? Is this now Facebook?

I'm interested in the API but don't feel as though I should have to alert Dropbox, by being forced to login, that I am interested in it in order to learn more.


Really? Going a bit far, I'd say. Logging in on the web is a fairly trivial task, and I'd argue that there's a good business reason to gauge interest in a consumer API before dumping a ton of man hours into building it.


Right, and they can gauge interest easily and anonymously without requiring me to tell them specifically that I am interested in it.

I value my privacy and find it silly that I have to log in. They offer no information as to why I must log in, I am just hit with a log in page. If this were under an NDA or something and it was an agreement into which I entered willingly, I would not be so opposed. As it is, they are simply farming my data and I don't like giving that up freely.

I think there is a growing problem: most people are increasingly willing to divulge their personal information, specifically online, and then lash out at anyone who doesn't feel comfortable doing so. It's trivial, you say, just as it would be trivial for me to be forced to show identification whenever I get on a bus, but that doesn't mean I am going to be alright with doing so. It's like the argument that only people who have something to hide advocate for privacy. This example may sound a bit extreme but it is essentially the same argument.

Anyway, you're entitled to your own opinion, but I don't think it's going too far to be perturbed at giving up some of my privacy to look at an API.


Loggly's site and app are on two completely different stacks. The app is what you log into, and most of it sits behind authentication. I'm speculating here, but if we were to do some type of semi-automated documentation, it would PROBABLY sit behind authentication as well, as it would live in the app side of the house. Not because we wanted to 'track' someone. We can do that when you write a program and then test it! :P

I'm certainly not speaking for Dropbox here, but my point is that there do exist reasons why a company might want you to log in to see documentation. Maybe you are on a beta trial of V3.0 of the APIs, and need to see alternate docs, or some of the examples require you to have a hash that is used in examples, or...etc.

Seems to me you are being a bit pedantic. I doubt seriously that Dropbox is going to violate your privacy.


What's the penalty for lying to Dropbox with useless email information?


This is only because it's in beta...


> Why would they want to pay more for iPhones with locked out features (like calls and email) instead of cheaper iPods?

My guess is that they want a persistent network connection for the devices and it ends up being cheaper to just use the cell network for data with an iPhone than it would be to outfit every one of their stores with adequate wi-fi coverage for an iPod, ensure sufficient bandwidth on their outgoing network to support all devices, and they may have lower maintenance costs in the long run as they will not have to pay IT staff or other support to keep the wireless network maintained and, over time, upgraded.


The article mentions that Lowe's has already been working to offer wifi in their stores, so it doesn't seem like too much of a reach to just use iPods.

It seems like it would be expensive to have 25 cell phone data plans per store, but maybe they are able to cut some sort of corporate deal.


Right, I did read and notice that in the article, but the distinction I'd make is that there is a big difference between "having wifi at their stores" and covering every inch of those very large stores. Many of them are >100K square feet[1]. To cover that adequately with wifi, in a fashion such that their business can reasonably rely on it at all times, is a significant investment. I'd guess that they'll have wifi near the customer service desk or whatever so they can outwardly say, "we have wifi!" but won't bother attempting adequate coverage of the rest because, in addition to being quite large, the stores are often filled with items that significantly attenuate wifi signal (copper pipes, metal shelving throughout the store, etc.).

[1] http://media.lowes.com/history/


Another thing which the author seems to leave out is the usability paradigms for different sized screens.

As far as Apple goes, they have 2 form factors for their mobile devices(x): iPhone and iPad. Things that work on one device wouldn't necessarily work on the other. I can't remember whether it was Gruber or Steve Jobs himself who said this, but I think they've both said some variation of it.

On the iPhone, it makes sense for many UI elements to take up the entire screen, for example a list of contacts. But on a device with the form factor of the iPad, that looks stupid. Likewise for the opposite - with a device the size of the iPad, it makes sense to have multiple panes and popovers such as in the Mail application. If that were on a device the size of the iPhone, it would be simply unusable. So things formatted for one device wouldn't work on the other and vice versa.

So, on a device in between those 2 primary form factors Apple has chosen, which approach makes the most sense? I would posit that neither really works. The iPhone's "take up the whole screen with a single thing" seems too large and the iPad's "multiple pane" approach would likely not be usable on a smaller device. Maybe that size screen is fine for such things if the pixel density is raised such that the resolutions match, but I would guess it would feel very crowded if you just took the iPad UI and crunched it down significantly.(y)

So it's hard to develop good UI for these 'tweener screens because in some contexts the UI paradigms from the iPhone fit better and in others an iPad approach would be better. Sure, developers could have logic saying that, in some contexts on a 7" iOS device, use the iPhone UI layout and in others use the iPad UI layout, but that further complicates things.

I won't say that no one could design a UI for such a device, in fact I think the Blackberry Playbook has the best UI of the devices I've seen at this size, but I think it's difficult for Apple because it doesn't naturally fit either the iPhone or iPad UI scheme. And it seems like most Android devices pick which paradigm to use by the OS - those devices which have a Tablet factor use Honeycomb (Android 3.x) and those which just scale up the phone version use Gingerbread or earlier. (Android 2.x) It will be interesting to see how they deal with this issue when they merge the two.

(x) Essentially, anyway. Technically they have three when including the iPhone non-Retina and iPhone Retina, but since one is just double the other, it is not a different form factor

(y) Remember, it seems like the difference between 7" and 9.7" is trivial ("only 2.7 inches!" one may say), but note first that 2.7 inches is almost a third of 9.7. Second, note that screen size is measured on the diagonal, so by the rules of Pythagoras the actual screen area is significantly smaller than the diagonal difference suggests.


If you were to squash the iPad's pixels onto a smaller screen, the real problem would be the tap targets. They either scale down with the resolution and become physically too small, or you keep the same physical size and what have you gained? On a touch device, higher res can only really mean finer graphics, not more screen space.


Regarding point x, it's not that screen size is calculated on the diagonal; doubling the diagonal (and keeping the same aspect ratio) doubles the width and height as well.

The key is that area scales as the square of any linear dimension (again, for a fixed aspect ratio). If you double the diagonal, you quadruple the area. If you increase the diagonal from 7" to 9.7", you almost double the area (1.92x to be precise).
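The arithmetic is easy to check; a quick sketch of the fixed-aspect-ratio area scaling:

```javascript
// At a fixed aspect ratio, area scales with the square of the diagonal.
// So the area ratio between a 9.7" and a 7" screen is:
const ratio = (9.7 / 7) ** 2;
console.log(ratio.toFixed(2)); // "1.92" -- the 9.7" screen has ~1.92x the area
```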


"Regarding point x, it's not that screen size is calculated on the diagonal; doubling the diagonal (and keeping the same aspect ratio) doubles the width and height as well."

I assume you meant point y. Anyway, right you are! This is what I tried to express by saying "By the rules of Pythagoras" because I was lazy and didn't want to take the time to calculate it precisely. I also assumed most people here would understand the implications, but on reading your comment I realized they may not and what I visualize in my head is likely far different from what everyone else does. Thank you for making it much more clear and explicit for me!

