Here's an example of the kinds of features GitHub's staff is working on that take a significant amount of time and coordination, and it's why we don't see 100 new "who of your friends also starred this repo?" features each week.
Yep. I was indirectly responding to a complaint from that thread in which the commenter wondered why GitHub wasn't announcing new UI features every day, since the friend-starring feature was just "an afternoon's worth of work".
And then you have to inform the service when there are relevant changes, which requires secure communication with the service, retry attempts in the event of failure, and debugging...?
What kinds of consumers, you mean? I've found them most useful for integration with new hosted services that haven't yet made it onto the external services list provided out-of-the-box by GitHub. For instance, when you want to try out (or develop) some new project tracker or hosted CI solution.
I'm a little surprised they still don't offer any auth configuration for webhooks the way they do for most external services, so you're stuck with basic auth on the post-receive URL.
There's HMAC-SHA1 signing available, but it's hidden in their REST API. I really don't understand why the option isn't exposed via the web interface, especially since their backend already supports it.
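For anyone wanting to use it on the receiving end: a minimal sketch of verifying the signature, assuming the hook was created via the API with a secret and that GitHub puts the HMAC-SHA1 hex digest of the raw body in the `X-Hub-Signature` header as `sha1=<hexdigest>`:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub-style X-Hub-Signature header (sha1=<hexdigest>)
    against the HMAC-SHA1 of the raw request body."""
    expected = "sha1=" + hmac.new(secret, body, hashlib.sha1).hexdigest()
    # Constant-time comparison to avoid leaking digest bytes via timing.
    return hmac.compare_digest(expected, signature_header)
```

One gotcha: compute the digest over the raw request body exactly as received, not over re-serialized JSON, or the signatures won't match.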
I'm stoked to see the webhooks get an overhaul. I was just about to use them for a project, and this'll be a great help.
I wish they'd do the same for the services. Currently, editing multiple services across repos is a bit of a pain, given the length of the list and the way it jumps you back to the top on any interaction. (Originally, I thought this announcement meant they'd revamped that as well.)
Out of curiosity, is there a generally accepted best practice for rate-limiting / authenticating webhook pushes? I'm planning to run a daemon that listens for them, and while the effect of somebody finding the URL and hammering it isn't likely to be catastrophic, I'd still like to rate-limit it if I can. Alternatively, are GitHub's webhook-sending IPs static enough to put in an IP set and add to my firewall?
If I were building something to handle webhooks, I'd probably have a really minimal API that just takes the message as it comes in and queues it for later processing - that way you can either let the queue back up or add some more workers.
Having said that, I'm pretty sure GitHub publishes a list of its public IPs (it's in their API's meta endpoint, if I remember right), so you could also limit what is allowed to call your API.
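The queue idea above can be sketched in a few lines. This is an in-process toy (the names `receive_webhook`, `worker`, and `processed` are made up for illustration); in practice you'd put a real broker like Redis or RabbitMQ behind the endpoint so the queue survives restarts:

```python
import queue
import threading

events = queue.Queue()   # stands in for a real message broker
processed = []           # just to demonstrate that work happened

def receive_webhook(payload: dict) -> None:
    """Called by the HTTP layer: do nothing but enqueue and return fast."""
    events.put(payload)

def worker() -> None:
    """Drain the queue at whatever pace the workers can sustain."""
    while True:
        payload = events.get()
        if payload is None:        # sentinel for shutdown
            break
        # Real processing (clone, build, notify...) goes here.
        processed.append(payload)

t = threading.Thread(target=worker, daemon=True)
t.start()
```

The nice property is exactly the one mentioned above: the HTTP handler stays cheap no matter how hard someone hammers the URL, and you scale by adding workers rather than by tuning the endpoint.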
This new feature will make things so much easier to set up and debug; I'm thrilled. Before, you couldn't create a pull request hook without using the API, which caused a lot of confusion. We set up the hook automatically in Leeroy, but this will let people set it up manually if they want, and debug it when it doesn't work.
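For reference, creating that hook through the API looks roughly like this. It's a sketch against the classic `POST /repos/:owner/:repo/hooks` endpoint; the owner/repo values, the CI URL, and the token are all placeholders you'd fill in:

```python
import json
import urllib.request

def pull_request_hook_payload(ci_url: str) -> dict:
    """Body for POST /repos/:owner/:repo/hooks subscribing to
    pull_request events only - the part you couldn't do from the old UI."""
    return {
        "name": "web",
        "active": True,
        "events": ["pull_request"],
        "config": {"url": ci_url, "content_type": "json"},
    }

def create_hook(owner: str, repo: str, ci_url: str, token: str):
    """Perform the actual API call; needs a token with repo scope."""
    req = urllib.request.Request(
        "https://api.github.com/repos/%s/%s/hooks" % (owner, repo),
        data=json.dumps(pull_request_hook_payload(ci_url)).encode(),
        headers={
            "Authorization": "token " + token,
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```

With the new UI you can now pick the `pull_request` event from a checkbox instead, but the API route is still handy when a tool needs to install its own hook.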
Just to share an awesome GitHub feature that I learned about when doing a similar integration to yours: your repository also tracks all pull-request-related commits. Really helpful when you're trying to build these sorts of tools. If you set your refspec to the following, you'll get automated builds for any commit activity related to all pull requests. It's saved me tons of time writing middleware to handle these things.
+refs/pull/*:refs/remotes/origin/pr/*
The major Jenkins plugins have their respective build-on-new-commit triggers, so you can automate these builds and just treat them like feature branches.
Digging through some of the events we can hook into here: http://developer.github.com/webhooks/#events, I noticed the `gollum` event, which is triggered whenever a wiki page gets updated. I'm curious why it's named that?
You could, but you'd need some intermediate service to handle that process. One great choice is Travis CI[1], which is really a continuous integration service but would let you do what you want. You can also just attach multiple push URLs to a single git remote, so you push to both GitHub and Heroku every time.
However, I think running a test suite before deploying is a much safer option as it stops you from accidentally deploying something you haven't properly tested.
We have a preview API in place for these types of deployment tasks. Lots of folks (Travis, etc.) seem to have piggybacked on top of successful builds too.