Seconding Temporal: it is definitely a second-gen workflow platform, with approaches that make many Airflow frustrations nonexistent.
In particular, workflows are extremely durable: they can sit indefinitely waiting on a condition, and you can redeploy workers whenever you like; workflows will resume exactly where they were.
We've been using Temporal for the past year and have found it to be an absolute joy.
Being able to model business processes--some of which may take multiple _years_ and require human approval/rejection steps--using procedural Go/Java/JS, and knowing that once that function starts executing it is basically indestructible (barring the complete destruction of Temporal's backend database), is just really cool. Being able to outsource much of the complexity of using event sourcing to manage distributed transactions frees our developers to worry about the high-level business logic alone.
Congrats to the team! I'm excited to see Temporal gaining some more traction.
we use event sourcing under the hood for its fault tolerance and scalability (and tracing/observability). it’s abstracted away for you by our SDKs.
in other words when you use us you get the main benefits of ES without the downside of having to code it up yourself, which is a common pitfall of homegrown ES systems.
If I DID want to get access to the event sourcing mechanisms under the hood - can I?
My hypothesis is that I can use Temporal to implement high quality DDD pipelines that follow CQRS - having explicit access to the event sourcing mechanisms would be fantastic!
Otherwise it's an issue: most DBs can be argued to use some form of event sourcing under the hood, but they aren't built to expose it outside the DB, so the event sourcing mechanism has to be reimplemented (hence the pitfall of homegrown ES systems).
you could, by directly accessing the backing db, but we don't encourage that. The other way you could do it is read-only access by polling our event history APIs; that would be doable.
i try to say "we use event sourcing under the hood" rather than "we do event sourcing for you" for this reason: if you use us and expect the same level of control as a homegrown system, you're gonna be disappointed.
What's the design pattern you recommend Temporal users use to implement multiple consumers?
Use case: handle a large, spiky backlog by dynamically sharding the consumer input, spinning up multiple consumers and putting them to work until the backlog is back within nominal specs.
E.g., shipping orders during a sales event. Each S&H order takes the same response time, but multiple S&H orders can be parallelized, and there are a few hundred thousand orders pending. Of course, an S&H order can fail, in which case it is restored to the pending state after a timeout or loss of a lock.
ah, for this one you don't need multiple "consumers", you need multiple workers, in our model. they're kinda the same thing when you get down to it, but this is the paradigm shift.
every temporal workflow goes into an assigned task queue, and Temporal handles distributing/load balancing to multiple workers polling that queue. you can have 10,000 orders coming in simultaneously to 1 task queue being processed by 5 workers, for example, and Temporal would register heavy load but would still process through all that work in due time. the beauty is that when you write workflows you don't have to worry about acquiring locks or whatever; just write as though you had one durable, long-running process per order. it's very freeing.
this is not to say it does everything, i.e., Temporal is not a replacement for a true pub/sub model. we just described N workers processing 1 type of workflow initiated 10,000 times, but it's not designed for 1 event type initiated 10,000 times that is reacted to by N types of processes that are supposed to be completely decoupled.
Would you ever want autocomplete on, say, a field where you're inputting a 2FA response? Probably not, since any suggestions would be useless.
I find the workaround of "just put some other random bullshit for the value" hilarious. And once developers start over-abusing that as well (they will), then what's the plan?
I've used Uber dozens of times, but stories like this really make me think I should give Lyft a shot. After all, if this company is operating in such an underhanded manner this early in the game, it's troubling to consider what they'd be capable of down the line.
I have always been unsure why people prefer one over the other; it's essentially the exact same service. I typically go with whichever one is closer or is offering a deal at the time.
I think they have distinct experiences. Lyft drivers are more talkative and prefer you sit in the front, with Uber the status quo is get in the back and conversation is optional.
I check both apps; if the pickup time and price are similar, I'll take Uber.
I'm not sure why MS decided that the release of a free upgrade of their product--one that's trying to address pain points with Metro--would be a great opportunity to shove the Windows Store down users' throats. I can't even open the Windows Store on my media center PC (hooked up to a 720p television) without increasing my screen resolution past its native setting. That's just irritating.
Do you realize that your Windows 8 license key will not allow you to download and do a clean install of 8.1? You need to download 8.0, install 64 or so important updates, and only then are you deemed worthy of being eligible to download 8.1 through the Windows Store. Who thought this up?
I went the clean install route after my upgrade from 8.1 preview stalled out on the "Almost There" screen for 15 minutes. I rebooted, and all references to my installed applications were gone.
All in all, the 8.1 upgrade wasn't a pleasant experience.
What does this have to do with client trust? They're expecting the client to access their API a certain way, and the client is sending what is arguably a malformed request. This is, surprise, causing an error to occur.
Hard to tell from the post, but it sounds like Rails threw an exception because it couldn't find should_group_accessibility_children, and that suggests it would act on something it could find.
If there was a parameter that gave a user admin access, then Rails might accept such a parameter and that might be used to take control of the app.
I would think that the API would validate the POST parameters, ignore unexpected parameters and give errors for malformed expected ones. Taking it a bit further the developer should then be notified of malformed POST parameters being present and decide if it is a bug or an attack.
Except they "give an error" because the provided field doesn't exist in the database. Ignore for a second that half the requests would break websites if an unexpected parameter yielded an error instead of being ignored: if an untrusted client sent an "id" field, that would go through like hot steel through melted butter.
Well. The likeliest thing is that there is no "middle man fucking with them". The likeliest thing, since it's an iOS app posting to their API, is that they're introspecting a client-side object to get the values they care about. And that they're blacklisting values they know they don't want, rather than whitelisting values they know they do want.
Which meant that when a new property showed up their app blindly submitted it to their web API, and their web API blindly accepted it because it was doing mass assignment, and that's when the API broke.
Which really just hammers home the point people have been trying to get you to see, which is that these types of idioms -- mass assignment, blind trust of client-supplied data, blacklisting instead of whitelisting -- are really serious problems that should not be encouraged, and should not be swept under the rug.
Correct me if I'm wrong, but isn't it pretty much always a bad idea to do it blindly, and the defaults are considered a security problem by the Rails project itself?
Hey now, don't go bringing facts into this thread -- I've already been downvoted hard for, apparently, not knowing what I'm talking about when saying that this is a security issue. So obviously the Rails team don't know what they're talking about either.
He just linked to an article that explains how to do it safely.
update to reply because of downvotes:
1) butterfly knives are very useful tools
2) mass assignment can be used safely out of the box in rails post v3.2.3. To use it, you have to explicitly add parameters to the whitelist or disable the whitelist. The article is there to explain why disabling the whitelist is a bad idea.
Sure, just like butterfly knives are safe because you can find some safety tips online.
Edit to reply to edits: Mass assignment is still dangerous "out of the box" since you have to switch on the whitelist behavior by calling attr_accessible on your model classes. In the security guide, the older, more dangerous, attr_protected is introduced first.
I think every rails dev should be familiar with the security guide, but more than that I wish that security was the default. While anybody is free to make an app as insecure as they wish, it should be the exception rather than the default.
You're talking about the old behaviour. New Rails apps have config.active_record.whitelist_attributes set. That means models without attr_accessible statements will throw an error if mass assignment is attempted. IOW, they've done exactly what you asked for. They should have done it years ago, but they did the right thing after it blew up in their face.
I see. So that whole big mass-assignment security issue that exposed GitHub a while back -- that just didn't happen? Writing code in this style is perfectly safe?
Writing code in this style is perfectly safe if you do it correctly. GitHub didn't, so the defaults were changed to make it harder to do it incorrectly.
UPDATE: editing to reply since HN won't let me reply directly because the thread is too deep, yet I'm getting downvotes.
It's not a tautology. Some things are safe even if you do them wrong. Some things are unsafe no matter how you do them.
Rails changed the defaults so that now you have to deliberately decide to do things unsafely. Rails before 3.2.3 fails-unsafe in this scenario, but later versions fail safe. Rails 4 uses a different solution that's even harder to screw up.
"Writing code in this style is perfectly safe if you do it correctly."
That's a tautology.
In general you can't count on code being written "correctly", so this isn't a defense. It is better to have systems that degrade gracefully in the face of humans and their idiosyncrasies, rather than those that fail-unsafe, because you can't build your security system on the assumption that your code will be written by superhumans.
I hope you realize that this is the identical argument PHP developers made whenever someone brought up how insecure the base language, libraries, and configuration were.
Users of a framework should have to go out of their way to make themselves insecure. It shouldn't be insecure by default.