One of my long-running habits when working with databases has been to use transactions. Transactions simplify error handling in complex, multi-record operations.
More recently, I’ve been using transactions to wrap external API calls that I also need to store data about. External APIs fail. For that matter, internal APIs fail too, so let’s simplify that: APIs fail. Sometimes for no good reason.
Accordingly, it’s necessary to be able to fail gracefully in the face of such circumstances. Failing gracefully includes not only rendering meaningful error messages, but also not leaving local data in an unexpected state.
Transactions provide a simple, clean way to resolve this issue too. And they seem all but indispensable when combining multi-record operations with APIs.
Lately I’ve been doing more work with NoSQL databases. They’re fantastic for certain use cases. But the near universal lack of transaction support is annoying at best.
So now the challenge is how to work around the overall issue and provide as robust an experience as possible within the local app.
Transactions for the win!
Here’s what code might normally look like with transactions (excuse the mildly contrived example):
```ruby
begin
  Order.transaction do
    @order = Order.create! params[:order]
    @charge = Stripe::Charge.create @order.stripe_params
    @payment = Payment.record @order, @charge
  end
rescue ActiveRecord::RecordInvalid
  # nothing to do; validation errors are already attached to @order
rescue Stripe::Error
  @order.errors.add :base, 'Unable to charge your card.'
end

if @payment
  redirect_to :thank_you
else
  render :new
end
```
The benefit is that Order is guaranteed to be valid and yet it isn’t saved if the API (Stripe in this example) fails, whether for a declined card, a network error, or anything else.
Better yet, if there were even more tables/models involved, nothing much would change.
Splitting validation and persistence
With a NoSQL database, there seem to be a couple potential patterns. First, splitting validation and persistence:
```ruby
@order = Order.new params[:order]

if @order.valid?
  begin
    @charge = Stripe::Charge.create @order.stripe_params
  rescue Stripe::Error
    @order.errors.add :base, 'Unable to charge your card.'
  end
end

if @order.errors.any?
  render :new
elsif @order.save
  @payment = Payment.record @order, @charge
  redirect_to :thank_you
else
  # uh oh, now what? refund the charge? email ops?
end
```
We’re assuming that validation is straightforward, which it usually is.
But what if the database pukes and `@order.save` fails? Now the customer has been charged, and we won’t be delivering their order because we don’t know about it. This makes for sad customers. Not good.
Admittedly, this is unlikely, at least if your database is rock solid and on localhost. If you’re using a database over the network (cloud-hosted, anyone?), it’s somewhat more likely.
Regardless, if unlikely were good enough, we could skip all the error checking all the time.
Saving and reverting
Another option is saving everything and then reverting, which is a lot like a manual, non-guaranteed transaction rollback:
```ruby
begin
  @order = Order.create! params[:order]
  @charge = Stripe::Charge.create @order.stripe_params
  @payment = Payment.record @order, @charge
rescue NoSQL::RecordInvalid
  # nothing to do; validation errors are already attached to @order
rescue Stripe::Error
  @order.destroy
  @order.errors.add :base, 'Unable to charge your card.'
end

if @payment
  redirect_to :thank_you
else
  render :new
end
```
Now we’re relying on `@order.destroy` not to fail. In a sense this is slightly better. Except for the hidden part about recording the Payment after the charge, which in a failure still leaves us where the first pattern did: charged at Stripe with no local record.
We could follow up on the order later (a cron job, perhaps) and verify that the payment went through. A first pass could look for the Payment record and, if it isn’t found, query Stripe (assuming we sent a reference identifier such as `@order.id`). But this begins to add a lot of excess code that a transaction handled for us seamlessly.
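That follow-up pass can be sketched in a few lines. This is a minimal illustration, not production code: the collections are in-memory stand-ins, with `payment_order_ids` playing the local Payment records and `stripe_charges` playing Stripe’s view keyed by the reference identifier (e.g. `@order.id`) we sent with each charge. All names here are illustrative assumptions.

```ruby
require 'set'

# Returns the order ids that were charged at Stripe but never recorded
# locally as a Payment -- the exact gap the cron job would hunt for.
def orphaned_charges(order_ids, payment_order_ids, stripe_charges)
  order_ids.select do |id|
    # No local Payment, but Stripe shows a charge: this order needs attention.
    !payment_order_ids.include?(id) && stripe_charges.key?(id)
  end
end
```

An order with neither a Payment nor a charge is simply a failed checkout and needs no repair; only the charged-but-unrecorded case demands action (record the payment, refund, or alert a human).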
What to do…
One variant on the above options is to make database actions more resilient with some kind of automated retry logic. If the database is highly available with automated failover (clustered MongoDB, for example), then a retry has good odds of eventually succeeding. This doesn’t fully resolve the issue, but it mitigates it somewhat.
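A retry wrapper along those lines is only a few lines of Ruby. This is a sketch; the method name and parameters are my own, and in real code you’d rescue only the driver’s transient connection/failover errors rather than `StandardError` wholesale.

```ruby
# Retry a block up to `attempts` times with exponential backoff.
def with_retries(attempts: 3, base_delay: 0.5)
  tries = 0
  begin
    yield
  rescue StandardError
    tries += 1
    raise if tries >= attempts          # out of attempts: give up loudly
    sleep base_delay * (2**(tries - 1)) # back off before the next try
    retry
  end
end
```

Then `with_retries { @order.save! }` would ride out a brief failover window instead of immediately landing us in the “charged but not saved” hole.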
Neither of the above examples is particularly complex, but imagine one that is. Perhaps 2, 3, or even 4 database records need to be persisted (and reverted in case of a failure). Or maybe some of those data models trigger event logs that then have to be cleaned up (if that’s even possible) or sanitized.
Regarding triggered external events (the kind that you’d process in a queue or message bus), only the “splitting” solution really helps. Even native database transactions are problematic since the transaction doesn’t cover external processes (unless your queue is stored in the same database or you’re making clever use of something like ActiveRecord’s…).
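The “queue stored in the same database” idea is worth a quick sketch: write the outgoing event in the same transaction as the order, so both commit or roll back together. The `Store` class below is an in-memory stand-in for a real database (an assumption purely for illustration); its naive snapshot-and-restore transaction exists only to show the shape of the pattern.

```ruby
# In-memory stand-in for a database that holds both orders and a queue
# of outgoing events, covered by one (naive) transaction.
class Store
  attr_reader :orders, :events

  def initialize
    @orders = []
    @events = []
  end

  # Naive transaction: snapshot state and restore it if the block raises.
  def transaction
    orders_snapshot = @orders.dup
    events_snapshot = @events.dup
    yield
  rescue StandardError
    @orders = orders_snapshot
    @events = events_snapshot
    raise
  end
end

store = Store.new
begin
  store.transaction do
    store.orders << { id: 1 }
    store.events << { type: :order_created, order_id: 1 } # queued alongside the order
    raise 'charge failed' # simulate the API blowing up mid-way
  end
rescue StandardError
end
# Both the order and its queued event rolled back together, so a worker
# that only reads committed event rows never sees the phantom order.
```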
Anyone have any thoughts or preferences on the matter?

tags: activerecord, nosql, databases, apis