Tuesday, May 14, 2013

Bloomberg - Goldman Sachs Complaint Indicates Questionable Methods

Bloomberg has only just recently realized its crucial mistake of allowing its journalists special access to the log-in information of clients of the company's financial data terminals.

Or had the company known all along that it was a wrongful tactic?

According to a recent Washington Post article, Goldman Sachs filed a complaint against the financial data and news company after a Bloomberg reporter pointed out to the firm that a Goldman employee had not logged into his Bloomberg terminal for a couple of weeks.  Bloomberg stated that it had corrected its mistake.

A glance at the Bloomberg Terminal
The information that Bloomberg reporters had special access to included when any of the company's subscribers logged in and which types of functions they were using.  Although such data may seem trivial compared to the other personal information the company holds (trading, portfolio, and monitor data), Bloomberg reporters have been notorious for their aggressive tactics to beat the competition in breaking news, and those two pieces of information gave them grounds for some serious inquiries.

One Bloomberg reporter knew the log-in times of multiple traders on a single desk and would call daily about potential layoffs.  Other Bloomberg journalists were said to have used their special access to try to discover whether Bruno Iksil, the JP Morgan trader blamed for the $6 billion trading loss in 2012, had faced disciplinary action.

Bloomberg was even able to report the earnings of Walt Disney Co. and NetApp Inc. ahead of the companies' scheduled releases, which was made possible by guessing the unprotected web addresses of the press releases before they were made public.
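It is not hard to see how such guessing could work in principle.  Below is a minimal, purely illustrative sketch in Python; the host name, URL pattern, and dates are hypothetical stand-ins, not the actual addresses involved.  The idea is simply that when a publisher embeds a predictable quarter or date in a file name, an unpublished page can be found just by trying plausible addresses:

```python
# Purely illustrative sketch: probing predictable press-release URLs.
# The host name and URL pattern below are hypothetical examples, not the
# actual addresses Bloomberg is reported to have guessed.
import requests

CANDIDATE_URLS = [
    # Many publishers embed the quarter or date directly in the file name,
    # which makes not-yet-announced releases guessable.
    "https://investor.example.com/releases/2013-Q1-earnings.html",
    "https://investor.example.com/releases/2013-Q2-earnings.html",
    "https://investor.example.com/releases/2013-05-14-earnings.html",
]

for url in CANDIDATE_URLS:
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException:
        continue  # host unreachable or request failed; move on
    if resp.status_code == 200:
        # The page exists even though it has not been linked publicly yet.
        print("Live but unannounced:", url)
```

Anything that answers with a normal page before the official announcement is, in effect, already public.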

How could Bloomberg, the multinational mass media company that makes up one third of the $16 billion global financial data market, not have known that this was a wrongful tactic?  When the company stated that it had corrected its mistake, was it referring to the mistake of allowing reporters to access the terminal data, or the mistake of getting caught doing so?  Should there be greater consequences for Bloomberg for letting its reporters exploit personal data to break news and rise above the competition?  Leave thoughts in the comments below.

Monday, May 13, 2013

Google Street View = Personal Data View

Google once again finds itself up against a crowd over a major breach of privacy, except this time it isn't a few companies or a handful of individuals.  It's 38 states.

The New York Times recently published an article reporting that Google has finally acknowledged to state officials that its Street View program had been casually collecting personal data, including passwords and email contents, from unsuspecting people.

Unsurprisingly, a majority of states saw this as a serious problem and brought a case addressing the privacy concern.

It's hard to believe that even after Google's trouble with the massive lawsuit over its social networking tool Buzz back in 2010, the company would still make a huge privacy mistake...or is it?

Was it even a mistake?

How in the world does Google collect passwords from a navigational application?  One might see how information could be gathered through a messaging system (where a log-in is required), but a map guide?  The program was also found to secretly collect financial and medical data, which raises the alarming question of how this could have occurred.

When asked about the issue, Google's response was less than satisfactory for regulators.  The company initially stated that no data had been collected from unknowing users, then later attempted to play down the data that had been gathered.  Google even went so far as to fight regulators who wanted to examine the data.  The false statements continued: Google said that all collected data had been destroyed, when some of it still remained.

One particularly absurd facet of the company's argument was placing full blame for the entire operation on a single engineer.  The Federal Communications Commission's investigation found that the engineer had worked with others and had tried to inform supervisors of his actions, suggesting he was less a rogue (as Google labelled him) than simply unsupervised.

Another especially interesting aspect of the case is the method Google employed to build Street View.  The company deployed special vehicles to photograph the offices and houses along streets.  These vehicles didn't capture only images, though; they also pulled data off private networks.

These seemingly normal cars would secretly gather data from millions of unencrypted networks.
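To get a sense of how little effort that takes, here is a minimal sketch using the third-party Scapy library; the interface name is an assumption, and the script needs a wireless card in monitor mode and root privileges.  It only listens passively for the beacon frames that every access point broadcasts in the clear and flags the networks that advertise no encryption, which is all a passing receiver needs to know which traffic it could read:

```python
# Minimal sketch of passive Wi-Fi discovery with Scapy.
# Assumes a wireless interface named "wlan0mon" already in monitor mode
# and root privileges; both are assumptions, not details from the article.
from scapy.all import sniff, Dot11Beacon, Dot11Elt

seen = {}

def handle(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    # The first information element in a beacon frame carries the SSID.
    ssid = pkt[Dot11Elt].info.decode(errors="ignore") or "<hidden>"
    # The "privacy" capability bit tells us whether the network is encrypted.
    caps = pkt.sprintf("%Dot11Beacon.cap%")
    encrypted = "privacy" in caps
    if ssid not in seen:
        seen[ssid] = encrypted
        print(f"{ssid!r:30} {'encrypted' if encrypted else 'OPEN'}")

# Listen passively; beacon frames are broadcast in the clear, so no
# interaction with the networks themselves is needed.
sniff(iface="wlan0mon", prn=handle, store=False)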

So what can be expected this time from Google on how it'll do things differently, and what consequences will the giant corporation face? The company has already been fined $25,000 by the FCC for obstructing its investigation into the collected data, and it faced a $7 million fine from the case itself (minimal compared to its net income of about $32 million a day).  As for what will change in the company's operations, Google must now comply with an updated settlement that includes:
  • setting up privacy certification programs for select employees
  • running educational ads in the top newspapers in each of the 38 states
  • creating a YouTube demonstration on how people can easily encrypt data on their private networks
  • running a daily online advertisement promoting the YouTube video for two years


Is the new plan effective enough to curb Google's privacy breaches to any meaningful extent?  Was the fine large enough, or should there be more serious penalties of other kinds (such as fully established regulations that explicitly limit what Google's systems may collect)?

From a different angle, would any measure prove effective against a company that repeatedly misrepresents its actual conduct?  Is the gathering of personal data something that simply cannot be avoided when developing such technologies?  What motives could Google have for collecting such data?  Leave thoughts in the comments below.

Sunday, May 12, 2013

Facebook Privacy Chain Letter Hoax Resurfaces - Users Remain Uninformed

Many Facebook users are still completely unaware that the infamous copyright chain-letter status is about as effective a method of protecting their privacy as their status updates about the weather outside.  The letter has recently resurfaced, according to an article in The Washington Post, once again giving thousands of users the false hope of holding copyright over their personal information.

This event makes it apparent that there are now bigger issues at hand than companies collecting private data: social networkers' extremely limited knowledge of their digital rights, and the fact that they are not taking advantage of legitimate opportunities to increase control over their content (such as Facebook's past policy-voting polls, which only a few users took part in, or simply reading the site's policies at all).  But are Facebook users the ones fully responsible here?

The popularity of such privacy hoaxes indicates that many Facebook users do value their privacy; but are they willing to protect it by thoroughly reading the privacy policy and terms of use?  The chain letter's spread suggests that a great many users do not read these statements thoroughly, if at all.  One could argue that it is the users' own fault for letting their personal information be collected; after all, when Facebook offered votes on policy changes in the past, only a small fraction of users took the opportunity.


But what exactly is Facebook doing to encourage users to read such conditions?

Facebook has three links, each covering a different aspect of the company's terms for user information.  Each link directs users to a new page with even more subcategories detailing many more aspects within each division.  Finally, after a subcategory has been clicked, a lengthy page with several paragraphs of text is presented.

It isn't hard to understand why the average social media user would rather simply check "Yes, I agree to the above terms and statements" than scrutinize the mass of policy information offered to them.  After all, they came to Facebook to socialize, not to read page after page of an agreement contract.

Then there are those who do put in the effort to read all of the policies.  But how much of what the company is saying does the average user actually understand?  On its Information We Receive About You page (under the privacy link), about two paragraphs before the end, the company states that "We store data for as long as it is necessary to provide products and services to you and others."  How long exactly is necessary?  Who decides that it is necessary?  Do "products and services" refer to advertisements on Facebook, or to Facebook's own features and updates?  This statement, along with several others on the policy pages, is actually quite vague, despite the descriptiveness one would expect given the length of the conditions.


The hoax chain letter seemed to many Facebook users like an easy way of protecting their personal data, but its massive popularity showed just how little users know about online privacy policies.  This lack of crucial knowledge on the part of the general networking public raises reasonable concern, but it also raises eyebrows at those who don't take the time to read the policies.

What can (or should) be done on either Facebook's end, the users' end, or both?  Leave thoughts in the comments below.