Stories surfaced recently that two police officers in the State of Georgia in the US ran an unwarranted background check on President Obama. Evidently, the Secret Service alerted the local county government that computers within its system had been used to access information on the President. As a result, the two officers in question have been placed on suspension. Remarkably, a similar incident involving a Philadelphia police officer came to light shortly thereafter.
These incidents bring two issues to mind:
First is the issue of access controls or access monitoring with respect to information systems and databases containing personal information. On the one hand, it’s refreshing to know that controls of this kind are in place, allowing the Secret Service to know that someone from a particular computer network has accessed information on the President stored on criminal justice systems. Yet, clearly the Secret Service is not going to extend this kind of safeguard to many people beyond the President, Vice-President, and potentially their families. It’s also unclear to what extent anyone within the network of federal and state agencies that have access to this information runs audits to ensure that other unwarranted access has not occurred. With respect to at least one of the databases in question, the FBI’s National Crime Information Center database—which I discuss below—there are local agencies that oversee the administration of access to the database within their locality (state, territory, etc.). Such an agency is “responsible for monitoring system use, enforcing system discipline and security, and assuring that all users follow operating procedures.” Yet, according to an article that appeared on Slate, it was “common practice” in one locality for police “to run checks for friends and family, and to run prank names to alleviate boredom.”
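Purely as an illustration (I have no knowledge of how the Secret Service or the NCIC actually implement any of this), here is a minimal sketch of how an alert on access to a flagged record might look; all names, identifiers, and the log format are invented:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical set of record IDs under special protection (e.g. the
# President); a real system would manage this very differently.
PROTECTED_SUBJECTS = {"record-0044"}

@dataclass
class QueryLogEntry:
    officer_id: str
    terminal_id: str
    subject_record_id: str
    timestamp: datetime

def check_for_protected_access(entry: QueryLogEntry) -> None:
    """Raise an alert whenever a flagged record is queried."""
    if entry.subject_record_id in PROTECTED_SUBJECTS:
        # In practice this would presumably notify an oversight agency
        # rather than just printing.
        print(f"ALERT: {entry.officer_id} at {entry.terminal_id} "
              f"accessed protected record {entry.subject_record_id} "
              f"at {entry.timestamp.isoformat()}")

check_for_protected_access(QueryLogEntry(
    officer_id="badge-1234",
    terminal_id="squad-car-17",
    subject_record_id="record-0044",
    timestamp=datetime(2009, 8, 18, 14, 30),
))
```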
Then again, I’m not sure how you would structure such an audit, given that almost anyone who gets pulled over by police for even the slightest traffic violation can legitimately be subjected to such a background check. (Another interesting question is whether anyone has ever challenged the legitimacy of allowing officers to call up this variety of information during a routine traffic stop.) Multiple system queries issued in relatively quick succession might be one indication of abuse, but this kind of activity wouldn’t be inappropriate where multiple individuals have been stopped for suspicious behavior. Perhaps looking for checks run on notable figures such as President Obama might be another way to catch some illegitimate use of the system, but it would not provide much of a safeguard for the majority of citizens. At any rate, my point is to draw out an issue pertaining to the “watching of the watchers” and potential remedies for “violations” on the part of the watchers. This issue of providing access controls and auditing capabilities is likely to be a significant theme in Work Package 6 of the DETECTER Project, which I am working on.
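To make the auditing difficulty concrete, here is a toy version of the burst heuristic just mentioned. The threshold and log format are my own invention, and, as noted, a burst of queries can be perfectly legitimate, which is exactly why such a flag could only ever trigger human review:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Invented threshold: flag any officer who runs more than 5 checks
# within a 10-minute window for human review.
BURST_LIMIT = 5
BURST_WINDOW = timedelta(minutes=10)

def flag_query_bursts(log):
    """log: iterable of (officer_id, timestamp) tuples, in any order."""
    by_officer = defaultdict(list)
    for officer_id, ts in log:
        by_officer[officer_id].append(ts)
    flagged = []
    for officer_id, times in by_officer.items():
        times.sort()
        start = 0
        # Sliding window over the sorted timestamps.
        for end in range(len(times)):
            while times[end] - times[start] > BURST_WINDOW:
                start += 1
            if end - start + 1 > BURST_LIMIT:
                flagged.append(officer_id)
                break
    return flagged

# Seven checks in seven minutes trips the (invented) threshold.
log = [("badge-1234", datetime(2009, 8, 18, 14, m)) for m in range(7)]
print(flag_query_bursts(log))  # ['badge-1234']
```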
The second issue concerns the actual extent of information that access to a particular system grants—and in the context of these incidents, information sharing or consolidation among different data-collecting agencies. The fact is, I don’t know exactly what information is featured in these background check queries; according to the article on Slate, it may vary from police agency to police agency, since different agencies may have different access policies and procedures. I would guess they would contain: name, date of birth, height, weight, gender, eye color, address (all standard items on US driver’s licenses), driver’s license number and state of issue (perhaps even past driver’s license numbers, too?), vehicle registration information, outstanding parking tickets or fines, traffic violations, arrests, criminal convictions, outstanding warrants or other All-Points-Bulletin-type notices (including, e.g., Interpol notices), and perhaps even social security number and driver’s license photo. The Slate account adds aliases, tattoos, scars, and other distinguishing marks; however, these clearly would only be available if you had been arrested. As for fingerprints, I know of at least one state that requires fingerprinting when issuing a driver’s license; otherwise, these too would not generally be available without a prior arrest.
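Rendered as a data structure, the guesswork above might look something like the following (to be clear, this is just my speculation as code, not an actual schema from any of these systems):

```python
from dataclasses import dataclass, field

@dataclass
class BackgroundCheckRecord:
    # Standard US driver's-license fields
    name: str
    date_of_birth: str
    height: str
    weight: str
    gender: str
    eye_color: str
    address: str
    license_number: str
    license_state: str
    # Motor-vehicle and criminal-justice history
    vehicle_registrations: list = field(default_factory=list)
    outstanding_fines: list = field(default_factory=list)
    traffic_violations: list = field(default_factory=list)
    arrests: list = field(default_factory=list)
    convictions: list = field(default_factory=list)
    outstanding_warrants: list = field(default_factory=list)
    # Fields that would typically exist only after an arrest
    aliases: list = field(default_factory=list)
    distinguishing_marks: list = field(default_factory=list)  # tattoos, scars
    fingerprints_on_file: bool = False
```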
But where does this information come from? According to the Slate article, the information is culled from a number of different databases. Alongside local databases, the primary source for data from all states as well as certain federal information is the National Crime Information Center database mentioned above (see also this page maintained by the Federation of American Scientists). According to the Slate article, not every police officer will necessarily have direct access to this database from his or her squad car. Thus, at least in some places, there are built-in safeguards to limit the extent of information that is made available without some justification on the part of the officer.
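A toy illustration of that kind of tiered safeguard: local records come back directly, while anything beyond requires a recorded justification, which doubles as the audit trail discussed above. Everything here (names, tiers, data) is invented:

```python
audit_log = []

LOCAL_DB = {"jane doe": {"traffic_violations": ["speeding, 2008"]}}
FEDERAL_DB = {"jane doe": {"outstanding_warrants": []}}

def lookup(name, justification=None):
    result = dict(LOCAL_DB.get(name.lower(), {}))
    if justification is None:
        return result  # squad-car terminal: local tier only
    # Escalation to the federal tier is always logged for later audit.
    audit_log.append((name, justification))
    result.update(FEDERAL_DB.get(name.lower(), {}))
    return result

print(lookup("Jane Doe"))  # local data only
print(lookup("Jane Doe", justification="traffic stop, I-75 mile 221"))
print(audit_log)
```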
Yet the trend has been toward increased availability of information—including increased information sharing and extending the reach of intelligence and criminal justice resources to include more and more databases and data sources. An initiative known as MATRIX (Multi-State Anti-Terrorism Information Exchange)—I’m guessing they didn’t see the movie—represented one effort in the US in the early to mid-2000s to pool information and resources in support of a better (and perhaps more extensive?) information system. Accounts vary, but some claimed the system would provide access to records from a number of public sources in addition to the usual law enforcement databases. One account, for instance, claimed that credit information, marriage and divorce records, names of business associates, and neighbors’ addresses and telephone numbers would also be made available (Duane D. Stanford and Joey Ledford, “State to Link Up Private Data,” Atlanta Journal-Constitution, October 10, 2003, cited by the ACLU in this report). There was certainly discussion of incorporating an analytic system developed by a private corporation, which would also include access to that corporation’s databases holding “billions of public and commercial records.” The fear that the new system would give local police access to an enormous variety of personal information gave rise to a public uproar. Probably at least in part due to that backlash, most of the states that had initially signed on to the program gradually withdrew.
There’s a lot more to be said on the question of how much information should be readily available—particularly in light of the potential for misuse. Especially in the context of national security intelligence, it is often not clear what information is significant for preventing terrorist attacks. There is this idea, perhaps reflected in programs like DARPA’s Total Information Awareness (or “Terrorist Information Awareness” if you prefer), that if only the greatest possible amount of information were available for analysis, analysts would be able to pick up on patterns of “suspicious” activity before incidents occur. I’ll save further discussion of this subject for a future post. But beyond the question of the extent to which we should permit data aggregation, there is also the question of how much existing information should be made available, to whom, and under what circumstances.
Monday, August 10, 2009
One in Every 78 Adults Surveilled by Councils in Britain Last Year
The Liberal Democrat party is speaking out on Council Snooping after revelations that an average of 1,500 surveillance requests were made every day in Britain last year - equivalent to one in every 78 adults having been targeted across the year.
I've talked about surveillance by Local Authorities before, but these figures just seem out of all proportion to any justifiable role for spying by councils. I assume the numbers will look smaller once one takes into account that many of these requests will be for repeat surveillance of the same people (though I've no idea by how much).
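A quick back-of-the-envelope calculation shows how much repeat targeting could deflate the headline figure; the adult-population number below is simply the one implied by the reported "one in 78", not an official statistic:

```python
# How repeat surveillance of the same people deflates "one in 78 adults".
requests_per_day = 1_500
requests_per_year = requests_per_day * 365       # 547,500
adults = requests_per_year * 78                  # population implied by "1 in 78"

for repeats_per_person in (1, 2, 5):
    distinct_people = requests_per_year / repeats_per_person
    print(f"{repeats_per_person} request(s) per target -> "
          f"one in {adults / distinct_people:.0f} adults")
# 1 -> one in 78; 2 -> one in 156; 5 -> one in 390
```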
The Lib Dem solution is for the power to grant surveillance requests to be taken away from government and handed over to magistrates:
We have sleepwalked into a surveillance state, but without adequate safeguards. Having the home secretary in charge of authorisation is like asking the fox to guard the hen house. The government forgets that George Orwell's 1984 was a warning and not a blueprint.
Is there anything at all to be said in defence of the law as it currently stands? Are there, for example, any scenarios in which really necessary Council surveillance requests would be likely to be turned down by a magistrate? (What could such scenarios be?) And does anyone disagree that Councils are using these powers far too often?
Labels: law, politics, privacy, surveillance
Friday, August 7, 2009
Update: People v. Weaver
People v. Weaver, the New York case that was the subject of an earlier blog entry, has been published in the New York Reports at 12 N.Y.3d 433. The opinion may also be found on Lexis and Westlaw under the citations 2009 N.Y. LEXIS 944 and 2009 WL 1286044 (N.Y.), respectively.
Labels: police, privacy, surveillance, technology
Tuesday, August 4, 2009
UAE Mobile Provider Installs Spyware on Customers' Blackberries
Wired and Silicon Valley have reported that a mobile telephony provider in the United Arab Emirates installed spyware on the Blackberries of subscribers to its services. Blackberry users were prompted to download a software update. Once the update was installed, however, users complained that their devices’ performance suffered and that their batteries were quickly exhausted. As it turned out, the update had each device contact a certain server for registration. The high number of devices attempting to connect simultaneously caused the server to crash, and because each Blackberry kept retrying the server after the initial failure, it quickly drained its own battery.
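The reported behavior sounds like a classic retry storm: each device hammering the registration server in a tight loop after every failure. For contrast, here is the standard mitigation, exponential backoff with jitter, sketched in Python (generic logic for illustration, not BlackBerry's or SS8's actual code):

```python
import random
import time

def register_with_backoff(try_register, max_attempts=8, base_delay=1.0):
    """Retry a failing registration call with exponential backoff and
    jitter, instead of retrying in a tight loop (which drains the
    battery and floods the server, as the reported update apparently did)."""
    delay = base_delay
    for _ in range(max_attempts):
        if try_register():
            return True
        # Randomize the wait so thousands of devices don't all retry
        # at the same instant after a server crash.
        time.sleep(delay * random.uniform(0.5, 1.5))
        delay = min(delay * 2, 300.0)  # cap the wait at 5 minutes
    return False

# Demo with a stub that always fails (tiny delays to keep it quick).
print(register_with_backoff(lambda: False, max_attempts=3, base_delay=0.01))
```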
Code analysts reported that the update included code to permit surveillance of the Blackberry’s contents and communications, although this feature was deactivated upon initial download. The code was evidently written by US-based company SS8, which provides surveillance solutions to telecommunications providers as well as products for intelligence and law enforcement. As reported on Wired, analysis by the company Veracode suggested that installing the surveillance software on the user’s handheld device, as opposed to relying on surveillance at the server level, would prevent messaging encryption from frustrating attempts to examine communications sent from and received by the device. Rather than intercepting messages in transit through the server, the code would have the device deliver copies of the content stored on it to a special server. These copies would be in unencrypted form, since they would either be generated prior to the application of encryption, in the case of sent messages, or have been decrypted with the user’s key, in the case of received messages.
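The logic of that analysis can be sketched in a few lines: whatever runs on the device sees the plaintext before encryption is applied (and after it is removed), so transport encryption offers no protection against it. The "encryption" below is a toy XOR and the exfiltration is a print statement; this illustrates the principle only, not the actual code:

```python
def toy_encrypt(plaintext: str, key: int) -> bytes:
    # Stand-in for real encryption: a single-byte XOR, for illustration only.
    return bytes(b ^ key for b in plaintext.encode())

def exfiltrate(plaintext: str) -> None:
    # Stand-in for delivering a copy to the "special server".
    print(f"[copy diverted to surveillance server] {plaintext}")

def send_message(plaintext: str, key: int) -> bytes:
    # Code running on the device sees the message *before* encryption...
    exfiltrate(plaintext)
    # ...while an eavesdropper on the network sees only the ciphertext.
    return toy_encrypt(plaintext, key)

wire_bytes = send_message("meet at noon", key=0x5A)
print(wire_bytes)  # unintelligible without the key
```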
Monday, August 3, 2009
Lack of Clarity with Respect to the Fate of CLEAR Data?
Anita Ramasastry recently wrote an article (Note: this link no longer points to the correct article; until that problem is corrected, you may find the original article here in Google's cache) for FindLaw discussing the imminent demise of CLEAR—a private company that worked in conjunction with the Transportation Security Administration to offer customers less hassle at airport security in exchange for some of their privacy (and an annual membership fee). Perhaps it was inevitable that some enterprising American would develop this kind of business model in response to the increasingly burdensome and inconvenient security measures imposed at airports after 9/11. One might question, however, whether the federal government should have allowed it (see also this article for criticism that CLEAR failed to deliver on its “promise”). The business model was made possible by the TSA’s "Registered Traveler" program.
Although CLEAR was not the only provider of such services in the US, it was the most popular, with approximately 165,000 members, according to Ramasastry. She reports that members had to provide CLEAR with biometric data in the form of fingerprints and iris scans to participate in the program. This data was then encoded on the member’s CLEAR card, which had to be tendered to bypass the standard security checkpoint lines. Now that CLEAR is going out of business, Ramasastry asks, what will happen to all the personal data it holds? Will it be sold to one or more other companies? Will the TSA claim it? What say does each member have as to what will happen with his or her data?
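For what it's worth, card-held biometric verification generically works something like the sketch below. I don't know the details of CLEAR's actual design, and real systems use fuzzy template matching tolerant of scan variation rather than the exact hash comparison shown here; the point is simply that the sensitive template lives on the card rather than in a checkpoint database:

```python
import hashlib

def make_template(biometric_sample: bytes) -> bytes:
    # A hash stands in for a real (fuzzy) biometric template so the
    # sketch stays self-contained.
    return hashlib.sha256(biometric_sample).digest()

def checkpoint_verify(card_template: bytes, fresh_scan: bytes) -> bool:
    # The checkpoint compares a fresh scan against the template read
    # off the member's card.
    return make_template(fresh_scan) == card_template

enrolled = make_template(b"iris-scan-of-member-123")
print(checkpoint_verify(enrolled, b"iris-scan-of-member-123"))  # True
print(checkpoint_verify(enrolled, b"someone-else"))             # False
```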
Unlike the EU, the US doesn’t have any overarching legal instrument that establishes a basic framework for the handling of personal data. And as Ramasastry points out, CLEAR, as a private company, is not subject to the same kinds of privacy regulations as government agencies. But should companies that operate in this area not be subject to the same privacy standards as government bodies? Or should the TSA be authorized to intervene to secure personal data on behalf of former customers of CLEAR? An announcement on the CLEAR website reassures customers of its commitment to protect their personally identifiable information. Yet, even assuming CLEAR had a strong corporate privacy policy in place, it’s UNclear how the company will ensure that that policy is upheld if it ends up being liquidated in bankruptcy. Not to mention that former customers may find it difficult, if not impossible, to seek compensation for any violation of the policy. The website also speaks of TSA/federal requirements. But one source has suggested that neither the TSA nor the Dept. of Homeland Security has any relevant requirements in place. The TSA website itself states that “all RT [Registered Traveler] service providers were obligated to follow data security standards to continue offering service [following the initial pilot project]. Each service provider's use of data, however, is regulated under its own privacy policy and by its relationship with its customers and sponsoring airport or airline.” (emphasis added) The only data usage requirement that the TSA imposed may have been that “RT service providers . . . use customer data only for purposes of the RT program unless customers expressly opted-in to other uses.”
In the meantime, the other two Registered Traveler operators, FLO Corp. and Vigilant Solutions, have reportedly also closed down the special security clearance lanes they operated at US airports.
Labels: data protection, flight screening, privacy