Friday, October 16, 2015

Computers Freedom Privacy 2015 Conference notes

Notes from some of the panels that I attended at the Computers Freedom Privacy Conference this week.


Government Hacking Panel

Hacking as a next-best solution to backdoors?

Soghoian: rejects both options

Knacke: are there viable third-party options? Sees the parties at play as Carrier, Court, and Law Enforcement; proposes that a contractor be the one to interface with carriers to get data, to help with the problem that local/state law enforcement won't have enough training

Is hacking just one device better than backdoors in all devices?

Some mention was made that this still leaves all devices vulnerable to the exploit used against the single device. Discussion that the Wiretap Act standard (alternatives exhausted, etc.) should be the minimum standard process; currently there is a process vacuum.

Hacking doesn't tend to scale well (as compared to the 215 program)

Also compared to 215: years of secrecy about use, which sidesteps public debate. There hasn't been any transparency on hacking; no Congressional hearings with technologists.

Could companies be required to help hack under the All Writs Act, e.g., push a malicious OTA update to a device with the payload?

Adversarial relationship between Law Enforcement and US companies that operate globally.

Is it different to require a company to turn over server info vs. requiring the malicious update push? And is that malicious OTA different in any meaningful way from a backdoor?

Without the assistance of the companies, are you limited to drive-by attacks on home wifi networks, or phishing attacks?

LEOs / international espionage: they might impersonate a company to push these malicious OTAs; Harris Corporation (maker of Stingray devices) has tools and engineers that bleed between law enforcement and national security contexts.

Knacke: Post-9/11, companies came to government and said "what do you need from us?", and some of that was codified in law; post-Snowden, that level of cooperation is seen as more problematic. But we should make policy now, when it's not an emergency situation.

The All Writs Act is ex parte, could be used in a time-crunch emergency, and would then create harmful precedent.

Marcy Wheeler question to panel: How would attribution of evidence work in court if it was acquired via hacking, given that attribution in the hacking context (OPM/China) is problematic?

Knacke answer: The rosy scenario is that disclosure of the vuln used is required (it's likely to be discovered anyway if used too much). So LEOs should have access to updated vulns; he thinks this would improve security because vulns would be disclosed and recycled regularly.

Soghoian: iOS jailbreaks go for about $1 million on the 0-day market. Do we want state/local LEOs to have access to something worth $1 million that they can resell, or that could be stolen from them? State/local officials get 2 days of training with Stingrays; 2 days of training is not enough to be entrusted with iOS vulns.

Panel says that only people with skills and infrastructure should have access to the tools that leverage these vulnerabilities. Discussion about whether the targets will figure out the hacks by analyzing them.

The UK recently passed backdoor legislation; it published a hacking guideline manual because it was sued for not having rules, so it made rules ex post.

Soghoian: hacking by government makes targets of people who have done nothing wrong, e.g., the Gemalto engineers who were hacked to get access to what they have access to. Tor, before this August, had no automatic security-update mechanism, but now does. Previously the FBI could use non-zero-days; once Tor users update to the auto-updating version, that will drive up costs for the FBI and force more reliance on zero days. Watering-hole operations, where the FBI delivers malware, will only work with unpatched vulnerabilities. The move to auto updates might have a bigger impact than the move to widespread encryption.

Internet Content Blocking by ITC

Rebecca Tushnet's notes: http://tushnet.com/2015/10/13/cfp-2015-internet-content-blocking-by-the-itc/

Intermediary Liability

Rebecca Tushnet's extensive notes: http://tushnet.com/2015/10/13/cfp-intermediary-liability/

Laws at play: the First Amendment, §230 of the CDA, which grants immunity to intermediaries, and §512 of the DMCA, which grants conditional immunity.

Attacks on §230: the SAVE Act (part of the Justice for Victims of Trafficking Act) creates a crime of advertising a person, but 'advertising' is not defined, so it could be used to go after websites on which ads appear.

§512 has algorithmic overreach problems; the DMCA is being used for privacy interests.

Content owners want notice-and-stay-down. Copyright notice & takedown are uniquely susceptible to algorithmic enforcement (unlike privacy Right To Be Forgotten claims that need human review); there is pushback now, though, with Lenz holding that fair use must be considered.
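A minimal sketch of why takedowns are so automatable, with entirely hypothetical names and digests: a "stay-down" filter can be nothing more than comparing a hash of each new upload against hashes of previously removed works. Note everything it cannot do: fair use, licensing, context, i.e., the human judgment Lenz says must be considered.

```python
import hashlib

# Hypothetical: digests of works previously removed via takedown notices.
taken_down = {"0" * 32}  # placeholder digest, not a real one

def md5_of(path: str) -> str:
    """MD5 of a file, read in chunks so large uploads don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def should_block(upload_path: str) -> bool:
    # Pure byte-matching: no fair use, licensing, or context analysis.
    return md5_of(upload_path) in taken_down
```

Real systems like YouTube's Content ID use perceptual fingerprints rather than exact hashes, but the point stands: the matching step needs no human in the loop.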



Vulnerability Disclosure Panel

Many industries deal with risk management and have sophisticated methods for sharing information about risk. Vulnerability disclosure: how much should be told, to whom, and when? Full disclosure vs. responsible disclosure vs. zero disclosure (which was tagged as "tell no one, ever", not zero-day market sale?). Some people call responsible disclosure blackmail, but some vendors don't behave in a responsible manner either.

Information-sharing secrecy: some commercial network outages are kept secret because the outages could reveal vulnerabilities in the networks; similar to removing nuclear power plants from maps.

Risk communications: do we know how to do this? Granick says that we may end up with a cyber 1% who understand the risks and are patched. Trust issues (see Facebook’s Threat Exchange).

If you keep information under wraps, the information becomes criminalized, but the internet (e.g., mechanisms like the full-disclosure mailing list) pushes back on this; so does independent discovery. Tension between disclosing everything and restricting everything.

The security industry feels that it has lost the stamina to discuss disclosure; the status quo works better than regulation, especially given the fear that regulation would censor independent researchers.

Whom would open processes help? Only commercial interests? Operational security enhancements are important for the internet; consider the nature of the information and civil liberties.

Patrick McDonald, Google:

Referenced an AOL Christmas 2000/2001 bug: it's very hard to get information when defending against a new hack; if you have a PoC you can at least shut off the affected service (during Heartbleed, they took down a few services to reduce exposure).


Vulnerability researchers think they are special snowflakes, and vendors think they are special snowflakes; vendors want to censor researchers because they think the problem will go away if they suppress it.

However, see on seclists how often independent discovery happens; he notes also that they get PoCs from separate researchers with matching MD5s, which means researchers are sharing info among themselves and that multiple of them choose to share with the vendor. So there is likely even more sharing among researchers than vendors think.
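A rough sketch of the duplicate detection he's describing, with made-up submissions: hash each PoC as it arrives, and identical digests from different researchers reveal the sharing.

```python
import hashlib
from collections import defaultdict

# Hypothetical PoC submissions: (researcher, raw PoC bytes).
submissions = [
    ("researcher_a", b"GET /admin HTTP/1.1\r\n..."),
    ("researcher_b", b"GET /admin HTTP/1.1\r\n..."),  # byte-identical to the first
    ("researcher_c", b"POST /login HTTP/1.1\r\n..."),
]

# Group submitters by the MD5 of what they sent.
by_digest = defaultdict(list)
for who, poc in submissions:
    by_digest[hashlib.md5(poc).hexdigest()].append(who)

for digest, names in sorted(by_digest.items()):
    if len(names) > 1:
        print(f"{digest[:10]}... submitted by {len(names)} researchers: {names}")
```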


Incentives for researchers? Mostly there are bad incentives out there: see the DEF CON 9 arrest; weev's prosecution; his OWASP friend who found an airline vulnerability in a mobile app and reported it, and was met with the threat of a lawsuit and a fine. See also the recent FireEye incident, and Oracle's Mary Ann Davidson blog post ("we don't need researchers").

Schneier: it's easy to mock the vendors' stance that if researchers don't find bugs, vendors won't have to patch, because the zero-day market incentivizes the bug finding anyway.

Not all vendors are bad; see the Bugcrowd talk at Black Hat. At that talk, they said that even if you don't provide a cash award or a t-shirt, but instead just promise not to sue, researchers greatly value that social contract. Researchers get a venue where they feel safe, get kudos, and can build a portfolio. Notes that Facebook has made direct hires from its bug bounty program.

Bug bounty programs: compared to a single point-in-time pen test. (Possibly referencing this?  https://bugcrowd.com/resources/4-reasons-to-crowdsource-your-pen-test)


Legislation so far seems aimed at pushing research underground; even through lawsuit threats, though, word gets out. Instead of reacting this way, vendors should work with researchers.

Dr. Andrea Matwyshyn:

Focusing on vuln disclosure in isolation from the rest of IT is a bad idea

Need for a common language, then assess risk accurately; see ISO standards efforts; need for a security focus top-down from the C-Suite

FTC's "Start with Security" offers 10 questions for an org to start with

Building structures: not always better with bacon! Need to fit solutions to the problem.

Balancing usability & security; push also to update govt. procurement standards to include security.

Meaningful info sharing: the lack of security metrics is driven by the lack of standard formats in advisories; drafting & presentation could be standardized

Inadequate identification of libraries in embedded devices is also a risk; customers lack access to debug, so devices are hard to patch

Legal regimes should evolve to address the challenge of feedback loops. See §1201 exemption 25 (security research); CFAA circuit splits. Wants to centralize prosecution with DOJ only, no state prosecutions.

Need for tools to do supply chain assessments

· See Data security agencies guidance

· SEC evaluation of Oct 2011 guidance

Kids: should be allowed to tinker naturally, but can’t given surveillance & monitoring & legal threats

Asymmetry of public discourse: researchers should be more front & center; need to be vigilant like civil liberties groups were with CISA

Govt says it releases some vulns it finds, but transparency is lacking



Patrick McDonald:

A clear, concise disclosure policy & formats really help (Wendy note: see also the Bugcrowd Black Hat talk about how bad 95% of vuln reports are)

- Transparency about what vulns are found. No metrics on how many vulns are submitted to vendors or how long it takes them to respond (months, years); need to demonstrate you serve the public interest

Differences between whistleblowers & security researchers: whistleblowers tend to have more legal protection

Incident response by CERTs: advisories have become less technical these days, so lay people can get what they need.



Body-worn police cameras

Sold as "record what the police see"; there's tension between police accountability and public privacy. Should defendants have access to raw footage, or only redacted? Currently there is a variety of standards. DC: all kids' faces and bodies are blurred; federal law enforcement: faces redacted. Inside houses, some departments redact diplomas, prescription bottles, and faces.


Tech: blurring and replacement are the two main methods. But tools exist that can reconstruct images from reflections, so is this enough? Also, Google Street View blurs faces, but people are still recognized by those who know them. Should there be more redaction, showing only edges/outlines?
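For the blurring method, a minimal sketch assuming OpenCV and its stock face detector (real redaction tools need much more robust detection and tracking, which is why the manual, frame-by-frame review described below still matters):

```python
import cv2

# Stock Haar-cascade face detector that ships with OpenCV; a stand-in
# for the far more robust detectors a real redaction tool would use.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Gaussian-blur every detected face region in a single video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```

The panel's replacement option would swap the GaussianBlur call for an opaque fill (setting the region to zeros), which, unlike blurring, leaves nothing to reconstruct.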



Norfolk PD: redacts videos requested via FOIA, doing it manually, frame by frame. Footage involved in criminal prosecutions is not released until all appeals/process are finalized.

Discussion of what happens if you record in a hospital, or with domestic violence victims: do you keep recording? What gets redacted? Officers have discretion to turn off the camera, but the speaker notes that in domestic violence cases, photos will often be taken at a hospital anyway. All footage recorded is kept for 30 days; after that it is kept only if needed.



Taser rep:

· Seeking to improve manual redaction process; their cameras upload to an online portal, evidence.com; only police departments have access to the data within their accounts there.

· Footage uploaded has an audit log. All redactions and edits are made to copies of the original, which can always be recovered (a toy sketch of this model follows this list).

· The Taser rep was very adamant that only agencies could access the data & that it was highly secure, but didn't back up those assertions with any mention of outside pen testing or other security testing.
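A toy sketch of the model the rep described (edits on copies plus an append-only audit log), not Taser's actual system:

```python
import hashlib
import time

class EvidenceStore:
    """Toy content-addressed store: originals are never modified; every
    redaction writes a new copy and appends an audit-log entry."""

    def __init__(self):
        self.blobs = {}       # digest -> immutable bytes
        self.audit_log = []   # append-only edit records

    def ingest(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blobs[digest] = data
        self.audit_log.append(("ingest", digest, time.time()))
        return digest

    def redact(self, source: str, edit_fn) -> str:
        # The original blob is left untouched; edits produce a new blob.
        new_digest = self.ingest(edit_fn(self.blobs[source]))
        self.audit_log.append(("redact", source, new_digest, time.time()))
        return new_digest
```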

Dr Corso – EECS

· Accountability tool vs investigative tool: important distinction

· There is a public belief in the veracity of the footage

· Tech evolution: multimodal sensing, record infrared, motion capture from smart clothing

· There should be widespread adoption of benchmarks to test redaction against



Export control panel

Randy Wheeler
* cost/benefit analysis & scope of control are taken into account

* took Wassenaar Arrangement (WA) control text language >> determined the initial scope of controlled items in the proposed rules

* have discretion in how to control, i.e., license requirements; how to make license exceptions or other permissive measures

* the proposed rule has restrictive license requirements and few permissive measures

* expected comments to address license requirements/policy, but got comments on the scope of control, i.e., the WA control text

* scope of control: what is black & white & read all over? >> WA put together the control text and the intrusion software definition

* what is intended not to be controlled >> takes many readings to capture

* their scope-of-control text also captured a sunburned zebra

* they appear to have captured within the control scope defensive products that protect against the offensive products intended to be in scope

* in addition to products, they control technology for the development of intrusion software (intrusion software is defined in the regulation); comments focused on how the control language would undermine recent progress in developing incentives to disclose vulns via bug bounties, which contribute to cyber safety, and would do more harm than good on the cybersecurity front generally

* now IDing issues raised in comments >> looking at the scope of items subject to control

* open meeting to discuss the tech control >> is it reasonable to go forward with the text as provided by WA; are there measures that can be taken to mitigate the harmful effects of control via license exceptions/licensing policies, or is the language such that we can't find a way around the harm it would cause?

* interpretations/notes of definitions to understand scope of control

* will have additional meeting on scope of product controlled

* watching EU parliament, other countries addressing issues raised by control entries

* seeking to address concerns raised by control list entries

Suzanne Nossel, PEN American Center (writers)

* sees a paucity of concrete evidence of surveillance harms

* did a survey

Antoinette Paytas

* industry recognizes the sentiment, but is concerned about the impact on telecom and information systems

* companies don't provide single-use surveillance equipment, but general-use

* many products fall under encryption controls that are >10 years old; they understand these and can get bulk licenses from Commerce

* under the proposed controls, products would move out of the encryption section; concern about the controls proposed

* clarification of terms causes concern >> Commerce has gone past WA; if I have knowledge that my general-purpose networking equipment will be combined with other components to make a surveillance system, I need a special license. Isn't this a de facto control on all telecom equipment? Is this an effective control?

* items that would meet all the control requirements are generally a combination of uncontrolled items (collect, store, analyze) >> each of those is not individually controlled

* terms that are unclear: "carrier class IP network"; "relational network"

* instead of controlling tech, impose sanctions on bad actors

* Duality >> are export controls the right method?

* China is not a member of WA; Chinese companies produce networking equipment used by some of these regimes

* WA members have latitude in implementation

Mailyn Fidler

* can a multilateral agreement work?

* some past controlled items don't have dual uses (e.g., biological weapons)

* the flexibility of WA can be a downfall: member states have discretion

How much is driven by sale of products?

* Suzanne: some key providers of these technologies are outside WA, but we want cleaner hands; if we pull back & China steps in, we still want to do our part to set a bar; we don't want to be the providers >>> these are valid statements for us to make; the US as standard-setter

Privacy International paper?

Definition of "intrusion software":

"Software" specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', of a computer or network capable device, and performing any of the following:

a. The extraction of data or information, from a computer or network capable device, or the modification of system or user data; or

b. The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.

Thursday, October 08, 2015

More on software liability and Black Hat

Over on Plain Text, I address the idea that the "eversion of cyberspace" brought about by putting software into everything will affect how software products liability works. Why can't you sue software makers for bugs? And how might the law evolve in the IoT era?

Monday, June 01, 2015

Products Liability in the software world

Products Liability is the part of Torts that addresses harm to people from, well, products. For a variety of reasons, there are very few Products Liability cases about software, but the biggest reason is pretty much that Torts is kind of like the evening news. In my Sociology of Mass Media class back as an undergrad, we learned a lot about the "if it bleeds, it leads" idea, and Torts turns out to be a fan of that concept. The large majority of Torts cases are about actual physical harm to people, and so far, software tends to stay safely tucked away in our computers. This will probably change a lot with the Internet of Things on the horizon, and so I've been wasting too much time thinking about how Products Liability concepts will play out with software.

Products Liability in the world's smallest nutshell: generally, you can sue under one of three theories.
  1. Manufacturing defect: the particular instance of the product that injured me was defective in some way. This is the "easiest" type of Products Liability suit, so long as the item that injured you wasn't destroyed in the accident.
  2. Design defect: this one is harder, but probably far more common. In this one it's not that one particular item is defective, but that EVERY instance of that particular product is defective.
  3. Failure to warn: this product injured me because I wasn't aware that it would hurt me in that particular way. This is the type of lawsuit that's responsible for loooooong warning stickers on everything.

One concept in Products Liability under the area of "design defect" is the idea of optional safety features on a product. If a company was aware of a safety feature but did not include it in the product, can it be held liable for harm to a person that the missing optional safety feature might have prevented? This is not an easy question to answer, because much of the time the reason the safety feature is missing from the product is that it would make the product more expensive to produce. The courts sometimes like to let the market "speak": they insist that consumers should be the ones to decide whether an optional safety feature is worth spending on. The purchaser of the product is not the only one who gets a say, of course, but by and large the let-the-consumer-decide idea has a lot of appeal.

(When you have a design defect case, you also generally have to prove a reasonable alternative design, and having that safety feature available on other products like the one you're suing over is basically a reasonable alternative design nicely gift wrapped for you.)

The courts weigh the risk vs the utility of the particular design when deciding the cases. For instance, in Scarangella v. Thomas Built Buses, Inc., the court looked at "seven nonexclusive factors to be considered in balancing the risks created by the product's design against its utility and cost. As relevant here, these include the likelihood that the product will cause injury, the ability of the plaintiff to have avoided injury, the degree of awareness of the product's dangers which reasonably can be attributed to the plaintiff, the usefulness of the product to the consumer as designed as compared to a safer design and the functional and monetary cost of using the alternative design (id.). An additional pertinent factor that may be taken into account is "the likely effects of [liability for failure to adopt] the alternative design on … the range of consumer choice among products" (Restatement [Third] of Products Liability § 2, comment f)." Scarangella v. Thomas Built Buses, Inc., 93 N.Y.2d 655, 659 (1999)

So this is all a very long windup to the problem of Volvo's pedestrian detection. The story in a nutshell: some folks were demonstrating Volvo's self-driving car to themselves. The car ran into two people standing in front of it. Volvo says "oops, pedestrian detection is $3000 extra; this model didn't have it."

Now, if a car hits a pedestrian because it's lacking an optional safety feature, how do we weigh the risk-utility of this design, given that the feature was available but not included? So much of what courts look at is the price impact of the optional feature, and here it looks like Volvo gave us a price: $3000. However, how much of that $3000 is the true cost to Volvo to install this, and how much is just them wanting to charge a lot for a software library because they can?

I know pretty much nothing about how Volvo's actual pedestrian detection works, so let's consider an imaginary car where the pedestrian detection is purely a software library addition to the car's software, and doesn't require any new physical sensors or rewiring of the car, etc. In that instance, could the car company make pedestrian detection available only at a $3000 add-on price? You might say on the one hand that software is basically cost-free once it's been developed. There are going to be tests to do with each model, most likely, but once a particular model has been tested out, adding the software to a particular individual car of that model type should be just about cost-free. This is in contrast to a piece of hardware that requires, perhaps, a hand guard to be manufactured and installed for every single instance of the item.

On the other hand, if car companies could not recoup their software development costs by charging extra for software options, would the incentives be strong enough for them to develop the options? If every other car on the market had pedestrian detection available, the laggard car company would probably develop (or just license) the software for their car. But what would incentivize the first adopter to make it? Could they capture enough of the market by having this new feature available without charging for it as an upgrade?

The inherent non-rivalrous nature of software (once complete, it can be reproduced indefinitely for negligible cost) upsets the standard risk-utility calculus: the monetary cost of using an alternative design drops toward zero after the initial development.
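To see how, run the arithmetic with invented numbers (every figure below is hypothetical, including the convenient assumption that Volvo's $3000 is pure development-cost recovery):

```python
# All figures hypothetical, purely to illustrate the amortization point.
dev_cost = 30_000_000    # one-time cost to develop and validate the software feature
hw_unit_cost = 400       # recurring per-car cost of a comparable hardware feature

for units in (10_000, 100_000, 1_000_000):
    print(f"{units:>9,} cars: software ${dev_cost / units:>8,.0f}/car "
          f"vs hardware ${hw_unit_cost}/car")

# At 10,000 cars the software feature "costs" $3,000 per car; at a
# million cars it is $30, while the hardware feature still costs $400 each.
```

Under the Scarangella factors, the "functional and monetary cost of using the alternative design" is therefore not a fixed number for software; it depends almost entirely on how many units the development cost is spread across.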

It will be interesting to see what happens with safety-oriented software options going forward in self-driving cars.