Thursday, November 17, 2016

Schrodinger's cat and getting through law school as a programmer



This is a "wouldn't fit in 140 characters" overflow blogpost from a twitter discussion of software developers learning in law school. I'll attempt to explain just why law school, and thinking about legal "balancing" and "multi factor" tests can be so frustrating when you're trained as a software developer.

Computer science trains developers to expect logical proofs, and compilers give you definitive, immediate feedback of "yes, this is at least one kind of correct because it compiles," which is something you can't get in the legal world. But it's more than that. Programming is a step-by-small-step process in which the computer takes you literally. You're probably saying "yes, I know that," but I mean it quite literally: the computer does exactly what you tell it to do. A programming language is a set of tools you can use to construct a solution to a problem that you've broken down into tiny, manageable steps. The first time I figured out that there's a logical system for solving problems, where being at state "A" means you apply one of a known set of possible next steps, and then another, and another, until you reach a solution, was amazing. It was relaxing and exhilarating and fun all at the same time to work through the problems and see how the logical next step at each stage would get you to the answer. Realizing that divide, conquer, apply the next step from the toolchain, and repeat would get you somewhere, and that it would be fun to do so, is addictive.

In college computer science classes we learned not just a particular language, but that any programming language was just "syntactic sugar," and that any non-NP-complete problem could have a solution written if you just attacked it the right way. It taught us to, again, break an issue down into discrete steps, apply the tools of the programming language, and end up with a working piece of code. A developer gets pretty good at fitting problems into recognizable boxes that look sort of like previous problems, and applying the same pattern of solution to them.

Debugging also teaches developers to be incredibly detail oriented and literal. The computer does not do what you meant, does not do what you thought you typed in, but only explicitly and completely and totally exactly what you actually typed in. The compiler is going to follow the directions you gave it, and execute those, and the outcome is always the same for that same set of directions.

So our software developer who decides to go to law school realizes that law school isn't like software development. Sure, there is no legal compiler. Gotcha: outcomes vary, and these are more like heuristics. But this is still a brain that's been trained to fit problems into boxes, to be literal about what rules apply, and to follow logical deduction chains. The law usually says one thing, but hey, not in this case: the mental model I've built of what the law says now has an exception to it. The logic branches under this condition. I update my mental model to account for that exception and move on. But I'm still, even if subconsciously, constructing a big logical flowchart in my head that's going to handle all these curve balls in judicial opinions.
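If I had to write that flowchart down, it would look something like the toy function below. The rules in it are invented placeholders, not actual tort doctrine; the point is just the shape of the thing, a general rule that keeps sprouting exception branches.

    # Toy sketch of the mental flowchart; the rules here are made up, not real law.
    def my_mental_model(elements_met, consent, todays_new_exception):
        if not elements_met:
            return "no liability"    # the general rule I started with
        if consent:
            return "no liability"    # branch added after one surprising opinion
        if todays_new_exception:
            return "no liability"    # branch added after the next one, and so on
        return "liability"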

Until it doesn't work anymore. My first breakdown came over two torts cases that, to my mind, came out in logically inconsistent ways. My email to my professor was pretty much trying to expand my rules to handle this inconsistency: "How should we reconcile these?  Is it just that under battery, particulate matter is considered one way and under the other tort it has a different nature?... The engineer in me is having a bit of trouble with particulate matter being a Schrodinger's cat here, both enough for a tort but not enough for a tort."

The thing is, developers are pretty good at tackling incredibly complex problems and writing up solutions, so the first curve ball isn't really going to stress your worldview that much. You just add an exception to your mental model and handle it. At some point, though, the mental model gets stretched too far. It won't handle another contradictory result. Multi-factor tests that judges apply come out all over the place, briefs claim to be arguing logically and deducing each part of the argument from the previous one while actually making leaps in inference, and your poor developer brain will just decide the whole thing is insane.

I had a series of mental freakouts my 1L year. It's not that the tools you learn in programming are useless in legal analysis; they're actually really helpful. Reading an opinion, constructing a mental model of the relevant law, finding the holding, and following how it was applied in a particular instance are all helped by the mental toolkit a developer has. But making peace with the logical-but-not-really-logical fuzzy mess of legal reasoning, and with how the common law really works, is kind of a process. Pretty sure my subconscious is still building that exception-laden tree, just working off "fuzzy logic" precepts now.



Tuesday, August 02, 2016

Becoming an infosec con speaker, Part 2

Back in April, I wrote about speaking at my first BSides, the BSides Charm conference in Baltimore. In that post, I discussed getting selected for the awesome BSidesLV Proving Ground program, which pairs newbie speakers with mentors.

My BSidesLV talk was just this afternoon, so while it's still fresh in my mind I wanted to write up a little bit about how the talk came together. My submission was on the same general topic as the BSides Charm talk, so we used that slide deck as a starting point.


Security Vulnerabilities, the Current State of Consumer Protection Law, & how IOT Might Change It





For this one, though, I'd put "I think IoT will change software liability" in the submission, so we chose to really push that angle more than the vulnerability disclosure/failure-to-warn angle.

The experience of having another person to work through a talk with was incredibly helpful, and I can't recommend it enough. I got feedback on everything from questions like "I don't know what to do with my hands during the talk" to tips like "no full sentences on the slides" (if you view my slides you'll see I actually ended up breaking that one). Here are some notes from one of our discussions:

* front load more of the idea? Start with a story… get comfortable with delivery
* no full sentences in the presenter notes

1. print out slides, write one or two things on each slide to refer to
  * distill what to get out of the slide
2. imagine the situation, and what would happen?
  * context setting


Looking at this now, some of it seems obvious, but it helped me realize I wasn't doing those obvious things, and it helped frame the talk. Starting with a story... I was like, yes, I have done that in almost every legal paper I've written. Why wasn't I doing that here?

The big turning point in developing the talk came for me in mid-July. I had a set of slides that I was mostly OK with. I did a dry run over Hangouts, and I got... stuck about 3/4 of the way through the talk. I had to restart. It wasn't flowing, I wasn't setting up points I wanted to make later, I meandered. It was painful.

So I enlisted a stuffed animal. I pulled up those slides & tried to give the talk again and again, to my little stuffed fox:



I got stuck, again. So I pulled out a paper notebook, and made a bubble flow chart of what the big ideas should be, and how they would flow. This was really, really, really, really hard. I was sitting there going "I don't know what I'm doing as a speaker and why won't this work."

I started thinking of the spots where I got stuck as pivot points, where the talk had to transition into a new idea. My problem was that I wasn't recognizing that I had to make those pivots; I didn't think about setting up the next section of the talk until I landed on its slide and had to pick up the threads.

Here is where mentors are awesome: my mentor had pointed out that I needed to signal these transition points, and we had sort of talked about showing them on the slides. In PowerPoint's Presenter View (which is fabulous!) you can see the next slide coming up. So I made a slide with very little text for each transition and colored those transition slides solid purple, so they would be visible in my peripheral vision when I glanced down at the screen. Just seeing that solid purple coming up was a reminder that I had to start framing the next section. That one change made the talk flow so much better.

Since this was a legal talk for an infosec audience, trying to get the right level of legal terminology explained without descending into jargon was important, as was getting across the background information. This audience wouldn't have a 1L law school background. Tort law and strict liability are important to software liability, but mean nothing to someone without a legal education. Having a mentor remind me to figure out how to explain those concepts in one sentence was really helpful.

My slides are online now; the talk was recorded, but I'm not sure when it will be posted. I actually don't remember much about giving it. I do sort of remember missing some things that I'd wanted to say, but since each run-through was slightly different, that was always going to happen. I don't think I could give an overly rehearsed talk, because I'd just be reading off the speaker notes, and it would probably come out kind of flat. I had phrases in my speaker notes like "big idea to get across here," but I didn't look at them as much as I thought I was going to.

So, that was how my talk came together. It was really fun, and I'm so grateful for the opportunity to work with a mentor on this talk, and for the chance to come to Las Vegas in August and speak at one of these conferences. Last August was my first time attending, when I got an academic sponsorship to Black Hat. Never in a million years did I think I'd be back the next year as a speaker. Thank you BSides Proving Ground for that chance!


Sunday, April 24, 2016

Becoming an infosec con speaker


I attended my first infosec conference, HOPE in NYC, in July 2006. And then yesterday I spoke at an infosec con -- BSides Charm 2016, something I seriously still have trouble wrapping my head around.

My talk was "Failure to Warn You Might get Pwned: Vulnerability Disclosure and Products Liability in Software" on how products liability law might someday apply to software (maybe - a lot of the talk is on why it doesn't apply now). It's a topic I've been interested in for a while; I wrote about it after the BlackHat keynote this summer.

I don't particularly have any qualifications to give this talk. I took a Products Liability law class in law school last year. I know a little bit about vulnerability disclosure from working in the software industry. The specific idea for this talk came out of hearing two different people, in different contexts, say that they thought failure to warn and vulnerability disclosure might be worth looking at. And so I did. At the same time, Shmoo was opening their CFP and encouraging newbie speakers to submit.

Let me jump back a few years: at the 2012 OSCON in Portland, I got into a conversation that made me decide that I should apply to law school. But also at that con, I spent some hallwaycon time sitting at the "women in tech" table and talking with Suzanne Axtell. She was trying to encourage all of us sitting at the table to submit to conferences. At the time, I was working at a company with a pretty restrictive public speaking policy, so my take was mostly "I don't know anything interesting to talk about, I don't know how to get a talk accepted, and even if some miracle happened and I did have one accepted, my company wouldn't let me speak." So I forgot about it, and went off to school the next year. At some point, I subscribed to the Technically Speaking email list, mostly because the two women who run it are really smart & say interesting stuff about a pet cause of mine, boosting the numbers of women in tech.

And then I saw the Shmoo tweets, and realized that the "my company wouldn't let me" excuse didn't apply to someone unemployed. So I submitted. And was rejected, but I got a ticket registration, which for Shmoo is pretty amazing. Then there was a lot of chatter on Twitter about encouraging women to speak.


The Shmoo Firetalks CFP went out shortly after. I think my thought process was "well, I have a proposal written, I'll just submit it and nothing will happen, but whatever." So I submitted. And then it was accepted for Firetalks, and I had to actually get up and give this talk. But it was "Firetalks," aka "not real," and it actually turned out to be a lot of fun.

Shortly after, the BSides Charm CFP opened, and... I think my thought process there was "huh, people actually told me at Shmoo that they found my fire talk interesting, maybe I'll tweak and submit."  Then this:

Then, and who knows why, I got the crazy idea that I should submit to the BSidesLV Proving Ground program, where newbie speakers are paired with mentors. Then the crazy stuff happened, and both talks were accepted. (what?!?!) At which point I'm fairly sure I spent a few days believing that I was dreaming or something and I'd wake up to reality shortly. But no, I had to write (or really, modify) a talk.

This talk was a little hard since there was a lot of legal background to explain, and then a lot of speculation, and so I thought I'd try frontloading the legal theory this time. Not sure it worked so well. But, well, it was an approach.  My biggest problem prepping this time was that I love this legal area, and I have *too* *many* *thoughts* and it all feels important. Trying to pare back the amount of information, but still get across the key points, was hard. Not sure I came close to succeeding.

I wish I had talked more about risk-utility balancing, because that's what makes all of products liability so interesting. And then that leads into the social/policy goals of having it serve an insurance function, and... it all quickly spirals out into a lot of somewhat extraneous ideas.

I freaked out a lot about timing, and decided that using PowerPoint's Presenter View, and knowing roughly what slide I wanted to be on at what point, was the way to go. And then... somehow it was Saturday at 2 PM, and I was standing on a stage at an infosec con, talking to a room that was crazily not empty. For having started out feeling pretty confident, I was a nervous wreck by the end.

The Firetalks were, well, very different: I ended that one feeling pretty pumped, but there was more audience interaction, plus the back and forth with the judges right on stage. At BSides Charm, the lack of a clip mike kind of threw me (I should write sometime about my first run-in with clip mikes....), as did how tall the podium was.

So, that was my experience with my first real conference talk. I'll try to blog along the way to BSidesLV, although there is also going to be bar prep going on this summer, so.... I'll have to frame blogging as "study procrastination" perhaps.

Friday, October 16, 2015

Computers Freedom Privacy 2015 Conference notes

Notes from some of the panels that I attended at the Computers Freedom Privacy Conference this week.


Government Hacking Panel

Hacking as a next-best solution to backdoors?

Soghoian: rejects both options

Knacke: are there viable third-party options? Sees the parties at play as Carrier, Court, and Law Enforcement; proposes that a contractor be the one to interface with carriers to get data, to help with the problem that local/state law enforcement won't have enough training

Is hacking just one device better than backdoors in all devices?

Some mention made that this still leaves all devices vulnerable to the exploit used against the single device. Discussion that the Wiretap Act standard (alternatives exhausted, etc.) should be the minimum standard process; currently there is a process vacuum.

Hacking doesn't tend to scale well (as compared to the 215 program)

Also compared to 215: years of secrecy about use which sidesteps public debate. There hasn't been any transparency on hacking; no Congressional hearings with technologists.

Could companies be required to help hack under All Writs Act, ie push a malicious OTA update to a device with the payload?

Adversarial relationship between Law Enforcement and US companies that operate globally.

Is it different to require a company to turn over server info vs. requiring the malicious update push? And is that malicious OTA different in any meaningful way from a backdoor?

Without the assistance of the companies, are you limited to drive-by attacks on home wifi networks, or phishing attacks?

LEOs/international espionage: they might impersonate a company to push these malicious OTAs; Harris Corporation (maker of Stingray devices) has tools and engineers that bleed between law enforcement and national security contexts.

Knacke: Post-9/11, companies came to government and said "what do you need from us?", and some of that was codified in law; post-Snowden, that level of cooperation is seen as more problematic. But we should make policy now, when it's not an emergency situation.

All Writs Act is ex parte, could be used in a time crunch emergency, and would then create harmful precedent.

Marcy Wheeler question to panel: How would attribution of evidence work in court if it was acquired via hacking, given that attribution in hacking context (OPM/China) is problematic

Knacke answer: Rosy scenario is that disclosure of the vuln used is required (likely to be discovered anyway if used too much). So, LEOs should have access to updated vulns; thinks this would improve security because they would be disclosed and recycled regularly.

Soghoian: iOS jailbreaks are about $1 million on the 0-day market. Do we want state/local LEOs to have access to something worth $1 million that they can resell, or that could be stolen from them? State/local officials get 2 days of training with Stingrays. 2 days of training is not enough to be entrusted with iOS vulns.

Panel says that only people with skills and infrastructure should have access to the tools that leverage these vulnerabilities. Discussion about whether the targets will figure out the hacks by analyzing them.

UK recently passed backdoor legislation; published a hacking guideline manual because they were sued for not having rules, so they ex-post made rules.

Soghoian: hacking by government makes targets of people who have done nothing wrong, e.g. the Gemalto engineers who were hacked to get access to what they had access to. Tor, before this August, had no automatic security update mechanism, but now does. Previously, the FBI could use non-zero-days; once Tor users update to the auto-updating version, that will drive up costs for the FBI and force more reliance on zero days. Watering hole operations, where the FBI delivers malware, only work with unpatched vulnerabilities. The move to auto updates might have a bigger impact than the move to widespread encryption.

Internet Content Blocking by ITC

Rebecca Tushnet's notes: http://tushnet.com/2015/10/13/cfp-2015-internet-content-blocking-by-the-itc/

Intermediary Liability

Rebecca Tushnet's extensive notes: http://tushnet.com/2015/10/13/cfp-intermediary-liability/

Laws at play: the First Amendment, §230 of the CDA, which grants immunity to intermediaries, and §512 of the DMCA, which grants conditional immunity.

Attacks on 230: the SAVE Act (part of the Justice for Victims of Trafficking Act) creates a crime of advertising a person, but 'advertising' is not defined, so it could be used to go after websites on which ads appear.

512 algorithmic overreach problems; DMCA being used for privacy interests

Content owners want notice-and-stay-down. Copyright notice & takedown is unique in its susceptibility to algorithmic enforcement (unlike privacy Right To Be Forgotten claims, which need human review); pushback now, though, with Lenz requiring consideration of fair use.



Vulnerability Disclosure Panel

Many industries deal with risk management and have sophisticated methods for sharing information about risk. Vulnerability disclosure: how much should be told, to whom, and when. Full disclosure vs. responsible disclosure vs. zero disclosure (which was tagged as "tell no one, ever," not zero day market sale?). Some people call responsible disclosure blackmail; but some vendors don’t behave in a responsible manner.

Information sharing secrecy: some commercial network outages are kept secret, because the outages could reveal vulnerabilities in the networks; similar to removing nuclear power plants from maps.

Risk communications: do we know how to do this? Granick says that we may end up with a cyber 1% who understand the risks and are patched. Trust issues (see Facebook’s Threat Exchange).

If you keep information under wraps, the information becomes criminalized, but the internet (i.e. methods like the full disclosure email list) pushes back on this; so does independent discovery. Tension between disclosing everything and restricting everything.

The security industry feels that it has lost the stamina to discuss disclosure; the status quo works better than regulation, esp. with the fear that regulation would censor independent researchers.

Who would open processes help? Only commercial interests? Operational security enhancements are important for internet; consider nature of the information and civil liberties.

Patrick McDonald, Google:



Referenced an AOL Christmas 2000/2001 bug: very hard to get information when defending against a new hack, if you have a POC you can shut off the affected service at least (during Heartbleed, took down a few services to reduce exposure)


Vulnerability researchers think they are special snowflakes, and vendors think they are special snowflakes; vendors want to censor researchers because they think the problem will go away if they suppress it.

However, see on seclists how often independent discovery happens; he notes also that they get POCs from separate researchers with md5s that match, which means researchers are sharing info among themselves, and multiple of them choose to share with the vendor. So there is likely even more sharing among researchers than vendors think.


Incentives for researchers? Mostly there are bad incentives out there: see the DEF CON 9 arrest; weev’s prosecution; his OWASP friend who found an airline vulnerability in a mobile app, reported it, and was met with the threat of a lawsuit and fine. See also the recent FireEye incident, and Oracle’s Mary Ann Davidson blog post, “we don’t need researchers”.

Schneier: easy to mock the vendors’ stance that if researchers don’t find bugs, vendors won’t have to patch, because the zero day market incentivizes the bug finding.

Not all vendors are bad: see the Bugcrowd talk at Black Hat. At that talk, they said that if you don’t provide a $ award or a t-shirt, but instead just promise not to sue, researchers greatly value that social contract. Researchers get a venue where they feel safe, get kudos, and can build a portfolio. Notes that Facebook has had direct hires from their bug bounty program.

Bug bounty programs: compared to single point in time pen test. (Possibly referencing this?  https://bugcrowd.com/resources/4-reasons-to-crowdsource-your-pen-test)


Legislation so far seems aimed at pushing research underground; even with lawsuit threats, though, word gets out. Instead of reacting this way, vendors should work with researchers.

Dr. Andrea Matwyshyn:

Focusing on vuln disclosure in isolation from rest of IT is bad idea

Need for common language; then assess risk accurately; see ISO standards efforts; need for security focus top down from C-Suite

FTC’s Start with Security is 10 Questions to start with in an org

Building structures: not always better with bacon! Need to fit solutions to the problem.

Balancing usability & security; push also to update govt. procurement standards to include security.

Meaningful info sharing: lack of security metrics driven by lack of formats in advisories; Drafting & presentation could be standardized

Inadequate ID of libraries in embedded devices also a risk; customers lack access to debug, hard to patch

Legal regimes should evolve to address challenge of feedback loops. See §1201 exemption 25, security research; CFAA circuit splits. Wants to centralize prosecution only with DOJ, no state prosecutions.

Need for tools to do supply chain assessments

· See Data security agencies guidance

· SEC evaluation of Oct 2011 guidance

Kids: should be allowed to tinker naturally, but can’t given surveillance & monitoring & legal threats

Asymmetry of public discourse: researchers should be more upfront & center; need to be vigilant like civil liberties groups were with CISA

Govt says they release some vulns they find, but lacking transparency



Patrick McDonald:

Clear, concise disclosure policy & formats really help (Wendy note: see also the Bugcrowd Black Hat talk about how bad 95% of vuln reports are)

- Transparency of what vulns are found. No metrics on how many vulns are submitted to vendors, how long it takes to respond (months, years) – need to demonstrate you serve the public interest

Differences between whistleblowers & security researchers: whistleblowers tend to have more legal protection

Incident response by CERTs: have changed to be less technical these days, lay people can get what they need.



Body-worn police cameras

Sold as “record what the police see”; tension between police accountability and public privacy. Should defendants have access to raw footage, or only redacted? Currently a variety of standards. DC: all kids’ faces and bodies are blurred. Federal law enforcement: faces redacted. In houses: some departments redact diplomas, prescription bottles, faces.


Tech: blurring or replacement are the two main methods. But tools exist that can reconstruct images from reflections, so is this enough? Also, Google Street View blurs faces, but people are still recognized by those who know them. Should there be more redaction, showing only edges/outlines?



Norfolk PD: redact videos requested via FOIA. Do it manually, frame by frame. Footage involved in criminal prosecutions not released until all appeals/process finalized.

Discussion of what happens if you record in a hospital, or with domestic violence victims: do you keep recording? What gets redacted? Officers have discretion to turn off the camera, but the speaker notes that in domestic violence cases, photos will often be taken at a hospital anyway. All footage recorded is kept for 30 days; after that it is only kept if needed.



Taser rep:

· Seeking to improve manual redaction process; their cameras upload to an online portal, evidence.com; only police departments have access to the data within their accounts there.

· Footage uploaded has an audit log. All redactions, edits are made to copies of original, which can always be recovered.

· Taser rep was very adamant that only agencies could access the data & it was highly secure, but didn’t back up assertions with any mentions of outside pen testing or other security testing.

Dr Corso – EECS

· Accountability tool vs investigative tool: important distinction

· There is a public belief in the veracity of the footage

· Tech evolution: multimodal sensing, record infrared, motion capture from smart clothing

· There should be widespread adoption of benchmarks to test redaction against



Export control panel

Randy Wheeler
* cost/benefit analysis & scope of control are taken into account

* took WA control text language >> determine initial scope of control items in proposed rules

* have discretion on how to control, i.e. license requirements; how to make license exceptions or other permissive measures

* proposed rule has restrictive license requirements; few permissive measures

* expected comments to address license requirements/policy; but got comments on scope of control, i.e. WA control text

* scope of control: what is black & white & read all over? >> WA put together control text, and intrusion software definition,

* what is intended to not be controlled >> takes many readings to capture

* in their scope of control text also captured a sunburned zebra

* appear to have captured in control scope defensive products that protect against offensive products intended to be scope of control

* in addition to products, control technology for development of intrusion software (intrusion software defined in regulation); comments focused on how control language would undermine recent progress in developing incentives to disclose vulns via bug bounties, which contribute to cyber safety; would do more harm than good on cybersecurity front generally

* Now IDing issues raised in comments >> look at scope of items subject to control

* open meeting to discuss tech control >> is it reasonable to go forward with the text as provided from WA; are there measures that can be taken to mitigate harmful effects of control via license exceptions/ licensing policies, or is language such that we can’t find way around harm that it would cause

* interpretations/notes of definitions to understand scope of control

* will have additional meeting on scope of product controlled

* watching EU parliament, other countries addressing issues raised by control entries

* seeking to address concerns raised by control list entries







Suzanne Nossel, PEN American Center (writers)

* see paucity of concrete evidence of surveillance harms

* did survey




Antoinette Paytas

* industry recognizes sentiment, but concerned about impact on telecom, information systems

* companies don’t provide single use surveillance equipment, but general use

* many products fall under > 10 yrs old encryption controls; they understand these; can get bulk licenses from Commerce

* under proposed controls — move products out of encryption section; concern about controls proposed

* clarification of terms causes concern >> Commerce has gone past WA; if I have knowledge that my general purpose networking equipment will be combined w/ other components to make a surveillance system, I need a special license. Isn’t this a de facto control w/ all telecom equipment? Is this an effective control?

* items that would meet all control requirements are generally a combination of uncontrolled items (collect, store, analyze) > each of those not individually controlled

* terms that are unclear: carrier-class IP network; relational network

* instead of control tech, impose sanctions on bad actors

* Duality >> are export controls the right method?

* China not a member of WA; Chinese companies produce networking equipment used by some of these regimes

* WA members have latitude in implementation

Mailyn Fidler

* can a multilateral agreement work?

* some past items controlled don’t have dual use (i.e. biological weapons)

* flexibility of WA can be a downfall — member states have discretion




How much is driven by sale of products?

* Suzanne: some key providers of these technologies are outside WA, but we want cleaner hands; if we pull back & China steps in, we want to do our part to set a bar; we don’t want to be the providers >>> these are valid statements for us to make; US as standard setter




Privacy International paper?






definition of “intrusion software”




"Software" specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', of a computer or network capable device, and performing any of the following:




a. The extraction of data or information, from a computer or network capable device, or the modification of system or user data; or






b. The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.

Thursday, October 08, 2015

More on software liability and Black Hat

Over on Plain Text, I address the idea that the "eversion of cyberspace" brought about by putting software into everything will affect how software products liability works: why can’t you sue software makers for bugs, and how might the law evolve in the IoT era?

Monday, June 01, 2015

Products Liability in the software world

Products Liability is the part of Torts that addresses harm to people from, well, products. For a variety of reasons, there are really very few Products Liability cases about software, and the biggest reason is pretty much that Torts is kind of like the evening news. In my Sociology of Mass Media class back as an undergrad, we learned a lot about the "if it bleeds, it leads" idea, and Torts turns out to be a fan of that concept. The large majority of Torts cases are about actual physical harm to people, and so far, software tends to stay safely tucked away in our computers. That will probably change a lot with the Internet of Things on the horizon, and so I've been wasting too much time thinking about how Products Liability concepts will play out with software.

Products Liability in the world's smallest nutshell: generally, you can sue under one of three theories.
  1. Manufacturing defect: the particular instance of the product that injured me was defective in some way. This is the "easiest" type of Products Liability suit, so long as the item that injured you wasn't destroyed in the accident.
  2. Design defect: this one is harder, but probably far more common. In this one it's not that one particular item is defective, but that EVERY instance of that particular product is defective.
  3. Failure to warn: this product injured me because I wasn't aware that it would hurt me in that particular way. This is the type of lawsuit that's responsible for loooooong warning stickers on everything.

One concept in Products Liability under the heading of "design defect" is the idea of optional safety features on a product. If a particular company was aware of a safety feature but did not include it in the product, can it be held liable for harm to a person that the missing optional safety feature might have prevented? This is not an easy question to answer, because a lot of the time the reason that safety feature is missing from the product is that it would make the product more expensive to produce. The courts sometimes like to let the market "speak": they insist that consumers should be the ones to decide whether an optional safety feature is worth spending on. The purchaser of the product is not the only one who gets a say, of course, but by and large the let-the-consumer-decide idea has a lot of appeal.

(When you have a design defect case, you also generally have to prove a reasonable alternative design, and having that safety feature available on other products like the one you're suing over is basically a reasonable alternative design nicely gift wrapped for you.)

The courts weigh the risk vs the utility of the particular design when deciding the cases. For instance, in Scarangella v. Thomas Built Buses, Inc., the court looked at "seven nonexclusive factors to be considered in balancing the risks created by the product's design against its utility and cost. As relevant here, these include the likelihood that the product will cause injury, the ability of the plaintiff to have avoided injury, the degree of awareness of the product's dangers which reasonably can be attributed to the plaintiff, the usefulness of the product to the consumer as designed as compared to a safer design and the functional and monetary cost of using the alternative design (id.). An additional pertinent factor that may be taken into account is "the likely effects of [liability for failure to adopt] the alternative design on … the range of consumer choice among products" (Restatement [Third] of Products Liability § 2, comment f)." Scarangella v. Thomas Built Buses, Inc., 93 N.Y.2d 655, 659 (1999)

So this is all a very long windup to the problem of Volvo's pedestrian detection. Story in a nutshell: some folks were demonstrating Volvo's self-driving car to themselves. The car ran into two people standing in front of it. Volvo says "oops, pedestrian detection is $3000 extra, this model didn't have it."

Now, if a car hits a pedestrian because it's lacking an optional safety feature, how do we weigh the risk-utility of this design, given that the feature was available but not included? So much of what courts look at is the price impact of the optional feature, and here it looks like Volvo gave us a price: $3000. However, how much of that $3000 is the true cost to Volvo of installing it, and how much is just Volvo wanting to charge a lot for a software library because it can?

I know pretty much nothing about how Volvo's actual pedestrian detection works, so let's consider an imaginary car where the pedestrian detection is purely a software library addition to the car's software, and doesn't require any new physical sensors or rewiring of the car, etc. In that instance, could the car company make pedestrian detection available only at a $3000 add-on price? You might say on the one hand that software is basically cost-free once it's been developed. There are going to be tests to do with each model, most likely, but once a particular model has been tested out, adding the software to a particular individual car of that model type should be just about cost-free. This is in contrast to a piece of hardware that requires, perhaps, a hand guard to be manufactured and installed for every single instance of the item.

On the other hand, if car companies could not recoup their software development costs by charging extra for software options, would the incentives be strong enough for them to develop the options? If every other car on the market had pedestrian detection available, the laggard car company would probably develop (or just license) the software for their car. But what would incentivize the first adopter to make it? Could they capture enough of the market by having this new feature available without charging for it as an upgrade?

The inherent non-rivalrous nature of software, in that once complete it can be reproduced indefinitely for negligible cost, upsets the standard risk-utility calculus; the monetary cost of using an alternative design drops to nearly zero after the initial development.
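To put made-up numbers on that intuition (these figures are purely illustrative, not anything from Volvo or any real manufacturer), the per-car cost of a software option is basically its development cost spread over however many cars ship, while a physical part costs roughly the same for every car:

    # Back-of-the-envelope sketch; every number here is invented for illustration.
    dev_cost = 30_000_000        # one-time cost to develop and validate the software feature
    cars_shipped = 500_000       # cars of this model sold over its lifetime
    hardware_cost_per_car = 400  # hypothetical per-car cost of a physical safety component

    software_cost_per_car = dev_cost / cars_shipped
    print(software_cost_per_car)   # 60.0, and it keeps falling as more cars ship
    print(hardware_cost_per_car)   # 400, roughly constant no matter how many cars ship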

It will be interesting to see what happens with safety oriented software options going forward in self driving cars.

Sunday, November 30, 2014

Ever-Falling Cost of Surveillance Talk

US v. Jones is one of the more important recent SCOTUS cases on the 4th Amendment. Kevin Bankston and Ashkan Soltani wrote a great paper analyzing the case, Tiny Constables, on how the rise of software and small sensors is bringing about a sea change in the tracking technologies available to law enforcement. Their 2014 talk at New America, The Cost of Surveillance, was a great overview of the economics and of how they should affect the legal analysis. I attended, and found the graphs of the cost outlays with and without technology really interesting; they helped show how economic analysis can be useful in thinking about how the law should respond to changes in technological capabilities.

Thursday, September 18, 2014

Zero Days

Zero day hacks are software bugs the software vendor is not aware of, and which therefore have no patch available. The "valuable" ones (for a particular definition of valuable...) are bugs that can be leveraged to give the exploiter privileged access on a computer, which can then be used to install keyloggers, etc. Zero day exploits are often sold, and there is debate about whether the government does or can use them in counterterrorism surveillance. If we ignore the "does the government use them?" question and focus on the "can they?" aspect, one statute that might offer the answer is the Computer Fraud and Abuse Act, 18 USC §1030(f). This section offers the government immunity for hacking when it's used to go after criminals. It states, "This section does not prohibit any lawfully authorized investigative, protective, or intelligence activity of a law enforcement agency of the United States, a State, or a political subdivision of a State, or of an intelligence agency of the United States." That's pretty wide-ranging; "any lawfully authorized" covers a lot of ground. Is exploiting zero days lawfully authorized? I think that no matter what steps the actual exploit takes, the government might argue that it should be covered under "lawfully authorized activity" if it's part of an ongoing investigation. Is keeping knowledge of a zero day from the software vendor, so that a government agency can continue exploiting it, allowed? Is there a duty to disclose the issue so that others don't also exploit it? And is the purchase of zero day exploits covered under "lawfully authorized activity"? There's a law review article, 50 A.F. L. Rev. 135, Defensive Information Operations and Domestic Law: Limitations on Government Investigative Techniques, from 2001, which addresses 1030(f) in the context of government operations.

Monday, April 14, 2014

Creative Commons Goodness

My Instagram feed has for a long time featured occasional shots of coffee cups and my Kindle. My favorite way to spend a weekend morning, after all, is to get a good cup of coffee and read, and in the vein of "shoot what you know" I've shot quite a few coffee cup + Kindle still lifes. My friends have kidded me about it a few times over the years, but apparently some of them associated "coffee cup" and "Kindle" with my photos enough that I was notified by a few folks when this article was published a few months ago:


THE BOOK THAT MADE ME QUIT MY JOB


- yup, that's my photo illustrating it! The awesomeness of tagging your pictures with a Creative Commons license on Flickr and releasing them into the wild is that once in a blue moon one gets used. So cool. Looking at the last post I put up here, with the coffee cup & Kindle, reminded me that I wanted to save a link here so I could find it again! So here are a few more coffee cups with Kindles. As you might guess from the name of my blog, my drink of choice is an americano, with pourovers as my fallback drink.



More Sherlock Holmes over breakfast


Weekend reading.


Pourover and more reading.



Tuesday, April 01, 2014

famous last words

April 2008, on my blog:
"I don't think that there has ever been a foray into legal territory on my blog, if I think about it. Which is a bit odd, because I'm not a lawyer, but my dad is, and I love talking about trials and legal things with him."
-- on Contracts
Spring break cappuccino

March 2014: dear blog, time for the once-every-few-years sorry-I've-neglected-you post (see exhibit 1, the most recent iteration and exhibit 2, the oldest iteration). What's my excuse this time? I'm in law school, so no free time. Also, maybe in a few years that post linked above is going to be factually incorrect. Who knows, I'm currently a clueless 1L so anything could happen. However, I went hunting through my blog today for some substantive writing from my past lives, having had some weird idea that I must have written a few posts that were more than a paragraph long. I was 98% incorrect, but I did dig up a few examples where I managed to ramble on at length.

Why am I in school again? According to my law school application essay, it's because of a conversation about open source software at OSCON 2012. I've thought a fair amount about programming vs. legal issues over the last few months, and how diametrically opposed they are in so many ways. Tech industry: we are suspicious of you if you wear a suit to an interview or if you stay at a company for too long. Law: we are suspicious of you if you don't wear a suit to an interview or if you job hop. Software: I'm not sure what this program does, so let's compile it and put in a break point to see exactly what's happening. Law: "'Chicken' may mean one thing or it may mean something else entirely. Who knows?" Software: you prove an algorithm is n log n by steps that everyone agrees on. Law: you might be able to prove a prima facie case of negligence with these facts, but maybe not.

In any event, I've had a few pushes from different parties to take up blogging again, so hopefully I will find time to write here more. I also hope I might find time to write about music again- the first few years of this blog were almost entirely music blogging, and it's something I miss.

So on that note, my latest favorite song is by Air Review, and you should check it out in this fabulous video of a border collie enjoying life while his owner does some mountain biking. This made me miss the Northwest so much; I need to find a weekend to get out there again soon!

Sunday, December 02, 2012

Amazon Christmas

I recently came across these wild photos of Amazon's warehouses packed with books, and they reminded me of my first Amazon Christmas. So far this one has been interesting (best moment so far: walking to Mike's Pastry in Boston at 7 AM after working from 2 AM to 7 AM on some Black Friday prep... nothing like awesome lobster tail pastries to replace sleep). Nothing so far like my first Christmas, though. Here's something I wrote up about that 1999 Christmas.

office

The smell of coffee brewing wakes me up before my alarm goes off as the timer on my coffeemaker ticks over to 4 AM. A velvety black is draped across my apartment as I stumble out of bed to get a cup. This morning, mid-December 1999, probably rainy, definitely cold and dank, is just like all the others in a string of weeks since we paused working on updates to amazon.com’s website software and went all hands on deck in the company’s warehouses. We have been successful beyond anything we thought would happen, and people are ordering books and CDs faster than we can get them out the door. My team of developers has drawn the morning to mid-afternoon shift, and we assemble in South Seattle at an anonymous warehouse to start work as elves.

I pull on an extra pair of wool socks and put my work boots back on, then slide my Walkman into my vest pocket. We all listen to mixtapes during our shifts, swapping them back and forth to ease the monotony of hour after hour after hour of moving books from the loading docks onto the shelves. We’ve picked up a new foreign tongue. To “receive” is to grab pallets of shrink-wrapped books out of the backs of tractor trailers. To “pick” is to grab books off the shelves we’ve deposited them on, to be assembled into customer orders. I’ve been told that we run our warehouse differently than any other warehouse. I joke that sure, few warehouses around here have software engineers wrestling 60-pound boxes of books off the loading docks, given that this is the pinnacle of the dotcom boom.

One slice of that different way we run our warehouse is in the very area I’m working. One of the backend software engineers had noticed that because we use computers to generate our pick lists, it didn’t matter where on the shelves we stuck the books we’d unloaded, just so long as we told the computer where they were. There was no need to put all the textbooks in the same spot; they could go on any shelf that had space for them. Gingerly, I slash open a pallet of book boxes and scoop up an armful, as many as I think I can hold. Grateful that I’ve yet again avoided the sharp razor of the box cutter, I turn and set off down the aisles to store the books for the pickers. An open spot down by the floor catches my eye and I struggle to kneel down without spilling the armful of books. I grab the scanner clipped to my belt and scan the barcode under the shelf spot, the bin, where I’m about to put the book. Then I scan the book, tuck it into that spot, and walk on. I love the delicious simplicity of this hack. I’ve turned into a walking hash function, the computer algorithm by which items can be stored at arbitrary locations but still be retrieved by a pointer. My scanning that bin and then the book made a pointer in our warehouse’s memory, to be traced in reverse by the picker.

Scanning, kneeling, walking, walking, walking, and scanning more. I peek at the giftwrap station sometimes, assuring myself that the endless stream of books I’m pulling out of trucks really is going out to customers. We sit on breaks sucking down stale coffee. We marvel at seeing all the code we wrote the past few months turning into real packages for people all over the world. The sky outside, when I glimpse it between the trucks and the loading dock, has weathered from inky black to a flat gray, the same gray that will slide into black again before we are done here.

These really early mornings ended eventually, once we got too close to Christmas to ship anything to customers with any hope of it reaching them on time. The next few Christmases I mostly helped out on the customer service email queues, so I rarely went back to that warehouse after that first year. Eventually we closed it, because as an older, un-automated warehouse it couldn't keep up with the volume of the newer distribution centers.
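For the curious: the bin trick in that story boils down to something like the toy sketch below. The barcodes are made up and the real warehouse software was of course far more involved, but at its core random stow really is just a lookup table.

    # Toy version of random stow: put a book in any open bin, record where it went.
    locations = {}                      # item barcode -> bin barcode, the warehouse's memory

    def stow(item_barcode, bin_barcode):
        """Stower scans the bin, then the book; any bin with room will do."""
        locations[item_barcode] = bin_barcode

    def pick(item_barcode):
        """Picker follows the pointer back to wherever the book ended up."""
        return locations[item_barcode]

    stow("0-201-03801-3", "BIN-17-C")   # a textbook can shelve next to a cookbook
    print(pick("0-201-03801-3"))        # BIN-17-C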

Tuesday, August 23, 2011

pictures

Hello blog! I've been twittering and flickring and neglecting my poor blog. So, some photo quotes courtesy of the Lens Rentals blog, and some pictures.


IMGP7692

There are always two people in every picture: the photographer and the viewer. – Ansel Adams

4th of July dependents cruise

In black and white you suggest; in color you state. Much can be implied by suggestion, but statement demands certainty… absolute certainty. - Paul Outerbridge


Dueling cameras!

Wednesday, November 11, 2009

penguins!

penguin


That's my sailor, back when we first started dating, with a real, live penguin. In Antarctica. We're getting married soon, so just think... I have years more of crazy penguin photos from emergency deployments to the South Pole to look forward to for the rest of my life. ;)

Happy Veteran's Day!

Friday, April 10, 2009

fuzzbusters

I've been a blogging slacker over here, and haven't updated in about a month. How about some fuzzy shelties to make it up?

sleepy declan

cameron


On one of my favorite topics, Target printed out a coupon for me the other night for $1 off a "Fur Fighter Kit". I got a huge kick out of it: I've never bought dog food there, so how do they know that I'm surrounded by sheltie fuzz? I do occasionally buy stuffed dog toys for Christmas or the sheltie birthdays, so that must be it. I really wish it had a "click here to see why we recommended this" on the coupon; do 3 dog toys a year add up enough to recommend a fuzz fighter kit? Maybe I'm buying too many de-lint rollers??

Wednesday, March 18, 2009