They May Be Intelligent, But Are They Wise?

Speaking of fair shakes, here is a wise word of caution about the emerging, expanding use of computer programs to evaluate people in the justice system, whether at bail hearings, sentencings, or elsewhere.

The author is a former software engineer at Facebook who’s now studying law at Harvard. Her point isn’t that we shouldn’t use or consult these programs, but we should know what we’re getting into and proceed with caution. It’s troubling, for example, if we use programs that no one in the field fully understands—not judges, not lawyers, not probation—because the manufacturer won’t disclose a proprietary algorithm.

She says we turn to computers in part to control for our own biases, “[b]ut shifting the … responsibility to a computer doesn’t necessarily eliminate bias; it delegates and often compounds it.” That’s because these programs mimic the data we use to train them, so even the ones that accurately reflect our world will necessarily reflect our biases. Plus, they work on a feedback loop, so if they’re not constantly retrained, they lean in toward those biases and drift even further from reality and fairness. So they don’t just parrot our own biases; they amplify them. She saw this phenomenon time and again as a software engineer.
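
To make the feedback-loop point concrete, here’s a minimal, hypothetical simulation; it isn’t modeled on any real risk-assessment tool, and the neighborhoods, counts, and deployment rule are all invented. Two places have the same true offense rate, but the historical data over-counts one of them, so a naive model keeps sending resources there, keeps generating more records there, and treats its own output as confirmation:

    # A toy, hypothetical feedback loop: two neighborhoods, A and B, with the
    # same true offense rate, but biased starting data that over-counts B.
    import random

    random.seed(0)

    TRUE_RATE = 0.05                  # identical in both neighborhoods
    history = {"A": 100, "B": 200}    # biased historical record

    for rnd in range(1, 6):
        # The "model" ranks neighborhoods by past counts and sends most
        # patrols to the one it ranks highest.
        hot, cold = sorted(history, key=history.get, reverse=True)
        patrols = {hot: 800, cold: 200}

        # More patrols mean more *recorded* offenses even though the true
        # rate is the same everywhere, and the new records are fed back in.
        for hood, n in patrols.items():
            history[hood] += sum(random.random() < TRUE_RATE for _ in range(n))

        share_b = history["B"] / sum(history.values())
        print(f"round {rnd}: share of records from B = {share_b:.0%}")

The toy numbers drift further and further from the even split that reflects reality, which is the amplification the author describes.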

She agrees that algorithms can work for good. They’ve reportedly helped New Jersey reduce its pretrial jail population, for example.

But let’s proceed with caution, she says:

“Computers may be intelligent, but they are not wise. Everything they know, we taught them, and we taught them our biases. They are not going to un-learn them without transparency and corrective action by humans.”

The Modern Public Square

This week brought us another unanimous U.S. Supreme Court case that’s arguably more important because it concerned the First Amendment.

The issue was a North Carolina law that made it a felony for registered sex offenders to use any social-networking site that lets minors join. To be clear, that meant any such site, period: Facebook, LinkedIn, Twitter, or pretty much any other social-media site. The law was even broad enough to reach websites like Amazon, WebMD, and the Washington Post. So you almost couldn’t use the Internet.

The defendant was one of more than 1,000 people who’ve been prosecuted under the law. In 2002, when he was 21 years old, he had sex with a 13-year-old girl. He was charged, pleaded guilty, and registered as a sex offender. Then the law passed in 2008.

In 2010, he happened to get a traffic ticket dismissed in court, whereupon he logged on to Facebook and posted this to his timeline: “Man God is Good! How about I got so much favor they dismissed the ticket before court even started? No fine, no court cost, no nothing spent … Praise be to GOD, WOW! Thanks JESUS!”

He was indicted for that.

He moved to dismiss on the ground that the law violated the First Amendment, but the trial court denied it. He was convicted at trial and given a suspended prison sentence.

On appeal, the state courts duked it out. The court of appeals agreed with the guy, finding that the law violated the First Amendment. But the state supreme court reversed, finding the law “constitutional in all respects.”

Finally, the federal high court unanimously struck down the law because it plainly applied to websites like Facebook, LinkedIn, and Twitter. Facebook alone had 1.79 billion active users—or three times the population of North America.

The Court called these sites “integral to the fabric of our modern society and culture.” They had become our main sources for sharing current events, participating in the public square, and exploring human thought and knowledge. To foreclose access to them was to foreclose the legitimate exercise of First-Amendment rights.

Yes, a state could pass specific, narrowly-tailored laws that regulate the type of conduct that portends crime, like contacting a minor or using a website to gather information about one.

But it couldn’t just cut people off from the public square.

Can They Search My Phone at the Border?

Suppose you go to visit your aunt in Italy, and you take your phone and tablet with you.

When you come back through customs, can they just search your devices willy-nilly?

Probably. Here’s a good overview of your rights at the border, along with some practical considerations. It’s worth reading ahead of time because the government is stepping up its enforcement at points of entry, and there have been some heavy-handed run-ins lately between agents and travelers, including U.S. citizens.

The general rule is that customs and border agents may conduct routine, reasonable searches of you and your belongings, including your electronic devices, for any reason or no reason at all. They don’t need a warrant, and they don’t need any basis to believe they’ll find evidence of a crime. It’s known as the Fourth Amendment’s border-search exception.

But how far can they go?

Can they conduct full, forensic searches or force you to give up your passwords?

According to this 2009 policy memo, the answer is yes. It says agents can seize your device, copy its contents, and search them. To do so, they can hold a device for up to five days with a supervisor’s approval. For longer periods, they must get further approval at higher levels. Ordinarily, they must conduct the search in the presence of a supervisor, too, but if that’s not feasible, they must inform a supervisor about their search as soon as possible. If they find probable cause to believe your phone contains evidence of a crime, you may not get it back for a while, if at all. If they don’t, you should get your phone back eventually, and they’re supposed to destroy any copied information.

The law is evolving, however, to require at least a reasonable suspicion for a full forensic search. That’s already the case in the federal circuit that covers California and eight other states, and the law should continue to trend in that direction. What is a reasonable suspicion? It’s a particularized and objective basis for suspecting someone of a crime.

Still, reasonable suspicion is not a tough legal standard to meet.

Plus, agents can always just ask you to unlock your phone or give up your passwords, and if you refuse, they have plenty of ways to coerce you. They can take your phone; detain you, too; search your bags more thoroughly; deny you entry if you’re visiting; or scrutinize your green-card status. Most folks just want to be on their way.

So happy trails, traveler. Leave the phone, perhaps, but take the cannoli.

The Future of Face-Recognition Technology

Face it: the future is already here. And by default, your face is ever more likely to be found in a law-enforcement database. It’s as easy as getting a driver’s license.

The fact is that face recognition is neither new nor rare, and more than half of all American adults have already been loaded into a local, state, or federal database.

That’s according to this report by the Center on Privacy and Technology at the Georgetown University Law Center. Read it to learn more about this technology, how it’s being used, and what the future holds. For three shorter stories about it, see here, here, and here.

What did the researchers do? They sent public-records requests to more than one hundred law-enforcement agencies across the country. They interviewed representatives from dozens of those agencies as well as from the technology companies they contract with. They made two site visits to agencies that use advanced face-recognition systems. And they surveyed the state of the law (or lack thereof) in all fifty states.

What are their takeaways? Here are four.

  1. The technology has value, and its use is inevitable. The report doesn’t aim to stop it.
  2. Its use is spreading rapidly and secretly without limits, standards, or public oversight.
  3. The total network of federal, state, and local databases includes over 117 million American adults. That’s more than half the country.
  4. We’re moving toward a world of continuous, real-time face recognition through public surveillance cameras.

What are their recommendations? Here are three.

  1. Congress and state legislatures should pass commonsense laws to regulate face recognition, and police should follow them before they run a search.
    • For example, to search a database of driver’s license or state identification photos, police should have a warrant backed by probable cause.
    • To search a database of mug shots, they should have a reasonable suspicion of criminal conduct. Periodically, they should scrub the database of people who were arrested but never charged or convicted. Michigan, for one, already requires that.
    • They should not use real-time, continuous surveillance except for public emergencies.
    • They should not track people based on politics, religion, or other protected status.
  2. The federal government should develop tests and best practices to improve the technology’s accuracy. For example, in the latest available test of the FBI’s database, the system included the right person on a list of fifty potential matches 86% of the time. That means roughly one out of seven searches returned a list of fifty people who were all innocent look-alikes, and even the other six returned 49 look-alikes alongside the right person (see the quick arithmetic after this list).
  3. All governments should report their use of the technology, audit such use regularly, and respect civil rights and liberties.
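
For what that 86% figure means in practice, here’s the quick back-of-the-envelope arithmetic behind the second recommendation; the percentages come from the report, and the code below is just the calculation:

    # Back-of-the-envelope arithmetic on the FBI test figure cited above.
    hit_rate = 0.86                   # right person appears in the top 50
    miss_rate = 1 - hit_rate          # about 0.14, i.e., roughly 1 in 7
    print(f"about 1 in {round(1 / miss_rate)} searches misses the right person entirely")

    # Even a "hit" returns a candidate list of 50, so 49 are innocent look-alikes.
    list_size = 50
    print(f"a successful search still surfaces {list_size - 1} look-alikes")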

Man Gets Indicted By His Pacemaker

Actually, the man was indicted by a grand jury in Ohio, which charged him with arson and insurance fraud.

Apparently, the man called 911 as his home burned in the background. He said he was sleeping when the fire started and that, in a hurry, he packed a bunch of bags, broke a window with his cane, threw the bags out the window, and carried them away. He mentioned that he had a pacemaker.

The police came to suspect him of arson. They say they found gasoline on his shoes, pants, and shirt, and they believe the fire had multiple points of origin from outside the house.

So they got a search warrant for the data from his pacemaker. That gave them a historical record of his heart rate and rhythms before, during, and after the fire.

Reportedly, the data showed that the man was active when he was supposed to be asleep, and a cardiologist has said it was “highly improbable” that he could carry out the strenuous activities he described.

SEC Reports Enforcement Results for 2016

As we wind down the calendar year, the Securities and Exchange Commission has already reported its enforcement results for the fiscal year that ended September 30.

In case you missed it, here’s the press release. Naturally, the agency pats itself on the back a bit, but if the past predicts the future, it will keep filing cases. Its numbers have climbed steadily over the last dozen years, and it continues to ramp up its use of big-data analytics and the whistleblower program, which it launched in 2011.

Here are some highlights from 2016.

  • The agency filed a total of 868 cases, which was a new single-year high.
  • It filed a record number of cases involving investment companies or advisers and a record number under the Foreign Corrupt Practices Act.
  • It obtained over $4 billion in judgments and orders, which matched the haul from each of the last two years.
  • It awarded more money to whistleblowers ($57 million) than in all prior years combined.

If At First You Don’t Succeed

Here’s that DUI case we alluded to last week.

It’s based on a driver’s challenge to his license suspension after his arrest. His post-arrest blood test showed a blood-alcohol concentration (or BAC) of 0.23 percent. He challenged this finding at the DMV’s administrative hearing and lost. He then petitioned the superior court to overturn that finding and lost again.

After twice losing before the agency and the trial court, he took another swing in the court of appeal, and there, he won.

The issue was whether his blood-test result was reliable.

The crime lab had tested his sample using a machine called a gas chromatograph. It has a heated chamber with two columns through which a sample is passed in gaseous form, and therein lies the rub. You’ve got to use both of those columns. One isn’t enough. Otherwise, you may get a false positive or the machine may indicate more alcohol than actually exists.

According to the driver’s expert and even the machine’s own manufacturer, one column could “tentatively identify” alcohol but “simply [could not] confirm its identity” or “how much might be present.”

In this case, the lab used the right machine, but the test results showed data from only one column, and the DMV didn’t offer any proof to show otherwise.

Thus the DMV could not rely on the test results because, as a matter of scientific principle, one column’s result was incapable of establishing the driver’s BAC.

And so the court of appeal reversed.
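
To illustrate the principle, here’s a hypothetical sketch of a two-column confirmation rule; the function name, tolerance, and numbers are invented for illustration and aren’t the lab’s actual protocol:

    # Hypothetical two-column confirmation check, sketching the principle that
    # one column can only tentatively identify alcohol, not confirm it.
    from typing import Optional

    def confirmed_bac(col_1: Optional[float], col_2: Optional[float],
                      tolerance: float = 0.005) -> Optional[float]:
        """Report a BAC only if both columns ran and their results agree."""
        if col_1 is None or col_2 is None:
            return None               # only one column: tentative, not confirmed
        if abs(col_1 - col_2) > tolerance:
            return None               # columns disagree: result unreliable
        return round((col_1 + col_2) / 2, 3)

    print(confirmed_bac(0.23, None))       # None -- the situation in this case
    print(confirmed_bac(0.23, 0.228))      # 0.229 -- a confirmed result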

E-Discovery and the Rise of Machines

Every day, people and businesses create electronic data about themselves and the world around them, and with modern computing and mobile devices, we often create more data than we could possibly sift through with human eyes.

So what happens when litigants have to review their data in order to respond to a subpoena or discovery request? If you’re a business, are you even sure you understand all that you’ve got? If so, how do you make sure that you’re accurately separating what’s responsive from what’s not, separating what’s relevant from what’s not, holding on to what’s legally privileged, and not missing anything? It’s been a problem in large corporate and commercial cases for a while now, but it’s becoming more prevalent with the sheer volume of electronic data that we create and transmit every day.

The historical solution has been the only one we knew: tackle reams of paper with the brute force of people and hours committed to reviewing every page. But nowadays, in a lot of cases, if you were to print out all that data, you couldn’t afford to pay enough competent people to carefully review all that paper in time.

With electronic discovery, or e-discovery, the solution is still to throw time and bodies at the problem but, also, to expedite the review by scanning or uploading files into a database that reads them electronically, removes duplicates, and renders them searchable.
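
As a small illustration of the de-duplication step, here’s a toy sketch that drops exact duplicates by content hash before review; the file names and contents are invented:

    # Toy de-duplication by content hash: exact duplicates get reviewed once.
    import hashlib

    files = {
        "inbox/msg001.txt": b"Please review the attached contract.",
        "inbox/msg002.txt": b"Lunch on Friday?",
        "archive/msg001_copy.txt": b"Please review the attached contract.",
    }

    seen, review_set = set(), []
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest not in seen:        # keep the first copy only
            seen.add(digest)
            review_set.append(name)

    print(review_set)                 # the copy in /archive drops out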

An emerging solution, however, is predictive coding, or technology-assisted review (“TAR”). With predictive coding, you can teach a computer to analyze a large data set by feeding it small but meaningful subsets that humans have tagged as relevant, privileged, or whatnot. After these initial inputs, you run tests, gauge the computer’s accuracy, and make adjustments. Once you’ve honed the machine’s understanding of the data, you deploy it to code the full universe of documents more cheaply, quickly, and accurately than human reviewers ever could.
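
Here’s a minimal sketch of that workflow, assuming a Python environment with scikit-learn; the documents, tags, and topic are invented placeholders, not any particular vendor’s product:

    # A minimal predictive-coding sketch: train on a human-tagged seed set,
    # then score the rest of the collection so the likeliest-relevant
    # documents go to human reviewers first.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    seed_docs = [
        "quarterly earnings call transcript discussing the merger",
        "cafeteria menu for the week of March 3",
        "email thread negotiating the merger price with opposing counsel",
        "office closure notice for the holiday",
    ]
    seed_labels = [1, 0, 1, 0]        # 1 = relevant, tagged by a human reviewer

    vectorizer = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

    collection = [
        "draft merger agreement circulated to the board",
        "reminder to submit parking validation forms",
    ]
    scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
    for doc, score in sorted(zip(collection, scores), key=lambda p: -p[1]):
        print(f"{score:.2f}  {doc}")

In a real matter, reviewers would keep correcting the top-scored documents, and the model would be retrained on those corrections—the testing-and-adjusting loop described above.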

How far has predictive coding come along?

Last week, an influential federal judge had to decide whether he could force litigants to use it to review and produce data. In 2012, this judge was among the first, if not the very first, to approve its use in civil discovery. By 2015, he wrote, the law had come to firmly support a litigant’s choice to use it, but in this case, the litigant had chosen not to.

The plaintiff who requested the discovery wanted the defendant to use predictive coding, but the defendant, who was producing the discovery, preferred to have its own staff run keyword searches instead.

The judge found that he could not compel a litigant to use predictive coding today, but tomorrow, the answer could be different:

“To be clear, the Court believes that for most cases today, TAR is the best and most efficient search tool…. The Court would have liked the [defendant] to use TAR in this case. But the Court cannot, and will not, force [it] to do so. There may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR. We are not there yet. Thus, despite what the Court might want a responding party to do … [the plaintiff’s] application to force the [defendant] to use TAR is denied.”

Every Man’s Evidence, Everywhere

They say the public has a right to every man’s evidence, but in a world full of digital evidence, what if it’s stored on servers in other countries?

We wrote about this case two summers ago. Back then, the Microsoft Corporation had just defied a federal search warrant that demanded a subscriber’s emails (and other data) as part of a criminal investigation. Microsoft had already produced all of the data that it stored on servers in the United States, but it refused to access and turn over the emails because they were stored in the Republic of Ireland. Instead, the company moved to quash the warrant, which the magistrate denied, and it was appealing that denial to the district court when we last wrote about it. As it happened, the district judge agreed with the magistrate and held the company in contempt of court for not obeying the warrant.

Well, three weeks ago, Microsoft won big in the court of appeals. In a unanimous decision, the court ruled that the warrant couldn’t be enforced against the emails because the federal law in question—the Stored Communications Act—did not authorize warrants to reach beyond the territorial jurisdiction of the United States. Courts must presume that a law applies only within the United States unless Congress clearly says otherwise, and it hadn’t done so here. One judge wrote separately to explain why it was a closer case and to urge Congress to update the Stored Communications Act for the 21st century.

For now, the decision binds federal courts in New York, Vermont, and Connecticut.

“The Fourth Amendment … Is In Retreat”

That’s how a dissenting opinion ends in a major federal case that was decided on Tuesday. This is how it begins:

“A customer buys a cell phone. She turns it on and puts it in her pocket.”

And with that, according to the majority’s opinion, the customer has consented to create a record of everywhere she goes, a record which the government can then obtain without a search warrant based on probable cause.

Neat trick, huh?

If the government wanted to plant a tracking device on you to follow you everywhere you went, it would need a warrant, but if it wants to let your cell phone do the work, it doesn’t.

Instead, under a federal law from 1986, it can apply for a special order to get your phone’s cell-site location data. These are the logs of cell towers that your phone connects to as you go about your business. They create a fairly precise record of where your phone goes.

The special order must be approved by a judge, but the government doesn’t have to show probable cause to believe you committed a crime; it only needs to show reasonable grounds to believe that your travels are “relevant and material to an ongoing criminal investigation.” Off the top of my head, I can’t think of a case where the government couldn’t argue your travels were important once it decided to investigate you for something.

In this case, the government obtained seven months’ worth of records this way.

On appeal, the court not only denied that the Fourth Amendment required a search warrant backed by probable cause, but also denied that the Fourth Amendment applied at all because, supposedly, you have no reasonable expectation of privacy in data that you share (or that your phone shares) with a third party such as your cellular service provider.

The court didn’t explain how people are supposed to work, date, or otherwise live in the real world without doing so.

As we’ve noted before, this third-party doctrine makes no sense in the digital age.

Fortunately, many states, including California, are going the other way.
