Is the Battle Over for Smart-Phones? Search Warrants Should Not Overcome Biometric Protections

By Sonthonax SaintGermain*

In Riley v. California,[1] the Court held that a warrant is required for all searches of cellular phones, regardless of whether the search is incident to a lawful arrest.[2]  The Court reasoned that the traditional considerations behind the “search-incident-to-arrest doctrine” do not apply to the capacity and nature of the data stored on modern smart-phones.[3]  Riley is a resounding victory for “privacy specialists” and advocates of a digital approach to Fourth Amendment doctrine.[4]  Yet determining whether Riley carries those aspirations to fruition requires the Court to keep pace with the rapid changes in the smart-phone industry.  It is almost certain that a smart-phone model will become obsolete within a year, and unfashionable in even less time.[5]  The technological advances made with every “jump” to the next generation are, to the untrained eye, almost nonexistent.[6]  The legal possibilities, however, are ever changing, as the functions of a handheld device resemble less those of a landline than those of a personal computer.[7]  Hence, the next challenge for the Court may be the use of biometric technology (or Biometrics) in smart-phones and computing tablets.

Biometrics “take[s] something unique to the individual—a fingerprint, an iris, voice or facial features—as authentication” to identify that person.[8]  The latest iterations of the Apple iPhone use Biometrics—the Touch ID—to allow customers to access their devices with their fingerprints.[9]  Each iPhone is equipped with “Touch ID . . . capable of 360-degree readability,” allowing rapid access to the device without the added step of entering the previously required four-digit personal identification number (PIN).[10]  Hours after the initial announcement, privacy concerns were voiced.[11]  This essay, however, addresses a separate question: whether a search warrant can require a person to bypass the biometric lock on his phone.

Protection could come from the Fifth Amendment’s Privilege against Self-Incrimination.  The privilege is violated when a person is forced to communicate information that would lead to evidence incriminating him in the commission of a crime.[12]  This prohibition has three requirements.  First, there must be compulsion or involuntary disclosure.[13]  Second, the information obtained must be incriminating to the person providing it.[14]  However, the person need only believe the information is incriminating.[15]  And finally, the compulsion must apply to conduct that is communicative.[16]  Thus, the Court’s Self-Incrimination cases ask whether there was a testimonial communication.  Hence, three factual patterns implicate the Self-Incrimination clause: (1) physical exhibition;[17] (2) production of evidence;[18] and (3) information attesting to the defendant’s state of mind.  The Court drew distinctions between “real or physical evidence” and evidence of factual assertions.[19]  Because of its nature, any information gleaned from fingerprints through the Touch ID would be considered “real or physical evidence” and be barred from Self-Incrimination protection under the first line of cases.

Stemming from Holt v. United States,[20] these cases hold that the exhibition of certain “physical characteristics” is not dispositive of the accused’s belief in his guilt.  In Holt, the Court reviewed the guilty verdict of a murder trial.[21]  Holt, the defendant, argued that the trial court could not compel him to wear a blouse allegedly belonging to the murderer.[22]  The Court found that this claim was “an extravagant extension of the [Fifth] Amendment.”[23]  The Court reasoned that trial courts are required to compel defendants to come forward in criminal cases to allow the “jury to look at [him] and compare his features with a photograph in proof.”[24]  No “statements” were made, the Court explained, and the order did not “extort [any] communications . . . from him . . . .”[25]  The Court understood the Fifth Amendment as not abrogating the state’s power to compel the physical exhibition of defendants in a criminal matter.[26]  Since Holt, the Court has held that compulsion to participate in a police lineup,[27] to provide a voice exemplar,[28] to submit to a blood test[29] or a roadside sobriety test,[30] or to give a handwriting sample is not barred by the Self-Incrimination clause.[31]

However, for fingerprints, protection depends on the Court’s characterization of biometric locks: Are they manual key locks or combination locks?  In Doe v. United States[32] (Doe II), the Court upheld a “consent directive” compelling the petitioner to “authorize foreign banks to disclose” certain documents.[33]  The Court held that the directive was non-testimonial.[34]  Justice Stevens dissented.[35]  He asked whether a defendant “can . . . be compelled to use his mind to assist the prosecution in convicting him of a crime . . . .”[36]  The answer, to Justice Stevens, seemed clear: “no.”[37]  In reaching this conclusion, he stated that the defendant “may in some cases be forced to surrender a key to a strongbox containing incriminating documents, but . . . he [cannot] be compelled to reveal the combination to his wall safe by word or deed.”[38]  No member of the Court joined his dissent.  In a footnote responding to this hypothetical, Justice Blackmun, writing for the majority, stated:

We do not disagree with the dissent that the expression of the contents of an individual’s mind is testimonial communication for purposes of the Fifth Amendment.  We simply disagree with the dissent’s conclusion that the execution of the consent directive at issue here forced petitioner to express the contents of his mind.  In our view, such compulsion is more like being forced to surrender a key to a strongbox containing incriminating documents than it is like being compelled to reveal the combination to [petitioner’s] wall safe.[39]

The majority accepted the distinction,[40] but it pointed out that the disagreement between Justice Stevens and his colleagues was over whether the directive was a key or a combination.[41]

Justice Stevens’ position did not win the day, and his distinction might have gone unnoticed if not for United States v. Hubbell.[42]  In Hubbell, the Court held that producing documents pursuant to a subpoena that does not describe with particularity the documents sought violates the Self-Incrimination clause.[43]  Justice Stevens, now writing for an eight-member majority, stated, “[the] assembly of those documents was like telling an inquisitor the combination to a wall safe, not like being forced to surrender the key to a strongbox.”[44]  The distinction was thus reiterated and entered the Court’s jurisprudence, if only as dictum.  Nonetheless, it is relevant to our concerns.

Two things are clear from Doe II’s footnote 9.  First, a defendant cannot be compelled to reveal the contents of his mind.  This proposition is inherent in the Self-Incrimination clause and the Court’s line of cases on physical characteristics.  A defendant cannot reveal his thoughts and knowledge without bearing testimony to his guilt—that is a given.

Second, producing the key to a strongbox, unlike providing the combination to a wall safe,[45] is not testimonial of the contents of the defendant’s mind.  Put differently, the possession of a key—even if it is the only copy—does not testify that the possessor is aware of the contents of the strongbox.  When stated as such, this proposition is obvious and has no practical application.[46]  But it may yet be incorrect.  For one, the production of tangible evidence is indicative, almost akin to testimony, of the “existence, control, and location of” potentially incriminating evidence.[47]  Equally, in providing the key upon request, the arrestee is clearly stating that he is aware of its purpose.  Yet even if this notion stands, it would suggest that a person privy to the combination of a safe is automatically privy to its contents.  That is simply incongruous.  One can point to the many persons who share the combination of their safe with guarantors—if only to insure against memory failure.  But if the Court were suggesting that knowledge of the combination creates a temptation to investigate the safe’s contents, thus leading to testimonial knowledge, what would prevent the holder of the key from pursuing the same end?[48]  The truth is simple: the Court accepted the distinction without elaboration.[49]

If fingerprint locks are comparable to key locks, then the privilege does not insulate the owner of the iPhone from being compelled to provide access to it.  But if they are more like combination locks, then the privilege applies.  At first glance, a thumbprint is more like a key.  It is tangible.  It is mechanical in its application.  And it requires no recollection of memory to operate.[50]  This lack of memorization may have created this odd distinction.  But the differences are more compelling.  A fingerprint does more than provide access.  It can be used to authenticate identity.[51]  Hence the fingerprint—much like the PIN—must be chosen as the exact means of access to the device;[52] it thus testifies, although minimally, that a conscious thought process was applied in its selection.

The strongbox analogy does nothing to resolve this issue.  It is clear that the Court would not entertain the compelled acquisition of a safe combination, PIN, or password.[53]  However, the analogy creates more uncertainty than it would resolve.  Fingerprint locks, when used to gain access, are more like a username and password combined.  They identify the user and authenticate his identity all in one motion.[54]

However, if biometric access suggests identity and control, the problem is perhaps moot, since all searches must be pursuant to a warrant, and hence subject to the particularity requirement.[55]  This is not necessarily true.  Evidence seized from smart-phones may not be accessible unless police officers can bypass the biometric locks.  Yet, regardless of a warrant, a defendant is under no obligation “to aid” police officers in accessing evidence on his device.[56]  In Andresen v. Maryland,[57] the Court addressed whether the Self-Incrimination clause precludes the prosecution from introducing evidence seized pursuant to a search warrant in its case-in-chief against the defendant.[58]  The evidence at issue was “business records [containing] statements made by the petitioner.”[59]  The Court held that “suppression” was not compelled because the defendant “was not asked to say or to do anything.”[60]  As the Court explained, “when these records were introduced at trial, they were authenticated by a handwriting expert, not by” the defendant.[61]

However, in reaching this conclusion the Court was careful to separate the Fifth Amendment problem from the defendant’s obligations vis-à-vis a police investigation under a search warrant: “a seizure of [incriminating] materials by law enforcement officers differs [from production of the same materials in compliance with a subpoena], in a crucial respect . . . . [The] individual against whom the search is directed is not required to aid in the discovery, production, or authentication of incriminating evidence.”[62]  Andresen conflicts with the strongbox analogy found in Doe II and Hubbell.  But it is clear from Andresen that a defendant need not “aid” the police in their investigation.[63]  Providing the key to a strongbox, one clearly insurmountable by any other means, is providing aid in a criminal investigation.

If Andresen is still good law, it follows that a warrant cannot compel the defendant to “aid” in his indictment, arrest, or prosecution.[64]  The Fifth Amendment guarantees this protection, and the Fourth Amendment does not deplete it.  The state cannot circumvent one constitutional prohibition by satisfying another.  Meeting all of the requirements for a valid warrant cannot—and should not—allow police officers to compel criminal suspects to assist in their own prosecution.[65]  Thus, police officers seeking to access a smart-phone must bypass the biometric lock by means independent of compelling the defendant to do so.


*J.D., University of Illinois (’14); M.Sc. Applied Economics, Florida State University (’09).  Special thanks go to Benjamin Sunshine for bringing this topic to my attention.  I also thank my professors, Margareth Etienne, Andrew Leipold, and Shannon Moritz, as well as Dean Jamelle Sharpe, for their advice.  Thanks are also due to my friends Michael Corliss and Derek Dion, along with the Editing Board at the Illinois Journal of Law, Tech. & Policy, for their support and indulgence.  Finally, I thank my family and loved ones, in particular my parents.

[1] Riley v. California, 134 S. Ct. 2473 (2014).

[2] Id. at 2485 (“[O]fficers must generally secure a warrant before conducting such a search.”).

[3] Id. (declining to extend United States v. Robinson, 414 U.S. 218 (1973)).

[4] Richard Re, Symposium: Inaugurating the Digital Fourth Amendment, SCOTUSblog (June 26, 2014, 12:37 PM),

[5] See Suzanne Choney, Planned Obsolescence: Cell Phone Models, NBC News (Feb. 24, 2009, 8:57 AM) (“[M]ost phones have a market life cycle of nine to 12 months.”).

[6]  See id.  (stating a new model may “still look[] like the original . . . but . . . has a few new features”).


[7] Riley v. California, 134 S. Ct. 2473, 2489 (2014) (“The term ‘cell phone’ is itself misleading shorthand; many of these devices are in fact minicomputers that also happen to have the capacity to be used as a telephone.”) (emphasis added).

[8] Mark G. Milone, Biometric Surveillance: Searching For Identity, 57 Bus. Law 497, 497 (2001) (“Biometrics use immutable personal characteristics, such as facial features, fingerprints, and retinal patterns, to establish and authenticate identity.”).

[9] Touch ID. Security. Right at Your Fingertip., Apple, (last visited Oct. 4, 2014).

[10] Id.; see also David Pogue, In Arrival of 2 iPhones, 3 Lessons, N.Y. Times (Sept. 17, 2013), (“[Y]es, a password is a hassle; half of [smart-phone] users never bother setting one up.”).

[11] Apple Fingerprint Tech Raises “Privacy Questions,” BBC News (Sept. 20, 2013, 1:28 PM), (“Senator Al Franken, chairman of the influential Senate Judiciary Subcommittee on Privacy, Technology and the Law, has written to Apple boss Tim Cook explaining his security concerns.”).

[12] See U.S. Const. amend. V (“No person shall . . . be compelled in any criminal case to be a witness against himself.”); see also Holt v. United States, 218 U.S. 245, 252–53 (1910) (expressing the self-incrimination as a “prohibition of the use of [extorted] communications”).

[13] Holt, 218 U.S. at 252–53 (stating the clause “is a prohibition [against] the use of physical or moral compulsion”).

[14] That is simply found in the language of the Amendment. Cf. U.S. Const. amend. V (“No person shall . . . be compelled in any criminal case to be a witness against himself.”).

[15] Cf. Pennsylvania v. Muniz, 496 U.S. 582, 615 (1990) (Rehnquist, C.J., concurring) (“By ‘incriminating response’ we refer to any response—whether inculpatory or exculpatory—that the prosecution may seek to introduce at trial.”) (internal citation omitted).

[16] Holt, 218 U.S. at 252–53 (stating the clause “is a prohibition [against extorting] communication”).

[17] Id. at 245; Schmerber v. California, 384 U.S. 757 (1966); United States v. Wade, 388 U.S. 218 (1967); Gilbert v. California, 388 U.S. 263 (1967); United States v. Dionisio, 410 U.S. 1 (1973); Pennsylvania v. Muniz, 496 U.S. 582 (1990).

[18] Fisher v. United States, 425 U.S. 391 (1976); United States v. Doe (Doe I), 465 U.S. 605 (1984); Doe v. United States (Doe II), 487 U.S. 201 (1988).

[19] Muniz, 496 U.S. at 591.

[20] Holt, 218 U.S. at 245.

[21] Id. at 246.

[22] Id. at 252.

[23] Id. at 252.

[24] Id. at 253.

[25] Id. at 253.

[26] Schmerber v. California, 384 U.S. 757, 761 (1966).

[27] United States v. Wade, 388 U.S. 218, 222 (1967).

[28] Id. at 222–23; United States v. Dionisio, 410 U.S. 1, 17–18 (1973).

[29] Schmerber, 384 U.S. at 761.

[30] Pennsylvania v. Muniz, 496 U.S. 582, 590 (1990).  The Court, however, framed its ruling to include only tests that would not require the accused to give answers the veracity of which could become testimonial.  Id. at 600.

[31] Gilbert v. California, 388 U.S. 263, 266–67 (1967).

[32] Doe v. United States, 487 U.S. 201 (1988).

[33] Id. at 219.

[34] Id.

[35] Id. at 219–21 (Stevens, J., dissenting).

[36] Id.

[37] Id.

[38] Id. (emphasis added).

[39] Id. at 210 n.9 (internal citation and quotation marks omitted) (emphasis added).

[40] Id.

[41] Id.

[42] United States v. Hubbell, 530 U.S. 27 (2000).

[43] Id. at 43.

[44] Id. at 43 (citing Doe II, 487 U.S. at 210 n.9).

[45] This part is almost self-evident.  Safe combinations, much like passwords, require mental recollection.

[46] Practically speaking, combination locks are nothing more than the evolutionary successors of key locks.

[47] Adam M. Gershowitz, Password Protected? Can a Password Save Your Cell Phone from a Search Incident to Arrest?, 96 Iowa L. Rev. 1125, 1171 (2011) (citing Fisher v. United States, 425 U.S. 391, 410 (1976); Commonwealth v. Hughes, 404 N.E.2d 1239, 1244–45 (Mass. 1980)).

[48] One answer is that a person may be in possession of a key without knowing its purpose.  That cannot be the answer because a person can also know a sequence of numbers without knowing its meaning.  For example: what is the meaning of 01.02.54?

[49] The strongbox analogy gets more complicated when we expand its reach.  Take the following two scenarios, which paradoxically lead to opposing results.  First, if a person has four keys on a keychain, one of which opens a door, he can be compelled to identify which one opens the door.  Yet consider a door that has four locks, all opened by the same key.  If, to open the door, the person must always unlock the locks in a given sequence—almost like a safe combination—then it follows from Doe II that he cannot be compelled to tell the authorities the sequence, even though he is required to provide the key.

[50] See Gershowitz, supra note 47, at 1171 (suggesting PINs are testimonial because they “reveal the contents of [a person’s] mind by recalling” the sequence).

[51] Kristian Köhntopp, Comment to Fingerprints are Usernames, not Passwords, Dustin Kirkland (Oct. 7, 2013), (“We could each conveniently identify ourselves by our fingerprint.”).

[52] See Apple, supra note 9 (explaining the process for calibrating the fingerprint authentication system).

[53] See, e.g., In re Boucher, No. 2:06-mj-91, 2007 WL 4246473, at *2 (D. Vt. Nov. 29, 2007), rev’d, No. 2:06-mj-91, 2009 WL 424718 (D. Vt. Feb. 19, 2009) (denying prosecutors’ request that the subject of a grand jury subpoena provide access to his computer by entering the password, even if done privately in a secluded room).

[54] But see Köhntopp, supra note 51 (suggesting “biometrics cannot, and absolutely must not, be used to authenticate an identity”).

[55] Cf. Riley v. California, 134 S. Ct. 2473, 2485 (2014) (“[O]fficers must generally secure a warrant before conducting such a search.”).  The particularity requirement would overcome the generality of the subpoena found in Hubbell.

[56] Andresen v. Maryland, 427 U.S. 463, 473–74 (1976).

[57] Id.

[58] Id. at 465.

[59] Id. at 471.

[60] Id. at 473.

[61] Id. at 474.  This answer reinforced a consistent theme requiring an intermediary to separate an inference of guilt from the conduct.  See, e.g., Holt v. United States, 218 U.S. 245 (1910) (requiring jury to authenticate physical match); Schmerber v. California, 384 U.S. 757 (1966) (requiring blood testing); United States v. Wade, 388 U.S. 218 (1967) (requiring voice matching); Gilbert v. California, 388 U.S. 263 (1967) (requiring hand exemplar matching).

[62] Id. at 473–74 (emphasis added).

[63] Id.

[64] Id.

[65] Cf. id. This is true both from Andresen, and the language and spirit of the Bill of Rights.

Anything You Say May Be Used Against You: Corporate Voiceprint Tactics Trigger Latest Privacy & Security Concerns

By Shruti Panchavati*

“We raid speech for its semantic meaning, and then discard the voice like detritus leftovers.”[1]

I. Introduction

Work is being done to integrate various biometrics into mobile devices, but the human voice is a natural choice for businesses because public attention on voiceprinting is shockingly low.  For instance, it came as no surprise when privacy concerns began to take form even as Apple unveiled its fingerprint scanner on the newest iPhone 5S;[2] lawmakers and advocates declared it a hacker’s “treasure trove.”[3]  And yet, despite its obvious functional similarities to the fingerprint scanner, Apple’s voiceprint scanner “Siri” has received little public scrutiny, suggesting a widespread misunderstanding about the human voice, one that mobile giants have been quick to capitalize on.[4]  The result is chilling: in the absence of legal and regulatory guidelines, these corporations could be on their way to creating the largest name-to-voice database, without even trying.

An increasing number of mobile companies are combining voiceprint technology with broad privacy policies to gain an unfettered right to collect, store, and use an individual’s data for an indefinite period of time.  This Article examines Apple’s voiceprint policy and argues that modern-day remedial strategies have failed to protect users’ privacy and security.  In response, states should adopt and implement California’s Right to Know Act, which would allow users to access and track their digital footprint.  Part II of this Article highlights the sweeping implications of corporate voiceprinting.  Part III exposes the wide-reaching privacy and security implications of Apple’s ill-named “Privacy” Policy.  Part IV recommends a practical, effective solution that balances the privacy concerns of the user against the commercial interests of the mobile industry.

II. An Audible Signature

Voiceprinting (also referred to as “voice biometrics”) creates a mathematical representation of the sound, pattern, pitch, and rhythm of an individual’s voice, which can then be used for any number of purposes, such as recognition or identification.[5]  The technology has the distinct advantage of basing authentication on an intrinsic human characteristic—the human voice.  It is our audible signature and, just as no two fingerprints are alike, no two voices are alike.[6]  It is also a powerful guide to the speaker’s most terrifyingly intimate details.[7]  With just a few words, the voice can reveal an individual’s gender, age, height, health, emotional state, sexual orientation, race, social class, education, and relationship to the person being spoken to.[8]  It is a remarkably rich resource that is largely taken for granted, in part because of the spread of mobile devices.

Mobile technology appears to have dissociated the voice from the body, lulling the public into a false sense of security about corporate voiceprinting.  To see its implications, consider that financial service organizations have already implemented voice biometrics to allow users to check account balances, make payments, and track transactions using only their voice.[9]  Additionally, governments across the globe are investing in voice biometrics that would allow them to tuck away millions of voiceprints for surveillance and law enforcement.[10]  Indeed, the human voice is now more valuable than any password or PIN, and widespread corporate collection, storage, and use of our audible signatures raise grave privacy and security concerns, prompting the question: can mobile companies be trusted to handle this technology responsibly?

III. Unraveling Apple’s Voiceprint Policy

On October 4, 2011, Apple unveiled the iPhone 4S with Siri, a built-in interactive personal assistant.[11]  While Siri was not the first foray into speech-recognition technology,[12] it is the most popular; after only five months of availability, the iPhone 4S had sold about 15.4 million units.[13]  Siri is undoubtedly a remarkable technological achievement, but combined with Apple’s overbroad Privacy Policy, it can have many unforeseeable consequences for innocent users.

Apple’s iOS 7 Software License Agreement, in relevant part, notes that, “[w]hen you use Siri or Dictation, the things you say will be recorded and sent to Apple in order to convert what you say into text and to process your requests.”[14]  In other words, anything said to Siri is recorded and sent to the company’s data farm in North Carolina, where Apple converts the spoken words into a digital code.[15]  Not mentioned in the Privacy Policy is that the company assigns each user a randomized number and, as voice files from Siri requests are received, assigns the data to that number.[16]  After six months, Apple then “disassociates” the user number from the voice clip, but maintains records of these disassociated files for up to eighteen months for “testing and product improvement purposes.”[17]  However, it remains unclear what Apple really does when it “disassociates” these files or what it means to use user voiceprints for “testing and product improvement purposes.”  Moreover, without any regulatory oversight, there is no guarantee that Apple ever actually deletes these records after eighteen months, or at all.

Siri’s Privacy Policy further states that “[b]y using Siri or Dictation, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including [the user’s] voice input and User Data, to provide and improve Siri, Dictation, and dictation functionality in other Apple products and services.”[18]  The information collected includes “all types of data associated with your verbal commands and may also include audio recordings, transcripts of what [is] said, and related diagnostic data.”[19]  What Apple is describing here is a voiceprint, so by accepting the license agreement, a user consents to the company’s collection, storage, and use of his or her voice biometric data.  Additionally, Apple gives itself the right to share this data with any of its unnamed partners and subsidiaries, without notice or cause and for an indefinite period of time.

It may be argued that Apple and other such companies know better than to misuse user information because doing so would be a poor public relations strategy.  There is no evidence that corporations are currently exploiting their position.[20]  However, the problem remains that no one—users, lawmakers, privacy advocates, or politicians—knows what is happening behind closed doors, and Apple is not saying either way.[21]  The personal data economy has become a largely elusive and highly lucrative world and, as always, the real concern in privacy and security is not what is happening, but what could happen.

IV. Recommendation & Conclusion

With the widespread use of voiceprint technology in mobile phones, it is no surprise that companies, such as Apple, have digital portfolios on each user.  Banning voice biometric technology is not a desired option and admittedly companies do need some information about a user and his or her preferences to operate applications, such as Siri, efficiently.[22]  However, present-day remedies do not provide sufficient protections against corporate intrusions and data thefts.[23]

In the face of this dilemma, California’s “Right to Know” Act sets an unprecedented level of corporate transparency, giving users the right to access and track their own private data.[24]  Specifically, the Act requires “any business that holds a customer’s personal information to disclose it within 30 days of that customer’s request. Adding to this, names and contact information of all third parties with which the business has shared that customer’s data with during the previous 12 months must also be disclosed.”[25]  Additionally, if the company refuses disclosure, the user has the legal right to bring a civil claim forcing it to comply with the law.[26]  The Act mimics the right to access data that is already available to residents of Europe, proving that big technology giants, such as Apple, already have the procedures in place to respond.[27]  As more and more companies continue to implement efficient strategies to facilitate the process, the Act will not only have introduced corporate transparency into the digital age, but will likely also have made it the norm.

It may be argued that the Right to Know Act is too modest because it does not actually give users the right to correct or delete their personal data.  These are certainly important considerations down the road and, in a perfect world, users would have full and complete control of all of their information.  However, it may be a long time, if ever, before robust privacy and security strategies can be implemented.  In the meantime, the Right to Know Act is an important first step toward putting privacy and security back in the hands of the user.


*J.D. Candidate, University of Illinois College of Law, expected 2015.  B.S. Psychology with Neuroscience Option, Pennsylvania State University, 2012.  I am grateful to the editors of the Journal of Law, Technology, and Policy for their advice and insight on this piece.

[1] Anne Karpf, The Human Voice: How this Extraordinary Instrument Reveals Essential Clues About Who We Are 13 (Bloomsbury USA, 1st ed. 2006).

[2] Chenda Ngak, Should You Fear Apple’s Fingerprint Scanner?, CBS News (Sept. 24, 2013, 10:12 AM),

[3] Charlie Osborne, iPhone Fingerprint Scanner Sparks Privacy Worries, CNET (Sept. 17, 2013, 9:55AM),

[4] See Kevin C. Tofel, How to Enable Experimental “OK Google” Voice Recognition on your Chromebook, Gigaom (Nov. 21, 2013, 8:33 AM), (noting that Google Voice is already a popular feature on the Android smartphone and Chrome).

[5] Authentify, Voice Biometric Authentication, (last visited Sep. 15, 2014).

[6] See id. (“A voice biometric or ‘voice print,’ is as unique to an individual as a palm or finger print.”).

[7] Karpf, supra note 1, at 10–11.

[8] Id.

[9] Omar Zaibak, 3 Banks Using Voice Biometrics for Security and Authentication, Voice Trust (Mar. 24, 2014),

[10] Noel Brinkerhoff, Governments Begin to Build Voice Print Databases, All Gov (Oct. 6, 2012),

[11] Press Release, Apple Launches iPhone 4S, iOS 5 & iCloud (Oct. 4, 2011), available at

[12] Bernadette Johnson, How Siri Works, HowStuffWorks, (last visited Sep. 15, 2014).

[13] Id.

[14] iOS Software License Agreement, Apple, available at (last visited Sep. 15, 2014) (emphasis original).

[15] John W. Mashni & Nicholas M. Oertel, Does Apple’s Siri Record and Store Everything You Say?, Technology Law Blog (July 17, 2012),

[16] Eric Slivka, Anonymized Siri Voice Clips Stored by Apple for Up to Two Years, MacRumors (Apr. 19, 2013, 6:42 AM),

[17] Id.

[18] iOS Software License Agreement, supra note 14.

[19] John Weaver, Siri is My Client: A First Look at Artificial Intelligence and Legal Issues, N.H. B. J., Winter 2012, at 6 available at

[20] See Matthew Panzarino, Apple Says It Has Never Worked With NSA To Create iPhone Backdoors, Is Unaware of Alleged DROPOUT JEEP Snooping Program, Tech Crunch (Dec. 31, 2013), (indicating that Apple denied creating any iPhone “backdoors” for the National Security Agency that would allow NSA to monitor Apple’s users).

[21] Barbara Ortutay, Apple Privacy Concerns: Experts Slam Apple Over ‘Locationgate,’ The Huffington Post (June 28, 2011),

[22] iOS Software License Agreement, supra note 14.

[23] Australian Associated Press, Facebook Gave Government Information on Hundreds of Australian Users, The Guardian (Aug. 28, 2013, 2:41 AM) (noting the failure of a claim by an Austrian law student, who invoked a “habeas data” right by demanding Facebook data).

[24] Rainey Reitman, New California “Right to Know” Act would Let Consumers Find out who has their Personal Data—and Get a Copy of it, Electronic Frontier Foundation (Apr. 2, 2013),

[25] Assemb. B. 1291, 2013–2014 Leg., Reg. Sess. (Cal. 2013), available at

[26] Id.

[27] Reitman, supra note 24.

Putting a Stop to New-Age Revenge: In the Age of Twitter, Instagram and Snapchat, Privacy Is Still Fundamental

By Michael Wester*

I. Introduction

In 2009, Florida resident Holly Jacobs had been dating a man for several years when the relationship suddenly ended.[1]  Following the breakup, her ex-boyfriend posted naked photos of her online.[2]  Within days, the photos went viral, and the intimate images appeared on as many as 100,000 sites.[3]  As a result, Holly Jacobs was pressured to quit her job, was forced to change her name, and suffered emotional stress and anxiety.[4]

In response, Ms. Jacobs filed a lawsuit against her ex-boyfriend alleging that he posted naked pictures of her online without her consent.[5]  However, despite the damage he caused, Holly Jacobs’s ex faced no criminal punishment for his actions.[6]  Unfortunately, the state of Florida does not have a law that prohibits such action, so Ms. Jacobs eventually had no choice but to drop her case.[7]

Holly Jacobs was a victim of revenge pornography, and like so many other victims of revenge pornography, she did not get justice.  Revenge pornography is a subset of non-consensual pornography that significantly impedes an individual’s fundamental right to privacy.[8]  Often, the victims provide the explicit photos or videos to a dating partner consensually, but do not consent to the photo being shared publicly.  In other cases, the images are taken without the consent of the partner.  The intimate images are then uploaded to the Internet where they can then be spread to anyone who has Internet access.  Over the last several years, the number of non-consensual pornography websites has increased annually.[9]

Recently, a study surveyed more than 1,000 American adults and found that sixty-eight percent of them had shared intimate messages or photos from their cellphones.[10]  Of these 1,000 adults, more than ninety of the respondents said an ex had threatened to post risqué photos online.[11]  Of those ninety respondents, more than fifty of them claimed the threat was carried out.[12]  Once these intimate images or videos reach the Internet abyss, it becomes almost impossible to take them all down.[13] Unfortunately, only six states—Alaska, California, Idaho, New Jersey, Utah, and Wisconsin—have legislation that criminalizes non-consensual pornography.[14]

Presently, no federal law exists to protect victims from this abuse.  Instead, the Communications Decency Act actually shields Internet service providers and website owners from civil liability that might result when a third party places material on their website.[15]  Further, because of the Communications Decency Act’s existence, the only current way to stop the spread of revenge pornography is by punishing the source—the individuals posting the images to the websites—through state law.[16]  Although a federal law is preferable because these state laws have limited jurisdiction,[17] state laws have indeed attained some success.[18]

This Article aims to spread awareness of these abhorrent occurrences of revenge pornography and urges states across the country to pass criminal laws that can finally put a stop to them.  Yet, this Article also warns that any state passing such a law will be walking a tightrope between protecting revenge pornography victims and violating constitutional rights to free speech.  To facilitate the discussion, Part II of this Article provides a brief overview of non-consensual pornography and the problems it causes.  Part III promotes criminal punishment for those engaging in revenge pornography.  Finally, Part IV implores state governments to pass legislation that criminalizes the intentional disclosure of sexually explicit images without consent.  Additionally, Part IV discusses the four necessary elements of a model law.

II. Background

Non-consensual pornography, including revenge pornography, is not a new phenomenon.  It can be traced back to the 1980s, when Hustler, a pornographic magazine, started a monthly feature that urged subscribers to submit explicit female images for the next issue.[19]  Many times, the women either did not know the pictures had been taken or were entirely unaware they had been submitted.[20]  Accompanying the photos was usually a fake biography, consisting of made-up hobbies, sexual fantasies, and interesting facts.[21]  Several women sued the magazine for publishing their photos without their consent, but few, if any, were successful in obtaining justice.[22]

With the advancement of technology, the ability to take and disseminate non-consensual pornography has significantly increased.  Today, dozens of websites exist that receive and then distribute non-consensual intimate images.  Frequently, these websites allow the material to be uploaded anonymously.[23]  The victims, however, are often not so lucky and do not remain anonymous.  Just like with Hustler’s monthly feature, on some websites anonymous posters are encouraged to share the women’s personal information, including their personal Facebook pages, phone numbers, email addresses, workplaces, and home addresses.[24]  The release of this information can put the victims’ safety in danger.

As the material circulates on the Internet, victims are exposed to a range of sexual propositions, stalking, and harassment.[25]  In some cases, victims have reported losing jobs and educational opportunities as a result of this material spreading to people close to their lives.[26]  Often, release of these explicit images causes extreme emotional distress, anxiety, and psychological trauma.[27]  Sadly, their release has even led several victims to commit suicide.[28]

In most cases, attempting to have the material removed from the Internet proves unsuccessful.[29]  Moreover, trying to take down the material is painstaking and expensive.[30]  The intimate photos circulate across the web almost instantaneously, and the sites are constantly morphing, which makes it almost impossible to chase down every website on which the material has been posted.  For example, Is Anyone Up, one notorious non-consensual porn site, hosted images of thousands of ex-girlfriends and received over 30 million page views before it was finally shut down after months of pursuit by victims.[31]  In spite of these victims’ efforts, the website immediately reopened elsewhere under a different name and IP address.[32]

III. Criminal Versus Civil Punishment

Criminal laws are necessary to combat non-consensual pornography because tort law and copyright law are insufficient.  As they currently exist, tort law and copyright law provide no real relief to these victims.[33]  As stated by Mary Anne Franks, the foremost expert on revenge pornography and a professor at the University of Miami School of Law, although civil remedies sometimes help the victims, they are not enough.[34]  Rather, because the damage to their lives is so great, the victims simply wish these photos had never been released.

States must prevent this intimate material from surfacing online in the first place.  Their best weapon is to criminalize the action.  Some proponents argue that non-consensual pornography should be criminalized because, in many ways, it is an act of sexual use without consent, analogous to sexual assault.[35]  More importantly, though, by criminalizing non-consensual pornography, states can ensure all victims receive justice.

Civil litigation places enormous financial burdens on victims.[36]  Private individuals have immense determination to seek justice but often lack the means to pursue it.  In non-consensual pornography cases in particular, only the victims who can afford lawyers and private detectives to chase down every source receive any relief.[37]  Frequently, the websites that victims must sue possess the financial resources to slow the investigation process and stop the pursuit.[38]  Criminalizing non-consensual pornography could help alleviate the burden placed on victims.

IV. Plea for New State Laws

This Article recommends that states pass criminal laws specifically tailored to stopping revenge pornography.  Any such state law should define non-consensual pornography as the “posting or publishing to the public of a sexually explicit image without consent.”  Several states currently have broader voyeurism laws, which prohibit the non-consensual recording and distribution of sexually explicit images of another person,[39] but these laws are often insufficient to sustain allegations in revenge pornography cases.[40]  The voyeurism laws remain insufficient because they protect neither those who consented to being recorded but did not consent to the distribution of those images, nor those who recorded the images themselves but did not consent to their distribution.  To truly combat non-consensual pornography, every law should contain four elements.

First, a model law should define “explicit images.”  These images should be defined as photographs, film, videotapes, recordings, or any other reproduction of the image of another person, whose intimate parts[41] are exposed or who is engaged in an act of sexual contact.  Second, it should define the “posting” or “publishing” of the intimate images to the public.[42]  “Publishing” should be defined as the disclosing, selling, providing, transferring, distributing, circulating, disseminating, presenting, exhibiting, advertising, manufacturing, or offering of the explicit images to a public forum.[43]

Third, the law should define “consent” as contextual.  Just because a photograph or video is taken consensually in the privacy of the home does not mean that a partner has consented to its posting on the Internet.  As stated by Professor Mary Anne Franks, “sharing a nude picture with another implies limited consent similar to other business transactions.”[44]  As she told the Huffington Post, “[i]f you give your credit card to a waiter, you aren’t giving him permission to buy a yacht.”[45]  It is imperative that any law punish the abuse of this limited consent, because the violation of this consent is in many ways more detrimental than the violation of consent in a business transaction.  In this case, it is not business and money on the line, but rather an individual’s privacy and personal security.  To establish this context, a form of a reasonable man standard is recommended.  In Franks’ example, a reasonable waiter would not take a customer’s credit card and buy a yacht.  Likewise, in a non-consensual pornography context, a reasonable man would not take intimate photos from an ex and, without the ex’s permission, release them onto the Internet for the whole world to see.

Finally, each state should add exceptions for individuals to whom the law does not apply.  For example, in Wisconsin, the law does not apply to parents, guardians, or a provider of Internet access.[46]  Additionally, Wisconsin established an “Anthony Weiner”[47] exception, which excludes from the law “[a] person who posts or publishes a private representation that is newsworthy or of public importance,”[48] based on the idea that individuals who have placed themselves in the public eye have a significantly diminished privacy interest.[49]  As another example, New Jersey has an exemption for law enforcement officers in connection with a criminal prosecution.[50]

V. Conclusion

States must be proactive instead of reactive.  New Jersey, in 2004, became the first state to pass an anti-revenge porn law after a Rutgers University student killed himself.[51]  The law was the first of its kind and made it a felony for any person to disclose sexually explicit photographs or images of another person without that person’s consent.[52]  Nevertheless, one victim had to lose his life before the law was passed.  Clearly, states cannot wait a second longer to pass a law prohibiting non-consensual pornography.

Wisconsin, Idaho, and Utah passed laws relating to this issue in 2014.[53]  New York, Maryland, and Illinois are among the thirteen states currently considering some form of non-consensual porn legislation.[54]  Additionally, Representative Jackie Speier of the United States House of Representatives announced that she intends to introduce, in the next few months, a federal bill drafted by Professor Mary Anne Franks that would criminalize revenge pornography.[55]  Although the bill is a stride in the right direction, many believe it will not pass at the federal level.[56]  Consequently, strong state criminal legislation is necessary so that this destructive and inexcusable form of sexual exploitation is prevented and punished.

How many more people must be harmed before legislators around the country wake up and protect their constituents? Advances must be made to shield victims, like Holly Jacobs, from the grave harms associated with the distribution of non-consensual pornography.


*J.D. Candidate, University of Illinois College of Law, expected 2015.  B.A. 2012, Saint Louis University.  I would like to thank everyone who helped me in the writing of this Article with suggestions and comments, including Angelica Nizio and Andrew Lewis, editors of the Journal of Law, Technology, and Policy, and Kristen Sweat.

[1] Beth Stebner, “I’m Tired of Hiding”: Revenge-Porn Victim Speaks out over her Abuse After Claims Ex Posted Explicit Photos of her Online, Daily News (May 3, 2013, 12:05 PM),

[2] Memphis Barker, “Revenge Porn” Is No Longer a Niche Activity Which Victimizes Only Celebrities – The Law Must Intervene, Independent (May 19, 2013),–the-law-must-intervene-8622574.html.

[3] Id.

[4] Id.

[5] Id.

[6] Id.

[7] Stebner, supra note 1.

[8] Woodrow Hartzog, How to Fight Revenge Porn, Atlantic (May 10, 2013, 1:42 PM),

[9] Tristan Hallman, Saying She’s a Victim of Revenge Porn, Dallas Woman Fights to Get Online Images Removed, Dallas News (Feb. 16, 2014, 11:00 PM),

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] See Michelle Dean, The Case for Making Revenge Porn a Federal Crime, Gawker (Mar. 27, 2014, 2:45 PM), (noting the different states that have laws that affect revenge porn); Erin Donaghue, Judge Throws Out New York “Revenge Porn” Case, CBS News (Feb. 25, 2014, 4:42 PM), (mentioning the Alaska, California, and New Jersey statutes while discussing New York’s lack of a revenge porn criminal statute). See also State “Revenge Porn” Legislation, Nat’l Conf. St. Legislatures, (last visited May 27, 2014) (listing the states that considered revenge porn legislation in the years 2013 and 2014).

[15] 47 U.S.C. § 230 (2006).

[16] 47 U.S.C. § 230 (2006) (“Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”).

[17] Mary Anne Franks, Why We Need a Federal Criminal Law Response to Revenge Porn, Concurring Opinions (Feb. 15, 2013),

[18] EJ Dickson, Revenge Porn Site Ordered to Pay $385,000 to Victim, Daily Dot (Mar. 20, 2014),

[19] Alexa Tsoulis-Reay, A Brief History of Revenge Porn, N.Y. Mag. (Jul. 21, 2013),

[20] Amanda Levendowski, Our Best Weapon Against Revenge Porn: Copyright Law?, Atlantic (Feb. 4, 2014, 1:03 PM),

[21] Id.

[22] Id.

[23] See Women Sue Explicit “Revenge Porn” Site After Jilted Lovers Anonymously Posted Revealing Pictures of Them, Daily Mail (Jan. 25, 2013, 6:22 PM), (discussing a lawsuit against a website that allows people to anonymously upload revenge porn).

[24] Dickson, supra note 18; Brenton Awa, Hundreds of Local Women Fall Victim to “Revenge Porn”, KITV (Feb. 26, 2014, 11:00 PM), /hawaii/hundreds-of-local-women-fall-victim-to-revengeporn/24708622.

[25] Mary Anne Franks, We Need New Laws to Put a Stop to Revenge Porn, Independent (Feb. 23, 2014), comment/we-need-new-laws-to-put-a-stop-to-revenge-porn-9147620.html.

[26] Id.

[27] Id.

[28] Awa, supra note 24; see Barker, supra note 2 (noting that the New Jersey anti-revenge pornography law was passed after a Rutgers University student killed himself).

[29] Franks, supra note 25.

[30] See Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Porn, Wake Forest L. Rev. 5 (forthcoming 2014), available at (noting the steep costs associated with revenge porn).

[31] Barker, supra note 2.

[32] Citron & Franks, supra note 30, at 4.

[33] EJ Dickson, Texas Woman Wins Largest Settlement Ever in Revenge Porn Case, Daily Dot (Feb. 28, 2014),

[34] Id.

[35] Franks, supra note 25.

[36] Citron & Franks, supra note 30, at 5.

[37] See Barker, supra note 2 (stating that in Holly Jacobs’ case, she found that only the rich and famous can wield civil laws effectively because they are the only ones who can afford lawyers to chase after all of the sites that have posted the materials).

[38] Id.

[39] See Cynthia J. Najdowski & Meagen M. Hildegrand, The Criminalization of “Revenge Porn”, Am. Psychol. Ass’n (Jan. 2014), (noting that critics state that this is a violation of the right to free speech).

[40] Mary Anne Franks, Why we Need a Federal Criminal Law Response to Revenge Porn, Concurring Opinions (Feb. 15, 2013),

[41] See Alaska Stat. § 11.61.120 (2013) (defining “intimate parts” as genitals, anus, or female breast).

[42] See N.J. Stat. Ann. § 2C:14-9 (West 2014) (providing an exception for law enforcement officers engaged in the official performance of their duty).

[43] Id.

[44] Jessica Walters, Why “Revenge Porn” Is Legal in 48 States, Avvo Blog (Dec. 4, 2013, 1:57 PM),

[45] Anne Flaherty, “Revenge Porn” Victims Demand New Laws, Huffington Post (Nov. 15, 2013, 3:35 AM), revenge-porn-laws_n_4280668.html.

[46] See Wis. Stat. § 942.09 (2013) (stating that parents and guardians are excluded as long as the representation does not cross into child abuse or child pornography and the publication is not for commercial purposes).

[47] See Tracy Connor, Anthony Weiner Admits Sexting Continued After 2011 Resignation from Congress, NBC News (July 24, 2013, 3:53 AM), (demonstrating how the exception to Wisconsin’s law operates).

[48] Wis. Stat. § 942.09 (2013).

[49] See Corliss v. E.W. Walker Co., 57 F. 434 (C.C.D. Mass. 1893); see also Rosenfeld v. U.S. Dep’t of Justice, No. C-07-3240 EMC, 2012 WL 710186 at *5 (N.D. Cal. Mar. 5, 2012).

[50] N.J. Stat. Ann. § 2C:14-9 (West 2014).

[51] Barker, supra note 2.

[52] N.J. Stat. Ann. § 2C:14-9 (West 2014).

[53] Michelle Dean, Wisconsin Passes Anti-“Revenge Porn” Law, Gawker (Apr. 9, 2014, 10:00 AM),

[54] Donaghue, supra note 14.

[55] Steven Nelson, Federal ‘Revenge Porn’ Bill Will Seek to Shrivel Booming Internet Fad, U.S. News (Mar. 26, 2014), 2014/03/26/federal-revenge-porn-bill-will-seek-to-shrivel-booming-internet-fad.

[56] Liz Halloran, Race to Stop “Revenge Porn” Raises Free Speech Worries, NPR (Mar. 6, 2014, 11:16 AM), 286388840/race-to-stop-revenge-porn-raises-free-speech-worries (stating that there are constitutional perils in bills being considered because of the worry of protecting the right to free speech).