Motion Bank # M-3015 (Re: F 2.025 n4 [Motion to Exclude Fingerprint Identification Evidence and Request for a Hearing Pursuant to People v. Kelly (1976) 17 C3d 24, or, in the Alternative, Motion for Funds to Retain Fingerprint Experts and to Permit Their Testimony Before the Jury].)
The copyright for this motion is held by the author who reserves all rights. It is published by FORECITE Legal Publications with permission.
CAVEAT: The file below was not prepared by FORECITE. FORECITE has not made any attempt to review or edit this material and is not responsible for its content or format. FORECITE cannot guarantee the information is complete, accurate or up-to-date. You are advised to conduct your own independent, comprehensive research on all issues addressed in the material below.
NOTE: This document contains a motion, exhibits and a reply. The text of the footnotes appear at the end of each document.
Deputy Public Defender
555 Seventh Street, Second Floor
San Francisco, California 94103
Attorneys for Defendant
SUPERIOR COURT OF CALIFORNIA, COUNTY OF SAN FRANCISCO
THE PEOPLE OF THE STATE OF CALIFORNIA, SCN: 000000
Plaintiff, Date: February 18, 2000
Time: 9:00 a.m.
NOTICE OF MOTION AND MOTION TO EXCLUDE FINGERPRINT IDENTIFICATION EVIDENCE AND REQUEST FOR A HEARING PURSUANT TO PEOPLE V. KELLY (1976) 17 CAL. 3D 24, OR, IN THE ALTERNATIVE, MOTION FOR FUNDS TO RETAIN FINGERPRINT EXPERTS AND TO PERMIT THEIR TESTIMONY BEFORE THE JURY
TO TERENCE HALLINAN, DISTRICT ATTORNEY FOR THE CITY AND COUNTY OF SAN FRANCISCO, STATE OF CALIFORNIA, TO ASSISTANT DISTRICT ATTORNEY JOHN FARRELL AND TO THE ABOVE-ENTITLED COURT:
PLEASE TAKE NOTICE that on February 18, 2000 at 9:00 a.m. in Department __ of the above-entitled court, the defendant herein will move for an order excluding from evidence the People’s fingerprint identification evidence, and will and hereby does request a hearing pursuant to People v. Kelly (1976) 17 Cal.3d 24, on the grounds that the People’s fingerprint identification evidence is inadmissible under Kelly and under Evidence Code Sections 350, 352, 801(a), and 1200. In the alternative, defendant will and hereby does move for funds to retain certain fingerprint experts and for an order permitting their testimony before the jury.
This Motion is based on this Notice, the attached memorandum of points and authorities, the exhibits to be provided under separate cover, and any oral and documentary evidence and argument as may be produced at the hearing on said motion.
DATED: February 10, 2000
Respectfully Submitted, ____________________________
MICHAEL N. BURT
Attorneys for Defendant
STATEMENT OF FACTS
On October 29, 1987, Ms. L’s partially decomposed body was found in her San Francisco home after a mysterious anonymous telephone call alerted police to the presence of her body. Two latent fingerprints were recovered from the crime scene, but a computer search failed to reveal the identity of the person who had left the prints.
Eleven years later, on April 10, 1998, defendant JOHN DOE was arrested in North Beach on a trespassing charge, and his fingerprints were taken and routinely fed into the San Francisco Police Department’s Automated Fingerprint Identification System (A.F.I.S.). The computer alerted San Francisco Crime Laboratory fingerprint technician Wendy Chong to a possible match with latent fingerprints that had been recovered in 1987 by Inspector Kenneth Moses from a water heater at the Ms. L homicide scene. Thus alerted, Chong physically compared Mr. Doe’s known prints to two partial latent prints recovered from two different locations near the top of the water heater.
At the preliminary hearing, Chong testified on direct examination that it was her opinion that a partial latent print recovered three inches from the top of the water heater matched Mr. Doe’s left thumb print. (RT 214). It was further her opinion that a second partial latent print recovered in a different location two inches from the top of the water heater matched the right index finger of Mr. Doe. (Id.). She described the methodology of her comparison as follows: “I USE A MAGNIFYING GLASS THAT — THAT WAS ABOUT FIVE TIMES MAGNIFICATION AND I USE POINTERS. AND THEN LATER I JUST COMPARED THE POINTS OF IDENTIFICATION ON THE LATENT WITH THOSE OF THE KNOWN FINGERPRINTS.” (RT 215).
On cross examination, Ms. Chong forthrightly admitted the subjective nature of her opinion:
Q. ARE YOU SAYING THAT THE COMPUTER (DECLARES) A MATCH OR DOESN’T (DECLARE) A MATCH DEPENDING ON HOW YOU DRAW OUT THE TRACING OF THE LATENT?
A. YES, BECAUSE DIFFERENT PEOPLE WILL HAVE MAYBE A DIFFERENT INTERPRETATION OF WHAT THE FINGERPRINT LOOKS LIKE. THEREFORE, THEY CAN TRACE A LITTLE BIT DIFFERENTLY. AND IF YOU TRACE A LITTLE BIT DIFFERENTLY, YOU CAN HAVE A TOTALLY DIFFERENT CANDIDATES LIST….
Q. THE QUESTION IS: WHEN YOU ARE TRACING THE LATENT, ARE YOU DOING IT THE WAY A PAINTER WOULD DRAW A PAINTING, YOU ARE INTERPRETING WHAT IT LOOKS LIKE AND TRACING IT, OR ARE YOU PHYSICALLY PUTTING THE TRACE PAPER ON THE LATENT AND DRAWING IT?
A. OH, OKAY. SOME PEOPLE MIGHT SEE A BIFURCATION AS A RIDGE ENDING A(ND) SOME PEOPLE MIGHT SEE I(T) AS VIS-VERSA. AND THEN SOME PEOPLE MIGHT SEE A SHORT RIDGE AS BEING A DOT AND SOME PEOPLE WILL NOT ENTER THAT IN THE COMPUTER. SO, IT DEPENDS ON THE PERSONS WHO LOOKING AT THE FINGERPRINT, HOW THEY INTERPRET THAT FINGERPRINT TO BE. I AM SORRY.
Q. SO YOU ARE SAYING THAT TWO PEOPLE, TWO FINGERPRINT EXAMINERS LOOKING AT THE SAME FINGERPRINT CAN INTERPRET DIFFERENTLY THE CHARACTERISTIC OF THE PRINT –
A. YES.
Q. – RIGHT?
A. YES. IT DEPENDS ALSO ON THEIR TRAINING, YOUR EXPERIENCE AND MAYBE THEIR TECHNOLOGY THAT YOU ARE USING TOO.
Q. AND IN TERMS OF DOING MANUAL COMPARISONS, THE TECHNIQUE IS THE SAME, RIGHT, THE FINGERPRINT IN GENERAL; LOOKING AT THE PRINT AND THE INTERPRETERS INTERPRETS WHETHER IT’S A RIDGE OR A DOT OR SOME OTHER CHARACTERISTIC THAT YOU ARE COMPARING?
Q. AND THAT INTERPRETATION IS DEPENDING UPON THE TRAINING AND THE EXPERIENCE AND THE INTERPRETERS SKILLS OF THE PERSON DOING THE INTERPRETATION, RIGHT?
A. YES. (RT 223-225)
Wendy Chong further admitted that the subjective nature of fingerprint comparisons is not governed by any objective standards:
Q. DO YOU HAVE A CERTAIN NUMBER OF POINTS OF COMPARISON THAT YOU MAKE IN MAKING YOUR DETERMINATION THAT THE WATER HEATER PRINTS MATCH THE KNOWN PRINTS OF MR. DOE?
A. NOT REALLY. YOU ARE GOING ALSO ON POROSCOPY AND RIDGEOLOGY. SO YOU JUST COMBINE THEM. I REMEMBER LOOKING AT MR. DOE’S FINGERPRINTS AND I JUST DECIDED TEN WAS A GOOD NUMBER AND SO LATER I JUST STOPPED AT TEN. IT WAS A NICE PRINT, I JUST MADE AN ARBITRARY DECISION OF TEN POINTS THAT TIME.
Q. WERE THERE ANY POINTS OF DISSIMILARITY?
Q. YOU STOPPED LOOKING AT TEN, YOU SAW TEN POINTS OF SIMILARITY AND THEN SAID THAT IS A NICE ARBITRARY NUMBER I WILL STOP AT TEN?
A. I HAVE STOPPED AT 8, I HAVE STOPPED AT 12 BUT I JUST LIKE THE NUMBER 10.
Q. WELL, IS THERE SOME STANDARD WITHIN YOUR PROFESSION THAT FINGERPRINT EXAMINERS GO BY TO SAY YOU SHOULD HAVE A MINIMUM NUMBER OF POINTS OF COMPARISON BEFORE YOU CALL IT?
A. NO, BECAUSE YOU GO ON POROSCOPY AND YOU ARE GOING ON RIDGEOLOGY.
Q. WHAT DOES THAT MEAN?
A. YOU ARE GOING TO THE PORES OF THE FINGERPRINT RIDGES. AND LIKE ALSO WHEN YOU LOOK AT THE FINGERPRINT RIDGES THERE IS LIKE LITTLE INDENTATIONS ON IT AND LITTLE BUMPS ON THE FINGERPRINT, SO YOU ARE GOING ON THAT TOO.
Q. DO YOU KNOW WHAT THE FBI STANDARDS ARE IN TERMS OF NUMBER OF COMPARISONS?
A. THEY REALLY DON’T HAVE ONE EITHER.
Q. THEY DON’T HAVE A MINIMUM OF 12?
A. NO. (RT 258)
Ms. Chong further testified about the methodology of her comparison. She produced a latent print that had been recovered from the phone booth where the anonymous call had been made and indicated that Inspector Moses had matched the latent to a person other than Mr. Doe. (RT 254-255). An enlarged photograph of both the latent and the known print had been marked by Inspector Moses in red ink to indicate numerous points of similarity. (RT 257). In contrast, neither the latent prints from the water heater nor Mr. Doe’s known prints had been marked by Ms. Chong to indicate similar points. (RT 260-264). Somebody had marked the thumb latent with red ink, but Chong testified that “I can’t remember who marked it, … whether it was Ken or whether I did it.” (RT 260). There is no procedure in the Lab for documenting who puts ink marks on photographs of latent prints in homicide cases. (RT 260). The photo showed 8 red marks, not 10. (Id.) When asked whether she had any photographs to document her 10 points of similarity, Chong replied, “SOMETIMES YOU DON’T NEED TO MARK IT WITH A RED DOT. IF YOU ARE JUST GOING TO USE THE POINTERS YOU COULD JUST, YOU DON’T NEED THE MARK OF A RED DOT IF YOU JUST KEEP COUNTING UNTIL YOU GET 10, 12, 14.” (RT 261).
Finally, Ms. Chong acknowledged that she did not follow the proficiency testing requirements of her own Department. She was unaware until informed on the stand that there was a written requirement of her Department that “all C.S.I. (Crime Scene Investigation) staff who report fingerprint comparisons shall complete at least one proficiency test each year.” (RT 232). Since first starting to do fingerprint work in 1969, she had taken only two proficiency tests, one about fifteen to seventeen years ago and the second about five years ago. According to Chong, the earlier test was an “FBI TEST THAT TIME, I TOOK IT AND I DID THE LATENT COMPARISON WITH I THINK THE HAND PRINT INSTEAD OF THE FINGERPRINT THAT TIME, AND THEN LATER THE ANSWER WAS SCORE AS WRONG THAT TIME BUT THEN CHIEF TOM MURPHY… RECORRECTED MY WORK, THEN HE SAID I WAS RIGHT AT THE END.” (RT 231).
LAW AND ARGUMENT
With one important exception to be discussed below at pages 76-80, the admissibility of expert testimony based upon fingerprint evidence to prove identity is well established in every jurisdiction of the United States. See generally, David Faigman, Fingerprint Identification: Legal Issues, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony § 21-1.0, at 51 (David L. Faigman et al. eds., West 1997). Less well known is the manner in which the technique gained judicial acceptance during the 1920s. “This evidentiary development (was) characterized by meager judicial scrutiny combined with rapid spread of acceptance among numerous jurisdictions. Rapid, that is, considering what it takes for a case to reach a state supreme court, and considering that fingerprint identification presented the courts with a claim that was still novel (infinite individuality) in an astonishingly strong form (infallibility). And rapid considering the recent failings of anthropometry, the defective first child of forensic individualization, which had (mistakenly) made much the same claims.” Id. at 51. [Footnote 1]
In California, fingerprint evidence was given less than meager judicial scrutiny prior to its acceptance. In the early case of People v. Van Cleave (1929) 208 Cal. 295, the Court reversed a conviction based on fingerprint evidence because of the erroneous admission of a police fingerprint card that implied to the jury that the defendant had previously been in police custody. The Court simply assumed, with no discussion of the scientific issues involved, that the testimony of a fingerprint expert was properly before the jury. Seventeen years later, in People v. Adamson (1946) 27 Cal.2d 478, 495, the Court declared, again without any discussion, that “(f)ingerprint evidence is the strongest evidence of identity….” Thus, by the time the Court held in People v. Kelly (1976) 17 Cal.3d 24, 31-32 that “(w)hen identification is chiefly founded upon an opinion which is derived from utilization of an unproven process or technique, the court must be particularly careful to scrutinize the general acceptance of the technique,” fingerprint evidence had already been exempted from this scrutiny because it was simply assumed to be a proven process. The only indication the Court has ever given that fingerprint evidence should be subjected to the same scrutiny as other expert evidence came when the Court declared in 1993 that, in the absence of “competing expert testimony”, it is “unreasonable for defendant to suggest that the process might somehow have captured a fingerprint which did not exist, transformed some other image into a fingerprint, or changed the fingerprint of another person into one which matched defendant’s.” People v. Webb (1993) 6 Cal.4th 494, 524 [laser procedure for visualizing latent fingerprints not subject to Kelly test because the familiar image of the fingerprint makes reliability of the process readily apparent].
The assumption that the fingerprint process is reliable can no longer be sustained. Forensic scientists, and even fingerprint experts themselves, are now calling into question the fundamental scientific premises upon which fingerprint evidence is based. In these circumstances, under Kelly, “defendant is not foreclosed from showing new information which may question the continuing reliability of the test in question or to show a change in the consensus within the scientific community concerning the scientific technique”. People v. Smith (1989) 215 Cal.App.3d 19, 25; see also, People v. Soto (1999) 21 Cal. 4th 512, 540-541 n. 31 (“In a context of rapidly changing technology, every effort should be made to base (decision) on the very latest scientific opinions…”); People v. Allen (1999) 72 Cal. App. 4th 1093, 1101 (“The issue is not when a new scientific technique is validated, but whether it is or is not valid; that is why the results generated by a scientific test once considered valid can be challenged by evidence the test has since been invalidated.”).
The question here is not the uniqueness and permanence of entire fingerprint patterns, consisting of hundreds of distinct ridge characteristics. Rather, the question is far more specific: Is there a scientific basis for a fingerprint examiner to make an identification, to an absolute certainty, from a small distorted latent fingerprint fragment revealing only a small number of basic ridge characteristics, such as the ten characteristics identified by the lab technician in this case? There are two fundamental premises that underlie such an identification: First, that two or more people cannot possibly share this number of basic ridge characteristics in common; and second, that fingerprint examiners can reliably assert absolute identity from small latent print fragments despite the unknown degree of distortion and variability from which all latent prints suffer. With the issue properly framed, it is readily evident that the People cannot demonstrate the various indicia of scientific reliability set forth by our Supreme Court in People v. Kelly (1976) 17 Cal.3d 24, 31-32, and People v. Leahy (1994) 8 Cal. 4th 587, and elaborated upon by the United States Supreme Court in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579, 113 S.Ct. 2786, 125 L. Ed. 2d 469 (1993) and Kumho Tire Co., Ltd. v. Carmichael, 119 S.Ct. 1167 (1999).
First, there has been no testing of either of the two fundamental premises that underlie the proffered identification. The failure to test these premises has been repeatedly recognized by various scientific commentators.
Second, there is no known error rate for latent fingerprint examiners. Any claim that the error rate is “zero” is patently frivolous in light of the fact that “both here and abroad there have been alarming disclosures of errors by fingerprint examiners.” Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-1, pp. 740-741. It is also belied by the alarmingly high number of misidentifications that have occurred on latent print examiner proficiency exams. (See, infra at p. 48).
Third, fingerprint examiners do not possess uniform objective standards to guide them in their comparisons. To the contrary, there is complete disagreement among fingerprint examiners as to how to characterize particular features of a print and as to how many points of comparison are necessary to make an identification; indeed, many examiners now take the position that there should be no objective standard at all. As Ms. Chong testified in this case, “some people might see a bifurcation as a ridge ending (and) some people might see (i)t as vis-versa. And then some people might see a short ridge as being a dot… It depends on the persons who (are) looking at the fingerprint, how they interpret that fingerprint to be.” (RT 224).
Ms. Chong is not alone in her opinion that fingerprint identification is a subjective process. She indicated that one of her instructors was Royal Canadian Mounted Police expert David Ashbaugh. (RT 204, 218). In his recent book, which many in the fingerprint field are describing as the definitive source, Mr. Ashbaugh repeatedly stresses that “(t)he opinion of individualization or identification is subjective.” David Ashbaugh, Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology, 103 (CRC Press, Oct. 1999) [hereinafter Ashbaugh, Basic and Advanced Ridgeology]. See also, David Stoney, Fingerprint Identification: Scientific Status, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony § 21-2.1.2, at 65 (David L. Faigman et al. eds., West 1997) (“In fingerprint comparison, judgments of correspondence and the assessment of differences are wholly subjective: there are no objective criteria for determining when a difference may be explainable or not.”).
Fourth, there is no general consensus that fingerprint examiners can reliably make identifications on the basis of only ten matching characteristics. “There is no consensus on the number of points necessary for an identification. In the United States, one often hears that eight or ten points are ‘ordinarily’ required. Some local police departments generally require 12 points. In England, many examiners use 16 points as a rule of thumb. In France, the required number used most often is 24, while the number is 30 in Argentina and Brazil.” Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-7(A), p. 768. In England, a 16 point standard was adopted after it was discovered that prints from two different individuals shared from 10 to 16 points of similarity. I. W. Evett and R.L. Williams, A Review of the Sixteen Point Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1, 4, http://www.scafo.org/library/120101.aspxl [Footnote 2] (“Experts [in Britain] appeared to have a particularly poor regard for the fingerprint profession in the USA where there is no national standard. Cases of wrongful identification which had been made by small bureaus in the USA were cited as being symptomatic of a poor system and the dominant view was that such unfortunate events would not have occurred had there been a 16 points standard in operation”). Even matches that are based on 16 points of comparison and that have been verified by a second or third analyst have been shown to be in error. See, James E. Starrs, Judicial Control Over Scientific Supermen: Fingerprint Experts and Others Who Exceed The Bounds, (1999) 35 Crim. L. Bull. 234, 243-246 (describing two cases in England in 1991 and 1997 in which misidentifications were made despite the fact that the British examiners insist on 16 points for an identification and triple check fingerprint identifications) (hereinafter “Scientific Supermen”); Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-1, pp. 740-741 (discussing same cases) (“Fingerprint identification is not as infallible as many laypersons assume it to be.”). [Footnote 3]
Fifth, the professional literature of the fingerprint community confirms the scientific bankruptcy of the field. As David Ashbaugh acknowledges, historically
(m)ost efforts to describe or defend the evaluative identification process were more an exercise in reciting rhetoric and dogma as opposed to describing a scientific process. Specific facts and logical interpretation were conspicuous by their absence…. A fundamental circumstance that helped the new identification process gain acceptance was the fact that few identification specialists were challenged in court. Legal counsel shied away from dwelling on a science that was considered exact and infallible, a belief that was difficult to dispel without adequate and structured literature being available. Most challenges were haphazard at best, usually ill-prepared, and often confusing. The majority were doomed to fail. Each failure further entrenched the infallibility of the science.
It is difficult to comprehend that a complete scientific review of friction ridge identification has not taken place at some time during the last 100 years. A situation seems to have developed where this science grew by default. This is especially alarming in light of the magnitude of change contained in the new identification philosophy put forward in 1973 (abandoning an objective standard of a set number of similarities). Had challenges periodically surfaced, not only of the new process but the whole basis of friction ridge identification, they would have benefited all. Challenges should be welcomed within a science as an opportunity to present the founding premises and demonstrate the strength of current methodologies. Challenges lead to open debate, published articles, and a platform of discussion from which all can learn.
(Ashbaugh, Basic and Advanced Ridgeology, supra, at 3-4.)
Sixth, latent fingerprint identifications are analogous to other techniques, such as DNA analysis, handwriting analysis and hair fiber comparisons, that courts, in the wake of a new skepticism toward junk science, have now found to be scientifically unreliable and hence inadmissible. People v. Venegas (1998) 18 Cal.4th 47 excluded previously accepted RFLP-DNA testing conducted by the F.B.I. because that agency had not followed correct scientific procedures. The United States Supreme Court’s decisions in Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) 509 U.S. 579 and Kumho Tire Co. v. Carmichael (1999) __ U.S. __, 119 S.Ct. 1167, and our own Supreme Court’s decisions in Venegas and Leahy, demand that scientific evidence be based on reliable and generally accepted “scientific” principles and methods, and that only correct scientific procedures be used in each case.
Citing Daubert or Kumho Tire, several recent federal cases have held that some traditionally accepted techniques, such as handwriting comparison (United States v. Santillan (N.D. Cal. 1999) __ F.Supp. __, 1999 WL 1201765; United States v. Hines (D. Mass. 1999) 55 F.Supp. 62; United States v. McVeigh, 1997 WL 47724 (D. Colo. Trans. Feb. 5, 1997); United States v. Starzecpyzel (S.D.N.Y. 1995) 880 F. Supp. 1027, 1038) and hair comparison (see Williamson v. Reynolds (E.D. Okla. 1995) 904 F. Supp. 1529, 1558, rev’d on other grounds (10th Cir. 1997) 110 F.3d 1523), are no longer supported by current scientific research. And in an analogy even closer to the context of fingerprints, the Washington Court of Appeals has recently reversed an aggravated murder conviction because the state did not establish that latent earprint identification was generally accepted in the forensic science community, as required for admissibility under the Frye test. State v. Kunze (1999) 97 Wash.App. 832, 988 P.2d 977.
As these cases illustrate, the fact that an allegedly scientific procedure has been accepted by courts in the past does not insulate that procedure from challenge based on advances in scientific thinking. Northern California Federal District Court Judge Lowell Jensen put the matter bluntly: “The government is correct in their assertion that pre-Daubert/Kumho Ninth Circuit precedent supports the admissibility of (handwriting) testimony; however, the world has changed. The Court believes that… a past history of admissibility does not relieve this Court of the responsibility of now conducting Daubert/Kumho analysis as to this proffered expert testimony.” United States v. Santillan (N.D. Cal. 1999) __ F.Supp. __, 1999 WL 1201765 at p. 4. See also, United States v. Hines (D. Mass. 1999) 55 F.Supp. 62, 67 (“The Court is plainly inviting a reexamination even of ‘generally accepted’ venerable, technical fields.”).
Our Supreme Court is in agreement with this forward-looking approach. Most recently, in People v. Soto (1999) 21 Cal. 4th 512, 540-541 n. 31, the Court emphasized that “in a context of rapidly changing technology, every effort should be made to base (decision) on the very latest scientific opinions…” See also, People v. Allen (1999) 72 Cal. App. 4th 1093, 1101 (“The issue is not when a new scientific technique is validated, but whether it is or is not valid; that is why the results generated by a scientific test once considered valid can be challenged by evidence the test has since been invalidated.”); People v. Smith (1989) 215 Cal.App.3d 19, 25 [263 Cal.Rptr. 678] [in determining whether a particular technique is generally accepted, “defendant is not foreclosed from showing new information which may question the continuing reliability of the test in question or to show a change in the consensus within the scientific community concerning the scientific technique”].
Seventh, partial latent fingerprint identifications do not have any non-judicial applications. As Ashbaugh puts it, “The failure of the identification community to challenge or hold meaningful debate can also be partly attributable to the fact that the friction ridge identification science has been basically under the control of the police community rather than the scientific community. In the eyes of many police administrators, friction ridge identification is a tool of solving crime, a technical function, as opposed to a forensic science.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 4. When scientists other than police latent fingerprint technicians have looked into the reliability of using computer-scanned complete fingerprints for computer security systems in the emerging science of biometrics, they have found that fingerprinting is not reliable enough to serve as a stand-alone identification technique. (See, infra at p. 28.)
In addition to these various factors, the lack of scientific reliability of the government’s fingerprint evidence has been most dramatically demonstrated by a test that the F.B.I. recently performed specifically for the purpose of defeating a Daubert challenge to the admissibility of fingerprint evidence in United States v. Byron C. Mitchell (E.D. Pa. 1999) (Criminal No. 96-00407). [Footnote 4] In an apparent effort to demonstrate that different fingerprint examiners will, at least, be able to reach the same conclusion when they are presented with the same data, the government provided the two latent prints at issue in that case, along with Mr. Mitchell’s inked prints, to 53 different law enforcement agencies. Contrary to the government’s expectations, however, 23% of the responding agencies found that there was an insufficient basis to make an identification with respect to one of the two latents, and 17% found an insufficient basis as to the other. The government’s experiment thus perfectly illustrates how subjective latent print identifications really are and how unreliable their results can be.
Finally, the unreliability of latent fingerprint identifications has already been judicially recognized. In the only known fingerprint case in which a federal trial court has performed the type of analysis that is now mandated by Daubert, the district court excluded the government’s fingerprint identification evidence, finding that there was no scientific basis for the latent print examiner’s opinion of identification. United States v. Parks (C.D. Cal. 1991) (No. CR-91-358-JSL). The district court in Parks reached its determination after hearing from three different fingerprint experts produced by the government in an effort to have the evidence admitted. In excluding the evidence, the district court recognized, among other things, the lack of testing that has been done in the field, the failure of latent fingerprint examiners to employ uniform objective standards, and the minimal training that latent print examiners typically receive.
Accordingly, for all of the foregoing reasons, Mr. Doe requests that this Court preclude the government from introducing its fingerprint identification evidence at his upcoming trial.
An average human fingerprint contains between 75 and 175 ridge characteristics. An Analysis of Standards in Fingerprint Identification, FBI L. Enforcement Bull., June 1972, at 1 [hereinafter FBI, Fingerprint Identification]. These ridge characteristics generally consist of a few different types, although there is no standard agreement among fingerprint examiners as to either the precise number or nomenclature of the different characteristics. James F. Cowger, Friction Ridge Skin: Comparison and Identification of Fingerprints at 143 (1983) (“The terms used to define and describe these characteristics vary markedly among writers in the field and differ even among examiners depending upon the organization in which they were trained.”). The ridge characteristics most commonly referred to are: 1) islands, also referred to as dots, which are single independent ridge units; 2) short ridges, in which both ends of the ridge are readily observable; 3) ridge endings, where a ridge comes to an abrupt end; 4) bifurcations, in which the ridge forks into two; 5) enclosures, which are formed by two bifurcations that face each other; 6) spurs, where the ridge divides and one branch comes to an end; 7) cross-overs, in which a short ridge crosses from one ridge to the next; and, 8) trifurcations, in which two bifurcations develop next to each other on the same ridge. John Berry, The History and Development of Fingerprinting, in Advances in Fingerprint Technology at 2 (Henry C. Lee & R. E. Gaensslen eds., 1994); Ashbaugh, Basic and Advanced Ridgeology, supra, at 138-143.
While some occasional research has been done with respect to the relative frequencies with which these and other characteristics occur, no weighted measures of the characteristics have ever been adopted by fingerprint examiners on the basis of these studies. Research, moreover, has shown that different fingerprint examiners hold widely varying opinions regarding which characteristics appear most commonly. James W. Osterburg, An Inquiry Into the Nature of Proof, 9 J. of Forensic Sci. 413, 425 (1964) (“Clearly, subjective evaluation of the significance to be attached to a fingerprint characteristic is suspect.”).
All prints, both inked and latent, are subject to various types of distortion and artifacts. David Ashbaugh, The Premises of Friction Ridge Identification, Clarity, and the Identification Process, 44 J. of Forensic Identification 499, 513 (1994) [hereinafter Ashbaugh, Premises]. The most common is pressure distortion, which occurs as the print is being deposited. Id. Other types of distortion can be caused by the condition or shape of the surface on which the print has been deposited and by the mediums used to develop and lift the print. Ashbaugh, Basic and Advanced Ridgeology, supra, at 114-128. Significantly, distortion can cause a ridge characteristic to appear as something other than what it really is. Id. at 109; David A. Stoney & John I. Thornton, A Critical Analysis of Quantitative Fingerprint Individuality Models, 31 J. of Forensic Sci. 1187, 1193 (1986).
For example, a rounded surface, such as a water heater, can cause distortions that can lead to misidentifications. Ashbaugh, Basic and Advanced Ridgeology, supra, at 115-116. Dirty surfaces, such as a water heater in a garage, may not accept all the matrix available during deposition, and the resulting print can appear blotchy, have areas missing, or generally lack detail. Id. at 116. The type of powder used to lift the prints in this case “tends to fill in third level detail and may appear to alter second level detail.” Id. at 121. And when, as in this case, a developed friction ridge print is lifted with tape, “(i)mproper procedures, and especially efforts to correct those improper procedures, can cause various alterations to the lifted print.” Id. at 117.
As Wendy Chong confirmed in this case, “SOMETIMES THE PRINTS CAN BE SMUDGED, THERE COULD BE SCARS TO IT, IT COULD BE MOVEMENT ON IT, MAYBE THE FINGERPRINTS WASN’T ROLLED COMPLETELY, IT COULD BE A NUMBER OF FACTORS.” See also, William Leo, Distortion Versus Dissimilarity in Friction Skin Identification, 15(2) The Print 1 (March/April 1999) (“Distortion…is commonly found in both latent and exemplar prints that have the same origin. Examples of distortion can be noted when occurring from any of the following conditions: overlaid prints, pressure reversals, background interference, slippage, or from any circumstance that would change or misrepresent the appearance or shape of one or both prints that are being compared.”). There have been no studies done to determine the frequency with which such distortions occur or how they can be accounted for so as to prevent a misidentification.
Latent print examiners make identifications when they find a certain number of ridge characteristics to be in common, both in terms of type and location, on the two prints that they are comparing. FBI, Fingerprint Identification, supra. As discussed further below, there is considerable disagreement among latent print examiners as to how many common characteristics should be found before an identification is made. Many examiners, including the SFPD lab technician in this case, currently believe that there should be no minimum standard whatsoever and that the determination of whether there is a sufficient basis for an identification should be left entirely to the subjective judgment of the individual examiner.
It has been well documented that different people can share a limited number of fingerprint ridge characteristics in common. See, Y. Mark and D. Attias, What Is the Minimum Standard for Characteristics for Fingerprint Identification (1996) Fingerprint Whorld 148 (“In October 1995, while working with the AFIS and comparing a latent taken from a crime scene with a list of possible suspects, we found a comparison which had 7 identical characteristics. Assuming that the fingerprint from the crime scene was only a partial print and it contained only the specific area of the 7 points, one cannot rule out the idea that not only a junior examiner, but an expert with many years of experience behind him could arrive at a mistaken identity.”); James Osterburg, The Crime Laboratory: Case Studies of Scientific Criminal Investigation (1967) (documenting a case where two individuals shared 10 points of similarity); Ene-Malle Lauritis, Some Fingerprints Lie, National Legal Aid Defender Association, The Legal Aid Briefcase, October 1968, p. 129 (describing a case where the latent and known prints shared 14 points of similarity and 3 dissimilarities); J. Edgar Hoover, Hoover Responds to “Some Fingerprints Lie”, The Legal Aid Briefcase, June 1969, p. 221 (not disputing the 3 dissimilarities in the same case, Hoover declares ipse dixit that “[a]ny two fingerprints possessing as many as fourteen identical ridge characteristics…would contain no dissimilar ridge characteristics.”). See also, United States v. Parks (C.D. Cal. 1991) (No. CR-91-358-JSL), discussed infra at pp. 72-76 (Steven Kasarsky, a board certified member of the IAI and an employee of the United States Postal Inspection Service, testified that cases have occurred in which there were 10 points of similarity and 1 point of dissimilarity); People v. John Davenport, S.F. Muni. Ct. No. 198530, Preliminary Hearing Transcript, August 30, 1978, p. 30 (San Francisco Police Department fingerprint expert Michael Byrne testified that “I [have] seen a case one time that had nine points; however, it had a dissimilarity.”). As indicated above at page 12, other cases have been documented in which different individuals have shared 10 and even 16 points of similarity. There have been no scientific studies performed that can reasonably serve to predict the probability of such events occurring.
Lacking any such probability studies, latent print examiners do not offer opinions of identification in terms of probability. Indeed, latent print examiners are actually prohibited from doing so by the rules of their primary professional association, the International Association for Identification (IAI), and by the F.B.I.’s Scientific Working Group on Friction Ridge Analysis, Study, and Technology (hereinafter SWGFAST). [Footnote 5] Instead, latent print examiners make the claim of “absolute certainty” for their identifications. Examiners provide an opinion that the latent print at issue was made by a particular finger to the exclusion of all other fingerprints in the world. Such assertions of absolute certainty, however, are inherently unscientific. Here is what one government expert has had to say on this issue:
Imposing deductive conclusions of absolute certainty upon the results of an essentially inductive process is a futile attempt to force the square peg into the round hole. This categorical requirement of absolute certainty has no particular scientific principle but has evolved from a practice shaped more from allegiance to dogma than a foundation in science. Once begun, the assumption of absolute certainty as the only possible conclusion has been maintained by a system of societal indoctrination, not reason, and has achieved such a ritualistic sanctity that even mild suggestions that its premise should be re-examined are instantly regarded as acts of blasphemy. Whatever this may be, it is not science.
David Grieve, Possession of Truth, 46 J. of Forensic Identification 521, 527-28 (1996).
In this case, Wendy Chong first testified on cross examination that her absolute identification was based on an arbitrary selection of the number 10 as the proper standard of identification (“I JUST LIKE THE NUMBER 10”). Upon further questioning, she implied, although she never stated, that her identification in this case was additionally based on “poroscopy” and “ridgeology”. As Ms. Chong testified, some latent print examiners purport to look for additional identifying features, beyond the basic ridge characteristics set out above, such as small edges on the ridges and the relative location of sweat pores. The term “ridgeology” was coined by David Ashbaugh in 1983; Ashbaugh now admits that “the word ridgeology was originally an attention-getting device.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 8. Ashbaugh posits that an identification can be made only after looking at “first level detail” (class characteristics), “second level detail” (specific friction ridge paths, e.g. ridge dot, bifurcation, etc.), and “third level detail” (small shapes on the ridge, the relative location of pores, and the small details contained in accidental damage to the friction ridges). Id. at 136-144. In Ashbaugh’s mind, it is the presence of “third level detail” which allows the fingerprint profession to abandon an objective standard based on a minimum number of points of similarity of “second level detail.” Id. at 143.
However, the pseudo-scientific names used by Ashbaugh and others should not obscure the fact that because “prints of friction skin are rarely well recorded . . . comparison of pore or edges is only rarely practical.” Cowger, supra, at 143. See also, An Analysis of Standards in Fingerprint Identification, FBI Law Enforcement Bulletin, June 1972, p. 7 (“FBI technicians know of no case in the United States in which pores have been used in the identification of fragmentary impressions. To the contrary, our observations on pores have shown that they are not reliably present and that they can be obliterated or altered by pressure, fingerprint ink, or developing media.”). Dusty Clark, one of the California Department of Justice’s most experienced latent fingerprint examiners, very recently had this to say about the scientific reliability of “poroscopy” and “ridgeology”:
When making an identification comparison between a known and unknown impression, Latent Print Analysts rely on friction ridge characteristics in concurrence between the two impressions. Those that do not quantify (count points) rely on third level detail (edgeoscopy, poroscopy, and ridge shapes) to make an analysis, comparison, and evaluation. These analysts state that the comparison is a qualitative and quantitative process.
The qualitative process that is applied depends on the validity of what is seen to the examiner. There is such a degree of variation of appearance in the 3rd level detail due to pressure, distortion, over or under processing, foreign or excessive residue on the fingers, surface debris and surface irregularity, to name a few. The repeatability of the finite detail that is utilized in the comparison process has never been subjected to a definitive study to demonstrate that what is visible is actually a true 3rd level detail or an anomaly.
The problem that occurs is when third level detail is not present, it becomes solely a quantitative process of Galton 2nd level detail. The non- point counters refuse to put a number on the quantitative portion of their comparison analysis opting for the rhetorical response of “Show me the Print.” There has to be something to measure and count if the comparison process includes “quantitative”. If the analysts do not quantify their analysis then their opinion of identity is strictly subjective. A subjective analysis without quantification makes the identification process as reliable as astrology. If one does not quantify, is it an ID when a warm and fuzzy feeling overwhelms you? What happens if my warm and fuzzy feeling is different that yours?…
When discussing this issue at the 1999 Calif. Div. IAI Seminar, the audience of approximately 120 persons was asked to raise their hand if ever in their career that they had to rely on that one 3rd level detail to make the identification. Not one single hand was raised!
That brings me to the topic of this article regarding the abandonment of counting points and relying on Ridgeology for individualization. Ridgeology hasn’t been scientifically proven to be repeatable, and its application is not standardized.
Dusty Clark, What’s The Point (Dec. 1999), http://www.latent-prints.com/id_criteria_jdc.aspx [Footnote 6]
THE LEGAL STANDARD TO BE APPLIED
A. The Subjective and Arbitrary Technique Used In This Case To Identify Partial Latent Prints With Absolute Certainty Must Meet the Evidentiary Standard of People v. Kelly (1976) 17 Cal.3d 24.
The admissibility of a “new” scientific technique depends on whether it was derived from a method that is generally accepted to be reliable. To make this determination, the court must apply the standard set forth in People v. Kelly (1976) 17 Cal.3d 24. The Kelly standard has three “prongs”:
(1) it must be established, usually by expert testimony, that the scientific methods utilized are generally accepted as reliable by the relevant scientific community,
(2) the witness furnishing such testimony must be properly qualified as an expert to give an opinion on the subject, and
(3) the proponent of the evidence must demonstrate that correct scientific procedures were used in the particular case.
Kelly, 17 Cal.3d at p. 30 [emphasis in original].
Importantly for the present case, the Kelly rule does not cease to exist once an appellate court holds that a particular technique is generally accepted. As recently emphasized in People v. Venegas (1998) 18 Cal. 4th 47, 52: “An important corollary of that rule, however, is that if a published appellate decision in a prior case has already upheld the admission of evidence based on such a showing, that decision becomes precedent for subsequent trials in the absence of evidence that the prevailing scientific opinion has materially changed.” See also, People v. Kelly, 17 Cal. 3d at p. 32 (“(O)nce a trial court has admitted evidence based upon a new scientific technique, and that decision is affirmed on appeal by a published appellate decision, the precedent so established may control subsequent trials, at least until new evidence is presented reflecting a change in the attitude of the scientific community.”). The corollary to the Kelly rule is well established. See, People v. Soto (1999) 21 Cal. 4th 512, 540-541 n. 31 (“(I)n a context of rapidly changing technology, every effort should be made to base (decision) on the very latest scientific opinions…”); People v. Allen (1999) 72 Cal. App. 4th 1093, 1101 (“The issue is not when a new scientific technique is validated, but whether it is or is not valid; that is why the results generated by a scientific test once considered valid can be challenged by evidence the test has since been invalidated.”); People v. Smith (1989) 215 Cal. App. 3d 19, 25 [263 Cal. Rptr. 678] (in determining whether a particular technique is generally accepted, “defendant is not foreclosed from showing new information which may question the continuing reliability of the test in question or to show a change in the consensus within the scientific community concerning the scientific technique”).
Venegas also clarified that “the Kelly test’s third prong does not apply the Frye requirement of general acceptance - it assumes the methodology and technique in question has already met that requirement. Instead, it inquires into the matter of whether the procedures actually utilized in the case were in compliance with that methodology and technique, as generally accepted by the scientific community. …The third prong inquiry is thus case specific; ‘it cannot be satisfied by relying on a published appellate decision.’” 18 Cal. 4th at 78 (emphasis added).
In People v. Farmer (1989) 47 Cal. 3d 888, 913, the Court had stated that “careless testing affects the weight of the evidence and not its admissibility…” However, in Venegas the Court clearly retreated: “Our reference to ‘careless testing affecting the weight of the evidence and not its admissibility’ in Farmer…was intended to characterize shortcomings other than the failure to use correct, scientifically accepted procedures such as would preclude admissibility under the third prong of the Kelly test.” 18 Cal. 4th at p. 80 (emphasis in original). See also, People v. Soto (1999) 21 Cal. 4th 512, 519 (“The proponent of the evidence must also demonstrate that correct scientific procedures were used.”).
The role of the Court, when applying the general acceptance test to fingerprint evidence, is to determine “the existence, degree, (and) nature of a scientific consensus or dispute (with) the interpretative assistance of qualified live witnesses subject to a focused examination in the courtroom.” People v. Soto (1999) 21 Cal. 4th 512, 540-541 n. 31. The court must “conduct a ‘fair overview’ of the subject, sufficient to disclose whether ‘scientists significant either in number or expertise publicly oppose [a technique] as unreliable.’” People v. Reilly (1987) 196 Cal. App. 3d 1127, 1148, quoting from People v. Brown (1985) 40 Cal. 3d 512, 533. [Footnote 7]
As Venegas teaches, “(i)n determining the question of general acceptance, courts must consider the quality, as well as quantity, of the evidence supporting or opposing a new scientific technique.” 18 Cal. 4th at 85 (emphasis added). And as Venegas also makes clear, the court’s role in the inquiry is not one of abdication to the scientists. Thus, in Venegas, the Court adopted the modified ceiling approach advocated in a 1992 National Research Council report, notwithstanding that a 1996 National Research Council report, as well as numerous other scientific articles authored by well-credentialed experts, and testimony from such experts at the Kelly hearing, all assailed the approach as scientifically unsound. 18 Cal. 4th at 86-89. Focusing not only on scientific soundness, the Court defined the central issue to be whether the approach was “‘forensically reliable’ in that it resolves any imprecision in the statistical calculations in a way that preserves the constitutional presumption of the suspect’s innocence.” 18 Cal. 4th at 85. The point here is that the court’s role in the inquiry is not that of a potted plant.
B. The Court Must Look Beyond Forensic Science For Evidence of General Acceptance.
The general acceptance test of Kelly cannot be met by showing that promoters and practitioners of the method accept it to be reliable. The test is not whether a method is accepted by those who have a personal or professional stake in its acceptance, but rather, whether it is “accepted as reliable by the larger scientific community in which it originated.” People v. John W. (1986) 185 Cal. App. 3d 801, 805; People v. Shirley (1982) 31 Cal. 3d 18, 54.
Courts have also recognized that promoters and practitioners of a particular method “may be too closely identified with the endorsement of [the technique] to assess fairly and impartially the nature and extent of any opposing scientific views.” Kelly, supra, 17 Cal. 3d at 38. Thus, when applying the Kelly standard, the court must look to experts who are “‘impartial,’ that is, not so personally invested in establishing the technique’s acceptance that he might not be objective about disagreements within the relevant scientific community.” People v. Brown, supra, 40 Cal. 3d 512, 530; accord People v. Venegas, 18 Cal. 4th at 77 (“FBI Agent Lynch, though vigorously defending the merits of the FBI’s RFLP analytical procedures she had followed in this case, did not purport to be qualified, as a molecular biologist or otherwise, to testify on questions of general scientific acceptance of the validity of those procedures.”). In this regard, the court should bear in mind that employees of forensic labs “have a clear pecuniary interest in the acceptance of (forensic) evidence by the courts. The success of their employers and the stability of their own employment depends upon continued use of (forensic) testing.” Dan L. Burk, DNA Identification: Possibilities and Pitfalls Revisited, 31 Jurimetrics J. 53, 79-80.
As Judge Dondero recently held in connection with DNA evidence in People v. Bokin, the relevant scientific community for purposes of the prong one inquiry on DNA was the broader community of molecular biologists and population geneticists and not simply the community of forensic scientists using DNA technology. This ruling was dictated by relevant case law. See, People v. Soto, 21 Cal. 4th at 515 (issue is whether the RFLP statistical methodology “is generally accepted in the relevant scientific community of population geneticists.”); People v. Venegas, 18 Cal. 4th at 77 (“FBI Agent Lynch…did not purport to be qualified, as a molecular biologist or otherwise, to testify on questions of general scientific acceptance of the validity of…procedures.”); People v. Axell (1991) 235 Cal. App. 3d 836, 857 (“The search for scientific consensus must be from within ‘the particular field in which it belongs’. Since DNA profiling is an amalgamation of primarily two disciplines, molecular biology and population genetics, it appears logical to consider its acceptance by those communities for forensic use.”); People v. Reilly (1987) 196 Cal. App. 3d 1127, 1138-1139 (same).
In this case, “(t)he scientific knowledge supporting ridgeology has been extracted from various related sciences such as embryology, genetics, and anatomy.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 8. In addition, as discussed below, the field of statistics is also implicated. At a minimum, it should be shown that these disciplines generally accept the reliability of making absolute identifications from a partial latent print on the basis of ten points of comparison.
C. Neither Acceptance of Fingerprint Evidence For Nonforensic Scientific Purposes Nor Widespread Law Enforcement Use of Fingerprinting For Classification and Other Nonevidentiary Purposes Establishes That Fingerprint Analysis of a Partial Latent Print Is Reliable or Generally Accepted For Courtroom Use
A scientific technique may be reliable for some purposes and not for others. Indeed, many techniques that have proven reliable for certain purposes in non-forensic settings have been found unacceptable when used for forensic purposes. Polygraphs are one example. The techniques used in polygraphs (monitoring heart rate, blood pressure, galvanic skin response) have a number of accepted applications in physiological research and medicine. It does not follow, however, that lie detection procedures which use these “accepted” procedures are necessarily reliable for courtroom use. See, United States v. Scheffer (1998) 523 U.S. 303, 118 S. Ct. 1261, 1266 n. 8. Hypnosis is another example. The use of hypnosis is well accepted for a number of purposes in psychological research and in psychotherapy. But the California Supreme Court held in People v. Shirley (1982) 31 Cal. 3d 18 that the use of hypnosis for refreshing witnesses’ memories is not generally accepted.
It is anticipated that the People will make the argument that the identification of a partial latent print based on ten points of identity must necessarily be accepted in the scientific community because the identification technician employs procedures that are used and accepted elsewhere in law enforcement and science for other purposes. For example, the FBI Identification Division has been classifying and analyzing fingerprint cards for purposes of identification since 1924. See, Federal Bureau of Investigation, The Science of Fingerprints: Classification and Uses (1979) p. 1. Also, computer scientists in the field of biometrics are beginning to study and use fingerprints for the purpose of establishing security systems for access to computers or other uses. See, United States Government, The Biometric Consortium, http://www.biometrics.org/. The argument is syllogistic, viz.: fingerprint analysis is accepted; the SFPD fingerprint technician uses fingerprint analysis; therefore the Lab’s fingerprint analysis in all its variations is accepted.
The problem with this argument is that it fails to recognize the difficulties that may arise from the transfer of technology from one application to another. An analogy to DNA evidence will illustrate this point. It is widely recognized that forensic DNA testing is more technically demanding than other applications of DNA technology, [Footnote 8] and that it involves additional critical steps that do not arise in other applications (such as the matching and statistical estimation steps [Footnote 9]).
Fingerprint analysis presents the same sort of problems. The FBI has found that in order to identify criminals from a ten finger fingerprint card, “it is essential that standard fingerprint cards and other forms used by the FBI be utilized. Fingerprints must be clear and distinct and complete name and descriptive data required on the form should be furnished in all instances.” The Science of Fingerprints: Classification and Uses (1979) p. 1. In biometrics, a clear fingerprint image is generated, usually by a high resolution digital camera behind a Plexiglas plate on which the user presents a finger. Adrian Dysart, Biometrics (Winter 1998), http://www.monkey.org/~adysart/598/. Even with this high-tech method of collecting the print, it is generally recognized that “fingerprint verification systems are subject to a mimicry attack…(that) can be avoid(ed) (only) by having thermal sensors detect subcutaneous blood vessels and reject the sample if none are found”. Id. More significantly, it is generally recognized that “biometrics are not reliable enough on their own to act as identifiers, but in conjunction with other, more traditional forms of access control, such as passphrases and PINs, they provide a considerable layer of security.” See also, Let Your Fingers Do the Logging In, Network Computing, Issue 910, June 1, 1998 (“Unfortunately, some of the lowest-cost systems are simply gadgets and too gimmicky for consideration in the enterprise. In our review of fingerprint recognition devices in this issue, we found much of the current crop too insecure and unreliable for practical enterprise-wide deployment.”).
For reasons already discussed, a person who deposits a latent print at a crime scene often leaves a partial, unclear print in unknown environmental conditions, and the person obviously does not leave behind subcutaneous blood vessels, passphrases, or PINs.
What must be considered under Kelly is whether the specific method that is being used in this case to produce an opinion of absolute certainty as to identification is generally accepted to be reliable as it was applied, not whether a similar method is accepted for another purpose. To ask, in the abstract, whether fingerprint analysis in general is accepted as reliable by some undefined relevant scientific community is meaningless. At this level of generality, and without specific definition of either the relevant scientific community or the specific methods being utilized, the answer is predictable, but irrelevant, much as it would be in a hypothetical case in which a group of psychics, thinking they are scientists and believing they can make valid fingerprint comparisons by holding the known and questioned prints to their turbaned heads, are asked the question, “Is fingerprinting generally accepted in the scientific community as a reliable means of human identification?”
Moreover, even as to forensic use of a particular method, our Supreme Court has explicitly rejected “widespread use” by law enforcement as a surrogate for a searching inquiry into whether impartial scientists accept a particular technique. In People v. Leahy (1994) 8 Cal. 4th 587, 605-606, the Court stated unambiguously:
The People observe that HGN testing has been used by law enforcement agencies for more than 30 years…. In determining whether a scientific technique is “new” for Kelly purposes, long-standing use by police officers seems less significant a factor than repeated use, study, testing and confirmation by scientists or trained technicians…To hold that a scientific technique could become immune from Kelly scrutiny merely by reason of long-standing and persistent use by law enforcement outside the laboratory or the courtroom, seems unjustified.
The United States Supreme Court agrees: “Respondent argues that because the Government–and in particular the Department of Defense–routinely uses polygraph testing, the Government must consider polygraphs reliable. Governmental use of polygraph tests, however, is primarily in the field of personnel screening, and to a lesser extent as a tool in criminal and intelligence investigations, but not as evidence at trials…. Such limited, out of court uses of polygraph techniques obviously differ in character from, and carry less severe consequences than, the use of polygraphs as evidence in a criminal trial. They do not establish the reliability of polygraphs as trial evidence, and they do not invalidate reliability as a valid concern supporting Rule 707’s categorical ban.” United States v. Scheffer (1998) 523 U.S. 303, 118 S. Ct. 1261, 1266 n. 8.
THE SUBJECTIVE AND ARBITRARY TECHNIQUE USED IN THIS CASE TO IDENTIFY PARTIAL LATENT PRINTS WITH ABSOLUTE CERTAINTY CANNOT BE CONSIDERED GENERALLY ACCEPTED FOR COURTROOM USE WITHOUT CERTAIN INDICIA OF FORENSIC AND SCIENTIFIC RELIABILITY: EMPIRICAL TESTING, ACCEPTABLE ERROR RATES, THE EXISTENCE AND MAINTENANCE OF STANDARDS, PUBLICATION AND PEER REVIEW, AND RELIABLE NON-JUDICIAL APPLICATIONS.
A. Introduction: The Need For Forensic Reliability
How do particular methods become generally accepted by the relevant scientific community? One overriding principle put forth by our Supreme Court in Venegas is the concept of “forensic reliability.” The Court upheld the FBI’s statistical calculation of DNA frequency estimates in that case because the method of calculation was “‘forensically reliable’ in that it resolves any imprecision in the statistical calculations in a way that preserves the constitutional presumption of the suspect’s innocence.” 18 Cal. 4th at 85. See also, id. at p. 87 (“We agree with the Court of Appeal’s further conclusion that ‘the evidence [is] also clear that the scientific community regards the NRC statistical methodology as forensically reliable’ - i.e., as selecting figures that most favor the accused from the scientifically based range of probabilities.”).
Judged by the standard of forensic reliability, the use of friction ridge characteristics to identify with absolute certainty a partial latent print certainly does not resolve any imprecision in the calculations in a way that preserves the constitutional presumption of the suspect’s innocence. On the contrary, it presumes the fingerprint examiner’s ability to determine the suspect’s guilt with absolute certainty, even though, by the fingerprint profession’s own admission, “(a) situation seems to have developed where this science grew by default.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 8. Frighteningly, Ashbaugh describes a closed-minded profession where
In the past the friction ridge identification science has been akin to a divine calling. Challenges were considered heresy and challengers frequently were accused of chipping at the foundation of the science unnecessarily. This cultish demeanor was fostered by a general deficiency of scientific knowledge, understanding, and self-confidence within the ranks of identification specialists. A pervading fear developed in which any negative aspect voiced that did not support the concept of an exact and infallible science could lead to its destruction and the destruction of the credibility of those supporting it….
This attitude has been reinforced by the friction ridge identification itself. The role of the scenes of crime officer is continually emphasized in literature. Over the last few years most advancements that have taken place within the science are related to how friction ridge prints are developed, stored, or searched by computers. As a result, most available funding is allotted to furthering those developments. Little, if anything, has been reported on the importance and need for scientific knowledge, understanding the evaluative identification process, or the training necessary to be able to analyze, compare, and evaluate friction ridge prints. Apparently, it is assumed that anyone has the ability to compare friction ridge prints and form an unbiased opinion of individualization.
(Ashbaugh, Basic and Advanced Ridgeology, supra, at 4-5.)
A “science” which has proceeded by default in the absence of scientific knowledge and testing is the antithesis of a forensically reliable procedure as that term is used in Venegas. As in Leahy, “(t)o hold that a(n) [unproven] scientific technique could become immune from Kelly scrutiny merely by reason of long-standing and persistent use by law enforcement outside the laboratory or the courtroom, seems unjustified.” 8 Cal. 4th at 606. The subjective and arbitrary technique used in this case to identify partial latent prints must be subjected to strict scrutiny under Kelly because it is “the unproven technique or procedure [that] appears in both name and description to provide some definitive truth which the expert need only accurately recognize and relay to the jury.” Id. at 606.
The “science” of partial latent fingerprint analysis does not even satisfy the more liberal definition of scientific reliability set out by the United States Supreme Court in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S. Ct. 2786 (1993). [Footnote 10] In Daubert, the Supreme Court held that federal trial courts, when faced with a proffer of expert scientific testimony, must determine at the outset whether the “reasoning or methodology underlying the testimony is scientifically valid . . . .” Id. at 592-93, 113 S. Ct. at 2796. As the Court recognized, in a case involving scientific evidence, evidentiary reliability will be based upon scientific validity. Id. at 590 n.9, 113 S. Ct. at 2795 n.9. This standard applies both to “novel scientific techniques” and to “well established propositions.” Id. at 592 n.11, 113 S. Ct. at 2796 n.11.
The Daubert Court suggested five factors that trial courts may consider in determining whether proffered expert testimony is scientifically valid. The first factor is whether the “theory or technique . . . can be (and has been) tested.” Id. at 593, 113 S. Ct. at 2796. As the Court recognized, empirical testing is the primary criterion of science:
Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified; indeed, this methodology is what distinguishes science from other fields of human inquiry. The statements constituting a scientific explanation must be capable of empirical test. The criterion of the scientific status of a theory is its falsifiability, or testability.
Id. at 593, 113 S. Ct. at 2796-97 (internal quotations and citations omitted). Our Supreme Court recently placed heavy reliance on this factor in People v. Soto, 21 Cal. 4th at 540.
A second, closely related factor that the Daubert Court suggested should “ordinarily” be considered is the “known or potential rate of error” of the particular technique. Id. at 594, 113 S. Ct. at 2797. In this regard, the Court cited the Seventh Circuit’s decision in United States v. Smith, 869 F.2d 348, 353-354 (7th Cir. 1989), in which the Seventh Circuit surveyed studies concerning the error rate of spectrographic voice identification techniques. Id. As indicated in n.10, supra, our own Supreme Court referred to this factor in Leahy.
A third factor pointed to by the Court is the “existence and maintenance of standards controlling the technique’s operation.” Id. As an example, the Supreme Court cited the Second Circuit’s opinion in United States v. Williams, 583 F.2d 1194, 1198 (2d Cir. 1978), in which the Second Circuit observed that the “International Association of Voice Identification . . . requires that ten matches be found before a positive identification can be made.” Id. In California, the existence and maintenance of standards is a prerequisite to admissibility under prong three of Kelly.
Fourth, the Daubert Court held that “general acceptance can . . . have a bearing on the inquiry.” Id. “A reliability assessment does not require, although it does permit, explicit identification of a relevant scientific community and an express determination of a particular degree of acceptance within that community.” Id. (quoting United States v. Downing, 753 F.2d 1224, 1242 (3d Cir. 1985)). As the Court recognized, “widespread acceptance can be an important factor in ruling particular evidence admissible and a ‘known technique which has been able to attract only minimal support within the community’ . . . may properly be viewed with skepticism.” Id. (quoting Downing, 753 F.2d at 1238). Of course, in California, general acceptance is a prerequisite to admissibility under prong one of Kelly.
Finally, the Daubert Court recognized that an additional factor which may be considered “is whether the theory or technique has been subjected to peer review and publication.” Id. at 593, 113 S. Ct. at 2797. As the Court recognized, “submission to the scrutiny of the scientific community is a component of ‘good science,’ in part because it increases the likelihood that substantive flaws in methodology will be detected.” Id. Accordingly, “[t]he fact of publication (or lack thereof) in a peer reviewed journal . . . [is] a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.” Id. at 594, 113 S. Ct. at 2797. Our Supreme Court recently placed heavy reliance on this factor in People v. Soto, 21 Cal. 4th at 540 (the debate regarding the effect of population substructuring on RFLP calculations was resolved empirically only by “extensive literature in peer reviewed journals”).
In providing the above factors, the Supreme Court emphasized that the inquiry under Federal Rule of Evidence 702 is a “flexible one” and that, as such, additional factors may be considered. Id. Several such additional factors have been suggested. For example, in United States v. Downing, 753 F.2d 1224, 1238-39 (3d Cir. 1985), the court held that the following factors were relevant:
(1) the relationship of the technique to methods which have been established to be reliable;
(2) the qualifications of the expert witness testifying based on the methodology;
(3) the non-judicial uses to which the method has been put.
As demonstrated below, the government’s proposed fingerprint identification evidence fails with respect to each and every reliability factor that has been identified by the Supreme Court.
B. The Lack Of Scientific Reliability As Measured By the Daubert Factors
1. The Failure to Test the Fundamental Hypothesis Upon Which Latent Print Identifications Are Based
The proffered fingerprint identification evidence in this case fails the most basic criterion of science: the premises underlying the identification have not been tested to determine if they can be falsified. As discussed above, there are two fundamental premises to a latent print identification of the type at issue here: first, that it is impossible for two or more people to have prints showing a limited number of ridge characteristics in common, such as the ten characteristics identified by the fingerprint examiner in the case at bar; and second, that latent fingerprint examiners can reliably make identifications from small, distorted latent fingerprint fragments that reveal only a limited number of basic ridge characteristics.
That these premises have not been empirically validated has, in the wake of Daubert, been repeatedly recognized by forensic science experts. See United States Department of Justice, Forensic Sciences: Review of Status and Needs (1999), p. 29 (“How can examiners prove that each individual has unique fingerprints? There are certainly statistical models that support this contention. Friction ridge print evidence has historically been ‘understood’ to hold individuality based on empirical studies of millions of prints. However, the theoretical basis for this individuality has had limited study and needs a great deal more work to demonstrate that physiological/developmental coding occurs for friction ridge detail, or that this detail is purely an accidental process of fetal development. Studies to date suggest more than an accidental basis for the development of print detail, but more work is needed.”); Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-9, p. 784 (“The criteria used by examiners are ‘the product of probabilistic intuitions widely shared among fingerprint examiners, not of forensic research.’”); Michael J. Saks, Merlin and Solomon: Lessons from the Law’s Formative Encounters With Forensic Identification Science, 49 Hastings L.J. 1069, 1105-06 (1998) (“Although in principle fingerprint identification depends upon an objective, probabilistic inquiry, its practitioners use no probability models and have no probability data to use[;] they rely on intuitions and assumptions that have not been tested rigorously . . . .”); Margaret A. Berger, Procedural Paradigms For Applying the Daubert Test, 78 Minn. L. Rev. 1345, 1353 (1994) (“Considerable forensic evidence [such as fingerprinting] made its way into the courtroom without empirical validation of the underlying theory and/or its particular application.”).
The lack of testing has also been recognized by those within the fingerprint community. Dr. David Stoney, a leading scholar and fingerprint practitioner, has written:
[T]here is no justification [for fingerprint identifications] based on conventional science: no theoretical model, statistics or an empirical validation process.
Efforts to assess the individuality of DNA blood typing make an excellent contrast. There has been intense debate over which statistical models are to be applied, and how one should quantify increasingly rare events. To many, the absence of adequate statistical modeling, or the controversy regarding calculations, brings the admissibility of the evidence into question. Woe to fingerprint practice were such criteria applied! As noted earlier, about a dozen models for quantification of fingerprint individuality have been proposed. None of these even approaches theoretical adequacy, however, and none has been subjected to empirical validation. . . . Indeed, inasmuch as a statistical method would suggest qualified (non-absolute) opinions, the models are rejected on principle by the fingerprint profession.
Much of the discussion of fingerprint practice in this and preceding sections may lead the critical reader to the question “Is there any scientific basis for an absolute identification?” It is important to realize that an absolute identification is an opinion, rather than a conclusion based on scientific research. The functionally equivalent scientific conclusion (as seen in some DNA evidence) would be based on calculations showing that the probability of two different patterns being indistinguishably alike is so small that it asymptotes with zero . . . . The scientific conclusion, however, must be based on tested probability models. These simply do not exist for fingerprint pattern comparisons.
David Stoney, Fingerprint Identification, in Modern Scientific Evidence: The Law and Science of Expert Testimony § 21-2.3.1, at 72 (David L. Faigman et al. eds., West 1997).
The lack of testing in the fingerprint field also is reflected in an official report that the International Association for Identification (“IAI”) issued in 1973. The IAI had three years earlier formed a “Standardization Committee” for the purpose of determining “the minimum number of friction ridge characteristics which must be present in two impressions in order to establish positive identification.” International Association for Identification, IAI Standardization Committee Report 1 (1973). After three years of examining the issue, however, the Committee was unable to provide a minimum number. Instead, the IAI issued a Report declaring that “no valid basis exists for requiring a predetermined minimum number of friction ridge characteristics which must be present in two impressions in order to establish positive identification.” Id. at 2. Of course, the reason that the IAI did not have a “valid” basis to set a minimum number was that no scientific testing as to this issue had ever been performed. See Stoney, supra, (Ex. 15 at 71) (“Indeed, the absence of valid scientific criteria for establishing a minimum number of minutiae has been the main reason that professionals have avoided accepting one.”). The IAI effectively conceded as much when it strongly recommended in the Report that “a federally funded in-depth study should be conducted, in order to establish comprehensive statistics concerning the frequency, type and location of ridge characteristics in a significantly large database of fingerprint impressions.” To date, however, no such research has been conducted.
Perhaps the strongest proof regarding the lack of empirical testing comes directly from the government’s submission in United States v. Byron C. Mitchell (E.D. Pa. 1999). Despite having had months to prepare this submission, and despite having consulted with numerous fingerprint “experts” from around the world, the government was unable to point to any relevant scientific testing concerning either of the two fundamental premises upon which the fingerprint identification in this case is based. Instead, the government referred only to certain embryology studies that have traced the fetal development of fingerprints and to certain “twin” studies which have demonstrated that twins possess different fingerprints. Government’s Combined Report To The Court And Motions In Limine Concerning Fingerprint Evidence (hereinafter Gov’t Mem.) at 15-16, 18-19, http://www.usao-edpa.com/daubert.aspxl. These studies, however, demonstrate, at most, that fingerprints are subject to random development in the embryo and that the individual ridge characteristics are not genetically controlled; they do not address the fundamental premises at issue here — the likelihood that prints from different people may show a limited number of ridge characteristics in common, and the ability of latent print examiners to make accurate identifications from small distorted latent fingerprint fragments.
The government also pointed in its memorandum to certain theoretical statistical claims that have been made with respect to the probability of two different people having entire fingerprint patterns in common. (See Gov’t Mem. at 21.) (citing Francis Galton, Fingerprints 110 (1892) and Bert Wentworth, Personal Identification 318-20 (1932)). These theoretical models, however, have been severely criticized and, more importantly, they have never been empirically tested. See Stoney, supra, at 72 (“As noted earlier, about a dozen models for quantification of fingerprint individuality have been proposed[;] none of these even approaches theoretical adequacy, however, and none has been subjected to empirical validation.”). See also Stoney & Thornton, supra; I. W. Evett and R. L. Williams, A Review of the Sixteen Point Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1, 6, http://www.scafo.org/library/120101.aspxl (“It is tempting to believe that the problem of deciding on a numerical standard for identification can be solved by statistical models . . . . However, it is recognized by all that such arguments are overly simplistic.”). Accordingly, the “models [referred to by the government] occupy no role in the . . . professional practice of fingerprint examination”. Stoney, Fingerprint Identification, supra, § 21-2.3.1 at 72 (“Indeed, inasmuch as a statistical method would suggest qualified (non-absolute) opinions, the models are rejected on principle by the fingerprint profession.”). [Footnote 11]
That the theoretical statistical models referred to by the government in Mitchell provide no scientific basis for latent fingerprint identifications can also be seen from the writing of the government’s own expert, David Ashbaugh. In his new book on the subject of fingerprints, Mr. Ashbaugh does not even refer to any of these theoretical models, though one of Mr. Ashbaugh’s stated goals in writing the book is to “address the scientific . . . basis of the identification process.” Ashbaugh, Basic and Advanced Ridgeology, supra at 8-9. [Footnote 12] Moreover, Mr. Ashbaugh acknowledges that there is currently no basis to provide opinions of probability with respect to fingerprints. Id. at 147 (“The so-called probability identifications of friction ridge prints is extremely dangerous, especially in the hands of the unknowing . . . . Extensive study is necessary before this type of probability opinion could be expressed with some degree of confidence and consistency . . . .”). Ashbaugh’s own theory of uniqueness based on “poroscopy” has been disproven by biometric scientists. See A. R. Roddy and J. D. Stosz, Fingerprint Features - Statistical Analysis and System Performance Estimates, Proceedings of the Institute of Electrical and Electronics Engineers, Sept. 1997, Vol. 85, No. 9, pp. 18, 25, http://www.biometrics.org/REPORTS/IEEE_pre.pdf (“Ashbaugh . . . contends that pore pods occur regularly, but the position of the pore within the pod is a random variable. In addition, he assumes independence between pores . . . . (T)he underlying assumption of independence makes uniqueness calculations possible. In reality, though, the independence assumption is not accurate. There appears to be a definite influence on a pore’s position depending on the relative positions of the neighboring pores. If the independence assumption is not valid, then the assumption that all possible configurations of N pores are equally likely is also not valid.”).
The lack of empirical testing that has been done in the field of fingerprints is devastating to any claim that latent fingerprint identifications are scientifically based or generally accepted as reliable. See Daubert, 509 U.S. at 593, 113 S. Ct. at 2796 (“Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified; indeed, this methodology is what distinguishes science from other fields of human inquiry.”) (internal quotations and citations omitted); People v. Soto, 21 Cal. 4th at 540 (the debate regarding the effect of population substructuring on RFLP calculations was resolved empirically only by “extensive literature in peer reviewed journals”). The lack of testing, moreover, deprives latent fingerprint comparisons of true evidentiary significance. Because of the lack of testing, a latent fingerprint examiner can, at best, correctly determine that a certain number of ridge characteristics are in common in the two prints under comparison; the examiner, however, has no basis to opine what the probability is, given the existence of these matching characteristics, that the two prints were actually made by the same finger. Instead, as discussed further below, the latent print examiner can provide only a subjective opinion that there is a sufficient basis to make a positive identification.
The necessity of being able to provide statistically sound probabilities has been recognized in the analogous area of DNA. See People v. Venegas, 18 Cal. 4th at 82 (“A determination that the DNA profile of an evidentiary sample matches the profile of a suspect establishes that the two profiles are consistent, but the determination would be of little significance if the evidentiary profile also matched that of many or most other human beings. The evidentiary weight of the match with the suspect is therefore inversely dependent upon the statistical probability of a similar match with the profile of a person drawn at random from the relevant population.”); People v. Wallace (1993) 14 Cal. App. 4th 651, 661 n.3 (stating that without valid statistics DNA evidence is “meaningless”); People v. Barney (1992) 8 Cal. App. 4th 798, 802 (“The statistical calculation step is the pivotal element of DNA analysis, for the evidence means nothing without a determination of the statistical significance of a match of DNA patterns.”); People v. Axell (1991) 235 Cal. App. 3d 836, 866 (“We find that . . . a match between two DNA samples means little without data on probability. . .”). [Footnote 13] As forensic scientist Dr. John Thornton has noted, “DNA analysts seemed to have embraced the premise that they had best be very careful with their statistics, because, if they aren’t, their work will be rejected. If this paradigm becomes the standard, then many other evidence categories, where statistical underpinnings have yet to be developed, are in deep trouble.” John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony § 20-9.2.1, p. 25 (D. Faigman, ed. 1997).
2. The First Premise Of The Government’s Fingerprint Identification Evidence Not Only Has Not Been Tested, It Has Been Proven False
The first major premise of the government’s fingerprint identification evidence — that it is impossible for fingerprints from two or more people to have as many as ten basic ridge characteristics in common — has not only never been scientifically tested, it has been proven false by anecdotal evidence. As noted above, cases have been documented in which different individuals have shared 10 and even 16 points of similarity. In England, a 16-point standard was adopted after it was discovered that prints from two different individuals shared from 10 to 16 points of similarity. I. W. Evett and R. L. Williams, A Review of the Sixteen Point Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1, 4, http://www.scafo.org/library/120101.aspxl [Footnote 14]. Even matches that are based on 16 points of comparison and that have been verified by a second or third analyst have been shown to be in error. See James E. Starrs, Judicial Control Over Scientific Supermen: Fingerprint Experts and Others Who Exceed The Bounds, (1999) 35 Crim. L. Bull. 234, 243-246 (describing two cases in England in 1991 and 1997 in which misidentifications were made despite the fact that the British examiners insist on 16 points for an identification and triple-check fingerprint identifications); Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) § 16-1, pp. 740-741 (discussing same cases). As Giannelli and Imwinkelried conclude, “(f)ingerprint identification is not as infallible as many laypersons (and experts) assume it to be.” Id.
Unfortunately, however, findings such as these have not been taken into consideration in determining criteria for the identification of fingerprints in the United States. As discussed further below, there is currently no minimum standard for latent fingerprint identifications in this country, and, as can be seen from the testimony of Ms. Chong, each examiner is free to arbitrarily set his or her own minimum threshold and then to declare with absolute certainty that the latent and known print came from the same source. Most telling in this regard is Evett and Williams’s observation that “(e)xperts [in Britain] appeared to have a particularly poor regard for the fingerprint profession in the USA where there is no national standard. Cases of wrongful identification which had been made by small bureaus in the USA were cited as being symptomatic of a poor system and the dominant view was that such unfortunate events would not have occurred had there been a 16 points standard in operation.” A Review of the Sixteen Point Fingerprint Standard at 4. The potential for error is thus significant, especially given that distortion or even fabrication can cause ridge characteristics from two different prints to appear the same, when in reality they are not.
3. The Testing Conducted by the FBI in United States v. Mitchell for the Purposes of Litigation Fails To Demonstrate Scientific Reliability
Recognizing the lack of testing and scientific research that has been done by the fingerprint community during the last 100 years, the government in United States v. Mitchell desperately attempted to make up for this deficiency. The government’s rushed efforts, however, have been far from successful.
As discussed above, one test the government conducted was to send the two latent prints at issue in Mitchell’s case, along with Mr. Mitchell’s inked prints, to 53 different law enforcement agencies. The government requested that the agencies select “court qualified” examiners to compare the prints and to determine whether any identifications could be made. This experiment is, in fact, relevant to the second fundamental premise at issue in this case — whether latent print examiners can reliably make identifications from small latent print fragments — as it indicates whether different examiners can, at least, be expected to reach the same conclusions when they are presented with the same data.
The results of this test, however, constitute an unmitigated disaster from the government’s perspective, as can be seen from the fact that the test is nowhere mentioned in the government’s first memorandum to the Court. While the results of the test can be found in the Mitchell government exhibit 6-4, this exhibit does not reveal that the prints utilized in the test are the very prints at issue in Mitchell. The reason for this omission is clear. Of the 35 agencies that responded to the government’s request, eight (23%) reported that no identification could be made with respect to one of the two latents and six (17%) reported that no identification could be made as to the other. See Memorandum Of Law In Support Of Mr. Mitchell’s Motion To Exclude The Government’s Fingerprint Identification Evidence, p. 21 (hereinafter “Memorandum In Support”), http://www.onin.com/fp/fphome.aspxl. The test thus dramatically reveals how subjective latent print comparisons actually are and how unreliable their results can be.
The People can hardly contend in this regard that the participating agencies did not appreciate the extreme importance of the comparisons that they were being asked to perform. The government’s cover letter to the agencies provided:
The FBI needs your immediate help! The FBI laboratory is preparing for a Daubert hearing on the scientific basis for fingerprints as a means of identification. The Laboratory’s Forensic Analysis Section Latent Print Unit, is coordinating this matter and supporting the Assistant United States Attorney in collecting data needed to establish this scientific basis and its universal acceptance.
The time sensitive nature of these requests cannot be expressed strongly enough, nor can the importance of your cooperation. The potential impact of the Federal court not being convinced of the scientific basis for fingerprints providing individuality has far-reaching and potentially negative ramifications to everyone in law enforcement. The FBI wishes to present the strongest data available in an effort to insure success in this legal matter and your cooperation is a key component in achieving this result.
The People also cannot attribute the results of this test to the fact that the fingerprint comparisons were performed by inexperienced examiners. Consistent with the urgency of the government’s cover letter, each of the state law enforcement agencies that did not find a sufficient basis to make an identification selected extremely experienced examiners to make the comparisons. As set forth in the Memorandum In Support at p. 21, the range of experience for this group of examiners is between 10 and 30 years, with the average amount of experience being 20 years. In addition, virtually all of these examiners are board certified members of the IAI, the highest distinction that a latent print examiner can achieve. Id. Accordingly, that this particular group of examiners did not find a sufficient basis to make an identification on either one or both of the latent prints at issue in this case is devastating to the government’s claim of scientific reliability. See also I. W. Evett and R. L. Williams, A Review of the Sixteen Point Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1, 7 (“Statistical analysis [of an extensive collaborative study] did not suggest any association between the number of [correct] identifications made by an expert and his/her length of experience.”).
Apparently recognizing just what this test really means to its case against Mr. Mitchell, the government next took the remarkable step of attempting to eradicate the test results. The government asked each of the agencies that did not make an identification to retake the test, but this time the government provided the agencies with the answers that the government believed to be correct. Along with a new response form, the government sent each of these agencies enlargements of the prints at issue displaying what the government apparently believed were the common characteristics. The government’s cover letter to the state agencies provided in pertinent part:
Survey B results indicate that your agency responded with the answer “No” with respect to one or both of the latent prints. For your convenience, I have included with this letter another set of the original photographs submitted to you with another blank survey form and a set of enlarged photographs of each latent print and an enlargement of areas from two of the fingerprints contained on the fingerprint card. These enlargements are contained within a clear plastic sleeve that is marked with red dots depicting specific fingerprint characteristics.
Please test your prior conclusions against these enlarged photographs with the marked characteristics. Please indicate the results on the enclosed survey form and return to me by June 11, 1999. You only need to complete the bottom portion, the third part, of the survey form. Any written narrative description or response should be attached to the survey form.
I anticipate that this data must be made available to the defense counsel and the court prior to the Daubert Hearing proceedings. Therefore, please insure that your handling of this matter is done within the June 11, 1999 deadline. The Daubert Hearing is scheduled for July 7, 1999, and the trial is scheduled for September 13, 1999.
Memorandum in Support at 22.
It is hardly surprising, given the magnitude of what was at stake here, that all of the state agencies at issue, with the exception of one, Missouri, responded to the government’s tactics by recanting and by filling out the new response forms so as to indicate that positive identifications had now been made. The government, in turn, revised its report of the test, Government Exhibit 6-4, so as to indicate that, except for Missouri, only positive identifications were returned by the participating agencies. Memorandum in Support at 23. (The government’s newly revised exhibit 6-4 is provided as Defense Exhibit 23.) This revised exhibit, moreover, provides no indication that these state agencies ever returned anything other than positive identifications. By letter to the Court dated June 17, 1999, the government then provided this revised exhibit to the Court, instructing the Court to “substitute” the exhibit for the one the government previously provided in its exhibit book. (Memorandum In Support at p. 23.) In this fashion, the government attempted, like a magician, to make the original results of its experiment vanish into thin air.
The government’s considerable efforts in this regard, however, have only succeeded in highlighting the importance of the original test. The study as originally conducted by the government was a relatively fair experiment as to whether different examiners would at least be able to reach the same conclusion when given the same prints to compare, and the test had special significance given the government’s decision to use the very prints at issue in Mitchell. The original, unbiased results of the test speak for themselves. That the government has subsequently been able to convince more than 20% of the participating examiners to change their answers only serves to demonstrate the desperate straits that the government found itself in and the lengths to which the government will go in order to have its fingerprint evidence admitted. As a noted fingerprint examiner has aptly recognized, an examiner’s conclusion that a latent print is unidentifiable must be considered “irrevocable,” as nothing is more “pitiful” than an examiner’s subsequent attempt to change that conclusion:
Of course, the crucial aspect is the initial determination to render the latents as unsuitable for identification purposes . . . this must be a ruthless decision, and it must be irrevocable. There is no more pitiful sight in fingerprint work than to see an expert who has decided that a mark is useless, then seeking to resuscitate the latent to compare with a firm suspect.
John Berry, Useless Information, 8 Fingerprint Whorld 43 (Oct. 1982).
In addition to the above discussed test, the government in Mitchell also conducted experiments on its automated fingerprint identification system (“AFIS”). On the basis of these tests, the government made certain statistical claims with respect to the probability of two people having identical fingerprints or identical “minutia subsets” of fingerprints. The utter fallacy of these statistical claims, as well as the serious methodological flaws that undermine these experiments, became clear at the Daubert hearing, the transcripts of which the Court will be provided.
Moreover, given that the tests in Mitchell were conducted solely for purposes of litigation, and have not been published or subjected to peer review, they do not constitute the type of data or facts that an expert in the fingerprint field would reasonably rely upon, and, as such, the tests should not even be considered by this Court. See Evidence Code section 801(b); United States v. Tran Trong Cuong, 18 F.3d 1132, 1143 (4th Cir. 1994) (“reports specifically prepared for purposes of litigation are not by definition of a type reasonably relied upon by experts in the particular field.”); Richardson v. Richardson-Merrell, Inc., 857 F.2d 823, 831 (D.C. Cir. 1988) (doctor’s testimony held inadmissible because, among other things, the calculations that he relied upon had not been “published . . . nor offered . . . for peer review.”); Perry v. United States, 755 F.2d 888, 892 (11th Cir. 1985) (expert’s testimony rejected where the study upon which the expert relied had not been published or subjected to peer review).
Moreover, there is a particularly good reason why in the instant case the government’s AFIS experiments in Mitchell should be published and subjected to peer review before they are given consideration by a court of law. The government in Mitchell attempted to utilize AFIS as it has never been utilized before. No previous attempts have ever been made to determine fingerprint probabilities from an AFIS system. To the contrary, such systems have been designed for an entirely different purpose — to generate a number of fingerprint candidates which a human fingerprint examiner can then manually compare with the latent print under consideration. The extreme complexity of what the government has attempted to do in Mitchell can readily be seen from the pleadings and transcripts in the case. The following is an excerpt from the description of the first experiment.
Each comparison was performed by two totally different software packages, developed in two different countries by two different contractors using independent teams of fingerprint and software experts. The results of both comparisons were mathematically “fused” using software developed by a third contractor.
The two “matcher” programs calculate a measure of similarity between the minutia patterns of two fingerprints. In both cases, the scores of an identical mate fingerprint is normalized to 1.0 (or 100%). The statistical fusion program combines the two scores by analyzing the most similar 500 (out of 50,000) minutiae patterns. The fusion operation discards 49,500 very dissimilar minutia patters before calculating the fusion statistics. As in the case of the “matcher” programs, the fused similarity measure calculated by the fusion program is normalized to 1.0 (or 100%).
(Memorandum In Support at 26).
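The two-matcher fusion scheme described in the excerpt can be sketched in simplified form. The sketch below is for illustration only: the function name, the equal-weight averaging “fusion” rule, and the toy scores are assumptions, since the actual contractor software in Mitchell was proprietary and was never disclosed to the defense.

```python
# Illustrative sketch only: the real matcher and fusion programs in
# Mitchell were proprietary and undisclosed. The equal-weight averaging
# "fusion" rule here is an assumption, not the government's algorithm.
def fuse_scores(scores_a, scores_b, keep=500):
    """Fuse two matchers' normalized similarity scores (1.0 = identical
    mate), retaining only the `keep` most similar candidates, as the
    excerpt describes (500 retained out of 50,000)."""
    fused = [(a + b) / 2.0 for a, b in zip(scores_a, scores_b)]
    fused.sort(reverse=True)   # most similar candidates first
    return fused[:keep]        # discard the dissimilar remainder

# Toy example with four candidate prints, keeping the top three
top = fuse_scores([1.0, 0.75, 0.5, 0.25], [1.0, 0.25, 0.5, 0.75], keep=3)
print(top)
```

Even this toy version makes plain how many unexamined design choices (score normalization, weighting of the two matchers, and what is discarded before any statistics are computed) lie beneath the government’s probability claims.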
Obviously, there are many valid questions regarding the software systems and methodology that the “teams of fingerprint and software experts” utilized to conduct these extremely complicated and novel experiments. As courts have recognized, however, the proper forum for such questioning, at least as an initial matter, is through publication and peer review, not the courtroom. See United States v. Brown, 557 F.2d 541, 556 (D.C. Cir. 1977) (holding that novel hair analysis technique should not have been admitted and stating that “[a] courtroom is not a research laboratory.”); Richardson, 857 F.2d at 831. Peer review is especially important here given that the government in Mitchell refused to even provide the defense with access to the software packages that were used to run the experiments. (Memorandum In Support at p. 27).
Finally, the government’s novel AFIS experiments also need to be subjected to peer review and publication before they are accepted in a court of law because the statistical conclusions that the government generated defy reality. The government, for example, asserted, on the basis of its AFIS experiments, that the probability of two people even having four identical ridge characteristics in common “is less than one chance in 10 to the 27th power . . .” (Gov’t Mem. at 23.) Yet as discussed above, the fingerprint literature contains examples of people having 10 to 16 ridge characteristics in common. Moreover, as one fingerprint expert has recently acknowledged in explaining why an identification would never be made on the basis of four or five matching points, a “million people” could possess those four or five points of similarity. Commonwealth v. Daidone, 684 A.2d 179, 188 (Pa. Super. 1996). See also, Stoney, Fingerprint Identification, supra, §§ 21-2.1.2 at 66 (“A correspondence of four minutiae may well be found upon diligent, extended effort when comparing the full set of prints of one individual with those from another person.”). Accordingly, there is clearly something amiss with respect to the government’s novel efforts to create astronomical statistical probabilities from its AFIS system.
In sum, the AFIS testing that the government conducted in Mitchell for purposes of litigation would not reasonably be relied upon by an expert in the fingerprint field and it should therefore not be relied upon by this Court.
4. There is No Established Error Rate for Latent Print Comparisons, But It Is Clear That Many Errors Do Occur
Given the lack of empirical validation studies that have been performed, it is not surprising that there is no established error rate for latent print comparisons. Nevertheless, the government, without the benefit of any citation, brazenly submitted in Mitchell that the error rate is “zero” (Gov’t Mem. at 19). This claim, however, simply ignores the many documented cases of erroneous fingerprint identifications. Any claim that the error rate is “zero” is patently frivolous in light of the fact that “both here and abroad there have been alarming disclosures of errors by fingerprint examiners.” Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) §§ 16-1, pp. 740-741 (describing two cases in England in 1991 and 1997 in which misidentifications were made despite the fact that the British examiners insist on 16 points for an identification and triple check fingerprint identifications); James E. Starrs, Judicial Control Over Scientific Supermen: Fingerprint Experts and Others Who Exceed The Bounds, (1999) 35 Crim. L. Bull. 234, 243-246 (describing the same two cases, as well as a case in New York and two in North Carolina); James E. Starrs, A Miscue in Fingerprint Identification: Causes and Concerns, 12 J. of Police Sci. & Admin. 287 (1984).
One such case is reported in State v. Caldwell, 322 N.W.2d 574 (Minn. 1982). The prosecution’s fingerprint expert in Caldwell, a board certified member of the IAI, with more than 14 years of experience, testified that a particular latent print at issue in the case had been made by the defendant’s right thumb. Starrs, A Miscue in Fingerprint Identification, supra, at 288. The examiner based his opinion on 11 points of similarity that he had charted. Id. A second fingerprint expert, also a board certified member of the IAI, confirmed the first examiner’s finding, after being consulted by the defense. Id. Following the defendant’s conviction for murder, however, it was definitively established that both of these certified fingerprint experts had erred. Caldwell, 322 N.W. 2d at 585. The defendant’s conviction was accordingly reversed. Id.
“Perhaps the most astounding and colossal fingerprint identification error that has yet been made occurred in England in 1997.” See Starrs, Scientific Supermen at 244-245. In that case, two latent prints that had been recovered from a burglary crime scene were each found to have at least sixteen points in common with two of Andrew Chiory’s inked prints. These identifications, pursuant to standard Scotland Yard procedures, had been triple checked prior to the defendant’s arrest. After the defendant had spent several months in jail, however, the identifications were found to be erroneous.
Professor Starrs also describes how the same tragedy had happened before in England. In 1991, Neville Lee had been arrested for the rape of an eleven-year-old girl because his fingerprints matched those of the offender with 16 points of comparison. The fingerprint error was discovered only when another man confessed to the crime. Id. at 245. [Footnote 15]
Accordingly, it is beyond dispute that “(r)egardless of its verbal trappings the science of fingerprint identifications is in no sense infallible, or flawless.” Starrs, Scientific Supermen at 243. The government’s own expert in Mitchell has acknowledged as much. See David L. Grieve, Reflections on Quality Standards, 16 Fingerprint Whorld 108, 110 (April 1990) (“It is true that some overly zealous North American examiners have given testimony concerning false identifications when they believed the identifications were valid.”). What remains unknown, however, is the rate at which misidentifications take place. As commentators have recognized, “it is difficult to glean information about cases of error because they rarely produce a public record, and the relevant organizations and agencies tend not to discuss them publicly.” Simon A. Cole, Witnessing Identification: Latent Fingerprinting Evidence and Expert Knowledge, 28 Social Studies in Science 687, 701 (Oct.-Dec. 1998). Moreover, as discussed above, there have been no controlled studies conducted so as to determine an error rate for latent print examiners. “Unfortunately, although there is extensive collective experience among casework examiners, there has been no systematic study such as that described above.” Stoney, Fingerprint Identification, supra, §§ 21-2.1.2 at 66.
Just how prevalent the problem of false identifications may actually be, however, can be seen, at least to some extent, from the astonishingly poor performance of latent print examiners on crime lab accreditation proficiency exams. On these exams, latent print examiners are typically provided with several latent prints along with a number of “ten print” inked impressions to compare them with. Commencing in 1995, the provider of the test, Collaborative Testing Service, began to include, as part of the test, one or two “elimination” latent prints made by an individual whose inked impressions had not been furnished.
The results of the 1995 exam were, in the words of the government’s expert in Mitchell, both “alarming” and “chilling.” Grieve, Possession of Truth, 46 J. Forensic Ident. 521, 524. Of the 156 examiners who participated, only 68 (44%) were able to both correctly identify the five latent print impressions that were supposed to be identified, and correctly note the two elimination latent prints that were not to be identified. Even more significantly, 34 of these examiners (22%) made erroneous identifications on one or more of the questioned prints for a total of 48 misidentifications. Id. Erroneous identifications occurred on all seven latent prints that were provided, including 13 errors made on the five latent prints that could be correctly identified to the supplied suspects. Id. In addition, one of the two elimination latents was misidentified 29 times. Id.
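The percentages reported above follow directly from the raw figures and can be verified with simple arithmetic. The check below is illustrative only; the variable names are not part of the record.

```python
# Arithmetic check of the 1995 CTS proficiency exam figures cited above
participants = 156          # examiners who took the 1995 exam
fully_correct = 68          # identified all five latents and noted both eliminations
erring_examiners = 34       # made one or more erroneous identifications

pct_fully_correct = round(100 * fully_correct / participants)   # 44
pct_erring = round(100 * erring_examiners / participants)       # 22
print(pct_fully_correct, pct_erring)
```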
“The results of the 1995 proficiency study… raise serious questions about the trustworthiness of fingerprint analysis.” Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) §§ 16-9(E), p. 784. These shockingly poor results, moreover, could not be blamed on the test. In fact, as Professors Giannelli and Imwinkelried point out, “(a)n especially troubling aspect of the test was that it was not blind, since the participating examiners were surely on notice that they were being tested and such notice should have put them on their guard to do their very best.” Id. at p. 741 n. 18. The 1995 proficiency exam was recognized as being “a more than satisfactory representation of real casework conditions.” Grieve, Possession of Truth, supra, at 524. The test was designed, assembled, and reviewed by representatives of the International Association of Identification. Id. As Mr. Grieve correctly observed, a “proficiency test composed of seven latents and four suspects was considered neither overly demanding or unrealistic.” Id. Accordingly, the dreadful results are a matter of significant concern. As Mr. Grieve has written:
Reaction to the results of the CTS 1995 Latent Print Proficiency Test within the forensic science community has ranged from shock to disbelief. Errors of this magnitude within a discipline singularly admired and respected for its touted absolute certainty as an identification process have produced chilling and mind-numbing realities. Thirty-four participants, an incredible 22% of those involved, substituted presumed but false certainty for truth. By any measure, this represents a profile of practice that is unacceptable and thus demands positive action by the entire community.
Grieve, Possession of Truth, supra, at 524-25 (Ex. 9 at 524-25).
Despite Mr. Grieve’s call for “positive action,” the poor results have continued unabated on the more recent proficiency exams. On the 1998 test, for example, only 58% of the participants were able to correctly identify all of the latents and to recognize the two elimination latents as being unidentifiable. Collaborative Testing Services, Inc., Report No. 9808, Forensic Testing Program: Latent Prints Examination 2 (1998). Even more disturbing was the fact that 21 erroneous identifications were made by 14 different participants. Id. [Footnote 16]
Having failed to address any of these proficiency tests in advancing its claim of a zero error rate, the government in Mitchell took the remarkable position that “practitioner error is not relevant to the validity of the science and methodology under Daubert . . . .” Government’s Response to the Defendant’s Motion to Compel the Government to Produce Written Summaries for All the Experts That It Intends to Call at the Daubert Hearing at 3 n.3. The government, however, failed to explain why practitioner error is irrelevant under Daubert. Nor did the government explain how an error rate for a particular technique may be assessed other than through its real-life practitioners. Not surprisingly, courts have, in fact, looked at studies of examiner error rates in determining whether proffered “scientific” evidence is reliable. See, e.g., United States v. Smith, 869 F.2d 348, 353-54 (7th Cir. 1989) (studies of “actual cases examined by trained voice examiners” considered by court in deciding admissibility). The Seventh Circuit’s decision in Smith was, as noted above, cited with approval by the Supreme Court in Daubert. See Daubert, 509 U.S. at 594, 113 S. Ct. at 2797; People v. Leahy (1994) 8 Cal. 4th 587, 609 (to be qualified as a Kelly expert on an HGN test, a witness must have “some understanding of the processes by which alcohol ingestion produces nystagmus, how strong the correlation is, how other possible causes might be masked, what margin of error has been shown in statistical surveys, and a host of other relevant factors…”); see also Saks, supra, at 1090 (“Even if forensic metaphysicians were right, that no two of anything are alike, for fact finders in earthly cases, the problem is to assess the risk of error whatever its source, be that in the basic theory or in the error rates associated with human examiners or their apparatus.”); John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 20-6.2, p. 19 (“Proficiency testing is a means by which [reliability, validity, precision, and accuracy] can be measured… Proficiency testing [is] the most appropriate means for the identification of sources of error…”). Accordingly, the argument that practitioner error rates are irrelevant is without merit.
In sum, any claim of a zero error rate is plainly at odds with reality. While no controlled studies have been done to determine an error rate, it would appear from the proficiency testing done in the field that the rate is in fact substantial. In this regard, it must be remembered that under Kelly it is the government’s burden to establish the scientific reliability and general acceptance of the expert evidence that it seeks to admit. With respect to the error rate factor, the government plainly has not met that burden. See United States v. Starzecpyzel, 880 F. Supp. 1027, 1037 (S.D.N.Y. 1995) (“Certainly, an unknown error rate does not necessarily imply a large error rate[;] [h]owever, if testing is possible, it must be conducted if forensic document examination is to carry the imprimatur of ‘science.’”).

5. There Are No Objective Standards to Govern Latent Fingerprint Comparisons

Latent fingerprint examiners in the United States are currently operating in the absence of any uniform objective standards. The absence of standards is most glaring with respect to the ultimate question of all fingerprint comparisons: What constitutes a sufficient basis to make a positive identification? As discussed above, the official position of the IAI, since 1973, has been that no minimum number of corresponding points of identification is required for an identification. The SWGFAST Quality Assurance Guidelines of the FBI are in agreement. According to the Introduction to the Guidelines, “(t)here is no scientific basis for requiring that a minimum number of corresponding friction ridge features be present in two impressions in order to effect an identification.”
Instead, the determination of whether there is a sufficient basis for an identification is left entirely to the subjective judgment of the particular examiner. Indeed, in his recent book, David Ashbaugh repeatedly stresses that “(t)he opinion of individualization or identification is subjective.” Ashbaugh, Basic and Advanced Ridgeology at 103; See also, David Stoney, Fingerprint Identification: Scientific Status, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 21-2.1.2 at 65 (“In fingerprint comparison, judgments of correspondence and the assessment of differences are wholly subjective: there are no objective criteria for determining when a difference may be explainable or not.”).
While the official position of the IAI and SWGFAST, as supported by Mr. Ashbaugh, is that there is no basis for a minimum point requirement, many fingerprint examiners in the United States continue to employ either their own informal point standards or those that have been set by the agencies that they work for. Simon Cole, What Counts For Identity? The Historical Origins Of The Methodology Of Latent Fingerprint Identification, 12 Sci. In Context 1, 3-4 (Spring 1999) [hereinafter Cole, What Counts For Identity?]. This variability of standards is confirmed by Professors Giannelli and Imwinkelried: “There is no consensus on the number of points necessary for an identification. In the United States, one often hears that eight or ten points are ‘ordinarily’ required. Some local police departments generally require 12 points.” Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) §§ 16-7(A), p. 768.
Prior to the IAI’s 1973 proclamation, the informal standard most commonly employed in the United States was 12. See FBI, Fingerprint Identification, supra, at 6. To this day, FBI latent fingerprint experts testify that “(i)n the FBI latent fingerprint section, at present time, there is no set number of points. However, we have an administrative rule which is on the books which requires any latent print of less than 12 points of identity–and that being the dots, the end of ridges or enclosures–requires supervisory approval before it can be reported in a report that it is in fact an identification.” United States v. Timothy McVeigh, Testimony of Special Agent Louis Hupp, Reporter’s Transcript of Proceedings, Vol. 68, April 29, 1997, http://www.papillion.ne.us/mriddle/okctr/4-29-1.aspx. See also, People v. Clarence Powell, S. F. Muni Ct. No. 167003, Testimony of Inspector Michael Byrne, Preliminary Hearing Transcript, April 5, 1978, p. 70 (“Now, the San Francisco Police Crime Laboratory for years we have liked to testify on 12 points…We stop at 12. We are completely satisfied at 12 but…that doesn’t mean we will not testify on nine or eight or–I have never done it myself–I have testified to ten but I don’t think I have gone to nine yet.”).
In addition, while there is no uniform identification standard in the United States, “many” other countries have, in fact, set such standards based on a minimum number of points of comparison. Ashbaugh, Basic and Advanced Ridgeology, supra, at 6-7. As indicated above, in England, many examiners use 16 points as a rule of thumb and triple check the results. “In France, the required number used most often is 24 while the number is 30 in Argentina and Brazil.” Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d ed. 1999) §§ 16-7(A), p. 768. Italy has a minimum standard of 17 matching ridge characteristics. Christophe Champod, Numerical Standards and “Probable” Identifications, 45 J. of Forensic Identification 136, 138 (1995). The primary purpose of establishing such standards is to try to ensure against erroneous identifications. K. Luff, The 16-Point Standard, 16 Fingerprint Whorld 73 (Jan. 1990). See also, Ashbaugh, Basic and Advanced Ridgeology, supra, at 102 (“[T]he static training threshold is an acceptable practice as a safeguard and permits one to gain experience and confidence with a reduced fear of committing an error.”). Such a standard is legally necessary to ensure “forensic reliability” as that term is used in Venegas.
As commentators have recognized, the question of whether there should be a minimum point standard for latent print identifications has bitterly divided the fingerprint community. See Cole, What Counts For Identity, supra, at 1. While latent print examiners have somehow managed to maintain a united front in the courtroom, they have been at odds in the technical literature. Id. at 6. Mr. Ashbaugh, for example, has written that “it is unacceptable to use the simplistic point philosophy in modern day forensic science.” Ashbaugh, Premises, supra, at 513. [Footnote 17] As Mr. Ashbaugh has correctly recognized, the selection of any particular point standard is based, not on scientifically conducted probability studies, but “through what can best be described as an ‘educated conjecture’.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 2; see also Ashbaugh, Premises, supra, at 512 (stating that “superficial and unsubstantiated quips became the methodology of the point system”).
The problem, however, is that while Mr. Ashbaugh is correct that the point system, as employed by fingerprint examiners over the past hundred years, is scientifically invalid, neither Mr. Ashbaugh, nor any other member of the fingerprinting community, has advanced a scientifically sound alternative. Here, for example, is Mr. Ashbaugh’s explanation as to how a latent print examiner, in the absence of a minimum point standard, is supposed to know when a sufficient basis exists to make an identification:
A frequently asked question is how much is enough? The opinion of individualization or identification is subjective. It is an opinion formed by the friction ridge identification specialist, based on the friction ridge formations found in agreement during comparison. The validity of the opinion is coupled with an ability to defend that position, and both are founded in one’s personal knowledge, ability, and experience.
How much is enough? Finding adequate friction ridge formations in sequence, that one knows are specific details of the friction skin, and in the opinion of the friction ridge identification specialist there are sufficient uniqueness within those details to eliminate all other possible donors in the world, is considered enough. At that point individualization has occurred and the print has been identified. The identification was established by the agreement of friction ridge formations, in sequence, having sufficient uniqueness to individualize.
Ashbaugh, Basic and Advanced Ridgeology, supra, at 103.
The utter meaninglessness of this explanation speaks for itself. Mr. Ashbaugh’s prior writings on this subject provide little in the way of additional insight. He has stated, for example, that while “in some instances we may form an opinion on eight ridge characteristics [,] [i]n other instances we may require twelve or more to form the same opinion.” David Ashbaugh, The Key to Fingerprint Identification, 10 Fingerprint Whorld 93, 93 (April 1985). Mr. Ashbaugh’s explanation for this sliding scale is that some ridge characteristics are more unique than others. Id. at 94, 95. But, as discussed above, no weighted measures of the different characteristics have ever been adopted by the fingerprint community. As California Department of Justice fingerprint expert Dusty Clark has explained, “(t)he repeatability of the finite detail that is utilized in the comparison process has never been subjected to a definitive study to demonstrate that what is visible is actually a true 3rd level detail or an anomaly…Ridgeology hasn’t been scientifically proven to be repeatable, and it’s application is not standardized.” Dusty Clark, What’s The Point (Dec. 1999), http://www.latent-prints.com/id_criteria_jdc.aspx. Accordingly, as Mr. Ashbaugh has recognized, the particular examiner’s determination of whether eight or twelve matching characteristics is sufficient in a particular case is entirely “subjective.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 103. But as Mr. Clark again points out, “(a) subjective analysis without quantification makes the identification process as reliable as astrology. If one does not quantify, is it an ID when a warm and fuzzy feeling overwhelms you? What happens if my warm and fuzzy feeling is different that yours?…” Id.
Ashbaugh and others place principal reliance on the experience and training of the analyst as a hedge against erroneous results. However, as indicated above, Evett and Williams found in an extensive collaborative study that “(s)tatistical analysis did not suggest any association between the number of [correct] identifications made by an expert and his/her length of experience.” I. W. Evett and R.L. Williams, A Review of the Sixteen Point Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1, 7. In their study, the FBI and other experienced North American experts were sent 10 sets of samples, only 6 of which should have resulted in a court quality identification and the tenth of which came from two different individuals. Significantly, “four experts at the FBI were unanimous in deciding that there were 9 court quality identifications, the tenth comparison being not identical. Most of the North American experts decided on 8 or 9 full identifications.” Id. at 8. This study perfectly illustrates the truth of Dr. John Thornton’s observation that
(S)ome experts exploit situations where intuitions or mere suspicions can be voiced under the guise of experience. When an expert testifies to an opinion, and bases that opinion on “years of experience”, the practical result is that the witness is immunized against effective cross examination. When the witness testifies that “I have never seen another similar instance in my 26 years of experience…,” no real scrutiny of the opinion is possible. No practical means exists for the questioner to delve into the extent or quality of that experience. Many witnesses have learned to invoke experience as a means of circumventing the responsibility of supporting an opinion with hard facts. For the witness, it eases cross-examination. But it also removes the scientific basis for the opinion.
Experience is neither a liability nor an enemy of the truth; it is a valuable commodity, but it should not be used as a mask to deflect legitimate scientific scrutiny, the sort of scrutiny that customarily is leveled at scientific evidence of all sorts. To do so is professionally bankrupt and devoid of scientific legitimacy, and courts would do well to disallow testimony of this sort. Experience ought to be used to enable the expert to remember the when and the how, why, who, and what. Experience should not make the expert less responsible, but rather more responsible for justifying an opinion with scientific facts.
John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 20-5.5, p. 17.
The lack of uniform standards for latent print comparisons extends well beyond the question of what ultimate standard should apply for a positive identification. Objective standards are lacking throughout the entire comparison process. Take, for example, the simple issue of how points of similarity should be counted. When examiners find themselves struggling to reach a certain point criterion, they often engage in a practice known as “pushing the mark.” Clegg, supra, at 99. Pursuant to this practice, a single characteristic, such as a short ridge, is counted not as one point, but rather as two separate ridge endings. Id. Or, a single enclosure is counted as two bifurcations. See Robert Olsen, Friction Ridge Characteristics and Points of Identity: An Unsolved Dichotomy of Terms, 41 J. Forensic Identification 195 (1991) (the IAI has declared in a formal report that an enclosure should be counted as a single point rather than as two separate bifurcations). While the IAI has declared that points should not be counted in this fashion, it is nevertheless commonly done, as can be seen from the work of the FBI examiner in the Mitchell case, where an enclosure was counted as two bifurcations. The obvious danger of this practice, as one examiner has candidly recognized, is its “potential to generate error . . . .” Clegg, supra, at 101.
The lack of objective standards in fingerprint comparisons can also be seen with respect to the so-called “one dissimilarity rule.” See John I. Thornton, The One-Dissimilarity Doctrine in Fingerprint Identification, 306 Int’l Crim. Police Rev. 89 (March 1977). Pursuant to this doctrine, if two fingerprints contain a single genuine dissimilarity then the prints cannot be attributed to the same finger or individual. Id. This doctrine is well recognized in the fingerprint community and has been endorsed in the writings of the government’s own experts. David Ashbaugh, Defined Pattern, Overall Pattern and Unique Pattern, 42 J. of Forensic Identification 505, 510 (1992) [hereinafter Ashbaugh, Defined Pattern]. The doctrine, however, is effectively ignored in practice. As Dr. Thornton has recognized, once a fingerprint examiner finds what he or she believes is a sufficient number of matching characteristics to make an identification, the examiner will then explain away any observed dissimilarity as being a product of distortion or artifact:
Faced with an instance of many matching characteristics and one point of disagreement, the tendency on the part of the examiner is to rationalize away the dissimilarity on the basis of improper inking, uneven pressure resulting in the compression of a ridge, a dirty finger, a disease state, scarring, or super-imposition of the impression. How can he do otherwise? If he admits that he does not know the cause of the disagreement then he must immediately conclude that the impressions are not of the same digit in order to accommodate the one-dissimilarity doctrine. The fault here is that the nature of the impression may not suggest which of these factors, if any, is at play. The expert is then in an embarrassing position of having to speculate as to what caused the dissimilarity, and often the speculation is without any particular foundation.
The practical implication of this is that the one-dissimilarity doctrine will have to be ignored. It is, in fact, ignored anyway by virtue of the fact that fingerprint examiners will not refrain from effecting an identification when numerous matching characteristics are observed despite a point of disagreement. Actually, the one-dissimilarity doctrine has been treated rather shabbily. The fingerprint examiner adheres to it only until faced with an aberration, then discards it and conjures up some fanciful explanation for the dissimilarity.
Thornton, supra, at 91.
Dr. Thornton has also noted an additional problem which plagues those few police departments which adhere to an illusory standard of eight points of identification. As he explains, under this rationale
(E)ight matching characteristics, if they are clear and unambiguous, will serve for purposes of identification. A problem, however, is that if the evidence print can be gleaned for no more than eight characteristics, it is likely that the print suffers from some lack of clarity. Evidence fingerprints that possess only eight characteristics, but with those eight characteristics being brilliant and unequivocal, are not commonly encountered. So at the same time that the criterion for identification is being relaxed, the ambiguity of each characteristic is being augmented.
John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 20-9.2.5, p. 31.
The absence of real standards in the fingerprint field also can be seen with respect to the issue of verification. Independent verification is considered an essential part of the identification process. See, SWGFAST Quality Assurance Guidelines, Guideline 1.1 ("All identifications must be verified by a qualified latent print examiner."). But, in real practice, fingerprint agencies sometimes "waive the verification requirement." William Leo, Identification Standards – The Quest for Excellence, Cal. Identification Dig. (December 1995). Moreover, as revealed by one of the government's experts in the Mitchell case, some examiners will simply go from one supervisor to another until a desired verification is obtained. Pat Wertheim, The Ability Equation, 46 J. of Forensic Identification 149, 153 (1996). Mr. Wertheim candidly recounts in this article his experience of shopping for a supervisor so as to obtain the positive verification that he believed was warranted. Id.
More subtle, but no more scientifically acceptable, is the verification process used in this case. Ms. Chong testified that after she made her identification she wrote up a report and gave it to Ken Moses, who was then asked to verify her results. (RT 254). The obvious problem is that Mr. Moses was given access to his colleague's report before he was asked to do the verification. Even Mr. Ashbaugh condemns such a biasing process. Ashbaugh, Basic and Advanced Ridgeology, supra, at 108 ("The latent print is always analyzed first, before comparison to the exemplar. This rule ensures an uncontaminated analysis of the unknown friction ridge detail. Comparisons conducted in this fashion ensure objectivity and prevent contamination through previous knowledge."). See also, Y. Mark and D. Attias, What Is the Minimum Standard for Characteristics for Fingerprint Identification (1996) Fingerprint Whorld 148 ("We wish to emphasize that the determination of a positive identification by one of our experts is made independently from other experts and from the circumstances of the case."). Violation of this principle no doubt explains how two separate misidentifications were made in England, despite the presence of triple verification. (See supra at 49-50).
Finally, the lack of standards in the fingerprint community extends to the training and experience requirements for latent print examiners. To put it simply, no such requirements currently exist. See Leo, supra (recognizing need for “minimum training and experience standards” for latent print examiners). As one of the government’s experts in Mitchell has recognized, “people are being hired directly into latent print units without so much as having looked at a single fingerprint image.” Wertheim, supra, at 152 (Ex. 41 at 152). Once hired, the training that examiners receive is typically minimal. Consider what government expert David Grieve has said on the subject of training:
The harsh reality is that latent print training as a structured, organized course of study is scarce. Traditionally, fingerprint training has centered around a type of apprenticeship, tutelage, or on-the-job training, in its best form, and essentially a type of self study, in its worst. Many training programs are the "look and learn" variety, and aside from some basic classroom instruction in pattern interpretation and classification methods, are often impromptu sessions dictated more by the schedule and duties of the trainer than the needs of the student. Such apprenticeship is most often expressed in terms of duration, not in specific goals and objectives, and often ends with a subjective assessment that the trainee is ready.
David L. Grieve, The Identification Process: The Quest For Quality, 40 J. of Forensic Identification 109, 110-111 (1990).
As Mr. Grieve has recognized, the direct result of this poor training is deficient examiners. “The quality of work produced is directly proportional to the quality of training received.” Id. See also David L. Grieve, The Identification Process: Traditions in Training, 40 J. of Forensic Identification 195, 196 (1990) (that there are “examiners performing identification functions who are not qualified and proficient . . . unfortunately has been too well established”); Robert D. Olsen, Cult of the Mediocre, 8 Fingerprint Whorld 51 (Oct. 1982) (“There is a definite need for us to strengthen our professional standards and rise above the cult of the mediocre.”).
A final example of the lack of standards is the alleged requirement of annual proficiency testing. SWGFAST Quality Assurance Guideline 7 provides that "(a) proficiency test should be administered to each latent print examiner annually." Similarly, the San Francisco Police Department's own lab policy manual provides that "all C.S.I. (Crime Scene Investigation) staff who report fingerprint comparisons shall complete at least one proficiency test each year." (RT 232). Yet, since she first started doing fingerprint work in 1969, Ms. Chong had taken only two proficiency tests, one about fifteen to seventeen years ago and the second about five years ago. (Id.) She had never even heard of her own Department's requirement for annual testing. (Id.). According to Chong, "The earlier test was an FBI test that time, I took it and I did the latent comparison with I think the hand print instead of the fingerprint that time, and then later the answer was score as wrong that time but then Chief Tom Murphy … recorrected my work, then he said I was right at the end." (RT 231). Such testimony makes a mockery of the dubious theory that the training and proficiency requirements of the profession have done away with the need for an objective standard of analysis.
Moreover, the lack of training and standards has not only resulted in a plethora of deficient examiners, but dishonest ones as well. New York police officers have fabricated fingerprint evidence in numerous cases. See Mark Hansen, Trooper's Wrongdoing Taints Cases, A.B.A. J., Mar. 1994, at 22; Ronald Sullivan, Trooper's 2d Tampering Charge, N.Y. Times, Jan. 6, 1994, at B9. This fiasco came to light when a New York State policeman bragged in a CIA interview about his fabrication skills. In January 1991, the CIA passed the information on to the FBI. It took over a year, however, for an investigation to be commenced. The special prosecutor found that up to forty cases may have been tainted, and he "wonder[ed] why more prosecutors in the region didn't grow suspicious about the sudden avalanche of good fingerprint evidence." Gary Taylor, Fake Evidence Becomes Real Problem, Nat'l L.J., Oct. 9, 1995, at A1, A28. One of the experts in the Mitchell case, Pat Wertheim, estimates that there have been "hundreds or even thousands" of cases of forged and fabricated latent prints. Pat Wertheim, Detection of Forged and Fabricated Latent Prints, 44 J. of Forensic Identification 653, 675 (1994) ("A disturbing percentage of experienced examiners polled by the author described personal exposure to at least one of these cases during their careers.").
In sum, latent print examiners operate without the benefit of any objective standards to guide them in their comparisons. There also are no objective standards or minimum qualifications with respect to their hiring, training, and proficiency testing. Accordingly, another indicia of good science is critically lacking in this case.
6. There Is No General Consensus That Fingerprint Examiners Can Reliably Make Identifications on the Basis of Ten Matching Ridge Characteristics
As indicated at the outset of this motion, the relevant question in this case is not whether entire fingerprints are unique and permanent, but whether there is a general consensus that fingerprint examiners can make reliable identifications on the basis of only a limited number of basic ridge characteristics, such as the ten that have been identified by the examiner in the case at bar. The answer to that question is plainly no. As discussed above, many countries require that there be at least 12 to 30 matching ridge characteristics before fingerprint evidence is deemed sufficiently reliable so as to warrant its admission at a criminal trial.
Moreover, in this country, no relevant scientific community, beyond fingerprint examiners themselves, generally accepts that latent fingerprint identifications are reliable. As courts have recognized, in defining a relevant scientific community, it is necessary to look beyond the practitioners of the technique that is under assessment. See, People v. Leahy, 8 Cal. 4th at 609 ("Consistent with both the weight of authority and the cautious, 'conservative' nature of Kelly, we conclude that testimony by police officers regarding the mere administration of the test is insufficient to meet the general acceptance standard required by Kelly"). See also, Williamson v. Reynolds, 904 F.Supp. 1529, 1558 (E.D. Okl. 1995) ("Not even the 'general acceptance' standard is met, since any 'general acceptance' seems to be among hair experts who are generally technicians testifying for the prosecution, not scientists who can objectively evaluate such evidence."); United States v. Starzecpyzel, 880 F. Supp. 1027, 1038 (S.D.N.Y. 1995) ("[Forensic Document Examiners] certainly find general acceptance within their own community, but this community is devoid of financially disinterested parties, such as academics.").
Except for scientists in the field of biometrics, who have questioned the reliability of fingerprint identification even of computer scanned prints, other mainstream scientists have essentially ignored the question of whether individuals can be reliably identified through latent fingerprint impressions. Saks, supra, at 1081. And as discussed above, the forensic science experts that have examined the issue have found the fingerprint field to be scientifically deficient. See Saks, supra, at 1106 ("A vote to admit fingerprints is a rejection of conventional science as a criterion for admission."); David L. Faigman et al., Fingerprint Identification: Legal Issues, in Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 21-1.0, at 55 (West 1997) ("[B]y conventional scientific standards, any serious search for evidence of the validity of fingerprint identification is likely to be disappointing."); Stoney, Fingerprint Identification, supra, §§ 21-2.3.1, at 72 (Ex. 15) ("[T]here is no justification [for fingerprint identifications] based on conventional science: no theoretical model, statistics or an empirical validation process."). Accordingly, the factor of general acceptance by impartial experts outside the fingerprint profession weighs heavily in favor of Mr. Doe's motion to exclude the government's fingerprint evidence.
7. The Fingerprint Literature Confirms the Scientific Bankruptcy of the Field
Prominent fingerprint experts themselves, such as the California Department of Justice's Dusty Clark, have made it clear that they do not accept the precepts and teachings of David Ashbaugh. See, Dusty Clark, What's The Point (Dec. 1999) supra ("The repeatability of the finite detail that is utilized in the comparison process has never been subjected to a definitive study to demonstrate that what is visible is actually a true 3rd level detail or an anomaly… There has to be something to measure and count if the comparison process includes 'quantitative'. If the analysts do not quantify their analysis then their opinion of identity is strictly subjective. A subjective analysis without quantification makes the identification process as reliable as astrology… Ridgeology hasn't been scientifically proven to be repeatable, and its application is not standardized.")
Astoundingly, even Ashbaugh himself has declared that "(i)t is becoming more apparent as time passes that friction ridge identification science is more vulnerable now than at any time in its history… [T]he Daubert hearing in the U.S. federal court in Philadelphia, PA, will continue to unfold over the next few years. The future will harbor many similar challenges." Ashbaugh, Basic and Advanced Ridgeology, supra, at 6-7.
The source of this pessimism by one of the profession's leading advocates is easy to trace. The fundamental premises underlying latent print identifications have not been critically examined in the technical literature of the fingerprint community. As Mr. Ashbaugh has stated, "it is difficult to comprehend that a complete scientific review of friction ridge identification has not taken place at sometime during the last one hundred years[;] [a] situation seems to have developed where this science grew through default." (Id. at 4.) The truth of Mr. Ashbaugh's comments can be seen by an examination of the publications that are typically listed as authoritative sources on the "science" of fingerprinting. While some of the titles typically listed might convey the impression of science, a review of their actual contents will readily reveal otherwise. Take for example the FBI publication The Science of Fingerprints (1979). Only three pages of this 211-page text even concern the subject of latent fingerprint comparisons. The rest of the text is primarily concerned with classifying ten print patterns, recording ten print patterns, and the lifting of latent prints. As to the three pages that concern latent fingerprint comparisons, there is no discussion whatsoever as to the fundamental premises that underlie latent print identifications or even how such comparisons should be conducted. As Mr. Ashbaugh has correctly recognized, "(l)ittle, if anything, has been reported on the importance and need for scientific knowledge, understanding the evaluative process, or the training necessary to be able to analyze, compare, and evaluate friction ridge prints." Id. at 5.
Even when the premises of latent print identifications have been considered in the technical literature, they have not been critically examined. A perfect example is Alan McRoberts's article Nature Never Repeats, The Print 12(5), Sept/Oct '96, pp. 1-2. In this article, Mr. McRoberts cites with approval the following statement, which was originally made by Wilder and Wentworth in their 1916 text, Personal Identification:
Finally, there is never the slightest doubt of the impossibility of the duplication of a fingerprint, or even of the small part of one, on the part of anyone who has carefully studied the subject at first hand, whether finger-print expert or anatomist: the only doubters are those who have never taken the trouble to look for themselves, and who argue from the basis of their own prejudices and preconceived opinions.
It is probably statements such as these that have led government expert David Ashbaugh to bemoan the “failure of the identification community to challenge or hold meaningful debate.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 4. As Mr. Ashbaugh explains:
In the past the friction ridge identifications science has been akin to a divine following. Challenges were considered heresy and challengers frequently were accused of chipping at the foundation of the science unnecessarily. This cultish demeanor was fostered by a general deficiency of scientific knowledge, understanding and self confidence within the ranks of identification specialists. A pervading fear developed in which any negative aspect voiced, which did not support the concept of an exact and infallible science, could lead to its destruction and the credibility of those supporting it. (Id.)
Thus, while the phrase "Nature never repeats itself" is catchy, it is not, to paraphrase former Justice Potter Stewart, a talisman in whose presence the protections of Kelly disappear. It is generally held that no two snowflakes are exactly the same. But see, N.C. Knight, No Two Alike? 69 Bulletin Am. Meteorological Soc'y 496 (1988) (finding "apparent contradiction of the long-accepted truism that no two snow crystals are alike."). As Dr. John Thornton observes, "(b)ased on the same type of not very rigorous observation, it is held that no two fingerprints have ever been found to have the same ridge positioning…. Observations such as these have gradually become tenets of the beliefs of the forensic scientist of the uniqueness of all objects. In some quarters, these tenets have been scooped up and extended into a single all-encompassing, generalized principle of uniqueness, which states that 'Nature never repeats itself.'" John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 20-4.2, pp. 11-12. Yet, as Dr. Thornton argues, the principle, even if true, cannot substitute for a systematic investigation of fingerprint evidence or any other kind of physical evidence:
The principle is probably true, although it would not seem susceptible of rigorous proof. But the general principle cannot be substituted for a systematic and thorough investigation of a physical evidence category. One may posit that no two snowflakes are alike, but it does not immediately follow that no two shoes are alike, since snowflakes are made in clouds and shoes are not. If no two shoes are alike, the basis for this uniqueness must rest on other grounds, and those grounds must be identified and enunciated. (Id.)
In sum, the literature of latent fingerprint examiners “fails to meet the expectations of the Daubert (and Kelly) Court(s) — that a competitive, unbiased community of practitioners and academics would generate increasingly valid science.” United States v. Starzecpyzel, 880 F. Supp. 1027, 1037 (S.D.N.Y. 1995).
8. Latent Fingerprint Identifications Are Analogous to Other Techniques That Courts Have Found Scientifically Unreliable
Latent fingerprint comparisons are analogous to two other long-standing forensic identification techniques that, in the wake of Daubert, have been found scientifically deficient; these techniques are hair comparisons and handwriting analysis. As indicated above, citing Daubert or Kumho Tire, several recent federal cases have held that handwriting comparison (United States v. Santillan (N.D. Cal. 1999) __ F.Supp. __, 1999 WL 1201765; United States v. Hines (D. Mass. 1999) 55 F. Supp. 2d 62; United States v. McVeigh, 1997 WL 47724 (D. Colo. Trans. Feb. 5, 1997); United States v. Starzecpyzel (S.D.N.Y. 1995) 880 F. Supp. 1027, 1038) and hair comparison (see Williamson v. Reynolds (E.D. Okla. 1995) 904 F. Supp. 1529, 1558, rev'd on other grounds (10th Cir. 1997) 110 F. 3d 1523) are no longer supported by current scientific research. And in an analogy even closer to the context of fingerprints, the Washington Court of Appeals has recently reversed an aggravated murder conviction because the state did not establish that latent earprint identification was generally accepted in the forensic science community, as required for admissibility under the Frye test. State v. Kunze (1999) 97 Wash.App. 832, 988 P.2d 977.
Like latent fingerprint identifications, the fundamental premises of handwriting analysis are that no two people write alike and that forensic document examiners can reliably determine authorship of a particular document by comparing the document with known samples. Starzecpyzel, 880 F. Supp. at 1031. As with fingerprints, however, these premises have not been tested. Id. at 1037. Nor has an error rate for forensic document examiners been established. Id. As the court in Starzecpyzel recognized, while "an unknown rate does not necessarily imply a large error rate . . . . if testing is possible, it must be conducted if forensic document examination is to carry the imprimatur of 'science.'" Id. The parallel between the handwriting and fingerprint fields extends to the issue of objective standards. As in the fingerprint field, forensic document examiners do not have any numerical standards to govern their analysis. Id. at 1032. And, like the fingerprint community, forensic document examiners have not subjected themselves to "critical self-examination" in their literature. Id. at 1037. For these various reasons, the district court in Starzecpyzel concluded that "forensic document examination . . . cannot after Daubert, be regarded as 'scientific . . . knowledge.'" Id. [Footnote 18] The courts followed similar reasoning in Santillan, Hines and McVeigh.
Hair analysis also is analogous to latent fingerprint comparisons. Like latent print examiners, hair analysts look for a number of matching characteristics in doing hair comparisons. Williamson v. Reynolds, 904 F. Supp. at 1553 (“Hett testified that there are approximately 25 characteristics used in hair comparisons.”). Hair analysts then state whether the hair found at the crime scene is consistent microscopically with the hair of the defendant. [Footnote 19] Id. As with fingerprints, there has been a “scarcity of scientific studies regarding the reliability of hair comparison testing.” Id. at 1556. And, like fingerprints, “there is no research to indicate with any certainty the probabilities that two different hair samples are from the same individual.” Id. at 1558. Accordingly, as with fingerprints, the “evaluation of hair evidence remains subjective, the weight the examiner gives to the presence or absence of a particular characteristic depends upon the examiner’s subjective opinion.” Id. at 1556. Given these various considerations, the district court in Williamson concluded that “expert hair comparison testimony [does not] meet any of the requirements of Daubert” and that the state trial court thus erred in admitting it. Id. at 1558.
Latent earprint analysis is obviously closely analogous to latent fingerprint comparisons. As explained in State v. Kunze (1999) 97 Wash.App. 832, 988 P.2d 977, the technique involved in recovering the latent earprint in that case is identical to the procedure used in this case to recover latent fingerprints: "[the technician] 'dusted' the print by applying black fingerprint powder with a fiberglass brush. He 'lifted' the print by applying palm-print tape first to the door and then to a palm-print card." 988 P. 2d at 980. And as with the fingerprints in this case, there was in Kunze variability in the way a known earprint is made and a lack of knowledge as to how the latent earprint was created: "[the technicians] knew that earprints of the same ear vary according to the angle and rotation of the head, and also according to the degree of pressure with which the head is pressed against the receiving surface. They did not know the angle and rotation of the head that made the latent print, or the degree of pressure with which that head had been pressed against McCann's door." Id. at 981. The one distinguishing feature between the two techniques actually marks forensic earprint analysis as the more forensically conservative technique. Thus, while the expert in Mr. Doe's case proposes to render an opinion of absolute certainty as to the identity of the latent print, the expert in Kunze testified only that "David Kunze is a likely source for the earprint and cheekprint which were lifted from the outside of the bedroom door at the homicide scene." Id. at.
Despite this conservative stance, the Washington Court of Appeals, after reviewing the testimony of the 15 expert witnesses called at the Frye hearing and two letters supporting the technique from scientists in Germany and England, concluded that forensic earprint analysis did not meet the general acceptance test of Frye. The head of the state's crime lab testified on the basis of his experience of over twenty years: "(h)e claimed that latent earprint identification is generally accepted in the scientific community, reasoning that 'the earprint is just another form of impression evidence,' and that other 'impression evidence is generally accepted in the scientific community.'" Id. at 982. The court rejected this testimony because "he had not seen any data or studies on earprints, or on 'how often an ear having the general shape of the questioned print in this case appears in the general human population…'" Id. at 981.
The counterpart of David Ashbaugh also testified and stated that he had compared over 7000 photographs of earprints and that "he [had] published a book describing his system, which he calls 'earology' or the 'science of ear identification.'" Id. at 983. The Court was obviously unimpressed, stating that "(h)e did not know of any published scientific studies confirming his theory that individuals can be identified using earprints, … he did not claim that his system was generally accepted in the scientific community, [and his book] … contains no bibliography and no scientific verification". Id. at 983-984. The Court also cited with approval another expert who had stated that the book was "narrative," not "reported in a scientific manner," and "not subjected to any statistical analysis." Id. The Court also quoted favorably yet another expert who testified that forensic earprint analysis had not been generally accepted in the broader scientific community because it had never been tested by scientific methodology, it had never been subjected to scientific peer review, and it had never been shown that results can be reliably obtained in terms of an acceptable rate of error. Id. at 984. The Court concluded its opinion with these words:
We agree with and adopt the statements of a commentator who, after noting two generally held tenets–“that no two snowflakes are exactly the same,” and “that no two fingerprints have ever been found to have the same ridge positioning”–states as follows:
In some quarters, these tenets have been scooped up and extended into a single, all- encompassing, generalized principle of uniqueness, which states that “Nature never repeats itself.”
This principle is probably true, although it would not seem susceptible of rigorous proof. But the general principle cannot be substituted for a systematic and thorough investigation of a physical evidence category. One may posit that no two snowflakes are alike, but it does not immediately follow that no two shoe soles are alike, since snowflakes are made in clouds and shoes are not. If no two shoe soles are alike, the basis for this uniqueness must rest on other grounds, and those grounds must be identified and enunciated.
Id. at 992, quoting John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 20-4.2, pp. 11-12.
These cases are sending the powerful message that, in the words of Judge Jensen, "the [legal] world has changed" and that "a past history of admissibility does not relieve this Court of the responsibility of now conducting [Kelly] analysis as to … proffered expert testimony." United States v. Santillan (N.D. Cal. 1999) __ F.Supp. __, 1999 WL 1201765 at p. 4. Judged by the standards set forth in these cases, it is clear that the subjective and arbitrary technique used in this case to identify partial latent prints with absolute certainty is neither forensically reliable nor generally accepted within the broader community of scientists.
9. Latent Fingerprint Comparisons Have Not Been Put to Any Non-Judicial Applications
There have been no non-judicial applications of latent fingerprint comparisons. As expert David Ashbaugh has recognized, the use of fingerprints has been “under the control of the police community rather than the scientific community” and latent prints are used by law enforcement solely as a “tool for solving crime.” Ashbaugh, Basic and Advanced Ridgeology, supra, at 4.
As indicated above, to the extent that non-latent prints have been employed in such fields as biometrics, the experience is anything but helpful to establishing the reliability and general acceptance of partial latent print comparisons. In biometrics, a clear fingerprint image is generated, usually by a high resolution digital camera behind a Plexiglas plate on which the user presents a finger. Adrian Dysart, Biometrics (Winter 1998), http://www.monkey.org/~adysart/598/. Even with this high-tech method of collecting the print, it is generally recognized that "fingerprint verification systems are subject to a mimicry attack… (that) can be avoid(ed) (only) by having thermal sensors detect subcutaneous blood vessels and reject the sample if none are found". Id. More significantly, it is generally recognized that "biometrics are not reliable enough on their own to act as identifiers, but in conjunction with other, more traditional forms of access control, such as passphrases and PINs, they provide a considerable layer of security." See also, Let Your Fingers Do the Logging In, Network Computing, Issue 910, June 1, 1998 ("Unfortunately, some of the lowest-cost systems are simply gadgets and too gimmicky for consideration in the enterprise. In our review of fingerprint recognition devices in this issue, we found much of the current crop too insecure and unreliable for practical enterprisewide deployment."), http://www.techweb.com/se/directlink.cgi?NWC19980601S0021. Thus, this factor also favors Mr. Doe's motion to exclude latent fingerprint evidence.
In sum, having considered the various indicators of scientific reliability set forth by the Supreme Court in Daubert, and having surveyed the fingerprint literature produced by forensic scientists, biometric scientists, and fingerprint technicians themselves, it is clear that latent fingerprint comparisons do not constitute scientifically reliable and generally accepted scientific evidence. Indeed, the picture that has emerged from this analysis is a disturbing one. It is a picture of poorly trained law enforcement fingerprint examiners making extremely subjective determinations in the absence of any uniform standards and in the absence of any testing to validate the fundamental premises upon which the technique rests. It should therefore hardly be surprising that forensic science commentators have concluded that a “vote for science is a vote to exclude fingerprint expert opinions.” Saks, supra, at 1106.
10. A Federal Court Has Rejected Fingerprint Identification Evidence Because of its Scientific Unreliability
Mr. Doe began this memorandum with the observation that with one exception, the admissibility of expert testimony based upon fingerprint evidence is well established in every jurisdiction in the United States. In the only known instance in which a federal trial court has performed the type of analysis that is now mandated by Daubert, the district court excluded the government’s fingerprint identification evidence, finding that there was no scientific basis for the latent print examiner’s opinion of identification. United States v. Parks (C.D. Cal. 1991) (No. CR-91-358-JSL). (The relevant transcript pages of Parks will be provided under separate cover). [Footnote 20] The district court in Parks reached this determination after hearing from three different fingerprint experts produced by the government in an effort to have the evidence admitted. The testimony of these three experts, however, confirms virtually every argument that has been advanced above.
The first fingerprint expert to testify in Parks was a Los Angeles Police Department latent fingerprint examiner, Diana Castro. (RT at 469-557). Ms. Castro testified that she identified three different fingerprints at the crime scene as having between 10 and 12 points of similarity with the known prints of the defendant. What particularly concerned the court about these identifications was Ms. Castro's testimony that her minimum standard for an identification is only eight points of similarity. (RT at 538). Ms. Castro acknowledged that her standard is on the "low side" and that other examiners require ten or twelve points or even more. (RT at 539). Ms. Castro further acknowledged that there have never been any empirical studies done to determine if two people might have the same fingerprints. (RT at 541). The district court in Parks found Ms. Castro's testimony disturbing because all the latent print examiners that had previously testified before the court had testified to higher minimum point thresholds. In this regard, the court stated:
This business of having a sliding scale — and this is a very high risk business, because I’ve had a lot of fingerprinting testimony, and it’s been from the same group of people by and large, and my impression, correct me if you can — that it slides up and down, that if you have only 10 points, you’re comfortable with 8, if you have 12, you’re comfortable with 10, if you have 50, you’re comfortable with 20.
I’ve had them say that when they had 20 and 25, and say, “I wouldn’t be comfortable with less than 10,” and they’ve thrown out some that were less than 10. Whether they were less than 8, I don’t know.
Suddenly I find that you come — being I think probably the most junior that’s ever testified before me that I’ve ever permitted to testify as an expert — you are comfortable with fewer than anybody that has ever testified before me before.
And as it happens, you also have fewer than anybody that's ever testified before me; that makes me very uncomfortable. (RT at 551-553).
The district court then questioned the government as to what the fingerprint treatises state with respect to a minimum point standard. (RT at 555). The court was incredulous over Ms. Castro's testimony that no studies had been performed. If there are no studies, the court stated, "then this is not a science and there are no experts in it." (RT at 556).
In response to the court's concerns, the government called Ms. Castro's supervisor, Darnell Carter, to testify regarding the "standard in the industry." (RT at 556). Mr. Carter's testimony, however, only succeeded in further revealing the unreliability of the evidence. Mr. Carter disclosed to the court that while the Los Angeles Police Department has a 10 point standard, which can slide down to 8 with a supervisor's approval, the Los Angeles Sheriff's Department employs a 12 or 15 point rule and that "if there was a survey taken, you would probably get a different number from every department that has a fingerprint section as to their lowest number for a comparison." (RT at 559-61). [Footnote 21] Mr. Carter further revealed, in response to a direct question from the court, that there is no "literature" regarding this issue and that he is unaware why there is no uniform rule. (RT at 561).
After hearing Mr. Carter’s testimony, the district court was only more convinced that the fingerprint evidence should be excluded. To try to “resuscitate” the evidence, the government called yet a third fingerprint expert, Steven Kasarsky, a board certified member of the IAI and an employee of the United States Postal Inspection Service. (RT at 567-68, 596). The court specifically questioned Mr. Kasarsky as to where the “science” is to support fingerprint identifications. (RT at 576-92.) Mr. Kasarsky, however, could not provide a satisfactory response.
Like Mr. Carter and Ms. Castro, Mr. Kasarsky testified that “everyone in our field basically has independent standards.” (RT at 584). Mr. Kasarsky also acknowledged that misidentifications in the field had occurred, (RT at 568-569), and, in response to a question from the court, he admitted that no published studies regarding false identification had ever been done. Mr. Kasarsky further admitted that he knew of instances where prints from two different people have had ten matching characteristics and that he personally compared prints from different individuals possessing six points of similarity. (RT at 599, 600). While Mr. Kasarsky testified that he was able to observe a dissimilarity between these prints which convinced him that they had been made by two different people, he admitted that on other occasions a dissimilarity might go unseen given the partial nature of most latent prints. (RT at 600, 602). Accordingly, Mr. Kasarsky conceded that latent print examiners are in “dangerous territory” when making identifications on the basis of only eight points of similarity:
The Court: Unless you have a very clear full print, you can’t rule out a dissimilarity someplace on it that you didn’t have, and if you have only five or six, or seven or eight, you’re in dangerous territory.
The Witness: Yes, Your Honor, because if you can’t see the area that might have the dissimilarity, one can only guess.
(RT at 602) (emphasis added).
After hearing Mr. Kasarsky's testimony, the district court ruled that it would not admit the government's fingerprint evidence. Here is some of what the district court had to say regarding the scientific bankruptcy of the field:
You don't have any standards. As far as I can tell, you have no standard. It's just an ipse dixit. "This is unique, this is very unusual?" "How do you know it's unusual?" "Because I never saw it before." Where is the standard, where is the study, where is the statistical base that has been studied?
I have discovered . . . that there are very limited objective standards, and that the training in this area, if it exists, other than “I’ve done this for a long time and I’m teaching you what I know,” is almost nonexistent.
People that have done it teach each other. So far as I’ve heard from you, and so far I’ve heard from anybody, those kinds of studies that would turn this into a bona fide science simply haven’t been done.
The information is there, it could be done, but it hasn’t been done. There has been no study about how far qualified experts with existing prints could look at them and make a mistake on which kinds of things. That’s something that can be done. Those prints exist. It wouldn’t be hard for those studies to be made.
This thing could be turned into a science, but it isn’t now, not from what you’ve said, and not from what she said, and not from what her supervisor said.
Now I have heard a lot of conversation about what it takes to become an expert in this field, and I will say, based on what I’ve heard today, the expertise is as fragile as any group that I’ve ever heard hold themselves out as experts.
The basis for calling themselves experts seems to me to be very fragile. The basic premise that they don’t need expertise, that fingerprints don’t change, doctors told them that.
The other premise that they are unique is, I think, a matter of genetics, and also a matter not of fingerprint specialists. Those are givens in the expertise.
The expertise that they have said that they possess, to say this is unique, I can’t find, as I said, a common thread of analysis. It may be there, but I haven’t heard it.
(RT at 587, 591-92, 606-07).
United States v. Parks thus stands as a compelling precedent for the instant motion. Having conducted a searching inquiry for the "science" of fingerprints, the district court in Parks properly determined that no such science exists and that the government's fingerprint evidence did not possess sufficient reliability to warrant admission. Mr. Doe respectfully submits that this Court should reach the same determination here.
THE GOVERNMENT’S FINGERPRINT IDENTIFICATION EVIDENCE SHOULD ALSO BE EXCLUDED
BECAUSE CORRECT SCIENTIFIC PROCEDURES WERE NOT USED IN THIS CASE
As indicated above, Venegas clarified that "the Kelly test's third prong does not apply the Frye requirement of general acceptance - it assumes the methodology and technique in question has already met that requirement. Instead, it inquires into the matter of whether the procedures actually utilized in the case were in compliance with that methodology and technique, as generally accepted by the scientific community. . . . The third prong inquiry is thus case specific; 'it cannot be satisfied by relying on a published appellate decision.'" 18 Cal. 4th at 78 (emphasis added).
In People v. Farmer (1989) 47 Cal. 3d 888, 913, the Court had stated that "careless testing affects the weight of the evidence and not its admissibility. . . ." However, in Venegas the Court clearly retreated: "Our reference to 'careless testing affecting the weight of the evidence and not its admissibility' in Farmer . . . was intended to characterize shortcomings other than the failure to use correct, scientifically accepted procedures such as would preclude admissibility under the third prong of the Kelly test." 18 Cal. 4th at p. 80 (emphasis in original). See also, People v. Soto (1999) 21 Cal. 4th 512, 519 ("The proponent of the evidence must also demonstrate that correct scientific procedures were used.").
The problem of applying Kelly’s third prong to fingerprint experts is manifest. Since, as demonstrated above, there are no objective standards governing the profession, it is hard to judge whether correct scientific procedures have been followed in an individual case. The fingerprint expert’s solution to this problem is to declare that correct scientific procedure is whatever procedure each individual fingerprint expert decides it to be. But this is obviously not what Kelly and Venegas had in mind.
As Mr. Doe points out in his Opposition To People's Motion to Admit DNA Test Results, in the field of DNA testing, the NRC Reports, TWGDAM Guidelines, and DAB Standards establish what is scientifically acceptable procedure for conducting PCR or other DNA testing. Compliance with these and other guidelines, including the lab's own testing protocols, is accordingly a prerequisite to admissibility of DNA evidence in this or any other case.
The court in People v. Barney (1992) 8 Cal. App. 4th 798, 812-813 specifically addresses this question:
The NRC Report concludes there is indeed a need for standardization of laboratory procedures and proficiency testing (as well as appropriate accreditation of laboratories) to ensure the quality of DNA laboratory analysis. But the absence of such safeguards does not mean that DNA analysis is not generally accepted. . . . Rather, the absence of these safeguards goes to the question whether a laboratory has complied with generally accepted standards in a given case, or, as stated in Kelly/Frye terms, whether the prosecutor has shown that "correct scientific procedures were utilized in the particular case." (Emphasis added.)
See also, People v. Venegas (1998) 18 Cal. 4th at 53 ("lack of compliance by the FBI with procedures recommended in 1992 by the National Research Council (NRC) for determining the statistical probability of a random match" renders DNA evidence inadmissible); United States v. Beasley (8th Cir. 1996) 102 F.3d 1440, 1448 ("In every case, of course, the reliability of the proffered test results may be challenged by showing that a scientifically sound methodology has been undercut by sloppy handling of the samples, failure to properly train those performing the testing, failure to follow the appropriate protocols, and the like."); State v. Jackson (Neb. 1998) 255 Neb. 68, 582 N.W.2d 317, 325 (the results of an unspecified STR procedure should not have been admitted absent a foundation that the lab had followed its own testing protocols).
Significantly, there are no NRC Reports purporting to vouch for the reliability of fingerprint evidence or setting forth the correct scientific procedures to follow. The closest attempt at such standardization is the SWGFAST GUIDELINES, http://onin.com/twgfast/twgfast.aspxl, which are still in draft form and are in any event not binding on any agency. Still, if these Guidelines are to be considered "the minimum necessary to perform consistent quality examinations" (Preface to Guidelines), then it is clear that the San Francisco Police Department fingerprint technicians failed to follow correct scientific procedures.
Most importantly, SWGFAST and the Department's own policy manual require annual proficiency testing of fingerprint analysts. It is clear from her testimony that Ms. Chong is not even aware of this requirement, let alone in compliance with it. Further, Mr. Doe has specifically requested the proficiency test results of Mr. Moses, and as of this date no tests have been forthcoming. It therefore appears that Mr. Moses is also not in compliance with SWGFAST or the rules of his own Department.
SWGFAST also requires that each analyst must pass written tests and/or practical exercises demonstrating knowledge of "Required Objectives", "Friction Ridge Analysis", "Friction Ridge Detection and Preservation", and "Documentation of Examination". SWGFAST Training to Competency for Latent Print Examiners, Guidelines 1.3, 2, 3, 4. The Quality Assurance Guidelines require that "a Quality Manual must be maintained", and that the Manual must contain documentation of: Methods and Procedure for Latent Print Development, Evidence Handling Procedures, Proficiency Testing, Equipment Calibration and Maintenance Logs, Method Validation Records, and Policy and Procedure Manuals for Electronic Fingerprint Systems. (Guideline 3.) Guideline 4 requires that latent lift prints and photographic images must show "significant information about the orientation and/or position of the latent print on the object through description and/or diagram." Guideline 5 provides that "(e)vidence must be collected, received, and stored so as to preserve the identity, integrity, condition, and security of the item", and that "a clear, well-documented chain of custody must be maintained from the time that the evidence is collected or received until it is released." Guideline 6 provides that "(p)rocedures must be in place to ensure the accuracy and completeness of documentation" and that "(d)ocumentation must be sufficient to ensure that any qualified latent print examiner could evaluate what was done and replicate any comparisons."
Although these requirements would appear to be minimal, and although complete resolution of "prong 3" issues must obviously await the hearing of this motion, it already appears that Ms. Chong is not in compliance with correct scientific procedures. Most significantly, she has no documentation of her examination, and at least one of the latent prints has been marked up with a red pen by persons unknown. Further, the latents themselves are not properly identified as to source. And, as indicated above, the required verification process was performed in such a way as to bias the results. Finally, and most importantly, the making of an identification on the basis of 10 or fewer points of similarity is a failure to follow correct scientific procedures.
THE GOVERNMENT’S FINGERPRINT IDENTIFICATION EVIDENCE SHOULD ALSO BE
EXCLUDED UNDER EVIDENCE CODE SECTIONS 352 AND 801(A).
In addition to the fact that the government’s fingerprint identification evidence does not possess sufficient scientific reliability or general acceptance so as to warrant admission under Evidence Code Section 350 and Kelly, the evidence is also properly excludable pursuant to Evidence Code Sections 352 and 801(a). Section 352 provides for the exclusion of relevant evidence “if its probative value is substantially outweighed by the probability that its admission will (a) necessitate undue consumption of time or (b) create substantial danger of undue prejudice, of confusing the issues, or of misleading the jury.” Section 801(a) limits the opinion of an expert witness to that which is “(r)elated to a subject that is sufficiently beyond common experience that the opinion of an expert would assist the trier of fact…”
As federal and state courts have recognized, rules like Section 352 play an important role with respect to expert witness testimony, because when it comes to experts, especially "scientific experts," jurors may be awed by an "'aura of special reliability and trustworthiness' which may cause undue prejudice, confuse the issues or mislead the jury." Williamson v. Reynolds, 904 F. Supp. at 1557 (quoting United States v. Amaral, 488 F.2d 1148, 1152 (9th Cir. 1973)). See Daubert, 509 U.S. at 595, 113 S. Ct. at 2798 ("[E]xpert evidence can be both powerful and quite misleading because of the difficulty of evaluating it."); People v. Kelly, 17 Cal. 3d at 31 ("lay jurors tend to give considerable weight to 'scientific' evidence when presented by 'experts' with impressive credentials. We have acknowledged the existence of a . . . 'misleading aura of certainty which often envelops a new scientific process, obscuring its currently experimental nature.' . . . '[S]cientific proof may in some instances assume the posture of mystic infallibility in the eyes of the jury.'"). See also, People v. Venegas (1998) 18 Cal. 4th 47, 83 (the predictable result of expert testimony on DNA statistics is that "the jury would simply skip to the bottom line - the only aspect of the process that is readily understood - and look at the ultimate expression of match probability, without competently assessing the reliability of the process by which the laboratory got to the bottom line. This is an instance in which the method of scientific proof is so impenetrable that it would ' . . . assume a posture of mystic infallibility in the eyes of a jury . . . .'"); United States v. Starzecpyzel, 880 F. Supp. at 1048 ("With regard to scientific experts, a major rationale for Frye, and now Daubert, is that scientific testimony may carry an 'aura of infallibility.'") (quoting 1 Charles T. McCormick et al., McCormick on Evidence § 203, at 608-09 (3d ed. 1984)); see also John W. Strong, Language and Logic in Expert Testimony: Limiting Expert Testimony by Restrictions of Function, Reliability and Form, 71 Or. L. Rev. 349, 361 (1992) ("There is virtual unanimity among courts and commentators that evidence perceived by jurors to be 'scientific' in nature will have particularly persuasive effect.").
The risk of undue prejudice and confusion is especially great when it comes to latent fingerprint identifications. With fingerprint evidence having been uncritically accepted by the American legal system for the past 80 years, see supra at n.20, the general public has come to firmly believe that fingerprint identifications are scientifically based and that they are invariably accurate. In a study that was conducted concerning jurors' attitudes toward fingerprint evidence, 93% of the 978 jurors questioned expressed the view that fingerprint identification is a science, and 85% ranked fingerprints as the most reliable means of identifying a person. Charles Illsley, Juries, Fingerprints and the Expert Fingerprint Witness 16, presented at The International Symposium on Latent Prints (FBI Academy, Quantico, VA, July 1987). As demonstrated above, however, these commonly held views are completely unwarranted. Latent fingerprint identifications are not scientifically supported and there are substantial questions regarding their reliability. Thus, while the probative value of the government's fingerprint evidence is, in reality, low, the danger of undue prejudice is extremely high, since there is a substantial danger that the jury will give the evidence considerably more weight than it deserves.
Moreover, as in Venegas,
to . . . leave it to jurors to assess the current scientific debate on (fingerprint evidence) as a matter of weight rather than admissibility, would stand Kelly-Frye on its head. We would be asking jurors to do what judges carefully avoid - decide the substantive merits of competing scientific opinion as to the reliability of a novel method of scientific proof. . . . The result would be predictable. The jury would simply skip to the bottom line - the only aspect of the process that is readily understood - and look at the ultimate expression of (an absolute opinion), without competently assessing the reliability of the process by which the laboratory got to the bottom line. This is an instance in which the method of scientific proof is so impenetrable that it would ' . . . assume a posture of mystic infallibility in the eyes of a jury . . . .' (18 Cal. 4th at 83.)
The government's fingerprint evidence, therefore, is properly excludable not only under Evidence Code Section 350 and Kelly, but under Section 352 as well. See United States v. Santillan (N.D. Cal. 1999) __F.Supp.__, 1999 WL 1201765 at 5 ("[Handwriting comparison] testimony, when it is buttressed by the fact that it comes from an 'expert,' would appear to be more prejudicial and misleading than probative in the Court's consideration of the application of Rule 403 of the Federal Rules of Evidence."); Williamson v. Reynolds, 904 F. Supp. at 1558 (finding that the probative value of hair comparison evidence was substantially outweighed by its prejudicial effect).
The Venegas case also indicates in dicta that the fingerprint expert's ultimate opinion of identity in this case is inadmissible under Section 801(a). Although the admissibility of fingerprint testimony was not at issue in Venegas, the Court stated in the course of its discussion of RFLP-DNA evidence that "DNA evidence is different. Unlike fingerprint, shoe track, bite mark, or ballistic comparisons, which jurors essentially can see for themselves, questions concerning whether a laboratory has adopted correct, scientifically accepted procedures for generating autorads or determining a match depend almost entirely on the technical interpretations of experts." Id. at 81 (emphasis added). However, as applied to fingerprint testimony, this reasoning proves too much. For if, in fact, "jurors essentially can see for themselves" the validity of a fingerprint comparison, then there is no need for the opinion testimony of an expert in the first place. Such testimony is inadmissible under Section 801(a) because it is not "(r)elated to a subject that is sufficiently beyond common experience that the opinion of an expert would assist the trier of fact . . . ." See, United States v. Santillan (N.D. Cal. 1999) __F.Supp.__, 1999 WL 1201765 at 5 ("(T)he Court should look with special care at the issue of whether or not proffered expert opinion will 'assist the trier of fact' within the requirements of Rule 702 of the Federal Rules of Evidence. The Court is satisfied that testimony of a handwriting 'expert' as to the specific mechanics and characteristics of handwriting will add to the general knowledge of lay persons and assist them to make comparisons of different examples of handwriting. However, the Court questions whether adding the essentially subjective opinion of another person as to how the jury should answer the question of fact before them is not the type of assistance contemplated by the rule.").
IF THE COURT ALLOWS THE GOVERNMENT TO PRESENT ITS FINGERPRINT
IDENTIFICATION EVIDENCE, THEN THE EXPERT SHOULD BE BARRED UNDER
EVIDENCE CODE SECTIONS 350, 352, AND 1200 FROM TESTIFYING THAT THE AFIS
COMPUTER MATCHED THE PRINT TO MR. DOE
At the preliminary hearing, Wendy Chong testified that the Police Department's Automated Fingerprint Identification System (AFIS) can produce a match depending on how the data is traced into the computer. (RT 221). She also said that "each time you submit the fingerprint you get a different candidates list." (RT 222). She further testified that AFIS was "just a preliminary screening device." (RT 225). This testimony is confirmed by other sources. See, State v. Feldman (1992) 254 N.J. Super. 754, 758, 604 A.2d 242 ("Somewhat surprisingly, if the same latent fingerprints are analyzed a number of times by AFIS, the candidates list of order of likelihood may change, including the possible addition or deletion of less likely candidates.").
In view of this testimony, it is clear that even if the court permits some fingerprint testimony in this case, the court should not permit any expert to testify concerning the match made on Mr. Doe's prints by the AFIS system. Such testimony is irrelevant and unreliable, and thus inadmissible under Section 350, in light of the fact that AFIS is only a "preliminary screening device" which produces a different candidates list each time a fingerprint is submitted. Indeed, in the Feldman case, it was the State itself that argued that discovery of AFIS information should be denied: "The State contends that [AFIS] information is irrelevant because no evidence will be offered at the trial concerning the use or operation of AFIS. The evidence which will be produced at the trial will be the expert opinion of the fingerprint expert from the Sheriff's Office, not the opinion of the AFIS operator or the AFIS printouts. . . . The State argues that the AFIS information is not relevant because it is not part of the identification process." Id. at 758-759.
Further, such testimony is inadmissible under Section 352 because there is a substantial probability that the jury will be confused and misled into thinking that a match by a seemingly infallible computer has some probative value in bolstering the expert's opinion testimony. Also, any description of AFIS will necessarily reveal Mr. Doe's prior criminal history, a fact which is of no significance to this case and which would be highly prejudicial. See, People v. Van Cleave (1929) 208 Cal. 295 (conviction reversed because of the erroneous admission of a police fingerprint card that implied to the jury that the defendant had previously been in police custody); State v. Staten 1996 WL 339953 (Ohio App. 10 Dist.) ("Because . . . testimony [about AFIS] raises a reasonable inference defendant had a prior criminal record, the trial court erred when it overruled defense counsel's motion for a mistrial."). Finally, such testimony is inadmissible hearsay under Section 1200. See, People v. Smith (1994) 256 Ill.App.3d 610, 628 N.E.2d 1176, 1181 ("The testimony of Mary McCarthy in this case that Lauren Wicevic verified her fingerprint identification was clearly hearsay. Wicevic's opinion was an out-of-court statement and it was offered to prove the truth of the matter asserted--that the fingerprint found at the scene of the crime was left by the defendant.") (admission of hearsay fingerprint identification was reversible error).
IF THE COURT ALLOWS THE GOVERNMENT TO PRESENT ITS FINGERPRINT
IDENTIFICATION EVIDENCE, MR. DOE SHOULD BE GRANTED FUNDING TO RETAIN
AND BE PERMITTED TO CALL EXPERT WITNESSES TO TESTIFY TO THE
LIMITATIONS OF THE GOVERNMENT’S EVIDENCE.
In the event this Court decides to admit any of the government’s fingerprint evidence, Mr. Doe should be permitted to introduce expert witness testimony regarding the limitations of that evidence. The Third Circuit’s decision in United States v. Velasquez, 64 F.3d 844 (3d Cir. 1995), is directly on point. In Velasquez, the Third Circuit held that it was reversible error for the trial court not to allow the defense to call an expert witness who would have testified about the limitations of the government’s handwriting evidence.
As the Third Circuit recognized in Velasquez, the same Daubert factors that "inform the court's legal decision to admit evidence under Rule 702 may also influence the fact finder's determination as to what weight such evidence, once admitted, should receive." Id. at 848. Similar to the case at bar, the defense expert in Velasquez was prepared to testify that "handwriting analysis is not a valid field of scientific expertise because it lacks standards to guide experts in the match or non-match of particular handwriting characteristics." Id. at 846. The Third Circuit recognized that the defense expert's "testimony as a critic of handwriting analysis would have assisted the jury in evaluating the government's expert witness." Id. at 848.
The government in Velasquez challenged the qualifications of the defendant's expert on the ground that he was not trained as a handwriting analyst. The defense expert in Velasquez was a Seton Hall University Law School Professor who for several years had conducted self-directed research on the field of handwriting analysis. Id. at 847 n.4. The Third Circuit rejected the government's argument that the witness was not sufficiently qualified. Consistent with its previous holdings, the Court held that the qualifications requirement of Rule 702 "has been liberally construed," and that it has "'eschewed imposing overly rigorous requirements of expertise.'" Id. at 849 (quoting In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 741 (3d Cir. 1994)). Moreover, the Court found that "the mere fact that the [witness] is not an expert in conducting handwriting analysis . . . does not mean that he is not qualified to offer expert testimony criticizing the standards in the field." Id. at 851.
In light of Velasquez, it is difficult to understand how the government can possibly contend that the defense should not, at the very least, be permitted to call expert witnesses to testify regarding the myriad problems with fingerprint analysis which have been detailed above. Expert testimony regarding the lack of objective standards for such identifications, the lack of empirical testing that has been conducted, the failure to establish an error rate, the failure to follow correct scientific procedures, and the other Daubert factors discussed above will properly assist the jury in determining what weight the government's fingerprint evidence should receive. Velasquez, 64 F.3d at 848. See also, People v. Farmer (1989) 47 Cal. 3d 888, 913 (careless testing procedures can be attacked on cross-examination or by other expert testimony); People v. Smith (1989) 215 Cal. App. 19, 28 ("Appellant had full opportunity to cross-examine criminalist Keel's qualifications and methods, and to present whatever evidence he desired challenging them.").
Of course, the abstract right to present opposing expert testimony is meaningless to an indigent defendant such as Mr. Doe unless he has the funds to hire such experts. It cannot be doubted that the right to counsel guaranteed by both the federal and state constitutions includes, and indeed presumes, the right to effective assistance of counsel, and that the public duty to implement that right "is not discharged by an assignment . . . under such circumstances as to preclude the giving of effective aid in the preparation and trial of the case." (Powell v. Alabama (1932) 287 U.S. 45, 71.) Five decades after Powell, the courts have implemented that right in very concrete fashion. For instance, the United States Supreme Court in Ake v. Oklahoma (1985) 470 U.S. 68, 84 L.Ed.2d 53, held that an indigent, upon a proper showing, must be granted access to consultation and expert testimony of a psychiatrist; in so holding, the court set out the broader principle that:
mere access to the courthouse doors does not by itself assure a proper functioning of the adversary process and that a criminal trial is fundamentally unfair if the State proceeds against an indigent defendant without making certain that he has access to the raw materials integral to the building of an effective defense. (84 L.Ed.2d at 62.)
This same principle has been vigorously implemented in California, both by the Legislature (Evidence Code Sections 730-731; Penal Code Sections 987-987.9) and by the courts (see, e.g., Corenevsky v. Superior Court, 36 Cal.3d 307 [authorizing funds for law clerk and jury selection expert]; Keenan v. Superior Court, 31 Cal.3d 424 [authorizing funds for second counsel]; Taylor v. Superior Court, 168 Cal.App.3d 1217 [abuse of discretion not to appoint fingerprint expert]; People v. Gunnerson, 74 Cal.App.3d 370, 379 [abuse of discretion not to appoint a cardiologist]; In re Hwamei, 37 Cal.App.3d 554 [reversible error not to authorize funds for attorney to travel abroad for purposes of investigation]; see also, Torres v. Municipal Court, 50 Cal.App.3d 778 [funds for confidential experts required by equal protection analysis]).
In its most important exploration of the right to ancillary defense services, the California Supreme Court in Corenevsky emphatically declared:
that an indigent defendant has specific statutory rights to certain court- ordered defense services at county expense; that an indigent defendant has a constitutional right to other defense services, at county expense, as a necessary corollary of the right to effective assistance of counsel; that such rights must be enforced, and a court’s order directing payment for such services must be obeyed, even if a county has no specifically appropriated funds for those purposes. (36 Cal.3d at 313.)
* * *
Additionally, the court emphasized:
A right to ancillary defense services will . . . arise if (defendant) has demonstrated the need for such services by reference to ‘the general lines of inquiry he wishes to pursue, being as specific as possible.’ Although such motions can be granted only if supported by a showing that the investigative services are reasonably necessary, it has been recognized that because of the early stage at which the request typically arises, it will often be difficult for counsel to demonstrate a clear need for such funds. Therefore, the trial court should, in appropriate circumstances, ‘view with considerable liberality a motion for such pre-trial assistance.’ (36 Cal.3d at 320; emphasis added.)
In Corenevsky, the defendant, represented by the public defender, requested inter alia funds for the payment of two law clerks. The trial court denied the request, not on the basis of lack of need, but because the court felt the request related to “staffing problems . . . and that is something that you should be taking up with your public defender and his budget instead of the court simply saying that in addition to whatever budget you already have the court is going to expand that budget.” (36 Cal.3d at 323.)
The Supreme Court held that this ruling was erroneous because
[T]he record clearly demonstrates that such an attempt through the public defender’s office would have been futile. Moreover, the court’s order is suspect because it fails to address the key issue posed by defendant: whether such ancillary services are reasonably necessary. (Id. at 323.)
In the present case, as in Corenevsky, the defendant has set out in great detail the basis for his request for funding of ancillary defense services. If the court were to allow any fingerprint evidence in this case, the defense at a minimum would need to call the defense experts relied upon in United States v. Mitchell, and he would be seeking to elicit testimony along the same lines as was done in the Daubert hearing in that case, the transcript of which the defense will lodge with the court. As an offer of proof, the transcript, as supplemented by the scientific literature reviewed in this memorandum, more than satisfies Corenevsky’s requirement that the defendant need only refer to “the general lines of inquiry he wishes to pursue, being as specific as possible.” (36 Cal.3d at 320.) Moreover, although the Public Defender, with the assistance of the Court, has had the financial wherewithal to fund this extremely complex case up to this point, all of the necessary experts reside out of state and will require travel expenses as well as expert witness fees; any attempt to secure the necessary services through the Public Defender’s office would therefore be “futile” within the meaning of Corenevsky.
Viewing the present motion “with considerable liberality,” it should be concluded that defendant has established his entitlement to county funds to pay the reasonable costs necessary for the preparation of his defense.
For all of the foregoing reasons, the government’s fingerprint identification evidence should be precluded. In the alternative, the defense should be granted funds to retain, and should be permitted to present, expert witness evidence regarding the limitations of the government’s evidence.
Dated: February 10, 2000 Respectfully submitted,
Michael N. Burt
Deputy Public Defender
Footnote 1: Anthropometry, or bertillonage, developed in the early 1880s by Alphonse Bertillon, a clerk in the Paris prefecture of police, relied on the measurements of 11 different physical features of prisoners to determine if they had prior arrests in spite of their giving aliases to the police. This supposedly “infallible” system was abandoned in the United States in the early 1900s when it was discovered that some prisoners, contrary to the theory, had indistinguishable anthropometric measurements. Id. at p. 51 n. 2.
A similar phenomenon may be developing with respect to the “infallible” science of DNA testing. See, Mismatch Calls DNA Tests Into Question, USA TODAY, February 8, 2000, p. 1 (“Great Britain’s national DNA database, the world’s largest crime-solving computer system, has mistakenly matched an innocent man to a burglary, a one-in-37 million possibility that American experts call ‘mind-blowing’”).
Footnote 2: The Print is the official publication of the Southern California Association of Fingerprint Officers, an association for scientific investigation and identification since 1937.
Footnote 3: Professor Starrs’ use of the term “Scientific Supermen” is used derisively to refer to a tendency on the part of police fingerprint technicians to testify to feats beyond the capacity of mere mortals. A good illustration of such testimony was provided by Wendy Chong in this case. She testified under oath that she performs approximately 60,000 latent fingerprint comparisons a year. (RT 205) Assuming she works 365 days a year, that works out to 164.38 comparisons a day. Assuming further she works straight through without break on nothing but latent comparisons for 8 hours per day, she would be doing 20.54 comparisons per hour, which leaves her 2.92 minutes for each comparison. Not even a Robotic Superman could maintain such a schedule.
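The workload arithmetic in the footnote above can be verified mechanically. The following sketch (in Python, purely illustrative; the 60,000-comparisons-per-year figure is taken from the cited testimony at RT 205) reproduces the computation:

```python
# Footnote 3 workload check: 60,000 latent comparisons per year,
# assuming work every day of the year for 8 hours a day.
comparisons_per_year = 60_000
per_day = comparisons_per_year / 365   # about 164.38 comparisons per day
per_hour = per_day / 8                 # about 20.55 per hour (the footnote's
                                       # 20.54 truncates rather than rounds)
minutes_each = 60 / per_hour           # 2.92 minutes per comparison

print(per_day, per_hour, minutes_each)
```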
Footnote 4: Following an extensive evidentiary hearing, the district court denied the Daubert challenge in Mitchell. Unfortunately, the court did not write an opinion. Most of the Daubert pleadings in the case are available online at
Footnote 5: In 1995 the FBI hosted a group of latent print examiners to discuss development of consensus standards which would preserve and improve the quality of service provided by the latent print community nationwide. The Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST) was formed; it has since expanded to 32 individuals representing 25 federal, state and local law enforcement agencies and crime laboratories, and has developed a limited set of basic draft guidelines for hiring, training and quality assurance. The Quality Assurance Guidelines, considered by SWGFAST to be the minimum necessary to perform consistent quality examinations, provide in Section 2.1.5 that “(f)riction ridge identifications are absolute conclusions. Probable, possible, or likely identification conclusions are outside of the acceptable limits of the science of friction ridge identification.” SWGFAST Quality Assurance Guidelines (Dec. 1997), http://www.onin.com/twgfast/twgfast.aspxl.
Footnote 6: Latent-prints.com is a forensic platform focused primarily on latent fingerprints. Its purpose is the sharing of articles, ideas, and discussion regarding the impression evidence sciences.
Footnote 7: When evaluating whether a scientific technique is generally accepted, the Court may take judicial notice of transcripts of scientific testimony in previous hearings. Cal. Evid. Code Section 452(d). The Court may also consider scientific and legal articles and judicial opinions from other jurisdictions, including decisions reached by other members of the same court. People v. Brown (1985) 40 Cal.3d 512, 530; People v. Smith (1989) 215 Cal.App.3d 19, 25. The defense refers in this pleading to scientific articles, evidence and testimony in United States v. Byron Mitchell (E.D. Pa. 1999) and the ruling in United States v. Parks (C.D. Cal. 1991) (No. CR-91-358-JSL). Full transcripts of the testimony, articles and rulings referred to herein will be provided to the court and the People under separate cover. Judicial notice of this material is therefore mandatory under People v. Smith. Id. at 25.
Footnote 8: According to a report by the U.S. Congress’ Office of Technology Assessment:
[i]t is generally agreed that applying DNA tests to forensic samples, especially criminal evidence, potentially presents more difficulties than analyzing samples in basic research or clinical diagnosis. Samples from crime scenes are frequently small and might be of poor quality because of exposure to a spectrum of environmental onslaughts. And unlike paternity samples, where each sample is from an identified source, the contributor to evidence from a crime scene is often unknown.
OTA report, p. 59; see also Lander, DNA Fingerprinting on Trial, supra at 501 (use of RFLP analysis for medical diagnosis does not mean the technique is reliable for forensic identification); NRC I, pp. 51-53 (discussing differences between forensic and diagnostic applications of DNA technology); NRC II at 10 (“Even with the best laboratory technique, there is intrinsic, unavoidable variability in the measurements that introduces uncertainty that can be compounded by poor laboratory technique, faulty equipment, or human error.”).
Footnote 9: “Unlike many of the technical aspects of DNA typing that are validated by daily use in hundreds of laboratories, the extraordinary population-frequency estimates sometimes reported for DNA typing do not arise in research or medical applications that would provide useful validation of the frequency of any particular person’s DNA profile.” NRC I at 77.
Footnote 10: In People v. Leahy (1994) 8 Cal. 4th 587, 594, the Court concluded that Daubert affords no compelling reason for abandoning Kelly in favor of the more “flexible” approach outlined in Daubert. The Court “deemed the more cautious Frye formulation preferable to simply submitting the matter to the trial court’s discretion for decision in each case.” Id. at 595. Elsewhere in the opinion, the Court refers to “Frye’s ‘austere standard’” and its “essentially conservative nature.” Id. at 595, 603. Since the Court explicitly held that Kelly is more cautious, conservative, and austere than Daubert, it follows that a technique that cannot pass muster under Daubert certainly must fail the more stringent Kelly test. Moreover, in applying Kelly, the Court in Leahy relied on many of the indicia of scientific reliability found determinative in Daubert. See id. at 609 (to be qualified as a Kelly expert on an HGN test, a witness must have “some understanding of the processes by which alcohol ingestion produces nystagmus, how strong the correlation is, how other possible causes might be masked, what margin of error has been shown in statistical surveys, and a host of other relevant factors . . .”). The Daubert reliability factors are therefore highly relevant to the Kelly standard. Even aside from Kelly, these factors are relevant because “the reliability and thus the relevance of scientific evidence is determined . . . under the requirement of Evidence Code section 350, that ‘[n]o evidence is admissible except relevant evidence.’” In other words, even apart from Kelly, scientifically unreliable evidence is irrelevant and hence inadmissible.
Footnote 11: The inadequacies of the models referred to by the government are readily evident. For example, Mr. Wentworth states:
There is, however, in all of these problems involving chance, an important factor which in our present lack of precise knowledge we have to assume; and that is the exact, or even approximate, percentage of occurrences of the different details. . . . . We find in the fingerprint in question a fork, opening downward. . . . . We have no definite data for knowing the percentage of occurrence of this detail . . . but the variability of the ridges and their detail is so great that we may be warranted in asserting that it is small.
Bert Wentworth & Harris H. Wilder, Personal Identification (2d ed. 1932) at 318.
Another problem concerns the lack of empirical proof that the ridge details are statistically independent of one another. Two scientists studying this problem in the field of biometrics have pointed out that
“(t)he underlying assumption made (in the statistical models) is that the content of each cell is a random variable which is independent of all other cells. The implication is that any configuration of the same set of features has the same probability of occurrence, meaning, for instance, that a tightly clustered pack of minutiae is just as likely as the same set of minutiae being distributed uniformly over the print. Although the (model) gives meaningful results, empirically the independence assumption is not valid because some configurations of Galton features are much less likely than others.”
A.R. Roddy and J.D. Stosz, Fingerprint Features: Statistical Analysis and System Performance Estimates, Proceedings of the Institute of Electrical and Electronics Engineers, Sept. 1997, Vol. 85, No. 9.
Footnote 12: Mr. Ashbaugh, like the government, points to the embryology studies as providing a scientific basis for fingerprint identifications. Ashbaugh, Basic and Advanced Ridgeology, supra at 8, 38-54. Like the government, though, Mr. Ashbaugh fails to explain how these studies relate to the fundamental premises that underlie latent fingerprint identifications.
Footnote 13: As the British physicist William Thomson, Lord Kelvin, observed in 1883:
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.
(quoted in United States v. Starzecpyzel, 880 F. Supp. 1027 (S.D.N.Y. 1995)).
Footnote 14: Prints from two different individuals were originally determined to have 16 points of similarity by New Zealand experts, but “(w)hen the illustration was examined at New Scotland Yard, it was concluded that 6 of the points were not close enough to be considered similarities but the remaining 10 were.” Id. at 2. Evett and Williams now claim that certain of the points of similarity are fabricated, but if this is true, then one can only question how two preeminent organizations missed what Evett and Williams call “patent” fabrications. Id. p. 9. Moreover, the authors point out that “(d)uring meetings with U.K. fingerprint officers the team heard, in support of the 16 point standard, anecdotes, often second hand, of how experts had seen more than 8 points of comparison in prints from different individuals.” Id. at 9. In any case, the Evett and Williams study was done before two documented cases of 16 points of comparison were discovered in 1991 and 1997.
Footnote 15: For other documented cases of false identifications, see James E. Starrs, More Saltimbancos on the Loose? — Fingerprint Experts Caught in a Whorl of Error, 12 Sci. Sleuthing Newsl. 1 (Winter 1998) (detailing several erroneous identifications discovered in North Carolina and Arizona); see also Dale Clegg, A Standard Comparison, 24 Fingerprint Whorld 99, 101 (July 1998) (“I am personally aware of wrong identifications having occurred under both ‘non numeric’ and ‘16 point’ approaches to fingerprint identification.”).
Footnote 16: On the 1997 exam, 16 false identifications were made by 13 participants. Collaborative Testing Services, Inc., Report No. 9708, Forensic Testing Program: Latent Prints Examination 2 (1997). Six misidentifications were made on the 1996 exam. Collaborative Testing Services, Inc., Report No. 9608, Forensic Testing Program: Latent Prints Examination 2 (1996).
Footnote 17: Of course, the identification in the instant case appears to have been made by Ms. Chong on just such a simplistic counting of points.
Footnote 18: Despite reaching this conclusion, the district court in Starzecpyzel ultimately allowed the handwriting evidence to be admitted under Federal Rule of Evidence 702 as “specialized knowledge” rather than as “scientific knowledge.” (The court provided the jury with a lengthy instruction explicitly telling them that the evidence was not scientific in nature). Id. at 1050-51. In so holding, the court reasoned that the Daubert factors are inapplicable to experts who are not testifying on the basis of scientific knowledge. Id. at 1029. This aspect of the court’s decision, however, was ultimately shown to be erroneous by the Supreme Court’s subsequent decision in Kumho Tire Co. v. Carmichael, 119 S. Ct. 1167 (1999).
Footnote 19: Unlike latent print examiners, hair analysts candidly concede that they cannot make absolute identifications. Williamson, 904 F. Supp. at 1554, 1555.
Footnote 20: A review of fingerprint cases will show that no court has ever conducted the type of analysis that is required by Daubert. Indeed, as commentators have recognized and as argued at the outset of this memorandum, the early American cases establishing the admissibility of fingerprint identifications involved virtually no scrutiny of the evidence whatsoever. See David L. Faigman et al., Modern Scientific Evidence: The Law and Science of Expert Testimony §§ 21-1.0, at 52 (Ex. 15) (“These cases, germinal not only for fingerprint identification but for the many other forensic individualization techniques, invested virtually no effort assessing the merits of the proffered scientific evidence, but merely cited treatises on criminal investigation, or general approval of science, or, . . . other cases admitting [such evidence].”); Saks, supra, at 1103 (Ex. 13) (“What is disappointing about the fingerprint admissibility cases is that these courts made virtually no serious substantive inquiry into the body of knowledge on which they had the responsibility to pass judgment. Later cases had the illusory luxury of precedent, reasoning in effect: ‘Courts in other states are letting in fingerprint evidence, so we can too.’”).
Footnote 21: Mr. Carter further testified in this regard that he had attended the FBI Academy for training and that the lowest number that anyone from the FBI had “gone to court on has been seven.” (RT at 561).
Deputy Public Defender
555 Seventh Street, Second Floor
San Francisco, California 94103
Attorneys for Defendant
SUPERIOR COURT OF CALIFORNIA COUNTY OF SAN FRANCISCO
THE PEOPLE OF THE STATE OF CALIFORNIA, SCN: 000000
Plaintiff, Date: February 18, 2000
Time: 9:00 a.m.
EXHIBITS IN SUPPORT OF MOTION AND MOTION TO EXCLUDE FINGERPRINT IDENTIFICATION EVIDENCE AND REQUEST FOR A HEARING PURSUANT TO PEOPLE V. KELLY (1976) 17 CAL. 3D 24, OR, IN THE ALTERNATIVE, MOTION FOR FUNDS TO RETAIN FINGERPRINT EXPERTS AND TO PERMIT THEIR TESTIMONY BEFORE THE JURY
1. United States v. Mitchell (E.D. Pa. 1999)(No. 96-407), Testimony of William Babler and David Ashbaugh, July 7, 1999
2. United States v. Mitchell (E.D. Pa. 1999)(No. 96-407), Testimony of Ed German and Stephen Meagher, July 8, 1999
3. United States v. Mitchell (E.D. Pa. 1999)(No. 96-407), Testimony of Stephen Meagher, Donald Zeisig and Bruce Budowle, July 9, 1999
4. United States v. Mitchell (E.D. Pa. 1999)(No. 96-407), Testimony of Marilyn Peterson, David Stoney, and James Starrs, July 12, 1999
5. United States v. Mitchell (E.D. Pa. 1999)(No. 96-407), Testimony of Simon Cole, Pat Wertheim, and Bruce Budowle, July 13, 1999
6. United States v. Parks (C.D. Cal. 1991)(No. CR-91-358-JSL), Testimony of Diana Castro, Freddie Underwood, and Michael Ames, December 10, 1991
7. United States v. Parks (C.D. Cal. 1991)(No. CR-91-358-JSL), Testimony of Diana Castro, Darnell Carter, and Stephen Kasarsky, December 11, 1991
8. People v. Clarence Powell, S. F. Muni Ct. No. 167003, Testimony of Inspector Michael Byrne, April 5, 1978
9. People v. John Davenport, S. F. Muni Ct. No. 198530, Testimony of Inspector Michael Byrne, August 30, 1978
10. Paul Giannelli and Edward Imwinkelried, 1 Scientific Evidence (3d. Ed 1999)
11. David Ashbaugh, Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology, 103 (CRC Press, Oct. 1999)
12. David Stoney, Fingerprint Identification: Scientific Status, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony (David L. Faigman et al. eds., West 1997)
13. I. W. Evett and R.L. Williams, A Review of the Sixteen Point Fingerprint Standard in England and Wales, (1996) 12(1) The Print 1
14. James E. Starrs, Judicial Control Over Scientific Supermen: Fingerprint Experts and Others Who Exceed The Bounds, (1999) 35 Crim. L. Bull. 234
15. Federal Bureau of Investigation, An Analysis of Standards in Fingerprint Identification, FBI L. Enforcement Bull. 1 (June 1972)
16. Y. Mark and D. Attias, What Is the Minimum Standard for Characteristics for Fingerprint Identification (1996) Fingerprint Whorld 148
17. James Osterburg, The Crime Laboratory : Case Studies of Scientific Criminal Investigation (1967)
18. Ene-Malle Lauritis, Some Fingerprints Lie, National Legal Aid Defender Association, The Legal Aid Briefcase, October 1968
19. J. Edgar Hoover, Hoover Responds to “Some Fingerprints Lie”, The Legal Aid Briefcase, June 1969, p.221
20. Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), Quality Assurance Guidelines (Dec. 1997)
21. Dusty Clark, What’s The Point (Dec. 1999)(internet document)
22. Federal Bureau of Investigation, The Science of Fingerprints: Classification and Uses (1979)
23. Adrian Dysart, Biometrics (Winter 1998)(internet document)
24. Let Your Fingers Do the Logging In, Network Computing, Issue 910, June 1, 1998
25. United States Department of Justice, Forensic Sciences: Review of Status and Needs (1999)
26. A.R. Roddy and J.D. Stosz, Fingerprint Features: Statistical Analysis and System Performance Estimates, Proceedings of the Institute of Electrical and Electronics Engineers, Sept. 1997, Vol. 85, No. 9 (internet document)
27. Simon A. Cole, Witnessing Identification: Latent Fingerprinting Evidence and Expert Knowledge, 28 Social Studies in Science 687, 701 (Oct.-Dec. 1998)
28. John Thornton, The General Assumptions and Rationale Of Forensic Identification, in 2 Modern Scientific Evidence: The Law and Science of Expert Testimony
29. Simon Cole, What Counts For Identity? The Historical Origins Of The Methodology Of Latent Fingerprint Identification, 12 Sci. In Context 1, 3-4 (Spring 1999)
30. Charles Illsley, Juries Fingerprints and the Expert Fingerprint Witness 16, presented at The International Symposium on Latent Prints (FBI Academy, Quantico, VA, July, 1987)
31. David Grieve, Possession of Truth, 46 J. of Forensic Identification 521 (1996)
32. Collaborative Testing Services, Inc., Report No. 9608, Forensic Testing Program: Latent Prints Examination 2 (1996)
33. Collaborative Testing Services, Inc., Report No. 9708, Forensic Testing Program: Latent Prints Examination 2 (1997)
34. Collaborative Testing Services, Inc., Report No. 9808, Forensic Testing Program: Latent Prints Examination 2 (1998)
35. David A. Stoney & John I. Thornton, A Critical Analysis of Quantitative Fingerprint Individuality Models, 31 J. of Forensic Sci. 1187 (1986)
36. United States Department of Justice, National Institute of Justice, Solicitation: Forensic Friction Ridge (Fingerprint ) Examination Validation Studies (March 2000)
37. Dusty Baker, Reliability of Conclusions (internet document) (April 2000)
38. Ed German, Problem Idents (internet document) (April, 2000)
39. Dario Maio and Davide Maltoni, Minutiae Extraction And Filtering From Gray-Scale Images, in L.C. Jain et al, Intelligent Biometric Techniques in Fingerprint and Face Recognition (1999)
Deputy Public Defender
555 Seventh Street, Second Floor
San Francisco, California 94103
Attorneys for Defendant
SUPERIOR COURT OF CALIFORNIA COUNTY OF SAN FRANCISCO
THE PEOPLE OF THE STATE OF CALIFORNIA, SCN: 000000
Plaintiff, Date: March 3, 2000
Time: 9:00 a.m.
REPLY TO PEOPLE’S OPPOSITION TO MOTION TO EXCLUDE FINGERPRINT
IDENTIFICATION EVIDENCE AND REQUEST FOR A HEARING PURSUANT TO PEOPLE
V. KELLY (1976) 17 CAL. 3D 24, OR, IN THE ALTERNATIVE, MOTION FOR FUNDS
TO RETAIN FINGERPRINT EXPERTS AND TO PERMIT THEIR TESTIMONY BEFORE THE JURY
The prosecutor repeatedly makes one, and only one, point about Mr. Doe’s fingerprint motion: “It’s too long.” The Bible is long; so is Tolstoy’s War and Peace. By the prosecutor’s child-like reasoning, these works of art should be rejected out of hand because they are “too long.” The defendant’s motion, and the extensive scientific evidence upon which it is based, must be judged on its merits, not on the basis of catchy phrases or irrational fears of information contained on the internet. As former prosecutor and now Northern California Federal District Court Judge Lowell Jensen put the matter, “The government is correct in their assertion that pre-Daubert/Kumho Ninth Circuit precedent supports the admissibility of (expert) testimony; however, the world has changed. The Court believes that . . . a past history of admissibility does not relieve this Court of the responsibility of now conducting Daubert/Kumho analysis as to this proffered expert testimony.” United States v. Santillan (N.D. Cal. 1999) __ F.Supp. __, 1999 WL 1201765 at p. 4. See also, People v. Soto (1999) 21 Cal. 4th 512, 540-541 n. 31 (“(I)n a context of rapidly changing technology, every effort should be made to base (decision) on the very latest scientific opinions . . .”); People v. Allen (1999) 72 Cal. App. 4th 1093, 1101 (“The issue is not when a new scientific technique is validated, but whether it is or is not valid; that is why the results generated by a scientific test once considered valid can be challenged by evidence the test has since been invalidated.”); People v. Smith (1989) 215 Cal.App.3d 19, 25 (in determining whether a particular technique is generally accepted “defendant is not foreclosed from showing new information which may question the continuing reliability of the test in question or to show a change in the consensus within the scientific community concerning the scientific technique”).
The prosecutor admits that a Kelly hearing is required if the defendant presents “new evidence . . . reflecting a change in the attitude of the scientific community.” People v. Kelly (1976) 17 Cal. 3d 24 (emphasis added). [Footnote 22] In a non sequitur, however, the prosecutor then argues that defendant’s motion for a Kelly hearing should be denied because Mr. Doe “cites more to web pages than cases.” But by the prosecutor’s own admission, the focus must be on the attitude of the scientific community, not on how many cases Mr. Doe can cite as reflective of the attitudes of the judicial community.
The government has failed to demonstrate that its latent fingerprint identification evidence possesses the various indicia of scientific and forensic reliability set forth by the Supreme Court in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) and our Supreme Court in People v. Venegas (1998) 18 Cal.4th 47 and the other cases cited in defendant’s motion. Most significantly, as even a cursory review of the Mitchell transcripts and other Exhibits in support of the motion will reveal, there has been no testing of any of the fundamental premises that underlie latent fingerprint identifications. As the government witnesses in Mitchell conceded, no controlled studies have been performed to determine the reliability of identifications based on small, distorted latent fingerprint fragments such as those at issue in the case at bar. Nor has any testing been done to determine the minimum amount of corresponding detail that a fingerprint examiner should find before making an identification.
Because no testing of this nature has been performed, there is no established error rate for latent fingerprint examiners. It is clear, however, from many real life examples, that errors do occur. Moreover, there is substantial reason to believe that additional errors have gone undetected, given the alarmingly high rate of misidentifications that have occurred during the last several years on latent fingerprint examiner proficiency exams. The government’s experts in Mitchell provided no explanation for the shockingly poor results of these exams, and the prosecutor in this case is also silent on this issue, but does admit the astonishing fact that the AFIS system “has an error rate of 70-75%.” (Opposition to Motion to Suppress.) Instead, the government in Mitchell and the prosecutor here resort to the argument that the human error rate is irrelevant, a contention that not only conflicts with Daubert and Venegas, but which is patently absurd in light of the fact that latent fingerprint identifications are based entirely on subjective human judgment.
The lack of experimentation with respect to latent fingerprint identifications has also resulted in the failure to establish an objective identification standard. As the government’s experts in Mitchell all acknowledged, a fingerprint examiner’s opinion of identification is completely subjective. Shockingly, the government’s experts conceded that the varying point standards that fingerprint examiners have been relying upon, and testifying to, for the past 90 years are not scientifically based and are instead the product of what can, at best, be described as educated conjecture. The extreme subjectivity of latent fingerprint identifications is especially troubling given that many latent print examiners are poorly trained and minimally qualified.
In addition to these various Daubert factors, the unreliability of latent print comparisons was amply demonstrated in Mitchell by the survey that the government conducted of state law enforcement agencies. Of the 35 agencies that initially responded, eight (23%) found that there was an insufficient basis to make an identification with respect to one of the two latent prints at issue in this case, and six (17%) found an insufficient basis as to the other. That the government subsequently went to the extreme lengths that it did to convince each of these agencies to change their opinions only serves to demonstrate just how significant the original results of this test actually are.
Unable to satisfy any of the key Daubert and Downing factors discussed above, and reeling from the results of the state examiner survey, the government, at the conclusion of the Daubert hearing in Mitchell, submitted that even if its fingerprint identification evidence is not admissible as “scientific” knowledge, it may still be admitted as “technical” or “specialized” knowledge. The government, however, cannot so easily evade the criteria of Daubert. As the Supreme Court recently held in Kumho Tire v. Carmichael, 119 S. Ct. 1167 (1999), the Daubert factors may properly be applied to evaluate the reliability of all expert testimony, not just scientific expert testimony. The prosecutor in this case can offer no plausible reason why the Daubert factors should not be used to assess the reliability of such critically important evidence as latent fingerprint identification testimony. It is beyond dispute that testing in the fingerprint field can be performed, an error rate established, and an objective identification standard developed. That none of this has occurred cannot be glossed over simply by characterizing the evidence as well accepted by a heretofore uncritical judiciary. The essential fact remains: the government has not established the reliability or general scientific acceptance of its latent fingerprint identification evidence.
Finally, the unreliability of latent fingerprint identifications has already been judicially recognized. In the only known fingerprint case in which a federal trial court has performed the type of analysis that is now mandated by Daubert, the district court excluded the government’s fingerprint identification evidence. United States v. Parks (C.D. Cal. 1991) (No. CR-91-358-JSL) (Ex. 48). The district court in Parks reached its determination after hearing from three different fingerprint experts produced by the government in an effort to have the evidence admitted. In excluding the evidence, the district court recognized, among other things, the lack of testing that has been done in the field, the failure of latent fingerprint examiners to employ uniform objective standards, and the minimal training that latent fingerprint examiners typically receive.
Accordingly, for all of the foregoing reasons, Mr. Doe requests that this Court preclude the government from introducing its fingerprint identification evidence at his upcoming trial. Mr. Doe does not think he has “saddl[ed] the court with hundreds of pages of testimony from [United States v. Mitchell].” (Opp. p. 3). The prosecutor cannot have it both ways. He cannot complain that defendant must demonstrate a change in the attitude of the scientific community and then turn around and whine when the defendant attempts to satisfy that burden. Nevertheless, defendant is sensitive to the burden which his showing places upon the Court. He therefore offers the following summary of the testimony elicited in Mitchell as an aid to understanding the scientific basis of the defendant’s motion.
THE DAUBERT HEARING IN MITCHELL
A. The Government’s Witnesses
The government offered seven experts to support its position that fingerprint evidence is admissible as scientific or technical evidence. These experts, by and large, testified to their acceptance of the prosecution’s three premises, set forth in Government Exhibit A:
1. Human friction ridges are unique and permanent;
2. Human friction ridge skin arrangements are unique and permanent;
3. Individualization, that is, a positive identification, can result from comparisons of friction ridge skin or impressions containing a sufficient quality (clarity) and quantity of unique friction ridge detail.
The first two of these premises, however, that entire fingerprint patterns are “unique and permanent,” are largely irrelevant to the question concerning the reliability of latent fingerprint identifications, which are based on small, distorted fingerprint fragments. [Footnote 23] And, with respect to the much more significant third premise, none of the government’s witnesses was able to answer the fundamental question which that premise leaves open: what is a sufficient quantity and quality of friction ridge detail from which a latent fingerprint identification can reliably be made? The reason for this failure is simple: there has been no testing in the fingerprint field from which this could be determined.
(i) William Babler
The government’s first witness was Dr. William Babler, an expert in the prenatal development of friction ridges. While Dr. Babler presented an interesting discussion of certain embryology studies concerning how and when fingerprints form during fetal development, he readily conceded that these studies have nothing to do with latent fingerprint identifications. (Tr. 7/7 at 75). Indeed, while Dr. Babler testified to his belief in premises one and two set forth above, he did not opine as to premise three, that positive identifications can be made from a comparison of a latent impression with a known exemplar. (Id. at 72-74).
(ii) David Ashbaugh
The next government witness was Staff Sergeant David Ashbaugh of the Royal Canadian Mounted Police. Ashbaugh, whose education did not extend beyond high school, testified that the scientific basis for latent fingerprint identifications consists of the embryology studies that Dr. Babler referred to. (Id. at 86, 213-214). Ashbaugh, however, did not offer any explanation as to how these studies demonstrate the reliability of latent fingerprint identifications or provide any inkling as to what constitutes a sufficient quantity of friction ridge detail on which an identification may reasonably be based.
Ashbaugh repeatedly acknowledged that a latent fingerprint examiner’s opinion of identification is subjective. (Id. at 115, 166). Indeed, in a 1993 article that Ashbaugh wrote entitled Premises of Friction Ridge Identifications, he went so far as to state that the opinion of identification is “very subjective.” (Id. at 151). While Ashbaugh testified that he was “exaggerating” when he wrote this article (Id. at 153), in his most recent writing on the subject, Ashbaugh again emphasized the subjectiveness of a latent print identification, this time by writing the word “subjective” in all capital letters. (Id. at 167); (Gov’t Ex. 10 at 99).

Ashbaugh also revealed in his testimony that the various “point” standards that latent fingerprint examiners have employed, and have been testifying on, over the past 100 years are not scientifically based. (Id. at 168). Rather, as Ashbaugh candidly conceded, these point standards are based on nothing more than “educated conjecture.” (Id.). Accordingly, Ashbaugh testified that it is unacceptable for point standards still to be utilized. (Id. at 168, 169). He admitted that he was unaware that examiners in this country were continuing to use them. [Footnote 24] (Id. at 169).
In Ashbaugh’s opinion, there should not be any minimum identification standard. (Id. at 168). Rather, he asserts that the “standard” in the fingerprint field “is in the training of the expert and the knowledge that the person has.” (Id. at 168, 186). Ashbaugh acknowledged, however, that he is “not familiar with the training” that latent print examiners in this country are provided. (Id. at 174). Ashbaugh testified that in his opinion latent fingerprint examiners should be board certified and subjected to “blind” proficiency testing before they are permitted to testify in court. (Id. at 200, 201). As he recognized though, there is no board certification requirement in this country. (Id. at 200). Ashbaugh professed ignorance as to whether any blind proficiency testing is conducted in this country. (Id. at 201).
Ashbaugh testified that rather than simply looking for points of comparison, he and other latent fingerprint examiners currently employ a “quantitative-qualitative” process known as “ridgeology” that consists of four components: analysis, comparison, evaluation, and verification. (Id. at 110, 135, 146). Ashbaugh, as well as some of the government’s other witnesses, also referred to this process by the acronym “ACE-V.” Although Ashbaugh conceded that the critical evaluation step of this process is highly subjective, he insisted during his testimony that the analysis and comparison stages are of an objective nature. (Id. at 115). His testimony, however, disclosed otherwise. In demonstrating to the court how he conducts a comparison, Ashbaugh unwittingly revealed the very subjective judgments that are made throughout the entire process:
So when I looked at the first ridge, that was acceptable. The second ridge, this area is a little different. And because there’s no lateral pressure, I would have a concern. This area here, I would accept the top — the top area I would accept. The shape of this particular ridge feature, I feel even with less pressure you should be able to see that pointed aspect of that short ridge.
When I move on to this ridge, there’s a dissimilarity here that will — in my opinion, would not mean a difference. Dissimilarity to me is something you can’t accept, but when it looks like this, I believe that is now moved into the difference category.
Then, of course, the ridge ends here and that’s reasonably in agreement.
. . .
The next ridge comes down and has a dogleg. This ridge comes down and the dogleg is not as abrupt. And that again would concern me.
I feel that is bordering on disagreement. This particular short ridge in the unknown has a wide area at this end and then moves narrow, and this is pretty well straight all the way through.
I feel that this would be a disagreement because even with less pressure, things would get thinner on this ridge. You would still have that shape.
The next ridge in the unknown print — I apologize. I’m so used to looking at unknown on the left-hand side and this is backwards.
In the unknown print, this ridge runs straight through between two short ridges. This ridge makes a major dogleg and that is disagreement. That isn’t acceptable.
This other short ridge, it just doesn’t seem to be the same length. And where this ridge ends, a little bit farther down the ridge, where, if you look at — I’m sorry, this ridge ends a little farther up the ridge of the short ridge, if you draw a line straight across here to their relationship. And yet this one is down quite a bit farther.
If I moved to the next ridge across and followed it down, it has a dogleg again coming in nice and tight here. I’m not sure I would accept that, but I likely would.
The next ridge, the angle here is a little bit different than the angle here (indicating). But again, there’s a great deal of pressure so I may even accept that as being an agreement.
(Id. at 121-123) (emphasis added).
As Ashbaugh’s demonstration thus revealed, latent fingerprint examiners make subjective judgment calls throughout the entire comparison process as to whether various ridge characteristics are in agreement or disagreement. Neither Ashbaugh, nor any of the government’s other witnesses, testified to any standard rules or measurements that exist which would serve to guide latent print examiners in making these determinations.
Ashbaugh testified that “ridgeologists,” such as himself, do not simply look at the traditional “Galton” characteristics when comparing a latent print to a known exemplar. (Id. at 150). The Galton characteristics, which Ashbaugh referred to as “second level detail,” are what latent print examiners have traditionally counted when making identifications on the basis of point standards. (Id. at 126, 130). Ashbaugh testified that in addition to Galton characteristics he looks for “third level” detail such as sweat pores and the shapes of the ridge edges. (Id. at 150). In so testifying, Ashbaugh expressed his disagreement with the position of James Cowger that because “prints of friction skin are rarely well recorded … comparison of pores or edges is only rarely practical.” (Id. at 150); see James F. Cowger, Friction Ridge Skin: Comparison and Identification of Fingerprints at 143 (1983). Ashbaugh testified that he was unaware that the FBI had previously expressed the same view as Cowger. (Id. at 213); see An Analysis of Standards in Fingerprint Identification, FBI Law Enforcement Bulletin (June 1972) (Ex. 1 at 3) (“observations on pores have shown that they are not reliably present and that they can be obliterated or altered by pressure, fingerprint ink, or developing media;” the FBI “knows of no case in the United States in which pores have been used in the identification of fragmentary impressions.”).
Ashbaugh acknowledged during his testimony that all fingerprints suffer from distortion which can result from 1) the pressure exerted at the time the print is made, 2) the shape of the surface on which the print is found, 3) the material used to lift the print, 4) the medium or substrate in which the print is found, and 5) anatomical aspects of the finger itself. (Id. at 159-162). Ashbaugh conceded that these distortions may cause a ridge characteristic to appear to be something other than what it really is. (Id. at 160, 161). As Ashbaugh recognized, a major part of a latent print examiner’s job is to determine whether the features that are observable in a particular print are genuine or a product of distortion. (Id. at 162).
Ashbaugh took issue with the view expressed by Dr. John Thornton that latent fingerprint examiners will routinely make up explanations regarding distortions so as to explain away differences in prints once the examiners have become convinced that the prints were made by the same finger. (Id. at 140); see John Thornton, The One Dissimilarity Doctrine in Fingerprint Identifications, 306 Int’l Crim. Police Rev. 89 (March 1977) (Ex. 38). Ashbaugh conceded, however, that he is unaware of any paper that has ever been written disagreeing with Dr. Thornton’s article. (Id. at 166).
Finally, Ashbaugh acknowledged that there have been no studies performed from which fingerprint examiners can determine the probability of different people having a certain number of fingerprint ridge characteristics in common. (Id. at 187, 190, 193). Lacking any such probability studies, examiners testify in terms of absolute certainty; i.e., that a latent print was made by a particular finger to the exclusion of all other fingers in the world. (Id. at 190).
(iii) Edward German
The next prosecution expert was Edward German, another law enforcement witness. Mr. German is a latent print examiner employed by the U.S. Army Criminal Investigation Laboratory. (Tr. 7/8 at 2). Mr. German, who also does not possess a college diploma, testified that he is a member of the Scientific Working Group on Friction Ridge Analysis, Study and Technology (“SWGFAST”). [Footnote 25] (Id. at 12, 14). This group has recently promulgated recommended guidelines concerning qualifications and training of latent fingerprint examiners. (Id. at 12). These recommended guidelines were introduced as government exhibit 4-4. As this exhibit states on its first page, these “draft” guidelines have been presented for “consideration and comment.” Neither Mr. German, nor any of the government’s other witnesses, presented any testimony or evidence that these draft guidelines have been adopted by any law enforcement agency at either the state or federal level, including the FBI. Accordingly, in this country, there continue to be no minimum training or qualification requirements for latent fingerprint examiners.
Mr. German also testified about certain twin studies which he and others have conducted. (Id. at 20–28). In Mr. German’s opinion, these studies have established that fingerprints are not genetically determined. [Footnote 26] (Id. at 46). Neither Mr. German, however, nor any of the government’s other witnesses claimed that these studies provide even a clue as to what constitutes a sufficient quantity of friction ridge detail on which a latent print identification can reliably be based.
(iv) Stephen Meagher
The government’s next witness was Stephen Meagher, employed by the FBI as a fingerprint specialist unit chief. Mr. Meagher, who also does not have a college degree, testified, like Sergeant Ashbaugh, that the embryology studies referred to by Dr. Babler provide the scientific basis for latent fingerprint identifications. (Tr. 7/8 at 61; 7/9 at 13). Like Ashbaugh, however, Meagher was unable to explain how these studies demonstrate the reliability of latent fingerprint identifications, or to provide any inkling as to what constitutes a sufficient quantity of friction ridge detail on which a latent print identification can reasonably be based. Accordingly, Meagher did not disagree with the opinion of forensic science commentator Michael J. Saks that a “vote to admit fingerprints is a rejection of conventional science” and that “a vote for science is a vote to exclude fingerprint expert opinions.” (Tr. 7/9 at 17, 18).
Like Ashbaugh, Meagher conceded that a latent fingerprint examiner’s opinion of identification is subjective. (Tr. 7/9 at 15). Meagher testified that the FBI stopped using an objective identification standard in the late 1940’s and that the FBI currently uses the ACE-V methodology testified to by Ashbaugh. (Tr. 7/8 at 105, 106). When asked to explain why the FBI fingerprint examiner at Mr. Mitchell’s first trial testified in terms of points of similarity, Meagher asserted that this was just the examiner’s simplistic way of explaining the identification to the jury. (Id. at 98, 99). Meagher stated that he had no opinion as to whether latent fingerprint examiners outside of the FBI continue to employ point standards. (Tr. 7/8 at 227). [Footnote 27]
Meagher also described the various surveys he had constructed for the purpose of the Daubert hearing. Most relevant to the proceeding was the survey in which photographs of the latent prints at issue in this case and Mr. Mitchell’s inked prints were sent to 53 law enforcement agencies for comparison. Of the 35 agencies that initially responded to the government’s request, eight (23%) did not make an identification with respect to one of the two latents and six (17%) did not make an identification as to the other. (Tr. 7/9 at 207). Mr. Meagher testified that he subsequently sent the photographs back out to these agencies, along with marked up enlargements displaying the common characteristics that he (Meagher) had found, with the request that the agencies perform a new examination and complete a new survey form. (Id. at 210, 211). Meagher acknowledged that it is “not common” for “examiners to get blowups as part of their examinations.” (Id. at 118). Nevertheless, Meagher claimed that he did not send the marked up enlargements to the state agencies because of any concern on his part that the agencies’ initial responses might be detrimental to the government’s interests. (Id. at 154). Indeed, Meagher testified that in his view “practitioner error” is irrelevant to a Daubert hearing. [Footnote 28] (Id. at 152-154). Meagher testified that he sent out the enlargements to the agencies as a training tool so as to demonstrate the mistakes which, in his view, they had made. (Id. at 124). When questioned, however, as to why, if his motivation was simply to educate the examiners, he urged them to quickly complete the new response form and return it to him in advance of the Daubert hearing, Meagher testified that it “was just decided that it would be in our best interests to do that.” (Tr. 7/9 at 8, 9).
Meagher also offered explanations as to why the state latent print examiners had not made identifications: either they had “just screwed up,” or they lacked the appropriate experience, or it was late in the day, or they didn’t realize the result was being used in a pre-trial hearing. (Tr. 7/8 at 134-150). (This latter explanation implies that knowledge of the purpose of the survey would have influenced the determination of identification.) [Footnote 29]
Finally, Meagher testified, on his direct examination, with respect to two experiments that were conducted on the FBI’s Automated Fingerprint Identification System (“AFIS”) by the government’s AFIS provider, Lockheed Martin. On cross examination, however, Meagher revealed that he did not even possess the expertise necessary to explain the basic terminology that Lockheed used in the conclusion section of the test report that it generated in connection with these two experiments. (Tr. 7/8 at 196, 198). [Footnote 30]
(v) Don Zeisig
The government next called Don Zeisig, an electrical engineer at Lockheed Martin who conducted the AFIS experiments to which Meagher referred. Mr. Zeisig testified that the two experiments utilized a database of 50,000 fingerprints extracted from the FBI’s Criminal Master File. (Tr. 7/9 at 81). The first experiment compared each fingerprint with itself, as well as with all of the other fingerprints in the database. (Id.) The scores generated by AFIS from these comparisons were converted into “Z” scores and then probability measures. (Id. at 82, 83). Not surprisingly, whenever AFIS compared a fingerprint image with itself, an extremely high score was generated. [Footnote 31] (Id. at 82). Much lower scores were obtained when different fingerprint images were compared. (Id. at 84). The conclusion that Lockheed derived from this experiment was that the “probability of a non-mate rolled fingerprint being identical to any particular fingerprint is less than 1/10^97 (1 chance in 1 followed by 97 zeros).” (Id. at 84). As Mr. Zeisig explained, this probability was taken from the lowest score that AFIS generated when a particular fingerprint image was compared with itself. (Id. at 85).
Significantly, Mr. Zeisig conceded on cross examination that even rolled fingerprints which are taken of the exact same finger will not be identical because of the various distortions that occur in the rolling process. (Id. at 86, 87). [Footnote 32] Indeed, this fact was unintentionally demonstrated by the experiment. Included in the 50,000 print database were multiple fingerprints that had been taken of the same fingers. (Apparently some individuals had been fingerprinted twice by the FBI). (Id. at 87). Three different examples of this were found to exist in the database, though Mr. Zeisig conceded that there could be others that went undetected. (Id. at 87-94; Ex. 54 at 4). Significantly, in each of these instances, the scores that were generated by AFIS, when comparing two different fingerprints of the exact same finger, were significantly lower than the scores that were obtained when each fingerprint in the database was compared with itself. (Id.) In fact, in some instances, the scores generated were so low as to fall well within the range of scores that were generated when fingerprints of different fingers were compared. (Id. at 91, 92). Thus, there were some fingerprints of different fingers that AFIS found to have greater similarity than fingerprints of the same finger. Accordingly, Mr. Zeisig acknowledged that if two people actually had fingers with identical fingerprint patterns, and rolled fingerprints of those fingers were compared by AFIS, the score generated would in all likelihood be far less than the scores that Lockheed obtained when each fingerprint image was compared with itself. (Id. at 94-95). In other words, fingerprints of identical fingers would not meet the definition of identical that Lockheed established through the methodology of comparing fingerprints with themselves. (Id. at 92, 94-95).
The second AFIS experiment to which Mr. Zeisig testified was very similar to the first. Again each fingerprint was compared with itself and with all of the other fingerprints in the database. (Id. at 95). However, this time each fingerprint, prior to comparison, was converted into a simulated “latent” fingerprint by extracting the central 21.7% of the print. (Id. at 96). This central portion was then compared with the entire print from which it had been extracted. (Id.) Accordingly, with respect to the middle 21.7%, identical images were again being compared, and as such the scores generated from these comparisons were extremely high, much higher than the scores that were obtained when the simulated latents were compared with different fingerprints. (Id. at 96, 97). The conclusion that Lockheed derived from this experiment was that “the probability of a non-mate fingerprint being identical to a minutia subset of any particular fingerprint is less than 1/10^27 (1 chance in 1 followed by 27 zeros) for small numbers of minutiae (in this case small means four), decreasing to less than 1/10^97 (1 chance in 1 followed by 97 zeros) for larger numbers of minutia (in this case larger means greater than eighteen).” (Id. at 97, Gov’t Ex. 6-8). Again, these astronomical probabilities were simply derived from the scores that were generated when fingerprint images were compared with themselves. (Id. at 98). And, as with the first experiment, Mr. Zeisig answered yes to the hypothetical posed by defense counsel: if two people possessed fingers bearing identical fingerprint patterns, and a simulated latent print was created from a rolled print of one of these fingers and then compared on AFIS with a rolled print of the other identical finger, the score generated would in all likelihood be far less than the standard for identical that Lockheed created by comparing images with themselves. (Id. at 99, 100).
(vi) Bruce Budowle
The government’s final witness on its direct case was Dr. Bruce Budowle, a geneticist employed by the FBI. (Id. at 105). Dr. Budowle’s lengthy resume does not reveal any background or experience with fingerprints, and he did not claim any such experience or expertise during his testimony. (Gov’t Ex. 8; Tr. 7/9 at 105-110). Nevertheless, Dr. Budowle offered his opinion that latent fingerprint identifications are scientific. (Id. at 161). Unlike the government witnesses that testified before him, Dr. Budowle did not premise his opinion on the embryology studies that were the subject of Dr. Babler’s testimony. Rather, Dr. Budowle referred to certain theoretical probability models, including one developed by defense expert Dr. David Stoney. (Id. at 121, 161). As Dr. Budowle recognized at another point in his testimony, however, a theoretical model must be tested before it is relied upon. (Id. at 168). It was therefore curious that Dr. Budowle did not comment upon the fact that the probability models that he was relying upon have never been tested.
In opining that latent fingerprint identifications are scientific, Dr. Budowle also expressed the view that the “100 years of fingerprint employment has been empirical studies,” which have demonstrated that “no two unrelated individuals or related individuals have the same print.” (Id. at 115). As Dr. Budowle well knows, however, there is a substantial difference between the somewhat academic question of whether two people might possess the same entire fingerprint pattern and the significantly more important real-life issue concerning the frequency with which small distorted latent fingerprint fragments are mistakenly identified with the rolled impressions that they are compared with. Dr. Budowle did not claim that the 100 years of fingerprint employment constitute “empirical studies” as to the frequency of such errors.
In the absence of any empirical studies concerning the subject of errors, Dr. Budowle expressed the view, contrary to that of the Supreme Court in Daubert, that “calculating an error rate is meaningless and misrepresents the state of the art.” (Id. at 163). When asked specifically about the 1995 latent fingerprint examiner proficiency exam, on which 22% of the participating examiners made false identifications, Dr. Budowle testified, somewhat confusingly, that although he has “no experience in what was done in that particular situation,” he nevertheless knows that “in 1995 people weren’t using it in the proper fashion and design for proficiency testing.” (Id. at 170). [Footnote 33]
Dr. Budowle also attempted to analogize the opinion of a latent fingerprint examiner to a diagnosis rendered by a medical doctor. (Tr. 7/13 at 86-87). In making this comparison, however, Dr. Budowle failed to address two significant distinctions between medical doctors and latent fingerprint examiners: first, the extensive training and testing that medical doctors undergo before their opinions are deemed sufficiently reliable to be trusted; and second, that doctors do not make the inherently unscientific claim of absolute certainty when rendering their opinions. In addition, Dr. Budowle failed to support his position that a medical doctor’s opinion would be considered “scientific” if that opinion were not based on any controlled studies or experimentation that had previously been done in the particular field.
Dr. Budowle also analogized the comparison of fingerprints to the comparison of DNA strands. (Id. at 86). But, in drawing this analogy, Dr. Budowle simply ignored his earlier testimony that with respect to DNA there has been a plethora of testing of different population groups so as to determine the statistical probability of different people having the same DNA. (Tr. 7/9 at 150, 151). As Dr. Budowle well knows, there has been no comparable testing done with respect to fingerprints.
Finally, Dr. Budowle also vouched for the AFIS experiments conducted by Lockheed Martin. However, after hearing the testimony of defense expert David Stoney, Dr. Budowle, in his rebuttal testimony, acknowledged that “no one” would say that these tests have “prove[d] uniqueness” of fingerprints. (Tr. 7/13 at 82). Moreover, in opining that these tests have any value at all, Dr. Budowle revealed his fundamental misunderstanding of the tests, which is surprising given that he helped to design them. (Tr. 7/9 at 116, 177). Dr. Budowle testified that Lockheed should not be criticized for comparing prints with themselves, because this was simply done as a “quality control measure.” (Tr. 7/13 at 80). To the contrary, Mr. Zeisig’s testimony made clear that the astronomical probabilities provided for by these tests, concerning the likelihood of two fingerprints, or two fingerprint subsets, being “identical,” are directly derived from the scores that AFIS generated when each fingerprint image was compared with itself. Thus, far from being a simple quality control measure, the comparison of each fingerprint image with itself effectively provided the standard for “identical” by which all other fingerprint comparisons were measured. Dr. Budowle did not explain during his testimony how these experiments tell us anything about uniqueness given that Mr. Zeisig conceded that rolled fingerprints of the exact same finger would not meet the standard of identical which Lockheed had adopted. [Footnote 34]
(vii) Pat Wertheim
In addition to Dr. Budowle, the government also called Pat Wertheim as a rebuttal witness. Wertheim, who has more than 20 years of experience as a fingerprint examiner with various state law enforcement agencies, testified that he has given numerous training courses to fingerprint examiners around the country. (Tr. 7/13 at 53-56). He testified that the trainees at these courses would commonly be “reluctant to embrace the philosophy and the methodology of ridgeology,” but that they would ultimately conclude that the ridgeology methodology was consistent with what they had “been doing all along.” (Id. at 58). Wertheim did not testify, however, as to whether his trainees told him that they were adhering to point standards when making identifications. [Footnote 35]
Wertheim acknowledged that with respect to fingerprint identifications, verification is usually done by an examiner who not only works with the examiner who initially compared the prints, but who also knows the first examiner’s conclusion. (Id. at 62). Wertheim agreed that such verifications are essentially a “cleanup or checkup confirmation process.” (Id. at 62). In recounting an experience where he served as a defense expert in England, Wertheim revealed that when he wanted a true verification of his opinion, he provided the prints in question to a colleague, without revealing the opinion that he had formed. (Id. at 59, 61).
B. The Defense Witnesses
The defense called four witnesses at the Daubert hearing, three experts and an investigator. The three experts all agreed that latent fingerprint identifications are not scientific. Among other things, each of the defense experts pointed to the lack of testing that has been done in the fingerprint field and the failure of the fingerprint community to develop an objective identification standard. Each agreed that the case work that has been performed by fingerprint examiners over the past 80 years is no substitute for scientific testing.
(i) David Stoney
The first expert witness called by the defense was Dr. David Stoney. Dr. Stoney is currently the Director of the McCrone Research Institute, a not-for-profit teaching and research institution located in Chicago, Illinois. (Tr. 7/12 at 36, 37). Dr. Stoney was awarded his Ph.D. in forensic science in 1985 from the University of California at Berkeley. (Id. at 36). His dissertation concerned a quantitative assessment of fingerprint individuality. (Id. at 36). It included a statistical model for fingerprint individuality that Dr. Stoney personally created. (Id. at 66). This work was subsequently recast into a peer-reviewed journal article, entitled A Critical Analysis of Quantitative Fingerprint Individuality Models, 31 Journal of Forensic Sciences 1187 (1986), which Dr. Stoney coauthored with Dr. John Thornton. Dr. Stoney has authored approximately 20-25 peer-reviewed publications in the forensic science area, more than 10 of which have concerned the area of fingerprints. (Id. at 40-42). One of these publications is a book chapter concerning the scientific status, or lack thereof, of fingerprint identifications which is included as part of a West publication entitled Modern Scientific Evidence: The Law and Science of Expert Testimony (David L. Faigman et al. eds., West 1997) (Ex. 15).
Dr. Stoney is himself a trained fingerprint analyst. (Id. at 56-58). He received his training as part of his forensic science education at Berkeley. (Id.). After graduating from Berkeley, Dr. Stoney worked for several years at a private criminalistics laboratory in California. (Id. at 39, 40). Approximately one third of his work there involved fingerprint comparisons. (Id. at 40). Dr. Stoney has previously been qualified to testify as an expert witness concerning fingerprint identifications. (Id. at 45).
At the Daubert hearing in the case at bar, Dr. Stoney testified that latent fingerprint identifications are not scientific. Rather, a fingerprint examiner’s opinion of identification is a “subjective determination, without objective standards.” (Id. at 87). [Footnote 36] Consistent with the Supreme Court’s decision in Daubert, Dr. Stoney testified that for a technique, such as latent fingerprint identification, to be considered scientific, the technique must be tested. (Id. at 87-89). Dr. Stoney explained two ways such testing could be done with respect to fingerprints. (Id.). First, an objective identification standard could be proposed and that standard then tested so as to determine the degree to which identifications meeting the criteria of the standard are correct or incorrect. (Id.). Second, a process could be proffered, such as the ACE-V process testified to by the government’s witnesses, and that process then tested so as to determine the degree to which examiners utilizing the process are producing correct answers. (Id. at 87, 89, 102).
Dr. Stoney provided a specific example of a test that might be done in this regard. Different fingerprint examiners could be asked to compare fingerprints which, though deposited by different fingers, nevertheless contain several common ridge characteristics. (Id. at 105). An example of such prints was provided in defense exhibit 6, a journal article discussing a latent fingerprint that had seven characteristics in common with a rolled fingerprint impression that had been taken from a different person. There was some disagreement among the government experts as to the likelihood of a fingerprint examiner making a misidentification when confronted with prints of this nature. (Tr. 7/7 at 195, 196; Tr. 7/8 at 224-226). As Dr. Stoney recognized, it would be a very simple and interesting study to provide these prints to different examiners so as to see how many would actually make misidentifications. (Tr. 7/12 at 105) (“That would be applying science to the issue; testing.”).
Dr. Stoney testified that the casework that fingerprint examiners have been doing over the past eighty years cannot be considered a substitute for scientific testing. (Id. at 120, 121). There is simply no way of knowing, Dr. Stoney testified, how often fingerprint examiners have erred in their casework. (Id. at 121). In order to accurately assess whether fingerprint examiners are producing correct opinions, it is necessary to test them by providing them with prints that are known ahead of time to have either arisen or not arisen from the same source. (Id.).
Dr. Stoney disagreed with Dr. Budowle’s position that practitioner error rate is irrelevant and that the error rate for the ACE-V methodology is 0. (Id. at 102-104). As Dr. Stoney recognized, and as the government experts conceded, the ACE-V process is entirely dependent upon a practitioner’s “subjective determination.” (Id. at 103). Accordingly, since “the individual is an inherent part of getting to the opinion in this process [,]… errors that individuals make are a very important part of evaluating whether or not it works.” (Id. at 104). Dr. Stoney testified that the claim of a zero error rate differentiates fingerprints from all of the other forensic sciences. (Id. at 103).
Dr. Stoney also disagreed with Dr. Budowle as to whether the statistical probability models that he (Stoney) and others have created provide a scientific basis for fingerprint identifications. (Id. at 119, 120). Dr. Stoney testified that none of these models, including his own, have ever been tested. (Id. at 120). Accordingly, he testified that the models, as they now stand, are simply “reasonable guesses or speculations … as to what might be a model that would apply to the individuality of fingerprints.” (Id.). Moreover, none of these models concern the reliability of identifications that are based on small distorted latent fingerprint fragments.
As to the embryology studies that some of the government witnesses referred to, Dr. Stoney testified that these studies have nothing to do with the fundamental question of what constitutes a sufficient basis to make a reliable latent fingerprint identification. (Id. at 106, 107). What these studies address, Dr. Stoney testified, is the issue of how “friction ridges come to be on the fingers.” (Id. at 92). Dr. Stoney testified that while these studies provide important “background information,” they in no way make latent fingerprint identifications scientific, since they neither tell us when identifications should be made nor when they are correctly made. (Id. at 107).
Finally, Dr. Stoney harshly criticized the government’s AFIS experiments. (Id. at 109-119). He testified that the methodology of these experiments was “fundamentally flawed” in that the standard for “identical” was derived by comparing a fingerprint image with itself. (Id. at 110, 111). It is a basic element of forensic science, Dr. Stoney testified, that no two representations of anything, be it a person’s signature or his fingerprints, will be exactly alike. (Id. at 111). Dr. Stoney testified that the experiments, therefore, do not “mean anything” with respect to the issue of whether fingerprints are unique or the question of what constitutes a sufficient basis to make an identification. (Id. at 113). This was made clear, Dr. Stoney testified, by Mr. Zeisig’s admission that fingerprints taken from identical fingers would not meet the definition of identical employed in the experiments. (Id. at 110). Dr. Stoney testified that the conclusions that have been drawn from these experiments, as expressed in the Lockheed Martin test report, are very misleading:
Q. [The first conclusion] states that: The probability of a non-mate rolled fingerprint being identical to any particular fingerprint is less than 1/10 to the 97th. Do you have any opinions as to the way that this conclusion is stated?
A. Well, if this is meant as a comment or to have anything to do with the notion of how common is it that people either have identical skin patterns on their finger or if it is meant to comment on the process of will different prints from the same person how common will they appear to be the same by either judgment that we are talking about here in terms of an examiner’s opinion, this is completely irrelevant to those determinations. By completely irrelevant to it, I mean that it doesn’t make it to this point in the exhibit where I was describing earlier as foundation for my opinion. If I were to use this as a foundation for my opinion. I would be grossly in error in making that statement.
It’s so fundamentally inaccurate to do that type, so as it sits here in this report, it causes me grave concern. I consider it very misleading. . . .
Q. [The second conclusion] states that: The probability of a minutia subset of a non-mate fingerprint being identical to a minutia subset of any particular fingerprint is less than 1/10 to the 27th for small numbers of minutiae, in this case, small means four, decreasing to less than one in 10 to the 27th for larger numbers of minutiae.
Do you have any comments on the way that this Experiment 2 Conclusion is stated?
A. Well, I have already given my opinion that I feel these experiments have — are well removed from any kind of forensic science, from describing any process in forensic science. I mean, so to the degree that a person would read that and think that it did or read into this, this is the probabilities of a comparison coming out wrong or this is a random probability of encountering a particular minutia configuration, that would — it does not present that to us.
Q. So one cannot properly read from this that the chance of two different people having four minutia points in common is 1 in 10 to the 27,000?
A. Absolutely not.
(Id. at 113-114; 118-119).
(ii) James Starrs
The defendant next called Professor James E. Starrs, a Professor of Forensic Sciences at The George Washington University, the Columbian School of Arts and Sciences, and a Professor of Law at The George Washington University Law School. (Ex. 50). Professor Starrs has held these positions for the past 30 years and is one of the founding fathers of the Forensic Sciences department at George Washington University. (Tr. 7/12 at 123, 124). Professor Starrs has published more than 80 articles in the forensic science area and he is a co-author of the leading text in the field: Scientific Evidence in Civil and Criminal Cases (4th ed. 1995). (Id. at 127, 128). Among his many notable honors and achievements, Professor Starrs is a Distinguished Fellow of the American Academy of Forensic Sciences, a distinction that only 24 other persons have received. (Id. at 133, 134). Professor Starrs testified that fingerprints have been one of his major areas of interest and study throughout his lengthy career. (Id. at 125, 126).
In the opinion of Professor Starrs, there is not a scientific basis for latent fingerprint identifications. (Id. at 151). Like Dr. Stoney, Professor Starrs pointed to the lack of testing that has been done in the field. (Id. at 151, 154). He pointed out, for example, that there has been no experimentation done to determine how many different people might have the same fragmentary print in common. (Id. at 156). Nor has testing been done to determine the different assessments that different examiners might make when presented with the same latent print to analyze. (Id. at 154). [Footnote 37]
Closely associated with the lack of testing, Professor Starrs also pointed to the failure of the fingerprint community to establish an error rate. (Id. at 157). He testified that it is “scientific balderdash” for the FBI to claim that fingerprints are infallible and that there is a zero error rate. (Id. at 161). (“The infallibility of fingerprinting is only as fallible or infallible as the one conducting the examination.”). Moreover, he stated that if the error rate is actually as high as is indicated by the 1995 latent print examiner proficiency exam, on which there was a 22% false identification rate, then “we have got more than a fly in the ointment, we have a bee hive in the ointment of fingerprint analysis.” (Id. at 157).
In opining that latent fingerprint identifications are not scientific, Professor Starrs also pointed to the lack of standards in the fingerprint field. (Id. at 161-165). He testified that the various fingerprint examiners that he spoke with in advance of the hearing advised him that they are continuing to use point standards rather than the ridgeology approach espoused by the government’s witnesses. (Id.). Moreover, Professor Starrs testified that there is no standard classification or terminology with respect to the basic ridge characteristics. (Id. at 161, 162). Accordingly, as seen in this case (see supra at 5), what some examiners might count as one characteristic, another examiner might consider to be two. (Id.).
Finally, Professor Starrs pointed to the fundamental lack of skepticism that pervades the fingerprint community. (Id. at 167-169). The essence of science, Professor Starrs testified, is the willingness to consider that your hypothesis may be wrong and to attempt through testing to falsify it. (Id. at 157, 169). Such a willingness is completely missing in the fingerprint field. (Id.). Rather, Professor Starrs testified, you have fingerprint examiners providing their subjective opinions of identification in the inherently unscientific terms of absolute certainty. (Id. at 152, 153).
(iii) Simon Cole
The last expert witness called by the defense was Dr. Simon A. Cole, a post-doctoral fellow at Rutgers University. (Tr. 7/13 at 8). Dr. Cole was awarded a Ph.D. in 1998 from Cornell University in Science and Technology Studies, an interdisciplinary field comprised of the disciplines of history, philosophy, sociology, anthropology and policy studies. (Id. at 5, 8). Dr. Cole testified that for the past four years his work has been devoted to a study of latent fingerprint examiners. (Id. at 8). This work consisted, among other things, of a complete review of the fingerprint community’s professional literature, interviews and an e-mail survey of latent fingerprint examiners, and field work at a police crime laboratory. (Id. at 7). Dr. Cole’s efforts have thus far culminated in two peer reviewed articles concerning the fingerprint profession: Simon A. Cole, Witnessing Identification: Latent Fingerprinting Evidence and Expert Knowledge, 28 Social Studies of Science 687, 701 (Oct.-Dec. 1998) (Ex. 30) [hereinafter Cole, Witnessing Identification] and Simon Cole, What Counts For Identity? The Historical Origins Of The Methodology Of Latent Fingerprint Identification, 12 Sci. In Context 1, 3-4 (Spring 1999) (Ex. 34) [hereinafter Cole, What Counts For Identity?].
Like the two experts who preceded him, Dr. Cole testified that latent fingerprint identifications are not scientific. (Tr. 7/13 at 21). In support of this opinion, Dr. Cole identified the lack of experimentation that has been done in the field, the failure of the fingerprint community to even attempt to establish an error rate, the lack of objective standards and the failure to engage in meaningful peer review. (Id. at 21-25). [Footnote 38]
In addition to opining that latent fingerprint identifications are not scientifically based, Dr. Cole testified about the two articles that he has published. As Dr. Cole explained, the first article, Witnessing Identification, addresses the question of why fingerprint identification has been so widely accepted despite the lack of scientific basis to support it. (Id. at 9). Dr. Cole testified that there are four primary explanations for this.
First, the fingerprint profession, from its earliest days, developed an “occupational norm of unanimity.” (Id. at 9-13). In marked contrast to other forms of expert knowledge, Dr. Cole testified, fingerprint examiners adopted the principle that they should not disagree over the same evidence. (Id. at 10). Dr. Cole testified that a current example of this norm of unanimity can be seen from the FBI survey in this case. As soon as the state law enforcement examiners discovered that the FBI had made an identification with respect to the prints that they had been asked to compare, the state examiners quickly changed their opinions with respect to whether a match could be made. (Id. at 11).
Second, Dr. Cole testified, the fingerprint community has successfully managed cases of error. (Id. at 13-14). When cases of error have become public, the error has been blamed on the “incompetence of the examiner rather than on the possibility that there was either (a) a problem with the methodology, or (b) that it would be possible for examiners to disagree over the print in question.” (Id. at 14). Again, Dr. Cole testified that the FBI survey provides a recent example of how “error” is dealt with by the fingerprint community. (Id. at 15). The examiners who did not make identifications were all said to have “screwed up” in one way or another. (Id.). The possibility that examiners could reasonably disagree as to whether the latent prints from this case are properly identifiable was not even considered.
As a third reason that latent fingerprint identification has come to be so widely accepted, Dr. Cole pointed to the lack of scrutiny that the field has been subjected to. (Id. at 15). Dr. Cole testified that questions, such as testing, error rate, standards, peer review, have not previously been asked, especially not by courts. (Id.).
Fourth, and finally, Dr. Cole testified, there has been a “lack of organized opposition.” (Id. at 15, 16). A group of “counter experts” never arose such as is commonly seen in other fields of expert knowledge, such as psychology or medicine. (Id. at 16). Dr. Cole testified that this is likely due to the fact that the only way to get real training in fingerprint identification is by being in law enforcement. (Id.).
As to the second article that he has published, What Counts for Identity, Dr. Cole testified that this article was intended to address the division in the fingerprint community between the ridgeologists, such as Sergeant Ashbaugh, and the “point counters,” who continue to make identifications on the basis of point standards. (Id. at 16, 17). Dr. Cole testified that there is an “old guard” of rank and file fingerprint examiners who reject “ridgeology” and adhere to the point counting methodology in which they were trained. (Id. at 18, 19). While it is not unusual to see such a “segmentation” in a particular field, Dr. Cole testified, what is unusual here is that this rift, though apparent in the technical literature, has not previously been explored in a court of law. (Id. at 46). The fingerprint community has somehow managed to “preserve a united front in the courtroom.” (Id. at 45, 46).
(iv) Marilyn Peterson
Finally, the defense also called Marilyn Peterson, an investigator employed by the Federal Court Division of the Defenders Association. Ms. Peterson testified that she conducted phone interviews of the state latent fingerprint examiners who had participated in the F.B.I. survey. (Tr. 7/12 at 4-7). Of the 37 that she spoke with, 23 reported using a point counting system, either as a matter of agency policy or as a personal standard. (Id. at 7).
Ms. Peterson further testified that she inquired of the examiners who did not make identifications of the prints at issue in this case, why they were unable to do so. (Id. at 5). Each of the eight examiners that she spoke with stated that they were “unable to come up with a sufficient number of points to be comfortable with making [a] positive ID.” (Id.).
In light of this evidence, and for all of the foregoing reasons, the government’s fingerprint identification evidence should be precluded absent a showing of general acceptance following a full Kelly hearing. In the alternative, the defense should be granted funds to retain, and should be permitted to present, expert witness evidence regarding the limitations of the government’s evidence.
Dated: February 10, 2000
Michael N. Burt
Deputy Public Defender
Footnote 22: The prosecutor also admits that “(d)efendant will be free to attack the fingerprint evidence before the jury.” (Opp. p. 4). Although the prosecutor is silent on the issue, his concession must mean that he does not oppose that aspect of defendant’s motion (point VII) requesting funds to hire fingerprint experts David Stoney, James Starrs, and Simon Cole.
The prosecutor is also silent on whether correct scientific procedures were used in this case (Defendant’s Motion, Point V), on whether the fingerprint evidence should be excluded under Evidence Code Sections 352 and 801(a) (Point VI), and on whether evidence concerning AFIS should be admissible (Point VII). In a separate opposition, the prosecutor does reveal the astonishing fact that the AFIS system “has an error rate of 70-75%.” (Opposition to Motion to Suppress). In view of this fact, evidence regarding an AFIS hit is obviously irrelevant and more prejudicial than probative.
Footnote 23: Indeed, some latent fingerprints are so distorted that in the words of government expert Don Zeisig, it is difficult to tell “whether it’s a fingerprint or a dead fly.” (Tr. 7/9 at 42).
Footnote 24: Ashbaugh has recognized that examiners in many countries outside the United States continue to employ point standards and that these standards are in fact required either by legislative or administrative rule. (Tr. 7/7 at 143, 144); (Gov’t Ex. 10 at 98).
Footnote 25: SWGFAST was formerly titled “TWGFAST”, which stood for Technical Working Group on Friction Ridge Analysis Study and Technology. (Id. at 43). The name change from “Technical” to “Scientific” was made after the defense in this case challenged the scientific basis for latent fingerprint identification evidence. (Id. at 44). Mr. German testified that this was just a coincidence. (Id. at 43).
Footnote 26: The Department of Justice is apparently not as convinced as Mr. German. As discussed further below, the Department, this past February, published a document entitled Forensic Sciences: Review of Status and Needs. (Ex. 25). In the section of this publication concerning latent fingerprint identifications, the Department states that “the theoretical basis for … individuality has had limited study and needs a great deal more work to demonstrate that physiological/developmental coding occurs for friction ridge detail, or that this detail is purely an accidental process of fetal development.” Id. at 29. This section goes on to state, “[s]tudies to date suggest more than an accidental basis for the development of print detail, but more work is needed.” Id.
Footnote 27: While Meagher testified that the FBI has long ago abandoned point standards as a requirement for identification, he nevertheless added that the FBI continues to employ a 12 point standard in terms of a “quality assurance issue.” (Tr. 7/8 at 104, 105). If an FBI examiner makes an identification when there is less than 12 Galton (level two) characteristics in common, there must be “close scrutiny by a supervisory examiner.” (Id. at 105).
Footnote 28: The defense, of course, does not agree with the government’s view that the examiners who did not make identifications were in error. Rather, it is our contention that these examiners correctly recognized that the latent prints at issue in Mitchell are not sufficiently clear so as to provide a reliable basis for identification.
Footnote 29: Meagher testified that he also requested the state law enforcement agencies to run the two latent prints at issue in this case through their Automated Fingerprint Identification Systems (“AFIS”). This issue is discussed infra at pages 49-51.
Footnote 30: Mr. Meagher also testified to his experience with a non-judicial use of latent fingerprint identification, specifically attempting to find latent fingerprints to help identify crash or disaster victims. (Id. at 59-61). However, Mr. Meagher did not testify how often this has been attempted or whether it has ever been successful.
Footnote 31: The reason that there was some variation in the scores that were obtained when fingerprint images were each compared with themselves is because fingerprints have varying amounts of minutia. (Id. at 82, 83). The more minutia that AFIS finds in common the higher the score that it will generate. (Id.). Accordingly, a fingerprint that contains a great deal of minutia, when compared with itself by AFIS, will generate an extremely high score. A fingerprint containing a smaller amount of minutia will also generate a very high score, but not quite as high a score as the fingerprint that contains a greater amount of minutia.
Footnote 32: This same admission was made by government expert, Edward German, who testified that different prints of the same finger will never be identical. (Tr. 7/8 at 22).
Footnote 33: In point of fact, the 1995 proficiency exam, which was designed, assembled and reviewed by representatives of the International Association of Identification, has been recognized as being “a more than satisfactory representation of real casework conditions.” David Grieve, Possession of Truth, 46 J. of Forensic Identification 521, 524 (1996) (Ex. 31). Mr. Grieve was, of course, designated as an expert witness by the government in this case, though he was ultimately not called as a witness.
Footnote 34: Dr. Budowle also testified to the non-judicial use of fingerprints as a means of identification in the area of biometrics. (Tr. 7/9 at 138-140). In so testifying, however, Dr. Budowle failed to recognize that biometrics involve the use of rolled fingerprints, not latent fingerprint fragments.
Footnote 35: The ACE-V methodology of ridgeology is not mutually inconsistent with a minimum point standard. A fingerprint examiner can analyze, compare, evaluate and verify while still adhering to an identification standard that requires a certain number of common ridge characteristics before an identification is considered sufficiently reliable.
Footnote 36: Dr. Stoney testified that the subjectiveness of the ACE-V process, testified to by the government’s witnesses, extends beyond the evaluation phase and into the analysis and comparison phases. (Id. at 97). As Dr. Stoney recognized, when Sergeant Ashbaugh demonstrated how he analyzes a latent print and compares it with a known exemplar he repeatedly made subjective judgments as to whether he could “see enough of a similarity between the two or enough of a difference between the two to either reject or accept it.” (Id. at 96).
Footnote 37: Like Dr. Stoney, Professor Starrs rejected the notion that the case work that examiners have conducted over the past eighty years is a substitute for scientific testing. (Id. at 155). As Professor Starrs correctly recognized, there is no way to determine how often those examiners have erred. (Id. at 155, 156).
Footnote 38: As Dr. Cole pointed out with respect to the issue of peer review, the only person who is publishing anything with respect to the “ridgeology” methodology is David Ashbaugh, and his work has never been critically evaluated. (Id. at 24, 25).