The Claremont Shakespeare Clinic
In the fall of 1987, in Claremont, California, a team of eight undergraduates from the Claremont Colleges met to tackle two questions: who was the True Shakespeare, and what did Shakespeare, whoever he was, write? The group was called the Shakespeare Clinic. It normally met at Claremont McKenna College but involved students and faculty from the four neighboring Claremont Colleges and was modeled after the engineering clinics pioneered by one of these, Harvey Mudd College. The students reported to a client, the Shakespeare Authorship Roundtable, a group of dedicated California authorship buffs, and drew on the resources of the Francis Bacon Foundation Library, then located in Claremont. Successive generations of students, 37 in all, continued to meet almost every year until May 1994. The Clinic was funded initially by the Sloan Foundation's New Liberal Arts Program and, later, by an Irvine Foundation Practicum grant to Claremont McKenna College.
The students' assignment was to use computers to determine which, if any, of the 58 Shakespeare claimants listed in The Reader's Encyclopedia of Shakespeare actually matched Shakespeare's accepted works.
Both of the authors of this book [on the Clinic's findings] were the Clinic students' advisors, but not their bosses. The students organized their own research, picked their own leaders, chose and ran their own tests, and reported directly to the Shakespeare Authorship Roundtable, often with a Chorus to provide theatrical embellishment. One of us, Elliott, gathered and edited texts for them and did most of the follow-on development and writing-up of results after each wave of students had reported their findings and gone home. The other, Valenza, was the math and computer mentor, the one who developed and programmed our most high-tech computer tests, such as the modal, semantic-bucketing, and Thisted-Efron tests. Elliott was the original organizer of the Clinic.
A great source of inspiration for the Clinic was Elliott's father, William Y. Elliott (1896-1979). Elliott père was a man of brilliance, commanding presence, and glowing memory. He was a poet, a Rhodes Scholar, a senior professor of government at Harvard, counselor to six Presidents, mentor to world notables -- and a firm and outspoken believer that the True Shakespeare was not William Shakspere of Stratford but Edward de Vere, Seventeenth Earl of Oxford. To this day Elliott père is probably the most prominent academic ever to take the Oxfordian position firmly and publicly. Elliott fils, though neither a card-carrying Oxfordian nor a Shakespeare scholar of any kind (his home field is American Constitutional Law), had an amateur's knowledge of Oxfordian arguments and was more inclined than most literature-department Shakespeare regulars to consider them open to debate and worthy of further exploration.
Two events of the mid-1980s rekindled this inclination: Gary Taylor's announcement in 1985 that he had discovered in Shall I Die? a new Shakespeare poem (Lelyfeld, 1985), and Brad Efron's and Ronald Thisted's announcement in 1986 that their statistical analysis showed clearly that Shall I Die? could be by Shakespeare, and that other comparable poems by Donne, Marlowe, and Jonson could not (Kolata, 1986; Thisted and Efron, 1987). Elliott, intrigued, contacted Thisted and asked whether the same novel methodology could be used to show which of the major claimants' works actually matched Shakespeare. Could some of the great Authorship questions be settled simply by pressing a couple of buttons? Thisted was confident that they could, and that his program was statistically robust. It was certainly eminently pedigreed, having been adapted by Efron, one of the five leading statisticians then or now active, from a technique devised by Sir Ronald Fisher, the greatest statistician of his day, to predict the discovery of new butterfly species (Fisher, 1943). But it required more legwork than Thisted could devote to the question. The works of every claimant had to be gathered, put on disk, and the frequency of every rare word counted and entered into his makeshift Unix-VAX-based program.
If it had been 1983, that would have ended the matter. Machine-readable texts and frequency counters were available then, but they were the province of mainframe power users like Thisted, who were normally too busy to bother with frivolities like authorship identification. Only after 1984, when $2,500 desktop computers first showed up like seven-league boots in the offices of ordinary mortals like Elliott, did such undertakings start to look conveniently doable. So equipped, Elliott volunteered to do the legwork.
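The rare-word machinery behind the Fisher/Efron-Thisted approach can be sketched in a few lines. The function below is our own simplified illustration, not Thisted's VAX program: from a baseline corpus it builds the word-frequency spectrum (how many word types occur exactly r times) and applies the alternating series Fisher derived for unseen butterfly species to predict how many never-before-seen word types a fresh sample by the same author should contain. A disputed text that produces far more, or far fewer, new words than predicted fits the baseline author poorly.

```python
from collections import Counter

def expected_new_words(baseline_tokens, sample_size):
    """Predict how many word types absent from the baseline should
    appear in a fresh sample_size-token text by the same author.

    Simplified Fisher/Efron-Thisted estimate: with n_r = number of
    word types occurring exactly r times in the baseline, and
    t = sample_size / baseline_size, the expectation is
        sum over r of (-1)^(r+1) * t^r * n_r.
    """
    counts = Counter(baseline_tokens)    # word -> occurrences
    spectrum = Counter(counts.values())  # r -> n_r
    t = sample_size / len(baseline_tokens)
    return sum((-1) ** (r + 1) * t ** r * n_r
               for r, n_r in sorted(spectrum.items()))

# Toy check: a four-token baseline with two hapaxes and one doubleton
# predicts exactly one new word in an equal-sized fresh sample.
prediction = expected_new_words(["a", "a", "b", "c"], 4)
```

In practice the texts involved ran to many thousands of words, and the real tests compared predicted against observed counts of new and rare words; the sketch above shows only the core arithmetic.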
Today we know that most of the suppositions that originally inspired the Clinic were wrong, and the seven-league boots of the day seem pitifully inadequate by current standards. No one now thinks that Shall I Die? could be by Shakespeare. Thisted and Efron's proof to the contrary had fatal methodological weaknesses for sample texts as short as Shall I Die? There turned out to be 58 or more claimants, not the half-dozen or so we originally expected to deal with. The Clinic has made a strong showing that lit-department Stratfordian regulars were right about Shakespeare and that Elliott père and the anti-Stratfordians were wrong. If such things could be settled by evidence, we believe that many big authorship questions should be much, much closer to being settled than they were in 1986, thanks largely, but not exclusively, to our students' work. But we were colossally wrong to suppose that they could do the job by pressing a couple of buttons. And, for the few who remember them at all, our amazing seven-league boots of 1986 are now quaint fossils, long ago junked in favor of newer, better tools and toys. We were not wrong, however, in supposing that those quaint old tools and toys and their not-so-quaint successors have made a day-and-night difference in the doability of projects like ours.
1.2. The Clinic's organization and resources

The Clinic itself arose from an odd combination of opportunity, trial, and error. By the fall of 1986 Elliott's inquiries had revealed that there were many more than a half-dozen claimants and that much more begging, buying, editing, and typing of Shakespeare claimant texts would be needed than he could afford to do with his own resources. Only one electronic Shakespeare existed, not available for public use or purchase at any price. Two more were expected to appear in 1988, purchasable for hundreds of dollars. Casting about for texts and analytical programs, Elliott turned to the Sloan Foundation's New Liberal Arts Program, which had just awarded a large grant to the Claremont Colleges to support the use of computers in the humanities. Elliott was far from an authority on either computers or the humanities, but mainline humanities professors in Claremont had little use for computers, and mainline computer professors had little use for the humanities, so there was not a crush of more qualified applicants. The Claremont Sloan committee, however, would fund text purchases only in support of a student clinic organized to apply computers to the humanities. Elliott therefore conquered his qualms about rushing in where more qualified scholars feared to tread, and applied for and got a Sloan Clinic grant. He started rounding up resources: students, a client, and a co-advisor, Hank Krieger, a statistician from Harvey Mudd College experienced in advising that college's famous math and engineering clinics.
HMC clinics were the model for Sloan clinics. These typically involved a large industrial client funding a team of students attacking a technical, practical problem, such as optimizing someone's missile defenses or keeping Disneyland cars from bumping into each other, with the help of a faculty advisor like Krieger. More often than not, the students, though well trained in basic technical subjects, had to solve the chosen problem from scratch and to learn on the job whatever new skills seemed needed to solve it. "The merely difficult we can do immediately," ran the brash HMC motto; "the impossible takes longer." It seemed like a suitable mindset for the Shakespeare Clinic. If bright college students could use their computers to find new ways to optimize missile defenses, maybe they could also find new ways to search for Shakespeare. In this spirit, the first Shakespeare Clinic opened in the fall of 1987 with winged heels, "as [Yankee] Mercuries," for "now [sat] Expectation in the air" (H5 2.00.7-8).
1.3. The Clinic's years in action, 1987-1994
Years of trial and error, and many highs and lows, ensued. Our Yankee Mercuries' heels were not as winged as initially hoped, and the brash Harvey Mudd motto, "The merely difficult we can do immediately," was spectacularly inappropriate for a big authorship-testing program run by neophytes who had to create their own text archives. Neither of the imminent electronic Shakespeare texts became available till late 1988. Our claimant text development -- gathering, editing, and commonizing spelling -- was slow and was frustrated by the accidental erasure, without backup, of all the first year's work. Student efforts to adapt the Thisted-Efron tests to local platforms were slow, idiosyncratic, and full of bottlenecks. Their programs were often workable only by the student who did the programming -- who would then graduate and leave the job to be redone from scratch the following year. When Valenza, a computer scientist experienced with signal processing and rocket launches, newly hired at Claremont McKenna College in 1988 and immediately recruited as a Clinic advisor, reprogrammed these tests, it was a high: anyone could make his programs work. But the high shortly became a deep low when he showed that the Clinic's new tests were useless for samples as small as Shall I Die? and for poem baselines as small as Shakespeare's (Valenza, 1990). The Thisted-Efron tests on poems had been the cornerstone of the Clinic's early work; Valenza's validation testing, for practical purposes, demolished the first two years of the Clinic's findings.
When the students turned to other tests, including Valenza's new, high-tech modal test, they soon showed, all but conclusively, that Elliott's family candidate, the Earl of Oxford, did not match Shakespeare at all. This was a low indeed for Elliott, because Oxford, under the prior Thisted-Efron testing, had seemed one of the most promising of the candidates tested. More than once, especially in the early years, it seemed that we had gotten in over our heads.
But there were many highs, too: watching the students perform before tough, knowledgeable audiences; finding and developing new troves of texts and tests; finding and fixing testing errors; meeting and comparing notes with other authorship buffs, both amateur and professional. Prominent among the professionals was Donald Foster, a professor of English at Vassar College, now famous for his Shakespeare ascription of a poem, A Funeral Elegy by W.S., discovered in the Bodleian Library, Oxford (Foster, 1989, 1996). Foster thought our efforts to shorten the Shakespeare claimant list were misconceived and embarrassing, since no Shakespeare professional considered any non-Stratfordian claim open to rational examination or debate. In 1996, when we were about to publish evidence contrary to his Elegy ascription, he became our most implacable critic and censor. Nevertheless, during the life of the Clinic, he was a cordial and unstinting counselor, the only lit-department regular to show deep, sustained interest in our work. We learned much about authorship studies from him, and he, who was also experimenting with novel identification tests, learned something from us.
A final set of highs was reporting our results. The students more than held their own in year-end reports, usually delivered on, or conveniently close to, Shakespeare's birthday, often with theatrical embellishment. In two years, 1988 and 1990, they became worldwide media sensations, appearing on ABC, NBC, and the Japan and Korean broadcasting systems. Their work was covered in the Los Angeles Times, the Washington Post, several British papers, and more than a hundred other papers at home and abroad. Each year, having delivered their final reports, the students would go home, leaving it to us to sort out, develop, and write up their results.
In 1988 and 1989, the initial findings, based mostly on our new Thisted-Efron tests, seemed favorable to the Earl of Oxford and were much acclaimed by Oxfordians. By year-end 1990, our Oxford-favoring Thisted-Efron tests seemed to be discredited for poems (Valenza, 1990), and we had to junk them, like Don Quixote's helmet visor, which he tested in Book One of Don Quixote by smashing it to pieces with his own sword. We replaced them with other, better tests, which seemed to rule out all of the leading poet claimants, including the three frontrunners with large, organized followings: Oxford, Bacon, and Marlowe (Elliott and Valenza, 1991). These tests drew a brief, intense barrage of criticism from Oxfordians, Marlovians, and others but have turned out, on re-examination, to be remarkably robust (see, for example, Elliott and Valenza, 1991, 1996, pp. 198, 203, 208-09).
The students tackled poet claimants first, for two reasons: four-fifths of the testable claimants were poets, while fewer than half were playwrights; and the pertinent poem baseline texts, both for Shakespeare and for the claimants, were a twentieth the length of the play baselines and required much less editing. They did not require the stripping of speech headings and stage directions, nor the separation of verse from prose. They could be completed and tested much more quickly than full play baselines, and they permitted the selective use of slow, manual tests, which the much larger play baselines did not. By focusing on poets only, normally in 3,000-word blocks, the students had eliminated 29 of 37 testable claimants by 1990 (Elliott and Valenza, 1991), including all three of the frontrunners: Bacon, Oxford, and Marlowe.
The much larger play baseline was not fully edited till 1993-94 and, for the most part, could only be conveniently analyzed with fast, computerized tests. But analyzing a huge 800,000-word Shakespeare play baseline in large, 15,000-30,000-word, play-sized blocks proved to have many advantages over analyzing much shorter blocks from our much smaller poem baseline. Large blocks average out more variation than small ones and permit the valid use of many tests that would be lost in the noise of smaller blocks (Chapter 5 below). Our large Shakespeare play baseline also permitted the full use of our three Thisted-Efron tests, which our small Shakespeare poem baseline did not (Section 5.1). Once we had a full play baseline, we discovered that two of the three play-validated Thisted-Efron tests were usable on poems after all. In all, we validated 51 tests for plays and 14 for poems, a total of 54 tests, counting the three that were validated for poems but not for plays.
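The claim that large blocks average out more variation than small ones is just the square-root law for counting statistics, and it is easy to demonstrate by simulation. The sketch below is our illustration, with invented rates and block sizes rather than actual baseline data: it tracks a stylistic feature occurring at a fixed rate per word and compares the block-to-block spread of observed rates for poem-sized versus play-sized blocks.

```python
import random

random.seed(0)

def rate_spread(block_size, n_blocks=200, p=0.02):
    """Standard deviation of per-block feature rates when a feature
    occurs independently with probability p per word."""
    rates = []
    for _ in range(n_blocks):
        hits = sum(random.random() < p for _ in range(block_size))
        rates.append(hits / block_size)
    mean = sum(rates) / n_blocks
    var = sum((r - mean) ** 2 for r in rates) / n_blocks
    return var ** 0.5

small = rate_spread(3_000)    # poem-sized blocks
large = rate_spread(20_000)   # play-sized blocks
# The spread shrinks roughly as 1/sqrt(block size): about
# sqrt(20000/3000) ~ 2.6x tighter for play-sized blocks, so a test
# whose author-to-author signal is smaller than the poem-block noise
# can still separate authors cleanly at play-block scale.
```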
The 51 play tests seem to us particularly robust. Since there were too many play tests to fit onto a single spreadsheet, we divided them into three rounds: one devised and performed mostly by students in 1993-94, two devised and performed mostly by Elliott after the last clinic closed down in 1994. Not only did we get 100% reliability from all three test rounds combined in accepting core Shakespeare and rejecting non-Shakespeare, but we also got 100% reliability in accepting Shakespeare from each round separately, and better than 95% reliability, in most cases, in rejecting non-Shakespeare. This heavy redundancy -- overkill, if you wish -- explains why, though we do expect some future erosion of our findings, we do not consider it particularly threatening to our overall conclusions. You could knock out an entire round, a third of our 51 play tests, and still have zero false positives and zero false negatives. You could even knock out two entire rounds and still have zero false positives and not much more than five percent false negatives. Actual erosion discovered since our 1996/97 final report has been less than a tenth of one percent, nowhere near enough to threaten our main conclusions.
The last student clinic closed down in May 1994, leaving its advisors to sort out, further develop, write up, and circulate its findings to knowledgeable critics. Many drafts ensued, with copies sent to our correspondents, and many comments received and discussed. We also published summaries of our findings in Computers and the Humanities, Chance, and the Shakespeare Newsletter and presented our data and methods to a plenary session of the North American Classification Society, a group of professional statisticians. We received many valuable comments. By April 1996, Computers and the Humanities had accepted our final report for publication. We concluded that none of the 37 testable claimants, and none of the 30-odd plays and poems of the Shakespeare Apocrypha, matched Shakespeare.
1.4. Aftermath: A Funeral Elegy and other controversies

The April 1996 issue of CHum, containing our final report, was delayed for several months. When it finally appeared in January 1997, we were surprised to find that our milestone had a millstone attached. CHum had repackaged it as a debate and paired it with a sweeping, scathing denunciation by our old ally Donald Foster. Foster by that time had concluded that the Funeral Elegy was Shakespeare's "beyond all reasonable doubt" and had gotten it accepted as possibly Shakespeare's in all three new American editions of Shakespeare's Complete Works. He no longer took kindly to our evidence to the contrary but dismissed it categorically as "idiocy," "madness," and "foul vapor" (Foster, 1996a, 1998).
We shall not review Foster's arguments here, nor our counterarguments, though we shall refer to some of both, as necessary, in our discussion of specific tests. For a full account of the debate, interested readers may consult Foster (1996a, 1998) and our responses (1998, forthcoming). But the massive condemnation by a doyen of authorship studies did make certain that our final report was not final at all. We had to respond. CHum's new editors were kind enough to let us do so with firmness and to make whatever corrections seemed necessary. The resultant corrections turned out to be trivial, changing our overall results by less than a tenth of a percent (Elliott and Valenza, 1998, forthcoming). We still got 100% acceptance of Shakespeare's core plays and 100% rejection of everyone else's plays.
Foster also forced us to take a closer look at the Elegy, which had gotten only passing scrutiny in our final report. It seemed to us unlikely that our seemingly strong negative evidence and his seemingly strong positive evidence could both be right, let alone right beyond all reasonable doubt, as Foster was claiming. We summarized our counterevidence in a letter to him (Elliott, 1996a) and later expanded it into an article in the Shakespeare Quarterly (Elliott and Valenza, 1997). Still later, in response to a draft of Brian Vickers' The Shakespeare Authorship Question (forthcoming), we compared the Elegy both to Shakespeare's poem corpus and to John Ford's. The Elegy is indeed loaded with smoking-gun features shared with Shakespeare, but they are not real smoking guns because they are also abundantly shared with Ford. More important, the Elegy is heavily loaded with features not shared with Shakespeare, each one, in our view, a silver bullet through the body of the Shakespeare ascription (Chapter 4, Section 1 below). One or two such silver bullets are not enough to kill an ascription, since five of our 70 baseline Shakespeare verse blocks had two rejections. But three or four rejections are enough to raise serious questions, and five or more are enough to make us consider summoning a Shakespeare regular to administer last rites to the ascription at issue. Counting firm rejections only, the Elegy flunks 17 of 35 validated Shakespeare tests and only two of 30 validated Ford tests. If the traits of both authors are Poisson-distributed -- and they seem to be so on many of our tests -- you can figure from these test results the odds that the Elegy's scores could have arisen by chance from one corpus or the other. These odds, by the most conservative reckoning, turn out to be 3,000 times worse for Shakespeare than they are for Ford (Elliott and Valenza, 2000a; Chapter 9 below).
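The flavor of that reckoning can be conveyed with a toy calculation. The sketch below is not the actual method: it treats each test as an independent coin flip and assumes a genuine block flunks any validated test about 5% of the time (the validation standard mentioned above, here an assumption), then compares binomial tail probabilities for the Elegy's 17-of-35 rejections against Shakespeare and 2-of-30 against Ford. Even this crude version puts the odds enormously further against Shakespeare than against Ford.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of flunking at
    least k of n tests if each is flunked with probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

P_FLUNK = 0.05  # assumed per-test false-rejection rate for the true author

p_if_shakespeare = binom_tail(35, 17, P_FLUNK)  # Elegy: 17 of 35 rejections
p_if_ford = binom_tail(30, 2, P_FLUNK)          # Elegy: 2 of 30 rejections
odds_ratio = p_if_ford / p_if_shakespeare
# Under these toy assumptions the ratio comes out many orders of
# magnitude beyond the conservative published figure of 3,000.
```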
In retrospect, our approach had many gaps because we spent little time retracing old paths and much time blazing new ones. We did consult Foster and others about old-tech, external-evidence issues, such as paleography, foul papers, memorial reconstructions, and commonly accepted stylistic quirks of authors, editors, and compositors. We also looked at every new-tech, internal-evidence author-identification methodology we could find, such as those mentioned in David Holmes's surveys of 1985 and 1993. But typically we tackled the problem at hand directly and ab initio, like a Clinic, with much less formal attention to the work of others, old-tech or new, than you would expect in an academic paper. These omissions came not just from our lack of training in the old, slow, conventional external analysis, still less from any lack of interest in established quantitative techniques. They came also from a belief that the chips of our new-tech tests were best let fall where they may, rather than closely watched to keep them in line with whatever the established scholarly consensus seemed to require. From the standpoint of scholarly consensus, after all, we were already deep in the wilderness when we started. Had we been truly deferential to consensus, we would never have taken on the Clinic's defining questions and certainly would not have used computers to find the answers.
Hence, we developed a broad menu of testing alternatives, many of them graded by anticipated speed and yield (that is, how many non-Shakespeare rejections a test would produce while still saying "could be" to 95% of our Shakespeare baseline). The students picked whichever ones they thought could best help them answer the questions in the time available. They generally favored new, fast, quantitative tests over old, slow, qualitative ones, which often seemed inconclusive for Shakespeare even in the hands of people who had spent years mastering them. If you have only a semester or a year to learn and try out your tests, and you have a new set of seven-league boots at hand, there are good reasons to prefer fast new tests that use the boots to slow old ones that do not.
All our tests, especially the novel ones, involved a lot of trial and error. Many fizzled, both novel and otherwise. But many did not, and the errors grew steadily smaller with each succeeding clinic and each succeeding written study.
Foster's scorching denunciations of the Clinic's work (his 1996, 1998) had many bad effects, but some good ones as well, because you can often learn much from your critics that you don't learn from your fans. They did force us to do a thorough re-evaluation of our own work. Fortunately, unlike Don Quixote's first visor, our work survived with barely a scratch, while whatever Foster was smiting it with is in deep disrepair (our 1998, forthcoming). Foster also forced us to do further work on the Elegy, which brought our work to a broader, more literary audience than CHum's (our 1997). The new work generated many new insights about the influence of baseline size, the measurement of overall distance between a sample text and alternative baselines, and the problems of smoking-gun authorship indicators. It also, in our view, left the Elegy's Shakespeare ascription untenable and showed John Ford to be a much more likely author (our 2000). The new work is interesting enough, extensive enough, and scattered enough among different issues of different scholarly journals to make us believe that a book is now needed to pull it all together.
We believe that our work has produced many new insights and points of reference, most of them more complementary than contradictory to the old points of scholarly reference. Most of what scholars thought was Shakespeare's in 1987 still looks like the work of one person, not a committee, and does not look like the work of any of the 37 claimants we could test. Admittedly, we haven't actually proved that the Stratford man did it, but, in our view, nothing is either true or false but alternatives make it so. Who else could it now be? Also, most of what scholars thought was not by Shakespeare in 1987 does not look like Shakespeare by our tests. And most of what scholars thought was in doubt in 1987 is still in doubt now (Chapter 3). Our main exception to the 1987 consensus is A Lover's Complaint, which falls outside our Shakespeare profile on six of fourteen tests. Of 70 other Shakespeare verse blocks, none had more than two rejections. If LC is Shakespeare's, it is a very odd outlier. A better guess, we think, though we are still conservatively calling it a guess, is that LC is not by Shakespeare.
Of the three Shakespeare poem discoveries of the 1980s, two have been discarded without much input from us: Shall I Die? and As This Is Endless. Both look improbable by at least one of our validated Shakespeare tests, but both are far too short for most of our 14 standard poem tests to produce meaningful results. The third, the Funeral Elegy, is long enough to take 36 valid Shakespeare tests. It flunks 24 of them, 17 by a wide margin, and in our view should not be ascribed to Shakespeare at all. We don't see a scholarly consensus yet on FE -- American editors still seem to think it could be Shakespeare's; British editors do not. But hardly anyone besides Donald Foster thought it was Shakespeare's when we began in 1987, so our general proposition stands.
Titus Andronicus and Henry VI, Part Three, two plays commonly (but tentatively) thought to be Shakespeare's in 1987, show what look to us like clear signs of co-authorship with another writer. Edward III, commonly thought not to be by Shakespeare in 1987, has been ascribed to Shakespeare in 1990s American editions. We consider that ascription about as doubtful as that of A Lover's Complaint (Chapter 9). Almost all of our other ascriptions seem in line with the 1987 consensus (Chapter 3).
We draw several lessons from our Shakespeare project (Chapters 4.1 and 12).
1) Postmodernists to the contrary, authorship does matter, especially when the author is a grand master like Shakespeare. We are at one with most of our critics and fans, and with most of the public, on this issue.
2) No authorship test that we know of is perfect, that is, immune to false positives and false negatives in general application. Talk of an author's "thumbprints" can make sense if you are comparing one author with another, James Madison to Alexander Hamilton, say, or Shakespeare to Ford. Some of these preferred to say "while," others "whilst" or "whiles," and their patterns of preference among these alternatives help make a thumbprint of sorts. But such talk is generally misleading when comparing an author to everyone else.
3) With imperfect tests, negative evidence is normally a hundred times more persuasive than positive evidence. Silver bullets can disprove an ascription far more conclusively than smoking guns can prove it. Only in fairy tales is having a size-five foot strong proof that you are Cinderella. In practice, you could just as well be Little Miss Muffet or Tiny Tim. But, if you are a size-ten, your claim to be Cinderella is in big trouble.
4) Some idées fixes can have value even if mistaken. Many of the key starting assumptions of the Shakespeare Clinic, like those of other major authorship study projects, turned out to be wrong. But we learned much from our mistakes, fixed those we could find, and made many interesting and worthwhile discoveries as side benefits in the process. Who would have labored for ten years, as we did, to shorten the claimant list, if convinced from the start that none of the claimants had a chance? Who would have labored ten years on the Funeral Elegy, as Don Foster did, if he had known at the start that it was Ford's, not Shakespeare's? Of course, it helps if you actually do learn from your mistaken idées fixes. If you don't, the ounce of error, though initially fruitful in effect, is not necessarily made more fruitful when it becomes a ton.
5) Shakespeare's known writing is consistent enough, and different enough from that of his contemporaries, to distinguish him from everyone else we tested. If Shakespeare's works were written by a committee, as some anti-Stratfordians claim, the committee was astonishingly regular and predictable in its range of stylistic habits. If they were written by any of the claimants we tested, or by the same person who wrote any of the apocryphal plays and poems we tested, that person was astonishingly irregular and unpredictable.
6) As discussed above, our work supplements, not supersedes, prior authorship studies, and is generally consistent with most of them.
7) Due diligence in authorship studies has always required some attention
to stylometric numbercrunching: finding, counting, and analyzing an author's
measurable traits. From the 1750s to the 1960s, literary sleuths counted
verse endings, function words, and other measurables by hand and added
to our understanding of Shakespeare's contributions to Henry VIII
and Two Noble Kinsmen, much as their nineteenth-century statistician
counterparts did for our understanding of public hygiene. The latter moved
Florence Nightingale to speak of statistical study as "a religious service."
Since the 1960s, and particularly since the 1980s, computers have expanded
our measuring power a hundredfold. It is still true that counting and crunching
cannot solve every problem and that both counting and crunching can be,
and have been, frightfully abused and oversold. It is also true that many
literary people of Whitmanesque sensitivities find it surprising and distressing
that anyone would spend even one second, let alone more than a decade,
crunching poems and plays when one could be reading them instead. We feel
their pain. It does seem more like grinding up butterflies to find matching
nucleotides than like looking up at them in perfect silence, like the stars.
But grinding and matching nucleotides do tell us things we could otherwise
never know about butterflies - how closely one population is related to
another, how swiftly each population mutates, and so on. No lepidopterist
could make a competent genetic-drift assessment without reference to it.
Likewise, collecting and crunching poems and plays tell us things we could
never know without statistics. Literary people should certainly not stop
reading poems or poring over old papers in rare-books libraries any more
than butterfly people should stop observing live butterflies in their natural
habitats. But they should be well aware that there is much of interest
going on beneath the surface, and that computers are beginning to make
much of it manifest, as Shakespeare put it, to "make the truth appear where
it seems hid" (Measure for Measure, 5.01.66). If authorship matters
- and it surely does -- statistics have to matter, too.
Induction to Shakespeare Clinic: [Truth] O, let the vile world end,
[Truth] The slipper glass doth not befit the foot,