There's a new strategy in NT textual criticism that's been gaining ground: the Coherence-Based Genealogical Method (CBGM). I doubt it's possible to fully grasp CBGM unless you practice it. Having read some expositions/explanations, I'll offer my amateur impressions:
i) On the one hand, it takes greater advantage of computer technology by digitizing all our Greek NT MSS. And thanks to Dan Wallace going around the world photographing Greek MSS, we have more of them than ever.
Computers make it possible to do exhaustive comparison and contrast (a sketch of what that looks like follows these points).
ii) CBGM makes the claim that it can recover the readings of lost MSS. Tantalizing if true.
iii) It also claims that computer analysis has a way of getting behind the extant MSS to expose their genealogical relationships. Useful if true.
iv) On the other hand, computers can't think. You only have to use autocorrect to see that computers have no understanding!
So computers can't replace human judgment and human intelligence. That's why I still put a lot of stock in the intuitive judgment of very talented textual critics like the late Bruce Metzger.
It's like gifted, experienced physicians who can just tell what's wrong with a patient, without doing any tests.
Or it's like the difference between computer chess and human chess players: they employ fundamentally different strategies. I once read the comparison that humans have an analogue approach.
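As an aside, here's roughly what that exhaustive comparison amounts to in practice. This is a minimal sketch with made-up sigla and toy readings, not the ECM's actual software: in CBGM terms, the agreement percentages it computes between witnesses are the raw material of "pregenealogical coherence".

```python
# Minimal sketch of exhaustive pairwise comparison (hypothetical sigla
# and toy data, not the ECM's actual software). In CBGM, the percentage
# agreement between two witnesses across the variation units where both
# are extant is their "pregenealogical coherence".
from itertools import combinations

# Each witness maps variation units to the reading it attests there.
witnesses = {
    "01": {"u1": "a", "u2": "b", "u3": "a", "u4": "a"},
    "03": {"u1": "a", "u2": "a", "u3": "a", "u4": "b"},
    "05": {"u1": "b", "u2": "b", "u3": "a", "u4": "b"},
}

def agreement(w1, w2):
    """Percent agreement over the variation units both witnesses attest."""
    shared = set(w1) & set(w2)
    agree = sum(1 for u in shared if w1[u] == w2[u])
    return 100.0 * agree / len(shared)

# Exhaustive comparison: every pair of witnesses, every shared unit.
for a, b in combinations(witnesses, 2):
    print(a, b, f"{agreement(witnesses[a], witnesses[b]):.1f}%")
```

The point of the sketch is that this stage is purely mechanical counting; no judgment about which witness is better enters in yet.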
I had some lingering questions about CBGM, so I wrote a leading exponent, who was kind enough to respond. Here's the exchange:
I was reading your exposition/explanation of CBGM. I had a couple of questions:
1. From what I can tell, the method is essentially quantitative rather than qualitative. All our extant Greek NT MSS are digitized, then fed into computers to generate a comprehensive comparative analysis of textual variants. Correct me if I'm wrong.
i) Assuming that's correct, all MSS are treated equally. But can't seasoned NT critics just tell that some MSS are superior to others? Does it not adulterate the analysis when superior and inferior copies are treated alike and fed indiscriminately into the pool?
ii) Likewise, doesn't all that duplication skew the analysis, given that some groups constitute a much larger sample simply because more copies survive from a later date, or from a particular urban center? In other words, isn't that an arbitrarily disproportionate sample that isn't necessarily representative?
2. I'm not entirely clear on the new goal of textual criticism. In traditional textual criticism, the goal was to approximate as near as possible the Ur-text. But in CBGM, the goal is to approximate the initial text. To borrow a comparison from evolutionary biology, that seems to be analogous to tracing extant readings back to their "last universal common ancestor" (LUCA)–like a hypothetical 2C copy from which all extant readings derive.
So that's the convergent point in terms of direct evidence. The extant MSS can only take us back so far.
But even if that's the case, shouldn't we expect a high degree of continuity leading up to the initial text? Scribes copy a preexisting copy, so there's a series of preceding copies with few substantive changes. Doesn't that create a strong presumption of textual continuity? The initial text still points further back, to a text prior to itself. So doesn't the goal remain the closest realistic approximation to the Ur-text? Isn't the initial text quite likely to be nearly identical to the Ur-text (aside from minor scribal blunders)?
Hi Steve,
Thanks for the questions. In brief:
1. Actually, the method is both quantitative and qualitative. Pregenealogical coherence is purely quantitative but, since genealogical coherence takes into account the editors' own decisions, it is qualitative. At the point of pregenealogical coherence all witnesses are treated without privilege, but that is not the case with genealogical coherence where certain witnesses are further up in the textual flow than others. At this point it becomes clear that some witnesses are indeed better—according to the user's own judgment—than others.
2. You are not the only one who is unclear on the new goal! The editors of the published Editio critica maior volumes note that the initial text is the text from which the extant tradition developed. This definition allows for several possible referents which could be the author's original or may not be, depending on the situation. In the case of Acts and the Catholic Letters, they explicitly say they think there is no reason to hypothesize a gap between their reconstructed initial text and the text of the authors. But that may not be the case throughout the NT with all the ECM editors. Only time will tell as they complete more volumes. Does that make sense?
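To make the distinction in that reply concrete: pregenealogical coherence is the raw agreement computed in the earlier sketch, while genealogical coherence folds the editors' own decisions back in. Here's a hedged sketch, again with made-up data and names rather than the ECM's actual implementation, of how a judgment about which reading is prior at each variation unit makes one witness a "potential ancestor" of another:

```python
# Hedged sketch (toy data and hypothetical names, not the ECM software).
# Genealogical coherence is qualitative because it depends on the
# editor's local decisions about which reading is prior at each unit.

w01 = {"u1": "a", "u2": "b", "u3": "a", "u4": "a"}
w05 = {"u1": "b", "u2": "b", "u3": "a", "u4": "b"}
prior = {"u1": "a", "u2": "a", "u3": "a", "u4": "a"}  # editorial judgments

def potential_ancestor(w1, w2, prior):
    """True if, at the units where the witnesses disagree, w1 attests the
    reading judged prior more often than w2 does -- i.e., w1 sits further
    up the textual flow, per the user's own judgment."""
    w1_prior = w2_prior = 0
    for u in set(w1) & set(w2):
        if w1[u] != w2[u]:
            if w1[u] == prior.get(u):
                w1_prior += 1
            elif w2[u] == prior.get(u):
                w2_prior += 1
    return w1_prior > w2_prior

print(potential_ancestor(w01, w05, prior))  # True with these judgments
```

Change the judgments in `prior` and the direction of flow can flip, which is why the same quantitative data can yield different genealogies for different editors.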
///several possible referents which could be the author's original or may not be, depending on the situation. In the case of Acts and the Catholic Letters, they explicitly say they think there is no reason to hypothesize a gap between their reconstructed initial text and the text of the authors. ///
Is he saying here that the theory IS that the original texts of Acts and these Epistles, and the hypothetical "initial text from which the extant textual tradition developed," are one and the same? If so, that could be very helpful to know. (And what about the Gospels, then, or Paul's letters?)