Is anyone testing the methodologies being used?
Various methodologies are being used to try to determine what did or did not happen in the past. The question often centers on whether a passage, in scripture or elsewhere, existed originally or was interpolated, partially or wholly; and it often centers on what the different words in the passage mean, their origins, etc.
It occurs to me that the passages studied and debated here are not randomly chosen. They are chosen because of their POTENTIAL significance to a given theory.
While this makes sense, it potentially INVALIDATES any methodology used UNLESS that same methodology is randomly applied to other passages and the outcomes compared.
Anyone can come up with criteria for interpolation or something else based on common sense or some other reasoning. But how well have those criteria been tested in a non-biased manner? How many relatively insignificant passages would be declared interpolated by applying the same criteria used on the more meaningful passages?
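The control-group idea here can be sketched in a few lines. Everything in this sketch is hypothetical: `flags_interpolation` stands in for whatever criteria are being tested, and the marker phrases are placeholders, not anyone's actual method. The same flagging function is applied both to the contested passages and to a random sample of passages nobody has a stake in, and the two flag rates are compared:

```python
import random

def flags_interpolation(passage):
    # Hypothetical stand-in for a real set of interpolation criteria:
    # here we simply "flag" any passage containing a marker phrase.
    markers = ("brother of the lord", "born of a woman")
    return any(m in passage.lower() for m in markers)

def control_test(significant_passages, corpus, n=100, seed=0):
    # Flag rate on the theory-relevant passages...
    rate_sig = sum(map(flags_interpolation, significant_passages)) / len(significant_passages)
    # ...versus the flag rate on randomly sampled control passages.
    rng = random.Random(seed)
    controls = rng.sample(corpus, min(n, len(corpus)))
    rate_ctl = sum(map(flags_interpolation, controls)) / len(controls)
    return rate_sig, rate_ctl
```

If the criteria flag the random controls nearly as often as the contested passages, then the criteria, not the passages, are doing the work.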
In the recent thread on probability, which I only skimmed, I saw that Bernard and Peter - in looking at various passages and applying a formula - came up with polar-opposite odds. Obviously there's a problem with the methodology used or with how it was applied, but I also wonder what each might have learned if they applied their approaches to relatively insignificant passages. Would that not be revealing in some way? I don't know; I'm just asking.
We see the problems of study design in all areas of research. That's why there are very strict methods - double-blind trials, placebos, etc. - to try to remove the bias.
It sounds boring, very tedious, and difficult. But isn't it really necessary in order to elevate the study to something more scientifically valid?
I'm only raising the question for discussion.
- Ben C. Smith
- Posts: 8994
- Joined: Wed Apr 08, 2015 2:18 pm
- Location: USA
- Contact:
Re: Is anyone testing the methodologies being used?
It is an excellent question.
TedM wrote: Various methodologies are being used to try to determine what did or did not happen in the past. The question often centers on whether a passage, in scripture or elsewhere, existed originally or was interpolated, partially or wholly; and it often centers on what the different words in the passage mean, their origins, etc.
Often arguments are made from analogy, meaning that the case in question (the unknown) is postulated as having followed the pattern of another case which is more clear (the known). I have done this sort of thing before with arguments for directionality (which text or tradition gave rise to the other?), based on (for example) Josephus' use of the Hebrew scriptures (a case in which we already know that the tradition flowed from the scriptures to Josephus and not vice versa).
Often, of course, analogies can be found for both directions; in such a case, the evidence at hand is simply not up to the task of telling us which text or tradition came first. This is similar to a situation in paleontology in which we simply do not possess enough transitional fossils to accurately recreate an organism's evolutionary tree; we are awaiting further evidence, or we must devise a new kind of test.
Arguments which seem to derive from common sense, incidentally, are very frequently arguments from analogy: we are used to things working in a certain way, so common sense tells us that they probably worked that way in the particular case we are studying. For example, when one author quotes the work of another, we expect that the quoted work preceded the author's quotation in time, even though it is logically possible that the quoting author invented the quote and then another author created a text which would make good on the deception. We know how things normally work in these cases, so we apply that knowledge to cases in the past which we cannot directly test, and we expect heavy-duty arguments to the contrary if we are to believe otherwise. Another example here is that we privilege eyewitness testimony over hearsay; even though we know that eyewitness testimony is not always reliable, we also know from our own modern experience that it is generally more reliable than testimony passed on at second or third hand or worse.
I would love to see more methodologies tested and then applied to the dark and murky subject matter we happen to be studying on this forum. These things take time, I suppose, and there is not necessarily a lot of motivation to find and test new methodologies against what we think we already know.
ΤΙ ΕΣΤΙΝ ΑΛΗΘΕΙΑ
- Peter Kirby
- Site Admin
- Posts: 8616
- Joined: Fri Oct 04, 2013 2:13 pm
- Location: Santa Clara
- Contact:
Re: Is anyone testing the methodologies being used?
What methodology?
TedM wrote: Obviously there's a problem with the methodology used or with how it was applied,
Yes, capital idea. I agree.
TedM wrote: but I also wonder what each might have learned if they applied their approaches to relatively insignificant passages. Would that not be revealing in some way? I don't know; I'm just asking.
We see the problems of study design in all areas of research. That's why there are very strict methods - double-blind trials, placebos, etc. - to try to remove the bias.
It sounds boring, very tedious, and difficult. But isn't it really necessary in order to elevate the study to something more scientifically valid?
I'm only raising the question for discussion.
Great idea. Difficult and expensive at best, but a great idea.
TedM wrote: Various methodologies are being used to try to determine what did or did not happen in the past. The question often centers on whether a passage, in scripture or elsewhere, existed originally or was interpolated, partially or wholly; and it often centers on what the different words in the passage mean, their origins, etc.
It occurs to me that the passages studied and debated here are not randomly chosen. They are chosen because of their POTENTIAL significance to a given theory.
While this makes sense, it potentially INVALIDATES any methodology used UNLESS that same methodology is randomly applied to other passages and the outcomes compared.
Anyone can come up with criteria for interpolation or something else based on common sense or some other reasoning. But how well have those criteria been tested in a non-biased manner? How many relatively insignificant passages would be declared interpolated by applying the same criteria used on the more meaningful passages?
Hard as hell.
The professionals don't do it, and they get paid for this... so...
"... almost every critical biblical position was earlier advanced by skeptics." - Raymond Brown
Re: Is anyone testing the methodologies being used?
Yeah, often there seems to be a good basis for these arguments. I had in mind mostly the passages people look at for interpolations.
Ben C. Smith wrote:
Often arguments are made from analogy, meaning that the case in question (the unknown) is postulated as having followed the pattern of another case which is more clear (the known). I have done this sort of thing before with arguments for directionality (which text or tradition gave rise to the other?), based on (for example) Josephus' use of the Hebrew scriptures (a case in which we already know that the tradition flowed from the scriptures to Josephus and not vice versa).
Re: Is anyone testing the methodologies being used?
That probably isn't the right word for those Bayesian formulas. Something caused you and him to have very different results. Perhaps it was just the underlying assumptions. Scratch my comment there, as it was not really on point.
Peter Kirby wrote:
What methodology?
TedM wrote: Obviously there's a problem with the methodology used or with how it was applied,
The issue reminds me of discussions about chiasms. It seems to me that the existence of chiasms doesn't necessarily mean much - they may be intentional or not. And since all kinds of chiasms are possible (ABA, ABBA, ABBCBBA, AB1B2CDB2A, etc.), the patterns may really exist, or they may exist only in the 'creative mind' of the reader. How can one know what value to place on what they think they are seeing if they haven't applied their own criteria to other passages that may not be nearly as interesting to them?
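One way to put a rough number on that worry: define a "chiasm" mechanically (here, my own simplification - a sequence of section labels that reads the same forwards and backwards) and estimate how often randomly assigned labels would produce one by chance. The function names and the choice of label alphabet are illustrative, not anyone's published method:

```python
import random

def is_chiastic(labels):
    # A sequence of section labels (ABBA, ABCBA, ...) counts as a
    # simple chiasm if it is a palindrome of length > 1.
    seq = list(labels)
    return len(seq) > 1 and seq == seq[::-1]

def chance_rate(length, alphabet="ABCD", trials=10_000, seed=0):
    # Estimate how often randomly assigned labels happen to form a
    # chiasm - the baseline a claimed pattern should be compared against.
    rng = random.Random(seed)
    hits = sum(
        is_chiastic([rng.choice(alphabet) for _ in range(length)])
        for _ in range(trials)
    )
    return hits / trials
```

With only two distinct labels and five sections, random labelling yields a "chiasm" about 25% of the time (the first two labels need only mirror the last two), so short patterns carry little evidential weight on their own.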
- Bernard Muller
- Posts: 3964
- Joined: Tue Oct 15, 2013 6:02 pm
- Contact:
Re: Is anyone testing the methodologies being used?
I do not think the formulas used by Peter and me are incompatible.
TedM wrote: In the recent thread on probability, which I only skimmed, I saw that Bernard and Peter - in looking at various passages and applying a formula - came up with polar-opposite odds.
Peter's algorithm is meant for "events" within an author's writings, all implying the same thing, when a fair number of these "events" are thought likely to be interpolations (through evidence on each). In that case, the same interpolator can be assumed to have made these interpolations systematically on all the "events", even if some are not "evidenced" as interpolations.
Even in that case, if one (or more) "event" can be "proven" to be not an interpolation, the theory of systematic interpolations is considerably weakened.
My (simpler) algorithm worked for "events" which are likely to be true (with possibly a few exceptions), and therefore out of reach of Peter's formulas. My algorithm also allows me to factor in the possibility of not only interpolation, but also interpretation (against implying that same thing) and dependence on the gospels.
So the big difference between Peter and me is how we rate the "events" regarding the probability of interpolation/falsity for each "event".
Peter seems to think that most "events" implying the historicity of Jesus in Paul's epistles are evidenced interpolations.
I don't. That's where the main difference lies.
Cordially, Bernard
Last edited by Bernard Muller on Sat Dec 17, 2016 8:51 am, edited 4 times in total.
I believe freedom of expression should not be curtailed
Re: Is anyone testing the methodologies being used?
Thanks for the explanation. Do you think there would be any value in applying your approach to events of relatively low importance where you don't really care what the outcome is?
Bernard Muller wrote:
I do not think the formulas used by Peter and me were incompatible.
TedM wrote: In the recent thread on probability, which I only skimmed, I saw that Bernard and Peter - in looking at various passages and applying a formula - came up with polar-opposite odds.
Peter's algorithm is meant for "events" on an author's writings, all implying the same thing, when a fair amount of these events are thought likely to be interpolations (through evidence on each). In that case, a same interpolator can be assumed to have made these interpolations systematically on all the events.
Even in that case, if one (or more) event can be "proven" to be not an interpolation, the theory of systematic interpolations is considerably weakened.
My (simpler) algorithm worked for "events" which are not likely to be true (with possibly a few exceptions), and therefore out of reach of Peter's formulas. My algorithm also allows me to factor in the possibility of not only interpolation, but also interpretation (against implying that same thing) and dependence on the gospels.
So the big difference between Peter and I is how we rate the "events" regarding the probability of interpolation/authenticity for each "event".
Peter seems to think that most "events" implying the historicity of Jesus in Paul's epistles are interpolations.
I don't. That's where the main difference lies.
Cordially, Bernard
- Bernard Muller
- Posts: 3964
- Joined: Tue Oct 15, 2013 6:02 pm
- Contact:
Re: Is anyone testing the methodologies being used?
Please note I corrected errors in my earlier post:
which are not likely to be true => which are likely to be true
and
interpolation/authenticity => interpolation/falsity
Cordially, Bernard
Sure, as long as these events imply the same thing, the probability of falsity is low for each, and one is not dependent on another.
TedM wrote: Thanks for the explanation. Do you think there would be any value in applying your approach to events of relatively low importance where you don't really care what the outcome is?
For example: P = 1- [(1-p1) * (1-p2) * (1-p3)] = 1- [(1-0.8) * (1-0.7) * (1-0.9)] = 0.994
But if p3 is fully dependent on p2 being true:
P = 1- [(1-p1) * (1-p2) * (1-(p3 * p2))] = 1- [(1-0.8) * (1-0.7) * (1-(0.9*0.7))] ≈ 0.978
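Bernard's combination rule, P = 1 - Π(1 - pᵢ) for independent "events", is easy to check mechanically. This small sketch reproduces both of his cases (note that the dependent case evaluates to about 0.978):

```python
from math import prod

def at_least_one(ps):
    # Probability that at least one of several independent "events"
    # holds: P = 1 - (1 - p1)(1 - p2)...(1 - pn).
    return 1 - prod(1 - p for p in ps)

# Independent case from the post:
independent = at_least_one([0.8, 0.7, 0.9])      # ≈ 0.994
# If p3 holds only when p2 does, substitute p3*p2 for p3:
dependent = at_least_one([0.8, 0.7, 0.9 * 0.7])  # ≈ 0.978
```

Making p3 conditional on p2 lowers the combined probability, which is the point of the adjustment.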
Cordially, Bernard
I believe freedom of expression should not be curtailed
Re: Is anyone testing the methodologies being used?
A big problem with this is that much of the text was a community effort - groups of people writing - so what looks like a later addition may well be the original text; even when it looks like an obvious addition or later compilation, it could have been the original. It's the certainty I have an issue with, either way.
TedM wrote: I had in mind the passages people look at for interpolations mostly.
Re: Is anyone testing the methodologies being used?
TedM wrote: for those Bayesian formulas.
They have no business in this line of study.