Science and math.

Jolly Joker
Round Table Hero
Posts: 3316
Joined: 06 Jan 2006

Post by Jolly Joker » 08 Aug 2007, 05:57

But that has nothing to do with Math, to come back to the issue.

What people do in science is extrapolate results, which is why Newton works on Earth but is wrong in general. It's like having a sequence of numbers where you have to find the next one:

1 2 4 8

Easy job in this case. This sequence can be described as 2^n (where n is a natural number starting with 0). Scientifically, this extrapolation (or "law", as it would be called if it described some phenomenon) would have the advantage of being simple, and science tries to keep things simple when looking for explanations (you may call this bias, but I think simplicity is a good aim).

Anyway, this leaves us with the expectation that the next number will be 16, and everything works fine with this formula, except for some small mismatches and discrepancies in the vague area beyond 8. Then someone has an idea that allows him to make an experiment that will give the next number, and it turns out to be 15. This might be due to some error in the way the experiment was conducted, a measuring error and so on; however, a repeat brings the same result: 15.

Now someone starts thinking about the earlier discrepancies and reconsiders. Looking at the sequence

1 2 4 8 15

the differences are checked

1 2 4 7

and now the differences between the differences

1 2 3

and we may build a new and much more complex recursive formula:

a(0) = 1, a(1) = 2; a(n+2) = 2a(n+1) - a(n) + (n+1), where n is a natural number starting with 0 and everything in parentheses is a subscript, except the final (n+1), which is an ordinary term.
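A minimal sketch in Python, rebuilding the difference table and checking that the recursion reproduces the data (the closed form in the final comment is my inference from the recursion, not something stated above):

```python
# Check the recursion a(n+2) = 2*a(n+1) - a(n) + (n+1) against the
# observed values, and rebuild the difference table described above.

def extend(first_two, length):
    """Extend the sequence to `length` terms using the recursion."""
    a = list(first_two)
    while len(a) < length:
        n = len(a) - 2                      # recursion index
        a.append(2 * a[-1] - a[-2] + (n + 1))
    return a

observed = [1, 2, 4, 8, 15]
print(extend(observed[:2], 8))              # [1, 2, 4, 8, 15, 26, 42, 64]

diffs = [b - a for a, b in zip(observed, observed[1:])]
print(diffs)                                # [1, 2, 4, 7]
print([b - a for a, b in zip(diffs, diffs[1:])])   # [1, 2, 3]

# One closed form consistent with this recursion (an inference, not part
# of the post): a(n) = (n**3 + 5*n + 6) / 6.
print([(n**3 + 5 * n + 6) // 6 for n in range(8)])
```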

So there is no problem whatsoever with math. The problem is, and always has been, the quality of the observations and their extrapolation when it comes to formulating a law.

As my own footnote: because the recursive formula is so much more ungainly than the first one, a lot of scientists wouldn't want the second one to be right, and that is where bias comes into play. They would feel that something like that can't be right and might spend the rest of their days trying to disprove it. The only thing that could do that would be more factual results for higher numbers, or new observations.
ZZZzzzz....

Caradoc
Round Table Knight
Posts: 1780
Joined: 06 Jan 2006
Location: Marble Falls Texas

Post by Caradoc » 08 Aug 2007, 06:36

However, JJ, there are any number of alternative functions that generate the same sequence. You have chosen the simplest one, as pragmatism dictates, and until the sequence is broken, it will serve. But when you happen on a number that is out of sequence, do you write it off as an anomaly, tell yourself the simple function is 'good enough', or look for an alternate function that may be more complex but fits the data?
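To make that point concrete (a minimal sketch of mine, not Caradoc's): polynomial interpolation alone guarantees a function through any finite set of observations, so the data 1, 2, 4, 8 constrain the "next number" not at all.

```python
# Two degree-4 polynomials that both pass exactly through 1, 2, 4, 8
# but disagree completely about the fifth value.
import numpy as np

x = np.arange(5)
for fifth in (16, 15):
    coeffs = np.polyfit(x, [1, 2, 4, 8, fifth], deg=4)
    print(fifth, np.round(np.polyval(coeffs, x), 6))  # exact fit either way
```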

Here's a funny fact about Newtonian physics. One of the problems that vexed Newton was the circular orbit of the Moon. To explain this, at one point he incorporated 'an occasional push from God' into his gravitational equations.
Before you criticize someone, first walk a mile in their shoes. If they get mad, you'll be a mile away. And you'll have their shoes.

Jolly Joker
Round Table Hero
Posts: 3316
Joined: 06 Jan 2006

Post by Jolly Joker » 08 Aug 2007, 07:11

Caradoc wrote:However, JJ, there are any number of alternative functions that generate the same sequence. You have chosen the simplest one, as pragmatism dictates, and until the sequence is broken, it will serve. But when you happen on a number that is out of sequence, do you write it off as an anomaly, tell yourself the simple function is 'good enough', or look for an alternate function that may be more complex but fits the data?
The answer here is, I think: if you stumble onto a number that is out of sequence, it's time to check the fundamentals of the way you came across that out-of-sequence number. If you can reproduce the anomaly and rule out an experimental flaw, it's time to rethink the function, complex or not.

I took that sequence because the numbers 15 and 16 are sufficiently near to make an error a possibility: the anomaly is small, one function is "nice and simple", the other "ungainly and awkward". In such a case there would be a big inertia towards the simple one, until new experiments could deliver further values along the sequence.
ZZZzzzz....

Corribus
Round Table Knight
Posts: 4994
Joined: 06 Jan 2006
Location: The Duchy of Xicmox IV

Post by Corribus » 08 Aug 2007, 15:11

Mytical wrote:The addition of computers may have alleviated some bias, but they only know what was programmed into them. Unfortunately, bias does exist in every experiment. It may be small, and barely affect the experiment, but unfortunately it is there. Let's take one experiment done by two different entities.
Your knowledge of scientific methodology is clearly very limited.
Let's say there is a drug A. An experiment from lab 1 shows that drug A has limited side effects and is well tolerated. They study 1000 people, documenting their findings.

Lab 2, however, shows a lot more side effects, and even 1 or 2 fatalities. They also study 1000 people.

Now the first question is why two experiments conducted identically would show different results. The reason is bias.
Could be. On the other hand, the conditions may not be the same. Good scientists eliminate most, if not all, bias from their experiments. Experimental design is an important aspect of any scientific education.

I am not saying that there isn't bias in poorly designed experiments. But in experiments which are properly designed and which use electronic instruments to acquire data, the only real bias comes in interpretation.
Of the possible millions of people who might benefit from the drug, the researchers select people (maybe subconsciously) that they think would either (in lab 1) be more likely to tolerate it or (in lab 2) be less likely to. Any experiment is the same. It may be very minor, and not even seen by the naked eye, but there is always some bias. Even experiments done by computers, for they only know what they are told.
I'm sorry, but you don't really know what you're talking about. Your drug example is a very poor one. It's possible to design blinded experiments that eliminate the type of bias you are suggesting exists (even "maybe subconsciously"). Computers do what they're told, but they aren't asked to interpret results, only to acquire and process data.

You need to separate experimental methodology from experimental interpretation. The latter is certainly open to bias, but thankfully there are many checks and balances in modern scientific disciplines to reduce its impact. Properly designed experiments do not in themselves sway the experimental data towards any preconceived, biased result. The body of accumulated data may be interpreted by an individual in such a way that it appears to support a preconceived, biased result, but it is in the interests of scientists as a community to reduce this type of bias as much as possible. Bias is not as big a problem in controlled settings as you are making it out to be.
"What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?" - Richard P. Feynman

Corribus
Round Table Knight
Posts: 4994
Joined: 06 Jan 2006
Location: The Duchy of Xicmox IV

Post by Corribus » 08 Aug 2007, 16:03

@JJ

Nice example. Except your footnote:
As my own footnote: because the recursive formula is so much more ungainly than the first one, a lot of scientists wouldn't want the second one to be right, and that is where bias comes into play. They would feel that something like that can't be right and might spend the rest of their days trying to disprove it. The only thing that could do that would be more factual results for higher numbers, or new observations.
That is not completely correct. But it's a nice simplified way of demonstrating the scientific method for the benefit of Mytical who clearly (from her above posts) does not understand the place of bias and repeated measurements in the art of experimentation.

If we were measuring some observable phenomenon and with that phenomenon were associated the numbers 1, 2, 4, 8 as you suggest, a scientist - who is only human - would naturally expect the next number to be 16, as you indicated. Nobody thinks of complicated answers before they think of simple ones. He would build a hypothesis, based on this mathematical prediction, that the next value would be 16. Then he would do an experiment and determine that YES, it is 16, in which case his hypothesis is vindicated (temporarily), OR he measures 15 (or something else), in which case he has a problem.

Rather than immediately discarding his hypothesis, which initially seemed to have been based on good logic (given the information at hand), the easiest, best conclusion is that the value of 15 was an instrumental error. Hypotheses take time to come up with and are not casually discarded. Nevertheless, he does not reject the value of 15 simply because it doesn't agree with his hypothesis. Well, some scientists - BAD scientists - might, but these problems are thankfully ironed out in the peer review process, as a reviewer would likely ask WHY the value of 15 was discarded, and if the author had no reason besides "well, I didn't like it" the paper would likely be rejected. There's a code of scientific integrity which most scientists are expected to follow. Some things slip through the cracks - yes - but often these are uncovered at a later date and those scientists are punished. Severely.

Anyway, back to the example. So the experimental results which are anomalous according to the hypothesis (in fact, MOST of the time experimental results do not completely support a pre-existing hypothesis) are now hypothesized to be incorrect due to instrumental (or human) error. So what do you do? You repeat the measurement. Furthermore, you have to realize that typically scientists do not make one measurement at a time. Not only do they measure the value which should be 16 (according to the hypothesis), they also measure the next ten. So they not only expect 16, they also expect 32, 64, 128, etc. So for instance if I measured

1, 2, 4, 8, 15, 32, 64, 128...

I would be much more confident that my anomalous 15 value was due to instrumental error. On the other hand, if I measured

1, 2, 4, 8, 15, 26, 42, 64...

I would be much more confident that my anomalous 15 value was due to something else. Furthermore in a good experiment each data point is measured more than once, often by more than one person, and under a variety of controlled conditions. These controls eliminate other sources of "error" (or better: complicating factors) that could otherwise explain deviations from expected values. Identifying important controls takes a lot of time and effort, and it is hard work, and many scientists do not do it properly. But again these things are usually caught in peer-review. Poorly done experiments are rejected by reviewers. More controls are asked for. Etc.

So you measure each value 3 (or more) times under each set of conditions and generate a large body of data. In this way you might measure

(1,1,1), (2,2,2), (4,4,4), (8,8,8), (15,16,16)...

In which case, again, you could say that your 15 value was probably simply an error. Note that this isn't actually as subjective a process as I indicate here. In analytical sciences you can't just say "Well, I discard the 15 because I think...". There are statistical ways to determine whether values which deviate from expected values may be discarded. Furthermore, by using controls I may find that I measure 15 consistently, but only when the temperature is above a certain level, or the humidity is just so, or I do the experiment in the dark. These controls aren't chosen randomly - a scientist usually has a reason to expect that such a condition may affect the outcome - but nevertheless, while such controls would not demonstrate the 15 value to be a random error, they WOULD give the scientist a clue as to what is really going on.
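No particular test is named above, but one standard example from analytical chemistry is Dixon's Q test; a minimal sketch, using the usual 90%-confidence critical values:

```python
# Dixon's Q test: Q = (gap to nearest neighbour) / (range of the data).
# The suspect value may be discarded only if Q exceeds the critical
# value for the sample size.

Q90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560, 7: 0.507}  # 90% confidence

def q_test(values):
    """Return (suspect, Q, may_discard) for the most extreme value."""
    v = sorted(values)
    rng = v[-1] - v[0]
    q_low = (v[1] - v[0]) / rng        # suspect is the smallest value
    q_high = (v[-1] - v[-2]) / rng     # suspect is the largest value
    suspect, q = (v[0], q_low) if q_low > q_high else (v[-1], q_high)
    return suspect, q, q > Q90[len(v)]

# The replicate triple above, where one measurement came out low:
print(q_test([15, 16, 16]))           # (15, 1.0, True) -> discardable
```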


So you see, your example, while salient, definitely oversimplifies the scientific method. And again I stress that the scientist does not casually reject "bad data". Since the original hypothesis SEEMED valid, they may exhaust themselves trying to show why the "bad data" point *IS* an erroneous value, which may ultimately be a waste of time if it turns out the point is not due to random error at all. But that's the process of science. Eventually a scientist will be convinced one way or the other, and he'll move on.

Now of course, if it turns out through exhaustive repetition of the original experiment that 15 is DEFINITELY the next number, then he will discard the original hypothesis (or modify it to account for the controls) and adopt a new one. And he'll make further tests. In your example, he'd probably measure the next few numbers (although he's likely to have done this already) and satisfy himself that, for his limited set of controls, the new hypothesis is a good one, in which case he can build a genuine theory, OR that he needs to do further work. As you can see, science is a laborious process. They don't call it a LABORatory for nothing. ;)


Now to address Caradoc's comment-
The thing is that there is, as I've explained, no Theory of Everything. Most likely there never will be. Every theory, while it may explain a certain subset of reality absolutely perfectly, is going to fail at some point. At some point you're going to change some parameter - or get access to better ways of measuring your numbers - and find that the theory just isn't right.

Richard Feynman was fond of using the example of the constancy of mass. It seems from our everyday experience that mass is a constant thing. You measure a top (Feynman's example) and find it weighs 1 kg. If you spin the top, it still weighs 1 kg. Let's say you are able to measure the mass of the top at certain velocities. Your data would look something like this (velocity [m/s], mass [kg]):

(1,1), (2,1), (3,1), (4,1)......(500,1)

You make every measurement up to 500 m/s and find that the mass doesn't change at all. You repeat each measurement 5000 times and find no statistical deviation - there will always be some, due to "noise" based on the quality of your instrument, but let's pretend not. At this point you are pretty confident that your hypothesis - mass is independent of velocity - is pretty good. Scientists around the world accept this as a law. Constancy of Mass Law.

Then some hack named Einstein comes along and throws that out the window. Using logic, he predicts that actually mass increases with velocity. "What a dumbass!" scientists say. Obviously we KNOW that mass is invariant to velocity! It doesn't CHANGE! But Einstein wags his finger and says, "Yeah but the change is really, really small and you won't really notice it until the object goes REALLY fast, like speed of light fast."

Nothing you can do about it now, of course. But some years go by and technology gets better and now people can measure masses with more accuracy and precision. And so you repeat your measurements and find the following (made up, of course, by me :) )

(1, 1.0000000), (2, 1.0000001), (3, 1.0000002), (4, 1.0000003)... etc.

To your surprise, the mass is found to INCREASE, just as predicted. The difference is so tiny that you'd never notice it in daily life, but there it is. You could never have had access to this information before, because the quality of your data is limited by the capacity of your instrument. Your noise 'hid' the deviation from you. We take this (after much verification) as evidence that the original theory was "wrong" and the new one is "right". But those are really just relative terms. The old one isn't useless - as you can see, it's very good for everyday use - and the new one probably isn't "right" - because even this theory probably fails at some point.
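The real formula behind this story is m = m0 / sqrt(1 - v^2/c^2). A quick computation (my sketch; the actual effect at everyday speeds is even tinier than the made-up numbers above) shows why nobody could have noticed:

```python
# Relativistic mass m = m0 / sqrt(1 - v**2 / c**2). At everyday speeds
# the increase is far below what any classical balance could resolve.
import math

C = 299_792_458.0                      # speed of light, m/s

def rel_mass(m0, v):
    return m0 / math.sqrt(1.0 - (v / C) ** 2)

for v in (1.0, 500.0, 3.0e7, 0.9 * C):
    print(f"v = {v:12.3e} m/s  ->  m = {rel_mass(1.0, v):.15f} kg")
# A 1 kg top at 500 m/s gains only ~1.4e-12 kg; at 0.9c it would
# effectively mass about 2.29 kg.
```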

The point is: do you throw out the old hypothesis because you find it to be inaccurate? Of course not! Philosophically you might consider it to be "wrong". And in some senses it is, because that little revelation of finding a TINY deviation from your old theory, though almost imperceptibly small, completely overturns everything you thought you believed. But your old results are not wrong because they were bad experiments, or because they were biased. They're very good for a certain "volume" of reality.

The interesting question is: Had Einstein not come around and instrumentation had advanced before relativity was proposed, what would have happened if the measurements had been made, and mass had been found to deviate slightly with velocity? Would the experimental results have been thrown out? "Mass is constant" was a law, after all! By many scientists, they very well may have been, because that's a very small deviation, and until the variation in mass grew much larger than the variation due to noise (we call this signal to noise in the "biz"), scientists may not have made much of it. It's hard to see statistically relevant trends until a sufficient number of points have been acquired (after all, can you conclude from only two points that your data makes a line?). What usually happens is that a very gifted individual notices the data and makes something of it. Most scientists usually sort of ignore this poor person for a while, because it's true - scientists don't like to completely throw out laws that have been immutable for centuries, so maybe that's a bit of bias - but eventually instruments will improve to the point that the trends are unmistakable. This is the scientific process. Sometimes theories come along before experiments are there to support them (but these theories ARE based on empirical logic), and sometimes the experiments come along first and theories are devised to explain them.
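A toy simulation of that signal-to-noise point (mine; the slope and noise figures are invented for illustration): the same tiny trend is invisible to a noisy instrument and unmistakable to a quiet one.

```python
# A genuine but tiny slope of 1e-7 kg per (m/s), measured by an old
# noisy instrument and by an improved quiet one.
import numpy as np

rng = np.random.default_rng(0)
v = np.arange(1.0, 501.0)
true_mass = 1.0 + 1e-7 * v             # the real, hidden trend

for sigma in (1e-3, 1e-9):             # instrument noise levels
    measured = true_mass + rng.normal(0.0, sigma, v.size)
    slope = np.polyfit(v, measured, 1)[0]
    print(f"noise {sigma:.0e}: fitted slope = {slope:+.2e}")
# With noise 1e-3 the fitted slope is swamped by chance (its standard
# error is ~3e-7); with noise 1e-9 the 1e-7 trend stands out clearly.
```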

The point is that in whichever order it happens, all existing theories are constantly being modified and improved. That's not to say there isn't a sort of "logical inertia" that makes it difficult to overturn firmly held beliefs. But all scientists recognize that nothing is fully understood, and that we don't really know everything there is to know about anything. Given the right data, acquired correctly and repeated through very well designed experiments, scientists WILL discard even the most concrete of Laws or scientific Beliefs.
"What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?" - Richard P. Feynman

Gaidal Cain
Round Table Hero
Posts: 6972
Joined: 26 Nov 2005
Location: Solna

Post by Gaidal Cain » 08 Aug 2007, 18:04

Corribus wrote:The interesting question is: Had Einstein not come around and instrumentation had advanced before relativity was proposed, what would have happened if the measurements had been made, and mass had been found to deviate slightly with velocity? Would the experimental results have been thrown out?
There's actually one instance where the effect of relativity was observed before the theory of it was put forth: the orbit of planet Mercury had some anomalies that didn't quite comply with Newtonian mechanics. Astronomers originally tried to explain this with a new planet (a very sensible thing to do, given that other planets had been found in a similar manner), but that didn't work out.
You don't want to make enemies in Nuclear Engineering. -- T. Pratchett

Corribus
Round Table Knight
Posts: 4994
Joined: 06 Jan 2006
Location: The Duchy of Xicmox IV

Post by Corribus » 08 Aug 2007, 18:18

Gaidal Cain wrote:There's actually one instance where the effect of relativity was observed before the theory of it was put forth: the orbit of planet Mercury had some anomalies that didn't quite comply with Newtonian mechanics. Astronomers originally tried to explain this with a new planet (a very sensible thing to do, given that other planets had been found in a similar manner), but that didn't work out.
Such observations are usually the stimulus for continued theoretical thought. Quantum theory arose the same way. It just usually takes someone bright enough to "think outside the box" and be readily willing to throw conventional "wisdom" out the window in order to really get at what is going on. :)
"What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?" - Richard P. Feynman

Gaidal Cain
Round Table Hero
Posts: 6972
Joined: 26 Nov 2005
Location: Solna

Post by Gaidal Cain » 08 Aug 2007, 18:39

Corribus wrote:Such observations are usually the stimulus for continued theoretical thought. Quantum theory arose the same way. It just usually takes someone bright enough to "think outside the box" and be readily willing to throw conventional "wisdom" out the window in order to really get at what is going on. :)
Not "the same", but "that". Einstein wasn't out to specifically solve the problem with Mercury's orbit, and I'm not sure that he even knew about the problem.
You don't want to make enemies in Nuclear Engineering. -- T. Pratchett

Jolly Joker
Round Table Hero
Posts: 3316
Joined: 06 Jan 2006

Post by Jolly Joker » 08 Aug 2007, 19:47

Thanks, Corribus, because I made that example specifically with Newton and Einstein in mind. :)

The problem is indeed extrapolation. I mean, I don't think Newton was wrong. The problem is the question: is a law always, and under every circumstance, true?
As far as I know there is NO law PROVEN to be right in every situation. Even Einstein is supposedly wrong in the area of quantum effects, and vice versa. But the laws are always getting better.
ZZZzzzz....

Caradoc
Round Table Knight
Posts: 1780
Joined: 06 Jan 2006
Location: Marble Falls Texas

Post by Caradoc » 09 Aug 2007, 01:03

But all scientists recognize that nothing is fully understood, and that we don't really know everything there is to know about anything.
All scientists?
Before you criticize someone, first walk a mile in their shoes. If they get mad, you'll be a mile away. And you'll have their shoes.

winterfate
Round Table Hero
Posts: 6191
Joined: 26 Nov 2006
Location: Puerto Rico

Post by winterfate » 09 Aug 2007, 01:04

I'd suppose so, Caradoc.

T'would be arrogant to think that we know everything. We probably never will (and perhaps that's for the best). ;)
The Round Table's birthday list!
Proud creator of Caladont 2.0!
You need to take the pain, learn from it and get back on that bike... - stefan
Sometimes the hearts most troubled make the sweetest melodies... - winterfate

Corribus
Round Table Knight
Posts: 4994
Joined: 06 Jan 2006
Location: The Duchy of Xicmox IV

Post by Corribus » 09 Aug 2007, 01:08

@Caradoc -
You have a counter-example? Every scientist I know - and I know a lot - would never claim that any area of science is "finished".
"What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?" - Richard P. Feynman

Mytical
Round Table Knight
Posts: 3780
Joined: 07 Aug 2006
Location: Mytical's Dimension

Post by Mytical » 09 Aug 2007, 02:44

And so far my point is proven. Some people are so convinced that our current scientific methods cannot be wrong that they fail to see that this stance is only vanity. In any experiment, regardless of how carefully it is administered, there is bias.

First let's take computers. You say they only register the findings, but those findings are determined by their program. The computer cannot register things it has no programming for, and thus rejects that data. If the people that program it (i.e. humans) do not know something exists or think it is of no consequence, the computer would not 'know' to measure it. I.e. the computer only knows what humans tell it to. Thus it has bias. Or do you argue there are now self-aware computers that are smarter and know more than the person programming them?

Next let's take the setup of the experiment. Since it is impossible to set up any experiment for every contingency (especially since we do not know every possible contingency), every experiment has bias. It is bias due to lack of knowledge and understanding, but bias nonetheless. Or do you contend we DO know everything, and can compensate for it?

Now take the conductors of the experiment. All experiments today come from knowledge we have gained over the centuries. It builds on what we already know. Unfortunately, that knowledge is biased in itself. Though there are truths, they sometimes do shade the outcome.

I am sure there will be those who will take quotes from this and argue semantics. Me, I am not perfect, but I am a lot smarter than you might think. Now that is not to say I am perfect, or that there is no possibility I am wrong; I am not vain enough to say that. People like to think they are right; it is part of the human condition. It is when people let their own egos get in the way of improving things that things get off course.
Warning, may cause confusion, blindness, raising of eyebrows, and insanity.

Corribus
Round Table Knight
Posts: 4994
Joined: 06 Jan 2006
Location: The Duchy of Xicmox IV

Post by Corribus » 09 Aug 2007, 03:43

Mytical wrote:And so far my point is proven. Some people are so convinced that our current scientific methods cannot be wrong that they fail to see that this stance is only vanity. In any experiment, regardless of how carefully it is administered, there is bias.
If you think so.
First let's take computers. You say they only register the findings, but those findings are determined by their program. The computer cannot register things it has no programming for, and thus rejects that data. If the people that program it (i.e. humans) do not know something exists or think it is of no consequence, the computer would not 'know' to measure it. I.e. the computer only knows what humans tell it to. Thus it has bias.
I deleted the last sentence of that paragraph because it wasn't even worth responding to. The findings aren't determined by the computer. They're determined by the instrument. The instrument takes readings - readings which humans tell it to sense - and sends those readings to the computer, which either displays the data unaltered or transforms it in some way (mathematically) into useful information. "AHA!" you say, "but humans told the detector what to sense!" Yes. But that doesn't mean it is a biased measurement.

Consider an instrument that detects photons (i.e., light). This can be an electronic detector or a human eye. Assuming for a second that they have equal sensitivity, the human eye is biased because it will discard information that doesn't pertain to the experiment (or information it THINKS is not important), because it has a thinking brain that knows what it wants to see. The detector will not. For instance, if I am trying to measure the number of photons coming from a flashlight and some other light blinks in the background, if I'm using my eye, my brain rejects the photons from the distant blink and only "counts" the photons coming from the flashlight. That's a biased measurement, because I'm discarding data I don't want. An electronic detector doesn't do this. It records ALL photons that hit it, regardless of origin, and the computer then displays these on the screen. That's an unbiased recording of the data. Now my brain can analyze that data and throw out - if I know what to look for - those data points which correspond to photons not coming from the flashlight, and that could be bias in certain circumstances, but it doesn't have to be. It can also be rightly rejecting a source of noise.
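A toy version of that flashlight example (my construction, with invented rates): the detector logs everything, and the background is then removed by an explicit, documented rule (a separate dark measurement) rather than by a brain deciding on the fly what counts.

```python
# The detector counts every photon, flashlight or not; background is
# removed afterwards by an explicit rule (a separate dark measurement).
import random

random.seed(1)

def counts(flashlight_on, n_trials=10_000):
    """Unbiased counter: records signal and background photons alike."""
    signal = sum(random.random() < 0.90 for _ in range(n_trials)) if flashlight_on else 0
    background = sum(random.random() < 0.05 for _ in range(n_trials))
    return signal + background

dark = counts(False)                   # control run: background only
bright = counts(True)                  # flashlight on
print("background estimate:", dark)
print("flashlight estimate:", bright - dark)   # documented subtraction
```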

I'm beginning to wonder if you even know what bias means or how it affects a scientific study. Why don't you define bias for me and explain what ramifications you think it has for a scientific study? That way, I know where to start.
Next lets take the set up of the experiment. Since it is impossible to set up any experiment for every contingency (especially since we do not know every possible contingency) then every experiment has bias. It is bias due to lack of knowledge and understanding, but bias nonetheless.
How is that bias? An experiment has to test for something. You cannot say that, because my experiment is designed to test something I want to test for, it's biased and therefore useless. Otherwise, there's no point in testing for anything.
Or do you contend we DO know everything, and can compensate for it?
Yeah right that's my contention.
Unfortunately, that knowledge is biased in itself. Though there are truths, they sometimes do shade the outcome.
:|
I am sure there will be those who will take quotes from this and argue semantics. Me, I am not perfect, but I am a lot smarter than you might think. Now that is not to say I am perfect, or that there is no possibility I am wrong; I am not vain enough to say that. People like to think they are right; it is part of the human condition. It is when people let their own egos get in the way of improving things that things get off course.
Maybe it's your own perceptions of science that are biased, and not science itself. Have you considered that possibility?
"What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?" - Richard P. Feynman

Mytical
Round Table Knight
Posts: 3780
Joined: 07 Aug 2006
Location: Mytical's Dimension

Post by Mytical » 09 Aug 2007, 05:05

Bias's effect on science is that, until a method is created to eliminate all bias (and there is none yet), all findings are limited to human perceptions. Take instrument bias: the instruments can only pick up what we tell them to, which perhaps means they miss variables because we do not know of them. For instance, your example of photons. Let's say we only know of one kind of photon, but there are 103. We cannot create an instrument to pick up the other 102 kinds, because we do not know they exist. So an instrument set to record the number of photons that hit it would only be 1/103 parts correct. Now, I am not saying that this is useless, as you seem to think I am. I am just saying there IS bias, and that we have no idea if we are on the right track or billions of miles away from it.

Imagine a play. The auditorium is huge, one might say infinite in size. Now, we only see one or two parts of this, have no handbook that tells us what the play is about, and we are in the far back of the auditorium. We might see the one or two parts (of billions of parts) and think we know what the play is about - until we realise we were watching stagehands, who had nothing to do with the actual play.

It is the same with our limited understanding. There is so much in the universe that it is impossible for us to know if what we are seeing is even close to what is actually transpiring. Yet often I have heard people stating such things as "Well, if we did encounter other intelligent species, they would see things this way, because it is the only way." This is vanity. They may have a much better seat and see a lot more than us in this play we call the universe.

Again I am going to stress that this does not mean we should not still strive and try to understand things. We have to do so, however, knowing that our reality as we perceive it is shaded by our own bias. If you need to know what bias is, use a dictionary. Each day we do learn more, and a little more of the play is revealed, but we are far from understanding the play. That has been my point all along. People may take it badly, they may take it wrong. My own bias might be shading things also. Of course, not many will admit their own bias is shading their own thoughts on the matter. After all, they can't be wrong... right?
Warning, may cause confusion, blindness, raising of eyebrows, and insanity.

asandir
Round Table Hero
Posts: 15481
Joined: 06 Jan 2006
Location: The campfire .... mostly

Post by asandir » 09 Aug 2007, 05:08

actually an instrument will pick up what it is capable of picking up, which is not necessarily limited to what we tell it to .... it is the filtering of the information that a lot of people have trouble with
Human madness is the howl of a child with a shattered heart.

protecyon
Golem
Posts: 628
Joined: 19 Nov 2005

Post by protecyon » 09 Aug 2007, 05:52

Mytical wrote:If you need to know what bias is, use a dictionary.
bi·as
1. an oblique or diagonal line of direction, esp. across a woven fabric.
2. a particular tendency or inclination, esp. one that prevents unprejudiced consideration of a question; prejudice.
3. Statistics. a systematic as opposed to a random distortion of a statistic as a result of sampling procedure.
4. Lawn Bowling.
a. a slight bulge or greater weight on one side of the ball or bowl.
b. the curved course made by such a ball when rolled.
5. Electronics. the application of a steady voltage or current to an active device, as a diode or transistor, to produce a desired mode of operation.
6. a high-frequency alternating current applied to the recording head of a tape recorder during recording in order to reduce distortion.
–adjective
7. cut, set, folded, etc., diagonally: This material requires a bias cut.
–adverb
8. in a diagonal manner; obliquely; slantingly: to cut material bias.
–verb (used with object)
9. to cause partiality or favoritism in (a person); influence, esp. unfairly: a tearful plea designed to bias the jury.
10. Electronics. to apply a steady voltage or current to (the input of an active device).
—Idiom
11. on the bias,
a. in the diagonal direction of the cloth.
b. out of line; slanting.

prej·u·dice
1. an unfavorable opinion or feeling formed beforehand or without knowledge, thought, or reason.
2. any preconceived opinion or feeling, either favorable or unfavorable.
3. unreasonable feelings, opinions, or attitudes, esp. of a hostile nature, regarding a racial, religious, or national group.
4. such attitudes considered collectively: The war against prejudice is never-ending.
5. damage or injury; detriment: a law that operated to the prejudice of the majority.
–verb (used with object)
6. to affect with a prejudice, either favorable or unfavorable: His honesty and sincerity prejudiced us in his favor.

I think that you're referring to definition 2 of bias, and definitions 1 and 2 of prejudice; correct me if I'm wrong. Now, if those are the definitions you're referring to, then I fail to see how an instrument can have "any preconceived opinion or feeling, either favorable or unfavorable." I would agree that a human being has preconceived opinions; however, this is not bias or prejudice, because, contrary to definition 1, a scientist will have formed an opinion out of reason. Nonetheless, even if we say a scientist is biased, science itself is unbiased, as the bias is weeded out through an iterative process of peer review.

Furthermore, what you're referring to as bias and vanity is merely lack of knowledge, which science is striving to correct. Your example that, just because we know of one kind of photon, we'll reject the other 102 kinds is incorrect: if the phenomenon manifests itself, it'll be put through the full rigors of the scientific method, and then through the peer review process it will be either accepted or rejected. Then, if new data is presented that challenges the 102-photon theory, it will be put through another cycle of the peer review process.

Now, if you believe that not actively looking for the other 102 photons is bias, then I'll point you to definition 1 of prejudice.
Last edited by protecyon on 09 Aug 2007, 17:36, edited 1 time in total.

Jolly Joker
Round Table Hero
Posts: 3316
Joined: 06 Jan 2006

Re: Science and math.

Post by Jolly Joker » 09 Aug 2007, 07:10

Mytical wrote:Recently I pondered something in the random thoughts thread, and decided that a new thread would be more appropriate for a discussion about it. I pondered if math truly was a universal language.
Let's not forget, Mytical, that this was your initial post.
Math, however, has nothing to do with science and its findings or observations. It has nothing to do with any bias either.
ZZZzzzz....

Veldrynus
Round Table Hero
Posts: 2513
Joined: 06 Jan 2006
Location: Inside your head!

Post by Veldrynus » 09 Aug 2007, 08:43

Mytical wrote: My own bias might be shading things also.
Damn right.

As I've said in one of my earlier posts (apparently nobody paid attention to it), human psychological biases usually work against the acceptance of science, especially in people who cannot understand it (like you), and who do not find the answers it gives satisfying enough.
Veldryn 15:15 And Vel found a dirty old jawbone of a walrus and put forth his hand, and took it, and in his unholy rage, he slew thirty four thousand men and children therewith.

ThunderTitan
Perpetual Poster
Posts: 23270
Joined: 06 Jan 2006
Location: Now/here

Re: Science and math.

Post by ThunderTitan » 09 Aug 2007, 11:06

Jolly Joker wrote: Math, however, has nothing to do with science and its findings or observations. It has nothing to do with any bias either.
I'm pretty sure Descartes would disagree.

human psychological biases usually work against the acceptance of science,
Really? Coz imo it works both ways.
Disclaimer: May contain sarcasm!
I have never faked a sarcasm in my entire life. - ???
"With ABC deleting dynamite gags from cartoons, do you find that your children are using explosives less frequently?" — Mark LoPresti

Alt-0128: €


