I understand the idea behind your comment (that leftists will reject any data that is not politically correct, but a computer won't), but I think your argument really boils down to an exercise in semantics:
Obviously, data can actually be wrong. A trivial example is where someone has erroneously recorded the number of people living in a city as, say, -100. This number is clearly wrong, as it is impossible for a negative number of people to be living in a city. You seem to have provided an out in your comment in that this data point won't be "wrong" per se, but rather "measure(d) improperly" etc., or that it is not "syntactically valid". Yet these are merely questions of semantics. Is something not "wrong" if it was "measured incorrectly" or if it was "syntactically invalid"?
A human programming a computer that processes population sizes would probably set a range of valid inputs, and only allow integer inputs greater than or equal to zero, which is a sensible choice for this kind of data. In doing so, the human is effectively telling the computer what data is "wrong" - a negative population size is invalid and "wrong". Again, you could say that a negative population size isn't "wrong", it's simply "syntactically invalid", but this is just an argument about semantics.
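To make this concrete, here is a minimal sketch of the kind of validation described above (the function name and messages are my own illustration, not anything from the original discussion): the human encodes what counts as "wrong", and the computer just enforces it.

```python
def parse_population(raw: str) -> int:
    """Accept only non-negative integers as a population size."""
    value = int(raw)  # raises ValueError if the input isn't an integer at all
    if value < 0:
        # The programmer, not the computer, decided negatives are "wrong".
        raise ValueError(f"population cannot be negative: {value}")
    return value

print(parse_population("8400000"))   # a plausible city population passes
try:
    parse_population("-100")         # the erroneous record is rejected
except ValueError as err:
    print("rejected:", err)
```

The point being: the rejection rule lives entirely in the programmer's head; the machine has no independent notion of what a population "should" be.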
A human could also program a computer to only accept a mean IQ of 100 for all populations. That is, any number that is not equal to 100 is "wrong" or "syntactically invalid". Here, the computer is rejecting any data that does not agree with its programming. In what way is a computer programmed like this "attached to a hard connection to reality"? It's only attached to a hard connection to its programming.
The actual difference between the computer and the human is that the computer is acting on explicit programming for particular cases of information deemed to be "wrong" by its programmers. It is therefore possible to "fool" the computer by finding edge cases that the programmers did not explicitly exclude. The computer is unable to understand that information and make inferences about what it might convey, so it can't be programmed to follow a more general rule about what information is "wrong" (e.g. because it is politically incorrect). The leftist human, on the other hand, is perfectly capable of doing just that. They can understand what is implied by certain pieces of information, and reject it because it does not follow their political ideals and is therefore "wrong" (here, they could be doing this completely subconsciously, without even knowing they are doing it).
This means that the difference between the computer and the leftist human has nothing to do with "introspection". It's merely that the human can understand the implications of certain data, and the computer cannot. If the computer were able to understand the implications of certain data, it could be programmed to find that information "wrong" (or "syntactically invalid").
The data is not actually wrong. This is one of the foundations of experimentation. If you fail to collect the data properly, you can't simply ignore your data. You actually have to account for it. You have to calculate a correction and propagate an error in order to identify what your correct data actually was. Worst case scenario, you have to mention it, but explain why it wasn't included in the overall calculation, so it can be reviewed in an appendix.
I don't see wrong and syntactically invalid as the same thing. Syntactically valid data, inherently, still works like any valid data. Invalid data isn't "wrong"; the input itself is nonsensical. It's like describing something as "tasting like purple". When you can't even read the data properly, you don't actually have data, because you've broken something very fundamental in your code.
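A small sketch of the distinction being drawn here (my own illustrative example, not from the thread): "wrong" data parses fine but fails a plausibility check, while "syntactically invalid" data cannot even be read as data in the first place.

```python
def read_age(raw: str) -> int:
    """Parse an age field; non-numeric input fails outright."""
    return int(raw)  # raises ValueError for input that isn't a number at all

print(read_age("150"))      # syntactically valid, but almost certainly wrong
try:
    read_age("purple")      # syntactically invalid: "tastes like purple"
except ValueError as err:
    print("unreadable:", err)
```

The 150 can be reasoned about and corrected; "purple" gives you nothing to reason about.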
Again, this is not wrong, or syntactically invalid. This is a logic error. A logic error is an error so fundamental that no computer could be expected to understand that the human made a mistake. We check for these using assertions. If an assertion fires, this is an indication of a programming fault somewhere. Once again, this is a very fundamental error which makes the machine unable to perform its basic tasks.
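As a minimal sketch of the assertion check described here (the function is hypothetical, chosen just for illustration): an assert documents an invariant the programmer believes can never be false, so if it fires, the fault is in the program, not the data.

```python
def mean(values: list[float]) -> float:
    # If this fires, some caller violated the contract: a programming fault,
    # not bad input data.
    assert len(values) > 0, "programming fault: mean() called with no data"
    return sum(values) / len(values)

print(mean([95.0, 100.0, 105.0]))  # prints 100.0
```

This is exactly the "fundamental error" case: the machine can detect that its own preconditions were broken, but it cannot understand why the human broke them.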
No computer can make an inference, but again, this is so fundamental that no inference would be expected. You do not use computers to make inferences.
No, it's still about a kind of introspection, in the sense that an AI has to analyze data.