The difference is that you're now trying to support these statements with peer-reviewed research.
I think usafmedic sometimes comes across as a little abrasive. I'm not sure if he's aware of this, or whether he cares. But what he was asking isn't unreasonable, even if his frustration was clear from some of his posts. Perhaps he could have been gentler.
Personally, I've got some reading to do before I have an opinion on this area. So far, I'm more convinced by the information that usafmedic has put forward, because his arguments have been supported by an evaluation of peer-reviewed research, which he and other people have linked to. But I'm keeping an open mind, and I'm interested in seeing other data.
I think key points that have been made here include:
* Any diagnostic test carries a certain false-negative and false-positive rate. When considering the benefit of a given test, these rates have to be balanced against the potential costs and risks of both false positives and false negatives.
* The ECG is less sensitive and specific than the echocardiogram for identifying cardiac hypertrophy. (Although, I think the sensitivity improves dramatically when voltage criteria are combined with signs of LV strain.)
* Any time you have a rare condition (i.e. low prevalence), any test with a low specificity will produce a lot of false positives. This can result in unnecessary expense, alarm for the patient, and exposure to potentially dangerous medical treatments for healthy patients who would never have been exposed to this risk had they not been tested.
* Peer-reviewed research trumps anecdote. What a particular cardiologist, tree surgeon or supermarket cashier "thinks" is not that interesting. Obviously we should listen first to the cardiologist, but this remains "expert opinion" at best. What we should do is demand research, and demand real data.
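To make the low-prevalence point concrete, here's a minimal sketch of the positive predictive value (PPV) calculation. The numbers are purely illustrative assumptions (0.5% prevalence, 90% sensitivity, 90% specificity), not real ECG figures:

```python
def ppv(prevalence, sensitivity, specificity):
    """Probability that a positive test result is a true positive (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A rare condition (0.5% prevalence) screened with a seemingly good test:
p = ppv(prevalence=0.005, sensitivity=0.90, specificity=0.90)
print(f"PPV = {p:.1%}")  # roughly 4% -- the vast majority of positives are false
```

Even with a test that's 90% sensitive and 90% specific, only about 1 in 23 positive results is a true positive when the condition is that rare. That's the false-positive problem in a nutshell.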
I hope this can remain a productive discussion.