Critical Thinking

Assorted notes on thinking critically to ensure sound reasoning. Though not a replacement for actual study of the discipline, these notes can serve as a helpful reminder.

Science is foundational to engineering design. Engineers design things based on scientific knowledge about the universe (including people), so good engineering designers need to know when they're looking at bad science, and to call it out whenever they find it.

Critical thinking can help do that.

Spotting mistakes

Here's a generic list of common mistakes that can creep into scientific work, summarized from an online guide to spotting bad science. If your engineering depends on spotty science, you won't do very well at all.

Sensationalized Headlines

Actual research papers rarely fall prey to this error, but newspapers invariably do, as do many magazines intended to popularize science and technology. This is because non-scientific media outlets are interested in generating as many views (so-called “clicks”) as possible.

Don't trust headlines, and don't assume they give anywhere near the whole story. At the very least, read the whole article. Even better, use the article to track down the actual research papers and read those.

Misinterpreted results

Sometimes, even with the best intentions, a media outlet will misinterpret research results. It's your ethical responsibility to be aware of this tendency and to verify everything you read: look up the actual research papers and compare them to what the media reported.

Conflicts of interest

This is a Really Big Deal for engineers, especially once they have been licensed. In science, it can be even more troubling. A conflict of interest arises when an expert advocates for a claim while also directly benefitting from the widespread adoption of that claim in “real life.” Doctors who used to tell people that smoking cigarettes is good for one's health had been corrupted by the tobacco industry. Even today, there are various cases of medical researchers acting unethically under pressure from (and with direct personal benefit from) so-called “Big Pharma.” The list is long: engineers who advocate for unlicensed gun ownership while working for arms manufacturers, or who work for the coal industry and argue strenuously against sustainable energy sources, or automotive engineers who claim manufacturing defects in automobiles are not killing people….

Correlation vs causation

Atmospheric greenhouse gas (GHG) concentrations have been going up since the 1800s, and ocean piracy has been going down over the same period. That's a correlation. But that doesn't mean the decline in piracy is causing increased GHGs. Suicides by hanging, suffocation, and strangulation in the US correlate tightly with US spending on science, space, and technology, but that doesn't mean that US science spending causes suicide. These are called spurious correlations.

If X causes Y, then one can expect a correlation between X and Y, but if one only has a correlation, one knows nothing about the cause. Put another way, correlation is a necessary, but not sufficient, condition for causation.

You want to know how phenomena are caused, so be sensitive to spurious correlations.
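
As a small illustration, here's a minimal sketch in Python with NumPy (the numbers are invented, loosely echoing the GHG/piracy example above) showing how two unrelated trending series can correlate strongly:

  # Two series that both trend over time but have no causal link:
  # one rises steadily, one falls steadily, each with random noise.
  import numpy as np

  rng = np.random.default_rng(42)
  years = np.arange(1900, 2020)

  ghg_proxy = 300 + 0.8 * (years - 1900) + rng.normal(0, 5, years.size)
  piracy_proxy = 1000 - 4.0 * (years - 1900) + rng.normal(0, 30, years.size)

  # Pearson correlation coefficient between the two series.
  r = np.corrcoef(ghg_proxy, piracy_proxy)[0, 1]
  print(f"correlation: {r:.2f}")  # strongly negative, yet no causation

Any two series that merely trend over the same period will correlate like this, which is exactly why correlation alone tells you nothing about cause.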

Speculative language

Beware of words like “may,” “could,” “might,” and similar speculative language. These are a sign that the conclusions being drawn are speculative and not grounded in good evidence. The more speculative the language, the less reliable the claim.

Sample size too small

Statistical analysis can be nasty and counter-intuitive, but it's the best way humans have to find patterns in noisy signals, whether the signal comes from an extraterrestrial intelligence or from a trial of a new cancer drug. Small sample sizes lower the confidence that the conclusions of the statistical analysis are true. However, the bigger the sample size, the more expensive the experiment will be, and the more time and resources it will take. Sometimes, small sample sizes are used to “pilot” experiments. Other times, small samples are unavoidable for various other pragmatic or ethical reasons.

While small samples are sometimes all that is possible, be wary of experiments involving small samples that could easily have been made bigger.
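
To see why sample size matters, here's a minimal sketch in Python with NumPy (all numbers invented for illustration) showing how the uncertainty in an estimated mean shrinks as the sample grows:

  # Draw samples of increasing size from the same population and
  # report the approximate 95% confidence half-width for the mean.
  import numpy as np

  rng = np.random.default_rng(0)
  true_mean, true_sd = 50.0, 10.0

  for n in (5, 20, 100, 1000):
      sample = rng.normal(true_mean, true_sd, n)
      half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)  # 1.96 * SE
      print(f"n={n:5d}  mean={sample.mean():6.2f}  ±{half_width:5.2f}")

With n=5 the estimate can be wildly off; with n=1000 it's pinned down tightly. That tightening is what a bigger sample buys you.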

Unrepresentative samples

Experiments cannot be performed on every possible subject or specimen, so a sample must be used. It's critically important that the sample be representative of the overall population of subjects/specimens. If it isn't, then there's no way to know if the results will apply to the overall population.

For instance, if you're designing an app that will be used by the general population (i.e., mostly people without technological backgrounds) but you only test the user interface with “geeks,” then you'll probably find that your actual user population will hate the user interface.

This means that you actually have to study your user population carefully, so that when you select your sample, you know that it will be representative.
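
One standard way to build a representative sample, once you know your population, is stratified sampling: draw from each subgroup in proportion to its share of the population. Here's a minimal sketch in Python (the groups and proportions are invented for illustration):

  # Sample from each subgroup in proportion to its size, so the
  # sample mirrors the make-up of the population.
  import random

  random.seed(1)

  # Hypothetical user population, grouped by technical background.
  population = {
      "non-technical": list(range(800)),   # 80% of users
      "technical":     list(range(200)),   # 20% of users
  }

  def stratified_sample(groups, total):
      """Sample from each group in proportion to its size."""
      n_all = sum(len(g) for g in groups.values())
      sample = []
      for name, members in groups.items():
          k = round(total * len(members) / n_all)
          sample.extend((name, m) for m in random.sample(members, k))
      return sample

  test_users = stratified_sample(population, total=50)
  # ~40 non-technical and ~10 technical users, matching the population.
  print(sum(1 for name, _ in test_users if name == "non-technical"))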

No control group

Control groups are groups of subjects/users that will not use the intervention you've developed. You do this to establish a baseline of behaviour against which you can compare the other, experimental group of subjects/users.

For instance, a recent student of mine wanted to see how well designers could use a particular method. He ran an experiment where some of the designers used his method, while others used another method with which the designers were already familiar. Both groups carried out the same design exercise, only using different methods. The results showed that my student's method had both advantages and disadvantages compared to the other method, and opened the door for assorted future research projects.

The importance of control groups in clinical drug trials should be rather obvious.
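
As a sketch of the kind of comparison a control group makes possible, here's a minimal example in Python with SciPy (all scores are invented) comparing two groups that performed the same exercise:

  # Compare experimental scores against the control baseline with a
  # two-sample t-test.
  from scipy import stats

  # Hypothetical task scores: the control group used a familiar
  # method; the experimental group used the new method.
  control      = [62, 58, 71, 65, 60, 68, 63, 59]
  experimental = [70, 66, 75, 72, 64, 78, 69, 71]

  t, p = stats.ttest_ind(experimental, control)
  print(f"t = {t:.2f}, p = {p:.3f}")
  # A small p suggests the difference is unlikely to be chance alone,
  # but only because the control group provides a baseline to compare to.

Without the control scores, there would be nothing to test against, and no way to say whether the new method helped at all.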

No blind testing

A blind test is one in which the user/subject/participant doesn't know if they are in the control group or the experimental group. This is done to remove all kinds of biases to which both the researchers and the subjects might fall prey. An experiment that isn't blind is far less likely to be reliable than a blind one.

Sometimes, though, it's impossible to get a truly blind design experiment, because the designer being studied needs to know what they're doing in order to do it. Still, one can at least make each participant blind to what other participants are doing, and that is often enough.

A double blind experiment is one in which neither the participants nor the researchers know if a given participant is in the control group or not. Again, this is done to eliminate a whole other bunch of biases. In a design experiment, the experimenter will typically have another person use random numbers to allocate participants to groups, and the experimenter will never know which human participant was in which group - that is, participation is made anonymous.
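
Here's a minimal sketch in Python (names and codes invented for illustration) of the kind of third-party allocation described above:

  # The third-party allocator runs this; the experimenter never sees it.
  import random

  participants = ["Ana", "Ben", "Chen", "Dee", "Eli", "Fay"]
  codes = {name: f"P{i:03d}" for i, name in enumerate(participants)}

  shuffled = list(participants)
  random.shuffle(shuffled)          # random order decides group membership
  half = len(shuffled) // 2
  groups = {codes[name]: ("control" if i < half else "experimental")
            for i, name in enumerate(shuffled)}

  # Only the anonymized code-to-group table goes to the experimenter;
  # the name-to-code table stays locked away with the allocator.
  print(groups)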

Cherry-picking results

To cherry-pick results means to use only those experimental data that support the desired result and to exclude the rest. This is never acceptable, and is considered a violation of ethics. There may be legitimate reasons for excluding some data points from an analysis, but those data must still be reported, and the explanations for their exclusion must be made available. To do otherwise undermines the whole point of conducting an experiment.

(Climate change deniers and certain religious extremists, such as young-earth creationists, are excellent examples of how cherry-picking is used for duplicitous reasons.)
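
To see how much silently dropping data can distort a result, here's a minimal sketch in Python (the measurements are invented for illustration):

  # Silently dropping "inconvenient" points changes the apparent result.
  measurements = [4.9, 5.1, 5.0, 4.8, 5.2, 9.7, 9.5]  # full data set

  mean_all = sum(measurements) / len(measurements)
  cherry_picked = [m for m in measurements if m < 6]   # high values dropped
  mean_picked = sum(cherry_picked) / len(cherry_picked)

  print(f"all data:      mean = {mean_all:.2f}")     # ~6.31
  print(f"cherry-picked: mean = {mean_picked:.2f}")  # 5.00

  # Excluding points may occasionally be justified, but the exclusions
  # and the reasons for them must always be reported.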

Unreplicable results

One of the cornerstones of evidence-based experimentation is the notion of reproducibility: it must be possible for the results of an experiment to be reproduced by different experimenters using different (but functionally equivalent) equipment, in different locations. Reproducibility tells us that we're seeing some kind of real effect and not just a spurious, random instance of something.

To be reproducible, an experiment must be described/reported in sufficient detail for anyone else with the right training and resources to execute their own version of it.

In design and engineering work, this is why very careful, detailed, precise, and complete documentation must be kept (in, for instance, a design journal). The documentation is necessary for others to verify the work done - by reproducing it if necessary - to validate that it was all done correctly.
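
One small, concrete reproducibility habit is to record every parameter of an analysis, including any random seeds, alongside the results. A minimal sketch in Python with NumPy (parameters invented for illustration):

  # Record all analysis parameters, including the seed, so anyone
  # can re-run the analysis and get exactly the same numbers.
  import numpy as np

  config = {"seed": 2020, "n": 100, "mean": 5.0, "sd": 1.5}

  rng = np.random.default_rng(config["seed"])
  data = rng.normal(config["mean"], config["sd"], config["n"])

  print("config:", config)                    # report alongside results
  print(f"sample mean = {data.mean():.3f}")   # identical on every re-run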

Peer review and publication

Even if research is done conscientiously and diligently, errors can creep into it. This is why research, be it in design, engineering, science, or the humanities, must be peer-reviewed. This means sending the research reports to known experts in the field, who carefully analyze them to look for mistakes. Often, these mistakes can be addressed directly by the authors without re-doing the experiments; when a serious flaw is found, however, the report fails to get published.

This is why peer-reviewed research journals are the most reliable source of detailed information.

