Module 1:1 - Introduction to Science and Data




A - Introduction

Science is not a collection of facts -- it's a system for making principled observations of the world to learn about the nature of all things. Statistics is the application of math to these observations, allowing us to think about the observations in a principled way.

The content laid out in this and all subsequent outlines focuses on the practical application of statistics in the pursuit of scientific inquiry. It touches upon mathematical principles and formulas, but only insofar as they illustrate how we use statistics to make principled inferences about the world.


B - Science and Methodology

1 - Philosophy of Science

There is an entire branch of philosophy dedicated to what knowledge is, how we acquire it, and what it means to "know" something. This field is called epistemology, and is a huge and complex component of philosophy. Science holds special status in this field.

Methods of Knowing

Two contrasting methods of knowing something:

  • Rationalism: Reason itself is a source of knowledge and is superior to and independent of observations made through the senses
    • For example, Cartesian geometry is something you can arrive at by reason, independently of observation
    • Rationalism extends from informal, everyday reasoning all the way to logic, a fully formal system of reasoning
  • Empiricism: Knowledge and evidence must be based on that which can be physically observed
    • Empiricism extends from informal, everyday observation to science, the most rigorous form of principled observation

The Problem of Demarcation

Science is granted special epistemic status -- we regard it as a fundamentally different (and better) way of generating knowledge through observation of the world. Science isn't always right, but as a society we grant it special status as the most rigorous empirical method of knowing.

Recognizing the difference between scientific (e.g. Physics) and pseudo-scientific (e.g. Astrology) enterprises is crucial. How do we demarcate ("distinguish") between the two? We use the four criteria below.

Characteristics of Science

For a method of knowledge generation to be considered science, it must meet four criteria:

(1) Empirical: The process of knowledge generation hinges on observations of the world.

(2) Falsifiable: Theories and hypotheses must be able to be proven false through observation.

  • This criterion is attributed to Karl Popper, who was impressed with Einstein's Theory of Relativity and the fact that it made claims that could be proven to be untrue through observation.

(3) Objective: All people (across space, time, cultures, etc.) will make the same observations and come to the same conclusions.

(4) Public: Findings are presented honestly to the public so that they can be vetted and debated by all, which helps protect against malicious influence.

  • See the story of Diederik Stapel for an example of how violating this principle can lead to major problems.


2 - The Scientific Method

Sir Francis Bacon (1561 - 1626)

Bacon is considered by many to be the father of empiricism and the scientific method. In fact, the scientific method was known for a while as the Baconian Method. He published this method in his seminal work, the Novum Organum Scientiarum in 1620.

A visualization of the steps of the scientific method, taken from Dr. Julian Parris' slides in the video The Scientific Method

The Scientific Process

The scientific method follows a particular process, or series of steps:

(1) Observe something in the world

(2) Generate a hypothesis about the state of nature that could give rise to our observation

(3) Generate a prediction about what else one should observe if our hypothesis is true of nature

(4) Test your prediction empirically

(5) Evaluate the results of your test

(6) Go back to your original hypothesis, refine or modify it, and proceed through the steps again


3 - Types of Research

Scientific research generally falls into two methodological groups: correlational and experimental.

An XKCD comic (license) illustrating that correlation does not imply causation; as adapted by Dr. Julian Parris in his slides for the video Manipulation in Experimental Design

Correlational

Correlational methods are those in which we observe two or more variables as they occur naturally, to determine the relationship (if any) between them. Correlation does not imply causation between the variables measured (though it is possible that two correlated variables are causally linked). This limitation is due to the possibility of spurious correlation.

  • Variable: any characteristic or condition that can take on different values for different individuals or observations
  • Spurious correlation: an unmeasured or unseen variable may be causally linked to the variables we measure and create the appearance of a direct relationship between our measured variables, even though there isn't one in reality; also known as confounding
    • A grim example of spurious correlation: Data has shown that as ice cream sales increase, murder rates go up. Does this mean that more ice cream causes more murders? Of course not; it is more plausible that warmer temperatures result in more people eating ice cream, and also result in more people being out and about in public, able to be murdered.
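
To make spurious correlation concrete, here is a minimal simulation sketch (not from the course materials; all numbers and variable names are invented) in which temperature causally drives both ice cream sales and crime, so the two outcomes end up correlated even though neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data: temperature drives both outcomes.
temperature = rng.normal(70, 15, size=365)                     # degrees F
ice_cream_sales = 2.0 * temperature + rng.normal(0, 10, 365)   # caused by temperature
crime_count = 0.5 * temperature + rng.normal(0, 10, 365)       # also caused by temperature

# Ice cream sales and crime are clearly correlated...
print(np.corrcoef(ice_cream_sales, crime_count)[0, 1])         # noticeably positive

# ...but the correlation largely vanishes once temperature is held (roughly)
# constant, e.g. by looking only at days within a narrow temperature band.
mild_days = (temperature > 68) & (temperature < 72)
print(np.corrcoef(ice_cream_sales[mild_days], crime_count[mild_days])[0, 1])  # near zero
```
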
Venn diagram of the Direct Method of Agreement; taken from Dr. Julian Parris' slides in the video Manipulation in Experimental Design
Venn diagram of the Method of Difference; taken from Dr. Julian Parris' slides in the video Manipulation in Experimental Design
Illustration of the Method of Concomitant Variation; taken from Dr. Julian Parris' slides in the video Manipulation in Experimental Design

Experimental

When it is crucially important to establish a causal relationship between two or more variables, we must do an experiment. Experiments have two main characteristics:

(1) The experimenter manipulates an independent variable to see the effect on a dependent variable; a notion often attributed to John Stuart Mill

  • Independent Variable: The variable that is manipulated or changed by the experimenter; the different conditions or changes that the experimenter creates are called levels
  • Dependent Variable: The variable that is measured in the experiment

(2) Extraneous variables are controlled (or held constant) across all levels of the independent variable

  • Extraneous variable: a variable that has some relationship with the dependent variable but is not of interest in the experiment
  • We control for extraneous variables through random assignment
  • We must watch out for particular extraneous variables, called confounding variables: variables that correlate with both the independent and dependent variables

Random Assignment

With random assignment, chance alone dictates which treatment an individual receives in the experiment. This ensures that there are no systematic differences across groups before treatment. Groups will nearly always differ, even with random assignment, but their differences will be due to chance only.
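
As a rough sketch of what this looks like in practice (a hypothetical example; the participant IDs and group sizes are made up), random assignment can be as simple as shuffling the list of units and splitting it in half, so that chance alone decides who receives each treatment:

```python
import random

random.seed(42)

subjects = [f"S{i:03d}" for i in range(1, 101)]  # 100 hypothetical participants

# Random assignment: shuffle, then split into two equal groups.
random.shuffle(subjects)
treatment_group = subjects[:50]
control_group = subjects[50:]

print(treatment_group[:5])
print(control_group[:5])
```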

John Stuart Mill's Canons of Experimental Inquiry

In Mill's work A System of Logic, he included a chapter called "Canons of Experimental Inquiry" that described ways in which we can study the world to infer causal relationships between variables. Below are three of these ways, which form the basis of much experimental research.

  • Direct Method of Agreement: If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree is the cause (or effect) of the given phenomenon.
  • Method of Difference: If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one only occurring in the former; the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensable part of the cause of the phenomenon. (This is the method you'll most often see in psychological science.)
  • Method of Concomitant Variation: Whatever phenomenon varies in any manner whenever another phenomenon varies in the same particular manner, is either a cause or effect of that phenomenon, or is connected with it through some fact of causation.

An Example: Plane Nose Types and Flight Speed

Consider a case in which we want to know whether or not the nose type on a fighter jet affects top flight speed. We have two nose types (call them A and B) that we want to investigate. We could do either a correlational or an experimental study, but only the experimental study will allow us to infer causation.

Correlational Study

1) Go to the hangar, and pick 50 planes with nose type A (Group A) and 50 planes with nose type B (Group B).

2) Check past flight logs for each group, and determine whether one group exhibits greater top speed than the other.

Let's say that the logs show Group B to have a higher top speed on average than Group A. This might be due to nose type, but it could be due to all sorts of other variables that systematically differ between the two groups, for example:

  • Group A planes were older than Group B and had more wear-and-tear
  • Group A planes had smaller engines than Group B planes
  • Group A planes were flown under different circumstances that necessitated slower speeds

These are all potential confounding variables, in that they correlate with both the independent and dependent variables. With these potential confounds, it's impossible to apply the Method of Difference to infer that nose type has a causal effect on top flight speed, even though we observed a difference between the groups.

Diagram of the plane experiment; taken from Dr. Julian Parris' slides in the video Experimental Methods of Research

Experimental Study

1) Pick 50 planes at random and assign to Group A, and pick 50 planes at random and assign to Group B.

2) Manipulate the nose type (the independent variable) by attaching nose type A to all planes in Group A and nose type B to all planes in Group B.

  • In reality, this isn't practical, as this would likely destroy the planes. Practicality of experimental treatment is always an important consideration in experimental design.

3) Fly all planes under the exact same conditions (e.g. in a controlled wind tunnel), to control for other potential confounds (e.g. flying Group A planes in higher headwinds).

4) Measure the top speed of each plane (our dependent variable).

Because we have controlled for extraneous variables through random assignment (and by flying the planes under the exact same conditions), we know that only chance dictates any differences between planes in Groups A and B. If we now observe that Group B planes are faster on average than Group A planes, we can apply the Method of Difference to infer that nose type B causes a plane to have a higher top speed than nose type A.
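
The sketch below ties the whole design together (purely illustrative: the speeds, group sizes, and a built-in 25 mph advantage for nose type B are all invented assumptions). Planes are randomly assigned to nose types, "flown" under identical simulated conditions, and the Method of Difference is applied by comparing group means:

```python
import numpy as np

rng = np.random.default_rng(1)

n_planes = 100
plane_ids = np.arange(n_planes)
rng.shuffle(plane_ids)                                # random assignment
group_a, group_b = plane_ids[:50], plane_ids[50:]

# Each plane has its own baseline top speed (age, engine, wear all lumped together).
baseline_speed = rng.normal(1500, 40, size=n_planes)  # mph, hypothetical

# Identical flight conditions for every plane; in this fiction, nose type B adds 25 mph.
top_speed = baseline_speed.copy()
top_speed[group_b] += 25.0

# Method of Difference: nose type is the only systematic difference between groups,
# so the gap in mean top speed estimates its causal effect.
print(top_speed[group_b].mean() - top_speed[group_a].mean())  # roughly 25, plus chance variation
```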


C - Introduction to Data

1 - Basic Terminology

Statistic

A statistic is any function of data (e.g. a proportion, sum, mean, etc.). A statistic is used to represent data in some meaningful way, and is defined by some specific operation carried out on the data.
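
As a toy illustration (the data here are made up), each of the following quantities is a statistic, because each is just a specific function computed from the data:

```python
data = [4, 7, 7, 2, 9, 7]                    # hypothetical observations

total = sum(data)                            # a sum is a statistic
mean = total / len(data)                     # so is a mean
prop_sevens = data.count(7) / len(data)      # and so is a proportion

print(total, mean, prop_sevens)              # 36 6.0 0.5
```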

Descriptive vs. Inferential Statistics

1) Descriptive statistics describe, summarize, organize, and simplify the communication of data. For example, we might say that a politician received 53% of the vote in an election. This statistic (a proportion) describes the voting data in a meaningful way.

2) Inferential statistics facilitate inferences about populations from data collected in samples. For example, we might poll a group of people and find that 55% of people say they'll vote for some politician in an upcoming election. In this case, we might infer that the politician will receive 55% of the votes when the election happens.

Data, Populations, and Samples

1) Data (plural, singular "datum" or "datapoint") is a collection of observations.

2) A population is the set of all possible individuals/units that we're interested in. The population is defined according to what we want to make inferences about. For example, if we want to know something about UCSD students, then our population is all students at UCSD. Populations can be small, or they can be infinite.

3) A sample is a set of observations selected from a population. We use the sample to represent the population so that we don't have to measure the entire population. When we sample from a population, we want to do so randomly. For example, if we want to know about the heights of UCSD students, we should randomly select a subset of UCSD students and measure their heights. If we measured only the basketball players, our sample wouldn't be representative of our population.

Parameters, Sample Statistics, and Sampling Error

1) Population parameters are statistics calculated over all units of the population. That is, a parameter describes a population.

2) Sample statistics are statistics calculated over all observations in a sample. We generally use sample statistics to estimate population parameters, though our sample statistics won't always equal our population parameter.

For example, if we're interested in the heights of UCSD students, our population would be all UCSD students. If we measured all students and took the mean height, we'd have a population parameter. If we randomly sample some UCSD students and take the mean of their heights, we'd have a sample statistic. The mean height of our sample is unlikely to exactly equal the mean height in the population. That is, our sample statistic is unlikely to exactly equal the corresponding population parameter.

3) Sampling error is the discrepancy between our population parameter and the sample statistic we use to estimate that population parameter.
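
A small simulation (entirely hypothetical heights and numbers) makes the distinction concrete: the population mean is a fixed parameter, each random sample yields a slightly different sample mean, and the gap between the two is the sampling error:

```python
import numpy as np

rng = np.random.default_rng(7)

# Pretend this is the entire population of 30,000 student heights (cm).
population = rng.normal(170, 10, size=30_000)
mu = population.mean()                           # population parameter

sample = rng.choice(population, size=100, replace=False)
x_bar = sample.mean()                            # sample statistic

print(mu, x_bar, x_bar - mu)                     # the last value is the sampling error
```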


2 - Measurement Theory

Measurement is the process of assigning numbers or labels to physical phenomena according to some rule. A prime example is measuring length using a ruler or tape measure. The physical phenomenon we're measuring is extent in space, and the rule we apply is embodied by the marks on our ruler that correspond to inches, feet, etc.

Constructs & Operational Definitions

Sometimes we want to measure internal attributes or characteristics that cannot be directly observed. These are called constructs.

For example, if we want to measure how much stress somebody feels before giving a public talk, we want to measure an internal state. We must decide on some way to measure something physical that corresponds to that state. Here, we might simply ask the person to report how stressed they feel on a 1-to-10 scale. This measurement procedure would be known as our operational definition of pre-talk stress.

In psychology, we're often interested in constructs (mental states), so operational definitions become very important and can sometimes be contentious.

The attributes of the four measurement scales, taken from Dr. Julian Parris' slides in the video Scales of Measurement

Scales of Measurement

1) Nominal: A set of categories with no intrinsic order (e.g., baby names, pizza toppings)

2) Ordinal: A set of categories that can be put into order (e.g., finishing place in a race); the distance between adjacent points on an ordinal scale will not necessarily be equal (e.g., 1st place in a race might beat 2nd by one minute, but 2nd place might beat 3rd place by only 30 seconds)

3) Interval: A set of categories that can be put into order, and the distance between adjacent points is equal; in an interval scale, the zero point doesn't literally mean no presence of the attribute (e.g., degrees Celsius, where the change in thermodynamic energy between 1 and 2 degrees is the same as between 2 and 3 degrees, but zero degrees does not literally mean zero thermodynamic energy)

4) Ratio: An interval scale that has a meaningful zero point (e.g. distance, where zero inches means literally no extent in space; the Kelvin scale, where zero kelvin literally means no thermodynamic energy)
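
One way to see the practical difference between the first two scales (an illustrative sketch using pandas; the column values are made up) is that an ordinal variable carries an ordering that a nominal one does not:

```python
import pandas as pd

# Nominal: categories with no intrinsic order.
topping = pd.Categorical(["pepperoni", "mushroom", "pepperoni"], ordered=False)

# Ordinal: ordered categories, with no guarantee of equal spacing between them.
place = pd.Categorical(["2nd", "1st", "3rd"],
                       categories=["1st", "2nd", "3rd"], ordered=True)

print(place.min(), place.max())   # ordering is meaningful: 1st 3rd
# topping.min() would raise a TypeError, because nominal categories have no order.
```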

Reliability & Validity

We want our measurement to be reliable: It should yield the same value whenever we measure the same entity under the same conditions.

We also want our measure to be valid: It should accurately capture the construct being measured. For example, a well-vetted intelligence test is a relatively valid measure of true intelligence. If we were to operationalize intelligence as a person's shoe size, our measure certainly would not be valid.
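
As a rough numerical illustration of reliability (all values are invented), a reliable measure returns nearly the same number every time we measure the same entity under the same conditions, while an unreliable one scatters widely:

```python
import numpy as np

rng = np.random.default_rng(3)

true_height = 175.0  # cm, the quantity we are trying to measure

reliable_readings = true_height + rng.normal(0, 0.2, size=10)    # small measurement noise
unreliable_readings = true_height + rng.normal(0, 8.0, size=10)  # large measurement noise

# Smaller spread across repeated measurements of the same entity = higher reliability.
print(reliable_readings.std(), unreliable_readings.std())
```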


3 - Basics of JMP Pro

JMP is a statistical software package produced by SAS. It's used widely throughout industry and academia, and allows the user to perform sophisticated statistical procedures that most wouldn't have dreamed of attempting as little as a few decades ago.

Anatomy of JMP

An example JMP data table

JMP stores data in data tables, much like spreadsheets in Microsoft Excel or similar. They are organized by rows and columns, with data points in the individual cells. JMP imposes structure on the data, so things aren't as free-form as in Excel and other spreadsheet programs. Rows correspond to individual observations/units, and columns correspond to different variables we've measured across our observations.

JMP data tables include additional information in panels on the left. There are red triangles that are drop-down menus with additional options, such as analysis scripts we can run on the data in the table. There is a columns list which allows us to interact with our columns in various ways. Each column has an associated modeling type, which tells JMP what type of data (nominal, ordinal, continuous) it's dealing with in that column. It's important to get these right, so JMP knows the proper way to treat the data in the column.

The left panel also has a rows section that tells us how many rows we have, how many we've selected, and how many we've excluded or hidden from analysis.

To start a new data table, go to File>New>New Data Table. To add columns or rows, just double-click an empty row or column heading space. You can change a column name by double-clicking the name, typing in the new name, and pressing Enter. Enter data simply by typing directly into a table cell.

To open a data table, just go to File>Open. JMP can open a large variety of files, including delimited text files, Excel spreadsheets, and even HTML tables from the Internet.

The JMP Interface

JMP is progressive and context-dependent:

1) Progressive: As you produce output in JMP, you'll be presented with options for further things you can do at that point. JMP doesn't barrage you with all possibilities all at once, but rather only those that it thinks you might want to do next.

2) Context-dependent: The type of output you get from JMP depends on the modeling type of the variables you've analyzed. JMP knows that certain analyses and visualizations are appropriate for certain types of data, and it uses this to make informed decisions about what types of operations to carry out.

If at any point you want to do something that JMP hasn't provided to you as an option, just go to the Statistics Index (under Help>Statistics Index) to find the procedure you're looking for.

In any JMP output window, keep your eyes open for the red triangles: these are menus that provide you with additional options.