
ISBN13: 978-0534377700

ISBN10: 053437770X

Edition: 5th (2002)

Copyright: 2002

Publisher: Duxbury Press

Published: 2002

International: No


STATISTICAL METHODS FOR PSYCHOLOGY surveys the statistical techniques commonly used in the behavioral and social sciences, especially psychology and education. The book has two underlying themes that are largely independent of the statistical hypothesis tests that form its main content. The first is the importance of looking at the data before formulating a hypothesis: with this in mind, the author discusses in detail plotting data, looking for outliers, and checking assumptions, and graphical displays are used extensively. The second is the importance of the relationship between the statistical test to be employed and the theoretical questions posed by the experiment. To emphasize this relationship, the author uses real examples to help students understand the purpose behind an experiment and the predictions made by the theory. Although the book is designed for students at the intermediate level or above, it assumes neither a previous course in statistics nor mathematics beyond high-school algebra.
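The "look at the data first" theme can be illustrated with a minimal sketch (not from the book): before running any hypothesis test, summarize the sample and flag outliers with the conventional 1.5 × IQR rule. The data values here are made up for illustration.

```python
# A minimal sketch of exploratory checking before hypothesis testing:
# summarize a sample and flag outliers using the 1.5 * IQR rule.
import statistics


def flag_outliers(data):
    """Return values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < low or x > high]


scores = [12, 14, 15, 15, 16, 17, 18, 19, 20, 42]  # hypothetical scores
print("mean:", statistics.mean(scores))            # 18.8
print("outliers:", flag_outliers(scores))          # [42]
```

The extreme score of 42 inflates the mean well above the bulk of the data, which is exactly the kind of feature a plot or outlier check would reveal before a test is chosen.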

Table of Contents

1. BASIC CONCEPTS.

Important Terms.

Descriptive and Inferential Statistics.

Measurement Scales.

Using Computers.

The Plan of the Book.

2. DESCRIBING AND EXPLORING DATA.

Plotting Data.

Histograms.

Stem-and-Leaf Displays.

Alternative Methods of Plotting Data.

Describing Distributions.

Using Computer Programs to Display Data.

Notation.

Measures of Central Tendency.

Measures of Variability.

Boxplots: Graphical Representations of Dispersion and Extreme Scores.

Obtaining Measures of Dispersion Using Minitab.

Percentiles, Quartiles, and Deciles.

The Effect of Linear Transformations on Data.

3. THE NORMAL DISTRIBUTION.

The Normal Distribution.

The Standard Normal Distribution.

Using the Tables of the Standard Normal Distribution.

Setting Probable Limits on an Observation.

Measures Related to z.

4. SAMPLING DISTRIBUTIONS AND HYPOTHESIS TESTING.

Two Simple Experiments Involving Course Evaluations and Rude Motorists.

Sampling Distributions.

Hypothesis Testing.

The Null Hypothesis.

Test Statistics and Their Sampling Distributions.

Using the Normal Distribution to Test Hypotheses.

Type I and Type II Errors.

One- and Two-Tailed Tests.

What Does It Mean to Reject the Null Hypothesis?

Effect Size.

A Final Worked Example.

Back to Course Evaluations and Rude Motorists.

5. BASIC CONCEPTS OF PROBABILITY.

Probability.

Basic Terminology and Rules.

Discrete Versus Continuous Variables.

Probability Distributions for Discrete Variables.

Probability Distributions for Continuous Variables.

Permutations and Combinations.

The Binomial Distribution.

Using the Binomial Distribution to Test Hypotheses.

The Multinomial Distribution.

6. CATEGORICAL DATA AND CHI-SQUARE.

The Chi-Square Distribution.

Statistical Importance of the Chi-Square Distribution.

The Chi-Square Goodness-of-Fit Test: One-Way Classification.

Two Classification Variables: Contingency Table Analysis.

Chi-Square for Larger Contingency Tables.

Chi-Square for Ordinal Data.

Summary of the Assumptions of Chi-Square.

One- and Two-Tailed Tests.

Likelihood Ratio Test.

Measures of Association.

7. HYPOTHESIS TESTS APPLIED TO MEANS.

Sampling Distribution of the Mean.

Testing Hypotheses About Means: σ Known.

Testing a Sample Mean When σ Is Unknown: The One-Sample t Test.

Hypothesis Tests Applied to Means: Two Matched Samples.

Hypothesis Tests Applied to Means: Two Independent Samples.

Confidence Intervals.

A Second Worked Example.

Heterogeneity of Variance: The Behrens-Fisher Problem.

8. POWER.

Factors Affecting the Power of a Test.

Effect Size.

Power Calculations for the One-Sample t.

Power Calculations for Differences Between Two Independent Means.

Power Calculations for Matched-Sample t.

Power Considerations in Terms of Sample Size.

Post-Hoc Power.

9. CORRELATION AND REGRESSION.

Scatterplot.

The Relationship Between Stress and Health.

The Covariance.

The Pearson Product-Moment Correlation Coefficient (r).

The Regression Line.

The Accuracy of Prediction.

Assumptions Underlying Regression and Correlation.

Confidence Limits on Y.

A Computer Example Showing the Role of Test-Taking Skills.

Hypothesis Testing.

The Role of Assumptions in Correlation and Regression.

Factors That Affect the Correlation.

Power Calculation for Pearson's r.

10. ALTERNATIVE CORRELATIONAL TECHNIQUES.

Point-Biserial Correlation and PHI: Pearson Correlation by Another Name.

Biserial and Tetrachoric Correlation: Non-Pearson Correlation Coefficients.

Correlation Coefficients for Ranked Data.

Analysis of Contingency Tables with Ordered Variables.

Kendall's Coefficient of Concordance (W).

11. SIMPLE ANALYSIS OF VARIANCE.

An Example.

The Underlying Model.

The Logic of the Analysis of Variance.

Calculations in the Analysis of Variance.

Computer Solutions.

Derivation of the Analysis of Variance.

Unequal Sample Sizes.

Violations of Assumptions.

Transformations.

Fixed Versus Random Models.

Magnitude of Experimental Effect.

Power.

Computer Analyses.

12. MULTIPLE COMPARISONS AMONG TREATMENT MEANS.

Error Rates.

Multiple Comparisons in a Simple Experiment on Morphine Tolerance.

A Priori Comparisons.

Post Hoc Comparisons.

Tukey's Test.

The Ryan Procedure (REGWQ).

The Scheffé Test.

Dunnett's Test for Comparing All Treatments with a Control.

Comparison of Dunnett's Test and the Bonferroni t.

Comparison of the Alternative Procedures.

Which Test?

Computer Solutions.

Trend Analysis.

13. FACTORIAL ANALYSIS OF VARIANCE.

An Extension of the Eysenck Study.

Structural Models and Expected Mean Squares.

Interactions.

Simple Effects.

Analysis of Variance Applied to the Effects of Smoking.

Multiple Comparisons.

Power Analysis for Factorial Experiments.

Expected Mean Squares.

Magnitude of Experimental Effects.

Unequal Sample Sizes.

Analysis for Unequal Sample Sizes Using SAS.

Higher-Order Factorial Designs.

A Computer Example.

14. REPEATED-MEASURES DESIGNS.

The Structural Model.

F Ratios.

The Covariance Matrix.

Analysis of Variance Applied to Relaxation Therapy.

One Between-Subjects Variable and One Within-Subjects Variable.

Two Within-Subjects Variables.

Two Between-Subjects Variables and One Within-Subjects Variable.

Two Within-Subjects Variables and One Between-Subjects Variable.

Three Within-Subjects Variables.

Intraclass Correlation.

Other Considerations.

A Computer Analysis Using a Traditional Approach.

Multivariate Analysis of Variance for Repeated-Measures Designs.

15. MULTIPLE REGRESSION.

Multiple Linear Regression.

Standard Errors and Test of Regression Coefficients.

Residual Variance.

Distribution Assumptions.

The Multiple Correlation Coefficient.

Geometric Representation of Multiple Regression.

Partial and Semipartial Correlation.

Suppressor Variables.

Regression Diagnostics.

Constructing a Regression Equation.

The "Importance" of Individual Variables.

Using Approximate Regression Coefficients.

Mediating and Moderating Relationships.

Logistic Regression.

16. ANALYSES OF VARIANCE AND COVARIANCE AS GENERAL LINEAR MODELS.

The General Linear Model.

One-Way Analysis of Variance.

Factorial Designs.

Analysis of Variance with Unequal Sample Sizes.

The One-Way Analysis of Covariance.

Interpreting an Analysis of Covariance.

The Factorial Analysis of Covariance.

Using Multiple Covariates.

Alternative Experimental Designs.

17. LOG-LINEAR ANALYSIS.

Two-Way Contingency Tables.

Model Specification.

Testing Models.

Odds and Odds Ratios.

Treatment Effects (Lambda).

Three-Way Tables.

Deriving Models.

Treatment Effects.

18. RESAMPLING AND NON-PARAMETRIC APPROACHES TO DATA.

Bootstrapping as a General Approach.

Bootstrapping with One Sample.

Resampling with Two Paired Samples.

Resampling with Two Independent Samples.

Bootstrapping Confidence Limits on a Correlation Coefficient.

Wilcoxon's Rank-Sum Test.

Wilcoxon's Matched-Pairs Signed-Ranks Test.

The Sign Test.

Kruskal-Wallis One-Way Analysis of Variance.

Friedman's Rank Test for k Correlated Samples.

APPENDICES.

REFERENCES.

ANSWERS TO SELECTED EXERCISES.

INDEX.
