Summary: The approach of Statistical Methods in Education and Psychology, Third Edition, is conceptual rather than mathematical. The authors stress the understanding, application, and interpretation of concepts rather than derivation, proof, or hand computation. Selection of topics in the book was guided by three considerations: (1) What are the most useful statistical methods? (2) Which statistical methods are the most widely used in journals in the behavioral and social sciences? (3) Which statistical methods are fundamental to further study?
Glass, Gene V.: Arizona State University
Hopkins, Kenneth D.: University of Colorado at Boulder
The "Image" of Statistics.
Statistics and Mathematics.
2. Measurement, Variables, and Scales.
Variables and their Measurement.
Measurement: The Observation of Variables.
Measurement Scales: Nominal Measurement.
Interrelationships among Measurement Scales.
Continuous and Discrete Variables.
3. Frequency Distributions and Visual Displays of Data.
Grouped Frequency Distributions.
Grouping and Loss of Information.
Graphing a Frequency Distribution: The Histogram.
Frequency and Percentage Polygons.
Type of Distribution.
Cumulative Distributions and the Ogive Curve.
Misleading Graphs--How to Lie with Statistics.
4. Measures of Central Tendency.
More Summation Notation.
Adding or Subtracting a Constant.
Multiplying or Dividing by a Constant.
Sum of Deviations.
Sum of Squared Deviations.
The Mean of the Sum of Two or More Scores.
The Mean of a Difference.
Mean, Median, and Mode of Two or More Groups.
Interpretation of Mode, Median, and Mean.
Central Tendency and Skewness.
Measures of Central Tendency as Inferential Statistics.
Which Measure is Best?
5. Measures of Variability.
H-Spread and the Interquartile Range.
Sum of Squares.
More about the Summation Operator, Σ.
The Variance of a Population.
The Variance Estimated From a Sample.
The Standard Deviation.
The Effect of Adding or Subtracting a Constant on Measures of Variability.
The Effect of Multiplying or Dividing a Constant on Measures of Variability.
Variance of a Combined Distribution.
Inferential Properties of the Range, s², and s.
6. The Normal Distribution and Standard Scores.
The Importance of the Normal Distribution.
God Loves the Normal Curve.
The Standard Normal Distribution as a Standard Reference Distribution: z-Scores.
Ordinates of the Normal Distribution.
Areas Under the Normal Curve.
Other Standard Scores.
Areas Under the Normal Curve in Samples.
7. Correlation: Measures of Relationship Between Two Variables.
The Concept of Correlation.
The Measurement of Correlation.
The Use of Correlation Coefficients.
Interpreting r as a Percent.
Linear and Curvilinear Relationships.
Calculating the Pearson Product-Moment Correlation Coefficient, r.
Correlation Expressed in Terms of z-Scores.
Linear Transformations and Correlation.
The Bivariate Normal Distribution.
Effects of Variability on Correlation.
Correcting for Restricted Variability.
Effect of Measurement Error on r and the Correction for Attenuation.
The Pearson r and Marginal Distributions.
The Effect of the Unit of Analysis on Correlation: Ecological Correlations.
The Variance of a Sum.
The Variance of a Difference.
Additional Measures of Relationship: The Spearman Rank Correlation.
The Phi Coefficient: Both X and Y are Dichotomies.
The Point Biserial Coefficient.
The Biserial Correlation.
Biserial versus Point-Biserial Correlation Coefficients.
The Tetrachoric Coefficient.
Causation and Correlation.
8. Regression and Prediction.
Purposes of Regression Analysis.
The Regression Effect.
The Regression Equation Expressed in Standard z-Scores.
Use of Regression Equations.
Estimating Y from X: The Raw-Score Regression Equation.
Error of Estimate.
Proportion of Predictable Variance.
Homoscedasticity and the Standard Error of Estimate.
Regression and Pretest-Posttest Gains.
Second-Order Partial Correlation.
Multiple Regression and Multiple Correlation.
The Standardized Regression Equation.
The Raw-Score Regression Equation.
Multiple Regression Equation with Three or More Independent Variables.
Stepwise Multiple Regression.
Illustration of Stepwise Multiple Regression.
Dichotomous and Categorical Variables as Predictors.
The Standard Error of Estimate in Multiple Regression.
The Multiple Correlation as an Inferential Statistic: Correction for Bias.
Curvilinear Regression and Correlation.
Measuring Non-linear Relationships between Two Variables.
Transforming Non-linear Relationships into Linear Relationships.
Dichotomous Dependent Variables: Logistic Regression.
Categorical Dependent Variables with More than Two Categories: Discriminant Analysis.
9. Probability.
Probability as a Mathematical System.
First Addition Rule of Probabilities.
Second Addition Rule of Probabilities.
Multiplication Rule of Probabilities.
The Binomial and Sign Test.
Intuition and Probability.
Probability as an Area.
Expectations and Moments.
10. Statistical Inference: Sampling and Interval Estimation.
Populations and Samples: Parameters and Statistics.
Infinite versus Finite Populations.
Randomness and Random Sampling.
Accidental or Convenience Samples.
Point and Interval Estimates.
The Standard Error of the Mean.
Relationship of σX̄ to n.
Confidence Intervals when σ is Known: An Example.
Central Limit Theorem: A Demonstration.
The Use of Sampling Distributions.
Proof that σ²X̄ = σ²/n.
Properties of Estimators.
11. Introduction to Hypothesis Testing.
Statistical Hypotheses and Explanations.
Statistical versus Scientific Hypotheses.
Testing Hypotheses about μ.
Testing H0: μ = K, a One-Sample z-Test.
Two Types of Errors in Hypothesis Testing.
Hypothesis Testing and Confidence Intervals.
Type-II Error, β, and Power.
Effect of α on Power.
Power and the Value Hypothesized in the Alternative Hypothesis.
Methods of Increasing Power.
Non-Directional and Directional Alternatives: Two-Tailed versus One-Tailed Tests.
Statistical Significance versus Practical Significance.
Confidence Limits for the Population Median.
Inference Regarding μ when σ is not Known: t versus z.
Confidence Intervals Using the t-Distribution.
Accuracy of Confidence Intervals when Sampling Non-Normal Distributions.
12. Inferences about the Difference Between Two Means.
Testing Statistical Hypotheses Involving Two Means.
The Null Hypotheses.
The t-Test for Comparing Two Independent Means.
Confidence Intervals about Mean Differences.
t-Test Assumptions and Robustness.
Homogeneity of Variance.
What if Sample Sizes Are Unequal and Variances Are Heterogeneous: The Welch t' Test.
Independence of Observations.
Testing H0: μ1 = μ2 with Paired Observations.
Direct Difference for the t-Test for Paired Observations.
Cautions Regarding the Matched-Pairs Designs in Research.
Power when Comparing Means.
Non-Parametric Alternatives: The Mann-Whitney Test and the Wilcoxon Signed-Rank Test.
13. Statistics for Categorical Dependent Variables: Inferences about Proportions.
The Proportion as a Mean.
The Variance of a Proportion.
The Sampling Distribution of a Proportion: The Standard Error of p.
The Influence of n on σp.
Influence of the Sampling Fraction on σp.
The Influence of P on σp.
Confidence Intervals for P.
Quick Confidence Intervals for P.
Testing H0: P = K.
Testing Empirical versus Theoretical Distributions: Chi-Square Goodness of Fit Test.
Testing Differences among Proportions: The Chi-Square Test of Association.
Other Formulas for the Chi-Square Test of Association.
The χ² Median Test.
Chi-Square and the Phi Coefficient.
Independence of Observations.
Inferences about H0: P1 = P2 when Observations are Paired: McNemar's Test for Correlated Proportions.
14. Inferences about Correlation Coefficients.
Testing Statistical Hypotheses Regarding ρ.
Testing H0: ρ = 0 Using the t-Test.
Directional Alternatives: "Two-Tailed" vs. "One-Tailed" Tests.
Sampling Distribution of r.
The Fisher Z-Transformation.
Setting Confidence Intervals for r.
Determining Confidence Intervals Graphically.
Testing the Difference between Independent Correlation Coefficients: H0: ρ1 = ρ2 = ... = ρJ.
Testing Differences between Two Dependent Correlation Coefficients: H0: ρ31 = ρ32.
Inferences about Other Correlation Coefficients.
The Point-Biserial Correlation Coefficient, rpb.
Spearman's Rank Correlation: H0: ρranks = 0.
Partial Correlation: H0: ρ12.3 = 0.
Significance of a Multiple Correlation Coefficient.
Statistical Significance in Stepwise Multiple Regression.
Significance of the Biserial Correlation Coefficient rbis.
Significance of the Tetrachoric Correlation Coefficient rtet.
Significance of the Correlation Ratio Eta.
Testing for Non-linearity of Regression.
15. One-Factor Analysis of Variance.
Why Not Several t-Tests?
Sum of Squares Between, SSB.
Sum of Squares Within, SSW.
ANOVA Computational Illustration.
Mean Square Between Groups, MSB.
Mean Square Within Groups, MSW.
ANOVA with Equal n's.
A Statistical Model for the Data.
Estimates of the Terms in the Model.
Sum of Squares.
Restatement of the Null Hypothesis in Terms of Population Means.
Degrees of Freedom.
Mean Squares: The Expected Value of MSW.
The Expected Value of MSB.
Some Distribution Theory.
The F-Test of the Null Hypothesis: Rationale and Procedure.
Type-I versus Type-II Errors: a and b.
A Summary of Procedures for One-Factor ANOVA.
Consequences of Failure to Meet the ANOVA Assumptions: The "Robustness" of ANOVA.
The Welch and Brown-Forsythe Modifications of ANOVA: What Does One Do When σ²'s and n's Differ?
The Power of the F-Test.
Power When σ is Unknown.
A Table for Estimating Power When J=2.
The Non-Parametric Alternative: The Kruskal-Wallis Test.
16. Inferences About Variances.
The Chi-Square Distribution with ν = 1, χ²1.
The Chi-Square Distribution with ν Degrees of Freedom, χ²ν.
Inferences about the Population Variance: H0: σ² = K.
Inferences about Two Independent Variances: H0: σ²1 = σ²2.
Testing Homogeneity of Variance: Hartley's Fmax Test.
Testing Homogeneity Variance from J Independent Samples: The Bartlett Test.
Other Tests of Homogeneity of Variance: The Levene and Brown-Forsythe Tests.
Inferences about H0: σ²1 = σ²2 with Paired Observations.
Relationships among the Normal, t, χ², and F-Distributions.
17. Multiple Comparisons and Trend Analysis.
Testing All Pairs of Means: The Studentized Range Statistic, q.
The Tukey Method of Multiple Comparisons.
The Effect Size of Mean Differences.
The Basis for Type-I Error Rate: Contrast vs. Family.
The Newman-Keuls Method.
The Tukey and Newman-Keuls Methods Compared.
The Definition of a Contrast.
Simple versus Complex Contrasts.
The Standard Error of a Contrast.
The t-Ratio for a Contrast.
Planned versus Post Hoc Comparisons.
Dunn (Bonferroni) Method of Multiple Comparisons.
Dunnett Method of Multiple Comparisons.
Scheffé Method of Multiple Comparisons.
Planned Orthogonal Contrasts.
Confidence Intervals for Contrasts.
Relative Power of Multiple Comparison Techniques.
Significance of Trend Components.
Relation of Trends to Correlation Coefficients.
Assumptions of MC Methods.
Multiple Comparisons for Other Statistics.
Chapter Summary and Criteria for Selecting a Multiple Comparison Method.
18. Two and Three Factor ANOVA: An Introduction to Factorial Designs.
The Meaning of Interaction.
Interactions and Generalizability: Factors Do Not Interact.
Interactions and Generalizability: Factors Interact.
Interpreting when Interaction is Present.
Statistical Significance and Interaction.
Data Layout and Notation.
A Model for the Data.
Least-Squares Estimates of the Model.
Statement of Null Hypotheses.
Sums of Squares in the Two-Factor ANOVA.
Degrees of Freedom.
Illustration of the Computation for the Two-Factor ANOVA.
Expected Values of Mean Squares.
The Distribution of the Mean Squares.
Determining Power in Factorial Designs.
Multiple Comparisons in Factorial ANOVA Designs.
Confidence Intervals for Means in Two-Factor ANOVA.
Three-Factor ANOVA: An Illustration.
Three-Factor ANOVA Computation.
The Interpretation of Three-Factor Interaction.
Confidence Intervals in Three-Factor ANOVA.
How Factorial Designs Increase Power.
Factorial ANOVA with Unequal n's.
19. Multi-Factor ANOVA Designs: Random, Mixed, and Fixed Effects.
The Random-Effects ANOVA Model.
Assumptions of the Random ANOVA Model.
Mean Square, MSW.
Mean Square, MSB.
The Variance Component, σ²α.
Confidence Interval for σ²α/σ²e.
Summary of Random ANOVA Model.
The Mixed-Effects ANOVA Model.
Mixed-Model ANOVA Assumptions.
Mixed-Model ANOVA Computation.
Multiple Comparisons in the Two-Factor Mixed Model.
Crossed and Nested Factors.
Computation of Sums of Squares for Nested Factors.
Determining the Sources of Variation in the ANOVA Table.
Degrees of Freedom for Nested Factors.
Determining Expected Mean Squares.
Error Mean Square in Complex ANOVA Designs.
The Incremental Generalization Strategy: Inferential "Concentric Circles".
Model Simplification and Pooling.
The Experimental Unit and the Observational Unit.
20. Repeated-Measures ANOVA.
A Simple Repeated-Measures ANOVA.
Trend Analysis on Repeated-Measures Factors.
Estimating Reliability via Repeated-Measures ANOVA.
Repeated-Measures Designs with a Between-Subjects Factor.
Repeated-Measures ANOVA with Two Between-Subjects Factors.
Trend Analysis on Between-Subjects Factors.
Repeated-Measures ANOVA with Two Within-Subjects Factors and Two Between-Subjects Factors.
Repeated-Measures ANOVA vs. MANOVA.
21. An Introduction to the Analysis of Covariance.
The Functions of ANCOVA.
ANCOVA Computations, SStotal.
The Adjusted Within Sum of Squares, SS'W.
The Adjusted Sum of Squares Between Groups, SS'B.
Degrees of Freedom in ANCOVA and the ANCOVA Table.
Adjusted Means, Y'j.
Confidence Intervals and Multiple Comparisons for Adjusted Means.
ANCOVA Illustrated Graphically.
Covarying and Stratifying.
Table A. Unit-Normal (z) Distribution.
Table B. Random Digits.
Table C. t-Distribution.
Table D. χ²-Distribution.
Table E. Fisher Z-Transformation.
Table F. F-Distribution.
Table G. Power Curves for the F-Test.
Table H. Hartley's Fmax Distribution.
Table I. Studentized Range Statistic: q-Distribution.
Table J. Critical Values of r.
Table K. Critical Values of rranks, Spearman's Rank Correlation.
Table L. Critical Values for the Dunn (Bonferroni) t-Statistic.
Table M. Critical Values for the Dunnett t-Statistic.
Table N. Coefficients (Orthogonal Polynomials) for Trend Analysis.
Table O. Binomial Probabilities when P = .5.
Glossary of Symbols.
*Each chapter begins with "Introduction" and concludes with "Case Study," "Chapter Summary," "Suggested Computer Activities," "Mastery Test," "Answers to Mastery Test," and "Problems and Exercises."