Modern statistics for the social and behavioral sciences : a practical introduction / Rand Wilcox
Books/Textual Material | CRC Press, Taylor & Francis Group | 2017 | Second edition.
Available at Gumberg Oversized - 1st Floor (HA29 .W51367 2017)

Items

Location | Call No. | Status
Gumberg Oversized - 1st Floor | HA29 .W51367 2017 | AVAILABLE

More Details

Imprint
Boca Raton, FL : CRC Press, Taylor & Francis Group, [2017]
Description
xxiii, 706 pages ; 29 cm
Content type: text (rdacontent)
Media type: unmediated (rdamedia)
Carrier type: volume (rdacarrier)
Bibliography
Includes bibliographical references and index.
Summary
Requiring no prior training, Modern Statistics for the Social and Behavioral Sciences provides a two-semester, graduate-level introduction to basic statistical techniques that takes into account recent advances and insights that are typically ignored in an introductory course. Hundreds of journal articles make it clear that basic techniques, routinely taught and used, can perform poorly when dealing with skewed distributions, outliers, heteroscedasticity (unequal variances) and curvature. Methods for dealing with these concerns have been derived and can provide a deeper, more accurate and more nuanced understanding of data. A conceptual basis is provided for understanding when and why standard methods can have poor power and yield misleading measures of effect size. Modern techniques for dealing with known concerns are described and illustrated. -- Provided by Publisher.
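The publisher's point about skewness and outliers can be illustrated with a few lines of base R. The sketch below is not taken from the book; the data are simulated and the 20% trimming proportion is only an example, but the functions used (mean, median, sd, mad) are standard R.

# Illustration only (not from the book): how a single outlier affects classical
# versus robust summaries of a skewed sample.
set.seed(1)
x     <- rexp(30)        # a skewed sample
x_out <- c(x, 15)        # the same sample plus one extreme value
mean(x);  mean(x_out)                         # the sample mean shifts noticeably
mean(x, trim = 0.2); mean(x_out, trim = 0.2)  # a 20% trimmed mean changes little
median(x); median(x_out)                      # the median changes little
sd(x);  sd(x_out)                             # the standard deviation inflates
mad(x); mad(x_out)                            # the MAD (robust scale) stays stable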
Contents
Machine generated contents note: ch. 1 Introduction -- 1.1. Samples Versus Populations -- 1.2. Software -- 1.3. R Basics -- 1.3.1. Entering Data -- 1.3.2. R Functions and Packages -- 1.3.3. Data Sets -- 1.3.4. Arithmetic Operations -- ch. 2 Numerical and Graphical Summaries of Data -- 2.1. Basic Summation Notation -- 2.2. Measures of Location -- 2.2.1. The Sample Mean -- 2.2.2. R Function mean -- 2.2.3. The Sample Median -- 2.2.4. R Function for the Median -- 2.3. A Criticism of the Median: It Might Trim Too Many Values -- 2.3.1. R Function for the Trimmed Mean -- 2.3.2. A Winsorized Mean -- 2.3.3. R Function winmean -- 2.3.4. What Is a Measure of Location? -- 2.4. Measures of Variation or Scale -- 2.4.1. Sample Variance and Standard Deviation -- 2.4.2. R Functions var and sd -- 2.4.3. The Interquartile Range -- 2.4.4. R Functions idealf and idealfIQR -- 2.4.5. Winsorized Variance -- 2.4.6. R Function winvar -- 2.4.7. Median Absolute Deviation -- 2.4.8. R Function mad -- 2.4.9. Average Absolute Distance from the Median -- 2.4.10. Other Robust Measures of Variation -- 2.4.11. R Functions bivar, pbvar, tauvar, and tbs -- 2.5. Detecting Outliers -- 2.5.1. A Method Based on the Mean and Variance -- 2.5.2. A Better Outlier Detection Rule: The MAD-Median Rule -- 2.5.3. R Function out -- 2.5.4. The Boxplot -- 2.5.5. R Function boxplot -- 2.5.6. Modifications of the Boxplot Rule for Detecting Outliers -- 2.5.7. R Function outbox -- 2.5.8. Other Measures of Location -- 2.5.9. R Functions mom and onestep -- 2.6. Histograms -- 2.6.1. R Functions hist and splot -- 2.7. Kernel Density Estimators -- 2.7.1. R Functions kdplot and akerd -- 2.8. Stem-and-Leaf Displays -- 2.8.1. R Function stem -- 2.9. Skewness -- 2.9.1. Transforming Data -- 2.10. Choosing a Measure of Location -- 2.11. Exercises -- ch. 3 Probability and Related Concepts -- 3.1. Basic Probability -- 3.2. Expected Values -- 3.3. Conditional Probability and Independence -- 3.4. Population Variance -- 3.5. The Binomial Probability Function -- 3.5.1. R Functions dbinom and pbinom -- 3.6. Continuous Variables and the Normal Curve -- 3.6.1. Computing Probabilities Associated with Normal Curves -- 3.6.2. R Function pnorm -- 3.6.3. R Function qnorm -- 3.7. Understanding the Effects of Nonnormality -- 3.7.1. Skewness -- 3.8. Pearson's Correlation and the Population Covariance (Optional) -- 3.8.1. Computing the Population Covariance and Pearson's Correlation -- 3.9. Some Rules About Expected Values -- 3.10. Chi-Squared Distributions -- 3.11. Exercises -- ch. 4 Sampling Distributions and Confidence Intervals -- 4.1. Random Sampling -- 4.2. Sampling Distributions -- 4.2.1. Sampling Distribution of the Sample Mean -- 4.2.2. Computing Probabilities Associated with the Sample Mean -- 4.3. A Confidence Interval for the Population Mean -- 4.3.1. Known Variance -- 4.3.2. Confidence Intervals When σ Is Not Known -- 4.3.3. R Functions pt and qt -- 4.3.4. Confidence Interval for the Population Mean Using Student's T -- 4.3.5. R Function t.test -- 4.4. Judging Location Estimators Based on Their Sampling Distribution -- 4.4.1. Trimming and Accuracy: Another Perspective -- 4.5. An Approach to Nonnormality: The Central Limit Theorem -- 4.6. Student's t and Nonnormality -- 4.7. Confidence Intervals for the Trimmed Mean -- 4.7.1. Estimating the Standard Error of a Trimmed Mean -- 4.7.2. R Function trimse -- 4.7.3. A Confidence Interval for the Population Trimmed Mean -- 4.7.4. R Function trimci -- 4.8. Transforming Data -- 4.9. Confidence Interval for the Population Median -- 4.9.1. R Function sint -- 4.9.2. Estimating the Standard Error of the Sample Median -- 4.9.3. R Function msmedse -- 4.9.4. More Concerns About Tied Values -- 4.10. A Remark About MOM and M-Estimators -- 4.11. Confidence Intervals for the Probability of Success -- 4.11.1. R Functions binomci, acbinomci, and binomLCO -- 4.12. Bayesian Methods -- 4.13. Exercises -- ch. 5 Hypothesis Testing -- 5.1. The Basics of Hypothesis Testing -- 5.1.1. p-Value or Significance Level -- 5.1.2. Criticisms of Two-Sided Hypothesis Testing and p-Values -- 5.1.3. Summary and Generalization -- 5.2. Power and Type II Errors -- 5.2.1. Understanding How n, α, and σ Are Related to Power -- 5.3. Testing Hypotheses About the Mean when σ Is Not Known -- 5.3.1. R Function t.test -- 5.4. Controlling Power and Determining the Sample Size -- 5.4.1. Choosing n Prior to Collecting Data -- 5.4.2. R Function power.t.test -- 5.4.3. Stein's Method: Judging the Sample Size When Data Are Available -- 5.4.4. R Functions stein1 and stein2 -- 5.5. Practical Problems with Student's T Test -- 5.6. Hypothesis Testing Based on a Trimmed Mean -- 5.6.1. R Function trimci -- 5.6.2. R Functions stein1.tr and stein2.tr -- 5.7. Testing Hypotheses about the Population Median -- 5.7.1. R Function sintv2 -- 5.8. Making Decisions About Which Measure of Location to Use -- 5.9. Bootstrap Methods -- 5.10. Bootstrap-T Method -- 5.10.1. Symmetric Confidence Intervals -- 5.10.2. Exact Nonparametric Confidence Intervals for Means Are Impossible -- 5.11. The Percentile Bootstrap Method -- 5.12. Inferences About Robust Measures of Location -- 5.12.1. Using the Percentile Method -- 5.12.2. R Functions onesampb, momci, and trimpb -- 5.12.3. The Bootstrap-t Method Based on Trimmed Means -- 5.12.4. R Function trimcibt -- 5.13. Estimating Power when Testing Hypotheses About a Trimmed Mean -- 5.13.1. R Functions powt1est and powt1an -- 5.14. A Bootstrap Estimate of Standard Errors -- 5.14.1. R Function bootse -- 5.15. Exercises -- ch. 6 Regression and Correlation -- 6.1. The Least Squares Principle -- 6.2. Confidence Intervals and Hypothesis Testing -- 6.2.1. Classic Inferential Techniques -- 6.2.2. Multiple Regression -- 6.2.3. R Functions ols and lm -- 6.3. Standardized Regression -- 6.4. Practical Concerns About Least Squares Regression and How They Might Be Addressed -- 6.4.1. The Effect of Outliers on Least Squares Regression -- 6.4.2. Beware of Bad Leverage Points -- 6.4.3. Beware of Discarding Outliers Among the Y Values -- 6.4.4. Do Not Assume Homoscedasticity or that the Regression Line Is Straight -- 6.4.5. Violating Assumptions When Testing Hypotheses -- 6.4.6. Dealing with Heteroscedasticity: The HC4 Method -- 6.4.7. R Functions olshc4 and hc4test -- 6.4.8. Interval Estimation of the Mean Response -- 6.4.9. R Function olshc4band -- 6.5. Pearson's Correlation and the Coefficient of Determination -- 6.5.1. A Closer Look at Interpreting r -- 6.6. Testing H0: ρ = 0 -- 6.6.1. R Function cor.test -- 6.6.2. R Function pwr.r.test -- 6.6.3. Testing H0: ρ = 0 When There Is Heteroscedasticity -- 6.6.4. R Function pcorhc4 -- 6.6.5. When Is It Safe to Conclude that Two Variables Are Independent? -- 6.7. A Regression Method for Estimating the Median of Y and Other Quantiles -- 6.7.1. R Function rqfit -- 6.8. Detecting Heteroscedasticity -- 6.8.1. R Function khomreg -- 6.9. Inferences About Pearson's Correlation: Dealing with Heteroscedasticity -- 6.9.1. R Function pcorb -- 6.10. Bootstrap Methods for Least Squares Regression -- 6.10.1. R Functions hc4wtest, olswbtest, and lsfitci -- 6.11. Detecting Associations Even When There Is Curvature -- 6.11.1. R Functions indt and medind -- 6.12. Quantile Regression -- 6.12.1. R Functions qregci and rqtest -- 6.12.2. A Test for Homoscedasticity Using a Quantile Regression Approach -- 6.12.3. R Function qhomt -- 6.13. Regression: Which Predictors Are Best? -- 6.13.1. The 0.632 Bootstrap Method -- 6.13.2. R Function regpre -- 6.13.3. Least Angle Regression -- 6.13.4. R Function larsR -- 6.14. Comparing Correlations -- 6.14.1. R Functions TWOpov and TWOpNOV -- 6.15. Concluding Remarks -- 6.16. Exercises -- ch. 7 Comparing Two Independent Groups -- 7.1. Student's T Test -- 7.1.1. Choosing the Sample Sizes -- 7.1.2. R Function power.t.test -- 7.2. Relative Merits of Student's T -- 7.3. Welch's Heteroscedastic Method for Means -- 7.3.1. R Function t.test -- 7.3.2. Tukey's Three-Decision Rule -- 7.3.3. Nonnormality and Welch's Method -- 7.3.4. Three Modern Insights Regarding Methods for Comparing Means -- 7.4. Methods for Comparing Medians and Trimmed Means -- 7.4.1. Yuen's Method for Trimmed Means -- 7.4.2. R Functions yuen and fac2list -- 7.4.3. Comparing Medians -- 7.4.4. R Function msmed -- 7.5. Percentile Bootstrap Methods for Comparing Measures of Location -- 7.5.1. Using Other Measures of Location -- 7.5.2. Comparing Medians -- 7.5.3. R Function medpb2 -- 7.5.4. Some Guidelines on When to Use the Percentile Bootstrap Method -- 7.5.5. R Functions trimpb2, med2g, and pb2gen -- 7.6. Bootstrap-T Methods for Comparing Measures of Location -- 7.6.1. Comparing Means -- 7.6.2. Bootstrap-t Method When Comparing Trimmed Means -- 7.6.3. R Functions yuenbt and yhbt -- 7.6.4. Estimating Power and Judging the Sample Sizes -- 7.6.5. R Functions powest and pow2an -- 7.7. Permutation Tests -- 7.8. Rank-Based and Nonparametric Methods -- 7.8.1. Wilcoxon-Mann-Whitney Test -- 7.8.2. R Functions wmw and wilcox.test -- 7.8.3. Handling Tied Values and Heteroscedasticity -- 7.8.4. Cliff's Method -- 7.8.5. R Functions cid and cidv2 -- 7.8.6. The Brunner-Munzel Method -- 7.8.7. R Functions bmp and loc2dif.ci -- 7.8.8. The Kolmogorov-Smirnov Test
Note continued: 7.8.9. R Function ks -- 7.8.10. Comparing All Quantiles Simultaneously: An Extension of the Kolmogorov-Smirnov Test -- 7.8.11. R Function sband -- 7.9. Graphical Methods for Comparing Groups -- 7.9.1. Error Bars -- 7.9.2. R Functions ebarplot and ebarplot.med -- 7.9.3. Plotting the Shift Function -- 7.9.4. Plotting the Distributions -- 7.9.5. R Function sumplot2g -- 7.9.6. Other Approaches -- 7.10. Comparing Measures of Variation -- 7.10.1. R Function comvar2 -- 7.10.2. Brown-Forsythe Method -- 7.10.3. Comparing Robust Measures of Variation -- 7.11. Measuring Effect Size -- 7.11.1. R Functions yuenv2 and akp.effect -- 7.12. Comparing Correlations and Regression Slopes -- 7.12.1. R Functions twopcor, twolsreg, and tworegwb -- 7.13. Comparing Two Binomials -- 7.13.1. Storer-Kim Method -- 7.13.2. Beal's Method -- 7.13.3. R Functions twobinom, twobici, bi2KMSv2, and power.prop.test -- 7.13.4. Comparing Two Discrete Distributions -- 7.13.5. R Function disc2com -- 7.14. Making Decisions About Which Method to Use -- 7.15. Exercises -- ch. 8 Comparing Two Dependent Groups -- 8.1. The Paired T Test -- 8.1.1. When Does the Paired T Test Perform Well? -- 8.1.2. R Function t.test -- 8.2. Comparing Robust Measures of Location -- 8.2.1. R Functions yuend, ydbt, and dmedpb -- 8.2.2. Comparing Marginal M-Estimators -- 8.2.3. R Function rmmest -- 8.2.4. Measuring Effect Size -- 8.2.5. R Function D.akp.effect -- 8.3. Handling Missing Values -- 8.3.1. R Functions rm2miss and rmmismcp -- 8.4. A Different Perspective when Using Robust Measures of Location -- 8.4.1. R Functions loc2dif and l2drmci -- 8.5. The Sign Test -- 8.5.1. R Function signt -- 8.6. Wilcoxon Signed Rank Test -- 8.6.1. R Function wilcox.test -- 8.7. Comparing Variances -- 8.7.1. R Function comdvar -- 8.8. Comparing Robust Measures of Scale -- 8.8.1. R Function rmrvar -- 8.9. Comparing All Quantiles -- 8.9.1. R Function lband -- 8.10. Plots for Dependent Groups -- 8.10.1. R Function g2plotdifxy -- 8.11. Exercises -- ch. 9 One-Way ANOVA -- 9.1. Analysis of Variance for Independent Groups -- 9.1.1. A Conceptual Overview -- 9.1.2. ANOVA via Least Squares Regression and Dummy Coding -- 9.1.3. R Functions anova, anova1, aov, and fac2list -- 9.1.4. Controlling Power and Choosing the Sample Sizes -- 9.1.5. R Functions power.anova.test and anova.power -- 9.2. Dealing with Unequal Variances -- 9.2.1. Welch's Test -- 9.3. Judging Sample Sizes and Controlling Power when Data Are Available -- 9.3.1. R Functions bdanova1 and bdanova2 -- 9.4. Trimmed Means -- 9.4.1. R Functions t1way, t1wayv2, t1wayF, and g5plot -- 9.4.2. Comparing Groups Based on Medians -- 9.4.3. R Function med1way -- 9.5. Bootstrap Methods -- 9.5.1. A Bootstrap-t Method -- 9.5.2. R Functions t1waybt and BFBANOVA -- 9.5.3. Two Percentile Bootstrap Methods -- 9.5.4. R Functions b1way, pbadepth, and Qanova -- 9.5.5. Choosing a Method -- 9.6. Random Effects Model -- 9.6.1. A Measure of Effect Size -- 9.6.2. A Heteroscedastic Method -- 9.6.3. A Method Based on Trimmed Means -- 9.6.4. R Function rananova -- 9.7. Rank-Based Methods -- 9.7.1. The Kruskal-Wallis Test -- 9.7.2. R Function kruskal.test -- 9.7.3. Method BDM -- 9.7.4. R Functions bdm and bdmP -- 9.8. Exercises -- ch. 10 Two-Way and Three-Way Designs -- 10.1. Basics of a Two-Way ANOVA Design -- 10.1.1. Interactions -- 10.1.2. R Functions interaction.plot and interplot -- 10.1.3. Interactions When There Are More Than Two Levels -- 10.2. Testing Hypotheses About Main Effects and Interactions -- 10.2.1. R Function anova -- 10.2.2. Inferences About Disordinal Interactions -- 10.2.3. The Two-Way ANOVA Model -- 10.3. Heteroscedastic Methods for Trimmed Means, Including Means -- 10.3.1. R Function t2way -- 10.4. Bootstrap Methods -- 10.4.1. R Functions pbad2way and t2waybt -- 10.5. Testing Hypotheses Based on Medians -- 10.5.1. R Function m2way -- 10.6. A Rank-Based Method for a Two-Way Design -- 10.6.1. R Function bdm2way -- 10.6.2. The Patel-Hoel Approach to Interactions -- 10.7. Three-Way ANOVA -- 10.7.1. R Functions anova and t3way -- 10.8. Exercises -- ch. 11 Comparing More than Two Dependent Groups -- 11.1. Comparing Means in a One-Way Design -- 11.1.1. R Function aov -- 11.2. Comparing Trimmed Means When Dealing With a One-Way Design -- 11.2.1. R Functions rmanova and rmdat2mat -- 11.2.2. A Bootstrap-t Method for Trimmed Means -- 11.2.3. R Function rmanovab -- 11.3. Percentile Bootstrap Methods for a One-Way Design -- 11.3.1. Method Based on Marginal Measures of Location -- 11.3.2. R Function bd1way -- 11.3.3. Inferences Based on Difference Scores -- 11.3.4. R Function rmdzero -- 11.4. Rank-Based Methods for a One-Way Design -- 11.4.1. Friedman's Test -- 11.4.2. R Function friedman.test -- 11.4.3. Method BPRM -- 11.4.4. R Function bprm -- 11.5. Comments on Which Method to Use -- 11.6. Between-By-Within Designs -- 11.6.1. Method for Trimmed Means -- 11.6.2. R Functions bwtrim and bw2list -- 11.6.3. A Bootstrap-t Method -- 11.6.4. R Function tsplitbt -- 11.6.5. Inferences Based on M-estimators and Other Robust Measures of Location -- 11.6.6. R Functions sppba, sppbb, and sppbi -- 11.6.7. A Rank-Based Test -- 11.6.8. R Function bwrank -- 11.7. Within-By-Within Design -- 11.7.1. R Function wwtrim -- 11.8. Three-Way Designs -- 11.8.1. R Functions bbwtrim, bwwtrim, and wwwtrim -- 11.8.2. Data Management: R Functions bw2list and bbw2list -- 11.9. Exercises -- ch. 12 Multiple Comparisons -- 12.1. One-Way ANOVA and Related Situations, Independent Groups -- 12.1.1. Fisher's Least Significant Difference Method -- 12.1.2. The Tukey-Kramer Method -- 12.1.3. R Function TukeyHSD -- 12.1.4. Tukey-Kramer and the ANOVA F Test -- 12.1.5. Step-Down Methods -- 12.1.6. Dunnett's T3 -- 12.1.7. Games-Howell Method -- 12.1.8. Comparing Trimmed Means -- 12.1.9. R Functions lincon, stepmcp, and twoKlin -- 12.1.10. Alternative Methods for Controlling FWE -- 12.1.11. Percentile Bootstrap Methods for Comparing Trimmed Means, Medians, and M-estimators -- 12.1.12. R Functions medpb, linconpb, pbmcp, and p.adjust -- 12.1.13. A Bootstrap-t Method -- 12.1.14. R Function linconbt -- 12.1.15. Rank-Based Methods -- 12.1.16. R Functions cidmul, cidmulv2, and bmpmul -- 12.1.17. Comparing the Individual Probabilities of Two Discrete Distributions -- 12.1.18. R Functions binband, splotg2, cumrelf, and cumrelfT -- 12.1.19. Comparing the Quantiles of Two Independent Groups -- 12.1.20. R Functions qcomhd and qcomhdMC -- 12.1.21. Multiple Comparisons for Binomial and Categorical Data -- 12.1.22. R Functions skmcp and discmcp -- 12.2. Two-Way, Between-By-Between Design -- 12.2.1. Scheffe's Homoscedastic Method -- 12.2.2. Heteroscedastic Methods -- 12.2.3. Extension of Welch-Sidak and Kaiser-Bowden Methods to Trimmed Means -- 12.2.4. R Function kbcon -- 12.2.5. R Functions con2way and conCON -- 12.2.6. Linear Contrasts Based on Medians -- 12.2.7. R Functions msmed and mcp2med -- 12.2.8. Bootstrap Methods -- 12.2.9. R Functions mcp2a, bbmcppb, and bbmcp -- 12.2.10. The Patel-Hoel Rank-Based Interaction Method -- 12.2.11. R Function rimul -- 12.3. Judging Sample Sizes -- 12.3.1. Tamhane's Procedure -- 12.3.2. R Function tamhane -- 12.3.3. Hochberg's Procedure -- 12.3.4. R Function hochberg -- 12.4. Methods for Dependent Groups -- 12.4.1. Linear Contrasts Based on Trimmed Means -- 12.4.2. R Function rmmcp -- 12.4.3. Comparing M-estimators -- 12.4.4. R Functions rmmcppb, dmedpb, dtrimpb, and boxdif -- 12.4.5. Bootstrap-t Method -- 12.4.6. R Function bptd -- 12.4.7. Comparing the Quantiles of the Marginal Distributions -- 12.4.8. R Function Dqcomhd -- 12.5. Between-By-Within Designs -- 12.5.1. R Functions bwmcp, bwamcp, bwbmcp, bwimcp, spmcpa, spmcpb, spmcpi, and bwmcppb -- 12.6. Within-By-Within Designs -- 12.6.1. Three-Way Designs -- 12.6.2. R Functions con3way, mcp3atm, and rm3mcp -- 12.6.3. Bootstrap Methods for Three-Way Designs -- 12.6.4. R Functions bbwmcp, bwwmcp, bwwmcppb, bbbmcppb, bbwmcppb, bwwmcppb, and wwwmcppb -- 12.7. Exercises -- ch. 13 Some Multivariate Methods -- 13.1. Location, Scatter, and Detecting Outliers -- 13.1.1. Detecting Outliers Via Robust Measures of Location and Scatter -- 13.1.2. R Functions cov.mve and cov.mcd -- 13.1.3. More Measures of Location and Covariance -- 13.1.4. R Functions rmba, tbs, and ogk -- 13.1.5. R Function out -- 13.1.6. A Projection-Type Outlier Detection Method -- 13.1.7. R Functions outpro, outproMC, outproad, outproadMC, and out3d -- 13.1.8. Skipped Estimators of Location -- 13.1.9. R Function smean -- 13.2. One-Sample Hypothesis Testing -- 13.2.1. Comparing Dependent Groups -- 13.2.2. R Functions smeancrv2, hotel, and rmdzeroOP -- 13.3. Two-Sample Case -- 13.3.1. R Functions smean2, mat2grp, matsplit, and mat2list -- 13.3.2. R Functions matsplit, mat2grp, and mat2list -- 13.4. MANOVA -- 13.4.1. R Function manova -- 13.4.2. Robust MANOVA Based on Trimmed Means -- 13.4.3. R Functions MULtr.anova and MULAOVp -- 13.5. A Multivariate Extension of the Wilcoxon-Mann-Whitney Test -- 13.5.1. Explanatory Measure of Effect Size: A Projection-Type Generalization -- 13.5.2. R Function mulwmwv2 -- 13.6. Rank-Based Multivariate Methods -- 13.6.1. The Munzel-Brunner Method -- 13.6.2. R Function mulrank -- 13.6.3. The Choi-Marden Multivariate Rank Test
Note continued: 13.6.4. R Function cmanova -- 13.7. Multivariate Regression -- 13.7.1. Multivariate Regression Using R -- 13.7.2. Robust Multivariate Regression -- 13.7.3. R Functions mlrreg and mopreg -- 13.8. Principal Components -- 13.8.1. R Functions prcomp and regpca -- 13.8.2. Robust Principal Components -- 13.8.3. R Functions outpca, robpca, robpcaS, Ppca, and Ppca.summary -- 13.9. Exercises -- ch. 14 Robust Regression and Measures of Association -- 14.1. Robust Regression Estimators -- 14.1.1. The Theil-Sen Estimator -- 14.1.2. R Functions tsreg, tshdreg, and regplot -- 14.1.3. Least Median of Squares -- 14.1.4. Least Trimmed Squares and Least Trimmed Absolute Value Estimators -- 14.1.5. R Functions lmsreg, ltsreg, and ltareg -- 14.1.6. M-estimators -- 14.1.7. R Function chreg -- 14.1.8. Deepest Regression Line -- 14.1.9. R Function mdepreg -- 14.1.10. Skipped Estimators -- 14.1.11. R Functions opreg and opregMC -- 14.1.12. S-estimators and an E-type Estimator -- 14.1.13. R Function tstsreg -- 14.2. Comments on Choosing a Regression Estimator -- 14.3. Inferences Based on Robust Regression Estimators -- 14.3.1. Testing Hypotheses About the Slopes -- 14.3.2. Inferences About the Typical Value of Y Given X -- 14.3.3. R Functions regtest, regtestMC, regci, regciMC, regYci, and regYband -- 14.3.4. Comparing Measures of Location via Dummy Coding -- 14.4. Dealing with Curvature: Smoothers -- 14.4.1. Cleveland's Smoother -- 14.4.2. R Functions lowess, lplot, lplot.pred, and lplotCI -- 14.4.3. Smoothers Based on Robust Measures of Location -- 14.4.4. R Functions rplot, rplotCIS, rplotCI, rplotCIv2, rplotCIM, rplot.pred, qhdsm, and qhdsm.pred -- 14.4.5. Prediction When X Is Discrete: The R Function rundis -- 14.4.6. Seeing Curvature with More Than Two Predictors -- 14.4.7. R Function prplot -- 14.4.8. Some Alternative Methods -- 14.4.9. Detecting Heteroscedasticity Using a Smoother -- 14.4.10. R Function rhom -- 14.5. Some Robust Correlations and Tests of Independence -- 14.5.1. Kendall's tau -- 14.5.2. Spearman's rho -- 14.5.3. Winsorized Correlation -- 14.5.4. R Function wincor -- 14.5.5. OP or Skipped Correlation -- 14.5.6. R Function scor -- 14.5.7. Inferences about Robust Correlations: Dealing with Heteroscedasticity -- 14.5.8. R Functions corb and scorci -- 14.6. Measuring the Strength of an Association Based on a Robust Fit -- 14.7. Comparing the Slopes of Two Independent Groups -- 14.7.1. R Function reg2ci -- 14.8. Tests for Linearity -- 14.8.1. R Functions lintest, lintestMC, and linchk -- 14.9. Identifying the Best Predictors -- 14.9.1. Inferences Based on Independent Variables Taken in Isolation -- 14.9.2. R Functions regpord, ts2str, and sm2strv7 -- 14.9.3. Inferences When Independent Variables Are Taken Together -- 14.9.4. R Function regIVcom -- 14.10. Interactions and Moderator Analyses -- 14.10.1. R Functions olshc4.inter, ols.plot.inter, regci.inter, reg.plot.inter, and adtest -- 14.10.2. Graphical Methods for Assessing Interactions -- 14.10.3. R Functions kercon, runsm2g, and regi -- 14.11. ANCOVA -- 14.11.1. Classic ANCOVA -- 14.11.2. Robust ANCOVA Methods Based on a Parametric Regression Model -- 14.11.3. R Functions ancJN, ancJNmp, anclin, reg2plot, and reg2g.p2plot -- 14.11.4. ANCOVA Based on the Running-interval Smoother -- 14.11.5. R Functions ancsm, Qancsm, ancova, ancovaWMW, ancpb, ancovaUB, ancboot, ancdet, runmean2g, qhdsm2g, and l2plot -- 14.11.6. R Functions Dancts, Dancols, Dancova, Dancovapb, DancovaUB, and Dancdet -- 14.12. Exercises -- ch. 15 Basic Methods for Analyzing Categorical Data -- 15.1. Goodness of Fit -- 15.1.1. R Functions chisq.test and pwr.chisq.test -- 15.2. A Test of Independence -- 15.2.1. R Function chi.test.ind -- 15.3. Detecting Differences in the Marginal Probabilities -- 15.3.1. R Functions contab and mcnemar.test -- 15.4. Measures of Association -- 15.4.1. The Proportion of Agreement -- 15.4.2. Kappa -- 15.4.3. Weighted Kappa -- 15.4.4. R Function Ckappa -- 15.5. Logistic Regression -- 15.5.1. R Functions glm and logreg -- 15.5.2. A Confidence Interval for the Odds Ratio -- 15.5.3. R Function ODDSR.CI -- 15.5.4. Smoothers for Logistic Regression -- 15.5.5. R Functions logrsm, rplot.bin, and logSM -- 15.6. Exercises -- Appendix A Answers to Selected Exercises -- Appendix B Tables -- Appendix C Basic Matrix Algebra.
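Readers working through the contents in R should note that functions such as t.test, wilcox.test, kruskal.test, power.t.test, aov, lm, glm, and prcomp are part of base R, while most of the specialized robust functions listed (yuen, trimci, t1way, lincon, and so on) appear to come from the author's companion R code rather than from base R. As a small illustration only, not an excerpt from the book, the heteroscedastic comparison of two independent groups discussed in Chapter 7 can be sketched with base R alone; the sample sizes, means, and standard deviations below are arbitrary.

# Illustration only: Welch's heteroscedastic t-test versus Student's t in base R.
set.seed(2)
g1 <- rnorm(25, mean = 0,   sd = 1)
g2 <- rnorm(40, mean = 0.5, sd = 3)   # unequal variances and unequal sample sizes
t.test(g1, g2)                        # Welch's method (the default, var.equal = FALSE)
t.test(g1, g2, var.equal = TRUE)      # classical Student's t, for comparison
power.t.test(delta = 0.5, sd = 1, power = 0.8)  # n per group for a two-sample t-test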
ISBN
9781498796781 hardcover
1498796788 hardcover
Local Note
Isadore R. Lenglet Collection of Management and Business, copy 1