New arrivals, September 2009

September 18, 2009


Principal component analysis / I. T. Jolliffe. – 2. ed. – New York : Springer, c2002. – XXIX, 487 p. ; 24 cm.
(Springer series in statistics)

Call number: 22f/0107

Principal component analysis is central to the study of multivariate data. Although one of the earliest multivariate techniques, it continues to be the subject of much research, ranging from new model-based approaches to algorithmic ideas from neural networks. It is extremely versatile, with applications in many disciplines. The first edition of this book was the first comprehensive text written solely on principal component analysis. The second edition updates and substantially expands the original version, and is once again the definitive text on the subject. It includes core material, current research and a wide range of applications, and its length is nearly double that of the first edition. Researchers in statistics, or in other fields that use principal component analysis, will find that the book gives an authoritative yet accessible account of the subject. It is also a valuable resource for graduate courses in multivariate analysis. The book requires some knowledge of matrix algebra. Ian Jolliffe is Professor of Statistics at the University of Aberdeen. He is author or co-author of over 60 research papers and three other books. His research interests are broad, but aspects of principal component analysis have fascinated him and kept him busy for over 30 years.
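The blurb only gestures at how the method works. Purely as an illustration (not material from the book), the principal components of a data set can be obtained from the eigendecomposition of its sample covariance matrix; a minimal sketch in Python/NumPy with made-up data:

```python
import numpy as np

# Made-up illustrative data: 100 observations on 3 correlated variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[1.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 1.0]])

# Centre the data and form the sample covariance matrix.
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)

# Eigendecomposition: eigenvectors are the component loadings,
# eigenvalues are the variances explained by each component.
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]          # sort components by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                      # principal component scores
explained = eigvals / eigvals.sum()        # proportion of variance explained
print(explained)
```

Retaining the components with the largest eigenvalues is the usual basis for dimension reduction; the book covers the many refinements and applications built on this basic recipe.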



New arrivals, August 2009

August 28, 2009

Common errors in statistics (and how to avoid them) / P.I. Good, J.W. Hardin. – 3. ed. – Hoboken (New Jersey) : Wiley, ©2009. – XIII, 273 p. : ill. ; 24 cm.

Call number: BS/01./S/0038

Now in its Third Edition, the highly readable Common Errors in Statistics (and How to Avoid Them) continues to serve as a thorough and straightforward discussion of basic statistical methods, presentations, approaches, and modeling techniques. Further enriched with new examples and counterexamples from the latest research as well as added coverage of relevant topics, this new edition of the benchmark book addresses popular mistakes often made in data collection and provides an indispensable guide to accurate statistical analysis and reporting. The authors’ emphasis on careful practice, combined with a focus on the development of solutions, reveals the true value of statistics when applied correctly in any area of research.



New arrivals, May 2009

May 20, 2009

Introduction to meta-analysis / Michael Borenstein… [et al.]. – Chichester : Wiley, 2009. – XXVIII, 421 p. ; 25 cm.

Call number: BS/01./S/0036

CONTENTS: List of Figures; List of Tables; Acknowledgements; Preface

PART 1: INTRODUCTION
1 How a Meta-Analysis Works: Introduction; Individual studies; The summary effect; Heterogeneity of effect sizes; Summary points
2 Why Perform a Meta-Analysis: Introduction; The SKIV meta-analysis; Statistical significance; Clinical importance of the effect; Consistency of effects; Summary points

PART 2: EFFECT SIZE AND PRECISION
3 Overview: Treatment effects and effect sizes; Parameters and estimates; Outline
4 Effect Sizes Based on Means: Introduction; Raw (unstandardized) mean difference D; Standardized mean difference, d and g; Response ratios; Summary points
5 Effect Sizes Based on Binary Data (2×2 Tables): Introduction; Risk ratio; Odds ratio; Risk difference; Choosing an effect size index; Summary points
6 Effect Sizes Based on Correlations: Introduction; Computing r; Other approaches; Summary points
7 Converting Among Effect Sizes: Introduction; Converting from the log odds ratio to d; Converting from d to the log odds ratio; Converting from r to d; Converting from d to r; Summary points
8 Factors That Affect Precision: Introduction; Factors that affect precision; Sample size; Study design; Summary points
9 Concluding Remarks: Further reading

PART 3: FIXED-EFFECT VERSUS RANDOM-EFFECTS MODELS
10 Overview: Introduction; Nomenclature
11 Fixed-Effect Model: Introduction; The true effect size; Impact of sampling error; Performing a fixed-effect meta-analysis; Summary points
12 Random-Effects Model: Introduction; The true effect sizes; Impact of sampling error; Performing a random-effects meta-analysis; Summary points
13 Fixed-Effect Versus Random-Effects Models: Introduction; Definition of a summary effect; Estimating the summary effect; Extreme effect size in large study; Confidence interval; The null hypothesis; Which model should we use?; Model should not be based on the test for heterogeneity; Concluding remarks; Summary points
14 Worked Examples (Part 1): Introduction; Worked example for continuous data (Part 1); Worked example for binary data (Part 1); Worked example for correlational data (Part 1); Summary points

PART 4: HETEROGENEITY
15 Overview: Introduction
16 Identifying and Quantifying Heterogeneity: Introduction; Isolating the variation in true effects; Computing Q; Estimating tau-squared; The I² statistic; Comparing the measures of heterogeneity; Confidence intervals for T²; Confidence intervals (or uncertainty intervals) for I²; Summary points
17 Prediction Intervals: Introduction; Prediction intervals in primary studies; Prediction intervals in meta-analysis; Confidence intervals and prediction intervals; Comparing the confidence interval with the prediction interval; Summary points
18 Worked Examples (Part 2): Introduction; Worked example for continuous data (Part 2); Worked example for binary data (Part 2); Worked example for correlational data (Part 2); Summary points
19 Subgroup Analyses: Introduction; Fixed-effect model within subgroups; Computational models; Random effects with separate estimates of T²; Random effects with pooled estimate of T²; The proportion of variance explained; Mixed-effect model; Obtaining an overall effect in the presence of subgroups; Summary points
20 Meta-Regression: Introduction; Fixed-effect model; Fixed or random effects for unexplained heterogeneity; Random-effects model; Statistical power for regression; Summary points
21 Notes on Subgroup Analyses and Meta-Regression: Introduction; Computational model; Multiple comparisons; Software; Analysis of subgroups and regression are observational; Statistical power for subgroup analyses and meta-regression; Summary points

PART 5: COMPLEX DATA STRUCTURES
22 Overview
23 Independent Subgroups Within a Study: Introduction; Combining across subgroups; Comparing subgroups; Summary points
24 Multiple Outcomes or Time-Points Within a Study: Introduction; Combining across outcomes or time-points; Comparing outcomes or time-points within a study; Summary points
25 Multiple Comparisons Within a Study: Introduction; Combining across multiple comparisons within a study; Differences between treatments; Summary points
26 Notes on Complex Data Structures: Introduction; Combined effect; Differences in effect

PART 6: OTHER ISSUES
27 Overview
28 Vote Counting - A New Name for an Old Problem: Introduction; Why vote counting is wrong; Vote counting is a pervasive problem; Summary points
29 Power Analysis for Meta-Analysis: Introduction; A conceptual approach; In context; When to use power analysis; Planning for precision rather than for power; Power analysis in primary studies; Power analysis for meta-analysis; Power analysis for a test of homogeneity; Summary points
30 Publication Bias: Introduction; The problem of missing studies; Methods for addressing bias; Illustrative example; The model; Getting a sense of the data; Is the entire effect an artifact of bias?; How much of an impact might the bias have?; Summary of the findings for the illustrative example; Small-study effects; Concluding remarks; Summary points

PART 7: ISSUES RELATED TO EFFECT SIZE
31 Overview
32 Effect Sizes Rather Than p-Values: Introduction; Relationship between p-values and effect sizes; The distinction is important; The p-value is often misinterpreted; Narrative reviews vs. meta-analyses; Summary points
33 Simpson's Paradox: Introduction; Circumcision and risk of HIV infection; An example of the paradox; Summary points
34 Generality of the Basic Inverse-Variance Method: Introduction; Other effect sizes; Other methods for estimating effect sizes; Individual participant data meta-analyses; Bayesian approaches; Summary points

PART 8: FURTHER METHODS
35 Overview
36 Meta-Analysis Methods Based on Direction and p-Values: Introduction; Vote counting; The sign test; Combining p-values; Summary points
37 Further Methods for Dichotomous Data: Introduction; Mantel-Haenszel method; One-step (Peto) formula for odds ratio; Summary points
38 Psychometric Meta-Analysis: Introduction; The attenuating effects of artifacts; Meta-analysis methods; Example of psychometric meta-analysis; Comparison of artifact correction with meta-regression; Sources of information about artifact values; How heterogeneity is assessed; Reporting in psychometric meta-analysis; Concluding remarks; Summary points

PART 9: META-ANALYSIS IN CONTEXT
39 Overview
40 When Does It Make Sense to Perform a Meta-Analysis?: Introduction; Are the studies similar enough to combine?; Can I combine studies with different designs?; How many studies are enough to carry out a meta-analysis?; Summary points
41 Reporting the Results of a Meta-Analysis: Introduction; The computational model; Forest plots; Sensitivity analysis; Summary points
42 Cumulative Meta-Analysis: Introduction; Why perform a cumulative meta-analysis?; Summary points
43 Criticisms of Meta-Analysis: Introduction; One number cannot summarize a research field; The file drawer problem invalidates meta-analysis; Mixing apples and oranges; Garbage in, garbage out; Important studies are ignored; Meta-analysis can disagree with randomized trials; Meta-analyses are performed poorly; Is a narrative review better?; Concluding remarks; Summary points

PART 10: RESOURCES AND SOFTWARE
44 Software: Introduction; Three examples of meta-analysis software; The software; Comprehensive Meta-Analysis (CMA) 2.0; RevMan 5.0; Stata macros with Stata 10.0; Summary points
45 Books, Web Sites and Professional Organizations: Books on systematic review methods; Books on meta-analysis; Web sites

Index
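To give a flavour of the core computation covered in Parts 3 and 4 (an inverse-variance fixed-effect summary plus the Q and I² heterogeneity statistics), here is a minimal sketch in Python with invented effect sizes and variances; it illustrates the standard formulas and is not code from the book.

```python
import numpy as np

# Made-up example: effect sizes (e.g. standardized mean differences)
# and their within-study variances from five hypothetical studies.
y = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
v = np.array([0.04, 0.09, 0.02, 0.12, 0.05])

# Fixed-effect model: weight each study by the inverse of its variance.
w = 1.0 / v
M = np.sum(w * y) / np.sum(w)        # summary effect
var_M = 1.0 / np.sum(w)              # variance of the summary effect
ci = (M - 1.96 * np.sqrt(var_M), M + 1.96 * np.sqrt(var_M))

# Heterogeneity: the Q statistic and the I^2 index (Chapter 16).
Q = np.sum(w * (y - M) ** 2)
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100    # percent of variation beyond chance

print(f"summary effect = {M:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
```

A random-effects analysis (Chapter 12) would instead add an estimate of the between-study variance to each study's within-study variance before computing the weights.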



New arrivals, May 2009

May 20, 2009

Bayesian modeling using WinBUGS / Ioannis Ntzoufras. – Hoboken NJ : John Wiley and Sons, ©2009. – XXIII, 492 p. ; 24 cm.

Call number: BS/01./S/0035

The BUGS (Bayesian inference Using Gibbs Sampling) project is concerned with free, flexible software for the Bayesian analysis of complex statistical models using Markov chain Monte Carlo (MCMC) methods. This book presents the reader with a clear and easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. It details commonly used modeling techniques employed by statisticians in fields such as biostatistics, the social sciences and actuarial science. Emphasis is given to generalized linear models (GLMs), familiar to most readers and researchers. Detailed explanations cover model building, prior specification, writing WinBUGS code, and analyzing and interpreting WinBUGS output. The book also features comprehensive problems and examples.
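WinBUGS derives its samplers automatically from a declarative model description. Purely to illustrate the Gibbs-sampling idea behind MCMC (this is not WinBUGS code or an example from the book), here is a minimal hand-written Gibbs sampler in Python for a normal model with conjugate priors and made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=5.0, scale=2.0, size=50)   # made-up data
n, ybar = len(y), y.mean()

# Priors: mu ~ N(mu0, tau0^2), sigma^2 ~ Inverse-Gamma(a0, b0)
mu0, tau0_sq = 0.0, 100.0
a0, b0 = 0.01, 0.01

mu, sigma_sq = ybar, y.var()                  # starting values
draws = []
for it in range(5000):
    # Full conditional for mu given sigma^2 (normal)
    prec = 1.0 / tau0_sq + n / sigma_sq
    mean = (mu0 / tau0_sq + n * ybar / sigma_sq) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))

    # Full conditional for sigma^2 given mu (inverse-gamma)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma_sq = 1.0 / rng.gamma(a_n, 1.0 / b_n)

    if it >= 1000:                            # discard burn-in draws
        draws.append((mu, sigma_sq))

draws = np.array(draws)
print("posterior means (mu, sigma^2):", draws.mean(axis=0))
```

In WinBUGS the user only declares the likelihood and priors; the program constructs and runs samplers like this one itself, which is what makes the approach practical for the more complex models the book covers.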
