Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin
for Potential Selectivity Biases
6.3 Statistical Controls for Potential Selectivity Biases
6.4 Creating Matched Comparison Groups Using Propensity Score Matching
6.5 Pilot Studies
6.6 Synopses of Research Studies
Key Chapter Concepts
Exercise for Critically Appraising Published Articles
Additional Reading
7 Critically Appraising Quasi-Experiments: Time-Series Designs and Single-Case Designs
7.1 Simple Time-Series Designs
7.2 Multiple Time-Series Designs
7.3 Single-Case Designs
7.4 Synopses of Research Studies
Key Chapter Concepts
Exercise for Critically Appraising Published Articles
Additional Reading
8 Critically Appraising Systematic Reviews and Meta-Analyses
8.1 Advantages of Systematic Reviews and Meta-Analyses
8.2 Risks in Relying Exclusively on Systematic Reviews and Meta-Analyses
8.3 Where to Start
8.4 What to Look for When Critically Appraising Systematic Reviews
8.5 What Distinguishes a Systematic Review from Other Types of Reviews?
8.6 What to Look for When Critically Appraising Meta-Analyses
8.7 Synopses of Research Studies
Key Chapter Concepts
Exercise for Critically Appraising Published Articles
Additional Reading
PART 3: CRITICALLY APPRAISING STUDIES FOR ALTERNATIVE EIP QUESTIONS
9 Critically Appraising Nonexperimental Quantitative Studies
9.1 Surveys
9.2 Cross-Sectional and Longitudinal Studies
9.3 Case-Control Studies
9.4 Synopses of Research Studies
Key Chapter Concepts
Exercise for Critically Appraising Published Articles
Additional Reading
10 Critically Appraising Qualitative Studies
10.1 Qualitative Observation
10.2 Qualitative Interviewing
10.3 Other Qualitative Methodologies
10.4 Qualitative Sampling
10.5 Grounded Theory
10.6 Alternatives to Grounded Theory
10.7 Frameworks for Appraising Qualitative Studies
10.8 Mixed Model and Mixed Methods Studies
10.9 Synopses of Research Studies
Key Chapter Concepts
Exercise for Critically Appraising Published Articles
Additional Reading
PART 4: ASSESSMENT AND MONITORING IN EVIDENCE-INFORMED PRACTICE
11 Critically Appraising, Selecting, and Constructing Assessment Instruments
11.1 Reliability
11.2 Validity
11.3 Feasibility
11.4 Sample Characteristics
11.5 Locating Assessment Instruments
11.6 Constructing Assessment Instruments
11.7 Synopses of Research Studies
Key Chapter Concepts
Exercise for Critically Appraising Published Articles
Additional Reading
12 Monitoring Client Progress
12.1 A Practitioner-Friendly Single-Case Design
12.2 Using Within-Group Effect-Size Benchmarks
Key Chapter Concepts
Additional Reading
PART 5: ADDITIONAL ASPECTS OF EVIDENCE-INFORMED PRACTICE
13 Appraising and Conducting Data Analyses in EIP
13.1 Introduction
13.2 Ruling Out Statistical Chance
13.3 What Else Do You Need to Know?
13.4 The .05 Cutoff Point Is Not Sacred!
13.5