ADVOCATE HEALTH CARE (RESEARCH DESIGN)

SYNOPSIS:

*      Definition

*      Elements of research methods

*      Pyramid of research design

*      How to assess research designs for Validity & Error

*      The 4 basic clinical research designs and examples

RESEARCH METHODOLOGY: DEFINITION

*      A collective term for the structured process of conducting research

*      There are many different methodologies used in various types of research

*      The term is usually considered to include research design, data gathering, and data analysis

*      The research methods are the most important part of any study

*      They are the blueprint for your study, upon which everything is built

RESEARCH METHODS:

 Research methods consist of:

1.      Study design (e.g., randomized controlled trial, cohort, case-control)

2.      Population to be sampled

        - Sample size and power calculation (see the sketch after this list)

        - Inclusion and exclusion criteria

        - Subject selection and assignment

3.      Assignment to either the control group or the treatment group

        - Treatment

        - Procedures

        - Measurements

        - Data analysis
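
The sample-size and assignment steps in items 2 and 3 above can be sketched in a few lines of Python. This is only an illustration: statsmodels and NumPy are assumed tools, and the effect size, alpha, power, and subject counts are invented, not taken from these notes.

import numpy as np
from statsmodels.stats.power import TTestIndPower

# Sample size: subjects needed per group to detect a medium standardized
# effect (Cohen's d = 0.5) with alpha = 0.05 and 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Subjects needed per group: {np.ceil(n_per_group):.0f}")   # about 64

# Random assignment: shuffle hypothetical recruited subject IDs and split
# them evenly into control and treatment groups.
rng = np.random.default_rng(seed=42)   # fixed seed so the split is reproducible
subject_ids = np.arange(128)           # hypothetical recruited sample of 128 subjects
shuffled = rng.permutation(subject_ids)
control_group, treatment_group = shuffled[:64], shuffled[64:]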

IMPORTANCE OF THE RESEARCH QUESTION

*      Research design and methods will be driven by the research question

*      Based on this question, it is necessary to select a research design that is both ethical and feasible

 For example:

Does cigarette smoking cause lung cancer?

 Hypothesis: Cigarette smoking causes lung cancer.

Does circumcision cause penile cancer?

 Hypothesis: Circumcision causes penile cancer.

 

 

IMPORTANCE OF VALIDITY AND ERROR

*      When selecting a research design there are several elements you must consider: validity, reliability, and error.

*      Validity: 1) the extent to which a test measures what it claims to measure; 2) it is vital for a test to be valid in order for the results to be accurately applied and interpreted; 3) validity is determined by a body of research that demonstrates the relationship between the test and whatever it is intended to measure.

*      Reliability: the degree to which an assessment tool produces stable and consistent results (see the sketch after this list).

*      Error: represents something other than what is being measured.
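
One common way to see reliability in practice is test-retest reliability: administer the same instrument twice and correlate the scores. The sketch below is a minimal illustration that assumes SciPy is available; the scores are invented for the example.

from scipy.stats import pearsonr

# Hypothetical scores for the same eight subjects measured on two occasions.
occasion_1 = [12, 15, 11, 18, 14, 16, 13, 17]
occasion_2 = [13, 14, 11, 19, 15, 15, 12, 18]

# A correlation near 1 indicates stable, consistent results across occasions.
r, p_value = pearsonr(occasion_1, occasion_2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")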

 

TYPES OF VALIDITY

*      Internal Validity: the ability of a study to unambiguously determine the causal relationship between two or more variables: with what certainty can we conclude that X caused the measurable difference we found in Y?

*      External Validity (often called generalizability): the degree to which conclusions can be generalized to the universe outside of the study (can the results of the study be generalized to the population the sample was drawn from, or to other groups?)

 It is through proper study design that high levels of validity, both internal and external, can be achieved. Without internal validity, you cannot have external validity.

 

TYPES OF VALIDITY: EXAMPLES

Internal Validity

A study may have poor internal validity if testing was not performed the same way in the treatment and control groups, or if confounding variables were not accounted for in the study design or analysis.

External Validity

A study performed exclusively in a particular gender, racial, or geographic sub-group, such as white females in Appalachia, may not be applicable to Hispanic men in the Northwest.

TYPES OF ERROR

Random Error – a wrong result due to chance; random errors vary in magnitude and direction.

Systematic Error – a wrong result due to bias; systematic errors tend to be consistent in magnitude and/or direction.
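
A short simulation makes the distinction concrete; the true value, error size, and bias below are invented for illustration. Random error scatters readings in both directions and largely averages out, while systematic error (bias) shifts every reading in the same direction and does not.

import numpy as np

rng = np.random.default_rng(seed=0)
true_value = 100.0

# Random error: varies in magnitude and direction from reading to reading.
random_only = true_value + rng.normal(loc=0.0, scale=5.0, size=1000)

# Systematic error: a consistent +4 bias on top of the same random noise.
with_bias = true_value + 4.0 + rng.normal(loc=0.0, scale=5.0, size=1000)

print(f"Mean with random error only: {random_only.mean():.1f}")   # close to 100
print(f"Mean with systematic error:  {with_bias.mean():.1f}")     # close to 104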

 

VALIDITY VS. ERROR

 Error directly affects the validity of a study.

High error results in low validity; low error results in high validity.

RESEARCH DESIGN:

 

*      Randomized controlled trial and meta-analysis - one of the simplest and most powerful tools in research: quantitative, comparative, controlled experiments in which subjects are allocated at random to receive one of several clinical interventions, one of which is the standard of comparison or control.

*      Nonrandomized trial, concurrent and historical controls - participants are not assigned by chance to different treatment groups; they may choose which group they want to be in, or they may be assigned to the groups by the researchers.

*      Cohort study, prospective and retrospective - observes the effect on a specific group with a certain trait over time; individuals with differing exposures to a suspected factor are identified and then observed for the occurrence of certain health effects over a period of time (e.g., does exposure to smoking cause lung cancer?). See the sketch after this list.

*      Case-control study - retrospectively compares patients with a disease to those who do not have the disease, and how frequently exposure to a risk factor is present in each group, to determine the relationship between the risk factor and the disease. See the sketch after this list.

*      Cross-sectional study - measures the prevalence of health outcomes in a population at a point in time (single point of data collection) or over a short period of time; disease and exposure are measured simultaneously in a given population (e.g., looking at the prevalence of breast cancer in a population).

*      Case study - an in-depth study of every aspect of one person's life and history to seek patterns and causes for disease; can be retrospective or prospective.

*      Case report - the lowest level of evidence and the first line of evidence where new issues and ideas emerge; describes cases that are unique and cannot be explained by known diseases or syndromes, that show an important variation of a disease or condition, or that show unexpected events that may yield new or useful information.
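
To make the cohort and case-control designs above concrete, the sketch below computes the measure each design typically reports, relative risk for a cohort study and odds ratio for a case-control study, from a hypothetical 2x2 table. The counts are invented for illustration only.

# 2x2 layout of hypothetical counts:
#                outcome present   outcome absent
#   exposed             a                b
#   unexposed           c                d
a, b, c, d = 30, 70, 10, 90

# Cohort study: risks (incidence) can be estimated directly because subjects
# are followed forward from exposure to outcome.
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
relative_risk = risk_exposed / risk_unexposed

# Case-control study: subjects are selected by outcome, so risks cannot be
# estimated directly; instead the odds of exposure are compared.
odds_ratio = (a * d) / (b * c)

print(f"Relative risk (cohort design):    {relative_risk:.2f}")   # 3.00
print(f"Odds ratio (case-control design): {odds_ratio:.2f}")      # about 3.86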

 

 

 

 

 

 

THE 4 BASIC CLINICAL RESEARCH DESIGNS

 Clinical Trials, Cohort (longitudinal), Case-Control, Case Series

The 4 designs fall under two categories: experimental and observational.

Experimental (always contains an intervention)

 1. Clinical Trial - Randomized, Controlled, Double-Blind

 1a. Quasi-experimental – lacks randomization and blinding

Observational (does not contain an intervention)

 2. Cohort or Longitudinal

 3. Case-Control

 4. Case Series

KEY FEATURES OF AN RCT (RANDOMIZED CONTROLLED TRIAL)

*      Gold standard for determining the effect of a clinical intervention in a group of patients

*      Random allocation of subjects to intervention groups

*      Patients, providers, and investigators are unaware of the treatment given to the groups (i.e., single/double/triple blinded)

*      Subjects are aware of being observed and may behave differently regardless of the actual effect induced by the treatment intervention (Hawthorne effect)

*      Patients are analyzed according to the initial treatment assignment, irrespective of whether they received the intended intervention (intention-to-treat analysis; see the sketch after this list)

*      Analysis is focused on estimating the size of the difference in the predefined outcomes between intervention groups
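
The intention-to-treat point above can be illustrated with a small pandas sketch. The data frame, column names, and numbers are hypothetical; the point is only that subjects are analyzed by the arm they were randomized to, even if they crossed over or never received the assigned intervention.

import pandas as pd

# Hypothetical trial records: 'assigned' is the randomized arm,
# 'received' is the intervention actually received, 'improved' is the outcome.
df = pd.DataFrame({
    "assigned": ["treatment", "treatment", "treatment", "control", "control", "control"],
    "received": ["treatment", "control", "treatment", "control", "control", "treatment"],
    "improved": [1, 0, 1, 0, 1, 1],
})

# Intention-to-treat: group by the arm subjects were ASSIGNED to.
itt_rates = df.groupby("assigned")["improved"].mean()

# Per-protocol, shown only for contrast: group by what was actually RECEIVED.
per_protocol_rates = df.groupby("received")["improved"].mean()

print("Intention-to-treat outcome rates:")
print(itt_rates)
print("Per-protocol outcome rates:")
print(per_protocol_rates)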

RESEARCH DESIGN MATRIX

Strength        Method          Error        Validity

Strongest       RCT             Lowest       Highest
                Cohort
                Case-Control
Weakest         Case Series     Highest      Lowest
