Case Study Methods in Psychology: Examples and Everyday Research Methods

What is a case study?

A case study is a research approach that is used to generate an in-depth, multi-faceted understanding of a complex issue in its real-life context. It is an established research design that is used extensively in a wide variety of disciplines, particularly in the social sciences. A case study can be defined in a variety of ways (Table ​5), the central tenet being the need to explore an event or phenomenon in depth and in its natural context. It is for this reason sometimes referred to as a "naturalistic" design; this is in contrast to an "experimental" design (such as a randomised controlled trial) in which the investigator seeks to exert control over and manipulate the variable(s) of interest.

Stake's work has been particularly influential in defining the case study approach to scientific enquiry. He has helpfully characterised three main types of case study: intrinsic, instrumental and collective[8]. An intrinsic case study is typically undertaken to learn about a unique phenomenon. The researcher should define the uniqueness of the phenomenon, which distinguishes it from all others. In contrast, the instrumental case study uses a particular case (some of which may be better than others) to gain a broader appreciation of an issue or phenomenon. The collective case study involves studying multiple cases simultaneously or sequentially in an attempt to generate a still broader appreciation of a particular issue.

These are however not necessarily mutually exclusive categories. In the first of our examples (Table 1), we undertook an intrinsic case study to investigate the issue of recruitment of minority ethnic people into the specific context of asthma research studies, but it developed into an instrumental case study through seeking to understand the issue of recruitment of these marginalised populations more generally, generating a number of findings that are potentially transferable to other disease contexts[3]. In contrast, the other three examples (see Tables 2, 3 and 4) employed collective case study designs to study the introduction of workforce reconfiguration in primary care, the implementation of electronic health records into hospitals, and the ways in which healthcare students learn about patient safety considerations[4-6]. Although our study focusing on the introduction of General Practitioners with Specialist Interests (Table 2) was explicitly collective in design (four contrasting primary care organisations were studied), it was also instrumental in that this particular professional group was studied as an exemplar of the more general phenomenon of workforce redesign[4].

What are case studies used for?

According to Yin, case studies can be used to explain, describe or explore events or phenomena in the everyday contexts in which they occur[1]. These can, for example, help to understand and explain causal links and pathways resulting from a new policy initiative or service development (see Tables 2 and 3, for example)[1]. In contrast to experimental designs, which seek to test a specific hypothesis through deliberately manipulating the environment (for example, in a randomised controlled trial, giving a new drug to randomly selected individuals and then comparing outcomes with controls),[9] the case study approach lends itself well to capturing information on more explanatory 'how', 'what' and 'why' questions, such as 'how is the intervention being implemented and received on the ground?'. The case study approach can offer additional insights into what gaps exist in its delivery or why one implementation strategy might be chosen over another. This in turn can help develop or refine theory, as shown in our study of the teaching of patient safety in undergraduate curricula (Table 4)[6,10]. Key questions to consider when selecting the most appropriate study design are whether it is desirable, or indeed possible, to undertake a formal experimental investigation in which individuals and/or organisations are allocated to an intervention or control arm, or whether the wish is to obtain a more naturalistic understanding of an issue. The former is ideally studied using a controlled experimental design, whereas the latter is more appropriately studied using a case study design.

Case studies may be approached in different ways depending on the epistemological standpoint of the researcher, that is, whether they take a critical (questioning one's own and others' assumptions), interpretivist (trying to understand individual and shared social meanings) or positivist approach (orientating towards the criteria of natural sciences, such as focusing on generalisability considerations) (Table ​6). Whilst such a schema can be conceptually helpful, it may be appropriate to draw on more than one approach in any case study, particularly in the context of conducting health services research. Doolin has, for example, noted that in the context of undertaking interpretative case studies, researchers can usefully draw on a critical, reflective perspective which seeks to take into account the wider social and political environment that has shaped the case[11].

Table 6

Example of epistemological approaches that may be used in case study research

How are case studies conducted?

Here, we focus on the main stages of research activity when planning and undertaking a case study; the crucial stages are: defining the case; selecting the case(s); collecting and analysing the data; interpreting data; and reporting the findings.

Defining the case

Carefully formulated research question(s), informed by the existing literature and a prior appreciation of the theoretical issues and setting(s), are all important in appropriately and succinctly defining the case[8,12]. Crucially, each case should have a pre-defined boundary which clarifies the nature and time period covered by the case study (i.e. its scope, beginning and end), the relevant social group, organisation or geographical area of interest to the investigator, the types of evidence to be collected, and the priorities for data collection and analysis (see Table ​7)[1]. A theory driven approach to defining the case may help generate knowledge that is potentially transferable to a range of clinical contexts and behaviours; using theory is also likely to result in a more informed appreciation of, for example, how and why interventions have succeeded or failed[13].

Table 7

Example of a checklist for rating a case study proposal[8]

For example, in our evaluation of the introduction of electronic health records in English hospitals (Table ​3), we defined our cases as the NHS Trusts that were receiving the new technology[5]. Our focus was on how the technology was being implemented. However, if the primary research interest had been on the social and organisational dimensions of implementation, we might have defined our case differently as a grouping of healthcare professionals (e.g. doctors and/or nurses). The precise beginning and end of the case may however prove difficult to define. Pursuing this same example, when does the process of implementation and adoption of an electronic health record system really begin or end? Such judgements will inevitably be influenced by a range of factors, including the research question, theory of interest, the scope and richness of the gathered data and the resources available to the research team.

Selecting the case(s)

The decision on how to select the case(s) to study is a very important one that merits some reflection. In an intrinsic case study, the case is selected on its own merits[8]. The case is selected not because it is representative of other cases, but because of its uniqueness, which is of genuine interest to the researchers. This was, for example, the case in our study of the recruitment of minority ethnic participants into asthma research (Table ​1) as our earlier work had demonstrated the marginalisation of minority ethnic people with asthma, despite evidence of disproportionate asthma morbidity[14,15]. In another example of an intrinsic case study, Hellstrom et al.[16] studied an elderly married couple living with dementia to explore how dementia had impacted on their understanding of home, their everyday life and their relationships.

For an instrumental case study, selecting a "typical" case can work well[8]. In contrast to the intrinsic case study, the particular case which is chosen is of less importance than selecting a case that allows the researcher to investigate an issue or phenomenon. For example, in order to gain an understanding of doctors' responses to health policy initiatives, Som undertook an instrumental case study interviewing clinicians who had a range of responsibilities for clinical governance in one NHS acute hospital trust[17]. Sampling a "deviant" or "atypical" case may however prove even more informative, potentially enabling the researcher to identify causal processes, generate hypotheses and develop theory.

In collective or multiple case studies, a number of cases are carefully selected. This offers the advantage of allowing comparisons to be made across several cases and/or replication. Choosing a "typical" case may enable the findings to be generalised to theory (i.e. analytical generalisation) or to test theory by replicating the findings in a second or even a third case (i.e. replication logic)[1]. Yin suggests two or three literal replications (i.e. predicting similar results) if the theory is straightforward and five or more if the theory is more subtle. However, critics might argue that selecting 'cases' in this way is insufficiently reflexive and ill-suited to the complexities of contemporary healthcare organisations.

The selected case study site(s) should allow the research team access to the group of individuals, the organisation, the processes or whatever else constitutes the chosen unit of analysis for the study. Access is therefore a central consideration; the researcher needs to come to know the case study site(s) well and to work cooperatively with them. Selected cases need to be not only interesting but also hospitable to the inquiry [8] if they are to be informative and answer the research question(s). Case study sites may also be pre-selected for the researcher, with decisions being influenced by key stakeholders. For example, our selection of case study sites in the evaluation of the implementation and adoption of electronic health record systems (see Table ​3) was heavily influenced by NHS Connecting for Health, the government agency that was responsible for overseeing the National Programme for Information Technology (NPfIT)[5]. This prominent stakeholder had already selected the NHS sites (through a competitive bidding process) to be early adopters of the electronic health record systems and had negotiated contracts that detailed the deployment timelines.

It is also important to consider in advance the likely burden and risks associated with participation for those who (or the site(s) which) comprise the case study. Of particular importance is the obligation for the researcher to think through the ethical implications of the study (e.g. the risk of inadvertently breaching anonymity or confidentiality) and to ensure that potential participants/participating sites are provided with sufficient information to make an informed choice about joining the study. The outcome of providing this information might be that the emotive burden associated with participation, or the organisational disruption associated with supporting the fieldwork, is considered so high that the individuals or sites decide against participation.

In our example of evaluating implementations of electronic health record systems, given the restricted number of early adopter sites available to us, we sought purposively to select a diverse range of implementation cases among those that were available[5]. We chose a mixture of teaching, non-teaching and Foundation Trust hospitals, and examples of each of the three electronic health record systems procured centrally by the NPfIT. At one recruited site, it quickly became apparent that access was problematic because of competing demands on that organisation. Recognising the importance of full access and co-operative working for generating rich data, the research team decided not to pursue work at that site and instead to focus on other recruited sites.

Collecting the data

In order to develop a thorough understanding of the case, the case study approach usually involves the collection of multiple sources of evidence, using a range of quantitative (e.g. questionnaires, audits and analysis of routinely collected healthcare data) and more commonly qualitative techniques (e.g. interviews, focus groups and observations). The use of multiple sources of data (data triangulation) has been advocated as a way of increasing the internal validity of a study (i.e. the extent to which the method is appropriate to answer the research question)[8,18-21]. An underlying assumption is that data collected in different ways should lead to similar conclusions, and approaching the same issue from different angles can help develop a holistic picture of the phenomenon (Table ​2)[4].

Brazier and colleagues used a mixed-methods case study approach to investigate the impact of a cancer care programme[22]. Here, quantitative measures were collected with questionnaires before, and five months after, the start of the intervention which did not yield any statistically significant results. Qualitative interviews with patients however helped provide an insight into potentially beneficial process-related aspects of the programme, such as greater, perceived patient involvement in care. The authors reported how this case study approach provided a number of contextual factors likely to influence the effectiveness of the intervention and which were not likely to have been obtained from quantitative methods alone.

In collective or multiple case studies, data collection needs to be flexible enough to allow a detailed description of each individual case to be developed (e.g. the nature of different cancer care programmes), before considering the emerging similarities and differences in cross-case comparisons (e.g. to explore why one programme is more effective than another). It is important that data sources from different cases are, where possible, broadly comparable for this purpose even though they may vary in nature and depth.

Analysing, interpreting and reporting case studies

Making sense and offering a coherent interpretation of the typically disparate sources of data (whether qualitative alone or together with quantitative) is far from straightforward. Repeated reviewing and sorting of the voluminous and detail-rich data are integral to the process of analysis. In collective case studies, it is helpful to analyse data relating to the individual component cases first, before making comparisons across cases. Attention needs to be paid to variations within each case and, where relevant, the relationship between different causes, effects and outcomes[23]. Data will need to be organised and coded to allow the key issues, both derived from the literature and emerging from the dataset, to be easily retrieved at a later stage. An initial coding frame can help capture these issues and can be applied systematically to the whole dataset with the aid of a qualitative data analysis software package.

The Framework approach is a practical method for managing and analysing large datasets, comprising five stages (familiarisation; identifying a thematic framework; indexing; charting; and mapping and interpretation); it is particularly useful when time is limited, as was the case in our study of recruitment of South Asians into asthma research (Table 1)[3,24]. Theoretical frameworks may also play an important role in integrating different sources of data and examining emerging themes. For example, we drew on a socio-technical framework to help explain the connections between different elements - technology, people, and the organisational settings within which they worked - in our study of the introduction of electronic health record systems (Table 3)[5]. Our study of patient safety in undergraduate curricula drew on an evaluation-based approach to design and analysis, which emphasised the importance of the academic, organisational and practice contexts through which students learn (Table 4)[6].

Case study findings can have implications both for theory development and theory testing. They may establish, strengthen or weaken historical explanations of a case and, in certain circumstances, allow theoretical (as opposed to statistical) generalisation beyond the particular cases studied[12]. These theoretical lenses should not, however, constitute a strait-jacket and the cases should not be "forced to fit" the particular theoretical framework that is being employed.

When reporting findings, it is important to provide the reader with enough contextual information to understand the processes that were followed and how the conclusions were reached. In a collective case study, researchers may choose to present the findings from individual cases separately before amalgamating across cases. Care must be taken to ensure the anonymity of both case sites and individual participants (if agreed in advance) by allocating appropriate codes or withholding descriptors. In the example given in Table ​3, we decided against providing detailed information on the NHS sites and individual participants in order to avoid the risk of inadvertent disclosure of identities[5,25].

What are the potential pitfalls and how can these be avoided?

The case study approach is, as with all research, not without its limitations. When investigating the formal and informal ways undergraduate students learn about patient safety (Table ​4), for example, we rapidly accumulated a large quantity of data. The volume of data, together with the time restrictions in place, impacted on the depth of analysis that was possible within the available resources. This highlights a more general point of the importance of avoiding the temptation to collect as much data as possible; adequate time also needs to be set aside for data analysis and interpretation of what are often highly complex datasets.

Case study research has sometimes been criticised for lacking scientific rigour and providing little basis for generalisation (i.e. producing findings that may be transferable to other settings)[1]. There are several ways to address these concerns, including: the use of theoretical sampling (i.e. drawing on a particular conceptual framework); respondent validation (i.e. participants checking emerging findings and the researcher's interpretation, and providing an opinion as to whether they feel these are accurate); and transparency throughout the research process (see Table ​8)[8,18-21,23,26]. Transparency can be achieved by describing in detail the steps involved in case selection, data collection, the reasons for the particular methods chosen, and the researcher's background and level of involvement (i.e. being explicit about how the researcher has influenced data collection and interpretation). Seeking potential, alternative explanations, and being explicit about how interpretations and conclusions were reached, help readers to judge the trustworthiness of the case study report. Stake provides a critique checklist for a case study report (Table ​9)[8].

Table 8

Potential pitfalls and mitigating actions when undertaking case study research

Table 9

Stake's checklist for assessing the quality of a case study report[8]

I. Why Are Research Methods Important?

Science, at a basic level, attempts to answer questions (such as "Why are we aggressive?") through careful observation and collection of data. These answers can then (at a more complex or higher level) be used to further our knowledge of ourselves and our world, as well as help us predict subsequent events and behavior.

But, this requires a systematic/universal way of collecting and understanding data -- otherwise there is chaos.

At a practical level, methodology helps us understand and evaluate the merit of all the information we're confronted with every day. For example, do you believe the following studies?

1) A study indicated that the life span of left-handed people is significantly shorter than that of people who are right-hand dominant.

2) A study demonstrated a link between smoking and poor grades.

There are many details of these studies that one needs to know before evaluating the validity of the results. However, most people do not bother to find out those details (which are the keys to understanding the studies) and pay attention only to the findings, even if the findings are completely erroneous.

Research methods are also practical in the workplace:

1) Mental Health Profession - relies on research to develop new therapies, and learn which therapies are appropriate and effective for different types of problems and people.

2) Business World - marketing strategies, hiring, employee productivity, etc.

II.Different Types of Research Methods

1) Basic Research

answers fundamental questions about the nature of behavior. Not done for application, but rather to gain knowledge for the sake of knowledge.

For Example, look at the titles of these publications:

a) Short and long-term memory retrieval: A comparison of effects of information overload and relatedness.

b) Electrophysiological activity in the central nucleus of the amygdala: Emotionality and stress ulcers in rats.

Some people erroneously believe that basic research is useless. In reality, basic research is the foundation upon which others can develop applications and solutions. So while basic research may not appear to be helpful in the real world, it can direct us toward practical applications such as, but definitely not limited to:

a) Skinner - trained animals to work for reinforcement - this led to reinforcement schedules and applications in I/O psychology, therapy, and education.

b) All those therapeutic techniques that clinical psychologists and other therapists use to help people must be studied to determine which are most effective for which situations, people, and problems.

2) Applied Research

concerned with finding solutions to practical problems and putting these solutions to work in order to help others.

Some examples of publication titles:

a) Effects of exercise, relaxation, and management skills training on physiological stress indicators.

b) Promoting automobile safety belt use by young children.

Today, there is a push toward more applied research. This is in no small part due to the perspective in the United States that we want solutions and we want them now! BUT, we still need to keep our perspective on the need for basic research.

3) Program Evaluation

looks at existing programs in such areas as government, education, criminal justice, etc., and determines the effectiveness of these programs. DOES THE PROGRAM WORK?

For example - Does capital punishment work? Think of all the issues surrounding this program and how hard it is to examine its effectiveness. The most immediate issue: how do you define the purpose and "effectiveness" of capital punishment? If the purpose is to prevent convicted criminals from ever committing that same crime or any other crime, then capital punishment is an absolute: 100% effective. However, if the point of capital punishment is to deter would-be criminals from committing crimes, then it is a completely different story.

III.How Do Non-Scientists Gather Information?

We all observe our world and make conclusions. How do we do this?

1) seek an authority figure - a teacher tells you something, so you believe them. Is this such a good idea?

For example, what if your teacher tells you that there is a strong body of evidence suggesting that larger brains = greater intelligence? Should you accept that claim on authority alone?

2) intuition - discussed in previous chapter.

Are women more romantic than men?
Is cramming for an exam the best way to study?

Whatever your opinion, do you have data to support your OPINIONS about these questions???

Luckily, there is a much better path toward the TRUTH...the Scientific Method.


IV. The Scientific Method

How do we find scientific truth? The scientific method is NOT perfect, but it is the best method available today.

To use the scientific method, all topics of study must have the following criteria:

1) must be testable (e.g., can you test the existence of god?)

2) must be falsifiable - it is easy to find support for almost any claim (depending on the situation), but systematically demonstrating a claim to be false is quite difficult (e.g., can you prove that god does not exist?)

A. Goals of the Scientific Method

Describe, Predict, Select Method & Design, Control, Collect Data, Analyze & Interpret, Report the Findings

1) Description - the citing of the observable characteristics of an event, object, or individual. Helps us to be systematic and consistent.

This stage lays the groundwork for the more formal stages - here we acquire our topic of study and begin to transform it from a general concept or idea into a specific, testable construct.

a) Operational Definitions - the definition of behaviors or qualities in terms of how they are to be measured. Some books define it as the description of the actions or operations that will be made to measure or control a variable.


How can you define "life change"? One possibility is the score on the Social Readjustment Rating Scale.

How do you define obesity, abnormality, etc. in a way that is testable and falsifiable?

2) Prediction - here we formulate testable predictions or HYPOTHESES about behavior (specifically, about our variables). Thus, we may define a hypothesis as a tentative statement about the relationship between two or more variables. For example, one may hypothesize that as alcohol consumption increases driving ability decreases.

Hypotheses are usually based on THEORIES - statements which summarize and explain research findings.

3) Select Methodology & Design - chose the most appropriate research strategy for empirically addressing your hypotheses.

4) Control - a method of eliminating all unwanted factors that may affect what we are attempting to study (we will address this in more detail later).

5) Collect Data - although the book is a little redundant and does not differentiate well between this stage and selecting the design and method, data collection is simply the execution and implementation of your research design.

6) Analyze & Interpret the Data - use of statistical procedures to determine the mathematical and scientific importance (not the "actual" importance or meaningfulness) of the data. Were the differences between the groups/conditions large enough to be meaningful (not due to chance)?

Then, you must indicate what those differences actually mean...discovery of the causes of behavior, cognition, and physiological processes.
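As a rough illustration of the "not due to chance" idea, here is a minimal Python sketch (with made-up exam scores, not data from the text) of Welch's t statistic, one common way to gauge whether the difference between two groups' means is large relative to the variability within the groups:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic: the difference between the group means,
    scaled by the standard error of that difference. A larger |t|
    means the group difference is big relative to the chance
    variability within each group."""
    se = sqrt(variance(group_a) / len(group_a) +
              variance(group_b) / len(group_b))
    return (mean(group_a) - mean(group_b)) / se

# Hypothetical exam scores for two (made-up) study conditions
spaced_practice = [85, 90, 88, 92, 87]
cramming = [78, 82, 80, 75, 79]
print(round(welch_t(spaced_practice, cramming), 2))  # → 5.74
```

In practice, a statistic like this is compared against a critical value (or converted to a p value) to decide whether the difference is statistically significant; statistical software does this in one step.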

7) Report/Communicate the Findings - Psychology is a science that is based on sharing - finding answers to questions is meaningless (to everyone except the scientist) unless that information can be shared with others. We do this through publications in scientific journals, books, presentations, lectures, etc.

B. Ways of Conducting Scientific Research

1) Naturalistic Observation - allow behavior to occur without interference or intervention by the researcher.

we all do this (people watch)

weaknesses: often not easy to observe without being intrusive.

strengths: study behavior in real setting - not lab.

2) Case Study - an in-depth investigation of an individual's life, used to reconstruct major aspects of a person's life. The aim is to see what events led up to the current situation.

Usually involves: interview, observation, examine records, & psych. testing.

weaknesses: very subjective. Like piecing together a puzzle, there are often gaps - the method relies on the memory of the individual, medical records, etc.

strengths: good for assessing psychological disorders - can see history and development.

3) Survey - either a written questionnaire, verbal interview, or combination of the two, used to gather information about specific aspects of behavior.


weaknesses: self-report data (honesty is questionable)

strengths: gather a lot of information in a short time.

gather information on issues that are not easily observable.

4) Psychological Testing - provide a test and then score the answers to draw conclusions from.

Examples. - I.Q. tests, personality inventories, S.A.T., G.R.E., etc...

weaknesses: validity is always a question; honesty of answers.

strengths: can be very predictive and useful if valid.

5) Experimental Research (only way to approach Cause & Effect) - method of controlling all variables except the variable of interest which is manipulated by the investigator to determine if it affects another variable.

V. KEY TERMS (you will need to get very familiar with these terms to succeed in Psychology. You can also look in the glossary of terms we have provided for these and other important terms):

1) variable - any measurable condition, event, characteristic, or behavior that can be controlled or observed in a study.

Independent Variable (IV)- the variable that is manipulated by the researcher to see how it affects the dependent variable.

Dependent Variable (DV)- the behavior or response outcome that the researcher measures, which is hoped to have been affected by the IV.

2) control - any method for dealing with extraneous variables that may affect your study.

Extraneous variable - any variable other than the IV that may influence the DV in a specific way.

Example - how quickly can rats learn a maze (2 groups). What to control?

3) Groups (of subjects/participants) in an Experiment - experimental vs control

experimental group - group exposed to the IV in an experiment.

control group - group not exposed to IV. This does not mean that this group is not exposed to anything, though. For example, in a drug study, it is wise to have an experimental group (gets the drug), a placebo control group (receives a drug exactly like the experimental drug, but without any active ingredients), and a no-placebo control group (they get no drug...nothing)

both groups must be treated EXACTLY the same except for the IV.

4) Confound - occurs when any other variable except the IV affects the DV (extraneous variable) in a systematic way. In this case, what is causing the effect on the DV? Unsure.

Example - Vitamin X vs Vitamin Y. Group 1 run in morning, group 2 in afternoon. Do you see a problem with this? (I hope so)

Many things may lead to confounds (here are just two examples):

5) Experimenter Bias - if the researcher (or anyone on the research team) acts differently towards those in one group it may influence participants' behaviors and thus alter the findings. This is usually not done on purpose, but just knowing what group a participant is in may be enough to change the way we behave toward our participants.

6) Participant Bias (Demand Characteristics) - participants may act in ways they believe correspond to what the researcher is looking for. Thus, the participant may not act in a natural way.

7) Types of Experimental Designs: true experiment, quasi-experiment, & correlation.

a) The True Experiment: Attempts to establish cause & effect

To be a True Experiment, you must have BOTH - manipulation of the IV & Random Assignment (RA) of subjects/participants to groups.

1) manipulation of the IV - manipulation of the IV occurs when the researcher has control over the variable itself and can make adjustments to that variable.

For example, if I examine the effects of Advil on headaches, I can manipulate the doses given, the strength of each pill, the time given, etc. But if I want to determine the effect of Advil on headaches in males vs females, can I manipulate gender? Is gender a true IV?

2) Random Assignment - randomly placing participants into groups/conditions so that all participants have an equal chance of being assigned to any condition.
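Random assignment is simple to sketch in code. The helper below is a hypothetical illustration (not from the text): it shuffles the participant list and then deals it round-robin into equal-sized groups, so every participant has the same chance of landing in any condition:

```python
import random

def random_assignment(participants, n_groups=2, seed=None):
    """Shuffle the participants, then deal them round-robin into
    n_groups conditions. Every participant has an equal chance of
    being assigned to any group."""
    rng = random.Random(seed)  # seed only makes the example reproducible
    pool = list(participants)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

# 20 hypothetical participant IDs split into experimental and control
experimental, control = random_assignment(range(1, 21), n_groups=2, seed=42)
print(len(experimental), len(control))  # → 10 10
```

Note that shuffling before dealing is what makes this random assignment rather than, say, assignment by arrival order, which could systematically bias the groups.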

b) Quasi-Experimental Designs: same as the true experiment, but now there is no random assignment of subjects to groups. Still have one group which gets the IV and one that does not, but subjects are not randomly assigned to groups.

There are many types of quasi designs (actually, too many to go into detail here). What is vital to know is that in all of them, there's a lack of RA.

c) Correlation: attempts to determine how much of a relationship exists between variables. It can not establish cause & effect.

1) to show strength of a relationship we use the Correlation Coefficient (r).

The coefficient ranges from -1.0 to +1.0:

-1.0 = perfect negative/inverse correlation

+1.0 = perfect positive correlation

0.0 = no relationship

positive correlation- as one variable increases or decreases, so does the other. Example. studying & test scores.

negative correlation - as one variable increases or decreases, the other moves in the opposite direction. Example. as food intake decreases, hunger increases.
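The correlation coefficient can be computed directly from its definition: the covariance of the two variables divided by the product of their standard deviations. A minimal Python sketch, using made-up data for the two examples above:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient (r): covariance of x and y
    divided by the product of their standard deviations.
    Always falls between -1.0 and +1.0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. test score (positive relationship),
# food intake vs. hunger rating (negative relationship)
print(round(pearson_r([1, 2, 3, 4], [55, 65, 75, 85]), 3))  # → 1.0
print(round(pearson_r([5, 4, 3, 2], [1, 3, 5, 7]), 3))      # → -1.0
```

The toy datasets are perfectly linear, so r comes out at the extremes; real behavioral data almost always produce intermediate values.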


1) Between-subjects design: in this type of design, each participant participates in one and only one group. The results from each group are then compared to each other to examine differences, and thus, the effectiveness of the IV. For example, in a study examining the effect of Bayer aspirin vs Tylenol on headaches, we can have 2 groups (those getting Bayer and those getting Tylenol). Participants get either Bayer OR Tylenol, but they do NOT get both.

2) Within-subjects design: in this design, participants get all of the treatments/conditions. For example, in the study presented above (Bayer vs Tylenol), each participant would get the Bayer, the effectiveness measured, and then each would get Tylenol, then the effectiveness measured. See the differences?


Validity - does the test measure what we want it to measure? If yes, then it is valid.

For Example - does a stress inventory/test actually measure the amount of stress in a person's life and not something else?

Reliability - is the test consistent? If we get the same results over and over, then it is reliable.

For Example - an IQ test probably won't change if you take it several times. Thus, if it produces the same (or very, very similar) results each time it is taken, then it is reliable.

However, a test can be reliable without being valid, so we must be careful.

For Example - the heavier your head, the smarter you are. If I weighed your head at the same time each day, once a day, for a week, it would be virtually the same weight each day. This means that the test is reliable. But, do you think this test is valid (that it indeed measures your level of "smartness")? Probably NOT, and therefore, it is not valid.


