
Cambridge Centre for Teaching and Learning


Core ethical principles should be followed irrespective of whether you are designing your study for a publishable research outcome or simply evaluating an educational intervention. You should consider the effects of your study on participants at all stages of your research, from recruitment of participants through to writing up and dissemination of findings. Remember that your participants are people, not just sources of research data.

Ethical issues may arise from the nature of the research or evaluation project. For instance, ethical questions might arise from:

  • the methods of data collection (do participants know they are being observed?);
  • the nature of participants (are they of an age to give informed consent?);
  • the procedures adopted to gather data (could this create distress?);
  • the type of data collected (is it personal or sensitive information?);
  • what is to be done with the data (might publication expose participants to embarrassment or cause reputational damage to the University?); and
  • reporting the data (will participants understand the study and their contribution?).

A consideration of the ethical issues should be factored into your research at all stages, not just at the point of submitting an application for ethics review.

However, while formal ethics review is often not needed for evaluation, it will be required for anything that is to be published or made publicly available. Remember that ethics review cannot be undertaken retrospectively. If you think you might like to showcase or disseminate interesting findings about your practice or initiative, plan for the ethics review before you start the evaluation process.


Similarities between research and evaluation

There are many similarities between educational research and evaluation, particularly in educational contexts. Researchers and evaluators both pose questions and select the methodological approach best suited to investigate their questions in their university setting. Both research and evaluation are concerned with producing information and with promoting explanation and understanding, which can in turn contribute to enhanced teaching practices, decision-making about the curriculum, or policy formation. Both may operate at different levels: individual courses or programmes, local departments or Colleges, institutional policies or resources, or national enhancements of the higher education sector.


Differences between research and evaluation

However, there are also some key differences in approach. The following table presents some of the distinguishing features of research and evaluation studies. Although the two processes may overlap, this presentation of the distinctions may help to clarify your approach to your own higher education project.

Research: To be publishable, it should be based in relevant theory and normally requires ethical approval.
Evaluation: Should be based in practice and be relevant for, and understandable by, practitioners and stakeholders. Does not normally require ethical approval; it is not designed to be publishable.

Research: May not prescribe or know its intended outcomes in advance.
Evaluation: Concerned with the achievement of intended outcomes.

Research: Intellectual property is held by the researcher.
Evaluation: Cedes ownership to the institution or sponsor upon completion.

Research: Contributes to knowledge in the field, regardless of its practical application; provides empirical information - i.e. "what is".
Evaluation: Designed to use the information/facts to judge the worth, merit, value, efficacy, impact and effectiveness of something - i.e. "what is valuable".

Research: Conducted to gain, expand and extend knowledge; to generate theory, "discover" and predict what will happen.
Evaluation: Conducted to assess performance and provide feedback; to inform policy-making and "uncover". The concern is with what has happened or is happening.

Research: Designed to question, demonstrate or prove. Provides the basis for drawing conclusions, and information on which others might or might not act - i.e. it does not prescribe.
Evaluation: Designed to improve practices. Provides the basis for decision-making; might be used to increase or withhold resources or to change practice.

Research: Provides information for others to use. Disseminated widely, perhaps to specific audiences and/or specific journals.
Evaluation: Provides information for a local audience as evidence for adjustment; not for the public domain.

Research: Judgements are made by peers; their standards include validity, reliability, accuracy, causality, generalisability and rigour. Should be publishable.
Evaluation: Judgements are made by stakeholders; their standards include utility, feasibility, perceived relevance, efficacy and fitness for purpose.

Source: adapted from Research or Evaluation?, Educational Development Unit, Imperial College London


Useful framing questions

As you develop your study, it will be useful to reflect on the following questions.

If you are thinking of designing a research project:

  • What are your research questions or hypotheses? Are they theoretically underpinned, relevant, and interesting and accessible to your target audience?
  • Who is your target audience? Are you aiming at practitioners and a disciplinary audience or are you trying to reach a wider cross-disciplinary or general educationalist audience?

If you are thinking of evaluating current practices:

  • What are the objectives of the change / intervention on practice?
  • What would count as success and how / when would you know?
  • Whose performance, opinions, views or perceptions matter?
  • What are you comparing against? Is there a baseline or previous performance you can capture, are there different groups to compare, or are you comparing against some 'ideal'?
