Data and Evaluation

The Student Success Team has built its foundation on research and data.

To this end, in addition to the data used for the Access and Participation Plan (APP), the Data and Evaluation Manager produces a range of reports and data packs for subject areas to use. This allows work within each subject area to focus on the problems that arise there, rather than solely on institutional targets.

These reports aim to present information in an easily digestible form and incorporate summaries of some of the more technical work carried out centrally. The Student Success Central Team has found that subject areas engage more readily when their own data is used and positioned against that of other subject areas within the University. It also encourages a culture of frequent monitoring and examination of trends, giving subject areas the agility to amend practice when contemporary issues arise.

Alongside these reports, data is also distributed to subject areas for monitoring Student Success activities, and is then retrieved centrally to enable institutional analysis of interventions. This is explained in more detail in the Evaluation Framework section. Work is always ongoing to identify more nuanced issues, such as whether the gap is affected more:

  • within certain stages
  • by the foundation year
  • by module selection

as well as several other areas. Intersectionality is also examined, alongside some work on aggregated predictive analytics at institutional level, much like the Office for Students has conducted on the national data, to see how the institutional awarding gap may change in the future. We do not use machine learning or predict students' final outcomes; no predictive work is conducted on an individual student basis.
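
As an illustration only (the team's actual model is not described here), aggregate-level projection can be as simple as fitting a trend to historical institutional awarding-gap figures and extrapolating forward. Every number in the sketch below is invented for the example, and no student-level data is involved:

```python
import numpy as np

# Hypothetical historical institutional awarding gap, in percentage points
years = np.array([2018, 2019, 2020, 2021, 2022])
gap_pct = np.array([14.2, 13.5, 12.9, 12.1, 11.6])

# Fit a straight-line trend to the aggregate series
slope, intercept = np.polyfit(years, gap_pct, deg=1)

# Extrapolate the trend forward at institutional (aggregate) level only
for future_year in (2023, 2024, 2025):
    projected = slope * future_year + intercept
    print(f"{future_year}: projected gap {projected:.1f}%")
```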

Evaluation Framework

The Student Success Evaluation Framework provides a methodology and protocol for the Student Success Team at Kent to assess the impact of student success interventions on students' attainment, engagement and continuation. It also aims to establish the extent to which the aggregated effect of these interventions is helping the University meet its institutional Office for Students (OfS) targets.

Why evaluate student success interventions?

  • Evaluation outcomes give academic schools a means of assessing the extent to which their interventions are effective and having a positive impact on students' achievement. They are also a tool for schools to explore avenues for student support, or for staff engagement in Equality, Diversity and Inclusion activities. The impact of these interventions will subsequently result in a reduction of our institutional gaps.
  • The OfS Regulatory Framework requires institutions to evidence the impact of delivering the Access and Participation Plan (APP) through evaluation, in line with the OfS standards of evidence guide.
  • To meet the institutional principles of accountability, good practice and value for money, this evaluation framework provides a tool to identify the interventions that are most effective in reducing attainment differentials. This leads to a more agile mechanism for meeting OfS targets, a faster reduction in gaps, and improved equality of opportunity for students.

How do we evaluate?

The Student Success evaluation framework is informed by Theory of Change (ToC) as an approach to programme planning and evaluation. For programme planning, the Student Success ToC provides a tool and road map for academic schools and divisions to:

  • define the purpose and rationale of their interventions and activities
  • establish how the data on awarding and continuation gaps will inform decisions around targets and expected outcomes
  • identify the domains of change and pre-conditions necessary to effectively develop the programme
  • outline the strategic priorities in terms of types of intervention and target groups of students by socio-demographic characteristics, stage, and evaluation outcomes.

For programme evaluation, the Student Success ToC includes two interlinked types of evaluation: process evaluation and impact evaluation.

Process Evaluation allows us to understand how interventions have been implemented and delivered, and to identify the extent to which that process has been effective in achieving the interventions' expected outcomes. Through monitoring data on student and staff engagement in these interventions, case studies, and feedback analysis, we can establish how effectively they have targeted students, and how an intervention has contributed to changing behaviours and attitudes for institutional change. In terms of the OfS standards of evidence, this type of evaluation is defined as Type 1 Narrative.

Impact Evaluation provides a methodology to establish what difference an intervention made and the extent to which its outcomes contribute to students' improvement in attainment or continuation. This type of evaluation is also recognised within the OfS standards of evidence.

We begin the impact evaluation by establishing intervention groupings; this gives us a much better chance of having a population from which we are comfortable drawing conclusions.

The Student Success Evaluation Framework uses both broad and granular groupings: skills workshops, for example, form a granular grouping, while skills interventions as a whole form a broad one. By calculating each student's change in overall average between two data points, we can then examine where attendance at an intervention correlates with a larger increase in average, as sketched below.
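
A minimal sketch of this comparison, assuming a hypothetical `marks` table with one row per student holding averages at two data points and a flag for attendance at one intervention grouping (all column names and values are illustrative, not the team's actual schema):

```python
import pandas as pd

# Hypothetical per-student averages at two data points, plus an attendance flag
marks = pd.DataFrame({
    "student_id":        [1, 2, 3, 4, 5, 6],
    "avg_point_1":       [58.0, 62.0, 55.0, 60.0, 64.0, 57.0],
    "avg_point_2":       [63.0, 64.0, 56.0, 66.0, 65.0, 58.0],
    "attended_workshop": [True, True, False, True, False, False],
})

# Change in overall average between the two data points
marks["delta"] = marks["avg_point_2"] - marks["avg_point_1"]

# Compare the mean change for attendees against non-attendees
print(marks.groupby("attended_workshop")["delta"].mean())
```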

Outliers (data points that do not follow the behaviour of the rest of the group) are removed before this analysis takes place, and we account for the fact that, for example, different subject areas are likely to have different distributions of changes in averages.
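
One way this within-group handling can look, sketched with a standard Tukey fence (1.5 × IQR) rule applied separately per subject area; the framework's actual outlier rule is not specified here, so the rule, column names and values below are all assumptions:

```python
import pandas as pd

# Hypothetical changes in average ("delta"), by subject area; the 25.0 value
# behaves unlike the rest of its group
marks = pd.DataFrame({
    "subject_area": ["Law"] * 4 + ["Biosciences"] * 4,
    "delta":        [2.0, 3.0, 3.5, 25.0, -1.0, 0.5, 1.0, 1.5],
})

def within_fences(s: pd.Series) -> pd.Series:
    """Flag values inside the Tukey fences (1.5 * IQR) for one group."""
    q1, q3 = s.quantile([0.25, 0.75])
    iqr = q3 - q1
    return s.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# Apply the rule within each subject area, since each has its own distribution
clean = marks[marks.groupby("subject_area")["delta"].transform(within_fences)]
print(clean)
```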

The methodology has developed over time as we refine the statistical analysis and make it more robust. We justify the choice of statistical techniques by examining the distributions, and document our reasons for removing any data identified as an outlier.
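
For illustration, letting the shape of the distributions drive the choice of test might look like the sketch below, where a Shapiro-Wilk check selects between a parametric and a non-parametric comparison. The specific tests and the 0.05 threshold are assumptions for the example, not the framework's documented choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
attendees     = rng.normal(3.0, 2.0, size=40)   # hypothetical changes in average
non_attendees = rng.normal(1.0, 2.0, size=60)

# Check whether both groups look plausibly normal before picking a test
looks_normal = all(stats.shapiro(g).pvalue > 0.05 for g in (attendees, non_attendees))

if looks_normal:
    result = stats.ttest_ind(attendees, non_attendees, equal_var=False)  # Welch's t-test
else:
    result = stats.mannwhitneyu(attendees, non_attendees)

print(f"p-value: {result.pvalue:.3f}")
```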

Once we have a clean dataset of non-outlier data, we categorise the intervention codes that align with a statistically significant improvement in students' attainment or attendance, and these are then subject to contribution analysis. Once the contribution analysis has been completed in line with the Theory of Change model, we can establish which types of intervention have sufficient evidence of a chain of causality linking them to improved attendance or attainment.
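
A hedged sketch of the categorisation step, assuming a hypothetical table of per-student deltas tagged with the intervention code attended (None for non-attendees). The code labels, test and threshold are illustrative; contribution analysis itself is a separate step carried out against the Theory of Change:

```python
import pandas as pd
from scipy import stats

# Hypothetical per-student deltas; "code" is the intervention code attended
data = pd.DataFrame({
    "delta": [4.0, 5.5, 3.0, 0.5, -1.0, 1.0, 6.0, 2.5, 0.0, -0.5],
    "code":  ["SKW", "SKW", "SKW", None, None, None, "SKW", "MEN", None, "MEN"],
})

control = data.loc[data["code"].isna(), "delta"]
significant = []

for code, treated in data.dropna(subset=["code"]).groupby("code")["delta"]:
    if len(treated) < 2:
        continue  # too few attendees to test meaningfully
    p = stats.mannwhitneyu(treated, control).pvalue
    if p < 0.05:
        significant.append(code)  # carried forward to contribution analysis

print(significant)
```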