The limitations of training

I’m prompted to write about the limitations of training – a long-held view – because I suspect we are about to see at least a minor surge in authorities and organisations commissioning ‘training’ in risk-benefit assessment. To the degree that this indicates a growing commitment to making judgments about provision for play from a risk-benefit perspective, it is to be welcomed.

It’s worth noting, however, that a risk-benefit assessment technique is itself neutral – it has no opinion; it can engender a range of decisions, ones that can contradict each other.  But those promoting risk-benefit assessment – me included – are anything but neutral about what they wish to achieve from a risk-benefit approach to assessing play provision.  A conundrum, then, and worth a few words to reflect on it.


I’ve long believed, against the trend, that the problem with training is that there’s rather too much of it.  More precisely, what is so often asked of training is what it is constitutionally unable to deliver.  To support this contention I’ll need to say a few words about training’s scope, that is, my understanding of training ‘strictly speaking’.  The immediate prompt is the suspicion that risk-benefit assessment training opportunities are about to be unleashed.  It is therefore important to make the best possible use of the resources to be deployed.  A narrow focus on training as the key means for inculcating a risk-benefit approach may not in practice turn the trick we require of it.

I speak about training ‘strictly speaking’ knowing full well that the term ‘training’ in practice encompasses a range of learning engagements.  Nevertheless, for the purpose of this article, I think it helpful to make some distinctions stark, the better to highlight what I see as one of the fundamental difficulties in changing the way play provision is understood, provided for and assessed – the need to change attitudes and understandings.

There is a conundrum to notice, and it is this: proponents of risk-benefit assessment – including me – see that it has the potential to encourage and support a less risk averse, less risk illiterate approach to risk in play.  In other words, we’re on a mission. We are not neutral.  We see that, in the right hands, a risk-benefit approach can support the sort of decision making we want to encourage.

The problem is that any risk-benefit assessment technique is simply that: a technique or tool.  Techniques and tools are inherently neutral; they can be used in a number of ways.  What counts, then, is the understandings and commitments of the technique or tool user.  From this perspective, the question becomes: how can prospective risk-benefit assessors be assisted to ‘make their own’ the rationale and values that have animated those of us who value risk in play?  Mere adeptness in the use of a technique or tool is no guarantee that the desired outcome for play provision can be secured.

The difference between learning to do something and deciding what to do

The core of the difficulty is that key areas of decision-making depend to a significant extent on the way the ‘decider’ sees the world – their ‘world view’ – and not simply their competence to perform this or that technique.  Training, however, is not designed, is not able, to address such questions with the fullness required.

The distinction that needs to be marked is ‘the difference between learning to do something and deciding what to do’[1].  Training, ‘strictly speaking’, is primarily about ‘learning to do something’.  In ‘What is an educational process?’[2], R. S. Peters describes training thus:

‘The concept of ‘training’ has application when (i) there is some specifiable type of performance that has to be mastered, (ii) practice is required for the mastery of it, (iii) little emphasis is placed on the underlying rationale.’

Peters goes on to say that ‘training’ has wider application than the acquisition of skills; it also:

‘has application whenever anything coming up to a clear cut specification has to be learnt. [e.g.] Military training includes not only arms drills…it also includes the inculcation of habits such as punctuality and tidiness.’

It is significant that Peters identifies that training places little emphasis on understanding the rationale underlying what is being trained.  I want to refine that point just a little and say that in practice, in some training at least, attempts may be made to explicate the rationale underpinning a technique, but the training mode of learning militates against this being anything more than a skate across the surface of ideas.

Put another way, one can be trained to perform a given technique whether one cares or not, understands or not, believes or not, in the underlying rationale that generated the need for some sort of technique in the first place[3].  A pacifist can be trained to be a good shot, and excellent at drill, but do not ask her to plan your campaign against the enemy.  The pacifist world view is more likely to be directed towards surrender, subverting thereby the purpose of armed forces, notwithstanding her adeptness in martial skills.

There are of course areas where rule-governed routines, eminently trainable in themselves, are fundamental to our safety and to the achievement of our goals.  I expect readers will be unanimous in the view, and plenty pleased, that airline pilots reliably, routinely and repetitively go through a pre-flight checklist before take-off.  There is a point, however, when the undercarriage (apologies for the turn of phrase – irresistible) of routine, template and checklist needs to be folded away.  Indeed, for certain sorts of decision-making, over-reliance on technique or template impedes substantive decision-making and mistakes what is involved in the type of decision that needs to be made.  This is an area where the ‘world view’ of the decision-maker is critical, necessarily affecting the way they ‘see’ the issue or area under view, and simultaneously pointing to the type or range of responses thought to be available.  In other words, meaning is made or constructed; the facts themselves are mute.

This is all tied up with the question of ‘what counts as evidence’ – what might it be saying? – and stands counter to the superstition that policy-making can in any neutral sense be ‘evidence-based’.  This perspective is illustrated in David Seedhouse’s ‘Health Promotion: Philosophy, Prejudice and Practice’[4], where he says:

‘…that persistent heavy drinking is likely to cause disease has been established by epidemiological research as firmly as that particular science can establish anything, but whether such drinking is bad for a person’s health depends on one’s interpretation of health, and this in turn depends upon how one thinks life ought to be lived. If a person chooses a “hard living” lifestyle, even if the person becomes diseased as a result, this does not automatically mean that this was a bad choice (not if this is the life he genuinely wanted to live).  The causation of the disease is a matter of evidence, or even fact; the interpretation of the behaviour that caused the disease depends upon what the interpreter values…’

In terms of play provision – and much else – this sort of thinking comes alive in the range of responses to, for example, injury statistics. Stated crudely, one position is that injuries are always to be avoided: injury denotes a failure somewhere that has to be corrected.  Here, injury statistics speak unequivocally.  The facts are anything but mute; they are entreaties – entreaties to take measures to reduce injury figures.  From a play perspective, this represents a narrow and mistaken view[5].  But one can see immediately that a risk-benefit assessment will be massively influenced by the meaning an assessor attributes to accidents and injuries.

Judgment, values and working from principles

We need now to consider the other half of the Rush Rhees distinction, ‘deciding what to do’.  What is it to ‘decide’ something?  Thinking about what is entailed by a risk-benefit approach helpfully illuminates the points I wish to make.

As a technique, though significant for play, there is nothing startling or new about a risk-benefit approach.  And because it is at one level simply a technique, it can be used well or badly.  But what on earth might ‘well’ or ‘badly’ mean in this context?  The technique is not of the same type as a formula used to demonstrate a mathematical proof, where the ‘wellness’ or ‘badness’ of either the technique itself or the manner of its application has a direct bearing on the possibility of securing a right or wrong answer.  A risk-benefit assessment technique is not like that.  Whichever technique is used, there can be no certain right or wrong answer.  In fact, the language of right or wrong mischaracterises the nature of what is going on here.  What a risk-benefit assessment requires is a decision underpinned by reasons.  And reasons are always open to counter-reasons.

For the risk-benefit assessor,  it is inescapable that the ‘decision has to come from the person involved’[6].  This can, and often does, leave individuals with a sense of vulnerability, a sense of being exposed to blame or sanction if ‘something goes wrong’.  And of course what constitutes ‘something going wrong’ will vary, from one person to another, one institution to another, from one part of the same institution to another part.

In institutional terms, factors affecting the assessor’s judgment may include whether or not the organisation has a formal policy making explicit the need for risk in play.  And if there is a formal policy, does the assessor have faith in it?  Or is there the suspicion, based perhaps on experience, that institutional policy-making has some of the features of a ritual performance, a form of ceremonial utterance, divorced from, and perhaps opposed to, the actual practice allowed and supported on the ground?

In personal terms, the life experience of the assessor can have a bearing on how they respond to children and teenagers doing things that they may consider hairy.  On some occasions at least, what they believe and understand ‘rationally’ may not be aligned with what they feel – feelings and anxieties which may drag them away from the logic of their own intellectual position.  Training can have little or no impact on the sort of considerations highlighted here.

The problems confronting any risk-benefit assessor in making a judgment about where the balance between risk and benefit lies will vary from situation to situation.  When confronting these types of problems, ‘there is no sort of simplification which will make them less difficult’[7].  Nevertheless, knowledge and confidence can grow from familiarity with addressing such problems, provided that there is concordance – made manifest in the detail of everyday organisational life – between the organisation’s world view and that of the risk-benefit assessor.

What is to be done?

I suspect the first move is simply a matter of clarification: to clarify the nature of the area we think we need to address.  The implication of what I’ve said in this piece is that the primary task needs to be the mutual exploration of ideas, understandings, values and objectives, and the secondary task the teaching of a technique or risk-benefit assessment method(s).  This points to the need to construct opportunities for a discursive, exploratory, questioning approach to learning, where thought and counter-thought might find expression; where doubt is allowed and discussed; and where the inescapable ethical dimensions of the sort of questions we are required to confront are drawn out rather than avoided.

Equally, because we are aiming for the practical application of ideas in a wide variety of contexts – just think of the range of provision where risk-benefit assessment should be required; think also of the range of attitudes, objectives, anxieties, divergences of view, and functional and dysfunctional power structures contained in that variety of provision – it is useful to think about the sort of support that could be given directly, on the ground, to the ‘frontline’.  A short-term mentoring programme, perhaps.  Working within a provision has two distinct advantages: first, it aims to address whole teams, or whole organisations – these are the sites where in practice the ideas informing a risk-benefit approach are at risk of dilution, opposition and misinterpretation; second, the risk-benefit assessments are real-time, real-place – the questions to be confronted are not exercises abstracted from day-to-day reality.  Rather, the assessment is embedded within that reality.


There is of course no single or perfect way of inculcating values and understandings, but there are wrong or misapplied ways.  This article suggests that too narrow a focus on training may be one of them.

[1] Rush Rhees, ‘Without Answers’, 1969.  In fact it was rereading an article co-authored by Phil Turner (with me very much as the ‘co’, if I recall correctly) on ‘The Limitations of Management Training’, for our consultancy Common Knowledge, that reminded me of the quote.

[2] From ‘The Concept of Education’, edited by R. S. Peters, Routledge & Kegan Paul, first published 1967.

[3] None of this is an argument against the need for technique, or technical skills.

[4] Published by John Wiley & Sons Ltd, 2004. David Seedhouse is Professor of Health and Social Ethics at Auckland University of Technology, New Zealand.

[5] The arguments supporting this assertion are well rehearsed and need not be repeated here.

[6] Rush Rhees, ibid.

[7] Rush Rhees, ibid.

2 responses to “The limitations of training”

  1. Arthur, As you’ll discover if you come to my Nottingham workshop, I use the term ‘training’ in my advertising materials just to act as a measure of reassurance to those who wish to ‘understand’ what is meant by the risk-benefit approach but, as of yet, haven’t grasped the point.
    You are spot on about NVQs etc. My own recent experience of teaching reinforced this when I realised that I was being taught how to memorise information purely for the purpose of passing an exam, not to understand the subject. I fear that is how most education is delivered these days.
    The workshop is actually all about releasing people from years of reliance on external, third parties (people, tools, systems, standards, etc) who are allowed to take responsibility for judgments only they should, and can, make. I also raise the Injury question during the session so that delegates can work through the issue for themselves.
    I quite like the term ‘Mentor’, as also used by Michael Follett for his OPAL programme (not training!), as mentoring seems to me a better term for what actually takes place.
    The best moment in any workshop is when the penny actually drops, and someone suddenly becomes enthusiastic about getting back to work and addressing their past notions. I hope to meet you in Nottingham soon.


  2. This is nice.

    You haven’t said it, but this critique is devastating when applied to the NVQ system. Like you, I have been a long-time sceptic about ‘training’. I ran a pilot ‘competence-based’ programme, before NVQs arrived, and the key issue for my workplace assessors, almost to the exclusion of any other, was expressed like this: “Yes, yes, I know he can do it, but he doesn’t do it, or he does when I’m watching him, but I don’t trust that he wants to do it, or understands why…”

