
Wednesday, 4 January 2023

Types of Likert scales

For some reason I have only ever done one post on Likert scales (here). So let's do something about that.

While I assume most of us know what Likert scales are, it is always best to start with a definition. A Likert scale is a type of assessment question "format developed by Rensis Likert" which provides a much more systematic approach to question analysis than a 'phrase completion scale' question, where respondents supply their own words (Kempf-Leonard, 2005b, p. 53). Data analysis becomes much more difficult when participants have chosen their own responses; while Likert-style standardisation does remove richness, it allows for more comparison. Likert came up with the idea in 1932 while completing his PhD (Sack, 2020).

Likert scales are "used to measure attitudes [and] have an introductory stem consisting of a positively or negatively stated proposition followed by a graduated response key composed of adverbs (e.g., 'strongly') and verbs (e.g., 'agree')" (Kempf-Leonard, 2005b, p. 53). A number can be allocated to each option, which allows us to more easily measure the strength of the responses and to weight the answers provided. Most commonly, a five-point scale is numbered from one to five, while a six-point scale may run from zero to five (see the image accompanying this post). Negatively worded questions are often scored in the opposite direction.
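To make that numeric coding concrete, here is a minimal sketch in Python. The anchor labels follow common usage, and the reverse-scoring rule shown ('maximum plus one, minus the value') is one standard approach, not a prescribed one:

    # A sketch of coding Likert responses numerically, including
    # reverse-scoring a negatively worded item so that a high score
    # always indicates a favourable attitude.
    AGREEMENT = {
        "Strongly disagree": 1,
        "Disagree": 2,
        "Neither agree nor disagree": 3,
        "Agree": 4,
        "Strongly agree": 5,
    }
    SCALE_MAX = max(AGREEMENT.values())

    def score(response: str, negatively_worded: bool = False) -> int:
        """Convert an anchor label to a number; flip negatively worded items."""
        value = AGREEMENT[response]
        return (SCALE_MAX + 1 - value) if negatively_worded else value

    print(score("Agree"))                          # 4
    print(score("Agree", negatively_worded=True))  # 2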

If we are putting together a questionnaire, we should try to use the same - or very similar - Likert scales throughout our survey if we can. This prevents tiring our surveyees: we only want them to think about answering the question, not about HOW to answer the question as well.

There is a range of Likert scale anchors ("the descriptors of the various response categories", Kempf-Leonard, 2005a, p. 504) that we can use. A few examples follow, with a small code sketch after the list:

  • Frequency scales seek to find how often - or when - participants do something. A question might be "How often do you read this blog?" with scales such as "always, very frequently, occasionally, rarely, very rarely, never" (Sack, 2020) or "daily, every second day, twice a week, weekly". There are a lot of standard frequency scales (Vagias, 2006).
  • Agreement scales ask how much participants agree or disagree with a question or statement (Sack, 2020), for example "This blog has new content every day", with "Strongly disagree, Disagree, Somewhat disagree, Neither agree nor disagree, Somewhat agree, Agree, Strongly agree".
  • Importance scales ask participants for a personal judgement. In some ways this is similar to the next option (quality scales), so phrasing should be careful and well-crafted so as not to be leading (Sack, 2020). A question such as "Reading the blog every day is..." might use "Not at all important, Low importance, Slightly important, Neutral, Moderately important, Very important, Extremely important" (Vagias, 2006).
  • Quality scales seek information about participant standards (Sack, 2020). A question might be "Blog posts contain good information". These scales are very common, and the image accompanying this post is an example: "Excellent, Very good, Good, Fair, Poor, Very poor".
  • Likelihood scales ask participants about what is true for them. A seven-point scale to answer the statement "I read this blog every day" would be "Very untrue of me, Untrue of me, Somewhat untrue of me, Neutral, Somewhat true of me, True of me, Very true of me" (Vagias, 2006).
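Because consistency matters (as above), it can help to keep our chosen anchor sets in one place and reuse them across the questionnaire. A small sketch: the anchor wordings are drawn from the examples above, while the item-building helper and its names are illustrative only:

    # A sketch of keeping standard anchor sets in one place, so a
    # questionnaire reuses the same scales throughout.
    ANCHORS = {
        "agreement_7": ["Strongly disagree", "Disagree", "Somewhat disagree",
                        "Neither agree nor disagree", "Somewhat agree",
                        "Agree", "Strongly agree"],
        "frequency_6": ["Never", "Very rarely", "Rarely", "Occasionally",
                        "Very frequently", "Always"],
        "quality_6": ["Very poor", "Poor", "Fair", "Good", "Very good",
                      "Excellent"],
    }

    def likert_item(stem: str, anchor_key: str) -> dict:
        """Build a survey item from a stem and a named anchor set."""
        return {"stem": stem, "options": ANCHORS[anchor_key]}

    item = likert_item("This blog has new content every day", "agreement_7")
    print(item["stem"], "->", item["options"])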

I hope this is helpful!


Sam

References:

Kempf-Leonard, K. (Ed.). (2005a). Encyclopedia of Social Measurement (Volume 2 F-O, 1st ed.). Elsevier. 

Kempf-Leonard, K. (Ed.). (2005b). Encyclopedia of Social Measurement (Volume 3 P-Z, 1st ed.). Elsevier.

Sack, H. (2020, August 5). Rensis Likert and the Likert Scale Method. http://scihi.org/rensis-likert/

Vagias, W. M. (2006). Likert-type scale response anchors. Clemson University. https://www.peru.edu/oira/wp-content/uploads/sites/65/2016/09/Likert-Scale-Examples.pdf

read more "Types of Likert scales"

Monday, 19 December 2022

Mapping the research question

I have written about this before (see here), but it is really important to ensure that all key components of a research question feed into the project title, the aims, the objectives, and the research outcomes.

When we ask a research question, we need to sit back and consider all the components we must explore in order to answer it. We may break our question down into sub-questions, so we can be sure we are asking the 'right' questions to answer our overarching question; so our aims and outcomes will clearly get us to what we want to know; and so our literature review scopes the environment clearly, letting us construct a clear and unambiguous method to collect our data. Or we may ask a single question, and simply pull out the elements from the question itself that we know we need to explore fully. In this post we explore a single question.

Each of the question components will form a section of our literature review, so we can scope each idea and then link it to the next. We build argument that way. Our methodology is designed to collect the data so we can measure fit with the research question, and determine just how well we have answered it.

For example, the question shown in the image accompanying this post - "To what extent does quality career advice support the successful transition of Māori students into higher education?" - contains seven concepts:

  1. To what extent 
  2. quality career advice 
  3. support 
  4. transition 
  5. successful transition 
  6. Māori students 
  7. higher education

I think that having roughly six areas to define - one fewer than the student originally proposed in the research question above - is probably about right. Remember that each of these sections must be thoroughly explored in the literature review, as we need a baseline against which to measure and make sense of our findings (once we collect them). Most of these elements need to be built into our collection methods and data collection questions, and the corresponding data must be collected in order to answer the question. The structure of our findings may then be written to mirror our literature (if it fits that structure). We can back-check our methods by auditing our research design to see if it will meaningfully collect our data from the participants in a way that answers our question.
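To make that back-check concrete, here is a minimal sketch in Python of auditing whether every concept in the research question is scoped somewhere in the literature review outline. The theme list is illustrative, based on the example question above:

    # Back-check sketch: does every concept in the research question
    # appear in the literature review outline?
    question_concepts = {
        "to what extent", "quality career advice", "support", "transition",
        "successful transition", "Māori students", "higher education",
    }
    literature_review_themes = {
        "quality career advice", "transition", "successful transition",
        "Māori students", "higher education",
    }

    uncovered = question_concepts - literature_review_themes
    if uncovered:
        print("Concepts not yet scoped in the literature review:", uncovered)
    else:
        print("All research question concepts are covered.")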

We can also see with this question that the research period needs to be of a good duration. We might need a five-year study to answer it meaningfully: interviewing student participants as they leave secondary school, and following them through their tertiary training until they are successfully navigating the job market. We need to consider whether this is achievable in our own timeframe.

This is a Lego-like process. Everything should fit together without loose ends. And if you want a great example of research having left many loose ends, check out the horror story (here).


Sam

read more "Mapping the research question"

Monday, 25 October 2021

A fragmented project outline

When we are putting together a project plan, we need to be really careful about what words we choose. We need to ensure that the language we use in our project title reflects the wording used in our research question. And in our sub-questions. And in our aims. And in our objectives. And we need to ensure that all the themes we work through in our literature review also match the wording we have used.

If we do that, we create a cohesive picture of what our research project is setting out to do. We create a 'seamless whole' for our project. We not only reassure our supervisor, but we also scope our project down to manageable proportions... and there are no surprises. There SHOULD be no surprises.

It is easy to see when work is not cohesive. In the following example, we can see that there are WAY too many components, which fragments the work. The title for this project is as follows (note that it contains seven elements which need to be explained in the literature review):

How [the] pastoral care needs of internationally qualified nursing students were managed by their tertiary education provider during the initial Covid-19 lockdown.

Further, when we dig into the questions and aims, we can see that the student - through not being specific, and not using enough scoping language to confine the project - unwittingly adds a further five elements:

Research question: How were [the] pastoral care needs of internationally qualified nursing students managed by their tertiary education provider during the initial Covid-19 lockdown?

Aim: [To] explore the management of the pastoral care offered by a tertiary provider during the first Covid-19 lockdown period and the personal impact on students, with a specific focus on gaining employment during this time.

Sub-question 1: What disruption did Covid-19 have on the work plans of the internationally qualified nurses?

Objective 1: To describe how the disruption of work plans was managed.

Sub-question 2: How did pastoral care from the tertiary education provider during the Covid-19 lockdown assist the wellbeing of internationally qualified nurses?

Objective 2: To understand the role of pastoral care in operational management of tertiary education.

Objective 3: To understand how one tertiary education provider managed pastoral care needs of enrolled internationally qualified nurses.

This leaves us with the seven elements from the original research question, plus an additional five topics from the sub-questions and aim:

  1. Pastoral care
  2. Internationally qualified nurses
  3. [nursing] students
  4. Disruption
  5. Work plans
  6. Employment
  7. Covid-19 lockdown
  8. Tertiary education
  9. [institution] provider
  10. Management
  11. Personal impact
  12. Wellbeing.

To further mess things up, this project contained additional elements which were not yet in the questions or aims, but probably should have been included in order to make sense of the findings (once they were collected). Those are:
  1. Exploring the international education market
  2. Detailing New Zealand education policy
  3. Detailing the New Zealand international education market
  4. Detailing the culture and policies of the institution where the research took place.
That equals 16 themes for the literature review (far, far too many, but necessary given how messily the questions were asked). However, the actual literature review in this particular research project explored the following themes:

  1. Tertiary education operational management (education management; student management)
  2. Pastoral care (defined in tertiary education; Pastoral care Code of Practice to cover all students; International approaches to pastoral care)
  3. Export education policy
  4. New Zealand regulations for pastoral care in education
  5. International student customer experience
  6. Customer service
  7. Wellbeing
Three items in the literature review were brand new: in neither the questions nor the aims. A bonus (not)! They were: the idea of customer service; the idea of customer experience; and the New Zealand legal position. To further confuse the reader, twelve elements - including many from the research question itself - were left unexplored:
  1. Internationally qualified nurses
  2. [nursing] students
  3. Disruption
  4. Work plans
  5. Employment
  6. Covid-19 lockdown
  7. Tertiary education
  8. [institution] provider
  9. Personal impact
  10. Exploring the international education market
  11. Detailing the New Zealand international education market
  12. Detailing the culture and policies of the institution where the research took place.

Even worse, the management element apparently promised in the research question was poorly explored. So even though a key element from the research question made it into the literature review, the coverage was not meaningful.

It is CRITICALLY important to ensure that all key components from the project title feed into the research question; are broken up into the sub-questions so we can be sure we are asking the 'right' questions to answer our question; that our aims and outcomes will clearly get us to what we want to know; and that our literature review scopes the environment clearly so we can then construct a clear and unambiguous method to collect our data.

Overall, having about six areas - one fewer than the student originally had in the research title - is probably about right. And each of those should have been thoroughly explored in the literature review.



Sam
read more "A fragmented project outline"

Friday, 13 August 2021

Answering the research question

Year on year I have research supervisees who remain incredibly shy of saying whether they answered their research question... or by how much.

There seems to be a reluctance on the part of students to overtly state whether the question was answered - or not.

From a marker's point of view, we need the answer to be stated clearly and succinctly. We want the conclusion to overtly state that the research question was answered; was not answered; or was partially answered. Then the researcher can clearly summarise: what was answered; what remains unanswered; the substantive limitations of the project; and how future researchers might address those limitations.

The conclusion should state how much we have answered our question. We need to say - as plainly as possible - that yes, we answered our research question. Then we detail why and by how much, before we move on to detailing our limitations. For example, in the paragraph below, the question is reiterated and then answered. Overtly. When we read the following, there is no doubt in the reader's mind that the research question was addressed by the research:

"... This research set out to understand the impacts of COVID-19 on NFPs in the Nelson region by collecting the experiences of six NFPs which continued to operate throughout the pandemic as essential services. In understanding the impacts, the research sought to identify the resilience factors across the participant organisations that were critical in enabling continued functioning. The author, having worked in the NFP sector, was interested to know how NFPs coped with the worst crisis in living memory, to understand what made them strong and resilient. The research found that the participants of this study not only coped well, but they also adapted and found opportunities amongst the adversity."

The example above had a short contextualising sentence before the restated research question, which I have not repeated here; then came this statement. In the following paragraphs, the author detailed how far each of the sub-questions was answered, before covering limitations and future research. But it is clear from the outset of the chapter just how much the research question was answered. The reader is not left hanging.

The alternative is to clearly state that, no, we did not answer our research question, then detail why, and by how much, we did not answer it.

I have students asking if they should start with a critique. No. Start by answering the research question. The reader has just read all the evidence to this point, and now wants to know that the researcher is able to make a judgement.

Make that call. Put our minds at rest.


Sam

read more "Answering the research question"

Monday, 28 June 2021

What is operationalisation?

When thinking through a research project, we first need to come up with a sound research question: an overarching question that our research sets out to answer. Then we divide our research question into several aims: broader, general statements of the goals we need to achieve in order to answer our research question.

Under each aim we will have objectives, or the operationalisation of our aims (Jones, 2015). Operationalisation is what we have to do to achieve our aims: specific statements of the learning which will occur as we work through our project.

Operationalisation can be thought of as a quantitative tool, but I find it useful in determining WHY, HOW, WHEN, WHERE, and from WHOM we will collect our data, regardless of what inquiry strategy we adopt in our method. Operationalisation helps us to convert "abstract concepts into measurable observations" (Bhandari, 2020). Some concepts can be easily measured (e.g. age), while others are much more difficult (e.g. motivation). By operationalising our aims, we can set up our early thinking on how to systematically and carefully collect our data where we can't 'see it happening' (e.g. field data) (Bhandari, 2020).

Initially our research question feeds our aims - the flow is down from the top, so to speak - as we find our base information and start populating our concept map. However, the process becomes iterative: over time, the research on each of our aims, and coming to understand the limits and constraints of our operationalisation, will feed back up the chain to modify our research question.

Back and forth, until we have thoroughly clarified what our project is setting out to achieve. Once we are clear in this space, then we can move on to methodology and methods.

The video below contains an example research question, with one supporting aim, and the related operationalisation from a student project.

  • The overarching research question is in the HR field: “How can local SMEs improve staff retention?” The concept map shows that one of the aims required in order to answer the research question will need to be: “Are there common HR strategies being used by SMEs to improve/maintain staff retention?”.
  • To operationalise the aim (or the objective), we will need to “identify international best practice SME HR retention strategies in high performing nations (we are going to pick these based on OECD rankings) using databases, periodicals, journals and government websites. Then through the Nelson Tasman Chamber of Commerce, we are going to survey the local member SMEs. Then we are going to compare the local results to the international staff best practice retention strategies, and find out what the gap is. Then we will report on that.”

Operationalisation effectively forms the embryonic beginning to our method.
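As a rough illustration of that hierarchy, here is a minimal sketch in Python using the SME example above. The structure and class names are illustrative only, not a prescribed format:

    # A sketch of the research question -> aim -> operationalisation
    # hierarchy, populated from the SME example above.
    from dataclasses import dataclass, field

    @dataclass
    class Aim:
        statement: str
        operationalisation: list[str] = field(default_factory=list)  # concrete steps

    @dataclass
    class ResearchQuestion:
        question: str
        aims: list[Aim] = field(default_factory=list)

    rq = ResearchQuestion(
        question="How can local SMEs improve staff retention?",
        aims=[Aim(
            statement="Identify common HR strategies used by SMEs to "
                      "improve/maintain staff retention",
            operationalisation=[
                "Identify international best-practice SME HR retention "
                "strategies in high-performing OECD nations, via databases, "
                "periodicals, journals and government websites",
                "Survey local member SMEs through the Nelson Tasman "
                "Chamber of Commerce",
                "Compare local results against international best practice "
                "and report on the gap",
            ],
        )],
    )

    for aim in rq.aims:
        print(aim.statement)
        for step in aim.operationalisation:
            print("  -", step)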

Please note that the example in the video is missing the mission-critical data contingency: what happens if the NTCC - the Nelson Tasman Chamber of Commerce - won't or can't participate? In that case we would approach the local Institute of Directors and the Nelson Small Business Group, to ask if either would be interested in participating. This is shown in the image accompanying this post, where there is an overarching research question; aim one; and operationalisation one, including - in red - alternative plans for mission-critical data.

I hope this helps!


Sam

References:

read more "What is operationalisation?"

Friday, 7 May 2021

Clear methods writing: actual to plan

I had an interesting problem with a student recently, where they were writing the wrong elements in the wrong tense in their methods chapter. I found it hard to pinpoint why it was wrong, but once I realised WHICH conventions were being flouted, things became much clearer for me... which will translate into my being able to provide clearer instruction.

Initially, I thought this was a matter of the student not having specifically cited particular elements drawn from the source at the point they were mentioned in the sentence. However, I soon realised that they were combining the expert theory/view with what they had done. So instead of the academic writing sandwich: "John Doe says to do this" followed by "so I did this [applied project detail]", the student was mashing the two together. For example, the student wrote:

“Semi-structured interviews gave the researcher an advantage to plan ahead of time and allowed the researcher to appear competent and prepared (Newton, 2010)”

When I read the paragraph, I was confused. Did the student do this (i.e. was this applied in their primary research - actual), or did the cited author write this (i.e. being cited as an expert or theoretical view - plan)? I was left confused as to whether this was actual or plan. Done, or advice. Reality or theory.

I advised the student that when we cite, we cite in the present tense, because the advice remains current. However, when we apply the theory to our own research findings, we write in the past tense, because we're writing up our results after we have done our research.

I also suggested that, if the initial paragraph was written from the point of view of the expert advice, one option would be to change the paragraph to:

As a research tool, SSI gives the researcher an advantage to plan ahead of time, allowing the researcher to appear competent and prepared (Newton, 2010).

This makes the ownership of the theory very clear. But now we need to rework the student's sentence to fit with what had been intended, so we could write:

For this project, the researcher found that using semi-structured interviews allowed the mapping of a range of participant responses, and the advance preparation of answers to likely questions, and prompts to keep the exploration on track.

If we put these two sentences together, we end up with a paragraph explaining one of the elements about why semi-structured interviews were chosen. But we are missing a definition of what semi-structured interviews are, to lead off, and to complete that triplet. As a rough example:

Semi-structured interviews (SSI) have been defined as “ascertain[ing] subjective responses from persons regarding a particular situation or phenomenon they have experienced” (McIntosh & Morse, 2015, p. 1). As a research tool, SSI gives the researcher an advantage to plan ahead of time, allowing the researcher to appear competent and prepared (Newton, 2010). For this project, the researcher found that SSIs allowed the mapping of a range of participant responses, the advance preparation of answers to likely questions, and prompt development which all helped to keep the interview exploration on track.

Then we would write the next triplet (or even a pair, if the McIntosh and Morse definition related to both concepts well enough): perhaps social cues, then perhaps freedom of expression, and so forth.

It is amazing how much we can learn, just from looking at the writing of others.


Sam

References:

read more "Clear methods writing: actual to plan"

Friday, 16 April 2021

Research proposal alignment

One thing that I find many research students struggle to get their heads around is that all the elements of the research proposal for their project should align. When writing the research proposal, the title, the research question, and the sub-questions should all use the same language, and 'tell the same story'. The title should provide the shortest summary of the research project (the abstract in the thesis is the second shortest summary). The information provided by each sub-question should collectively answer the research question, like pieces of a jigsaw puzzle. The questions are the "how" we will get the data that the project needs.

The rationale should lead the reader logically from the introduction, flowing seamlessly into the research question. The aim should reflect the likely outcomes of the research question; the objectives should collectively provide the outcomes of the sub-questions. They too should fit together like pieces of a jigsaw puzzle, illustrating the "what" we get at the end of the project.

Then the proposed literature outline should reflect all the components mentioned in the title, the research question, the sub-questions, the aim, the objectives and the rationale. The literature review outline is like a shopping list for all the ingredients needed to put the project research proposal together.

However, it can be difficult to align these elements. I suggest that students use a table, like the image illustrating this post, so that they can see all the elements laid out briefly, on one page. They can colour map the terms to check that the language is relatively consistent, and that all - or most - of the elements in the research questions, aims and objectives exist in the literature review.

This is a double-check that all elements are present. It gives us the opportunity to ensure that our terms are consistent, and that we have used the same language. It lets us check that our sub-questions are reflected in our objectives. It lets us check that our literature review outline covers all the elements that our supervisor or marker will expect to see. Creating this table is not in place of writing the proposal itself: instead it is a cross-check to ensure that all the required concepts are present.
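As a rough sketch of the colour-mapping idea in code: for each key term, list which proposal elements mention it and which do not. All the element text and terms below are illustrative:

    # Alignment-check sketch: which proposal elements mention each key term?
    proposal = {
        "title": "Improving staff retention in local SMEs",
        "research_question": "How can local SMEs improve staff retention?",
        "aim": "To identify HR strategies that improve staff retention in SMEs",
        "literature_review_outline": "SMEs; HR strategies; staff retention",
    }
    key_terms = ["SME", "staff retention", "HR strategies"]

    for term in key_terms:
        present_in = [name for name, text in proposal.items()
                      if term.lower() in text.lower()]
        missing = sorted(set(proposal) - set(present_in))
        print(f"{term!r}: present in {present_in or 'none'};"
              f" missing from {missing or 'none'}")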

The table usually only needs to be done once, and by then we have harmonised and streamlined all the concepts, tightening the project and our language. It is a tool for scoping, focusing and sharpening our ideas so that they will be clear to ourselves and to others.


Sam

read more "Research proposal alignment"

Friday, 5 March 2021

Data collection questions framework

Where qualitative research is complex, with fragments of data being collected and confirmed across several different data collection questions, it can be difficult to see what is and isn't covered. To assist students, I created a framework to help them think about the questions they need to ask in order to collect the necessary data.

As can be seen in the accompanying image, I created a Word document with six columns (download here). I ask supervisees to fill in as much as they can for each column when they are putting together their data collection questions, then email each draft to me so we can discuss them. We will work through several drafts, until students are comfortable that they will collect the data they need.

The columns in the framework are as follows, with clarifiers:

  1. List EACH SINGLE data collection question, in the order you will ask your participant. Ask questions one at a time. Aim for 15 questions max for an hour’s interview. Choose those questions that we MUST have an answer to in order to answer our research question
  2. What do you want to find out from this question? Note what ANSWERS we are seeking for this question. What do we want to find? What do we expect to find? Is there a difference between the two?
  3. Where has this been used in previous research? If we can use questions that have already been asked in research we have used in our literature review, then we can compare our data with expert results. Bonus.
  4. How will you analyse the resulting data? Think about HOW we will analyse the data. If we are simply going to be looking for themes, what themes will we be looking for? Do we have a list of codes already? Or have we already thought about the categories?
  5. How will this answer your research question (which aim will this contribute to)? We must not 'dump and run', focusing on just a single aim; we must get specific. Which part of an aim will this particular question help to answer? If we don't get specific, how will we cover everything off?
  6. Can your participants answer this? Is the meaning and language unambiguous? Are the instructions clear? We need to check that the language is clear; that the meaning is clear; that the question is precise; that if we are recycling someone else’s work, the question is worded in the same way, in the same context; that the participant has the knowledge to answer the question; that only one question is asked at any one time; that the question instructions are clear.

While sometimes we might simply be able to ask particular questions under each aim and be satisfied that we have a list of questions that will answer our overarching research question, this is not always the case. This framework at least means that we really think about what we are asking our participants, and why.

And we don't waste the opportunity to collect the 'right' data :-)


Sam

read more "Data collection questions framework"

Wednesday, 27 January 2021

Creating a set of data collection questions

Data collection questions are largely created from the reading we have done for our literature review. From all the good quality research papers we have read, we should be starting to form a picture of how we too can answer our research question, using good quality research examples to help us construct our questions, our strategy for asking them, and our approach to analysing the resulting data.

The process that I like to use is:

  • Download the template “03 Data Questions Framework v3.docx”, available here: https://drive.google.com/file/d/1hIwdCTq0ro0RX5IsTeH0A8W22COyITh2/view?usp=sharing
  • Go through the research sources found in the literature review, and explore the methodology, to see what questions they asked which provided the same type of results you are seeking
  • List all the questions that you may have to ask in the first column of the Data Questions Framework table, then:
    • detail what type of answer you want to get from this question in column 2
    • add a citation for where this question has been used in previous research in column 3
    • consider how you will analyse the data you get from this question in column 4
    • map each question to its relevant research aim in column 5
    • indicate that you have checked for sense and clarity, have proofed the question, and have checked that the instructions are clear in column 6
  • Try to ensure each of the elements above is detailed as fully as possible
  • Keep adding questions to the framework until you will have potentially gathered enough data to answer your research question
  • Refine the wording of each question, to make them all as clear and unambiguous as possible
  • Reorganise the questions into the most logical order in which to ask them
  • Save this framework. Then create another copy, which you will edit further in the next step
  • Before sending to your supervisor for review, check that (a rough code sketch of two of these checks follows this list):
    • each question asked is a single question: i.e. one ’question’ does not contain three questions, for example: “What are your views on the work conditions for your current jobs? (e.g. working hours and pay) Are they different from your expectations? If so how?”. This should be split into three questions
    • you do not have more than 15 questions overall for a one-hour interview, or more than 20 questions overall for a survey
    • you are not unintentionally asking the same things in different ways (we may intentionally ask the same thing in different ways to cross-check for validity)
    • personal questions, such as demographics, are asked at the end of the data collection instrument.
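As flagged above, here is a rough sketch in Python of automating two of these checks. The draft questions are illustrative, the multi-part heuristic is crude, and real checking still needs human judgement:

    # Sketch: flag possible multi-part questions, and too-long question lists.
    draft_questions = [
        "What are your views on the work conditions in your current job?",
        "Are they different from your expectations? If so, how?",  # two in one!
    ]

    MAX_INTERVIEW_QUESTIONS = 15  # rough limit for a one-hour interview

    for q in draft_questions:
        if q.count("?") > 1:
            print("Possible multi-part question, consider splitting:", q)

    if len(draft_questions) > MAX_INTERVIEW_QUESTIONS:
        print(f"Too many questions for one interview: {len(draft_questions)}")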

This process seems to work well for students.


Sam

read more "Creating a set of data collection questions"

Wednesday, 23 September 2020

How we ask questions

The way one phrases a question can make a large difference to the answer one receives. I don't think we think enough about this, and it was brought home to me very clearly when reading a really great book on psychometric testing (as one does!), edited by Brian Cripps (2017).
"There is an apocryphal story in market research about two priests who are discussing whether it is a sin to pray and smoke at the same time.
"The first says that he thinks it is a sin. The second thinks it is not.
"So they agree to go to their respective [bishops] to gain a higher opinion. When they meet again, the first priest says that his [bishop] regarded smoking while praying as a sin. The second said that his [bishop] was adamant that it was not a sin.

"So the second priest said to the first, ‘What did you ask your superior?’

"He replied, ‘I asked my superior if it was a sin to smoke when praying.’

"'Ah,’ said the second priest. ‘My superior said it was fine. But I asked, is it a sin to pray when smoking?'" (Cripps, 2013, p. 29)
This is a very good analogy to get us thinking about how questions are asked, either by ourselves in interviews, or by others when using assessment instruments.

Whenever we are considering using a career inventory with a client, we need to first read through the questions and consider which may derail our clients, which may cause problems, or which may cause cultural misunderstandings. Considering this as a first step is particularly important in New Zealand, as few tests have been normalised for our nation.

Working through the instrument first will help us to highlight those cultural differences, so we can more accurately debrief our clients post-test... and we can be watchful for how questions have been asked, which may have affected how our clients have answered.

That should lead to better career decision-making.


Sam

References:
  • Cripps, B. (Ed.). (2017). Psychometric Testing: Critical Perspectives. Wiley Blackwell.
read more "How we ask questions"

Monday, 13 July 2020

Preparing an Interview Script

As mentioned in other posts (here), the development of the data collection questions takes time, planning and care. We should already have developed an interview protocol covering the elements we need to plan before the interview: we will have planned to meet somewhere with few distractions, where the participant feels safe, and planned how we will get to the location.

However, in addition, we also need to have developed an interview script which we take with us to the interview itself (Bolderston, 2012; Remenyi, 2011), which is the topic of this post.

The interview script is in three parts:
  1. Warm-up: within the interview script we should have listed what we will say to introduce our project to our participants, how we will remind them about their rights and our responsibilities, and the ‘rules of engagement’ around breaks, phones, potential discomfort, or potential withdrawal. We reiterate how long the interview will last, and how we will keep their identity confidential, and how their data will be used. We would remind our participants about who will have access to the interview recording – such as professional transcribers, research supervisors – and that these people are bound by either confidentiality agreements or codes of ethics. Although all this is quite serious, we are building trust in this stage, and need to be explicit about appreciating the gift that the participant is giving us. We should note a time down against this section, and have practiced delivering it, so that we know roughly how long it will take. Be prepared for questions, and factor in the time to answer these (Bolderston, 2012; McNamara, 2010; Remenyi, 2011).

  2. Interview questions: then we get into the main body of the interview, going through the data collection questions - if structured - item by item, further exploring using our prompts; or, if semi-structured, following our lines of enquiry as they arise outside the script. Ask easy-to-answer questions early in the script to give the participant confidence. Get the participant feeling confident in talking. Build rapport. Work towards having the most important questions about a third of the way into the interview, while the participant is still fresh. Be watchful of the participant going off track, and steer them back onto your agenda. Make notes on the script. Jot impressions. Note the timestamp on the recording so we can go back to this point later (McNamara, 2010; Patton, 2015; Remenyi, 2011).

    We should have time estimates against each of our interview questions, and have practiced on a few volunteers to get a reasonable idea of how long it would take an ‘ideal’ candidate to answer our questions. As a rough rule of thumb, it takes around an hour to ask and obtain answers to ten to twelve open-ended questions. Once we have run through our questions, if we have time, we can go back to our notes and ensure that we have covered everything we needed to.

  3. Thanks: we need to thank the participant for their time, remind them of the next steps in the process (perhaps an email or a phone call follow-up), and then wind the session up.
A note about time: as the interviewer, we need to keep a close eye on time, and when we get to the end of our agreed time, even if we have not received answers to all our questions, our ‘contract’ with our participant ends. We need to be careful not to let 'act hunger' push us on, going over time - even by five minutes - without checking. Stop, and thank the participant. If we have very few questions left, we could ask the participant if they have the time and energy to answer our remaining questions.

If we have more than a couple of questions, we could also ask if we could call them later to obtain the remaining answers, at the same time as we might contact them to seek clarification of any of their other answers.

In general people are very generous, but we should be very careful not to take unnecessary advantage of that generosity.
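Returning to timing: it helps to budget the script against the agreed duration in advance. A rough sketch in Python, with illustrative per-question estimates in minutes:

    # Sketch: budget interview script sections against the agreed duration.
    AGREED_DURATION = 60  # minutes agreed with the participant

    script = {
        "Warm-up (rights, recording, confidentiality)": 5,
        "Q1: easy opener": 3,
        "Q2: background": 4,
        "Q3: most important question": 8,
        "Q4: follow-up area": 6,
        "Thanks and next steps": 3,
    }

    planned = sum(script.values())
    buffer = AGREED_DURATION - planned
    print(f"Planned: {planned} min; buffer for overruns: {buffer} min")
    if buffer < 0.15 * AGREED_DURATION:
        print("Warning: little headroom - consider cutting questions.")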


Sam

References:
  • Bolderston, A. (2012). Conducting a research interview. Journal of Medical Imaging and Radiation Sciences, 43(1), 66-76. https://doi.org/10.1016/j.jmir.2011.12.002
  • McNamara, C. (2010). General Guidelines for Conducting Interviews. https://managementhelp.org/businessresearch/interviews.htm
  • Patton, M. Q. (2015). Qualitative Research and Evaluation Methods (4th ed.). SAGE Publications, Inc.
  • Remenyi, D. (2011). Field Methods for Academic Research - Interviews, Focus Groups and Questionnaires in Business and Management Studies (2nd ed.). Academic Publishing Ltd.
read more "Preparing an Interview Script"

Friday, 10 July 2020

The interview protocol

When interviewing, it is usually a good idea to set up an interview protocol before starting, and prepare a script to print and take with us. The script should detail our introductory statements, our questions and prompts, how long we expect each question to last, and have some blank notes areas for us to jot down any impressions as the interview progresses (see here for more information on data collection questions).

The things to consider are (McNamara, 2010; Remenyi, 2011):
  1. Before the interview starts:
    • Where we will hold the interview. We need to have somewhere where the participant feels safe, where we won't be interrupted, where it is quiet, where there is enough natural light, etc. We need to be aware that our participants may like to be interviewed in a familiar environment, such as at home or in the office. We need to have planned our travel to the location, and organised any other logistics such as refreshments, tissues, notes, pens, charging of any electronic equipment etc.
    • Ensure we have a hardcopy of our interview script (see here)
    • We need to ensure our participants understand their rights and our responsibilities with their data: clear boundaries and expectations, confidentiality, and how data will be recorded, analysed and stored (and for how long). We need to be sure that the participant understands the interview purpose, and we reiterate how to contact us. Participants need to have already agreed to and signed the informed consent form, which we will have on file
    • We need to explain the interview structure, note-taking, the recording, and are explicit about the duration of the interview. We agree our time to meet and when the interview will end.
  2. During the interview:
    • First check that the recording is working before you get into the main body of the interview
    • Work through our interview script in our suggested order, using our listed prompts
    • Treat all our participant's answers with equanimity, as if we have "heard it all before" (Patton, 2015, p. 671), to reduce judgment and bias
    • Use note-taking judiciously (this is where a formatted interview script will help us: there will be a gap where we can simply make notes alongside the relevant question)
    • The script should list bridging statements to help us segue smoothly between the different sections of the interview
    • Keep the balance of who is talking on the participant side: aim for a maximum of 10% of interviewer time, and 90% participant.
  3. After the interview:
    • Thank our participants for the gift of their time in an email, a note, or a card. Reiterate that if they have any questions they can contact us. Let them know that we will email them whatever we have promised (such as a secure link to their recording, a copy of the paper, or a summary of the findings) as a courtesy
    • Check the recording, password-protect it, photograph the notes, and save all the files to the location specified in our ethics approval
    • Annotate our interview notes to clarify them while things are fresh in our mind. Photograph the additional notes and save them to our storage location
    • Note any reflections and impressions we already have. A useful list to consider: what went well; what could we have improved; what was missing; what surprised us. Save those files too to our storage location.
While this is not an exhaustive list, it does give us a place to start from.


Sam

References:
  • McNamara, C. (2010). General Guidelines for Conducting Interviews. https://managementhelp.org/businessresearch/interviews.htm
  • Patton, M. Q. (2015). Qualitative Research and Evaluation Methods (4th ed.). SAGE Publications, Inc.
  • Remenyi, D. (2011). Field Methods for Academic Research - Interviews, Focus Groups and Questionnaires in Business and Management Studies (2nd ed.). Academic Publishing Ltd.
read more "The interview protocol"

Wednesday, 8 July 2020

Research question defining steps

The web-based survey and analysis platform, Qualtrics, has a great post that provides four steps for constructing a robust research question. Those steps are:
  1. Observe/identify: the first step is always to "sift through the [many] inputs and discover the higher-level trends that are worth the investment of resources" (Scott, 2019), via a literature review, a gap analysis, a design brief or a tender, or a series of preliminary project scoping interviews. Scott (2019) notes that pilot studies can help determine which avenues or methods are likely to be the most fruitful.
  2. Review key factors: this next step requires us to evaluate all the information and refine it through a more in-depth literature and industry review, perhaps using a macro-environmental - PESTEL - analysis (Kotler, 1991; read more on this here). We start to consider how we might collect the data we need to answer the question, and whether the effort is going to be worth the potential return (Scott, 2019). As we are now thinking about methodology and methods, we can also start to consider the likely biases, assumptions and limitations that will arise. Scott (2019) also suggests that the following four factors should be considered at this stage:
    • "which factors affect the solution to the research problem"
    • "which ones can be controlled and used for the purposes of the company, and to what extent"
    • "the functional relationships between the factors"
    • "which ones are critical to the solution [to] the research problem".
  3. Prioritise: we now have a lot of information, and need to sift our findings and decide on what we MUST have to answer our research question, versus what is nice to have. The 'nice to haves' get put aside for a later project. Again, Scott suggests we ask ourselves some questions at this stage:
    • Who? Who are the people with the problem? Are they end-users, stakeholders, teams within your business? Have you validated the information to see what the scale of the problem is?
    • What? What is its nature and what is the supporting evidence?
    • Why? What is the business case for solving the problem? How will it help?
    • Where? How does the problem manifest and where is it observed?
  4. Align: this is where we discuss, debate and decide with all stakeholders. We clarify the project rationale. Funnily enough, although this is the shortest section, it often takes the longest time, as the people who have not been involved in steps 1 to 3 now have to be persuaded to the 'logical' step 4 alignment of proposed action. Once we collectively decide, we write the brief, outlining what we aim to determine, and ensure that everyone is aligned with the agreed project outcomes.
Then we detail the project planning, controls, milestoning and deadlines. We are now ready to start.


Sam

References:

read more "Research question defining steps"

Monday, 6 July 2020

Different types of interview questions

I have written about interview technique a number of times before (here), but haven't really explored the different types of data collection questions available for qualitative interviewing.

A number of researchers have developed interesting taxonomies of question types, often relating to specific fields or methodologies - such as Bernard (2011) in anthropology, and Spradley (1979) in ethnography - which we can each use to suit our own circumstances.

The types of questions I tend to use when interviewing are:
  1. Open-ended questions: "relevant and meaningful" questions which "invite thoughtful, in-depth responses that elicit whatever is salient to the interviewee", not to the interviewer (Patton, 2015, p. 631)... which is why we need all the following options to create a sound set of interview questions. Open-ended questions "have no definitive response and contain answers that are recorded in full" (Gray, 2004, p. 194)
  2. Grand Tour questions: these are large, sweeping, general questions asking the interviewee to describe the 'terrain' of their experience, where we learn "native terms about [the] cultural scenes" we are seeking to understand (Spradley, 1979, p. 86). Grand tour questions can be scoped to focus on "space, time, process, a sequence of events, people, activities or objects" (Spradley, 1979, p. 87). The same approach can be used in smaller, 'mini' grand tours. There are a number of types of grand - or mini - tour question sub-types:
    • Typical, e.g., "Could you describe a typical day at the office?" (Spradley, 1979, p. 87)
    • Specific, e.g., "Tell me what you did yesterday, from the time you got to work until you left?" (Spradley, 1979, p. 87)
    • Guided, e.g., "Could you show me around the office?" (Spradley, 1979, p. 87)
    • Task-related, e.g., "Could you compile the report and show me what you do where?" (Spradley, 1979). This can lead to clarifiers
  3. Clarifiers: questions such as "What are you doing now?" and "what is this?" can be used to prompt in Grand tour questions, particularly in Task-related questions (Spradley, 1979, p. 87)
  4. Native language questions: these "are designed to minimize the influence of [interviewees'] translation competence" (Spradley, 1979, p. 89). To check our understanding of a particular act, role, person or process - say, a secretary's typing mistakes - we might ask "How would you refer to it?", and they might answer "I would call them typos" (Spradley, 1979, p. 89)
  5. Prompts: short questions that help the interviewee refine their initial answer and "sharpen their thoughts to provide what can be critical definitions or understandings" (Guest et al., 2012, p. 220). There are a number of sub-types:
    • Direct Prompts: these are where the "interviewer asks clearly, 'What do you mean when you say X?' or 'Can you give an example of Y?' Probes may also be statements: 'Tell me more about that,' or 'Explain that to me a little bit'" (Guest et al., 2012, p. 220).
    • Indirect prompts: these keep the interview moving by keeping "the interviewee talking and encourage further explanation without asking another question". These might be non-verbal, such as head nodding or smiling; or verbal, such as "mmm hmm", or "yes" (Guest et al., 2012, p. 219).
    • Silent prompts: "just remaining quiet and waiting for an [interviewee] to continue" (Bernard, 2011, p. 162). Although Guest et al. (2012) suggest this is an indirect prompt, I think silence is a more powerful tool than that: silence can convey camaraderie, empathy, reminiscence, unfinished business, and waiting, and it creates a void that most people will step forward to fill.
    • Echo prompts: there are "particularly useful when an informant is describing a process, or an event. 'I see. The goat’s throat is cut and the blood is drained into a pan for cooking with the meat. Then what happens?' This probe is neutral and doesn’t redirect the interview. It shows that you understand what’s been said so far" (Bernard, 2011, p. 162).
  6. Closed-ended questions: where the answer is dichotomous (yes or no), or some form of 'fixed' choice via an option list or a Likert scale. These questions are most often used in surveys (Teddlie & Tashakkori, 2008), but can be useful to get people started on a topic, to end a topic, or to provide a particularly structured answer that enables the interviewer to transition into a new area. Closed-ended questions tend to "restrict the richness of alternative responses, but are easier to analyse" (Gray, 2004, p. 195).
Understanding question types will help us craft our interview script so that we collect the data we need to meet the needs of our interview: whether that is interviewing a client, or collecting data to answer a research question.


Sam

References:
  • Bernard, H. R. (2011). Research Methods in Anthropology: Qualitative and Quantitative Approaches (5th ed.). AltaMira Press.
  • Gray, D. E. (2004). Doing Research in the Real World. SAGE Publications Ltd.
  • Guest, G., Namey, E. E., & Mitchell, M. L. (2012). Collecting Qualitative Data: A field manual for applied research. SAGE Publications, Inc.
  • Patton, M. Q. (2015). Qualitative Research and Evaluation Methods (4th ed.). SAGE Publications, Inc.
  • Spradley, J. P. (1979). The Ethnographic Interview. Harcourt Brace Jovanovich College Publishers.
  • Teddlie, C., & Tashakkori, A. (2008). Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. SAGE Publications, Inc.
read more "Different types of interview questions"