Centralized document for how to run each session for controls in the cerebellar battery (2022-2023).
***NOTE*** this is generic information for running participants; this may not entirely apply for cerebellar battery. Use discretion.
Enter check_audioLevels in the Matlab command window, then click Part A. (Full check_audioLevels guide here.)
Set out a paper copy of the appropriate consent form (highlighted above) on the participant's desk.
Open the lab email and monitor for the participant's arrival.
The participant will notify you of their arrival by either emailing speechmotor@waisman.wisc.edu or calling the phone in 544. When they contact you:
Grab the clipboard with the SPEECH STUDY sign, greet them at the entrance, go upstairs, and direct them to 544A.
Participants may remove their mask once in the experiment room; they will need to have it off during speaking tasks.
Participants are not required to wear a mask on the path we take them from the entrance to the exam room. We are not permitted to exclude participants who don't wear a mask or aren't vaccinated. Talk to the lab manager if you have concerns about this (university-level) policy.
These should not have any occasion to change between studies, but it is good to check anyway. Note that you should run these checks on the REACHING computer, which is on the cart (not Quiberon).
Check sampling rate of tablet:
Check refresh rate of monitor:
Obtain consent using the UCSF consent form.
"This is a consent form for participation in our study. It tells you about our research and what you will be doing today. In this study you will be doing some brief assessments, some experiments where you use a joystick to make reaching movements, some experiments where you speak into a microphone and listen to your own speech over headphones, and some experiments where you listen to different sounds and answer questions about them.
If at any time you would like to stop participating in the study, that is okay. You will still be compensated for the time you have spent here today. All of your information will be kept confidential, and if you have any questions, you can ask me today or contact our lead researcher, Dr. Benjamin Parrell at the number listed on the consent form. I will give you some time to look this over now. You can sign this copy, which we will keep with us, and the second copy here on the desk is yours to keep."
At the beginning of this session:
This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the FIRST session.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This is a reaching experiment, with a separate interface computer.
The lights should be OFF for this experiment so the participant can only see the cursor, not their actual hand.
These should not have any occasion to change between studies, but it is good to check anyway. Note that you should run these checks on the REACHING computer which is on the cart (not Quiberon).
Check sampling rate of tablet:
Check refresh rate of monitor:
Start the experiment by typing in run_cerebReachAdapt_expt and hitting enter. Enter the required responses. Then place the pen on the tablet.
Tell the participant:
“The point of this experiment is to better understand how the brain controls reaching movements. We will be measuring how accurately you can reach to targets in different locations. Your data will be used as a normative baseline for future comparisons with neurological patients. So, please try your best to pay attention and follow all instructions.”
“In this experiment, you will be playing a game where you’ll be trying to make quick and accurate reaches to different target locations. For each trial, you will move the cursor to a home location in the center, which will be indicated with a circle. Then you'll make a quick reach to a target that appears somewhere on the screen. You should reach as quickly and accurately as you can, and slice through the targets rather than stopping at the target.”
“Please take some time now to adjust the seat height and scoot in close to the workstation. You will be making many reaches towards the edges of the tablet, and I want you to be able to do that without moving any parts of your body other than your arm. Keep the same posture throughout the experiment, and rest your other hand in your lap. Be sure not to swivel around in the chair.”
“You will hold this pen at the red base with your dominant hand – maintain the same grip throughout the experiment.” (demonstrate)
“There are no pre-planned breaks, but if you need to rest for a bit, just wait before moving the cursor to the center for the next trial.”
“Do you have any questions?”
“I’ll give you a minute to get comfortable before turning out the lights.”
After they are comfortable, turn off the lights, then press SPACE to start the experiment.
Tell the participant: “Move your hand to the start circle and wait for the target to appear. Your goal is to move your hand to the target. Reach through the target as accurately as possible in a quick straight line. Once you start moving, follow all the way through. If you hear a knocking sound, that means that you moved fast enough and far enough for a good trial.”
Throughout the experiment, you should monitor how they are performing their reaches. You may have to issue corrections. Common issues:
When they get the “Too Slow” message: “The too slow message means that you did not move fast enough for a valid trial. Remember to slice through the targets with your hand.” This is related to movement time, not reaction time.
If participant gets “slice through the target" message: “Remember to move accurately through the target in a quick, straight line.”
Instructions will appear on the screen for the participant. In this phase, the cursor will move as they move their hand, but the direction that it moves will not correspond to where they are reaching. Their goal in this section is to move their hand directly to the target, ignoring the cursor. They should still reach as accurately as possible in a quick straight line and slice through the target.
If they have any questions, answer them. Press space when they are ready to begin.
There are two washout phases. Participants will see instructions on the screen before each phase. Answer any questions they may have. When they are ready to continue, press space.
Washout without feedback: In this block they will no longer see the cursor. They should continue to move their hand directly through the target.
Washout with feedback: In this block they will see the cursor again. They should continue to move their hand directly for the target.
There is no restart script. This is a relatively short experiment, and can be restarted if necessary.
No special equipment setup needs to be completed between the two reaching studies.
This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the FIRST session. This is after cerebReachAdapt so you do not need to re-deliver general instructions on how to reach.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This is a reaching experiment, with a separate interface computer.
The lights should be OFF for this experiment so the participant can only see the cursor, not their actual hand.
These should not have any occasion to change between studies, but it is good to check anyway. Note that you should run these checks on the REACHING computer which is on the cart (not Quiberon).
Check sampling rate of tablet:
Check refresh rate of monitor:
Start the experiment by typing in run_cerebReachComp_expt and hitting enter. Enter the required responses. Then place the pen on the tablet.
Tell the participant:
“The point of this experiment is to better understand how the brain controls reaching movements. We will be measuring how accurately you can reach to targets in different locations. Your data will be used as a normative baseline for future comparisons with neurological patients. So, please try your best to pay attention and follow all instructions.”
“In this experiment, you will be playing a game where you’ll be trying to make quick and accurate reaches to different target locations. For each trial, you will move the cursor to a home location in the center, which will be indicated with a circle. Then you'll make a quick reach to a target that appears somewhere on the screen. You should reach as quickly and accurately as you can, and slice through the targets rather than stopping at the target.”
“Please take some time now to adjust the seat height and scoot in close to the workstation. You will be making many reaches towards the edges of the tablet, and I want you to be able to do that without moving any parts of your body other than your arm. Keep the same posture throughout the experiment, and rest your other hand in your lap. Be sure not to swivel around in the chair.”
“You will hold this pen at the red base with your dominant hand – maintain the same grip throughout the experiment.” (demonstrate)
“There are no pre-planned breaks, but if you need to rest for a bit, just wait before moving the cursor to the center for the next trial.”
“Do you have any questions?”
“I’ll give you a minute to get comfortable before turning out the lights.”
After they are comfortable, turn off the lights, then press SPACE to start the experiment.
Tell the participant: “Move your hand into the start circle and wait for the target to appear. Once the target appears, reach in a smooth motion to hit the target. Make sure that you slice through the target without stopping. You will see a cursor representing your hand position during the reach.”
“The knock just means you reached far enough, but it doesn’t mean that you hit the target.”
Throughout the experiment, you should monitor how they are performing their reaches. You may have to issue corrections. Common issues:
When they get the “Too Slow” message: “The too slow message means that you did not move fast enough for a valid trial. Remember to slice through the targets with your hand.” This is related to movement time, not reaction time.
If participant gets “slice through the target" message: “Remember to move accurately through the target in a quick, straight line.”
Instructions will appear on the screen for the participant. In this phase, they will continue reaching in a smooth motion to hit the target. They will not see the cursor representing their hand position and the target may move.
Participants may anticipate the jump by starting slow and get messages about their movement speed. Encourage them to keep moving in one smooth sweep.
There is no restart script. This is a relatively short experiment, and can be restarted if necessary.
Before typical production, participant will change computers to the speech computer.
This is a straight production experiment with no auditory feedback manipulations or anything special. Participants will see words on the screen and read them out loud. They do not need headphones. The study is meant to record typical productions of word-initial voiceless stops.
This experiment is part of the cerebellar battery being run at multiple sites. Participants will be coming for full-day visits. As such, they will likely have completed consent forms and participant histories at a different time, not after this individual experiment (at some future point there will be a document on how to wrangle a full- or multi-day participant visit at each site).
To run the experiment, type the command run_cerebTypicalProduction_expt. When prompted, enter the participant code and their height.
The participant will see the instructions on the screen.
The participants will first do a practice phase to get the first-time production of the word out of the way, and to get familiar with the words and the pacing of the experiment. You can also use this time to fine-tune the gain on the microphone.
At the end of practice, ask the participant: "The rest of the experiment will go just like that. Are you feeling okay to move onto the task, or would you like to practice again?" Make sure that the speed is okay for them.
If they would like to redo the practice, they can, but there is no benchmark they need to pass in order to move on.
Note: if they are having a hard time with how fast the trials are going, you can type 'redo' when prompted, and then type 'yes' when asked if the participant would like slower trials. This will slow the trial down by 0.5 seconds, and then run through practice again with the slower pace.
"We'll now continue with the main experiment, which will take 5-10 minutes. Do you have any questions before we start?"
If no questions, "Whenever you're ready, you may begin."
Things to keep an eye on:
Note: there is no pause function in this script, only the automatic breaks (every 20 trials).
The data will be saved in C:\Users\Public\Documents\experiments\cerebTypicalProduction\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebTypicalProduction\acousticData
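The copy can be done in Explorer, or scripted. A minimal Python sketch (the helper function and the participant code "sp999" used in the example are hypothetical; only the two paths come from this document):

```python
import shutil
from pathlib import Path

LOCAL_ROOT = Path(r"C:\Users\Public\Documents\experiments\cerebTypicalProduction\acousticData")
SERVER_ROOT = Path(r"\\wcs-cifs\wc\smng\experiments\cerebTypicalProduction\acousticData")

def copy_participant(code, src_root=LOCAL_ROOT, dst_root=SERVER_ROOT):
    """Copy one participant's acoustic data folder to the server share.

    copytree raises FileExistsError if the destination folder already
    exists, which guards against silently overwriting a previous upload.
    """
    shutil.copytree(src_root / code, dst_root / code)

# e.g. copy_participant("sp999")  # hypothetical participant code
```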
The save path can also be generated with get_acoustSavePath('cerebTypicalProduction'). Copy the participant's folder into the path generated by get_acoustLoadPath('cerebTypicalProduction').
As of 9/26/2022 there is no restart script for this function. However, the entire experiment takes about 5 minutes so you can just rerun it.
Before paced VOT:
This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the FIRST session.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This is an unusual experiment in that it is not an altered auditory feedback study. In this study, participants will first see a word that they will use in that trial. They will then hear a series of clicks through the headphones. The clicks will start relatively slow but get faster through the trial. There will be a countdown for the first 3 clicks, so they can get used to the pace and prepare. Then the word will appear on the screen. At that point, they will repeat the word in time with the clicks.
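The within-trial click pacing (slow at first, then accelerating) can be sketched as follows. The start/end intervals and click count below are illustrative placeholders, not the study's actual values, which are set inside the Matlab script:

```python
def click_schedule(start_ici=1.0, end_ici=0.4, n_clicks=20):
    """Generate click onset times (in seconds) whose inter-click
    interval (ICI) shrinks linearly from start_ici to end_ici
    across the trial."""
    onsets = [0.0]
    for i in range(1, n_clicks):
        frac = i / (n_clicks - 1)              # 0 -> 1 across the trial
        ici = start_ici + frac * (end_ici - start_ici)
        onsets.append(onsets[-1] + ici)
    return onsets
```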
---
This experiment is conducted using PTB so there is not the usual duplicated experimenter display. Instructions to the experimenter are displayed in Matlab's command window.
Special hardware requirements:
Before the experiment begins:
To test the connection between the headphone amp and the Focusrite, in the Matlab command window, type:
test_outInputGain;
The metronome channel should reach a maximum amplitude of roughly +/- 0.1 to 0.2. If you get any warnings about either channel, check that the gain is set to 50% on both the headphone and the Focusrite ports. If the figure shows clipping in the line labeled channel 2, turn down the gain on the headphone side if possible; otherwise, turn down the gain on the Focusrite side. Then run the test again.
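As a rough illustration of the kind of level check being performed here (the thresholds mirror the guidance above, but this is a hypothetical sketch, not the internals of test_outInputGain):

```python
def check_channel(samples, lo=0.1, hi=0.2, clip=0.99):
    """Classify a recorded channel by its peak amplitude:
    'clipping' if it nears the +/-1 rails, 'too quiet' below lo,
    'too loud' above hi, else 'ok'."""
    peak = max(abs(s) for s in samples)
    if peak >= clip:
        return 'clipping'
    if peak < lo:
        return 'too quiet'
    if peak > hi:
        return 'too loud'
    return 'ok'
```

For example, a channel peaking at 0.15 falls in the target range and comes back 'ok'; one pinned at 1.0 comes back 'clipping' and calls for a gain reduction.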
Setup diagram:
Start the experiment by typing run_cerebPacedVot_expt in the command window and hitting enter.
You will first run through equipment checks and will have to answer a few questions about the equipment setup:
Then, tell the participant: "You will hear some clicks shortly. This is just a check to make sure our equipment is working properly."
After the automatic check is done, check the output plotted in the figure. You should be able to clearly see the clicks in the signal, with a max amplitude of about +/- 0.2. If either channel is not receiving input, Matlab will inform you which channel is wrong. Double check that everything is connected and powered on and redo the hardware check.
After the hardware checks, you will do a volume check. Tell the participant: "We'll start with a volume check to make sure the metronome is at a good loudness. You'll want to be able to comfortably hear this while you are talking."
The command window will then ask you if you would like to repeat the volume check. If you have to adjust the volume at all, press 1. If the volume was good, press 0. Note: The PTB screen is listening for this input, not Matlab, so you do not have to actually type in the command window.
Matlab will then prompt you to press the spacebar when you are ready to repeat. Repeat as many times as is necessary.
This experiment will then move onto a practice phase so that participants can get used to the task, and so that you as the experimenter can see if the pacing clicks need to be slowed down.
Tell the participant: "We'll start with a practice section so you can get used to the task." The remainder of the instructions will be on the screen.
When they finish the practice, the experiment will ask you if it seems like they need to go slower. This is a provision for the cerebellar patients, who often have a hard time repeating one syllable at speed. We do expect that people will have difficulty towards the end of the clicks, when they are speaking the fastest. However, if people consistently have difficulty near the beginning of the task (i.e., within the first 5 clicks that they are speaking for), you can type in "slower" and the practice will repeat at a slightly slower pace.
If it seems that the participant runs out of breath too early, encourage them to try again, this time taking a deep breath before they start speaking.
If the participant did not have difficulty, ask them: "Are you comfortable with the task, or would you like to practice again?" If they express hesitation that the clicks are too hard at the end, you can tell them that that is okay, and they should just try their best. You can repeat practice as many times as they like.
"We'll now move onto the main experiment, which will take about 10 minutes. You will be doing the same task you just practiced for the rest of the experiment. Do you have any questions before we start?"
If no questions, "Whenever you're ready, you may begin."
Things to keep an eye on:
This is a self-paced experiment, in that participants have to press the spacebar to start the next trial. So if participants need a little break, they can always stop at a trial. There is no additional pause function.
"Great job! You are finished with the speaking portion of the experiment. I will be in to take off your headphones."
The data will be saved in C:\Users\Public\Documents\experiments\cerebPacedVot\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebPacedVot\acousticData
[[ Gotta figure out what to do for this experiment. RK]]
This is a suggested time for a break, since you will have to do some equipment setup, and it is approximately the middle of the experiments.
This experiment is part of the cerebellar battery run in 2022-2023.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This experiment has the option to run just 'adapt' (adaptation experiment), just 'comp' (compensation experiment), or 'both' (adaptation followed by compensation). The adaptation experiment can also be run by itself with the option 'adapt_short', which has 80 trials instead of 200, and stops before the passage reading portion. At UW, we will use the option 'both'. At UCSF, session three will use 'comp'.
You will need the RAINBOW PASSAGE for this experiment if you are doing the adaptation portion. (You don't need this if doing 'adapt_short'.) In the adaptation portion, participants do one run, then take a break to read the rainbow passage, and then do a second run.
This experiment uses:
Before running this experiment, you need to check the audio levels of the noise over the headphones, as well as the noise + participant speech (if at UW, see this document for more details.)
Tell the participant: "In this experiment, you will be speaking and listening. When you see the word on screen, read it out loud, just like you would normally say it. You will be speaking into the microphone on the desk, and you will hear your own voice and some noise played back through these headphones. Do you have any questions?"
Type run_cerebAAF_expt into the command window and hit enter. You will be prompted to type in adapt, adapt_short, comp, or both. Type in the appropriate experiment for your site and hit enter.
In all cases, the first task that will come up is an LPC order check.
"For this first section, you will see one word at a time appear on the screen."
Use the check_audapterLPC GUI to find an appropriate LPC for this participant. If you need to fix the vowel boundaries, use the Change OSTs button.
If you are running "adapt" or "both" or "adapt_short", the adaptation section will be next.
Tell the participant: "This next speaking section lasts about 10 minutes. Read each word when it appears on screen."
For adapt_short: After 80 trials, the experiment will conclude.
For both or adapt: Run first 160 trials. After trial 160, there will be a message on screen saying “Time for a break,” and the command window will say “Pausing experiment for passage reading.”
Tell the participant: "In this next section, you'll be reading a short passage. Please pick up the sheet of paper on the table in front of you titled 'The Rainbow Passage'. You'll be reading the text on this paper out loud. While you speak, you will hear your own voice and noise through the headphones. Do you have any questions before we start?
Great. When you are ready, let me know and I will start our recording system. When you hear the noise start through the headphones, you can go ahead and read the passage out loud."
Follow the command window prompts to start the Audapter feedback and stop it once they’ve read the passage.
Tell the participant: "There are a few more speaking trials with single words again."
Start the “adaptation run 2”.
If you are running "comp", this will be first. If you are running "both", this will be second.
This experiment starts with duration training to get the participant saying the vowel at a duration long enough for the feedback manipulation to be usable, but not unnaturally slow.
Tell the participant:
"Next, we will do some practice speaking trials where we will try to get you to talk at a particular speed. Read the words aloud as they appear, just as you did previously. Try to speak in a way that is a bit more stretched out than how you would normally talk, but still somewhat natural. For example, instead of saying “head” I might say “heeeead.” After you say each word, you will get some feedback about the speed of your speech. The computer may tell you to speak slower or faster. If you spoke at a good speed, you will see a green circle."
If the participant had trouble, give them some tips. This task is challenging for many people. Just remember that the most important part is that their vowel is long enough that they can hear their own vowel formants and compensate for them.
You should try to have them get 7 out of 10 or better. If they’ve got the gist but the automated system isn’t cooperating, you can tell them, “you’re doing great – just keep doing that and ignore what the computer says.”
If they got less than 7/10, choose redo. Once they get 7/10 or better, tell the participant: "You’ll now be doing the same task with more trials. You will still get feedback about the speed of your speech. This section lasts about 10 minutes."
Choose move on.
As of 10/21/2022 RK is unaware of a restart script for this experiment.
No equipment adjustments are needed before the formant JND study.
This experiment is part of the cerebellar battery run in 2022-2023.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This is a perceptual experiment that first records tokens from the participant, and then uses those tokens as the basis of the perceptual stimuli.
This code can run in either f0 mode (pitch perception) or f1 mode (formant perception). You will have to specify which one at the beginning of the experiment.
Participant will need:
Tell the participant: "In this experiment there will be two sections. In the first section, you will say {ba / bed} a few times and we will record your speech. In the second section, you will hear three different tokens and say which are most similar to each other. There will be a short pause between sections while I get things set up. Do you have any questions?"
Type run_cerebJND_expt in the Matlab command window and press enter.
You will be prompted to enter in either f0 or f1.
After the stimuli have been generated, you will move onto the perceptual section of the experiment. Tell the participant:
"In this section, you will hear three different tokens. Your task is to say whether the SECOND token is most like the FIRST token or the LAST token. If the second token is most like the FIRST token, press F. If the second token is most like the LAST token, press J. Do you have any questions?"
Answer questions, then tell the participant: "First we will start with a practice round just so you can get used to the task."
The practice phase has very large stimulus differences and will take about 1 minute (10 trials).
If they are okay to move on, tell the participant: "We will now move onto the main section. The task may get very difficult and you might not be sure what is the right answer. That is okay, just take your best guess. This will take about 10 minutes."
Start the main task. It maxes out at 100 trials or 32 reversals, whichever comes first.
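The stopping rule can be sketched as follows. The reversal definition below is a simplified illustration (any change between consecutive responses), which may differ from the script's actual staircase logic:

```python
def run_staircase(responses, max_trials=100, max_reversals=32):
    """Count trials until either limit is hit, whichever comes first.

    `responses` is an iterable of booleans (correct/incorrect); in this
    simplified scheme a reversal is counted whenever the current
    response differs from the previous one."""
    trials = reversals = 0
    prev = None
    for correct in responses:
        trials += 1
        if prev is not None and correct != prev:
            reversals += 1
        prev = correct
        if trials >= max_trials or reversals >= max_reversals:
            break
    return trials, reversals
```

A participant who responds consistently hits the 100-trial cap with few reversals; one whose responses flip on every trial hits the 32-reversal cap much sooner.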
As of 10/21/2022 there is no remedy for a crashed experiment.
You will need to take the participant's headphones off to start the pitch compensation study. The first section in pitch compensation is a calibration phase and the participant can hear themselves in free field. Have the participant put the headphones back on when they start the main phase of the study.
(Note: this is to avoid the participant having to hear you listen to various different samples of the pitch shifting algorithm; nothing bad will happen if you accidentally leave the headphones on. It is much worse to not put them back on afterwards)
This experiment is part of the cerebellar battery run in 2022-2023. For controls, it is in the FIRST session. For patients, it is in the SECOND session.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This is an experiment examining compensation to upward and downward pitch shifting of the speaker's own voice.
It includes a calibration part during which the f0 of the speaker is calculated and the correct shifting paradigm is selected. If the participant has already run the adaptation version of this experiment (pitchAdaptRetest), the experiment will use the stored information about the algorithm and skip the calibration phase. This calibration part is included in both scripts: run_pitchAdaptRetest_expt.m and run_pitchComp_expt.m.
Tell the participant: "This experiment will have a calibration phase, followed by one main section. You will be reading the vowel "ah" from the computer screen and listening to your speech over headphones.
Don’t hesitate to ask questions or raise concerns at any point."
Calculate the f0 of the speaker: the participant is instructed to say “AH” (the instructions also appear on the participant's screen). You will see the waveform in a figure on the experiment computer, and be prompted ‘Is the recorded sample good?’ at the command line.
If the recording looks okay and there is no clipping visible in the figure (see below for an example), press ‘y’ and hit enter.
If there is clipping visible in the figure (see below), reduce the microphone gain, then press ‘n’ and hit enter. The whole process will repeat. Repeat as needed until the audio looks good and no clipping is visible in the figure.
If there is a problem with the audio signal (participants didn’t speak, said the wrong thing, coughed, etc.), press ‘n’ and hit enter. The whole process will repeat. Repeat as needed until the audio looks good and no clipping is visible in the figure.
Fig 1: Two examples of speech waveforms. In the image on the left, the waveform falls between -1 and 1 (indicated by the red lines). This is an example of an appropriately set microphone gain. In the image on the right, the waveform is “clipped”—it is cut off by the -1 and 1 boundaries. In this case, the microphone gain needs to be reduced. If the microphone gain is too low (not shown), the waveform will have a very small range. Aim to use most of the range between -1 and 1 without any clipping.
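The visual check described above can also be expressed numerically; a hypothetical sketch (the threshold values are illustrative, not taken from the lab's code):

```python
def clipping_fraction(waveform, rail=1.0, tol=1e-3):
    """Fraction of samples pinned at the +/-1 rails. A clean recording
    should return 0.0; anything above that means the gain is too high."""
    near_rail = sum(1 for s in waveform if abs(s) >= rail - tol)
    return near_rail / len(waveform)

def uses_enough_range(waveform, target=0.5):
    """True if the waveform peak uses at least `target` of the +/-1
    range, i.e. the microphone gain is not set too low."""
    return max(abs(s) for s in waveform) >= target
```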
Enter a percentage value for the upper and lower pitch-tracking boundaries: this value is used to calculate the upper and lower boundaries for tracking the vocal pitch. The closer the boundaries are to the f0, the better the pitch estimate and the better the shifting algorithm works. However, there will be problems if the actual pitch goes outside these boundaries, and the exact boundaries that work best depend on the speaker. The default value is 20%, and will work well for most speakers. After inspecting the 3 figures generated with this percentage (see below), you can change the value. In most cases, 20% is the right choice. If the pitch contour is not as steady as in the given example and the green and blue lines touch the red lines once in a while, the boundaries should be a little larger, such as 25% or 30%.
You will see a figure pop up like this: NOTE: IF THE UPPER PANEL IS EMPTY, RESTART MATLAB BECAUSE PITCH SHIFTING IS NOT WORKING.
Click “enter” for the next figure that shows the pitch contour, shifted up (green line). Press ENTER again.
The next slide is the pitch contour, shifted down.
Press “enter” again.
Confirm percentage: both the green and blue lines should be contained within the red horizontal borders in all three figures. If the participant said 'bod,' it is OK if a portion at the end falls outside the red borders. If the boundaries look good, enter ‘y’ at the command line when prompted: 'Is the percentage good?', {'y', 'n'}.
Examples of boundaries: in the figures below, you see two bars: the upper bar shows the pitch shifting output of Audapter; the lower bar shows the pitch extracted from the waveform. These can differ slightly but should not differ to a large extent. The default boundary value is 20% (first figure, pitch shifted up); the 10% boundaries in the example below are too narrow. If the speaker has an unstable pitch and 20% is too narrow, the boundaries must be widened, e.g., to 25% or 30%. In general, 20% is the lowest value; boundaries should only ever be adjusted upward. The 10% figure is included only to demonstrate what a too-narrow band looks like. The final disturbances are caused by the ‘d’ in the word “bod” and can be ignored.
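A sketch of the boundary arithmetic, assuming the boundaries are computed as f0 plus/minus the chosen percentage of f0 (this formula is inferred from the description above, and the function names are hypothetical):

```python
def pitch_boundaries(f0, pct=0.20):
    """Lower and upper pitch-tracking boundaries: f0 * (1 -/+ pct)."""
    return f0 * (1 - pct), f0 * (1 + pct)

def contour_within(contour_hz, f0, pct=0.20):
    """True if every tracked pitch sample stays inside the boundaries.
    If this fails at 20%, try widening pct to 0.25 or 0.30."""
    lo, hi = pitch_boundaries(f0, pct)
    return all(lo <= f <= hi for f in contour_hz)
```

For a speaker with f0 = 100 Hz, the default 20% gives boundaries of roughly 80 to 120 Hz.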
Select pitch shifting manner: Next, the AlgorithmSelect window opens. Here, you select one of three algorithms that Audapter can use to shift the pitch of the voice. There are nine buttons on the left of the screen; clicking one of them plays back an audio sample demonstrating how the participant's voice will sound with that combination of pitch-shifting direction (up, down, none) and pitch-shifting algorithm (pp_none, pp_peaks, or pp_valleys). It's not important to understand the differences between the algorithms; they are simply different ways of shifting the pitch up and down. Listen to the different algorithms to see which one sounds the most natural; in most cases, this will be pp_none. Then click the radio button on the right corresponding to the best algorithm and click "Select Algorithm".
"Please put the headphones on now." [make sure they are on correctly]
"On each trial, you will say the word “ah” for about a second like you just did. Start when the text prompt appears on the screen and keep going until the text prompt disappears. Try to keep the pitch of your voice at a constant, monotone level. So, try not to raise your pitch or lower your pitch. You will be given feedback on the screen if you say “ah” for less than the required time or speak too quietly. Just continue with the task and try to adjust your speech accordingly. Before the actual session starts, we will do some practice so you can get used to the task. You will have several breaks throughout the experiment.
Do you have any questions?"
Practice trials:
During the practice trials, the experimenter can make
1) the trial duration (from onset to end of trial) longer if the speaker has difficulty starting on time (default is 2 seconds)
2) the display time (the time the word is on the screen) a bit longer than 1 second: if a speaker takes a long time to start speaking, the word produced is often too short. In this case, add the speaker's onset time (displayed in the command line during the study as "time before onset") to the display time (speakers must produce a vowel of around 1 second). So, if the speaker always starts the word after 0.4 seconds, the new display time is 1.4 seconds to make up for the missed speaking time.
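The display-time adjustment in 2) is just the target vowel duration plus the speaker's typical onset delay. A minimal sketch, assuming a 1-second vowel target (illustrative Python; the real adjustment is made in the MATLAB experiment code):

```python
def adjusted_display_time(onset_delay_s, target_vowel_s=1.0):
    """Display time = target vowel duration plus the speaker's typical onset delay."""
    return target_vowel_s + onset_delay_s

# a speaker who reliably starts ~0.4 s after the prompt appears:
adjusted_display_time(0.4)            # roughly 1.4 s, matching the example above
```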
The experiment starts after you press "enter".
What to monitor for:
As of 10/24/2022 there is no restart script for this experiment.
No equipment adjustments are needed before the pitch JND study.
See section 6 (formant JND; use option f0).
Enter check_audioLevels
then click Part A. (full check_audioLevels guide here)
Set out a paper copy of the appropriate consent form (highlighted above) on the participant's desk.
Open the lab email and monitor for participant's arrival.
The participant will notify you of their arrival by either emailing speechmotor@waisman.wisc.edu or calling the phone in 544. When they contact you:
Grab the clipboard with the SPEECH STUDY sign, greet them at the entrance, go upstairs, and direct them to 544A.
Participants can remove their mask once in the experiment room. They will need to remove their mask during speaking tasks.
Participants are not required to wear a mask on the path we take them from the entrance to the exam room. We are not permitted to exclude participants who don't wear a mask or aren't vaccinated. Talk to the lab manager if you have concerns about this (university-level) policy.
This experiment is part of the cerebellar battery run in 2022-2023. For controls and patients, it is in the SECOND session.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This is an experiment examining how speakers adapt their pitch upward, in the opposite direction, in response to downward pitch shifting of their own voice during production of the word "bod" and the vowel "ah".
At the start of the experiment, there is a calibration phase during which the speaker's f0 is extracted and the correct pitch-shift algorithm is selected. The f0 is extracted before each session and experiment; the algorithm-selection part is only included before whichever experiment is run first (either the adaptation or the compensation study, depending on the setup). Otherwise, the algorithm setting is taken from the first collected data, and calibration is not redone.
The adaptation study consists of up to 4 sessions, run in a fixed order:
1. pitch shifting while producing the word "bod"
2. control session producing the word "bod" (unperturbed feedback)
3. pitch shifting while producing the vowel "ah"
4. control session producing the vowel "ah"
Each phase requires an updated initial f0, because the pitch of the voice often drifts up or down naturally, even in the unperturbed sessions.
RESTART MATLAB, because sometimes the pitch shifting paradigm doesn't work correctly if other Audapter scripts have been running.
Enter run_pitchAdaptTwoWords_expt in the command line and press "enter".
Enter the ID of the participant and their height.
Instructions when running 4 sessions:
"In this experiment, there will be 4 sections. There will be brief breaks between sections while the experimenter starts the next part, and there will also be breaks during the sections themselves.
During each of the sections, you will be reading words off the computer screen and listening to your speech over the headphones.
Don’t hesitate to ask questions or raise concerns at any point."
Note: part of this calibration section may not occur if this participant has already done pitchComp; in this case, only the initial f0 will be calculated again.
"We are first going to adjust our script to match your voice. In the next section, some instructions will appear on the screen. Take your time to read them and ask questions if you are not sure what to do. After that, you will see a word appear on the screen. When you see the word, read it out loud, just like you would normally say it, only slightly longer. Try to keep the pitch of your voice constant. After you speak, there will be a short break while I calibrate our equipment.
Do you have any questions?"
After the instructions appear on the participant's screen, the experimenter presses "enter" to start the recording.
Calculate the speaker's f0: The participant is instructed to say "bod" (the instructions also appear on the participant's screen), or "ah" during sessions three and four. You will see the waveform in a figure on the experiment computer, and you will be prompted 'Is the recorded sample good?' at the command line.
If the recording looks okay and there is no clipping visible in the figure (see below for an example), press ‘y’ and hit enter.
If there is clipping visible in the figure (see below), reduce the microphone gain, then press ‘n’ and hit enter. The whole process will repeat. Repeat as needed until the audio looks good and no clipping is visible in the figure.
If there is a problem with the audio signal (the participant didn't speak, said the wrong thing, coughed, etc.), press 'n' and hit enter. The whole process will repeat. Repeat as needed until you have a good recording.
Fig 1: Two examples of speech waveforms. In the image on the left, the waveform falls between -1 and 1 (indicated by the red lines). This is an example of an appropriately set microphone gain. In the image on the right, the waveform is “clipped”—it is cut off by the -1 and 1 boundaries. In this case, the microphone gain needs to be reduced. If the microphone gain is too low (not shown), the waveform will have a very small range. Aim to use most of the range between -1 and 1 without any clipping.
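The gain check described in Fig 1 can be sketched as a simple peak test (illustrative Python, not the lab's actual MATLAB check; the quiet_peak threshold is made up for demonstration):

```python
def check_waveform(samples, clip_level=1.0, quiet_peak=0.5):
    """Classify a normalized waveform as 'clipped', 'too quiet', or 'ok'.
    quiet_peak is a hypothetical threshold for illustration only."""
    peak = max(abs(s) for s in samples)
    if peak >= clip_level:
        return 'clipped'      # waveform hits the +/-1 rails: reduce the mic gain
    if peak < quiet_peak:
        return 'too quiet'    # very small range: raise the mic gain
    return 'ok'               # uses most of the range without clipping

check_waveform([0.1, -0.9, 0.8])   # a healthy waveform
check_waveform([0.2, 1.0, -0.5])   # touches the rail: clipped
```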
Enter a percentage value for the upper and lower pitch-tracking boundaries: this value is used to calculate the boundaries within which the vocal pitch is tracked. The closer the boundaries are to the f0, the better the pitch estimate and the better the shifting algorithm works; however, there will be problems if the actual pitch goes outside these boundaries. The boundaries that work best depend on the speaker. The default value is 20% and works well for most speakers. After inspecting the 3 figures shown with this percentage (see below), you can change the value. In most cases 20% is the right value, but some pitch contours are not as steady as in the given example and the green and blue lines touch the red lines once in a while. In that case, the boundaries should be a little larger, such as 25% or 30%.
You will see a figure pop up like this: NOTE: IF THE TOP PANEL IS EMPTY, RESTART MATLAB BECAUSE PITCH SHIFTING IS NOT WORKING.
Press "enter" for the next figure, which shows the pitch contour shifted up (green line). Press "enter" again.
The next figure shows the pitch contour shifted down.
Press “enter” again.
Confirm the percentage: both the green and blue lines should be contained within the red horizontal borders in all three figures. If the participant said 'bod,' it's OK if a portion at the end is outside the red borders. If the boundaries look good, enter 'y' at the command line when prompted: 'Is the percentage good?', {'y', 'n'}.
Examples of boundaries: in the figures below, you see two panels: the upper panel shows the pitch-shifting output of Audapter; the lower panel shows the pitch extracted from the waveform. These can differ slightly but should not differ to a large extent. The default boundary value is 20% (first figure, pitch shifted up); the 10% boundaries in the example below are too narrow. If the speaker has an unstable pitch and 20% is too narrow, adjust the boundaries to a larger value, e.g., 25% or 30%. In general, 20% is the lowest value; boundaries only ever need to be adjusted upward. The 10% figure is only there to demonstrate what a too-narrow band looks like. The disturbances at the end are caused by the 'd' in the word "bod" and can be ignored.
Select pitch shifting manner: Next, the AlgorithmSelect window opens. Here, you select one of three algorithms that Audapter can use to shift the pitch of the voice. There are nine buttons on the left of the screen; clicking one of them plays back an audio sample demonstrating how the participant's voice will sound with that combination of pitch-shifting direction (up, down, none) and pitch-shifting algorithm (pp_none, pp_peaks, or pp_valleys). It's not important to understand the differences between the algorithms; they are simply different ways of shifting the pitch up and down. Listen to the different algorithms to see which one sounds the most natural; in most cases, this will be pp_none. Then click the radio button on the right corresponding to the best algorithm and click "Select Algorithm".
Check that the speaker is wearing the headphones. If they are not:
"Please put the headphones on now." [make sure they are on correctly]
First two sessions, producing "bod":
"On each trial, you will see the word “bod” appear on the screen, just like before. When you see the word on the screen, read it out loud, just like you would normally say it. Keep the pitch of your voice as constant as possible, so it sounds monotone."
Experimenter can show this by saying "bod" with no pitch fluctuations/monotone.
"You will be speaking into the microphone, and you will hear your own voice played back to you through the headphones. Try to say the word “bod” the same as you would when you can hear yourself. There will be a break every 14 trials. If you need to take a break at some other time, like to cough or take a drink of water, you can press "p" on the keyboard. You will get some trials to practice."
"Do you have any questions before we start?"
Last two sessions, producing "ah":
"On each trial, you will see the vowel "ah" appear on the screen, just like before. When you see it, start saying it out loud. Keep the pitch of your voice as constant as possible, so it sounds monotone. Keep producing the vowel until it disappears from the screen. You will be prompted if the vowel is too short or if you start too late."
Experimenter can show this by saying "ah" with no pitch fluctuations/monotone.
"You will be speaking into the microphone, and you will hear your own voice played back to you through the headphones. Try to say the word “ah” the same as you would when you can hear yourself. There will be a break every 14 trials. If you need to take a break at some other time, like to cough or take a drink of water, you can press "p" on the keyboard. You will get some trials to practice."
"Do you have any questions before we start?"
Things to keep an eye on:
As of 10/24/2022 there is no restart script for this experiment.
This experiment is part of the cerebellar battery run in 2022-2023. For controls, it is in the SECOND session. For patients, it is in the THIRD session.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
"This experiment has one short section and then one long section. There will be breaks between sections while I set up the next part.
For this first section, you will see one word at a time appear on the screen. When you see the word on screen, read it out loud, just like you would normally say it. You will be speaking into the microphone on the desk, and you will hear your own voice and some noise played back through these headphones. Do you have any questions?"
MATLAB command: run_cerebTimeAdapt_expt
The first phase of this experiment both gets the participant used to how the study is going to go, and also records some initial tokens so that you can set the OST file (see this article for information on how OST files work). The OST file in this experiment has three OST transitions:
The participant should say the word like it is the answer to something, "Best." or "Best!" If they are saying it like it is an item in a list ("best..."), or like a question, encourage them to change how they say it by demonstrating. You may have to correct them again during the experiment. The key is that the vowel should not be too drawn out.
The initial pretest phase has 9 trials. After the trials are over, the GUI audapter_viewer* will open with the trials. Use audapter_viewer to tweak the OST** file if necessary (see this article on how to use audapter_viewer). These segment transitions are quite robust so you will likely not need to change much; you may need to tweak parameters, but it is highly unlikely that you will need to tweak heuristics.
*See this guide on how to use audapter_viewer
**See this guide on how to set OSTs
When you are satisfied with the OST tracking, click "Continue/Exit". You will get a dialog asking if you want to save; click "Save and Exit". This will ensure that the new parameters are saved both into the OST file and into the experiment file for that participant. Then a dialog will pop up to make sure it is being saved in the right place. The automatically selected option should be the local folder for that participant/experiment; if it is not, you can find another folder instead.
After the OSTs are set, another GUI will pop up for you to segment the most recent practice trials. There will be two user events (denoted by cyan lines): one corresponding to where OST status 2 was for that trial, and one corresponding to where OST status 6 was. They will be labeled "vStart" and "tStart", respectively. Click and drag the lines to move these events to the actual location of the start of /E/ and the start of the /t/ closure for the trial, then press "continue" to move to the next trial.
If you messed up on one of the events, you can click "previous" to go back to that trial (unless it was the last trial).
The information about the interval between vStart and tStart will be automatically fed into the PCF file (configures perturbation).
(For more detailed instructions on how to use audioGUI, see this article.)
When you are done with the last trial, a figure will pop up asking if you want to accept that durHold duration. The dots in the figure should be roughly below the line. If not, click "no" and redo the practice phase.
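The vStart-to-tStart interval fed into the PCF is simply the per-trial vowel duration. A minimal sketch of that bookkeeping (illustrative Python; the actual pipeline lives in the lab's MATLAB code, and the function name here is mine):

```python
def vowel_durations(events):
    """events: (vStart_s, tStart_s) pairs in seconds, one per trial.
    Returns the per-trial vowel durations and their mean, i.e. the kind of
    interval summary that gets written into the PCF."""
    durs = [t - v for v, t in events]
    return durs, sum(durs) / len(durs)

# two hand-corrected practice trials:
durs, mean_dur = vowel_durations([(0.30, 0.50), (0.30, 0.60)])
```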
"We'll now begin the main section, which will probably take about 10 minutes. Just like in the practice phase, you'll see a word on the screen, and then say that word like you normally would. Do you have any questions before we start?"
If no questions, "Whenever you're ready, you may begin."
Things to keep an eye on:
If you need to pause for any reason (other than adjusting OSTs), press the 'p' key on the keyboard. The experiment will pause at the top of the next trial loop.
C:\Users\Public\Documents\experiments\cerebTimeAdapt\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebTimeAdapt\acousticData
get_acoustSavePath('cerebTimeAdapt'). Copy the participant's folder into the path generated by get_acoustLoadPath('cerebTimeAdapt')
(At UW) Fill out the Lab Notebook on the server, located at \\wcs-cifs\wc\smng\admin\
As of 10/14/2022 there is no restart script for this experiment.
This experiment is part of the cerebellar battery run in 2022-2023.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This experiment uses formant clamping to simulate acceleration, deceleration, undershoot, and overshoot of the vowel /ai/.
This is very reliant on accurate OST tracking from Audapter. For this, we individualize OST parameters for each participant using an in-house GUI called audapter_viewer. Here is a video guide for how to use audapter_viewer. If you would like more information about the particular heuristics that are used for OST tracking, see this guide.
Note: You MUST use UW's version of Audapter (and accompanying Matlab code) for this!! Other versions do not have formant clamping. The experiment code does a hard check for the formant clamping before starting so you will find out quickly if your Audapter is not set up right.
Before running the participant, determine if they are a speaker with monophthongization of the target vowel or not. Speakers with monophthongization cannot participate in this experiment because it renders the manipulations null!
Monophthongization of /ai/ is a typical feature of Southern American English and Black English, though not all speakers of these dialects will necessarily have it (depending on their other linguistic experiences)
Monophthongization means the vowel in “buy” or “guide” will sound more like “bah” or “gahd”
If you cannot hear this specifically without looking at a spectrogram, you will get the opportunity to do that during the LPC order check.
Tell the participant: “This experiment has three shorter sections and then one long section. There will be breaks between sections while I set up the next part.”
UW:
if control, spXXX
If patient, caXXX
UCSF, UC-Berkeley:
Currently, the code looks for the substring ‘ca’ to identify patients. This can be changed to look for an additional condition if you have some other identifier in your own system.
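The patient/control check described above can be sketched as follows (illustrative Python; the real check lives in the MATLAB experiment code, and the 'unknown' branch marks where another site would add its own identifier):

```python
def classify_participant(pid):
    """Patients carry the substring 'ca' in their ID (caXXX);
    UW controls are spXXX. Illustrative only."""
    if 'ca' in pid:
        return 'patient'
    if pid.startswith('sp'):
        return 'control'
    return 'unknown'   # other sites: add your own identifier condition here

classify_participant('ca102')   # patient
classify_participant('sp045')   # control
```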
In this phase, participants will see words on the screen and say them out loud.
Tell the participant: “For this first section, you will see one word at a time appear on the screen. When you see the word on the screen, read it out loud, just like you would normally say it. You will be speaking into the microphone on the desk, and you will hear your own voice and some noise played back through the headphones. Do you have any questions?"
The participant will complete 30 trials, 10 trials per word (bod, bead, bide).
If you have not yet determined if the speaker has monophthongization, look at the formant trajectories in “bide” as they show up on the control screen.
The check_audapterLPC GUI will then come up. Use the GUI to find an appropriate LPC order for the participant.
If you still aren’t sure about the monophthongization, you can look at the formants again in this GUI.
You should coach them until they say the phrase in the right way: [sound examples of good productions: buy donuts example; guide tutors example]
It is important to use focus (emphasis) on the capitalized word so that it is long enough without being a very unnatural speech rate.
They should NOT put pauses between words, because pauses confound the experimental conditions (and make it difficult to automatically track the segments). It should be a smooth, slow-ish speech rate.
When they have gotten comfortable with saying the phrases, press the space bar to advance to the screen that gives them the general instructions. Tell them: “Okay, you can start whenever you are ready.”
They will read each phrase 9 times in random order
When they finish, tell them: “I am just going to make some measurements, so you can relax for a few minutes.”
After they have finished, audapter_viewer will open. Use audapter_viewer to set the OST parameters for the participant. [See guide on Audapter’s OST capabilities or how to use audapter_viewer]
Status 2: onset of /ai/
This is the most important status! This is the status that finds the beginning of the target vowel and thus the beginning of the perturbation.
This status should be rather robustly tracking the very beginning of the vowel, but if you need it to be a touch late to avoid accidental triggers at other points, that is okay. It should not be more than 50 ms late or so, however.
Status 4: start of /d/ in “guide” or “donuts”
This status is equally important! It finds the end of the vowel and thus the end of the perturbation.
You should try to get this status as close to the end of the vowel as possible, since /d/ usually has enough voicing such that Audapter tries to track formants through it.
When you are satisfied with the parameters, click “Continue and Exit”.
Click “Save and Exit”
Verify the folder you would like to save into
If you had to change anything from the default, it is HIGHLY RECOMMENDED to run the OST setting phase again to make sure that they work with new data (and thus that they can generalize to the participant’s speech)
If you have to repeat, tell the participant: “We’re just going to do that one more time so I can make sure everything is set up correctly.”
If everything was okay, audioGUI will then pop up for you to hand-correct four landmarks on all 18 trials. [See example of how to segment: buy donuts; guide tutors --- the full phrases are slightly different than the current version, but the segmentation of /ai/ is the same.]
Note: the segmentation can take a while so if you are comfortable with multitasking and you have the technological means (e.g. you are in the same room as them), you can make chitchat with them while you make adjustments
aiStart: beginning of vowel
Move this event to the beginning of the /ai/ in “buy”
a2iStart
Move this event to when F2 starts moving up towards the second quality in /ai/ in earnest.
iPlateauStart
Move this event to where F2 starts to reach the plateau (do not mark the peak—mark where the F2 trajectory starts to flatten out)
dStart
Move this event to where the /d/ closure starts. This should be where formant energy reduces; some voicing will almost certainly still be there.
Tell the participant:
“In this section, you will practice saying the phrases at a good speed. When you say each phrase, you will get some feedback about how fast you were talking. If you see a BLUE circle, it’ll tell you to speak a little faster. If you see a YELLOW circle, it will tell you to speak a little slower. If you see a GREEN circle, that means you were speaking at a good speed.”
Pause to confirm
“So if you are told to speak a little slower or a little faster, you don’t have to really change how you are speaking drastically. Keep speaking smoothly and clearly, and just adjust a little. So like if you said [speak quickly] “we BUY donuts now” and have to slow down, you can just say [speak more slowly] “we BUY donuts now”, you don’t have to put any extra pauses in or anything.”
Pause to confirm
They will do 10 trials (5 of each phrase).
Keep general track of how they do (usually too fast, usually too slow, usually good, etc.)
Keep an eye on the OSTs. The duration feedback is based on the OST values, so if they are not tracking correctly, the feedback will be off.
You will be given the option to repeat.
If you need to adjust the OSTs, you can do that, and then run again
Give general guidance on how fast to speak to the participant if necessary (referring to if they were generally fast/slow)
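The circle feedback maps the OST-measured phrase duration onto three bins. A sketch of that logic (illustrative Python; the lo/hi thresholds here are hypothetical, and the real limits are set in the experiment code):

```python
def speed_feedback(duration_s, lo=0.9, hi=1.3):
    """Map a measured phrase duration (from the OSTs) to the feedback circle.
    lo/hi are made-up limits for illustration, not the experiment's values."""
    if duration_s < lo:
        return 'yellow'   # short duration = spoke too fast -> speak a little slower
    if duration_s > hi:
        return 'blue'     # long duration = spoke too slowly -> speak a little faster
    return 'green'        # good speed

speed_feedback(1.1)   # within the window: green
```

Note this is also why bad OST tracking produces bad feedback: the duration it bins is whatever the OSTs measured.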
Tell the participant: “This is the last section. It will be just like the section you just did, but will last longer, about 20 minutes. There will be breaks every 20 trials. If you need to pause at another time, like to cough or to drink water, you can press p on the keyboard. Do you have any questions?”
During the experiment:
Keep an eye on their OST tracking. You can adjust mid-experiment if necessary by pressing ‘a’
To restart taimComp in the event of a crash:
This experiment is part of the cerebellar battery run in 2022-2023. For controls, it is in the SECOND session. For patients, it is in the THIRD session.
In this battery, participants come in for multiple sessions and do multiple experiments in a row. As such, this is a bare bones document on how to run the experiment. Procedures for consent, hearing screening, awareness surveys, general equipment set up, and payment are not included in this document. See the documents below for how these procedures are implemented in this multi-study session:
This is a perceptual experiment, using prefabricated tokens.
fullfile(get_exptLoadPath, 'cerebDurJND', 'stimuli')
Matlab command: run_cerebDurJND_expt
"In this experiment, you will be listening to three words, which will differ mainly in the duration of the vowel. You will then press a button on the keyboard to indicate if the second word sounded more similar to the first sound or the third sound. If you are not sure, just make your best guess."
At this point you can answer any questions they might have (there are also task reminders in each trial).
"You will start first with a practice phase, where we can make sure that the volume is okay and to get you used to the task. In order to move onto the full task, you will need to get at least 5 of the 6 practice trials correct."
This experiment starts with a practice phase so that participants can get used to how the task is run. The practice phase uses stimuli with very large intervals, so participants should be able to hear the difference. During practice, the participant will get feedback on if their answers were right or not. They will automatically move onto the full phase of the experiment once they get at least 5/6 practice trials correct.
Note: this experiment assumes that 100 ms is larger than the biggest JND among cerebellar patients, based on data from a different study (for these particular stimuli, it is 100 ms vs. 200 ms, so it is a very large proportional difference). However, if you get a participant who cannot complete the practice because they cannot hear the differences, please contact Robin right away so additional stimuli can be made and adjustments to the code installed. There is no infinite loop: if the participant fails the practice more than 5 times, you will be able to manually override.
Once they have passed the practice, they will automatically move to the main phase of the experiment. They will complete 100 trials or 30 reversals, whichever comes first. This lasts about 10 minutes.
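The stopping rule (100 trials or 30 reversals, whichever comes first) can be illustrated with a toy 1-up/1-down staircase (Python sketch; the step rule and names are assumptions, and only the stopping logic mirrors the description above):

```python
def run_staircase(answers, start=100, step=10, max_trials=100, max_reversals=30):
    """Toy 1-up/1-down staircase over a duration difference (ms).
    Stops after max_trials trials or max_reversals reversals, whichever is first."""
    level, last_dir, reversals, trials = start, 0, 0, 0
    for correct in answers:
        trials += 1
        direction = -1 if correct else 1          # correct -> smaller (harder) difference
        if last_dir and direction != last_dir:    # the tracked level changed direction
            reversals += 1
        last_dir = direction
        level = max(step, level + direction * step)
        if trials >= max_trials or reversals >= max_reversals:
            break
    return level, trials, reversals

run_staircase([True, True, False, True])   # a short run: ends after 4 trials
```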
C:\Users\Public\Documents\experiments\cerebDurJND\acousticData\. Copy the participant's folder into: \\wcs-cifs\wc\smng\experiments\cerebDurJND\acousticData
get_acoustSavePath('cerebDurJND'). Copy the participant's folder into the path generated by get_acoustLoadPath('cerebDurJND')
As of 9/23/2022 there is no restart script to resume this experiment from the middle.
Suggested time for break.
Equipment setup:
Last updated RPK 10/24/2022