

Parselmouth – Praat in Python, the Pythonic way#
Parselmouth is a Python library for the Praat software.
Though other attempts have been made at porting functionality from Praat to Python, Parselmouth is unique in its aim to provide a complete and Pythonic interface to the internal Praat code. While other projects either wrap Praat's scripting language or reimplement parts of Praat's functionality in Python, Parselmouth directly accesses Praat's C/C++ code (which means the algorithms and their output are exactly the same as in Praat), provides efficient access to the program's data, and offers an interface that looks no different from any other Python library.
Please note that Parselmouth is currently in an early stage of development and being actively developed. While the amount of functionality currently present is not huge, more will be added over the next few months. As such, feedback and possibly contributions are highly appreciated.
Drop by our Gitter chat room or post a message to our Google discussion group if you have any questions, remarks, or requests!
Installation#
Basics#
Parselmouth can be installed like any other Python library, using (a recent version of) the Python package manager pip, on Linux, macOS, and Windows:
pip install praat-parselmouth
To update your installed version to the latest release, add -U (or --upgrade) to the command:
pip install -U praat-parselmouth
Warning
While the Python module itself is called parselmouth, the Parselmouth package on the Python Package Index has the name praat-parselmouth.
Note
To figure out if you can or should update, the version number of your current Parselmouth installation can be found in the parselmouth.VERSION variable. The version of Praat on which this version of Parselmouth is based and the release date of that Praat version are available as PRAAT_VERSION and PRAAT_VERSION_DATE, respectively.
Python distributions#
- Anaconda: If you use the Anaconda distribution of Python, you can use the same pip command in a terminal of the appropriate Anaconda environment, either activated through the Anaconda Navigator or the conda tool.
- Homebrew & MacPorts: We currently do not have Homebrew or MacPorts packages for Parselmouth. Normally, though, Parselmouth can just be installed with the pip accompanying these distributions.
- PyPy: Binary wheels for recent versions of PyPy are available on the Python Package Index (PyPI) and can be installed with pip.
- Other: For other distributions of Python, we expect that our package is compatible with the Python versions that are out there and that pip can handle the installation. If you are using yet another Python distribution, we are definitely interested in hearing about it, so that we can add it to this list!
PsychoPy#
As a Python library, Parselmouth can be used in a PsychoPy experiment. There are two different ways in which PsychoPy can be installed: it can be manually installed as a standard Python library, in which case Parselmouth can simply be installed next to it with pip. For Windows and Mac OS X, however, standalone versions of PsychoPy exist, and this software does not currently allow external libraries to be installed with pip.
To install Parselmouth in a standalone version of PsychoPy, the following script can be opened and run from within the PsychoPy Coder interface: psychopy_installation.py
Note
If running the script results in an error mentioning TLSV1_ALERT_PROTOCOL_VERSION, the version of PsychoPy/Python is too old and you will need to follow the manual instructions below.
Alternatively, you can follow these steps to manually install Parselmouth into a standalone version of PsychoPy:
1. Find out which version of Python PsychoPy is running. To do so, you can run import sys; print(sys.version_info) in the Shell tab of the PsychoPy Coder interface. Remember the first two numbers of the version (major and minor; e.g., 3.6).
   - On Windows, also run import platform; print(platform.architecture()[0]) and remember whether the Python executable's architecture is 32bit or 64bit.
2. Download the file praat_parselmouth-x.y.z-cpVV-cpVVm-AA.whl (for Windows) or praat_parselmouth-x.y.z-cpVV-cpVVm-macosx_10_6_intel.whl (for Mac OS X), where:
   - x.y.z will be the version of Parselmouth you want to install
   - VV are the first two numbers of the Python version
   - For Windows, AA is win32 if you have a 32bit architecture, and win_amd64 for 64bit
   Be sure to find the right file in the list, containing both the correct Python version and win32/win_amd64 (Windows) or macosx (Mac OS X) in its name!
3. Rename the downloaded file by replacing the .whl extension by .zip.
4. Extract this zip archive somewhere on your computer, in your directory of choice. Remember the name and location of the extracted folder that contains the file parselmouth.pyd (Windows) or parselmouth.so (Mac OS X).
5. Open PsychoPy, open the Preferences window, and go to the General tab.
6. In the General tab of the PsychoPy Preferences, in the paths field, add the folder where you just extracted the Parselmouth library to the list, enclosing the path in quote marks. (On Windows, also replace all \ characters by /.)
   - For example, if the list was empty ([]), you could make it look like ['C:/Users/Yannick/parselmouth-psychopy/'] or ['/Users/yannick/parselmouth-psychopy/'].
   - On Windows, to find the right location to enter in the PsychoPy settings, right-click parselmouth.pyd, choose Properties, and look at the Location field.
   - On Mac OS X, right-click parselmouth.so, choose Get Info, and look at the Where field. Dragging the folder into a terminal window will also give you the full path with slashes.
7. Click OK to save the PsychoPy settings, close the Preferences window, and restart PsychoPy.
8. Optional: if you want to check whether Parselmouth was installed correctly, open the PsychoPy Coder interface, open the Shell tab, and type import parselmouth.
   - If this results in an error message, please let us know, and we'll try to help you fix what went wrong!
   - If this does not give you an error, congratulations, you can now use Parselmouth in your PsychoPy Builder!
Troubleshooting#
It is possible that you run into more problems when trying to install or use Parselmouth. Supporting all of the different Python versions out there is not an easy job, as there are plenty of different platforms and setups.
If you run into problems and these common solutions are not solving them, please drop by the Gitter chat room, write a message in the Google discussion group, create a GitHub issue, or write me a quick email. We would be very happy to solve these problems, so that future users can avoid them!
Multiple Python versions#
In case you have multiple installations of Python and don't know which pip belongs to which Python version (looking at you, OS X):
python -m pip install praat-parselmouth
Finding out the exact location of the python executable (to call the previous command) for a certain Python installation can be done by typing the following lines in your Python interpreter:
>>> import sys
>>> print(sys.executable)
If executing this in your Python shell prints, for example, /usr/bin/python, then you would run /usr/bin/python -m pip install praat-parselmouth in a terminal to install Parselmouth. (-U can again be added to update an existing installation to the latest version.)
Combining these two approaches, you can install Parselmouth from within Python itself without knowing where that version of Python is installed:
>>> import sys, subprocess
>>> subprocess.call([sys.executable, '-m', 'pip', 'install', 'praat-parselmouth'])
Extra arguments to pip can be added by inserting them as strings into the list of arguments passed to subprocess.call (e.g., to update an existing installation of Parselmouth: [..., 'install', '-U', 'praat-parselmouth']).
Pip version#
If the standard way to install Parselmouth results in an error or takes a long time, try updating pip to the latest version (as pip needs to be a reasonably recent version to install the binary, precompiled wheels) by running
pip install -U pip
If you do not have pip installed, you can follow these instructions to install it: https://pip.pypa.io/en/stable/installing/
ImportError: DLL load failed on Windows#
Sometimes on Windows, the installation works, but importing Parselmouth fails with an error message saying ImportError: DLL load failed: The specified module could not be found.. This error is caused by some missing system files, but can luckily be solved quite easily by installing the "Microsoft Visual C++ Redistributable for Visual Studio 2019".
The “Microsoft Visual C++ Redistributable for Visual Studio 2019” installer can be downloaded from Microsoft’s website, listed under the “Other Tools and Frameworks” section. These are the direct download links to the relevant files:
For a 64-bit Python installation: https://aka.ms/vs/16/release/VC_redist.x64.exe
For a 32-bit Python installation: https://aka.ms/vs/16/release/VC_redist.x86.exe
To check which Python version you are using, you can look at the first line of output when starting a Python shell. The version information should contain [MSC v.xxxx 64 bit (AMD64)] in a 64-bit installation, or [MSC v.xxxx 32 bit (Intel)] in a 32-bit installation.
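If parsing that banner is inconvenient, a short standard-library snippet reports the same information (a sketch using only the stdlib struct and platform modules, nothing Parselmouth-specific):

```python
import struct
import platform

# Pointer size in bits: 32 for a 32-bit interpreter, 64 for a 64-bit one
bits = struct.calcsize('P') * 8
print(bits)

# platform.architecture() reports the same information as a string, e.g. '64bit'
print(platform.architecture()[0])
```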
Examples#
Parselmouth can be used in various contexts to combine Praat functionality with standard Python features or other Python libraries. The following examples give an idea of the range of possibilities:
Plotting#
Using Parselmouth, it is possible to use the existing Python plotting libraries – such as Matplotlib and seaborn – to make custom visualizations of the speech data and analysis results obtained by running Praat’s algorithms.
The following examples visualize an audio recording of someone saying “The north wind and the sun […]”: the_north_wind_and_the_sun.wav, extracted from a Wikipedia Commons audio file.
We start out by importing parselmouth, the common Python plotting libraries matplotlib and seaborn, and the numpy numeric library.
[1]:
import parselmouth
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
[2]:
sns.set() # Use seaborn's default style to make attractive graphs
plt.rcParams['figure.dpi'] = 100 # Show nicely large images in this notebook
Once we have the necessary libraries for this example, we open and read in the audio file and plot the raw waveform.
[3]:
snd = parselmouth.Sound("audio/the_north_wind_and_the_sun.wav")
snd is now a Parselmouth Sound object, and we can access its values and other properties to plot them with the common matplotlib Python library:
[4]:
plt.figure()
plt.plot(snd.xs(), snd.values.T)
plt.xlim([snd.xmin, snd.xmax])
plt.xlabel("time [s]")
plt.ylabel("amplitude")
plt.show() # or plt.savefig("sound.png"), or plt.savefig("sound.pdf")

It is also possible to extract part of the speech fragment and plot it separately. For example, let’s extract the word “sun” and plot its waveform with a finer line.
[5]:
snd_part = snd.extract_part(from_time=0.9, preserve_times=True)
[6]:
plt.figure()
plt.plot(snd_part.xs(), snd_part.values.T, linewidth=0.5)
plt.xlim([snd_part.xmin, snd_part.xmax])
plt.xlabel("time [s]")
plt.ylabel("amplitude")
plt.show()

Next, we can write a couple of ordinary Python functions to plot a Parselmouth Spectrogram
and Intensity
.
[7]:
def draw_spectrogram(spectrogram, dynamic_range=70):
    X, Y = spectrogram.x_grid(), spectrogram.y_grid()
    sg_db = 10 * np.log10(spectrogram.values)
    plt.pcolormesh(X, Y, sg_db, vmin=sg_db.max() - dynamic_range, cmap='afmhot')
    plt.ylim([spectrogram.ymin, spectrogram.ymax])
    plt.xlabel("time [s]")
    plt.ylabel("frequency [Hz]")

def draw_intensity(intensity):
    plt.plot(intensity.xs(), intensity.values.T, linewidth=3, color='w')
    plt.plot(intensity.xs(), intensity.values.T, linewidth=1)
    plt.grid(False)
    plt.ylim(0)
    plt.ylabel("intensity [dB]")
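To make the dynamic_range parameter of draw_spectrogram concrete, here is a small standalone sketch of the dB conversion and clipping threshold it uses (the power values are made up for illustration, not actual Praat output):

```python
import numpy as np

# Hypothetical spectral power values spanning several decades
power = np.array([1e-2, 1e-4, 1e-6, 1e-9])
sg_db = 10 * np.log10(power)   # roughly [-20, -40, -60, -90] dB

# With dynamic_range=70, pcolormesh's vmin clips everything more than
# 70 dB below the loudest value to the bottom of the colour map
vmin = sg_db.max() - 70
```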
After defining how to plot these, we use Praat (through Parselmouth) to calculate the spectrogram and intensity to actually plot the intensity curve overlaid on the spectrogram.
[8]:
intensity = snd.to_intensity()
spectrogram = snd.to_spectrogram()
plt.figure()
draw_spectrogram(spectrogram)
plt.twinx()
draw_intensity(intensity)
plt.xlim([snd.xmin, snd.xmax])
plt.show()

The Parselmouth functions and methods have the same arguments as the Praat commands, so we can for example also change the window size of the spectrogram analysis to get a narrow-band spectrogram. Next to that, let’s now have Praat calculate the pitch of the fragment, so we can plot it instead of the intensity.
[9]:
def draw_pitch(pitch):
    # Extract selected pitch contour, and
    # replace unvoiced samples by NaN to not plot
    pitch_values = pitch.selected_array['frequency']
    pitch_values[pitch_values==0] = np.nan
    plt.plot(pitch.xs(), pitch_values, 'o', markersize=5, color='w')
    plt.plot(pitch.xs(), pitch_values, 'o', markersize=2)
    plt.grid(False)
    plt.ylim(0, pitch.ceiling)
    plt.ylabel("fundamental frequency [Hz]")
[10]:
pitch = snd.to_pitch()
[11]:
# If desired, pre-emphasize the sound fragment before calculating the spectrogram
pre_emphasized_snd = snd.copy()
pre_emphasized_snd.pre_emphasize()
spectrogram = pre_emphasized_snd.to_spectrogram(window_length=0.03, maximum_frequency=8000)
[12]:
plt.figure()
draw_spectrogram(spectrogram)
plt.twinx()
draw_pitch(pitch)
plt.xlim([snd.xmin, snd.xmax])
plt.show()

Using the FacetGrid functionality from seaborn, we can even plot a structured grid of multiple custom spectrograms. For example, we will read a CSV file (using the pandas library) that contains the digit that was spoken, the ID of the speaker, and the file name of the audio fragment: digit_list.csv, 1_b.wav, 2_b.wav, 3_b.wav, 4_b.wav, 5_b.wav, 1_y.wav, 2_y.wav, 3_y.wav, 4_y.wav, 5_y.wav
[13]:
import pandas as pd
def facet_util(data, **kwargs):
    digit, speaker_id = data[['digit', 'speaker_id']].iloc[0]
    sound = parselmouth.Sound("audio/{}_{}.wav".format(digit, speaker_id))
    draw_spectrogram(sound.to_spectrogram())
    plt.twinx()
    draw_pitch(sound.to_pitch())
    # If not the rightmost column, then clear the right side axis
    if digit != 5:
        plt.ylabel("")
        plt.yticks([])
results = pd.read_csv("other/digit_list.csv")
grid = sns.FacetGrid(results, row='speaker_id', col='digit')
grid.map_dataframe(facet_util)
grid.set_titles(col_template="{col_name}", row_template="{row_name}")
grid.set_axis_labels("time [s]", "frequency [Hz]")
grid.set(facecolor='white', xlim=(0, None))
plt.show()

Batch processing of files#
Using the Python standard libraries (i.e., the glob and os modules), we can also quickly code up batch operations, e.g., over all files with a certain extension in a directory. For example, we can make a list of all .wav files in the audio directory, use Praat to pre-emphasize these Sound objects, and then write the pre-emphasized sound to a WAV and AIFF format file.
[1]:
# Find all .wav files in a directory, pre-emphasize and save as new .wav and .aiff file
import parselmouth
import glob
import os.path
for wave_file in glob.glob("audio/*.wav"):
    print("Processing {}...".format(wave_file))
    s = parselmouth.Sound(wave_file)
    s.pre_emphasize()
    s.save(os.path.splitext(wave_file)[0] + "_pre.wav", 'WAV') # or parselmouth.SoundFileFormat.WAV instead of 'WAV'
    s.save(os.path.splitext(wave_file)[0] + "_pre.aiff", 'AIFF')
Processing audio/1_y.wav...
Processing audio/2_y.wav...
Processing audio/1_b.wav...
Processing audio/5_b.wav...
Processing audio/5_y.wav...
Processing audio/3_y.wav...
Processing audio/bat.wav...
Processing audio/2_b.wav...
Processing audio/the_north_wind_and_the_sun.wav...
Processing audio/3_b.wav...
Processing audio/bet.wav...
Processing audio/4_b.wav...
Processing audio/4_y.wav...
After running this, the audio directory now contains all of the original .wav files pre-emphasized and written again as .wav and .aiff files. The reading, pre-emphasis, and writing are all done by Praat, while looping over all .wav files is done by standard Python code.
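The os.path.splitext pattern above builds the output names by stripping the extension; the same path manipulation can also be sketched with the standard library's pathlib module (pure path handling, no Parselmouth needed):

```python
from pathlib import Path

# Build the "_pre" output names the loop above produces, using pathlib
wav = Path("audio/the_north_wind_and_the_sun.wav")
pre_wav = wav.with_name(wav.stem + "_pre.wav")
pre_aiff = wav.with_name(wav.stem + "_pre.aiff")
print(pre_wav.name)   # the_north_wind_and_the_sun_pre.wav
```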
[2]:
# List the current contents of the audio/ folder
!ls audio/
1_b.wav 2_y_pre.aiff 4_b_pre.wav bat.wav
1_b_pre.aiff 2_y_pre.wav 4_y.wav bat_pre.aiff
1_b_pre.wav 3_b.wav 4_y_pre.aiff bat_pre.wav
1_y.wav 3_b_pre.aiff 4_y_pre.wav bet.wav
1_y_pre.aiff 3_b_pre.wav 5_b.wav bet_pre.aiff
1_y_pre.wav 3_y.wav 5_b_pre.aiff bet_pre.wav
2_b.wav 3_y_pre.aiff 5_b_pre.wav the_north_wind_and_the_sun.wav
2_b_pre.aiff 3_y_pre.wav 5_y.wav the_north_wind_and_the_sun_pre.aiff
2_b_pre.wav 4_b.wav 5_y_pre.aiff the_north_wind_and_the_sun_pre.wav
2_y.wav 4_b_pre.aiff 5_y_pre.wav
[3]:
# Remove the generated audio files again, to clean up the output from this example
!rm audio/*_pre.wav
!rm audio/*_pre.aiff
Similarly, we can use the pandas library to read a CSV file with data collected in an experiment, and loop over that data to e.g. extract the mean harmonics-to-noise ratio. The results CSV has the following structure:

| condition | … | pp_id |
|---|---|---|
| 0 | … | 1877 |
| 1 | … | 801 |
| 1 | … | 2456 |
| 0 | … | 3126 |
The following code would read such a table, loop over it, use Praat through Parselmouth to calculate the analysis of each row, and then write an augmented CSV file to disk. To illustrate we use an example set of sound fragments: results.csv, 1_b.wav, 2_b.wav, 3_b.wav, 4_b.wav, 5_b.wav, 1_y.wav, 2_y.wav, 3_y.wav, 4_y.wav, 5_y.wav
In our example, the original CSV file, results.csv, contains the following table:
[4]:
import pandas as pd
print(pd.read_csv("other/results.csv"))
condition pp_id
0 3 y
1 5 y
2 4 b
3 2 y
4 5 b
5 2 b
6 3 b
7 1 y
8 1 b
9 4 y
[5]:
def analyse_sound(row):
    condition, pp_id = row['condition'], row['pp_id']
    filepath = "audio/{}_{}.wav".format(condition, pp_id)
    sound = parselmouth.Sound(filepath)
    harmonicity = sound.to_harmonicity()
    return harmonicity.values[harmonicity.values != -200].mean()
# Read in the experimental results file
dataframe = pd.read_csv("other/results.csv")
# Apply parselmouth wrapper function row-wise
dataframe['harmonics_to_noise'] = dataframe.apply(analyse_sound, axis='columns')
# Write out the updated dataframe
dataframe.to_csv("processed_results.csv", index=False)
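The filter inside analyse_sound matters because Praat marks unvoiced frames in a Harmonicity object with the value -200; excluding them before averaging can be illustrated with made-up numbers (illustrative values only, not real analysis output):

```python
import numpy as np

# Illustrative harmonicity values; -200 marks unvoiced frames in Praat
values = np.array([-200.0, 15.0, 18.0, -200.0, 21.0])
mean_hnr = values[values != -200].mean()
print(mean_hnr)   # 18.0 -- the two -200 frames are excluded
```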
We can now have a look at the results by reading in the processed_results.csv file again:
[6]:
print(pd.read_csv("processed_results.csv"))
condition pp_id harmonics_to_noise
0 3 y 22.615414
1 5 y 16.403205
2 4 b 17.839167
3 2 y 21.054674
4 5 b 16.092489
5 2 b 12.378289
6 3 b 15.718858
7 1 y 16.704779
8 1 b 12.874451
9 4 y 18.431586
[7]:
# Clean up, remove the CSV file generated by this example
!rm processed_results.csv
Pitch manipulation and Praat commands#
Another common use of Praat functionality is to manipulate certain features of an existing audio fragment. For example, in the context of a perception experiment one might want to change the pitch contour of an existing audio stimulus while keeping the rest of the acoustic features the same. Parselmouth can then be used to access, from Python, the Praat algorithms that accomplish this.
Since this Praat Manipulation functionality has currently not been ported to Parselmouth's Python interface, we will need to use Parselmouth's interface to access raw Praat commands.
In this example, we will increase the pitch contour of an audio recording of the word “four”, 4_b.wav, by one octave. To do so, let’s start by importing Parselmouth and opening the audio file:
[1]:
import parselmouth
sound = parselmouth.Sound("audio/4_b.wav")
We can also listen to this audio fragment:
[2]:
from IPython.display import Audio
Audio(data=sound.values, rate=sound.sampling_frequency)
[2]:
However, now we want to use the Praat Manipulation functionality, but unfortunately, Parselmouth does not yet contain a Manipulation class and the necessary functionality is not directly accessible through the Sound object sound. To directly access the Praat commands conveniently from Python, we can make use of the parselmouth.praat.call function.
[3]:
from parselmouth.praat import call
manipulation = call(sound, "To Manipulation", 0.01, 75, 600)
[4]:
type(manipulation)
[4]:
parselmouth.Data
Note how we first pass in the object(s) that would be selected in Praat's object list. The next argument to this function is the name of the command as it would be used in a script or can be seen in the Praat user interface. Finally, the arguments to this command's parameters are passed to the function (in this case, Praat's default values for "Time step (s)", "Minimum pitch (Hz)", and "Maximum pitch (Hz)"). This call to parselmouth.praat.call will then return the result of the command as a Python type or Parselmouth object. In this case, a Praat Manipulation object would be created, so our function returns a parselmouth.Data object, as a parselmouth.Manipulation class does not exist in Parselmouth. However, we can still query the class name of the underlying Praat object:
[5]:
manipulation.class_name
[5]:
'Manipulation'
Next, we can continue using Praat functionality to further use this manipulation object, similar to how one would achieve this in Praat. Here, note how we can mix normal Python (e.g., integers and lists) together with the normal use of Parselmouth as a Python library (e.g., sound.xmin) as well as with the parselmouth.praat.call function.
[6]:
pitch_tier = call(manipulation, "Extract pitch tier")
call(pitch_tier, "Multiply frequencies", sound.xmin, sound.xmax, 2)
call([pitch_tier, manipulation], "Replace pitch tier")
sound_octave_up = call(manipulation, "Get resynthesis (overlap-add)")
[7]:
type(sound_octave_up)
[7]:
parselmouth.Sound
The last invocation of call resulted in a Praat Sound object being created and returned. Because Parselmouth knows that this type corresponds to a parselmouth.Sound Python object, the Python type of this object is not a parselmouth.Data. Rather, this object is now equivalent to the one we created at the start of this example. As such, we can use this new object normally, calling methods and accessing its contents. Let's listen and see if we succeeded in increasing the pitch by one octave:
[8]:
Audio(data=sound_octave_up.values, rate=sound_octave_up.sampling_frequency)
[8]:
And similarly, we could also for example save the sound to a new file.
[9]:
sound_octave_up.save("4_b_octave_up.wav", "WAV")
[10]:
Audio(filename="4_b_octave_up.wav")
[10]:
[11]:
# Clean up the created audio file again
!rm 4_b_octave_up.wav
We can of course also turn this combination of commands into a custom function, to be reused in later code:
[12]:
def change_pitch(sound, factor):
    manipulation = call(sound, "To Manipulation", 0.01, 75, 600)
    pitch_tier = call(manipulation, "Extract pitch tier")
    call(pitch_tier, "Multiply frequencies", sound.xmin, sound.xmax, factor)
    call([pitch_tier, manipulation], "Replace pitch tier")
    return call(manipulation, "Get resynthesis (overlap-add)")
Using Jupyter widgets, one can then change the audio file or the pitch change factor, and interactively hear how this sounds.
To try this for yourself, open an online, interactive version of this notebook on Binder! (see link at the top of this notebook)
[13]:
import ipywidgets
import glob
def interactive_change_pitch(audio_file, factor):
    sound = parselmouth.Sound(audio_file)
    sound_changed_pitch = change_pitch(sound, factor)
    return Audio(data=sound_changed_pitch.values, rate=sound_changed_pitch.sampling_frequency)

#w = ipywidgets.interact(interactive_change_pitch,
#                        audio_file=ipywidgets.Dropdown(options=sorted(glob.glob("audio/*.wav")), value="audio/4_b.wav"),
#                        factor=ipywidgets.FloatSlider(min=0.25, max=4, step=0.05, value=1.5))
PsychoPy experiments#
Parselmouth also allows Praat functionality to be included in an interactive PsychoPy experiment (refer to the subsection on installing Parselmouth for PsychoPy for detailed installation instructions for the PsychoPy graphical interface, the PsychoPy Builder). The following example shows how easily Python code that uses Parselmouth can be injected in such an experiment; following an adaptive staircase experimental design, at each trial of the experiment a new stimulus is generated based on the responses of the participant. See e.g. Kaernbach, C. (2001). Adaptive threshold estimation with unforced-choice tasks. Attention, Perception, & Psychophysics, 63, 1377–1388., or the PsychoPy tutorial at https://www.psychopy.org/coder/tutorial2.html.
In this example, we use an adaptive staircase experiment to determine the minimal amount of noise that makes the participant unable to distinguish between two audio fragments, “bat” and “bet” (bat.wav, bet.wav). At every iteration of the experiment, we want to generate a version of these audio files with a specific signal-to-noise ratio, of course using Parselmouth to do so. Depending on whether the participant correctly identifies whether the noisy stimulus was “bat” or “bet”, the noise level is then either increased or decreased.
As Parselmouth is just another Python library, using it from the PsychoPy Coder interface or from a standard Python script that imports the psychopy module is quite straightforward. However, PsychoPy also features a so-called Builder interface, which is a graphical interface to set up experiments with minimal or no coding. In this Builder, a user can create multiple experimental ‘routines’ out of different ‘components’ and combine them through ‘loops’, all of which can be configured graphically:
For our simple example, we create a single routine trial, with a Sound, a Keyboard, and a Text component. We also insert a loop of the type staircase around this routine, such that PsychoPy will take care of the actual implementation of the adaptive staircase loop. The full PsychoPy experiment, which can be opened in the Builder, can be downloaded here: adaptive_listening.psyexp
Finally, to customize the behavior of the trial routine and to be able to use Parselmouth inside the PsychoPy experiment, we add a Code component to the routine. This component will allow us to write Python code that interacts with the rest of the components and with the adaptive staircase loop. The Code component has different tabs that allow us to insert custom code at different points during the execution of our trial.
First, there is the Begin Experiment tab. The code in this tab is executed only once, at the start of the experiment. We use this to set up the Python environment, importing modules and initializing variables, and defining constants:
[1]:
# ** Begin Experiment **
import parselmouth
import numpy as np
import random
conditions = ['a', 'e']
stimulus_files = {'a': "audio/bat.wav", 'e': "audio/bet.wav"}
STANDARD_INTENSITY = 70.
stimuli = {}
for condition in conditions:
    stimulus = parselmouth.Sound(stimulus_files[condition])
    stimulus.scale_intensity(STANDARD_INTENSITY)
    stimuli[condition] = stimulus
The code in the Begin Routine tab is executed before the routine, so in our example, for every iteration of the surrounding staircase loop. This allows us to actually use Parselmouth to generate the stimulus that should be played to the participant during this iteration of the routine. To do this, we need to access the current value of the adaptive staircase algorithm: PsychoPy stores this in the Python variable level. For example, at some point during the experiment, this could be 10 (representing a signal-to-noise ratio of 10 dB):
[2]:
level = 10
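As an aside on what this level means acoustically (plain decibel arithmetic, not a Parselmouth or PsychoPy API): a 10 dB signal-to-noise ratio means the signal carries ten times the power of the noise.

```python
level = 10  # signal-to-noise ratio in dB

# dB is 10*log10 of a power ratio, so invert it:
power_ratio = 10 ** (level / 10)       # 10.0
# ...and amplitudes scale with the square root of power:
amplitude_ratio = 10 ** (level / 20)
print(round(amplitude_ratio, 3))       # 3.162
```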
To execute the code we want to put in the Begin Routine tab, we need to add a few variables that would normally be made available by the PsychoPy Builder:
[3]:
# 'filename' variable is also set by PsychoPy and contains base file name of saved log/output files
filename = "data/participant_staircase_23032017"

# PsychoPy also creates a Trials object, containing e.g. information about the current iteration of the loop
# So let's quickly fake this in this example, such that the code can be executed without errors
# In PsychoPy this would be a `psychopy.data.TrialHandler` (https://www.psychopy.org/api/data.html#psychopy.data.TrialHandler)
class MockTrials:
    def addResponse(self, response):
        print("Registering that this trial was {}successful".format("" if response else "un"))
trials = MockTrials()
trials.thisTrialN = 5 # We only need the 'thisTrialN' attribute of the 'trials' variable

# The Sound component can also be accessed by its name, so let's quickly mock that as well
# In PsychoPy this would be a `psychopy.sound.Sound` (https://www.psychopy.org/api/sound.html#psychopy.sound.Sound)
class MockSound:
    def setSound(self, file_name):
        print("Setting audio file of Sound component to '{}'".format(file_name))
sound_1 = MockSound()

# And the same for our Keyboard component, `key_resp_2`:
class MockKeyboard:
    pass
key_resp_2 = MockKeyboard()

# Finally, let's also seed the random module to have a consistent output across different runs
random.seed(42)
[4]:
# Let's also create the directory where we will store our example output
!mkdir data
Now, we can execute the code that would be in the Begin Routine tab:
[5]:
# ** Begin Routine **
random_condition = random.choice(conditions)
random_stimulus = stimuli[random_condition]
noise_samples = np.random.normal(size=random_stimulus.n_samples)
noisy_stimulus = parselmouth.Sound(noise_samples,
                                   sampling_frequency=random_stimulus.sampling_frequency)
noisy_stimulus.scale_intensity(STANDARD_INTENSITY - level)
noisy_stimulus.values += random_stimulus.values
noisy_stimulus.scale_intensity(STANDARD_INTENSITY)
# use 'filename' to save our custom stimuli
stimulus_file_name = filename + "_stimulus_" + str(trials.thisTrialN) + ".wav"
noisy_stimulus.resample(44100).save(stimulus_file_name, 'WAV')
sound_1.setSound(stimulus_file_name)
Setting audio file of Sound component to 'data/participant_staircase_23032017_stimulus_5.wav'
Let’s listen to the file we have just generated and that we would play to the participant:
[6]:
from IPython.display import Audio
Audio(filename="data/participant_staircase_23032017_stimulus_5.wav")
[6]:
In this example, we do not really need to have code executed during the trial (i.e., in the Each Frame tab). However, at the end of the trial, we need to inform the PsychoPy staircase loop whether the participant was correct or not, because this will affect the further execution of the adaptive staircase, and thus the value of the level variable set by PsychoPy. For this we add a final line in the End Routine tab. Let's say the participant guessed "bat" and pressed the a key:
[7]:
key_resp_2.keys = 'a'
The End Routine tab then contains the following code to check the participant's answer against the randomly chosen condition, and to inform the trials object of whether the participant was correct:
[8]:
# ** End Routine **
trials.addResponse(key_resp_2.keys == random_condition)
Registering that this trial was successful
[9]:
# Clean up the output directory again
!rm -r data
Web service#
Since Parselmouth is a normal Python library, it can also easily be used within the context of a web server. There are several Python frameworks that allow you to quickly set up a web server or web service. In this example, we will use Flask to show how easily one can set up a web service that uses Parselmouth to access Praat functionality such as the pitch track estimation algorithms. This functionality can then be accessed by clients without requiring Praat, Parselmouth, or even Python to be installed, for example within the context of an online experiment.
All that is needed to set up the most basic web server in Flask is a single file. We adapt the standard Flask example to accept a sound file, access Parselmouth's Sound.to_pitch, and then send back the list of pitch track frequencies. Note that apart from saving the file that was sent in the HTTP request and encoding the resulting list of frequencies in JSON, the Python code of the pitch_track function is the same as one would write in a normal Python script using Parselmouth.
[1]:
%%writefile server.py
from flask import Flask, request, jsonify
import tempfile
app = Flask(__name__)
@app.route('/pitch_track', methods=['POST'])
def pitch_track():
import parselmouth
# Save the file that was sent, and read it into a parselmouth.Sound
with tempfile.NamedTemporaryFile() as tmp:
tmp.write(request.files['audio'].read())
sound = parselmouth.Sound(tmp.name)
# Calculate the pitch track with Parselmouth
pitch_track = sound.to_pitch().selected_array['frequency']
# Convert the NumPy array into a list, then encode as JSON to send back
return jsonify(list(pitch_track))
Writing server.py
Normally, we can then run the server by typing FLASK_APP=server.py flask run
on the command line, as explained in the Flask documentation. Do note, however, that to run this server publicly, in a secure way, and as part of a bigger setup, other deployment options are available; refer to the Flask deployment documentation.
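As one hedged sketch of such a deployment (assuming the gunicorn WSGI server is installed, that our file is still named server.py, and that the Flask application object is named app, as in the code above), the service could be served with:

```
# Serve the Flask app with 4 worker processes behind gunicorn
gunicorn -w 4 -b 127.0.0.1:8000 server:app
```

The exact command and options depend on the chosen WSGI server and setup; the Flask deployment documentation discusses the available alternatives.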
However, to run the server from this Jupyter notebook and still be able to run the other cells that access the functionality on the client side, the following code will start the server in a separate thread and print the output of the running server.
[2]:
import os
import subprocess
import sys
import time
# Start a subprocess that runs the Flask server
p = subprocess.Popen([sys.executable, "-m", "flask", "run"],
                     env=dict(**os.environ, FLASK_APP="server.py"),
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Start two subthreads that forward the output from the Flask server to the output of the Jupyter notebook
def forward(i, o):
while p.poll() is None:
l = i.readline().decode('utf-8')
if l:
o.write("[SERVER] " + l)
import threading
threading.Thread(target=forward, args=(p.stdout, sys.stdout)).start()
threading.Thread(target=forward, args=(p.stderr, sys.stderr)).start()
# Let's give the server a bit of time to make sure it has started
time.sleep(2)
[SERVER] * Serving Flask app 'server.py'
[SERVER] * Debug mode: off
[SERVER] WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
[SERVER] * Running on http://127.0.0.1:5000
[SERVER] Press CTRL+C to quit
Now that the server is up and running, we can make a standard HTTP request to this web service. For example, we can send a Wave file with an audio recording of someone saying “The north wind and the sun […]”: the_north_wind_and_the_sun.wav, extracted from a Wikipedia Commons audio file.
[3]:
from IPython.display import Audio
Audio(filename="audio/the_north_wind_and_the_sun.wav")
[3]:
To do so, we use the requests library in this example, but we could use any library to send a standard HTTP request.
[4]:
import requests
import json
# Load the file to send
files = {'audio': open("audio/the_north_wind_and_the_sun.wav", 'rb')}
# Send the HTTP request and get the reply
reply = requests.post("http://127.0.0.1:5000/pitch_track", files=files)
# Extract the text from the reply and decode the JSON into a list
pitch_track = json.loads(reply.text)
print(pitch_track)
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 245.46350823786898, 228.46732333120045, 220.229881904913, 217.9494117767135, 212.32120094882643, 208.42371077564596, 213.3210292245136, 219.22164169979897, 225.08564349338334, 232.58018420251648, 243.6102854675347, 267.9586673940531, 283.57192373203253, 293.09087794771966, 303.9716558501677, 314.16812500255537, 320.11744147538917, 326.34395013825196, 333.3632387299925, 340.0277922275489, 345.8240749033839, 348.57743419008335, 346.9665344057159, 346.53179321965666, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 445.1355539937184, 442.99367847432956, 0.0, 0.0, 0.0, 0.0, 0.0, 236.3912949256524, 233.77304383699934, 231.61759183978316, 229.252937317608, 226.5388725505901, 223.6713912521482, 217.56247158178041, 208.75233223541412, 208.36854272051312, 205.1132684638252, 202.99628328370704, 200.74245529822406, 198.379243723561, 195.71387722456126, 192.92640662381228, 189.55087006373063, 186.29856999154498, 182.60612897184708, 178.0172095327713, 171.7286500573546, 164.43397092360505, 163.15047735066148, 190.94898597265222, 180.11404296436555, 177.42215658133307, 176.85852955755865, 175.90234348007218, 172.72381274834703, 165.07291074214982, 170.84308758689093, 173.84326581969435, 175.39817924857263, 174.73813404735137, 171.30666910901442, 167.57344824865035, 165.26925804867895, 164.0488248694515, 163.3665771538607, 162.9182321154844, 164.4049979046003, 164.16734205916592, 160.17875848111373, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 163.57343758482958, 160.63654708070163, 150.27906547408838, 143.6142724404569, 139.70737167424176, 138.15535972924215, 137.401926952887, 137.45520345586323, 136.78723483908712, 135.18334597312617, 132.3066180187801, 136.04747210818914, 138.65745092917942, 139.1335736781387, 140.238485464634, 141.83711308294014, 143.10991285599226, 144.40501561368708, 146.07295382762607, 147.47513524525806, 148.1692013818143, 149.54122031709116, 151.0336292203337]
[SERVER] 127.0.0.1 - - [14/Aug/2023 22:09:33] "POST /pitch_track HTTP/1.1" 200 -
Since we used Python’s standard json
library to decode the reply from the server, pitch_track
is now a normal list of floats, and we can for example plot the estimated pitch track:
[5]:
import matplotlib.pyplot as plt
import seaborn as sns
[6]:
sns.set() # Use seaborn's default style to make attractive graphs
plt.rcParams['figure.dpi'] = 100 # Show nicely large images in this notebook
[7]:
plt.figure()
plt.plot([float('nan') if x == 0.0 else x for x in pitch_track], '.')
plt.show()

Refer to the examples on plotting for more details on using Parselmouth for plotting.
Importantly, Parselmouth is thus only needed on the server; the client only needs to be able to send a request and read the reply. Consequently, we could even use a different programming language on the client side. For example, one could build an HTML page with JavaScript to make the request and do something with the reply:
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="plotly.min.js"></script>
<script type="text/javascript">
var update_plot = function() {
var audio = document.getElementById("audio").files[0];
var formData = new FormData();
formData.append("audio", audio);
$.getJSON({url: "http://127.0.0.1:5000/pitch_track", method: "POST",
data: formData, processData: false, contentType: false,
success: function(data){
Plotly.newPlot("plot", [{ x: [...Array(data.length).keys()],
y: data.map(function(x) { return x == 0.0 ? undefined : x; }),
type: "lines" }]);}});
};
</script>
</head>
<body>
<form onsubmit="update_plot(); return false;">
<input type="file" name="audio" id="audio" />
<input type="submit" value="Get pitch track" />
<div id="plot" style="width:1000px;height:600px;"></div>
</form>
</body>
Again, one thing to take into account is the security of running such a web server. Apart from deploying the Flask server in a secure and performant way, one extra thing is needed to circumvent a standard security feature of the browser: without handling Cross-Origin Resource Sharing (CORS) on the server, the JavaScript code on the client side will not be able to access the web service’s reply. The Flask-CORS extension exists to handle this, however, and we refer to its documentation for further details.
[8]:
# Let's shut down the server
p.kill()
[9]:
# Cleaning up the file that was written to disk
!rm server.py
Projects using Parselmouth#
The following projects provide larger, real-life examples and demonstrate the use of Parselmouth:
The my-voice-analysis and myprosody projects by Shahab Sabahi (@Shahabks) provide Python libraries for voice analysis and acoustical statistics, interfacing Python to his previously developed Praat scripts.
David R. Feinberg (@drfeinberg) has written multiple Python scripts and programs with Parselmouth to analyse properties of speech recordings:
Praat Scripts is a collection of Praat scripts used in research, translated into Python.
Voice Lab Software is a GUI application to measure and manipulate voices.
Note
If you have a project using Parselmouth that could be useful for others, and want to add it to this list, do let us know on Gitter!
API Reference#
Parselmouth consists of two main modules, parselmouth
and parselmouth.praat
, though both modules are imported when importing parselmouth
.
parselmouth: Main module with a Python interface to Praat.
parselmouth.praat: Submodule with functions to call Praat commands and run Praat scripts.
Citing Parselmouth#
A manuscript introducing Parselmouth (and supplementary material) has been published in the Journal of Phonetics. Scientific work and publications can for now cite Parselmouth in the following way:
Jadoul, Y., Thompson, B., & de Boer, B. (2018). Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71, 1-15. https://doi.org/10.1016/j.wocn.2018.07.001
@article{parselmouth,
author = "Yannick Jadoul and Bill Thompson and Bart de Boer",
title = "Introducing {P}arselmouth: A {P}ython interface to {P}raat",
journal = "Journal of Phonetics",
volume = "71",
pages = "1--15",
year = "2018",
doi = "https://doi.org/10.1016/j.wocn.2018.07.001"
}
Since Parselmouth exposes existing Praat functionality and algorithm implementations, we suggest also citing Praat when using Parselmouth in scientific research:
Boersma, P., & Weenink, D. (2021). Praat: doing phonetics by computer [Computer program]. Version 6.1.38, retrieved 2 January 2021 from http://www.praat.org/
@misc{praat,
author = "Paul Boersma and David Weenink",
title = "{P}raat: doing phonetics by computer [{C}omputer program]",
howpublished = "Version 6.1.38, retrieved 2 January 2021 \url{http://www.praat.org/}",
year = "2021"
}