Blueprint Model of Production, Pythonically

I try to implement the formal description of phonology and phonetics from Nelson and Heinz (2025) in Python.
Author

Josef Fruehwald

Published

August 12, 2025

Doi

10.59350/qh4z4-4hf23
I just saw a really interesting paper by Scott Nelson and Jeffrey Heinz (Nelson and Heinz 2025) that proposes a model of phonology and phonetics as complex function application that maintains a discrete phonology while also allowing for things like incomplete neutralization. I myself am only ever sort of able to follow formal notation, but I get a better understanding if I try rewriting it in a programming language of some sort. So I’m giving it a go here in Python.

Caveats
  1. I haven’t conferred with the authors about this. You should not construe anything I say here as being cosigned by them in any way!
  2. I’m not the most formally savvy guy out there. It’s possible I’m erroneously misconstruing their paper, but I am earnestly trying to be accurate.
  3. You should really just go read their paper.

My approach to the functions

In their paper they say

For every \(n\)-ary function, there is an equivalent (\(n+1\))-ary relation. Since phonology is a unary function (i.e., it has one input, a UR), it can also be envisioned as a binary relation consisting of UR and SR pairs \(\langle\) UR, SR \(\rangle\).

To capture that fact, and also to make my python functions operate in a way that I feel is principled, I’m going to have (almost) every function return a tuple of its input and output like so:

def my_fun(x:str) -> tuple[str, str]:
  return (x, x[0])

my_fun("hello!")
('hello!', 'h')
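To generalize that convention, here’s a small helper (my own addition, not from the paper) that wraps any unary function so it returns an \(\langle\) input, output \(\rangle\) pair:

```python
from typing import Callable, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")

def as_relation(f: Callable[[X], Y]) -> Callable[[X], tuple[X, Y]]:
    """Wrap a unary function so it returns an <input, output> pair."""
    def wrapped(x: X) -> tuple[X, Y]:
        return (x, f(x))
    return wrapped

first_char = as_relation(lambda s: s[0])
first_char("hello!")
# ('hello!', 'h')
```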

Typing

I’m also including typing on most of the functions as well. If you’re not familiar, x:str means that this function has one parameter x, and that parameter is a string. The -> tuple[str, str] part means that the function returns a tuple with two values, both of them strings. By including typing, I was able to rely on automatic type checking in my IDE to tell me if the values I was actually returning were what I thought I was returning. They eventually get a little illegible… but they’re still mechanically useful.
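For instance, a deliberately mismatched annotation like this hypothetical one is exactly the kind of thing a checker like mypy or Pyright will flag, even though Python itself runs it without complaint:

```python
def my_bad_fun(x: str) -> tuple[str, str]:
    # A type checker flags this line: the annotation promises a
    # tuple[str, str], but we actually return a bare str.
    return x[0]
```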

Getting started with the lexicon

Nelson & Heinz weren’t very explicit about what the structure of the lexicon was, and I think that’s because it doesn’t really matter. So I just need to make a decision: each item in the lexicon will be a list with two values. The first example they work through is word-final devoicing, so I’ll say the first value will be either "+" or "-" for [+voice] or [-voice]. The second value will be either "#" for “word final” or "." for “everything else”.

import numpy as np
import numpy.typing as npt
from typing import Callable

L = [
  ["+", "#"],
  ["-", "#"],
  ["+", "."],
  ["-", "."]
]

The Phonology

I’m going to write the phonology (P()) to map every underlying representation (UR) in the lexicon to its surface representation (SR). So I really want the function to look like P(L).

For the actual mapping of a specific UR to a specific SR, I’ll define a function internal to P() called neut(). I guess this neut() function is what’s “encapsulated” within P(), and it’s the closest to what feels familiar as a “phonological rule”.

def P(L:list[list[str]])->list[tuple[list[str], list[str]]]:
  """
  Map all URs to SRs
  """
  def neut(UR: list[str]) -> tuple[list[str], list[str]]:
    """
    Neutralize '+' to '-' when a '#' follows.
    """
    if UR[0] == "+" and UR[1] == "#":
      return (UR, ["-", "#"])
    return (UR, UR)
  return list(map(neut, L))

Here’s how the phonology looks when applied to the lexicon.

P(L)
[(['+', '#'], ['-', '#']),
 (['-', '#'], ['-', '#']),
 (['+', '.'], ['+', '.']),
 (['-', '.'], ['-', '.'])]
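One thing worth checking (my own sanity check, not from the paper) is that this binary relation really is still a function: each UR should map to exactly one SR. Using the pairs printed above:

```python
# The <UR, SR> pairs from P(L), copied from the output above.
pairs = [
    (['+', '#'], ['-', '#']),
    (['-', '#'], ['-', '#']),
    (['+', '.'], ['+', '.']),
    (['-', '.'], ['-', '.']),
]

# A relation is a function when no UR appears with two different SRs.
urs = [tuple(ur) for ur, sr in pairs]
len(urs) == len(set(urs))
# True
```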

Phonetics

The Phonetics (which they call ‘A’ in the paper) maps the Phonology (P) to phonetic targets. They get incomplete neutralization out of this by having both the UR and the SR mapped to targets.

Here’s where I needed to make some decisions I wasn’t sure about, specifically in what the output of A was. One possibility is that it should just be a list of the targets

\[ \left[\begin{array}{c} x_1, y_1\\ \ldots\\ x_i, y_i \end{array}\right] \]

But the input was a \(\langle\) UR, SR \(\rangle\) tuple, and in keeping with treating \(n\) ary functions as \((n+1)\)-ary relations, maybe the output should also be a list of tuples.

\[ \left[\begin{array}{ll} \langle\langle\text{UR},\text{SR}\rangle, & [x_1, y_1]\rangle\\ \ldots\\ \langle\langle\text{UR},\text{SR}\rangle, & [x_i, y_i]\rangle \end{array}\right] \]

I decided to go this second route, since it felt more principled than changing how these functions work midway.

I also had to make a few decisions about the internal functions of A(). Nelson & Heinz weren’t very specific about how the cue assignment worked, or whether the process was identical for URs and SRs. I feel like the model would get very unconstrained if it was different, so I wrote just one internal target() function that maps over UR, SR pairs. This target() function is most similar to what I think of as the “Phonology-Phonetics Interface”.

Then, inside the targeting function, I have an internal voicing() function that maps feature values to specific cue values. This is the most similar to what I think of as a “phonetic implementation rule”.

def A(P:list[tuple[list[str], list[str]]]) -> list[tuple[tuple[list[str],list[str]],npt.NDArray]]:
  """
  Assign targets to P(L)
  """
  def target(UR_SR:tuple[list[str], list[str]]) -> tuple[tuple[list[str],list[str]],npt.NDArray]:
    """
    Assign targets to each representation in <UR, SR>
    """
    def voicing(rep: list[str]) -> float|None:
      """
      Return cue value for each feature value.
      """
      if rep[0] == "+":
        return 2.0
      if rep[0] == "-":
        return 1.0
      
    return (UR_SR, np.array(list(map(voicing, UR_SR))))
  
  return list(map(target, P))

The typing is pretty incomprehensible, and we’re dealing with two levels of function embedding, but in the end we get the kind of output we’re looking for.

A(P(L))
[((['+', '#'], ['-', '#']), array([2., 1.])),
 ((['-', '#'], ['-', '#']), array([1., 1.])),
 ((['+', '.'], ['+', '.']), array([2., 2.])),
 ((['-', '.'], ['-', '.']), array([1., 1.]))]
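As a quick illustration (my own, not from the paper), we can pick out the items whose UR and SR targets differ; these are the neutralized forms whose underlying contrast the Intent function will be able to partially restore:

```python
import numpy as np

# The output of A(P(L)), copied from the printout above.
outputs = [
    ((['+', '#'], ['-', '#']), np.array([2., 1.])),
    ((['-', '#'], ['-', '#']), np.array([1., 1.])),
    ((['+', '.'], ['+', '.']), np.array([2., 2.])),
    ((['-', '.'], ['-', '.']), np.array([1., 1.])),
]

# Keep only the <UR, SR> pairs whose two targets disagree.
neutralized = [ur_sr for ur_sr, targets in outputs if targets[0] != targets[1]]
neutralized
# [(['+', '#'], ['-', '#'])]
```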

Intent

Finally, there’s the “Intent” (I) function that takes in the cue values from A() and weights & combines them according to an intention to keep underlying features distinct.

Now, a function that looks like I(A(P(L))) won’t do the trick, because the intention value varies. I could let I() take a second parameter like I(A(P(L)), i), but none of the other functions I’ve written so far have done that, so I don’t want to start now. Instead, I wrote an I_factory() function which returns a parameterized I() function. The I() function then weights and sums the target values to return the Phonetic Realization.

At this point, I decided that I wasn’t going to bother with returning a tuple of \(\langle\) input, output \(\rangle\) because

  1. This is the last step.
  2. The typing would be monstrous.
def I_factory(i:float = 0.0) -> Callable:
  """
  Return a parameterized Intent function
  """
  def I(A:list[tuple[tuple[list[str],list[str]],npt.NDArray]]) -> list[float]:
    """
    Map A() to a phonetic cue, according to distinctness intent.
    """
    weights = np.array([i, 1-i])
    def PR(ur_sr_target):
      """
      Weight and combine cues.
      """
      return np.dot(ur_sr_target[1], weights)
    return list(map(PR, A))
  return I

Here it is in action!

# Full Neutralization
I_factory(i = 0)(A(P(L)))
[np.float64(1.0), np.float64(1.0), np.float64(2.0), np.float64(1.0)]
# Incomplete Neutralization
I_factory(i = 0.1)(A(P(L)))
[np.float64(1.1), np.float64(1.0), np.float64(2.0), np.float64(1.0)]
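To see the gradience more directly, here’s a small sweep (my own exploration) over intent values for the neutralized item’s targets [2.0, 1.0], weighting them the same way I() does:

```python
import numpy as np

def realize(i: float, targets=np.array([2.0, 1.0])) -> float:
    """Weight the UR cue by i and the SR cue by (1 - i), as in I()."""
    return float(np.dot(targets, np.array([i, 1 - i])))

# As i grows, the realization drifts from the fully neutralized
# target (1.0) toward the underlying value (2.0).
[realize(i) for i in (0.0, 0.1, 0.5, 1.0)]
# [1.0, 1.1, 1.5, 2.0]
```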

Thoughts

A kind of interesting thing to note is that the Intention function only looked at and weighted the cues from A(), but the full \(\langle\) UR, SR \(\rangle\) tuples were also there. I don’t know what I() could have done with them, but they were right there.

Overall, even if I got things wrong, this has felt like an enlightening exercise. It’s definitely an approach to Phonology and Phonetics I’ll be noodling over for a bit.

References

Nelson, Scott, and Jeffrey Heinz. 2025. “The Blueprint Model of Production.” Phonology 42 (January): e12. https://doi.org/10.1017/S0952675725100055.

Reuse

CC-BY 4.0

Citation

BibTeX citation:
@online{fruehwald2025,
  author = {Fruehwald, Josef},
  title = {Blueprint {Model} of {Production,} {Pythonically}},
  series = {Væl Space},
  date = {2025-08-12},
  url = {https://jofrhwld.github.io/blog/posts/2025/08/2025-08-12_blueprint-phonology-in-python/},
  doi = {10.59350/qh4z4-4hf23},
  langid = {en}
}
For attribution, please cite this work as:
Fruehwald, Josef. 2025. “Blueprint Model of Production, Pythonically.” Væl Space. August 12, 2025. https://doi.org/10.59350/qh4z4-4hf23.