Description

Process Text and Compute Linguistic Alignment in Conversation Transcripts.

Imports conversation transcripts into R, concatenates them into a single dataframe appending event identifiers, cleans and formats the text, then yokes user-specified psycholinguistic database values to each word. 'ConversationAlign' then computes alignment indices between two interlocutors across each transcript for >40 possible semantic, lexical, and affective dimensions. In addition to alignment, 'ConversationAlign' produces a summary table of analytics (e.g., token count, type-token ratio) describing your particular text corpus.




ConversationAlign

Open-source software for computing main effects and indices of alignment across conversation partners in dyadic conversation transcripts.

ConversationAlign website


*Illustration of the processing pipeline used by ConversationAlign*

Overview

ConversationAlign analyzes alignment and computes main effects between interlocutors (conversation partners) engaged in two-person conversations, transforming raw language data into simultaneous time series objects across more than 40 semantic, lexical, and affective dimensions via an embedded lookup database. Before running it, there are a number of issues you should consider and steps you should take to prepare your data.

License

ConversationAlign is licensed under the GNU LGPL v3.0.

Installation and Technical Considerations

One of the main features of the ConversationAlign algorithm involves yoking norms for many different lexical, affective, and semantic dimensions to each content word in your conversation transcripts. We accomplish this by joining your data to several large lookup databases. These databases are too large to embed within ConversationAlign, so when you load the package they are automatically downloaded from an external companion repository, ConversationAlign_Data. ConversationAlign needs these data, so you will need a working internet connection when loading the package; the download may take a moment if GitHub is slow. Install the development version of ConversationAlign from GitHub using the devtools package.

# Check if devtools is installed, if not install it
if (!require("devtools", quietly = TRUE)) {
  install.packages("devtools")
}

# Load devtools
library(devtools)

# Check if ConversationAlign is installed, if not install from GitHub
if (!require("ConversationAlign", quietly = TRUE)) {
  devtools::install_github("Reilly-ConceptsCognitionLab/ConversationAlign")
}

# Load ConversationAlign
library(ConversationAlign)

Step 1: Read and Format Transcript Options

read_dyads()

  • Reads transcripts from a local drive or directory of your choice.
  • Store each individual conversation transcript (.csv, .txt, .ai) that you wish to concatenate into a corpus in a single folder. ConversationAlign will search for a folder called my_transcripts in the same directory as your script, but feel free to name your folder anything you like; you can specify a custom path as an argument to read_dyads().
  • Each transcript must minimally contain two columns of data (Participant and Text). All other columns (e.g., metadata) will be retained.
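If you are unsure what a conforming transcript looks like, here is a minimal sketch in base R (the file name, folder, and utterances are invented for illustration):

```r
# Minimal example of the expected transcript format: one row per talk turn,
# with a Participant column and a Text column (extra metadata columns are fine).
transcript <- data.frame(
  Participant = c("MARON", "GROSS", "MARON"),
  Text = c("I'm a little nervous",
           "Being self-defeating is always a good part of preparation",
           "What is?"),
  stringsAsFactors = FALSE
)

# Save it as a .csv inside a 'my_transcripts' folder so read_dyads()
# can find it with its default settings
dir.create("my_transcripts", showWarnings = FALSE)
write.csv(transcript, file.path("my_transcripts", "maron_gross.csv"),
          row.names = FALSE)
```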

Arguments to read_dyads:

  • my_path default is ‘my_transcripts’; change the path to point to your own folder
#will search for folder 'my_transcripts' in your current directory
MyConvos <- read_dyads()

#will scan custom folder called 'MyStuff' in your current directory, concatenating all files in that folder into a single dataframe
MyConvos2 <- read_dyads(my_path='/MyStuff')

read_1file()

  • Reads a single transcript already in your R environment. We will use read_1file() to prep the Marc Maron and Terry Gross transcript. Note how the column headers have changed and the object name (MaronGross_2013) is now the Event_ID (a document identifier).

Arguments to read_1file:

  • my_dat object already in your R environment containing text and speaker information.
MaronGross_Prepped <- read_1file(MaronGross_2013)
# print the first ten rows of the original transcript
knitr::kable(head(MaronGross_2013, 10), format = "pipe")
| speaker | text |
|---------|------|
| MARON | I’m a little nervous but I’ve prepared I’ve written things on a piece of paper |
| MARON | I don’t know how you prepare I could ask you that - maybe I will But this is how I prepare - I panic |
| MARON | For a while |
| GROSS | Yeah |
| MARON | And then I scramble and then I type some things up and then I handwrite things that are hard to read So I can you know challenge myself on that level during the interview |
| GROSS | Being self-defeating is always a good part of preparation |
| MARON | What is? |
| GROSS | Being self-defeating |
| MARON | Yes |
| GROSS | Self-sabotage |



Step 2: Clean, Format, Align Norms

prep_dyads()

  • Cleans, formats, and vectorizes conversation transcripts into a one-word-per-row format
  • Yokes psycholinguistic norms for up to three dimensions at a time (from >40 possible dimensions) to each content word
  • Retains metadata

Arguments to prep_dyads():

  • dat_read name of the dataframe created during read_dyads()
  • omit_stops T/F (default=TRUE) option to remove stopwords
  • lemmatize T/F (default=TRUE) lemmatize strings, converting each entry to its dictionary form
  • which_stoplist quoted argument specifying the stopword list to apply; options are "none", "MIT_stops", "SMART_stops", "CA_OriginalStops", or "Temple_stops25". Default is "Temple_stops25".
NurseryRhymes_Prepped <- prep_dyads(dat_read=NurseryRhymes, lemmatize=TRUE, omit_stops=T, which_stoplist="Temple_stops25")

Example of a prepped dataset embedded as external data in the package with ‘anger’ values yoked to each word.

knitr::kable(head(NurseryRhymes_Prepped, 10), format = "simple", digits=2)
| Event_ID | Participant_ID | Exchange_Count | Turn_Count | Text_Prep | Text_Clean | emo_anger |
|----------|----------------|----------------|------------|-----------|------------|-----------|
| ItsySpider | Yin | 1 | 1 | the | NA | NA |
| ItsySpider | Yin | 1 | 1 | itsy | itsy | -0.02 |
| ItsySpider | Yin | 1 | 1 | bitsy | bitsy | -0.02 |
| ItsySpider | Yin | 1 | 1 | spider | spider | 0.04 |
| ItsySpider | Yin | 1 | 1 | climbed | climb | -0.09 |
| ItsySpider | Yin | 1 | 1 | up | up | -0.06 |
| ItsySpider | Yin | 1 | 1 | the | NA | NA |
| ItsySpider | Yin | 1 | 1 | water | water | -0.17 |
| ItsySpider | Yin | 1 | 1 | spout | spout | 0.05 |
| ItsySpider | Maya | 1 | 2 | down | down | 0.03 |



Step 3: Summarize Data, Alignment Stats

summarize_dyads()

This is the computational stage where the package generates a dataframe boiled down to two rows per conversation (one per level of Participant_ID), with summary data appended. It returns the difference time series AUC (dAUC) for every variable of interest you specified, plus turn-by-turn correlations at lags -2, 0, and 2. You decide whether the lagged correlations are Pearson or Spearman.
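To build intuition for the dAUC, here is a minimal conceptual sketch in base R (not the package's actual implementation; the toy values are invented): take two interlocutors' turn-by-turn scores on one dimension, compute the absolute difference at each turn, and integrate that difference series over turns.

```r
# Conceptual sketch of a difference time series AUC (dAUC).
# Smaller AUC = the two speakers track each other more closely on that dimension.
speaker_a <- c(0.10, 0.25, 0.30, 0.20)   # e.g., emo_anger averaged per turn
speaker_b <- c(0.05, 0.20, 0.35, 0.15)

diff_series <- abs(speaker_a - speaker_b)
turns <- seq_along(diff_series)

# Trapezoidal integration of the difference series over the turn index
dAUC <- sum(diff(turns) * (head(diff_series, -1) + tail(diff_series, -1)) / 2)
dAUC  # 0.15 for this toy example
```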

Arguments to summarize_dyads():

  • df_prep dataframe created by prep_dyads() function
  • custom_lags user specifies a custom set of turn-lags. Default is NULL with ConversationAlign producing correlations at a lead of 2 turns, immediate response, and lag of 2 turns for each dimension of interest.
  • sumdat_only default is TRUE, producing a grouped summary dataframe with averages by conversation and participant for each alignment dimension; FALSE retains all of the original rows, filling down the summary statistics for each conversation (e.g., AUC)
  • corr_type specifies the correlation model for computing turn-by-turn correlations across interlocutors for each dimension of interest (parametric default = ‘Pearson’; other option ‘Spearman’).
MarySumDat <- summarize_dyads(df_prep = NurseryRhymes_Prepped, custom_lags=NULL, sumdat_only = TRUE, corr_type='Pearson') 
colnames(MarySumDat)
#>  [1] "Event_ID"           "Participant_ID"     "Dimension"         
#>  [4] "Dimension_Mean"     "AUC_raw"            "AUC_scaled100"     
#>  [7] "Talked_First"       "TurnCorr_Lead2"     "TurnCorr_Immediate"
#> [10] "TurnCorr_Lag2"
knitr::kable(head(MarySumDat, 10), format = "simple", digits = 3)
| Event_ID | Participant_ID | Dimension | Dimension_Mean | AUC_raw | AUC_scaled100 | Talked_First | TurnCorr_Lead2 | TurnCorr_Immediate | TurnCorr_Lag2 |
|----------|----------------|-----------|----------------|---------|---------------|--------------|----------------|--------------------|---------------|
| ItsySpider | Maya | emo_anger | 0.001 | 0.783 | 1.630 | Yin | -1 | -1 | -1 |
| ItsySpider | Yin | emo_anger | -0.033 | 0.783 | 1.630 | Yin | -1 | -1 | -1 |
| JackJill | Ana | emo_anger | -0.066 | 3.729 | 4.662 | Franklin | 1 | 1 | 1 |
| JackJill | Franklin | emo_anger | 0.030 | 3.729 | 4.662 | Franklin | 1 | 1 | 1 |
| LittleLamb | Dave | emo_anger | -0.001 | 1.486 | 1.486 | Mary | NA | NA | NA |
| LittleLamb | Mary | emo_anger | -0.031 | 1.486 | 1.486 | Mary | NA | NA | NA |
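The lagged correlations above can be understood with a small sketch (again conceptual, not the package's internal code; `lagged_cor` is a hypothetical helper written for illustration): correlate one speaker's turn series with the other's series shifted by a given number of turns.

```r
# Conceptual sketch of a turn-lagged correlation between two interlocutors.
# A positive lag asks: does speaker B's series follow speaker A's, `lag` turns later?
lagged_cor <- function(a, b, lag = 0, method = "pearson") {
  n <- length(a)
  if (lag >= 0) {
    cor(a[1:(n - lag)], b[(1 + lag):n], method = method)
  } else {
    cor(a[(1 - lag):n], b[1:(n + lag)], method = method)
  }
}

a <- c(0.1, 0.3, 0.2, 0.5, 0.4, 0.6)
b <- c(0.2, 0.1, 0.3, 0.2, 0.5, 0.4)  # roughly follows a, one turn behind

lagged_cor(a, b, lag = 0)   # immediate (same-turn) correlation
lagged_cor(a, b, lag = 1)   # exactly 1 here: b reproduces a one turn later
```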

Optional: Generate corpus analytics

corpus_analytics()

It is often critical to produce descriptive/summary statistics characterizing your language sample, which is typically a laborious process. corpus_analytics() does it for you, generating a near-publication-ready table of analytics that you can easily export to the specific journal format of your choice using any number of packages, such as flextable or tinytable.

Arguments to corpus_analytics():

  • dat_prep dataframe created by the prep_dyads() function
NurseryRhymes_Analytics <-  corpus_analytics(dat_prep=NurseryRhymes_Prepped)
knitr::kable(head(NurseryRhymes_Analytics, 10), format = "simple", digits = 2)
| measure | mean | stdev | min | max |
|---------|------|-------|-----|-----|
| total number of conversations | 3.00 | NA | NA | NA |
| token count all conversations (raw) | 1506.00 | NA | NA | NA |
| token count all conversations (post-cleaning) | 1032.00 | NA | NA | NA |
| exchange count (by conversation) | 38.00 | 13.11 | 24.00 | 50.00 |
| word count raw (by conversation) | 502.00 | 47.03 | 456.00 | 550.00 |
| word count clean (by conversation) | 344.00 | 48.66 | 312.00 | 400.00 |
| cleaning retention rate (by conversation) | 0.68 | 0.04 | 0.64 | 0.73 |
| morphemes-per-word (by conversation) | 1.00 | 0.00 | 1.00 | 1.00 |
| letters-per-word (by conversation) | 4.22 | 0.14 | 4.12 | 4.38 |
| lexical frequency lg10 (by conversation) | 3.67 | 0.18 | 3.48 | 3.84 |
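One lightweight, package-agnostic way to get the analytics table out of R is a plain .csv export, which flextable, tinytable, Word, or Excel can all ingest (the `analytics` dataframe below is a stand-in for the output of corpus_analytics(); its rows are copied from the example above):

```r
# Stand-in for the dataframe returned by corpus_analytics()
analytics <- data.frame(
  measure = c("total number of conversations",
              "exchange count (by conversation)"),
  mean  = c(3.00, 38.00),
  stdev = c(NA, 13.11),
  min   = c(NA, 24.00),
  max   = c(NA, 50.00)
)

# Write the table to a .csv for downstream formatting tools
out <- file.path(tempdir(), "corpus_analytics.csv")
write.csv(analytics, out, row.names = FALSE)
```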



News and Getting Help

For bugs, feature requests, and general questions, reach out via one of the following options:

Metadata

Version

0.3.2

