DISCLAIMER: All of my programs are free of charge. That being said, the user assumes all liability for the proper use and any complications pertaining to these files. I hope you can put them to good use!
Hintzman’s Minerva2, Old vs. New simulation: This is a basic simulation of Douglas Hintzman’s Minerva2 memory model. The simulation consists of three macros: one that creates a long-term memory structure, one that simulates a study session, and one that simulates a test session. It is fully modifiable: you tell the sheet how many items you want in LTM and how many elements constitute a vector, and LTM is then randomly generated. You then specify how many items are to be studied, how many encodings each word receives, and the learning-rate parameter. Study words are randomly generated, and elements are “lost” as a function of the learning parameter. Finally, you indicate how many Old and New words are to be tested; Old words are randomly selected from the studied items, and New words are randomly generated. Output contains the Echo Content and Normalized Echo Content and, for each item, the Echo Intensity, the correlation between Echo Content and the Probe, and a degree-of-fit measure. Descriptive statistics are also listed as a function of word type (Old, New). The sheet contains a tab with instruction text, a blank template, a large-scale simulation, and a tab with ANOVAs, correlations, and other statistics based on the large-scale simulation (to verify that the simulations work). Download the file here: Minerva2_Simulation.
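For readers who want the gist of the computation before opening the workbook, the core of Minerva2 can be sketched in a few lines of Python. This is only an illustrative sketch of the standard model (items as ±1/0 vectors, probabilistic encoding, cubed-similarity activation); the downloadable file implements all of this in spreadsheet macros, and every function name below is my own invention:

```python
import random

def make_item(n_features, rng):
    """A random item vector with elements drawn from {-1, 0, +1}."""
    return [rng.choice((-1, 0, 1)) for _ in range(n_features)]

def encode(item, learning_rate, rng):
    """Copy an item into a memory trace, storing each element with
    probability L (the learning rate); 'lost' elements become 0."""
    return [x if rng.random() < learning_rate else 0 for x in item]

def similarity(probe, trace):
    """Dot product divided by the number of features that are
    nonzero in the probe, the trace, or both."""
    n_relevant = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    if n_relevant == 0:
        return 0.0
    return sum(p * t for p, t in zip(probe, trace)) / n_relevant

def echo(probe, memory):
    """Echo intensity and echo content for a probe against all traces.
    Each trace's activation is its similarity cubed (sign-preserving),
    so near matches dominate the echo."""
    activations = [similarity(probe, t) ** 3 for t in memory]
    intensity = sum(activations)
    content = [sum(a * t[j] for a, t in zip(activations, memory))
               for j in range(len(probe))]
    return intensity, content
```

Probing with a studied (Old) item should, on average, return a higher echo intensity than probing with a randomly generated New item, which is exactly the contrast the Old vs. New output tables summarize.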
Hintzman’s Minerva2, 2AFC simulation: This simulation is adapted for modeling 2AFC recognition memory data. It is very similar to the Old vs. New simulation posted above, but with some important modifications; it too is fully modifiable. As before, the user can specify the parameters of long-term memory and the size of the vectors, and, for multiple within-subjects conditions, different numbers of studied items, numbers of encodings, and learning parameters. I have added two parameters to the model. The user can control, for each condition, a Foil Similarity parameter, which matches a certain proportion of the foil’s elements to those of the studied item against which it is tested. There is also a Guessing parameter, which tells the model to guess on trials in which the difference in Echo Intensity between the studied item and the foil falls below a certain threshold.
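The two added parameters can be sketched as follows. This is a hedged illustration of how I read their descriptions, not the workbook’s actual macro code: the foil copies each element of its paired target with probability equal to the Foil Similarity parameter, and the 2AFC decision rule falls back to a coin flip when the echo-intensity difference is below the Guessing threshold. The function names and the 50/50 guessing rule are my assumptions:

```python
import random

def make_foil(target, foil_similarity, rng):
    """A foil that shares a given proportion of its elements with the
    studied item it will be tested against; the rest are random."""
    return [x if rng.random() < foil_similarity else rng.choice((-1, 0, 1))
            for x in target]

def choose_2afc(target_intensity, foil_intensity, guess_threshold, rng):
    """Pick the alternative with the higher echo intensity, but guess at
    random (assumed 50/50 here) when the intensity difference falls
    below the guessing threshold. Returns True if the studied item
    is chosen (a correct trial)."""
    if abs(target_intensity - foil_intensity) < guess_threshold:
        return rng.random() < 0.5
    return target_intensity > foil_intensity
```

Raising Foil Similarity shrinks the typical intensity difference between target and foil, so more trials fall under the guessing threshold and accuracy drops toward chance, which is the qualitative pattern the parameter-sweep tabs let you verify.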
Output contains the LTM structure, the studied items, the test items and corresponding foils, a trial-by-trial breakdown of the results, and tables that average the results across each condition. If the user specifies, the macros will also output the Echo Content and Normalized Echo Content, and provide corresponding Correlation and Fit measures. The first tab of the workbook has instructions on where to enter the parameter values; the second tab is a blank template. The other sheets contain several simulations that each vary a single parameter while holding all the others constant, allowing the user to confirm that the model is behaving sensibly. Download the files here: Minerva_2AFCSimulation.