I have been fairly productive over the last few weeks, despite a lot of administrative things that needed to be sorted out and uncertainties about my partner’s future employment that made it hard to plan ahead.
Specifically, we finished a manuscript (“Exploration of the Rate of Forgetting as a Domain-Specific Individual Differences Measure”) that is based on chapter 4 of my PhD thesis and submitted the work in early May, together with my PhD promotors Rob R. Meijer and Hedderik van Rijn. We’re now waiting for reviews.
Additionally, Hedderik and I wrote up our exploration of an implementation of the serial reaction time task in a very simple virtual reality environment that came out of a collaboration with the local company STARK Learning. We were able to show that the expected speed-up of reaction times emerged in the VR implementation of the task, which is a nice proof of concept that we hope other researchers can use to deploy the task in novel conditions. I got the “notification of formal acceptance” last night and we hope the paper goes through production swiftly and will be available in PLoS ONE soon.
The most exciting new development, however, is that a new PhD student joined the lab: Maarten van der Velde started his PhD in May and I am excited to co-supervise him, together with Jelmer Borst and Hedderik van Rijn. Hedderik and I submitted an abstract to ICCM 2018 and Maarten is going to help process the data and prepare it for the poster presentation in Madison. It’s going to be very interesting for me to work with someone who has a formal computer science education – I’ll have a lot to learn from him!
We submitted our paper “Opportunity for verbalization does not improve visual change detection performance: A state-trace analysis” to Behavior Research Methods and it is now available online.
In the paper, we tested whether engaging in articulatory suppression (i.e., repeating aloud nonsense syllables) during a visual change detection task is necessary to obtain useful data. We conclude that “enforcing precautionary articulatory suppression does not seem to be necessary to get interpretable data from visual change detection tasks.” This conclusion is based on a Bayesian state-trace analysis of data from 15 participants who each completed about 2,500 trials of a simple visual change detection task.
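The core intuition behind state-trace analysis can be illustrated with a much simpler check than the Bayesian machinery used in the paper: if a single latent variable drives performance in both conditions, then plotting one condition’s accuracy against the other’s across, say, set sizes should yield points that fall on a single monotonic curve. The sketch below is only a toy version of that idea; the data values and function name are hypothetical.

```python
# Toy illustration of the state-trace logic (NOT the Bayesian
# state-trace analysis from the paper): points consistent with a
# single latent dimension must lie on one monotonic curve.

def on_monotonic_curve(points):
    """Check whether (x, y) points are consistent with one latent
    dimension, i.e., y never decreases as x increases.

    `points` is a list of (x, y) accuracy pairs, e.g.,
    (accuracy with suppression, accuracy without suppression)."""
    ordered = sorted(points)          # order by the x-coordinate
    ys = [y for _, y in ordered]
    return all(a <= b for a, b in zip(ys, ys[1:]))

# Hypothetical accuracies at three increasing set sizes:
print(on_monotonic_curve([(0.9, 0.92), (0.8, 0.83), (0.7, 0.71)]))  # True
```

A violation of this monotonicity would suggest more than one latent variable (e.g., a separate verbal code); finding no such violation is what supports the single-dimension conclusion.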
This paper is based on the work that I did during the first year of my PhD (’12 – ’13). It has gone through multiple re-writes and I am very happy that it is now published in Behavior Research Methods. My thanks to my co-authors (Candice Morey, Melissa Prince, Andrew Heathcote, and Richard Morey) for their help and contribution.
The conference paper I submitted to CogSci 2016 has been accepted for an oral presentation. I implemented the reviewers’ suggestions, clarified some points, and just submitted the revised and final version of the paper. I also uploaded the final version to the corresponding GitHub repository; you can find the PDF there.
Now I am curious what time slot they’ll assign me for my presentation. I hope it’s early on during the conference but not in the very first session.
This morning, I received an e-mail from the editor of BRM stating,
It is a pleasure to accept your manuscript entitled “Opportunity for verbalization does not improve visual change detection performance: A state-trace analysis.” in its current form for publication in the Behavior Research Methods. Your manuscript has been sent to the Journal’s Production Department and you can expect to receive proofs within approximately three weeks.
I can’t wait for the proofs and for this paper to be published!
I just received the following e-mail:
Dear Florian Sense:
We are very pleased to inform you that your paper submission “On the Link between Fact Learning and General Cognitive Ability” has been accepted for oral presentation at CogSci 2016. We received 656 paper submissions this year, and each underwent careful peer review. While many submissions were found to be of high quality, time and space constraints allowed us to accept 222 (34%) for oral presentation and a further 258 (39%) for poster presentation. Your submission will be allocated a standard 25-minute presentation period in order for you, or another one of the paper’s authors, to present this paper and to answer questions from the audience.
Awesome! I am really glad that the paper was accepted and that I can present it in Philadelphia in August. The paper is based on the data that was collected up until mid-January (N = 89), but data collection is still ongoing. By the time I present, data collection should be complete and I’ll probably be working on the full paper. So it’ll be great to have a chance to present my work there and get some input.
After the paper was rejected, we submitted it to another journal in early January. On March 13th, we got the response informing us that the paper was accepted with minor revisions. We spent the last three weeks implementing those and today I submitted the revised version to Behavior Research Methods. Here’s the abstract:
Evidence suggests that there is a tendency to verbally recode visually-presented information, and that in some cases verbal recoding can boost memory performance. According to multi-component models of working memory, memory performance is increased because task-relevant information is simultaneously maintained in two codes. The possibility of dual encoding is problematic if the goal is to measure capacity for visual information exclusively. To counteract this possibility, articulatory suppression is frequently used with visual change detection tasks specifically to prevent verbalization of visual stimuli. But is this precaution always necessary? There is little reason to believe that concurrent articulation affects performance in typical visual change detection tasks, suggesting that verbal recoding might not be likely to occur in this paradigm, and if not, precautionary articulatory suppression would not always be necessary. We present evidence confirming that articulatory suppression has no discernible effect on performance in a typical visual change detection task in which abstract patterns are briefly presented. A comprehensive analysis using both descriptive statistics and Bayesian state-trace analysis revealed no evidence for any complex relationship between articulatory suppression and performance that would be consistent with a verbal recoding explanation. Instead, the evidence favors the simpler explanation that verbal strategies were either not deployed in the task or, if they were, were not effective in improving performance, and thus have no influence on visual working memory as measured during visual change detection. We conclude that in visual change detection experiments in which abstract visual stimuli are briefly presented, precautionary articulatory suppression is unnecessary.
The manuscript along with the raw data and all the code can be found on my GitHub page. I am really happy that this paper will soon be out there for others to read. Employing articulatory suppression is a pain in the neck – both for the participants and the experimenter. I hope many people see this paper and are spared the extra burden when they run experiments that fall within the boundary conditions explained in our paper.
I just went through the submission process to send my conference paper off to CogSci. The conference will be held in Philadelphia in late August this year and I’d love to go.
For the conference paper, I used a subset of the data that I collected over the last couple of months. Part of the experimental manipulation was to split the participants into two groups: half of them learned vocabulary with our adaptive learning method and the other half learned with a flashcard method. For this conference paper, I wanted to write only about the subset of people that learned with the adaptive method. Specifically, I looked into the link between the model’s estimated rate of forgetting parameter and participants’ scores on two tests: one of working memory capacity (I used three complex span tasks for this) and one of general cognitive ability (I used the Q1000 for this).
The title of my submission is On the Link between Fact Learning and General Cognitive Ability and the raw data, the scripts for the analysis, and the PDF of the submitted paper can be found on GitHub.
I submitted a conference paper to ICCM last year and it was published in the proceedings. Afterwards, a special issue was put together by Niels Taatgen, Marieke van Vugt, Jelmer Borst, and Katja Mehlhorn, and in the introduction they write,
Despite the diversity of the field, cognitive modelers are still united by the original goal of understanding the human mind through computer simulation. A major forum for sharing, discussing, and integrating ideas is the International Conference of Cognitive Modeling (ICCM), which meets twice every 3 years to discuss the latest developments in the field. The best five papers of the 2015 conference—reflecting the breadth of the current state of the art—have been selected for this special section.
My paper was one of the five and I was invited to submit an extended version to the special issue.
The extended version of my paper is titled “An Individual’s Rate of Forgetting Is Stable Over Time but Differs Across Materials” and is now available through the Early Access system. The paper gives an overview of the adaptive learning method developed in our lab, and the title conveniently sums up the main finding: parameters estimated for a person during learning are stable over time but differ between materials. The parameters are still relatively stable across materials, but less so than they are over time.
This implies that if a learner comes back to study the same type of material at a later time, we can safely use their parameters from a previous session as a best guess (i.e., default). If they come back to study a different type of material, however, we might want to be more careful.
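The practical upshot of this finding could be sketched as a simple lookup rule in an adaptive learning system. Everything in the snippet below is hypothetical (the function, the material identifiers, and the fallback value of 0.3, which I am assuming as a generic population-level default rather than taking it from the paper):

```python
# Hypothetical sketch of reusing a learner's previously estimated
# rate of forgetting, based on the finding that estimates are stable
# over time within a material type but differ across materials.

DEFAULT_RATE = 0.3  # assumed generic default; not taken from the paper


def starting_rate(history, material):
    """Return a starting rate-of-forgetting estimate for this learner.

    `history` maps material identifiers to the learner's previously
    estimated rates (all names here are illustrative)."""
    if material in history:
        # Same type of material: the previous estimate is a safe best guess.
        return history[material]
    # New or different material: fall back to the generic default.
    return DEFAULT_RATE


print(starting_rate({"french_vocab": 0.24}, "french_vocab"))   # 0.24
print(starting_rate({"french_vocab": 0.24}, "anatomy_terms"))  # 0.3
```

In other words, the session-to-session stability justifies the first branch, while the weaker stability across materials is why the second branch falls back to a cautious default instead of borrowing the estimate from a different material.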
After the paper was rejected, we made some modifications. Today, I submitted it again to a journal that we hope is a better fit: Behavior Research Methods. We debated whether a traditional analysis of variance should be added to the paper but decided that such an analysis is not suited to testing any relevant or meaningful hypothesis we have about the data.
I am looking forward to the reviewers’ comments and hope that our findings will be available to the rest of the community as soon as possible!
We submitted our paper in early August and got a response from the action editor at the beginning of October. The paper was rejected with the reason: “this paper is probably better suited for a methods-oriented journal”. The reviewers made some helpful comments that we’re now using to revise the paper and submit it somewhere else.
Relying only on the state-trace analysis might not be the best strategy since not many people are familiar with it. Therefore, we’ll add a Bayes factor analysis of variance to further underline the point that certain effects are not present in the data. We’ll also try to extend the discussion a bit to highlight the theoretical relevance of the work.