Using Turbolinks and jQuery dataTables in Rails 5

Turbolinks can enhance the performance of your web application by ‘hot swapping’ the <body> tag using Ajax every time you click to a new page. It’s a great idea with a clean implementation; however, any library that initializes objects using jQuery’s $( document ).ready() can run into issues. This is because the page (i.e., the document) itself only loads once; the rest of the ‘clicks’ are performed using Ajax.

The jQuery library dataTables utilizes the conventional document-ready scheme to initialize a data table. So with Turbolinks enabled, it gets initialized on the first page load, but not on subsequent navigation clicks. Therefore, when returning to the table, the DOM doesn’t know that your table is using dataTables.

There are a few workarounds, the most promising from How To Upgrade to Turbolinks 5 on GoRails, but the compatibility CoffeeScript that automagically makes everything work appears to have been removed from the Turbolinks repository. For this isolated application, though, the integration isn’t that hard; you just need to tap into the turbolinks:load event. Here’s how I did it:

  1. Make sure gem 'turbolinks', '~> 5.2.0' and gem 'jquery-turbolinks' are in your Gemfile.

  2. Add all the requires to application.js (note: I’m using Bootstrap):

    //= require jquery.dataTables.js
    //= require dataTables.bootstrap4.js
    //= require dataTables.responsive.js
    //= require responsive.bootstrap4.js
    //= require jquery.turbolinks
    //= require turbolinks
  3. Initialize a data table (id = ‘data-table’) either in your table’s view, or in a <script> tag at the bottom of application.html.erb:

$( document ).on('turbolinks:load', function() {
  $('#data-table').DataTable();
});

Also note, I am passing defaults through the dataTables.bootstrap.js file at the line starting with $.extend( true, DataTable.defaults, {.
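One optional addition (not from the original post): Turbolinks snapshots the page body before caching it on navigation, so a table that was initialized can be restored with duplicated wrapper markup. Destroying the table before the snapshot is a common way to avoid that:

```javascript
// Optional teardown. Turbolinks caches the page body before navigating away,
// so an initialized table can come back with duplicated wrapper markup.
// Destroying the table first prevents this.
$(document).on('turbolinks:before-cache', function() {
  if ($.fn.dataTable.isDataTable('#data-table')) {
    $('#data-table').DataTable().destroy();
  }
});
```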

Despiking Neural Data using Linear Interpolation

Excessive spiking in a neural signal can affect the interpretation of lower frequency analyses like those investigating the local field potential (LFP). One method to mitigate the influence of spikes on these low frequency components is to remove them from the original signal using linear interpolation. This technique has been characterized by others and is written into the Fieldtrip toolbox.

Below is a simple function despike.m for MATLAB to remove spikes using linear interpolation.
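A minimal sketch of such a function (the interface here is assumed; the actual despike.m may differ):

```matlab
function data = despike(data,spikeTs,spikeWidth,Fs)
% data       : 1 x n signal vector
% spikeTs    : spike timestamps (s)
% spikeWidth : total window around each spike to replace (s)
% Fs         : sampling rate (Hz)
halfWin = round((spikeWidth/2)*Fs);
for ii = 1:numel(spikeTs)
    c = round(spikeTs(ii)*Fs);                    % center sample of spike
    a = max(1,c-halfWin);                         % window start
    b = min(numel(data),c+halfWin);               % window end
    data(a:b) = linspace(data(a),data(b),b-a+1);  % straight-line replacement
end
end
```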

My primary complaint about this method is that it does not make any “smart” or adaptive decisions concerning the time interval of the spike being replaced. However, if spikes were sorted, each set of spike timestamps from an individual unit could be associated with a spike width (see the spikeWidth variable in my example) and be replaced using a spike-specific time window.

One very simple alternative, used when spike contamination is of very high frequency, is to simply apply a median filter like medfilt1. This is less applicable to neural data sampled at rates greater than ~2 kHz, where the spike waveform is represented by multiple data points.

Simulating a Local Field Potential in MATLAB

Have you ever wondered: is my filter working? In neurophysiology, we are often faced with mashing together data, and in the process, sampling rates and filter settings can get lost. You are always one parameter away from interpreting a beta oscillation (13-30 Hz) as a delta oscillation (1-4 Hz), which can lead research far from the truth.

To check whether a filter is working properly, a ground truth local field potential (LFP) or “wideband” signal should be used, where the timing and frequencies are known. Here, I have created an LFP generator with control over all aspects of timing, frequency and amplitude.

Ground Truth LFP M-file

The LFP generator combines sinusoids based on the input parameters to generate a single waveform as the output. The variable t represents the time course and is the same length as lfp.
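A generator with that interface might be sketched as follows (parameter names beyond t, lfp, and oscillationOnOff are assumptions; the actual groundTruthLFP.m may differ):

```matlab
function [lfp,t] = groundTruthLFP(Fs,dur,freqs,amps,oscillationOnOff)
% Fs               : sampling rate (Hz)
% dur              : total duration (s)
% freqs, amps      : 1 x n frequencies (Hz) and amplitudes
% oscillationOnOff : n x 2 matrix of [on off] times (s) per sinusoid
t = 0:1/Fs:dur-1/Fs;
lfp = zeros(size(t));
for ii = 1:numel(freqs)
    onIdx = t >= oscillationOnOff(ii,1) & t < oscillationOnOff(ii,2);
    % every sinusoid begins at 0*pi phase at its onset time
    lfp(onIdx) = lfp(onIdx) + ...
        amps(ii)*sin(2*pi*freqs(ii)*(t(onIdx)-oscillationOnOff(ii,1)));
end
end
```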

An Example

My primary reason for creating this was to test if the spectrogram function I use is accurate in both the time and frequency domain. For instance, most analyses require my filter to be non-causal (i.e., centered on the input phenomena with zero-lag). Here is an example of how to use groundTruthLFP.m.

The LFP I constructed is 10 seconds long with a sampling rate of 1,000 Hz. The first frequency I added was at 4 Hz, from 0-10 seconds with unity amplitude. The second frequency was 30 Hz, from 0-5 seconds with an amplitude of 3. Then, I ran the lfp through my spectrogram function to see if the output matches the signal I designed.


Altogether, it looks really good. You can get a feel for the rolloff properties of the filter and causality. Most importantly, the frequencies and timing are spot on.


This function does not pay any special attention to the phase of the input signal. It begins all sinusoids at the time specified in oscillationOnOff at 0π. Therefore, waves of similar frequency will have phase interactions that may lead to noticeable segments of constructive and destructive interference in the output. For example, notice how the following example has nearby input frequencies, resulting in a non-trivial spectrogram.


This example highlights just one of the considerations when analyzing LFPs from the brain, where signals arriving from different brain structures may have no phase coordination, but are ultimately summed together. That is the fun.

Cross Correlation Normalization in MATLAB

Cross Correlation Primer

A cross correlation measures the similarity of two signals over time. It’s an important analytical tool in time-series signal processing as it can highlight when two signals are correlated but exhibit some delay from one another.

For instance, imagine that you are talking with a friend in Tokyo while making a simultaneous recording from the microphone of your phone (in the States), and the headset of their phone. Both signals will represent your voice, but will be offset in time because of the transmission delay: the local recording will already be at “How are you?” while the distant recording is still at “Hello!”. A cross correlation takes two time series signals and sweeps them across each other to determine exactly when, and to what extent, the signals are correlated in time. In this case, a cross correlation will reveal a perfect correlation of the signals, albeit with a delay.

In neural physiology, cross correlation is often used to determine the relationship between two phenomena. It could reveal that one neuron always fires before, or after, another one. Or, it may expose how the firing of a neuron relates to local field potential activity.

Cross Correlation in MATLAB

The MATLAB xcorr function will cross correlate two time-series signals. The MATLAB documentation offers a good example using two sensors at different locations that measured vibrations caused by a car as it crosses a bridge. What I want to show here is the functionality of using the ‘coeff’ scale option to normalize the cross correlation. By normalizing, the cross correlation ignores the magnitude disparity of the source signals.


(right) The two source signals are perfectly anti-correlated in time and differ in magnitude.

(left) The two source signals are correlated in time and differ in magnitude.


The raw cross correlation (middle row) scales the y-values based on the magnitude of the source signals. This may be interpreted as, the signals on the left are correlated to a higher degree than the anti-correlated signals on the right.

So why use normalization? One case might be where the source signals are coming from uncalibrated sensors (i.e., the phase information is accurate, but the magnitude is not). Here, you are only interested in whether the phase of the signals is correlated in time.

Interpreting “x lags y”

The left column cross correlation tells you that the maximum correlation occurs when signal x lags signal y by 0 samples. This is simply because the two signals are perfectly correlated in time. In the right column, I included a data tip showing the greatest positive correlation of the “perfectly” anti-correlated source signals: it occurs where signal x lags signal y by -166 samples, but only reaches a positive correlation of 0.8333. Even though the signals have the same frequency, the cross correlation will never reach 1 because as the time lag is increased, signal x will overlap less with signal y. Note, these signals always reach an identical positive correlation at +166 samples because the source signals are symmetrical.

How-to Normalize

The normalization procedure is rather straightforward. I’ve appended a YouTube video that explains cross correlation and normalization in mathematical detail. In brief, the ‘coeff’ method can be bootstrapped using the following code:

acor_norm = xcorr(x,y)/sqrt(sum(abs(x).^2)*sum(abs(y).^2));
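As a quick check with toy signals (a sketch; assumes the Signal Processing Toolbox xcorr), the manual normalization matches the built-in 'coeff' option and ignores the magnitude gap:

```matlab
% x and y share frequency and phase but differ 5x in magnitude
t = 0:0.001:1;
x = sin(2*pi*5*t);
y = 5*sin(2*pi*5*t);
[acor_coeff,lags] = xcorr(x,y,'coeff');
acor_norm = xcorr(x,y)/sqrt(sum(abs(x).^2)*sum(abs(y).^2));
max(acor_coeff)                   % ~1 at zero lag despite the magnitude gap
max(abs(acor_coeff - acor_norm))  % ~0: the manual normalization matches
```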

Time Cost of Initializing Arrays in MATLAB

How much does initializing arrays actually improve performance in MATLAB? Quite a lot. However, sometimes it’s impossible, or just a pain, to pre-calculate the array size. I wanted to determine the sloppy-code-time tradeoff (in seconds). I set up a simple routine to build arrays either initialized or uninitialized and timed the execution.

Below are the results from building two arrays from 0 to 100 million elements with 1 million element steps. This was performed on a MacBook Pro 2.8 GHz Intel Core i7 with 16 GB 1600 MHz DDR3.


Take home: if an array is going to contain over ~1 million elements, it’s worth the time upfront to initialize it. You can try it yourself.
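A minimal version of such a timing routine might look like this (a sketch of the idea, not the original benchmark script, which sweeps array sizes):

```matlab
n = 1e7;
tic
a = zeros(1,n);                       % initialized up front
for ii = 1:n, a(ii) = ii; end
tInit = toc;
tic
b = [];                               % uninitialized; grows each iteration
for ii = 1:n, b(ii) = ii; end
tGrow = toc;
fprintf('initialized: %.2f s, uninitialized: %.2f s\n',tInit,tGrow);
```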

List or Edit Recently Modified Files in MATLAB

Git is a good way to keep track of file changes and integrates with MATLAB, but it doesn’t quickly tell you which files you have been working on. The MATLAB editor has persistence after closing (so that files will re-open in tabs) and Open > RECENT FILES is good for seeing where you left off. If you ever mistakenly close all tabs, MATLAB actually keeps a record of your workspace and you can recover it. However, if you want to hop back into a file from days ago or work across machines, the workflow is subpar.

workon.m will list the n most recently edited files in the current directory (recursively) and then let you open one, or many of those files for editing.

>> workon
[01] --> entrainment_HighResolution.m (now)
[02] --> workon.m (now)
[03] --> Ray_LFPspikeCorr.m (3 hours ago)
[04] --> wholeSessionPowerCorr.m (18 hours ago)
[05] --> deltaPhaseSpiking_wBeta.m (5 days ago)
[06] --> deltaPhaseBetaEvents.m (6 days ago)
[07] --> run_code.m (6 days ago)
[08] --> entrainmentTrialShuffle.m (6 days ago)
[09] --> dklPeakDetect.m (7 days ago)
[10] --> deltaRTchaos_simpleCorr.m (8 days ago)
[11] --> Leventhal2012_Fig6_spikePhaseHist_allFreqs.m (15 days ago)
[12] --> ff.m (15 days ago)
[13] --> periEventTrialTs.m (15 days ago)
[14] --> eventsLFPv2.m (15 days ago)
[15] --> spectrum_MRL.m (18 days ago)

Next, you can open any number of those files by passing in a single integer (for one file) or an integer vector of all the files you want to open. For instance, to open the 7 most recently edited files in the MATLAB editor:

>> workon(1:7)
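The core of the listing can be sketched in a few lines (a simplification; workon.m itself formats ages like “3 hours ago” and opens the files you select):

```matlab
% Recursively list M-files, newest first (requires R2016b+ for '**')
files = dir('**/*.m');
[~,sortIdx] = sort([files.datenum],'descend');   % most recently modified first
files = files(sortIdx);
for ii = 1:min(15,numel(files))
    fprintf('[%02d] --> %s (%s)\n',ii,files(ii).name,datestr(files(ii).datenum));
end
% edit(fullfile(files(k).folder,files(k).name)) opens file k in the editor
```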

M-file Task List from MATLAB Comments

There are a variety of ways to track what needs to get done in your code. I am a huge fan of Wunderlist as a simple-free checklist to stay on track. GitHub allows you to create checklists, and their Issues functionality is top notch. Finally, some IDEs will automatically find // TODO tags and organize them for you—and this is the functionality I wanted in MATLAB.

I work independently and a lot of my code is experimental. At the end of the day, I can end up with a handful of new files (each with a handful of todos) and a short list of analyses to run. Comments are fast but they often get lost, especially after a file gets closed.

Using MATLAB comments to track tasks seemed like the most agile solution to all my problems. The mattasks project recursively scans a directory for any comment with the format [ ] This is a task! and exports a pretty markdown file with every task, from every file.
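The scan itself can be sketched in a few lines (a simplification of what mattasks.m does; the real function also writes the markdown output):

```matlab
% Recursively find task-style comments in M-files
files = dir('**/*.m');
for ii = 1:numel(files)
    txt = fileread(fullfile(files(ii).folder,files(ii).name));
    fileLines = splitlines(txt);
    taskLines = fileLines(contains(fileLines,'[ ]'));  % task-format comments
    for jj = 1:numel(taskLines)
        fprintf('- %s: %s\n',files(ii).name,strtrim(taskLines{jj}));
    end
end
```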

Full documentation can be found in the mattasks GitHub repository. The example below was generated by running mattasks.m on the test directory.

Printing Size of Variable from Clipboard in MATLAB

The single most used command for me while debugging is size(varName). After the variables pile up in the workspace, the workspace viewer is not a great source of information. Ideally, the workspace viewer could be a bit more dynamic having a sort column for “last used” with some abstract information about where the variable was last called from, akin to some of the profiler features.

This snippet, when saved as “sz.m”, will find a workspace variable based on the current clipboard value and output its size when you simply type: sz. This essentially replaces size(⌘V). Because it needs access to the working environment, it cleans up all the variables used to accomplish this task at the end (i.e., it exits gracefully).
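One possible implementation (a sketch; the original snippet may differ):

```matlab
% sz.m: prints the size of the variable named on the clipboard.
% As a script it runs in the calling workspace, so it clears its
% one temporary variable when done.
szName_ = clipboard('paste');           % variable name from the clipboard
if exist(szName_,'var')
    disp(size(eval(szName_)));
else
    fprintf('No variable ''%s'' in the workspace.\n',szName_);
end
clear szName_
```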

Equalizing Sampling Rate for 1-D Correlations in MATLAB

We often have two signals with different sampling rates that we want to correlate. One example is when we want to correlate local field potential (LFP) activity of a neural signal to some average spike rate of a neuron. Our LFP signal is recorded at 24 kHz, but the spike density estimate (SDE) we create from individual spike timestamps is given an artificial sampling rate of 1 kHz. End-to-end, the LFP and SDE are potentially correlated, but to operate on them as variables in MATLAB they need to be the same length.

In this function, we use 1-D interpolation to force d1 data into the length of d2 and call that d1new. Ideally, your d2 data has the higher sampling rate so that d1 is being ‘upsampled’ instead of ‘downsampled’.

Consider two signals (d1 and d2) where f is either sin() or cos() and the time variable t is 0-100s, but each t has a different sampling rate.

t1 = linspace(0,100,100); % 0-100s, 100 points, 'low' sampling rate
t2 = linspace(0,100,400); % 0-100s, 400 points, 'high' sampling rate
d1 = sin(t1);
d2 = cos(t2);
d1new = equalVectors(d1,d2);

The top plot shows the signals overlaid in Time with each point from t marked in black. The middle plot highlights that when plotted by Samples, the two signals are unequal in size. The bottom plot shows how 1-D interpolation through equalVectors and the d1new variable equalize the vector lengths, thereby making a correlation possible.
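For reference, the heart of such a function can be sketched with interp1 (the original equalVectors may handle more cases):

```matlab
function d1new = equalVectors(d1,d2)
% Interpolates d1 onto the sample grid of d2 so both vectors are the
% same length, making an element-wise correlation possible.
d1new = interp1(linspace(0,1,numel(d1)),d1,linspace(0,1,numel(d2)));
end
```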


Infinite Highlight Color Marks in Ruby (modulo based)


Consider the case where you have many sentences pieced together from different sources and want to highlight each one of those sentences based on the source number. In HTML, the standard for highlighting text utilizes the <mark> tag with the background-color attribute set to a light yellow color.

My background color is set using style="background-color: hsl(50,90%,90%);"

The first value in hsl() is a value in degrees for hue, which ranges from 0 to 360, spanning from red to green to blue and back to red in circular form. The goal of the following function (i.e., line of code) is to return maximally different hues based on the total number of colors you want to use.

If the first color is a standard yellow (used above) then startHue is 50. nColors is the total number of colors you want to use and ii is the current color you want to select. Consider where nColors = 6:

This is from the first source (where ii = 1).1 The second source has a different color.2 So does the third!3 All of the colors are meant to contrast.4 Unless nColors is very large.5 This is the where ii = 6; that's all of them.6
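The hue-picking line of code might look like the following sketch in Ruby (the method name is hypothetical; the original one-liner isn’t reproduced in this post):

```ruby
# Returns a hue in degrees, maximally spaced around the color wheel,
# for color ii out of n_colors, starting from start_hue.
def pick_hue(ii, n_colors, start_hue = 50)
  (start_hue + (ii - 1) * (360 / n_colors)) % 360
end

pick_hue(1, 6) # => 50, the starting yellow
pick_hue(4, 6) # => 230, on the opposite side of the wheel
```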

Pro Tip: If you want to selectively highlight text based on a word or regex pattern in Ruby on Rails there is a great Text Helper called highlight.

MATLAB Line Colors for The University of Michigan

One maize. One blue. One brand. This function makes the University of Michigan colors available in an array (just like how lines() works in MATLAB). To get all the primary colors:

colors = linesUM;

To limit the number of colors returned (for instance, to use the colors as a colormap):

colors = linesUM(3);

Finally, to get all the secondary colors, just pass in Inf for the number of colors and set onlySecondary to true:

colors = linesUM(Inf,true);
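For reference, a stripped-down sketch (only the two primary brand colors are included here; the real function also carries the secondary palette and the onlySecondary flag):

```matlab
function colors = linesUM(n)
% Cycles through the primary University of Michigan colors, like lines().
palette = [255 203   5;     % maize (#FFCB05)
             0  39  76]/255; % blue (#00274C)
if nargin < 1 || isinf(n)
    n = size(palette,1);
end
colors = palette(mod(0:n-1,size(palette,1))+1,:);
end
```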

A Better Way to Operate: The Surgical Placemat

In early 2018, I met an expert on checklists—yes, they exist. She works closely with NASA to develop checklists that keep astronauts safe and productive. I asked her, "what is one way to use a checklist that I wouldn't have thought of?" After all, I bought into the fact that someone could be good at making checklists, but I wanted to know what else I was missing. She asked me to give her a situation or problem that might lend itself to a checklist. I said, "surgery," followed with, "we have protocols, and logs, but we don't use a checklist and it burns us a lot." Not that our animals were dying, but I've seen the antiseptic skipped over, lidocaine forgotten, and I have shown up to empty oxygen bottles because the valve wasn't closed at the end of the day.

My new checklist expert friend went on to ask, "does each step have an item, or tool associated with it?". "Yes," I followed, "just about all of them." She then said, "well here's one thing to think about doing," as if this was one of thirty tricks up her sleeve, "you could make a large placemat with a bunch of pouches, and each pouch is preloaded with your tool and the order of pouches walks you through the surgery." How simple, I thought; and brilliant. For the rest of the afternoon all I could think about was sewing custom, ordered pouches into canvas for everything I do in a day.

Atul Gawande's The Checklist Manifesto is a very readable exposé on the utility of checklists not only in surgery, but in almost all personal and professional endeavors. Checklists assist pilots in starting a plane as well as in any and all emergency situations in the air. "FLY THE PLANE" is often number one on the emergency checklist, a testament to how simple and direct a good checklist should be. A checklist can help an investor stay true to a method and keep them from letting their gut get the best of them (and their clients). However, what was missing from the book is what my checklist expert homed in on: tactics to increase compliance.

A checklist is only worth its weight if it's followed; step by step, line by line. That's where a checklist—a good checklist—beats a protocol. Protocols are developed as a "Do-Confirm" style list that is better at informing an auditor of the procedure than being something an operator can efficiently use. The "Read-Do" style checklist is more useful, more tactful, and built for the experienced operator. Still, the Read-Do method suffers from compliance issues, as it is easy for anyone with experience to think they know the next three steps, skip ahead, and ink the check-check-check after the fact.

That's where the physical checklist comes into play. The next step in the list is not possible or revealed until a real-world action satisfying the previous step has taken place. For me, the surgical placemat was a perfect solution. It outlines all the items and tools we need in surgery and sits directly on the operating table. It can be preloaded before the operation and thrown out at the end. Although we perform many types of operations in our lab, the setup and teardown is mostly the same.

This is just a prototype—a start—as it is a constant challenge to balance too much with too little, and succinctly separate what needs to be checked-off from what simply needs to be performed. But it seems like the right step forward.


When Movement Breaks - Lecture & Commentary


Central Nervous System (CNS) Aspects of Motor Control II

In this lecture I review how the basal ganglia, thalamus, and cerebellum work with the motor cortex to influence and produce movement. I use historic cases of motor deficits and disease, including Parkinson's disease, to reflect on how knowledge has progressed. Included are discussions of how the MPTP model of Parkinson's disease was founded, the standard "rate model" developed by Albin et al., and modern theories concerning the firing rates and firing patterns that are employed by the thalamocortical system.

Simpler Subplots: Converting Grid Coordinates to Axes Position

When you're working with a subplot matrix where each row is some new analysis, using the standard "position" or p input to MATLAB subplot is not intuitive. Rather, you want to specify your subplot by rows and columns. This tiny function, prc, does that conversion.

The function requires the number of columns in your figure (cols) and the two-element position of your desired subplot (rc) given as an array in the format, [row, col].
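The conversion itself is one line of arithmetic; a minimal prc might look like:

```matlab
function p = prc(cols,rc)
% Converts [row, col] to the linear position index expected by subplot,
% which counts across each row before moving down.
p = (rc(1)-1)*cols + rc(2);
end
```

For example, subplot(3,4,prc(4,[2 3])) selects row 2, column 3 of a 3-by-4 grid.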



The figure above was generated with the following code.

Making your own colormap (cmap) in MATLAB

This function will use the first row of an image and allow you to use that as a colormap, or as individual colors for your own application. It just extracts colors and puts them into an array. For instance, I designed this "traffic light" color scheme and want to apply it to a plot where each color represents some reaction time.


You can use any image, but here is how to design the gradient in Adobe Photoshop:

  1. Open a new document with a width greater than or equal to the number of elements you will be coloring in MATLAB (for instance, width: 1500 pixels, height: 200 pixels). The height is only so you can visualize the color scheme; the function will only require a single row.

  2. Use the Shape tool to draw a rectangle the size of the canvas.

  3. Double click the rectangle layer and apply a Gradient Overlay. Use the color tools to make your gradient.

  4. Save your document as a Photoshop file first if you want to archive the original, but then export it to a JPG of the original size either using Save As, or Save for Web.


The mycmap function has one mandatory input: the image filename. There is one optional input: the number of elements to be returned. For instance, if you are using the colors to set cmap, you probably don't need to set the number of elements (it will just return a colors array the width of your image). On the other hand, if you are using the colors to set the color of say, 55 different lines on a plot, you will pass that number as the second input to the function, and the returned array colors will have 55 rows.
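Based on that description, the function can be sketched as follows (the original mycmap may differ in its details):

```matlab
function colors = mycmap(imgFile,n)
% Uses the first pixel row of an image as a color array / colormap.
img = double(imread(imgFile))/255;  % RGB image scaled to 0-1
row = squeeze(img(1,:,:));          % first row: width x 3
if nargin < 2 || isempty(n)
    colors = row;                   % one color per pixel of width
else
    idx = round(linspace(1,size(row,1),n));
    colors = row(idx,:);            % n evenly spaced colors
end
end
```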


Below are phase plots from an electrophysiology experiment where each line represents a single trial. Each line has been colored based on reaction time (low-to-high) using the colors from the mycmap function.


Finding Consecutive Numbers that Exceed a Threshold in MATLAB


The goal is to determine if there exists a span of n p-values that exceed threshold t

pVals = [0.91 0.66 0.23 0.96 0.99 0.97 0.99 0.83 0.12 0.96 0.99 0.97 0.76];

n = 3;

t = 0.95;

The total amount of values exceeding the threshold can be found simply:

sum(pVals > t);

However, we are also requiring that those p-values be consecutive. We can do this by applying a moving sum, spanning the current element and the next n-1 elements, to the binary index of p-value threshold crossings:

ntpIdx = movsum(pVals > t,[0 n-1]) == n;

ntpIdx =

  1×13 logical array

   0   0   0   1   1   0   0   0   0   1   0   0   0

If you only want the initial crossing indexes:

ntpIdx_init = logical(diff([0 ntpIdx]) == 1);

ntpIdx_init =

  1×13 logical array

   0   0   0   1   0   0   0   0   0   1   0   0   0

Characterizing Tremor from Video using Frequency Analysis in MATLAB

Previously I shared a function useful for Creating an Actogram from Video in MATLAB. However, in cases where movement is rhythmic, as in Parkinson's disease and essential tremor, understanding movement in the frequency domain is helpful. Using similar video analysis principles (assessing the change in pixel values from frame-to-frame), extracting frequency information is rather straightforward.

[allFrames,pos] = videoFreqAnalysis(videoFile,resizeFactor,ROItimestamp,freqList);

This method allows you to analyze a video for the frequencies provided in freqList, given that the maximum frequency is less than the video frame rate divided by two (the Nyquist-Shannon sampling theorem).

Find these functions on Github: MoveAlgorithms


Exporting your original video to a smaller size will speed processing. This sample video of my hand is 15 seconds, taken with an iPhone, then exported as 540p with no audio.

The video processing follows these simple steps:

  1. Prompt the user to select a region of interest (ROI). The resulting ROI is used for analysis; anything outside of the selected ROI is not used. This helps focus the analysis and also speeds the analysis.
  2. Process all video frames. Every frame is converted to black and white, slightly contrasted, and then put into the matrix allFrames.
  3. Threshold the data so only pixels that show large movement (in the 80th percentile) are analyzed.
  4. Convert time domain data into the frequency domain using a complex scalogram (tip: we also use this with low-pass filtered electrophysiology data).
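Steps 2 and 3 can be sketched as a frame-differencing loop (a simplification; videoFreqAnalysis also handles the ROI selection, contrast, and the scalogram itself):

```matlab
% videoFile is a path to your video
v = VideoReader(videoFile);
prev = rgb2gray(im2double(readFrame(v)));
motion = zeros(1,0);
while hasFrame(v)
    f = rgb2gray(im2double(readFrame(v)));
    d = abs(f - prev);               % frame-to-frame pixel change
    ds = sort(d(:));
    thr = ds(ceil(0.8*numel(ds)));   % 80th percentile threshold
    d(d < thr) = 0;                  % keep only large movement
    motion(end+1) = sum(d(:));       %#ok<AGROW> movement per frame
    prev = f;
end
```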

Top: Based on the fpass and resulting freqList, a scalogram heatmap is presented that shows the power of each frequency band over the duration of the video.

Bottom: This plot removes all temporal information and represents the mean frequency transform of the scalogram, useful for finding peaks in the frequency content of the video.

Together, the tremor from the video is present from the 5-10 second mark and peaks in the ~3 Hz band. The one major pitfall of this method is the lack of a standard amplitude measure. Depending on the video resolution and lighting conditions, the amplitude will change. Controlling for these variables would make for more meaningful comparisons between videos or subjects.

Frame-by-frame Heatmap

One visualization of the frequency domain is to take the pixels from the ROI and analyze them over a ±1 second window, where the red colors represent greater amplitude of the fpass band. The ROI is very large in the video below, but shows where movement is identified using this method.

Creating an Actogram from Video in MATLAB

An actogram is a way of quantifying movement, useful in behavioral studies for understanding how an animal moves over the duration of an experiment. An actogram is usually based on movement sensors (e.g., IR beams), but because video is often being recorded anyway, it's a natural source of movement information. The method I present here produces actogram data based on the changes in pixel values from frame-to-frame:

frameData = videoActogram(videoFile,frameInterval,resizePx);

For large videos you may also use a frameInterval to skip frames between each data point. The result is a frameData array consisting of frame number, frame timestamp, and movement data. More details are included within the file comments.

Find these functions on Github: MoveAlgorithms


Exporting your original video to a smaller size will speed processing. This sample video of my hand is 15 seconds, taken with an iPhone, then exported as 540p with no audio. It takes about 10 seconds to get the actogram data using every frame.

Using the test_videoActogram.m function you can easily plot the actogram data:


For a full view, you can use the overlayActogram.m function to create a video with a running actogram below the original video. Note, if you used a frameInterval your resulting video will only contain those frames (i.e. your video will be abbreviated).

Using Compose in MATLAB for Pretty Tick Labels

Creating descriptive and well-formatted text labels for x- or y-ticks in MATLAB is essential to the presentation of your data. I have always struggled to remember the best way to replace tick marks and default numerical labels with text labels. I cannot claim that the compose function in MATLAB is hidden, but I don't see it used often, and it does precisely what is required in many cases.

For example, you have sampled a reaction time (RT) distribution using 10 quantiles. In RT experiments the RT histogram will generally have some type of long-tailed distribution, therefore each quantile contains a different range of RTs.


Let's say you want to compare some neuronal firing rate associated with those quantiles of RTs.

% RTs = reaction times
% Zscores = Z score of neuronal firing associated with RT
RTs = [0.10758143999999 0.113643519999982 0.119705600000088 0.12435456000005 0.127467520000003 0.131112959999996 0.134963199999959 0.138751999999897 0.14278655999999 0.147066880000125 0.151367680000021 0.155709439999896 0.160829439999929 0.166154239999969 0.173260799999866 0.184340479999889 0.197160960000019 0.215386720000026 0.24209264000001 0.293253120000031];
Zscores = [1.91375996684532 2.11512451719412 2.31635992789693 1.97677278262753 1.83302457695781 1.76635184611673 2.58036012006912 1.40779739402557 2.29017317616969 1.44587687689824 1.76950209157802 1.60579505116933 1.69767216641066 1.87616271740106 1.75912513938938 1.41276581199477 1.23089211422587 1.03332795801901 0.863934150049046 0.683887429578394];

colors = cool(numel(Zscores));
markerSize = 50;
for iiRT = 1:numel(RTs)
    lns(iiRT) = plot(RTs(iiRT),Zscores(iiRT),'.','markerSize',markerSize,'color',colors(iiRT,:));
    hold on;
end

ylabel('Z score');
title('Z score vs. RT');

However, you may not want all of those data points to cluster where the quantile spacing is tight. Therefore, you can plot the x-axis on a linear scale, but now your x-ticks no longer represent RTs; they are just a linear index.

colors = cool(numel(Zscores));
markerSize = 50;
for iiRT = 1:numel(RTs)
    lns(iiRT) = plot(iiRT,Zscores(iiRT),'.','markerSize',markerSize,'color',colors(iiRT,:));
    hold on;
end
ylabel('Z score');
title('Z score vs. RT');

To correct this situation, you would want to replace those x-ticks with RT values. The primary issue when it comes to data presentation is formatting those 'floats' into something manageable and readable. Although it appears that xtickformat would accomplish this, it doesn't appear to retain values set by xticklabels. In addition, using compose gives you access to a cell string that can be used in a legend.

RTs_labels = compose('%1.3f',RTs);
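With the labels composed, the linearly spaced ticks can be relabeled as RTs (labeling every other tick is just one readability choice):

```matlab
xticks(1:2:numel(RTs));            % tick every other quantile
xticklabels(RTs_labels(1:2:end));  % composed, three-decimal RT labels
xlabel('RT (s)');
```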

* As a final note, although scatter would work instead of a FOR loop, it does not return independent handles for each point, so a colorized legend is not as easy to achieve.