Category: Test Chains Level: Intermediate Reading time: 20 minutes Updated: 2025-10-31

Creating Test Chains (Batteries)

Quick Summary: Build sequences of multiple tests with instructions, consent forms, and completion pages. Perfect for test batteries, longitudinal studies, and complex experiments.

What You'll Learn

  • Creating and managing test chains
  • Adding instructions and consent forms
  • Configuring test sequences
  • Setting up completion pages with codes and redirects
  • Understanding randomization
  • Managing participant sessions and progress

Overview

Test chains (also called batteries) allow you to string together multiple PEBL tests into a single participant experience. Participants complete tests in sequence, with their progress automatically saved so they can pause and resume later.

Use cases:

  • Cognitive test batteries (multiple memory/attention tests)
  • Pre-test instructions and post-test debriefing
  • Informed consent before testing
  • Studies requiring specific test order
  • Platform integration (Prolific, MTurk, etc.) with completion codes

Step-by-Step Guide

Step 1: Access Test Chains

  1. Log in and go to My Research Studies
  2. Select your study
  3. Click the Test Chains tab

Step 2: View Existing Chains

You'll see a list of chains in your study (if any exist):

  • Chain name and ID
  • Number of items in the chain
  • Creation date
  • Management buttons (Edit, Delete, View Analytics)

Step 3: Participant Code Method

Before generating URLs, select how participants will be identified:

Dropdown Options:

  1. Auto-generate code - Automatic IDs using fingerprinting + sequential numbering
  2. Enter a code - Manually specify participant ID
  3. Participant code entry - Participants create their own codes
  4. Platform Integration - Use Prolific, MTurk, SONA, Qualtrics, or SurveyMonkey

Select your method and the URLs will update automatically.

For details on each method, see Participant Codes.
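As a rough sketch of the auto-generate option (fingerprinting plus sequential numbering, as described above) — the function name and code format here are hypothetical, not the platform's actual scheme:

```python
import hashlib

def auto_participant_code(fingerprint: str, next_seq: int) -> str:
    # Hypothetical sketch: combine a short hash of the browser
    # fingerprint with a sequential counter. The real platform's
    # format will differ.
    fp = hashlib.sha256(fingerprint.encode()).hexdigest()[:8].upper()
    return f"P{next_seq:04d}-{fp}"

code = auto_participant_code("Mozilla/5.0|1920x1080|en-US", 12)
```

The fingerprint hash makes the code stable for a returning browser, while the counter keeps codes unique across participants.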

Step 4: Get Chain URLs

Once you've selected your participant code method:

Production URL: Copy the full URL shown

  • Includes chain ID, token, and participant parameter (if applicable)

Short URL (recommended):

  • Click 🔗 Short URL button
  • Generates a memorable short link
  • Automatically copied to clipboard
  • Shows click tracking information

Note: Short URLs are not available for Platform Integration method (variables can't be encoded in short URLs).
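If you ever need to construct a chain URL programmatically, the safest approach is to copy the production URL your dashboard shows; this sketch uses hypothetical query parameter names purely to illustrate the pieces (chain ID, token, participant):

```python
from urllib.parse import urlencode

def chain_url(base, chain_id, token, participant=None):
    # Illustrative only: the parameter names ("chain", "token",
    # "participant") are assumptions, not the platform's documented API.
    params = {"chain": chain_id, "token": token}
    if participant is not None:
        params["participant"] = participant
    return base + "?" + urlencode(params)

url = chain_url("https://example.org/chain", "CHAIN_ABC", "tok123", "SUBJ001")
```

Using `urlencode` ensures participant IDs with spaces or special characters survive in the URL.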

Step 5: Test the Chain

Before recruiting participants:

  1. Click ▶ Try it out or visit the URL yourself
  2. Complete the entire chain as a participant would
  3. Verify:
    • Instructions display correctly
    • Tests run properly
    • Progress saves correctly
    • Completion page works
    • Redirects function (if configured)

Chain Components

Item Types

Chains can include several types of items:

1. PEBL Tests

Purpose: The actual cognitive tests participants complete

Features:

  • Automatically launches in browser
  • Passes participant ID and parameters
  • Detects completion automatically
  • Uploads data with chain context

Configuration:

  • Select from available test library
  • Configure parameters (same as individual tests)
  • Set display name for progress tracker

2. Instruction Pages

Purpose: Provide information to participants between tests

Features:

  • Custom HTML content
  • Simple "Continue" button
  • Can include images, formatting, lists

Use cases:

  • Study overview at start
  • Instructions for upcoming test
  • Break reminders ("Take a 2-minute break")
  • Context switching ("Next: attention tests")

Example:

<h2>Welcome to the Spatial Memory Study</h2>

<p>You will complete 3 short tests measuring different aspects of memory.</p>
<p>Each test takes 5-10 minutes.</p>
<ul>
  <li>Corsi Block Test - visual-spatial memory</li>
  <li>Memory Span - verbal memory</li>
  <li>N-Back - working memory</li>
</ul>

<p>Click Continue when you're ready to begin.</p>

3. Consent Forms

Purpose: Obtain informed consent before testing

Features:

  • Custom HTML content
  • Required checkbox before continuing
  • "I acknowledge" text (customizable)
  • Continue button disabled until checked

Use cases:

  • IRB-required informed consent
  • Data sharing agreements
  • Age verification
  • Terms of service acceptance

Example:

<h2>Informed Consent</h2>

<p><strong>Study Title:</strong> Spatial Memory in Aging</p>
<p><strong>Principal Investigator:</strong> Dr. Jane Smith</p>

<h3>Purpose</h3>
<p>This study examines how spatial memory changes with age...</p>

<h3>Procedures</h3>
<p>You will complete three computerized tests taking approximately 30 minutes total...</p>

<h3>Risks and Benefits</h3>
<p>Risks are minimal. You may experience mild fatigue...</p>

<h3>Confidentiality</h3>
<p>Your data will be stored securely and identified only by a participant code...</p>

<h3>Contact</h3>

<p>Questions? Contact Dr. Smith at jane.smith@university.edu</p>

4. Randomize Markers

Purpose: Define sections where test order should be randomized

How it works:

  • Add "Randomize Start" marker
  • Add tests to randomize
  • Add "Randomize End" marker
  • Tests between markers are shuffled per participant

Use case: Counterbalance test order to control for fatigue/practice effects

Important: Each participant gets a consistent random order throughout their session (not re-randomized on resume).
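The "consistent random order" behavior can be pictured as a shuffle seeded by the session, so resuming reproduces the same order. A minimal sketch, not the platform's actual implementation:

```python
import hashlib
import random

def randomized_section(tests, session_uuid):
    # Seed the shuffle from the session UUID so the same session
    # always reproduces the same order on resume (sketch only).
    seed = int(hashlib.sha256(session_uuid.encode()).hexdigest(), 16)
    order = list(tests)
    random.Random(seed).shuffle(order)
    return order

first = randomized_section(["Test A", "Test B", "Test C"], "a1b2c3d4")
resumed = randomized_section(["Test A", "Test B", "Test C"], "a1b2c3d4")
```

Because the seed depends only on the session, `first` and `resumed` are identical, while different sessions get different orders.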

Chain Configuration

Sequence Order

Default: Tests run in the order you add them

With randomization:

  • Fixed items run in order
  • Randomized sections are shuffled
  • Each participant gets unique order
  • Order stays consistent if they resume

Example Structure:

  1. Consent Form (fixed)
  2. Instructions (fixed)
  3. [Randomize Start]
  4. Test A
  5. Test B
  6. Test C
  7. [Randomize End]
  8. Debriefing (fixed)

Possible orders for participants:

  • Participant 1: Consent → Instructions → B → C → A → Debriefing
  • Participant 2: Consent → Instructions → A → C → B → Debriefing
  • Participant 3: Consent → Instructions → C → A → B → Debriefing

Completion Page Configuration

Configure what happens when participants finish the chain:

Completion Code Format

Purpose: Generate unique codes for verification/credit

Options:

  • Leave empty: No code shown, auto-redirect only
  • Custom format: Use template variables

Template Variables:

  • {PARTICIPANT} - Participant ID
  • {SESSION} - Session UUID
  • {TIMESTAMP} - Unix timestamp
  • {RANDOM} - 16-character random hex
  • {TOKEN} - Study token
  • {CHAIN} - Chain ID

Examples:

STUDY-{PARTICIPANT}-{RANDOM}

→ STUDY-SUBJ001-A3F9D2E84C7B1...

PEBL-{TIMESTAMP} → PEBL-1730400000

{CHAIN}-{SESSION}

→ CHAIN_ABC-a1b2c3d4-...
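A sketch of how these template variables might expand — the substitution logic and function name are illustrative, not the server's actual code:

```python
import secrets
import time
import uuid

def expand_completion_code(template, participant, chain_id, token):
    # Illustrative expansion of the documented template variables.
    values = {
        "{PARTICIPANT}": participant,
        "{SESSION}": str(uuid.uuid4()),
        "{TIMESTAMP}": str(int(time.time())),
        "{RANDOM}": secrets.token_hex(8).upper(),  # 16 hex characters
        "{TOKEN}": token,
        "{CHAIN}": chain_id,
    }
    for var, val in values.items():
        template = template.replace(var, val)
    return template

code = expand_completion_code("STUDY-{PARTICIPANT}-{RANDOM}",
                              "SUBJ001", "CHAIN_ABC", "tok123")
```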

Redirect URL

Purpose: Send participants to another page after completion

Options:

  • Leave empty: Show completion page, no redirect
  • URL with variables: Redirect with completion info

Use template variables to pass information:

https://platform.com/done?code={COMPLETION_CODE}&participant={PARTICIPANT}

Platform-specific examples: See Platform Integration
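If you assemble a redirect URL yourself, URL-encode the substituted values so codes with special characters survive the redirect. A small sketch with hypothetical query parameter names:

```python
from urllib.parse import urlencode

def build_redirect(base, completion_code, participant):
    # Query parameter names ("code", "participant") are assumptions
    # matching the example above; check your platform's requirements.
    return base + "?" + urlencode({"code": completion_code,
                                   "participant": participant})

url = build_redirect("https://platform.com/done", "STUDY-SUBJ001-A3F9", "SUBJ001")
```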

Callback URL (Advanced)

Purpose: Server-to-server notification of completion

When to use:

  • More reliable than client-side redirect
  • Verify completion before granting credit
  • Log completions in your system

Receives POST data:

{
  "session_id": "uuid",
  "participant_id": "SUBJ001",
  "chain_id": "CHAIN_xyz",
  "completion_code": "STUDY-SUBJ001-...",
  "status": "completed"
}
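If you implement the callback receiver yourself, a minimal validation of the POSTed fields before granting credit might look like this (a sketch; field names are taken from the payload above):

```python
import json

REQUIRED = {"session_id", "participant_id", "chain_id",
            "completion_code", "status"}

def is_valid_completion(raw_body):
    # Parse the JSON body and confirm the documented fields are
    # present and the session actually finished.
    try:
        payload = json.loads(raw_body)
    except (json.JSONDecodeError, TypeError):
        return False
    return REQUIRED <= payload.keys() and payload["status"] == "completed"

body = json.dumps({"session_id": "uuid", "participant_id": "SUBJ001",
                   "chain_id": "CHAIN_xyz",
                   "completion_code": "STUDY-SUBJ001-X",
                   "status": "completed"}).encode()
```

In production you would also cross-check `completion_code` against what your own records expect before awarding credit.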

Session Management

How Sessions Work

First visit:

  1. Participant clicks chain URL
  2. System creates new session with UUID
  3. Random order generated (if using randomization)
  4. Participant starts at first item

Resuming:

  1. Participant returns to same URL
  2. System recognizes participant (by ID or fingerprint)
  3. Finds incomplete session
  4. Resumes at last incomplete item
  5. Uses same random order as before
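The resume behavior amounts to "continue at the first item not yet completed". A toy sketch of that lookup:

```python
def next_item(completed_indices, order):
    # Return the first item whose index is not in the completed set;
    # None means the whole chain is finished (sketch of the described
    # behavior, not the platform's code).
    for i, item in enumerate(order):
        if i not in completed_indices:
            return item
    return None

order = ["Consent", "Instructions", "Test B", "Test C", "Test A"]
```

A participant who finished the consent form and instructions would resume at "Test B"; once every index is completed, the chain shows the completion page.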

Progress Tracking

Participants see:

  • Progress bar at top showing % complete
  • Item counter: "Item 3 of 7"
  • Visual indicator of current item

Researchers see (in analytics):

  • Which items each participant completed
  • Current item if in progress
  • Completion status (completed, in progress, abandoned)
  • Time stamps for each item

Abandonment

What happens if participant closes browser?

  • Progress saved automatically after each item
  • Can resume anytime (within study expiration)
  • Same session continues
  • Randomization order preserved

Threshold for "abandoned":

  • Test chains: 7 days of inactivity
  • Single tests: 2 days of inactivity

After threshold, session marked as "likely abandoned" in analytics but can still resume.
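The inactivity thresholds above can be checked like so (a sketch; the platform applies this server-side):

```python
from datetime import datetime, timedelta

def likely_abandoned(last_activity, is_chain, now):
    # Documented thresholds: 7 days of inactivity for test chains,
    # 2 days for single tests. Flagged sessions can still resume.
    threshold = timedelta(days=7 if is_chain else 2)
    return now - last_activity > threshold

now = datetime(2025, 10, 31)
```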

Best Practices

1. Keep Chains Reasonably Short

Recommended: 30-45 minutes maximum

Why:

  • Higher completion rates
  • Reduced fatigue effects
  • Better data quality
  • Fewer technical issues (browser crashes, etc.)

For longer studies: Break into multiple chains/sessions

2. Front-Load Important Tests

Strategy: Place most critical tests early in chain

Reasoning:

  • Some participants will drop out
  • Fatigue increases over time
  • You'll get complete data on important tests even from partial completions

3. Include Clear Instructions

At minimum:

  • Welcome/overview: What the study is about
  • Time estimate: Total expected duration
  • Break reminders: For chains > 20 minutes
  • Technical requirements: Browser, audio, full-screen

Example instruction page:

<h2>Study Overview</h2>

<p><strong>Time required:</strong> Approximately 30 minutes</p>
<p><strong>What you'll do:</strong> Complete 4 short cognitive tests</p>
<p><strong>Important:</strong></p>
<ul>
  <li>Use a computer (not a phone)</li>
  <li>Find a quiet place without distractions</li>
  <li>Complete in one sitting if possible</li>
  <li>You can resume later if needed</li>
</ul>

4. Test Extensively Before Launch

Checklist:

  • [ ] Complete chain yourself multiple times
  • [ ] Test on different browsers (Chrome, Firefox, Safari)
  • [ ] Try resuming mid-chain
  • [ ] Verify data uploads for each test
  • [ ] Check completion page/redirect works
  • [ ] Test with pilot participants

5. Use Randomization Appropriately

When to randomize:

  • Multiple tests measuring similar constructs
  • Controlling for order/fatigue effects
  • Counterbalancing conditions

When NOT to randomize:

  • Tests have specific order requirements
  • Tests build on each other
  • Instructions reference specific upcoming test

6. Configure Completion Carefully

For external platforms (Prolific, MTurk):

  • Always use completion codes
  • Test redirect thoroughly
  • Verify credit granting works

For internal studies:

  • Consider callback URL for reliability
  • Generate unique codes for tracking
  • Plan for completion verification

Common Issues

Problem: Participants Can't Resume

Causes:

  • Browser fingerprint changed (cleared cache, different device)
  • Used different participant code
  • Session expired (past study expiration date)

Solutions:

  • Use manual participant codes for critical studies
  • Extend study expiration if needed
  • Instruct participants to use same device/browser

Problem: Progress Not Saving

Causes:

  • Network interruption during upload
  • Browser blocking localStorage
  • JavaScript errors

Solutions:

  • Check browser console (F12) for errors
  • Try different browser
  • Verify internet connection stable
  • Check if browser in private/incognito mode (blocks localStorage)

Problem: Tests in Wrong Order

Causes:

  • Randomization configured when fixed order intended
  • Participant saw different random order than expected

Solutions:

  • Review randomization markers in chain configuration
  • Check the session's randomized_order field in the database
  • Remember: randomization is per-participant, not global

Problem: Completion Code Not Working in Platform

Causes:

  • Code format doesn't match platform requirements
  • Redirect URL missing {COMPLETION_CODE} variable
  • Platform variables not configured correctly

Solutions:

  • Review Platform Integration guide
  • Test complete flow in platform's sandbox/preview
  • Check platform's current API documentation

Advanced: Managing Multiple Chains

Use Cases for Multiple Chains

  1. Pilot vs. Main: Separate chain for pilot testing
  2. Conditions: Different chains for experimental conditions
  3. Versions: Updated chain while keeping old one for comparison
  4. Populations: Child version vs. adult version

Naming Conventions

Use descriptive names:

  • SpatialMemory_Pilot_v1
  • AttentionBattery_Adult
  • CognitiveBattery_ConditionA
  • MemoryTests_PrePost_PRE

Duplicating Chains

To create a variant:

  1. Create new chain manually
  2. Copy item configuration from original
  3. Modify as needed
  4. Test thoroughly (separate chain = separate session tracking)

Related Topics


Need more help? Check the related topics above or contact your platform administrator for technical support.
