
RESEARCH & DESIGN PROJECTS

Here you will find a selection of projects I developed in collaboration with fellow researchers, designers, engineers, and strategists across industry, academia, and government.

 

New projects will be added as they are completed. Please check back for updates.

Thematic Analysis and Network Dashboard Visualization

Center for Human, Artificial Intelligence, and Robot Teaming (CHART)

  • Designed and secured IRB approval for a qualitative study on unmanned aerial vehicle (UAV) innovation across academic, industrial, government, and military domains; conducted Subject Matter Expert (SME) interviews to capture normative and non-normative UAV innovation practices and technology trends

  • Performed a qualitative thematic analysis of SME interviews to identify internal and cross-sector norms, innovation processes, and key enablers and inhibitors of UAV design and development

  • Translated qualitative insights into a human-centered network model, designing interaction workflows, visual hierarchies, and usability structures grounded in systems thinking and cognitive engineering principles

  • Developed and deployed an interactive network dashboard using Python’s Dash framework with Cytoscape.js and Plotly.js to calculate network measures and visualize thematic analysis results with hierarchical layouts, dynamic node highlighting, and contextual data windows for intuitive exploration and decision support

  • The interactive network dashboard is fully data-agnostic, allowing any thematic analysis dataset to be seamlessly ingested and visualized without modification

  • Packaged the completed interactive network dashboard into a standalone Windows executable, enabling local deployment, offline usability, and simplified distribution for non-technical research team members
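As a loose illustration of the kind of network measure a dashboard like this computes, here is a minimal pure-Python degree-centrality sketch. The theme names and edges are hypothetical, and the actual dashboard relies on Cytoscape.js and Plotly.js rather than this code.

```python
# Hypothetical sketch: normalized degree centrality from a thematic-analysis
# edge list. All theme names and edges below are invented for illustration.
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected edge list."""
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    n = len(adjacency)
    # Normalize each node's degree by (n - 1), the maximum possible degree.
    return {node: len(neigh) / (n - 1) for node, neigh in adjacency.items()}

edges = [
    ("Regulation", "Funding"),
    ("Regulation", "Testing Access"),
    ("Funding", "Cross-Sector Collaboration"),
    ("Testing Access", "Cross-Sector Collaboration"),
    ("Regulation", "Cross-Sector Collaboration"),
]
scores = degree_centrality(edges)
```

Because the function only assumes an iterable of node pairs, it stays data-agnostic in the same spirit as the dashboard: any thematic edge list can be passed in unchanged.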

Project details are confidential.

Human-Artificial Intelligence Team Interaction Dynamics and Team Trust

Dynamical Systems in Psychology, Arizona State University  

  • Analyzed communication dynamics in human-AI teams using message-frequency time series to investigate how interaction stability, coupling, and predictability relate to team trust and distrust

  • Conducted full dynamical systems preprocessing in MATLAB, including attractor reconstruction (i.e., time-delay estimation via AMI/autocorrelation, embedding dimensionality via false nearest neighbors [FNN], and radius selection), generating validated phase-space representations for recurrence analysis

  • Performed joint recurrence quantification analysis (JRQA) and multidimensional RQA (MdRQA) to quantify team-level interaction patterns such as recurrence rate, determinism, average/maximum line length, laminarity, and trapping time (i.e., metrics linked to team communication stability and coordination)

  • Identified recurrence-structure differences between human-AI teams with 1 versus 2 AI teammates and between the spread of trust versus distrust

  • Discovered methodological limitations in existing recurrence toolkits (e.g., CRP Toolbox not supporting >2 systems for JRQA), proposing future extensions including multilayer JRPs and MdJRQA for richer modeling of multivariate team interaction data

  • Produced actionable insights for designing adaptive AI teammates by showing which communication characteristics (e.g., stable vs. oscillatory messaging patterns) signal emerging trust or distrust, informing predictive and trust-aware AI teaming capabilities
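The recurrence metrics above can be illustrated with a toy sketch. This is not the CRP Toolbox: the series and radius are invented, and the line of identity is included for simplicity, whereas production RQA typically excludes it.

```python
# Toy recurrence quantification sketch: build a recurrence matrix from a
# 1-D series, then compute recurrence rate and determinism (the share of
# recurrent points lying on diagonal lines of length >= lmin).
def recurrence_matrix(series, radius):
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= radius else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(rm):
    n = len(rm)
    return sum(map(sum, rm)) / (n * n)

def determinism(rm, lmin=2):
    n = len(rm)
    diag_points = 0
    # Walk every diagonal and count points inside runs of length >= lmin.
    for offset in range(-(n - 1), n):
        run = 0
        for i in range(n):
            j = i + offset
            if not 0 <= j < n:
                continue
            if rm[i][j]:
                run += 1
            else:
                diag_points += run if run >= lmin else 0
                run = 0
        diag_points += run if run >= lmin else 0
    total = sum(map(sum, rm))
    return diag_points / total if total else 0.0

# A perfectly periodic toy series: every recurrent point sits on a diagonal,
# so determinism is 1.0 and the recurrence rate is 0.5.
series = [0.1, 0.4, 0.1, 0.4, 0.1, 0.4]
rm = recurrence_matrix(series, radius=0.05)
rr = recurrence_rate(rm)
det = determinism(rm)
```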

Trust and Communication’s Multilevel and Mediated Relationship with Performance in Human-AI Teams

Human Systems Engineering PhD Program, Arizona State University

  • Constructed a multilevel-ready communication metric by conducting an exploratory factor analysis of 12 coded team communication behaviors, yielding 3 factors and associated factor scores used to model team process

  • Implemented four multilevel mediation models in R to evaluate whether communication mediates the trust-performance relationship in human vs. human-AI teams (HATs)

    • Identified data-structure and convergence constraints that impacted mediation pathways

  • Ran two growth models assessing how trust evolves across missions and how trust trajectories influence individual performance, revealing opposing patterns across team types (declining trust and performance in HATs; increasing in human teams)

  • Tested four multilevel models examining the direct and moderated effects of team communication, team member role, and team type on individual performance

    • Found team communication supports photographer performance but degrades navigator performance due to role-based taskwork-teamwork tradeoffs

  • Identified key team-type moderation effects, including reduced performance in HATs driven by the AI agent’s lack of teamwork skills and increased performance in human teams due to greater adaptability and information-sharing resilience

  • Applied effect-size estimation, cross-level probing, and multilevel variance partitioning to characterize how communication dynamics operate differently at the individual and team levels, informing design requirements for future AI teammates with explicit teamwork competencies

Team Trust Linear Lag-1 Order Model

Dynamic Methods for Complex Systems Science, Arizona State University  

  • Developed a Python-based lag-1 autoregressive trust model integrating trust propensity, mission-level team trust scores, individual and team performance variables, and experimentally manipulated communicative and behavioral trust and distrust spread conditions in a human-AI teaming (HAT) scenario

  • Implemented multiple model variants, including an existing weighted lag model for multi-agent systems, an unweighted formulation, and an adjusted-coefficient model to compare how each parameterization captures mission-by-mission fluctuations and stabilizations in team trust across five experimental conditions

  • Evaluated model fidelity by comparing predicted trust trajectories to raw trust scores for representative teams, identifying where parameter weighting failed to recover observed patterns and where additional variables may be required for HAT-specific trust modeling and calibration
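A minimal sketch of what an unweighted lag-1 trust update can look like, assuming a simple drift-toward-performance form. The update rule, parameter, and values here are illustrative stand-ins, not the fitted project model.

```python
# Hypothetical lag-1 trust update: trust at mission t+1 moves a fraction
# alpha of the way from current trust toward that mission's performance
# signal. T_{t+1} = T_t + alpha * (P_t - T_t). All numbers are invented.
def simulate_trust(t0, performance, alpha=0.5):
    """Return a mission-by-mission trust trajectory from initial trust t0."""
    trajectory = [t0]
    for p in performance:
        t = trajectory[-1]
        trajectory.append(t + alpha * (p - t))
    return trajectory

# Trust propensity 0.6 as the starting value, five missions of
# performance scores on a 0-1 scale.
traj = simulate_trust(0.6, [0.8, 0.8, 0.4, 0.4, 0.9], alpha=0.5)
```

Comparing such predicted trajectories against raw trust scores is what exposes where a single weighting parameter fails and additional variables (e.g., condition manipulations) would be needed.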

MSEM Human-AI Communication and Team Performance Model

Structural Equation Modeling, Arizona State University

  • Designed a three-level multilevel structural equation model (MSEM) in Mplus and R to examine how individual-level verbal communication behaviors predict mission-level team performance, with team type (human vs. human-AI teams) modeled as a level-3 covariate

  • Conducted two-level confirmatory factor analyses (CFAs) to evaluate whether 12 coded verbal behaviors load onto a latent team communication factor; inspected model fit, factor loadings, ICCs, and Heywood cases to diagnose cross-level measurement feasibility

  • Identified insufficient mission-level variance and non-converging CFA structures, concluding that team communication is not a single latent factor under the current dataset and that a three-level MSEM is inappropriate

  • Evaluated one-factor individual-level CFA solutions using the lavaan package in R, finding only partial support for communication as a unidimensional construct and highlighting behavioral indicators that do not map cleanly onto a single factor

  • Explained implications for future human-AI teaming studies, including the need for redesigned communication coding schemes, improved measurement alignments across missions, and experimental designs that support multilevel latent variable modeling

Modeling and Analysis of Team Composition and Task Decision Performance in Human Teams and Human-AI Teams

Fundamentals of Complex Adaptive Systems Science, Arizona State University

  • Developed a probabilistic dynamical systems model in Python to simulate team task performance across three team compositions (Human-Human-Human, Human-Human-AI, and Human-AI-AI), using role-based activity diagrams and weighted decision probabilities to represent interdependent team tasks

  • Ran 1,000-iteration simulations per team type and compared simulated success rates to analytically derived expectations, validating model accuracy to within 0.1% across conditions

  • Performed logistic regression and Wald tests to quantify the effect of team composition on performance, finding significantly higher success for all-human teams and for human-AI teams with fewer AI teammates

  • Generated systems-level insights demonstrating how increasing AI presence reduces coordination-dependent performance in highly interdependent tasks, informing guidelines for human-AI team design and role allocation
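The simulation-versus-analytic comparison above can be sketched in a few lines. The per-role success probabilities are invented, and the actual model used role-based activity diagrams and weighted decision probabilities rather than this simple product-of-probabilities assumption.

```python
# Toy Monte Carlo vs. analytic comparison: assume a mission succeeds only
# if all three roles make a correct decision, so the analytic success rate
# is the product of per-role probabilities. Probabilities are hypothetical.
import random

def simulate(success_probs, iterations=10_000, seed=42):
    rng = random.Random(seed)
    wins = sum(
        all(rng.random() < p for p in success_probs)
        for _ in range(iterations)
    )
    return wins / iterations

def analytic(success_probs):
    rate = 1.0
    for p in success_probs:
        rate *= p
    return rate

# Hypothetical Human-Human-AI composition: two humans and one AI teammate.
hhai = [0.95, 0.95, 0.80]
estimated = simulate(hhai)
expected = analytic(hhai)
```

With enough iterations the simulated rate converges on the analytic expectation, which is the same validation logic used to check the full model to within 0.1%.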

A Literature Review on Team Composition, Trust, and Distrust in Human-AI Teams

Human-Automation Interaction, Arizona State University

  • Conducted a literature review to identify how team composition, trust, and distrust influence coordination and performance in human teams and human-AI teams

  • Identified a critical research gap: no validated distrust measures and almost no studies examining distrust independently from trust in human-AI teams, despite theoretical expectations that distrust has distinct cognitive and behavioral effects

  • Designed a follow-up experiment using three-member reconnaissance teams (Human-Human-Human, Human-Human-AI, and Human-AI-AI) with trust vs. distrust manipulations, proposing collection of team performance metrics and repeated trust and distrust assessments across missions

  • Outlined a multilevel structural equation modeling (MSEM) framework to analyze individual-, mission-, and team-level predictors of performance, enabling examination of how team composition moderates the effects of trust and distrust on team effectiveness

  • Delivered recommendations for addressing gaps in human-AI teaming research, including developing distrust measures, incorporating anthropomorphism and attribution factors, and examining coordination dynamics under varied human-AI ratios to guide future experimentation and AI integration strategies

Comparative Usability Study Proposal: Reusable Insulin Cartridge Delivery Pen

Human Factors in Medical Systems, Arizona State University

  • Conducted a human factors evaluation of a reusable insulin cartridge delivery pen, analyzing task demands, perceptual and motor usability issues, error-prevention, and safety-critical failure points for users with dexterity and vision limitations

  • Proposed an improved pen concept with enhanced feedback modalities, increased text legibility, revised dosage indicators, a “dial-back” function, and a release-button lock to target reductions in dosing errors and improved usability for at-risk populations

  • Developed a comparative usability study aligned with FDA validation study principles, including representative use-environment design, scenario-based task sequences, and predefined safety-critical measures

  • Specified experimental procedures, effectiveness and safety metrics, and planned independent-samples t-tests to compare baseline and redesigned devices, enabling evidence-based assessment of usability and error-prevention performance

Ergonomic Moka Pot: A User Experience Research Approach

Methods and Tools in Human Systems Engineering, Arizona State University

  • Led end-to-end design and research for a moka pot with an integrated smart pressure gauge, aimed at improving user experience and brewing safety

  • Conducted mixed-methods research including surveys, observational studies, and co-design workshops to gather user requirements and preferences

  • Performed heuristic and Perception, Cognition, and Action (PCA)-based task analyses to identify pain points and optimize the ergonomic design

  • Developed detailed personas and journey maps to inform design decisions and guide iterative prototyping

  • Facilitated multiple rounds of usability testing, collecting qualitative and quantitative data to refine functionality and interface design

  • Synthesized findings to produce actionable design recommendations, resulting in a moka pot prototype that enhances both usability and user safety

The Integration of Human Factors Teams in Healthcare Systems

Systems Thinking, Arizona State University

  • Designed a systems-based integration strategy to embed human factors teams within medical device development lifecycles, applying complexity leadership theory (CLT) to align administrative, adaptive, and enabling leadership functions

  • Mapped the medical device development ecosystem using CLT concepts to identify structural bottlenecks, communication breakdowns, and misaligned decision pathways that prevent effective integration of human factors work

  • Analyzed healthcare-sector challenges (e.g., siloed workflows, regulatory pressures, limited early-stage usability input, and risk-averse cultures) to diagnose systemic barriers to human factors adoption

  • Applied CLT constructs such as adaptive spaces, network interactions, and conflict-enabled innovation to develop mechanisms that promote cross-functional collaboration between engineering, clinical, regulatory, and human factors teams

  • Synthesized findings into a multi-level solution model that operationalizes enabling leadership to support HF involvement, promote adaptive problem solving, and reduce reliance on rigid administrative structures

  • Produced a comprehensive mind map and systems diagram illustrating stakeholder relationships, information flows, and innovation opportunities to inform organizational decision-making and human factors team placement

  • Generated actionable recommendations for healthcare organizations to restructure workflows, redesign communication channels, and create adaptive spaces that accelerate safe and user-centered medical device development


Multilevel Model of Customer Purchasing Habits

Multilevel Modeling, Arizona State University

  • Acted as the lead data analyst for a retail organization, examining 29K+ customer transactions nested within 84 shopping malls to identify behavioral and contextual drivers of in-store purchasing behavior

  • Built and iteratively refined a two-level multilevel model using R, incorporating customer-level predictors (e.g., reward membership status, home distance from the mall, age, household income, number of items purchased, and coupon use during the transaction) and mall-level characteristics (mall size, total number of stores, whether the main competitor has a store in the same mall, and whether the mall has two or more stories)

  • Conducted centering, random-effects testing, simple slopes analysis, and assumption diagnostics to ensure valid inference

  • Identified significant within-mall predictors of spending, including customer age, household income, and number of items purchased

  • Detected key cross-level interactions such as mall size amplifying the effects of age and income on customer spending

  • Produced actionable recommendations for increasing sales (e.g., targeting higher-income customer segments, optimizing store placement in larger malls, and minimizing co-location with competitors)

  • Documented modeling pipeline, decision logic, and interpretation in a comprehensive report suitable for technical and non-technical stakeholders

Multilevel Human-AI Team Trust Model

Multilevel Modeling, Arizona State University

  • Designed and executed a three-level multilevel model predicting trust in remotely piloted aircraft military teams with an AI pilot by modeling individual-, mission-, and team-level influences on trust in human and human-AI teams

  • Analyzed individual multi-mission performance data showing that trust varied by team member role

  • Identified team composition as a predictor of trust where human-AI teams had lower trust compared to human teams across missions

  • Applied model comparison, ICC estimation, centering decisions, interaction probing, and simple slopes analysis to produce interpretable, multilevel insights for human-AI team designs
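As a pointer to the ICC estimation step above, here is a one-line sketch of ICC(1) computed from null-model variance components; the variance values are made up for illustration.

```python
# ICC(1): the share of total outcome variance attributable to between-team
# differences, computed from an intercept-only (null) model's variance
# components. The component values below are hypothetical.
def icc(between_var, within_var):
    return between_var / (between_var + within_var)

# E.g., hypothetical trust variance: 0.18 between teams, 0.54 within teams,
# giving ICC = 0.25, i.e., 25% of trust variance sits at the team level.
icc_trust = icc(between_var=0.18, within_var=0.54)
```

A nontrivial ICC like this is what justifies moving from single-level regression to a multilevel model in the first place.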

A Deepfake Detection Task: How Theory of Mind and the Uncanny Valley Influence Humans’ Deepfake Detection Abilities

Applied Cognitive Science, Arizona State University

  • Designed an applied cognitive science study examining how individual differences in theory of mind (ToM), uncanny valley perceptions, and trust propensity influence human performance in deepfake detection tasks, integrating literature from cognitive psychology, human-robot interaction, and disinformation research

  • Developed a 60-trial deepfake discrimination task in OpenSesame using authentic and deepfake video stimuli from a public dataset

    • Implemented randomized block structure, controlled timing, and real-time response capture for accuracy, bias, and discrimination latency measurement

  • Constructed a multi-measure theory of mind assessment battery (Adult Eyes Test, Adult Faces Test, Cambridge Mind Reading Test, and Reading Minds in Film Test) and planned Promax-rotated factor analysis with maximum likelihood extraction to derive Bartlett-weighted factor scores representing affect-based ToM ability

  • Proposed a multilevel analytic framework including signal detection analysis (hits, false alarms, d’, and bias), K-means clustering for ToM grouping, and ANCOVA/regression models assessing how ToM levels and uncanny valley perceptions predict detection accuracy under authentic vs. deepfake conditions

  • Developed a set of human factors performance measures including self-confidence calibration, general trust propensity, and AI-specific trust propensity to assess cognitive vulnerabilities and decision-bias patterns in human-AI media environments

  • Identified applied implications for information security, trust calibration, and human-AI teaming, highlighting cognitive factors that make operators more or less susceptible to deepfake manipulation and proposing future work to strengthen human detection capabilities

Visual Search Experiment

Applied Cognitive Science, Arizona State University

  • Designed and implemented a visual search experiment in OpenSesame and Python to examine perceptual attention, feature integration, and visual workload using customized stimuli (orange/purple circles and squares) adapted from established cognitive paradigms

  • Developed a fully crossed within-subjects design in which participants completed conjunction, color-feature, and shape-feature searches with 0, 4, or 14 distractors, enabling assessment of how display complexity influences detection accuracy and response time (RT)

  • Programmed trial sequencing, randomization, and condition logic so each participant experienced all target-distractor configurations, supporting precise evaluation of search efficiency and attentional demands

  • Created controlled visual displays and integrated real-time correctness feedback (green/red indicators) and block-level performance summaries (RT and accuracy)

  • Ensured stimulus timing, input handling, and visual presentation met human factors research standards, generating insights applicable to interface design, signal detection tasks, and operator performance under varying perceptual loads
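The fully crossed trial structure described above can be sketched as follows; the labels, repetition count, and seed are stand-ins rather than the actual OpenSesame loop table.

```python
# Illustrative fully crossed within-subjects trial list: every search type
# is paired with every distractor count, repeated per block and shuffled.
import itertools
import random

def build_trials(reps=4, seed=7):
    search_types = ["conjunction", "color-feature", "shape-feature"]
    set_sizes = [0, 4, 14]
    # Full crossing (3 x 3 = 9 conditions), repeated `reps` times each.
    trials = list(itertools.product(search_types, set_sizes)) * reps
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trials()
```

Balancing repetitions across every target-distractor configuration is what makes per-condition response time and accuracy comparisons valid.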

To view the Visual Search Experiment, download the .zip folder containing the OpenSesame file below

Signal Detection Analysis: Facial Emotion & Social Judgment Discrimination

Applied Cognitive Science, Arizona State University

  • Conducted a full signal detection theory (SDT) analysis to quantify human sensitivity (d’) and response bias (c) in discriminating happy vs. angry and approving vs. disapproving facial expressions using item-level and subject-level metrics

  • Computed hit rates, false alarm rates, and SDT parameters across facial gender conditions, revealing consistent human sensitivity to emotional expressions and systematic biases (e.g., tendencies to classify faces as angry or disapproving)

  • Evaluated gender-based perceptual differences, demonstrating sensitivity differences between male and female faces and quantifying bias shifts related to perceived emotional intensity

  • Performed statistical testing (t-tests on d’ and c) to identify significant differences in discrimination performance across stimulus categories, providing insight into human perceptual asymmetries relevant to human factors domains such as vigilance, threat detection, and personnel selection

  • Synthesized results to inform human factors considerations around facial affect recognition, including how perceptual biases and sensitivity variations may influence operator decision-making in real-world environments
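The d’ and c computations above can be reproduced with the standard library alone. The hit and false-alarm rates below are invented; real analyses should also apply a log-linear or 1/(2N) correction when rates hit 0 or 1, since the inverse normal is undefined there.

```python
# Standard signal detection measures from hit and false-alarm rates:
# d' = z(H) - z(FA) (sensitivity), c = -0.5 * (z(H) + z(FA)) (bias).
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates: 90% hits, 20% false alarms.
d_prime, criterion = sdt_measures(hit_rate=0.90, fa_rate=0.20)
```

A negative c indicates a liberal bias (tendency to respond "signal"), which is how systematic tendencies like "classify as angry" show up in the parameters.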

Signal Detection Analysis: Visual Art Recognition, Working Memory, and Cognitive Predictors

Applied Cognitive Science, Arizona State University

  • Applied signal detection theory (SDT) to assess recognition memory for paintings from eight artists, calculating subject-level and item-level discriminability (d’) and bias (c) across impressionist and abstract expressionist categories, and identifying significantly higher recognition sensitivity for impressionist artworks

  • Evaluated the robustness of SDT modeling by correlating subject- and item-level parameters, finding near-perfect alignment (r = .98–.997) and validating analytical reliability across levels of measurement

  • Integrated SDT outcomes with cognitive performance measures, correlating recognition d’ with N-Back working memory, Alpha Span, and perceptual speed scores, revealing positive associations consistent with human factors models of cognitive capacity and information retention

  • Examined whether subjective aesthetic value predicts discriminability, demonstrating that artwork likability significantly increased recognition sensitivity, highlighting the influence of affective salience on memory-dependent decision making

  • Generated insights relevant to human factors applications in perception, memory, and training, particularly how stimulus properties, cognitive resources, and individual differences shape detection and recognition performance in complex operational contexts

Heart Rate Variability Stability and Workload in Military Teams

Dynamical Systems Modeling with Use Cases in Human Factors, Georgia Institute of Technology

  • Investigated how physiological stability (heart rate variability [HRV]) and subjective workload respond to perturbations in three-person remotely piloted aircraft system (RPAS) military teams

  • Applied dynamical systems methods such as phase space reconstruction (average mutual information, autocorrelation time delay, embedding dimension estimation) and largest Lyapunov exponent calculation to quantify HRV system stability (stable, metastable, and chaotic) at the individual team-member level

  • Implemented controlled failure perturbations (automation failures, autonomy failures, hybrid failures, communication cuts, system power loss, and malicious attack scenarios) and compared HRV stability and workload across non-perturbed and perturbed missions to assess human response to degraded conditions

  • Analyzed subjective workload using unweighted NASA-TLX collected after each non-perturbation and perturbation mission, identifying significant workload increases for the photographer role and role-specific frustration changes across experimental sessions

  • Conducted paired t-tests and role-based comparisons showing that perturbations elevated workload but did not systematically shift HRV stability at the experimental session level, suggesting perturbation effects manifest more strongly in cognitive and experiential workload than in aggregated physiological metrics

  • Identified a strong negative correlation between HRV stability and workload for navigators during perturbed missions, indicating that increased subjective workload coincided with more stable HRV dynamics and highlighting role-dependent physiological-cognitive coupling under stress

  • Generated implications for team training, showing that perturbations can selectively impact communication-critical roles and recommending targeted role-based training approaches to improve resilience in human teams composed of distinct roles with differing task demands
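The phase space reconstruction step can be illustrated with a short time-delay (Takens) embedding sketch. The HRV values, lag, and dimension below are placeholders: in practice tau comes from AMI/autocorrelation and m from false-nearest-neighbors analysis, as described above.

```python
# Time-delay embedding: turn a 1-D series into m-dimensional state vectors
# whose coordinates are samples separated by lag tau. These vectors are the
# input to attractor-based analyses such as Lyapunov exponent estimation.
def delay_embed(series, tau, m):
    """Return the list of m-dimensional delay vectors."""
    n = len(series) - (m - 1) * tau
    return [tuple(series[i + k * tau] for k in range(m)) for i in range(n)]

# Placeholder inter-beat-interval-like values; tau=2 and m=3 are invented.
hrv = [0.80, 0.82, 0.79, 0.85, 0.83, 0.81, 0.84, 0.80]
vectors = delay_embed(hrv, tau=2, m=3)
```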

UAV Operator Workstation Design and Interface Development

Engineering Psychology II, Georgia Institute of Technology

  • Designed three role-based ergonomic unmanned aerial vehicle (UAV) operator workstations (pilot, navigator, payload operator) applying human factors principles including situation awareness (SA), proximity compatibility, display integration, multimodal warning design, and anthropometric workspace layout (5th–95th percentile accommodation)

  • Led GUI design and backend programming in Visual Basic to create interactive, macro-enabled workstation displays, enabling simulated information flow, control activation, and system feedback for instructional and demonstration purposes

  • Developed dynamic displays (e.g., engine and fuel trend displays) and static diagnostic gauges aligned with SA levels 1–3, supporting operator perception, comprehension, and projection of UAV state in nominal and abnormal conditions

  • Designed workstation layouts following the Proximity Compatibility Principle, ensuring primary information was spatially prioritized, perceptually grouped, and arranged to match task proximity and operator workflow demands

  • Specified control types (position, velocity, and acceleration controls) and placed all high-frequency controls within the primary reach envelope, optimizing usability, minimizing movement cost, and supporting rapid response during time-critical tasks

  • Integrated human factors standards and guidance ensuring compliant display formatting, ergonomic seating configuration, visual accessibility, and reduced cognitive workload

  • Produced a complete workstation concept demonstrating how human-factors-driven interface design enhances UAV operator performance, reduces error likelihood, and supports training for complex multi-role remote drone tasks


To view the macro-enabled PowerPoint, download the .zip folder containing the .pptm file below

Technology in Training: A Scoping Review of Immersive Media and AI for Workforce Development

Work and Organizational Psychology, Georgia Institute of Technology

  • Conducted a scoping review of 59 peer-reviewed articles, narrowing to 10 high-quality empirical studies, to evaluate how immersive media technologies (virtual reality [VR], augmented reality [AR], and mixed reality [MR]) and artificial intelligence (AI) influence workplace training performance and transfer across safety, manufacturing, production, surgery, customer service, and educational domains

  • Identified two dominant cross-study themes: training topic (e.g., safety, assembly, and surgery) and training modality (VR, AR, MR, and AI), and analyzed how each contributes to short-term performance gains and longer-term transfer outcomes

  • Evaluated evidence on VR-based training, finding consistent improvements in motivation, engagement, and task-specific performance, but limited empirical support for long-term transfer of training, highlighting critical research gaps for human factors practitioners and training system designers

  • Reviewed AI-assisted training systems where AI served as a real-time feedback agent, synthesizing findings that demonstrate improved trainee decision making, problem detection, and skill acquisition, but minimal evidence for AI as a standalone training modality

  • Mapped limitations across the literature, including insufficient research on AR and MR for workplace training, sparse empirical evaluation of AI-enabled training, limited multi-month transfer studies, and mismatches between immersive training environments and real-world task constraints

  • Generated actionable recommendations for human factors professionals, including adopting immersive training to improve motivation and risk perception, designing feedback-rich training supported by AI, and prioritizing longitudinal evaluation of training transfer in future workplace training research

The Effects of Perceptual and Cognitive Load on Action Affordances: A Research Proposal

Sensation and Perception, Georgia Institute of Technology

  • Developed an applied research proposal investigating how perceptual load (PL) and cognitive load (CL) influence the perception and use of action affordance objects (AAOs) during air-traffic-control (ATC) tasks, integrating theories from affordances, selective attention, working memory, and human-machine interaction

  • Designed a 2x2 factorial experiment using the LABY ATC microworld, manipulating perceptual load via peripheral aircraft density and cognitive load via number of controlled central aircraft to model realistic operator workload demands

  • Proposed comparisons of physical affordance-based controls (dials, sliders, and throttles) versus graphical visualizations (i.e., mouse-driven interface elements) to assess affordance advantages in reaction time, operator preference, and performance under varying workload

  • Integrated signal detection, dual-task interference, and selective attention frameworks to predict how high perceptual load and cognitive load degrade action affordance object use, response times, and ATC performance

  • Outlined analytic procedures including t-tests, repeated-measures ANOVAs, and PL/CL interaction probing to evaluate effects on reaction time, control modality selection, and task performance

  • Generated implications for human factors and user experience design in safety-critical domains, including when affordance-rich controls may support operator efficiency and when workload conditions may negate these benefits

Design Diary: Usability and Heuristic Evaluation Website

Human-Computer Interaction, The Ohio State University

  • Created a full web-based usability evaluation platform to review consumer technologies, integrating human-computer interaction (HCI) design principles, psychology, and user-centered analysis; implemented structured navigation, consistent visual hierarchy, and low memory-load page organization following data display best practices 

  • Conducted eight comprehensive design reviews, each incorporating detailed assessment of mapping, affordances, cognitive biases, color effects, perceptual cues, operant conditioning, mental models, and system feedback

  • Performed heuristic evaluations across 14 categories and 54 heuristics, scoring usability goals, user control, affordances, accessibility, consistency, human limitations, linguistic clarity, responsiveness, and predictability; computed category averages and overall usability scores to generate ranked product recommendations

  • Developed a linked HCI glossary (50+ terms) to support interpretability of technical concepts using embedded hyperlinks across all reviews

  • Authored empirical-style product analyses using real photographs, error cases, task flows, and interaction breakdowns; synthesized findings to explain how design decisions affect user behavior, cognitive load, and system performance

  • Implemented the website using WordPress menus, hierarchical page structures, and interactive glossary links to present an organized, human-centered evaluation experience for readers

Universal Medical Symbol Comprehension and Usability Study

Human Centered Design Team, Battelle Memorial Institute

  • Directed a mixed-method study evaluating user comprehension, interpretation, and usability of universal medical symbols, analyzing two cross-sectional surveys (N = 200) to assess baseline comprehension of currently used medical symbols and the effectiveness of redesigned symbols

  • Performed all quantitative analyses, including descriptive statistics for each symbol and t-tests to determine significant user preference for modified versus original symbols

  • Conducted a qualitative thematic analysis of open-ended responses to identify misinterpretation patterns and generate design recommendations for symbol clarity, perceptual affordances, and intuitive meaning

  • Identified that none of the original medical symbols met the American National Standards Institute’s recommended 85% comprehension threshold, and demonstrated improved comprehension through redesigned symbol prototypes

  • Synthesized findings into actionable recommendations for medical device instructions for use, labeling, and patient-facing safety communication, informing Battelle’s human-centered design guidance
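The 85% comprehension criterion check above can be sketched as a one-sample proportion test (normal approximation). The counts here are hypothetical, not the actual survey results.

```python
# One-sample z-test of an observed comprehension rate against the 85%
# criterion referenced in the study. A strongly negative z (and small
# one-sided p) indicates the symbol falls short of the threshold.
from math import sqrt
from statistics import NormalDist

def comprehension_z(correct, n, threshold=0.85):
    p_hat = correct / n
    se = sqrt(threshold * (1 - threshold) / n)  # SE under the threshold
    z = (p_hat - threshold) / se
    # One-sided p-value for comprehension falling below the threshold.
    p_value = NormalDist().cdf(z)
    return z, p_value

# Hypothetical symbol: 150 of 200 respondents (75%) interpret it correctly.
z, p = comprehension_z(correct=150, n=200)
```

With these made-up counts the symbol significantly fails the criterion, which mirrors how each original symbol was flagged in the study.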

A Human Centric Approach to Symbol Comprehension and Usability

How You Doin’: The Effect of Relationship Status on Self-Esteem

Social Psychology Laboratory, The Ohio State University

  • Designed and executed an experimental study (N = 229) investigating how relationship-status priming influences self-esteem, applying concepts from social cognition, self-schema activation, and gender-based differences in self-evaluation

  • Developed experimental manipulations using the Relationship Assessment Scale and a modified single-status version to prime participants to think about romantic relationships or being single before completing the Rosenberg Self-Esteem Scale

  • Built and administered a multi-branch Qualtrics survey with randomized assignment to priming vs. control conditions; implemented gender and relationship-status factors in a 2x2 and 2x2x2 ANOVA framework to test main effects and interactions

  • Analyzed effects showing that priming participants to reflect on their relationships lowered self-esteem, contrary to predictions, and that males consistently reported higher self-esteem than females across conditions

  • Interpreted significant interactions between gender, priming condition, and relationship status, revealing that self-esteem decreased most when participants in relationships were asked to evaluate their partner, suggesting a reevaluation effect

  • Discussed implications for clinical interviewing and assessment, noting how relationship-focused prompts may temporarily depress self-esteem and risk misclassification during mental-health intake evaluations

Project details are coming soon.
All rights reserved © 2025 Matthew Scalia