Baz Cosmopoulos

University of Michigan '27 | Financial Math & Data Science

bcosm@umich.edu

GitHub
LinkedIn
DemandEngine

AI-powered SaaS idea generation platform.

Hey, I'm Baz. I'm a junior at UMich who loves coding and trading, but also just a chill guy.


Experience Timeline

Mar 2025 – Present

Founder & Solo Full-Stack Engineer

DemandEngine

  • Built a data pipeline ingesting 5k+ Reddit/HN posts daily, extracting 32k pain-point signals to generate 1-2k unique SaaS ideas.
  • Deployed a DeepSeek-7B LLM with a twin-pass ranking system, achieving a 95% manual QA pass rate for generated ideas.
  • Engineered a scalable React + Django/FastAPI stack on GCP/Vercel, handling 100 concurrent users with p99 latency under 1s.

Jan 2025 – Present

VP External & Quant Sports Betting Team Lead

Michigan Finance and Mathematics Society

  • Leading development of a multi-factor sports betting strategy for NCAA basketball, achieving significant backtested ROI.

Sep 2024 – Nov 2024

Structural Analysis Intern

Rocket Lab

  • Reduced full-satellite modal analysis compute time by over 95% (1.5 hours to <5 mins) by developing a reduced-order composite fuel tank model.
  • Closed a critical thermal analysis risk item for a preliminary design review by verifying satellite structural integrity under orbital temperature gradients.
  • Established a standardized test workflow for composite inserts and used chi-square analysis to determine B-basis load allowables per NASA standards.

May 2024 – Jul 2024

Building Physics Intern

Harris

  • Co-authored the first ASHRAE paper on modular-build emissions.
  • Designed a data center thermal-storage cooling system.

Jan 2024 – Apr 2024

Mechanical Engineering Co-op

Copeland

  • Simulated compressor part collisions in NX Nastran and automated lab-test analytics in Python to verify long-term component reliability.

Sep 2023 – Jan 2024

Mechanical Team Member

Michigan Robotic Submarine

  • Designed and CFD-optimized a torpedo launcher, increasing RoboSub scoring potential by 2,000+ points while integrating cleanly into the hull.

May 2023 – Jul 2023

Software Engineer

Stealth Startup

  • Shipped bilingual FastPitch TTS and LoRA-tuned RAG LLMs, wiring the stack into a fully conversational voice customer-service platform.

Jun 2021 – Aug 2021

Engineering Research Intern

EMTECH

  • Co-authored an ESA CubeSat subsystem guide and delivered thermal/electrical analyses for the mission's hardware simulator; the work was later used in ESA's Space Rider mission.

Featured Project

Deep Reinforcement Learning Hedging Agent

Options-based Deep RL hedging for a 10k-share SPY book, trained on rBergomi (GPU) and deployed in Backtrader. Beats classical delta-hedging on volatility, drawdowns, and cost.

  • Volatility: ↓ 23.8%
  • Trading costs: ↓ 96.0%
  • Max drawdown: ↓ 36.3%
  • Hedging efficiency: 24.1×
  • Win rate: 95%
  • rBergomi GPU paths: 100k

Backtests on SPY (2008–2023) using historical underlying + options; costs: $0.65/contract + 10 bps slippage. See repo for exact config.
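
For a concrete point of comparison, here's a minimal sketch of a classical delta-hedging baseline under the same friction assumptions ($0.65/contract plus 10 bps slippage). The flat-vol Black-Scholes delta and all parameter values below are illustrative placeholders, not the repo's actual code.

import numpy as np
from scipy.stats import norm

def bs_call_delta(S, K, T, r, sigma):
    # Black-Scholes call delta; a flat-vol stand-in for the baseline hedger
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d1)

def rebalance_cost(contracts_traded, option_price, per_contract=0.65, slippage_bps=10):
    # Frictions from the backtest note: fixed fee per contract plus proportional slippage
    notional = abs(contracts_traded) * option_price * 100  # 100 shares per contract
    return abs(contracts_traded) * per_contract + notional * slippage_bps / 1e4

# Offsetting the delta of a 10k-share SPY book (direction/sign depends on the position)
shares, S, K, T, r, sigma, opt_px = 10_000, 450.0, 450.0, 30 / 252, 0.04, 0.18, 6.50
delta = bs_call_delta(S, K, T, r, sigma)
contracts = shares / (100 * delta)
print(f"delta={delta:.3f}, contracts={contracts:.1f}, rebalance cost=${rebalance_cost(contracts, opt_px):.2f}")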

rBergomi Demo

Interactive Rough Volatility Simulation

Check out the rBergomi stochastic volatility model that powers the training environment for the Deep RL hedging agent above. This demo generates realistic asset-price and volatility paths with the same mathematical model used to train the agent that outperforms classical delta-hedging. Adjust the parameters to see how the market dynamics change in real time.
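
If you'd rather poke at the same dynamics offline, here's a minimal Python sketch of rBergomi path generation using a crude left-point discretization of the Volterra kernel (the production demo uses an FFT-accelerated scheme, and every parameter value below is a placeholder, not the demo's defaults).

import numpy as np

def rbergomi_paths(n_paths=1000, n_steps=252, T=1.0, H=0.1, eta=1.9,
                   rho=-0.9, xi0=0.04, S0=100.0, seed=0):
    # Crude rBergomi sketch: left-point Riemann sum for the Volterra process.
    # Biased for small H; a hybrid/FFT scheme is the standard fix.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    times = np.arange(n_steps + 1) * dt

    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)       # drives variance
    dW_perp = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    dZ = rho * dW + np.sqrt(1 - rho**2) * dW_perp                    # drives price

    # W~(t_i) = sqrt(2H) * sum_{j<i} (t_i - t_j)^(H - 1/2) * dW_j
    kernel = np.zeros((n_steps, n_steps))
    for i in range(1, n_steps + 1):
        kernel[i - 1, :i] = (times[i] - times[:i]) ** (H - 0.5)
    W_tilde = np.sqrt(2 * H) * dW @ kernel.T

    v = xi0 * np.exp(eta * W_tilde - 0.5 * eta**2 * times[1:] ** (2 * H))
    v_left = np.concatenate([np.full((n_paths, 1), xi0), v[:, :-1]], axis=1)
    log_S = np.log(S0) + np.cumsum(np.sqrt(v_left) * dZ - 0.5 * v_left * dt, axis=1)
    return np.exp(log_S), v

S, v = rbergomi_paths()
print("mean terminal price:", S[:, -1].mean())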

Basic Parameters

Advanced Parameters

Hoops-Spread — NCAA Point-Spread Alpha

End-to-end XGBoost spread model with Boruta→SHAP feature selection, a 50+ subreddit cascading sentiment pipeline (VADER→Flair→DistilBERT+sarcasm), and walk-forward backtests with half-Kelly sizing.

  • ROI (½-Kelly): +9.75%
  • Hit rate: 63.2%
  • Bets: 15,276
  • Sharpe: 2.96
  • Mean CLV: +0.213
  • Max DD: 4.32u

Window: Seasons 2019–2022 walk-forward (train 2007–2018). Half-Kelly staking. Strict chronological splits; sentiment windows lag tip-off by 24h. Out-of-sample only.

More details

What this is

A reproducible NCAA point-spread pipeline that learns a cover probability and sizes stakes with half-Kelly. Two profiles: Market-signal (includes opening total as a weak market prior) and Fundamental (excludes it).
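
For concreteness, here's what half-Kelly sizing looks like given a modeled cover probability at standard -110 pricing (a minimal sketch; the repo's staking code layers bankroll and VaR tracking on top of this):

def half_kelly_stake(p_cover: float, american_odds: int = -110) -> float:
    # Kelly fraction f* = (b*p - q) / b, where b is the net payout per unit staked;
    # stake half of it and never bet a negative edge.
    b = 100 / abs(american_odds) if american_odds < 0 else american_odds / 100
    q = 1.0 - p_cover
    f_star = (b * p_cover - q) / b
    return max(0.0, f_star / 2.0)

# Example: a 63.2% modeled cover probability at -110 stakes about 11% of bankroll
print(f"stake = {half_kelly_stake(0.632):.2%} of bankroll")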

How it works

  • Features: pace/efficiency/SOS, travel & altitude, rolling team stats, 50+ subreddit sentiment EMAs.
  • Sentiment: cascading VADER → Flair → DistilBERT (+ sarcasm); escalate only on uncertainty; dedup + EMAs (see the sketch after this list).
  • Selection: Boruta-SHAP; Model: Optuna-tuned XGBoost; SHAP attribution for interpretability.
  • Backtest: walk-forward refits on 30-day horizons, half-Kelly, bankroll & VaR tracking; CLV and bootstrap CIs.
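
Here's a minimal sketch of the escalate-on-uncertainty idea from the sentiment bullet above. The Flair tier and the sarcasm model are omitted for brevity, and the uncertainty threshold is a placeholder rather than the pipeline's tuned value; the library calls are standard vaderSentiment/transformers usage.

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from transformers import pipeline

vader = SentimentIntensityAnalyzer()
# Heavier model loaded once; only invoked when the cheap score is ambiguous
distilbert = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def cascaded_score(text: str, uncertainty_band: float = 0.3) -> float:
    # Return a sentiment score in [-1, 1], escalating only when VADER is uncertain
    compound = vader.polarity_scores(text)["compound"]
    if abs(compound) >= uncertainty_band:   # confident cheap pass: stop here
        return compound
    result = distilbert(text[:512])[0]      # escalate ambiguous posts (crude char truncation)
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

print(cascaded_score("Refs completely robbed the Wolverines tonight"))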

Results

  • Market-signal: ROI +9.75%, Hit 63.2%, Bets 15,276, Sharpe 2.96, Max DD 4.32u, Mean CLV +0.213, ROI 95% CI +9.61%…+9.89%.
  • Fundamental (baseline): ROI +1.62%, Hit 59.4%, Bets 20,284.

Reproduce quickly

pip install -e .
# Train only
hoops-spread modeling
# Backtest only
hoops-spread backtest
# Train + Backtest (expects features in data/features)
hoops-spread all

Early upstream data/sentiment orchestration lives in /wip; finalized modeling/backtesting are production-ready.

Repo map (fast tour)

  • hoops_spread/modeling/*
  • hoops_spread/backtesting/*
  • config/boruta_features_sentiment.txt
  • /wip/* (upstream data + sentiment)

Notes & limitations

Bet sizing and frictions matter; upstream collection is being consolidated into a single DAG; injury/refs/travel feeds are on the roadmap.

Hybrid Monte Carlo Options Pricer

Revamp in progress · View on GitHub →

Modular pricing engine that generates rBergomi-style paths and evaluates early exercise with four complementary methods, plus an optional Torch-based Bayesian meta-model for post-processing and uncertainty.

rBergomi Asymptotics LSM Dual/Martingale Branching OpenMP CUDA-optional Torch BNN

No live demo. Engine focuses on correctness, modularity, and research workflows.

More details

What this is

A modular American-style options pricer built around Monte Carlo path generation under rough volatility. The engine compares and combines multiple early-exercise estimators and can feed their outputs into a Torch-based Bayesian meta-model to quantify prediction uncertainty—useful for research or downstream screening.

Methods implemented

  • Asymptotic analysis: boundary-style early-exercise approximations for fast screening.
  • Branching processes: upper/lower bounds via randomized tree exploration.
  • LSM (Longstaff–Schwartz): regression of continuation values across paths (see the sketch below).
  • Martingale/duality optimization: variance-reduced bounds on the American price.

(Methods are implemented per standard literature, e.g., Keller Meeting on American Option Pricing, 2005.)
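
To make the LSM bullet concrete, here's a minimal Longstaff–Schwartz sketch in Python for an American put on pregenerated paths. The C++ engine runs the analogous regression over rough-vol paths with its own basis functions and OpenMP parallelization; the GBM paths and polynomial basis below are only for the toy check.

import numpy as np

def lsm_american_put(paths, K, r, dt, degree=3):
    # Backward induction: regress discounted continuation values on in-the-money paths,
    # exercise whenever the immediate payoff beats the fitted continuation value.
    n_paths, n_times = paths.shape
    cashflow = np.maximum(K - paths[:, -1], 0.0)       # value if held to expiry
    for t in range(n_times - 2, 0, -1):
        cashflow *= np.exp(-r * dt)                    # discount one step back
        itm = K - paths[:, t] > 0
        if itm.sum() < degree + 1:
            continue
        coeffs = np.polyfit(paths[itm, t], cashflow[itm], degree)
        continuation = np.polyval(coeffs, paths[itm, t])
        exercise = np.maximum(K - paths[itm, t], 0.0)
        cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])
    return float(np.exp(-r * dt) * cashflow.mean())

# Toy check on GBM paths (the engine feeds rBergomi-style paths instead)
rng = np.random.default_rng(1)
S0, r, sigma, T, n_steps, n_paths = 100.0, 0.05, 0.2, 1.0, 50, 20_000
dt = T / n_steps
z = rng.standard_normal((n_paths, n_steps))
paths = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
paths = np.hstack([np.full((n_paths, 1), S0), paths])
print("American put ~", round(lsm_american_put(paths, K=100.0, r=r, dt=dt), 3))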

Rough-vol paths (rBergomi flavor)

Fractional Gaussian noise with FFT acceleration; automatic estimation of H, vol-of-vol, and correlation from recent returns, then forward-variance construction for path simulation.
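
As a rough illustration of how H can be estimated from data, here's a minimal moment-scaling sketch (in practice the input would be a log realized-volatility series built from recent returns; the engine's actual estimator and the vol-of-vol/correlation fits may differ):

import numpy as np

def estimate_hurst(x, max_lag=20):
    # Scaling of second moments of lagged differences: E[(x_{t+d} - x_t)^2] ~ d^(2H),
    # so H is the log-log regression slope divided by 2.
    lags = np.arange(1, max_lag + 1)
    m2 = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(m2), 1)
    return slope / 2.0

# Sanity check on plain Brownian motion, which should give H close to 0.5
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(10_000))
print("estimated H:", round(estimate_hurst(bm), 3))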

Bayesian meta-model (optional)

Torch/LibTorch model (supports MC-Dropout) to post-process pricer outputs and produce a mean prediction with uncertainty bands. Runs on CPU by default; GPU/CUDA is optional.
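
A minimal PyTorch sketch of the MC-Dropout idea (the architecture, feature count, and names are placeholders; the engine's LibTorch model is separate):

import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    # Small MLP whose dropout layers stay stochastic at inference time
    def __init__(self, n_features, hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=100):
    # Mean prediction plus an uncertainty band from repeated stochastic forward passes
    model.train()                      # keep dropout active
    draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)

# Features could be the four estimators' prices plus contract terms (illustrative only)
model = DropoutRegressor(n_features=6)
mean, band = mc_dropout_predict(model, torch.randn(8, 6))
print(mean.squeeze(), band.squeeze())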

Implementation notes

C++ core with OpenMP for parallel batches; LibTorch for the BNN; CLI pipeline to augment option datasets (no public demo).

Current status

Actively being revamped; updated results pending. Keeping the card number-free for now.

Published Research

Trader Behavior in 2024 Election Prediction Markets

An analysis of retail and institutional impact on Kalshi prediction markets. This research investigates the microstructure of political prediction markets, segmenting traders using a Gaussian Mixture Model to analyze their respective impacts on price movements and market efficiency.

View Data & Code on GitHub →
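
For a flavor of the segmentation step, here's a minimal scikit-learn GaussianMixture sketch on simulated per-trader features (the feature set, preprocessing, and component count here are illustrative; the study's actual pipeline is in the linked repo):

import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Simulated trades; columns are stand-ins for the study's actual inputs
rng = np.random.default_rng(0)
trades = pd.DataFrame({
    "trader_id": np.repeat(np.arange(200), 50),
    "size": rng.lognormal(3, 1, 10_000),
    "price": rng.uniform(0.05, 0.95, 10_000),
})
features = trades.groupby("trader_id").agg(
    mean_size=("size", "mean"),
    size_std=("size", "std"),
    n_trades=("size", "count"),
    mean_price=("price", "mean"),
)

X = StandardScaler().fit_transform(np.log1p(features))
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
features["segment"] = gmm.predict(X)   # e.g. retail-like vs institution-like clusters
print(features.groupby("segment")["mean_size"].describe())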

Key Findings

→ Retail Influence: Retail flow was more predictive of subsequent price changes in the KH market.

→ Institutional Impact: Institutional impact was more significant in the DJT market.

→ Market Resilience: Markets showed high resilience to large volume trades from either group.

→ Complex Corrections: Institutional behavior post-mispricing was complex and not purely corrective.

Skills Visualization

Click and drag to explore the network. Hover over a node to highlight its connections.

Legend

Skill Category
Project / Experience