Experience Timeline
Mar 2025 – Present
Founder & Solo Full-Stack Engineer
DemandEngine
- Built a data pipeline ingesting 5k+ Reddit/HN posts daily, extracting 32k pain-point signals to generate 1-2k unique SaaS ideas.
- Deployed a DeepSeek-7B LLM with a twin-pass ranking system, achieving a 95% manual QA pass rate for generated ideas.
- Engineered a scalable React + Django/FastAPI stack on GCP/Vercel, handling 100 concurrent users with p99 latency under 1s.
Jan 2025 – Present
VP External & Quant Sports Betting Team Lead
Michigan Finance and Mathematics Society
- Leading development of a multi-factor sports betting strategy for NCAA basketball, achieving significant backtested ROI.
Sep 2024 – Nov 2024
Structural Analysis Intern
Rocket Lab
- Reduced full-satellite modal analysis compute time by over 95% (1.5 hours to <5 mins) by developing a reduced-order composite fuel tank model.
- Closed a critical thermal analysis risk item for a preliminary design review by verifying satellite structural integrity under orbital temperature gradients.
- Established a standardized test workflow for composite inserts and used chi-square analysis to determine B-basis load allowables per NASA standards.
May 2024 – Jul 2024
Building Physics Intern
Harris
- Co-authored the first ASHRAE paper on modular-build emissions.
- Designed a data center thermal-storage cooling system.
Jan 2024 – Apr 2024
Mechanical Engineering Co-op
Copeland
- Simulated compressor part collisions in NX Nastran and automated lab-test analytics in Python to verify long-term component reliability.
Sep 2023 – Jan 2024
Mechanical Team Member
Michigan Robotic Submarine
- Designed and CFD-optimized a torpedo launcher, increasing RoboSub scoring potential by 2,000+ points while integrating cleanly into the hull.
May 2023 – Jul 2023
Software Engineer
Stealth Startup
- Shipped bilingual FastPitch TTS and LoRA-tuned RAG LLMs, wiring the stack into a fully conversational voice customer-service platform.
Jun 2021 – Aug 2021
Engineering Research Intern
EMTECH
- Co-authored an ESA CubeSat subsystem guide and delivered thermal/electrical analyses for the mission’s hardware simulator; the work was later used in ESA's Space Rider mission.
Featured Project
Deep Reinforcement Learning Hedging Agent
Options-based Deep RL hedging for a 10k-share SPY book, trained on rBergomi (GPU) and deployed in Backtrader. Beats classical delta-hedging on volatility, drawdowns, and cost.
Backtests on SPY (2008–2023) using historical underlying + options; costs: $0.65/contract + 10 bps slippage. See repo for exact config.
▶ rBergomi Demo
Interactive Rough Volatility Simulation
Check out the rBergomi stochastic volatility model that powers the training environment for the Deep RL hedging agent below. This demo generates realistic option pricing paths with the same mathematical model used to train the agent that outperforms classical delta-hedging. Adjust parameters to see how market dynamics change in real-time.
Basic Parameters
Advanced Parameters
More details
What this is
A Deep RL agent that hedges a 10,000-share SPY position with liquid long-dated ATM calls & puts. The policy is Recurrent PPO (LSTM) trained on 100k rBergomi rough-volatility paths (GPU), then validated in Backtrader on historical SPY underlying + options. Goal: minimize PnL volatility and drawdown with materially lower cost than classical delta-hedging.
How it works
Simulator: rBergomi paths + ATM option prices (CuPy/CUDA).
Policy: RecurrentPPO (LSTM).
State (13 dims): SPY, ATM call/put mids, current call/put holdings, portfolio Δ & Γ, time-to-go, vol & lagged changes.
Actions (continuous, 2): call_qty ∈ [−100,+100], put_qty ∈ [−100,+100] per step (clipped/rounded to contracts).
Reward: minimize |ΔPnL| (or MSE/CVaR) + λ·transaction_costs; position limits enforced.
Training: HPO with Optuna, then long final train; evaluation over multiple seeds with VecNormalize.
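As a minimal sketch of the action and reward handling above (not the repo's actual `hedging_env.py`; function names and the two-asset simplification are mine), the per-step logic roughly amounts to clipping the policy output to contract/position limits and scoring the hedged PnL change net of a cost penalty:

```python
import numpy as np

def apply_action(raw_action, holdings, max_trade=100, max_pos=200):
    """Round the policy's continuous [call_qty, put_qty] output to whole
    contracts, clip per-trade and outstanding-position limits, and return
    the new holdings plus the trade actually executed."""
    trade = np.clip(np.round(raw_action), -max_trade, max_trade)
    new_holdings = np.clip(holdings + trade, -max_pos, max_pos)
    return new_holdings, new_holdings - holdings

def reward(dpnl, trade_costs, lam=0.001):
    """Minimize |ΔPnL| plus λ-weighted transaction costs (the 'abs' loss)."""
    return -(abs(dpnl) + lam * trade_costs)
```

A policy asking for +150 calls while already holding 180 would be cut twice: once by the ±100 per-trade clip, then by the ±200 outstanding limit, executing only +20.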
Backtesting setup
- Engine: Backtrader (daily).
- Data: data/spy_underlying.csv, data/spy_options.csv (real options, ATM selection with fallbacks).
- Costs: $0.65/contract commission + 10 bps slippage on notional; contract multiplier 100; rolling near-expiry; daily rebalance.
- Constraints: ±200 contracts/type outstanding; ±100 contracts/trade; 30-calendar-day tenor target.
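The cost model above is simple enough to sketch directly (an illustrative helper, not the backtester's code; the function name is mine):

```python
def trade_cost(contracts, option_mid, commission=0.65,
               slippage_bps=10, multiplier=100):
    """Per-trade friction: $0.65/contract commission plus 10 bps
    slippage charged on the traded option notional."""
    notional = abs(contracts) * option_mid * multiplier
    return abs(contracts) * commission + notional * slippage_bps / 1e4
```

For example, trading 10 contracts at a $5.00 mid moves $5,000 of notional, so the charge is $6.50 commission + $5.00 slippage = $11.50.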
Results (vs classical delta-hedge)
- Volatility: −23.8% (0.35 vs 0.46).
- Max drawdown: −36.3% (−0.85 vs −1.33).
- Trading costs: −96.0% ($83,168 vs $2,087,810).
- Hedging efficiency: 12.02× (vs 0.48).
- Win rate across seeds: 95%.
(See repo tables for full metrics: downside deviation, ulcer/pain indices, equity stability.)
Reproduce quickly
pip install -r requirements.txt && pip install -e .
# Hyperparameter search (default 10 trials)
hedgerl train --hpo
# Final training with your chosen loss/weights
hedgerl train --loss_type abs --w 0.05 --lam 0.001
# Backtest (single or looped seeds)
hedgerl backtest
hedgerl backtest --loop 5
# Explore Pareto frontier (risk ↔ cost)
hedgerl generate-pareto
Repo map (just the bits recruiters care about)
src/sim/rbergomi_sim.py (GPU rough-vol paths & ATM pricing) • src/env/hedging_env.py (state/action/reward) • src/agents/train_ppo.py (RecurrentPPO + Optuna) • src/backtester/* (Backtrader, model wrapper, delta baseline) • model_files/* (exported weights + normalization)
Notes & limitations
Sim-to-real gaps (liquidity, microstructure) remain; ATM selection uses robust mids with BS fallback; commission & slippage are parameterized; single-asset scope (multi-asset on roadmap).
Hoops-Spread — NCAA Point-Spread Alpha
End-to-end XGBoost spread model with Boruta→SHAP feature selection, a 50+ subreddit cascading sentiment pipeline (VADER→Flair→DistilBERT+sarcasm), and walk-forward backtests with half-Kelly sizing.
Window: Seasons 2019–2022 walk-forward (train 2007–2018). Half-Kelly staking. Strict chronological splits; sentiment windows lag tip-off by 24h. Out-of-sample only.
More details
What this is
A reproducible NCAA point-spread pipeline that learns a cover probability and sizes stakes with half-Kelly. Two profiles: Market-signal (includes opening total as a weak market prior) and Fundamental (excludes it).
How it works
- Features: pace/efficiency/SOS, travel & altitude, rolling team stats, 50+ subreddit sentiment EMAs.
- Sentiment: cascading VADER → Flair → DistilBERT (+ sarcasm); escalate only on uncertainty; dedup + EMAs.
- Selection: Boruta-SHAP; Model: Optuna-tuned XGBoost; SHAP attribution for interpretability.
- Backtest: walk-forward refits on 30-day horizons, half-Kelly, bankroll & VaR tracking; CLV and bootstrap CIs.
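The half-Kelly staking step above has a closed form worth spelling out (a textbook sketch under standard −110 spread pricing, not the repo's exact sizing code; the function name is mine):

```python
def half_kelly_stake(p_cover, american_odds=-110):
    """Half-Kelly bankroll fraction for a point-spread bet.
    b is the net payout per unit staked (e.g., -110 pays 100/110)."""
    b = 100 / abs(american_odds) if american_odds < 0 else american_odds / 100
    f_full = p_cover - (1 - p_cover) / b   # full Kelly: f = p - q/b
    return max(0.0, 0.5 * f_full)          # half-Kelly; never bet a negative edge
```

At p_cover = 0.60 and −110 juice this stakes 8% of bankroll; at p_cover = 0.50 the edge is negative and the stake is zero, which is why well-calibrated cover probabilities matter more than raw hit rate.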
Results
- Market-signal: ROI +9.75%, Hit 63.2%, Bets 15,276, Sharpe 2.96, Max DD 4.32u, Mean CLV +0.213, ROI 95% CI +9.61%…+9.89%.
- Fundamental (baseline): ROI +1.62%, Hit 59.4%, Bets 20,284.
Reproduce quickly
pip install -e .
# Train only
hoops-spread modeling
# Backtest only
hoops-spread backtest
# Train + Backtest (expects features in data/features)
hoops-spread all
Early upstream data/sentiment orchestration lives in /wip; finalized modeling/backtesting are production-ready.
Repo map (fast tour)
hoops_spread/modeling/* • hoops_spread/backtesting/* • config/boruta_features_sentiment.txt • /wip/* (upstream data + sentiment)
Notes & limitations
Bet sizing and frictions matter; upstream collection is being consolidated into a single DAG; injury/refs/travel feeds are on the roadmap.
Hybrid Monte Carlo Options Pricer
Modular pricing engine that generates rBergomi-style paths and evaluates early exercise with four complementary methods, plus an optional Torch-based Bayesian meta-model for post-processing and uncertainty.
No live demo. Engine focuses on correctness, modularity, and research workflows.
More details
What this is
A modular American-style options pricer built around Monte Carlo path generation under rough volatility. The engine compares and combines multiple early-exercise estimators and can feed their outputs into a Torch-based Bayesian meta-model to quantify prediction uncertainty—useful for research or downstream screening.
Methods implemented
- Asymptotic analysis: boundary-style early-exercise approximations for fast screening.
- Branching processes: upper/lower bounds via randomized tree exploration.
- LSM (Longstaff–Schwartz): regression of continuation values across paths.
- Martingale/duality optimization: variance-reduced bounds on the American price.
(Methods are implemented per standard literature, e.g., Keller Meeting on American Option Pricing, 2005.)
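Of the four methods, LSM is the easiest to sketch end-to-end. Below is a self-contained toy version for an American put under GBM (the engine prices under rough volatility with a richer basis; everything here, including the quadratic-polynomial regression basis, is a simplification of mine):

```python
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=0):
    """Longstaff-Schwartz: regress discounted continuation values on the
    spot across in-the-money paths, exercising whenever intrinsic > fit."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # GBM log-paths, prepending the t=0 column
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_s]))
    disc = np.exp(-r * dt)
    cash = np.maximum(K - S[:, -1], 0.0)          # payoff at maturity
    for t in range(n_steps - 1, 0, -1):           # backward induction
        cash *= disc                              # discount one step
        itm = (K - S[:, t]) > 0
        if itm.sum() > 3:
            x = S[itm, t]
            cont = np.polyval(np.polyfit(x, cash[itm], 2), x)
            exercise = (K - x) > cont
            idx = np.where(itm)[0][exercise]
            cash[idx] = (K - x)[exercise]         # exercise beats continuation
    return disc * cash.mean()
```

With these parameters the estimate lands near the known benchmark of ~6.1 (the European Black-Scholes put is ~5.57, so the early-exercise premium is clearly visible).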
Rough-vol paths (rBergomi flavor)
Fractional Gaussian noise with FFT acceleration; automatic estimation of H, vol-of-vol, and correlation from recent returns, then forward-variance construction for path simulation.
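The fGn covariance that the FFT method exploits can be shown with a much simpler O(n²) Cholesky sketch (illustrative only; the engine's actual path generator uses the FFT route, and these function names are mine):

```python
import numpy as np

def fgn_cov(n, H):
    """Autocovariance matrix of unit-variance fractional Gaussian noise:
    gamma(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def fgn_sample(n, H, seed=0):
    """One fGn path: correlate i.i.d. normals through the Cholesky factor."""
    L = np.linalg.cholesky(fgn_cov(n, H))
    return L @ np.random.default_rng(seed).standard_normal(n)
```

A useful sanity check: at H = 0.5 the covariance collapses to the identity (plain white noise), while H < 0.5 gives the negatively autocorrelated "rough" increments that drive rBergomi's forward-variance dynamics.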
Bayesian meta-model (optional)
Torch/LibTorch model (supports MC-Dropout) to post-process pricer outputs and produce a mean prediction with uncertainty bands. Runs on CPU by default; GPU/CUDA is optional.
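The MC-Dropout idea above is a few lines in PyTorch: keep dropout layers sampling at inference time and treat the spread across stochastic forward passes as an uncertainty band (a generic sketch, not the engine's LibTorch code; the helper name is mine):

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_passes=50):
    """Mean prediction + epistemic std via MC-Dropout: put the model in
    eval mode but flip Dropout modules back to train so they keep sampling,
    then aggregate over repeated stochastic forward passes."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()  # re-enable dropout sampling at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_passes)])
    return preds.mean(dim=0), preds.std(dim=0)
```

With dropout probability 0 every pass is identical and the std collapses to zero, which is a quick way to verify the plumbing before trusting the bands.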
Implementation notes
C++ core with OpenMP for parallel batches; LibTorch for the BNN; CLI pipeline to augment option datasets (no public demo).
Current status
Actively being revamped; updated results pending. Keeping the card number-free for now.
Published Research
Trader Behavior in 2024 Election Prediction Markets
An analysis of retail and institutional impact on Kalshi prediction markets. This research investigates the microstructure of political prediction markets, segmenting traders using a Gaussian Mixture Model to analyze their respective impacts on price movements and market efficiency.
View Data & Code on GitHub →
Key Findings
→ Retail Influence: Retail flow was more predictive of subsequent price changes in the KH market.
→ Institutional Impact: Institutional impact was more significant in the DJT market.
→ Market Resilience: Markets showed high resilience to large volume trades from either group.
→ Complex Corrections: Institutional behavior post-mispricing was complex and not purely corrective.
Skills Visualization
Click and drag to explore the network. Hover over a node to highlight its connections.