Druce Vertes

Brooklyn, New York, United States

1K followers · 500+ connections

Experience & Education

  • Tuck Advisors

Publications

  • Beyond the 4% Rule: Flexible Withdrawal Strategies Using Certainty-Equivalent Spending

    Advisor Perspectives

    The Bengen 4% rule was devised using historical 30-year retirement cohorts and a no-shortfall constraint: a retiree who spends 4% of the starting portfolio in the first year and adjusts that amount annually for inflation has, historically, never experienced a shortfall within 30 years. If you wanted only to maximize total spending over a 30-year retirement, without regard for shortfall risk, a gradient-free optimizer like Optuna finds that spending 6.3% of the current portfolio each year with a 3.7% floor historically produced higher total spending, though some cohorts ran out of money. In between, we can find rules for different levels of risk aversion by maximizing certainty-equivalent spending under CRRA utility; a minimal sketch appears after this list. These generalized Bengen rules let you spend more than 4%, at some risk of having to reduce spending after poor investment returns.

  • Deep Reinforcement Learning For Trading Applications

    Alpha Architect

    Reinforcement learning is a machine learning paradigm that learns complex behavior to maximize reward in dynamic environments such as trading. In this post, I share code applying deep reinforcement learning to several environments, including trading a non-random time series, and discuss the implications of emergent multi-agent behavior in markets. A toy sketch of the paradigm appears after this list.

  • Quantitative Fun With Fund Names

    StreetEYE Blog

    Using data to study how funds are named, with a machine learning-driven fund name generator.

  • Machine Learning for Investors: A Primer

    Alpha Architect

    An overview of machine learning for investors.

  • The Top 100 People To Follow To Discover Financial News On Twitter, May 2017

    StreetEYE Blog

    Using data to discover the most influential sharers of financial news on Twitter, and to visualize the FinTwittersphere.

  • Active vs. Passive Investing and the “Suckers at the Poker Table” Fallacy

    CFA Institute Enterprising Investor

    Warren Buffett says most people should invest passively. He also says the more people invest passively, the easier it is for him to outperform them. How can both be true?

  • Getting Schooled in Risk: The Lessons of Poker

    CFA Institute Enterprising Investor

    What the game of poker teaches us about investing.

  • Safe Withdrawal Rates, Optimal Retirement Portfolios, and Certainty-Equivalent Spending

    SSRN

    Generalizing the Bengen ‘4% rule’ to allow variable spending, we used certainty-equivalent spending under constant relative risk aversion to show that risk-neutral retirees maximize lifetime spending by varying spending as a percentage of assets, while highly risk-averse retirees tend toward a fixed spending strategy.

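The optimization described in the "Beyond the 4% Rule" publication above can be sketched compactly. Below is a minimal, illustrative version: it scores a floor-plus-percentage withdrawal rule by certainty-equivalent spending under CRRA utility and searches the rule's two parameters with Optuna. The simulated returns, parameter ranges, and risk-aversion coefficient are assumptions for illustration, not the paper's historical cohorts or results.

```python
# Minimal sketch: maximize certainty-equivalent (CE) spending with Optuna.
# Illustrative assumptions: random normal returns instead of historical
# cohorts, gamma = 2, floor expressed as a fraction of the starting portfolio.
import numpy as np
import optuna

rng = np.random.default_rng(0)
RETURNS = rng.normal(0.05, 0.12, size=(100, 30))  # 100 cohorts x 30 years
GAMMA = 2.0                                        # CRRA risk aversion

def ce_spending(floor, pct):
    """Certainty-equivalent spending for a floor-plus-percentage rule."""
    utils = []
    for cohort in RETURNS:
        port = 1.0                           # starting portfolio = 1
        for r in cohort:
            spend = min(port, max(floor, pct * port))
            port = (port - spend) * (1 + r)
            c = max(spend, 1e-6)             # avoid u(0) = -inf under CRRA
            utils.append(c ** (1 - GAMMA) / (1 - GAMMA))
    # invert the CRRA utility of the mean utility to recover CE spending
    return ((1 - GAMMA) * np.mean(utils)) ** (1 / (1 - GAMMA))

def objective(trial):
    floor = trial.suggest_float("floor", 0.00, 0.05)  # fixed minimum spend
    pct = trial.suggest_float("pct", 0.00, 0.10)      # % of current portfolio
    return ce_spending(floor, pct)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
print(study.best_params, study.best_value)
```

Setting GAMMA to 0 recovers the risk-neutral maximize-total-spending case; raising it pushes the optimum toward the classic fixed-spending Bengen rule.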

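For the reinforcement learning post, here is a deliberately stripped-down sketch of the paradigm: tabular Q-learning (far simpler than the deep RL used in the post) trading a deterministic sine-wave "price", so the learned behavior is easy to inspect. Every detail of this toy environment is an illustrative assumption, not the post's code.

```python
# Toy sketch: tabular Q-learning on a predictable (non-random) price series.
# States are discretized price levels; actions are short/flat/long; the
# reward is the one-step P&L of the chosen position.
import numpy as np

prices = 10 + np.sin(np.arange(200) * 0.3)   # deterministic "price"
n_states = 8                                  # price buckets
actions = [-1, 0, 1]                          # short, flat, long
Q = np.zeros((n_states, len(actions)))
alpha, discount, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def state(t):
    lo, hi = prices.min(), prices.max()
    return min(int((prices[t] - lo) / (hi - lo + 1e-9) * n_states), n_states - 1)

for episode in range(500):
    for t in range(len(prices) - 1):
        s = state(t)
        # epsilon-greedy action selection
        a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
        reward = actions[a] * (prices[t + 1] - prices[t])   # position P&L
        Q[s, a] += alpha * (reward + discount * Q[state(t + 1)].max() - Q[s, a])

# Greedy action per price bucket: roughly long at the lows, short at the highs.
print(Q.argmax(axis=1))
```
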
Projects

  • Beyond the 4% rule: Finding the best flexible withdrawal rates for retirement spending within a risk framework by maximizing 'certainty-adjusted spending'

    Using an evaluation metric like certainty-equivalent spending under constant relative risk aversion, and gradient-free optimizers like Hyperopt and Optuna, we can identify optimal risk-adjusted retirement spending rules that combine a fixed minimum withdrawal amount with a percentage of the portfolio value. A Hyperopt sketch appears after this list.

  • Deep Reinforcement Learning For Trading

    Intro to reinforcement learning with trading examples.

  • Distributed hyperparameter tuning with Ray Tune, Hyperopt and Optuna

    How to run a large gradient-free optimization task, such as hyperparameter tuning, on a cloud computing cluster. A Ray Tune sketch appears after this list.

  • MTA Dashboard

    A modern data stack project using the DuckDB column-oriented database as a data warehouse, dbt, and Plotly Dash to analyze MTA turnstile data. A DuckDB sketch appears after this list.

  • Portfolio optimization with CVXOPT

    Uses the CVXOPT optimization library to identify optimal asset allocations from Damodaran historical return data in a mean-variance framework. A minimal sketch appears after this list.
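
The first project above mentions Hyperopt as well as Optuna; for comparison, here is the same floor-plus-percentage search expressed in Hyperopt's fmin API. As with the Optuna sketch under Publications, the return data and parameter ranges are illustrative assumptions; note that fmin minimizes, so the certainty-equivalent value is negated.

```python
# Sketch: the floor-plus-percentage CE search with Hyperopt. Same
# illustrative assumptions as the Optuna sketch (random returns, gamma = 2).
import numpy as np
from hyperopt import Trials, fmin, hp, tpe

rng = np.random.default_rng(0)
RETURNS = rng.normal(0.05, 0.12, size=(100, 30))  # 100 cohorts x 30 years
GAMMA = 2.0

def neg_ce(params):
    floor, pct = params["floor"], params["pct"]
    utils = []
    for cohort in RETURNS:
        port = 1.0
        for r in cohort:
            spend = min(port, max(floor, pct * port))
            port = (port - spend) * (1 + r)
            utils.append(max(spend, 1e-6) ** (1 - GAMMA) / (1 - GAMMA))
    ce = ((1 - GAMMA) * np.mean(utils)) ** (1 / (1 - GAMMA))
    return -ce                     # hyperopt minimizes, so negate CE

best = fmin(
    fn=neg_ce,
    space={"floor": hp.uniform("floor", 0.00, 0.05),   # fixed minimum
           "pct": hp.uniform("pct", 0.00, 0.10)},      # % of portfolio
    algo=tpe.suggest,
    max_evals=200,
    trials=Trials(),
)
print(best)
```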
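
For the distributed tuning project, a minimal sketch of fanning a gradient-free search out over a Ray cluster. It assumes Ray 2.x's tune.Tuner API and uses a stand-in objective; on a real cluster you would point ray.init at the cluster address, and Ray Tune can also drive Optuna or Hyperopt as search algorithms.

```python
# Sketch: distributed gradient-free search with Ray Tune (assumes Ray 2.x).
# The objective is a stand-in; in practice it would train and score a model.
import ray
from ray import tune

def objective(config):
    x, y = config["x"], config["y"]
    return {"score": -(x - 3) ** 2 - (y + 1) ** 2}  # toy function to maximize

ray.init()  # on a cluster: ray.init(address="auto")
tuner = tune.Tuner(
    objective,
    param_space={"x": tune.uniform(-10, 10), "y": tune.uniform(-10, 10)},
    tune_config=tune.TuneConfig(metric="score", mode="max", num_samples=100),
)
results = tuner.fit()   # trials are scheduled across the cluster's workers
print(results.get_best_result().config)
```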
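
For the MTA dashboard, a sketch of the warehouse layer: loading a turnstile extract into DuckDB and aggregating it with SQL. The file and column names are hypothetical placeholders (the real MTA turnstile files are formatted differently); in the project itself, dbt manages the transformations and Plotly Dash serves the charts.

```python
# Sketch: DuckDB as an embedded column-oriented warehouse for turnstile data.
# File and column names are hypothetical placeholders.
import duckdb

con = duckdb.connect("mta.duckdb")
con.execute("""
    CREATE OR REPLACE TABLE turnstile AS
    SELECT * FROM read_csv_auto('turnstile_sample.csv')  -- hypothetical extract
""")
busiest = con.execute("""
    SELECT station, SUM(entries) AS total_entries
    FROM turnstile
    GROUP BY station
    ORDER BY total_entries DESC
    LIMIT 10
""").fetchdf()   # returns a pandas DataFrame for the Dash layer
print(busiest)
```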
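
And for the portfolio optimization project, a minimal mean-variance sketch with CVXOPT's quadratic programming solver: minimize portfolio variance subject to a target return, full investment, and no shorting. The expected returns and covariances below are made-up placeholders, not Damodaran's data.

```python
# Sketch: minimum-variance portfolio for a target return via CVXOPT QP.
# Inputs are illustrative placeholders, not Damodaran's historical data.
import numpy as np
from cvxopt import matrix, solvers

mu = np.array([0.10, 0.06, 0.03])            # expected returns (assumed)
Sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.010, 0.001],
                  [0.002, 0.001, 0.005]])    # covariance matrix (assumed)
r_target = 0.07
n = len(mu)

P = matrix(Sigma)                            # objective: (1/2) w' Sigma w
q = matrix(np.zeros(n))
G = matrix(np.vstack([-np.eye(n), -mu]))     # w >= 0 and mu'w >= r_target
h = matrix(np.hstack([np.zeros(n), [-r_target]]))
A = matrix(np.ones((1, n)))                  # weights sum to 1
b = matrix(1.0)

solvers.options["show_progress"] = False
sol = solvers.qp(P, q, G, h, A, b)
print(np.array(sol["x"]).ravel())            # optimal asset weights
```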

Languages

  • French

  • German
