Simple Moving Average Strategy with a Volatility Filter

I would describe my trading approach as systematic, long-term trend following. A trend following strategy can be mentally difficult to trade after multiple consecutive losses, whether a trade reverses because of a volatility spike or because the trend itself reverses. Volatility tends to increase when prices fall, which is bad for a long-only trend following strategy, especially when initially entering trades.

Can adding a volatility filter to a simple system improve performance?

SMA System with Volatility Filter Rules

  • Buy Rule: Go long if close is greater than the N period SMA and a volatility measure is less than its median over the last N periods.
  • Exit Rule: Exit if long and close is less than the N period SMA.

SMA System without Volatility Filter Rules

  • Buy Rule: Go long if close is greater than the N period SMA.
  • Exit Rule: Exit if close is less than the N period SMA.

For this test, my volatility measure is the 52 period standard deviation of the 1 period change in close prices, and I will use a 52 period SMA.
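As a standalone illustration, the filter condition can be sketched in a few lines of R using the TTR package (the function name `low_vol_uptrend` and its inputs are mine, for illustration, not from the original post):

```r
library(TTR)

# Sketch of the entry filter: close above its n-period SMA AND the volatility
# of 1-period close changes below its own n-period running median.
low_vol_uptrend <- function(close, n = 52) {
  roc     <- ROC(close, n = 1, type = "discrete")  # 1-period change of close
  vol     <- runSD(roc, n, sample = FALSE)         # n-period standard deviation
  vol_med <- runMedian(vol, n)                     # running median of volatility
  mavg    <- SMA(close, n)                         # n-period simple moving average
  (close > mavg) & (vol < vol_med)                 # TRUE where both conditions hold
}
```

The quantstrat version further down wraps the same logic in the `RB` indicator function.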

I will test the strategy on the total return series of the S&P 500 using weekly prices from 1/1/1990 to 4/17/2012.

Yuck… the equity curves look pretty good up until 1999, then not so good after that.



                            CAGR      maxDD     MAR       # Trades  Ending Equity  % Winning Trades
SMA with Volatility Filter  4.369174  -22.3993  0.195059  34        $239,104.70    58.82
SMA System                  7.442673  -22.2756  0.334119  57        $464,198.80    53.57
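As a quick sanity check on the table, MAR is simply CAGR divided by the absolute value of the maximum drawdown:

```r
# MAR = CAGR / |maxDD|, recomputed from the table's CAGR and maxDD columns
cagr  <- c(filter = 4.369174, no_filter = 7.442673)
maxdd <- c(filter = -22.3993, no_filter = -22.2756)
mar   <- cagr / abs(maxdd)  # matches the MAR column (~0.195 and ~0.334)
```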

This test shows that adding a volatility filter to our entries can actually hinder performance. Keep in mind this is by no means an exhaustive test, and it covers only a single instrument. I also chose the 52 period SMA and SDEV somewhat arbitrarily, because 52 weeks represents one year.

Reading through trading forums, it is clear that people are in search of the “holy grail” trading system. Some people claim to have found the “holy grail” system, but that system is usually a combination of 10+ indicators and rules that say “use indicator A, B, and C when the market is doing X, or use indicators D, E, and F when the market is doing Y.” Beware of these “filters” and always test yourself.

Stay tuned for future posts that will look at adding a similar filter on a multiple instrument test.

What have you found with adding entry filters to trading systems?

require(quantstrat)

#Initialization (variable names below restore conventional quantstrat identifiers)
initDate <- '1900-01-01'
symbol <- "GSPC"
currency("USD")
stock(symbol, currency="USD", multiplier=1)
getSymbols("^GSPC", src='yahoo', index.class=c("POSIXt","POSIXct"), from='1990-01-01')
GSPC <- to.weekly(GSPC, indexAt='lastof', drop.time=TRUE)

#Custom order sizing function to trade a percent of equity based on a stop size
osPCTEQ <- function(timestamp, orderqty, portfolio, symbol, ruletype, ...){
  tempPortfolio <- getPortfolio(portfolio.st)
  dummy <- updatePortf(Portfolio=portfolio.st, Dates=paste('::',as.Date(timestamp),sep=''))
  trading.pl <- sum(getPortfolio(portfolio.st)$summary$Realized.PL) #change to $summary$Net.Trading.PL for total equity position sizing
  assign(paste("portfolio.",portfolio.st,sep=""), tempPortfolio, pos=.blotter) #restore the portfolio so the dummy update does not persist
  total.equity <- initEq + trading.pl
  DollarRisk <- total.equity * trade.percent
  ClosePrice <- as.numeric(Cl(mktdata[timestamp,]))
  mavg <- as.numeric(mktdata$SMA52[timestamp,])
  sign1 <- ifelse(ClosePrice > mavg, 1, -1)
  sign1[is.na(sign1)] <- 1
  Posn <- getPosQty(Portfolio=portfolio.st, Symbol=symbol, Date=timestamp)
  StopSize <- as.numeric(mktdata$SDEV[timestamp,]*StopMult) #Stop = SDEV * StopMult; requires an SDEV (or similar) indicator column to determine stop size
  orderqty <- ifelse(Posn == 0, sign1*round(DollarRisk/StopSize), 0) #number of shares traded is equal to DollarRisk/StopSize
  return(orderqty)
}

#Function that calculates the n period standard deviation of close prices.
#This is used in place of ATR so that I can use only close prices.
SDEV <- function(x, n){
  sdev <- runSD(x, n, sample = FALSE)
  colnames(sdev) <- "SDEV"
  return(sdev)
}

#Custom indicator function
RB <- function(x, n){
  roc <- ROC(x, n=1, type="discrete")
  sd <- runSD(roc, n, sample=FALSE)
  sd[is.na(sd)] <- 0
  med <- runMedian(sd, n)
  med[is.na(med)] <- 0
  mavg <- SMA(x, n)
  signal <- ifelse(sd < med & x > mavg, 1, 0)
  colnames(signal) <- "RB"
  #ret <- cbind(x,roc,sd,med,mavg,signal)
  #colnames(ret) <- c("close","roc","sd","med","mavg","lowvol")
  return(signal)
}

initEq <- 100000

trade.percent <- .05 #percent risk used in sizing function
StopMult <- 1 #stop size multiplier used in sizing function
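To make the sizing concrete, here is the arithmetic osPCTEQ performs with these parameters (the equity and stop-size figures are hypothetical, for illustration only):

```r
# Worked example of the percent-of-equity sizing with a volatility-based stop
total.equity  <- 100000          # hypothetical account equity
trade.percent <- 0.05            # percent of equity risked per trade (as above)
StopMult      <- 1               # stop multiplier (as above)
sdev          <- 25              # hypothetical 52-week SD of weekly close changes

DollarRisk <- total.equity * trade.percent  # $5,000 at risk per trade
StopSize   <- sdev * StopMult               # stop distance in price points
shares     <- round(DollarRisk / StopSize)  # 5000 / 25 = 200 shares
```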

#Name the portfolio and account
portfolio.st <- 'RBtest'
account.st <- 'RBtest'

initPortf(portfolio.st, symbols=symbol, initPosQty=0, initDate=initDate, currency="USD")
initAcct(account.st, portfolios=portfolio.st, initDate=initDate, initEq=initEq)

#Name the strategy
stratRB <- strategy('RBtest')

#Add indicators
#The first indicator is the 52 period SMA
#The second indicator is the RB indicator. The RB indicator returns a value of 1 when close > SMA & volatility < runMedian(volatility, n = 52)
stratRB <- add.indicator(strategy = stratRB, name = "SMA", arguments = list(x = quote(Cl(mktdata)), n=52), label="SMA52")
stratRB <- add.indicator(strategy = stratRB, name = "RB", arguments = list(x = quote(Cl(mktdata)), n=52), label="RB")
stratRB <- add.indicator(strategy = stratRB, name = "SDEV", arguments = list(x = quote(Cl(mktdata)), n=52), label="SDEV")

#Add signals
#The buy signal is when the RB indicator crosses from 0 to 1
#The exit signal is when the close crosses below the SMA
stratRB <- add.signal(strategy = stratRB, name="sigThreshold", arguments = list(threshold=1, column="RB", relationship="gte", cross=TRUE), label="RB.gte.1")
stratRB <- add.signal(strategy = stratRB, name="sigCrossover", arguments = list(columns=c("Close","SMA52"), relationship="lt"), label="Cl.lt.SMA")

#Add rules
stratRB <- add.rule(strategy = stratRB, name='ruleSignal', arguments = list(sigcol="RB.gte.1", sigval=TRUE, orderqty=1000, ordertype='market', orderside='long', osFUN='osPCTEQ', pricemethod='market', replace=FALSE), type='enter', path.dep=TRUE)
stratRB <- add.rule(strategy = stratRB, name='ruleSignal', arguments = list(sigcol="Cl.lt.SMA", sigval=TRUE, orderqty='all', ordertype='market', orderside='long', pricemethod='market', TxnFees=0), type='exit', path.dep=TRUE)

#Process the indicators and generate trades
out <- try(applyStrategy(strategy=stratRB, portfolios=portfolio.st))

#Update portfolio
updatePortf(portfolio.st)

#Update account
updateAcct(account.st)

#Update ending equity
updateEndEq(account.st)

#Ending equity
getEndEq(account.st, Sys.Date()) + initEq

tstats <- tradeStats(portfolio.st, symbol)

#View order book to confirm trades
getOrderBook(portfolio.st)

#Trade Statistics for CAGR, Max DD, and MAR
#calculate total equity curve performance Statistics
ec <- tail(cumsum(getPortfolio(portfolio.st)$summary$Net.Trading.PL), -1)
ec$initEq <- initEq
ec$totalEq <- ec$Net.Trading.PL + ec$initEq
ec$maxDD <- ec$totalEq/cummax(ec$totalEq)-1
ec$logret <- ROC(ec$totalEq, n=1, type="continuous")
ec$logret[is.na(ec$logret)] <- 0

Strat.Wealth.Index <- exp(cumsum(ec$logret)) #growth of $1

period.count <- NROW(ec)-104 #Use 104 because there is a 104 week lag for the 52 week SD and 52 week median of SD
year.count <- period.count/52
maxDD <- min(ec$maxDD)*100
totret <- as.numeric(last(ec$totalEq))/as.numeric(first(ec$totalEq))
CAGR <- (totret^(1/year.count)-1)*100
MAR <- CAGR/abs(maxDD)

Perf.Stats <- c(CAGR, maxDD, MAR)
names(Perf.Stats) <- c("CAGR", "maxDD", "MAR")
#write.zoo(mktdata, file = "E:\\a.csv")

charts.PerformanceSummary(ec$logret, wealth.index = TRUE, colorset = "steelblue2", main = "SMA with Volatility Filter System Performance")


Disclaimer: Past results do not guarantee future returns. Information on this website is for informational purposes only and does not offer advice to buy or sell any securities.


5 thoughts on “Simple Moving Average Strategy with a Volatility Filter”

  1. In general, I think filters are a good idea to try and limit the number of positions when trading/scanning a (too) large portfolio (ie follow 100 instruments but only pick the best 20 or whatever number is returned by your filter(s)). I am not sure that they can always add value in FORWARD tests.
    In some cases they do though, and even in the example you chose.
    I ran a study where I tested the same concept: volatility filters on trend following (on a diversified portfolio of instruments) and only filtering the top volatility decile seemed to improve results – even better when filtering higher vols.
    Here is the study:

    By the way, there seem to be more and more R backtest bloggers these days. Apologies for asking, but what is it that attracts you to the platform for back-testing? Is it cost?
    I like R as an offline stats/analysis tool but I really would not see myself using it for backtests…


    • Jez,

      Thanks for the feedback. I’ve been a follower of your Au.Tra.Sy blog for about the past year. I agree that having risk limiters or risk filters is important to essentially allow you to trade an infinitely large portfolio with a finite amount of capital. Adding value in forward testing is always the question…

      I’ll take a look at the study you did, thanks for sending the link. It seems that there is a fine line when a volatility filter works and when it doesn’t. They tend to work and limit overall drawdown, but at the expense of performance. It all depends on what you are optimizing for (MAR, RAR, Total Return, other bliss ratios). At least we can run tests to understand the impacts and make an informed decision if we want to add a degree of freedom (filter) to our system.

      What attracts me to R is the flexibility, the huge library of packages, the ease of use, and the fact that it is free open source software. The quantstrat and PerformanceAnalytics packages make it very easy to run backtests, custom analysis, and graphing with just a few lines of code. When I initially became interested in systematic trading, I had almost no programming experience. I wanted the flexibility to create my own systems, but learning the Trading Blox syntax or Trading Recipes syntax was very intimidating, and I was not comfortable spending several thousand dollars on a backtesting platform to learn a language I may or may not be able to use. In my opinion, Trading Blox is the Ferrari of testing and order generation platforms, and I could definitely see myself moving to a platform like that. But for now, R does everything I need and I enjoy using it.


      • You’re welcome Ross.

        I think one of the problems of filters is that they result in missed opportunities (ie it might get you out of 9 “bad” trades out of 10 but with the tenth one more than making up for the 9 bad ones).

        When testing with one instrument, this more likely results in bigger impact / drop in performance. When testing/trading over a full diversified portfolio (as in my study) the effects of diversification would counter that potential drop.

        In some ways, it’s more the diversification than the trend that is your friend.

        Thanks for the feedback on R. To me, it seems a bit more convoluted to run back-tests in there and I could not see a clear advantage over Trading Blox or other platforms (apart from cost), but to each his own (mind you TB is not that straight-forward either).

  2. Interesting blog.

    From my experience, I agree with both of you in regard to using R vs. TB or other software. I have evaluated Trading Blox and found its architecture close to the automated trading domain, as it is very flexible in creating portfolios of strategies and running backtests on them. However, as rbresearch mentioned, the language sucks.

    R, on the other hand, helps a lot in analyzing data with ease.

    I have built my own automated system, inspired by TB, MT4, TS, and also by R. I integrate with R to do most of the analysis and graph generation. The packages included in R are priceless.

  3. Pingback: Simple Moving Average Strategy with a Volatility Filter: Follow-Up Part 2 | rbresearch
