Quick View on Correlations of Different Instruments

In this post, I will demonstrate how to quickly visualize correlations using the PerformanceAnalytics package. Thanks to the package creators, it is really easy to chart correlations and many other performance metrics.

The first chart looks at the rolling 252 day correlation of nine sector ETFs using SPY as the benchmark. As expected, the correlation is rather high because the sector ETFs are components of the S&P 500 index, but it has been even more pronounced over the last few years.

[Chart 1: Rolling 252 day correlation of nine sector ETFs vs. SPY]

Chart 2 shows the correlation of five ETFs. Note that there is no single instrument used as the benchmark; all five ETFs are benchmarked against one another. (I removed the legend because it took up the entire plot.)

[Chart 2: Rolling 252 day correlations of the five ETFs against one another]
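A minimal sketch of one way to produce a chart like this (an illustration, not necessarily the exact call used; ret.pairs is a hypothetical xts object holding the five ETFs' returns, built the same way as the 'ret' object in the code at the end of the post):

#Illustrative sketch only: benchmark a set of return series against one another
#by passing the same return matrix as both the asset and the benchmark argument
chart.RollingCorrelation(ret.pairs, ret.pairs, width = 252,
                         legend.loc = NULL,
                         main = "Rolling 252 Day Pairwise Correlations")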

Chart 3 shows the same ETFs, this time using SPY as the benchmark.

[Chart 3: Rolling 252 day correlation of the same ETFs vs. SPY]

In my opinion, the beauty of the chart.RollingCorrelation function is that the inputs are simply time series of returns. This means we can compute rolling correlations for instruments (ETFs, stocks, mutual funds, etc.), hedge fund managers, portfolios, and even strategies we test in quantstrat.
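For example, here is a rough sketch of feeding a backtested strategy's equity curve into chart.RollingCorrelation. This assumes a quantstrat portfolio named "RBtest" has already been run and updated (as in the SMA posts further down this page) and that GSPC holds weekly bars, so the benchmark returns are built at the same frequency:

#Rough sketch: correlation of a backtested strategy's returns to the S&P 500
#Assumes the "RBtest" portfolio and the weekly GSPC object exist (see posts below)
pnl <- getPortfolio("RBtest")$summary$Net.Trading.PL
strat.eq  <- 100000 + cumsum(pnl)                              #hypothetical starting equity
strat.ret <- ROC(strat.eq, n = 1, type = "continuous", na.pad = TRUE)
colnames(strat.ret) <- "Strategy"
spx.ret <- ROC(Cl(GSPC), n = 1, type = "continuous", na.pad = TRUE)
chart.RollingCorrelation(strat.ret, spx.ret, width = 52,
                         main = "Rolling 52 Week Correlation: Strategy vs S&P 500")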

Here is the R code used to generate the first chart. To do your own correlation analysis, just change the symbols or add in new data sets of different returns.

#Correlations of Sector ETFs benchmarked against SPY

#Load the packages used
require(PerformanceAnalytics)
require(quantmod)

#create a list of symbols
symbols = c("XLY", "XLP", "XLE", "XLF", "XLV", "XLI", "XLK", "XLB", "XLU")
retsymbols <- paste("ret", symbols, sep = ".")

#Download the data from yahoo
getSymbols(symbols, src = 'yahoo', index.class = c("POSIXt","POSIXct"), from = '2000-01-01')
getSymbols("SPY", src = 'yahoo', index.class = c("POSIXt","POSIXct"), from = '2000-01-01')

#The benchmark is the return vector against which the other assets will be benchmarked
benchmark <- ROC(Ad(SPY), n=1, type="continuous", na.pad=TRUE)
colnames(benchmark) <- "SPY"

#Loop to create new xts objects with just the returns
for (symbol in symbols){
  x <- get(symbol)
  x1 <- ROC(Ad(x), n=1, type="continuous", na.pad=TRUE)
  colnames(x1)<-gsub("x",symbol,colnames(x1))
  assign(paste("ret", symbol, sep = "."),x1)
}

#this merges all of the objects in 'retsymbols' into one object named 'ret'
ret <- do.call(merge, lapply(retsymbols, get))

suppressWarnings(chart.RollingCorrelation(ret[,1:ncol(ret)], benchmark, width = 252, xaxis = TRUE, 
                          colorset = rich8equal, legend.loc = "bottomright",
                         main = "Rolling 252 Day Correlation"))


Simple Moving Average Strategy with a Volatility Filter: Follow-Up Part 3

In part 2, we saw that adding a volatility filter to a single instrument test did little to improve performance or risk adjusted returns. How will the volatility filter impact a multiple instrument portfolio?

In part 3 of the follow up, I will evaluate the impact of the volatility filter on a multiple instrument test.

The tests will use nine of the Select Sector SPDR ETFs listed below.

XLY – Consumer Discretionary Select Sector SPDR
XLP – Consumer Staples Select Sector SPDR
XLE – Energy Select Sector SPDR
XLF – Financial Select Sector SPDR
XLV – Health Care Select Sector SPDR
XLI – Industrial Select Sector SPDR
XLK – Technology Select Sector SPDR
XLB – Materials Select Sector SPDR
XLU – Utilities Select Sector SPDR

Test #1 – without volatility filter

Start Date*: 2001-01-01

Test #2 – with volatility filter

Start Date*: 2000-01-01

*Note the difference in start dates. The volatility filter requires an extra 52 periods to compute the RBrev1 indicator, so the test start dates are offset by 52 weeks (one year).

Both tests will risk 1% of account equity and the stop size is 1 standard deviation.
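For a quick sense of the sizing arithmetic, here is an illustration with hypothetical numbers (in the actual test, the osPCTEQ function in the code below computes this on every entry):

#Illustrative arithmetic only; these numbers are hypothetical
equity        <- 100000
trade.percent <- 0.01                                     #risk 1% of equity
StopMult      <- 1                                        #stop size is 1 standard deviation
sdev          <- 2.5                                      #hypothetical 52 week SD of close prices
DollarRisk    <- equity * trade.percent                   #$1,000 at risk per trade
orderqty      <- round(DollarRisk / (sdev * StopMult))    #400 shares
orderqty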

Test #1 is a simple moving average strategy without a volatility filter on a portfolio of the nine sector ETFs mentioned previously. This will be the baseline for comparison of the strategy with the volatility filter.

Test #1 Buy and Exit Rules

  • Buy Rule: Go long if close crosses above the 52 period SMA
  • Exit Rule: Exit if close crosses below the 52 period SMA

Test #1 Performance Statistics

Test      CAGR (%)    MaxDD (%)    MAR
Test #1   7.976377    -14.92415    0.534461

[Chart: Test #1 performance summary]

Test #2 is a simple moving average strategy with a volatility filter on the same nine ETFs. The volatility filter is the same measure used in Follow-Up Part 2: the 52 period standard deviation of close prices.

Test #2 Buy and Exit Rules

With the volatility filter added, the rules can be interpreted as follows:

  • Buy Rule: Go long if close is greater than the 52 period SMA and the 52 period standard deviation of close prices is less than its median over the last 52 periods.
  • Exit Rule: Exit if long and close is less than the 52 period SMA

Test #2 Performance Statistics

Test      CAGR (%)     MaxDD (%)      MAR
Test #2   7.6694587    -14.6590123    0.523191

[Chart: Test #2 performance summary]

Both strategies perform fairly well. I would give a slight edge to Test #1, the strategy without a volatility filter. It has a slightly higher maximum drawdown (MaxDD), but also a higher CAGR and a higher MAR.

Test      CAGR (%)     MaxDD (%)    MAR
Test #1   7.976377     -14.92415    0.534461
Test #2   7.6694587    -14.65901    0.523191

Below is the R code for Test #2. Shoot me an email if you want the full code for Test #1; a rough sketch of the Test #1 signal logic follows for reference.
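A minimal reconstruction of the Test #1 entry and exit specification (assuming it simply drops the RBrev1 filter and enters on a close/SMA crossover; this is an illustrative sketch, not the exact code used — the data prep, sizing function, and initialization would be the same as in the Test #2 code below):

#Hypothetical reconstruction of the Test #1 signals (no volatility filter)
strat1 <- strategy('TimingWeeklyNoFilter')
strat1 <- add.indicator(strategy = strat1, name = "SMA", arguments = list(x = quote(Cl(mktdata)), n=52), label="SMA")
strat1 <- add.indicator(strategy = strat1, name = "SDEV", arguments = list(x = quote(Cl(mktdata)), n=52), label="SDEV") #still needed for position sizing
strat1 <- add.signal(strategy = strat1, name="sigCrossover", arguments = list(columns=c("Close","SMA"), relationship="gt"), label="Cl.gt.SMA")
strat1 <- add.signal(strategy = strat1, name="sigCrossover", arguments = list(columns=c("Close","SMA"), relationship="lt"), label="Cl.lt.SMA")
strat1 <- add.rule(strategy = strat1, name='ruleSignal', arguments = list(sigcol="Cl.gt.SMA", sigval=TRUE, orderqty=1000, ordertype='market', orderside='long', osFUN = 'osPCTEQ', pricemethod='market', replace=FALSE), type='enter', path.dep=TRUE)
strat1 <- add.rule(strategy = strat1, name='ruleSignal', arguments = list(sigcol="Cl.lt.SMA", sigval=TRUE, orderqty='all', ordertype='market', orderside='long', pricemethod='market', TxnFees=0), type='exit', path.dep=TRUE)

The full Test #2 strategy, including the data prep and position sizing, is below.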

#Weekly Timing Strategy with Volatility Filter
require(PerformanceAnalytics)
require(quantstrat)

suppressWarnings(rm("order_book.TimingWeekly",pos=.strategy))
suppressWarnings(rm("account.TimingWeekly","portfolio.TimingWeekly",pos=.blotter))
suppressWarnings(rm("account.st","portfolio.st","symbols","stratBBands","initDate","initEq",'start_t','end_t'))

##### Begin Functions #####

#Custom Order Sizing Function to trade percent of equity based on a stopsize
osPCTEQ <- function(timestamp, orderqty, portfolio, symbol, ruletype, ...){
  tempPortfolio <- getPortfolio(portfolio.st)
  dummy <- updatePortf(Portfolio=portfolio.st, Dates=paste('::',as.Date(timestamp),sep=''))
  trading.pl <- sum(getPortfolio(portfolio.st)$summary$Realized.PL) #change to ..$summary$Net.Trading.PL for Total Equity Position Sizing
  assign(paste("portfolio.",portfolio.st,sep=""),tempPortfolio,pos=.blotter)
  total.equity <- initEq+trading.pl
  DollarRisk <- total.equity * trade.percent
  ClosePrice <- as.numeric(Cl(mktdata[timestamp,]))
  mavg <- as.numeric(mktdata$SMA[timestamp,])
  sign1 <- ifelse(ClosePrice > mavg, 1, -1)
  sign1[is.na(sign1)] <- 1
  Posn = getPosQty(Portfolio = portfolio.st, Symbol = symbol, Date = timestamp)
  StopSize <- as.numeric(mktdata$SDEV[timestamp,]*StopMult) #Stop = SDAVG * StopMult !Must have SDAVG or other indicator to determine stop size
  #orderqty <- round(DollarRisk/StopSize, digits=0)
  orderqty <- ifelse(Posn == 0, sign1*round(DollarRisk/StopSize), 0) # number contracts traded is equal to DollarRisk/StopSize
  return(orderqty)
}

#Function that calculates the n period standard deviation of close prices.
#This is used in place of ATR so that I can use only close prices.
SDEV <- function(x, n){
  sdev <- runSD(x, n, sample = FALSE)
  colnames(sdev) <- "SDEV"
  reclass(sdev,x)
}

#Custom indicator function 
RBrev1 <- function(x,n){
  x <- x
  sd <- runSD(x, n, sample= FALSE)
  med <- runMedian(sd,n)
  mavg <- SMA(x,n)
  signal <- ifelse(sd < med & x > mavg,1,0)
  colnames(signal) <- "RB"
  #ret <- cbind(x,roc,sd,med,mavg,signal) #Only use for further analysis of indicator
  #colnames(ret) <- c("close","roc","sd","med","mavg","RB") #Only use for further analysis of indicator
  reclass(signal,x)
}

##### End Functions #####

#Symbols to be used in test
#XLY - Consumer Discretionary Select Sector SPDR
#XLP - Consumer Staples Select Sector SPDR
#XLE - Energy Select Sector SPDR
#XLF - Financial Select Sector SPDR
#XLV - Health Care Select Sector SPDR
#XLI - Industrial Select Sector SPDR
#XLK - Technology Select Sector SPDR
#XLB - Materials Select Sector SPDR
#XLU - Utilities Select Sector SPDR

#Symbol list to pass to the getSymbols function
symbols = c("XLY", "XLP", "XLE", "XLF", "XLV", "XLI", "XLK", "XLB", "XLU")

#Load ETFs from yahoo
currency("USD")
stock(symbols, currency="USD",multiplier=1)
getSymbols(symbols, src='yahoo', index.class=c("POSIXt","POSIXct"), from='2000-01-01')

#Data is downloaded as daily data
#Convert to weekly
for(symbol in symbols) {
  x<-get(symbol)
  x<-to.weekly(x,indexAt='lastof',drop.time=TRUE)
  indexFormat(x)<-'%Y-%m-%d'
  colnames(x)<-gsub("x",symbol,colnames(x))
  assign(symbol,x)
}

#Use the adjusted close prices
#this for loop sets the "Close" column equal to the "Adjusted Close" column
#because the trades are executed based on the "Close" column
for(symbol in symbols) {
  x<-get(symbol)
  x[,4] <- x[,6]
  assign(symbol,x)
}

initDate='1900-01-01'
initEq <- 100000

trade.percent <- 0.01 #percent risk used in sizing function
StopMult = 1 #stop size used in sizing function

#Name the portfolio and account
portfolio.st = 'TimingWeekly'
account.st = 'TimingWeekly'

#Initialization
initPortf(portfolio.st, symbols=symbols, initPosQty=0, initDate=initDate, currency="USD")
initAcct(account.st,portfolios=portfolio.st, initDate=initDate, initEq=initEq)
initOrders(portfolio=portfolio.st,initDate=initDate)

#Name the strategy
strat <- strategy('TimingWeekly')

#Add indicators
#The first indicator is the 52 period SMA
#The second is the RBrev1 indicator (the volatility filter signal)
#The third is the SDEV indicator used for stop and position sizing
strat <- add.indicator(strategy = strat, name = "SMA", arguments = list(x = quote(Cl(mktdata)), n=52), label="SMA")
strat <- add.indicator(strategy = strat, name = "RBrev1", arguments = list(x = quote(Cl(mktdata)), n=52), label="RB")
strat <- add.indicator(strategy = strat, name = "SDEV", arguments = list(x = quote(Cl(mktdata)), n=52), label="SDEV")

#Add signals
#The buy signal is when the RB indicator crosses from 0 to 1
#The exit signal is when the close crosses below the SMA
strat <- add.signal(strategy = strat, name="sigThreshold", arguments = list(threshold=1, column="RB",relationship="gte", cross=TRUE),label="RB.gte.1")
strat <- add.signal(strategy = strat, name="sigCrossover", arguments = list(columns=c("Close","SMA"),relationship="lt"),label="Cl.lt.SMA")

#Add rules
strat <- add.rule(strategy = strat, name='ruleSignal', arguments = list(sigcol="RB.gte.1", sigval=TRUE, orderqty=1000, ordertype='market', orderside='long', osFUN = 'osPCTEQ', pricemethod='market', replace=FALSE), type='enter', path.dep=TRUE)
strat <- add.rule(strategy = strat, name='ruleSignal', arguments = list(sigcol="Cl.lt.SMA", sigval=TRUE, orderqty='all', ordertype='market', orderside='long', pricemethod='market',TxnFees=0), type='exit', path.dep=TRUE)

# Process the indicators and generate trades
start_t<-Sys.time()
out<-try(applyStrategy(strategy = strat, portfolios = portfolio.st))
end_t<-Sys.time()
print("Strategy Loop:")
print(end_t-start_t)

start_t<-Sys.time()
updatePortf(Portfolio=portfolio.st,Dates=paste('::',as.Date(Sys.time()),sep=''))
end_t<-Sys.time()
print("updatePortf execution time:")
print(end_t-start_t)

#chart.Posn(Portfolio=portfolio.st,Symbol=symbols)

#Update Account
updateAcct(account.st)

#Update Ending Equity
updateEndEq(account.st)

#ending equity
getEndEq(account.st, Sys.Date()) + initEq

tstats <- tradeStats(Portfolio=portfolio.st, Symbol=symbols)

#View order book to confirm trades
#getOrderBook(portfolio.st)

#Trade Statistics for CAGR, Max DD, and MAR
#calculate total equity curve performance Statistics
ec <- tail(cumsum(getPortfolio(portfolio.st)$summary$Net.Trading.PL),-1)
ec$initEq <- initEq
ec$totalEq <- ec$Net.Trading.PL + ec$initEq
ec$maxDD <- ec$totalEq/cummax(ec$totalEq)-1
ec$logret <- ROC(ec$totalEq, n=1, type="continuous")
ec$logret[is.na(ec$logret)] <- 0

WI <- exp(cumsum(ec$logret)) #growth of $1
#write.zoo(nofilterWI, file = "E:\\nofiltertest.csv", sep=",")

period.count <- NROW(ec)-104 #Use 104 because there is a 104 week lag for the 52 week SD and 52 week median of SD
year.count <- period.count/52
maxDD <- min(ec$maxDD)*100
totret <- as.numeric(last(ec$totalEq))/as.numeric(first(ec$totalEq))
CAGR <- (totret^(1/year.count)-1)*100
MAR <- CAGR/abs(maxDD)

Perf.Stats <- c(CAGR, maxDD, MAR)
names(Perf.Stats) <- c("CAGR", "maxDD", "MAR")
Perf.Stats

#transactions <- getTxns(Portfolio = portfolio.st, Symbol = symbols)
#write.zoo(transactions, file = "E:\\nofiltertxn.csv")

charts.PerformanceSummary(ec$logret, wealth.index = TRUE, ylog = TRUE, colorset = "steelblue2", main = "Strategy with Volatility Filter")



Simple Moving Average Strategy with a Volatility Filter: Follow-Up Part 2

In Follow-Up Part 1, I explored some of the functions in the quantstrat package that allowed us to drill down trade by trade to explain the difference in performance of the two strategies. By doing this, I found that my volatility measure may not have been the best choice. Although the volatility filter kept me out of trades during periods of higher volatility, it also had a negative impact on position sizing and overall return.

The volatility measure presented in the original post was the 52 period standard deviation of the 1 period change of close prices. I made a custom indicator to incorporate the volatility filter into the buy rule. Here is the original RB function:

#Custom indicator function 
RB <- function(x,n){
  x <- x
  roc <- ROC(x, n=1, type="discrete")
  sd <- runSD(roc,n, sample= FALSE)
  med <- runMedian(sd,n)
  mavg <- SMA(x,n)
  signal <- ifelse(sd < med & x > mavg,1,0)
  colnames(signal) <- "RB"
  reclass(signal,x)
  }


The new volatility filter will be the 52 period standard deviation of close prices. Now, the buy rule can be interpreted as follows:

  • Buy Rule: Go long if close is greater than the 52 period SMA and the 52 period standard deviation of close prices is less than its median over the last N periods.
  • Exit Rule: Exit if long and close is less than the N period SMA

A slight change to the RB function will do the trick. I will call it RBrev1 (that is my creative side coming out ;)).

#Custom indicator function 
RBrev1 <- function(x,n){
  x <- x
  sd <- runSD(x, n, sample= FALSE)
  med <- runMedian(sd,n)
  mavg <- SMA(x,n)
  signal <- ifelse(sd < med & x > mavg,1,0)
  colnames(signal) <- "RB"
  #ret <- cbind(x,roc,sd,med,mavg,signal) #Only use for further analysis of indicator
  #colnames(ret) <- c("close","roc","sd","med","mavg","RB") #Only use for further analysis of indicator
  reclass(signal,x)
  }


I will test the strategy on the adjusted close of the S&P 500 using weekly prices from 1/1/1990 to 4/17/2012, just as in the previous post.

And the winner is… both! There is no difference in performance on this single instrument in this specific window of time I used for the test.

[Chart: equity curves of the strategy with the original and revised volatility filters]
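One quick way to confirm the overlap is to plot the two wealth indices on top of each other (a sketch only: RBrev1WI is created in the code below, and RBWI is assumed to be the equivalent object saved from the original RB test):

#Sketch: overlay the wealth indices from both tests (RBWI is a hypothetical object
#saved from the original RB run; RBrev1WI is built in the code below)
wi.both <- merge(RBWI, RBrev1WI)
colnames(wi.both) <- c("RB filter", "RBrev1 filter")
chart.TimeSeries(wi.both, legend.loc = "topleft",
                 main = "Growth of $1: Original vs Revised Volatility Filter")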

Always do your own testing to decide whether or not a filter of any kind adds value to your system. This single instrument test in the series of posts showed that choosing the “wrong” volatility filter can hinder performance, while another choice of volatility filter may have little or no impact at all.

How do you think the volatility filter will affect a multiple instrument test?

require(PerformanceAnalytics)
require(quantstrat)

suppressWarnings(rm("order_book.RBtest",pos=.strategy))
suppressWarnings(rm("account.RBtest","portfolio.RBtest",pos=.blotter))
suppressWarnings(rm("account.st","portfolio.st","symbols","stratBBands","initDate","initEq",'start_t','end_t'))

sym.st = "GSPC"
currency("USD")
stock(sym.st, currency="USD",multiplier=1)
getSymbols("^GSPC", src='yahoo', index.class=c("POSIXt","POSIXct"), from='1990-01-01', to='2012-04-17')
GSPC <- to.weekly(GSPC,indexAt='lastof',drop.time=TRUE)

#Custom Order Sizing Function to trade percent of equity based on a stopsize
osPCTEQ <- function(timestamp, orderqty, portfolio, symbol, ruletype, ...){
  tempPortfolio <- getPortfolio(portfolio.st)
  dummy <- updatePortf(Portfolio=portfolio.st, Dates=paste('::',as.Date(timestamp),sep=''))
  trading.pl <- sum(getPortfolio(portfolio.st)$summary$Realized.PL) #change to ..$summary$Net.Trading.PL for Total Equity Position Sizing
  assign(paste("portfolio.",portfolio.st,sep=""),tempPortfolio,pos=.blotter)
  total.equity <- initEq+trading.pl
  DollarRisk <- total.equity * trade.percent
  ClosePrice <- as.numeric(Cl(mktdata[timestamp,]))
  mavg <- as.numeric(mktdata$SMA52[timestamp,])
  sign1 <- ifelse(ClosePrice > mavg, 1, -1)
  sign1[is.na(sign1)] <- 1
  Posn = getPosQty(Portfolio = portfolio.st, Symbol = sym.st, Date = timestamp)
  StopSize <- as.numeric(mktdata$SDEV[timestamp,]*StopMult) #Stop = SDAVG * StopMult !Must have SDAVG or other indicator to determine stop size
  orderqty <- ifelse(Posn == 0, sign1*round(DollarRisk/StopSize), 0) # number contracts traded is equal to DollarRisk/StopSize
  return(orderqty)
}

#Function that calculates the n period standard deviation of close prices.
#This is used in place of ATR so that I can use only close prices.
SDEV <- function(x, n){
  sdev <- runSD(x, n, sample = FALSE)
  colnames(sdev) <- "SDEV"
  reclass(sdev,x)
}

#Custom indicator function 
RBrev1 <- function(x,n){
  x <- x
  sd <- runSD(x, n, sample= FALSE)
  med <- runMedian(sd,n)
  mavg <- SMA(x,n)
  signal <- ifelse(sd < med & x > mavg,1,0)
  colnames(signal) <- "RB"
  #ret <- cbind(x,roc,sd,med,mavg,signal) #Only use for further analysis of indicator
  #colnames(ret) <- c("close","roc","sd","med","mavg","RB") #Only use for further analysis of indicator
  reclass(signal,x)
  }

initDate='1900-01-01'
initEq <- 100000

trade.percent <- .05 #percent risk used in sizing function
StopMult = 1 #stop size used in sizing function

#Name the portfolio and account
portfolio.st='RBtest'
account.st='RBtest'

#Initialization
initPortf(portfolio.st, symbols=sym.st, initPosQty=0, initDate=initDate, currency="USD")
initAcct(account.st,portfolios=portfolio.st, initDate=initDate, initEq=initEq)
initOrders(portfolio=portfolio.st,initDate=initDate)

#Name the strategy
stratRB <- strategy('RBtest')

#Add indicators
#The first indicator is the 52 period SMA
#The second indicator is the RB indicator. The RB indicator returns a value of 1 when close > SMA & volatility < runMedian(volatility, n = 52)
stratRB <- add.indicator(strategy = stratRB, name = "SMA", arguments = list(x = quote(Cl(mktdata)), n=52), label="SMA52")
stratRB <- add.indicator(strategy = stratRB, name = "RBrev1", arguments = list(x = quote(Cl(mktdata)), n=52), label="RB")
stratRB <- add.indicator(strategy = stratRB, name = "SDEV", arguments = list(x = quote(Cl(mktdata)), n=52), label="SDEV")

#Add signals
#The buy signal is when the RB indicator crosses from 0 to 1
#The exit signal is when the close crosses below the SMA
stratRB <- add.signal(strategy = stratRB, name="sigThreshold", arguments = list(threshold=1, column="RB",relationship="gte", cross=TRUE),label="RB.gte.1")
stratRB <- add.signal(strategy = stratRB, name="sigCrossover", arguments = list(columns=c("Close","SMA52"),relationship="lt"),label="Cl.lt.SMA")

#Add rules
stratRB <- add.rule(strategy = stratRB, name='ruleSignal', arguments = list(sigcol="RB.gte.1", sigval=TRUE, orderqty=1000, ordertype='market', orderside='long', osFUN = 'osPCTEQ', pricemethod='market', replace=FALSE), type='enter', path.dep=TRUE)
stratRB <- add.rule(strategy = stratRB, name='ruleSignal', arguments = list(sigcol="Cl.lt.SMA", sigval=TRUE, orderqty='all', ordertype='market', orderside='long', pricemethod='market',TxnFees=0), type='exit', path.dep=TRUE)

# Process the indicators and generate trades
start_t<-Sys.time()
out<-try(applyStrategy(strategy=stratRB , portfolios=portfolio.st))
end_t<-Sys.time()
print("Strategy Loop:")
print(end_t-start_t)

start_t<-Sys.time()
updatePortf(Portfolio=portfolio.st,Dates=paste('::',as.Date(Sys.time()),sep=''))
end_t<-Sys.time()
print("updatePortf execution time:")
print(end_t-start_t)

chart.Posn(Portfolio=portfolio.st,Symbol=sym.st)

#Update Account
updateAcct(account.st)

#Update Ending Equity
updateEndEq(account.st)

#ending equity
getEndEq(account.st, Sys.Date()) + initEq

tstats <- tradeStats(Portfolio=portfolio.st, Symbol=sym.st)

#View order book to confirm trades
getOrderBook(portfolio.st)

#Trade Statistics for CAGR, Max DD, and MAR
#calculate total equity curve performance Statistics
ec <- tail(cumsum(getPortfolio(portfolio.st)$summary$Net.Trading.PL),-1)
ec$initEq <- initEq
ec$totalEq <- ec$Net.Trading.PL + ec$initEq
ec$maxDD <- ec$totalEq/cummax(ec$totalEq)-1
ec$logret <- ROC(ec$totalEq, n=1, type="continuous")
ec$logret[is.na(ec$logret)] <- 0

RBrev1WI <- exp(cumsum(ec$logret)) #growth of $1
#write.zoo(RBrev1WI, file = "E:\\volfiltertest.csv", sep=",")

period.count <- NROW(ec)-104 #Use 104 because there is a 104 week lag for the 52 week SD and 52 week median of SD
year.count <- period.count/52
maxDD <- min(ec$maxDD)*100
totret <- as.numeric(last(ec$totalEq))/as.numeric(first(ec$totalEq))
CAGR <- (totret^(1/year.count)-1)*100
MAR <- CAGR/abs(maxDD)

Perf.Stats <- c(CAGR, maxDD, MAR)
names(Perf.Stats) <- c("CAGR", "maxDD", "MAR")
Perf.Stats

transactions <- getTxns(Portfolio = portfolio.st, Symbol = sym.st)
#write.zoo(transactions, file = "E:\\filtertxn.csv")

charts.PerformanceSummary(ec$logret, wealth.index = TRUE, ylog = TRUE, colorset = "steelblue2", main = "SMA with Volatility Filter System Performance")


Simple Moving Average Strategy with a Volatility Filter: Follow-Up Part 1

Analyzing transactions in quantstrat

This post will be part 1 of a follow up to the original post, Simple Moving Average Strategy with a Volatility Filter. In this follow up, I will take a closer look at the individual trades of each strategy. This may provide valuable information to explain the difference in performance of the SMA Strategy with a volatility filter and without a volatility filter.

Thankfully, the creators of the quantstrat package have made it very easy to view the transactions with a simple function and a single line of code.

getTxns(Portfolio, Symbol, Dates)
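For example, pulling the transactions for the volatility filter test from the earlier post (assuming that portfolio has already been built and updated) looks like this:

#Assumes the 'RBtest' portfolio from the volatility filter post has been run
txns <- getTxns(Portfolio = "RBtest", Symbol = "GSPC")
head(txns)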

For the rest of the post, I will refer to the strategies as:

  • Strategy 1 =  Simple Moving Average Strategy with a Volatility Filter
  • Strategy 2 = Simple Moving Average Strategy without a Volatility Filter

It is evident from the equity curves in the last post that neither strategy did much from the year 2000 to 2012. For that reason, I will analyze the period from 1990 to 2000.

Strategy 1 Transactions

                    Txn.Qty Txn.Price Txn.Fees  Txn.Value Txn.Avg.Cost Net.Txn.Realized.PL
1900-01-01 00:00:00       0      0.00        0       0.00         0.00                0.00
1992-10-23 00:00:00     410    414.10        0  169781.00       414.10                0.00
1994-04-08 00:00:00    -410    447.10        0 -183311.00       447.10            13530.00
1994-06-10 00:00:00     531    458.67        0  243553.77       458.67                0.00
1994-06-17 00:00:00    -531    458.45        0 -243436.95       458.45             -116.82
1995-05-19 00:00:00     247    519.19        0  128239.93       519.19                0.00
1998-09-04 00:00:00    -247    973.89        0 -240550.83       973.89           112310.90
1999-09-10 00:00:00      45   1351.66        0   60824.70      1351.66                0.00
1999-10-22 00:00:00     -45   1301.65        0  -58574.25      1301.65            -2250.45
1999-11-26 00:00:00      82   1416.62        0  116162.84      1416.62                0.00

Strategy 2 Transactions

                    Txn.Qty Txn.Price Txn.Fees  Txn.Value Txn.Avg.Cost Net.Txn.Realized.PL
1900-01-01 00:00:00       0      0.00        0       0.00         0.00                0.00
1992-10-23 00:00:00     410    414.10        0  169781.00       414.10                0.00
1994-04-08 00:00:00    -410    447.10        0 -183311.00       447.10            13530.00
1994-06-10 00:00:00     531    458.67        0  243553.77       458.67                0.00
1994-06-17 00:00:00    -531    458.45        0 -243436.95       458.45             -116.82
1994-08-19 00:00:00     593    463.68        0  274962.24       463.68                0.00
1994-09-30 00:00:00    -593    462.71        0 -274387.03       462.71             -575.21
1994-10-07 00:00:00     562    455.10        0  255766.20       455.10                0.00
1994-10-14 00:00:00    -562    469.10        0 -263634.20       469.10             7868.00
1994-10-21 00:00:00     560    464.89        0  260338.40       464.89                0.00
1994-12-02 00:00:00    -560    453.30        0 -253848.00       453.30            -6490.40
1995-01-13 00:00:00     548    465.97        0  255351.56       465.97                0.00
1998-09-04 00:00:00    -548    973.89        0 -533691.72       973.89           278340.16
1998-10-02 00:00:00      66   1002.60        0   66171.60      1002.60                0.00
1998-10-09 00:00:00     -66    984.39        0  -64969.74       984.39            -1201.86
1998-10-23 00:00:00      68   1070.67        0   72805.56      1070.67                0.00
1999-10-22 00:00:00     -68   1301.65        0  -88512.20      1301.65            15706.64
1999-10-29 00:00:00      70   1362.93        0   95405.10      1362.93                0.00

For ease of comparison, I exported the transactions for each strategy to Excel and aligned the trades as closely as I could by date.
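If you want to do the same, something like the snippet below works (each strategy's script uses the portfolio name 'RBtest', so run one script, export, then run the other; the file name is just an example):

#Export the current portfolio's transactions to a CSV for side-by-side comparison
txns <- getTxns(Portfolio = "RBtest", Symbol = "GSPC")
write.zoo(txns, file = "strategy_txns.csv", sep = ",")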

First, let's look at the trades highlighted by the red rectangle. Strategy 2 executed a trade for 548 units on 1/13/1995 and closed it on 9/4/1998 for a total profit of $278,340.16. By comparison, Strategy 1 executed a trade for 247 units on 5/19/1995 (about 4 months later) and closed it on 9/4/1998 for a total profit of $112,310.90. This is a significant difference of $166,029. It is clear that this single trade is critical to the performance of the strategy.

Now, let's look at the trade highlighted by the yellow rectangle. Both trades were closed on 10/22/1999. Strategy 1 resulted in a loss of $2,250.45 and Strategy 2 resulted in a gain of $15,706.64… a difference of $17,957.09.

The equity curve of Strategy 1 compared with Strategy 2 shows a clearer picture of the outperformance.

[Chart: equity curves of Strategy 1 vs. Strategy 2]

Why such a big difference?

For an even closer look, we will need to take a look at the measure of volatility we use as a filter. I will make a few modifications to the RB function so we can see the volatility measure and median.

#Function that calculates the n period standard deviation of close prices.
#This is used in place of ATR so that I can use only close prices.
SDEV <- function(x, n){
  sdev <- runSD(x, n, sample = FALSE)
  colnames(sdev) <- "SDEV"
  reclass(sdev,x)
}

#Custom indicator function 
RB <- function(x,n){
  x <- x
  roc <- ROC(x, n=1, type="discrete")
  sd <- runSD(roc,n, sample= FALSE)
  #sd[is.na(sd)] <- 0
  med <- runMedian(sd,n)
  #med[is.na(med)] <- 0
  mavg <- SMA(x,n)
  signal <- ifelse(sd < med & x > mavg,1,0)
  colnames(signal) <- "RB"
  ret <- cbind(x,roc,sd,med,mavg,signal)
  colnames(ret) <- c("close","roc","sd","med","mavg","RB")
  reclass(ret,x)
  }

data <- cbind(RB(Ad(GSPC),n=52),SDEV(Ad(GSPC),n=52)) #RB is the volatility signal indicator and SDEV is used for position sizing
> data['1995']
                     close           roc          sd        med     mavg RB      SDEV
1995-01-13 00:00:00 465.97  0.0114830251 0.013545475 0.01088292 459.7775  0  8.924008
...
1995-05-19 00:00:00 519.19 -0.0121016078 0.012412166 0.01259515 472.6006  1 21.161032


The sd for 1995-01-13 is 0.0135 while the SDEV is 8.924. The sd for 1995-05-19 is 0.0124 while the SDEV is 21.16… the SDEV is more than twice as large even though our volatility measure is indicating a period of low volatility! (Note: SDEV has a direct impact on position sizing.)
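To see why that matters for position sizing, here is the rough arithmetic inside osPCTEQ (the equity figure is hypothetical; the point is the ratio of the two position sizes, not the exact unit counts):

#Illustrative only: a larger SDEV (stop size) means fewer units for the same dollar risk
DollarRisk <- 100000 * 0.05        #5% risk on a hypothetical 100,000 of equity
round(DollarRisk / 8.924)          #stop based on SDEV at 1995-01-13 -> about 560 units
round(DollarRisk / 21.161)         #stop based on SDEV at 1995-05-19 -> about 236 units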

Perhaps we should take a second look at our choice of volatility measure.

If you want to incorporate a volatility filter into your system, choose the volatility measure wisely…

Simple Moving Average Strategy with a Volatility Filter

I would describe my trading approach as systematic long term trend following. A trend following strategy can be mentally difficult to trade after multiple consecutive losses, whether a trade reverses due to a volatility spike or the trend itself reverses. Volatility tends to increase when prices fall, which is not good for a long only trend following strategy, especially when initially entering trades.

Can adding a volatility filter to a simple system improve performance?

SMA System with Volatility Filter Rules

  • Buy Rule: Go long if close is greater than the N period SMA and a volatility measure is less than its median over the last N periods.
  • Exit Rule: Exit if long and close is less than the N period SMA

SMA System without Volatility Filter Rules

  • Buy Rule: Go long if close is greater than the N period SMA
  • Exit Rule: Exit if close is less than the N period SMA

For this test, my volatility measure is the 52 period standard deviation of the 1 period change of close prices and I will use a 52 period SMA.

I will test the strategy on the total return series of the S&P500 using weekly prices from 1/1/1990 to 4/17/2012.

yuck… the equity curves look pretty good up until 1999, then not so good after that.

[Charts: equity curves for the SMA system with and without the volatility filter]

                             CAGR       maxDD      MAR       # Trades   Ending Equity   Percent Winning Trades
SMA with Volatility Filter   4.369174   -22.3993   0.195059   34        $239,104.70     58.82
SMA System                   7.442673   -22.2756   0.334119   57        $464,198.80     53.57

This test shows that adding a volatility filter to our entries can actually hinder performance. Keep in mind this is by no means an exhaustive test, and it is only on a single instrument. I also chose the 52 period SMA and SDEV lookback somewhat arbitrarily because 52 weeks represents one year.

Reading through trading forums, it is clear that people are in search of the “holy grail” trading system. Some people claim to have found the “holy grail” system, but that system is usually a combination of 10+ indicators and rules that say “use indicator A, B, and C when the market is doing X, or use indicators D, E, and F when the market is doing Y.” Beware of these “filters” and always test them yourself.

Stay tuned for future posts that will look at adding a similar filter on a multiple instrument test.

What have you found with adding entry filters to trading systems?

require(PerformanceAnalytics)
require(quantstrat)

sym.st = "GSPC"
currency("USD")
stock(sym.st, currency="USD",multiplier=1)
getSymbols("^GSPC", src='yahoo', index.class=c("POSIXt","POSIXct"), from='1990-01-01')
GSPC <- to.weekly(GSPC,indexAt='lastof',drop.time=TRUE)

#Custom Order Sizing Function to trade percent of equity based on a stopsize
osPCTEQ <- function(timestamp, orderqty, portfolio, symbol, ruletype, ...){
  tempPortfolio <- getPortfolio(portfolio.st)
  dummy <- updatePortf(Portfolio=portfolio.st, Dates=paste('::',as.Date(timestamp),sep=''))
  trading.pl <- sum(getPortfolio(portfolio.st)$summary$Realized.PL) #change to ..$summary$Net.Trading.PL for Total Equity Position Sizing
  assign(paste("portfolio.",portfolio.st,sep=""),tempPortfolio,pos=.blotter)
  total.equity <- initEq+trading.pl
  DollarRisk <- total.equity * trade.percent
  ClosePrice <- as.numeric(Cl(mktdata[timestamp,]))
  mavg <- as.numeric(mktdata$SMA52[timestamp,])
  sign1 <- ifelse(ClosePrice > mavg, 1, -1)
  sign1[is.na(sign1)] <- 1
  Posn = getPosQty(Portfolio = portfolio.st, Symbol = sym.st, Date = timestamp)
  StopSize <- as.numeric(mktdata$SDEV[timestamp,]*StopMult) #Stop = SDAVG * StopMult !Must have SDAVG or other indicator to determine stop size
  orderqty <- ifelse(Posn == 0, sign1*round(DollarRisk/StopSize), 0) # number contracts traded is equal to DollarRisk/StopSize
  return(orderqty)
}

#Function that calculates the n period standard deviation of close prices.
#This is used in place of ATR so that I can use only close prices.
SDEV <- function(x, n){
  sdev <- runSD(x, n, sample = FALSE)
  colnames(sdev) <- "SDEV"
  reclass(sdev,x)
}

#Custom indicator function 
RB <- function(x,n){
  x <- x
  roc <- ROC(x, n=1, type="discrete")
  sd <- runSD(roc,n, sample= FALSE)
  sd[is.na(sd)] <- 0
  med <- runMedian(sd,n)
  med[is.na(med)] <- 0
  mavg <- SMA(x,n)
  signal <- ifelse(sd < med & x > mavg,1,0)
  colnames(signal) <- "RB"
  #ret <- cbind(x,roc,sd,med,mavg,signal)
  #colnames(ret) <- c("close","roc","sd","med","mavg","lowvol")
  reclass(signal,x)
  }

initDate='1900-01-01'
initEq <- 100000

trade.percent <- .05 #percent risk used in sizing function
StopMult = 1 #stop size used in sizing function

#Name the portfolio and account
portfolio.st='RBtest'
account.st='RBtest'

#Initialization
initPortf(portfolio.st, symbols=sym.st, initPosQty=0, initDate=initDate, currency="USD")
initAcct(account.st,portfolios=portfolio.st, initDate=initDate, initEq=initEq)
initOrders(portfolio=portfolio.st,initDate=initDate)

#Name the strategy
stratRB <- strategy('RBtest')

#Add indicators
#The first indicator is the 52 period SMA
#The second indicator is the RB indicator. The RB indicator returns a value of 1 when close > SMA & volatility < runMedian(volatility, n = 52)
stratRB <- add.indicator(strategy = stratRB, name = "SMA", arguments = list(x = quote(Cl(mktdata)), n=52), label="SMA52")
stratRB <- add.indicator(strategy = stratRB, name = "RB", arguments = list(x = quote(Cl(mktdata)), n=52), label="RB")
stratRB <- add.indicator(strategy = stratRB, name = "SDEV", arguments = list(x = quote(Cl(mktdata)), n=52), label="SDEV")

#Add signals
#The buy signal is when the RB indicator crosses from 0 to 1
#The exit signal is when the close crosses below the SMA
stratRB <- add.signal(strategy = stratRB, name="sigThreshold", arguments = list(threshold=1, column="RB",relationship="gte", cross=TRUE),label="RB.gte.1")
stratRB <- add.signal(strategy = stratRB, name="sigCrossover", arguments = list(columns=c("Close","SMA52"),relationship="lt"),label="Cl.lt.SMA")

#Add rules
stratRB <- add.rule(strategy = stratRB, name='ruleSignal', arguments = list(sigcol="RB.gte.1", sigval=TRUE, orderqty=1000, ordertype='market', orderside='long', osFUN = 'osPCTEQ', pricemethod='market', replace=FALSE), type='enter', path.dep=TRUE)
stratRB <- add.rule(strategy = stratRB, name='ruleSignal', arguments = list(sigcol="Cl.lt.SMA", sigval=TRUE, orderqty='all', ordertype='market', orderside='long', pricemethod='market',TxnFees=0), type='exit', path.dep=TRUE)

# Process the indicators and generate trades
start_t<-Sys.time()
out<-try(applyStrategy(strategy=stratRB , portfolios=portfolio.st))
end_t<-Sys.time()
print("Strategy Loop:")
print(end_t-start_t)

start_t<-Sys.time()
updatePortf(Portfolio=portfolio.st,Dates=paste('::',as.Date(Sys.time()),sep=''))
end_t<-Sys.time()
print("updatePortf execution time:")
print(end_t-start_t)

chart.Posn(Portfolio=portfolio.st,Symbol=sym.st)

#Update Account
updateAcct(account.st)

#Update Ending Equity
updateEndEq(account.st)

#ending equity
getEndEq(account.st, Sys.Date()) + initEq

tstats <- tradeStats(Portfolio=portfolio.st, Symbol=sym.st)

#View order book to confirm trades
#getOrderBook(portfolio.st)

#Trade Statistics for CAGR, Max DD, and MAR
#calculate total equity curve performance Statistics
ec <- tail(cumsum(getPortfolio(portfolio.st)$summary$Net.Trading.PL),-1)
ec$initEq <- initEq
ec$totalEq <- ec$Net.Trading.PL + ec$initEq
ec$maxDD <- ec$totalEq/cummax(ec$totalEq)-1
ec$logret <- ROC(ec$totalEq, n=1, type="continuous")
ec$logret[is.na(ec$logret)] <- 0

Strat.Wealth.Index <- exp(cumsum(ec$logret)) #growth of $1

period.count <- NROW(ec)-104 #Use 104 because there is a 104 week lag for the 52 week SD and 52 week median of SD
year.count <- period.count/52
maxDD <- min(ec$maxDD)*100
totret <- as.numeric(last(ec$totalEq))/as.numeric(first(ec$totalEq))
CAGR <- (totret^(1/year.count)-1)*100
MAR <- CAGR/abs(maxDD)

Perf.Stats <- c(CAGR, maxDD, MAR)
names(Perf.Stats) <- c("CAGR", "maxDD", "MAR")
Perf.Stats
#write.zoo(mktdata, file = "E:\\a.csv")

charts.PerformanceSummary(ec$logret, wealth.index = TRUE, colorset = "steelblue2", main = "SMA with Volatility Filter System Performance")


Disclaimer: Past results do not guarantee future returns. Information on this website is for informational purposes only and does not offer advice to buy or sell any securities.

Low Volatility with R

Low volatility and minimum variance strategies have been getting a lot of attention lately due to their outperformance in recent years. Let’s take a look at how we can incorporate this low volatility effect into a monthly rotational strategy with a basket of ETFs.

Performance Summary from Low Volatility Test in quantstrat

Starting Equity: 100,000
Ending Equity: 114,330
CAGR: 1.099%
maxDD: -38.325%
MAR:  0.0287

Not the greatest performance stats in the world. There are some things we can do to improve this strategy, but I will save that for later. The purpose of this post was an exercise in using quantstrat to implement a low volatility ranking system.

We can see from the chart that the low volatility strategy does what it is supposed to do… the drawdown is reduced compared to a buy and hold strategy on SPY. This is by no means a conclusive test. Ideally, the test would cover 20, 40, 60+ years of data to show the “longer” term performance of both strategies.

Here is a step by step approach to implementing the strategy in R.

The first step is to fire up R and load the quantstrat package.

require(quantstrat)

This test will use nine of the Select Sector SPDR ETFs.
XLY – Consumer Discretionary Select Sector SPDR
XLP – Consumer Staples Select Sector SPDR
XLE – Energy Select Sector SPDR
XLF – Financial Select Sector SPDR
XLV – Health Care Select Sector SPDR
XLI – Industrial Select Sector SPDR
XLK – Technology Select Sector SPDR
XLB – Materials Select Sector SPDR
XLU – Utilities Select Sector SPDR

#Symbol list to pass to the getSymbols function
symbols = c("XLY", "XLP", "XLE", "XLF", "XLV", "XLI", "XLK", "XLB", "XLU")
#Load ETFs from yahoo
currency("USD")
stock(symbols, currency="USD",multiplier=1)
getSymbols(symbols, src='yahoo', index.class=c("POSIXt","POSIXct"), from='2000-01-01')

#Data is downloaded as daily data
#Convert to monthly
for(symbol in symbols) {
  x<-get(symbol)
  x<-to.monthly(x,indexAt='lastof',drop.time=TRUE)
  indexFormat(x)<-'%Y-%m-%d'
  colnames(x)<-gsub("x",symbol,colnames(x))
  assign(symbol,x)
}

Here is what the data for XLB looks like after it is downloaded and converted to monthly:

> tail(XLB)
           XLB.Open XLB.High XLB.Low XLB.Close XLB.Volume XLB.Adjusted
2011-11-30    33.10    35.73   31.41     34.52  290486300        34.15
2011-12-31    34.34    35.01   31.86     33.50  233453200        33.37
2012-01-31    34.24    37.73   34.23     37.18  171601400        37.04
2012-02-29    37.48    37.97   36.40     36.97  179524000        36.83
2012-03-31    37.19    37.65   35.80     36.97  201651000        36.97
2012-04-30    36.92    37.63   35.10     35.59   85846600        35.59

The measure of volatility that I will use is a rolling 12 period standard deviation of the 1 period ROC, where the 1 period ROC is taken on the adjusted close prices. My approach for the ranking system is to first apply the standard deviation to the market data and then assign a rank of 1, 2, …, 9 to the instruments. There may be a more elegant way to do this in R, so if you have an alternative way to implement it, I am all ears.

#Calculate the ranking factors for each symbol and bind to its symbol
#This loops through the list of symbols and adds a "RANK" column
for(symbol in symbols) {
  x <- get(symbol)
  x1 <- ROC(Ad(x), n=1, type="continuous", na.pad=TRUE)
  colnames(x1) <- "ROC"
  colnames(x1) <- paste("x",colnames(x1), sep =".")
  #x2 is the 12 period standard deviation of the 1 month return
  x2 <- runSD(x1, n=12)
  colnames(x2) <- "RANK"
  colnames(x2) <- paste("x",colnames(x2), sep =".")
  x <- cbind(x,x2)
  colnames(x)<-gsub("x",symbol,colnames(x))
  assign(symbol,x)
}

Now the XLB data has an extra “RANK” column containing the 12 period SD of the 1 period ROC:

> tail(XLB)
           XLB.Open XLB.High XLB.Low XLB.Close XLB.Volume XLB.Adjusted   XLB.RANK
2011-11-30    33.10    35.73   31.41     34.52  290486300        34.15 0.08300814
2011-12-31    34.34    35.01   31.86     33.50  233453200        33.37 0.07752127
2012-01-31    34.24    37.73   34.23     37.18  171601400        37.04 0.08425784
2012-02-29    37.48    37.97   36.40     36.97  179524000        36.83 0.08381949
2012-03-31    37.19    37.65   35.80     36.97  201651000        36.97 0.08360368
2012-04-30    36.92    37.63   35.10     35.59   85846600        35.59 0.08367737

#Bind each symbol's "RANK" column into a single xts object
rank.factors <- cbind(XLB$XLB.RANK,
                      XLE$XLE.RANK,
                      XLF$XLF.RANK,
                      XLI$XLI.RANK,
                      XLK$XLK.RANK,
                      XLP$XLP.RANK,
                      XLU$XLU.RANK,
                      XLV$XLV.RANK,
                      XLY$XLY.RANK)

Here is what our rank.factors object looks like.

> tail(rank.factors)
             XLB.RANK   XLE.RANK   XLF.RANK   XLI.RANK   XLK.RANK   XLP.RANK   XLU.RANK   XLV.RANK   XLY.RANK
2011-11-30 0.08300814 0.08837101 0.07381782 0.06492454 0.04169398 0.02930909 0.01532320 0.03559538 0.04946373
2011-12-31 0.07752127 0.08522966 0.06612174 0.06136258 0.03898518 0.02811202 0.01555798 0.03451478 0.04843218
2012-01-31 0.08425784 0.08291821 0.07063470 0.06389852 0.04171582 0.02806211 0.02160217 0.03502983 0.05052721
2012-02-29 0.08381949 0.08192191 0.07192495 0.06410781 0.04552402 0.02863641 0.02164171 0.03451369 0.04946965
2012-03-31 0.08360368 0.08223880 0.07536219 0.06385518 0.04589758 0.02914123 0.02158078 0.03581751 0.05032237
2012-04-30 0.08367737 0.08291464 0.07608845 0.06423188 0.04573648 0.02728300 0.02114430 0.03341575 0.05064814

Now we need to apply a “RANK” of 1 through 9 (because there are 9 symbols).

#ranked in order such that the symbol with the lowest volatility is given a rank of 1
r <- as.xts(t(apply(rank.factors, 1, rank)))

Here is what the r object looks like, with each symbol ranked by volatility:

> tail(r)
           XLB.RANK XLE.RANK XLF.RANK XLI.RANK XLK.RANK XLP.RANK XLU.RANK XLV.RANK XLY.RANK
2011-11-30        8        9        7        6        4        2        1        3        5
2011-12-31        8        9        7        6        4        2        1        3        5
2012-01-31        9        8        7        6        4        2        1        3        5
2012-02-29        9        8        7        6        4        2        1        3        5
2012-03-31        9        8        7        6        4        2        1        3        5
2012-04-30        9        8        7        6        4        2        1        3        5

#Set the symbol's market data back to its original structure so we don't have 2 columns named "RANK"
for (symbol in symbols){
  x <- get(symbol)
  x <- x[,1:6]
  assign(symbol,x)
}

#Bind the symbol's rank to the symbol's market data
XLB <- cbind(XLB,r$XLB.RANK)
XLE <- cbind(XLE,r$XLE.RANK)
XLF <- cbind(XLF,r$XLF.RANK)
XLI <- cbind(XLI,r$XLI.RANK)
XLK <- cbind(XLK,r$XLK.RANK)
XLP <- cbind(XLP,r$XLP.RANK)
XLU <- cbind(XLU,r$XLU.RANK)
XLV <- cbind(XLV,r$XLV.RANK)
XLY <- cbind(XLY,r$XLY.RANK)

Now we can see that each symbol has an extra “RANK” column

> tail(XLB)
           XLB.Open XLB.High XLB.Low XLB.Close XLB.Volume XLB.Adjusted XLB.RANK
2011-11-30    33.10    35.73   31.41     34.52  290486300        34.15        8
2011-12-31    34.34    35.01   31.86     33.50  233453200        33.37        8
2012-01-31    34.24    37.73   34.23     37.18  171601400        37.04        9
2012-02-29    37.48    37.97   36.40     36.97  179524000        36.83        9
2012-03-31    37.19    37.65   35.80     36.97  201651000        36.97        9
2012-04-30    36.92    37.63   35.10     36.56   99089100        36.56        9

Now that the market data is “prepared”, we can easily implement the strategy using quantstrat. Note that the buy signal fires when the “RANK” column is less than or equal to 3, which means the strategy buys the 3 instruments with the lowest volatility. The full quantstrat code follows.

#Market data is prepared with each symbols rank based on the factors chosen
#Now use quantstrat to execute the strategy

#Set Initial Values
initDate='1900-01-01' #initDate must be before the first date in the market data
initEq=100000 #initial equity

#Name the portfolio
portfolio.st='RSRANK'

#Name the account
account.st='RSRANK'

#Initialization
initPortf(portfolio.st, symbols=symbols, initPosQty=0, initDate=initDate, currency = "USD")
initAcct(account.st,portfolios=portfolio.st, initDate=initDate, initEq=initEq)
initOrders(portfolio=portfolio.st,initDate=initDate)

#Initialize strategy object
stratRSRANK <- strategy(portfolio.st)

# There are two signals:
# The first is when Rank is less than or equal to N (i.e. trades the #1 ranked symbol if N=1)
stratRSRANK <- add.signal(strategy = stratRSRANK, name="sigThreshold",arguments = list(threshold=3, column="RANK",relationship="lte", cross=TRUE),label="Rank.lte.N")
# The second is when Rank is greater than N
stratRSRANK <- add.signal(strategy = stratRSRANK, name="sigThreshold",arguments = list(threshold=3, column="RANK",relationship="gt",cross=TRUE),label="Rank.gt.N")

# There are two rules:
# The first is to buy when the Rank crosses to or below the threshold
stratRSRANK <- add.rule(strategy = stratRSRANK, name='ruleSignal', arguments = list(sigcol="Rank.lte.N", sigval=TRUE, orderqty=1000, ordertype='market', orderside='long', pricemethod='market', replace=FALSE), type='enter', path.dep=TRUE)

#The second is to exit when the symbol's Rank crosses above the threshold
stratRSRANK <- add.rule(strategy = stratRSRANK, name='ruleSignal', arguments = list(sigcol="Rank.gt.N", sigval=TRUE, orderqty='all', ordertype='market', orderside='long', pricemethod='market', replace=FALSE), type='exit', path.dep=TRUE)

#Apply the strategy to the portfolio
start_t<-Sys.time()
out<-try(applyStrategy(strategy=stratRSRANK , portfolios=portfolio.st))
end_t<-Sys.time()
print(end_t-start_t)

#Update Portfolio
start_t<-Sys.time()
updatePortf(Portfolio=portfolio.st,Dates=paste('::',as.Date(Sys.time()),sep=''))
end_t<-Sys.time()
print("trade blotter portfolio update:")
print(end_t-start_t)

#Update Account
updateAcct(account.st)

#Update Ending Equity
updateEndEq(account.st)

#get ending equity
getEndEq(account.st, Sys.Date()) + initEq

#View order book to confirm trades
getOrderBook(portfolio.st)

tstats <- tradeStats(Portfolio=portfolio.st, Symbol=symbols)

chart.Posn(Portfolio=portfolio.st,Symbol="XLF")

#Trade Statistics for CAGR, Max DD, and MAR
#calculate total equity curve performance Statistics
ec <- tail(cumsum(getPortfolio(portfolio.st)$summary$Net.Trading.PL),-1)
ec$initEq <- initEq
ec$totalEq <- ec$Net.Trading.PL + ec$initEq
ec$maxDD <- ec$totalEq/cummax(ec$totalEq)-1
ec$logret <- ROC(ec$totalEq, n=1, type="continuous")
ec$logret[is.na(ec$logret)] <- 0

Strat.Wealth.Index <- exp(cumsum(ec$logret)) #growth of $1
write.zoo(Strat.Wealth.Index, file = "E:\\a.csv")

period.count <- NROW(ec)
year.count <- period.count/12
maxDD <- min(ec$maxDD)*100
totret <- as.numeric(last(ec$totalEq))/as.numeric(first(ec$totalEq))
CAGR <- (totret^(1/year.count)-1)*100
MAR <- CAGR/abs(maxDD)

Perf.Stats <- c(CAGR, maxDD, MAR)
names(Perf.Stats) <- c("CAGR", "maxDD", "MAR")
#tstats
Perf.Stats

#Benchmark against a buy and hold strategy with SPY
require(PerformanceAnalytics)
getSymbols("SPY", src='yahoo', index.class=c("POSIXt","POSIXct"), from='2001-01-01')
SPY <- to.monthly(SPY,indexAt='lastof',drop.time=TRUE)

SPY.ret <- Return.calculate(Ad(SPY), method="compound")
SPY.ret[is.na(SPY.ret)] <- 0
SPY.wi <- exp(cumsum(SPY.ret))

write.zoo(SPY.wi, file = "E:\\a1.csv")
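Rather than exporting both series to CSV, the two wealth indices can also be charted directly in R (a sketch only; the series start on different dates, so the merged object will contain leading NAs):

#Sketch: overlay the strategy wealth index and the SPY buy and hold wealth index
wi <- merge(Strat.Wealth.Index, SPY.wi)
colnames(wi) <- c("Low Volatility Strategy", "SPY Buy & Hold")
chart.TimeSeries(wi, legend.loc = "topleft",
                 main = "Growth of $1: Low Volatility Strategy vs SPY")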


Disclaimer: Past results do not guarantee future returns. Information on this website is for informational purposes only and does not offer advice to buy or sell any securities.

Hello world!

Welcome to the first post of the RB Research blog. In this blog, I will focus on quantitative research, trading strategy ideas, and backtesting, primarily in the Foreign Exchange (FX) and equity markets. In the past, I did nearly all of my testing and analysis in Microsoft Excel, but over the past 6 months I have been “bitten” by the programming bug. My language of choice is the R language because of the vast amount of contributed packages and the tremendous support community. It has been frustrating, insightful, and rewarding all at the same time. My initial inspiration for moving my testing from Excel to R was a series of posts over at FOSS Trading. If you haven’t checked out his blog, I highly recommend it! Other blogs that have been influential are:

  • Timely Portfolio for his excellent posts and sharing his R code
  • World Beta for a plethora of research ideas and articles
  • and many others

As stated earlier, the themes of my posts will be research driven using the R programming language, and maybe even some vb.net. I consider myself a beginner programmer and hope that by sharing my work with others through this blog, my programming skills will continue to develop.

Stay tuned…

RB