ep_cmdline.pl generates scripts that call JBD. It provides good default arguments and an easy way to specify non-default ones. The simplest invocation, run from the directory containing your data files, is:

ep_cmdline.pl --dir .

ep_cmdline.pl will do the following things:

  1. combine the .coeffs files into a single file, combined.coeffs
  2. generate a set of scripts that invoke JBD
  3. print, on stdout, the calls to submit the jobs it just created
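Step 1 amounts to concatenating the per-array .coeffs files into one file. A minimal sketch of that combination, using throwaway files in a temp directory (ep_cmdline.pl's actual merge logic may differ, e.g. in header handling):

```shell
# Sketch of step 1: concatenate per-array .coeffs files into combined.coeffs.
# The file contents below are made up; only the concatenation is the point.
workdir=$(mktemp -d)
printf 'probeA 0.9\n' > "$workdir/chip1.coeffs"
printf 'probeB 1.1\n' > "$workdir/chip2.coeffs"
cat "$workdir"/*.coeffs > "$workdir/combined.coeffs"
```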

Since ep_cmdline.pl generates calls to the jbd program you compiled, it needs to know the paths to both your matlab installation and the jbd binary. Edit the $jbdpath variable to tell it where to find the jbd binary. You may also need to help it find matlab, and you will probably need to modify the lines that generate LD_LIBRARY_PATH for the scripts, depending on the architecture you're using.
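Concretely, the edits point the generated scripts at the right binaries and libraries. A sketch of the environment such a script ends up needing (every path below is an example for illustration, not a value shipped with ep_cmdline.pl):

```shell
# Illustrative only -- substitute your actual matlab and jbd locations.
MATLAB_ROOT=/usr/local/matlab                    # example matlab install prefix
# glnxa64 is the 64-bit Linux runtime dir; other architectures use other names,
# which is why the LD_LIBRARY_PATH-generating lines may need editing.
export LD_LIBRARY_PATH="$MATLAB_ROOT/bin/glnxa64:$LD_LIBRARY_PATH"
# Inside ep_cmdline.pl, $jbdpath should point at your compiled jbd binary,
# e.g. (Perl syntax)  $jbdpath = "/home/you/jbd/jbd";
```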

You might need to modify ep_cmdline.pl if you're using a job-queueing system other than PBS/Torque or if it can't find your matlab installation. JBD can take a while to run, depending on how much data you have (in terms of number of probes and number of replicates) and also on the data itself: some data is harder to deconvolve than others. One input parameter that makes a big difference is minboundy. If JBD can't find a ratio greater than this value in a region, it won't look for binding there. We use 1.4 as the default, which gives a good tradeoff between speed and sensitivity to smaller binding events. Higher values examine fewer regions, yielding shorter runtimes but potentially missing more binding events.
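The minboundy cutoff can be illustrated with a toy filter: a region whose maximum probe ratio never exceeds the threshold is skipped outright. This is not JBD's actual code, and the region names and ratios are invented; it only shows the threshold effect described above:

```shell
# Toy illustration of the minboundy cutoff (not JBD's implementation).
# For each made-up region, track its maximum probe ratio; regions whose
# maximum never exceeds minboundy are skipped, the rest are examined.
minboundy=1.4
result=$(printf 'regionA 1.2\nregionA 1.35\nregionB 1.6\nregionB 1.1\n' |
  awk -v t="$minboundy" '{ if ($2 > m[$1]) m[$1] = $2 }
    END { for (r in m) if (m[r] > t) print r, "examined"; else print r, "skipped" }')
echo "$result"
```

With the threshold at 1.4, regionA (peak ratio 1.35) is skipped while regionB (peak ratio 1.6) is examined; raising minboundy would skip more regions, trading sensitivity for speed.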