<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.cs.vt.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Davisjam</id>
	<title>Computer Science Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.cs.vt.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Davisjam"/>
	<link rel="alternate" type="text/html" href="http://wiki.cs.vt.edu/index.php/Special:Contributions/Davisjam"/>
	<updated>2026-04-04T03:42:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.45.1</generator>
	<entry>
		<id>http://wiki.cs.vt.edu/index.php?title=Sushi101&amp;diff=4040</id>
		<title>Sushi101</title>
		<link rel="alternate" type="text/html" href="http://wiki.cs.vt.edu/index.php?title=Sushi101&amp;diff=4040"/>
		<updated>2020-04-17T15:33:26Z</updated>

		<summary type="html">&lt;p&gt;Davisjam: /* Sushi Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sushi&#039;&#039;&#039; is a small cluster shared by several Stack@CS faculty members.&lt;br /&gt;
&lt;br /&gt;
==Sushi Nodes==&lt;br /&gt;
There are 10 sushi nodes. Each node has:&lt;br /&gt;
* 48 cores&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* Local scratch space -- a few hundred GB in /tmp&lt;br /&gt;
* A shared /home file system -- 180TB over NFS&lt;br /&gt;
* 10 Gigabit Ethernet connection to the other nodes&lt;br /&gt;
* Access to the external Internet&lt;br /&gt;
&lt;br /&gt;
==Sushi Access==&lt;br /&gt;
&lt;br /&gt;
Only certain labs have access to Sushi. If you are not sure, ask the lab PI.&lt;br /&gt;
&lt;br /&gt;
If you have an account on sushi, you can access it via the head node: sushi.cs.vt.edu (128.173.236.117 on the intranet)&lt;br /&gt;
* scp external files to your home directory on the head node&lt;br /&gt;
* Launch jobs from the head node&lt;br /&gt;
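&lt;br /&gt;
For example, to copy input data up from your own machine (the filename and username here are placeholders):&lt;br /&gt;
&lt;br /&gt;
  scp my-input.json username@sushi.cs.vt.edu:~/&lt;br /&gt;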
&lt;br /&gt;
==Sushi Jobs==&lt;br /&gt;
&lt;br /&gt;
Launch jobs from the head node using the PBS job submission system. There are many guides to PBS on the web. I recommend [https://www.rcac.purdue.edu/knowledge/hammer/run/pbs Purdue&#039;s guide].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DO NOT run expensive tasks on the head node itself.&#039;&#039;&#039; This affects the cluster&#039;s stability and inconveniences everyone.&lt;br /&gt;
&lt;br /&gt;
== Learning to use sushi ==&lt;br /&gt;
&lt;br /&gt;
Before you use sushi for the first time, you should:&lt;br /&gt;
&lt;br /&gt;
* Read this wiki page&lt;br /&gt;
* Learn about the PBS system&lt;br /&gt;
* Review the man pages for qsub, qstat, and qnodes&lt;br /&gt;
* Try a simple practice job, e.g. an &amp;quot;echo&amp;quot; that prints the node name&lt;br /&gt;
&lt;br /&gt;
This may take you a day or two. It is well worth the investment.&lt;br /&gt;
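&lt;br /&gt;
As a starting point, a minimal practice job might look like this (the resource request and filename are illustrative):&lt;br /&gt;
&lt;br /&gt;
  $ cat hello.sh&lt;br /&gt;
  #!/usr/bin/env bash&lt;br /&gt;
  #PBS -l nodes=1:ppn=1&lt;br /&gt;
  echo &amp;quot;Hello from node&amp;quot; `hostname`&lt;br /&gt;
  $ qsub hello.sh&lt;br /&gt;
  $ qstat&lt;br /&gt;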
&lt;br /&gt;
==Example==&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what I do for &amp;quot;embarrassingly parallel&amp;quot; jobs driven by an input file with one task per line.&lt;br /&gt;
&lt;br /&gt;
=== Split input into files, one task per line ===&lt;br /&gt;
&lt;br /&gt;
  (10:51:16) davisjam@sushi-headnode ~/qsub-jobs/Memo/input $ split sl-regex-filteredForPrototype-all.json sl-regex-filteredForPrototype-all-piece-  --lines=3000 --additional-suffix=.json --numeric-suffixes --suffix-length=4&lt;br /&gt;
&lt;br /&gt;
=== Write job script ===&lt;br /&gt;
&lt;br /&gt;
I use this as a template and tweak it from there. You might also try the GNU Parallel tool. There&#039;s a copy in /home/davisjam/bin/parallel.&lt;br /&gt;
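&lt;br /&gt;
For instance, a GNU Parallel sketch of the same one-task-per-line pattern (the worker command is a placeholder):&lt;br /&gt;
&lt;br /&gt;
  /home/davisjam/bin/parallel -j $NCORES &#039;process-one-task {}&#039; :::: tasks.txt&lt;br /&gt;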
&lt;br /&gt;
  (12:05:29) davisjam@sushi-headnode ~/qsub-jobs/Memo $ cat qsub-memo.sh&lt;br /&gt;
  #!/usr/bin/env bash&lt;br /&gt;
  &lt;br /&gt;
  # You must provide REGEX_FILE&lt;br /&gt;
  # e.g. &amp;quot;qsub -v REGEX_FILE=&#039;/home/davisjam/qsub-jobs/RegexRepl/syntax/input/test/500.json&#039; qsub-syntax.sh&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  #########################################&lt;br /&gt;
  ## PBS Configuration (Single Comment # ONLY)&lt;br /&gt;
  #########################################&lt;br /&gt;
  #&lt;br /&gt;
  #PBS -l nodes=1:ppn=8&lt;br /&gt;
  #&lt;br /&gt;
  # Save all env vars -- including PERL5LIB&lt;br /&gt;
  #PBS -V&lt;br /&gt;
  #########################################&lt;br /&gt;
  &lt;br /&gt;
  #########################################&lt;br /&gt;
  ## Setup&lt;br /&gt;
  #########################################&lt;br /&gt;
  &lt;br /&gt;
  #OUT_FILE=~/data/syntax/cross-registry-real/`basename $REGEX_FILE .json`-slras-job$PBS_JOBID.json&lt;br /&gt;
  OUT_FILE=~/data/memo/all-SL/`basename $REGEX_FILE .json`-measureMemo-job$PBS_JOBID.pkl.bz2&lt;br /&gt;
  &lt;br /&gt;
  STDOUT_FILE=$HOME/logs/qsub-memo-$$.out&lt;br /&gt;
  STDERR_FILE=$HOME/logs/qsub-memo-$$.err&lt;br /&gt;
  &lt;br /&gt;
  NCORES=`wc -l &amp;lt; $PBS_NODEFILE`&lt;br /&gt;
  &lt;br /&gt;
  # Flush NFS?&lt;br /&gt;
  rm $STDOUT_FILE 2&amp;gt;/dev/null&lt;br /&gt;
  rm $STDERR_FILE 2&amp;gt;/dev/null&lt;br /&gt;
  sync; sync; sync; sync; sync;&lt;br /&gt;
  touch $STDOUT_FILE&lt;br /&gt;
  touch $STDERR_FILE&lt;br /&gt;
  &lt;br /&gt;
  # Here we go!&lt;br /&gt;
  echo &amp;quot;Hello on node &amp;quot; `hostname` &amp;quot; with $NCORES cores&amp;quot;&lt;br /&gt;
  echo &amp;quot;REGEX_FILE $REGEX_FILE&amp;quot;&lt;br /&gt;
  echo &amp;quot;OUT_FILE $OUT_FILE&amp;quot;&lt;br /&gt;
  echo &amp;quot;STDOUT_FILE $STDOUT_FILE STDERR_FILE $STDERR_FILE&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export MEMOIZATION_PROJECT_ROOT=~/memoized-regex-engine&lt;br /&gt;
  export ECOSYSTEM_REGEXP_PROJECT_ROOT=~/EcosystemRegexps&lt;br /&gt;
  &lt;br /&gt;
  set -x&lt;br /&gt;
  &lt;br /&gt;
  # For data about prototype&lt;br /&gt;
  # PYTHONUNBUFFERED=1 $MEMOIZATION_PROJECT_ROOT/eval/measure-memoization-behavior.py \&lt;br /&gt;
  #   --regex-file $REGEX_FILE \&lt;br /&gt;
  #   --queryPrototype \&lt;br /&gt;
  #   --trials 1 \&lt;br /&gt;
  #   --queryProductionEngines \&lt;br /&gt;
  #   --parallelism $NCORES \&lt;br /&gt;
  #   --out-file $OUT_FILE \&lt;br /&gt;
  #   &amp;gt; $STDOUT_FILE \&lt;br /&gt;
  #   2&amp;gt;$STDERR_FILE&lt;br /&gt;
  &lt;br /&gt;
  # For data about other regex engines -- use if you want to test with extended features not supported by prototype&lt;br /&gt;
  PYTHONUNBUFFERED=1 $MEMOIZATION_PROJECT_ROOT/eval/measure-memoization-behavior.py \&lt;br /&gt;
    --regex-file $REGEX_FILE \&lt;br /&gt;
    --useCSharpToFindMostEI \&lt;br /&gt;
    --queryProductionEngines \&lt;br /&gt;
    --parallelism $NCORES \&lt;br /&gt;
    --out-file $OUT_FILE \&lt;br /&gt;
    &amp;gt; $STDOUT_FILE \&lt;br /&gt;
    2&amp;gt;$STDERR_FILE&lt;br /&gt;
&lt;br /&gt;
=== Launch job ===&lt;br /&gt;
&lt;br /&gt;
  (10:56:30) davisjam@sushi-headnode ~/qsub-jobs/Memo $ for f in input/500-piece-*; do echo $f; qsub -v REGEX_FILE=`pwd`/input/$f qsub-memo.sh; done&lt;br /&gt;
&lt;br /&gt;
=== Monitor job ===&lt;br /&gt;
&lt;br /&gt;
  (11:13:54) davisjam@sushi-headnode ~/qsub-jobs/Memo $ ls -lhtra ~/logs&lt;br /&gt;
&lt;br /&gt;
(and tail log files, etc.)&lt;br /&gt;
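&lt;br /&gt;
For example (substitute your own username):&lt;br /&gt;
&lt;br /&gt;
  qstat -u davisjam&lt;br /&gt;
  tail -f ~/logs/qsub-memo-*.err&lt;br /&gt;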
&lt;br /&gt;
=== Export data ===&lt;br /&gt;
&lt;br /&gt;
If you want to export the data (e.g. for analysis in a Jupyter notebook), try something like this:&lt;br /&gt;
&lt;br /&gt;
  (11:15:32) davisjam@sushi-headnode ~/qsub-jobs/Memo $ mkdir ~/export-latest; cp ~/data/memo/all-SL/*.pkl.bz2 ~/export-latest; tar -czvf ~/export-latest.tgz ~/export-latest; scp ...&lt;br /&gt;
&lt;br /&gt;
== Handy scripts ==&lt;br /&gt;
&lt;br /&gt;
=== Check on your jobs ===&lt;br /&gt;
&lt;br /&gt;
How much longer will you be waiting?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;summarize-job-state.pl&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Summarize the status of the jobs of a user&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  if (scalar(@ARGV) != 1) {&lt;br /&gt;
    die &amp;quot;  Summarize state of jobs submitted by a user\nusage: $0 username\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  my @lines = `qstat -u $user`;&lt;br /&gt;
  my @running = grep { m/\s+$user\s+.*\sR\s/ } @lines;&lt;br /&gt;
  my @queued = grep { m/\s+$user\s+.*\sQ\s/ } @lines;&lt;br /&gt;
  my @error = grep { m/\s+$user\s+.*\sE\s/ } @lines;&lt;br /&gt;
  &lt;br /&gt;
  my $nRunning = scalar(@running);&lt;br /&gt;
  my $nQueued = scalar(@queued);&lt;br /&gt;
  my $nError = scalar(@error);&lt;br /&gt;
  my $nJobs = $nRunning + $nQueued + $nError;&lt;br /&gt;
  &lt;br /&gt;
  print &amp;quot;    Running jobs: $nRunning\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Queued jobs: $nQueued\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Error jobs: $nError\n&amp;quot;;&lt;br /&gt;
  print &amp;quot; + ------------------------\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Active jobs: $nJobs\n&amp;quot;;&lt;br /&gt;
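&lt;br /&gt;
Run it with the username whose jobs you want summarized, e.g.:&lt;br /&gt;
&lt;br /&gt;
  ./summarize-job-state.pl davisjam&lt;br /&gt;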
&lt;br /&gt;
=== Abort a run ===&lt;br /&gt;
&lt;br /&gt;
Sometimes you see an error show up in your log files and need to abort the run.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;kill-my-jobs.pl&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Kill (qdel) all jobs owned by the given user&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  if (scalar(@ARGV) != 1) {&lt;br /&gt;
    die &amp;quot;  qdel all jobs submitted by a user\nusage: $0 username\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  my @jobIDs = &amp;amp;getJobIDs($user);&lt;br /&gt;
  &lt;br /&gt;
  if (@jobIDs) {&lt;br /&gt;
    &amp;amp;log(&amp;quot;qdel&#039;ing the &amp;quot; . scalar(@jobIDs) . &amp;quot; jobs owned by $user&amp;quot;);&lt;br /&gt;
    my $cmd = &amp;quot;qdel &amp;quot; . join(&amp;quot; &amp;quot;, @jobIDs);&lt;br /&gt;
    system($cmd);&lt;br /&gt;
  } else {&lt;br /&gt;
    print &amp;quot;qstat reported no jobs to kill\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  ###########&lt;br /&gt;
  &lt;br /&gt;
  sub getJobIDs {&lt;br /&gt;
    my ($user) = @_;&lt;br /&gt;
  &lt;br /&gt;
    &amp;amp;log(&amp;quot;Using qstat to get the jobs owned by $user&amp;quot;);&lt;br /&gt;
    my @qstat_output = `qstat -u $user`;&lt;br /&gt;
    chomp @qstat_output;&lt;br /&gt;
    my @jobLines = grep { m/\s+$user\s+/ } @qstat_output;&lt;br /&gt;
    my @jobIDs = map {&lt;br /&gt;
      my ($id) = ( $_ =~ m/^(\d+)\./ );&lt;br /&gt;
      $id;&lt;br /&gt;
    } @jobLines;&lt;br /&gt;
    return @jobIDs;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  sub log {&lt;br /&gt;
    my ($msg) = @_;&lt;br /&gt;
    print STDERR &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
=== Clean up /tmp ===&lt;br /&gt;
&lt;br /&gt;
Sometimes my analysis tools leak files into /tmp on the sushi nodes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;clean-my-tmp.pl&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Print commands to clean up my files in /tmp across sushi&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  my @nodes = qw/ sushi01 sushi02 sushi03 sushi04 sushi05 sushi06 sushi07 sushi08 sushi09 sushi10 /;&lt;br /&gt;
  &lt;br /&gt;
  ## Parse args&lt;br /&gt;
  if (scalar(@ARGV) &amp;lt; 1 or scalar(@ARGV) &amp;gt; 2) {&lt;br /&gt;
    die &amp;quot;Print commands to delete files in /tmp on each sushi node [matching the specified find predicates]&lt;br /&gt;
  Usage: $0 owning-user [&#039;find predicates&#039;]&lt;br /&gt;
  &lt;br /&gt;
  Examples:&lt;br /&gt;
    $0 davisjam&lt;br /&gt;
      - Deletes all files owned by davisjam in /tmp on all sushi nodes&lt;br /&gt;
    $0 davisjam &#039;-name \&amp;quot;protoRegexEngine*\&amp;quot;&#039;&lt;br /&gt;
      - Delete all files ... whose name matches this predicate&lt;br /&gt;
        You should wrap predicates in single-quotes, and use double-quotes for any quoting within the predicates&lt;br /&gt;
        (The ssh command is wrapped in single-quotes)&lt;br /&gt;
  &amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $findPredicates = &amp;quot;&amp;quot;;&lt;br /&gt;
  if (scalar(@ARGV) &amp;gt;= 2) {&lt;br /&gt;
    $findPredicates = $ARGV[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  ## Cleanup operations&lt;br /&gt;
  for my $node (@nodes) {&lt;br /&gt;
    # Tests come before -delete so only matching regular files are removed&lt;br /&gt;
    my $cmd = &amp;quot;find /tmp -user $user -type f $findPredicates -delete&amp;quot;;&lt;br /&gt;
    print(&amp;quot;ssh $node &#039;$cmd&#039; &amp;amp;\n&amp;quot;);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  &amp;amp;log(&amp;quot;\n^^ If the preceding commands are to your liking, copy/paste/execute to run them.&amp;quot;);&lt;br /&gt;
  &lt;br /&gt;
  ############&lt;br /&gt;
  &lt;br /&gt;
  sub log {&lt;br /&gt;
    my ($msg) = @_;&lt;br /&gt;
    print STDERR &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
  }&lt;/div&gt;</summary>
		<author><name>Davisjam</name></author>
	</entry>
	<entry>
		<id>http://wiki.cs.vt.edu/index.php?title=Sushi101&amp;diff=4039</id>
		<title>Sushi101</title>
		<link rel="alternate" type="text/html" href="http://wiki.cs.vt.edu/index.php?title=Sushi101&amp;diff=4039"/>
		<updated>2020-04-17T15:29:36Z</updated>

		<summary type="html">&lt;p&gt;Davisjam: Formatting&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sushi&#039;&#039;&#039; is a small cluster shared by several Stack@CS faculty members.&lt;br /&gt;
&lt;br /&gt;
==Sushi Nodes==&lt;br /&gt;
There are 10 sushi nodes. Each node has:&lt;br /&gt;
* 48 cores&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* Local scratch space -- a few hundred GB in /tmp&lt;br /&gt;
* A shared /home file system -- 180TB over NFS&lt;br /&gt;
* 10 Gigabit Ethernet connection to the other nodes&lt;br /&gt;
* Access to the external Internet&lt;br /&gt;
&lt;br /&gt;
==Sushi Access==&lt;br /&gt;
&lt;br /&gt;
Only certain labs have access to Sushi. If you are not sure, ask the lab PI.&lt;br /&gt;
&lt;br /&gt;
If you have an account on sushi, you can access it via the head node: sushi.cs.vt.edu (128.173.236.117 on the intranet)&lt;br /&gt;
* scp external files to your home directory on the head node&lt;br /&gt;
* Launch jobs from the head node&lt;br /&gt;
&lt;br /&gt;
==Sushi Jobs==&lt;br /&gt;
&lt;br /&gt;
Launch jobs using the PBS job submission system. There are many guides to PBS on the web. I recommend [https://www.rcac.purdue.edu/knowledge/hammer/run/pbs Purdue&#039;s guide].&lt;br /&gt;
&lt;br /&gt;
== Learning to use sushi ==&lt;br /&gt;
&lt;br /&gt;
Before you use sushi for the first time, you should:&lt;br /&gt;
&lt;br /&gt;
* Read this wiki page&lt;br /&gt;
* Learn about the PBS system&lt;br /&gt;
* Review the man pages for qsub, qstat, and qnodes&lt;br /&gt;
* Try a simple practice job, e.g. an &amp;quot;echo&amp;quot; that prints the node name&lt;br /&gt;
&lt;br /&gt;
This may take you a day or two. It is well worth the investment.&lt;br /&gt;
&lt;br /&gt;
==Example==&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what I do for &amp;quot;embarrassingly parallel&amp;quot; jobs driven by an input file with one task per line.&lt;br /&gt;
&lt;br /&gt;
=== Split input into files, one task per line ===&lt;br /&gt;
&lt;br /&gt;
  (10:51:16) davisjam@sushi-headnode ~/qsub-jobs/Memo/input $ split sl-regex-filteredForPrototype-all.json sl-regex-filteredForPrototype-all-piece-  --lines=3000 --additional-suffix=.json --numeric-suffixes --suffix-length=4&lt;br /&gt;
&lt;br /&gt;
=== Write job script ===&lt;br /&gt;
&lt;br /&gt;
I use this as a template and tweak it from there. You might also try the GNU Parallel tool. There&#039;s a copy in /home/davisjam/bin/parallel.&lt;br /&gt;
&lt;br /&gt;
  (12:05:29) davisjam@sushi-headnode ~/qsub-jobs/Memo $ cat qsub-memo.sh&lt;br /&gt;
  #!/usr/bin/env bash&lt;br /&gt;
  &lt;br /&gt;
  # You must provide REGEX_FILE&lt;br /&gt;
  # e.g. &amp;quot;qsub -v REGEX_FILE=&#039;/home/davisjam/qsub-jobs/RegexRepl/syntax/input/test/500.json&#039; qsub-syntax.sh&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  #########################################&lt;br /&gt;
  ## PBS Configuration (Single Comment # ONLY)&lt;br /&gt;
  #########################################&lt;br /&gt;
  #&lt;br /&gt;
  #PBS -l nodes=1:ppn=8&lt;br /&gt;
  #&lt;br /&gt;
  # Save all env vars -- including PERL5LIB&lt;br /&gt;
  #PBS -V&lt;br /&gt;
  #########################################&lt;br /&gt;
  &lt;br /&gt;
  #########################################&lt;br /&gt;
  ## Setup&lt;br /&gt;
  #########################################&lt;br /&gt;
  &lt;br /&gt;
  #OUT_FILE=~/data/syntax/cross-registry-real/`basename $REGEX_FILE .json`-slras-job$PBS_JOBID.json&lt;br /&gt;
  OUT_FILE=~/data/memo/all-SL/`basename $REGEX_FILE .json`-measureMemo-job$PBS_JOBID.pkl.bz2&lt;br /&gt;
  &lt;br /&gt;
  STDOUT_FILE=$HOME/logs/qsub-memo-$$.out&lt;br /&gt;
  STDERR_FILE=$HOME/logs/qsub-memo-$$.err&lt;br /&gt;
  &lt;br /&gt;
  NCORES=`wc -l &amp;lt; $PBS_NODEFILE`&lt;br /&gt;
  &lt;br /&gt;
  # Flush NFS?&lt;br /&gt;
  rm $STDOUT_FILE 2&amp;gt;/dev/null&lt;br /&gt;
  rm $STDERR_FILE 2&amp;gt;/dev/null&lt;br /&gt;
  sync; sync; sync; sync; sync;&lt;br /&gt;
  touch $STDOUT_FILE&lt;br /&gt;
  touch $STDERR_FILE&lt;br /&gt;
  &lt;br /&gt;
  # Here we go!&lt;br /&gt;
  echo &amp;quot;Hello on node &amp;quot; `hostname` &amp;quot; with $NCORES cores&amp;quot;&lt;br /&gt;
  echo &amp;quot;REGEX_FILE $REGEX_FILE&amp;quot;&lt;br /&gt;
  echo &amp;quot;OUT_FILE $OUT_FILE&amp;quot;&lt;br /&gt;
  echo &amp;quot;STDOUT_FILE $STDOUT_FILE STDERR_FILE $STDERR_FILE&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export MEMOIZATION_PROJECT_ROOT=~/memoized-regex-engine&lt;br /&gt;
  export ECOSYSTEM_REGEXP_PROJECT_ROOT=~/EcosystemRegexps&lt;br /&gt;
  &lt;br /&gt;
  set -x&lt;br /&gt;
  &lt;br /&gt;
  # For data about prototype&lt;br /&gt;
  # PYTHONUNBUFFERED=1 $MEMOIZATION_PROJECT_ROOT/eval/measure-memoization-behavior.py \&lt;br /&gt;
  #   --regex-file $REGEX_FILE \&lt;br /&gt;
  #   --queryPrototype \&lt;br /&gt;
  #   --trials 1 \&lt;br /&gt;
  #   --queryProductionEngines \&lt;br /&gt;
  #   --parallelism $NCORES \&lt;br /&gt;
  #   --out-file $OUT_FILE \&lt;br /&gt;
  #   &amp;gt; $STDOUT_FILE \&lt;br /&gt;
  #   2&amp;gt;$STDERR_FILE&lt;br /&gt;
  &lt;br /&gt;
  # For data about other regex engines -- use if you want to test with extended features not supported by prototype&lt;br /&gt;
  PYTHONUNBUFFERED=1 $MEMOIZATION_PROJECT_ROOT/eval/measure-memoization-behavior.py \&lt;br /&gt;
    --regex-file $REGEX_FILE \&lt;br /&gt;
    --useCSharpToFindMostEI \&lt;br /&gt;
    --queryProductionEngines \&lt;br /&gt;
    --parallelism $NCORES \&lt;br /&gt;
    --out-file $OUT_FILE \&lt;br /&gt;
    &amp;gt; $STDOUT_FILE \&lt;br /&gt;
    2&amp;gt;$STDERR_FILE&lt;br /&gt;
&lt;br /&gt;
=== Launch job ===&lt;br /&gt;
&lt;br /&gt;
  (10:56:30) davisjam@sushi-headnode ~/qsub-jobs/Memo $ for f in input/500-piece-*; do echo $f; qsub -v REGEX_FILE=`pwd`/input/$f qsub-memo.sh; done&lt;br /&gt;
&lt;br /&gt;
=== Monitor job ===&lt;br /&gt;
&lt;br /&gt;
  (11:13:54) davisjam@sushi-headnode ~/qsub-jobs/Memo $ ls -lhtra ~/logs&lt;br /&gt;
&lt;br /&gt;
(and tail log files, etc.)&lt;br /&gt;
&lt;br /&gt;
=== Export data ===&lt;br /&gt;
&lt;br /&gt;
If you want to export the data (e.g. for analysis in a Jupyter notebook), try something like this:&lt;br /&gt;
&lt;br /&gt;
  (11:15:32) davisjam@sushi-headnode ~/qsub-jobs/Memo $ mkdir ~/export-latest; cp ~/data/memo/all-SL/*.pkl.bz2 ~/export-latest; tar -czvf ~/export-latest.tgz ~/export-latest; scp ...&lt;br /&gt;
&lt;br /&gt;
== Handy scripts ==&lt;br /&gt;
&lt;br /&gt;
=== Check on your jobs ===&lt;br /&gt;
&lt;br /&gt;
How much longer will you be waiting?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;summarize-job-state.pl&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Summarize the status of the jobs of a user&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  if (scalar(@ARGV) != 1) {&lt;br /&gt;
    die &amp;quot;  Summarize state of jobs submitted by a user\nusage: $0 username\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  my @lines = `qstat -u $user`;&lt;br /&gt;
  my @running = grep { m/\s+$user\s+.*\sR\s/ } @lines;&lt;br /&gt;
  my @queued = grep { m/\s+$user\s+.*\sQ\s/ } @lines;&lt;br /&gt;
  my @error = grep { m/\s+$user\s+.*\sE\s/ } @lines;&lt;br /&gt;
  &lt;br /&gt;
  my $nRunning = scalar(@running);&lt;br /&gt;
  my $nQueued = scalar(@queued);&lt;br /&gt;
  my $nError = scalar(@error);&lt;br /&gt;
  my $nJobs = $nRunning + $nQueued + $nError;&lt;br /&gt;
  &lt;br /&gt;
  print &amp;quot;    Running jobs: $nRunning\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Queued jobs: $nQueued\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Error jobs: $nError\n&amp;quot;;&lt;br /&gt;
  print &amp;quot; + ------------------------\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Active jobs: $nJobs\n&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
=== Abort a run ===&lt;br /&gt;
&lt;br /&gt;
Sometimes you see an error show up in your log files and need to abort the run.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;kill-my-jobs.pl&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Kill (qdel) all jobs owned by the given user&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  if (scalar(@ARGV) != 1) {&lt;br /&gt;
    die &amp;quot;  qdel all jobs submitted by a user\nusage: $0 username\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  my @jobIDs = &amp;amp;getJobIDs($user);&lt;br /&gt;
  &lt;br /&gt;
  if (@jobIDs) {&lt;br /&gt;
    &amp;amp;log(&amp;quot;qdel&#039;ing the &amp;quot; . scalar(@jobIDs) . &amp;quot; jobs owned by $user&amp;quot;);&lt;br /&gt;
    my $cmd = &amp;quot;qdel &amp;quot; . join(&amp;quot; &amp;quot;, @jobIDs);&lt;br /&gt;
    system($cmd);&lt;br /&gt;
  } else {&lt;br /&gt;
    print &amp;quot;qstat reported no jobs to kill\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  ###########&lt;br /&gt;
  &lt;br /&gt;
  sub getJobIDs {&lt;br /&gt;
    my ($user) = @_;&lt;br /&gt;
  &lt;br /&gt;
    &amp;amp;log(&amp;quot;Using qstat to get the jobs owned by $user&amp;quot;);&lt;br /&gt;
    my @qstat_output = `qstat -u $user`;&lt;br /&gt;
    chomp @qstat_output;&lt;br /&gt;
    my @jobLines = grep { m/\s+$user\s+/ } @qstat_output;&lt;br /&gt;
    my @jobIDs = map {&lt;br /&gt;
      my ($id) = ( $_ =~ m/^(\d+)\./ );&lt;br /&gt;
      $id;&lt;br /&gt;
    } @jobLines;&lt;br /&gt;
    return @jobIDs;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  sub log {&lt;br /&gt;
    my ($msg) = @_;&lt;br /&gt;
    print STDERR &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
=== Clean up /tmp ===&lt;br /&gt;
&lt;br /&gt;
Sometimes my analysis tools leak files into /tmp on the sushi nodes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;clean-my-tmp.pl&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Print commands to clean up my files in /tmp across sushi&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  my @nodes = qw/ sushi01 sushi02 sushi03 sushi04 sushi05 sushi06 sushi07 sushi08 sushi09 sushi10 /;&lt;br /&gt;
  &lt;br /&gt;
  ## Parse args&lt;br /&gt;
  if (scalar(@ARGV) &amp;lt; 1 or scalar(@ARGV) &amp;gt; 2) {&lt;br /&gt;
    die &amp;quot;Print commands to delete files in /tmp on each sushi node [matching the specified find predicates]&lt;br /&gt;
  Usage: $0 owning-user [&#039;find predicates&#039;]&lt;br /&gt;
  &lt;br /&gt;
  Examples:&lt;br /&gt;
    $0 davisjam&lt;br /&gt;
      - Deletes all files owned by davisjam in /tmp on all sushi nodes&lt;br /&gt;
    $0 davisjam &#039;-name \&amp;quot;protoRegexEngine*\&amp;quot;&#039;&lt;br /&gt;
      - Delete all files ... whose name matches this predicate&lt;br /&gt;
        You should wrap predicates in single-quotes, and use double-quotes for any quoting within the predicates&lt;br /&gt;
        (The ssh command is wrapped in single-quotes)&lt;br /&gt;
  &amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $findPredicates = &amp;quot;&amp;quot;;&lt;br /&gt;
  if (scalar(@ARGV) &amp;gt;= 2) {&lt;br /&gt;
    $findPredicates = $ARGV[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  ## Cleanup operations&lt;br /&gt;
  for my $node (@nodes) {&lt;br /&gt;
    # Tests come before -delete so only matching regular files are removed&lt;br /&gt;
    my $cmd = &amp;quot;find /tmp -user $user -type f $findPredicates -delete&amp;quot;;&lt;br /&gt;
    print(&amp;quot;ssh $node &#039;$cmd&#039; &amp;amp;\n&amp;quot;);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  &amp;amp;log(&amp;quot;\n^^ If the preceding commands are to your liking, copy/paste/execute to run them.&amp;quot;);&lt;br /&gt;
  &lt;br /&gt;
  ############&lt;br /&gt;
  &lt;br /&gt;
  sub log {&lt;br /&gt;
    my ($msg) = @_;&lt;br /&gt;
    print STDERR &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
  }&lt;/div&gt;</summary>
		<author><name>Davisjam</name></author>
	</entry>
	<entry>
		<id>http://wiki.cs.vt.edu/index.php?title=Sushi101&amp;diff=4038</id>
		<title>Sushi101</title>
		<link rel="alternate" type="text/html" href="http://wiki.cs.vt.edu/index.php?title=Sushi101&amp;diff=4038"/>
		<updated>2020-04-17T15:24:10Z</updated>

		<summary type="html">&lt;p&gt;Davisjam: Sushi guide&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sushi&#039;&#039;&#039; is a small cluster shared by several Stack@CS faculty members.&lt;br /&gt;
&lt;br /&gt;
==Sushi Nodes==&lt;br /&gt;
There are 10 sushi nodes. Each node has:&lt;br /&gt;
* 48 cores&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* Local scratch space -- a few hundred GB in /tmp&lt;br /&gt;
* A shared /home file system -- 180TB over NFS&lt;br /&gt;
* 10 Gigabit Ethernet connection to the other nodes&lt;br /&gt;
* Access to the external Internet&lt;br /&gt;
&lt;br /&gt;
==Sushi Access==&lt;br /&gt;
&lt;br /&gt;
Only certain labs have access to Sushi. If you are not sure, ask the lab PI.&lt;br /&gt;
&lt;br /&gt;
If you have an account on sushi, you can access it via the head node: sushi.cs.vt.edu (128.173.236.117 on the intranet)&lt;br /&gt;
* scp external files to your home directory on the head node&lt;br /&gt;
* Launch jobs from the head node&lt;br /&gt;
&lt;br /&gt;
==Sushi Jobs==&lt;br /&gt;
&lt;br /&gt;
Launch jobs using the PBS job submission system. There are many guides to PBS on the web. I recommend [https://www.rcac.purdue.edu/knowledge/hammer/run/pbs Purdue&#039;s guide].&lt;br /&gt;
&lt;br /&gt;
== Learning to use sushi ==&lt;br /&gt;
&lt;br /&gt;
Before you use sushi for the first time, you should:&lt;br /&gt;
&lt;br /&gt;
* Read this wiki page&lt;br /&gt;
* Learn about the PBS system&lt;br /&gt;
* Review the man pages for qsub, qstat, and qnodes&lt;br /&gt;
* Try a simple practice job, e.g. an &amp;quot;echo&amp;quot; that prints the node name&lt;br /&gt;
&lt;br /&gt;
This may take you a day or two. It is well worth the investment.&lt;br /&gt;
&lt;br /&gt;
==Example==&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what I do for &amp;quot;embarrassingly parallel&amp;quot; jobs driven by an input file with one task per line.&lt;br /&gt;
&lt;br /&gt;
=== Split input into files, one task per line ===&lt;br /&gt;
&lt;br /&gt;
  (10:51:16) davisjam@sushi-headnode ~/qsub-jobs/Memo/input $ split sl-regex-filteredForPrototype-all.json sl-regex-filteredForPrototype-all-piece-  --lines=3000 --additional-suffix=.json --numeric-suffixes --suffix-length=4&lt;br /&gt;
&lt;br /&gt;
=== Write job script ===&lt;br /&gt;
&lt;br /&gt;
I use this as a template and tweak it from there. You might also try the GNU Parallel tool. There&#039;s a copy in /home/davisjam/bin/parallel.&lt;br /&gt;
&lt;br /&gt;
  (12:05:29) davisjam@sushi-headnode ~/qsub-jobs/Memo $ cat qsub-memo.sh&lt;br /&gt;
  #!/usr/bin/env bash&lt;br /&gt;
  &lt;br /&gt;
  # You must provide REGEX_FILE&lt;br /&gt;
  # e.g. &amp;quot;qsub -v REGEX_FILE=&#039;/home/davisjam/qsub-jobs/RegexRepl/syntax/input/test/500.json&#039; qsub-syntax.sh&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  #########################################&lt;br /&gt;
  ## PBS Configuration (Single Comment # ONLY)&lt;br /&gt;
  #########################################&lt;br /&gt;
  #&lt;br /&gt;
  #PBS -l nodes=1:ppn=8&lt;br /&gt;
  #&lt;br /&gt;
  # Save all env vars -- including PERL5LIB&lt;br /&gt;
  #PBS -V&lt;br /&gt;
  #########################################&lt;br /&gt;
  &lt;br /&gt;
  #########################################&lt;br /&gt;
  ## Setup&lt;br /&gt;
  #########################################&lt;br /&gt;
  &lt;br /&gt;
  #OUT_FILE=~/data/syntax/cross-registry-real/`basename $REGEX_FILE .json`-slras-job$PBS_JOBID.json&lt;br /&gt;
  OUT_FILE=~/data/memo/all-SL/`basename $REGEX_FILE .json`-measureMemo-job$PBS_JOBID.pkl.bz2&lt;br /&gt;
  &lt;br /&gt;
  STDOUT_FILE=$HOME/logs/qsub-memo-$$.out&lt;br /&gt;
  STDERR_FILE=$HOME/logs/qsub-memo-$$.err&lt;br /&gt;
  &lt;br /&gt;
  NCORES=`wc -l &amp;lt; $PBS_NODEFILE`&lt;br /&gt;
  &lt;br /&gt;
  # Flush NFS?&lt;br /&gt;
  rm $STDOUT_FILE 2&amp;gt;/dev/null&lt;br /&gt;
  rm $STDERR_FILE 2&amp;gt;/dev/null&lt;br /&gt;
  sync; sync; sync; sync; sync;&lt;br /&gt;
  touch $STDOUT_FILE&lt;br /&gt;
  touch $STDERR_FILE&lt;br /&gt;
  &lt;br /&gt;
  # Here we go!&lt;br /&gt;
  echo &amp;quot;Hello on node &amp;quot; `hostname` &amp;quot; with $NCORES cores&amp;quot;&lt;br /&gt;
  echo &amp;quot;REGEX_FILE $REGEX_FILE&amp;quot;&lt;br /&gt;
  echo &amp;quot;OUT_FILE $OUT_FILE&amp;quot;&lt;br /&gt;
  echo &amp;quot;STDOUT_FILE $STDOUT_FILE STDERR_FILE $STDERR_FILE&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export MEMOIZATION_PROJECT_ROOT=~/memoized-regex-engine&lt;br /&gt;
  export ECOSYSTEM_REGEXP_PROJECT_ROOT=~/EcosystemRegexps&lt;br /&gt;
  &lt;br /&gt;
  set -x&lt;br /&gt;
  &lt;br /&gt;
  # For data about prototype&lt;br /&gt;
  # PYTHONUNBUFFERED=1 $MEMOIZATION_PROJECT_ROOT/eval/measure-memoization-behavior.py \&lt;br /&gt;
  #   --regex-file $REGEX_FILE \&lt;br /&gt;
  #   --queryPrototype \&lt;br /&gt;
  #   --trials 1 \&lt;br /&gt;
  #   --queryProductionEngines \&lt;br /&gt;
  #   --parallelism $NCORES \&lt;br /&gt;
  #   --out-file $OUT_FILE \&lt;br /&gt;
  #   &amp;gt; $STDOUT_FILE \&lt;br /&gt;
  #   2&amp;gt;$STDERR_FILE&lt;br /&gt;
  &lt;br /&gt;
  # For data about other regex engines -- use this if you want to test extended features not supported by the prototype&lt;br /&gt;
  PYTHONUNBUFFERED=1 $MEMOIZATION_PROJECT_ROOT/eval/measure-memoization-behavior.py \&lt;br /&gt;
    --regex-file $REGEX_FILE \&lt;br /&gt;
    --useCSharpToFindMostEI \&lt;br /&gt;
    --queryProductionEngines \&lt;br /&gt;
    --parallelism $NCORES \&lt;br /&gt;
    --out-file $OUT_FILE \&lt;br /&gt;
    &amp;gt; $STDOUT_FILE \&lt;br /&gt;
    2&amp;gt;$STDERR_FILE&lt;br /&gt;
&lt;br /&gt;
=== Launch job ===&lt;br /&gt;
&lt;br /&gt;
  (10:56:30) davisjam@sushi-headnode ~/qsub-jobs/Memo $ for f in input/500-piece-*; do echo $f; qsub -v REGEX_FILE=`pwd`/$f qsub-memo.sh; done&lt;br /&gt;
&lt;br /&gt;
=== Monitor job ===&lt;br /&gt;
&lt;br /&gt;
  (11:13:54) davisjam@sushi-headnode ~/qsub-jobs/Memo $ ls -lhtra ~/logs&lt;br /&gt;
&lt;br /&gt;
(and tail log files, etc.)&lt;br /&gt;
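&lt;br /&gt;
For example (the timestamps are illustrative; the glob will match whatever log files your jobs have produced):&lt;br /&gt;
&lt;br /&gt;
  (11:14:30) davisjam@sushi-headnode ~/qsub-jobs/Memo $ qstat -u davisjam&lt;br /&gt;
  (11:14:35) davisjam@sushi-headnode ~/qsub-jobs/Memo $ tail -f ~/logs/qsub-memo-*.err&lt;br /&gt;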
&lt;br /&gt;
=== Export data ===&lt;br /&gt;
&lt;br /&gt;
If you want to export the data (e.g. for analysis in a Jupyter notebook), try something like this:&lt;br /&gt;
&lt;br /&gt;
  (11:15:32) davisjam@sushi-headnode ~/qsub-jobs/Memo $ mkdir -p ~/export-latest; cp ~/data/memo/all-SL/*.pkl.bz2 ~/export-latest; tar -C ~ -czvf ~/export-latest.tgz export-latest; scp ...&lt;br /&gt;
&lt;br /&gt;
== Handy scripts ==&lt;br /&gt;
&lt;br /&gt;
=== Check on your jobs ===&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Summarize the status of the jobs of a user&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  if (scalar(@ARGV) != 1) {&lt;br /&gt;
    die &amp;quot;  Summarize state of jobs submitted by a user\nusage: $0 username\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  my @lines = `qstat -u $user`;&lt;br /&gt;
  my @running = grep { m/\s+$user\s+.*\sR\s/ } @lines;&lt;br /&gt;
  my @queued = grep { m/\s+$user\s+.*\sQ\s/ } @lines;&lt;br /&gt;
  my @error = grep { m/\s+$user\s+.*\sE\s/ } @lines;&lt;br /&gt;
  &lt;br /&gt;
  my $nRunning = scalar(@running);&lt;br /&gt;
  my $nQueued = scalar(@queued);&lt;br /&gt;
  my $nError = scalar(@error);&lt;br /&gt;
  my $nJobs = $nRunning + $nQueued + $nError;&lt;br /&gt;
  &lt;br /&gt;
  print &amp;quot;    Running jobs: $nRunning\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Queued jobs: $nQueued\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Error jobs: $nError\n&amp;quot;;&lt;br /&gt;
  print &amp;quot; + ------------------------\n&amp;quot;;&lt;br /&gt;
  print &amp;quot;    Active jobs: $nJobs\n&amp;quot;;&lt;br /&gt;
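&lt;br /&gt;
For example, supposing the script is saved as check-jobs.pl (a name of your choosing) and marked executable, its output might look like:&lt;br /&gt;
&lt;br /&gt;
  (11:21:09) davisjam@sushi-headnode ~ $ ./check-jobs.pl davisjam&lt;br /&gt;
      Running jobs: 8&lt;br /&gt;
      Queued jobs: 12&lt;br /&gt;
      Error jobs: 0&lt;br /&gt;
   + ------------------------&lt;br /&gt;
      Active jobs: 20&lt;br /&gt;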
&lt;br /&gt;
=== Abort a run ===&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Kill (qdel) all jobs owned by the given user&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  if (scalar(@ARGV) != 1) {&lt;br /&gt;
    die &amp;quot;  qdel all jobs submitted by a user\nusage: $0 username\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  my @jobIDs = &amp;amp;getJobIDs($user);&lt;br /&gt;
  &lt;br /&gt;
  if (@jobIDs) {&lt;br /&gt;
    &amp;amp;log(&amp;quot;qdel&#039;ing the &amp;quot; . scalar(@jobIDs) . &amp;quot; jobs owned by $user&amp;quot;);&lt;br /&gt;
    my $cmd = &amp;quot;qdel &amp;quot; . join(&amp;quot; &amp;quot;, @jobIDs);&lt;br /&gt;
    system($cmd);&lt;br /&gt;
  } else {&lt;br /&gt;
    print &amp;quot;qstat reported no jobs to kill\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  ###########&lt;br /&gt;
  &lt;br /&gt;
  sub getJobIDs {&lt;br /&gt;
    my ($user) = @_;&lt;br /&gt;
  &lt;br /&gt;
    &amp;amp;log(&amp;quot;Using qstat to get the jobs owned by $user&amp;quot;);&lt;br /&gt;
    my @qstat_output = `qstat -u $user`;&lt;br /&gt;
    chomp @qstat_output;&lt;br /&gt;
    my @jobLines = grep { m/\s+$user\s+/ } @qstat_output;&lt;br /&gt;
    my @jobIDs = map {&lt;br /&gt;
      my ($id) = ( $_ =~ m/^(\d+)\./ );&lt;br /&gt;
      $id;&lt;br /&gt;
    } @jobLines;&lt;br /&gt;
    return @jobIDs;&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  sub log {&lt;br /&gt;
    my ($msg) = @_;&lt;br /&gt;
    print STDERR &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
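&lt;br /&gt;
For example (again assuming a script name of your choosing, here abort-run.pl):&lt;br /&gt;
&lt;br /&gt;
  (11:25:42) davisjam@sushi-headnode ~ $ ./abort-run.pl davisjam&lt;br /&gt;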
&lt;br /&gt;
=== Clean up /tmp ===&lt;br /&gt;
&lt;br /&gt;
Sometimes my analysis tools leak files into /tmp on the sushi nodes.&lt;br /&gt;
&lt;br /&gt;
  #!/usr/bin/env perl&lt;br /&gt;
  # Author: Jamie Davis &amp;lt;davisjam@vt.edu&amp;gt;&lt;br /&gt;
  # Description: Print commands to clean up my files in /tmp across sushi&lt;br /&gt;
  &lt;br /&gt;
  use strict;&lt;br /&gt;
  use warnings;&lt;br /&gt;
  &lt;br /&gt;
  my @nodes = qw/ sushi01 sushi02 sushi03 sushi04 sushi05 sushi06 sushi07 sushi08 sushi09 sushi10 /;&lt;br /&gt;
  &lt;br /&gt;
  ## Parse args&lt;br /&gt;
  if (scalar(@ARGV) &amp;lt; 1 or scalar(@ARGV) &amp;gt; 2) {&lt;br /&gt;
    die &amp;quot;Print commands to delete files in /tmp on each sushi node [matching the specified find predicates]&lt;br /&gt;
  Usage: $0 owning-user [&#039;find predicates&#039;]&lt;br /&gt;
  &lt;br /&gt;
  Examples:&lt;br /&gt;
    $0 davisjam&lt;br /&gt;
      - Deletes all files owned by davisjam in /tmp on all sushi nodes&lt;br /&gt;
    $0 davisjam &#039;-name \&amp;quot;protoRegexEngine*\&amp;quot;&#039;&lt;br /&gt;
      - Delete all files ... whose name matches this predicate&lt;br /&gt;
        You should wrap predicates in single-quotes, and use double-quotes for any quoting within the predicates&lt;br /&gt;
        (The ssh command is wrapped in single-quotes)&lt;br /&gt;
  &amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $user = $ARGV[0];&lt;br /&gt;
  if (length($user) &amp;lt; 1) {&lt;br /&gt;
    die &amp;quot;Error, username is empty\n&amp;quot;;&lt;br /&gt;
  }&lt;br /&gt;
  my $findPredicates = &amp;quot;&amp;quot;;&lt;br /&gt;
  if (scalar(@ARGV) &amp;gt;= 2) {&lt;br /&gt;
    $findPredicates = $ARGV[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  ## Cleanup operations&lt;br /&gt;
  # NB: find&#039;s -delete acts where it appears, so it must come after all of the tests&lt;br /&gt;
  for my $node (@nodes) {&lt;br /&gt;
    my $cmd = &amp;quot;find /tmp -user $user -type f $findPredicates -delete&amp;quot;;&lt;br /&gt;
    print(&amp;quot;ssh $node &#039;$cmd&#039; &amp;amp;\n&amp;quot;);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  &amp;amp;log(&amp;quot;\n^^ If the preceding commands are to your liking, copy/paste/execute to run them.&amp;quot;);&lt;br /&gt;
  &lt;br /&gt;
  ############&lt;br /&gt;
  &lt;br /&gt;
  sub log {&lt;br /&gt;
    my ($msg) = @_;&lt;br /&gt;
    print STDERR &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
  }&lt;/div&gt;</summary>
		<author><name>Davisjam</name></author>
	</entry>
</feed>