Aeolus: Performance

Collecting Performance Data for SOSP paper

  1. Preliminaries
    1. Configuration
      1. Will be competing with James for machines for SOSP
      2. Eigenharp, Badger, Qubit, and Theremin are the fastest machines. The farm is best when lots of machines are needed
      3. Is the Authority Service on the same machine that runs the benchmarks?
        1. ok for microbenchmarks (because different processor cores will be used), but the big monster benchmark should really be distributed
        2. also test to see if it makes a difference
      4. Turn off logging and the file service (no interest in fs benchmarks)
    2. (ok) indicates that the current benchmark code is somewhat reasonable
    3. (P) indicates that the benchmark is mentioned in the current paper
    4. List based on Winnie's thesis; Vicky's thesis measures "with and without logging"
  2. Forks and calls
    1. (ok) aeolus fork, call with a different pid, same pid, public pid; native call (see the timing sketch after this list)
    2. to do: native fork
    3. to do: rpc
    4. ?skip: closure call:
      1. it will be close to call with different pid
      2. the implementation still uses the authority server with closures (which could be hacked out)
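
A minimal timing-harness sketch for these call/fork microbenchmarks, in Java (Aeolus is Java-based). The Op interface, the WARMUP/ITERS counts, and the AeolusLib.call line in the comments are illustrative assumptions, not the real Aeolus API:

 // Microbenchmark skeleton: time N iterations of one operation and
 // report the per-operation mean. Counts are illustrative.
 public class CallBench {
     static final int WARMUP = 10_000;   // let the JIT compile the hot path
     static final int ITERS  = 100_000;  // measured iterations

     // Hypothetical stand-in for the operation under test, e.g. an
     // aeolus call with a different pid vs. a plain (native) call.
     interface Op { void run(); }

     static double timeNanosPerOp(Op op) {
         for (int i = 0; i < WARMUP; i++) op.run();
         long start = System.nanoTime();
         for (int i = 0; i < ITERS; i++) op.run();
         return (System.nanoTime() - start) / (double) ITERS;
     }

     public static void main(String[] args) {
         Op nativeCall = () -> { };  // baseline: empty native call
         System.out.printf("native call: %.1f ns/op%n", timeNanosPerOp(nativeCall));
         // Each aeolus variant plugs in the same way, e.g. (hypothetical API):
         // Op aeolusCall = () -> AeolusLib.call(somePid, emptyBody);
     }
 }
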
  3. Shared Memory
    1. (ok) Shared Objects: (P) put, get (skip create)
    2. (ok) Shared Queues: enqueue, waitanddequeue: basic + ipc (skip create; see the queue sketch after this list)
    3. (ok) Shared Locks: lock, unlock
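
A sketch of the queue measurement, assuming a producer thread and a consumer thread sharing one queue. java.util.concurrent.ArrayBlockingQueue stands in for the aeolus shared queue; the real enqueue/waitanddequeue operations would replace put/take:

 // Producer/consumer sketch: one thread enqueues ITERS items while the
 // main thread blocks dequeuing them; report the mean per-item cost.
 import java.util.concurrent.ArrayBlockingQueue;

 public class QueueBench {
     public static void main(String[] args) throws InterruptedException {
         final int ITERS = 100_000;  // illustrative count
         ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);

         Thread producer = new Thread(() -> {
             try {
                 for (int i = 0; i < ITERS; i++) q.put(i);   // enqueue
             } catch (InterruptedException e) {
                 Thread.currentThread().interrupt();
             }
         });
         producer.start();

         long start = System.nanoTime();
         for (int i = 0; i < ITERS; i++) q.take();           // waitanddequeue
         long elapsed = System.nanoTime() - start;
         producer.join();
         System.out.printf("enqueue+dequeue: %.1f ns/op%n", (double) elapsed / ITERS);
     }
 }
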
  4. Monster.com: non-aeolus vs. aeolus
    1. basic resume matching
    2. review information flow control usage
    3. collect timing information (see the sketch below)
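
For the monster numbers a simple wall-clock wrapper is probably enough; matchResumes() below is a hypothetical placeholder for the resume-matching entry point, to be run once against the non-aeolus build and once against the aeolus build:

 // End-to-end timing sketch: measure one whole matching run.
 public class MonsterBench {
     static void matchResumes() {
         // placeholder: invoke the real resume-matching workload here
     }

     public static void main(String[] args) {
         long start = System.nanoTime();
         matchResumes();
         long ms = (System.nanoTime() - start) / 1_000_000;
         System.out.println("resume matching took " + ms + " ms");
     }
 }
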

  1. Skip the following due to rare use
    1. Creation: Normal process creation vs VN creation
    2. Boxes
  2. Skip the following due to monster example
    1. (P) Online Store
    2. (P) Secure Wiki
    3. Web: base web service vs. aeolus web service
  3. Skip the following: not too interesting
    1. File operations
      1. (P) create dir
      2. (P) create file
      3. (P) list dir
      4. (P) remove file
      5. (P) remove dir
      6. read various size files
      7. write various size files
      8. file stream open and read, various size files
      9. file stream write, various size files
      10. Note: the paper reports some measurements of accessing file content
    2. Authority Server