Aeolus: Performance

Data to be collected for SOSP paper

  1. Preliminaries
    1. List is mostly based on Winnie's thesis
    2. Vicky's thesis also computes "with and without logging"
    3. Configuration questions
      1. Which machine should be used?
        1. Eigenharp and Badger are the fastest
        2. Farm best for when lots of machines needed
      2. Is the Authority Service on the same machine that runs the benchmarks?
        1. ok for microbenchmarks (because different processor cores will be used), but the big monster benchmark should really be distributed
        2. also test to see if it makes a difference
      3. Is the Audit Trail Logging Service on the same machine that runs the benchmarks?
        1. turn off logging for microbenchmarks
      4. Is the File Service on the same machine that runs the benchmarks?
    4. (P) indicates that the benchmark is mentioned in the current paper
  2. Creation
    1. Normal process creation
    2. VN creation
  3. Communication costs
    1. inter VN communication
    2. intra VN communication (i.e., inter thread/aeolus process); see the ping-pong baseline sketch after this list
  4. Forks and calls
    1. (P) normal fork vs aeolus fork, aeolus fork with public pid, aeolus fork with a different pid (a generic timing harness for these microbenchmarks is sketched after this list)
    2. (P) normal call vs aeolus call, also aeolus call with public pid, aeolus call with different pid
    3. Note that "normal" calls and forks are not measured in the paper
    4. (P) closure call
    5. rpc call
  5. File
    1. (P) create dir
    2. (P) create file
    3. (P) list dir
    4. (P) remove file
    5. (P) remove dir
    6. read various size files
    7. write various size files
    8. file stream open and read, various size files
    9. file stream write, various size files
    10. Note: the paper mentions some measurement information about accessing file content; a baseline file-I/O timing sketch appears after this list
  6. Boxes
    1. create, put, get
  7. Shared Objects
    1. (P) create, put, get
    2. Note: the paper does not discuss create times
  8. Shared Queues
    1. create, enqueue, waitanddequeue: basic + ipc
  9. Shared Locks
    1. Lock, Unlock
  10. Web
    1. base web service
    2. aeolus web service
  11. Authority Manager
    1. not sure
  12. Applications
    1. (P) Online Store
    2. (P) Secure Wiki
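
Most of the items above are per-operation microbenchmarks, so a shared timing harness could be reused across them (forks, calls, file operations, shared-object get/put). The sketch below is plain Java and deliberately does not show the Aeolus calls themselves, since their exact signatures are not specified here; the operation under test is passed in as a Runnable. The class name Microbench and the iteration counts are placeholders, not part of the Aeolus code base.

    // Microbench.java -- hypothetical shared timing harness for the per-operation
    // benchmarks listed above (forks, calls, file ops, shared-object get/put).
    public final class Microbench {
        // Run the operation WARMUP times to get past JIT compilation, then time
        // ITERATIONS runs and report the mean cost per operation in microseconds.
        private static final int WARMUP = 10_000;
        private static final int ITERATIONS = 100_000;

        public static double measure(String label, Runnable op) {
            for (int i = 0; i < WARMUP; i++) {
                op.run();
            }
            long start = System.nanoTime();
            for (int i = 0; i < ITERATIONS; i++) {
                op.run();
            }
            long elapsed = System.nanoTime() - start;
            double microsPerOp = elapsed / 1000.0 / ITERATIONS;
            System.out.printf("%s: %.3f us/op%n", label, microsPerOp);
            return microsPerOp;
        }

        public static void main(String[] args) {
            // Placeholder operation; a real run would wrap an Aeolus fork, call,
            // or shared-object get/put here instead.
            measure("noop", () -> { });
        }
    }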
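For the communication-cost item, the intra-VN (inter-thread) baseline can be approximated with a plain Java ping-pong between two threads; an Aeolus shared queue would be substituted for the SynchronousQueues below to see the added cost. This is an assumed baseline only, not Aeolus code.

    import java.util.concurrent.SynchronousQueue;

    // PingPong.java -- baseline for intra-VN (inter-thread) communication cost:
    // two threads hand a token back and forth through SynchronousQueues and the
    // average one-way latency is reported.
    public final class PingPong {
        private static final int ROUND_TRIPS = 100_000;

        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<Integer> toEcho = new SynchronousQueue<>();
            SynchronousQueue<Integer> fromEcho = new SynchronousQueue<>();

            Thread echo = new Thread(() -> {
                try {
                    for (int i = 0; i < ROUND_TRIPS; i++) {
                        fromEcho.put(toEcho.take());   // bounce the token back
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            echo.start();

            long start = System.nanoTime();
            for (int i = 0; i < ROUND_TRIPS; i++) {
                toEcho.put(i);
                fromEcho.take();
            }
            long elapsed = System.nanoTime() - start;
            echo.join();

            // Each round trip is two one-way messages.
            System.out.printf("one-way latency: %.3f us%n",
                    elapsed / 1000.0 / (2.0 * ROUND_TRIPS));
        }
    }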
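For the file benchmarks, a plain-Java baseline for writing and reading files of various sizes might look like the following; the corresponding Aeolus File Service operations would be timed the same way for comparison. The file sizes and temp-directory name are placeholders.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // FileBench.java -- baseline timings for writing and reading files of various
    // sizes with plain java.nio.file.
    public final class FileBench {
        private static final int[] SIZES = { 1 << 10, 1 << 16, 1 << 20 };  // 1 KB, 64 KB, 1 MB

        public static void main(String[] args) throws IOException {
            Path dir = Files.createTempDirectory("aeolus-filebench");
            for (int size : SIZES) {
                byte[] data = new byte[size];
                Path file = dir.resolve("test-" + size);

                long t0 = System.nanoTime();
                Files.write(file, data);                 // create + write
                long t1 = System.nanoTime();
                byte[] back = Files.readAllBytes(file);  // read it back
                long t2 = System.nanoTime();

                System.out.printf("%8d bytes: write %.3f ms, read %.3f ms (%d bytes read)%n",
                        size, (t1 - t0) / 1e6, (t2 - t1) / 1e6, back.length);
                Files.delete(file);
            }
            Files.delete(dir);
        }
    }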