Resources for 6.835 term projects

We provide a list of toolkits/libraries that you can use to build your own multimodal user interface.

Body / Hand Tracking

Microsoft Kinect SDK [link] [guidelines] [sample code]
OpenNI SDK [link] [guide]
Leap Motion SDK [getting started] [overview] [demos] (see the sketch after this list)
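As a quick illustration, here is a minimal sketch of reading hand positions with the Leap Motion SDK's Python bindings (the Leap module that ships with the SDK). It simply polls the controller for a few frames; treat it as a starting point, not a complete application.

    # Minimal sketch: poll the Leap Motion controller for hand positions.
    # Assumes the SDK's Python bindings (the Leap module) are on your path
    # and the Leap service is running.
    import time

    import Leap  # ships with the Leap Motion SDK

    controller = Leap.Controller()
    time.sleep(1)  # give the controller a moment to connect

    for _ in range(100):                # poll ~100 frames, then exit
        frame = controller.frame()      # most recent tracking frame
        for hand in frame.hands:
            side = "left" if hand.is_left else "right"
            pos = hand.palm_position    # Leap.Vector, in millimeters
            print("%s hand at (%.1f, %.1f, %.1f)" % (side, pos.x, pos.y, pos.z))
        time.sleep(0.05)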

Head / Eye / Face Tracking

Watson Head Tracker [link]
Active Appearance Model using OpenCV (AAM-OpenCV) [link]
EyeAPI [link]
FaceAPI [link]
CLM-Z Face Tracker [link] [CVPR'12 paper]
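The trackers above are complete systems, but if all you need is a per-frame face bounding box to build on, OpenCV (listed under General Computer Vision / Machine Learning Libraries below) includes a standard Haar-cascade detector. A minimal sketch, assuming OpenCV's Python bindings and the haarcascade_frontalface_default.xml file that ships with OpenCV (adjust the path for your install):

    # Minimal face-detection baseline with OpenCV's bundled Haar cascade.
    import cv2

    cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(0)  # default webcam

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break

    capture.release()
    cv2.destroyAllWindows()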

Speech

HTML5 Web Speech API [demo] [tutorial]
CMU Sphinx [link] (Python usage sketch after this list)
Microsoft Speech Platform SDK [link]
The WAMI Toolkit [link]
OpenEAR: Munich Open-Source Emotion and Affect Recognition Toolkit [link] [ACII'09 paper]
OpenEars [link]
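For CMU Sphinx, the community-maintained pocketsphinx Python package wraps the recognizer behind a simple iterator. A minimal continuous-recognition sketch, assuming that package, its bundled US English models, and a working microphone:

    # Minimal continuous speech recognition with the pocketsphinx Python
    # package (a community wrapper around CMU Sphinx). LiveSpeech records
    # from the default microphone and yields one phrase per utterance.
    from pocketsphinx import LiveSpeech

    for phrase in LiveSpeech():  # Ctrl-C to stop
        print(phrase)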

General Computer Vision / Machine Learning Libraries

Gesture Recognition Toolkit [link]
Weka: Data Mining Software in Java [link]
OpenCV computer vision library [link]
libSVM [link] [ACM TIST'11 paper] (toy example after this list)
BudgetedSVM [link]
SVM^light [link]
Bayes Net Toolbox for Matlab [link]
hCRF Library [link]
Robot Operating System (ROS) [link]
openFrameworks [link]
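libSVM comes with its own Python interface (svmutil, in the python/ directory of the libSVM distribution). A toy sketch of training and testing a classifier with it on hand-made 2-D data:

    # Toy example with libSVM's Python interface. Features are dicts
    # mapping feature index -> value (libSVM's sparse format).
    from svmutil import svm_train, svm_predict

    # Training data: label +1 near (1, 1), label -1 near (-1, -1).
    labels = [1, 1, -1, -1]
    features = [
        {1: 0.9, 2: 1.1},
        {1: 1.2, 2: 0.8},
        {1: -1.0, 2: -0.9},
        {1: -0.8, 2: -1.2},
    ]

    # Train an RBF-kernel SVM with C = 4 (options use libSVM's
    # command-line syntax).
    model = svm_train(labels, features, "-c 4")

    # Predict on the training set; svm_predict also reports accuracy.
    pred_labels, accuracy, decision_values = svm_predict(labels, features, model)
    print(pred_labels)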

Useful Links

Compressive Sensing Resources [link]
VRML: Visual Recognition and Machine Learning Summer School [2010] [2011] [2012]
MLSS: Machine Learning Summer Schools [link]
Survey on gesture datasets [1] [2]

Last updated February 16, 2015