http://www.ai.mit.edu/projects/handarm-haptics/manipulation.html
Date: Tue, 26 Nov 1996 00:01:31 GMT
Server: Apache/1.2-dev
Connection: close
Content-Type: text/html
Last-Modified: Tue, 03 Sep 1996 16:28:10 GMT
ETag: "5d872-3cd8-322c5c9a"
Content-Length: 15576
Accept-Ranges: bytes
<title>Vision and Touch Guided Manipulation</title>
<body>
<a name="S0"></a>
<center>
<img alt="-----" src="http://www.ai.mit.edu/icons/lines/line-bluemarble.gif"><p>
<h1>Vision and Touch Guided Manipulation Group</h1>
<h2>MIT Artificial Intelligence Lab & Nonlinear Systems Lab</h2>
<img alt="-----" src="http://www.ai.mit.edu/icons/lines/line-bluemarble.gif"><p>
</center>

<!---------------------------------------->
<!------------ Intro --------------------->
<!---------------------------------------->
The Vision and Touch Guided Manipulation group at the
<a href="http://www.ai.mit.edu/">MIT Artificial Intelligence Lab</a>
conducts research in a wide variety of topics related to manipulator
and end-effector design, dextrous manipulation, adaptive nonlinear
control, and vision-guided manipulation. We employ techniques from
various fields including Mechanical Design, Stability Theory, Machine
Learning, Approximation Theory, and Computer Vision.<p>

The group is headed by
<a href="http://www.ai.mit.edu/people/jks/jks.html">Dr. Kenneth Salisbury</a>
(mechanics) and Professor Jean-Jacques E. Slotine (autonomy and
vision). Other groups at the MIT AI Lab headed by Ken are the
<a href="http://www.ai.mit.edu/projects/handarm-haptics/haptics.html">Haptic Interfaces Group</a>
and the
<a href="http://www.ai.mit.edu/projects/handarm-haptics/robothand.html">Robot Hands Group</a>.
Professor Slotine also heads the
<a href="http://web.mit.edu/nsl/www/">Nonlinear Systems Laboratory</a>.<p>

<!---------------------------------------->
<!------------ People -------------------->
<!---------------------------------------->
The people in and associated with the Vision and Touch Guided
Manipulation Group are:<p>
<ul>
 <li> Brian Anthony (<i>touch sensing</i>)
 <li> Mark Cannon (<i>wavelet networks, graduated</i>)
 <li> <a href="http://www.ai.mit.edu/people/bse/bse.html">Brian Eberman</a> (<i>system integration, graduated</i>)
 <li> <a href="http://web.mit.edu/bhoffman/www/home.html">Brian Hoffman</a> (<i>active vision</i>)
 <li> <a href="http://exodus.mit.edu/~jesse/">W. Jesse Hong</a> (<i>vision-manipulation coordination</i>)
 <li> <a href="http://www.ai.mit.edu/people/madhani/madhani.html">Akhil Madhani</a> (<i>wrist-hand mechanism</i>)
 <li> Günter Niemeyer (<i>adaptive control and system integration</i>)
 <li> Daniel Theobald (<i>visual processing</i>)
 <li> Ichiro Watanabe (<i>machine learning</i>)
</ul><p>

<hr>
<center>
<a href="#S0">[Introduction]</a>
<a href="#S1">[Our Robots]</a>
<a href="#S2">[Our Research]</a>
<a href="#S3">[References]</a>
</center>
<hr>

<!---------------------------------------->
<!------------- Our Robots --------------->
<!---------------------------------------->
<center><a name="S1"><h1>Introduction to our Robots</h1></a></center>

<center><table border=5 cellspacing=5 cellpadding=5>
<tr><a name="S1.1"><th colspan=2 bgcolor=steelblue>The Whole Arm Manipulator</th></a></tr>
<tr><td>The MIT Whole Arm Manipulator (WAM) is a very fast,
force-controllable robot arm designed in Dr. Salisbury's group at the
AI Lab. The concept of "Whole Arm Manipulation" was originally aimed
at enabling robots to use all of their surfaces to manipulate and
perceive objects in the environment. Central to this concept (and our
group's design efforts in general) has been a focus on controlling the
forces of interaction between robots and the environment. To permit
this, the WAM arm employs novel cable transmissions which are stiff,
low-friction, and backdrivable.
This, in turn, permits a lightweight design. To achieve good bandwidth
in force control while in contact with the environment, the arm's
design maximizes the lowest resonant frequency of the system and
employs an impedance-matching ratio between motor and arm masses. This
also enables the arm to achieve high accelerations while moving in
free space.</td>
<td><img src="http://www.ai.mit.edu/projects/handarm-haptics/images/wamdoor.gif" width=240 height=307></td></tr>
<tr><td colspan=2>Prof. Slotine and his students have developed system
architectures and control algorithms for both force-controlled tasks
and tasks requiring rapid and accurate free-space motion. The
algorithms also provide fast and stable adaptation of the arm to large
variations in loads and environments.</td></tr>
</table></center><p>
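<p>To make the impedance-matching idea above concrete: for a motor of
rotor inertia J_m driving a link of inertia J_l through a transmission
ratio n, the link acceleration available from a given motor torque is
maximized when n makes the reflected link inertia equal the rotor
inertia. A minimal sketch in Python, with all numbers hypothetical
rather than actual WAM values:<p>
<pre>
# Inertia matching: pick the transmission ratio n so the link inertia
# reflected to the motor equals the rotor inertia. Values hypothetical.
import numpy as np

J_m = 3e-5    # motor rotor inertia, kg m^2 (hypothetical)
J_l = 0.12    # link inertia about the joint, kg m^2 (hypothetical)
tau_m = 0.8   # available motor torque, N m (hypothetical)

def link_accel(n):
    """Link acceleration from motor torque tau_m at transmission ratio n."""
    return n * tau_m / (n**2 * J_m + J_l)

n_star = np.sqrt(J_l / J_m)        # matched ratio: n^2 * J_m = J_l
print(n_star, link_accel(n_star))  # acceleration peaks at n_star
</pre>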
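<p>The page does not spell out these control algorithms, but adaptive
manipulator control of the style developed in Prof. Slotine's lab is
well documented. A minimal single-joint sketch of that structure, with
the two-parameter model and all gains chosen purely for illustration:<p>
<pre>
# Slotine-Li style adaptive control for a single joint modeled as
# J*qdd + b*qd = tau with unknown parameters a = [J, b].
# Gains lam, kd, gamma are hypothetical, not the lab's actual values.
import numpy as np

lam, kd, gamma = 5.0, 10.0, 2.0

def adaptive_step(q, qd, q_des, qd_des, qdd_des, a_hat, dt):
    """One control-and-adaptation step; returns (torque, updated a_hat)."""
    e, ed = q - q_des, qd - qd_des
    s = ed + lam * e                     # composite tracking error
    qd_r = qd_des - lam * e              # reference velocity
    qdd_r = qdd_des - lam * ed           # reference acceleration
    Y = np.array([qdd_r, qd_r])          # regressor: Y @ [J, b] = tau_ff
    tau = Y @ a_hat - kd * s             # feedforward plus PD-like term
    a_hat = a_hat - gamma * Y * s * dt   # gradient parameter update
    return tau, a_hat
</pre>
The same composite error s drives both the torque and the parameter
update, which is the mechanism behind the kind of stable adaptation to
load variations described above.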
<center><table border=5 cellspacing=5 cellpadding=5>
<tr><a name="S1.2"><th colspan=2 bgcolor=steelblue>The Talon</th></a></tr>
<tr><td><img src="http://www.ai.mit.edu/projects/handarm-haptics/images/wam3-2.gif" width=221 height=336></td>
<td>A new wrist-hand mechanism has been developed and replaces a
previous forearm-mounted system. The new wrist-hand, known as the
Talon, provides 3 additional powered freedoms: one for grasping forces
and two for orientation. The motors for the device are located in the
forearm to minimize end-effector mass and maximize its workspace. The
grasping mechanism comprises a group of 2 fingers which move against a
group of 3 fingers such that the two groups may be made to mesh
together while encircling objects. Finger inner surfaces are serrated
to provide high contact friction against rough (rock) surfaces, and
curved to enhance capture of both large and small objects. Fingers may
deflect compliantly to accommodate object geometry, and finger
deflections may be sensed to allow monitoring of grasp state. We have
also studied the design of a miniature end-effector suitable for
grasping small rocks and cylindrical objects. Similar in spirit to the
Talon, the miniature end-effector utilizes slightly different
kinematics to enlarge its feasible grasping volume.</td></tr>
</table></center><p>

<center><table border=5 cellspacing=5 cellpadding=5>
<tr><a name="S1.3"><th colspan=2 bgcolor=steelblue>The Fast Eye Gimbals</th></a></tr>
<tr><td colspan=2>A more recent component of our system is our active
vision system, which comprises two high-resolution color CCD cameras
with 50mm focal-length lenses mounted on two-degree-of-freedom
gimbals. We have utilized cameras with a narrow field of view to give
higher-resolution images of typical objects. This implies, however,
that the cameras have to be actuated in order to pan and tilt so that
they can cover broad scenes, leading to an active vision system and an
associated trade-off between controller precision and image resolution
(narrowness of field of view).</td></tr>
<tr><td><img src="http://www.ai.mit.edu/projects/handarm-haptics/images/fegs2.gif" width=288 height=225></td>
<td>The actuators which we have implemented were designed in our lab
and are known as the Fast Eye Gimbals (FEGs). The FEGs provide
directional positioning for our cameras using a drive mechanism
similar to the WAM's. The two joints are cable-driven, with ranges of
motion of +/- 90 degrees and +/- 45 degrees in the base and upper
joint axes respectively. The two FEGs are currently mounted on ceiling
rafters with a wide baseline for higher position accuracy using stereo
vision methods. The independent nature of the FEGs allows us to
position each one at a different location in order to vary the
baseline or the orientation of the coordinate frame, as well as to
easily add cameras for additional perspectives.</td></tr>
</table></center><p>
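<p>To illustrate the stereo method the FEG pair supports: once both
cameras fixate the same point, each gimbal's pan/tilt angles define a
viewing ray, and the point can be recovered as the midpoint of the
shortest segment between the two rays. A minimal sketch; the
gimbal-to-ray convention and all coordinates here are hypothetical,
not the lab's actual calibration:<p>
<pre>
# Wide-baseline triangulation from two pan/tilt cameras (hypothetical
# geometry: pan about the vertical axis, tilt up from the horizon).
import numpy as np

def gimbal_ray(origin, pan, tilt):
    """Camera position and unit viewing direction from pan/tilt (radians)."""
    d = np.array([np.cos(tilt) * np.sin(pan),
                  np.cos(tilt) * np.cos(pan),
                  np.sin(tilt)])
    return np.asarray(origin, dtype=float), d

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two (possibly skew) rays."""
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b              # near 0 only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

o1, d1 = gimbal_ray([0.0, 0.0, 3.0], 0.3, -0.4)   # rafter-mounted cameras,
o2, d2 = gimbal_ray([4.0, 0.0, 3.0], -0.3, -0.4)  # 4 m baseline (hypothetical)
print(triangulate(o1, d1, o2, d2))
</pre>
The wide baseline matters here: the two rays meet at a large angle, so
the midpoint estimate is well-conditioned, which is the accuracy
benefit noted above.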
<hr>
<center>
<a href="#S0">[Introduction]</a>
<a href="#S1">[Our Robots]</a>
<a href="#S2">[Our Research]</a>
<a href="#S3">[References]</a>
</center>
<hr>

<!----------------------------------------------->
<!------------------ Our Research ---------------->
<!----------------------------------------------->
<center><a name="S2"><h1>Research Projects</h1></a></center>