Source code for a six-degree-of-freedom PUMA robot simulation program. This is PUMA3d.M, a 3D Matlab kinematic model of a PUMA robot located in the robotics lab of Walla Walla University.
The file uses CAD data converted to Matlab using cad2matdemo.m, which is located on the Mathworks central file exchange.
This file is still being developed; for the latest version, check the Mathworks central file exchange.
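For orientation, the sketch below shows the kind of forward-kinematics computation such a kinematic model performs, chaining standard Denavit-Hartenberg transforms for a 6-DOF arm. The joint angles and DH parameters here are illustrative placeholders, not the values used in PUMA3d.M.

% Forward-kinematics sketch for a 6-DOF arm using standard DH transforms.
% The joint angles and DH parameters below are placeholders, not the
% values used in PUMA3d.M.
theta = [0 -pi/2 0 0 0 0];            % joint angles (rad), example pose
d     = [0.67 0 0.15 0.43 0 0.06];    % link offsets (m), placeholders
a     = [0 0.43 0.02 0 0 0];          % link lengths (m), placeholders
alpha = [-pi/2 0 -pi/2 pi/2 -pi/2 0]; % link twists (rad), placeholders

T = eye(4);                           % start from the base frame
for i = 1:6
    ct = cos(theta(i)); st = sin(theta(i));
    ca = cos(alpha(i)); sa = sin(alpha(i));
    A  = [ct -st*ca  st*sa a(i)*ct;   % one DH link transform
          st  ct*ca -ct*sa a(i)*st;
           0     sa     ca    d(i);
           0      0      0      1];
    T = T * A;                        % accumulate joint transforms
end
disp(T(1:3,4));                       % end-effector position in the base frame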
In this paper we describe a control methodology for catching a fast-moving object with a robot manipulator, where visual information is employed to track the trajectory of the target. Sensing, planning, and control are performed in real time to cope with possible unpredictable trajectory changes of the moving target, and prediction techniques are adopted to compensate for the time delays introduced by visual processing and by the robot controller. A simple but reliable model of the robot controller has been taken into account in the control architecture to improve the performance of the system. Experimental results have shown that the robot system is capable of tracking and catching an object moving on a plane at velocities of up to 700 mm/s and accelerations of up to 1500 mm/s².
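As a rough illustration of the delay-compensation idea (not the authors' actual algorithm), the sketch below extrapolates the last few visual measurements ahead by an assumed system latency using a constant-acceleration model; the sample period, latency value, and measurements are invented.

% Constant-acceleration extrapolation of the target position to compensate
% for a known latency (sketch only; all values are assumed, not from the paper).
dt      = 0.02;                           % vision sample period (s), assumed
latency = 0.10;                           % vision + controller delay (s), assumed
p = [0.10 0.00; 0.11 0.01; 0.13 0.03];    % last three measured positions (m), assumed

v = (p(3,:) - p(2,:)) / dt;               % finite-difference velocity estimate
a = (p(3,:) - 2*p(2,:) + p(1,:)) / dt^2;  % finite-difference acceleration estimate

% Predict where the target will be when the command actually takes effect.
p_pred = p(3,:) + v*latency + 0.5*a*latency^2;
fprintf('predicted target position: [%.3f %.3f] m\n', p_pred);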
This paper presents a vision-based localization mechanism for a legged robot. Our proposal, founded on a probabilistic approach, uses a precompiled topological map in which natural landmarks such as doors or ceiling lights are recognized by the robot using its on-board camera. Experiments have been conducted using the Sony AIBO robotic dog, showing that it is able to deal with noisy sensors such as vision and with approximate world models representing indoor office environments. The two major contributions of this work are the use of this technique in legged robots and the use of an active camera as the main sensor.
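The sketch below illustrates, in simplified form, the kind of discrete Bayes (Markov) update a probabilistic localization scheme over topological map nodes could use; the map size, transition model, and door-observation likelihoods are invented for illustration and are not taken from the cited work.

% Discrete Bayes (Markov) localization over topological map nodes (sketch).
% Transition and observation probabilities are illustrative only.
bel = ones(1,4) / 4;                  % uniform prior over 4 corridor nodes

T = [0.7 0.3 0.0 0.0;                 % P(next node | current node) after one move
     0.0 0.7 0.3 0.0;
     0.0 0.0 0.7 0.3;
     0.0 0.0 0.0 1.0];

p_door = [0.1 0.8 0.1 0.8];           % P(camera sees a door | node), assumed

bel = bel * T;                        % prediction step (robot moved forward)
bel = bel .* p_door;                  % correction step (a door was detected)
bel = bel / sum(bel);                 % normalize
disp(bel);                            % posterior belief over the map nodes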
-- Simple Robot Control Program
--------------------------------------------------------------------------
-- Left  is the left IR sensor  - 1 = object to left
-- Right is the right IR sensor - 1 = object to right
-- Lmotor_dir   1 = forward, 0 = reverse
-- Rmotor_dir   1 = forward, 0 = reverse
-- Lmotor_speed 111 = fast, 000 = slow
-- Rmotor_speed 111 = fast, 000 = slow
--------------------------------------------------------------------------
library IEEE;
use IEEE.STD_LOGIC_1164.all;
use IEEE.STD_LOGIC_ARITH.all;
use IEEE.STD_LOGIC_UNSIGNED.all;

entity robot_control is
  port (Left, Right                : in  std_logic;
        Lmotor_dir, Rmotor_dir     : out std_logic;
        Lmotor_speed, Rmotor_speed : out std_logic_vector(2 downto 0));
end robot_control;

-- Assumed behavior (the original snippet is truncated here): steer away
-- from the side on which an object is detected and slow down near obstacles.
architecture behavioral of robot_control is
begin
  Lmotor_dir   <= '0' when Right = '1' else '1';
  Rmotor_dir   <= '0' when Left  = '1' else '1';
  Lmotor_speed <= "000" when (Left = '1' or Right = '1') else "111";
  Rmotor_speed <= "000" when (Left = '1' or Right = '1') else "111";
end behavioral;
This approach addresses two difficulties simultaneously: 1) the range limitation of mobile robot sensors and 2) the difficulty of detecting buildings in monocular aerial images. With the suggested method, building outlines can be detected faster than the mobile robot can explore the area by itself, giving the robot an ability to “see” around corners. At the same time, the approach can compensate for the absence of elevation data in the segmentation of aerial images. Our experiments demonstrate that ground-level semantic information (wall estimates) makes it possible to focus the segmentation of the aerial image on finding buildings and to produce a ground-level semantic map that covers a larger area than could be built using the onboard sensors alone.
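One plausible way to realize the focusing step described above is to rasterize the ground-level wall estimates into a prior mask and restrict the aerial-image segmentation to its neighborhood. The sketch below uses synthetic data and a simple intensity threshold in place of a real segmentation method; it is not the authors' pipeline.

% Focus aerial-image segmentation near ground-level wall estimates (sketch).
% The image, wall segments, and threshold are synthetic/assumed.
img  = rand(200, 200);                        % stand-in for an aerial image tile
mask = false(size(img));

walls = [20 50 180 50;                        % wall segments as [x1 y1 x2 y2] in pixels
         60 30  60 170];
for k = 1:size(walls,1)
    n = 200;                                  % sample points along each segment
    x = round(linspace(walls(k,1), walls(k,3), n));
    y = round(linspace(walls(k,2), walls(k,4), n));
    mask(sub2ind(size(mask), y, x)) = true;   % rasterize the wall estimate
end

prior = conv2(double(mask), ones(31), 'same') > 0;  % widen into a search region
candidates = (img > 0.6) & prior;             % segment only inside the focused region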