Action Designator

This example shows the different kinds of Action Designators that are available, how to create them, and what they do.

Action Designators are high-level descriptions of actions which the robot should execute.

Action Designators are created from an Action Designator Description, which describes the type of action as well as the parameters for this action. Parameters are given as a list of possible values. For example, if you want to describe the robot moving to a table, you need a NavigateAction and a list of poses that are near the table. The Action Designator Description then picks one of the poses and returns a performable Action Designator which contains the picked pose.
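This description-to-designator pattern can be sketched without PyCRAM as follows. Note that NavigateDescription and PerformableNavigate are hypothetical stand-ins for illustration only, not PyCRAM classes:

```python
# Minimal sketch (not PyCRAM code) of the description -> resolve -> perform
# pattern: a description holds several candidate parameters, resolve() picks
# one of them and returns a performable designator.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PerformableNavigate:
    """A performable designator holding exactly one picked pose."""
    pose: Tuple[float, float, float]

    def perform(self) -> str:
        return f"navigating to {self.pose}"

class NavigateDescription:
    """A description holding all candidate poses near the target."""
    def __init__(self, target_poses: List[Tuple[float, float, float]]):
        self.target_poses = target_poses

    def resolve(self) -> PerformableNavigate:
        # A real resolver may reason over the candidates; this sketch
        # simply picks the first pose.
        return PerformableNavigate(pose=self.target_poses[0])

designator = NavigateDescription([(1.0, 0.5, 0.0), (1.2, 0.4, 0.0)]).resolve()
print(designator.perform())  # → navigating to (1.0, 0.5, 0.0)
```

The PyCRAM designators below follow the same shape: build a description with lists of candidate parameters, call resolve() to obtain a performable designator, then call perform() on it.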

Move Torso

This action designator moves the torso up or down; specifically, it sets the torso joint to a given value.

We start again by creating a description and resolving it to a designator. Afterwards, the designator is performed within the simulated_robot environment.

[4]:
from pycram.designators.action_designator import MoveTorsoAction
from pycram.process_module import simulated_robot

torso_pose = 0.2

torso_desig = MoveTorsoAction([torso_pose]).resolve()

with simulated_robot:
    torso_desig.perform()

Set Gripper

As the name implies, this action designator is used to open or close the gripper.

The procedure is similar to the previous example, but this time we shorten it a bit.

[5]:
from pycram.designators.action_designator import SetGripperAction
from pycram.process_module import simulated_robot

gripper = "right"
motion = "open"

with simulated_robot:
    SetGripperAction(grippers=[gripper], motions=[motion]).resolve().perform()

Park Arms

Park arms is used to move one or both arms into the default parking position.

[6]:
from pycram.designators.action_designator import ParkArmsAction
from pycram.process_module import simulated_robot
from pycram.enums import Arms

with simulated_robot:
    ParkArmsAction([Arms.BOTH]).resolve().perform()

Pick Up and Place

Since these two actions depend on each other (you can only place an object after you have picked it up), they are shown together.

These action designators use object designators, which are not explained further in this tutorial; please see the example on object designators for more details.

To start we need an environment in which we can pick up and place things as well as an object to pick up.

[7]:
kitchen = Object("kitchen", ObjectType.ENVIRONMENT, "kitchen.urdf")
milk = Object("milk", ObjectType.MILK, "milk.stl", pose=Pose([1.3, 1, 0.9]))

world.reset_bullet_world()
Scalar element defined multiple times: limit
Scalar element defined multiple times: limit
[8]:
from pycram.designators.action_designator import PickUpAction, PlaceAction, ParkArmsAction, MoveTorsoAction, NavigateAction
from pycram.designators.object_designator import BelieveObject
from pycram.process_module import simulated_robot
from pycram.enums import Arms
from pycram.pose import Pose

milk_desig = BelieveObject(names=["milk"])
arm = "right"

with simulated_robot:
    ParkArmsAction([Arms.BOTH]).resolve().perform()

    MoveTorsoAction([0.3]).resolve().perform()

    NavigateAction([Pose([0.72, 0.98, 0.0],
                         [0.0, 0.0, 0.014701099828940344, 0.9998919329926708])]).resolve().perform()

    PickUpAction(object_designator_description=milk_desig,
                 arms=[arm],
                 grasps=["right"]).resolve().perform()

    NavigateAction([Pose([-1.90, 0.78, 0.0],
                         [0.0, 0.0, 0.16439898301071468, 0.9863939245479175])]).resolve().perform()

    PlaceAction(object_designator_description=milk_desig,
                target_locations=[Pose([-1.20, 1.0192, 0.9624],
                                       [0.0, 0.0, 0.6339889056055381, 0.7733421413379024])],
                arms=[arm]).resolve().perform()
[9]:
world.reset_bullet_world()

Look At

Look at lets the robot look at a specific point, for example when it should look at an object in order to detect it.

[9]:
from pycram.designators.action_designator import LookAtAction
from pycram.process_module import simulated_robot
from pycram.pose import Pose

target_location = Pose([1, 0, 0.5], [0, 0, 0, 1])
with simulated_robot:
    LookAtAction(targets=[target_location]).resolve().perform()

Detect

Detect is used to detect objects in the field of vision (FOV) of the robot. We will use the milk from the pick up/place example; if you didn't execute that example, you can spawn the milk with the following cell. The detect designator returns a resolved instance of an ObjectDesignatorDescription.

[5]:
milk = Object("milk", ObjectType.MILK, "milk.stl", pose=Pose([1.3, 1, 0.9]))
[10]:
from pycram.designators.action_designator import DetectAction, LookAtAction, ParkArmsAction, NavigateAction
from pycram.designators.object_designator import BelieveObject
from pycram.enums import Arms
from pycram.process_module import simulated_robot
from pycram.pose import Pose

milk_desig = BelieveObject(names=["milk"])

with simulated_robot:
    ParkArmsAction([Arms.BOTH]).resolve().perform()

    NavigateAction([Pose([0, 1, 0], [0, 0, 0, 1])]).resolve().perform()

    LookAtAction(targets=[milk_desig.resolve().pose]).resolve().perform()

    obj_desig = DetectAction(milk_desig).resolve().perform()

    print(obj_desig)
ObjectDesignatorDescription.Object(name=milk, type=ObjectType.MILK, bullet_world_object=Object(world=<pycram.bullet_world.BulletWorld object at 0x7f73f87738b0>,
local_transformer=<pycram.local_transformer.LocalTransformer object at 0x7f73f8773b80>,
name=milk,
type=ObjectType.MILK,
color=[1, 1, 1, 1],
id=4,
path=/home/jdech/workspace/ros/src/pycram-1/src/pycram/../../resources/cached/milk.urdf,
joints: ...,
links: ...,
attachments: ...,
cids: ...,
original_pose=header:
  seq: 0
  stamp:
    secs: 1699445647
    nsecs: 368098735
  frame_id: "map"
pose:
  position:
    x: 1.3
    y: 1.0
    z: 0.9
  orientation:
    x: 0.0
    y: 0.0
    z: 0.0
    w: 1.0,
tf_frame=milk_4,
urdf_object: ...,
_current_pose=header:
  seq: 0
  stamp:
    secs: 1699445653
    nsecs: 726317167
  frame_id: "map"
pose:
  position:
    x: -1.1999999986110241
    y: 1.019199981411649
    z: 0.9623999834060677
  orientation:
    x: 6.574869882871473e-09
    y: 2.9171242826262372e-09
    z: 0.6339889522913499
    w: 0.7733421030646893,
_current_link_poses={'milk_main': header:
  seq: 0
  stamp:
    secs: 1699445647
    nsecs: 475535869
  frame_id: "map"
pose:
  position:
    x: 1.3
    y: 1.0
    z: 0.9
  orientation:
    x: 0
    y: 0
    z: 1
    w: 1},
_current_link_transforms={'milk_main': header:
  seq: 0
  stamp:
    secs: 1699445665
    nsecs:  21037817
  frame_id: "map"
child_frame_id: "milk_4"
transform:
  translation:
    x: 1.3
    y: 1.0
    z: 0.9
  rotation:
    x: 0.0
    y: 0.0
    z: 0.0
    w: 1.0},
_current_joint_states={},
base_origin_shift=[ 4.15300950e-04 -6.29518181e-05  8.96554102e-02],
link_to_geometry={'milk_main': <urdf_parser_py.urdf.Mesh object at 0x7f73f594b9a0>}), _pose=<bound method Object.get_pose of Object(world=<pycram.bullet_world.BulletWorld object at 0x7f73f87738b0>,
local_transformer=<pycram.local_transformer.LocalTransformer object at 0x7f73f8773b80>,
name=milk,
type=ObjectType.MILK,
color=[1, 1, 1, 1],
id=4,
path=/home/jdech/workspace/ros/src/pycram-1/src/pycram/../../resources/cached/milk.urdf,
joints: ...,
links: ...,
attachments: ...,
cids: ...,
original_pose=header:
  seq: 0
  stamp:
    secs: 1699445647
    nsecs: 368098735
  frame_id: "map"
pose:
  position:
    x: 1.3
    y: 1.0
    z: 0.9
  orientation:
    x: 0.0
    y: 0.0
    z: 0.0
    w: 1.0,
tf_frame=milk_4,
urdf_object: ...,
_current_pose=header:
  seq: 0
  stamp:
    secs: 1699445653
    nsecs: 726317167
  frame_id: "map"
pose:
  position:
    x: -1.1999999986110241
    y: 1.019199981411649
    z: 0.9623999834060677
  orientation:
    x: 6.574869882871473e-09
    y: 2.9171242826262372e-09
    z: 0.6339889522913499
    w: 0.7733421030646893,
_current_link_poses={'milk_main': header:
  seq: 0
  stamp:
    secs: 1699445647
    nsecs: 475535869
  frame_id: "map"
pose:
  position:
    x: 1.3
    y: 1.0
    z: 0.9
  orientation:
    x: 0
    y: 0
    z: 1
    w: 1},
_current_link_transforms={'milk_main': header:
  seq: 0
  stamp:
    secs: 1699445665
    nsecs:  21037817
  frame_id: "map"
child_frame_id: "milk_4"
transform:
  translation:
    x: 1.3
    y: 1.0
    z: 0.9
  rotation:
    x: 0.0
    y: 0.0
    z: 0.0
    w: 1.0},
_current_joint_states={},
base_origin_shift=[ 4.15300950e-04 -6.29518181e-05  8.96554102e-02],
link_to_geometry={'milk_main': <urdf_parser_py.urdf.Mesh object at 0x7f73f594b9a0>})>, pose=header:
  seq: 0
  stamp:
    secs: 1699445653
    nsecs: 726317167
  frame_id: "map"
pose:
  position:
    x: -1.1999999986110241
    y: 1.019199981411649
    z: 0.9623999834060677
  orientation:
    x: 6.574869882871473e-09
    y: 2.9171242826262372e-09
    z: 0.6339889522913499
    w: 0.7733421030646893)

Transporting

Transporting can transport an object from its current position to a target position. It is similar to the pick and place plan used in the Pick Up and Place example. Since we need an object to transport, we spawn a milk; you don't need to do this if you already spawned it in a previous example.

[7]:
kitchen = Object("kitchen", ObjectType.ENVIRONMENT, "kitchen.urdf")
milk = Object("milk", ObjectType.MILK, "milk.stl", pose=Pose([1.3, 1, 0.9]))
Scalar element defined multiple times: limit
Scalar element defined multiple times: limit
[11]:
from pycram.designators.action_designator import *
from pycram.designators.object_designator import *
from pycram.process_module import simulated_robot
from pycram.pose import Pose

milk_desig = BelieveObject(names=["milk"])

with simulated_robot:
    MoveTorsoAction([0.3]).resolve().perform()
    TransportAction(milk_desig, ["left"], [Pose([-0.9, 0.9, 0.95], [0, 0, 1, 0])]).resolve().perform()

Opening

Opening allows the robot to open a drawer; the drawer is identified by an ObjectPart designator which describes the handle of the drawer that should be grasped.

For the moment this designator only works in the apartment environment; therefore, we remove the kitchen and spawn the apartment.

[12]:
kitchen.remove()
[13]:
apartment = Object("apartment", ObjectType.ENVIRONMENT, "apartment.urdf")
Unknown tag "material" in /robot[@name='apartment']/link[@name='coffe_machine']/collision[1]
Unknown tag "material" in /robot[@name='apartment']/link[@name='coffe_machine']/collision[1]
[14]:
from pycram.designators.action_designator import *
from pycram.designators.object_designator import *
from pycram.enums import Arms
from pycram.process_module import simulated_robot
from pycram.pose import Pose

apartment_desig = BelieveObject(names=["apartment"]).resolve()
handle_desig = ObjectPart(names=["handle_cab10_t"], part_of=apartment_desig)

with simulated_robot:
    MoveTorsoAction([0.25]).resolve().perform()
    ParkArmsAction([Arms.BOTH]).resolve().perform()
    NavigateAction([Pose([1.7474915981292725, 2.6873629093170166, 0.0],
                         [-0.0, 0.0, 0.5253598267689507, -0.850880163370435])]).resolve().perform()
    OpenAction(handle_desig, ["right"]).resolve().perform()

Closing

Closing lets the robot close an open drawer; like Opening, the drawer is identified by an ObjectPart designator describing the handle to be grasped.

This action designator also only works in the apartment environment for the moment; therefore, we remove the kitchen and spawn the apartment. Additionally, we open the drawer so that we can close it with the action designator.

[ ]:
kitchen.remove()
[17]:
apartment = Object("apartment", ObjectType.ENVIRONMENT, "apartment.urdf")
apartment.set_joint_state("cabinet10_drawer_top_joint", 0.4)
Unknown tag "material" in /robot[@name='apartment']/link[@name='coffe_machine']/collision[1]
Unknown tag "material" in /robot[@name='apartment']/link[@name='coffe_machine']/collision[1]
[22]:
from pycram.designators.action_designator import *
from pycram.designators.object_designator import *
from pycram.enums import Arms
from pycram.process_module import simulated_robot
from pycram.pose import Pose

apartment_desig = BelieveObject(names=["apartment"]).resolve()
handle_desig = ObjectPart(names=["handle_cab10_t"], part_of=apartment_desig)

with simulated_robot:
    MoveTorsoAction([0.25]).resolve().perform()
    ParkArmsAction([Arms.BOTH]).resolve().perform()
    NavigateAction([Pose([1.7474915981292725, 2.8073629093170166, 0.0],
                         [-0.0, 0.0, 0.5253598267689507, -0.850880163370435])]).resolve().perform()
    CloseAction(handle_desig, ["right"]).resolve().perform()