When the mounted camera recognizes a weed through image processing, the following procedure runs.

  1. Determine whether the weed lies within the arm's reach. We can decide this by checking whether the weed appears in the foreground while the scissors appear in the background. Since the scissors and arm are always in front of the camera, a 256×256 image containing both objects is visible whenever there is no occlusion. From this image we extract the segments corresponding to the scissors and arm by computing superpixels, keep those segments, and set every other segment to black; we call the result image ①. Unfortunately the scissors could not be extracted cleanly, as the photo below shows, but this can be resolved by covering them with the same material (blue nitrile rubber) that covers the arm. The scissors and the nitrile rubber will be glued with Cemedine's 3000 series adhesive for dissimilar materials.

    Next, the currently captured image is called ②. We compare images ① and ②, with the ROI limited to the scissors and arm segments of image ①, by checking whether the difference in RGB color intensity between ② and ① exceeds a predefined threshold. Of course, even when the weed is closer to the camera than the scissors, it may be too close; we can recognize this situation as a missing segment of the blue cover when computing superpixels, so we can still expect the RGB intensity difference between ② and ① to exceed the threshold.
  2. If the weed is within the arm's reach, we process the image to locate the area around the weed's root. We can use the same image-processing scheme as before: either an already trained cascade classifier or frame subtraction over a captured video sequence. In the latter case we try to detect the moment when the weed segment switches from background to foreground, or when the weed is deformed. One example is shown below. The procedure for discriminating foreground from background is the same as in step 1.

    The advantage of foreground/background identification is that we can reuse the same process as before. However, this process cannot tell us where to cut. Although this is a subject for later consideration, we can estimate how far the scissors are from the ground from the current servo angles, so if the robot is about to cut an excessively high part of the weed, we stop the current motion immediately and switch back to exploration mode.
  3. If the weed is within the arm's reach, we use inverse kinematics to compute the servo angles that bring the rotation axis of the scissors to the weed's root. The scissors must be opened by 30 degrees before the motion is executed. Checking whether the scissors have reached a position suitable for cutting around the root uses the same process as before; in this scenario one blade of the scissors is farthest from the camera, the weed is in the middle, and the other blade is closest. If the scissors cannot reach a suitable cutting position, the arm returns to its original position and the same process is executed again. If the scissors fail to catch the weed three times, the robot gives up, returns the arm to its original position, and continues exploring for other weeds.
  4. If the scissors catch the weed successfully, the main controller signals the servo motor to open and close the scissors three times. Currently we have no plan to verify whether the scissors actually cut the weed.
  5. If the weed is not within the arm's reach, the robot continues searching for another weed.
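The reachability check of step 1 can be sketched as follows: given reference image ① (arm/scissors segments kept, everything else black) and the current capture ②, we compare RGB intensities inside the ROI and flag occlusion when the mean difference exceeds the threshold. This is a minimal NumPy sketch; the function name, array shapes, and the threshold value of 30 are assumptions, and the superpixel segmentation that produces the ROI mask is left out.

```python
import numpy as np

def occlusion_detected(ref_img, cur_img, roi_mask, threshold=30.0):
    """Return True if something (e.g. the weed) occludes the scissors/arm.

    ref_img  : HxWx3 uint8 reference image (1), arm/scissors segments kept,
               every other segment set to black.
    cur_img  : HxWx3 uint8 current capture (2).
    roi_mask : HxW bool mask marking the arm/scissors segments of image (1).
    threshold: mean absolute RGB difference (0-255 scale) above which we
               decide the weed lies in front of the scissors (assumed value).
    """
    # Work in a signed type so the subtraction does not wrap around.
    diff = np.abs(ref_img.astype(np.int16) - cur_img.astype(np.int16))
    # Average the per-channel differences inside the ROI only.
    mean_diff = diff[roi_mask].mean()
    return mean_diff > threshold
```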
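The frame-subtraction variant of step 2 can be sketched the same way: scan a video sequence and report the first frame at which the ROI changes enough to suggest the weed segment moved into the foreground or was deformed. The function name and threshold are assumptions for illustration.

```python
import numpy as np

def weed_entered_foreground(frames, roi_mask, threshold=25.0):
    """Return the index of the first frame whose ROI differs enough from
    the previous frame (weed moved to foreground or was deformed),
    or None if no such frame is found.

    frames   : sequence of HxWx3 uint8 video frames.
    roi_mask : HxW bool mask limiting the comparison to the arm/scissors ROI.
    threshold: mean absolute RGB difference triggering detection (assumed).
    """
    prev = frames[0].astype(np.int16)
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame.astype(np.int16)
        if np.abs(cur - prev)[roi_mask].mean() > threshold:
            return i
        prev = cur
    return None
```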
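For the inverse kinematics of step 3, a closed-form solution exists if we model the arm as a planar two-link chain with the scissors' rotation axis as the end effector. This is a sketch under that assumption; the actual arm geometry and link lengths are not specified in the text.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (radians) placing the end effector (scissors rotation
    axis) at (x, y) for a planar two-link arm with link lengths l1, l2.

    Returns (theta1, theta2) for the elbow-down solution, or None when the
    target is outside the reachable annulus -- the case where the arm must
    return to its original position and retry.
    """
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= d <= 1.0:
        return None  # weed root is out of reach
    theta2 = math.acos(d)  # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A forward-kinematics check (x = l1·cos θ1 + l2·cos(θ1 + θ2), similarly for y) confirms the solution reaches the target.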

Next we describe the simulation procedure using 3D CAD.

  1. Create a new body on the ground and project an image containing a weed onto that body.
  2. Move the robot forward.
  3. Determine whether the image lies within the camera's field of view (160 degrees horizontally, 90 degrees vertically). Here we simply treat the image as within range if it is in front of the camera.
  4. If the image is within range, decide whether it contains a weed.
  5. If it contains a weed, perform the image processing and arm control described in the section above.
  6. Once this works reliably, we increase the number of sample images; the additional samples should include negative cases as well as positive ones.
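The field-of-view test in step 3 can be sketched as an angular check against the stated 160°×90° frustum. This is a minimal sketch assuming a camera at `cam_pos` looking down the +x axis with y to the left and z up; the coordinate frame and function name are assumptions, and it implements the simplification above (a point behind the camera is simply out of range).

```python
import math

def in_field_of_view(point, cam_pos=(0.0, 0.0, 0.0),
                     h_fov_deg=160.0, v_fov_deg=90.0):
    """True if `point` (x, y, z) lies inside the camera's view frustum.

    Assumed frame: camera at cam_pos looking along +x, y left, z up.
    The horizontal and vertical half-angles are 80 and 45 degrees.
    """
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    if dx <= 0.0:
        return False  # behind the camera: treated as out of range
    h_angle = math.degrees(math.atan2(abs(dy), dx))
    v_angle = math.degrees(math.atan2(abs(dz), dx))
    return h_angle <= h_fov_deg / 2.0 and v_angle <= v_fov_deg / 2.0
```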
Now the robot is thinking about the image…