So, I was working on a script to get Yolo to start only if an object is presented at the correct distance (1.0 meter), using an Ultrasonic sensor.

This works nicely.

Now, is there a filter I could use to eliminate the background before Yolo analyses the picture?

@moz4r, I solved the neopixel issue by adding an extra sleep(4) at the beginning of my script.

def darknet():
  imagedisplay.closeAll()
  sleep(4)
  if isNeopixelActivated==1:
    i01.setNeopixelAnimation("Color Wipe", 25, 5, 10, 15)
  if isOpenCvActivated==1:
    # start the capture first if needed, then run the same sequence either way
    if not i01.RobotIsOpenCvCapturing():
      i01.opencv.capture()
      sleep(1)
    i01.setHeadVelocity(-1,-1,-1,-1,-1,-1)
    i01.moveHead(75,90,90,30,0,90)
    sleep(1)
    takeFotoForYolo()
    statisticResult()
    sleep(0.1)
    analyseResult()

#######################################################

# -*- coding: utf-8 -*-

def YoloOnUSonic():
  i01.setHeadVelocity(-1,-1,-1,-1,-1,-1)
  if ultraSonicSensorActivated:
    distance=200
    timeout=0
    timeoutGetCloser=0
    while (not distance or distance > 100):
      timeout+=1
      timeoutGetCloser+=1
      distance=i01.getUltrasonicSensorDistance()
      print distance
      if timeout > 20:
        chatBot.getResponse("SYSTEM_NO_OBJECT")
        sleep(1)
        break
      # ask to move object CLOSER
      if timeoutGetCloser>6:
        chatBot.getResponse("SYSTEM_GET_OBJECT_CLOSER")
        timeoutGetCloser=0
        sleep(1)
      sleep(0.5)
    # nice, an object is detected
    if distance<=100:
      chatBot.getResponse("SYSTEM_SEE_OBJECT")
      sleep(1)
  else:
    sleep(1)
    No()
    i01.mouth.speakBlocking("I think my Ultrasonic is not activated")
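Stripped of the MyRobotLab calls, the gating loop above boils down to a poll-until-close-or-timeout pattern. Here is a toy sketch of just that logic; `wait_for_object` is a hypothetical helper and `read_distance` stands in for `i01.getUltrasonicSensorDistance()`:

```python
def wait_for_object(read_distance, threshold=100, max_polls=20):
    """Poll a distance sensor until a reading is at or under threshold (cm).

    Returns the final distance, or None if we run out of polls.
    """
    for _ in range(max_polls):
        distance = read_distance()
        # some sensors return 0/None when nothing is in range, so check first
        if distance and distance <= threshold:
            return distance
    return None

# simulated sensor: the object moves in from 200 cm to 80 cm
readings = iter([200, 150, 120, 80])
result = wait_for_object(lambda: next(readings))
```

In the real script the `sleep(0.5)` between polls and the "get closer" prompts would go inside the loop body.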


##############################################

In the _inmoovGestures.AIML:


<category><pattern># THIS IS</pattern>
<template>Let me see.
        <oob><mrl><service>python</service><method>exec</method><param>YoloOnUSonic()</param></mrl></oob></template>
</category>
<category><pattern>SYSTEM_NO_OBJECT</pattern>
<template><random>
          <li>The object or the person is too far</li>
          <li>The object should be within a 2 meter range</li>
        </random></template>
</category>
<category><pattern>SYSTEM_GET_OBJECT_CLOSER</pattern>
<template><random>
          <li>Get closer, show the object</li>
          <li>Present the object closer</li>
          <li>Show the object closer</li>
        </random></template>
</category>
<category><pattern>SYSTEM_SEE_OBJECT</pattern>
<template><random>
          <li>Be patient I need to process</li>
          <li>This takes a few seconds to process</li>
          <li>My processor is slow, be patient</li>
        </random><oob><mrl><service>python</service><method>exec</method><param>darknet()</param></mrl></oob></template>
</category>


GroG

5 years 5 months ago

As moz4r suggested, the coordinate area would probably be the quickest & easiest.  
But there are many possibilities.  

If you get another person and ask "What is this?", that person would:

  • Find you "the person" in the picture
  • Determine where your hands were
  • Determine which hand was holding something
  • Segment/Fixate on the thing which was in your hand
  • Identify it
  • Respond

To go through that sequence would require Kinect/OpenNI or OpenPose (new tech), then some segmentation strategy.

Another possible segmentation/masking strategy could be motion.
For a long time we've had the Detector filter, which is a motion detection masking filter.  But it has long been broken by OpenCV changing their interface.

I fixed it locally to show you what I mean.

The motion filter has a switch that alternates between two modes: "learning the background", which then becomes hidden, and watching the foreground. The result could be applied as a mask, or minimally as a bounding box.
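The two-mode idea can be shown with a toy background-subtraction sketch on small grayscale frames. This is plain Python, not the actual Detector filter or OpenCV API; `update_background` and `foreground_mask` are hypothetical helpers:

```python
def update_background(bg, frame, alpha=0.05):
    """'Learning' mode: slowly blend each new frame into the background model."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(b_row, f_row)]
            for b_row, f_row in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    """'Watching' mode: pixels that differ enough from the background are foreground."""
    return [[1 if abs(f - b) > threshold else 0 for b, f in zip(b_row, f_row)]
            for b_row, f_row in zip(bg, frame)]

# a 4x4 grayscale "scene": empty background, then one bright object pixel appears
background = [[0] * 4 for _ in range(4)]
frame = [[0] * 4 for _ in range(4)]
frame[1][1] = 200

mask = foreground_mask(background, frame)        # 1 only where the object is
background = update_background(background, frame)
```

The resulting mask (or its bounding box) is what you would hand to Yolo instead of the whole picture.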

Lots of possibilities..

Thanks guys!

Since I am running Manticore version, I guess the detector filter is not yet an option for me. And yolo filter in Nixie is currently cooking my cpu. I need to wait a bit.

But I could try the idea of using the mask filter in the meantime. Does the mask filter work on a single frame, or does it also work on video?

When testing it in Manticore, after adding the mask filter, numbers are running in the top opencv frame but no mask appears. Maybe it's a bug, or does it not work that way?


Since I am running Manticore version, I guess the detector filter is not yet an option for me. And yolo filter in Nixie is currently cooking my cpu. I need to wait a bit.

All these are very good reasons to make sure we have all the bugs tracked in the issue list ;)

But I could try the idea of using the mask filter in the meantime. Does the mask filter work on a single frame, or does it also work on video?

It works on video...  it flips between two modes .. learning the background and watching the foreground ...

Manticore is dead to me .... and it's borked in that version ..

You can't add and remove a yolo filter ?

This is where I'm guessing you want to be ...

Hey InMoov, "What is this ?"

  • yolo filter starts
  • array of things detected gets published
  • yolo filter gets removed
  • InMoov says .. (from array) .. I see an apple, a person, and an oven

If you keep those things in memory and have a little logic that reports on "new" things ..
The next time you ask ...

  • yolo filter starts
  • array of things detected gets published
  • yolo filter gets removed
  • InMoov says .. (from array not in previous array) .. I see a cup
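The "report only new things" logic in the two sequences above could look like this in plain Python (a toy sketch; `report_new` is a hypothetical helper, not part of MyRobotLab):

```python
def report_new(labels, memory):
    """Compare this run's Yolo labels against last run's set of labels.

    Returns (labels not seen last time, updated memory for the next ask).
    """
    current = set(labels)
    return current - memory, current

# first ask: Yolo publishes its detections, everything is new
new_things, memory = report_new(["apple", "person", "oven"], set())

# second ask: only the cup was not there before, so only it gets reported
new_things, memory = report_new(["apple", "person", "oven", "cup"], memory)
```

InMoov would then speak only the contents of `new_things`.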

So this is object masking vs picture masking .. it might be easier ...