The aim of this lesson is to learn how to make Misty more aware of her surroundings using event-based skills with face and object recognition, as well as audio localization. By the end of the lesson, you will be able to build more complex skills in Python for your robot character.
Face Recognition
Challenge 1: Remember me?
Now that you've explored how to make Misty more aware of her environment and respond to human touch in Python, you can also make her recognize faces using her face recognition capabilities, just like in Blockly. If you've trained Misty to recognize your face in Lesson 6: Face recognition, you don't need to repeat the process for Python; your FaceID is already in her memory.
The amazing thing about programming is that once you have learned a way to solve a problem, you can reuse the same solution for similar problems. So even for the face recognition event, the structure of the code for the function and for the event is the same as for the bump and touch sensor events, with one exception: some events require you to first enable the service you are using. In this case, you need to call the misty.start_face_recognition API to enable the service before registering the face recognition event.
In order to distinguish faces in Python, you can compare against the FaceID name in the "label" field of the event message and build your conditions as shown in the example below. Give it a try!
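Here is a minimal sketch of such a skill. The FaceID "yourname" and the spoken phrases are placeholders, so swap in the label you trained in Lesson 6; the helper function simply maps each label to a phrase, giving each person their own condition:

```python
def phrase_for(label):
    # Build Misty's reaction from the FaceID in the event message.
    # "yourname" is a placeholder -- use the FaceID you trained in Lesson 6.
    if label == "yourname":
        return "Hello, it is so good to see you again!"
    elif label == "unknown person":
        return "Hi there, I don't think we have met yet."
    return None  # Stay quiet for any other label

def recognized(data):
    print(data)
    phrase = phrase_for(data["message"]["label"])
    if phrase is not None:
        misty.speak(phrase)

if __name__ == "__main__":
    # Imported here so the helper above can be read and tested without the SDK
    from mistyPy.Robot import Robot
    from mistyPy.Events import Events

    misty = Robot()
    misty.start_face_recognition()  # Enable the service before registering the event
    misty.register_event(event_name="face_recognition_event",
                         event_type=Events.FaceRecognition,
                         callback_function=recognized,
                         keep_alive=True)
    misty.keep_alive()
```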
If you want Misty to say hello to everyone she already recognizes, you can generalize the code using the event messages that we covered in the previous lesson. This will not only compact your code, but also open up a new realm of interactions you can create. Try editing the previous code using the example below.
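One way to generalize is to read the recognized FaceID straight from the event message and build the greeting from it, rather than writing one branch per person. This sketch assumes untrained faces arrive with the label "unknown person"; the greeting text is just an illustration:

```python
def hello_phrase(label):
    # One rule covers every trained FaceID instead of one branch per person.
    if label in ("unknown person", ""):
        return None  # Skip faces Misty has not been trained on
    return "Hello " + label + ", nice to see you!"

def recognized(data):
    phrase = hello_phrase(data["message"]["label"])
    if phrase is not None:
        misty.speak(phrase)

if __name__ == "__main__":
    from mistyPy.Robot import Robot
    from mistyPy.Events import Events

    misty = Robot()
    misty.start_face_recognition()
    misty.register_event(event_name="face_recognition_event",
                         event_type=Events.FaceRecognition,
                         callback_function=recognized,
                         keep_alive=True)
    misty.keep_alive()
```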
Well done so far! You can now make Misty recognize you and your friends; this is an essential element for robots to be able to establish and remember relationships, just like us humans. But it doesn't stop there: you can even have her recognize objects like a TV, a toothbrush, a bicycle, and much more. See Known objects for more information. Using the same event skill structure as for face recognition and the example below, try building a skill where Misty tells you what object she sees.
```python
from mistyPy.Robot import Robot
from mistyPy.Events import Events
import time

misty = Robot()
misty.start_object_detector()

def recognized(data):
    print(data)
    misty.speak("Oh sweet, I think I see a " + data["message"]["description"], 1)
    if data["message"]["description"] == 'person':
        time.sleep(2)
        misty.play_audio("s_Awe.wav", 20)
        time.sleep(1)
        misty.speak("I love humans, they are my best friends")
        misty.transition_led(0, 255, 0, 255, 255, 0, "TransitOnce", 1000)
    elif data["message"]["description"] == 'tv':
        misty.speak("Let's watch some sci-fi robot movies together")
        misty.transition_led(255, 0, 0, 255, 127, 0, "TransitOnce", 1000)

misty.register_event(event_name='object_detection_event',
                     event_type=Events.ObjectDetection,
                     callback_function=recognized,
                     keep_alive=False)
misty.keep_alive()
```
Audio Localization
Challenge 8: Marco Polo
Besides recognizing faces and objects, you can also teach Misty to understand where your voice or a sound is coming from using audio localization. This is set up like any other event, using the source tracking event together with the misty.start_recording_audio API. The tricky part about setting up an audio localization event is using the audio data to tell Misty in which direction she should move her head. To solve this, you can read the event's doa (degree of arrival) value into a variable. Since Misty will create an audio recording, you will need to delete it once the event is complete; to do this, you can set up a sensor event with one of her bumpers. Let's try playing the game Marco Polo using audio localization together with a face recognition event!
```python
from mistyPy.Events import Events
from mistyPy.Robot import Robot

misty = Robot()
audio_file_name = "deleteme.wav"

def bumper_press(event):
    if (event["message"]["sensorId"] == "bfr" and event["message"]["isContacted"] == True):
        misty.stop_recording_audio()
        misty.delete_audio(audio_file_name)
        misty.unregister_all_events()
        print("finished")

def source_tracking(event):
    doa = event["message"]["degreeOfArrivalSpeech"]
    # Calculate the head angle based on the DOA (adjust the scaling factor as needed)
    head_angle = 90 - doa  # Subtract 90 degrees to center the head
    # Set the head movement duration and units as needed
    movement_duration = 1.0  # You can adjust the duration as needed
    movement_units = "degrees"
    # Move the head towards the sound source
    misty.move_head(0, 0, head_angle, duration=movement_duration, units=movement_units)
    print(f"DOA: {doa}, Head Angle: {head_angle} degrees")

def recognized(data):
    print(data)
    if data["message"]["label"] == 'yourname':
        misty.speak("Polo")
        misty.transition_led(0, 255, 0, 255, 127, 0, "TransitOnce", 1000)

# Register for the FaceRecognition event
misty.register_event(event_name='face_recognition_event',
                     event_type=Events.FaceRecognition,
                     callback_function=recognized,
                     keep_alive=True)
misty.register_event(event_type=Events.BumpSensor,
                     event_name="bump pressed",
                     keep_alive=True,
                     callback_function=bumper_press)
misty.register_event(event_type=Events.SourceTrackDataMessage,
                     event_name="some name",
                     keep_alive=True,
                     callback_function=source_tracking,
                     debounce=1000)

misty.start_recording_audio(audio_file_name)
misty.start_face_recognition()
misty.keep_alive()
```