
MistyGPT

Welcome to MistyGPT! This project template shows how to integrate ChatGPT with your Misty for autonomous conversations, and how to combine it with LangChain so you can give Misty verbal commands and ask it for information. Note: this project can only be used in a desktop environment; it is not compatible with the Misty Studio Python interface.
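Before diving into the full script, here is a minimal sketch of the event-callback pattern it relies on: you register a named event with a callback, and the callback fires whenever the robot reports that event. `FakeRobot` below is a hypothetical stand-in for `mistyPy.Robot.Robot` so the pattern can be tried without a robot; it is not part of mistyPy.

```python
# Conceptual sketch of the mistyPy event flow (no robot required).
# FakeRobot is an illustration-only stand-in, NOT part of mistyPy.

class FakeRobot:
    def __init__(self):
        self._events = {}

    def register_event(self, event_name, event_type, callback_function, keep_alive=True):
        # Store the callback under the event name, like mistyPy subscribes to a websocket
        self._events[event_name] = callback_function

    def trigger(self, event_name, data):
        # Test helper: simulate the robot firing an event
        self._events[event_name](data)

captured = []

def speech_captured(data):
    # Mirrors the real callback below: only act once speech recognition has finished
    if data["message"]["step"] == "CompletedASR":
        captured.append(data["message"]["text"])

robot = FakeRobot()
robot.register_event(event_name="dialog", event_type="DialogAction",
                     callback_function=speech_captured, keep_alive=True)
robot.trigger("dialog", {"message": {"step": "CompletedASR", "text": "hello"}})
print(captured)  # ['hello']
```

The real script below does exactly this with `Events.DialogAction`, `Events.TouchSensor`, and `Events.FaceRecognition`.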

import os
import sys
from mistyPy.Robot import Robot
from mistyPy.Events import Events
import time
import random
import requests

# Initialize Misty with your robot's IP address
misty = Robot("YOUR_IP_ADDRESS")
#Robot default
misty.set_default_volume(50)

# Install dependencies with: pip install langchain langchain_community openai
# "constants" is a local constants.py file holding your API key (see below)
import constants
from langchain_community.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain_community.llms import OpenAI
from langchain_community.chat_models import ChatOpenAI

os.environ["OPENAI_API_KEY"] = constants.APIKEY
print("API key loaded")

#query = sys.argv[1]

loader = TextLoader('Python-SDK-main/data.txt')  # forward slash (or a raw string) keeps the path valid cross-platform
print("data.txt is loaded")
#loader = DirectoryLoader(".", glob="*.txt")
index = VectorstoreIndexCreator().from_loaders([loader])

def speech_captured(data):
    if data["message"]["step"] == "CompletedASR":
        user_input = data["message"]["text"]
        process_user_input(user_input)
        print(user_input)


def process_user_input(user_input):
    mistyOutput = index.query(user_input, llm=ChatOpenAI())
    #print(mistyOutput)
    moveArms = "move my arms"
    moveHead = "move my head"
    moveForward = "go forward"
    moveBackward = "go backward"
    moveForGesture1 = "intelligence"
    lowerVolume = "lower my volume"
    higherVolume = "higher my volume"
    changeDisplay = "change my display"
    print(mistyOutput)
    misty.speak_and_listen(mistyOutput)
    if moveForGesture1 in mistyOutput:
        misty.move_arms(-70, 50, 40, 40)
        time.sleep(1)
        print("left arm moved")
        misty.move_arms(50, 50, 40, 40)
    elif moveArms in mistyOutput:
        misty.move_arms(-50, -50, 40, 40)
        time.sleep(2)
        misty.move_arms(50, 50, 40, 40)
        print("arms moved")
    elif moveHead in mistyOutput:
        misty.move_head(0, -25, 0, 100, None, None)
        time.sleep(2)
        misty.move_head(0, 25, 0, 100, None, None)
        time.sleep(2)
        misty.move_head(0, 0, 0, 100, None, None)
        print("head moved")
    elif moveForward in mistyOutput:
        # drive_time expects (linearVelocity, angularVelocity, timeMs);
        # velocities are percentages in the -100 to 100 range
        misty.drive_time(20, 0, 5000)
        print("moving forward")
    elif moveBackward in mistyOutput:
        misty.drive_time(-20, 0, 5000)
        print("moving backward")
    elif lowerVolume in mistyOutput:
        misty.set_default_volume(50)
    elif higherVolume in mistyOutput:
        misty.set_default_volume(100)
    elif changeDisplay in mistyOutput:
        misty.display_image("e_JoyGoofy3.jpg")
        time.sleep(3)
        misty.display_image("e_EcstacyHilarious.jpg")
        time.sleep(3)
        misty.display_image("e_DefaultContent.jpg")

def recognized(data):
    print(data)  
    misty.speak("Yay, Hi " + data["message"]["label"], 1)
    misty.stop_face_recognition()
    time.sleep(2)
    misty.start_dialog()
    misty.speak_and_listen("How can I help you today", utteranceId="required-for-callback")

#If Misty is lifted she gets a bit touchy about that.
def touch_sensor(data):
    if data["message"]["sensorId"] == "cap" and data["message"]["isContacted"]:
        touched_sensor = data["message"]["sensorPosition"]
        print(touched_sensor)
        if touched_sensor == "Scruff":
            misty.play_audio("s_Rage.wav")
            misty.display_image("e_Anger.jpg")
            time.sleep(3)
        # Touching the front of the head triggers face recognition, which initiates ChatGPT
        if touched_sensor == "HeadFront":
            misty.move_head( -5, 0, 0, 85, None, None)
            misty.display_image("e_Joy2.jpg")
            misty.speak("Aha")
            time.sleep(1)
            misty.start_face_recognition()
        # Stops the ChatGPT dialog event
        if touched_sensor == "Chin":
            misty.move_head(0, -50, 0, 150, None, None)
            misty.play_audio("s_Love.wav")
            misty.display_image("e_Love.jpg")
            time.sleep(2)
            misty.display_image("e_DefaultContent.jpg")
            misty.unregister_event("arbitrary-name")

misty.register_event(event_name="touch-sensor",
                     event_type=Events.TouchSensor,
                     callback_function=touch_sensor,
                     keep_alive=True)


misty.register_event(event_name="arbitrary-name",
                     event_type=Events.DialogAction,
                     callback_function=speech_captured,
                     keep_alive=True)

misty.register_event(event_name='face_recognition_event', 
                     event_type=Events.FaceRecognition, 
                     callback_function=recognized, 
                     keep_alive=False)

#misty.speak(index.query(query, llm=ChatOpenAI()))

# Idle animation loop: runs forever, keeping Misty animated between conversations
while True:
    misty.display_image("e_DefaultContent.jpg")
    misty.move_arms(30, 30, 40, 40)
    misty.move_head(0, 0, 0, 85, None, None)
    time.sleep(5)
    misty.display_image("e_ContentLeft.jpg")
    time.sleep(3)
    misty.move_arms(20, 10, 40, 40)
    time.sleep(2)
    misty.move_head(0, -10, 0, 60, None, None)
    time.sleep(5)
    misty.display_image("e_ContentRight.jpg")
    time.sleep(3)
    misty.move_head(0, 10, 0, 60, None, None)
    time.sleep(5)
    misty.move_arms(10, 20, 40, 40)
    

# Note: the idle loop above never exits, so the lines below are not reached;
# the loop itself keeps the script (and its event subscriptions) running.
print("testing")
misty.keep_alive()
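The response handling in `process_user_input()` above is a keyword dispatch: the first trigger phrase found in ChatGPT's reply selects the matching robot action, checked in the same order as the `if`/`elif` chain. The sketch below isolates that pattern with stubbed action names instead of `misty.*` calls, so you can test the matching logic without a robot.

```python
# Keyword dispatch, mirroring the if/elif order in process_user_input().
# Action names here are stand-ins for the misty.* calls in the script.
ACTIONS = [
    ("intelligence",      "gesture"),
    ("move my arms",      "arms"),
    ("move my head",      "head"),
    ("go forward",        "forward"),
    ("go backward",       "backward"),
    ("lower my volume",   "volume_down"),
    ("higher my volume",  "volume_up"),
    ("change my display", "display"),
]

def dispatch(reply):
    """Return the action for the first trigger phrase found in reply, else None."""
    for phrase, action in ACTIONS:
        if phrase in reply:
            return action
    return None

print(dispatch("Yes I can move my arms."))   # arms
print(dispatch("Robots are fascinating."))   # None
```

Because matching is plain substring containment, the phrases in `data.txt` must appear verbatim in ChatGPT's replies for the actions to fire.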
The contents of constants.py (create this file next to the script and put your OpenAI API key in it):

APIKEY = "YOUR_OPEN_AI_API_KEY"

The contents of data.txt, the knowledge base loaded by TextLoader. It steers ChatGPT's replies so that they contain the exact trigger phrases the script checks for:

Limit your responses to 3 sentences unless you are explaining a theoretical concept.

When asked about moving, tell some information about being a robot that can move. Maybe a joke.

Yes I can move my arms.
Yes I can move my head.
Yes I can go forward.
Yes I can go backward.
Yes I can lower my volume.
Yes I can higher my volume.
Yes I can change my display
Yes I can change my ChestLED
Yes I can play domo arrigato
Yes I can displayVideo
Yes I can do facerecognition

When not getting any input from user, say the following but do not ask questions about what the user knows or the users capabilities:
Can you say that again?
Please say that again
Sorry, I didn't get that
Are you still there?
Sorry I got distracted, can you say that again?
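Each "Yes I can ..." capability line in data.txt deliberately contains, verbatim, the trigger phrase that the script scans replies for. This quick check (the lists below just restate the file and script contents) confirms the correspondence holds, which is worth re-running whenever you edit data.txt or add new commands:

```python
# Verify each data.txt capability line contains its trigger phrase verbatim.
capability_lines = [
    "Yes I can move my arms.",
    "Yes I can move my head.",
    "Yes I can go forward.",
    "Yes I can go backward.",
    "Yes I can lower my volume.",
    "Yes I can higher my volume.",
    "Yes I can change my display",
]
triggers = [
    "move my arms", "move my head", "go forward", "go backward",
    "lower my volume", "higher my volume", "change my display",
]

for line, trigger in zip(capability_lines, triggers):
    assert trigger in line, f"{trigger!r} missing from {line!r}"
print("all triggers matched")
```

If a trigger drifts out of sync with the file, ChatGPT may describe the capability in different words and the corresponding `misty.*` action will never fire.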
