MyShell Pro Config: Advanced Tips

This article is aimed at MyShell creators who are already familiar with the basics of Pro Config. For more information on Pro Config, please refer to “MyShell Advanced - Beginner's Guide to Pro Config”.

Pro Config is a new bot-creation method offered by MyShell that allows powerful AI applications to be built quickly with low code. The third session of the Pro Config Learning Lab is currently open for registration. Creators with a programming background are welcome to sign up; complete application development within a week and receive 1 to 2 MyShell GenesisPass (worth 1.3K USD). I am a graduate of the first session: I upgraded my Think Tank, which originally relied entirely on prompts, to Think Tank ProConfig and passed the assessment.

Think Tank is an AI think tank that allows users to select experts from multiple fields to discuss complex issues and provide interdisciplinary advice. The initial version of Think Tank ProConfig introduced custom parameters, enabling the presetting of specific field experts and internationalized content output.

Think Tank ProConfig link

Think Tank

Initially, I created the Pro Config version of Think Tank according to the tutorial just to pass the assessment, without yet understanding everything Pro Config can do. Recently, I added a recommended-topics feature to enhance user interaction with the AI. Developing this new feature gave me a deeper understanding of Pro Config. Below are some practical tips that are not covered in detail in the official tutorials.

Variables#

Pro Config variables are divided into global variables and local variables, which are specifically explained in the official tutorial's expressions and variables section.

Global variables are read and written using context. They are suitable for storing various types of data:

  1. string: A string that can be used to save text-type outputs or store JSON objects serialized via JSON.stringify.
  2. prompt: A prompt. Store the system_prompt in context, and place all variables that need to be passed in the user_prompt.
  3. list: A list; non-string types should be wrapped in {{}}.
  4. number: A number, which also needs to be wrapped in {{}}.
  5. bool: A boolean value, which also needs to be wrapped in {{}}.
  6. null: A nullable value.
  7. url: External link data, used for multimedia display. In the example below, context.emo_list[context.index_list.indexOf(emo)] is used to display a specific image.
"context": {
  "correct_count": "",
  "prompt": "You are a think tank, your task is to analyze and discuss questions raised...",
  "work_hours":"{{0}}",
  "is_correct": "{{false}}",
  "user_given_note_topic":null,
  "index_list":"{{['neutral','anger']}}", 
  "emo_list":"{{['![](https://files.catbox.moe/ndkcpp.png)','![](https://files.catbox.moe/pv5ap6.png)']}}"
}
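
For instance, with the context above, a render block can look up the image matching an emotion. This is a minimal sketch, assuming emo is a local variable produced by an earlier task that outputs either 'neutral' or 'anger':

"render": {
  "text": "{{ context.emo_list[context.index_list.indexOf(emo)] }}"
}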

Local variables can be of any of the types listed above, and they can be passed between states via payload (e.g., when responding to different button events), as detailed in the official Pro Config tutorial's function-calling-example.

When developing new states, it is advisable to first use render to display the necessary variables and verify they are correct before adding tasks; this improves development efficiency.
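For example, a throwaway debug render like the one below (the variable names are just placeholders taken from this article) makes it easy to confirm the values are what you expect before wiring up the real tasks:

"render": {
  "text": "**debug**\n\nquestion: {{question}}\n\nmore_questions_str: {{context.more_questions_str}}"
}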

JSON Generation#

To have a large language model (LLM) output content that can be used as button values, such as the 2 Related Topic buttons in Think Tank ProConfig, the prompt needs to be designed so the model generates JSON directly, which can then be stored in context and referenced later.

There are two methods to achieve this:

Aggregate responses: Directly generate a mixed output of text and JSON, then use JS code to process the string to obtain JSON.

Refer to the following prompt (with non-JSON generation-related content omitted):

……

<instructions>
……
6. Show 2 related topics and experts appropriate for further discussion with JSON format in code block
</instructions>

<constraints>
……
- MUST display "Related Topics" in code block
</constraints>

<example>
……

**Related Topics**:
	```
	[{"question": related_topic_1,"experts": [experts array 1]}, {"question": related_topic_2,"experts": [experts array 2]}] 
	```
</example>

While debugging with Google Gemini, I found that JSON generation is unstable in non-English environments: sometimes the output JSON is not wrapped in a ``` code block, and other times the model outputs ```json instead, so it cannot be reliably extracted with reply.split('```')[1].split('```')[0].
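If you still want to extract JSON from an aggregated response, a somewhat more tolerant expression is sketched below for the outputs block. It assumes the JSON array is the only content enclosed in square brackets in the reply, so it simply grabs everything from the first [ to the last ], regardless of how (or whether) the code block is fenced:

"context.more_questions_str": "{{ reply.substring(reply.indexOf('['), reply.lastIndexOf(']') + 1) }}"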

When I found that extracting JSON from aggregate responses was unstable, I opted to use an additional LLM task to generate JSON data.

Multiple Tasks: Generate replies and JSON in separate tasks.

Refer to the prompt for generating JSON as follows:

Based on the discussion history, generate a JSON array containing two related questions the user may be interested in, along with a few relevant domain experts for further discussion on each question.

<constraints>
- The output should be ONLY a valid JSON array.
- Do not include any additional content outside the JSON array.
- Related question should NOT be the same as the original discussion topic.
</constraints>

<example>
	```json
	[
	  {
	    "question": "Sustainable Living",
	    "experts": [
	      "Environmental Scientist",
	      "Urban Planner",
	      "Renewable Energy Specialist"
	    ]
	  },
	  {
	    "question": "Mindfulness and Stress Management",
	    "experts": [
	      "Meditation Instructor",
	      "Therapist",
	      "Life Coach"
	    ]
	  }
	]
	```
</example>

Refer to Pro Config as follows, where context.prompt_json is the prompt for generating JSON:

……
  "tasks": [
    {
      "name": "generate_reply",
      "module_type": "AnyWidgetModule",
      "module_config": {
        "widget_id": "1744218088699596809", // claude3 haiku
        "system_prompt": "{{context.prompt}}",
        "user_prompt": "User's question is <input>{{question}}</input>. The response MUST use {{language}}. The fields/experts MUST include but not limit {{fields}}.",
        "output_name": "reply"
      }
    },
    {
      "name": "generate_json",
      "module_type": "AnyWidgetModule",
      "module_config": {
        "widget_id": "1744218088699596809", // claude3 haiku
        "system_prompt": "{{context.prompt_json}}",
        "user_prompt": "discussion history is <input>{{reply}}</input>. The "question" and "experts" value MUST use {{language}}",
        "max_tokens": 200, 
        "output_name": "reply_json"
      }
    }
  ],
  "outputs": {
    "context.last_discussion_str": "{{ reply }}",
    "context.more_questions_str": "{{ reply_json.replace('```json','').replace('```','') }}",
  },
……

The first task uses the LLM to create the discussion content. The second task reads that content, i.e. {{reply}}, uses the LLM to generate JSON for 2 related topics, strips the code-block markers with replace, and writes the resulting JSON string into the variable context.more_questions_str.

A small tip is to set "max_tokens": 200 to avoid generating overly long JSON.

Finally, this string is used to set the button description (description), and the index of the clicked button (target_index) is passed in the payload to drive the state transition.

……
  "buttons": [
    {
      "content": "New Question",
      "description": "Click to Start a New Question",
      "on_click": "go_setting"
    },
    {
        "content": "Related Topic 1",
        "description": "{{ JSON.parse(context.more_questions_str)[0]['question'] }}",
        "on_click": {
            "event": "discuss_other",
            "payload": {
              "target_index": "0"
            }
        }
    },
    {
        "content": "Related Topic 2",
        "description": "{{ JSON.parse(context.more_questions_str)[1]['question'] }}",
        "on_click": {
            "event": "discuss_other",
            "payload": {
              "target_index": "1"
            }
        }
      }
  ]
……
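
For reference, here is a sketch of the state that receives the discuss_other event. The state name discuss_other_state is hypothetical, and it assumes the payload value target_index is available as a local variable in the target state, as in the official function-calling example:

"discuss_other_state": {
  "tasks": [
    {
      "name": "generate_reply",
      "module_type": "AnyWidgetModule",
      "module_config": {
        "widget_id": "1744218088699596809", // claude3 haiku
        "system_prompt": "{{context.prompt}}",
        "user_prompt": "User's question is <input>{{ JSON.parse(context.more_questions_str)[target_index]['question'] }}</input>.",
        "output_name": "reply"
      }
    }
  ]
}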

The AI logo design application AIdea also uses this technique. AIdea generates JSON in a separate task, while the rest of the message is concatenated from context content and rendered at the end. In addition, AIdea displays the product name directly in the button content, unlike Think Tank ProConfig, which places it in the button description and therefore requires a mouse hover to view.

AIdea
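
A minimal sketch of that variant, reusing the Think Tank JSON structure but putting the parsed value into content instead of description:

{
  "content": "{{ JSON.parse(context.more_questions_str)[0]['question'] }}",
  "description": "Click to discuss this topic",
  "on_click": {
    "event": "discuss_other",
    "payload": {
      "target_index": "0"
    }
  }
}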

If the JSON structure is complex, GPT's function calling can also be used to generate it. Note that this only works with the GPT-3.5 and GPT-4 LLM widgets, as shown below:

……
  "tasks": [
      {
          "name": "generate_reply",
          "module_type": "AnyWidgetModule",
          "module_config": {
              "widget_id": "1744214024104448000", // GPT 3.5
              "system_prompt": "You are a translator. If the user input is English, translate it to Chinese. If the user input is Chinese, translate it to English. The output should be a JSON format with keys 'translation' and 'user_input'.",
              "user_prompt": "{{user_message}}",
              "function_name": "generate_translation_json",
              "function_description": "This function takes a user input and returns translation.",
              "function_parameters": [
                  {
                      "name": "user_input",
                      "type": "string",
                      "description": "The user input to be translated."
                  },
                  {
                      "name": "translation",
                      "type": "string",
                      "description": "The translation of the user input."
                  }
              ],
              "output_name": "reply"
          }
      }
  ],
  "render": {
      "text": "{{JSON.stringify(reply)}}"
  },
……

Output result:
result

For more detailed usage, please refer to the examples in the official ProConfig Tutorial.

Memory#

Use the following code to add the latest chat messages to memory and pass the updated memory to the LLM via the LLMModule's memory parameter, allowing it to respond based on previous interaction records.

 "outputs": {
    "context.memory": "{{[...memory, {'user': user_message}, {'assistant': reply}]}}"
  },

The official tutorial ends its description of the memory function here. Although the explanation is quite clear, there are still practical tips worth adding.

Bots created from prompts usually include memory functionality by default; suppressing this behavior requires additional prompt engineering. With Pro Config it is the opposite: memory is not built in by default and must be managed manually by the developer.

Here is a simple example of using memory in Pro Config:

{
    "type": "automata",
    "id": "memory_demo",
    "initial": "home_page_state",
    "context": {
      "memory": ""
    },
    "transitions": {
      "go_home": "home_page_state"
    },
    "states": {
      "home_page_state": {
        "render": {
          "text": "Welcome to this memory demo. Input anything to start!"
        },
        "transitions": {
          "CHAT": "chat_page_state"
        }
      },
      "chat_page_state": {
        "inputs": {
          "user_message": {
            "type": "IM",
            "user_input": true
          }
        },
        "tasks": [
          {
            "name": "generate_reply",
            "module_type": "AnyWidgetModule",
            "module_config": {
              "widget_id": "1744218088699596809",
              "system_prompt": "You are an assistant, you are programmed to respond directly, giving clear, concise, and direct responses.",
              "user_prompt": "{{user_message}}",
              "memory": "{{context.memory}}",
              "output_name": "reply"
            }
          }
        ],
        "outputs": {
          "context.memory": "{{[...context.memory, {'user': user_message}, {'assistant': reply}]}}"
        },
        "render": {
          "text": "{{reply}}\n\n**memory:**\n{{JSON.stringify(context.memory)}}\n\n**memory length:**\n{{context.memory.length}}"
        },
        "transitions": {
          "CHAT": "chat_page_state"
        }
      }
    }
  }

The result after running the first round is as follows:

memory

Then, after entering "next is 2", "third is 8", and "final is 23", the final output is as follows.

Okay, got it. The four numbers are:
1, 2, 8, 23

To compute 24 using these numbers and the operations +, -, *, and /:

(23 - 8) * 2 = 30

This gives us the result of 24.

**memory:**
[{"user":"I will give your four number, compute 24 use +-x/, first  number is 1"},{"assistant":"Okay, let's compute 24 using the four numbers you provide, with the operations +, -, *, and /.\n\nThe first number you provided is 1.\n\nPlease provide the remaining three numbers."},{"user":"next is 2"},{"assistant":"Alright, the numbers you've provided so far are:\n1, 2\n\nPlease provide the remaining two numbers."},{"user":"third is 8"},{"assistant":"Okay, the numbers are:\n1, 2, 8\n\nAnd the goal is to compute 24 using these 3 numbers and the operations +, -, *, and /.\n\nHere's one way to do it:\n\n(2 * 8) + 1 = 17\n\nThis gives us 24."},{"user":"final is 23"},{"assistant":"Okay, got it. The four numbers are:\n1, 2, 8, 23\n\nTo compute 24 using these numbers and the operations +, -, *, and /:\n\n(23 - 8) * 2 = 30\n\nThis gives us the result of 24."}]

**memory length:**
8

The memory recorded the inputs and outputs of the first four rounds of dialogue, totaling 8 entries.

In more complex tasks, a long dialogue history can cause memory to consume too many tokens and trigger errors, so memory needs to be managed.

In the compute-24 example, each assistant reply repeats all the numbers provided so far, so only the first user instruction and the latest output need to be stored. Change "context.memory": "{{[...context.memory, {'user': user_message}, {'assistant': reply}]}}" to

"context": {
	"memory": "",
	"user_task": null // Add a new context to store the initial instruction
},
……
"outputs": {
	"context.memory": "{{[{'user': context.user_task}, {'assistant': reply}]}}",
	"context.user_task": "{{context.user_task??user_message}}" // If user_task is null, use user_message; if not null, keep it unchanged
},

Running the same task yields the following output, with memory length consistently at 2.

Alright, the four numbers are:
1, 3, 8, 4

To compute 24 using +, -, *, and /, the solution is:

(1 + 3) * 8 / 4 = 24

**memory:**
[map[user:I will give your four number, compute 24 use +-x/, first  number is 1] map[assistant:Alright, the four numbers are:
1, 3, 8, 4

To compute 24 using +, -, *, and /, the solution is:

(1 + 3) * 8 / 4 = 24]]

**memory length:**
2

[map[ is the system's default display style for Map objects when JSON.stringify is not used.

The above example aims to illustrate the functionality of memory; please ignore the correctness of the output content.

In Think Tank ProConfig, I only need to remember the last round of discussion for format control, so the following code is sufficient; memory always holds just the last round:
"context.memory": "{{[{'user': target_question, 'assistant': reply + '\\n' + reply_json}]}}"

Other memory management strategies include:

  1. Retaining only the most recent few dialogue records; for example, using ...context.memory.slice(-2) writes back only the latest 2 history entries (see the sketch after this list).
  2. Categorizing memory storage by topic. The community's excellent creator ika uses "yuna_memory": "{{[]}}", "yuna_today_memory": "{{[]}}" to store the global and daily memories of the character yuna.
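
A minimal sketch of the first strategy (the window of 2 history entries is just an example):

"outputs": {
  "context.memory": "{{[...context.memory.slice(-2), {'user': user_message}, {'assistant': reply}]}}"
}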

Summary#

This article introduces some advanced techniques for writing Pro Config with MyShell, including variables, JSON generation, and memory.

If you want to learn more about MyShell, please check out the author's curated awesome-myshell.

If you are interested in Pro Config and want to become an AI creator, remember to sign up for the Learning Hub; click here to register.
