

Content

This is for video: https://youtu.be/9-PxJcrZLGo

In this video we will review an experimental memory idea I had, which uses dynamic and parallel summarization along with secondary memory to keep the main memory for the GPT 4 chatbot much smaller than regular memory  Code files are available  at Patreon(ALL FILES): https://www.patreon.com/posts/94591199

Everything GPT API Masterclass: https://www.patreon.com/posts/everything-gpt-8-92436797

Poll AI People: https://www.askaipeople.com/

CodeHive 900+ GPT python chat apps: https://www.codehive.app/

Search 200+ echohive videos and code download links: https://www.echohive.live/

Quick start if you are new to coding and GPT API: https://youtu.be/YMhsatiXiGc

Voice controlled Auto AGI with swarm and multi self launch capabilities: https://youtu.be/zErt3Tp7srY

Auto AGI original video: https://youtu.be/jTC-6kBOfn8

Auto AGI original source code: https://www.patreon.com/posts/87530987

Chat with us on Discord: https://discord.gg/PPxTP3Cs3G

Follow on twitter(X): https://twitter.com/hive_echo


Comments

Axel

Thanks for this interesting project. I guess you could always show the full long_messages to the user and use the summarised_messages only in the background for the OpenAI chat. Or even switch between showing summarised_msg or long_msg through a command (like /fullmsg=on or off). I'd see benefits for both, depending on the use case.

echohive42

Yeah, full messages are displayed to the user directly from the API. The JSON files are only there to keep track of what is happening in the background.

Rasika Singal

I do not see the file step_2_main.py that is shown in the video?