How to live within the prompt size constraints of #LLM #chatbots such as #ChatGPT?
When automating practically any real-world task, the amount of contextual knowledge required can be astounding. Meanwhile, the current ChatGPT caps prompt plus output at 4,000 tokens, and quality degrades sharply long before that limit is reached.
The solution is simple: decompose and structure the task into independent components that can be handled one by one in a conditional tree.
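In code, such a conditional tree can be as simple as nested yes/no nodes that end in leaf prompts. Here is a minimal sketch in Python; the `DecisionNode` and `Leaf` names are hypothetical, not from any library:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    # Prompt asking for a conclusion on this one narrow subtopic only.
    conclusion_prompt: str

@dataclass
class DecisionNode:
    # Yes/no question that routes the input deeper into a subtopic.
    question: str  # e.g. "Does this relate to billing?"
    yes: Union["DecisionNode", Leaf]
    no: Union["DecisionNode", Leaf]
```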
You take the input to the process and ask ChatGPT: "Does this relate to topic X? Answer only yes or no."
You capture the answer and, depending on it, descend into that subtopic, each time in a new, clean session. When you reach a "leaf" of the decision tree, you ask ChatGPT to produce a conclusion for that one thing only.
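Walking the tree then looks roughly like this; `ask_llm()` is a placeholder for whatever chatbot API you use, and each call stands for a brand-new session with no carried-over history:

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: wire this up to your chatbot API of choice.
    # Each call must be a fresh session, so no context leaks between questions.
    raise NotImplementedError

def walk(node, user_input: str) -> str:
    # Descend through the tree, one yes/no question per clean session.
    while isinstance(node, DecisionNode):
        answer = ask_llm(
            f"{node.question}\n\nInput:\n{user_input}\n\nAnswer only yes or no."
        )
        node = node.yes if answer.strip().lower().startswith("yes") else node.no
    # Reached a leaf: ask for a conclusion on this one subtopic only.
    return ask_llm(f"{node.conclusion_prompt}\n\nInput:\n{user_input}")
```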
You collect the conclusions across many sessions, in aggregate applying far more contextual knowledge to the task than any single prompt could hold, and finally combine the sub-results into the whole result, either with ChatGPT or procedurally.
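The combining step can be just as small. This sketch assumes the `walk()` and `ask_llm()` placeholders above; `component_trees`, a list of independent subtrees, is likewise hypothetical:

```python
def combine(conclusions: list[str]) -> str:
    # Procedural option: just join the pieces. LLM option: one final session.
    bullet_list = "\n".join(f"- {c}" for c in conclusions)
    return ask_llm(
        "Combine the following independent conclusions into one coherent "
        f"result:\n{bullet_list}"
    )

# Each independent component gets its own subtree and its own clean sessions.
conclusions = [walk(root, user_input) for root in component_trees]
final_result = combine(conclusions)
```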
To do this effectively you need to #document your domain in a hierarchical fashion, in bite-sized, understandable pieces. This doesn't only help #AIs; it also helps new #employees.
In the future, this encoded knowledge base will become a core asset of any #business. No better time to start #documenting your #processes, #procedures and #tasks than now!