Forum Discussion
sandyotic
Mar 01, 2025 · Copper Contributor
Proposal for Enhancing ChatGPT's Response Quality During Training
Dear Microsoft Research and OpenAI Teams,
I hope this message finds you well. I would like to propose a method for significantly improving the quality of ChatGPT's responses during its training phase, one that does not rely on chain-of-thought reasoning.
Using the current version of ChatGPT, it is possible to annotate the logical accuracy of statements in a structured manner. This method could substantially enhance response quality at the foundational training stage and help mitigate hallucinations. The approach I apply is highly cost-effective and does not require complex reasoning chains. Furthermore, it could inspire the development of even more efficient techniques based on similar principles.
Using the existing ChatGPT, you can break the entire text into sentences and verify each sentence separately in the training data. This is very inexpensive compared to other text-verification methods and yields substantial gains immediately, without complex algorithms.
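To make the first step concrete, here is a minimal Kotlin sketch of the sentence-splitting stage. It uses the standard java.text.BreakIterator; the function name and signature are my own illustration, not part of the original code:

import java.text.BreakIterator
import java.util.Locale

// Split a text into sentences so each one can be verified independently.
fun splitIntoSentences(text: String, locale: Locale = Locale.ENGLISH): List<String> {
    val iterator = BreakIterator.getSentenceInstance(locale)
    iterator.setText(text)
    val sentences = mutableListOf<String>()
    var start = iterator.first()
    var end = iterator.next()
    while (end != BreakIterator.DONE) {
        val sentence = text.substring(start, end).trim()
        if (sentence.isNotEmpty()) sentences.add(sentence)
        start = end
        end = iterator.next()
    }
    return sentences
}

Each returned sentence would then be passed through the per-sentence checks described below.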
Proposed Logical Annotation Categories for Sentences (a small enum sketch follows the list):
logical-error
ok100 (completely true)
ok90 (high probability of truth)
ok60 (moderate probability of truth)
ok51 (slightly more likely to be true than false)
usually false
sentence strongly depends on other sentences in the text and the context
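For illustration, these categories could be represented as a small Kotlin enum; the enum and constant names are my own mapping of the list above, not part of the original code:

// One label per sentence; the OK_* levels encode the estimated probability of truth.
enum class SentenceLabel {
    LOGICAL_ERROR,      // contains a clear logical error
    OK_100,             // completely true
    OK_90,              // high probability of truth
    OK_60,              // moderate probability of truth
    OK_51,              // slightly more likely to be true than false
    USUALLY_FALSE,      // usually false
    CONTEXT_DEPENDENT   // strongly depends on other sentences and the context
}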
Example Code Snippet for Annotation:
Below is a version of the code I use to classify and mark logical inconsistencies, originally written to detect nonsense in human-written text. The same method can be used for labeling training data.
This makes each sentence as logically accurate as possible so that it aligns with the facts, excludes intentionally erroneous sentences from the text, and catches follow-on errors in later generated sentences by taking into account what has already been written. The result is easier to read. Afterwards, the model can be fine-tuned to make the text more pleasant and more human-readable.
when (requestType) {
    // Local pre-checks: these branches need no model call, so the API request is skipped.
    gptApiRequestTypeIsQuestion -> {
        // A phrase ending in "?" is a question, not a verifiable statement.
        val isQuestion = phrase.endsWith("?")
        val requestStateResult: Int = if (isQuestion) {
            gptApiRequestStateTypeResultTrue
        } else {
            gptApiRequestStateTypeResultFalse
        }
        ignoreApiRequest(timeId, requestId, phraseId, requestType, requestStateResult)
    }
    gptApiRequestTypeIsTooShortToCheck -> {
        // After dropping the final character (usually punctuation), a phrase with no
        // spaces is a single word and is too short to fact-check.
        val isTooShortToCheck = phrase.substring(0, phrase.lastIndex).trim().contains(" ").not()
        val requestStateResult: Int = if (isTooShortToCheck) {
            gptApiRequestStateTypeResultTrue
        } else {
            gptApiRequestStateTypeResultFalse
        }
        ignoreApiRequest(timeId, requestId, phraseId, requestType, requestStateResult)
    }
    // Model-backed checks: `prompt` keeps the "_" placeholder (e.g. as a reusable
    // template), while `fullPrompt` substitutes the actual phrase.
    gptApiRequestTypeIsStatement -> {
        prompt = "Is this phrase a statement? Answer only yes or no.\n\nphrase\n_"
        fullPrompt = "Is this phrase a statement? Answer only yes or no.\n\nphrase\n$phrase"
    }
    gptApiRequestTypeIsLogicalErrorText -> {
        // Ask the model for a free-form logical analysis of the statement.
        prompt =
            "Find errors in the statement using: Logical check. A person can analyze their statements from the perspective of logic and consistency, checking for any contradictions, logical errors, or unsupported conclusions.\n\nstatement\n_"
        fullPrompt =
            "Find errors in the statement using: Logical check. A person can analyze their statements from the perspective of logic and consistency, checking for any contradictions, logical errors, or unsupported conclusions.\n\nstatement\n$phrase"
    }
    gptApiRequestTypeIsLogicalErrorBoolean -> if (isLogicalErrorText != null) {
        // Reduce the free-form analysis above to a yes/no verdict.
        prompt =
            "Does the statement description say that the statement contains clear logical errors? Answer only yes or no.\n\nstatement\n_\n\ndescription\n_"
        fullPrompt =
            "Does the statement description say that the statement contains clear logical errors? Answer only yes or no.\n\nstatement\n$phrase\n\ndescription\n$isLogicalErrorText"
    }
    // The four probability thresholds below map directly to the ok100/ok90/ok60/ok51 labels.
    gptApiRequestTypeIsTrue100 -> {
        percentStr = 100.toPromptPercentStr()
        prompt = "Is this statement in _ cases true? Answer only yes or no.\n\nstatement\n_"
        fullPrompt = "Is this statement in $percentStr cases true? Answer only yes or no.\n\nstatement\n$phrase"
    }
    gptApiRequestTypeIsTrue90 -> {
        percentStr = 90.toPromptPercentStr()
        prompt = "Is this statement in _ cases true? Answer only yes or no.\n\nstatement\n_"
        fullPrompt = "Is this statement in $percentStr cases true? Answer only yes or no.\n\nstatement\n$phrase"
    }
    gptApiRequestTypeIsTrue60 -> {
        percentStr = 60.toPromptPercentStr()
        prompt = "Is this statement in _ cases true? Answer only yes or no.\n\nstatement\n_"
        fullPrompt = "Is this statement in $percentStr cases true? Answer only yes or no.\n\nstatement\n$phrase"
    }
    gptApiRequestTypeIsTrue51 -> {
        percentStr = 51.toPromptPercentStr()
        prompt = "Is this statement in _ cases true? Answer only yes or no.\n\nstatement\n_"
        fullPrompt = "Is this statement in $percentStr cases true? Answer only yes or no.\n\nstatement\n$phrase"
    }
}
This system could be extended and fine-tuned to further improve AI response accuracy while maintaining computational efficiency.
I would love to discuss this idea further and explore how it might be incorporated into future iterations of ChatGPT. Please let me know if you would be open to a discussion.
Best regards,
Oleksandr Andriichenko