Protect the quantifiers in ChatGPT Answer Generation
In one-shot prompting with an LLM such as ChatGPT, you first write a question-answer pair, and then a second question with certain parts replaced, for example, the subject or the object. When the model auto-completes, it produces an answer modified to stay relevant to the new question.
For example,
How to make an IEP goal about autism?
By XX/X/XXXX, given a window of 4-6 days, the teacher would evaluate autism and observe the score improve over last year by 10%.
How to make an IEP goal about language skills?
If you send the above prompt to ChatGPT, you will likely get an answer like this:
By XX/X/XXXX, given a window of 4-6 days, the teacher would
evaluate language skills and observe the score improve over last year
by 10%.
It feels magical: you seem to have gained a helper that, whenever you modify the initial question, automatically adjusts the rest of the answer to stay consistent with it.
But in reality it does not always work. Sometimes ChatGPT makes modifications for "no reason," such as changing "some" to "all" simply because that continuation has a higher probability (it reads "nicer"). If you have no way to regulate this, you are left accepting that ChatGPT can make errors that change the meaning, in exactly the ways that are hardest for a human to spot.
However, if we attach a formal statement that expresses the meaning of the neighboring natural language, ChatGPT will apply the transformation to it as well, and the output is still a formal statement. We can then leverage a proof assistant to check it.
Here is Montague's core semantics (just a tiny fraction of it) implemented in Coq below:
(* Notations: \l is lambda abstraction, \V the universal quantifier, \E the existential. *)
Notation "\l ( x : A ) ( P )" := (fun x : A => P) (at level 0, x at level 99).
Notation "\V ( x : A ) ( P )" := (forall x : A, P:Prop) (at level 0, x at level 99).
Notation "\E ( x : A ) ( P )" := (exists x : A, P: Prop) (at level 0, x at level 99).
(* PN: proper nouns; CN: common nouns (unused in this fragment). *)
Parameter PN: Set.
Parameter CN: Set.
(* P': one-place predicates over proper nouns. *)
Definition P' := PN->Prop.
Parameter John: PN.
Parameter sing: PN->Prop.
Parameter man: PN->Prop.
Parameter P: Prop.
(* Generalized quantifiers: each takes a restrictor P and a scope Q. *)
Definition every:= \l (P: P')( \l (Q: P') ( \V (x:PN) ( P x-> Q x ))).
Definition some:= \l (P: P')( \l (Q: P') ( \E (x:PN) ( P x /\ Q x ))).
Definition no:= \l (P: P')( \l (Q: P') ( ~ \E (x:PN) ( P x /\ Q x ))).
(* "John" as a noun phrase, and example sentences. *)
Definition john_np := \l (P: P') ( P (John ) ).
Definition john_sing := \l (P: P') ( P (John ) )(sing).  (* "John sings." *)
Definition every_man_sing := (every man) sing.  (* "Every man sings." *)
Definition some_man_sing := (some man) sing.    (* "Some man sings." *)
Definition no_man_sing := (no man) sing.        (* "No man sings." *)
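As a sketch of how the proof assistant catches a silently flipped quantifier: suppose ChatGPT changed "some" to "no" in a generated passage. The two formal statements the passage would then carry are contradictory, and Coq can prove it (this lemma is my illustration, not part of the original fragment):

```coq
(* If "some man sings" and "no man sings" both appear, Coq derives False:
   no_man_sing is, by definition, the negation of some_man_sing. *)
Lemma some_no_conflict : some_man_sing -> no_man_sing -> False.
Proof.
  intros Hsome Hno.
  unfold no_man_sing, no, some_man_sing, some in *.
  exact (Hno Hsome).
Qed.
```

More generally, simply re-checking the transformed statement against the original (by diffing or by attempting the expected proof) flags any quantifier that was changed along the way.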
You can verify this with an online Coq interface such as jsCoq.
Whenever the passage mentions "every," "some," or "no," we can put the corresponding formal statement in the same passage. That way, if ChatGPT does change "every" to "no," we will be able to catch it.
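A hypothetical sketch of such an annotated passage, assuming the definitions above are loaded (the pairing of sentence and formal statement is my illustration):

```coq
(* Passage: "Every man sings."
   Annotated with its formal statement from the fragment above: *)
Check every_man_sing : Prop.
(* every_man_sing unfolds to: forall x : PN, man x -> sing x.
   If ChatGPT rewrote the sentence using "no", the transformed
   annotation would be no_man_sing instead, and type-checking or
   diffing the annotation against the original exposes the
   flipped quantifier. *)
```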