Leveraging ChatGPT for Technical Study: Ensuring Reliable Answers and Mitigating Bias
Recently it has become one of my habits to take ChatGPT's help in my studies, as I can ask specific technical questions and get answers that are elaborate and easy to understand.
The questions span a wide range of topics, for example:
Explain Fibonacci circles in trading.
Explain the Linux thermal management system.
I got responses to almost all of these questions with enough detail and accuracy, and in far less time than a normal web search would take. This built up trust in, and dependency on, ChatGPT, and made me forget that an AI language model can be biased. More importantly, a general-purpose model's training data is sparse on specialized topics, so its answers to niche questions can be misleading.
Let me explain a case here.
I asked a very common Android question:
Explain the Android boot sequence as of API level 32.
There was a mistake in the response: it stated that the system server starts the zygote, not the other way round.
This confused me because, as far as I knew, it is the zygote that starts the system server, so I followed up with a question to confirm.
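For reference, the order documented in AOSP is: the kernel starts init, init starts the zygote (via its init.rc configuration), and the zygote preloads the framework and then forks the system server. A minimal sketch of that documented boot chain (a simplified illustration, not real platform code):

```python
# Simplified model of the Android boot chain, for illustration only.
# Each stage launches the next; the zygote forks system_server, not vice versa.
BOOT_CHAIN = [
    "bootloader",     # loads and starts the kernel
    "kernel",         # mounts the root filesystem, starts the first userspace process
    "init",           # parses init.rc, starts native daemons and the zygote
    "zygote",         # preloads the Java framework, then forks system_server
    "system_server",  # hosts ActivityManager, PackageManager, and other services
]

def starts(parent: str, child: str) -> bool:
    """True if `parent` comes immediately before `child` in the boot chain."""
    i, j = BOOT_CHAIN.index(parent), BOOT_CHAIN.index(child)
    return j == i + 1

print(starts("zygote", "system_server"))  # → True: the zygote forks system_server
print(starts("system_server", "zygote"))  # → False: not the other way round
```

You can also check this on a real device: in the output of `adb shell ps -A`, the PPID of `system_server` is the PID of the zygote process.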
Does the zygote start the system server, or does the system server start the zygote?
The AI model still defended its answer, insisting that it is the system server that starts the zygote, not the other way round.
Surprisingly, when my colleague had asked ChatGPT the same question earlier, the answer was correct.
So the question arises: what can we do to ensure that the answers we get are correct?
In fact, there is no way to guarantee the credibility of the answers 100%, but we can follow one simple rule that goes a long way toward solving the problem.