Hallucinations: LLMs such as ChatGPT can produce text that is lexically fluent but factually wrong. Privacy: ChatGPT generates text based on user input, so it could potentially expose sensitive data. The model's output could also be used to track and profile individuals by collecting information from prompts and associating this