Disclaimer: The opinions expressed by our writers are their own and do not represent the views of U.Today. The financial and market information provided on U.Today is intended for informational purposes only. U.Today is not liable for any financial losses incurred while trading cryptocurrencies. Conduct your own research by contacting financial experts before making any investment decisions. We believe that all content is accurate as of the date of publication, but certain offers mentioned may no longer be available.
Binance, the leading cryptocurrency exchange, recently revealed that it has been targeted by a ChatGPT misinformation campaign. The false information was generated by the Default (GPT-3.5) model, which described Changpeng Zhao (CZ), Binance's CEO, as a former employee of the Chinese state-owned company PetroChina during the 1990s. The claim was supported by a link to a fake Forbes report and fabricated LinkedIn profiles.
Other ChatGPT models, however, returned the correct information. Binance shared the ChatGPT thread on social media, inviting crypto and AI enthusiasts to investigate further. Closer examination confirmed that the LinkedIn profile was fake and the Forbes article nonexistent, further discrediting the claim.
Despite the targeted misinformation, Binance remains committed to exploring the potential of AI technology in the crypto industry. The company has been investing significant time and resources in researching how AI can empower users on its platform and how blockchain technology can address existing gaps in AI-based information verification. Binance believes that AI will be a transformative technology in the years to come.
However, as with any emerging technology, bad actors will attempt to exploit it for personal profit or political gain, especially during its early stages. By raising awareness of incidents like the misinformation campaign against Binance and CZ, the company hopes to discourage people from taking AI-generated claims at face value, particularly when those claims are used to disparage others.
ChatGPT and other GPT-based chat models often produce inaccurate information and cite nonexistent sources, which is why users working with AI language models should always double-check the information these tools provide. As OpenAI's own disclaimer puts it, "ChatGPT may produce inaccurate information about people, places, or facts."