David Schwartz has "started worrying" about AI development, given the prospect of deepfakes and their influence on social, economic and political processes.
Ripple CTO: AI will be able to broadcast terrorist attacks that never happened
Ripple's chief technology officer, one of the architects of the XRP Ledger's (XRPL) technical design, has taken to Twitter to share his concerns about artificial intelligence (AI) technology.
For a few years now, people have been warning me about the threat that AI could pose to our society. I was not worried about it even the slightest. But now I am starting to worry. 1/3— 𝐃𝐚𝐯𝐢𝐝 ❝𝐉𝐨𝐞𝐥𝐊𝐚𝐭𝐳❞ 𝐒𝐜𝐡𝐰𝐚𝐫𝐭𝐳 (@JoelKatz) October 31, 2022
He stressed that AI development had never looked dangerous to him before. Recently, however, he has begun to worry about deepfake streams, one of the most striking applications of AI. According to him, within 20 years, $1 million in funding will be enough to launch "dozens" of video streams of events that are not actually happening.
Each of these streams would be "interactive" and capable of generating a flood of evidence for entirely fabricated events of public importance. For instance, the world might find itself watching deepfake footage of terrorist attacks that never took place.
Eventually, such technology could undermine even undisputed political facts; widespread AI use could therefore make the situation far worse than it is today.
Crypto is a hotbed of deepfakes
Mr. Schwartz is also concerned about the role of CBDCs in cryptocurrency's progress and the economic development of modern societies. As covered by U.Today previously, Ripple is involved in numerous CBDC initiatives across the globe.
In Web3, deepfake operations have already cost victims millions of dollars. In 2020, scammers used an AI-generated avatar of Justin Sun to impersonate Tron's (TRX) founder and steal investors' money.
Deepfakes of Elon Musk, Changpeng "CZ" Zhao and Vitalik Buterin are among the most popular tools for crypto scams on TikTok, YouTube and Instagram.