Vitalik Buterin has been trying out some fresh AI models, including Llama 3 and Mixtral. It looks like he is having a bit of fun checking out how these new AI brains stack up against the industry-leading ChatGPT. He has run some tests and shared bits and pieces showing how these models perform when programming simple applications, such as a Python-based weather app.
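Buterin has not published the exact prompt or the generated code, so the snippet below is only an illustrative sketch of the kind of small weather program the models were asked to produce. The choice of the free Open-Meteo API and the example coordinates are assumptions made for this illustration, not details from his tests.

```python
# Illustrative sketch only: the actual code generated in Buterin's tests was not published.
# Uses the free Open-Meteo API (no API key required); the coordinates are arbitrary.
import json
import urllib.request


def fetch_current_weather(latitude: float, longitude: float) -> dict:
    """Return the current weather for the given coordinates as a dict."""
    url = (
        "https://api.open-meteo.com/v1/forecast"
        f"?latitude={latitude}&longitude={longitude}&current_weather=true"
    )
    with urllib.request.urlopen(url, timeout=10) as response:
        payload = json.load(response)
    return payload["current_weather"]


if __name__ == "__main__":
    # Berlin is used here purely as an example location.
    weather = fetch_current_weather(52.52, 13.41)
    print(f"Temperature: {weather['temperature']} °C")
    print(f"Wind speed:  {weather['windspeed']} km/h")
```

A script of roughly this size and shape is exactly the sort of task where current models shine: short, well-documented APIs and no tricky architecture.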
His recent foray into AI fits his character, as he is known for his curiosity and constant research projects. His hands-on approach keeps him a key figure in guiding the Ethereum platform's evolution. He appears to have brought that same rigor to examining the capabilities of AI, a field that is rapidly transforming the tools we know and use every day.
In a series of tests, he noted that while ChatGPT completed the task accurately and delivered a working product, Llama 3 was hot on its heels, despite some minor issues.
One way or another, each model delivered semi-working code that could have been turned into a functional application. As of now, AIs are actively used by software developers of all kinds, practically becoming a right hand in the creation of complicated programs and apps. However, it is not yet possible to build something as complex as the Ethereum ecosystem using AI alone.
Who knows, maybe we will see Ethereum using some AI in the future? With Buterin always on the lookout for the "next big thing," we are in for some exciting stuff. For now, he has shown us that AIs, like people, are not perfect, and there is always room to grow. These models can be used as tools for creating sophisticated solutions, but they cannot yet substitute for a high-tier human developer.