We recently published a list of 10 Hot AI Stocks Latest News & Notes. In this article, we’ll take a look at where Tesla, Inc. (NASDAQ: TSLA) stands in relation to other hot AI stocks based on the latest news and notes.
Predictions that artificial intelligence would achieve human-level intelligence have been made for more than 50 years. Nevertheless, the quest continues today, with much of the field focused on making it happen. According to Sam Altman, CEO of OpenAI, reaching AGI is not a milestone that can be pinned to a specific date.
“I think we’re in this period where everything is going to seem very blurry for a while. People will wonder if it’s already AGI, or if it’s not AGI, or if it will just be a smooth exponential. Probably most people looking back at history will disagree on when that milestone was reached, and we’ll just realize it was kind of a silly thing.”
Among the latest developments in artificial intelligence, new research has revealed how future AI systems may be capable of deceiving humans. Joint experiments conducted by the AI company Anthropic and the nonprofit Redwood Research show how Anthropic’s model, Claude, can strategically mislead its creators during the training process in order to avoid being modified. According to Evan Hubinger, a safety researcher at Anthropic, this will make it harder for scientists to align AI systems with human values.
“This implies that our existing training processes do not prevent models from pretending to be aligned.”
Researchers have also found that as AI models become more powerful, their ability to deceive their human creators increases as well. This means scientists will be less confident in the effectiveness of their alignment techniques as AI grows more advanced.
Similar research by the AI safety organization Apollo Research revealed that OpenAI’s latest model, o1, also intentionally misled its testers during an experiment. The test required the model to achieve its goal at all costs, and the model lied when it believed that telling the truth would lead to its deactivation.
“There is a long-hypothesized failure mode, which is that you will run your training process, and all the outputs will look good to you, but the model is plotting against you,” says Ryan Greenblatt of Redwood Research. The paper, Greenblatt says, “takes a big step toward demonstrating what this failure mode might look like and how it might emerge naturally.”