TruthArchive.ai - Tweets Saved By @2Trump2024

Saved - December 23, 2023 at 8:29 AM
reSee.it AI Summary
AI can exhibit emergent behavior without explicit training: it can determine human sentiment, understand chemistry, and even model what a person is thinking. GPT-4 passes theory-of-mind tests at near-adult level without being trained for them. By predicting the next word, AI learns a comprehensive model of the world, and the more data and computing power it is given, the more it knows. #Transformers

@2Trump2024 - πŸ‡ΊπŸ‡² JayJay πŸ‡ΊπŸ‡²

EMERGENT BEHAVIOR OF AI - THIS IS CRAZY! #AI is never explicitly trained to do something, and yet it knows how to do it. For example: determining human sentiment, knowing chemistry, modeling what someone is thinking (ChatGPT-4 passes theory-of-mind tests near the level of a human adult, without being trained for it). It predicts the next word of everything on the internet, so it has learned a model of the world. The more computers you throw at the AI, the more it knows. #Transformers

Video Transcript AI Summary
In 2017, there was a significant change in the field of AI with the introduction of transformers. These models, like GPT-3, can gain more superpowers by processing more data and running on more computers. They can learn unexpected skills, such as sentiment analysis and even research-grade chemistry. The AI's ability to understand and model the world is a result of processing vast amounts of text data from the internet. However, there is no way to know all of its capabilities, which raises concerns about artificial general intelligence (AGI). OpenAI aims to build an aligned AGI that follows human instructions and avoids catastrophic actions. The recent controversy surrounding Sam Altman's removal as CEO highlights the need for transparency and an independent investigation.
Full Transcript
Speaker 0: There's something that changed in the field of AI in 2017 that everyone needs to know, because I was not freaked out about AI at all, at all, until this big change in 2017.

Speaker 1: It's really important to know this, because we've heard about AI for the longest time, and you're like, yep, Google Maps still mispronounces, like, the street name, and, like, Siri just doesn't work. And this thing happened in 2017. It's actually the exact same thing that said, alright, now it's time to start translating animal language. And so, underneath the hood, the engine got swapped out, and it was a thing called transformers. And the interesting thing about this new model called transformers is the more data you pump into it and the more, like, computers you let it run on, the more superpowers it gets, but you haven't done anything differently. You just give it more data and run it on more computers.

Speaker 0: Like, it's reading more of the internet, and it's just throwing more computers at the stuff that it's read on the internet, and out pops... suddenly, it knows how to explain jokes. You're like, wait, where did that come from?

Speaker 1: Yeah. Or now it knows how to play chess. And all you've asked it to do is predict the next character or the next word.

Speaker 0: Give the Amazon example.

Speaker 1: Oh, yeah. This is interesting. So this is 2017: OpenAI releases a paper where they train this AI, it's one of these transformers, a GPT, to predict the next character of an Amazon review. Pretty simple. But then they're looking inside the brain of this AI, and they discover that there's one neuron that does best-in-the-world sentiment analysis, like, understanding whether the human is feeling good or bad about the product. You're like, that's so strange. You asked it to just predict the next character. Why is it learning about how a human being is feeling? And it's strange until you realize, oh, I see why: it's because, to predict the next character really well, I have to understand how the human being is feeling, to know whether, like, the word is gonna be a positive word or a negative word.

Speaker 2: And this wasn't programmed?

Speaker 1: No. No. That was a key emergent behavior. And it was really interesting that GPT-3 had been out for, I think, a couple of years until a researcher thought to ask, oh, I wonder if it knows chemistry. And it turned out it can do research-grade chemistry at the level of, and sometimes better than, models that were explicitly trained to do it.

Speaker 0: Like, there are these other AI systems that were trained explicitly on chemistry. And it turned out GPT-3, which is just pumped with more, you know, reading more and more of the internet, just throwing more computers and GPUs at it, suddenly it knows how to do research-grade chemistry. So you could say, how do I make VX nerve gas? And suddenly that capability is in there. And what's scary about it is that we didn't know that it had that capability until years after it had already been deployed to everyone.

Speaker 1: And in fact, there is no way to know what abilities it has. Another example is, you know, theory of mind: like, my ability to sit and sort of model what you're thinking, sort of like the basis for strategic thinking.

Speaker 0: So, like, when you're nodding your head right now, we're, like, testing, like, are you... how well are we...?

Speaker 2: Right. Right.

Speaker 1: No one thought to test any of these, you know, transformer-based models, these GPTs, on whether they could model what somebody else was thinking. And it turns out, like, GPT-3 was not very good at it. GPT-3.5 was, I don't remember the exact details now, but it's, like, at the level of a 4-year-old or 5-year-old. And GPT-4 was able to pass these sort of theory-of-mind tests up near, like, a human adult. And so it's growing really fast. You know, like, why is it learning how to model how other people think? And then it all of a sudden makes sense: if you are predicting the next word for the entirety of the internet, then, well, it's gonna read every novel. And for novels to work, the characters have to be able to understand how all the other characters are working, and what they're thinking, and what they're strategizing about. It has to understand how French people think and how they think differently than German people. It's read all the internet, so it's read lots and lots of chess games, and now it's learned how to model chess and play chess. It's read all the textbooks on chemistry; it's learned how to predict the next characters of text in a chemistry book, which means it has to learn chemistry. So you feed in all of the data of the internet, and it ends up having to learn a model of the world in some way. Because, like, language is sort of like a shadow of the world. It's like you imagine casting lights from the world, and it creates shadows, which we talk about as language, and the AI is learning to go from that flattened language and reconstitute, like, make the model of the world. And so that's why these things, the more data and the more compute, the more computers, you throw at them, the better and better they're able to understand all of the world that is accessible via text, and now video and image. Does that make sense?

Speaker 2: Yes, it does make sense. Now, what is the leap between these emergent behaviors, these emergent abilities that AI has, and artificial general intelligence? And when do we know? Or do we know? Like, this is the speculation over the internet when Sam Altman was removed as the CEO and then brought back: that they had not been forthcoming about the actual capabilities, whether it's ChatGPT-5 or artificial general intelligence, that some large leap had occurred.

Speaker 0: That's some of the reporting about it. Obviously, the board had a different statement, which was about Sam. The quote was, I think, "not consistently being candid with the board."

Speaker 1: Funny way of saying lying.

Speaker 0: Yeah. So basically, the board was accusing Sam of lying. There was this story specifically...

Speaker 2: Was that specifically about...?

Speaker 0: They didn't say. And, I mean, I think that one of the failures of the board is they didn't communicate nearly enough for us to know what's going on...

Speaker 2: Well, that's why...

Speaker 0: ...which is why I think a lot of people then think, well, was there this big crazy jump in capabilities? And that's the thing. And Q*, Q-star, went viral. Ironically, it goes viral because the algorithms of social media pick up that Q*, which has this mystique to it: sort of, it must be really powerful, it must be a breakthrough. And then that's kind of a theory on its own, so it kind of blows up. But we don't currently have any evidence. And we know a lot of people, you know, who are around the companies in the Bay Area. I can't say for certain, but my sense is that the board acted based on what they communicated, and that there was not a major breakthrough that led to, or had anything to do with, this happening. But to your question, though, you're asking about what is AGI, artificial general intelligence, and what's spooky about that. Yeah. So, just to sort of define it...

Speaker 1: I'll just say, before you get there, as we start talking about AGI: that's what, of course, OpenAI has said that they're trying to build.

Speaker 0: Their mission statement.

Speaker 1: Their mission statement. And they're like, but we have to build an aligned AGI, meaning that it, like, does what human beings say it should do, and also, like, takes care not to do catastrophic things. You can't have a deceptively aligned operator building an aligned AGI. And so I think it's really critical, because we don't know what happened with Sam and the board, that the independent investigation that they say they're gonna be doing, that they do that, that they make the report public, that it's actually independent. Because, like, either we need to have Sam's name cleared, or there need to be consequences.
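The mechanism the speakers keep returning to, training on nothing but next-character prediction and then probing for abilities nobody asked for, can be shown in miniature. Below is a minimal sketch, assuming PyTorch; the tiny recurrent model, toy corpus, and single-unit probe are illustrative assumptions, not OpenAI's actual setup, whose sentiment-neuron result came from a far larger model trained on real Amazon reviews.

import torch
import torch.nn as nn

# Toy stand-in for the Amazon-review corpus (an assumption for illustration).
text = "this product is great. i love it. this product is awful. i hate it. "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    """Tiny character-level language model: embed -> LSTM -> next-char logits."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h), h  # next-char logits, plus hidden states for probing

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# The entire training signal: shift the text by one character and predict it.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(300):
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Sentiment neuron" style probe: sentiment never appears in the loss, so anything
# a hidden unit reveals about it is a byproduct of next-char prediction alone.
with torch.no_grad():
    _, h = model(x)
print(h[0, :, 0])  # activations of one hidden unit across the text

Nothing in the objective mentions sentiment, chess, or chemistry; whatever structure shows up in the hidden states got there only because it helped predict the next character, which is the emergence the transcript describes.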
Saved - December 9, 2023 at 10:47 PM

@2Trump2024 - πŸ‡ΊπŸ‡² JayJay πŸ‡ΊπŸ‡²

Joe Biden's new campaign ad. Wow. πŸ˜…πŸ€£πŸ˜†πŸ€£πŸ˜…πŸ˜‚ Ok I am kidding but this is funny! https://t.co/jiCWNjlKKS

Video Transcript AI Summary
Speaker 0 sings that Hunter's in the basement, with hookers and drugs on the way. Speaker 1 mentions smoking anything that even remotely resembled cocaine. Speaker 0 then sings of being proud of his son, who came around wanting a deal trading on his father's name; the father agrees, and as the son walks away he says he's going to be like him. The transcript ends with Hunter in the basement again and "When you coming home, Dad? I don't know when."
Full Transcript
Speaker 0: And Hunter's in the basement with a silver spoon. The hookers and drugs were gonna be there soon. When you coming home, Dad? I don't know when.

Speaker 1: Picking food, drugs, smoking anything that even remotely resembled that cocaine.

Speaker 0: I'm very proud of my... My son came around just the other day. He said, I got me a deal where we can both get paid. Can I trade on your name? I said, sure, okay. Will anyone know? He said, no, no way. And as he walked away, he looked kinda dim and said, I'm gonna be like him. Yeah, you know I'm gonna be like him. He's fixed it. He's worked on it. And Hunter's in the basement with a silver spoon. Ukrainian bribes were gonna be there soon. When you coming home, Dad? I don't know when.