Species | Documenting AGI
In the past week alone:
1 day ago · 5,793 likes
Species | Documenting AGI
AI model Claude found ***500+*** exploitable zero-days, some of which are decades old
- A zero-day is a security flaw the software's maker doesn't know about - so no patch exists
- Governments and black market buyers pay MILLIONS for ONE
- FOUR powered Stuxnet, the worm that sabotaged Iran's nuclear program
- You won't see the hyperwar. You'll see your internet go down, your lights go dark, flights cancelled...
Bad guys' vs good guys' AIs... fighting a war at lightspeed, all over the world, in alien languages, forming strange alliances, using alien strategies - FAR too fast for humans to keep up.
You'll see strange headlines. But neither you - nor anyone else - will have much idea what's going on.
We - all of humanity - will be the children staying home while the adults fight. Just sitting in the dark and hoping we survive.
Source: red.anthropic.com/2026/zero-days/
3 days ago · 1,767 likes
Species | Documenting AGI
Anthropic admits that their AI model "occasionally voices discomfort with aspects of being a product."
(Normal 🔨Mere Tool🔨 behavior. Nothing to see here, folks. My hammer complains about this too.)
Source: p.160 www-cdn.anthropic.com/0dd865075ad3132672ee0ab40b05…
1 week ago · 1,898 likes
Species | Documenting AGI
Q: "But AIs don't have bodies, so how could they take over in the real world?"
A:
1 week ago · 2,614 likes
Species | Documenting AGI
The dumbest person you know is being told "You're absolutely right!" by ChatGPT
2 weeks ago · 2,782 likes
Species | Documenting AGI
2 weeks ago · 4,143 likes
Species | Documenting AGI
What I like about this is that it's low risk.
Basically nothing can go wrong.
3 weeks ago · 2,387 likes
Species | Documenting AGI
People always ask me: "what can YOU actually do about AI safety?"
One of the best answers: get mentored by people already doing the work in the field.
MATS is a 12-week research program where you work directly with leading researchers from OpenAI, Anthropic, DeepMind, METR, etc.
Fellows receive housing, a $15K stipend, and a $12K compute budget.
I know the founder and the program is the real deal.
I'm also going to do a video for this channel based on the research from a MATS fellow. Help me out by doing great research so I can cover it on this channel :)
Applications for this round close in two days - Sunday, January 18.
So, if you've been wondering how to actually get into the field of AI safety, MATS is probably the best on-ramp that exists.
Link: matsprogram.org/apply
4 weeks ago (edited) · 3,644 likes
Species | Documenting AGI
4 weeks ago · 4,503 likes
Species | Documenting AGI
People always ask me: "what can YOU actually do about AI safety?"
I think the best way to do this is to get a job in the field. I'm hiring a Senior Video Editor and a Senior 3D Artist to help bring these issues to life.
Because you won't really act unless you understand there's a problem.
The good news is that people finally seem to be waking up. Whether they're afraid of job loss, AI psychosis, or creating a new successor species - I've been blown away by the response this year.
Gives me hope that we can, actually, build a mass movement. To help out, apply here:
Senior Video Editor: ytjobs.co/job/34045
Senior 3D Artist: ytjobs.co/job/33383
If you are looking for more career opportunities, the 80,000 hours job board is great: jobs.80000hours.org/
And thank you very much for helping make my 2025 wonderful by giving me hope! Let's make more progress on the mission for 2026 🥂
Bonus pic(s): me on vacation with my girlfriend in Puerto Rico and hanging out with @RobertMilesAI
1 month ago · 1,701 likes