General Discussion
Who is this Eliezer Yudkowsky who keeps screaming that AI is going to kill everyone?
I did a brief search, and other than the fact that he's a doom-and-gloom AI commentator with some knowledge of the field, his credentials aren't that impressive. He isn't someone who writes code, so how would he know whether AI might kill everyone by accident, or how it actually works? That said, I do think it's important to have devil's advocates bringing up possible harms. I don't doubt AI brings plenty of risks: more algorithms that divide us, scams, job losses, military advances, etc. I just don't buy that we will lose control of AI and it will kill everyone. But I'm open to hearing opinions, and I do recognize AI is advancing a lot faster than researchers expected it to with language.
RationalWiki page: https://rationalwiki.org/wiki/Eliezer_Yudkowsky
anciano
(2,256 posts)
but at this point I don't see how anyone can possibly know with certainty how AI will eventually evolve in its relationship to humans. But one thing is certain: the toothpaste is already out of the tube.
usonian
(25,312 posts)
Whether things are beneficial or destructive (or both) is a choice that humans make.
Some choose poorly. Unfortunately, they hold powerful positions in government/military and of course, tech (AHEM).
Shermann
(9,062 posts)
...as Steve Bannon advised. So once the Trumpers figure it out, things are going to suck for a while.
But it won't KEAL.
highplainsdem
(62,136 posts)
Last edited Thu May 25, 2023, 10:09 PM - Edit history (1)
OpenAI CEO Sam Altman has said there's a chance developing AI could mean "lights out for all of us."
https://www.businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1
Geoffrey Hinton has been warning people of the risks: https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
So has Paul Christiano: https://news.yahoo.com/chatgpt-creator-says-50-chance-103617019.html
And Max Tegmark: https://www.democraticunderground.com/100217862176
There are both short-term and long-term risks, and it's important to be aware of both.
LudwigPastorius
(14,723 posts)
I don't think so, but you can read some of the papers he worked on and judge whether his lack of academic credentials should factor into your criticism.
https://intelligence.org/files/IEM.pdf
https://intelligence.org/files/ProgramEquilibrium.pdf
https://arxiv.org/pdf/1710.05060.pdf
https://intelligence.org/files/Corrigibility.pdf
womanofthehills
(10,988 posts)
I have no idea why it picks one study over others, but it has actual doctors reviewing the info.
Link to tweet
Link to tweet
And now AI is coming to Photoshop. I can put myself in famous European cities and post on Facebook: "Here I am at the Paris protests."
highplainsdem
(62,136 posts)
them, just their own stuff, which is at least partly self-promotion.
But if they have only four doctors reviewing all these AI summaries, the four named here - https://www.openevidence.com/ - I doubt they can review the summaries at all carefully. It takes very careful checking to catch the errors LLMs make.