
highplainsdem

(62,318 posts)
Wed Apr 8, 2026, 12:55 PM 5 hrs ago

Sam Altman May Control Our Future--Can He Be Trusted? (Ronan Farrow and Andrew Marantz, The New Yorker, 4/6)

https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted

This article is incredibly long. So long that, while reading it two days ago, I wondered more than once whether it would ever end. But it's worth reading if you want to know a lot about Altman's career and how untrustworthy he's proven himself to be, over and over, and it contains information that came to light only recently as Ronan Farrow and Andrew Marantz worked on this story.

But it can be summarized with a warning:

Don't trust Sam Altman. Ever.

Apparently starting in childhood:

Altman’s attitude in childhood, his brother told The New Yorker, in 2016, was “I have to win, and I’m in charge of everything.”


When he was at Y Combinator, whose co-founder and president, Paul Graham, had made Altman his successor as president:

Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.”


At OpenAI, before he was fired:

Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted.


And the way he's been running OpenAI since getting his job back:

As OpenAI prepares for its potential I.P.O., Altman has faced questions not only about the effect of A.I. on the economy—it could soon cause severe labor disruption, perhaps eliminating millions of jobs—but about the company’s own finances. Eric Ries, an expert on startup governance, derided “circular deals” in the industry—for example, OpenAI’s deals with Nvidia and other chip manufacturers—and said that in other eras some of the company’s accounting practices would have been considered “borderline fraudulent.” The board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.)

Sam Altman May Control Our Future--Can He Be Trusted? (Ronan Farrow and Andrew Marantz, The New Yorker, 4/6) (Original Post) highplainsdem 5 hrs ago OP
K & R bookmarked FakeNoose 4 hrs ago #1
Set aside some time if you want to read it in one sitting. It's 125 paragraphs, most of them fairly long. highplainsdem 4 hrs ago #4
Answer: Absolutely NOT... 2naSalit 4 hrs ago #2
Even amongst his ilk, Altman is a particularly extraordinary liar and conman. RockRaven 4 hrs ago #3
Great discussion with Farrow and Marantz in this morning's Bulwark podcast Prairie Gates 4 hrs ago #5
he's not human, hence the name "Alt-man". nt Javaman 4 hrs ago #6
Bookmarked SheltieLover 3 hrs ago #7

FakeNoose

(41,753 posts)
1. K & R bookmarked
Wed Apr 8, 2026, 12:59 PM
4 hrs ago

Normally I enjoy Ronan Farrow's articles and find them enlightening and useful. However, I need to save this one for another day, hence the bookmark. No time today, but I'm sure it's worthwhile.

highplainsdem

(62,318 posts)
4. Set aside some time if you want to read it in one sitting. It's 125 paragraphs, most of them fairly long.
Wed Apr 8, 2026, 01:16 PM
4 hrs ago

Yes, I just counted. After I got about halfway through while reading it two days ago, I wondered quite a few times how much longer it would go on. Pretty sure this would be considered well into novella length if it were fiction.
