General Discussion
AI-assisted writing is close to becoming as standard as spell check. Here's the catch
This op-ed was originally published in the LA Times. Jane Rosenzweig is the director of the Writing Center at Harvard.
https://finance.yahoo.com/news/opinion-ai-assisted-writing-close-103055423.html
Writing is hard because the process of getting something onto the page helps us figure out what we think about a topic, a problem or an idea. If we turn to AI to do the writing, we're not going to be doing the thinking either. That may not matter if you're writing an email to set up a meeting, but it will matter if you're writing a business plan, a policy statement or a court case.
While AI assistants might be able to help us with our own thinking, it's likely that in many cases they'll end up replacing that thinking. In a recent article in the Chronicle of Higher Education, Columbia undergraduate Owen Kichizo Terry described using ChatGPT not to edit his own ideas but to generate the substantive components of his college papers, leaving him only to stitch those ideas together. Using AI to generate ideas, create an outline and provide specific instructions for writing each paragraph, Terry wasn't using an AI assistant; he had become the assistant, and so will we.
Once we let the chatbot fill the blank page, the bot's text will shape our understanding of the topic, with whatever limitations, biases and errors go with it. To effectively assess AI-generated drafts, we'll need to be able to ask difficult questions, analyze evidence, consider counterarguments; in other words, to do the same important work we do when we write ourselves. But if we no longer value doing our own writing (if every time we open a Google or Word document, we're prompted to save time by turning to the bot), we may get to the point when we don't know how to think for ourselves anymore. Even if we don't lose our jobs to AI, we'll lose what matters about them.
I drafted multiple versions of this essay before I got to the version you're reading now. I didn't use an AI assistant because I was not interested in finding out what an algorithm would predict someone could say about this topic. I wanted to figure out what was troubling me about it.
-snip-
EYESORE 9001
(29,732 posts)Laziness will prevail. Mark my words.
WhiskeyGrinder
(26,955 posts)EYESORE 9001
(29,732 posts)I'm not surprised. There will undoubtedly be other instances just as tragic and unnecessary.
FalloutShelter
(14,465 posts)Muscles.
A jobless future is coming
humans are consumed with vanity and leisure
we are the Eloi.
H.G. Wells was a prophet.
EYESORE 9001
(29,732 posts)I would see people whip out their calculators for math problems they could probably solve in their heads. Its only gotten worse since proliferation of the smart phone.
Freethinker65
(11,203 posts)AI completely changed the meaning of a post of mine in Nextdoor (a pretty useless local social media site/app). Instead of merely replacing a single word because of "community standards", it combined and edited the remaining post to say the opposite of what I wrote.
I was given the option of accepting the edit or posting the original (which I could personally edit after posting), but not the option of merely replacing the single community-standards-violating word (FYI, I used the word "crappy") before posting. I chose not to post at all.
Tetrachloride
(9,624 posts)time for coffee
edisdead
(3,396 posts)Freethinker65
(11,203 posts)I should have screen shot my post and the suggested edit. I assume the word "crappy" had been caught in a community standards filter and the word was AI replaced with "negative experience", which by itself I would have been ok with.
The gist of my post was that although I had a crappy experience at the same local hospital in the past, the experience now being discussed actually seemed well within acceptable best-practice policy. (A "neighbor" was upset that patients with more potentially life-threatening symptoms were seen before her adult daughter with a sprained ankle.) AI made it seem like I was supporting the poster. It seemed AI didn't understand that one could be critical of care in one situation but supportive of care in another situation at the same facility.
I posted about it earlier on DU in another AI topic thread. Sure enough, a DU member did some research and found a Nextdoor blog explaining how they were using AI.
edisdead
(3,396 posts)I wouldnt doubt it but Ive never seen it.
Freethinker65
(11,203 posts)I did not ask for any AI assistance and I do NOT use the app. I access from the web.
I honestly don't know whether, had I accepted and posted the AI version, the post would have indicated as much.
I rarely post, but I am a local moderator. I try to keep all off-topic politics out.