A few years ago, a local writer commissioned me to edit his latest novel. I completed the brief (copyedit/structural edit) and returned the marked-up copy to him. I also included feedback on his use of words and expressions that could have been offensive to some readers. He thanked me for the work and paid me. When I saw him the following month, I asked him how he found my work. He laughed and said, ‘You did great, Bev. But Grammarly picked up another eighty errors.’
Following that unsettling conversation, I explored the plethora of free and subscription-based spelling and grammar checkers. I trialled Grammarly, liked it and bought the paid version, but I did not enjoy using it: some of its suggestions did not work for my style of writing, and I started treating it with contempt. I moved on to ProWritingAid, which suited me better, and I now use its paid version for a final check over my writing. Occasionally, I also let MS Word’s Editor function loose. I could go on about the pros and cons of AI language and grammar analysers, but I want to skip to the broader topic of artificial intelligence language models that can generate text, and where they fit into our writing world.
What are people saying about AI?
I recently tuned into a Zoom session hosted by the Institute of Professional Editors titled ‘What ChatGPT means for editors, authors, and publishers’. There was record attendance, indicative of the existential angst fellow editors have expressed about AI. The three speakers were leaders in the fields of AI safety, plain English writing and Australian publishing. The key takeaways for me were: yes, AI will influence the publishing industry; no, it won’t replace humans; it is only as smart as the information it is built on; and governments around the world are struggling to design regulations around the use of AI. The speakers said the use of plagiarism checkers by educators is more relevant than ever, and the speaker from the Plain English Foundation noted that AI can express itself well but often needs guidance on appropriate tone. The presenter from Epoch UK said AI is probably reliable for copyediting but suggested we don’t rely on it for accuracy.
Who is using AI?
I love that Australians are early adopters of new tech:
A young man in Queensland consulted it for relationship advice. An Adelaide woman asked for a poem to her boyfriend. A Brisbane rideshare driver turned to it for legal advice. An octogenarian man found ways to improve his system for placing bets. A young woman in Melbourne commissioned a love poem to her neighbour’s dog — on behalf of her own dog.
Last week a friend told me she used OpenAI’s ChatGPT to compose the cover letter for her application for a senior job with a large organisation. She had spent a lot of time on her resume and had run out of time to write her supporting letter. She was happy with ChatGPT’s efforts. I wonder if AI will help her get the job.
I’ve also heard of people using AI to compose blog posts. This may be very helpful for writers who want to promote their book yet struggle with the marketing aspect.
ABC presenter Richard Fidler (Conversations) recently commented that he’d asked ChatGPT to write his bio and found it amusingly incorrect. (Did you ever Google yourself?)
What about copyright issues?
If you use ChatGPT (or Microsoft’s Bing, Google’s Bard, or Anthropic’s Claude) to write your work, who owns the copyright? This is a vexed question under Australian copyright rules. If I research information to support my writing (on the web, in periodicals, in textbooks), I must acknowledge my sources. I can’t just steal ideas and words. But if I ask AI to generate text for an article I’m writing about, say, growing tomatoes hydroponically, and if AI scrapes the web for data and ‘writes’ my article, I can’t attribute the source information because I don’t know where it has come from. Am I unwittingly breaching copyright laws? Am I denying the source authors their right to recognition? (That’s a yes from me.)
What is the quality of AI’s output?
Many years ago, when I was learning to wrangle Excel to bring data sets to life, the trainer contended, ‘Rubbish in, rubbish out!’ It seems this is the same for AI-generated text. In the Zoom session I mentioned above, the CEO of the Australian Society of Authors alluded to the term ‘hallucinations’, where AI scrapes the internet and draws on false information or rubbish sources.
A recent article in New Scientist reports that ‘Microsoft and Google often train their AI on closed data sets that aren’t available for public scrutiny.’ So how do we check if AI collected its data from reputable sources? Has it understood nuanced writing? Have authors given permission for their work to be used?
What is in this for us?
It’s early days. Generative AI tools are not going away. They are tools, and if we choose to use them, we must learn how. We need to query their accuracy, check for bias and sensitivity, and be mindful of misappropriating other people’s ideas.
I will continue to use AI to help me edit my writing and I plan to see if ChatGPT can write me some promotional copy for my training course.
How about you? Will you have a play with AI?
Beverley Streater styles herself as a reader, writer and critical friend. She relaxes by reading fiction, particularly new releases. Her key writing activity involves shifting complex text in government reports, white papers and (UK) Mental Health Court reports into a format called Easy Read, which suits readers with learning disabilities or cognitive impairment, and people whose first language is not English. She and a colleague have written a short online course about writing Easy Read at easyreadtraining.com
She critiques and provides helpful feedback to authors in her role at Streater Editing Services.
You can find out more at streatereditingservices.com
Beverley has always been fascinated by language and is curious about emerging technologies.
[iv] Jeremy Hsu, ‘Big data may make AI more racist’, New Scientist, Volume 259, Issue 3448, 2023.