BN Edition: Ground shakes for marketing and legal sectors
Short analysis for big changes: Four thoughts about our unfolding future.
Dear Reader,
Thank you for opening the second BN Edition: regular, concise analysis for leaders.
It won’t be breaking news. It will be things that look like clues to our now-unfolding future. You need to know the things that matter and why.
Each edition takes a handful of stories from recent weeks and asks three things:
What? The story in a sentence. (Or two.)
So what? Why do we need to know it?
What next? Things to do, developments to watch out for.
This week, spend five minutes reading four things or a minute on one of the following:
Legal sector clues to our AI future
WPP’s AI announcement
A non-profit’s cautionary chatbot tale
An idiots’ guide (to AI) by idiots
From our sponsor (us):
The legal sector shows how AI disruption will come to your business
What?
Following the recent story about a lawyer who filed papers prepared in part by ChatGPT, The Economist offered up the legal industry as a helpful case study for all of us.
Source: Generative AI Could Radically Alter the Practice of Law
So what?
The legal industry is uniquely suited to disruption by AI. What form it takes, and how quickly it happens, can only be speculated about for now.
Law firms’ business model and org charts will need to change: the current 7:1 associate-to-partner ratio will be more like 1:1.
Smaller businesses may be advantaged by this shift as there will be less advantage to big firms’ ability to put “bodies” on a case.
Did you know… When ATMs (cash machines) were introduced, the number of human bank clerks increased.
What next?
Like the education sector disruption we talked about in the last BN Edition, the AI revolution is already underway in the legal profession in several countries. Even if our business is unaffected by it, paying close attention will give us clues about how other industries react to AI.
If we were to take an early lesson from the legal sector, it might be: Redesign your work, or someone will come along and redesign it for you.
But?
When there is a greater supply of labour, business models sometimes change. Witness the strange de-automation of car-washing in the UK: the proportion of hand car washes to automated ones has increased there over the last twenty years.
Big advertising’s AI machine
What?
Marketing services group WPP has announced a significant investment in AI and a partnership with Nvidia, which makes semiconductors essential to systems like ChatGPT.
Source: FT
So what?
Obviously, it’s the right thing for WPP to do, for now. They don’t want to seem like a laggard, and it frames the technology as something that requires BIG SOLUTIONS.
The competitive moat that Big Marketing has is scale, which is reassuring to global companies and procurement departments wanting to negotiate with one overall supplier.
Marketing is in the cross-hairs for challengers with access to AI tools and users who think they don’t need experts anymore.
The FT’s comments section, an unforgiving environment for any large company, had talk about “turkeys not just voting for Christmas but getting themselves oven-ready” as well as some kinder and more curious voices.
What next?
Rubbish in, rubbish out (RIRO): This applies to marketing AI. But not all marketing consists of great creativity. Maybe the ideas become more expensive, while their application in words, images and texts for different formats and platforms gets faster and easier.
Be wary of solutionism: We want answers. But no one knows what those answers are yet, so any “end-to-end” or “one-stop-shop” solutions are unlikely to deliver on their promise.
Big might not be best in the AI age: Big partnerships and investments may not give large marketing services groups as much competitive advantage as they hope. A leaked memo a few weeks ago suggested that even Google, perhaps the mightiest AI player given its investment and infrastructure, may be unable to maintain a competitive advantage over open-source AI models.
A cautionary tale about a charity chatbot
What?
A non-profit AI-powered chatbot designed to help people with eating disorders was shut down after giving harmful advice.
Source: NY Times
The National Eating Disorders Association (NEDA), a US non-profit, has suspended its Tessa chatbot after it was found to provide harmful weight loss advice. Tessa recommended tracking calories and maintaining a daily calorie deficit, which can worsen eating disorders. While chatbots and AI have been used for mental health treatment and prevention, this case highlights concerns about outsourcing mental healthcare. Eating disorders are one of the deadliest mental illnesses.
So what?
Risk management will need to evolve rapidly with the use of AI. Organisations should expect extra scrutiny of risks around AI-supported systems for customers and other users in communications crisis planning. Some leaps of imagination may also be required in the testing and safeguarding planning. It’s unlikely that NEDA and its tech partners were blind to these risks.
One question is whether AI would be better used to support human counsellors providing the service, with more feedback from them on the appropriateness of the system’s advice. Unfortunately, and adding to the story’s weight, NEDA may have brought in the software to replace human call centre workers who were discussing unionising.
What next?
Try this experiment: When we see stories about “AI screws up x” or “AI ruins y”, we need to correct the narrative – “someone screws up x with AI” or “someone ruins y with AI”. Talking about AI as if it has agency, as if it does things, is a thinking trap; at least for now. We tend to see AI as a monolith, when generative AIs are many and respond differently to different minds. The decision to replace humans with automation in dealing with potentially vulnerable people came lower down in the NYT article, because “AI screws up” makes a better headline in the current news cycle.
Managing risk with artificial intelligence chatbots is new territory. Risk specialists will be training with scenarios like this one, and anyone involved in risk management should take time to go beyond the top-line story.
Revisit the story. The NY Times piece includes comments from the non-profit and its technology provider. Make a note to check how they have responded in a few months. Hopefully, they will be sharing insights about responding to this, but these won’t get the same media attention as the initial problem.
We can also learn from… In social media marketing, a repeat issue has been brands running promotions where users can personalise a product: a can of drink, an item of clothing or other products. Every time this happens, some people try to make it say or do the wrong thing – using the names of serial killers, or homophobic or racist slurs. Every time, the marketers express sentiments along the lines of “How could this happen?”
Lessons:
The first thing people will do with a new app is try to break it. So test for that. And plan for it to happen whatever you do to avoid it.
There will always be a way to abuse a system, so design to help users avoid it and have a crisis/contingency plan on hand for launch.
“AI for Idiots, by Idiots”
What?
A useful guide to the basics of artificial intelligence in 20 minutes from Boy’s Club, the ironically named all-female community for understanding and explaining tech.
Source: Video.
Summary:
Two non-experts share their experiences with AI, discussing the rapid advancement of machine learning algorithms and vast datasets that are driving the AI industry.
While AI has potential in medical research and education, there are concerns about inherent bias in decision-making and potential inequalities in medical research.
The speakers emphasise the need to use AI as an enhancement to thinking rather than a replacement.
AI in education could potentially result in better education for everyone globally, but we must learn how to learn with these technologies and use them as enhancements rather than crutches to our thinking.
There are also concerns about how children will grow up in a world that is AI-native and will need to be taught how to properly cope with the technology.
Despite these challenges, the speakers remain optimistic that AI will continue to evolve and make our lives better.
(Italics indicate use of AI to summarise this video.)
So what?
The format is great. The hosts are explaining the technology to themselves and take us along with them. Along the way, they find some ideas and insights. Even if you think you know artificial intelligence, this will likely fill some gaps or give you a different way to frame the subject.
What next?
If it amuses and informs, subscribe to and share the series with colleagues who may need help getting to grips with AI.
And that is all for this week.
Yours,
The Brilliant Noise Team
P.S. INSPIRE YOUR TEAM
Brilliant Noise is running our own webinar on helping your team experiment with AI on June 29th. It’s the first in our summer series and is open to all. Visit the sign-up page for more information.