TikTok Election, Meta Data & The Lunch Break
5 in 5 - Brave & Heart HeartBeat #205 ❤️
This week we’ll be looking at our “TikTok Election”, Google’s AI Overviews’ fatal flaw, and how Meta are getting in on the game with our data.
Plus, would you use Google if you had another choice, and why we should stop the death of the lunch break.
Let’s get into it.
Were you forwarded this? Not a subscriber? 👉 Sign up here
#1 - The TikTok Election
While TikTok may be having a political moment across the pond as the US government try to ban it completely, it’s already playing a huge role in our upcoming general election.
Some have called the surprise general election the “first TikTok election” after both parties opened accounts on TikTok days after the announcement.
By the end of the same week, Labour had posted fifty-four videos and the Conservatives had posted fourteen – a mix of memes, voter reactions and attempts to explain policies in thirty seconds.
Labour are going hard with the memes. The most watched so far have been a clip of Rishi Sunak playing football badly, captioned “You get tackled by a cone but are trying to convince voters you can run the country”, and a bold clip of Cilla Black singing Surprise Surprise, captioned “POV Rishi Sunak turning up on your 18th birthday to send you to war”.
The Conservatives have gone more traditional, with camera-facing videos explaining their policies. Some suspect these will do well in the algorithm thanks to the volume of comments the political figures featured might attract – whether positive or negative – while other commentators have noted that the Conservatives’ audience simply isn’t on TikTok.
The parties aren’t the only ones taking the election to TikTok: meme videos from younger voters spoofing the proposed national service scheme flooded the platform after Sunak announced the initiative.
While TikTok isn’t going to reach everyone, it may be THE best way to reach young people this election.
#2 – Why Google’s AI Issue Won’t Go Away
Google and their AI offerings have been in the news for all the wrong reasons in the last fortnight. Although they’ve launched a multimodal AI assistant that can do everything that OpenAI’s can without the sexist undertones, all anyone is talking about is their catastrophic AI search overviews.
We touched on this last week during the bonus – Google had launched a function that summarised the results of any search, but due to some “issues” in the programming it was answering questions really, really badly. As in, “yes, you should smoke at least one cigarette a day when pregnant” badly.
It seems those issues are pretty much irreparable – or at least not fixable without putting a LOT of work in.
The AI Overviews feature draws on Gemini, an LLM like the one behind ChatGPT, using it to generate written answers that summarise the information that comes up online when a question is searched.
However, the issues come mainly from the fact that LLMs, despite the impressive fluency with which they speak, haven’t got a clue what they’re talking about.
Richard Socher, a key AI researcher who launched an AI-centric search engine all the way back in 2021, says that although you can get a “snappy prototype” out quickly using LLMs, getting it to the point where it “doesn’t tell you to eat rocks” takes a lot more effort – his company have developed about a dozen tricks to keep LLMs from misbehaving in this way, and even then they still slip up occasionally.
Socher says the limitations of LLM technology are such that in some cases it is better not to offer an answer at all, or simply to show different viewpoints. Experts have criticised Google for launching AI Overviews on queries as important as medical and financial questions, saying they were surprised Google hadn’t been more careful when rolling the feature out across all searches.
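For the technically minded, the “better not to answer at all” safeguard Socher describes can be sketched very simply: only show an AI summary when it is clearly grounded in the pages the search actually retrieved, and fall back to plain results otherwise. The Python below is a minimal illustration of that idea, not Google’s (or Socher’s company’s) real pipeline – retrieve_pages() and call_llm() are hypothetical stand-ins for whatever search backend and model are in play, and the word-overlap check is a deliberately crude proxy for proper grounding.

# Minimal sketch of an "abstain when ungrounded" guardrail for LLM search summaries.
# NOTE: retrieve_pages() and call_llm() are hypothetical placeholders, not real APIs.

def retrieve_pages(query: str) -> list[str]:
    """Stand-in for a search backend returning snippets relevant to the query."""
    return ["Doctors advise against smoking at any point during pregnancy."]

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a Gemini- or GPT-style model."""
    return "You should not smoke at any point during pregnancy."

def overview_or_abstain(query: str, min_overlap: float = 0.2) -> str:
    snippets = retrieve_pages(query)
    prompt = (
        "Answer ONLY using these sources. If they don't cover the question, "
        "reply exactly 'NO ANSWER'.\n\nSources:\n" + "\n".join(snippets)
        + f"\n\nQuestion: {query}"
    )
    answer = call_llm(prompt)

    # Crude grounding check: does the answer share enough vocabulary with the sources?
    source_words = set(" ".join(snippets).lower().split())
    answer_words = set(answer.lower().split())
    overlap = len(answer_words & source_words) / max(len(answer_words), 1)

    if answer.strip() == "NO ANSWER" or overlap < min_overlap:
        return "No AI overview - showing regular results instead."
    return answer

print(overview_or_abstain("should I smoke while pregnant?"))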
A consultant who worked on the beta option said she wasn’t surprised to see the errors, and feels that Google rushed out their product, adding that the nature of AI itself means that these errors are almost impossible to avoid.
Everyone wants to get in on the AI buzz, but doing it badly on your most valuable product is probably not the best way to do it…
#3 - Meta Data
Meta is getting in on the tricky game Google’s AI Overviews is playing, with its own summaries of the comment sections on certain posts – but is it just an excuse to steal our data?
Spoiler alert, yeah, probably.
Meta have added a feature that summarises the comment sections on popular posts, popping up in a little bubble titled “What People Are Saying”.
An example of this in use is the summary of a bobcat sighting in a town in Florida, which read “Some admired the sighting, with one commenter hoping the bobcat remembered sunscreen.” Unlike Google’s AI Overviews, this LLM seems to be able to identify jokes somehow.
This feature does seem pretty useful for taking the pulse of a certain issue, although Meta haven’t disclosed how they decide which comments are deemed important enough to make the summary.
What’s also interesting to us is that this means Meta are feeding our comments into their AI systems to generate these summaries – and then who knows what else?
Facebook and Instagram users in the EU and the UK got a notification this past week saying that Meta will train its AI on their content, and although they will let users object to this, it’s a difficult process and the company has already rejected some users’ requests.
Their US privacy policy says that they use “information shared on Meta’s Products and services” to train AI. This includes posts, photos and captions on Facebook or Instagram, and while you can ask them to correct or delete any personal information, you won’t be able to opt out of them using any of the art you’ve chosen to share on Instagram.
As a result, many artists have been leaving Facebook and Instagram – but anything they’ve already uploaded may still be fair game.
We all vaguely knew back in the day that anything we uploaded to Facebook and Instagram belonged to them, but what they can do with that information has changed a lot since then, while the rules around who owns it have not…
#4 - Is Google Cheating?
This week a Wired article asked a question we’d never really thought to ask: “Would You Still Use Google if It Didn't Pay Apple $20 Billion to Get on Your iPhone?”
Well, would you?
We might be about to find out, as last Thursday the US government asked a Washington DC federal judge to rule that Google is maintaining their place as the go-to search engine by “unfairly manipulating users” to keep Microsoft and other competitors away from the top spot.
Far, far away from the top spot, as nine out of ten internet searches are done using Google.
Google’s advantage has been on the US Department of Justice’s radar since 2020, when they first decided to sue the company for maintaining a monopoly and therefore violating antitrust laws through “exclusionary contracts”.
The exclusionary contracts in question are Google’s deals with Apple dating back to 2002.
In 2002 Google struck a deal with Apple to integrate Google search into their browser, and in 2005 they struck a bigger one – Google would pay Apple half of their search revenue if Apple made Google search the default in Safari.
The judge asked why Google would bother doing that if what’s keeping them on top is simply their market lead or a superior product.
The trial continues, so we’ll be watching this space. Until then, imagine a world where you could actually choose your search engine – would you bother switching?
#5 - Save The Lunch Hour
Is the lunch hour dead?
The figures seem to say so, as a study into post-Covid downtown recovery shows that most North American cities have seen a “decrease in activity levels during working hours”.
This would be partly due to the rise in remote working, but some seem to think that the concept of the lunch hour itself is being phased out of workers’ lives.
There has always been a way for us to argue ourselves out of taking a break at lunch, from the “lunch is for wimps” mantra of competitive productivity in 1987’s Wall Street to the “sad desk lunch” of millennials on a deadline.
Now the argument for remote workers is that if you’re working on your own schedule, and without colleagues to take a break with, working through your lunch hour is a way to take back an hour for yourself later on in the day – but does it ever really work that way?
Taking a lunch break used to be a nice way to break up the day; now we push through under the illusion that we’re doing something for ourselves – but are we just conditioned to keep signalling performative productivity?
Kids at school need break times. Sure, they could push through and finish an hour and a half earlier if they get rid of the mini breaks and the lunch break – but I think we can all agree that they’d be too worn out to learn anything. So why don’t we treat ourselves with the same compassion?
Brave & Heart over and out.
Bonus
OpenAI vs. ScarJo Continued
Scarlett Johansson is still mad, and a voice analysis lab may have confirmed that she has the right to be.
The lab used AI models to analyse vocal similarities between Sky and the voices of 600 actresses – and guess who came out on top as more similar than 98% of the others?
To find out more on how you can retain your top talent, or how we can help you with digital solutions to your business and marketing challenges, check out our case studies.