I am more than a little furious. We live in times when far right agitators are stirring up physical violence, and people are getting sucked into internet rabbit holes of conspiracist thinking and outright lies. But if you wanted to check the facts about something, where else would you turn but to Google, the default search for, well, pretty much everyone?
So, what happens if you have heard rumours on Facebook, or wherever, that council tenants are being evicted in order to house asylum seekers? What happens if you search something like ‘evicting council tenants for asylum seekers’?
This. This is what happens. This is the Google AI summary at the top of the page.
Just in case anyone reading this doesn’t have an inkling of the relevant law, this is horseshit of the highest order. It is absolutely untrue that council tenants can be evicted in order to house asylum seekers (fn 1). It is not a ‘complex issue with legal and ethical considerations’, it is a non-existent issue because it cannot legally happen.
But people have been searching this, and searching for it in their area. You can see that from the search autocomplete dropdown. And they are going to read this nonsense. And many of them are going to believe it.
Oh yes, sure, you and I know that LLM AIs generate false output all the time and have to be checked. Aren’t we the clever ones. But when lawyers are putting hallucinated cases in submissions, and people are falling in love with their AI chatbots, and passing their degrees with no reading but a few prompts, there can be no smugness, no ‘oh of course it does that’.
And when it does something positively dangerous like this, we have to acknowledge the outright risk that lies in apparently confirming a far right fabulation at the top of a google search. This is quite some way beyond telling you to put glue on your pizza on the scale of hazard.
We are heading into very dangerous territory. This is merely a housing law example of it. But it is very frightening. And it is hard to see how to stop it.
[Update 25/07/2025. I have been told that the same search now returns an AI summary saying that such evictions do not happen and citing this post as the source. So the only way to correct such a dangerous error is to have a website considered as authoritative by Google and explicitly describe why the previous summary is wrong. This is not really a workable solution!]
fn 1. With one extremely limited exception. Homeless applicants housed in council properties under ‘non-secure tenancies’ can have their tenancy ended pretty much at the whim of the council, though the council would have to rehouse them. Quite why the council would do this when it is screamingly desperate for all the homeless temporary accommodation it can get is another question, but technically, it could happen. It just doesn’t in reality.


With regard to your footnote: “technically, it could happen. But doesn’t in reality.” Am I correct that the reason it doesn’t happen in reality is because any such decision would be found unlawful in public law terms – either within the possession proceedings or a standalone JR?
No.
It doesn’t happen because no council would give up property it has available as temporary accommodation for those it owes the homelessness housing duty.
As long as the occupants were rehoused in suitable accommodation elsewhere, it would be perfectly lawful. Just practically mad.
https://www.bigissue.com/news/housing/lambeth-renters-labour-eviction-homeless/
“Renters fail in bid to stop council from making them homeless to house people already homeless”
Private tenants, not council tenants.
Shocking that it couldn’t handle Section 21, given the amount of online material about it in recent years.
A genuine question for people who know more than me about AI (which is everyone): is AI being trained to give any extra weight to gov.uk and ac.uk sites for policy questions, and legal/LLP websites for legal ones?
Apart from s.21 being irrelevant to council tenants, I’m not sure what you mean? The mention of it is not, in itself, wrong.
With respect, I think there’s a second error here: mistaking the personally tailored web experiences we all currently have for “the internet” that everyone sees.
I’ve just done an identical search on “my” google (not logged in, but still unique to me and my PC) – The AI summary says “It’s not accurate to say council tenants are being evicted to make way for asylum seekers. While asylum seekers are housed in various accommodations, including those provided by local councils, evictions of existing council tenants are generally not a routine or accepted practice for this purpose. There are legal processes that council landlords must follow, and these processes are not directly linked to housing asylum seekers.” I can’t post a screenshot, but I have one.
I get a slightly different version on my phone, but it’s broadly the same. Not only am I a different set of histories, I’m somewhere else and it’s a different date.
Every search on Google is directed, or at least influenced, by your (and your device’s) previous history and those of people close to you (geographically and via other connections). And I suspect, but don’t know, that AI/LLMs are even more affected by this. They tend to hallucinate differently for different people. People who are racists, or who hunt racism, will experience a more racist web more quickly than other people. It’s why Twitter and Facebook are so different for every user. It’s probably why people think their devices are listening to them.
What you see is only what you get, not what everyone else gets.
I have to differ. At least five people, independently, got the same AI response that I did for that search, at least before I published the post. After I published, people have reported varying results – some saying AI not available on that search, others referencing my post. And this one, which made me laugh (sent to me, not my search).
The result you got was after I had published, I think. So, no, I don’t believe the AI response is that variable by individual search, but it certainly began changing responses after I published.
Search results used to be tailored in the way you suggest, maybe still are, but I don’t think the AI summary is so much. Twitter and Facebook algorithms are a different matter altogether. And awful.
I have to accept that my understanding of search results is not applicable to AI. But I’m prepared to go out on a limb and assert that AI summaries are almost certainly tailored very specifically to the user. Descendants of leopards have similar spots.
I’d design it in if I was planning to launch it as a product, because it would hugely increase user acceptance out of the gate. That your colleagues and acquaintances get similar results to you is how it works. I know that seems like a paranoid loop.
If you made money from the response to adverts displayed to users (like Google), would results from queries from users who’d recently visited (or whose connected users had visited) Shelter or (say) Property118 be different?
Users with a connection to this site would get results skewed towards its existing content.
I don’t think it does work that way, for various connected reasons. The additional computing cost, on top of the already hefty LLM cost, would not be insignificant. But it would give no benefit to Google. The google ads remain targeted as before, but that has nothing to do with the AI summary. Where is the return for Google?
(And by the way, not ‘colleagues and acquaintances’ – the first couple of identical reports were from people I don’t know at all).
Many thanks, Giles, you’ve summed up the point that I was trying to make much better than I did. I simply meant there’s a lot of information on the web explaining (a) the overwhelming majority of council tenants have secure tenancies; (b) there are no grounds to evict these secure tenants to rehouse migrants, and (c) S.21 might be cited for private landlords but it doesn’t apply to secure tenancies. My comment was just prompted by my surprise that AI couldn’t join these three dots, and I wasn’t trying to make any arcane legal or political point. Apologies for any confusion.
“I’ve just done an identical search on ‘my’ google…” – dear God, a general Google search is not AI… read the damn article!
The original article was about the results of a Google search returning an AI generated response. Google searches can now generate AI generated responses. Which I think was the concern being raised.
I’d be delighted to be wrong!
It really is worrying. We are starting to notice tenants with problems reaching out for advice and telling us how they want us to proceed, based on what ChatGPT told them about their rights and the powers of the local authority. If it’s true that a little knowledge is a dangerous thing, then I don’t even know where to begin to quantify where random made-up knowledge would sit in that metaphor.
In the interest of balance, as a government sets out more controls on speech, the advantage of a right wing party, or of Corbyn and the ladies’ new-for-old left wing party, is that in allowing speech we are invited to think and enquire. Sadly the media isn’t going to provide that, and as you say, AI will author to populist choice driven by clicks, increasingly by people “educated” by having it do their work/study.
Artificial Stupidity=Actual Idiocy