
RESEARCH - Algorithms & AI in Local Welfare Provision

JHurfurt
forum member

Head of Research and Investigations, Big Brother Watch, London


Total Posts: 7

Joined: 19 April 2021

Hello,

Thanks to RightsNet for letting me post! I am Head of Research and Investigations at the civil liberties NGO Big Brother Watch and we are currently running a long-term project about the digitisation of the welfare state. Our focus is on the use of algorithms, AI and the ever-growing amount of data collection, all of which contribute to the welfare state directing digital surveillance at the people who use it.

As part of our second phase of research, we’re looking to speak to people who have been affected by the digital tools that are part of the welfare state at a local level to find out more about the on-the-ground impact these algorithms are having on people’s lives.

Someone from Hackney Law Centre suggested posting on RightsNet to see if anyone here might be able to connect us with affected people who may be willing to share their experience, or if anyone here has experience first hand of advising people fighting against these digital systems.

It’s the kind of perspective that is vital to understanding the use of tech in the welfare state not just in theory but in practice! Any help would be hugely appreciated, either in replies here or by email.
Thanks!
Jake

Mike Hughes
forum member

Senior welfare rights officer - Salford City Council Welfare Rights Service


Total Posts: 3138

Joined: 17 June 2010

Particular area of interest for me following various PLP events. Hard to pick up with claimants though.

As an aside, yesterday my debit card was blocked by a certain well known Building Society. Now theoretically I could give them my mobile number and then, when the text comes in which apparently says “Is this you making this payment?” I text back “Y” and my card is unblocked. Kind of misses the point. I don’t want them having my phone number when they can’t distinguish between texts I need and marketing. I also want to talk to a human being so as to understand what caused my card to be blocked when I am 4 figures in credit and have been for a long time.

One hour and four people later (two of whom I was misdirected to) I have had multiple answers:

- it was because you used a site in the US. Er, you mean that one I used in January, February and March? Try again.
- it was because you used a US version of that site having used a UK version. Nope. Never been an issue before. Check my transactions.
- it was because you bought a live stream ticket that didn’t look legitimate. Nope. I buy one most weeks for home or away games and the streaming company has had my details since last year with no issue.
- it was because you entered your name differently. Nooo. I corrected it from Mike to Mr M A as the former is usually rejected as not being what is on my card.
- it was because PayPal are suspicious of multiple payments. No. I don’t have a PayPal account. I was acting as a guest.

Eventually I am put through to the guy who I am told can both unblock my card and tell me why it happened. He queries why I didn’t respond to their text. That would probably be because you don’t have my number and thus can’t possibly have sent me a text. He spends a further 5 minutes trying to explain to me why I need to hand over my phone number. No chance.

I queried whether it had belatedly been blocked because I’d just been into Manchester for the first time in 13 months and spent on said card in several locations I’d not been to for a long time. Oh Waterstones Deansgate and Costa on Bridge Street, how I have missed you. Nope, not that either.

He reluctantly unblocks my card; confirms that he cannot tell me why they blocked my card because he “doesn’t know”. Why doesn’t the guy in fraud know? Because it’s done by… algorithm and they have no insight at all into the parameters used by said algorithm. So, there’s me trying to address the issue with logic and there’s simply no point. This could happen anywhere at any time and they wouldn’t even know why. Their only solution is that I surrender more personal information in order to make myself more secure.

Say what.

Mrs. H. then reminded me of her credit card being blocked for buying a wedding dress for her daughter. “Suspicious activity” said the algorithm. In what sense? Perhaps buying one every month might have been suspicious, but labelling an obvious one-off expenditure as that is insane.

Sorry, rant over.

Gareth Morgan
forum member

CEO, Ferret, Cardiff


Total Posts: 1995

Joined: 16 June 2010

I’m not sure where the idea that the problem is the ‘algorithm’ comes from. That’s just the implementation of a, presumably, human-designed process. It seems to be creating an excuse to evade responsibility.

Mike Hughes
forum member

Senior welfare rights officer - Salford City Council Welfare Rights Service


Total Posts: 3138

Joined: 17 June 2010

Well it certainly creates the excuse and, from my perspective, I’ve no issues on paper with algorithms per se. The issue is when you don’t know they’re in use and when there’s zero transparency over how they’re put together so you can’t look at bias etc. If you can’t explain what the algorithm is doing and how it does it then it shouldn’t be in use.

Similar issues with the passport office who use an algorithm to triage passport photos. Nothing much had changed with me when my passport was lost after a year of ownership so I sent in an identical picture to that on my passport. Algorithm rejected it for reasons that would clearly have applied to the existing photo. However, the algorithm had simply never been built to contemplate that some people might send in the same photo. The challenge was interesting to say the least. They conceded in full once a human looked at the photo and saw the issue.

Va1der
forum member

Welfare Rights Officer with SWAMP Glasgow


Total Posts: 706

Joined: 7 May 2019

I’m with Gareth on this one. As most computer scientists would point out: humans frequently make more errors than computers. On the flip side, algorithms will never be perfect, so there will inevitably be a need for a human element.
Given the field you are conducting research in I assume you’re already conscious of this.

I do think that language like “on-the-ground impact these algorithms are having on people’s lives” and ” experience first hand of advising people fighting against these digital systems” is counterproductive as it takes the focus away from the people that are implementing these systems, and who are ultimately at fault.


JHurfurt
forum member

Head of Research and Investigations, Big Brother Watch, London


Total Posts: 7

Joined: 19 April 2021

Hi Mike,

Yeah I thought it might be hard to pick up when an individual is affected, but I’d also be very keen to talk to welfare advisors who have experience dealing with digitisation and algorithms as you are on the frontline and experience it in a way I don’t.

I’m not an expert on digital banking, but from my knowledge of general risk/fraud algorithms I think their computer will have decided some transaction you made was risky, which led to your card being blocked: the algorithm flags transactions that share characteristics with previous transactions flagged as fraudulent (new place, high amount, etc.). Not a perfect explanation but it’s my understanding. It’s not great if they use a text as a check when they don’t have your number either!!
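To make that a bit more concrete, the sort of thing I have in mind is a crude rule-based check along the lines sketched below - every field name, threshold and rule is invented for illustration and is not taken from any real bank's system:

# Purely illustrative: a toy rule-based transaction check, not any real bank's logic.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_gbp: float
    merchant_country: str   # e.g. "GB" or "US"
    merchant_id: str
    name_given: str

def flag_reasons(tx, usual_countries, known_merchants, name_on_card):
    """Return the hypothetical reasons this transaction would be flagged."""
    reasons = []
    if tx.merchant_country not in usual_countries:
        reasons.append("unfamiliar country")
    if tx.merchant_id not in known_merchants:
        reasons.append("new merchant")
    if tx.amount_gbp > 500:                  # arbitrary threshold
        reasons.append("unusually high amount")
    if tx.name_given != name_on_card:
        reasons.append("name mismatch")
    return reasons

# Enough matches and the card is blocked, with no one on the phone able to say which rule fired.
print(flag_reasons(Transaction(30.0, "US", "livestream-tickets", "Mr M A"),
                   usual_countries={"GB"},
                   known_merchants={"waterstones", "costa"},
                   name_on_card="Mr M A"))

In practice the scoring will be more complicated than this, but the transparency problem is the same: the people answering the phone can only guess which rule fired.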

Mike Hughes - 20 April 2021 04:35 PM

Particular area of interest for me following various PLP events. Hard to pick up with claimants though.

As an aside, yesterday my debit card was blocked by a certain well known Building Society. Now theoretically I could give them my mobile number and then, when the text comes in which apparently says “Is this you making this payment?” I text back “Y” and my card is unblocked. Kind of misses the point. I don’t want them having my phone number when they can’t distinguish between texts I need and marketing. I also want to talk to a human being so as to understand what caused my card to be blocked when I am 4 figures in credit and have been for a long time.

One hour and four people later (two of whom I was misdirected to) I have had multiple answers:

- it was because you used a site in the US. Er, you mean that one I used in January, February and March? Try again.
- it was because you used a US version of that site having used a UK version. Nope. Never been an issue before. Check my transactions.
- it was because you bought a live stream ticket that didn’t look legitimate. Nope. I buy one most weeks for home or away games and the streaming company has had my details since last year with no issue.
- it was because you entered your name differently. Nooo. I corrected it from Mike to Mr M A as the former is usually rejected as not being what is on my card.
- it was because PayPal are suspicious of multiple payments. No. I don’t have a PayPal account. I was acting as a guest.

Eventually I am put through to the guy who I am told can both unblock my card and tell me why it happened. He queries why I didn’t respond to their text. That would probably be because you don’t have my number and thus can’t possibly have sent me a text. He spends a further 5 minutes trying to explain to me why I need to hand over my phone number. No chance.

I queried whether it had belatedly been blocked because I’d just been into Manchester for the first time in 13 months and spent on said card in several locations I’d not been to for a long time. Oh Waterstones Deansgate and Costa on Bridge Street, how I have missed you. Nope, not that either.

He reluctantly unblocks my card; confirms that he cannot tell me why they blocked my card because he “doesn’t know”. Why doesn’t the guy in fraud know? Because it’s done by… algorithm and they have no insight at all into the parameters used by said algorithm. So, there’s me trying to address the issue with logic and there’s simply no point. This could happen anywhere at any time and they wouldn’t even know why. Their only solution is that I surrender more personal information in order to make myself more secure.

Say what.

Mrs. H. then reminded me of her credit card being blocked for buying a wedding dress for her daughter. “Suspicious activity” said the algorithm. In what sense? Perhaps buying one every month might have been suspicious, but labelling an obvious one-off expenditure as that is insane.

Sorry, rant over.

JHurfurt
forum member

Head of Research and Investigations, Big Brother Watch, London


Total Posts: 7

Joined: 19 April 2021

Gareth Morgan - 21 April 2021 09:23 AM

I’m not sure where the idea that the problem is the ‘algorithm’ comes from. That’s just the implementation of a, presumably, human-designed process. It seems to be creating an excuse to evade responsibility.

Hi Gareth,

It’s not that algorithms are inherently bad, it’s that they are often very opaque in a way that human decision-making is not. They tend to be developed by private companies who claim commercial confidentiality when asked to explain how they work, so nobody can really be sure how a decision was arrived at, which isn’t ideal for important decisions like those in welfare.
There are also some wider concerns about potential bias. I totally agree that is down to the humans who design the algorithm, but my view is that putting that process in a computer makes it harder to challenge than if a biased human were making the decision, especially as a single algorithm will often replace a diverse set of human decision-makers.

Hope that makes where I’m coming from clearer?
Jake

JHurfurt
forum member

Head of Research and Investigations, Big Brother Watch, London


Total Posts: 7

Joined: 19 April 2021

Hey,

I get where you’re coming from and I agree it’s ultimately the humans/companies who design the algorithms who should be held accountable - but I think an understanding of the problems in the tech is vital to this.

Apologies if the language is clunky. I’m trying to get at the fact that I have a lot of technical info about how many of these algorithms work and can make a reasoned guess about how they translate into real-world use, and I’m now trying to find out how they work in practice for two reasons: a) to see if there are any effects (good or bad) I’ve missed from a technical analysis, and b) to see if any claimed functions are not happening in reality.

Yeah it’s the companies/councils/coders who are at fault but the digitisation of the welfare state is not going away, so the more we understand how it works the more we can challenge its flaws.

Va1der - 21 April 2021 09:51 AM

I’m with Gareth on this one. As most computer scientists would point out: humans frequently make more errors than computers. On the flip side, algorithms will never be perfect, so there will inevitably be a need for a human element.
Given the field you are conducting research in I assume you’re already conscious of this.

I do think that language like “on-the-ground impact these algorithms are having on people’s lives” and ” experience first hand of advising people fighting against these digital systems” is counterproductive as it takes the focus away from the people that are implementing these systems, and who are ultimately at fault.


Mike Hughes
forum member

Senior welfare rights officer - Salford City Council Welfare Rights Service


Total Posts: 3138

Joined: 17 June 2010

JHurfurt - 21 April 2021 10:39 AM

Hi Mike,

I’m not an expert on digital banking, but from my knowledge of general risk/fraud algorithms I think their computer will have decided some transaction you made was risky, which led to your card being blocked: the algorithm flags transactions that share characteristics with previous transactions flagged as fraudulent (new place, high amount, etc.). Not a perfect explanation but it’s my understanding. It’s not great if they use a text as a check when they don’t have your number either!!

Actually that was the whole point. The lack of transparency in the algorithm meant that the humans tried to guess what it had done but every guess could be refuted and they literally had no idea. All the “common characteristics” they tried to pin it on fell flat (see my responses in my post) and they had to admit they’d no idea.

Translating that into the detection of actual fraud throws up issues people really don’t want to engage with. As you say, algorithms are not inherently evil, but the lack of transparency, regulation and accountability is terrifying. We have picked up some of the areas where DWP are using algorithms for various sifts, but only because they’ve told us about them. They have no requirement to disclose; no requirement to explain how it works; no regulation; and thus there is no opportunity, other than through the kind of work you’re looking at, to say “hey, this appears to have an inherent racial, sexual or other bias which skews outcomes - can we alter the algorithm please?”
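Even something as crude as comparing flag rates across groups would be a starting point, if the underlying data were ever disclosed. A purely illustrative sketch with invented data (nothing here comes from DWP):

# Illustrative only: compare how often a hypothetical algorithm flags each group.
from collections import defaultdict

def flag_rates(records):
    """records: iterable of (group, was_flagged) pairs - invented data."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / total[group] for group in total}

sample = [("group_a", True), ("group_a", False), ("group_a", False),
          ("group_b", True), ("group_b", True), ("group_b", False)]
print(flag_rates(sample))   # roughly {'group_a': 0.33, 'group_b': 0.67}
# A large gap is a prompt to ask questions about the algorithm, not proof of bias by itself.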

As it stands we only “know what we know”. We don’t know what we don’t know. To what extent are DWP using such things which have yet to be detailed?

[ Edited: 21 Apr 2021 at 11:58 am by Mike Hughes ]
Paul_Treloar_AgeUK
forum member

Information and advice resources - Age UK


Total Posts: 3196

Joined: 7 January 2016

Gareth Morgan - 21 April 2021 09:23 AM

I’m not sure where the idea that the problem is the ‘algorithm’ comes from. That’s just the implementation of a, presumably, human-designed process. It seems to be creating an excuse to evade responsibility.

I think there are a great many concerns, and very valid concerns, about algorithmic bias, Gareth - see for example Why algorithms can be racist and sexist

Mike Hughes
forum member

Senior welfare rights officer - Salford City Council Welfare Rights Service


Total Posts: 3138

Joined: 17 June 2010

And this is an essential read too.

https://cyber.harvard.edu/story/2019-04/facial-recognition-plutonium-ai

Yes, an algorithm is, as Gareth puts it, a “human designed process”, but the need for it to have an explicit purpose, transparency, accountability and regulation is absolutely clear. Without those you simply replace a biased human with biased technology, and of course that is exactly what the likes of DWP et al have been rushing headlong into doing.

Gareth Morgan
forum member

CEO, Ferret, Cardiff


Total Posts: 1995

Joined: 16 June 2010

Mike Hughes - 21 April 2021 11:53 AM

As it stands we only “know what we know”. We don’t know what we don’t know. To what extent are DWP using such things which have yet to be detailed?

True, but that makes this study more difficult, doesn’t it? 

They say “we’re looking to speak to people who have been affected by the digital tools that are part of the welfare state at a local level”.  How do people know that they have been affected? Similarly what are the algorithms referred to in “find out more about the on-the-ground impact these algorithms are having on people’s lives”?

At the simplest level, an algorithm that says: IF there_is_one_relevant_child THEN maxUC = maxUC + 1st_child_element is going to affect a lot of people but would be fairly easy to identify where there are problems.

One that says: investigate_for_fraud IF zero_earnings_over_two_years AND left_UK_more_than_twice_in_previous_year AND capital_greater_than_£5000 AND has_Sky_Subscription AND buys_Aunt_Bessies_Yorkshire_puddings is going to make it very difficult to identify the people affected by it.
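For what it's worth, written out as code those two toy rules might look something like the sketch below - every name and condition is hypothetical, as in the examples above:

# Hypothetical rules only, mirroring the toy examples above.
def max_uc(base_max_uc, relevant_children, first_child_element):
    """Simple, easy-to-audit entitlement rule."""
    if relevant_children >= 1:
        return base_max_uc + first_child_element
    return base_max_uc

def investigate_for_fraud(claimant):
    """Opaque compound rule - every condition is invented for illustration."""
    return (claimant["earnings_over_two_years"] == 0
            and claimant["times_left_uk_last_year"] > 2
            and claimant["capital"] > 5000
            and claimant["has_sky_subscription"]
            and claimant["buys_aunt_bessies_yorkshire_puddings"])

print(max_uc(300.0, 1, 290.0))   # 590.0 - visible to anyone checking an award
print(investigate_for_fraud({"earnings_over_two_years": 0,
                             "times_left_uk_last_year": 3,
                             "capital": 6000,
                             "has_sky_subscription": True,
                             "buys_aunt_bessies_yorkshire_puddings": True}))   # True, invisibly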

What is actually being looked for?  In the latter example you might find two people who have been investigated, because they meet the conditions, but also both work for the NHS, or belong to a political group.  It’s going to be too easy to infer the rules wrongly without the detailed knowledge.  Make the rules explicit, on the other hand, and people stop buying Yorkshire puddings.

[ Edited: 22 Apr 2021 at 12:35 pm by Gareth Morgan ]
Mike Hughes
forum member

Senior welfare rights officer - Salford City Council Welfare Rights Service


Total Posts: 3138

Joined: 17 June 2010

Love the underscores there Gareth. That’s the techy in you coming out :)

I think FWIW that the focus here has to start with 2 things:

1 - identifying the groups likely to be impacted by the algorithms we do know about
2 - awareness raising amongst advisers to enable better identification of scenarios where it looks like an algorithm could be doing the heavy lifting.

JHurfurt
forum member

Head of Research and Investigations, Big Brother Watch, London


Total Posts: 7

Joined: 19 April 2021

I agree we don’t know everything the DWP are doing, but I’ve been working on this research for 6 months so I have a good idea of some of the things that are going on. Briefly:

- DWP risk-modelling of housing benefit recipients on a regular basis to predict who poses the highest fraud/error risk due to a change of circumstances
- Around 1/3 of local authorities risk-score people who receive housing benefit and council tax support when they apply, using privately-developed algorithms that are kept very secret
- Around 1/3 of people in social housing have their rent payments analysed to predict whether they will keep paying their rent
- Some fairly large local authorities use bigger predictive systems that can model who is at risk of homelessness (Newcastle, Maidstone, Cornwall, Croydon, Haringey) and others use similar systems to model children at risk of harm (Hillingdon, Bristol) and others still can model general financial vulnerability (Barking & Dagenham)

You’re right that a lot of people won’t know they’ve been affected, because a lot of these systems rely on claims of legitimate interest rather than active consent to process data, which is why our call-out is quite wide. I think this is partly the issue: the shift from humans making decisions to computer systems, often without the knowledge of the people whose data is fed into them.

Some people may be aware and decide they want to share the impact, but others may be interested in how their data is (or isn’t) used, and BBW will use our existing work to tell people, to the best of our knowledge, what algorithms are in use in their areas and to help them find this out, if they want to. That’s why my call-out is wide/vague - as the range of algorithms is!

Our hope is twofold. Firstly, to get a sense of the general awareness of automated data processing (as if it’s low, that’s not great). Secondly, because data rights from the DPA 2018/GDPR entitle people to know how their data is processed, those rights could be used to understand in greater detail the functionality and flaws in these algorithms in a way other methods don’t allow for, if someone chooses to share their data for our research. From what I’ve found the IF/OR models aren’t that common and it’s more complex propensity-style modelling in use, which is much harder to hold accountable for bias/error (see the rough sketch below). Hope that makes sense?
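To show what I mean by propensity-style modelling, as opposed to explicit IF/AND rules, here is a toy sketch - the features, data and model choice are entirely invented and not taken from any real system:

# Invented example: a propensity-style risk score learned from past outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are claimants, columns are arbitrary made-up features.
X = np.array([[0, 3, 1],
              [1, 0, 0],
              [0, 2, 1],
              [1, 1, 0]])
y = np.array([1, 0, 1, 0])        # 1 = previously flagged as "high risk"

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0, 2, 1]])[0, 1])   # a risk score, not a stated reason
print(model.coef_)   # weights exist, but they don't give a claimant a rule to challenge

With a rule like Gareth's you can at least ask which condition you supposedly met; with a learned score the only honest answer is a number, which is the accountability problem in a nutshell.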

Gareth Morgan - 22 April 2021 12:30 PM
Mike Hughes - 21 April 2021 11:53 AM

As it stands we only “know what we know”. We don’t know what we don’t know. To what extent are DWP using such things which have yet to be detailed?

True, but that makes this study more difficult, doesn’t it? 

They say “we’re looking to speak to people who have been affected by the digital tools that are part of the welfare state at a local level”.  How do people know that they have been affected? Similarly what are the algorithms referred to in “find out more about the on-the-ground impact these algorithms are having on people’s lives”?

At the simplest level, an algorithm that says: IF there_is_one_relevant_child THEN maxUC = maxUC + 1st_child_element is going to affect a lot of people but would be fairly easy to identify where there are problems.

One that says: investigate_for_fraud IF zero_earnings_over_two_years AND left_UK_more_than_twice_in_previous_year AND capital_greater_than_£5000 AND has_Sky_Subscription AND buys_Aunt_Bessies_Yorkshire_puddings is going to make it very difficult to identify the people affected by it.

What is actually being looked for?  In the latter example you might find two people who have been investigated, because they meet the conditions, but also both work for the NHS, or belong to a political group.  It’s going to be too easy to infer the rules wrongly without the detailed knowledge.  Make the rules explicit, on the other hand, and people stop buying Yorkshire puddings.

JHurfurt
forum member

Head of Research and Investigations, Big Brother Watch, London


Total Posts: 7

Joined: 19 April 2021

When we publish our work I think there will be some elements that will be of help to advisers to identify when algorithms have a hand - and some findings on which groups may be affected, so I really hope it will have a positive impact there too!

Mike Hughes - 22 April 2021 01:00 PM

Love the underscores there Gareth. That’s the techy in you coming out :)

I think FWIW that the focus here has to start with 2 things:

1 - identifying the groups likely to be impacted by the algorithms we do know about
2 - awareness raising amongst advisers to enable better identification of scenarios where it looks like an algorithm could be doing the heavy lifting.