In her 2015 book Reclaiming Conversation, Sherry Turkle describes ‘one of the most gut-wrenching moments in my then fifteen years of research on sociable robotics’ – a moment which, she writes, changed her mind after years as an enthusiastic advocate of the emancipatory potential of connected technologies:
One day I saw an older woman who had lost a child talking to a robot in the shape of a baby seal. It seemed to be looking in her eyes. It seemed to be following the conversation. Many people on my research team and who worked at the nursing home thought this was amazing. This woman was trying to make sense of her loss with a machine that put on a good show… I didn’t find it amazing. I felt we had abandoned this woman … It seemed that we all had a stake in outsourcing the thing we do best – understanding each other, taking care of each other … It is not just older people that are supposed to be talking. Younger people are supposed to be listening. This is the compact between the generations… [instead] we build machines that guarantee human stories will fall upon deaf ears.
It’s a striking example of just one of the ways in which the glittering offerings of Big Tech are eating into the social contract of modern welfare states. Of how they’re taking over, and altering, the functions of the state – from benefits and legal services to education, health and social care. Activists, unions and academics have begun to ring separate alarm bells about privacy and, to a lesser extent, about automation and exclusion – but unless you look at the whole picture, something vital is missed, as we move dizzyingly fast into an era of ‘ubiquitous computing’. From robotics, to algorithmic decision-making, to state- and self-monitoring, something huge, and largely corporate-driven, is happening to our relationships with public services. And up until now, we’ve had very little debate about it.
Technosalvationism
Technophile politicians and tech corporations make big promises about the ways in which connected technology will help the state solve a wide range of social challenges. And nowhere more so than in health and care, where we’re told that digitalisation will do everything from ‘empowering’ patients and getting us off the couch, to curing cancer and saving the National Health Service (NHS).
In what he described as his ‘most important speech’, England’s then health secretary Jeremy Hunt declared in 2015 that ‘The future is here … 40,000 health apps now on iTunes … this is Patient Power 2.0.’ Soon after, Hunt suggested (in the midst of a junior doctors’ strike) that parents worried about a child’s rash could turn to ‘Doctor Google’. Hunt’s successor, Matt Hancock, declared in the foreword to a TaxPayers’ Alliance report last year that ‘tech transformation in the NHS’ is the ‘only way’ the organisation can now meet people’s needs. The report itself claimed that automation could save the NHS and social care sector £18.5 billion a year by, among other things, using ‘Care-O-bots’ to entertain and feed patients. Within five years, it went on, ‘technology should be advanced enough to provide help with a wider range of physical tasks including dressing, eating, and toileting [and] be able to understand user preferences, intentions, and emotions.’
Ali Parsa, founder of the digital health company and NHS contractor Babylon, claims that Babylon’s products ‘will shortly be able to diagnose and foresee personal health issues better than doctors’. Its mission, he says, is ‘to make healthcare affordable and accessible to every human being on Earth … We want to do with healthcare what Google did with information’. The company has attracted substantial investment from US giant Centene and, appropriately, from the founders of DeepMind, an artificial intelligence company owned by Google’s parent, Alphabet Inc.
We need to look beyond such technosalvationism and ask not only whether these products really work, but other vital questions. Who is really benefitting from Big Tech’s increasing involvement in public services like health and care? Will it save money? Can it overcome the ‘productivity paradox’ that besets IT innovation – the long-observed tendency for heavy spending on technology to deliver little or no measurable gain in worker productivity? Who gets a say in the transformation that is underway? How will it change our relationships with each other, and with the workers the state employs or funds to look after us? And finally, how does the transformation affect our perception of the state itself?
The future is here
In the past couple of years, a string of tie-ups between England’s National Health Service and Big Tech has been unveiled. England’s NHS, which is run independently of those of Scotland, Wales and Northern Ireland, has signed a deal with Amazon to provide health advice via the Alexa digital assistant, allowing the tech company to use the data generated from NHS voice enquiries to develop its own products – although, importantly, it will not be allowed to create individual health records of its users. Amazon appears to have big plans for its healthcare services: in the US, it is already moving into prescription services and healthcare plans. NHS England has also signed a series of framework contracts with IBM and other US IT giants to develop electronic health records as well as services termed ‘patient empowerment’, ‘demand management’ and ‘transformation’.
Matt Hancock, the health secretary, has repeatedly promoted Babylon’s offerings of online NHS consultations – despite concerns from general practitioners that this service is aimed at healthy patients and takes NHS money away from those in most need, many of whom are explicitly advised that Babylon is not suitable for them. The company also provides a so-called ‘AI’ symptom-checker chatbot to millions of Londoners via the NHS 111 non-emergency hotline service. And NHS Trusts have continued to roll out DeepMind’s Streams app, although the company had to change how it handled NHS data after a 2016 deal that gave DeepMind access to 1.6 million patients’ medical records was found to have breached the Data Protection Act.
All of these deals have been sold on big promises – promises that sometimes yield disappointing results. A public-private partnership between a Danish public healthcare provider and IBM, built around the company’s Watson AI system, folded at the end of 2018, with a former official involved in the negotiation saying: ‘It was very oversold, what IBM Watson could do. There is something of the emperor’s new clothes about it.’
If it’s free (or tax-funded), you’re the product
What public debate these deals have provoked has been mainly about privacy. And indeed, this is where we see the first – massive – proposed shift in the social contract. Tim Kelsey, the NHS’s former data and tech ‘tsar’, made waves a few years ago when he suggested that allowing access to our health data was the price we pay for access to the NHS. Kelsey wrote an article in which he said that ‘no one who uses a public service should be allowed to opt out of sharing their records’. Later he told MPs that if too many people opted out of his controversial flagship NHS data-sharing project, care.data, ‘we won’t have an NHS’.
Kelsey appeared to be suggesting that the old British social contract – in which paying one’s taxes, rather than sharing one’s health records, was the price one paid in exchange for being able to access health services – was to be superseded. He was unable, however, to explain exactly whom British people would be paying this new price to, leaving the door open for private corporations to gain access to ever greater amounts of sensitive patient data.
Whilst Big Tech’s impact on our privacy has come under scrutiny, and debates continue about who this data should ultimately belong to, less examined are two further shifts in the social contract, particularly in health, and how these are enabled by Big Tech.
The behavioural turn
For years now, governments around the world have been making public services, notably welfare benefit systems, more conditional on recipients proving they ‘deserve’ them. It’s a policy sometimes labelled ‘conditionality’ or the ‘behavioural turn’. Big Tech has played a growing role in conditionality in recent years: the UK government’s flagship Universal Credit benefit reform is the first major government service to be ‘digital by default’. The result has been a well-documented landslide of suffering for those seeking and receiving benefits.
Now conditionality is on its way into England’s NHS as well. Health secretary Matt Hancock’s recent pronouncements suggest that our social contract with the NHS has changed in another significant way. Aside from perhaps being expected to give up our data in exchange for our healthcare, it seems we are now also expected to look after ourselves in return. Of course, we should all ‘take responsibility for our own health’, ideally. But our ability to do so is not purely a matter of freely made individual choices. It’s also, as most public health professionals agree, socio-economically determined. As a result, government regulation on everything from working time to green spaces influences our ability to keep ourselves healthy.
Already, health services are becoming more conditional, with some NHS areas imposing blanket bans on all routine operations for smokers and obese people. Doctors’ leaders have condemned such policies, but right-wing British newspapers and think tanks are working to shift the British public away from its commitment to universal healthcare and towards greater conditionality.
How might these two burgeoning conditions on accessing healthcare – release of health data and ‘deserving’ behaviour – begin to intersect? Particularly in a tech-saturated world where ever greater amounts of our health-related data are collected? When every time we search online for information about health concerns, or sign up (sometimes at the suggestion of our doctor or employer) to apps and devices that self-monitor our diet, exercise, sleep and more, we leave behind a data trail?
Already, several think tanks have recommended that the NHS collect data on what people buy in supermarkets and on their gym attendance, and reward those with healthy behaviours with everything from vouchers to a place at the front of the NHS queue. The growth of self-monitoring (known as the ‘quantified self’ movement) may also have the effect of making the more ‘health-engaged’ amongst us somewhat smug – and perhaps less sympathetic to our more resistant fellow citizens.
In fact, there’s pretty limited evidence that apps and wearables that promise to change our health behaviour actually work. One of the longest-term studies to date found that people who used fitness trackers (alongside other weight-loss interventions) actually lost less weight than those who didn’t. Another study found that these devices don’t encourage walking amongst people who actually need to walk more, and yet another found that use of such trackers is associated with eating disorders.
The Data Justice Lab highlights the trend towards ‘data neoliberalism’ and the ‘fundamental personalization of risk’, whereby ‘attaching risk factors to individual characteristics and behaviour … can lead to individualized responses to social ills being privileged over collective and structural responses, such as issues of inequality, poverty or racism.’
Computer says no (or yes)
Who and what should be prioritised in healthcare is also under challenge from digital health. Doctors report a rise in digitally-driven demands from the ‘worried well’, with people visiting them after an app has suggested they have an issue with their heart rate (when they haven’t), or contacting them via an app for minor, self-resolving niggles that probably wouldn’t have prompted a visit to the surgery. ‘Protocol-driven diagnostic pathways’ send patients more readily to emergency departments and doctors’ surgeries than trained clinicians would, according to GP and author Margaret McCartney. Overtreatment is as much a concern as the more publicised potential for undertreatment when digital symptom-checkers miss serious conditions.
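To see why such pathways skew towards escalation, it helps to sketch how a rule-based triage protocol works. The following is a deliberately crude, hypothetical example in Python – not the logic of any real symptom-checker – but it illustrates the structural problem: every rule must cover the worst case, and no rule can weigh context the way a clinician does, so the safe default is always to escalate.

```python
# A hypothetical, deliberately conservative rule-based triage protocol.
# Illustrative only; all symptom lists and dispositions are invented.

RED_FLAGS = {"chest pain", "shortness of breath", "severe bleeding"}
AMBER_FLAGS = {"fever", "rash", "headache", "palpitations"}

def triage(symptoms: set[str]) -> str:
    """Return a disposition for a set of reported symptoms."""
    if symptoms & RED_FLAGS:
        return "go to A&E now"            # any red flag escalates immediately
    if symptoms & AMBER_FLAGS:
        return "see a GP within 24 hours"  # any amber flag books an appointment
    return "self-care advice"

# A self-resolving viral illness still gets a GP appointment:
print(triage({"fever", "headache"}))   # -> see a GP within 24 hours
# Anxiety-driven palpitations after an app's heart-rate alert:
print(triage({"palpitations"}))        # -> see a GP within 24 hours
```

A clinician seeing the same fever-and-headache patient can ask follow-up questions and reassure; the protocol, which cannot, refers.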
NHS-offered apps, like Babylon’s, also include ‘in-app purchases’ (for tests, for example) – a departure from the NHS’s single-payer model. Will users become accustomed to ‘paid-for’ services via the novelty of digital delivery? Sociologists of tech have long told us that people will accept things presented as tech innovations that they would not accept if presented as political choices.
As UN Special Rapporteur on Poverty Philip Alston says in a devastating report on the digital welfare state (mostly looking at benefits policies), these shifts towards policies of ‘digital by default’ are presented as merely ‘administrative’ and indeed ‘inevitable’ decisions, rather than political choices. But political choices (as he makes clear) they are. The question is: whose?
‘If you only have a hammer, then everything looks like a nail’
Poorer and older people (those most reliant on public services) are far more sceptical than the general population about the benefits of digitalised public services. And whilst most people are happy to conduct simple transactional relationships with the state online (making a payment or booking an appointment, for example), there’s a lack of evidence that most people actually want their deeper connections with the state to be extensively digitalised.
There’s clearly a tension between the tendency to suggest, on the one hand, that digitalised services are more ‘responsive’ and ‘personalised’ and, on the other, that resistance to them is not a preference to be met, but merely a ‘barrier’ to be overcome. Other concerns need to be heard.
That poorer people’s concerns are greater is perhaps not surprising – they already have more experience of how, in Alston’s words, ‘systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish’. Alston warns too that ‘Big Tech operates in an almost human rights free zone, and that this is especially problematic when the private sector is taking a leading role in designing, constructing, and even operating significant parts of the digital welfare state’.
Poorer people are also more likely to have complex requirements that don’t fit neatly into a digitalised workflow. And they are more likely to feel lonely (in itself, a serious risk factor for poor health) and in need of more human interaction: poorer children in the UK are twice as likely as their richer peers to report feeling lonely and isolated. The study behind that statistic also suggested that the internet itself was contributing to a sense of isolation and loneliness amongst children – and other influential studies have found correlations between internet use and loneliness. Already, children as young as five may be prescribed ‘apps’ for depression, with particularly widespread use of mental health apps to manage patients on waiting lists (who then may no longer show up so obviously on waiting lists).
Even our most supposedly ‘transactional’ interactions with healthcare, such as booking an appointment via an app, or checking in via an electronic kiosk to a surgery or hospital appointment, might not be the ‘no-brainers’ that tech advocates suggest. On a recent visit to my local district hospital, as I approached the receptionist, a girl of about ten came in behind me with her dad, clutching her arm in pain, looking miserable. I stood aside, of course, and observed as the receptionist asked the girl, in kind tones, her name, address, and the other details required on a form. In the receptionist’s experienced hands, the predictably bureaucratic questions were turned into a soothing ritual. ‘Are you married?’ she asked with a twinkle, eliciting a giggle. The girl sat down to wait for treatment looking considerably happier than she had on arrival. She was in safe, human care. No doubt, sometimes, dealing with a computer is all we need. But not always.
The shifts underway would matter less were investment in digitalisation not squeezing out investment in other areas – if digital options were genuinely just one option, rather than being offered as a balm for cuts in face-to-face services; if the NHS and social care sector were not using digital products to justify a lack of commitment to preserving local hospitals and face-to-face clinics.
When surveys do ask people, perhaps more honestly, whether they support a trade-off between greater access to remote, digitalised services and reduced access to physical services in their community, support for digital services drops off a cliff – but such survey questions are sadly rare.
The use of privately provided digital tech to justify cuts to publicly provided face-to-face health services is a phenomenon I’ve become increasingly alarmed about throughout my seven years of writing about, and campaigning on, healthcare cuts and privatisation. It’s a concern shared by those looking closely at developments in education and ‘EdTech’, too. Healthcare plans in my local NHS are typical. They call for heavy investment in the IT ‘architecture’ to underpin a significant expansion of digital health services, even as they acknowledge that the benefits of those services are still hard to discern – and even as small district hospitals cut back their hours and services.
‘Community-based’ healthcare is increasingly taken to mean healthcare over a telephone line or via a connected device. ‘Preventative’ healthcare is increasingly taken to mean digital nudges that (the plans optimistically tell us) will reduce ill health, for example by getting men exercising as much as women. Admittedly, not many are quite as explicit about the magical thinking behind all this as the Liverpool NHS manager who suggested that this kind of development means that ‘we do not need hospitals’. But the fact remains that hospital bed numbers in England have dropped substantially in recent years, from an already relatively low base.
All this would matter less if there were good evidence that, despite resistance in some quarters, digitalisation was a cost-effective and proven solution to healthcare challenges. But that evidence is remarkably thin on the ground. A recent overview in the journal Nature found ‘very low quality evidence’ for health apps overall. The World Health Organisation has highlighted how some developing countries are in danger of buying snake-oil ‘digital health’ products – but developed countries are in exactly the same danger. There’s a lack of ‘robust and independent trials’ to test the effectiveness of healthcare apps. The earlier version of the NHS ‘Apps Library’ (globally cited as an apparent badge of approval) carried 40,000 apps, but a study of the mental-health related apps, for example, showed that 85% of them provided no reliable evidence of effectiveness whatsoever (despite industry assertions that depression, along with diabetes, is one of the areas with the best quality of evidence for digital health). The UK’s drugs regulator has since published a set of criteria for inclusion in the NHS Apps Library, which ‘stays deliberately vague on the exact research designs’ required, according to the Journal of mHealth. It does not demand randomised controlled trials (RCTs). The EU itself has given up on its attempt to come up with clear guidelines to assess healthcare apps.
A revised EU Directive to better regulate some digital medical devices is due to come into force this year, alongside ongoing EU work on data privacy. It currently looks somewhat unlikely that either will apply in Britain in the long term, though, given the government’s stated aims of deregulation, of ‘lead[ing] the world on healthtech’, and of signing a trade agreement with the US. NHS data has been valued at £10bn, and one of the key US trade demands is that there be no obstacles to that or any other data flowing freely to the US. Alston slams governments for ‘abandon[ing] their regulatory responsibilities’ towards Big Tech, and says there is ‘no justification’ for what The Lancet terms ‘digital exceptionalism’ over the need for both evidence and regulation of digital products.
Understandably, public sector managers end up convincing themselves that what they have is a problem that can be solved by more information and communication technology. Funding-wise, that is often all that’s on offer (many tranches of NHS ‘extra’ cash, including its recent ‘birthday gift’, have techno-strings attached), even if what those managers really wish for when they blow out their own birthday candles is more staff. Meanwhile, in an era of austerity, digitalised public services lessen the ‘choice burden’ on the workers who gatekeep access to public services, either by guiding them through decision-making or by removing them from it altogether.
‘Automated systems of state-corporate decision-making’
In this context, there’s no shortage of tech corporates branching out into health consultancy, and health corporations rebranding themselves as information companies. For example, Optum’s data-driven products are being used in the NHS to remodel healthcare, allocate funds and workers, influence decisions about referrals and prescriptions, and predict which will be the most ‘expensive’, highest-‘risk’ patients. Palantir is another company that has supplied the NHS with technology to ‘support … patient/citizen risk scoring and stratification’.
These kinds of predictive analytics are often sold as being about prevention (whether in health, criminal justice or at-risk children). But Virginia Eubanks (‘Automating Inequality’) and Cathy O’Neil (‘Weapons of Math Destruction’), alongside Cardiff’s Data Justice Lab, have highlighted concerns about the extent to which ‘automated systems of state-corporate decision-making’ are closed to scrutiny and, in particular, about their potential for discrimination and hidden replication of human bias.
Optum recently had to apologise when its US algorithm was found to show dramatic biases against black patients. Essentially, the data reflected the fact that less money had historically been spent on black patients’ healthcare; the algorithm, trained on that data, concluded that black patients were healthier – and so needed less care – than equally sick white patients. There is no suggestion that the same bias applies to Optum systems in use in the UK. But amongst British healthcare campaigners there is the wider concern that systems of stratification honed in a US context are being applied alongside other proposed changes that begin to introduce more insurance-like elements into the English system.
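The mechanism generalises to any risk score trained on biased data: if historical spending is used as a proxy for medical need, the score silently bakes in every historical inequity in spending. The following is a minimal, hypothetical simulation of the effect – illustrative only, and not Optum’s actual model (the real-world case was documented by Obermeyer et al. in Science, 2019) – with all numbers invented:

```python
# Hypothetical sketch of proxy-variable bias in a cost-based 'risk score'.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True underlying illness is identically distributed in both groups.
illness = rng.normal(loc=50, scale=10, size=n)
group_b = rng.random(n) < 0.5  # True = historically under-served group

# Historical spending tracks illness but is suppressed for group B
# (unequal access to care), so cost is a biased proxy for need.
spend = illness * np.where(group_b, 0.7, 1.0) + rng.normal(0, 2, n)

# The 'risk score' is predicted future cost; a model trained on spend
# alone simply reproduces the spending gap, so we use spend directly.
risk_score = spend

# Select the top 10% by risk score for an intensive care programme.
selected = risk_score >= np.quantile(risk_score, 0.9)

# The groups are equally ill, but group B is far less likely to be selected...
print("share of group A selected:", selected[~group_b].mean())
print("share of group B selected:", selected[group_b].mean())
# ...and those selected from group B are sicker at the same score.
print("mean illness, selected A:", illness[selected & ~group_b].mean())
print("mean illness, selected B:", illness[selected & group_b].mean())
```

No variable for race appears anywhere in the model; the bias arrives entirely through the choice of proxy.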
Teaching computers to think like humans – or vice versa?
Other tech products use health data to allocate staff algorithmically and to shape health workers’ workflow. But what does it mean to be an algorithmically allocated worker, or a patient on the receiving end of algorithmically mediated care?
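In the abstract, such allocation can sound innocuous. A deliberately crude, hypothetical sketch (all names and numbers invented; not any vendor’s actual product) shows where the trouble starts: whatever the model doesn’t measure is implicitly allocated zero time.

```python
# Hypothetical algorithmic workforce allocation: nurse-hours are split
# across wards in proportion to predicted 'billable' task demand.
WARDS = {
    "acute": 120.0,      # predicted patient-hours of clinical tasks per shift
    "geriatric": 80.0,
    "maternity": 60.0,
}
TOTAL_NURSE_HOURS = 200.0  # nurse-hours available this shift

def allocate(demand: dict[str, float], supply: float) -> dict[str, float]:
    """Distribute available nurse-hours in proportion to predicted task
    demand - and to nothing else. Teaching, listening and comforting
    appear nowhere in the inputs, so they get no time in the outputs."""
    total = sum(demand.values())
    return {ward: supply * d / total for ward, d in demand.items()}

for ward, hours in allocate(WARDS, TOTAL_NURSE_HOURS).items():
    print(f"{ward}: {hours:.1f} nurse-hours")
```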
For a glimpse into the possible future, I visited San Francisco last year and spoke to patients, nurses and doctors. One of the biggest complaints (in a healthcare system with much to complain about) was the way that digital health tech was damaging the human side of healthcare, with every interaction nudged by computerised Electronic Health Records (EHRs) designed – staff felt – not to maximise patient care, but to maximise billable procedures.
‘The EHR never mentions the time that is required to teach, listen, talk to someone, deal with grief – that’s never in the calculation’, Michelle Mohan of National Nurses United told me. Mohan is certain that we’re close to a ‘tipping point’ in healthcare that is poorly understood by patients. ‘People are thinking, well I still see a human being, not a robot, but what they don’t realise is that the human being is being controlled by technology that is being controlled by an industry that seeks ever more profit.’
‘The industry is telling nurses, “there’s so much information that you can’t possibly learn it, or understand it – let us take this cognitive burden away from you”’, said Mohan. And when nurses do question the level of care that the computer recommends for a patient, their own competence is challenged, she added.
What strikes me most is not a terrifying stateside dystopia, but how familiar all of this sounds after years of talking to NHS staff, who have similar complaints about de-professionalisation and loss of both autonomy and patient trust, as they’re guided through ‘care pathways’ and grapple with Electronic Health Records that apply (mostly notional, for now) costs to every patient interaction. The more we have this experience of sitting opposite a doctor looking anxiously not at us but at a computer screen, or calling the NHS 111 number and getting not a nurse but a call centre operative following an algorithmic script, the smaller the leap to replacing humans with computers altogether.
High touch, or hi tech?
Even before we get to that stage, the shift to remote rather than face-to-face connections through ‘digital first’ providers like Livi (which has an NHS video-consult deal covering 1.85 million patients) and Babylon, as well as the use of Skype for follow-up appointments with consultants, has raised other concerns. Many doctors doubt whether a digitally delivered interaction can really meet patient needs. One concern is whether it can allow for the ‘doorknob consultation’, where a patient, having established trust with the human in front of them, perhaps had their blood pressure and temperature taken, and a little chat about the day ahead, makes to leave, only to turn round as they reach the door, having finally plucked up the courage to raise their real worry. ‘Um, just one other thing, there’s this lump, in my, um, testicle…’
Other doctors raise concerns that even over a video connection (let alone a phone or text communication), clues can more easily be missed. Is the patient’s breathing laboured, their body language withdrawn? Do they smell strongly? Are they dishevelled, twitchy, flushed, wincing? Is there a catch in their voice or a giveaway foot tapping as they answer a probing question?
The future’s bright, the future’s Babylon?
Babylon is particularly well-positioned on the cusp of this journey from face-to-face, to remote, to ‘AI’ healthcare. In recent months Babylon has signed two ‘digital first’ tie-ups with large NHS hospital trusts, promising ‘remote access to GPs and hospital specialists’, ‘live monitoring of patients’, ‘personalised care plans underpinned by Babylon’s Artificial Intelligence’, as well as digital rehab and an ‘AI Health Assistant, which gives users medical information and triage advice’.
A leaked NHS-commissioned draft report evaluating Babylon’s ‘AI’ symptom-checker trial (and three similar trials elsewhere in England) suggested smartphones could become ‘the primary method of accessing health services’ and that apps would replace people for up to a third of NHS helpline calls within the next couple of years.
Babylon has global ambitions, and its work with the internationally trusted NHS brand is a big part of its sales pitch. Boris Johnson’s chief aide Dominic Cummings was an adviser to Babylon shortly before Johnson entered Downing Street. One of Johnson’s first acts on becoming Prime Minister was to announce a £250m fund to boost the use of AI in the NHS.
For a radical technology approach
Even at its best, cultural critique (often originating in Silicon Valley itself) of our increasingly online lives tends to focus on people as individual technology consumers, rather than on our increasingly technologically mediated relationship with the state. But Big Tech is eating into the social contract of the welfare state – not only with its insatiable demands for data, but with its shiny new products. It has the potential to diminish people’s commitment to universalism – to their relationships and solidarity with each other and with workers who are funded by the state. Ultimately, Big Tech may undermine public commitment to the (welfare) state itself, as it increasingly takes over some of its core functions – not just in health, but in education, the legal system and the military. Critiques of the privacy implications of Big Tech (often US-based) can miss the fact that in European-style social democracies, we expect more from the state than merely to be left in peace.
Public sector workers, unlike algorithms and robots, tend to demand autonomy and decent wages. Ideally, they want to exercise discretion, compassion and ethics in their handling of service users. Taking them out of the system to some degree has obvious appeal to politicians and hard-pressed managers seeking savings and standardisation amidst rising demand, as well as to free market advocates who see ethical considerations as little more than market distortions.
This has of course been going on for a while. Those who have grown up during the era of digital neoliberalism, who have been surveilled, assessed, shoved into tickboxes and league tables in every one of their interactions with the state since childhood, may be less committed to defending the welfare state as they’ve experienced it.
Digitalisation is a tool to ‘tailor’ services, we’re told. But in a context of austerity, the biggest concern is that ‘tailoring’ becomes a fancy word for ‘cutting’ – that it provides the cover for de-funding, privatisation and co-payments, and the loss of communal space, public accountability and social connection.
Digitalisation is often sold to us as an ally in the fight for better services for all. Indeed, it can and must be. But only if we are discerning about the claims made for it by states and corporations, and are in a position to put forward our own alternatives. We need a radical technology approach, going beyond traditional human rights approaches, that analyses the claims made for technology and asks whether its use is promoting emancipation and democracy. Undoubtedly, too, these new technologies have real potential upsides: Google’s breast cancer screening programme, for example, is showing some promising early results.
But I want to cite Philip Alston one last time; he concludes his devastating report for the UN on the digitalisation of welfare benefits with the following:
It will reasonably be objected that this report is unbalanced, or one-sided, because the dominant focus is on the risks rather than on the many advantages potentially flowing from the digital welfare state. The justification is simple. There are a great many cheerleaders extolling the benefits, but all too few counselling sober reflection on the downsides.