There are many yardsticks by which hospitals can be measured, but few are cited as often, or as damningly, as lengthy waiting times, and no wonder. There is a substantial body of evidence showing the impact they have on treatment outcomes and mortality rates. Although Covid-19 has brought its own challenges in this area, long waiting times for treatment are nothing new. Such waits are bad news for patients and, in turn, for healthcare providers, with the NHS incurring hefty fines for missed targets and increased operating costs identified in the US. So why does the issue persist?
For many involved in service planning, the issue, or at least the most practical solution, is appropriate patient flow. They describe a chain effect in which slow discharge processes on treatment wards delay transfers out of emergency beds, which in turn drives up emergency waiting times. Similarly, long waits for elective surgery can lead to deterioration in health that ultimately creates its own pressures elsewhere in the system. For organisations like the NHS Institute for Innovation and Improvement, the problem is one of bottlenecks rather than capacity, and the solution is to manage those bottlenecks. It is here that patient flow automation tools come in.
To clarify some details early on, patient flow automation sits on a spectrum. The vast majority of these tools and systems in use today concern the ready transfer of relevant data from wards to central operating ‘command centres’. At their most basic, they may be little more than electronic health records that can be accessed remotely by a bed manager. At the more complex end, there is a burgeoning field of research into AI that can assist in predicting and managing system demand and in supporting clinical decisions to admit or discharge.
Adoption of Automation
In the United States, the history of patient flow automation goes back further than you might expect. At the Carilion Clinic, a network of hospitals and healthcare settings in Virginia, they can trace their adoption back to 2004. Their problem at that time, as they put it, was that they ‘simply had more patients than beds’. Their solution was to build a central command centre from which all transfers and discharges could be managed. They say that centralising and simplifying the decision-making enabled a 40% increase in transfers to secondary campuses. Today the service provides dispatchers for the ambulance fleet and three helicopters, runs environmental services, oversees clinical transport operations, and integrates tightly with utilization management nurses.
Since then, the practice of using automation for centralised patient flow management has become widespread, and not just in the United States. In the UK, InTouch’s Flow Manager now claims to process over 40% of UK outpatient appointments. Elsewhere, extraMed’s trial to improve waiting times at Luton and Dunstable NHS Trust was claimed to have saved the trust £4 million in a single year. Through the pandemic, the Mayo Clinic has formed dedicated task teams to project hospital overloads in the US. Although these systems differ, the central premise is that discharge and transfer information is available in real time to a centralised control point. Information entered onto the dashboard by nurses on a ward is immediately available to those managing admissions and transfers, giving them an indication of upcoming availability.
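To make that shared premise concrete, here is a minimal sketch of the idea in Python. All the names (BedEvent, CommandCentre, the status values) are hypothetical illustrations, not any vendor’s actual API: ward staff record bed-state events, and a central point can immediately query upcoming availability.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class BedStatus(Enum):
    OCCUPIED = "occupied"
    DISCHARGE_PENDING = "discharge_pending"
    CLEANING = "cleaning"
    AVAILABLE = "available"


@dataclass
class BedEvent:
    """A single status update entered by ward staff."""
    ward: str
    bed_id: str
    status: BedStatus
    timestamp: datetime = field(default_factory=datetime.utcnow)


class CommandCentre:
    """Central view of bed state, updated as ward events arrive."""

    def __init__(self) -> None:
        self._beds: dict[tuple[str, str], BedEvent] = {}

    def record(self, event: BedEvent) -> None:
        # Latest event per bed wins; in a real deployment this would be
        # a message queue feeding a shared dashboard, not a local dict.
        self._beds[(event.ward, event.bed_id)] = event

    def upcoming_availability(self) -> list[BedEvent]:
        """Beds that are free now or expected to free up soon."""
        return [
            e for e in self._beds.values()
            if e.status in (BedStatus.AVAILABLE, BedStatus.DISCHARGE_PENDING)
        ]


centre = CommandCentre()
centre.record(BedEvent("Ward 7", "B3", BedStatus.DISCHARGE_PENDING))
centre.record(BedEvent("Ward 7", "B4", BedStatus.AVAILABLE))
print([f"{e.ward}/{e.bed_id}: {e.status.value}" for e in centre.upcoming_availability()])
```

The design point is simply that a single, continuously updated view replaces phone calls between wards; everything beyond that (prediction, prioritisation) is layered on top of this kind of shared state.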
Then there are the AI research avenues. In a recent article, Philips’ Chief Technology Officer, Henk van Houten, discussed the possibility of machine learning being used to predict bottlenecks, aid clinical projections, support discharges through the system, and even order medical supplies based on projected need. In Boston, the Beth Israel Deaconess Medical Center, in conjunction with MIT, has already begun exploring these kinds of applications.
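As a flavour of what such bottleneck prediction involves, the toy sketch below fits a deliberately naive weekday-average model to synthetic admission data (every number here is invented) and flags days where expected demand exceeds discharge capacity. Real research systems use far richer features, but the shape of the task is similar.

```python
import numpy as np

# Synthetic daily admission counts with a weekly cycle plus noise;
# this stands in for the historical data a real system would learn from.
rng = np.random.default_rng(0)
days = np.arange(365)
admissions = 80 + 15 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, 365)

# Naive predictor: expected admissions for each weekday is the historical
# mean for that weekday. Real systems would add seasonality, local events,
# referral patterns, and so on.
weekday = days % 7
weekday_means = np.array([admissions[weekday == d].mean() for d in range(7)])

capacity = 90  # hypothetical beds freed per day
for d, demand in enumerate(weekday_means):
    flag = "BOTTLENECK RISK" if demand > capacity else "ok"
    print(f"day {d}: expected {demand:.0f} admissions vs capacity {capacity} -> {flag}")
```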
Problems and Dilemmas
These tools are not without their issues, however. Even at the very human end of automation, there remains debate over the benefits and limitations of heavy centralisation. Studies in Canada have described paradoxical effects in which the local implementation of different systems creates difficulties for regional integration: there may be small local successes, but the wider picture does not change. A similar study also raised the question of whether centralisation, even when implemented well, is exclusively beneficial. It found that while patient-grouping systems favoured homogeneous populations (such as surgical patients), populations with a greater breadth of needs (e.g. frail older adults) required much more flexible models of support. The statistical analysis showed that, by virtue of admission length, 10% of patients occupy 60% of hospital beds, and pointed to the benefits of targeting those patients; yet these are likely the patients least responsive to automation.
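The arithmetic behind that concentration claim is easy to reproduce: with a long-tailed length-of-stay distribution, a small fraction of patients accounts for most occupied bed-days. The sketch below uses synthetic lognormal stays, with parameters chosen purely for illustration, to show a top decile holding roughly that share.

```python
import numpy as np

# Synthetic lengths of stay drawn from a long-tailed (lognormal)
# distribution; the parameters are illustrative, not fitted to real data.
rng = np.random.default_rng(42)
length_of_stay = rng.lognormal(mean=1.0, sigma=1.5, size=10_000)  # days

sorted_stays = np.sort(length_of_stay)[::-1]          # longest stays first
top_decile = sorted_stays[: len(sorted_stays) // 10]  # top 10% of patients

share = top_decile.sum() / sorted_stays.sum()
print(f"Top 10% of patients account for {share:.0%} of total bed-days")
# With these synthetic parameters the figure lands near the 60% cited above.
```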
These studies also raised the question of whether removing capacity as a consideration is helpful. The NHS Institute for Innovation and Improvement’s assertion that demand doesn’t outstrip capacity, because demand rises and falls, does not appear to reflect the way waiting lists in the UK grew pre-Covid. Bottlenecks occur and service value is important, but when publicly funded organisations advise that demand does not outstrip capacity at the same time as bed numbers are being cut and waiting lists are growing, eyebrows are likely to be raised.
There is also the question of how the data is managed. There are data protection considerations around patients consenting to their information being used to inform others’ care, and even where local or national standards are clear, transnational companies may well be subject to conflicting legislation. Additionally, any large healthcare organisation that relies on a centralised electronic bed management system will need appropriate contingencies in place should those systems fail. The WannaCry attack in 2017 resulted in NHS shut-downs that cancelled thousands of appointments, with estimated costs of nearly £100 million. The more we rely on electronic systems, the more we stand to lose should they fail.
AI-Specific Issues
Healthcare AI decision-making inevitably raises the question of how much faith we are comfortable placing in a computer to make clinical decisions. Humans are not infallible, of course, but the complexity and high stakes of healthcare mean reliance on algorithms is likely to be poorly received. Accuracy varies from system to system, but it is not uncommon to see data scientists writing excitedly about 80% prediction success rates. One would feel rather less pleased if the doctor treating us advised that one in every five of their diagnoses was wrong. Of course, many advocates would stress that algorithms are already used in many aspects of modern medicine (indeed, what is modern medicine if not a series of taught algorithms?), and that these tools should be used as clinical aids rather than overrides. The issue is that this can easily create false reinforcement of incorrect assumptions and/or alert fatigue, which has at times had serious negative effects. Even if this particular issue could be circumvented and successful AI planning led to overall improvements, failures can be expected. In healthcare these will inevitably be high-profile, and healthcare is simultaneously less resilient to such failures than other industries.
On this matter, any computer system can only ever be as good as the information provided to it. The complex needs of frail older adults have already been identified, but disciplines with a high degree of diagnostic subjectivity, such as psychiatry, may pose their own challenges to accuracy. A 2021 UK paper shows that AI in admission and discharge in acute psychiatry, traditionally the highest areas of risk in the field, is being explored despite the identified inaccuracies. Psychiatric bed managers are likely to have mixed feelings on reading this report, and patients with paranoid schizophrenia are unlikely to be soothed by one AI expert’s suggestion of ‘passively collecting data from the patient’s mobile phone’. Although that should not characterise the paper or the field as a whole, the risks mean that quixotic interventions will need appropriate professional tempering.
Lastly, we come to the ethical questions for AI in patient flow. With waiting times affecting mortality, if an AI is trained to optimise the hospital’s overall outcomes, it does not take long before the trolley problem arises. The ethical dilemmas familiar from debates over self-driving cars are amplified here, to the point where loss of life is not only possible but, in many hospitals, tragically routine. Many emergency room physicians and nurses would be entitled to point out that they experience some level of this each day, but are we comfortable delegating it to a machine? Patient flow has a dramatic effect on lives, for better and for worse, so if it is delegated to an AI, should that AI ‘swear’ the Hippocratic oath? And if it does, can it still perform the role we are asking of it? The vast majority of ethical guidance on AI refers to the treatment of the individual, but patient flow brings together all manner of extra considerations.
Final Thoughts
Lengthy waiting lists for healthcare access are in no-one’s interest, and cost-effective interventions that can reduce them will always merit consideration. Well-implemented patient flow automation tools can have, and have had, clear positive effects on access to healthcare. Yet seemingly well-designed systems can come into conflict with one another if they are not implemented systemically, and not all population groups benefit from every approach.
The benefits of AI interventions will need to be weighed against legal requirements, and successes in one area should be extrapolated with caution. HealthTech has always necessitated extremely close collaboration between developers and practitioners, but patient flow is a much riskier area than it may first appear, and special attention may be warranted. Those same risks create something of an ethical quagmire that will need serious consideration locally, nationally, and internationally. Even where the numbers are in its favour, we may find that we are much more comfortable leaving such decisions to human fallibility.