Machine learning can deliver better outcomes for children and families

Last updated: 09-24-2020

At Xantura, we welcomed observations by Anne Longfield, the children’s commissioner for England, in her foreword to What Works for Children’s Social Care’s recent research report on machine learning in children’s services.

While the study found that models developed by What Works to test predictive analytics were not effective, Longfield said she “firmly believe[s] that innovative uses of data – be they better analysis, sharing or recording – can unlock considerable benefits, helping local agencies make better and more effective decisions”.

We were also struck by a statistic Longfield quoted, that “there are 2.3 million children in England growing up with a vulnerable family background – far bigger than the number of children being supported by children’s social care at any time”.

Having initially been surprised by the report's findings, we were heartened by the caveat that, while What Works' models found "no evidence that machine learning works well in children's social care", the study did not conclude definitively that machine learning doesn't work.

Our experience across several real-life implementations is that the range of data used in predictive models is key to their performance, as is how well that data has been pre-processed before the application of machine learning techniques.

The What Works study only had access to data from children’s social care systems. It did not draw on other important sources of information, such as school attendance, exclusion or youth offending data, and neither did it consider data about the wider family situation.
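To make that point concrete, here is a minimal sketch of what joining a social-care extract with school attendance and youth-offending records into a single per-child feature table might look like before any modelling. Every table, column and value below is invented for illustration; this is not Xantura's pipeline.

```python
import pandas as pd

# Hypothetical illustration: joining a social-care extract with school
# attendance and youth-offending records into one per-child feature table.
# Every table, column and value here is invented.

social_care = pd.DataFrame({
    "child_id": [1, 2, 3],
    "open_cp_plan": [1, 0, 1],          # child protection plan open?
    "prior_referrals": [3, 1, 5],
})

attendance = pd.DataFrame({
    "child_id": [1, 2, 3],
    "attendance_rate": [0.81, 0.96, 0.74],
    "fixed_term_exclusions": [2, 0, 4],
})

offending = pd.DataFrame({
    "child_id": [1, 3],
    "yot_contacts": [1, 2],             # youth offending team contacts
})

# Left-join so every child in the social-care extract is retained, then
# fill the gaps where a child has no record in a source system.
features = (
    social_care
    .merge(attendance, on="child_id", how="left")
    .merge(offending, on="child_id", how="left")
    .fillna({"yot_contacts": 0})
)
print(features)
```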

Professionals wouldn’t make an assessment solely on this basis; rather, they would consider children and their families holistically to build an accurate picture of individual circumstances.

The What Works study developed several algorithms. We cannot make a direct comparison, as our algorithms do not look at the same outcomes, but none of the What Works algorithms could get its predictions right more than 65% of the time or identify more than 65% of cases; both figures were considered key success thresholds.

In contrast, one of our algorithms, which predicts the child protection cases that will become looked-after child cases in the next six to 12 months, predicts this accurately more than 80% of the time and identifies 77% of cases.
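For readers unfamiliar with these two kinds of figure, they correspond to the standard precision ("how often a flagged case was right") and recall ("what share of actual cases were found") metrics. A minimal sketch with made-up labels follows; the numbers are invented and do not reproduce either study's results.

```python
from sklearn.metrics import precision_score, recall_score

# Invented labels and predictions, purely to show how the two metrics
# are computed; these numbers do not reproduce either study's results.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0]  # 1 = case escalated within 6-12 months
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]  # 1 = model flagged the case

precision = precision_score(y_true, y_pred)  # share of flagged cases that were right
recall = recall_score(y_true, y_pred)        # share of actual cases that were found

print(f"precision: {precision:.0%}, recall: {recall:.0%}")
```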

One of the key differences is that our algorithm draws on a range of data sources, underpinned by robust information governance agreements. This difference in performance reinforces our argument that the systems delivering these algorithms need to enable the integration of multiple data sources, in a way that is controlled and ethical.

Another significant factor when considering the value of predictive models in children’s services is how they are used in practice. Our approach completely aligns with the statement in Longfield’s foreword to the What Works report that a “statistical model is no match for a human”. But the main report goes on to suggest that, if models could be made to work, they would be likely to disempower, rather than to support, social workers.

We are unaware of any implementation of predictive models in local government that involves automated decision-making (and we would argue that existing legislation and the Information Commissioner's Office are already effective guardians). In our own implementations, we work with councils to enable the controlled, proportionate sharing of data for use by professionals to support their decision-making. Frontline practitioners are presented with textual case summaries and trends of factual information, drawn from multiple sources.
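As a purely hypothetical illustration of the kind of output a practitioner might see, the sketch below renders structured indicators as a short textual summary; all field names and thresholds are invented, not drawn from our product.

```python
# Hypothetical sketch of rendering structured indicators as a short textual
# summary for a practitioner; field names and thresholds are invented.
def case_summary(row: dict) -> str:
    lines = [f"Child {row['child_id']}:"]
    if row["attendance_rate"] < 0.85:
        lines.append(f"- attendance has fallen to {row['attendance_rate']:.0%}")
    if row["fixed_term_exclusions"] > 0:
        lines.append(f"- {row['fixed_term_exclusions']} fixed-term exclusion(s) this year")
    if row["yot_contacts"] > 0:
        lines.append(f"- {row['yot_contacts']} youth offending team contact(s)")
    return "\n".join(lines)

print(case_summary({
    "child_id": 1,
    "attendance_rate": 0.81,
    "fixed_term_exclusions": 2,
    "yot_contacts": 1,
}))
```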

Of course, no algorithm is perfect, and we need to consider bias carefully and mitigate unintended consequences. But this is nothing new: the public sector has done exactly that when deploying 'algorithms' in the past, for example the DASH model for predicting domestic violence risk or the Youth Justice Board's model for predicting recidivism risk.
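One simple, commonly used first check for this kind of bias is to compare the model's flag rate across demographic groups. A minimal sketch with invented data:

```python
import pandas as pd

# Hypothetical bias check with invented data: compare the model's flag
# rate across demographic groups; a large gap would prompt human review.
scored = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1,   0,   1,   0,   1,   1,   1,   0],
})

print(scored.groupby("group")["flagged"].mean())
```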

We also acknowledge that we need to do more to share what we have learned while respecting client confidentiality. This is a sensitive area, and it was understandable that the councils that participated in the What Works research did so on the basis of anonymity.

Had we had the opportunity to contribute to the study, we would have shared our insight that successful use of predictive modelling also relies on wider cultural and transformational change. In Hackney, for example, where our work was discontinued in 2017, the problem was not that effective models couldn't be produced (their performance was very similar to that of the models we currently support) but that several wider issues affected their ability to be deployed effectively and sustainably.

With huge numbers of children missing from the social care system and in need of support, we look forward to continuing our work in this area, which is already producing highly effective algorithms. We, and the forward-thinking councils we work with, believe this is an important area of work with real potential to deliver benefits for professionals, families and children.

Wajid Shafiq is the chief executive of data analysis firm Xantura.

