In San Mateo, California, last week, a part-time electrician named Shane Tusch posted a hoax suicide note on Facebook, stating that he would hang himself from the Golden Gate Bridge. He later claimed he had been testing Facebook’s new suicide-prevention program.
However, a reader stumbled across the post and alerted the authorities.
Tusch was then arrested by police and held at a psychiatric institution for three days under psychological evaluation, where he says he was “denied any humane care.” The Californian is married and has two children.
Last month, Facebook launched a revamped program to flag posts that may indicate suicidal thoughts or behavior. Facebook can then contact the person and offer help or, if its reviewers believe there is an “imminent threat,” alert local police to evaluate the situation.
In Tusch’s case, he was detained immediately.
Perhaps that also represents a failure of institutional attempts to combat suicide: they are often carried out with more force than necessary. It is worth adding that attempting suicide is not a crime, and it should not be treated as one.
Tusch’s response was to point out the (in)effectiveness of social media and technology in attempting to evaluate the psychological state of a living, breathing individual. He was also quick to note that the reader who reported him was essentially “a complete stranger,” with none of the context a family member or friend would have.
This highlights the overbearing power behind Facebook’s current model, which needs further revision. Each flagged post runs through a systematic review process, after which Facebook decides how to approach the at-risk poster.
“Often, friends and family who are the observers in this situation don’t know what to do,” said Holly Hetherington, a Facebook strategist, at the time of launching the new program.
I would like to ask how a technological system of filters could really offer more than what family and friends can. A message asking the poster if they “might be going through something difficult” because a friend “thinks” so seems a bit generic, like stock photos of people with their laptops.
I’m also unsure how beneficial such a message can be, especially when the friend who flagged the post remains anonymous. Consultation with another human being is generally a wiser way of addressing potential suicide. Suicidal thoughts stem from a disconnect between the self and others, to the point of feeling inescapably lost.
Though technology is reaching newer levels of sophistication, suicidal thoughts do not typically arise from a disconnect between the self and technology. So why should technology be seen as a new means of suicide prevention?
I tend to agree with Tusch’s statement: “Facebook needs to leave suicide prevention to family and friends.”
Suicide is most effectively addressed through the compassion of individuals (however well-equipped they may feel), not the anonymity of a social media platform. It’s a nice idea, Facebook, but it requires further revision.