Friday, November 12, 2021

Prescribing Algorithms for Psych Meds

Prescribing algorithms aim to give guidance to prescribers as they choose the safest and most effective treatments for their patients. They are by no means hard-and-fast rules that remove decision-making from the prescribing process. However, they do gently push individual prescribers to take into consideration the whole body of published medical knowledge as it pertains to the person being treated.

Each of us who prescribes medications has our own personal experiences with different medications. I gave this person with bipolar disorder olanzapine during a mixed hypomanic episode, and their mood leveled out pretty quickly - but then they also put on 65 pounds over the following six months. I gave two people in a row ziprasidone for bipolar mood stabilization and they both stopped taking it within two months, but I prescribed one of them alprazolam and they loved me and kept coming back for more for years.

It is sort of like the old parable of the blind men who are each holding on to a different part of an elephant. One is holding the trunk, another a leg, another the tail, and another is sitting on the elephant's back. Each one has a totally different set of information regarding what an "elephant" is all about. Note that each one is completely correct - they are just unable to take into consideration the knowledge and experience of the others.

Doctors are humans caught in a situation somewhat like that of the blind men in the parable. They see one person, with a unique set of characteristics - only a tiny fraction of them knowable to the doctor - and how that person seems to respond to a particular medication for a certain condition at a certain time. Maybe the doctor has a negative experience with a medicine, an outcome that is actually quite uncommon; it is only human and natural for that doctor to reserve greater space in their mind for that negative information about that drug. It might be the best medicine on the planet, but the doctor will be less likely to prescribe it if they have the one case out of five million with an outlying negative outcome.

Scientific studies are designed, in theory, to give us as much relevant information as possible. They follow large numbers of patients with certain qualifying characteristics and record their responses to particular treatments, summarized with statistics. This medicine helps more people get better, on average, than taking a sugar pill.
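
To make that a little more concrete, here is a minimal sketch in Python, using entirely made-up response rates, of one common way such trials summarize "better than a sugar pill": the number needed to treat, i.e. how many patients must receive the drug for one additional person to benefit beyond placebo.

```python
# Toy arithmetic behind "works better than a sugar pill."
# All numbers here are invented for illustration only.

drug_responders, drug_total = 60, 100        # 60% improved on the medicine
placebo_responders, placebo_total = 40, 100  # 40% improved on placebo

drug_rate = drug_responders / drug_total
placebo_rate = placebo_responders / placebo_total

# Absolute risk reduction: the extra fraction of patients helped by the drug
arr = drug_rate - placebo_rate

# Number needed to treat: patients who must take the drug for one
# additional person to benefit beyond placebo (NNT = 1 / ARR)
nnt = 1 / arr

print(f"Drug response rate:    {drug_rate:.0%}")
print(f"Placebo response rate: {placebo_rate:.0%}")
print(f"NNT: {nnt:.0f} (treat ~{nnt:.0f} people for 1 extra responder)")
```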

However, it is not realistic to expect every doctor to read through all of the published studies and meta-analyses, especially new doctors or other prescribers coming into the field today. They would have to read all of the OLD studies ever published, as well as all of the NEW studies being published every day.

But sometimes teams of doctors and their associates get together to pore over all of the available evidence and then try to synthesize it for the rest of us. Three studies say that A works better than B, while two studies say that B works better than A - but those two had possible errors in methodology, and the results may have been swayed by certain characteristics of the population being studied.

The results of these very laborious and labor-intensive labors elaborated by these laboratory laborers...

Are ALGORITHMS. 

Through algorithms, a doctor can learn from the experience of a thousand other doctors and the experiences of tens of thousands of patients, without ever meeting them.
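
For a rough picture of what such an algorithm looks like structurally, here is a hypothetical sketch in Python: an ordered list of treatment tiers, filtered by patient-specific cautions. The tiers, names, and cautions are placeholders I invented for illustration, not anything from an actual published algorithm.

```python
# Purely illustrative: the *shape* of a prescribing algorithm, not its content.
# Every drug name and caution below is a hypothetical placeholder,
# NOT a clinical recommendation.

HYPOTHETICAL_ALGORITHM = [
    {"drug": "first_line_option_A",  "avoid_if": {"pregnancy"}},
    {"drug": "first_line_option_B",  "avoid_if": {"obesity", "diabetes"}},
    {"drug": "second_line_option_C", "avoid_if": set()},
]

def next_recommendation(already_tried, patient_flags):
    """Walk the tiers in order, skipping options already tried
    or contraindicated for this particular patient."""
    for step in HYPOTHETICAL_ALGORITHM:
        if step["drug"] in already_tried:
            continue
        if step["avoid_if"] & patient_flags:  # any caution applies?
            continue
        return step["drug"]
    return None  # algorithm exhausted: pure clinical judgment from here

print(next_recommendation(already_tried={"first_line_option_A"},
                          patient_flags={"diabetes"}))
# -> second_line_option_C (option B is skipped for this patient)
```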

Quetiapine was discontinued at a higher rate than the other second-generation antipsychotics when taken as a mood stabilizer. Ziprasidone caused zero weight gain but still led to slightly increased blood sugars when used to treat psychosis. Varenicline actually works better for successful smoking cessation than nicotine replacement therapy (never mind those three people I knew who got nightmares and increased psychosis).

Dr. David Osser is one such person: he has been running a project out of Harvard for decades in which he and others pore over published studies to develop prescribing algorithms for mental health conditions. He and his associates make recommendations about which medications psychiatrists should prescribe first for which conditions.

They try to show, based on published studies, which medications are likely to be the most effective, have the fewest side effects, and carry the least risk of drug interactions.

They seek to give guidance not only to brand-new psychiatric prescribers, but to seasoned psychiatrists as well. It really is easy for us to make the same mistake over and over, and actually grow more convinced over time that we are doing great.

I think of it this way: the people I see in my day-to-day practice are the survivors. Lots of people I used to see, I don't see anymore. The people I do see responded well to the treatments I gave them and kept coming back. The people who did not respond as well to the treatments I gave them are less likely to come back. So it is easy for me to get the idea that I am doing a more effective job than I actually am. My patients are the ones who have responded well to "Dr Me."
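
Here is a small simulation of that survivorship effect, again with made-up numbers: assume the treatment truly helps half of patients, and that responders are three times more likely than non-responders to keep coming back.

```python
# Survivorship bias in a clinic panel, simulated with invented parameters.
import random

random.seed(0)
TRUE_RESPONSE_RATE = 0.5   # assumed true effectiveness of the treatment
P_RETURN_IF_HELPED = 0.9   # responders usually come back
P_RETURN_IF_NOT = 0.3      # non-responders often drift away

returning, returning_helped = 0, 0
for _ in range(100_000):
    helped = random.random() < TRUE_RESPONSE_RATE
    p_return = P_RETURN_IF_HELPED if helped else P_RETURN_IF_NOT
    if random.random() < p_return:
        returning += 1
        returning_helped += helped

print(f"True response rate:            {TRUE_RESPONSE_RATE:.0%}")
print(f"Response rate among returnees: {returning_helped / returning:.0%}")
# Roughly 75% of the waiting room responded, even though only 50% of
# everyone treated did - the clinic panel flatters "Dr Me."
```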

I personally find invaluable the work of Dr. Osser and others who strive to formulate algorithms for best practice. Many medical and psychiatric bodies have also set out to promote best practices. Algorithms truly are about best practices - not about taking away autonomy or decision-making. They provide additional information about what has been found to work in the shared body of published knowledge on the subject.

Unfortunately, a lot of the knowledge that doctors and other medical and psychiatric providers hold is not included in algorithms, because only a tiny fraction of our experience ever makes it into published works. The case studies that do get printed are sometimes not even as good as personal experience, because we miss a lot of the context and it is hard to tell whether they are representative or outliers.

The other downside is that many of the studies out there are funded by medication manufacturers, who will naturally try to design studies in a way that casts their product in the best possible light. So we get algorithms that tend to focus more on newer, more expensive treatments like second-generation antipsychotics, which may carry a heavier long-term side-effect burden than older, more benign medications that do not have the same financial backing.

While we are on the subject, another limitation is that most studies are short-term, whereas we are treating conditions for the long term. And most studies are done on people who are taking only one or possibly two medications and who have no comorbid medical or psychiatric conditions. How often do you meet "ideal" patients like that in real life? Not very often.

So there are some huge, gaping holes in the knowledge base that all these recommendations and algorithms are based on. If venlafaxine beats imipramine for psychotic depression, does that mean that duloxetine would be expected to beat clomipramine for melancholic depression? And even if it did in five patients out of ten, the only one that we care about right now is the patient sitting right in front of us. 

Still, algorithms are an invaluable tool, in part because there IS so much uncertainty in what we do. For the things that we as a profession do "know," it is our duty to put that knowledge into practice as much as possible, for the greatest likelihood of benefit to our patients.
