My goal for this blog is to always provide at least one piece of useful advice or a methodology that you can immediately apply in your day-to-day work. I work in biopharma (and it’s the name of the blog), so this blog will always have a biopharma focus. My hope is that the methods cross over and prove useful regardless of your profession. This blog will also emphasize no-cost or very low-cost solutions that are realistic for small, cash-constrained companies.

Future posts will include my prompts and summaries of the resulting outputs so you can see how I am prompting AI and what it returns. But first, a little background on why I decided to start a blog.

In late 2023 I attended an industry conference where a large pharma panel presented how they were experimenting with and implementing AI. Being big pharma, they had divisions of dedicated AI staff, their own AI budget, and access to large datasets to train their own in-house model. Meanwhile, my small company was struggling to stretch our dwindling cash and survive into the next funding round. I quickly realized that nothing these folks were talking about was remotely applicable to my situation.

For most of 2024 I researched ways to use AI in my job without much luck. Even then it was clear AI was transformational and would eventually trickle into all areas of our lives, so it made sense to get ahead of the trend. I experimented with some consumer AIs (ChatGPT, Google’s Gemini), and sure, they helped draft an email, but nothing was really helpful for the various complex tasks of my day-to-day work. There was an exploding market of tailored/custom AI biotech solutions, but most were designed to optimize drug targets and had large upfront costs ($1M+). Worse, even these tailored, expensive solutions did not seem to offer any help with the type of work I was performing routinely.

Online AI posts were everywhere but mostly consisted of opinions, sales pitches, and paywalled content. Practical, job-focused guidance was non-existent (unless you were a programmer). For context, I manage all phases of CMC (Chemistry, Manufacturing, and Controls). So my day is spent working on various phases of drug manufacturing, process development, formulations, stability, testing, etc. Being at a small biopharma, there is always other non-CMC work as well. But enough about me; this blog is intended to help people like you apply AI to solve your everyday problems, not to be my personal memoir.

Not making much progress in 2024, I got more serious in 2025. This was great timing, as Microsoft began offering Copilot (CP) bundled with Office at no additional cost. I use Office programs every day (Word, Excel, PPT, Outlook, etc.), so I figured CP would be a decent AI choice because: 1) it was free with Office, 2) I assumed Microsoft would continue to expand Office integration of CP, 3) Microsoft is huge, with plenty of money to keep improving its AI, and 4) for what I wanted (use at work), I did not see any significant differentiation from the other available AI models.

So with my AI selected, I went looking for resources to learn more. There were some early newsletters with helpful tips and prompt language, but most quickly devolved into sales pitches or moved behind paywalls. Fortunately, I found an Ethan Mollick essay, Getting started with AI: Good enough prompting, Don’t make this hard (Nov 24, 2024), where he discussed several best practices for learning to interface with AI. Most of his advice boiled down to the need to start experimenting with AI. I highly recommend reading the full essay, as there are many other gems, but “start experimenting with AI” is the best introductory AI advice I have come across so far.

I’m sure this seems like a total Simpsons “D’oh!” statement. But stick with me to understand why this advice was so profound.
First, I am amazed at how many competent AI users I meet who have no real understanding of the strengths and limitations of their chosen flavor of AI.

Second, AI is just a tool. A very powerful tool, but still just a tool. And like any tool, gaining proficiency requires putting in the time to learn it. The more powerful the tool, the more experience is required to deploy it effectively. The challenge is that, unlike traditional powerful tools, AI makes using it so easy that it feels like no real training or experimentation is necessary. You can throw any prompt at your AI, and it will faithfully spit out something that generally sounds really smart and is often posed in a way that makes you feel good, regardless of accuracy.

So I began experimenting with Copilot in various ways. In early 2025, CP was not great. However, what was stunning was how quickly CP improved; it really seemed to be getting noticeably better daily. This rapid improvement is important: AI is not static. However badly it does something today, it will get better, and probably far quicker than you can imagine. (Quick note: Copilot has gotten pretty solid for most things I throw at it now.)

So here are the first set of SBAI tips:

  • SBAI Tip#1 Select an AI and start using it at your job (and for personal use). If you are just starting out, your choice of which AI to use does not matter that much. It is more important that you start than that you spend time finding the “perfect AI”.
  • SBAI Tip#2 When starting to use AI at work, use it for things where you are already an expert or have a high level of competency. You need to be able to gauge the accuracy of the outputs before letting AI have a big impact on your work deliverables. This is easier in areas where you are already an expert and can judge quickly.
  • SBAI Tip#3 NEVER take the AI output and use it as your finished work product without some level of verification.

AI is an extremely efficient assistant that produces incredibly detailed summaries in seconds for tasks that would previously have taken me hours or days. However, this assistant will, from time to time, give you incorrect information, and this is why you need to understand its limitations.

I have observed several occasions where Copilot provided contradictory information in the same output and then defended the incorrect information when challenged. This behavior happens less frequently now, but it still happens. Part of your job as a professional is to be proficient enough to take the good parts and verify, correct, or throw out the bad parts.
There are other reasons to stay closely involved in your AI outputs rather than disassociating yourself from the thought process; that is a topic for a future post. For now, I think it is sufficient to understand that if you provide unverified AI outputs to your boss, client, or colleague and there is an error, you are ultimately responsible for that error.

Maybe that will change in the agentic AI future, but not for the time being. And if your response is something like “well, the AI got it wrong,” how long until management realizes they can just give your tasks to AI directly? With that sufficient warning out of the way, let’s end on a positive note.

My view is that these random inaccuracies are not a negative. They keep me engaged in the details of my job, and, if I’m being honest, part of me experiences a tiny bit of joy when I catch a mistake AI has made (although these instances are diminishing at an astonishing rate).
I expect AI to have a ton of upside in helping us do lots of things better. Using AI can be a marvelous partnership with the potential to unlock massive amounts of knowledge and time for you; you just need to jump in and start experimenting!
