Hello 👋

Welcome back to another edition of Weekend Rounds!

You lost an hour of sleep last night. In return, the universe offers you longer evenings and, for a few days, the complete inability to judge what time it is. And in case your brain needs one more thing to remember on less sleep: we don’t call it IBD anymore.

Welcome to daylight saving time.

Here’s what we’re covering:

💸 Incoming US vet students are about to meet the federal loan ceiling
💊 The FDA just approved the first oral treatment for canine lymphoma
🤖 AI Field Notes: Say hello to GPT-5.4
🚀 Quick hits

💸
Incoming US vet students are about to meet the federal loan ceiling

If you want one story that explains why the profession keeps feeling financially upside down, start here. VIN reported this week that incoming U.S. veterinary students will be limited to Direct Unsubsidized Loans, with a new annual cap of $50,000 and a lifetime cap of $200,000 for professional programs. AVMA’s loan explainer says those federal changes take effect July 1, 2026, and federal rulemaking materials reflect the same caps as part of the new post-Grad PLUS landscape. VIN also notes that this ceiling sits below the full cost at most U.S. veterinary schools.

That is a very polite way of saying the numbers and the tuition bill are no longer even pretending to be friends. If US schools do not dramatically expand aid or change pricing, somebody has to absorb the gap, and it usually is not the federal government out of sheer generosity. That matters because debt does not just shape repayment. It shapes who applies, who says yes, who picks internship or academia, and who quietly decides this profession is not for them before orientation even starts. The workforce conversation loves to talk about shortages, burnout, and access to care. Fine. But if the front door gets more expensive while the loan faucet gets turned down, we should not act shocked when a smaller and less diverse group of people wants to walk through it.
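For the back-of-the-envelope crowd, here is a quick sketch of how the caps interact with a four-year program. The $70,000 cost-of-attendance figure is a hypothetical placeholder for illustration, not a reported number; VIN's point is simply that the ceiling sits below full cost at most schools.

```python
# Hedged illustration of the new federal loan caps for professional programs
# (effective July 1, 2026). The cost figure below is an assumed placeholder.
ANNUAL_CAP = 50_000        # Direct Unsubsidized annual borrowing cap
LIFETIME_CAP = 200_000     # lifetime borrowing cap

assumed_annual_cost = 70_000   # hypothetical yearly cost of attendance
years = 4

total_cost = assumed_annual_cost * years
max_federal = min(ANNUAL_CAP * years, LIFETIME_CAP)
gap = total_cost - max_federal

print(f"Four-year cost (assumed): ${total_cost:,}")   # $280,000
print(f"Max federal borrowing:    ${max_federal:,}")  # $200,000
print(f"Gap to cover elsewhere:   ${gap:,}")          # $80,000
```

Under those assumptions, a student maxing out federal loans every year still faces an $80,000 hole to fill with private loans, family money, or scholarships.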

💊
The FDA just approved the first oral treatment for canine lymphoma

Approved in January 2026, Laverdia is an oral antineoplastic drug for dogs with lymphoma, offering a new at‑home treatment option under veterinary supervision. Originally conditionally approved in 2021, it now has full approval and is administered twice weekly, with specific safety precautions for caregivers handling treated dogs.

The active ingredient of Laverdia, verdinexor, works by blocking a protein called Exportin‑1 (XPO1), which cancer cells use to move tumor‑suppressor proteins out of the cell nucleus. By inhibiting this process, Laverdia keeps these protective proteins inside the nucleus, disrupting cancer cell growth and leading to cancer cell death.

Several other conditionally approved drugs also received full FDA approval. Vetmedin was fully approved for delaying the onset of congestive heart failure in dogs with preclinical mitral valve disease, marking a milestone for the FDA’s expanded conditional approval pathway. KBroVet gained full approval for controlling seizures in dogs with idiopathic epilepsy.

The FDA also approved new generics across species. These include the first generic robenacoxib tablets for postoperative pain and inflammation in cats and the first generic pergolide tablets (Zygolide) for equine Cushing’s disease. For livestock, approvals included Defendazole, a generic fenbendazole dewormer for cattle and goats, and nixiFLOR, a generic injectable treatment for bovine respiratory disease.

Overall, the approvals expand treatment options, increase access to generics, and reflect progress across companion animal and livestock medicine.

🤖
AI Field Notes: Say hello to GPT-5.4

Last week I talked about how AI is increasingly gaining the power to act. The biggest news up to now had come from Anthropic with Claude Code and Cowork, but OpenAI did not want to be outdone and dropped GPT-5.4 on Thursday. The headline feature is computer use. As far as I can tell, this capability is not baked into ChatGPT today the way Claude Cowork is part of the Claude subscription, but it is available to developers who build their tools on OpenAI’s models. GPT-5.4 can navigate browsers, click through software interfaces, and complete multi-step workflows without being told each step. On OSWorld, a benchmark that tests desktop navigation tasks, GPT-5.4 scored 75%, outperforming the average human baseline of 72.4%.

Before you conclude that OpenAI has pulled ahead again, the bigger story is that nobody has. GPT-5.4, Google’s Gemini 3.1 Pro, and Anthropic’s Claude Opus 4.6 all score within two to three percentage points of one another on most benchmarks. Two years ago this was not close; today it is genuinely competitive, which has real implications for how you should think about adopting any of these tools. Pick one, get comfortable with it, and if a competitor drops a new feature, wait a short while and yours will have it too.

A rough breakdown for the curious: GPT-5.4 leads on professional task automation and computer use. Gemini 3.1 Pro leads on abstract reasoning and science benchmarks, and at roughly half the per-token cost, it is becoming the default for high-volume applications. Claude Opus 4.6 leads on code generation and deep web research. All three are capable enough that "which one is best" is probably the wrong question; "which one integrates with the tools I already use" is better. Keep in mind that many practices have rules about whether you can connect your work email to an AI tool, and the larger the practice group, the more likely those rules exist.

Here is where this gets relevant for veterinary practice. The scribing category has already proven its value, and the adoption numbers back it up. The next step is agentic AI: tools that complete workflows rather than just draft content. Not just writing the discharge instructions, but scheduling the recheck, routing the follow-up, and flagging the overdue vaccine. Whether that sounds exciting or alarming probably depends on how much you trust the underlying model. In my opinion, steering these tools toward administrative tasks is the right move, given the evidence we have seen on frontier models in healthcare settings.

The Stanford-Harvard ARISE State of Clinical AI Report, published in January, is a useful reference even though it lives in the human medical sphere. Across 31 models evaluated on 100 real primary care consultation cases, even top-performing models produced 12 to 15 severe errors per 100 encounters, and the worst-performing systems exceeded 40. The report’s clearest finding: AI works best when it supports a clinician rather than replacing one. That is not a surprise, but it is a useful thing to remember, whether it is a client or a salesperson pitching you their product.

🚀
Quick Hits

Here are some of the other stories we’re following this week from around the veterinary world and the animal kingdom:

How did we do today?

Tell us what you thought of this edition of Weekend Rounds so we can keep improving!
