The world of AI and large language models (LLMs) is in a state of constant flux and improvement. At Infinitus, we’re focused on keeping up with the latest – both in our own research, and in what we read.

It is our hope that the research papers and articles below will provide some insight into AI and machine learning (ML) – with a bit of a healthcare slant. This month’s theme is: how humans will be augmented by AI, not replaced by it.

Here are six pieces of AI research that piqued the interest of Infinitus AI and LLM experts this month:

Catching up on the weird world of LLMs by Simon Willison

If you only have time to read one piece this month about the great big world of LLMs and AI, Catching Up on the Weird World of LLMs is that piece.

In one (very, very long) blog post, Willison summarizes the last few years of development in the space and the technology behind tools like ChatGPT, Claude, Bard, and more. In the words of the author himself, it covers a lot of ground: what LLMs are, what you can use them for, what you can build on them, how they’re trained, and many of the challenges involved in using LLMs safely, effectively, and ethically. Whether you have been working with generative AI for years or are completely new to the subject, this is the perfect piece to start with.

AI and the automation of work by Benedict Evans

What’s going to happen to the concept of work as AI becomes smarter and can do more of what humans have traditionally handled? More importantly, how should we address the worries of automation and resulting job loss?

Evans, a former partner at venture capital firm Andreessen Horowitz, postulates that we actually shouldn’t panic. AI and LLMs are the latest wave of automation, similar to previous innovations like the typewriter and the spreadsheet. History, he argues, shows that total employment doesn’t decrease with more automation (the Jevons paradox); instead, new kinds of jobs get created. Furthermore, adoption of new technology tends to be slower than we expect, so while AI will impact work, it won’t be an immediate disruption.

What Evans argues we should focus on instead is developing ethical and responsible applications of AI that augment humans, not replace them.

Super Mario meets AI: Experimental effects of automation and skills on team performance and coordination by Harvard University and Columbia University researchers

On the topic of human and AI collaboration, there is a burning question: are we actually more productive with AI? And relatedly, in what areas should we be more cautious in our AI usage? 

This paper provides insights we should consider when developing AI systems meant to collaborate with humans. The experiments conducted by Columbia Business School found that introducing automated agents to teams actually decreased performance, especially for low- and medium-skilled teams. The paper argues this is because humans prefer working with other humans – those paired with AI reported lower trust and effort. 

While AI can excel at individual tasks, there are motivational downsides we must address when it comes to collaboration. For us at Infinitus, an AI healthcare company, it’s a reminder to focus not just on technical capabilities, but on the social experience of working with AI teammates. Rather than fully automating teams, we should aim for AI that supports and motivates human workers. The paper is a call to design AI systems that account for the nuances of human collaboration and team dynamics. Our AI needs to complement and empower people, not just replace them.

An Internet veteran’s guide to not being scared of technology by Mike Masnick 

(Note: Requires subscription to The New York Times)

Mike Masnick is the founder and editor of Techdirt, one of the web’s longest-running tech blogs. A pragmatic optimist at heart, Masnick has a deep understanding of technology’s impact. Fear of the unfamiliar, he acknowledges, is a common experience amid the rise of AI and LLMs.

He recently advised Hollywood professionals on AI, emphasizing the “AI plus human” synergy and urging them to harness AI’s potential rather than resist it. Drawing from his vast experience in the tech industry (he’s been heavily involved since 1998), Masnick has a core message: embrace technological change but be wary of hasty decisions that might have unintended consequences. 

Masnick’s insights remind us that, while innovation is inevitable, it’s crucial to ensure that our advancements align with human interests. His advocacy for “protocols, not platforms” emphasizes interoperability and decentralization, which could redefine how we approach AI integration in various industries. 

Capabilities of GPT-4 in medical challenge problems by Microsoft and OpenAI

This paper was written by Microsoft and OpenAI about their own technology, so although it proved valuable to read while working on a research project at Infinitus, readers should take it with a small grain of salt. While GPT-4 scored well on medical licensing exams, real clinical use still requires more work.

The accuracy and bias risks of GPT-4 mean we must be extremely cautious about any medical applications, even with human oversight. However, the qualitative examples included show the potential for AI to aid physicians – explaining diagnoses, generating clinical scenarios, etc. If we can address the risks, AI augmentation could one day help provide more personalized instruction for medical students. 

For Infinitus’ use of LLMs in our work, the paper is a reminder that benchmarks only tell part of the story. While research like this is promising, responsible development of AI remains crucial, especially in our field, where our work has a very real, direct human impact.

The Role of GPT-4 in Drug Discovery by Andrew White

Andrew White is one of the people who devised scientific examples in the GPT-4 technical report. He writes that GPT-4 has shown potential in assisting the drug discovery process. While it cannot directly discover new drugs, it can propose new compounds for further study.

For instance, when targeting the protein TYK2 for psoriasis treatment, GPT-4 can conduct literature searches, identify related drugs, check for patents, and propose modifications to create novel compounds. It can then verify the novelty of these compounds and suggest synthesis routes for those that aren’t purchasable. However, the real-world application of these compounds requires extensive testing and clinical trials, which GPT-4 cannot automate. So while GPT-4 shows promise in augmenting drug discovery, here – as in the pieces above – AI currently complements, rather than replaces, the expertise of professionals in the field.

What AI research are you reading this month? Connect with us on LinkedIn to share your list.