If you ask many experts, they'll tell you AI is a discipline within computer science. After all, AI is concerned with creating computer systems that can perform tasks typically requiring human intelligence; the label makes logical sense.

I understand where those who label AI as computer science are coming from. But I also disagree with them.

In fact, I'd like to go a step further and assert that labeling AI this way is harmful. AI is more than just a subset of computer science; it's a broad interdisciplinary field. The roots of AI actually lie in cognitive science – a discipline devoted to understanding how humans think and reason, where models are built not necessarily to be useful on their own but to shed light on how the mind might work.

And by limiting how we view AI, we may actually limit the potential of AI. 

Magic happens when silos break down

My AI journey started during my undergraduate education. At Stanford University, I majored in Symbolic Systems, an interdisciplinary field of study that incorporates elements of cognitive science, computer programming, psychology, linguistics, and philosophy. I was drawn to this major because I felt something magical happened when people from different disciplines brought together their unique perspectives. Magic happens when we break down silos, learning from and challenging each other.

Symbolic Systems, in a way, resembled how AI started: a group of people from a wide range of disciplines who were passionate about building something new. AI was effectively launched at a Dartmouth College workshop in 1956, during a time when computer science was also in its infancy. Experts from different backgrounds – mathematics, economics, cognitive science, physics, neuroscience, and of course computer science – came together and laid its foundation.

For example, Herbert Simon, a political scientist who straddled many disciplines, including computer science, cognitive psychology, and economics (he won the Nobel Prize in Economics), was among the pioneers of AI. Deep learning is a popular trend right now, but neural networks and the mathematics behind their learning algorithm (backpropagation) have been around since the 1980s, popularized by the cognitive psychologist David Rumelhart in collaboration with computer scientists.
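For the curious, here is roughly what that 1980s mathematics boils down to in practice. This is only an illustrative sketch of backpropagation – the toy XOR data, network size, and learning rate are my own choices, not anything taken from Rumelhart's work:

```python
# A minimal sketch of backpropagation on a tiny two-layer network.
# Everything here (XOR data, layer sizes, learning rate) is an
# illustrative choice, not a canonical implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network prediction

    # Backward pass: propagate the error gradient layer by layer
    # via the chain rule -- this is "backpropagation".
    d_out = (out - y) * out * (1 - out)    # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```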

Why does AI need leaders with diverse backgrounds?

As you can see, AI started with a group of people from a wide range of backgrounds, and I believe we need to keep it that way. There is great demand for AI talent, so we need to make sure we don't unnecessarily deter anyone, especially those who don't fit the traditional computer science stereotype. Innovation comes from thinking about problems differently.

Diversity of thought is critical – now more than ever. As cognitive scientist and AI startup founder Gary Marcus says, "it will take a village to raise an AI." He continues: "We should never forget that the human brain is perhaps the most complicated system in the known universe; if we are to build something roughly its equal, open-hearted collaboration will be key."

We need human-centric AI; we want AI to be good for humans, not automation for the sake of automation. Ultimately, an AI system is a learning algorithm that maximizes some objective, so we need to ensure the objectives we set actually serve humanity.
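To make that point concrete, here is a deliberately tiny sketch: a generic optimizer that maximizes whatever objective it is handed. The two objective functions below are hypothetical stand-ins of my own, chosen only to show that the algorithm is indifferent to what it optimizes – the objective is everything:

```python
# A minimal sketch: a "learning algorithm" is just optimization of
# whatever objective we hand it. The objectives below are toy,
# hypothetical examples, not real metrics.
import numpy as np

def maximize(objective, x0=0.0, lr=0.1, steps=200, eps=1e-5):
    """Hill-climbing via numerical gradient ascent."""
    x = x0
    for _ in range(steps):
        # Estimate the gradient numerically and step uphill.
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x += lr * grad
    return x

# Two different objectives -> two very different "optimal" behaviors.
engagement = lambda x: -(x - 9) ** 2   # peaks at x = 9
wellbeing  = lambda x: -(x - 2) ** 2   # peaks at x = 2

print(maximize(engagement))  # ~9.0
print(maximize(wellbeing))   # ~2.0
```

Swap the objective and you get a completely different "optimal" behavior; nothing in the algorithm itself knows or cares which one actually serves people better. That judgment has to come from us.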

An exclusively computer science-centric approach to AI ignores the intricate nuances of human interaction and cognition. It overlooks the importance of creating AI systems that truly resonate with and serve humanity. By broadening our perspective and embracing an interdisciplinary approach, we can unlock the full potential of AI to positively impact diverse domains, from customer experience to healthcare and beyond.

Diversity of thought at Infinitus

At Infinitus, the integration of diverse perspectives and skill sets is central to our approach to AI. From day one, we built a team around not only those who could build AI models, but also those who were skilled in – and curious about – the art of conversational design.

Early on, we built a computational linguistics team, most of whose members do not have a traditional computer science background. Their responsibility is to think about the conversational experience and, with their rich analytical skills (which involve a bit of programming), to ensure our digital assistant is highly accurate. They work alongside our natural language processing (NLP) team to ensure that we build AI that is human-centric.

For us to succeed in helping our customers, our models must be highly accurate – crucial in the domain of healthcare – but also a pleasure to converse with, as our systems ultimately hold long conversations (often 30 minutes to an hour) with a human on the other end of the line.

It is not a coincidence that our team prioritizes diversity of thought and experience. As I learned early on at Stanford: magic does indeed happen when we come together, break down silos, and challenge one another.