AI and Interdisciplinary R&D

In my previous article, “AI: a paradigm shift for software?”, I explored how science historian and philosopher Thomas Kuhn might assess whether the current trends in AI could lead to a paradigm shift. Interdisciplinary R&D comes up as a recurring theme in his work.

Kuhn explored many examples where an interdisciplinary approach yielded better quality and/or faster progress, by exploring multiple angles and approaches to a problem and by actively challenging the established assumptions and concepts of any one specific field.

MY TAKE: Many of the complex problems and challenges associated with AI are beyond the scope of any single discipline. Interdisciplinary R&D has the potential to overcome disciplinary boundaries to promote new approaches that are more inclusive and integrated – that better reflect the complex reality of AI in society today.

What better way to get an idea of the key challenges in AI that could benefit from Interdisciplinary R&D – than to converse with an AI!


Examples of Interdisciplinary R&D in AI:

The concepts below are not “straight outta GPT” but have been human-curated a bit. I have broken them into what I think are “obvious” and “non-obvious” challenges associated with AI.

It’s the “non-obvious” problems I am most interested in, but I am also including the “obvious” (and still important) ones too.

Non-obvious AI challenges 👀

Autonomous systems and human-robot interaction

  • Autonomous systems have the potential to revolutionize transportation and logistics, but they also raise significant safety and ethical concerns.
  • Engineers, psychologists, industrial designers and ethicists can help to identify and address these concerns, and develop new approaches to designing autonomous systems that are more responsive to the needs and preferences of different users.

Healthcare and medical research

  • AI has the potential to revolutionize healthcare and medical research by enabling more accurate and personalized diagnoses and treatments.
  • Healthcare professionals, data scientists, industrial designers, and machine learning experts can help to identify and address the challenges associated with using AI in healthcare, such as ensuring patient privacy, interpreting complex medical data, and developing ethical guidelines for the use of AI in medical decision-making.

Environmental monitoring and sustainability

  • AI can be used to monitor and analyze environmental data, such as air and water quality, and to develop new approaches to sustainable development.
  • Environmental scientists, computer scientists, systems engineers and policy experts can help to identify and address the challenges associated with using AI in environmental monitoring and sustainability, such as ensuring the accuracy and reliability of environmental data and developing policies that promote sustainable development.

Obvious AI challenges 🤖

Fairness and bias

  • Developing algorithms and models that are fair and unbiased.
  • Computer scientists, social scientists, and ethicists can help to identify and mitigate the biases that can be introduced by AI algorithms and ensure that the resulting systems are more equitable and inclusive.
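
To make “identifying bias” a little more concrete, here is a minimal sketch of one metric such a team might start from: demographic parity, the gap in positive-prediction rates between groups. The function, data, and group labels below are all invented for illustration, not a real fairness toolkit:

```python
# Hypothetical sketch: demographic parity as a first-pass bias check.
# The predictions and group labels below are made up for illustration.

def demographic_parity_gap(predictions, groups):
    """Absolute difference between the highest and lowest
    positive-prediction rate across groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Toy example: group "a" gets a positive outcome 3/4 of the time,
# group "b" only 1/4 of the time -- a gap of 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Demographic parity is only one of several (sometimes mutually incompatible) fairness definitions, which is precisely why social scientists and ethicists belong in the conversation about which metric fits a given context.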

Explainability and transparency

  • Developing systems that are transparent and explainable, so that users can understand how they work and how decisions are being made.
  • Computer scientists, cognitive scientists, and ethicists can help to develop new approaches to explainable AI that are more effective and accessible to a wider range of users.

Social and ethical implications

  • AI has the potential to impact society in many ways, from job displacement to the development of new forms of surveillance and control.
  • Sociologists, philosophers, and policymakers can help to identify and address the social and ethical implications of AI, and develop policies and regulations that are responsive to these concerns.

Interpretable and robust machine learning

  • As machine learning models become more complex, it becomes increasingly difficult to interpret how they arrive at their decisions. This can be a significant barrier to the adoption of these models in critical domains, such as healthcare and finance.
  • Computer scientists, statisticians, and mathematicians can help to develop new methods for interpreting and understanding machine learning models, as well as methods for making them more robust and less susceptible to attacks.
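
One model-agnostic interpretation method such teams use is permutation importance: shuffle one feature’s values and measure how much the model’s error grows. The toy “black box” model and data below are invented purely to illustrate the idea:

```python
import random

# Hypothetical sketch of permutation importance. A feature the model
# truly relies on produces a large error increase when shuffled; an
# ignored feature produces none. Model and data are illustrative.

def mse(model, X, y):
    return sum((model(row) - target) ** 2 for row, target in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    rise = 0.0
    for _ in range(trials):
        shuffled = [row[feature] for row in X]
        rng.shuffle(shuffled)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, shuffled)]
        rise += mse(model, X_perm, y) - baseline
    return rise / trials

# Toy "black box" that really only uses feature 0.
model = lambda row: 3 * row[0]
X = [[i, i % 3] for i in range(10)]
y = [3 * i for i in range(10)]

print(permutation_importance(model, X, y, feature=0) > 0)   # True
print(permutation_importance(model, X, y, feature=1) == 0)  # True
```

Techniques like this give statisticians and domain experts a shared, model-independent vocabulary for asking what a complex model actually depends on.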

NLP and language understanding

  • Linguists, computer scientists, and cognitive scientists can help to develop more sophisticated Natural Language Processing (NLP) models that better reflect the complex and nuanced nature of human language, as well as methods for training these models with smaller amounts of data.

Human-computer interaction (HCI) and user experience (UX)

  • As AI systems become more ubiquitous, it is important to ensure that they are designed with the user in mind and are accessible to a wide range of users.
  • Computer scientists, psychologists, and designers can help to develop AI systems that are more intuitive and user-friendly, and that take into account the needs and preferences of different users.

Data privacy and security

  • As AI systems become more reliant on data, ensuring the privacy and security of that data becomes increasingly important.
  • Computer scientists, legal scholars, and cybersecurity experts can help to identify and address the privacy and security risks associated with AI, and develop new approaches to data governance and regulation that are responsive to these concerns.
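
One concrete technique at this intersection is differential privacy. Below is a back-of-the-envelope sketch of its simplest instrument, the Laplace mechanism applied to a counting query; the dataset, predicate, and epsilon value are all illustrative assumptions, not a production-grade implementation:

```python
import math
import random

# Hypothetical sketch of the Laplace mechanism from differential
# privacy: add calibrated noise to a count so that any one person's
# presence in the data changes the output distribution only slightly.

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) random variate.
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng):
    true_count = sum(1 for r in records if predicate(r))
    # A counting query has sensitivity 1 (one person changes the count
    # by at most 1), so Laplace noise with scale 1/epsilon suffices.
    return true_count + laplace_noise(1 / epsilon, rng)

ages = [23, 35, 41, 29, 52, 38, 47, 31]
rng = random.Random(42)
noisy = private_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
print(round(noisy, 2))  # roughly 3 (the true count), plus noise
```

The interesting design questions here are not purely technical: choosing epsilon is a policy decision about how much privacy loss is acceptable, which is exactly where legal scholars and regulators come in.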

The disciplines mentioned above are fairly generic and could expand or change based on the contours and topology of a specific ‘AI Challenge’ – but this looks like a good ‘starter-pack’. There could be more. There will be more.

NET: Interdisciplinary R&D has the potential to develop new approaches and solutions using AI that are more effective, efficient, and ethical, and that better reflect the diverse needs and values of society as a whole.


What do you think?