If we know that ChatGPT makes things up, when should we avoid Large Language Models?
Is generative AI really safe to use when it matters?
Listen to this interview with Dr Heidy Khlaaf to find out.
Dr Khlaaf is Principal Research Scientist at the AI Now Institute, focusing on the assessment and safety of AI within autonomous weapons systems.
She has previously worked at OpenAI and Microsoft, amongst others.
Timestamps
00:00 Introduction
06:32 The Problem-First Approach to AI
14:20 Limitations of Large Language Models
20:49 Augmenting Human Knowledge with AI
23:37 AI Systems Gone Wrong
28:22 AI in Safety Critical Systems
33:47 Questioning Technological Determinism
38:19 AI in Defense
For the transcript, go to: https://www.techfornontechies.co/blog/217-when-not-to-use-ai-in-business-and-warfare
For more career & tech lessons, subscribe to Tech for Non-Techies on:
If your organisation wants to drive revenue through innovation, book a call with us here.
Our workshops and innovation strategies have helped Constellation Brands, the Royal Bank of Canada and Oxford University.