London Futurists


Anticipating and managing exponential impact - hosts David Wood and Calum Chace

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.

He also wrote Pandora’s Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other material, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.

From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.

Technology

Episodes

Where are all the Dyson spheres? with Paul Sutter
3 days ago
In this episode, we look further into the future than usual. We explore what humanity might get up to in a thousand years or more: surrounding whole stars with energy-harvesting panels, sending easily detectable messages across space which will last until the stars die out.

Our guide to these fascinating thought experiments is Paul M. Sutter, a NASA advisor and theoretical cosmologist at the Institute for Advanced Computational Science at Stony Brook University in New York, and a visiting professor at Barnard College, Columbia University, also in New York. He is an award-winning science communicator and TV host.

The conversation reviews arguments for why intelligent life forms might want to capture more energy than strikes a single planet, as well as some practical difficulties that would complicate such a task. It also considers how we might recognise evidence of megastructures created by alien civilisations, and finishes with a wider exploration of the role of science and science communication in human society.

Selected follow-ups:
- Paul M. Sutter - website
- "Would building a Dyson sphere be worth it? We ran the numbers" - Ars Technica
- Forthcoming book - Rescuing Science: Restoring Trust in an Age of Doubt
- "The Kardashev scale: Classifying alien civilizations" - Space.com
- "Modified Newtonian dynamics" as a possible alternative to the theory of dark matter
- The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory - 1999 book by Brian Greene
- The Demon-Haunted World: Science as a Candle in the Dark - 1995 book by Carl Sagan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Provably safe AGI, with Steve Omohundro
13-02-2024
AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?

Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.

Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.

Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.

Selected follow-ups:
- Steve Omohundro: Innovative ideas for a better world
- Metaculus forecast for the date of weak AGI
- "The Basic AI Drives" (PDF, 2008)
- TED Talk by Max Tegmark: How to Keep AI Under Control
- Apple Secure Enclave
- Meta Research: Teaching AI advanced mathematical reasoning
- DeepMind AlphaGeometry
- Microsoft Lean theorem prover
- Terence Tao (Wikipedia)
- NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
- The team at MIRI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Robots and the people who love them, with Eve Herold
06-02-2024
In this episode, our subject is the rise of the robots – not the military kind of robots, or the automated manufacturing kind that increasingly fill factories, but social robots. These are robots that could take roles such as nannies, friends, therapists, caregivers, and lovers. They are the subject of the important new book Robots and the People Who Love Them, written by our guest today, Eve Herold.

Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI, and bioethical issues in leading-edge medicine – all of which are issues that Calum and David like to feature on this show.

Eve currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. Her previous books include Stem Cell Wars and Beyond Human. She is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.

Selected follow-ups:
- Eve Herold: What lies ahead for the human race
- Eve Herold on Macmillan Publishers
- The book Robots and the People Who Love Them
- Healthspan Action Coalition
- Hanson Robotics
- Sophia, Desi, and Grace
- The AIBO robotic puppy

Some of the films discussed:
- A.I. (2001)
- Ex Machina (2014)
- I, Robot (2004)
- I'm Your Man (2021)
- Robot & Frank (2012)
- WALL·E (2008)
- Metropolis (1927)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
What is your p(doom)? with Darren McKee
18-01-2024
In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That’s a new book on a vitally important subject.

The book’s front cover carries this endorsement from Professor Max Tegmark of MIT: “A captivating, balanced and remarkably up-to-date book on the most important issue of our time.” There’s also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: “The most accessible and engaging introduction to the risks of AI that I’ve read.”

Calum and David had lots of questions ready to put to the book’s author, Darren McKee, who joined the recording from Ottawa in Canada.

Topics covered included Darren's estimates for when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There's also Darren's recommendations on the principles and actions needed to reduce that likelihood.

Selected follow-ups:
- Darren McKee's website
- The book Uncontrollable
- Darren's podcast The Reality Check
- The Lazarus Heist on BBC Sounds
- The Chair's Summary of the AI Safety Summit at Bletchley Park
- The Statement on AI Risk by the Center for AI Safety

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Climate Change: There’s good news and bad news, with Nick Mabey
11-01-2024
Our guest in this episode is Nick Mabey, the co-founder and co-CEO of one of the world’s most influential climate change think tanks, E3G, where the name stands for Third Generation Environmentalism. As well as his roles with E3G, Nick is founder and chair of London Climate Action Week, and he has several independent appointments, including as a London Sustainable Development Commissioner.

Nick has previously worked in the UK Prime Minister’s Strategy Unit, the UK Foreign Office, WWF-UK, London Business School, and the UK electricity industry. As an academic he was lead author of “Argument in the Greenhouse”, one of the first books examining the economics of climate change.

He was awarded an OBE in the Queen’s Jubilee honours list in 2022 for services to climate change and support to the UK COP 26 Presidency.

As the conversation makes clear, there is both good news and bad news regarding responses to climate change.

Selected follow-ups:
- Nick Mabey's website
- E3G
- "Call for UK Government to 'get a grip' on climate change impacts"
- The IPCC's 2023 synthesis report
- Chatham House commentary on the IPCC report
- "Why Climate Change Is a National Security Risk"
- The UK's Development, Concepts and Doctrine Centre (DCDC)
- Bjørn Lomborg
- Matt Ridley
- Tim Lenton
- Jason Hickel
- Mark Carney

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Meet the electrome! with Sally Adee
05-01-2024
Our subject in this episode is the idea that the body uses electricity in more ways than are presently fully understood. We consider ways in which electricity, applied with care, might at some point in the future help to improve the performance of the brain, to heal wounds, to stimulate the regeneration of limbs or organs, to turn the tide against cancer, and maybe even to reverse aspects of aging.

To guide us through these possibilities, who better than the science and technology journalist Sally Adee? She is the author of the book “We Are Electric: Inside the 200-Year Hunt for Our Body's Bioelectric Code, and What the Future Holds”. That book gave David so many insights on his first reading that he went back to it a few months later and read it all the way through again.

Sally was a technology features and news editor at the New Scientist from 2010 to 2017, and her research into bioelectricity was featured in Yuval Noah Harari’s book “Homo Deus”.

Selected follow-ups:
- Sally Adee's website
- The book "We Are Electric"
- Article: "An ALS patient set a record for communicating via a brain implant: 62 words per minute"
- tDCS (Transcranial direct-current stimulation)
- The conference "Anticipating 2025" (held in 2014)
- Article: "Brain implants help people to recover after severe head injury"
- Article on enhancing memory in older people
- Bioelectricity cancer researcher Mustafa Djamgoz
- Article on Tumour Treating Fields
- Article on "Motile Living Biobots"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Don't try to make AI safe; instead, make safe AI, with Stuart Russell
27-12-2023
We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020.

Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control.

In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014, when a character with a background remarkably like his was played by Johnny Depp in the movie Transcendence.

The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.

Selected follow-ups:
- Stuart Russell's page at Berkeley
- Center for Human-Compatible Artificial Intelligence (CHAI)
- The 2021 Reith Lectures: Living With Artificial Intelligence
- The book Human Compatible: Artificial Intelligence and the Problem of Control

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Aligning AI, before it's too late, with Rebecca Gorman
09-12-2023
Our guest in this episode is Rebecca Gorman, the co-founder and CEO of Aligned AI, a start-up in Oxford which describes itself rather nicely as working to get AI to do more of the things it should do and fewer of the things it shouldn’t.

Rebecca built her first AI system 20 years ago and has been calling for responsible AI development since 2010. With her co-founder Stuart Armstrong, she has co-developed several advanced methods for AI alignment, and she has advised the EU, UN, OECD and the UK Parliament on the governance and regulation of AI.

The conversation highlights the tools faAIr, EquitAI, and ACE, developed by Aligned AI. It also covers the significance of the recent performance of Aligned AI software in the CoinRun test environment, which demonstrates the important principle of "overcoming goal misgeneralisation".

Selected follow-ups:
- buildaligned.ai
- Article: "Using faAIr to measure gender bias in LLMs"
- Article: "EquitAI: A gender bias mitigation tool for generative AI"
- Article: "ACE for goal generalisation"
- "CoinRun: Solving Goal Misgeneralisation" - a publication on arXiv
- Aligned AI repositories on GitHub
- "Specification gaming examples in AI" - article by Victoria Krakovna
- Rebecca Gorman speaking at the Cambridge Union on "This House Believes Artificial Intelligence Is An Existential Threat" (YouTube)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Shazam! with Dhiraj Mukherjee
27-11-2023
Our guest in this episode is Dhiraj Mukherjee, best known as the co-founder of Shazam. Calum and David both still remember the sense of amazement they felt when, way back in the dotcom boom, they used Shazam to identify a piece of music from its first couple of bars. It seemed like magic, and it was tangible evidence of how fast technology was moving: it was creating services which seemed like science fiction.

Shazam was eventually bought by Apple in 2018 for a reported 400 million dollars. This gave Dhiraj the funds to pursue new interests. He is now a prolific investor and a keynote speaker on the subject of how companies both large and small can be more innovative.

In this conversation, Dhiraj highlights some lessons from his personal entrepreneurial journey, and reflects on ways in which the task of entrepreneurs is changing, in the UK and elsewhere. The conversation covers possible futures in fields such as climate action and the overcoming of unconscious biases.

Selected follow-ups:
- https://dhirajmukherjee.com/
- https://www.shazam.com/
- https://dandelionenergy.com/
- https://technation.io/
- Entrepreneur First
- https://fairbrics.co/
- https://neoplants.com/
- Al Gore's Generation Investment Management Fund
- https://www.mevitae.com/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The Politics of Transhumanism, with James Hughes
13-11-2023
Our guest in this episode is James Hughes. James is a bioethicist and sociologist who serves as Associate Provost at the University of Massachusetts Boston. He is also the Executive Director of the IEET, that is, the Institute for Ethics and Emerging Technologies, which he co-founded back in 2004.

The stated mission of the IEET seems to be more important than ever, in the fast-changing times of the mid-2020s. To quote a short extract from its website:

The IEET promotes ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies. We believe that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. We call this a “technoprogressive” orientation.

Focusing on emerging technologies that have the potential to positively transform social conditions and the quality of human lives – especially “human enhancement technologies” – the IEET seeks to cultivate academic, professional, and popular understanding of their implications, both positive and negative, and to encourage responsible public policies for their safe and equitable use.

That mission fits well with what we like to discuss with guests on this show. In particular, this episode asks questions about a conference that has just finished in Boston, co-hosted by the IEET, with the headline title “Emerging Technologies and the Future of Work”.

The episode also covers the history and politics of transhumanism, as a backdrop to discussion of present and future issues.

Selected follow-ups:
- https://ieet.org/
- James Hughes on Wikipedia
- https://medium.com/institute-for-ethics-and-emerging-technologies
- Conference: Emerging Technologies and the Future of Work

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
How to make AI safe, according to the tech giants, with Rebecca Finlay, CEO of PAI
30-10-2023
The Partnership on AI was launched back in September 2016, during an earlier flurry of interest in AI, as a forum for the tech giants to meet leaders from academia, the media, and what used to be called pressure groups and are now called civil society. By 2019 more than 100 of those organisations had joined.

The founding tech giants were Amazon, Facebook, Google, DeepMind, Microsoft, and IBM. Apple joined a year later, and Baidu joined in 2018.

Our guest in this episode is Rebecca Finlay, who joined the PAI board in early 2020 and was appointed CEO in October 2021. Rebecca is a Canadian who started her career in banking, and then led marketing and policy development groups in a number of Canadian healthcare and scientific research organisations.

In the run-up to the Bletchley Park Global Summit on AI, the Partnership on AI has launched a set of guidelines to help the companies that are developing advanced AI systems and making them available to you and me. Rebecca will be addressing the delegates at Bletchley, and no doubt hoping that the summit will establish the PAI guidelines as the basis for global self-regulation of the AI industry.

Selected follow-ups:
- https://partnershiponai.org/
- https://partnershiponai.org/team/#rebecca-finlay-staff
- https://partnershiponai.org/modeldeployment/
- An open event at Wilton Hall, Bletchley, the afternoon before the Bletchley Park AI Safety Summit starts: https://lu.ma/n9qmn4h6

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The shocking problem of superintelligence, with Connor Leahy
25-10-2023
This is the second episode in which we discuss the upcoming Global AI Safety Summit taking place on the 1st and 2nd of November at Bletchley Park in England.

We are delighted to have as our guest in this episode one of the hundred or so people who will attend that summit – Connor Leahy, a German-American AI researcher and entrepreneur.

In 2020 he co-founded EleutherAI, a non-profit research institute which has helped develop a number of open source models, including Stable Diffusion. Two years later he co-founded Conjecture, which aims to scale AI alignment research. Conjecture is a for-profit company, but the focus is still very much on figuring out how to ensure that the arrival of superintelligence is beneficial to humanity, rather than disastrous.

Selected follow-ups:
- https://www.conjecture.dev/
- https://www.linkedin.com/in/connor-j-leahy/
- https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
- https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
- An open event at Wilton Hall, Bletchley, the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Preparing for Bletchley Park: behind the scenes, with Ollie Buckley
18-10-2023
The launch of GPT-4 on the 14th of March this year was shocking as well as exciting. ChatGPT had been released the previous November, and became the fastest-growing app ever. But GPT-4’s capabilities were a level beyond, and it provoked remarkable comments from people who had previously said little about the future of AI. In May, Britain’s Prime Minister Rishi Sunak described superintelligence as an existential risk to humanity. A year ago, it would have been inconceivable for the leader of a major country to say such a thing.

The following month, in June, Sunak announced that a global summit on AI safety would be held in November at the historically resonant venue of Bletchley Park, the stately home where, during World War Two, Alan Turing and others cracked the German Enigma code, and probably shortened the war by many months.

Despite the fact that AI is increasingly humanity’s most powerful technology, there is not yet an established forum for world leaders to discuss its longer-term impacts, including accelerating automation, extended longevity, and the awesome prospect of superintelligence. The world needs its leaders to engage in a clear-eyed, honest, and well-informed discussion of these things.

The summit is scheduled for the 1st and 2nd of November, and Matt Clifford, the CEO of the high-profile VC firm Entrepreneur First, has taken a sabbatical to help prepare it.

To help us all understand what the summit might achieve, the guest in this episode is Ollie Buckley.

Ollie studied PPE at Oxford, and was later a policy fellow at Cambridge. After six years as a strategy consultant with Monitor, he spent a decade as a civil servant, developing digital technology policy in the Cabinet Office and elsewhere.

Crucially, from 2018 to 2021 he was the founding Executive Director of the UK government's original AI governance advisory body, the Centre for Data Ethics & Innovation (CDEI), where he led some of the original policy development regarding the regulation of AI and data-driven technologies. Since then, he has been advising tech companies, civil society and international organisations on AI policy as a consultant.

Selected follow-ups:
- https://www.linkedin.com/in/ollie-buckley-10064b/
- https://www.publicaffairsnetworking.com/news/tech-policy-consultancy-boosts-data-and-ai-offer-with-senior-hire
- https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
- https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
- An open event at Wilton Hall, Bletchley, the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The future of space-based solar power, with John Bucknell
11-10-2023
In the future, energy will be too cheap to meter. That used to be a common vision of the future: abundant, clean energy, if not exactly free, then much cheaper than today's energy. But a funny thing happened en route to that future of energy abundance. High energy costs are still with us in 2023, and are part of what's called the cost-of-living crisis. Moreover, although there's some adoption of green, non-polluting energy, there seems to be as much carbon-based energy used as ever.

Regular listeners to this show will know, however, that one of our themes is that forecasts of the future often go wrong, not so much in their content, but in their timing. New technology and the associated products and services can take longer than expected to mature, but once a transition does start, it can accelerate. And that's a possible scenario for the area of technology we discuss in this episode, namely, space-based solar power.

Joining us to discuss the prospects for satellites in space gathering significant amounts of energy from the sun, and then beaming it wirelessly to receivers on the ground, is John Bucknell, the CEO of the marvellously named company Virtus Solis.

John has been with Virtus Solis, as CEO and Founder, since 2018. His career previously involved leading positions at Chrysler, SpaceX, General Motors, and the 3D printing company Divergent.

Selected follow-ups:
- https://virtussolis.space/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Whatever happened to self-driving cars, with Timothy Lee
27-09-2023
Self-driving cars have long been one of the most exciting potential outcomes of advanced artificial intelligence. Contrary to popular belief, humans are actually very good drivers, but even so, well over a million people die on the roads each year. Globally, for people between 12 and 24 years old, road accidents are the most common form of death.

Google started its self-driving car project in January 2009, and spun out a separate company, Waymo, in 2016. Expectations were high. Many people shared hopes that within a few years, humans would no longer need to drive. Some of us also thought that the arrival of self-driving cars would be the signal to everyone else that AI was our most powerful technology, and would get people thinking about the technological singularity. They would, in other words, be the “canary in the coal mine”.

The problem of self-driving turned out to be much harder, and insofar as most people think about self-driving cars today at all, they probably think of them as a technology that was over-hyped and failed. And it turned out that chatbots – and in particular GPT-4 – would be the canary in the coal mine instead.

But as so often happens, the hype was not wrong – it was just the timing that was wrong. Waymo and Cruise (part of GM) now operate paid-for taxi services in San Francisco and Phoenix, and they are demonstrably safer than human drivers. Chinese companies are also pioneering the technology.

One man who knows much more about this than most is our guest today, Timothy Lee, a journalist who writes the newsletter "Understanding AI". He was previously a journalist at Ars Technica and the Washington Post, and he has a master's degree in Computer Science.

In recent weeks, Timothy has published some carefully researched and insightful articles about the state of the art in self-driving cars.

Selected follow-ups:
- https://www.UnderstandingAI.org/

Topics addressed in this episode include:
*) The two main market segments for self-driving cars
*) Constraints adopted by Waymo and Cruise which allowed them to make progress
*) Options for upgrading the hardware in a self-driven vehicle
*) Some local opposition to self-driving cars in San Francisco
*) A safety policy: when uncertain, stop, and phone home for advice
*) Support from the State of California - and from other US States
*) Comparing accident statistics: human drivers versus self-driving
*) Why self-driving cars don't require AGI (Artificial General Intelligence)
*) Reasons why self-driving cars cannot be remotely tele-operated
*) Prospects for self-driven freight transport running on highways
*) The company Nuro that delivers pizza and other items by self-driven robots
*) Another self-driving robot company: Starship ("your local community helpers")
*) The Israeli company Mobileye - acquired by Intel in 2017
*) Friction faced by Chinese self-driving companies in the US and elsewhere
*) Different possibilities for the speed at which self-driving solutions will scale up
*) Potential social implications of wider adoption of self-driving solutions
*) Consequences of fatal accidents
*) Dangerous behaviour from safety drivers
*) The special case of Tesla FSD (assisted "Full Self-Driving") and Elon Musk
*) The future of recreational driving
*) An invitation to European technologists

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Generative AI, cybercrime, and scamability, with Stacey Edmonds
20-09-2023
One of the short-term concerns raised by artificial intelligence is cybercrime. Cybercrime didn't start with AI, of course, but it is already being aggravated by AI, and will become more so.

We are delighted to have as our guest in this episode somebody who knows more about this than most people. After senior roles in the audit and consulting firm Deloitte, and the headhunting firm Korn Ferry, Stacey Edmonds set up Lively, which helps client companies to foster the culture they want, and to inculcate the skills, attitudes, and behaviours that will enable them to succeed, and to be safe online.

Stacey's experience and expertise also encompass social science, youth work, education, edtech, and the creative realm of video production. She is a juror at the New York Film Festival and the International Business Awards.

In this discussion, Stacey explains how cybercrime is on the increase, fuelled not least by generative AI. She discusses how people can reduce their 'scam-ability' and live safely in the digital world, and how organisations can foster and maintain trusted digital relationships with their customers.

Selected follow-ups:
https://www.linkedin.com/in/staceyedmonds/
https://futurecrimesbook.com/ (book by Marc Goodman)
https://cybersecurityventures.com/cybercrime-to-cost-the-world-8-trillion-annually-in-2023/
https://www.vox.com/technology/2023/9/15/23875113/mgm-hack-casino-vishing-cybersecurity-ransomware
https://www.trustcafe.io/

Topics addressed in this episode include:
*) Excitement and apprehension following the recent releases of generative AI platforms
*) The cyberattack on the MGM casino chain
*) Estimates of the amount of money stolen by cybercrime
*) The human trauma of victims of cybercrime
*) Four factors pushing cybercrime figures higher
*) Hacking "the human algorithm"
*) Phishing attacks with and without spelling mistakes
*) The ease of cloning voices
*) The digital wild west, where the sheriff has gone on holiday
*) People who are particularly vulnerable to digital scams
*) The human trafficking of men with IT skills
*) Economic drivers for both cybercrime and solutions to cybercrime
*) Comparing the threat from spam and the threat from deep fakes
*) Anticipating a surge of deep fakes during the 2024 election cycle
*) A possible resurgence of mainstream media
*) Positive examples: BBC Verify, Trust Café (by Jimmy Wales), the Reddit model of upvoting and downvoting, community notes on Twitter
*) Strengthening "netizen" skills in critical thinking
*) The forthcoming app (due to launch in November) "Dodgy or Not", designed to help people build their resistance to being scammed
*) Cyber meets Tinder meets Duolingo meets Angry Birds
*) Scenarios for cybercrime 3-5 years in the future
*) Will a future UGI (Universal Generous Income) reduce the prevalence of cybercrime?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The Economic Singularity, Bletchley Park, and the Future of AI
13-09-2023
The UK government has announced plans for a global AI Safety Summit, to be held at Bletchley Park in Buckinghamshire, outside London, on the 1st and 2nd of November. That raises the importance of thinking more seriously about potential scenarios for the future of AI. In this episode, co-hosts Calum and David review Calum's concept of the Economic Singularity, a topic that deserves to be addressed at the Bletchley Park Summit.

Selected follow-ups:
https://www.gov.uk/government/news/uk-government-sets-out-ai-safety-summit-ambitions
https://calumchace.com/the-economic-singularity/
https://transpolitica.org/projects/surveys/anticipating-ai-30/

Topics addressed in this episode include:
*) The five themes announced for the AI Safety Summit
*) Three different phases in the future of AI, and the need for greater clarity about which risks and opportunities apply in each phase
*) Two misconceptions about the future of joblessness
*) Learning from how technology pushed horses out of employment
*) What the word 'singularity' means in the term "Economic Singularity"
*) Sources of meaning, beyond jobs and careers
*) Contrasting UBI and UGI (Universal Basic Income and Universal Generous Income)
*) Two different approaches to making UGI affordable
*) Three forces that are driving prices downward
*) Envisioning a possible dual economy
*) Anticipating "the great churn", the accelerated rate of change of jobs
*) The biggest risk arising from technological unemployment
*) Flaws in the concept of GDP (Gross Domestic Product)
*) A contest between different narratives
*) Signs of good reactions by politicians
*) Recalling Christmas 1914
*) Suspension of "normal politics"
*) Have invitations been lost in the post?
*) 16 questions about what AI might be like in 2030

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Longevity Summit Dublin: four new mini-interviews
06-09-2023
This episode, like the previous one, consists of a number of short interviews recorded at the Longevity Summit Dublin between 17th and 20th August, featuring a variety of different speakers from the Summit.

In this episode, we'll hear first from Matt Kaeberlein, the CEO of a company called Optispan, following a 20-year period at the University of Washington studying the biological mechanisms of aging and potential interventions to improve healthspan. Among other topics, Matt talks to us about the Dog Aging Project, the Million-Molecule Project, and whether longevity science is at the beginning of the end or the end of the beginning.

Our second speaker is João Pedro de Magalhães, who is the Chair of Molecular Biogerontology at the University of Birmingham, where he leads the genomics of aging and rejuvenation lab. João Pedro talks to us about the motivation to study and manipulate the processes of aging, and his work to improve the low-temperature cryopreservation of human organs. You may be surprised at how many deaths are caused by the present lack of such cryopreservation methods.

Third is Steve Horvath, who has just retired from his position as a professor at the University of California, Los Angeles, and is now a Principal Investigator at Altos Labs in Cambridge. Steve is known for developing biomarkers of aging known as epigenetic clocks. He describes three generations of these clocks, the implications of mammalian species with surprisingly long lifespans, and possible breakthroughs involving treatments such as senolytics, partial epigenetic reprogramming, and altering metabolic pathways.

The episode rounds off with an interview with Tom Lawry, Managing Director for Second Century Tech, who refers to himself as a recovering Microsoft executive. We discuss his recent bestselling book "Hacking Healthcare", what's actually happening with the application of artificial intelligence to healthcare (automation and augmentation), the pace of change regarding generative AI, and whether radiologists ought to fear losing their jobs any time soon to deep learning systems.

Selected follow-ups:
https://longevitysummitdublin.com/speakers/
https://optispanlife.com/
https://orabiomedical.com/
https://rejuvenomicslab.com/
https://oxfordcryotech.com/
https://horvath.genetics.ucla.edu/
https://altoslabs.com/team/principal-investigators-san-diego/steven-horvath/
https://www.tomlawry.com/
https://www.taylorfrancis.com/books/mono/10.4324/9781003286103/hacking-healthcare-tom-lawry

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration