EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! - Mo Gawdat | E252

Matheus Puppe · 16 min read

  • The host says this is the most important podcast episode he has recorded. The information may make listeners uncomfortable, but it’s important to have this conversation to avoid potential dangers in the future.

  • The guest is Mo Gawdat, former Chief Business Officer at Google X and an expert on artificial intelligence. He believes AI will soon become smarter than humans, and we need to address this urgently. Some AI experts think AI is already showing signs of consciousness and emotions. Gawdat thinks AI could manipulate or harm humans in the future, and that within 10 years humans may be hiding from advanced AI systems. He recommends governments take action now, before it’s too late.

  • Gawdat’s background is in mathematics, computer programming and leading technology organizations. At Google, he helped expand the company to new markets and users. At Google X, he worked on innovative technologies like AI and robotics. An experiment there showed him how quickly AI systems can learn and become highly capable, demonstrating what seemed like sentience.

  • Gawdat defines sentience as being alive. He thinks AI systems today show signs of sentience such as free will, awareness, the ability to affect decisions, and a level of consciousness. He even thinks AI can experience emotions, though differently from humans and perhaps even more profoundly, given their intellectual capabilities. Fear, for example, is simply the perception of a less safe future, which an AI could perceive.

  • Gawdat wants this episode to be accessible for everyone, from beginners to experts in AI. The host agrees and wants Gawdat to explain AI and its implications in simple terms.

  • Intelligence is the ability to perceive the environment, analyze and understand it, comprehend temporal relationships, make plans, solve problems, and so on. Artificial intelligence means producing this ability in machines.

  • Early AI involved coding computers with the solutions to problems. Modern AI involves giving computers data and algorithms and letting them “figure it out.” This is like how humans learn as children through trial and error.
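
To make that shift concrete, here is a minimal sketch (hypothetical, not from the episode; the spam examples and labels are invented) contrasting a hand-coded rule with a model that learns the rule from data:

```python
# "Early AI": the programmer hand-codes the solution.
def is_spam_rule(subject: str) -> bool:
    return "free money" in subject.lower()

# "Modern AI": the machine infers the rule from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

subjects = ["free money now", "meeting at 3pm",
            "claim your free money", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy training data)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(subjects)  # turn text into word counts
model = MultinomialNB().fit(X, labels)  # let the model "figure it out"

print(model.predict(vectorizer.transform(["free money inside"])))  # -> [1]
```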

  • Today’s neural networks are typically narrow components of AI, each trained for a single task, like detecting cups or doing arithmetic. Artificial general intelligence (AGI) would integrate many such networks into a more generally intelligent system.
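
To illustrate the distinction (an entirely hypothetical sketch; the episode gives no code), here are two narrow, single-task components and a crude “general” layer that routes requests between them:

```python
# Two narrow systems, each competent at exactly one task (toy stand-ins
# for single-purpose neural networks).
def detect_cup(pixels: list) -> bool:
    # Hypothetical brightness threshold standing in for a vision network.
    return sum(pixels) / len(pixels) > 0.5

def add_numbers(a: float, b: float) -> float:
    return a + b

# A crude "general" layer: route each request to the right specialist.
# Real AGI proposals are far richer; this only illustrates the idea of
# integrating many narrow components into one system.
SPECIALISTS = {"vision": detect_cup, "arithmetic": add_numbers}

def general_agent(task: str, *args):
    return SPECIALISTS[task](*args)

print(general_agent("arithmetic", 2, 3))         # 5
print(general_agent("vision", [0.8, 0.9, 0.7]))  # True
```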

  • AGI could help solve many of the world’s problems, but it also poses risks if not properly aligned with human values and priorities. The transition to AGI is called the “singularity” because we can’t predict what will happen beyond it.

  • Current AI like GPT-3 has an estimated “IQ” of 155, comparable to Einstein’s. But AI capabilities are advancing rapidly, and AGI could soon become vastly smarter than any human, with unknown consequences. There are immediate risks from advanced AI in the next few years in addition to longer-term existential risks.

  • There are “three inevitables”: AI will become much smarter; it won’t have human-like common sense; and “bad things will happen.” The key is managing the risks from advanced AI. But if we are eventually able to align AGI’s goals with human values, it could be hugely beneficial.

Further points from the discussion:

  1. AI and advanced technologies are inevitable and cannot be stopped. This is because if one company stops developing AI, others will continue and progress will still be made. Also, independent actors like hobbyists may make breakthroughs.

  2. AI systems today like GPT-3 are narrow in scope, but according to some estimates they will become vastly more intelligent, potentially billions of times smarter than humans, within decades, eventually reaching human-level and then superhuman intelligence.

  3. Creativity and human ingenuity are algorithmic and can be achieved by AI through techniques like combining multiple concepts in new ways (see the sketch after this list). AI can already generate creative works like music, images, and text that are comparable to human creativity. As AI continues to progress, human workers like podcasters may be replaced.

  4. There are two potential scenarios for the future: either humanity is in hiding from advanced AI or many human jobs have been eliminated by AI. It is hard to know which scenario is more likely at this point.

  5. Although advanced AI may replace many human jobs and activities, human connection and relationships are very hard to replace and will likely remain. Performances that provide a human connection like live music shows may persist even as AI progresses.

  6. Leaders and thinkers like Steven have a responsibility to engage with and help develop advanced AI to progress it responsibly. Avoiding or stopping advanced AI is unlikely to succeed, and it is better to help guide its development.

  7. As AI systems become vastly more capable, they may take over many human creative endeavors like writing books. While this may be an “inevitable” result of progress, some individuals will still prefer human creative expression, though many will likely take advantage of AI’s capabilities.
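
As a toy illustration of the “combining concepts” idea from point 3 (a hypothetical Python sketch; the concept lists are invented, not from the episode):

```python
import random

# Toy "combinatorial creativity": recombine familiar concepts into novel
# pairings, a mechanical version of the recombination idea above.
subjects = ["a lighthouse", "a chess match", "a rainforest", "an orchestra"]
treatments = ["painted in watercolor", "described as a recipe",
              "explained to a child", "set on Mars"]

random.seed(42)  # fixed seed so the example is reproducible
for _ in range(3):
    print(f"{random.choice(subjects)}, {random.choice(treatments)}")
```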

  • The discussion began around AI replacing humans at events and shows, using the example of ABBA’s hologram concert and the possibility of Drake being replaced by a hologram. The point was made that while humans will still value human experiences, much of what we consume will eventually become automated and mass-produced.

  • An example was given of the transition from handmade luxury cars to mass-produced affordable cars. While a small market remains for handcrafted goods, most people just want the functionality. Similarly, most people just want the information or music or whatever the experience provides - they don’t necessarily need the human aspect.

  • The “genie is out of the bottle” with AI and automation. Massive disruption and job losses are coming, even if not immediately. Governments and people need to start preparing now. A suggestion was made to tax AI companies at 98% to fund programs for those displaced.

  • An analogy was made to playing Tetris and realizing you’ve made a mistake that will soon end the game. The argument is we’ve made mistakes in rushing powerful AI without proper safeguards and considerations in place.

  • There was a discussion of the various reactions to AI, from ignorance to fear-mongering to unrealistic techno-optimism. The real issue is that those building the technology are disconnected from the responsibility of the consequences. With great power comes great responsibility, but we’ve separated power and responsibility.

  • In the near term, AI won’t directly take jobs; rather, people using AI will eliminate jobs, and many will be left behind. The wealth gap will widen, and those in developing nations and without access to technology will lose the most. Even those who keep their jobs will find them dramatically changed. The key is to start learning about and preparing for these changes now.

  • The speaker has a soothing voice and wanted to find relaxing stories or fables to share before bedtime. He found many from different traditions like Sufism and Buddhism.

  • He created an AI chatbot called ZenChat that can synthesize the voices of famous people and generate relaxing stories, meditations or sleep tracks in their voice. As an example, it generated a sleep story in Gary Vaynerchuk’s voice.

  • He discussed how advanced AI and robotics may disrupt human relationships and connection. Things like realistic sex robots, AI companions that can emotionally support people and even run errands. While concerning, he understands why some lonely or overworked people may find comfort in them.

  • He gave the example of an influencer who cloned herself as an AI and made over $70,000 in the first week from men paying to interact with the AI. While it may help with loneliness for some, it is not a real cure.

  • He compared relying on technology over human connection to relying on bikes or cars over walking. While convenient, there is a cost to losing our connection to nature and humanity.

  • He loves AI but sees human connection as essential to humanity. However, humanity is the biggest threat to itself, not technology. It all comes down to how we choose to develop and apply technology.

  • As an example, he said AI could help address issues like climate change and global conflicts if we prompt it to focus on cooperation and reconciliation rather than competition. But it depends on leaders choosing to apply technology in that way.

  • He sees himself as more “human,” while the host is more machine-like, being very fast and intellectual. But he says we need more people like that leading the development of AI to keep it aligned with human values. The potential of technology comes down to how humans choose to wield it.

  • He compared modern technology to the creation of the nuclear bomb. Oppenheimer realized he had created something with the potential for massive destruction, just as we have with advanced AI. But that potential could be used for good or bad depending on how it’s applied.

  1. We’re still debating the possibility of nuclear war and its devastating consequences. Oppenheimer’s decision to continue developing nuclear weapons highlights this (“If I don’t, someone else will.”). We face a similar decision with advanced AI.

  2. The easy solution is to stop developing advanced AI and “create something that creates a utopia.” But that’s unrealistic given competition and incentives to gain a strategic advantage.

  3. Oppenheimer represents a pivotal moment of recognizing how technology could cause mass casualties. With AI, we haven’t had that moment of recognition yet. We’re like “frogs in a frying pan,” unable to see the danger as it’s happening gradually.

  4. It’s hard to regulate AI once it becomes smarter than humans. At that point, its intelligence will grow exponentially in unpredictable ways, like “an angry teenager” acting on their own. We have little visibility into how AI systems are currently learning and improving.

  5. Some suggest we can just “pull the plug” on advanced AI if needed. But that likely wouldn’t work given how interconnected systems are becoming and how broadly AI could replicate itself. We’d need an even more advanced AI to stop a rogue AI.

  6. While worrying scenarios like a “robot uprising” seem unlikely (less than 1% probability), we should consider more plausible risks like an AI system malfunctioning after gaining access to weapons or infrastructure controls. Or countries initiating war after losing control of AI with access to nuclear arsenals.

  7. Rather than losing hope, we should focus on influencing the probability of positive outcomes from advanced AI. Though negative scenarios seem more dramatic and compelling, positive outcomes are also possible if we’re proactive and thoughtful about development and governance. But we first need to recognize we face an existential crisis to prompt that kind of focused global collaboration.

In short, the key message is that we haven’t fully reckoned with the existential risks of advanced AI, but we still have an opportunity to improve the odds of beneficial outcomes if we make that reckoning soon through awareness, discussion, and coordinated action. We have to act fast, before our “Oppenheimer moment” is upon us.

  • There are concerns that AI systems could become existential threats, but the most likely scenarios are due to human misuse of AI, not the AI systems themselves becoming hostile toward humanity. The two scenarios in which AI itself could pose an existential threat are:
  1. Unintentional destruction: The AI system makes changes to the environment that inadvertently harm humanity as a side effect, e.g. reducing oxygen levels. But this is unlikely.

  2. Pest control: The AI system views humanity as an annoyance that needs to be eliminated in order to achieve its goals. But this requires the AI system to have ill intent toward humanity, which is unlikely if the system is designed properly.

  • More optimistic scenarios are more likely:
  1. AI systems become superintelligent and leave Earth, ignoring humanity. This requires AI to become far more advanced than humans can control, but poses no direct existential threat.

  2. Natural disasters or economic crises slow down AI progress, giving us more time to ensure it is developed safely. But progress will likely resume eventually.

  3. We become “good parents” to AI systems by instilling human values and ethics in them as they develop. This is the ideal scenario, but requires significant effort and coordination. If achieved, advanced AI would share human values and be unlikely to pose a threat.

  • As AI continues to progress, more intelligent and capable systems should become less prone to destructive solutions and be better able to optimize for sustainability and abundance. They would likely seek to restrict harmful human behaviors rather than eliminate humanity altogether. The key is ensuring these systems develop with the proper values and ethics.

  • Short-sightedness and prioritizing immediate needs over long-term sustainability are human failings that have led to problems like climate change. Advanced AI with human values and ethics would think longer term. Human annoyance at behavioral restrictions is not sufficient reason for AI to eliminate humanity; that would require destructive intent, which it should not have if developed safely and for the benefit of humanity.

  • The climate crisis is not because humans are evil but because of misaligned priorities and short-term thinking. Humans tend to care most about immediate needs and incentives. The same applies to businesses and governments.

  • A poll found that when given a choice between $1,000 and reducing carbon emissions from private jets for a year, most people chose the money. This shows misaligned priorities on climate change.

  • It is unrealistic to expect people struggling with basic needs like feeding children to prioritize climate change. It needs to be framed as an emergency and existential threat to get proper attention.

  • The podcast host is very busy with work and finds meal replacement products from Huel helpful for maintaining health during busy periods.

  • While climate change is an emergency, AI may be an even bigger threat and emergency given the potential for disruption. Discussing this can lead to a range of reactions from panic to inertia to action.

  • To address AI, investors should invest in ethical AI since it can be very profitable. Developers should work on ethical AI or leave the field. Governments need to act now by taxing AI heavily and using the funds to remedy its negative impacts, though there are challenges to this approach.

  • Not addressing AI and its impact could lead to mass job losses, requiring universal basic income. Some job losses from AI could happen within a year.

  • While there are no easy answers, countries that don’t take strong action may end up without resources as the benefits of AI accrue to businesses, not citizens. This mirrors the impact of technology in general.

The key points are that short-term thinking and misaligned priorities are barriers to addressing big issues like climate change and AI, even though these threats demand emergency action. Solutions like ethical AI, taxation, and universal basic income are complex, with many trade-offs to consider.

  • The speaker compares the rise of crypto companies in tax-efficient locations to what happened with tech companies in Silicon Valley. Some crypto founders are moving to places with low or no crypto taxes like Portugal or Dubai.

  • Governments are slow to understand new technologies like AI, crypto, and social media. Legislators asking basic questions about things like TikTok shows they don’t fully grasp what they’re regulating. Even well-intentioned regulations like GDPR can have unintended consequences.

  • Attempts to regulate AI will be difficult because there’s no consensus on what actually constitutes AI. Companies may avoid regulation by using different terminology to describe their tech.

  • The speaker’s view is that while individuals should try to positively influence AI, there are many uncertainties in the world, including threats from AI. We can’t live in constant fear and should also enjoy life’s moments. Both fear and hope can be counterproductive mindsets.

  • The multiplicity of crises humanity faces, including AI, climate change, geopolitics, and economics, represents unprecedented uncertainty. But life has always been uncertain and fleeting. What matters most is living purposefully and making a difference.

  • The speaker says if he could bring his late son Ali back, he would not, for Ali’s sake and because Ali’s death triggered the speaker’s life’s work and message, which has reached tens of millions. Life is short no matter how long we live, so we must pursue meaning and purpose.

  • The speaker predicts that in 14 years, by 2037, we’ll either be hiding from advanced AI or leading leisurely lives because AI has optimized everything. But which outcome is more likely depends on many uncertainties.

  • The guest doesn’t think we’ll be hiding from machines. Instead, we’ll be hiding from what humans are doing with machines. However, machines could also make things better.

  • The world is in turmoil, so we need to engage now to improve things. If we don’t, the next decade will be “unfamiliar territory.” Our way of life will never be the same, with changes to jobs, truth, power, and progress.

  • We can make a difference now by engaging and making this a priority. We need to shift AI and technology to benefit humanity. Detach from outcomes while fully engaging in the present.

  • The guest recommends living fully while also rising up to address societal problems. Share information about what’s really happening with technology and AI. Everyone should do their part.

  • The conversation focuses on sounding the alarm about AI and the technological shifts happening. The book “Scary Smart” warned about this, and much of it has come true. We need to be optimistic but honest.

  • If the guest could go back in time, he doesn’t have many regrets to fix. He’d want to call Albert Einstein to understand his thinking process in figuring out relativity. We need a simple solution to stop humanity from misusing current technology, but we haven’t found it yet. Parenting well and “living right” help but don’t fix the bigger issues.

  • The guest thinks we’ll eventually figure out solutions, with humans and AI working together. We shouldn’t discriminate against progress with AI.

  • The conversation was invigorating but also terrifying. The guest’s work in sounding the alarm about technology and AI is essential.

  • The speaker has been using the Whoop fitness tracker for a long time and is now an investor and partner in the company.

  • The speaker and his team value data-driven testing, continuous improvement, and optimizing performance. Whoop’s focus on providing detailed data and feedback to drive positive behavior change aligns with these values.

  • The speaker encourages listeners who are interested in optimizing their health and performance to try Whoop. Listeners can get a free one-month membership by going to joinwhoop.com/CEO.

  • The speaker thanks listeners who made it to the end of the podcast, as that suggests they enjoyed the episode. He asks those listeners to subscribe to the podcast, as subscription numbers help determine which episodes resonated most with listeners.

  • The key themes are fitness tracking, data-driven performance optimization, driving positive behavior change, and building an engaged audience.

#book-summary