
Should We "Move Fast And Break Things" With AI?

In the bustling corridors of Silicon Valley, the mantra of "move fast and break things" has long been a guiding principle. But when it comes to the integration of generative artificial intelligence (AI) into our daily lives, this approach is akin to playing with fire in a room filled with dynamite. A recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) paints a clear picture: the American public is not only concerned but demanding a more cautious and regulated approach to AI. As someone who works with companies to integrate generative AI into the workplace, I see these fears every day among employees.

A Widespread Concern: The People's Voice on AI

The AIPI survey reveals that 72% of voters prefer slowing down the development of AI, compared to just 8% who prefer speeding development up. This isn't a mere whimper of concern; it's a resounding call for caution. The fear isn't confined to one political party or demographic; it's a shared anxiety that transcends boundaries.

In my work with companies, I witness firsthand the apprehension among employees. The concerns of the general public are mirrored in the workplace, where the integration of AI is no longer a distant future but a present reality. Employees are not just passive observers; they are active participants in this technological revolution, and their voices matter.

Imagine AI as a new dish at a restaurant. The majority of Americans, including the employees I work with, would be eyeing it suspiciously, asking for the ingredients, and perhaps even calling for the chef (in this case, tech executives) to taste it first. This analogy may seem light-hearted, but it captures the essence of the skepticism and caution that permeate the discussion around AI.

The fears about AI are not unfounded, and they are not limited to catastrophic events or existential threats. They encompass practical concerns about job displacement, ethical dilemmas, and the potential misuse of technology. These are real issues that employees grapple with daily.

In my consultations, I find that addressing these fears is not just about alleviating anxiety; it's about building a bridge between the technological advancements and the human element. If we want employees to use AI effectively, it's crucial to address these fears and risks around AI and have effective regulations.

The widespread concern about AI calls for a democratic approach where all voices are heard, not just those in the tech industry or government. The employees, the end-users, and the general public must be part of the conversation.

In the companies I assist, fostering an environment of open dialogue and inclusion has proven to be an effective strategy. By involving employees in the decision-making process and providing clear information about AI's potential and limitations, we can demystify the technology and build trust.

The "move fast and break things" approach may have its place, but when it comes to AI, the voices of the people, including employees, must be heard. It's time to slow down, listen, and act with caution and responsibility. The future of AI depends on it, and so does the trust and well-being of those who will live and work with this transformative technology.

The Fear Factor: Catastrophic Events and Existential Threats

The numbers in the AIPI poll are staggering: 86% of voters believe AI could accidentally cause a catastrophic event, and 76% think it could eventually pose a threat to human existence. These aren't the plotlines of a sci-fi novel; they're the genuine fears of the American populace.

Imagine AI as a powerful race car. In the hands of an experienced driver (read: regulated environment), it can achieve incredible feats. But in the hands of a reckless teenager (read: unregulated tech industry), it's a disaster waiting to happen.

The fear of a catastrophic event is not mere paranoia. From autonomous vehicles gone awry to algorithmic biases leading to unjust decisions, the potential for AI to cause significant harm is real. In the workplace, these fears are palpable. Employees worry about the reliability of AI systems, the potential for errors, and the lack of human oversight.

The idea that AI could pose a threat to human existence may sound like a dystopian fantasy, but it's a concern that resonates with 76% of voters, including 75% of Democrats and 78% of Republicans. This bipartisan concern reflects a deep-seated anxiety about the unchecked growth of AI.

In the corporate world, this translates into questions about the ethical use of AI, the potential for mass surveillance, and the loss of human control over critical systems. It's not just about robots taking over the world; it's about the erosion of human values, autonomy, and agency.

In my work with companies, I see the struggle to balance innovation with safety. The desire to harness the power of AI is tempered by the understanding that caution must prevail. Employees are not just worried about losing their jobs to automation; they're concerned about the broader societal implications of AI.

Addressing these fears requires a multifaceted approach. It involves transparent communication, ethical guidelines, robust regulations, and a commitment to prioritize human well-being over profit or speed. It's about creating a culture where AI is developed and used responsibly.

The fear of catastrophic events and existential threats is not confined to the United States. It's a global concern that requires international collaboration. In the AIPI poll, 70% of voters agree that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

In my interactions with global clients, the need for a unified approach to AI safety is evident. It's not just a national issue; it's a human issue that transcends borders and cultures.

Conclusion: A United Stand for Safety

The AIPI poll is more than just a collection of statistics; it's a reflection of our collective consciousness. The data is clear: Americans want responsible AI development. The Silicon Valley strategy of "move fast and break things" may have fueled technological advancements, but when it comes to AI, safety must come first.

