By Ramesh Srinivasan, Author, Beyond the Valley, @rameshmedia

Technology can be a great catalyst for democracy. It can bring the ‘crowd’, all of us millions of users, together to pool our resources, report corruption, and discuss and debate the issues of the day. It can bring prosperity to everyone, rather than accentuate inequality.

But as we know all too well, it can also be used to divide and conquer us. This is the story I tell in my new book, Beyond the Valley, which comes out on Tuesday, October 29th. The book paves the way toward a digital world that supports our best interests as human beings: democracy, economic security and equality, and a deep respect for diversity and difference. It argues for a digital bill of rights, and it features interviews with leading figures like Elizabeth Warren, Eric Holder, and Ro Khanna, as well as business leaders, union leaders, and many others.

We can get there, but first we need to recognize what has gone wrong. In the United States' 2016 election, the story was Russian hacking and Cambridge Analytica, the infamous scandal in which President Trump's campaign hired the firm to access a bank of illegally obtained Facebook data about 70.6 million Americans. The company then fed this data into complex psychometric models, allowing it to create highly targeted "dark posts" (posts shown to individual users that can't be seen by anyone else) in order to sway public opinion in Trump's favor.

The Analytica example is not the exception; it's increasingly the norm. A few days ago we heard from Mark Zuckerberg that any speech, any political ad is fair game on Facebook, which, far from being a tech company, is actually the biggest media company in the history of the world. And just a few weeks ago we found out that the anti-immigrant messaging that has characterized the last year or so of Republican campaigning was no accident. It was a product of social media strategy. Indeed, as The Intercept reported on September 9th, a wave of anti-immigrant sentiment was carefully designed and calibrated, then set loose to activate and influence voters, based on the recommendations of a data-mining company called i360, owned by the Koch brothers.

How did it do this? As an example, the company "boasts that the [campaign of Marsha Blackburn for Senate in Tennessee] used its technology to shape 3 million voter contact calls, 1.5 million doors knocked, $8.4 million spent on television ads, and 314,000 campaign text messages," all based on its proprietary technology's conclusion that immigration was the perfect wedge issue for certain demographics within the Republican Party.

Charles Koch has taken care to assure the public, in a series of recent statements, that he opposes the Republican Party's anti-immigration stance. His words resonate with those of Facebook CEO Mark Zuckerberg, who, in the midst of the Cambridge Analytica scandal, publicly apologized, testified before Congress, and pledged to better "police" Facebook. "It's not enough to just build tools," he said. "We need to make sure they're used for good. That means we need to now take a more active view in policing the ecosystem."

All of this sounds great, but the devil is in the details. How does Facebook, or i360, or for that matter any major data or technology company, characterize the "good" use of tools, "policing," or even what it means for Facebook to be "more active"? Are we supposed to blindly trust tech titans whose loyalty, first and foremost, is to their company's bottom line?

Access to massive stores of data about users isn't some kind of "bug" in the social media ecosystem. Mass surveillance is a feature for corporations like Facebook, not a bug. Their business models depend on collecting and selling data about us, knowing our intimate lives and steering our behavior to keep our attention. Their growth demands the creation of ever-more detailed forms of that data. In 2018 Facebook agreed to abide by the EU's new user data privacy law, the General Data Protection Regulation (GDPR), and volunteered to do so in all of its dealings, not just those inside the EU. Yet in the very same year Facebook collaborated with legislators and lobbyists to gut a law that prevented the company from running facial recognition scans without users' consent.

But this isn’t the only destiny for technology; it has been, can be, and is still used to bring people together, as a catalyst for democracy and for free and fair elections. For example, as we have seen with multiple democratic primary campaigns, technology can be used to pool resources amongst working and middle class people, bypassing traditional lobbyist and corporate PAC funding models. 

We can also look across the world to see examples of technology that serve citizen interests. Last year, I traveled to Kenya to meet the team behind Ushahidi, a unique non-profit crowdsourcing platform that has been used hundreds of thousands of times across the world. I was fascinated by Ushahidi’s rich history: a technological innovation far beyond Silicon Valley’s control or imagination that blends social media, grassroots activism, citizen journalism, and mapping. It started in Kenya, and spread across the world.

Ushahidi, Swahili for “testimony,” emerged as a response to the 2007 election crisis in Kenya. Kenyans had begun to notice that reports they viewed on television differed from what they were hearing from friends and relatives. Then the government shut down local television, and international coverage presented its own flaws and biases. An opportunity emerged to design a technology to empower Kenyans to speak to one another directly, and to get instant information from the grassroots by sharing reports through texts, tweets, emails, and photos displayed via an interactive map. The technology, which has allowed its users to monitor everything from elections to outbreaks of violence to natural disaster recovery efforts, protects personal data and can be installed on one’s own server. It has now been used to help bring people together during critical moments: in Haiti (after the 2010 earthquake), in Nigeria in 2011 (and in national elections in the United States), in Japan (the 2011 tsunami), in Syria during the civil war, and with harassmap.org (a site that helps women report on sexual harassment and other abuses).

The organization behind Ushahidi still maintains its pledge to avoid gathering personal data and has faith that people will deploy the technology as they see fit, as a tool rather than a “solution” to social, cultural, or political issues. It’s the opposite of Silicon Valley’s urge, as Ushahidi’s former director of product management told me, “to create technologies to solve what they see as problems; which end up creating other problems.” As a result of that urge, he says, “there’s a real underinvestment in the power of human choice and creativity, social capital, and democracy.” 

Such a vision recognizes our social ties and collective creativity as a gift that can serve us all, rather than a resource used simply to churn out profits for a privileged few. It is also a reminder that proprietary platforms controlled by the rich and powerful are not a sine qua non of communal engagement online; they are just the current way of doing things in our neck of the woods. We can do better. That means moving Beyond the Valley, not just literally but in our thinking: moving away from the idea that just a few corporations should control the destiny of all things digital, which is increasingly the language by which our life experiences are defined.

“We” means all of us, the users of tech, members of the 99% of people on the planet who are currently left out when technologists and businessmen in Silicon Valley decide to “solve” our problems for us, but without us. The way out? Looking at examples ‘beyond the valley’ that show how, across the world, technologies are being reinvented and reimagined from the bottom up to give all of us a voice.