
Omidyar Network CEO Reimagines the Digital Technology Sector


Outline of a head with lines, tubes, and a figure falling into a hole (Illustration by Brian Stauffer)

For decades, the United States has allowed digital technology to expand unmoored from any societal vision. Despite a history of standing up to protect people’s rights, Americans have remained uncharacteristically complacent, accepting digital technology’s impact on our economy, democracy, criminal-justice system, and social fabric as inevitable.

This acquiescence may be ending. Earlier this year, Seattle Public Schools became the first school district to sue social media companies, arguing that Facebook, Instagram, Snapchat, TikTok, and YouTube are contributing to the nation’s surging youth mental-health crisis and should be held accountable. Since then, school districts across the country have followed suit. In March 2023, the Board of Education and the County Superintendent of Schools in California’s San Mateo County—which includes 23 school districts in the heart of Silicon Valley—sued social media companies, alleging that the companies used artificial intelligence and machine-learning technology to create addictive platforms that harm young people. Numerous studies back the plaintiffs’ concerns.1

Insurers, lenders, employers, hospitals, and landlords are increasingly relying on predictive algorithms and generative artificial intelligence (Gen AI)—AI that can create new content and ideas based on prompts—to assess everything from loan and rent applications to medical treatments. Such reliance raises serious concerns about equity and fairness. A 2021 Consumer Reports study found that computer-generated decision-making is leading some auto-insurance companies to issue higher quotes to people with lower education and income.2

Last year, US Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.) and Representative Yvette Clarke (D-N.Y.) introduced the Algorithmic Accountability Act of 2022, a measure that would require companies to assess the impacts of the automated systems they use and sell and be more transparent about when and how they are using these systems. But the bill stalled in committee. Similar legislation introduced in 2019 also stalled. This year, Senate Majority Leader Chuck Schumer (D-N.Y.) has been leading an effort in the Senate to develop a legislative framework that “outlines a new regulatory regime for AI.”

As they wait for federal guidance, several states are considering some form of algorithmic accountability measures. For example, insurance regulators in Colorado and Connecticut are attempting to restrict insurance companies that use AI to determine who gets coverage and what it costs. Pending regulations would require stronger testing and ongoing monitoring of AI technology, as well as greater transparency in communication with customers.

As artificial intelligence dominates political, social, and economic discourse, fixation on potential harms is understandable. And now that Gen AI is at the forefront of conversations about digital technology, people are grappling with grand claims of existential risk, as well as real concerns about racial bias and disinformation. Policy makers and other leaders clearly regret that they did not establish a governance framework around social media at its advent. Now, amid Gen AI’s rapid spread, they may see the need for regulation as even more acute.

In fact, tech leaders who are developing these tools are calling for guardrails. In May, more than 350 executives, researchers, and engineers working in AI signed the following statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”3

Such widespread attention to the perils of letting technology (and technologists) call the shots creates an opening for a much-needed conversation about how to ensure that society drives technology—rather than the other way around. Omidyar Network is a longtime supporter of the power and potential of digital technology, having invested more than $750 million in tech start-ups aimed at improving people’s lives. Lessons from our work show that a tech system that benefits the many, not just the few, must balance innovation with social responsibility, regardless of whether technology is deployed by individuals, companies, or governments.

Channeling the power of technology for the good of society requires a shared vision of an ideal society. Despite the country’s increasing polarization, most Americans agree on the principles of a representative democracy and embrace the three quintessential rights inscribed in the Declaration of Independence—life, liberty, and the pursuit of happiness. Freedom and individual liberty, including freedom of speech, religion, and assembly and the right to privacy, are fundamental to most people’s expectations for this country, as are equality for all citizens, a just legal system, and a strong economy. Widespread consensus also exists around giving children a strong start in life; ensuring access to basic necessities like health care, food, and housing; and taking care of the planet.

By deliberately building a digital tech system guided by these values, society has an opportunity to advance its interests and future-proof the digital tech system for better outcomes.

Such collective action requires a broad conversation about what kind of society Americans want and how digital technology fits into that vision. To initiate this discussion, I suggest five questions that philanthropists, technologists, entrepreneurs, policy makers, academics, advocates, movement leaders, students, consumers, investors, and everyone else who has a stake in the nation’s future need to start asking—now.

1. What underlying assumptions, mindsets, and ideas must change to create a digital technology system that uplifts society?

Ideas matter. They are grounded in values and have durable influence. Ideas spark conversations about what is possible and inform which policies endure and which get repealed.

The ideas that currently guide our economy—and therefore much of our digital technology system—started in the late 1970s among a relatively small number of academics, politicians, corporate leaders, wealthy people, and other elites who seeded a new set of ideas across society. They placed individual freedom from government and corresponding “free” markets above all else. Economic efficiency, small government, low taxes, shareholder profits, and individual responsibility came to rule the day, stripping all other purpose out of the economy. Because digital technology first came of age during this free-market philosophy’s peak, policy makers have taken a laissez-faire approach to governing—or not governing—it. This stance has come at the expense of consumers, communities, and society at large.

For instance, shareholder primacy—the view that CEOs and boards of directors ought to put the interests of shareholders above all others’—has favored gains for tech company owners and their investors at the expense of employees, democracy, the nation’s social fabric, and the environment. Additionally, the current economic paradigm incentivizes privatizing the gains and socializing the harms while avoiding any meaningful accountability. Both private equity firms and venture capitalists invest in companies with the intent of getting maximum returns, even if it means cutting jobs, pensions, or salaries. When companies succeed, these firms and their investors reap the profits. When the investments fall short, however, these firms socialize the costs, leaving once-healthy companies or promising start-ups bankrupt or in shambles.

Rather than accepting the current reality as inevitable, society has an opportunity to push for a new economic paradigm—one that is inclusive of the digital technology sector and prioritizes individual, community, and societal well-being. Reimagining the nation’s digital technology system to support society must start with replacing outdated and in many cases discredited ideas with a new paradigm that reflects the realities of today’s world. Omidyar Network’s 2020 report “Our Call to Reimagine Capitalism in America” outlines the five primary economic areas that must be addressed in order to create a new economic paradigm that is founded on individual, community, and societal well-being and ensures meaningful participation for everyone.

Redesigning the digital technology system to support a more equitable, inclusive, and resilient society requires expanding tech companies’ obligations beyond earning and maximizing profits. For example, a digital tech system that supports the American ideals of personal freedom and liberty must prioritize how it handles and secures personal data. Currently, consumers have no ready means to see or understand where their data are being sold or shared. Business models treat data as a commodity, offering them up to the highest bidder. This lopsided value proposition ignores the producers of data—all of us—and underscores the power that corporations hold over Americans’ data. Consent, cookies, and privacy policies do not solve this challenge. Anyone who opts out is unfairly penalized by being excluded from participating fully in the digital world on which our lives depend. The system deceives, coerces, and extracts from the public.


Adopting a new economic mindset and new business models that are not extractive brings an opportunity to recharacterize data and guide how their economic value is derived and shared in support of a fairer, more just approach. Instead of conceiving data as property, society must think of them more as a public good that should be used in the public interest and have a greater benefit for society. Worker Info Exchange, a nonprofit devoted to helping workers access and benefit from data collected about them in the workplace, is already putting this vision into action. For example, Uber and Lyft drivers, delivery workers, and others in the gig economy can use this online resource to pool their data so that they can collectively push for fair wages and better working conditions.

To reimagine the nation’s digital tech system to better serve society, Americans must further explore the benefits, harms, and limits of data. They can perhaps start by looking overseas. With the Digital Services Act (DSA) and the Digital Markets Act (DMA), the European Union has shown admirable leadership in creating a safer digital space that protects the fundamental rights of users while establishing a level playing field for businesses. Focused on regulating online intermediaries (e.g., social media platforms and digital-service providers), the DSA aims to protect users’ fundamental rights, including the right to freedom of expression and access to information, while mitigating illegal content, disinformation, and the risk of other harmful online activities. Central to the measure are new transparency requirements and greater user empowerment, including mechanisms that make it easier for users to report illegal content. The DMA includes regulations intended to foster competition and ensure that businesses have fair access to digital markets, such as prohibiting platforms with significant market power—Amazon, for example—from favoring their own services or products over competitors’ or leveraging data collected from their own platforms to gain an unfair advantage in the marketplace.

2. How can inclusive participation drive a stronger digital technology system?

“We mutually pledge to each other our lives, our fortunes, and our sacred honor,” states the closing line of the US Declaration of Independence, affirming the nation’s dependence on the contributions of all Americans. A more democratic economy gives everyone—including working people, consumers, small businesses, and families—an equal voice and ability to get ahead.

Like many of today’s systems, digital technology was shaped by a narrow set of voices—primarily those of straight white men. Among technology executives, 80 percent are men and 82 percent are white, while only 3 percent are Latino and just 2 percent are Black. Women, people of color, LGBTQIA+ people, youth, and people with disabilities and special needs are consistently underrepresented, both as builders and as users.

This lack of representation and the undersampling of these groups in the data that shape AI lead to digital technology that is optimized for a narrow portion of the world and can therefore exacerbate biases. For example, facial-recognition software—which law-enforcement agencies use to identify suspects more quickly—routinely performs better on male faces than on female faces and better on white-skinned subjects than on those with darker skin. For digital technology to support a just and equitable society, the workforce that is designing, financing, creating, governing, and developing it must reflect the society it aims to support.
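To make such gaps concrete, here is a minimal, hypothetical sketch of the kind of disaggregated evaluation a fairness audit performs: computing accuracy separately for each demographic group rather than reporting one aggregate number. The predictions, labels, and group tags below are invented for illustration; a real audit would use a benchmark with demographic annotations.

```python
# A minimal, hypothetical sketch of a disaggregated accuracy audit.
# All predictions, labels, and group tags below are invented.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute accuracy separately per group so performance gaps surface."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Toy data: the model errs more often on the second subgroup.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
truth  = [1, 1, 0, 0, 1, 0, 1, 1]
groups = ["group A"] * 4 + ["group B"] * 4

for group, acc in accuracy_by_group(preds, truth, groups).items():
    print(f"{group}: {acc:.0%} accurate")  # group A: 75%, group B: 50%
```

A single overall accuracy figure would average these two numbers together; only the per-group breakdown reveals which users the system fails.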


Some investors, such as Kapor Capital, have supported efforts to develop a diverse tech workforce that addresses social interests, not solely commercial ones. Additionally, a coalition of philanthropic foundations, think tanks, universities, and community colleges is investing heavily in public-interest technology. One aspect of the coalition’s work is to recruit more Black people into the tech sector and to include historically Black colleges and universities, such as Prairie View A&M University and Howard University, in these efforts. And civil-society organizations such as Black & Brown Founders have joined forces with tech investors to diversify who starts tech businesses.

The digital tech sector can and should embrace intentional hiring practices, contractual obligations, and new standards for itself—as well as heed calls for change from consumers. A broader, more diverse range of participants at all levels of the system—e.g., standards bodies, regulators, policy makers, and international organizations—will ensure that decisions made about the future of technology reflect the interests, needs, and input of all stakeholders.

3. How can ethics and transparency enhance digital technology’s ability to serve society?

For digital technology to serve society, it must be driven by clear ethical codes and norms that are grounded in shared social values. As Gene Kimmelman, former senior advisor to the US Department of Justice and former president of open-internet champion Public Knowledge, once told me, “We are constantly trying to adapt market practices and regulations to fit the new technology into old norms and rules (e.g., crypto, fintech), instead of addressing whether the new technology has such profound ethical implications that we must first address whether such technology should be used at all. We simply have no ‘nuclear freeze’ or circuit breaker available to turn this process around.”

From biomedicine, genetics, and health care to agriculture and genetically modified foods, most novel discoveries of the 19th and 20th centuries are bound by an ethical framework. Scholars have debated the ethics of nuclear energy for decades. The potential for nuclear energy to reverse the impact of climate change has stirred an entirely new dialogue over whether a “morally acceptable” level of nuclear-energy production exists. When it comes to a moral code, digital technology should not get a pass.

Profile of a head with a figure building the top half of the skull with lines (Illustration by Brian Stauffer)

Biases in artificial intelligence and the capacity for Gen AI to evolve in unpredictable ways underscore the need for an ethical framework to guide digital systems. Algorithms inform, support, and govern large swaths of today’s society, giving technology an outsize economic and social impact. For instance, judges may use recidivism-risk scores produced by algorithms trained on decades of criminal records to inform bail decisions, mortgage lenders can base interest rates on default risks predicted by algorithms, and public social services may draw on algorithmic support to make decisions about financial aid.4 A digital technology system that fosters a fair and equitable society must eliminate algorithmic biases in all forms (preexisting, technical, and emergent).
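One simple probe that auditors sometimes apply to automated decisions, borrowed from employment law’s “four-fifths” rule, flags cases where one group’s selection rate falls below 80 percent of another’s. The sketch below applies it to hypothetical approval outcomes; the data and the lending framing are illustrative, and this is one coarse check, not a complete bias audit.

```python
# Illustrative "four-fifths rule" check for disparate impact in
# algorithmic approvals. All outcomes below are hypothetical.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the two groups' approval rates; < 0.8 is a common red flag."""
    return selection_rate(group_a) / selection_rate(group_b)

# Toy approval outcomes (1 = approved) for two demographic groups.
approvals_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # 30% approved
approvals_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approved

ratio = disparate_impact_ratio(approvals_a, approvals_b)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.43 -- well below 0.8
```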

Ethical frameworks are also critical to addressing novel challenges associated with other digital technologies. For example, open-source software, the foundation of the internet, operates much of our critical infrastructure—the power grids, hospitals, communication and transportation systems, phones, cars, and planes that make commerce and industry possible. Open-source software has the power to connect communities, spur innovation and collaboration, and build transparency and accountability into the system. Because open-source software removes the ability to control what others do with the original code, anyone can use, remix, or sell that code with little restriction. This very openness, though it fosters innovation, also creates risks and vulnerabilities that need to be addressed: a bad actor can use the code for malicious purposes or add code that threatens security and stability.


Decentralized autonomous organizations (DAOs)—community-led entities with no central authority that are intended to respect the interests of stakeholders outside the control of any one party—are the backbone of much of cryptocurrency and the Web 3.0 innovations happening today. As they continue to grow in popularity, ethical guidelines are important to secure public trust and guide reputation management. Addressing ethical considerations related to voice and biometric technologies (e.g., consent, data usage, and potential biases) is crucial to avoid misuse or discrimination. Ethical guidelines can also help to make sure that encryption—vital for protecting data—does not hinder legitimate access by law enforcement.

Some tech companies employ in-house ethicists and human-centered designers. This trend is encouraging, and these companies should be applauded for their approach. But these employees must be incentivized to be honest in their assessments and empowered to reckon with potential harms. Most providers still operate using a narrow product lens, rather than a broader frame about a given technology’s real-world effects. To encourage responsible tech workers to ask hard questions, consider the implications of their products in advance, and course-correct where needed, Omidyar Network, alongside many contributors, built the Ethical Explorer Pack. Featuring a series of tools and resources to change internal practices and lessons learned from other companies’ experiences, the online guide is designed to help designers, engineers, product managers, founders, and others integrate ethical values into their products.

Government can help by making procurement opportunities contingent on trustworthy and ethical norms and behavior that will lead to better outcomes. Civil-society organizations such as the Trust & Safety Professional Association, the Integrity Institute, Whistleblower Aid, Coworker.org, and the Algorithmic Justice League, along with professional bodies like the Institute of Electrical and Electronics Engineers, also have an important role to play in informing new ethical standards. These ethical frameworks should account for the indirect impact digital technologies have on individuals and communities (e.g., automation and AI replacing workers, data centers and crypto adversely impacting the environment, and the sharing and selling of personal data eroding privacy and trust).

Consumers also have a part to play in defining and demanding a stronger ethical code. And as digital natives lead the way, we must begin early in the classroom, teaching children about the need for ethical considerations and normative choices that direct digital technology to support an ideal society. Over time, this improved understanding has the potential to spur widespread demand for a dramatic shift in digital technology governance.

Better transparency is essential for the widespread adoption of ethical norms. Other industries, including fashion and food, offer models for responding to demands for greater transparency. Amid growing concerns about the environmental and social impacts of the fast-fashion industry, many brands now disclose information about their supply chains, production processes, and sourcing practices. The food industry has also made efforts to improve transparency, providing information about where ingredients come from and about environmental and social implications throughout the supply chain. Consumers can hold digital technology companies to similar scrutiny by demanding information about how their personal data are collected, used, stored, and shared. Consumer Reports created Permission Slip to help consumers understand and control the data that companies collect. The app provides information about companies’ data practices and allows users to send requests to companies to delete or stop selling their personal data.

Popular debate about Gen AI has turned to transparency and audits as possible remedies for potential social harms. But large language model (LLM) developers are resistant and say it is too hard to share how their LLMs make decisions. Their claims are not credible. After all, technologies that allow private companies to share data with government already exist. The US Securities and Exchange Commission does this with financial data through EDGAR, a portal that allows anyone to access and download (for free) companies’ registration statements, periodic reports, and other forms. And nonprofits such as OpenMined are also building out the technical infrastructure to enable full transparency. But to realize this norm at scale, society must demand it.
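EDGAR’s openness is easy to demonstrate in a few lines. The sketch below pulls a company’s recent filing history from the SEC’s free JSON endpoint at data.sec.gov; the CIK shown (Apple’s) and the contact string are illustrative placeholders, and the SEC asks callers to identify themselves in the User-Agent header.

```python
# A minimal sketch of pulling a company's public filing history from
# EDGAR's free JSON endpoint. The CIK (Apple's) is illustrative, and
# the SEC asks callers to identify themselves in the User-Agent header.
import requests

CIK = "0000320193"  # ten-digit, zero-padded Central Index Key
url = f"https://data.sec.gov/submissions/CIK{CIK}.json"
headers = {"User-Agent": "example-researcher contact@example.com"}

resp = requests.get(url, headers=headers, timeout=30)
resp.raise_for_status()
filings = resp.json()["filings"]["recent"]

# Print the five most recent form types and filing dates.
for form, date in list(zip(filings["form"], filings["filingDate"]))[:5]:
    print(form, date)
```

That an ordinary researcher can audit a public company’s disclosures this easily is precisely the norm that has yet to reach the AI industry.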

Transparency is necessary for many technical issues, such as algorithms, data, and privacy, as well as corporate and labor practices, including human rights; manufacturing; procurement; hiring and diversity, equity, and inclusion considerations; and harms and violations. Improving transparency also requires increased use of open-source code, greater interoperability, and new protocols that will inherently drive knowledge-sharing across actors (including potentially creating systems that will enable people to see where their data are being sold or shared). Making more data—stripped of personally identifiable information—available to qualified researchers across academia, the media, civil society, and government agencies will bolster understanding of current trends, inform future action, protect the public interest, and hold responsible parties accountable.

In 2022, Omidyar Network grantee Demos, a progressive public-policy think tank, published The Open Road, a seminal report on creating sustainable open-source systems. “More openness means more innovation,” the study concluded. “More transparency means more scrutiny, which means fewer overlooked security vulnerabilities. Openness favors the development of ‘good technology,’ which embeds privacy, security, and other protections in its design.”5  Openness illuminates shortcomings in code and design, leading to more robust applications and solutions. In short, openness boosts innovation and can contribute to a digital tech system that favors equity and fairness by creating checks and balances for consumers.

4. How can policy guide a reimagined digital technology system?

Policy makers elected to preserve the country’s democracy and safeguard the well-being of their constituents can help guide the transition to a more responsible digital tech system. Currently, too many policy makers are financially beholden to or overly influenced by tech lobbying efforts. The five biggest tech firms—Apple, Amazon, Microsoft, Alphabet, and Meta—spend roughly $69 million per year on lobbying in the United States.

A lack of meaningful competition policy has produced a world in which the big five tech firms had a combined market cap of almost $8.5 trillion as of August 2022, larger than the gross domestic product of Germany or Japan. The result is a dangerous, unchecked concentration of corporate power that limits innovation, hinders policy makers from holding digital tech accountable to the needs of society, and removes companies’ incentives to support the nation’s values.6

The answer is not to pit innovation against regulation. A digital tech system that supports the ideals of a democratic society needs both. And regulation is not necessarily anathema to growth or innovation. For example, banking is one of the most heavily regulated sectors, yet fintech has managed to follow the rules while being among the fastest-growing and largest categories of venture capital (VC) investment. Biomedicine is another heavily regulated sector, yet it took less than nine months to develop and roll out an entirely new class of lifesaving mRNA COVID vaccines. With better incentives and regulation, digital tech companies can unleash innovation in business models, products, and competitive features that foster and advance the common good.

As federal agencies, Congress, and the White House all scramble to determine the best regulatory approach, where such a governing body should be housed and how it should be structured remains unclear. Continued examination of the complex and overlapping issues may lead to stricter mandates and clarified authority for existing agencies, such as the Federal Trade Commission, and even perhaps to the creation of new institutions with new mandates and capabilities. At the United Nations AI for Good Summit in July, Gary Marcus, Karen Bakker, and Anka Reuel—researchers who are focused on various aspects of AI’s impact on society—introduced the Center for the Advancement of Trustworthy AI (CATAI), a new initiative on AI governance. (Omidyar Network provides financial support to this effort.) By producing basic and applied research on new, more trustworthy forms of AI, CATAI aims to inform and develop new global AI governance models.


When considering regulatory interventions, policy makers must be able to evaluate a technology’s systemic importance, scale, maturity, and potential real-world harms. They may have to adapt or revisit prior regulatory frameworks, or adopt new theories and frameworks to account, for instance, for business models that have no explicit consumer pricing. Take, for example, Facebook or Google search, which give away their products for “free” but should still be held accountable for harms to consumers and for market concentration.

To be fair, policy makers are already taking important steps. In recent months, the federal government has increased efforts to rein in digital tech, and many such measures aim to support a healthier and more vibrant society. Several bills in the House and Senate are aimed at making digital technology safer for children, including measures intended to reduce risks associated with social media platforms, such as cyberbullying and targeted advertising. A bill that passed through the House Committee on Energy and Commerce last year aimed to protect consumer privacy by putting stronger guardrails around data collection. Although the Senate failed to take up the measure before the end of the Congress, it had strong bipartisan support. And in May, the Biden administration took what it called “actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.” These actions included providing $140 million in funding to launch seven new national AI research institutes intended to encourage collaboration across institutions of higher education, federal agencies, industry, and others to ensure that advancements in AI are “ethical, trustworthy, responsible, and serve the public good.”

States are also belatedly stepping up action on digital tech. Last year, to safeguard children in the state, California enacted the California Age-Appropriate Design Code Act. The measure requires online platforms to consider the best interests of child users and to default to privacy and safety settings that protect children’s mental and physical health and well-being. Many other states are trying to follow suit. With more than 600 million children online, tech makers must design their products with children’s safety and privacy in mind, while policy makers at all levels enact policies that ensure accountability.

Additionally, nearly 20 state legislatures have introduced comprehensive consumer-privacy legislation. Most of these bills would empower consumers to access, delete, or correct their information online and either allow consumers to opt out of sales pitches and targeted advertising or require opt-in consent to process their sensitive information. 

While technology is constantly changing and evolving, our rules and regulations must anticipate what’s coming, instead of playing catch-up.

5. What financial models will incentivize a healthy digital technology system?

Major technological revolutions usually come with their own accompanying financial revolutions. Digital technology is no different. Venture capital, with its new breed of investors, systems, and incentives to develop and advance digital technology, is well established. It has fostered a culture and engine of innovation and investment that anchor and drive the digital technology sector. In 2022, venture capital funds (VCs) invested $1.37 billion in 78 Gen AI deals—almost as much as they invested in Gen AI in the previous five years combined.7


But the VC revolution has its downsides. The current financing model and culture prioritize growth at all costs to satisfy shareholders who expect immediate returns. As former venture capitalist Evan Armstrong notes, “We have now reached a point in the start-up ecosystem where for large VC funds, a start-up achieving a billion-dollar outcome is meaningless. To hit a 3-5x return for a fund, a venture partnership is looking to partner with start-ups that can go public at north of $50 billion. … In the entire universe of public technology companies, there are only 48 public tech companies that are valued at over $50 billion.”8 As a result, entrepreneurs are often forced to take bigger and bigger risks to get their products to a dominant position in the marketplace.
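Armstrong’s arithmetic is easy to verify with back-of-the-envelope fund math. Every number in the sketch below is an illustrative assumption, not a figure from any actual fund or from this article.

```python
# Back-of-the-envelope fund math behind Armstrong's point. Every input
# below is an illustrative assumption, not a figure from the article.
fund_size = 2_000_000_000      # a hypothetical $2B venture fund
target_multiple = 4            # aiming inside the 3-5x range
ownership_at_exit = 0.08       # post-dilution stake, assumed

required_return = fund_size * target_multiple  # $8B owed to the fund
# If a single breakout company must return that entire amount:
required_exit = required_return / ownership_at_exit
print(f"Needed exit valuation: ${required_exit / 1e9:.0f}B")  # $100B
```

Under these assumptions, a billion-dollar exit returns just 1 percent of the fund’s target, which is why only outcomes in the tens of billions move the needle.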

Shareholder primacy has left VCs with no incentive to consider potential social consequences. Moreover, the prevailing VC model puts a premium on acquiring users to fuel the growth that will make the investment pay off. Investors are willing to subsidize losses to undermine competitors that finance their growth out of operating revenues and profits. This model makes VC-backed companies more accountable to their investors than they are to users, communities, workers, markets, and society at large.9 New private financing models with longer horizons, which take the pressure off turning an immediate profit and consider factors that go beyond the bottom line, are urgently needed. Limited partners of VCs—several of which already represent broader public interests, such as worker pension funds, university endowments, and sovereign wealth funds—can and should use their significant leverage to encourage VCs to take more responsible approaches.

New revenue models, ownership structures, and approaches to allocating returns and dividends are beginning to emerge in financing, but they remain notable exceptions, far from the norm. Founded by venture capitalist Bryce Roberts, Indie.vc—whose initial backers included Omidyar Network—attempted a new approach. Rather than providing large amounts of seed funding to help a founder get an idea off the ground, Indie.vc made smaller investments in promising, already established start-ups—including several from overlooked geographies and demographics—without taking an initial stake in the company. The intent was to allow founders who had already launched to focus on growing their businesses, rather than fixating on turning a profit for their investors. Ultimately, Indie.vc failed to attract the institutional support it needed to scale. In announcing the firm’s closure on Medium, Roberts wrote, “As we’ve sought to lean more aggressively into scaling our investments and ideas behind an ‘Indie Economy,’ we’ve not found that same level of enthusiasm from the institutional LP market.” Remaining optimistic, he also noted, “I have no doubt that in 4-5 years we’ll see our Indie companies posting comparable results as our previous funds that have generated 5x+ net multiples for our LPs.”

Establishing more patient funding models will support technologists who embody the values needed to improve the digital tech system, such as safeguarding rights, promoting justice, and building tech for social good.

A Better World Is Possible

Digital technology—and now Gen AI—may be unique in the history of technological advances. It has grown rapidly and pervades all of society. A litany of basic social functions depends on it. It has its own self-learning capacities. Its inner workings and complexity now evade mass understanding. These attributes, along with anxieties about existential risks, contribute to a feeling of inevitability that nothing can be done to alter its path.

We must counter this narrative. Americans can steer, shape, and govern digital technology in service of a democratic society. To succeed, we must stop measuring success simply by the speed and scale of digital tech advances and instead prioritize how technology can help drive a positive vision for society.
