United States Securities and Exchange Commission
Washington, D.C. 20549
NOTICE OF EXEMPT SOLICITATION
Pursuant to Rule 14a-103
Name of the Registrant: Alphabet Inc.
Name of persons relying on exemption: National Legal and Policy Center
Address of persons relying on exemption: 107 Park Washington Court, Falls Church, VA 22046
Written materials are submitted pursuant to Rule 14a-6(g)(1) promulgated under the Securities Exchange Act of 1934. Filer of this notice does not beneficially own more than $5 million of securities in the Registrant company. Submission is not required of this filer under the terms of the Rule but is made voluntarily in the interest of public disclosure and consideration of these important issues.
PROXY MEMORANDUM
TO: Shareholders of Alphabet Inc.
RE: The case to vote FOR Proposal Number 11 on the 2025 Proxy Ballot (“Stockholder Proposal Regarding a Report on AI Data Usage Oversight”)
This is not a solicitation of authority to vote your proxy. Please DO NOT send us your proxy card; National Legal and Policy Center is not able to vote your proxies, nor does this communication contemplate such an event. NLPC urges shareholders to vote for Proposal Number 11 following the instructions provided on management's proxy mailing.
The following information should not be construed as investment advice.
Photo credits follow at the end of the report.
National Legal and Policy Center (“NLPC”) urges shareholders to vote FOR Proposal Number 11,1 which it sponsors, on the 2025 proxy ballot of Alphabet Inc. (“Alphabet” or the “Company”). The “Resolved” clause of the proposal states:
Shareholders request the Company to prepare a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, which assesses the risks to the Company’s operations and finances, and to public welfare, presented by the real or potential unethical or improper usage of external data in the development, training, and deployment of its artificial intelligence offerings; what steps the Company takes to mitigate those risks; and how it measures the effectiveness of such efforts.
Introduction
Artificial intelligence (AI) is one of the most transformative innovations in modern economic history – reshaping industries, revolutionizing business practices, and influencing how individuals and governments engage with technology. AI’s potential to improve everything from healthcare to financial services is undeniable – as are its risks. Alphabet, with its substantial AI presence, stands at a pivotal juncture where adopting strong privacy-centered policies could set it apart as a trusted leader.
Data is the lifeblood of artificial intelligence. Machine learning models require massive datasets to learn, adapt, and improve their performance over time. However, this hunger for data drives developers to seek out large quantities of information from the internet and other digital sources, some of which may not be obtained ethically or legally. AI models may incorporate data on human behavior, speech, images, and other sensitive content, making their development and deployment a privacy concern.
As AI matures, so does the public’s awareness of AI data ethics. Consumers, regulators, and governments increasingly ask tough questions about where AI developers obtain the data used to train their models. Data scraping, unauthorized data collection, and the use of proprietary or copyrighted content without permission have become focal points in the debate over AI ethics. Without proper oversight, AI development may violate data privacy laws, infringe on intellectual property rights, or utilize personal information without consent.
The report requested in the Proposal would increase shareholder value by increasing disclosure of Alphabet’s strategy for ethical usage of user data in AI development. This report seeks to encourage Alphabet to adopt a pro-privacy stance, which may provide the Company a strong competitive advantage against its AI competitors.
1 Alphabet. “Notice of 2025 Annual Meeting of Stockholders and Proxy Statement.” See https://abc.xyz/assets/7b/19/1cfce14d4a09a8aa9ad8580219b1/pro012701-1-alphabet-courtesy-edgar.pdf
Privacy and Ethical Challenges Facing Alphabet in AI Development
Alphabet is a leading player in the AI space as one of the largest technology companies in the world. The Company’s position provides a platform to define expectations for responsible AI development.
Alphabet’s golden goose is advertising, driven largely by Google,2 which dominates the search engine market and provides nearly three-quarters of the Company’s total revenue.3 4 The Company’s plan (and justification for the name change from Google to Alphabet) was to invest the substantial free cash flow generated by Google search into other technology ventures and shift its business model from primarily search to a broader portfolio company.
Generative AI now threatens to replace search engines, and investors are eager to see how Alphabet will respond. Google search also provides Alphabet with one of the most valuable proprietary datasets in the world,5 which should theoretically give the Company an edge in AI development. However, markets believe that Alphabet has fallen behind its major competitors – Microsoft, OpenAI, Meta, and Anthropic6 – in the AI arms race.7
Alphabet has long had a shaky reputation for data privacy and ethics. Investors and consumers alike should be concerned that the Company will further sacrifice user trust in an effort to catch up to its competitors – a shortsighted move that would further damage Alphabet’s standing and put the Company in serious danger of being left behind as the world moves away from search engines.
Further, Alphabet’s lack of transparency presents a significant risk, as it diminishes accountability and reduces external pressure to maintain ethical data standards. Shareholders should therefore demand increased visibility into Alphabet’s data practices – particularly how it acquires, vets, and deploys external information to train Gemini and other future models – to ensure that the Company’s AI deployments adhere to robust ethical standards, thus safeguarding
2 Statista. “Market share of leading search engines worldwide from January 2015 to March 2025.” See https://www.statista.com/statistics/1381664/worldwide-all-devices-market-share-of-search-engines/
3 Tremayne-Pengelly, Alexandra. “Google Faces Losing Temu and Shein Ad Revenue Due to China Tariffs,” Observer, April 25, 2025. See https://observer.com/2025/04/google-ad-revenue-china-tariffs/
4 Gallagher, Dan. “Google’s Earnings Power Holds Up in Global Turbulence,” Wall Street Journal, April 24, 2025. See https://www.wsj.com/business/earnings/google-earnings-alphabet-q1-2025-googl-stock-210b34b6
5 Trefis Team. “Google Stock to $500?” Forbes, March 24, 2025. See https://www.forbes.com/sites/greatspeculations/2025/03/24/could-google-stock-triple/
6 McKinsey & Company. “What is generative AI?” April 2, 2024. See https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
7 Vlastelica, Ryan. “Alphabet Can’t Shake Off AI Concerns Even With Low Multiple,” Bloomberg, April 10, 2025. See https://www.bloomberg.com/news/articles/2025-04-10/alphabet-can-t-shake-off-ai-concerns-even-with-low-multiple
against hidden liabilities that could jeopardize Alphabet’s long‑term reputation and financial stability.
Alphabet’s size, scope, and influence – the Company is one of the largest in the world by market capitalization,8 revenue,9 and headcount10 – invite distrust. Public scrutiny is further amplified by Alphabet’s relationships with other power players in the industry, as well as the federal government.
For example, consider Alphabet’s troubling history of mishandling consumer data. As far back as 2010, Google’s Street View cars secretly captured fragments of e‑mails, passwords, and other payload data from private Wi‑Fi networks, prompting regulators around the world to investigate and sanction the Company.11 12 In 2018, software bugs in the Google+ API exposed profile information for up to 500,000 users13 (and later an additional 52 million),14 ultimately forcing Alphabet to shutter the platform entirely. Regulators have also accused the Company of misleading users about its “Location History” settings; in 2022 Alphabet agreed to a record‑breaking $391.5 million multi-state settlement after investigators found it continued to track users even when they believed location services were disabled.15 In 2025, Google agreed to pay the state of Texas another $1.375 billion to settle privacy-violation claims.16 Google also quietly revised its privacy policy to state that “publicly available online information” – including photos, videos, and text – may be ingested by Bard/Gemini and other AI models without explicit consent,17 a change that privacy advocates warn could sweep up sensitive attributes such as political beliefs or health status. That same pattern of overreach led France’s CNIL to impose the GDPR’s first major penalty – a €50 million fine – on Google for inadequate transparency and consent in its personalized‑ads ecosystem,18 underscoring persistent weaknesses in Alphabet’s
8 Companies Market Cap. “Largest Companies by Marketcap.” See https://companiesmarketcap.com/
9 Companies Market Cap. “Companies ranked by revenue.” See https://companiesmarketcap.com/largest-companies-by-revenue/
10 Companies Market Cap. “Companies ranked by number of employees.” See https://companiesmarketcap.com/largest-companies-by-number-of-employees/
11 Guynn, Jessica; Sarno, David. “Google Street View privacy scandal broadens,” Los Angeles Times, May 1, 2012. See https://www.latimes.com/business/la-xpm-2012-may-01-la-fi-google-street-view-20120502-story.html
12 Halliday, Josh. “Google faces new Street View data controversy,” The Guardian, July 27, 2012. See https://www.theguardian.com/technology/2012/jul/27/google-street-view-controversy
13 Kosoff, Maya. “The Incredible Unpopularity of Google+ May Be Its Saving Grace,” Vanity Fair, October 8, 2018. See https://www.vanityfair.com/news/2018/10/google-finally-kills-off-google-plus-after-masking-a-security-breach
14 Newman, Lily Hay. “A New Google+ Blunder Exposed Data From 52.5 Million Users,” Wired, December 10, 2018. See https://www.wired.com/story/google-plus-bug-52-million-users-data-exposed/
15 Collins, Dave; Gordon, Marcy; The Associated Press. “Google settles with 40 states for $391.5 million over claims it tracked user locations,” Fortune, November 14, 2022. See https://fortune.com/2022/11/14/google-settles-with-40-states-391-million-location-data-tracking-privacy/
16 Peters, Jay. “Google will pay a $1.375 billion settlement to Texas over privacy violations,” The Verge, May 9, 2025. See https://www.theverge.com/news/664663/google-texas-settlement-1-billion-data-privacy-violations
17 Weatherbed, Jess. “Google confirms it’s training Bard on scraped web data, too,” The Verge, July 5, 2023. See https://www.theverge.com/2023/7/5/23784257/google-ai-bard-privacy-policy-train-web-scraping
18 Auvieux, Cecile; Evans, Marcus; White, Lara. “First multi-million Euro GDPR fine: Google LLC fined €50 million under GDPR for transparency and consent infringements in relation to use of personal data for personalized ads,” Norton Rose Fulbright, January 25, 2019. See https://www.dataprotectionreport.com/2019/01/first-multi-million-euro-gdpr-fine-google-llc-fined-e50-million-under-gdpr-for-transparency-and-consent-infringements-in-relation-to-use-of-personal-data-for-personalised-ads-2/
data‑governance culture. These are just a few of Alphabet’s many privacy breaches. Several others are covered in the Proposal.
Meanwhile, Alphabet’s approach to data ethics in AI raises significant concerns. The Company advertises that its Gemini models are trained on data harvested from Google’s own products and services, such as YouTube uploads.19 Repurposing information that users originally entrusted to discrete services – frequently under terms that did not contemplate wholesale AI deployment – blurs the line between permissible processing and exploitation. Embedding Gemini‑powered features like “AI Overviews” in Search, “Help me write” in Gmail, and the Gemini side‑panel across Google Workspace amplifies the risk that sensitive search queries, private e‑mails, or proprietary business documents could be ingested by Alphabet’s models or inadvertently surfaced to unintended audiences. Together, these practices cast doubt on Alphabet’s commitment to ethical data stewardship and underscore the urgent need for greater transparency and oversight of its AI data practices.
According to Google’s own Transparency Report, the Company routinely receives tens of thousands of government requests for user information every six-month reporting period and produces data in the majority of cases.20 Beyond routine legal processes, congressional investigations into the Cybersecurity and Infrastructure Security Agency (CISA) reveal “switchboarding” portals through which CISA, the FBI, and election officials could flag Google search results or YouTube videos for rapid removal, effectively deputizing Alphabet to police speech on the government’s behalf.21 Litigation such as Murthy v. Missouri further documents direct White House pressure on Alphabet to suppress content deemed “misinformation,” underscoring how governmental influence can shape the Company’s moderation and ranking algorithms.22
These substantial interactions risk intertwining Alphabet’s commercial objectives with federal priorities, creating powerful incentives – or pressures – to adapt its AI models and data‑collection practices to suit governmental aims. As policymakers intensify efforts to steer digital information flows, especially through generative‑AI tools capable of rewriting, ranking, or quietly demoting content, Alphabet’s immense troves of user data and its AI models could be leveraged to surveil citizens or amplify preferred narratives. Shareholders should therefore demand strict oversight of every government data request and clear firewalls between Alphabet’s AI roadmap and external political actors to prevent mission‑creep that could erode consumer trust and, ultimately, Alphabet’s market valuation.
19 Fried, Ina. “Axios AI+,” Axios, November 18, 2024. See https://www.axios.com/newsletters/axios-ai-plus-26b74fd0-a39f-11ef-9cea-1581abda4f04
20 Google. “Google Transparency Report.” See https://transparencyreport.google.com/user-data/overview
21 House Judiciary Committee. “The Weaponization of CISA: How a ‘Cybersecurity’ Agency Colluded With Big Tech and ‘Disinformation’ Partners to Censor Americans,” June 26, 2023. See https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/evo-media-document/cisa-staff-report6-26-23.pdf
22 Supreme Court of the United States. “Murthy, Surgeon General, et al. v. Missouri et al.” October Term, 2023. See https://www.supremecourt.gov/opinions/23pdf/23-411_3dq3.pdf
In response, citizens and consumers have begun to demand increased protections for data privacy. At its core, the debate centers around who truly “owns” the data generated by users—be it personal information, behavioral patterns, or digital content—and what rights individuals have over how their data is used, stored, or shared.23 These evolving expectations have created new challenges for companies like Alphabet, especially as they collect vast amounts of data to train and refine artificial intelligence (AI) systems.
The European Union has emerged as a global leader in the push for stronger data rights through the General Data Protection Regulation (GDPR), which came into effect in 2018.24 GDPR represents one of the most comprehensive data privacy laws globally, fundamentally changing how companies collect, process, and store personal data for EU citizens. It grants individuals greater control over their data, including the right to access, correct, or delete their information, as well as the right to be informed about how their data is used. GDPR enforces strict penalties for non-compliance, with fines reaching up to 4% of a company’s global annual revenue, creating a powerful incentive for companies to adhere to the principles of transparency, accountability, and user control. For companies like Alphabet, which operates on a global scale, GDPR has raised the stakes of data ethics.
In the United States, data privacy laws have traditionally been less stringent than those in the EU, with no comprehensive federal data privacy law akin to GDPR. However, this landscape is changing as states begin to adopt their own data protection regulations, reflecting a growing recognition of the need for privacy protections. California, for example, enacted the California
23 Evans-Greenwood, Peter. Sanders, Deen. Hanson, Rob. “A new narrative for digital data,” Deloitte Insights, March 22, 2023. See https://www2.deloitte.com/us/en/insights/topics/digital-transformation/data-ownership-protection-privacy-issues.html
24 Intersoft Consulting. “General Data Protection Regulation.” See https://gdpr-info.eu/
Consumer Privacy Act (CCPA), which took effect in 2020,25 giving residents similar rights to those under GDPR, such as the right to know what personal information is being collected, the right to delete that information, and the right to opt out of its sale.
The movement for data privacy is gaining momentum in other states as well, creating a patchwork of state-level regulations that large corporations like Alphabet must navigate. These new expectations around data privacy indicate a shift in public attitudes toward data ownership, with Americans increasingly demanding the right to control their digital information.
By continuing its current practices, Alphabet risks becoming entangled in lawsuits and regulatory actions that could erode shareholder value and harm its reputation. Additionally, as consumers become more privacy-conscious, they may choose to support companies that demonstrate a genuine commitment to respecting data rights.
Increasing Shareholder Value and Building Competitive Advantage Through Privacy Leadership
Consumers have consistently expressed concern about the lack of control they have over their personal data.26 McKinsey & Company has argued that companies that prioritize data privacy will build a competitive advantage over competitors that do not:27
As consumers become more careful about sharing data, and regulators step up privacy requirements, leading companies are learning that data protection and privacy can create a business advantage.
Given the low overall levels of trust, it is not surprising that consumers often want to restrict the types of data that they share with businesses. Consumers have greater control over their personal information as a result of the many privacy tools now available, including web browsers with built-in cookie blockers, ad-blocking software (used on more than 600 million devices around the world), and incognito browsers (used by more than 40 percent of internet users globally). However, if a product or service offering—for example, healthcare or money management—is critically important to consumers, many are willing to set aside their privacy concerns.
Consumers are not willing to share data for transactions they view as less important. They may even “vote with their feet” and walk away from doing business with
25 State of California Department of Justice. “California Consumer Privacy Act (CCPA),” March 13, 2024. See https://oag.ca.gov/privacy/ccpa
26 Anderson, Monica; Auxier, Brooke; Kumar, Madhu; Perrin, Andrew; Rainie, Lee; Turner, Erica. “Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information,” Pew Research Center, November 15, 2019. See https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
27 Anant, Venky; Donchak, Lisa; Kaplan, James; Soller, Henning. “The consumer-data opportunity and the privacy imperative,” McKinsey & Co., April 27, 2020. See https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-consumer-data-opportunity-and-the-privacy-imperative
companies whose data-privacy practices they don’t trust, don’t agree with, or don’t understand.
The authors add:
Our research revealed that our sample of consumers simply do not trust companies to handle their data and protect their privacy. Companies can therefore differentiate themselves by taking deliberate, positive measures in this domain. In our experience, consumers respond to companies that treat their personal data as carefully as they do themselves.
The report drives home that as data privacy concerns grow, consumers increasingly favor companies that prioritize ethical data handling and transparency. Companies with transparent, privacy-focused practices therefore hold a strategic advantage in a market where trust is paramount.
The shift in expectations around data ownership represents an opportunity for Alphabet to position itself as a leader in ethical AI by adopting transparent and consent-driven data practices. Such a shift would not only help Alphabet avoid legal challenges but would also build consumer trust, aligning the company with global standards that prioritize the individual’s right to control their own data.
For Alphabet, this means that transparent and privacy-respecting AI practices can foster customer loyalty and reduce churn. The financial benefits of customer retention are well-documented, as retaining an existing customer is often significantly less expensive than acquiring a new one.
Moreover, a privacy-centric approach aligns with the growing “techno-optimism” movement, which advocates for technology that empowers individuals rather than exploits them. Champions of this movement, such as venture capitalist Marc Andreessen,28 argue that technology should decentralize power, enhance transparency, and empower users. By supporting these values, Alphabet can attract a growing demographic of users who view technology as a tool for personal empowerment rather than corporate control. This alignment would not only attract consumers but also influence public perception, positioning Alphabet as a leader in ethical AI.
28 Andreessen Horowitz. “The Techno-Optimist Manifesto.” See https://a16z.com/the-techno-optimist-manifesto/
Finally, the emphasis on privacy and transparency could reduce Alphabet’s vulnerability to regulatory backlash and legal issues. With stricter data privacy regulations emerging globally, and cases like the New York Times lawsuit against OpenAI highlighting the risks of unethical data practices, Alphabet can preemptively mitigate risks by setting a high standard for transparency.
Alphabet’s competitors – including Microsoft, Apple, Anthropic, and Meta – vary significantly in their approaches to privacy, reflecting their values, business models, and strategic goals. Understanding how each company handles privacy provides insight into the broader landscape of AI ethics, transparency, and consumer trust.
Microsoft
Microsoft is a major player in AI, largely due to its partnership with OpenAI. However, this relationship has sparked concerns over unethical data practices, including allegations of data scraping and the use of personal and proprietary information without consent. These issues have led to lawsuits, such as one filed by the New York Times, and raised questions about the ethical foundations of Microsoft’s AI development.
Additionally, Microsoft’s extensive government contracts, often involving sensitive technologies, have drawn criticism for potentially aligning the company’s AI initiatives with state interests.29 30 31 32 These concerns have fueled skepticism about its commitment to privacy and independent oversight.
While Microsoft emphasizes responsible AI, its algorithms often operate as “black boxes,” offering little transparency about data use or decision-making processes. Strengthening privacy measures and aligning with global standards like GDPR could help rebuild trust, but current practices highlight a need for greater accountability.
29 Goldstein, Luke. “Defense Department Submits to Microsoft’s Profit-Taking,” The American Prospect, June 11, 2024. See https://prospect.org/power/2024-06-11-defense-department-microsofts-profit-taking/
30 Computer & Communications Industry Association. “New Study Shows Microsoft Holds 85% Market Share in U.S. Public Sector Productivity Software.” See https://ccianet.org/news/2021/09/new-study-shows-microsoft-holds-85-market-share-in-u-s-public-sector-productivity-software/
31 Biddle, Sam. “U.S. Military Makes First Confirmed OpenAI Purchase for War-Fighting Forces,” The Intercept, October 25, 2024. See https://theintercept.com/2024/10/25/africom-microsoft-openai-military/
32 Heise, Angie. “Generative AI and the Public Sector,” Microsoft. See https://wwps.microsoft.com/blog/ai-public-sector
Apple
Apple, while not solely focused on AI, has branded itself as a privacy-first company.33 Unlike Meta and Alphabet, Apple does not rely on an advertising-driven business model, allowing it to prioritize user privacy without compromising revenue. Apple’s AI-driven products, like Siri, are designed with privacy-enhancing technologies, including on-device processing, which minimizes data collection and promotes user control over personal information.
Apple’s extensive privacy features have allowed the company to sell itself as the most privacy-focused of the big tech platforms. However, it effectively outsources privacy violations to its competitors, most notably through its massive search deal with Google, which has its own history of privacy violations.34 Apple recently made a similar deal with OpenAI.35 This strategy allows Apple to maintain a facade of privacy while other companies collect Apple customers’ data.
Anthropic
Anthropic, an AI research lab founded by former OpenAI employees, has positioned itself as a company dedicated to “alignment” and AI safety. Its primary mission is to develop AI systems that are aligned with human interests, prioritizing safety and ethics over rapid deployment.36 Although Anthropic is smaller than Microsoft, Meta, or Alphabet, its focus on long-term AI safety makes it a relevant player in the privacy conversation.37
Anthropic emphasizes transparency in AI behavior and is cautious about deploying its models in commercial applications without rigorous testing. While Anthropic’s approach does not make privacy an explicit selling point, its emphasis on safety, alignment, and ethical concerns indirectly supports a privacy-conscious framework. By promoting transparency and caution in deployment, Anthropic positions itself as an organization willing to sacrifice rapid growth for responsible, user-centered AI practices.
Given that Anthropic is still relatively new, it has yet to encounter significant regulatory or public scrutiny. Its foundational principles, however, suggest a commitment to ethical practices that could offer a competitive advantage as privacy expectations evolve.
33 Leswing, Kif. “Apple is turning privacy into a business advantage, not just a marketing slogan,” CNBC, June 7, 2021. See https://www.cnbc.com/2021/06/07/apple-is-turning-privacy-into-a-business-advantage.html
34 Pierce, David. “Google reportedly pays $18 billion a year to be Apple’s default search engine,” The Verge, October 26, 2023. See https://www.theverge.com/2023/10/26/23933206/google-apple-search-deal-safari-18-billion
35 OpenAI. “OpenAI and Apple announce partnership to integrate ChatGPT into Apple experiences.” See https://openai.com/index/openai-and-apple-announce-partnership/
36 https://www.anthropic.com/
37 https://etc.cuit.columbia.edu/news/ai-community-practice-hosts-anthropic-explore-claude-ai-enterprise
Meta
Meta has historically faced scrutiny over data privacy issues, particularly regarding how user data is used to inform targeted advertising algorithms.38 39 In recent years, however, Meta has made strides toward increasing transparency in its AI research. Its release of the open-source Llama AI tool stands as a testament to its new direction, signaling a willingness to contribute to transparent and accessible AI development.40 Open-source AI models, like Llama, allow researchers and developers to examine and modify the code, increasing transparency.
However, despite this open-source shift, privacy concerns persist due to Meta’s reliance on user data for advertising revenue. Meta’s AI algorithms extensively leverage personal data to generate targeted ads,41 which raises concerns about whether the open-source commitment will extend to the company's most valuable and sensitive data-driven algorithms. The public scrutiny Meta has faced in recent years, including the Cambridge Analytica scandal,42 has also impacted trust, and although open-sourcing Llama may signal greater transparency, questions remain about whether Meta’s privacy improvements go far enough.
Conclusion
By prioritizing privacy and ethical AI, Alphabet can distinguish itself in an industry where consumer trust is critical. As regulatory pressures grow and public expectations shift toward data transparency and control, Alphabet’s commitment to responsible AI would not only safeguard its reputation but also enhance shareholder value. Embracing a privacy-first approach positions Alphabet as a leader in ethical technology, aligning it with both consumer and societal values. This strategic shift can help Alphabet gain a sustainable competitive advantage, fostering long-term growth and making a positive impact on the industry as a whole.
Thus, we urge our fellow shareholders to vote FOR Proposal Number 11 at Alphabet’s annual meeting on June 6, 2025.
38 Bhuiyan, Johana. “As Threads app thrives, experts warn of Meta’s string of privacy violations,” The Guardian, July 11, 2023. See https://www.theguardian.com/technology/2023/jul/11/threads-app-privacy-user-data-meta-policy
39 Satariano, Adam. “Meta Fined $1.3 Billion for Violating E.U. Data Privacy Rules,” New York Times, May 22, 2023. See https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html
40 Robison, Kylie. “Open-source AI must reveal its training data, per new OSI definition,” The Verge, October 28, 2024. See https://www.theverge.com/2024/10/28/24281820/open-source-initiative-definition-artificial-intelligence-meta-llama
41 Chee, Foo Yun. “Meta faces call in EU not to use personal data for AI models,” Reuters, June 6, 2024. See https://www.reuters.com/technology/meta-gets-11-eu-complaints-over-use-personal-data-train-ai-models-2024-06-06/
42 Confessore, Nicholas. “Cambridge Analytica and Facebook: The Scandal and the Fallout So Far,” New York Times, April 4, 2018. See https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
Photo credits:
Page 2: Image via MikeMacMarketing/Creative Commons
Page 3: Google headquarters – Ben Nuttall/Creative Commons
Page 6: Image via Visual Content/Creative Commons
Page 9: New York Times headquarters building – ShanMcG213/Creative Commons
THE FOREGOING INFORMATION MAY BE DISSEMINATED TO SHAREHOLDERS VIA TELEPHONE, U.S. MAIL, E-MAIL, CERTAIN WEBSITES AND CERTAIN SOCIAL MEDIA VENUES, AND SHOULD NOT BE CONSTRUED AS INVESTMENT ADVICE OR AS A SOLICITATION OF AUTHORITY TO VOTE YOUR PROXY.
THE COST OF DISSEMINATING THE FOREGOING INFORMATION TO SHAREHOLDERS IS BEING BORNE ENTIRELY BY THE FILERS.
THE INFORMATION CONTAINED HEREIN HAS BEEN PREPARED FROM SOURCES BELIEVED RELIABLE BUT IS NOT GUARANTEED BY US AS TO ITS TIMELINESS OR ACCURACY, AND IS NOT A COMPLETE SUMMARY OR STATEMENT OF ALL AVAILABLE DATA. THIS PIECE IS FOR INFORMATIONAL PURPOSES AND SHOULD NOT BE CONSTRUED AS A RESEARCH REPORT.
PROXY CARDS WILL NOT BE ACCEPTED BY US. PLEASE DO NOT SEND YOUR PROXY TO US. TO VOTE YOUR PROXY, PLEASE FOLLOW THE INSTRUCTIONS ON YOUR PROXY CARD.
For questions regarding Alphabet Inc. Proposal Number 11 – requesting the Board of Directors to produce a “Report on AI Data Usage Oversight,” submitted by National Legal and Policy Center – please contact Luke Perlot, associate director of NLPC’s Corporate Integrity Project, via email at lperlot@nlpc.org.