
Big Tech Executives Gain Influence Over U.S. Science Policy
The appointment of industry leaders to key advisory roles intensifies concerns about corporate dominance and regulatory gaps.
Today's Bluesky technology threads form a scathing tableau of tech's collision with political power, of corporate arrogance, and of growing skepticism toward AI's unchecked expansion. The day's posts underscore a recurring motif: those who hold the purse strings are increasingly writing the rules, leaving communities and watchdogs scrambling to catch up, or simply venting in the fediverse.
Tech Moguls and Political Capture: The New Science Advisory Paradigm
The announcement that Trump has appointed a dozen Big Tech executives, including Jensen Huang and Mark Zuckerberg, to the President's Council of Advisors on Science and Technology has sparked outrage. As Elizabeth Warren points out, this move replaces scientists and doctors with billionaires, signaling a pivot where "only money gets a seat at the table." Senator Chris Van Hollen amplifies the critique, suggesting that Trump's policy, including his ban on states regulating AI, is designed to let Big Tech billionaires self-regulate, disregarding public interest. This trend is not isolated; posts like Alon Pinkas's tirade against the "vacuous hubris of tech moguls" reinforce the sense that democracy is being usurped by corporate actors masquerading as public servants.
"Just a bunch of mediocre middle-aged assholes trying to cope with the fact that no amount of money will make them happy or satisfied. But instead of working it out in therapy like a normal person, they're making their emotional issues everyone else's problem." - @seminoledvm.bsky.social (4 points)
The backlash is not just rhetorical. The response to the Meta and YouTube damages verdict, a mere $3 million, illustrates the impotence of legal penalties when weighed against tech giants' profits. "That's it? That is pocket change for Meta. Consumers lose again!" sums up the prevailing sense of futility.
"This has to be the stupidest period in human history since the Middle ages. Not only dumb in terms of the asininity and moronic nature of political 'leaders' or the vacuous hubris of tech moguls, but extravagantly inane given available and accessible knowledge, science, medicine, technology." - @alonpinkas.bsky.social (69 points)
AI Regulation, Data Exploitation, and Platform Exodus
Efforts to rein in AI are looking more performative than effective. The companion legislation introduced by Bernie Sanders and Alexandria Ocasio-Cortez to halt new data center construction until Congress passes comprehensive AI regulation, as reported by TechCrunch, is met with skepticism: existing centers remain opaque about their water usage, and critics view the move as political theater. Meanwhile, the uproar over GitHub's decision to train its AI on user data after initially allowing opt-outs highlights the persistent problem of forced consent and the normalization of data exploitation.
"That's some bullshit with the forced opt-out. It should always be opt-in only. They are thieves just like all the rest that pull that shit." - @realmadmojo.bsky.social (6 points)
This unease extends to broader platform dynamics. The Flipboard Tech Desk's call to abandon Twitter in favor of fediverse alternatives resonates with users tired of centralized platforms' extractive practices. Meanwhile, Reddit's push to verify suspected automated accounts, detailed in another TechCrunch post, is interpreted by some as a thinly veiled bid to monetize user data for surveillance, especially post-IPO.
The AI Bubble Bursts: Retreat from Grandiose Promises
Signs of a tech reckoning abound. The abrupt shutdown of OpenAI's Sora model, along with its billion-dollar Disney deal, chronicled by Nitish Pahwa, exemplifies the industry's retreat from ambitious, risky ventures toward safer, revenue-generating applications. As Sora's user growth stalls and legal risks mount, OpenAI pivots to coding and clerical tasks, hinting at a deflating AI bubble and its broader economic implications.
"AI basically kills people's ability to do metacognition meaningfully." - @bairfanx.com (27 points)
The toxicity of AI in education, flagged by Greg Pak, mirrors a wider backlash against technology's unintended consequences. Citing environmental damage, labor exploitation, and the dissemination of fascist propaganda, parents and educators echo the call for a total ban on AI in schools, reinforcing the notion that tech's promise has soured and its risks are now front and center.
Journalistic duty means questioning all popular consensus. - Alex Prescott