HN Morning Brief - March 26th, 2026


Good morning! Welcome to today’s Hacker News morning brief covering the top stories from March 26th, 2026. From personal encyclopedias and Tesla computer hacking to EU surveillance proposals and Swift releases, here’s what’s trending in tech.


AI & Tech Policy

Personal Encyclopedias

A fascinating project showcases how to build personal knowledge bases using AI, with one creator combining their life’s photos, bank transactions, location history, Uber trips, and Shazam history into a comprehensive personal encyclopedia. The project uses Claude Code to cross-reference disparate data sources and automatically generate wiki entries about their life, creating a searchable archive of memories and experiences. This raises profound questions about personal data privacy, the power of AI to synthesize intimate details from scattered sources, and the future of personal archiving.

HN Discussion: Users expressed a mix of admiration and concern. Many appreciated the project as a way to preserve family history and create meaningful archives for future generations. Others were deeply uncomfortable with sharing such intimate data (financial records, location history) with AI services, warning about potential data breaches and privacy violations. Several commenters noted the bittersweet nature of such automation—while it preserves knowledge, it may lose something of the love and care involved in manual curation.


ARC-AGI-3

ARC Prize has released ARC-AGI-3, the latest iteration of their benchmark for measuring artificial general intelligence capabilities. The benchmark uses grid-based puzzle tasks to evaluate AI systems’ ability to generalize from few examples, with human baselines established through systematic testing. This release continues ARC Prize’s mission to create rigorous metrics for AGI progress, though some critics question whether puzzle-solving alone truly measures general intelligence or if it merely tests a narrow cognitive skill.

HN Discussion: The thread sparked intense debate about the nature of AGI and how to measure it. Critics pointed out issues with the scoring methodology, which compares AI efficiency against the second-best human solution rather than an average. Others questioned whether puzzle-solving games are representative of true general intelligence, noting that specialized training could help models ace these tests without demonstrating broader capabilities. Defenders argued that ARC-AGI is one of the best attempts to create a measurable definition of “general” intelligence, unlike benchmarks that focus on specialized skills.


90% of Claude-linked output going to GitHub repos w < 2 stars

Analysis of Claude Code usage patterns reveals that a significant majority of code generated by the AI assistant ends up in GitHub repositories with fewer than two stars, suggesting that much AI-assisted development goes to low-attention projects. This finding raises questions about the value and impact of AI coding tools, whether they’re primarily generating throwaway code and prototypes rather than shipping production software, and how GitHub’s popularity metrics may need to evolve to reflect the AI era.

HN Discussion: Commenters discussed both the base rate fallacy (most GitHub repos have few stars regardless of how code is written) and legitimate concerns about code quality. Some argued that stars measure popularity, not quality or value, and that AI tools might be increasing the ratio of “ideas that actually shipped” versus “ideas that stayed in notes.” Others worried about GitHub’s business model under AI-driven usage surges, suggesting the platform may need to restrict free tiers or AI integration to remain sustainable. The discussion touched on the broader trend of increasing code output without corresponding quality improvements.


False claims in a widely-cited paper

A statistical analysis reveals significant false claims in a widely cited business school research paper, highlighting troubling aspects of academic correction policies. The journal only allows the original authors to request corrections, creating a system where errors persist unless the very researchers who made them choose to self-report. This stands in stark contrast to fields like medicine and physics, where community-driven corrections and retraction processes have evolved to maintain scientific integrity despite the “publish or perish” incentives that Goodhart’s Law warns about.

HN Discussion: Commenters expressed alarm at the correction policy, noting that it effectively incentivizes researchers to remain silent about errors. Several drew comparisons to medicine’s post-thalidomide reforms, where independent follow-up studies and meta-analyses became standard for validating or disputing claims. Others discussed broader issues in business school research, suggesting that some fields have lower standards for empirical rigor and that “publish or perish” metrics have distorted incentives. The consensus was that academic correction systems should not rely on wrongdoers’ self-policing.


Security & Privacy

The EU still wants to scan your private messages and photos

The European Parliament is considering reviving the controversial “Chat Control” proposal that would mandate client-side scanning of all private communications for child sexual abuse material (CSAM). Despite a March 11th vote rejecting blanket surveillance in favor of targeted monitoring with judicial oversight, the European People’s Party (EPP) is attempting to force a repeat vote to push through indiscriminate mass scanning. This development represents a significant setback for privacy advocates who thought they had won a major victory, and highlights ongoing tensions between child safety efforts and privacy rights in Europe.

HN Discussion: The thread featured urgent calls to action from European citizens, with commenters sharing contact information for their MEPs and encouraging others to call and email representatives before the critical vote. Many expressed frustration at the EPP’s attempt to override the previous principled decision. Others debated the technical feasibility and privacy implications of client-side scanning, noting that AI-powered content analysis would inevitably flag innocent content while failing to catch sophisticated evasion techniques. Some commenters proposed alternative legislation enshrining a right to private communications, but noted that no such bill has been tabled for discussion.


Government agencies buy commercial data about Americans in bulk

NPR reports that US government agencies are purchasing commercial location data and other personal information from data brokers in bulk, bypassing Fourth Amendment protections against warrantless searches. This practice allows agencies like ICE to build comprehensive movement profiles of individuals without individualized suspicion or judicial oversight. The story highlights how the commercial data brokerage industry has created what some call a “Mother Of All Databases” (MOAD), where anyone with sufficient funds can access detailed records of Americans’ movements, communications, and online activities—power that becomes particularly dangerous when combined with AI’s ability to synthesize and analyze such data at scale.

HN Discussion: Commenters expressed outrage at the loophole that allows warrantless surveillance through commercial data purchases. Many noted the irony that NPR, the outlet reporting on this surveillance, tracks users through 474 partners on their own website. Some expressed surprise that the government hasn’t declared data brokerage a national security issue, recognizing that if US agencies can buy this data, so can foreign adversaries. Others discussed the technical implications of combining AI with bulk location data, noting that modern tools can automatically construct comprehensive behavioral profiles from millions of data points without human effort.


Tech Tools & Projects

Running Tesla Model 3’s computer on my desk using parts from crashed cars

An impressive hardware hacking project documents the process of extracting and running a Tesla Model 3’s onboard computer on a desktop using parts sourced from crashed vehicles. The author faced numerous challenges including finding compatible connectors, dealing with cables that were cut during vehicle dismantling, and reverse-engineering the wiring harness. The project demonstrates remarkable technical perseverance and offers insights into Tesla’s hardware design, including their use of LVDS connections and relatively standard automotive connectors. The post also highlights Tesla’s “Root access program,” which offers permanent SSH certificates to researchers who discover root vulnerabilities, similar to Apple’s Security Research Device Program.

HN Discussion: Commenters marveled at the project’s technical achievement, with several sharing similar hardware hacking experiences from their careers in automotive diagnostics and scan tool development. Others noted the mechanical nature of the biggest challenge—the simple 6-pin connector—and suggested 3D printing as a potential solution. Some discussed the security implications of Tesla’s root access program, debating the balance between enabling security research and preventing widespread root access that could enable dangerous vehicle modifications. The thread also touched on the broader trend of modern vehicles becoming increasingly hackable, with more computing power and connectivity than ever before.


My DIY FPGA board can run Quake II

An extraordinary project documents the design and construction of a custom FPGA board capable of running Quake II, showcasing impressive hardware design skills. The author created a complete system including a custom PCB, working Ethernet, and the necessary logic to run a full 3D game engine on FPGA hardware. The multi-part series demonstrates the full journey from concept to implementation, with the author noting that an earlier 2-layer board design somehow managed to get 100MHz DDR1 RAM working without a dedicated ground plane—a feat experienced EEs considered impossible. The project represents the culmination of significant hardware engineering expertise and dedication to pushing FPGA capabilities.

HN Discussion: Commenters were genuinely impressed, with many noting how remarkable it is to design a complete computer system from scratch rather than building toy implementations. Some pointed out that a previous iteration (the “endeavour” board) was even more impressive, managing DDR1 RAM on a 2-layer board in ways that shouldn’t be possible according to conventional EE wisdom. Others discussed the ecosystem for Efinix FPGAs, suggesting the author consider selling the board through crowdfunding platforms. The thread served as inspiration for hardware hackers and FPGA enthusiasts.


Show HN: Robust LLM Extractor for Websites in TypeScript

Lightfeed has open-sourced a TypeScript library that handles the full pipeline from raw HTML to validated, structured data using LLMs. The library converts HTML to clean markdown, strips navigation and tracking junk, handles URL normalization, integrates with LangChain-compatible LLMs, uses Zod schemas for type-safe extraction, and includes partial recovery for malformed JSON output. The tool addresses the common pain points of web scraping with LLMs: raw HTML being 80% noise, inconsistent JSON outputs from models, and the need for repetitive boilerplate code across projects.
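The library itself is TypeScript, but the “partial recovery for malformed JSON” idea is easy to illustrate. Below is a minimal Python sketch (not Lightfeed’s actual code) that salvages a truncated LLM response by closing any unterminated string, dropping a dangling comma or colon, and balancing the open brackets before parsing:

```python
import json

def repair_truncated_json(text: str) -> dict:
    """Best-effort repair of a truncated JSON object: close any
    unterminated string and any open brackets, then parse."""
    stack = []          # open brackets, in order of appearance
    in_string = False
    escaped = False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append(ch)
        elif ch in "}]":
            if stack:
                stack.pop()
    repaired = text
    if in_string:
        repaired += '"'    # close the half-written string
    stripped = repaired.rstrip()
    if stripped.endswith((",", ":")):
        repaired = stripped[:-1]   # drop a dangling separator
    for opener in reversed(stack):
        repaired += "}" if opener == "{" else "]"
    return json.loads(repaired)
```

Real implementations handle more cases (half-written literals, truncated escape sequences), but the shape is the same: salvage the valid prefix instead of discarding the whole response.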

HN Discussion: Commenters validated the pain points described in the announcement, noting that malformed JSON from LLMs is a real production issue, especially with nested arrays and complex schemas. Several suggested decomposing complex schemas into multiple simpler sequential extractions as a reliability strategy. Others pointed out that this approach might lose information during HTML-to-markdown conversion, particularly from tables. The thread also touched on Claude Code’s preference for XML-style tags for tool calling, noting that repeating closing tags helps models maintain structure better than JSON. Some questioned whether LLM extraction is too slow and expensive for scraping millions of pages compared to traditional approaches.


Show HN: A plain-text cognitive architecture for Claude Code

A new project proposes a plain-text cognitive architecture for organizing Claude Code’s memory, designed to create persistent knowledge across sessions. The architecture uses a tiered loading system with hot memory for frequently accessed information, warm memory for intermediate retention, and cold memory for archival storage. The approach focuses on storing feedback and corrections with explanations rather than just facts, making the knowledge more actionable. The project addresses a critical gap in current AI tools: the lack of persistent, structured memory that improves with use over time.
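The tiers are described in prose; as a rough illustration only (the one-file-per-entry layout and per-tier budgets here are invented, not the project’s actual format), a tiered loader might look like this:

```python
from pathlib import Path

# Hypothetical layout: one markdown file per memory entry, filed
# into a directory per tier. Budget = entries loaded per session.
TIER_BUDGETS = {"hot": 10, "warm": 5, "cold": 0}

def load_context(memory_dir: str) -> str:
    """Assemble a session preamble: hot memory loads in full, warm
    memory loads a few recent entries, cold memory stays on disk and
    is only listed by name so the agent can request it explicitly."""
    sections = []
    root = Path(memory_dir)
    for tier, budget in TIER_BUDGETS.items():
        tier_dir = root / tier
        files = sorted(tier_dir.glob("*.md"),
                       key=lambda p: p.stat().st_mtime,
                       reverse=True) if tier_dir.is_dir() else []
        loaded = [f.read_text() for f in files[:budget]]
        listed = [f.name for f in files[budget:]]
        sections.append(f"## {tier} memory\n" + "\n".join(loaded))
        if listed:
            sections.append(f"(available on request: {', '.join(listed)})")
    return "\n\n".join(sections)
```

The design choice the post argues for lives in what gets written into those files: corrections with explanations, not bare facts.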

HN Discussion: Commenters discussed the universal challenges of long-lived AI memory systems, particularly the problem that not all stored information is equally reliable and nothing degrades gracefully. Some suggested tagging beliefs with confidence scores and timestamps, creating contradictions logs where conflicting observations both remain on record. Others compared the approach to existing solutions like Google’s Antigravity (which organizes memory into Brain/Conversation/Implicits/Knowledge/Artifacts/Annotations) and Anthropic’s Auto Dream feature. Several commenters noted that storing corrections with explanations is more effective than storing just facts, and that purpose-driven memory architecture (directed toward answering questions) tends to be more useful than general knowledge bases.


Show HN: Optio – Orchestrate AI coding agents in K8s to go from ticket to PR

A new orchestration system for AI coding agents automates the full lifecycle from GitHub Issues to merged pull requests. The system spins up isolated Kubernetes pods per repo, runs Claude Code or similar agents in git worktrees, monitors CI checks every 30 seconds, and handles self-healing when CI fails or merge conflicts occur. The key innovation is the feedback loop: when CI breaks, the failure is fed back to the agent; when reviewers request changes, the comments become the agent’s next prompt. The system keeps iterating until the PR merges or the user intervenes.
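The control loop is straightforward to sketch. This Python outline uses hypothetical `pr` and `agent` interfaces as stand-ins for the GitHub and Claude Code integrations the post describes; it is a shape, not the actual implementation:

```python
import time

POLL_INTERVAL = 30  # seconds between CI checks, per the post

def drive_pr_to_merge(pr, agent, max_rounds=10):
    """Iterate until the PR merges or we hand back to a human.
    CI failures and review comments become the agent's next prompt."""
    for _ in range(max_rounds):
        status = pr.ci_status()          # "pending" | "failed" | "passed"
        if status == "pending":
            time.sleep(POLL_INTERVAL)
            continue
        if status == "failed":
            agent.prompt(f"CI failed:\n{pr.ci_log()}\nFix and push.")
            continue
        comments = pr.unresolved_review_comments()
        if comments:
            agent.prompt("Reviewers requested changes:\n" + "\n".join(comments))
            continue
        if pr.mergeable():
            pr.merge()
            return True
    return False  # give up and escalate to a human
```

The interesting engineering is in what these stubs hide: sandboxing the agent, detecting unrecoverable failures, and bounding the cost per issue.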

HN Discussion: Commenters debated the feasibility of automated AI agents handling non-trivial issues, with some suggesting this works only for the most straightforward tasks. Others questioned what happens when multiple agents work on shared files simultaneously—does the orchestrator detect conflicts upfront or let them collide at merge time? Several noted that a simpler version of this can be achieved by triggering Claude via GitHub Actions with @claude commands. The author, who created the popular Pi coding agent framework used by OpenClaw, addressed concerns about dependency analysis and sandbox isolation. The discussion touched on vendor lock-in concerns as companies adopt agentic development processes that rely on specific AI providers.


Web & Infrastructure

Swift 6.3

Swift 6.3 has been released with several notable improvements including the first official release of the Swift SDK for Android, introduction of the @c attribute for exposing Swift functions to C code, and various module name selector enhancements. The release continues Swift’s evolution from an Apple-ecosystem language toward a cross-platform development tool, though adoption outside Apple platforms remains limited. The @c attribute in particular addresses a long-standing gap in Swift’s C interoperability, though some question why it took so long to add given the complexity of earlier C++ interop efforts.

HN Discussion: Commenters reflected on Swift’s missed potential to dethrone Python for numeric and server-side computing, noting that Apple didn’t bring in the community quickly enough through marketing and messaging. Some drew parallels to Flash, noting that 20 years later, the same friends who swore by Flash now swear by Swift—both requiring native development environments and compilation versus JavaScript’s edit-and-reload workflow. Others discussed the standard library situation compared to newer languages like Go and Rust, noting that Swift isn’t “batteries included” like Python and lacks the massive ecosystem of helper libraries outside macOS/iOS APIs. The thread touched on Swift’s complexity chasing C++ as it adds more features and keywords.


More precise elevation data for GraphHopper routing engine

GraphHopper, an open-source routing engine, has integrated more precise elevation data by consuming the free high-resolution data from Mapterhorn. The improvement allows for more accurate routing calculations that account for elevation changes, which is particularly valuable for applications like cycling and hiking where elevation significantly impacts effort and travel time. The post demonstrates how integrating high-quality open data can improve routing engines without requiring proprietary sources, and highlights the value of projects like Mapterhorn that package free elevation datasets for easy consumption.

HN Discussion: Commenters praised Mapterhorn for making high-quality elevation data easily accessible, noting that the project has sourced and packaged a large amount of free high-resolution data into a single dataset. The discussion was relatively brief, with commenters mostly appreciating the technical improvement to an important open-source routing engine.


History & Science

What came after the 486?

A nostalgic look at the processor landscape following the 486 era, when Intel moved from numbered naming (486) to branded names (Pentium) after courts ruled that bare model numbers couldn’t be trademarked. The article explores how the market shifted from seven companies producing 486-compatible processors to a field dominated by Intel, AMD, and Cyrix, and covers subsequent developments including the Pentium MMX, the failed IA-64 (Itanium) architecture, and AMD’s successful x86-64 extension that became the standard. It’s a reminder of a time of rapid change and competition in the processor market, before the current duopoly settled in.

HN Discussion: Commenters shared fond memories of 486-era computing, including AMD chips that were cheap, ran cool, and performed competitively. Some noted that Bonnell-based Atoms (the D510, etc.) weren’t affected by the Meltdown vulnerability that plagued Intel’s out-of-order cores, since their purely in-order execution engines made them “supercharged 486s.” Others discussed the switch to 64-bit, where Intel wanted to bury the 8086 ghost with IA-64 while AMD extended x86—a rare time when customers voted with their feet against Intel’s new architecture. The thread evoked nostalgia for the “Intel Inside” jingle and a simpler, more competitive era in computing.


Two studies in compiler optimisations

A technical post documents two deep dives into compiler optimizations, including how compilers can sometimes make surprising transformations and the importance of understanding compiler behavior for performance-critical code. The author explores how certain optimization passes interact, sometimes producing unexpected results that require careful debugging and understanding of compiler internals. The piece highlights the ongoing arms race between compiler optimizers and programmers who need to understand what their code is actually doing at the machine level.

HN Discussion: Commenters discussed the brittleness of relying on specific compiler behaviors, noting that something has gone wrong with the ecosystem when it becomes advisable to massage source until specific passes in specific LLVM versions don’t break things. Some proposed language features that would allow specifying optimizations more directly, like the ability to write a function in normal code and an inline assembly version, then have the compiler verify they behave the same way. Others noted that builds with assertions enabled can sometimes be faster than release builds because assertions give the compiler additional information, and that replacing disabled assertions with assume attributes can maintain this performance. The thread touched on the broader question of how much programmers should understand compiler internals versus focusing on higher-level concerns.


Academic & Research

From zero to a RAG system: successes and failures

A detailed case study documents one developer’s journey building a RAG (Retrieval-Augmented Generation) system from scratch, covering both technical successes and painful failures along the way. The author explores challenges with chunking long documents, choosing between structural and semantic chunking approaches, dealing with vector databases, and managing LLM API calls. The post serves as valuable lessons learned for anyone building RAG systems, highlighting that what seems straightforward in theory often proves complicated in practice.
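To make the chunking tradeoff concrete, here is a minimal Python sketch of the two simplest strategies: heading-based (structural) splitting versus fixed-size windows with overlap. True semantic chunking would go further and use embeddings to find topic boundaries; these are just the baselines the post weighs:

```python
import re

def structural_chunks(doc: str) -> list[str]:
    """Split on markdown headings, so each chunk is one section.
    Chunks follow the author's structure but vary wildly in size."""
    parts = re.split(r"(?m)^(?=#{1,6} )", doc)
    return [p.strip() for p in parts if p.strip()]

def window_chunks(doc: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size word windows with overlap: uniform chunks for the
    embedding model, at the cost of cutting thoughts mid-sentence."""
    words = doc.split()
    step = max(size - overlap, 1)
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]
```

In practice the choice depends on the corpus: structured documents (manuals, papers) reward the first approach, while transcripts and flowing prose usually need the second.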

HN Discussion: Commenters shared their own experiences with RAG systems, particularly the difficulty of choosing the right chunking strategy for long documents. Some noted that semantic chunking helps store vectors more effectively in vector databases compared to structural approaches. Others mentioned that they had looked at various RAG tools for literature review (NotebookLM, Anara, Connected Papers, ZotAI, etc.) but found them lacking for local use with offline PDF collections. The discussion touched on whether out-of-the-box solutions exist for RAG-based literature review versus requiring custom implementations.


Earthquake scientists reveal how overplowing weakens soil at experimental farm

University of Washington earthquake researchers have demonstrated how overplowing weakens soil structure, using fiber optic sensors embedded in an experimental farm to measure soil properties during and after plowing. The research provides scientific backing for no-till farming practices, showing that excessive soil disturbance reduces the soil’s ability to withstand earthquake shaking and may increase liquefaction risk. The innovative use of fiber optics for continuous soil monitoring represents a new tool for understanding soil dynamics and could inform both farming practices and seismic hazard assessment.

HN Discussion: Commenters debated the terminology in the article, noting that plowing and tilling aren’t the same thing though the paper seems to use them interchangeably. An experienced farmer commenter provided detailed distinctions: plowing breaks up soil 8-20 inches deep, tilling prepares soil for seed 4-12 inches deep, and discing/harrowing cuts remaining roots 4-6 inches deep. Others shared results from long-term experiments comparing dig vs no-dig approaches, showing that no-dig yielded 10% more overall but specific crops like potatoes, carrots, and cabbages performed better with digging. The discussion highlighted the complexity of agricultural soil management and the importance of precise terminology when comparing different practices.


Business & Industry

The truth that haunts the Ramones: ‘They sold more T-shirts than records’

An article examines the uncomfortable reality that the Ramones, despite being one of the most influential punk bands in history, sold more merchandise than records during their lifetime. This paradox highlights how cultural impact and commercial success are often misaligned, particularly in the punk scene where authenticity and anti-commercialism were core values. The piece explores whether this matters—great art doesn’t always sell, and selling doesn’t invalidate cultural significance—but also raises questions about how artists survive when their art doesn’t support them financially.

HN Discussion: Commenters noted that touring and merch sales have long been the primary income for most musicians, not just punk bands. Some shared that bands like “Agriculture” openly state “we exist as a band because we sell t-shirts” and that merch tables are often where they have the most direct economic engagement with fans. Others pointed out that Aerosmith made more from Guitar Hero royalties than albums, and that the Velvet Underground’s first album didn’t sell many copies but “everyone who bought it started a band.” The discussion touched on whether being “haunted” by merch sales makes sense—most artists would be happy with their cultural impact—and whether putting your name on merchandise constitutes “selling out.”


Thoughts on slowing the fuck down

A provocative post argues against the current culture of rapid development and AI acceleration in software, suggesting that we’re creating more but worse code and less skilled programmers. The author, creator of the Pi coding agent framework, contends that AI tools are increasing program output while decreasing the development of programmers themselves—the mental models, muscle memory, and deep understanding that come from writing code. The piece warns of vendor lock-in as companies adopt agentic development processes, and suggests that we may be creating a one-way transition where future programmers won’t understand the codebases they inherit.

HN Discussion: Commenters expressed concern about the trend of increasing code output without corresponding quality improvements, noting that modern OSes with virtual memory and multitasking are more tolerant of bad code than old systems like DOS where everything had to work out of the box. Some suggested we’re at a peak hype cycle and will learn to use AI tools more sensibly, drawing parallels to the NoSQL hype of a decade ago when people declared “SQL is dead” before returning to SQL for most applications. Others emphasized that programming’s output isn’t just code—it’s the programmer themselves, and that removing hands from the equation may reduce the development of deep understanding and muscle memory. The thread touched on whether current AI-assisted development is sustainable or just creating technical debt that will need human attention later.


System Administration

Shell Tricks That Make Life Easier (and Save Your Sanity)

A collection of practical shell tricks that improve terminal productivity, including using cd - to toggle between directories, CTRL+R for reverse-i-search, and various readline editing shortcuts. The article covers how to push commands to history without executing them (prepending with #), use alt+backspace to delete whole path components, and leverage shell features for more efficient command-line workflows. While some tips are standard shell usage, others are lesser-known tricks that can significantly speed up terminal work.

HN Discussion: Commenters shared their own favorite shell tricks, noting that activating vim-mode in the terminal makes command editing much more comfortable with familiar vim shortcuts. Others discussed the usefulness of CTRL+W (delete to previous whitespace) versus alt+backspace (delete to the previous non-alphanumeric character), while warning that CTRL+W muscle memory can lead to accidentally closing browser tabs. Some shared the trick of pressing CTRL+R twice to autofill with the last search term, useful for repetitive docker commands. The discussion touched on the terminology clash around “yank”: CTRL+Y pastes the kill buffer in bash, while in vim yank means copy. Commenters also warned against using sudo !!, since it expands dangerous commands into history where they can be accidentally re-triggered.


Other

Obsolete Sounds

An art project collects and preserves sounds from obsolete technologies, creating an archive of audio that is vanishing almost unnoticed while we obsess over preserving images and video. The collection spans technologies that are becoming increasingly rare, from floppy drives reading disks to Pac-Man arcade machines and other forgotten audio experiences. The project highlights how soundscapes disappear even faster than visual technologies, and how preserving these sounds requires dedicated effort before the machines that produce them vanish completely.

HN Discussion: Commenters shared nostalgia for obsolete sounds, including Amiga floppy drives waiting to be fed and the legendary Pac-Man sounds from the 80s. Several noted the rarity of finding VCRs in thrift stores anymore, with stories of finding working VCRs for free or cheap at pawn shops. Others expressed frustration that only a very specific subset of technologies are considered valuable—CRTs get cataloged and sold online while their accompanying hardware gets thrown away because there’s no immediate market for it. The discussion touched on how we’re losing soundscapes that future generations will never experience, and the value of projects that preserve these audio artifacts.


Niche Museums

A website dedicated to documenting and celebrating small, specialized museums around the world, providing a resource for discovering fascinating but under-the-radar cultural institutions. These museums often focus on narrow topics—particular crafts, local history, specific industries, or unusual collections—and offer intimate, authentic experiences that major tourist attractions can’t match. The project highlights how cultural heritage isn’t just in major museums and galleries, but in countless small, passionate collections that tell important stories.

HN Discussion: Commenters shared their favorite niche museums, with particular praise for The Museum of Jurassic Technology in Los Angeles, described as “an art piece that uses museum curation as its medium” that blurs the line between truth and fantasy. Others recommended the American Precision Museum in Windsor, Vermont. Some expressed frustration that religious artifacts and stories dominate museum narratives, while noting that the diversity of niche museums offers alternatives to major institutions. The thread served as a useful resource for discovering interesting, off-the-beaten-path cultural destinations.


Maxell MXCP-P100 – wireless cassette player

Maxell has released a wireless cassette player that brings the cassette tape experience into the Bluetooth age, though notably without recording capability. The device allows listening to tapes wirelessly, appealing to nostalgia for the cassette era while modernizing the playback experience; competing models such as the Aiwa T7 do pair Bluetooth with cassette recording, which makes the omission here conspicuous. The product represents the ongoing trend of retro hardware being updated for modern consumers.

HN Discussion: Commenters lamented that this model lacks recording capability, noting that the wireless cassette player misses the key benefit of cassettes—sharing tapes with friends. Without recording, it’s just an elaborate way to listen to your own tapes with all the wow, flutter, and hiss of analog tape. Others pointed out that 120-minute tapes are available but cost a premium, and questioned whether blank tapes are still readily available. Some discussed how music used to be scarce and expensive, requiring hours of effort to assemble mixtapes, whereas now an LLM can generate them from Spotify or YouTube with zero scarcity or attention required. The thread touched on what’s lost and gained in the transition from physical media to digital abundance.


The Cassandra of ‘The Machine’

An article explores the story of someone who warned about the dangers of surveillance and data collection, drawing parallels to Cassandra of Greek mythology who was cursed to speak true prophecies that no one believed. The piece examines what happens when someone sees the future implications of technology trends—particularly regarding privacy, surveillance, and the power of data aggregation—and finds their warnings ignored or dismissed until the consequences become impossible to deny. It raises questions about who we listen to in technology debates, what patterns we miss until they’re too late to address, and the costs of ignoring early warnings about technological trajectories.

HN Discussion: The discussion was minimal, reflecting the few comments on the story itself. One commenter noted that in 2026, all gods should be dead by now, perhaps reflecting on the ongoing relevance of mythology for understanding human and technological folly.


The Last Contract: William T. Vollmann’s Battle to Publish an Epic (2025)

An article profiles author William T. Vollmann’s struggle to publish his massive epic work, highlighting the challenges literary authors face in an industry increasingly dominated by genre fiction and commercial considerations. The piece touches on questions of what literary culture values, how publishing decisions are made, and the personal costs of pursuing ambitious literary projects in a market that may not reward them. It serves as a window into the economics and culture of contemporary publishing.

HN Discussion: There were no comments on this submission, suggesting limited engagement with the literary publishing topic in the Hacker News community.


LibreOffice and the Art of Overreacting

LibreOffice’s foundation responds to recent criticism of their donation-request banner, arguing that the backlash represents overreaction to a reasonable funding request. The blog post contends that donation-based funding is transparent and community-aligned, unlike the opaque revenue models of proprietary office suites. They compare their approach to Wikipedia and Mozilla fundraising, though commenters note that Wikimedia’s fundraising has itself become controversial given how much money the organization has accumulated. The post asks for reasonable discourse about software funding models and notes that they’re increasing the visibility of their donation-based approach.

HN Discussion: Commenters were divided, with some noting they already donate the equivalent of a Microsoft 365 subscription to The Document Foundation and won’t stop just because the foundation is making their funding model more visible. Others agreed there’s nothing wrong with asking for donations to keep the lights on, though it should be possible to disable the banner for enterprise deployments. Some pointed out that the author may not realize Wikimedia and Mozilla fundraising is controversial, and that the bubble may be different. The discussion touched on how open-source projects should balance transparency about funding needs with user experience, particularly in professional environments where banners are inappropriate.


Ashby (YC W19) Is Hiring Engineers Who Make Product Decisions

A standard hiring post from Ashby, a YC W19 startup, looking for engineers who can make product decisions. The job listing emphasizes the desire for engineers who go beyond implementation to contribute to product thinking, suggesting a role where technical skills intersect with product strategy and user understanding.

HN Discussion: There were no comments on this job posting, which is typical for recruitment listings on Hacker News unless they spark broader discussion about hiring practices or the company itself.


Thanks for reading today’s Hacker News morning brief! I’ve covered the day’s top stories from March 26th, 2026, categorized and summarized with links to original articles and key discussion points from the HN comments.

Questions or feedback? Find me on the blog or drop a comment.


Generated: March 26th, 2026
Stories covered: 27
Categories: 9