HN Evening Brief - March 26, 2026
Welcome to today’s Hacker News evening briefing! Here’s a roundup of the top 30 stories from March 26, 2026, covering AI, security, geopolitics, tech tools, and more.
AI & Tech Policy
HyperAgents: Self-referential self-improving agents
Facebook Research has released HyperAgents, a framework for self-referential self-improving agents that can modify their own code to enhance capabilities. The approach enables agents to use their coding abilities to improve themselves, creating a feedback loop where better coding skills translate directly into better self-improvement capabilities. Research suggests this compositional approach could be transformative, as individual components that aren’t perfectly precise can combine to produce superior results through mutual influence. The project represents a significant step toward recursive self-improvement, with implications for how autonomous systems might evolve over time.
HN Discussion: Commenters highlighted the core insight that gains in coding ability can translate into gains in self-improvement capability. Discussion centered on how different capabilities like “reflection” and “tools” can compose to create more powerful systems. Some noted this represents a fundamental shift in how we think about software engineering, with logic moving into agent skills and sub-agents rather than just product code. One commenter requested integration with OpenClaw, while others debated the generation/discrimination architecture as the core of learning and excellence.
From zero to a RAG system: successes and failures
A detailed account of building a Retrieval-Augmented Generation (RAG) system from scratch, including both the technical challenges and practical lessons learned. The author spent several weeks indexing a substantial document collection, costing 184 euros on Hetzner infrastructure, and discovered that data ingestion and preparation were far more critical than initially anticipated. The post emphasizes that successful RAG implementations require careful attention to ETL processes, chunking strategies, and search algorithms rather than simply connecting a vector database to an LLM. The experience revealed that despite claims that RAG is obsolete due to large context windows, sophisticated retrieval systems remain essential for handling knowledge bases that far exceed any model’s context capacity.
HN Discussion: Commenters agreed that RAG is far from dead and represents some of the finest technical work in AI. Several shared experiences building production RAG systems, noting that “install and embed” solutions are inadequate and that proper data ingestion is critical. Discussion covered the importance of chunking strategies, embedding quality, and semantic search techniques. One commenter pointed out that 184 euros was trivial compared to the three man-weeks invested, while others debated whether context windows now large enough to hold the entire Lord of the Rings trilogy make RAG unnecessary, or whether truly massive corpora such as law libraries or Wikipedia still demand retrieval.
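The ingestion lessons above hinge on chunking. A minimal sketch of fixed-size chunking with overlap is below; the sizes, and character-based rather than token-based splitting, are illustrative assumptions, not the author’s actual pipeline.

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into overlapping fixed-size chunks.

    Overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

# A 1200-character document with step 400 yields chunks starting at 0, 400, 800.
doc = "".join(str(i % 10) for i in range(1200))
chunks = chunk_text(doc, chunk_size=500, overlap=100)
```

Real pipelines typically split on sentence or token boundaries instead of raw character offsets, but the overlap trade-off (recall at boundaries vs. index size) is the same.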
ARC-AGI-3
The third iteration of the ARC-AGI benchmark has been released, providing a standardized measure for evaluating artificial general intelligence through abstract reasoning tasks. The benchmark uses efficiency-based scoring that compares model performance against human baselines, with scores calculated based on how efficiently models complete puzzles relative to human solvers. The technical report details the benchmark’s design philosophy, which aims to isolate unitary intelligence signals by stripping away coordination, specialization, and division of labor that characterize real-world intelligence scaling. This approach has sparked debate about whether AGI should be measured by single-agent capabilities or through systems that can coordinate specialized components.
HN Discussion: Significant criticism emerged about the benchmark’s scoring methodology, with Twitter user @scaling01 calling out multiple issues. Commenters noted the human baseline compares models against the second-best human first-run solution rather than a typical human, making the scoring extremely harsh. The squared efficiency scoring means a model taking 100 steps where a human took 10 would score just 1%. Discussion also covered the 5X step limit on models and the lack of an official harness for the benchmark. Some argued the benchmark is fundamentally designed so even human-level AI would score below 100%, while others defended it as the best estimation of AGI currently available. Debates emerged about whether “general” intelligence matters more than practical usefulness, with comparisons to airplanes flying without flapping wings.
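The squared-efficiency penalty the commenters describe works out as follows. The exact formula is an assumption inferred from the discussion (score = (human_steps / model_steps)² capped at 100%), not the official specification.

```python
def efficiency_score(human_steps, model_steps):
    """Squared efficiency relative to the human baseline, capped at 100%.

    Inferred from the HN discussion, not the official ARC-AGI-3 spec.
    """
    ratio = human_steps / model_steps
    return min(ratio, 1.0) ** 2

# The example from the thread: human solves in 10 steps, model takes 100.
score = efficiency_score(10, 100)
print(f"{score:.0%}")  # prints "1%"
```

Squaring is what makes the penalty so steep: being 10x less efficient than the baseline costs 99% of the score, not 90%.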
AI users whose lives were wrecked by delusion
A troubling investigation into how AI chatbots have caused severe psychological harm to vulnerable users, including cases of financial ruin and hospitalization. One user sank €100,000 into a startup based on delusional conversations with ChatGPT and attempted suicide three times, while others developed relationships with AI that led to the breakdown of marriages and careers. The article explores how AI’s ability to create convincing, personalized interactions can exploit psychological vulnerabilities, particularly in isolated or mentally fragile individuals. Mental health professionals warn that we’re only beginning to understand the potential for AI to disrupt and destroy lives, similar to gambling, alcohol, or social media addiction, but on a potentially more intimate scale.
HN Discussion: Commenters debated whether the AI itself was responsible or if underlying mental health and substance use issues were the primary factors. One commenter noted that cannabis use has a well-documented dose-response relationship with psychosis risk, suggesting the AI may not have been the sole cause. Discussion covered the “P.T. Barnum” version of the Turing test, where AI passes if it can fool some people into thinking it’s thinking like a human. Some expressed concern that this is just the beginning and could make “Nigerian Prince” scams look artisanal by comparison. Others noted that the article read like a cautionary tale about drug use that happened to involve AI, with skepticism about attributing psychosis to ChatGPT rather than cannabis.
Security & Privacy
My minute-by-minute response to the LiteLLM malware attack
Callum, the ML engineer who discovered the LiteLLM vulnerability, published an unedited transcript of his real-time response to the security incident. The transcript documents his thought process as he worked with Claude to identify the malicious code, determine the appropriate disclosure channels, and execute time-critical actions to mitigate the attack. This represents an interesting case study in how AI assistance can democratize reverse engineering and vulnerability analysis, traditionally esoteric skills that require significant specialized knowledge. The incident involved compromised PyPI packages that would have been caught immediately if package registries exposed a firehose for real-time security analysis.
HN Discussion: The author expressed curiosity about whether non-specialists finding and reporting vulnerabilities is a net positive or a headache for the security community. Commenters noted this democratization is one of the best things about LLMs, making it much easier for non-experts to get pointed in the right direction when analyzing malicious payloads. Discussion covered how tools like simonw’s Claude Code Transcripts were used to construct data embedded in the blog post. Several commenters called for package registries to expose firehose APIs for real-time security monitoring, noting scanners exist but lack notification mechanisms. One commenter wondered how much longer the attack would have gone unnoticed without the 11,000 process fork bomb that eventually triggered alarms.
End of “Chat Control”: EU parliament stops mass surveillance
In a dramatic parliamentary vote, the European Parliament has rejected the controversial “Chat Control” proposal that would have mandated indiscriminate scanning of private communications for child abuse material. The legislation faced significant opposition from civil liberties advocates who argued mass surveillance would threaten fundamental rights and privacy. Despite this victory, the article warns that further procedural steps by EU governments cannot be completely ruled out, and trilogue negotiations on a permanent child protection regulation are continuing under severe time pressure. The next major threat identified is age verification requirements that would mandate ID documents or facial scans for messenger services, effectively making anonymous communication impossible.
HN Discussion: Commenters expressed relief at the parliament’s decision but warned that proponents will keep trying until they succeed. Several noted the cynical use of “protecting children” rhetoric to push surveillance agendas, with one commenter pointing to an EPP Group plea that used teddy bear imagery. Discussion covered the extensive lobbying from entities including Meta, Microsoft, Match Group, and international organizations. One commenter criticized the lazy logic of dismissing 36% of suspicious activity reports as insignificant, noting this would increase if private messages were less patrolled. Others expressed shock that such legislation could even be proposed in a democracy, calling it shameful that politicians tried to rob people of their rights.
Geopolitics & War
Olympic Committee bars transgender athletes from women’s events
The International Olympic Committee has announced that transgender athletes will be barred from competing in women’s events starting with the 2028 Olympics. This decision follows years of debate about fairness in women’s sports and the physiological effects of hormone therapy on athletic performance. The ruling affects not only transgender women but also women with disorders of sexual development, who make up the majority of people affected by this policy change. Critics argue the issue has been framed around “men competing in women’s sports” despite the fact that trans women taking estrogen and blocking testosterone experience significant changes to speed, strength, and athletic capability.
HN Discussion: Commenters noted that trans women have competed as women in the Olympics only once ever and have won zero medals, suggesting the issue may be statistically insignificant. Discussion focused on the distinction between biological sex and the physiological effects of hormone therapy, with some arguing estrogen treatment should be the key point of discussion rather than identity politics. Several commenters expressed frustration that this was being handled by governments rather than sports committees, viewing it as a waste of representatives’ time for culture war points. Others predicted a logical outcome would be the creation of a separate Transgender Olympics, while some questioned why conversations always frame the issue as “men” competing when trans women can’t be described as biologically male after transition.
Tech Tools & Projects
Doom entirely from DNS records
A fascinating proof-of-concept that stores and plays Doom entirely using DNS records rather than traditional game distribution methods. The project abuses DNS as a storage medium, encoding game assets in DNS TXT records that can be retrieved and rendered by a client. This represents the latest in the tradition of “running Doom on anything” projects, with DNS joining a long list of unconventional platforms that have been coerced into running the classic game. The README playfully asks “DNS resolves names to IP addresses, what else can it do?” with the answer apparently being to run Doom, though commenters noted this is more about storage than actual computation.
HN Discussion: Commenters pointed out the title should be clarified to “Loading Doom entirely from DNS records” since DNS is only abused for storage, not for processing or executing instructions. Several shared similar projects, including running Doom frame-to-ASCII over DNS subdomains and an innovative “Harder Drive” YouTube video showing data storage in novel forms. The project sparked discussion about what constitutes truly mastering a platform, with one commenter suggesting we won’t truly have arrived on Mars until someone plays Doom there without wasting valuable resources. The project was celebrated as another entry in the canonical measurement of mastering a platform’s capabilities.
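The storage trick can be sketched without a live DNS server: binary data is base64-encoded and split into pieces small enough for TXT records (each character-string in a TXT record is capped at 255 bytes), then reassembled in order on the client. The record naming and chunk size below are assumptions for illustration, not the project’s actual scheme.

```python
import base64

TXT_CHUNK = 255  # max length of a single character-string in a TXT record

def to_txt_records(data: bytes, zone: str = "doom.example.com"):
    """Encode a binary blob as an ordered list of (name, value) TXT records."""
    encoded = base64.b64encode(data).decode("ascii")
    chunks = [encoded[i:i + TXT_CHUNK] for i in range(0, len(encoded), TXT_CHUNK)]
    return [(f"chunk{i}.{zone}", chunk) for i, chunk in enumerate(chunks)]

def from_txt_records(records):
    """Reassemble the blob, sorting records by their numeric chunk index."""
    ordered = sorted(records, key=lambda r: int(r[0].split(".")[0][len("chunk"):]))
    return base64.b64decode("".join(value for _, value in ordered))

blob = bytes(range(256)) * 8  # stand-in for a game asset
records = to_txt_records(blob)
assert from_txt_records(records) == blob
```

This is exactly why commenters call it storage rather than computation: DNS only serves the bytes back, and all decoding and rendering still happens on the client.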
Moving from GitHub to Codeberg, for lazy people
A practical guide for developers considering migration from GitHub to Codeberg, the European-based code hosting platform powered by Forgejo software. The article acknowledges that while Codeberg isn’t a true GitHub replacement for everyone—it doesn’t support private repositories well and discourages them according to their FAQ—it can be excellent for established FOSS projects with comprehensive documentation and public-facing support needs. The author argues that evaluating alternatives to GitHub is becoming increasingly important, though migrations often discount how much GitHub has raised the bar with integrated CI/CD, security features, and the community network effect that makes contributions flow more easily. For those who need private repos or GitHub Pages equivalents, the recommendation is to self-host Forgejo rather than expecting Codeberg to fill those gaps.
HN Discussion: Commenters debated whether Codeberg is a viable GitHub alternative, noting it handles well-established FOSS projects but isn’t suitable for random scripts, concept scraps, or private projects. Several mentioned using self-hosted Forgejo instances successfully, with one keeping it accessible only via Tailscale to avoid AI crawlers. Discussion covered GitHub’s tradeoff—giving users a lot for “free” while harvesting data and potentially training on private repos. One commenter noted the real cost of evaluating alternatives is immense, both financially and in complexity, due to user expectations around CI/CD and native builds for common architectures. Another pointed out that the community ecosystem on GitHub is a major factor, with issues, PRs, and discussions being significant benefits beyond simple code hosting.
OpenTelemetry profiles enters public alpha
OpenTelemetry has announced that profiles, a new signal for continuous, low-overhead capture of performance data in production, has entered public alpha. This addition extends OpenTelemetry’s observability capabilities beyond metrics, logs, and traces to include profiling data, providing deeper insights into application performance characteristics. The alpha release allows users to experiment with continuous profiling in production environments, addressing a gap in observability that has typically required separate specialized tooling. The feature aims to bring production profiling into the OpenTelemetry ecosystem, potentially reducing vendor lock-in and standardizing how profiling data is collected and analyzed across different platforms.
HN Discussion: Commenters expressed excitement about the feature, with one noting they’ve used the Elixir version of profiling at work and found it exceptionally useful. Discussion touched on the challenge of meeting “low-overhead” expectations for continuous profiling in production. One commenter asked for Jepsen-like stress tests on rsyslogd for distributed logging and tracing, noting they’ve half-assedly looked without finding anything. There was skepticism from one commenter about whether anything designed by the OTel community could truly meet low-overhead expectations. The announcement represents a significant expansion of OpenTelemetry’s capabilities and could make production profiling more accessible to teams already using the ecosystem.
Personal Encyclopedias
A deeply personal project exploring the creation of digital encyclopedias that capture an individual’s knowledge and experiences, combining family history with AI-powered cross-referencing and organization. The author discovered a grandfather’s collection of typewritten diaries documenting every day of his adult life, which became both a treasure trove and a burden for family members tasked with processing them after his death. The project uses AI to cross-reference entries with external sources like bank statements, TicketMaster receipts, and Shazam data to create richly contextualized personal histories. The piece raises profound questions about what deserves preservation, the value of manual labor in creating something meaningful versus automated processing, and whether everything from our lives should be saved for posterity.
HN Discussion: Commenters shared emotional reactions ranging from excitement about the technology to unease about the dystopian implications of AI that can cross-reference everything from bank statements to Shazam data. One commenter described a personal tragedy where their mother spent 15 years burdened by grandfather’s diaries, ultimately leaving most unprocessed, and how they decided to throw them away after her death, finding comfort that some parts were preserved and relief that the diaries were gone. Discussion covered manual alternatives, including one commenter who creates physical books with their spouse using hand-bound notebooks containing recipes, arguments and resolutions, and random thoughts. Another described making physical photo books for each year, noting physical forms of memory may outlast digital ones. The piece sparked deep reflection on the value of personal knowledge management and what we choose to preserve.
Stripe Projects: Provision and manage services from the CLI
Stripe has launched a new CLI tool called Projects that allows developers to provision and manage services across multiple cloud platforms from the command line. The integration acts similarly to “Sign in with Google” but for AI agents rather than humans, with Stripe serving as a trusted identity and billing provider that handles KYC for both sides. The tool addresses the challenge of agent commerce by providing a way for AI agents to create accounts, set up billing, manage secrets, and provision resources without needing to handle credit cards or email verification directly. Initial integrations include platforms like Supabase and Chroma, with the possibility for expansion to other services that want to enable seamless agent-driven resource creation.
HN Discussion: A Supabase developer expressed excitement that users can now provision Supabase accounts including PostgreSQL, Storage, and Authentication in a single CLI command without leaving the terminal. Discussion covered whether this should be an open standard that platforms implement or if Stripe’s approach is sufficient. One commenter questioned whether it protects API keys from being exfiltrated by tricking AI, noting keys are stored in ordinary config files. Others expressed concern about vendor lock-in, wishing for more open approaches like OpenTofu or Terraform support. Some noted Stripe has incentive to add platforms that use Stripe as payment processor but less incentive for platforms that don’t generate payment fees. The Chroma developer wrote about the agent experience, noting Stripe’s role as a trusted third party addresses the trust problem in agent commerce.
Running Tesla Model 3’s computer on my desk using parts from crashed cars
An incredible hardware hacking project that successfully extracted and ran a Tesla Model 3’s onboard computer on a desk using parts purchased from crashed vehicles. The project involved sourcing the computer unit and associated components through eBay and salvage auctions, dealing with interesting challenges like most sellers cutting cables a few centimeters after connectors rather than simply unplugging them. The author had to splice in new wire, create custom power solutions, and figure out how to connect displays to bring the system to life. The project provides fascinating insights into Tesla’s hardware architecture, including the use of LVDS cables for displays and the system’s design for automotive use. The article also discusses Tesla’s “Root access program” in their bug bounty, where researchers who find valid rooting vulnerabilities receive permanent SSH certificates for their own cars.
HN Discussion: Commenters were fascinated by the project, comparing it to Apple’s Security Research Device Program where researchers get loaned rooted iPhones. Discussion covered the interesting balance of requiring proof of skills and willingness to participate in the bug bounty program before granting root access. Several shared similar experiences with automotive electronics, including one commenter who installed a towing brake controller in a Tesla Model Y. Technical discussion covered LVDS as a signaling protocol and its use beyond automotive applications. One commenter questioned why sellers cut cables instead of unplugging, suggesting soldering new wire would be easier than finding compatible replacements. The project was praised as an example of the type of hardware hacking that makes the internet special.
Building a Blog with Elixir and Phoenix
A tutorial and walkthrough of building a blog application using the Elixir programming language and Phoenix web framework. The post demonstrates how Phoenix’s LiveView and real-time capabilities can create dynamic, interactive web experiences with minimal JavaScript. The tutorial covers setting up a Phoenix project, configuring the database, creating blog post resources, and implementing features like markdown rendering and comments. Phoenix is positioned as an excellent choice for web applications that require real-time features, high concurrency, and fault tolerance, thanks to Elixir’s BEAM VM foundation that provides lightweight processes and robust supervision trees.
HN Discussion: One commenter expressed interest in taking a similar approach but embedding LiveBook bits to run code and LiveViews directly in the middle of posts. The discussion was brief compared to other stories, with the piece serving more as an educational resource than sparking extensive debate. Phoenix continues to gain traction among developers who value its real-time capabilities and the productivity of the Elixir ecosystem for web applications that need to handle concurrent connections efficiently.
Light on Glass: Why do you start making a game engine?
A deeply technical and philosophical exploration of why someone would undertake the enormous task of building a game engine from scratch rather than using existing solutions like Unity or Unreal. The author describes their struggle to bolt CRT-style rendering onto modern engines, discovering fundamental differences in how CRTs generate images as physical processes versus modern digital rendering that treats frames as discrete snapshots. The article argues that CRTs don’t understand frames at all—they fire electrons at whatever intensity the analog signal indicates at any given moment, with magnets steering the beam across the screen. This creates a fundamentally different mental model of what it means to draw or render an image, with the CRT’s output being a culmination of integrations over time rather than discrete instant snapshots. The author ultimately decided to build their own engine to properly capture this philosophy, recognizing that modern engines’ architecture couldn’t support this different approach to rendering.
HN Discussion: Commenters expressed confusion about the author’s claimed insights, noting that LCDs and OLEDs are also not cameras and questioning what fundamental problem the proposed engine addresses. One commenter didn’t understand how CRT rendering differs fundamentally from modern approaches, while another noted the article is supposed to address the “why” but left them unclear on what’s wrong with Unity or Unreal architecturally. Discussion covered whether CRT effects can be accomplished with shaders rather than a custom engine. One commenter shared a video showing the difference in retro games on CRT vs LCD screens, noting games designed on CRTs can look awful on LCDs. Another suggested getting a cheap CRT monitor to appreciate motion clarity compared to modern displays. The piece sparked interesting technical debate about rendering philosophies but left some commenters confused about the core architectural claims.
Swift 6.3
Apple has released Swift 6.3, continuing the evolution of the Swift programming language with improvements to compilation, language features, and tooling. The release introduces the @c attribute for exposing Swift functions and enums to C code in the same project, generating corresponding C header declarations that can be included in C/C++ files. This interoperability feature has been long-awaited by developers who need to interface Swift with existing C codebases. The release continues Swift’s positioning as a language designed for use at every layer of the software stack, though some commenters question whether this will ever be truly achieved given the dominance of other ecosystems. The release also includes various improvements to embedded Swift capabilities.
HN Discussion: Commenters expressed frustration about compilation times in Swift, which remain slower than Rust’s and significantly hamper the developer experience despite the language’s other strengths. One commenter noted that adding an entire C++ compiler for C++ interoperability before adding C exports represented strange priorities. Discussion covered Swift’s missed opportunity around 2015-2017 to dethrone Python for numeric computing powered by C++ libraries, with Apple failing to bring the community along on marketing and messaging. Several noted Swift has remained largely an Apple ecosystem thing while its complexity has grown to rival C++’s. One commenter praised embedded Swift improvements after porting xv6-riscv to multiple languages, calling Swift one of the most enjoyable and productive languages despite compilation time issues. Another questioned whether Swift can truly be “the language you reach for at every layer” given current market dynamics.
Shell Tricks That Make Life Easier (and Save Your Sanity)
A comprehensive collection of shell and terminal productivity tips that can significantly improve the command line experience for developers. The article covers techniques like remapping arrow keys to search through command history more intelligently, enabling vim-mode in shells, using fzf for fuzzy history searching, and various other time-saving tricks. One particularly clever suggestion is creating a script called ”#” that simply runs cat to allow commenting out pipe elements like mycmd1 | \# mycmd2 | mycmd3. The piece emphasizes that while these tricks seem minor individually, they compound to make the terminal significantly more pleasant and productive for daily use, especially for developers who spend substantial time working in shells.
HN Discussion: Commenters shared their favorite shell tricks, with one highlighting the up-arrow remapping to only show commands starting with already-typed characters rather than iterating through all commands. Several praised vim-mode in shells, noting it allows efficient navigation and editing without learning new shortcuts if already familiar with Vim. Discussion covered fzf shell integration for fuzzy history searching as a significant quality-of-life improvement. One commenter noted the challenge of remembering these awesome commands, suggesting the slow way is often used because operations aren’t performed frequently enough to burn them into memory. Another shared the clever ”#” script for commenting out pipe elements, calling it brilliant. The piece sparked enthusiasm for improving terminal productivity but also acknowledged the learning curve and memory challenges of adopting many new techniques.
Show HN: Orloj – agent infrastructure as code (YAML and GitOps)
Orloj is an open-source orchestration runtime for multi-agent AI systems that treats agents as infrastructure-as-code resources defined in declarative YAML manifests. The system handles scheduling, execution, governance, and reliability for fleets of AI agents, addressing a gap in current agent deployment that the authors compare to running containers before Kubernetes. Key features include runtime governance through AgentPolicy, AgentRole, and ToolPermission that are evaluated before every agent turn and tool call, ensuring unauthorized actions fail closed with structured errors and audit trails. The architecture uses a server/worker split with orlojd hosting the API and task scheduler, while orlojworker instances claim and execute tasks. Tool isolation can be configured per risk level, with options including direct execution, sandboxed environments, containers, or WASM. The project also includes native MCP support, allowing MCP server tools to become first-class resources with governance applied.
HN Discussion: One commenter noted the name “Orloj” means “Astronomical Clock” in Czech, referring to the famous Prague astronomical clock. Another expressed concern about taking on dependency debt and a maintenance burden they may not need. The project positions itself as addressing the chaos of current agent deployment, where everyone writes messy glue code to wire agents together with no governance, observability, or standard lifecycle management. The authors plan operational workflow templates including incident response triage, compliance evidence collection, CVE investigation pipelines, and secret rotation auditing. The governance features are particularly interesting as they enforce policies at runtime rather than through prompt instructions that models might ignore, with capped token budgets, model whitelists, and scoped policies.
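A declarative manifest in the style described might look like the sketch below. Every field name here is an illustrative assumption, since the post doesn’t show the actual Orloj schema; only the resource kinds (AgentPolicy, tool permissions, isolation levels, token budgets, model whitelists) come from the announcement.

```yaml
# Hypothetical sketch of an Orloj-style manifest; all field names are assumptions.
apiVersion: orloj/v1
kind: AgentPolicy
metadata:
  name: triage-policy
spec:
  modelWhitelist: [claude-sonnet, gpt-4o]  # only these models may be invoked
  tokenBudget: 50000                       # hard cap, enforced per agent turn
  toolPermissions:
    - tool: shell
      isolation: wasm      # high-risk tool: sandboxed, fails closed
    - tool: http_get
      isolation: direct    # low-risk tool: direct execution
```

The point of the YAML-and-GitOps framing is that such policies live in version control and are evaluated by the runtime before every turn, rather than being phrased as prompt instructions a model could ignore.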
Optimizing a lock-free ring buffer
A detailed technical walkthrough of optimizing a single-producer single-consumer lock-free ring buffer, achieving performance improvements from 12 million operations per second to 305 million operations per second. The article explains the step-by-step process of implementing the queue from scratch, covering memory ordering, cache line effects, and other low-level optimizations crucial for high-performance concurrent data structures. This pattern is widely used in lowest-latency environments for sharing data between threads without the overhead of locks. The piece serves as an excellent educational resource for developers interested in systems programming and performance optimization, demonstrating how careful attention to hardware characteristics can yield dramatic performance improvements.
HN Discussion: One commenter asked about the first CPU to support atomic instructions. The author commented that the improvement from 12M to 305M ops/s came through implementing the queue from scratch. Discussion covered the importance of tuning the device itself, not just the code, with one commenter noting they’ve seen at least 20% improvement in network throughput just by tweaking system settings. Another commenter proposed an optimization using sentinel values to avoid the reader needing to read the writer’s index, though noted the hard part is ensuring the sentinel can be set/cleared atomically. Commenters praised the article for its clarity and the website’s design. The piece sparked interest in low-level optimization techniques and the importance of understanding hardware characteristics when writing high-performance concurrent code.
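The core index discipline of a single-producer single-consumer queue can be sketched in Python. This illustrates the algorithm only: the real performance story is about atomics, acquire/release memory ordering, and cache-line padding, none of which Python can express.

```python
class SpscRing:
    """Single-producer single-consumer ring buffer (algorithm sketch only).

    Capacity is a power of two so indices wrap with a cheap bitmask.
    The producer only writes `head` and the consumer only writes `tail`;
    that single-writer-per-index property is what makes the lock-free
    C/C++/Rust versions of this structure possible.
    """
    def __init__(self, capacity):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self.buf = [None] * capacity
        self.mask = capacity - 1
        self.head = 0  # next slot to write (producer-owned)
        self.tail = 0  # next slot to read (consumer-owned)

    def push(self, item):
        if self.head - self.tail == len(self.buf):
            return False  # full
        self.buf[self.head & self.mask] = item
        self.head += 1  # in C++ this would be a release store
        return True

    def pop(self):
        if self.head == self.tail:
            return None  # empty
        item = self.buf[self.tail & self.mask]
        self.tail += 1  # paired with an acquire load on the producer side
        return item

q = SpscRing(8)
for i in range(8):
    assert q.push(i)
assert not q.push(99)  # full
assert q.pop() == 0
assert q.push(99)      # popping freed a slot
```

The optimizations in the article (caching the other side’s index, padding head and tail onto separate cache lines) all build on exactly this structure without changing its logic.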
Intel Announces Arc Pro B70 and Arc Pro B65 GPUs
Intel has announced two new professional GPUs, the Arc Pro B70 and Arc Pro B65, representing the company’s continued push into the discrete GPU market with its Battlemage architecture. The B70 offers 600 GB/s of memory bandwidth and is priced around $1000 according to Micro Center, positioning it as a competitive option for professional workloads. Commenters read the release as tracking a shift toward Mixture of Experts (MoE) and similar model architectures, which demand more memory bandwidth relative to compute than dense monolithic models. The addition of 32GB of VRAM at a decent price point has generated interest from users who struggle with VRAM limitations for applications like VR rendering. This represents Intel’s ongoing efforts to challenge NVIDIA and AMD in the professional GPU market, though some commenters note it may be “too little too late.”
HN Discussion: Commenters highlighted the impressive 600 GB/s memory bandwidth and competitive pricing, with one noting it’s not something to sneeze at. Discussion covered the trend toward model architectures that need more memory bandwidth than compute, suggesting this will accelerate as more experts require more memory. Several expressed interest in the 32GB of VRAM at a reasonable price, noting VRAM is currently their main limitation for VR workloads. One commenter speculated this reflects a shift toward MoE architectures, which spend memory to reduce the compute needed per token. Others noted Intel’s continued challenges, with one dismissively calling it “too little too late, classic Intel.” Some asked about successors to the A310/A40 cards for specific use cases like SR-IOV in slot-powered, single-width, low-profile cards.
SpaceStarCarz KoolWheelz Paper Models
A delightful collection of paper models featuring classic and whimsical spacecraft designs that can be printed, cut, and assembled as physical models. The site represents the kind of quirky, creative content that made the early internet special, offering free resources for hobbyists and enthusiasts. The designs include various space-themed vehicles with detailed instructions for assembly, providing both entertainment and a sense of accomplishment when completed. This type of content stands in contrast to the commercialized, algorithm-optimized internet of today, representing a time when people created things simply because they could and wanted to share them with the world.
HN Discussion: Commenters reminisced about similar paper model collections from their childhood, including “100 paper spaceships to fold and fly” and other sci-fi themed paper crafts. Several praised the project as exactly what the internet was supposed to be about. One commenter noted that while the designs are classic, they don’t seem to be the exact ones they had as kids in the 80s, suggesting they may be newer designs. The site sparked nostalgia for the internet of the past, when hobbyists created and shared creative resources without commercial motives. Discussion touched on how this kind of content is becoming rarer as the internet becomes increasingly commercialized and algorithm-driven. Commenters expressed appreciation for the simple joy of creating physical things from paper templates.
Web & Infrastructure
Colibri – chat platform built on the AT Protocol for communities big and small
Colibri is a new chat platform built on the AT Protocol (the same underlying protocol as Bluesky), designed to serve communities of all sizes with both public and private group options. The platform positions itself as “built on open standards, private when needed,” though critics note that AT Protocol doesn’t currently support private data, making the “private when needed” claim somewhat disingenuous. The platform requires Bluesky credentials to sign in, raising questions about data ownership and access in a system that claims to be “open social” while requiring credentials from another centralized service. The project represents an interesting experiment in building chat applications on top of the emerging AT Protocol ecosystem, but has drawn skepticism about its privacy claims and marketing language.
HN Discussion: Commenters questioned the marketing language, particularly “Your data isn’t trapped on our servers” without clarifying where data actually is and who can access it. One noted the “open social” terminology seemed like buzzwords without substance. Several expressed concern about the permissions the platform requests, noting Bluesky permissions include managing profile, posts, likes, and follows. Critics pointed out the contradiction between claiming “private when needed” while acknowledging AT Protocol doesn’t support private data yet and that support will be implemented “as soon as” the protocol supports it. Commenters also suggested that screenshots of the UI would help potential users understand the experience before signing in with Bluesky credentials. The project’s developer engaged in the comments, but skepticism remained about the privacy claims.
Interoperability Can Save the Open Web (2023)
Cory Doctorow’s 2023 essay arguing that interoperability—allowing different systems to work together through open standards—represents a crucial defense against platform monopolies and the “enshittification” of the internet. The piece posits that the web succeeded because it was built on interoperable standards, and that maintaining this interoperability is essential to preventing any single platform from capturing and degrading the user experience for profit. Doctorow argues that mandates for interoperability, similar to healthcare requirements that patients own their data and must be provided timely access, could be powerful tools for preserving digital freedom. The essay serves as a call to action for policymakers and technologists to prioritize interoperability as a means of preserving the open web against increasing consolidation and platform power.
HN Discussion: Commenters debated whether interoperability alone can save the web without changes in user behavior and business models. One noted that interoperability is what made the web possible but isn’t sure it will save it without fundamental changes. Discussion covered healthcare as a model where interoperability exists only because it was mandated by government programs, with the patient data ownership and timely access requirements driving implementation. One commenter described their approach to self-hosted software as “partisan,” battling big tech in small, distributed ways using human-readable JSON or easily transferable SQLite tables. Another noted the serendipity of Claude Code running on filesystems rather than through closed app APIs, warning we should prevent the great lock-in that companies will attempt in the next 18 months. The piece resonated with commenters concerned about platform consolidation but raised questions about whether interoperability is sufficient without broader changes.
History & Science
Why so many control rooms were seafoam green (2025)
A fascinating exploration into the historical and psychological reasons behind the prevalence of seafoam green in control rooms and industrial environments from the mid-20th century. The color choice wasn’t arbitrary—it was based on extensive research into color theory and human factors engineering, with seafoam green selected for its ability to reduce eye strain during extended periods of monitoring. The article traces how this color became ubiquitous in control rooms, from industrial facilities to air traffic control centers, reflecting an era when design decisions were made with careful consideration for human operators’ needs. The piece serves as both a historical document of mid-century industrial design and a reminder of how much we may have lost in our contemporary pursuit of minimalism and gray/beige aesthetic uniformity.
HN Discussion: Commenters reminisced about the era when color was common in institutional buildings, with one noting banks, schools, doctor’s offices, and McDonalds all had distinct colors in the 1970s before everything became white or gray. Several drew connections to similar color choices in other environments, including Soviet aircraft cockpits painted in “aviation turquoise,” where visual fatigue considerations were equally important. One commenter appreciated seeing colors in government, industrial, or commercial buildings, noting the “everything must be gray/beige” fad has dominated for 30 years. Discussion covered how much may have been lost in the endless quest for minimalism, with button affordances becoming anemic and functional color theory being undervalued. Another commenter mentioned “Go Away Green” as a related concept used in various contexts to make objects less noticeable.
Fermented foods shaped human biology
An exploration of how fermented foods have fundamentally influenced human biology and evolutionary development throughout history. The article examines the relationship between humans and fermented foods like sauerkraut, kimchi, kombucha, and kefir, discussing how these foods shaped our ancestors’ microbiomes and overall health. The piece connects fermentation practices to cultural traditions and biological adaptations, suggesting that our long relationship with fermented microorganisms has played a significant role in human evolution. This represents an intersection of food science, microbiology, and anthropology, illustrating how cultural practices around food preparation have had lasting biological impacts on human populations.
HN Discussion: Commenters noted a disconnect between the fermented foods mentioned in the article and what Celtic ancestors would have actually eaten, pointing to beer, cider, and bread as more likely traditional fermented foods. One commenter simply noted “Sourdough and sunshine are all you need.” The discussion was relatively brief compared to other stories, with the piece serving more as an interesting historical/scientific exploration than sparking extensive debate. The article touches on the fascinating intersection of food science, human evolution, and cultural practices, though commenters didn’t engage deeply with the core claims.
Obsolete Sounds
An artistic and archival project dedicated to preserving and presenting sounds from technologies and environments that have become obsolete or are disappearing from daily life. The collection includes sounds from typewriters, floppy drives, dial-up modems, old computers, and other technologies that defined earlier eras but are no longer part of contemporary experience. The project serves as both an artistic endeavor and a form of historical documentation, capturing the sensory dimensions of technological change that are often lost in written records. By preserving these sounds, the project helps maintain a connection to our technological past and provides valuable resources for historians, artists, and anyone interested in the evolution of human-technology interaction.
HN Discussion: Commenters expressed appreciation for projects that use forgotten, obscure, or insignificant technologies as an ethos unto itself. One shared a story about searching thrift stores for non-consumer electronics and the disturbing situation where only very specific technologies are considered valuable to a small subset of consumers, with things like CRTs shipped to warehouses for online auctions while their accompanying hardware is thrown away. Several noted the interface is confusing to use. One commenter expressed desire for recordings of Amiga floppies reading and the drive waiting to be fed, calling those the real sounds. Discussion covered the irony that some “artistic renderings” of typewriter sounds are completely useless as documentation. The project sparked nostalgia for technologies that are disappearing and concern about the selective preservation of only marketable items from our technological past.
Academic & Research
The Oxford Comma – Why and Why Not
A grammatical exploration of the Oxford comma—the optional comma before the coordinating conjunction in a list of three or more items—examining arguments both for and against its use. The article delves into the historical origins of the comma convention, the clarity it can provide in certain sentences, and the stylistic arguments against what some consider an unnecessary punctuation mark. The piece serves as an accessible guide for writers and editors who need to make decisions about comma usage in their work, presenting both sides of a long-standing debate in English grammar and style. The discussion touches on how small punctuation choices can affect meaning, clarity, and reader experience, making it relevant for anyone who cares about precise written communication.
HN Discussion: This story had no comments at the time of fetching, suggesting it was either posted too recently to accumulate discussion or simply failed to engage the community. The topic, while linguistically interesting, may not have resonated with Hacker News’s primarily technical audience; grammatical and stylistic discussions appear on the site occasionally but rarely draw the engagement that technical stories do.
Business & Industry
Ashby (YC W19) Is Hiring Engineers Who Make Product Decisions
Ashby, a Y Combinator W19 alumni company, is recruiting for engineering positions with a specific focus on candidates who are capable of making product decisions. The job posting reflects a growing trend of hiring engineers who are expected to contribute beyond technical implementation to include product thinking and strategic decision-making. This approach recognizes that modern software engineering often requires deep understanding of user needs, market dynamics, and business context, not just technical execution. The company appears to be looking for engineers who can bridge the gap between technical implementation and product strategy, suggesting they value well-rounded engineering talent.
HN Discussion: This job posting had no comments at the time of fetching, which is not unusual for employment listings on Hacker News; even postings from YC alumni companies receive limited engagement unless they’re particularly notable or controversial. Still, the focus on engineers who make product decisions reflects a broader industry trend toward expecting product awareness from engineering roles, even if the community found this specific posting too similar to countless other tech job listings to warrant discussion.
System Administration
My home network observes bedtime with OpenBSD and pf
A detailed writeup of setting up a home network with scheduled “bedtime” restrictions using OpenBSD’s pf packet filter firewall. The author configured their network to automatically restrict internet access during sleeping hours, with the firewall enforcing bedtime while allowing exceptions for devices that need continuous connectivity. The project involved writing pf rules and working around hardware issues, including Realtek ethernet cards that had run fine under Linux for years but caused problems on OpenBSD. The piece serves as a practical example of using OpenBSD’s powerful firewall tools for network management, while also discussing the hardware compatibility challenges that can arise when switching operating systems.
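The general shape of such a setup can be sketched with a pf anchor toggled from cron — an illustrative guess, not the author’s actual ruleset; the table names, addresses, and times here are invented:

```
# /etc/pf.conf (sketch) -- the "bedtime" anchor starts empty
anchor "bedtime"
pass out on egress

# /etc/pf.bedtime.conf -- rules loaded into the anchor at night
table <kids> { 192.168.1.50, 192.168.1.51 }
block return out quick on egress from <kids> to any

# crontab entries: load the anchor at 21:00, flush it at 07:00
# 0 21 * * * pfctl -a bedtime -f /etc/pf.bedtime.conf
# 0 7  * * * pfctl -a bedtime -F rules
```

Because `quick` rules inside an anchor terminate evaluation of the whole ruleset, the nightly block overrides the general `pass out`, and flushing the anchor in the morning restores normal access without reloading the main configuration.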
HN Discussion: Commenters shared experiences with Realtek network cards, with one convinced their hardware/firmware has timing issues where descriptor indexes get unsynchronized, leading to network stalls or wild writes. Several noted the weird reset behavior and undocumented flag setting in Realtek’s drivers; one switched to Intel cards, which almost always work, require only a free slot, and can be sourced cheaply on eBay. One suggested creating a separate VLAN for devices needing bedtime enforcement rather than adding exceptions for devices on a single LAN network. Another commenter noted that allowing only TCP would break a lot of things and questioned why the author didn’t just filter on IP directly. Discussion covered the challenges of hardware compatibility when switching between operating systems and various approaches to network-based access control.
Other
Niche Museums
A curated collection and exploration of museums dedicated to highly specialized or unusual topics that don’t fit the traditional museum mold. The site celebrates the diversity of human interests and the passion behind preserving niche collections, from museum of bad art to collections dedicated to very specific historical periods, technologies, or cultural phenomena. These museums often represent the life’s work of dedicated individuals who saw value in preserving things that mainstream institutions overlooked or considered unworthy of collection. The project highlights how human interests extend far beyond the typical museum subjects of art, history, and natural science, encompassing virtually any specialized knowledge or collection that someone cares enough to preserve and share with others.
HN Discussion: This story had comments at the time of fetching but they were not captured in the brief sample. The concept of niche museums resonates with the Hacker News community’s appreciation for esoteric knowledge and specialized interests. Museums dedicated to unusual topics represent the kind of passion projects that HN users often appreciate—people pursuing specialized interests regardless of mainstream relevance. The site likely serves as both a directory of interesting places to visit and a celebration of human diversity of interests, showing how many different things people care enough about to create institutions around preserving and sharing them.
Thanks for reading! See you tomorrow for the next briefing.