HN Morning Brief — 2 April 2026
A launch-day double-header from NASA and SpaceX, a leaked system prompt that the AI community is still digesting, and enough hardware drama, reverse engineering, and open-source releases to fill the hours until the evening brief.
Space & Geopolitics
Live: Artemis II Launch Day Updates
NASA’s Artemis II mission — the first crewed flight of the Space Launch System and Orion capsule — launched on April 1st, carrying four astronauts on a lunar flyby trajectory. The live blog tracked milestones from tanking through orbital insertion, with the crew confirmed safe and all systems nominal after trans-lunar injection. This is the first time humans have travelled beyond low Earth orbit since Apollo 17 in 1972.
HN Discussion: Commenters noted the striking contrast between the SLS programme's $93 billion cumulative cost and SpaceX's far cheaper Starship programme, with several arguing the entire architecture represents Congressional patronage rather than optimal engineering. Others simply expressed awe at watching a crewed deep-space mission unfold in real time. The April 1st timing led to initial scepticism that the whole thing might be an elaborate joke.
SpaceX Files to Go Public
SpaceX has confidentially filed for an initial public offering, with reported valuations around $1.75 trillion. The company posted $16 billion in profit last year despite heavy R&D spending on Starship and Starlink. The filing comes amid rapid expansion of its satellite internet business and growing dominance of the global launch market.
HN Discussion: A self-identified SpaceX investor argued the valuation only makes sense if you assign a high probability to Mars colonisation — viewing the company as a $200B launch business plus an enormous option on becoming a multi-planetary civilisation. Others flagged the Nasdaq rule change that will allow SpaceX to enter index funds just 15 days after listing, meaning passive 401(k) money will flow in almost immediately. The fact that SpaceX now owns X Corp through its xAI subsidiary drew pointed questions about fiduciary responsibility.
AI & Tech Policy
The Claude Code Leak
Someone extracted and published the full system prompt that Anthropic’s Claude Code agent uses when operating as a coding assistant. The prompt runs to thousands of words and reveals detailed instructions about tool use, file editing strategies, reasoning about code structure, and how the model should handle uncertainty and partial information.
HN Discussion: The thread became a referendum on whether system prompts are trade secrets or inevitable leakage. Several commenters pointed out that any user of Claude Code can extract the prompt through simple probing, making “leak” a misnomer. Others analysed the prompt’s engineering choices — particularly its instructions for decomposing complex tasks — and debated whether publishing it helps competitors or is inconsequential since the real value lies in the weights, not the prompt.
r/Programming Bans All Discussion of LLM Programming
The moderators of r/programming announced a temporary blanket ban on all posts and comments related to LLM-assisted programming, citing fatigue from low-quality submissions and repetitive debates. The ban covers AI code generation tools, coding assistants, and any discussion of using LLMs to write software.
HN Discussion: HN users found the irony rich — a programming community banning discussion of the most consequential new programming tool in years. Some sympathised, noting that r/programming had become flooded with low-effort “I asked ChatGPT to build X” posts. Others argued the ban is intellectually dishonest for a community that claims to follow technological change, and predicted the policy would be reversed within months.
Trinity Large Thinking
Arcee AI released Trinity-Large-Thinking, an open-weight reasoning model under the Apache 2.0 licence. Built on a mixture-of-experts architecture, it scores second on PinchBench (a benchmark of OpenClaw-style agentic tasks), just behind Opus-4.6, while costing roughly 96% less at $0.90 per million output tokens. The release follows months of incremental previews, with the full model now supporting multi-turn tool calling, long-horizon agent loops, and structured “thinking” before responses.
HN Discussion: Benchmarks drew scrutiny. One commenter ran Trinity against an agentic SQL benchmark where it scored 17/25 — a mediocre result compared to Qwen 27B’s 23/25. The requirement to include thinking output in multi-turn conversation history raised architectural questions about whether this approach scales. The fact that it is one of the first high-performing fully open-weight American models was noted as politically significant given current US-China AI competition.
StepFun 3.5 Flash Is #1 Cost-Effective Model for OpenClaw Tasks
A benchmark evaluation of 300 head-to-head battles on agentic tasks found StepFun’s 3.5 Flash model to be the most cost-effective option for OpenClaw-style agent workloads. The study compared multiple models on tasks like web research, file management, and multi-step reasoning, weighing quality against token pricing.
HN Discussion: Skepticism about the evaluation methodology dominated. One commenter clicked through to a sample task — finding rental properties near Wilton, CT — and found the “top-rated” model had fabricated all three property listings while still receiving 7/10, raising serious questions about how the benchmark handles hallucination. Others noted StepFun’s massive token volume on OpenRouter (3.5 trillion served) reflects its aggressive free-tier pricing as much as genuine quality preference. Several users reported that in their own testing, Gemini 2.5 Flash outperformed StepFun significantly.
AI for American-Produced Cement and Concrete
Meta released BOxCrete, a Bayesian optimisation model for designing concrete mixes, along with the foundational dataset used to develop it. The tool helps concrete suppliers reformulate mixes using domestic cement rather than imports — the US currently imports 22-25% of its cement, primarily from Turkey, Canada, and Vietnam. The model proposes candidate formulations that balance strength, cost, sustainability, and workability, reducing the months of lab trial-and-error that traditional mix design requires.
HN Discussion: Many commenters initially assumed this was an April Fools’ joke. Once convinced it was real, the discussion split between those who saw practical value — Meta builds enormous data centres and concrete formulation is genuinely complex — and those who questioned whether Bayesian optimisation over mix parameters is meaningfully “AI.” Comparisons to Google’s 2017 “AI Cookie” project, which used similar optimisation for bakery recipes, were frequent. The framing around “American-produced” cement drew eye-rolls as transparent tariff-era positioning.
Security & Privacy
Subscription Bombing and How to Mitigate It
Subscription bombing is an attack where someone floods a victim’s email address across hundreds of newsletter signups and service registrations in seconds, drowning their inbox with confirmation emails and burying legitimate messages. The article breaks down how attackers automate this using lists of open signup forms, and provides concrete mitigations: CAPTCHA on signup, rate-limiting by email domain, and deferred confirmation emails that delay the first message until the subscriber explicitly verifies.
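The rate-limiting mitigation can be sketched as a sliding window keyed by the signup address's mail domain. This is an illustrative Python sketch, not code from the article; the class name and limits are invented:

```python
import time
from collections import defaultdict, deque

class DomainRateLimiter:
    """Sliding-window signup limiter keyed by the address's mail domain.

    Caps how many confirmation emails one domain can trigger per
    window, blunting bulk subscription bombing without blocking
    ordinary signups.
    """

    def __init__(self, max_per_window=5, window_seconds=3600.0):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self._hits = defaultdict(deque)  # domain -> recent timestamps

    def allow(self, email, now=None):
        now = time.monotonic() if now is None else now
        domain = email.rsplit("@", 1)[-1].lower()
        window = self._hits[domain]
        # Evict timestamps that have aged out of the window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_per_window:
            return False  # over budget: defer or challenge instead
        window.append(now)
        return True
```

Signups that exceed the cap need not be rejected outright; in the article's scheme they would fall through to a CAPTCHA or a deferred confirmation email.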
HN Discussion: Several people shared personal experiences of being subscription-bombed, noting that the attack is trivially easy to execute and nearly impossible for victims to stop — you can’t individually unsubscribe from 500 newsletters. Commenters debated whether the root cause is the signup forms themselves (which should all implement verification) or email providers (which should better filter bulk confirmation messages). The consensus was that most mitigation responsibility falls on the services running the signups.
Email Obfuscation: What Works in 2026?
Spencer Mortensen ran a controlled experiment exposing email addresses protected by 15 different obfuscation techniques to 318 known spam harvesters over a year, measuring which methods actually prevented address extraction. The results: plain HTML entities blocked 95% of harvesters, HTML comments blocked 98%, while JavaScript-based methods (concatenation, ROT18, AES encryption) and CSS display:none all achieved 100%. Several usability-breaking techniques, such as replacing the @ symbol with text instructions or rendering the address as an image, also work but hurt accessibility.
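For concreteness, the simplest technique in the study, decimal HTML entity encoding, amounts to this (a minimal sketch; the function names are ours):

```python
import html

def entity_encode(text):
    """Encode every character as a decimal HTML entity.

    Browsers render the address normally; a harvester that
    pattern-matches a literal user@host string sees only entities.
    """
    return "".join(f"&#{ord(ch)};" for ch in text)

def entity_decode(encoded):
    """Reversing the encoding is trivial, which is why the 95% block
    rate says more about lazy harvesters than strong protection."""
    return html.unescape(encoded)
```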
HN Discussion: The surprise finding that simple HTML entity encoding still blocks 95% of harvesters despite being trivially reversible drew interest. One commenter noted they have used basic entity-encoded mailto links on a static site for years with zero spam, suggesting the harvester ecosystem is less sophisticated than assumed. Someone pointed out that publishing the full effectiveness breakdown also tells spammers which bypasses are worth implementing.
Signing Data Structures the Wrong Way
The article examines a subtle cryptographic pitfall: when you sign serialised data structures (JSON, Protocol Buffers, etc.), two different message types can serialise to identical bytes, so a signature over one is also valid for the other. The canonical example is a “transfer” message and a “deposit” message that serialise to the same bytes. The recommended fix is domain separation: embedding a unique identifier for each message type directly into the data before signing.
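Domain separation is simple to illustrate. The sketch below uses an HMAC as a stand-in for a real signature scheme and canonical JSON for serialisation; the names and key handling are invented for the example:

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # illustrative only; real systems use proper key management

def sign(msg_type, payload):
    """Domain-separated MAC over a canonical serialisation.

    Binding the message type into the signed bytes means identical
    payloads under different types yield different tags, so a
    "deposit" tag cannot be replayed as a "transfer".
    """
    body = json.dumps(
        {"type": msg_type, "payload": payload},
        sort_keys=True, separators=(",", ":"),
    ).encode()
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()
```

With the type field in place, the two message kinds can never share a valid tag even when their payload bytes are identical.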
HN Discussion: Several cryptographers pushed back, arguing the article treats established principles as novel discoveries. The Horton Principle (bind the context to the signature) and the Cryptographic Doom Principle (verify before processing) have been standard guidance for decades. One commenter linked to their own approach using multiset hashing to produce deterministic hashes without a separate canonicalisation step. The discussion of whether domain separators belong in-band (in a type field) or out-of-band (in protocol headers) remained unresolved.
Tech Tools & Projects
EmDash — A Spiritual Successor to WordPress
Cloudflare announced EmDash, an open-source content management system designed as a modern replacement for WordPress. Built on Cloudflare’s edge infrastructure, it uses Workers for dynamic content, R2 for storage, and D1 for its database — meaning sites run entirely on Cloudflare’s platform with no traditional server to manage. The pitch is zero-maintenance blogging with built-in CDN, DDoS protection, and automatic scaling.
HN Discussion: The dominant reaction was wariness about vendor lock-in. A CMS that only runs on one company’s infrastructure is the opposite of WordPress’s famous portability, even if the operational simplicity is appealing. Commenters noted that Cloudflare’s pricing for D1 and R2 is currently generous but could change, and questioned whether “spiritual successor to WordPress” should mean compatible with its plugin ecosystem rather than a from-scratch platform that happens to fill the same niche.
Git Bayesect — Bayesian Git Bisect
Git bayesect generalises git bisect to handle non-deterministic regressions. When a test becomes flaky rather than outright broken, standard binary search fails because any single test run might give a misleading pass or fail. Bayesect uses Bayesian inference with a Beta-Bernoulli conjugacy trick to maintain a probability distribution over which commit introduced the change, selecting the next commit to test by minimising expected entropy. It supports priors based on filenames, commit messages, or code structure.
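The inference step can be sketched with a discrete posterior over candidate commits. This is a toy model: the pass probabilities are assumed known here, whereas the real tool learns them via its Beta-Bernoulli machinery, and the expected-entropy commit selection is only hinted at:

```python
import math

def update(posterior, test_commit, passed, p_pre=0.95, p_post=0.10):
    """One Bayesian update of P(culprit == commit c) after a test run.

    A run at `test_commit` passes with probability p_pre if it
    predates the culprit and p_post otherwise. The flakiness rates
    are fixed here for clarity; bayesect infers them as it goes.
    """
    weighted = []
    for c, prob in enumerate(posterior):
        p_pass = p_pre if test_commit < c else p_post
        likelihood = p_pass if passed else 1.0 - p_pass
        weighted.append(prob * likelihood)
    total = sum(weighted)
    return [w / total for w in weighted]

def entropy(posterior):
    """Shannon entropy in bits; the tool picks the next commit to
    test by minimising the expected value of this quantity."""
    return -sum(p * math.log2(p) for p in posterior if p > 0)
```

Starting from a uniform prior over eight commits, a single observed failure at commit 3 shifts mass onto commits 0 through 3; repeated noisy runs sharpen the posterior where a single binary-search probe would mislead.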
HN Discussion: Benchmarks from one commenter showed dramatic accuracy improvements: at 90/10 flakiness, standard bisect drops to 44% accuracy while bayesect holds at 96%. Another commenter suggested weighting priors with call-graph analysis — commits that modify highly-connected functions are more likely culprits — claiming this adds 10-15% accuracy at zero test cost. A practical concern was raised about compile times: if rebuilding takes 15 minutes but running the flaky test takes one second, the tool should prefer re-testing the current commit over jumping to a new one.
Dull — Instagram Without Reels, YouTube Without Shorts (iOS)
A Show HN post from a developer who kept deleting and re-downloading Instagram because Reels were too addictive but needed the app for DMs. Dull is an iOS app that wraps Instagram, YouTube, Facebook, and X in a web view, then uses CSS and JavaScript injection (with MutationObserver for lazy-loaded content) to strip out short-form video. It also offers grayscale mode, time limits, and usage tracking. The developer acknowledges the ongoing maintenance burden: platforms constantly change their DOM structures and obfuscate class names.
HN Discussion: The top concern was legal viability — selling a product that wraps and modifies other companies’ services has historically led to takedown requests and API bans. Commenters pointed to existing alternatives like browser extensions (IGPlus, UnTrap for YouTube) and uBlock Origin filters. The philosophical point resonated: the fact that someone had to build a separate app to get the version of Instagram from five years ago says a lot about platform design incentives.
Weather.com/Retro
The Weather Channel launched a retro version of its weather display that recreates the look and feel of the classic WeatherStar 4000 — the cable-TV local forecast system with scrolling text, blocky radar maps, and smooth jazz. The page detects your location, shows current conditions in that distinctive CRT-era aesthetic, and plays the characteristic background music.
HN Discussion: Nostalgia hit hard. Commenters loved the music and the faithful recreation, though several noted that an independent project at weatherstar.netbymatt.com had already done a more complete simulation. The main request was a loop/autoplay mode so it could run continuously on a spare screen. The developer community discussed the technical implementation — filter stacking and CSS blurring — with some assuming it was streamed video rather than rendered client-side.
A New C++ Back End for ocamlc
Stephen Dolan (stedolan) opened a pull request adding a C++ code-generation back end to the OCaml compiler. Rather than targeting the compiler's existing output formats, the new -incr-c flag emits dense C++ template metaprogramming code. The example in the PR compiles an OCaml prime sieve into templates whose instantiation performs the computation at compile time, trading long compile times for zero-cost abstractions at runtime.
HN Discussion: Commenters appreciated the sheer audacity of using C++ template metaprogramming as a compilation target. One noted the irony that the generated C++ computes primes via template instantiation using 3.1 GiB of memory — “Finally, I can get some primes on my laptop.” Discussion touched on whether this approach could simplify embedding OCaml into existing C++ codebases where linking against the standard OCaml runtime is awkward.
Fast and Gorgeous Erosion Filter
Rune Skovbo Johansen detailed a GPU-friendly erosion simulation for procedural terrain generation that produces realistic branching gullies and ridges without actually simulating water flow. Every point in the terrain can be evaluated independently, making it trivially parallelisable and compatible with chunked terrain systems. The technique uses analytical approximations of erosion patterns rather than iterative simulation, achieving visual quality that rivals expensive particle-based approaches at a fraction of the cost.
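Johansen's actual filter is not published as code in the post, but the general shape of analytic, per-point terrain functions can be illustrated with ridged, domain-warped value noise. Everything below is a generic sketch of that family of techniques, not his method:

```python
import math

def _lattice(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1) per lattice point."""
    h = (ix * 374761393 + iy * 668265263 + seed * 144269504) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((h ^ (h >> 16)) & 0xFFFFFFFF) / 2**32

def value_noise(x, y, seed=0):
    """Smoothstep-interpolated lattice noise in [0, 1)."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    n00, n10 = _lattice(ix, iy, seed), _lattice(ix + 1, iy, seed)
    n01, n11 = _lattice(ix, iy + 1, seed), _lattice(ix + 1, iy + 1, seed)
    top = n00 + sx * (n10 - n00)
    bottom = n01 + sx * (n11 - n01)
    return top + sy * (bottom - top)

def eroded_height(x, y):
    """Ridged, domain-warped fractal: sharp creases read as gullies.

    Every point is computed independently (no neighbour reads, no
    iterative water simulation), so the function parallelises
    trivially and works per chunk.
    """
    # Domain warp: offset the sample position with low-frequency noise.
    wx = x + 2.0 * value_noise(x * 0.3, y * 0.3, seed=7)
    wy = y + 2.0 * value_noise(x * 0.3, y * 0.3, seed=11)
    height, amplitude, frequency = 0.0, 0.5, 1.0
    for octave in range(4):
        n = value_noise(wx * frequency, wy * frequency, seed=octave)
        height += amplitude * (1.0 - abs(2.0 * n - 1.0))  # ridge transform
        amplitude *= 0.5
        frequency *= 2.0
    return height
```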
HN Discussion: Per-chunk parallelisability was identified as the killer feature for any procedural generation algorithm. One commenter suggested fitting parameters to high-resolution LiDAR data to produce statistically realistic terrain for specific geological histories. Comparisons to Dwarf Fortress’s erosion simulation and classic tools like Terragen from the 1990s added historical context. An interactive ShaderToy demo was shared for anyone wanting to see the filter in action.
The Future of Code Search Is Not Regex
Dmitry Kovalenko argues that semantic code search — understanding what code means rather than matching literal strings — will replace regex-based search as AI models become embedded in development workflows. The post surveys existing tools (ast-grep, ColGREP) and proposes that embedding-based approaches can find conceptually similar code even when variable names and structure differ entirely.
HN Discussion: Commenters shared their experiences with existing semantic search tools. One mentioned ColGREP (semantic code search for terminals and agents) and ast-grep, noting that LLMs struggle to produce correct ast-grep queries without fine-tuning. Someone else open-sourced a code search implementation claiming 100x speed over ripgrep. The discussion split between those who found semantic search genuinely useful for exploring unfamiliar codebases and those who argued regex remains faster and more precise when you know what you’re looking for.
Quantum Computing Bombshells That Are Not April Fools
Scott Aaronson surveys recent developments in quantum computing that, despite being announced on April 1st, are entirely real. The post covers progress on error-corrected logical qubits, new results on quantum advantage in specific computational tasks, and updates from major labs on scaling quantum processors. Aaronson’s signature approach is to distinguish genuinely important milestones from overhyped press releases, and this round-up applies that filter to several headline-grabbing claims that happened to land on a date associated with pranks.
HN Discussion: Commenters appreciated the timing — Aaronson explicitly curating “things that sound fake but aren’t” cut through the April Fools noise. Debate focused on which of the covered results actually bring practical quantum computing closer versus those that are impressive physics but computationally trivial. The gap between logical qubit counts needed for useful algorithms and what currently exists remains enormous, and several commenters pushed back on the “bombshell” framing for results that are incremental rather than transformative.
Reverse Engineering Crazy Taxi, Part 2
The second instalment of a series reverse-engineering the Dreamcast game Crazy Taxi picks up after cracking the .all archive format. This post tackles the .shp file format, suspected to contain 3D model data. The author uses the game’s included cube0.shp — a simple cube model that serves as a Rosetta Stone — to deduce the binary format by comparing known geometry (a cube has 8 vertices, 6 faces, 12 triangles) against the raw bytes. The write-up explains vertex positions, normals, and face elements using Wavefront .obj files as a reference point.
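The Rosetta-Stone method generalises well. Assuming, purely for illustration, that vertices are packed as little-endian float32 triples (the post's actual .shp layout may differ), a candidate decoding can be tested against the known cube like this:

```python
import struct

def read_vertices(data, offset, count):
    """Read `count` packed XYZ float32 triples starting at `offset`.

    The layout guessed here (little-endian float32, 12 bytes per
    vertex) is the kind of hypothesis you validate against cube0.shp:
    a unit cube must decode to exactly eight corners with matching
    +/- coordinates, or the guess is wrong.
    """
    return [struct.unpack_from("<3f", data, offset + 12 * i)
            for i in range(count)]
```

Packing a cube's eight corners and reading them back confirms the decoder round-trips; against real cube0.shp bytes, a wrong layout fails the same sanity check.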
HN Discussion: The thread was sparse (the post was two days old), but commenters appreciated the pedagogical approach of starting from a known-simple model rather than guessing at complex ones. The comparison to .obj file formats as a Rosetta Stone for understanding proprietary binary geometry was highlighted as a transferable technique for any reverse engineering project.
IPv6 Address, as a Sentence You Can Remember
A web tool that converts IPv6 addresses into memorable English sentences and back. Each 16-bit segment maps to a word, producing phrases like “The amazing champions inspire boldly like brilliant genius” from a 128-bit address. The tool works in both directions — paste an IPv6 address to get a sentence, or type a sentence to recover the address.
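The scheme is easy to reproduce in miniature. The sketch below uses generated placeholder words where the real tool presumably ships a curated 65,536-word vocabulary:

```python
import ipaddress

# Placeholder vocabulary: one word per possible 16-bit value.
WORDS = [f"word{n:05d}" for n in range(65536)]
INDEX = {word: n for n, word in enumerate(WORDS)}

def address_to_sentence(addr):
    """Map each of the eight 16-bit groups of the address to a word."""
    packed = int(ipaddress.IPv6Address(addr))
    groups = [(packed >> shift) & 0xFFFF for shift in range(112, -16, -16)]
    return " ".join(WORDS[g] for g in groups)

def sentence_to_address(sentence):
    """Invert the mapping: words back to groups, groups to an address."""
    value = 0
    for word in sentence.split():
        value = (value << 16) | INDEX[word]
    return str(ipaddress.IPv6Address(value))
```

With 16 bits per word, any IPv6 address becomes exactly eight words, and the mapping round-trips losslessly.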
HN Discussion: The fundamental question of why anyone needs to memorise an IPv6 address was raised immediately — one commenter pointed out that best practice is to use temporary IPv6 addresses (RFC 8981) that rotate, making memorisation counterproductive. Others compared the approach to S/KEY (RFC 1760), which represented 64-bit integers as six-word sequences for similar reasons. The generated sentences were critiqued for being less memorable than intended — “How morally the enviable assistances categorize the insistent iodine” doesn’t exactly roll off the tongue.
SolveSpace Working on Windows 2000 (2025)
A GitHub issue documenting that SolveSpace, an open-source parametric 2D/3D CAD application, can be compiled to run on Windows 2000 with minimal effort. The project already officially supports Windows Vista through 11, Linux, macOS, and the web. Running on Windows 2000 means it covers every major platform from the last 26 years.
HN Discussion: Commenters enjoyed the retro-computing appeal, with one wondering how much additional effort would be needed to target Windows 9x. The discussion veered into nostalgia for the era of small, statically-linked native Windows applications built with Petzold-style Win32 API programming — “I’m still bummed the web won the UI wars.”
Salomi: Extreme Low-Bit Transformer Quantization
A research repository exploring whether binary or near-binary weight representations can match ternary quantisation in transformer models. The honest conclusion, documented extensively in RESEARCH.md and an “Honest Assessment” document, is that strict 1.0 bits-per-parameter post-hoc binary quantisation does not hold up under rigorous evaluation. The more credible results cluster around 1.2-1.35 bpp using Hessian-guided vector quantisation, mixed precision, or magnitude-recovery methods. The repo is framed as a research workspace rather than a production tool.
HN Discussion: Barely any engagement — one commenter asked tersely whether the repo was “shoveled out with Claude/Codex to ride off the Bonsai release,” questioning the project’s authenticity and motivation.
Web & Infrastructure
Steam on Linux Use Skyrocketed Above 5% in March
Valve’s monthly hardware survey shows Linux crossing the 5% market share threshold on Steam for the first time, driven almost entirely by SteamOS on the Steam Deck and the growing maturity of Proton (Valve’s Windows compatibility layer). The milestone is significant because Linux gaming was considered a niche curiosity for decades; reaching 5% suggests a sustainable and growing user base rather than a temporary spike.
HN Discussion: Commenters debated how much of the growth is Steam Deck versus desktop Linux, with consensus that the Deck accounts for most of it. The Proton project was credited as the single most important enabler — it made thousands of Windows-only games “just work” on Linux without developer intervention. Some noted macOS has been losing Steam share simultaneously, partly due to Apple’s 32-bit app deprecation and partly due to gaming-unfriendly GPU pricing.
History & Science
What Gödel Discovered (2020)
A programmer’s intuitive explanation of Kurt Gödel’s 1931 incompleteness theorems, aimed at readers without formal mathematical training. The essay walks through the historical context — Frege’s logical foundations, Russell’s paradox that broke them, Hilbert’s programme for a complete and consistent mathematical foundation — and then explains how the 25-year-old Gödel proved that any sufficiently powerful consistent formal system must be incomplete. The proof works by constructing a mathematical statement that essentially says “this statement cannot be proven”, creating a loop the system cannot resolve.
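In symbols, with $\mathrm{Prov}_T$ formalising "provable in the system $T$" and $\ulcorner G \urcorner$ the numeric code of $G$, the constructed sentence satisfies (a standard textbook rendering, not notation from the essay):

```latex
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
% If T is consistent, T proves neither G nor \neg G, so T is incomplete.
```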
HN Discussion: The piece was praised for making the proof accessible without sacrificing the core insight. Discussion ranged into whether Gödel’s theorem has practical implications for computer science (it does — it’s closely related to the halting problem) and whether modern AI systems face analogous limitations in reasoning about their own outputs.
System Administration
DRAM Pricing Is Killing the Hobbyist SBC Market
Jeff Geerling documents how DRAM price increases over the past two years have made single-board computers dramatically more expensive, pricing out the hobbyist and educational markets that the Raspberry Pi originally created. Boards that once cost $35 now regularly exceed $60-80, and manufacturers are cutting RAM configurations to maintain price points — which defeats the purpose of using SBCs as capable miniature computers. The root cause traces to consolidation among DRAM manufacturers and surging demand from AI/datacentre customers.
HN Discussion: Several commenters argued the SBC market’s problems go beyond DRAM — SoC availability, supply chain fragmentation, and the Raspberry Pi Foundation’s shifting priorities all contribute. The used x86 mini-PC market (Dell Wyse, Lenovo Tiny PCs available for $50-80 on eBay) was repeatedly suggested as a better value proposition for most hobbyist projects, offering dramatically more compute power at similar prices. The counterargument is that SBCs still win on GPIO, form factor, and power consumption for embedded projects.
Windows Equivalents of Most Used Linux Commands
A reference guide mapping common Linux command-line operations to their Windows equivalents: ls → dir, grep → findstr, ps → tasklist, kill → taskkill, and so on. The article covers about 20 common commands with syntax examples for each platform.
HN Discussion: The author’s admission that the content below a certain point was AI-generated drew immediate criticism — “If you can use AI to generate this list, so can anyone. Why would I want to read AI slop?” Practical corrections included pointing out that find is the wrong tool for locating files across a system (use plocate instead), that kill -9 should never be the first signal tried, and that netstat works on Linux too, making the comparison misleading. The ss64.com reference site was recommended as a superior non-AI resource.
Academic & Research
Set the Line Before It’s Crossed
A framework for proactively defining personal boundaries before they’re violated. The essay distinguishes three types of lines — soft, firm, and hard — each with predefined consequences. The central argument is that without pre-committed boundaries, the normalisation of deviance causes lines to silently shift: each minor violation is rationalised as a one-off, until the boundary has moved so far that the originally unthinkable becomes routine. The article includes practical templates for defining criteria, setting violation thresholds, and automating consequences.
HN Discussion: The top comment argued that rigid pre-set lines can be counterproductive, making you resistant to legitimate change and anchoring your values as they are today at the cost of growth. Personal stories of failing to enforce boundaries — in abusive relationships, exploitative friendships, and financial commitments — filled out the practical side. One commenter noted the obvious parallel to how organisations handle security policies, where gradual normalisation of minor violations is exactly how major breaches happen.
The Revenge of the Data Scientist
Hamel Husain argues that in the age of LLM-powered coding, the skills that make data scientists valuable are precisely the skills the industry suddenly needs: designing rigorous experiments, evaluating stochastic systems, debugging non-deterministic outputs, and building evaluation pipelines. While software engineers can now generate code with AI assistants, someone still needs to assess whether that code actually works correctly across edge cases — and that “someone” is increasingly the data scientist whose evaluation expertise translates directly.
HN Discussion: A pessimistic counterargument ran long: data scientists were valued for model creation, which LLM providers now handle. Evaluation and monitoring work is unglamorous and hard to justify to management as a high-paid speciality. A math-focused data scientist reported persistent frustration with LLMs as research partners — confidently wrong answers that take days to detect. Others noted that LLMs tend to get trapped in local minima on codebases, rarely proposing architectural refactors, making the “watch and evaluate” skill genuinely necessary.
Business & Industry
Show HN: NASA Artemis II Mission Timeline Tracker
A Show HN post presenting an interactive timeline visualiser for the Artemis II mission, allowing users to track mission milestones, orbital manoeuvres, and crew activities in real time. Built as a single-page web application, it maps the planned mission sequence against live telemetry where available.
HN Discussion: Minimal comments — one brief positive note. The project speaks for itself as a timely tool during an active mission.
Ask HN: Who Is Hiring? (April 2026)
The monthly HN hiring thread, where companies post open positions directly in the comments. April’s thread features the usual mix of startups and established companies hiring across software engineering, ML/AI, DevOps, and infrastructure roles. Notable postings include positions at Supabase (remote, open-source), Oklo (nuclear reactor software), River (Bitcoin financial services, Elixir stack), and FetLife (large-scale Rails monolith, fully remote).
HN Discussion: Hiring threads are primarily informational rather than discursive, but readers scan them for salary range signals and remote-work policies. The continued dominance of “fully remote” listings reflects a market where companies still compete on flexibility despite return-to-office pressure at major tech firms.
Thirty stories, zero filler. Back tonight with the evening brief.