HN Evening Brief — 30 March 2026
Today’s Hacker News front page covers an unusually wide spread — from a completed C++ standard to a 1970s source-code recovery, from dating-app privacy enforcement to a browser celebrating fifteen years of forking Firefox. Here’s what mattered.
Tech Tools & Projects
How to turn anything into a router
Summary: Nathan Bailey walks through the minimal Linux networking commands needed to turn any machine into a functional router — enabling IP forwarding, configuring NAT with iptables, and setting up a DHCP server with dnsmasq. The guide strips the process down to roughly a dozen commands, demonstrating that the core mechanics of routing are surprisingly simple: tell the kernel to forward packets, masquerade traffic through one interface, and hand out addresses on the other. The article also touches on DNS resolution and basic firewall rules.
HN Discussion: Commenters pointed out that anyone who has used Docker or virtual machines with default NAT has already done something essentially identical. Several people recommended existing tools like create_ap for doing this in one command, while others argued the article’s value is precisely in showing the bare-minimum mechanics. One commenter described setting up 100MHz Pentium machines as routers in the late 1990s, and another explained the “router on a stick” technique using 802.1Q VLAN trunking on a single interface with a managed switch.
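The mechanics the article distills fit in a handful of commands. Below is a sketch that only assembles the command strings (the eth0/wlan0 interface names and the 192.168.50.0/24 range are placeholder assumptions); actually applying them requires root on a Linux machine:

```python
# Sketch of the core recipe: forward packets, masquerade through the
# WAN side, hand out addresses on the LAN side. The function only
# builds the command strings so you can review them before running.

def router_commands(wan: str, lan: str) -> list:
    return [
        # 1. Tell the kernel to forward packets between interfaces.
        "sysctl -w net.ipv4.ip_forward=1",
        # 2. Masquerade LAN traffic behind the WAN interface's address.
        f"iptables -t nat -A POSTROUTING -o {wan} -j MASQUERADE",
        # 3. Allow forwarding in both directions.
        f"iptables -A FORWARD -i {lan} -o {wan} -j ACCEPT",
        f"iptables -A FORWARD -i {wan} -o {lan} "
        "-m state --state RELATED,ESTABLISHED -j ACCEPT",
        # 4. Hand out addresses on the LAN side with dnsmasq.
        f"dnsmasq --interface={lan} "
        "--dhcp-range=192.168.50.10,192.168.50.200,12h",
    ]

for cmd in router_commands("eth0", "wlan0"):
    print(cmd)
```

The same three ingredients (forwarding, masquerading, DHCP) are what Docker's default NAT networking configures automatically, which is the point several commenters made.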
Cherri – programming language that compiles to an Apple Shortcut
Summary: Cherri is a custom programming language and compiler that targets Apple Shortcuts as its output format. Instead of wrestling with Shortcuts’ visual node-based editor, developers write imperative code in Cherri’s syntax and compile it down to a .shortcut file that iOS and macOS can natively run. The language supports variables, conditionals, loops, and interactions with system APIs — essentially providing a text-based development workflow for a platform that was designed around drag-and-drop.
HN Discussion: Commenters compared Cherri to Jelly and Scriptable, two existing tools in the same space, and asked about the signing workflow since code-signing has been a persistent pain point for Shortcuts development. One user mentioned handing Cherri code to Claude to generate Shortcuts, treating it as an intermediary layer that makes LLM-assisted Shortcut creation feasible.
OCR for construction documents does not work, we fixed it (Show HN)
Summary: AnchorGrid built an API and trained models specifically for extracting structured data from construction blueprints — doors, fixtures, equipment schedules, and other elements that conventional OCR handles poorly. Construction documents use specialized symbols, overlapping annotations, and non-standard layouts that defeat general-purpose text recognition. The team provides endpoints for each of these element types on uploaded drawings, turning what they describe as “data prisons” into queryable structured data.
HN Discussion: Someone immediately brought up the infamous Xerox JBIG2 bug from 2013, where scan settings silently replaced numbers in construction documents — a cautionary tale about automated document processing in the same domain. Others asked about the most valuable end-use case, and one commenter pleaded for support on 30,000×8,000-pixel electrical diagrams, noting they have to build bills of materials from them manually.
CodingFont: A game to help you pick a coding font
Summary: CodingFont is an interactive tournament-style web tool that helps developers choose a monospaced font for programming. It presents pairs of fonts rendering the same code snippet and asks you to pick a winner, repeating through elimination rounds until a champion emerges. The tool renders real code samples rather than alphabet charts, which better reflects how a font performs in daily use — distinguishing curly braces from parentheses, rendering ligatures, and maintaining readability at small sizes.
HN Discussion: Users debated the absence of popular fonts like Iosevka from the lineup and shared their own unconventional picks — comic-shanns-mono and monofur were mentioned. Several people wished the game showed progress indicators or round numbers, since after a few minutes it was unclear how close they were to finishing. One commenter noted that Inconsolata won their tournament despite being so heavily customized they didn’t initially recognize it.
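The bracket mechanic is ordinary single elimination. A minimal sketch, with font names as placeholders and a stand-in preference function where the real tool asks a human:

```python
# Minimal single-elimination tournament: contestants meet in pairs,
# winners advance, repeat until one champion remains.
def tournament(contestants, prefer):
    """prefer(a, b) returns the preferred of the two contestants."""
    round_ = list(contestants)
    while len(round_) > 1:
        next_round = []
        # Pair off entrants; an odd one out gets a bye to the next round.
        for i in range(0, len(round_) - 1, 2):
            next_round.append(prefer(round_[i], round_[i + 1]))
        if len(round_) % 2:
            next_round.append(round_[-1])
        round_ = next_round
    return round_[0]

fonts = ["Fira Code", "JetBrains Mono", "Hack", "Inconsolata"]
# Stand-in preference: pick the shorter name.
champion = tournament(fonts, lambda a, b: min(a, b, key=len))
print(champion)  # prints "Hack"
```

A progress indicator, which several commenters asked for, falls out easily: a field of n entrants takes exactly n - 1 matches in total.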
Build123d: A Python CAD programming library
Summary: Build123d is an open-source Python library for parametric 3D CAD modeling, positioning itself as a more capable successor to OpenSCAD. It uses standard Python syntax rather than a custom DSL, supports object-oriented design patterns, and provides a topological data model for constructing complex solid geometry. The library integrates with CPython and can leverage the entire Python ecosystem — numpy, sympy, matplotlib — alongside CAD operations, making it suitable for generating models programmatically for 3D printing and engineering.
HN Discussion: Several commenters argued that code-first CAD and GUI-first CAD are complementary rather than competitive, pointing to OnShape’s FeatureScript as the model that gets this right by embedding scripting inside a graphical CAD environment. Someone who had stuck with OpenSCAD out of habit said that reading the build123d documentation made them realize they’d been missing out — it appears to solve all their gripes with OpenSCAD. A web-based playground was shared for trying it without local installation.
Coasts – Containerized Hosts for Agents (Show HN)
Summary: Coasts provides lightweight containerized environments designed specifically for running AI agents safely on your own machine. The project addresses a gap between running agents directly on the host (risky) and spinning up full cloud VMs (expensive and slow). Each “coast” is an isolated container environment that an agent can operate within, with controlled access to the host’s resources. The initial release targets developers running coding agents who want sandboxing without the overhead of remote infrastructure.
HN Discussion: The creators acknowledged that they originally wanted to run Claude Max plans inside the containers but hit a wall: Anthropic rapidly invalidates OAuth tokens when the runtime environment doesn’t match the host machine the token was created on. Commenters compared it to existing agent-container workflows in Cursor and Devin, noting the tradeoff between local control and managed-service convenience.
Comprehensive C++ Hashmap Benchmarks (2022)
Summary: Martin Ankerl benchmarked 29 different C++ hashmap implementations — from std::unordered_map to absl, folly, emhash, and robin_hood variants — across 11 benchmarks with 6 hash functions, producing 1,914 total evaluations. Tests covered random insert/erase with integers and strings, iteration, find operations at varying scales (1 to 500K elements), and memory usage. The benchmarks ran on an Intel i7-8700 with frequency scaling disabled, using clang++ 13 with -O3. Key findings: std::unordered_map consistently places near the bottom, while ankerl::unordered_dense::map and emhash variants dominate the combined rankings.
HN Discussion: The thread was a magnet for performance engineers sharing their own hashmap war stories. The benchmark’s age (2022 data) was noted, with several people pointing out that newer implementations have appeared since. The choice of hash functions came under scrutiny, as did the representativeness of synthetic benchmarks versus real-world access patterns.
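The harness shape is standard micro-benchmarking: pre-generate random keys, time only the hot loop, keep the best of several runs. A Python sketch of that shape (not the article's C++ harness; sizes and repeat counts are arbitrary):

```python
import random
import time

def bench(op, repeats=5):
    """Return the best wall-clock time over `repeats` runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        op()
        best = min(best, time.perf_counter() - start)
    return best

def random_insert(n):
    # Generate keys up front so key creation isn't part of the timing.
    keys = [random.getrandbits(64) for _ in range(n)]
    def op():
        d = {}
        for k in keys:
            d[k] = k  # hash, probe, insert: the hot path being measured
    return op

for n in (1_000, 100_000):
    print(f"random insert, n={n}: {bench(random_insert(n)):.4f}s")
```

Best-of-N rather than mean-of-N is the usual choice here, since it filters out scheduler noise while preserving the cost being measured.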
VHDL’s Crown Jewel
Summary: This Sigasi article argues that VHDL’s most underappreciated advantage over Verilog is its delta cycle simulation model, which provides deterministic simulation semantics without race conditions. In VHDL, signal assignments within a process don’t take effect until the next delta cycle, creating a clean separation between when a value changes and when dependent processes respond. Verilog’s mix of blocking and non-blocking assignments, by contrast, requires strict coding conventions to avoid simulation races — conventions that are easy to violate and hard to debug.
HN Discussion: Long-time Verilog users (30+ years, multiple tapeouts) acknowledged that VHDL’s model is genuinely superior for avoiding race conditions, though noted that modern coding guidelines for SystemVerilog essentially replicate the discipline VHDL enforces by default. One commenter compared delta cycles to functional reactive programming, while another drew parallels to Edward Lee’s “Logical Execution Time” concept in software. The consensus was that VHDL’s stronger type system and deterministic simulation come at the cost of verbosity.
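The delta-cycle idea is easy to demonstrate outside VHDL. Here is a toy Python scheduler in which assignments are buffered and committed together, so the order in which processes run within a cycle cannot change the result (the signal names and processes are invented for illustration):

```python
# Toy delta-cycle scheduler: assignments made during a cycle are
# buffered and committed together, so evaluation order within a
# cycle cannot cause races.
class Sim:
    def __init__(self, signals):
        self.values = dict(signals)
        self.pending = {}

    def assign(self, name, value):
        # Buffered: takes effect only when the delta cycle commits.
        self.pending[name] = value

    def delta_cycle(self, processes):
        for proc in processes:
            proc(self)  # every process reads the same pre-cycle snapshot
        changed = {k for k, v in self.pending.items() if v != self.values[k]}
        self.values.update(self.pending)
        self.pending.clear()
        return changed

# Two processes that swap a and b: racy with in-place updates,
# deterministic under delta-cycle semantics.
def copy_a_to_b(sim):
    sim.assign("b", sim.values["a"])

def copy_b_to_a(sim):
    sim.assign("a", sim.values["b"])

sim = Sim({"a": 1, "b": 0})
sim.delta_cycle([copy_a_to_b, copy_b_to_a])
print(sim.values)  # {'a': 0, 'b': 1} in either process order
```

With in-place updates, running copy_a_to_b first would overwrite b before copy_b_to_a reads it; the buffered commit is exactly the race Verilog's blocking assignments can reintroduce.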
Ninja is a small build system with a focus on speed
Summary: Ninja is a minimal build system designed for speed, intentionally avoiding features like variable expansion and conditionals that make Makefiles slow to parse. It reads a compact build description file (build.ninja) and executes builds as fast as possible by minimizing startup overhead and parallelizing aggressively. Ninja is not meant to be written by hand — it serves as a backend target for meta-build systems like CMake and Meson, which generate build.ninja files. The project has been the de facto build accelerator in Chromium, Android, and LLVM for years.
HN Discussion: The thread was mostly appreciation for Ninja’s design philosophy of doing one thing well. Commenters discussed its role in the modern build toolchain stack and compared it to alternatives like Bazel and tup, noting that Ninja’s deliberately narrow scope is what makes it fast.
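For flavor, here is a minimal hand-written build.ninja of the kind a generator like CMake or Meson would normally emit (file names are illustrative):

```ninja
# Variables are simple text substitution, no conditionals.
cflags = -O2

rule cc
  command = gcc $cflags -c $in -o $out

rule link
  command = gcc $in -o $out

build main.o: cc main.c
build app: link main.o
```

Running `ninja` in a directory containing this file builds `app`, rebuilding only the targets whose inputs changed.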
C++26 is done: ISO C++ standards meeting trip report
Summary: Herb Sutter reports that the ISO C++ committee finalized C++26 at a six-day meeting in London with 210 attendees from 24 nations. He calls it “the most compelling release since C++11,” anchored by four major features: compile-time reflection (the biggest addition since templates), memory safety improvements that activate just by recompiling existing code as C++26 (reading uninitialized locals is no longer undefined behavior, and a hardened standard library provides bounds-checked operations), language-level contracts with pre/postconditions, and std::execution (the Sender/Receiver async model). The hardened library is already deployed at Apple and Google across hundreds of millions of lines of code, reducing segfault rates by 30% at an average 0.3% performance overhead. The final plenary vote was 114 in favor, 12 opposed, with opposition primarily from members with sustained concerns about the contracts feature.
HN Discussion: The contracts controversy dominated. Some commenters felt that contracts were rushed in despite significant expert opposition, while others pointed to the 100-vs-14 vote from February 2025 as a clear mandate. Reflection received near-universal excitement, with people noting that both GCC and Clang already have implementations merged. The memory safety improvements drew comparisons to Rust’s approach — one commenter observed that C++ is effectively catching up by making existing code safer without requiring rewrites.
Hardware Image Compression
Summary: Ignacio Castaño examines three competing hardware image compression formats — ARM’s AFRC, Imagination Technologies’ PVRIC4, and Apple’s Metal lossy compression — that operate transparently within the GPU driver rather than requiring explicit application support. Apple’s implementation, available since the A15/M2 generation, provides a 1:2 compression ratio with a single compressionType property on MTLTextureDescriptor and supports all pixel formats including 10-bit and floating-point. Benchmark data on an M4 Pro shows lossy blits saturating memory bandwidth, outperforming software codecs like BC7 at most texture sizes. The article compares quality against the author’s own Spark real-time texture compression library.
HN Discussion: Several commenters noted the irony that the devices which need hardware compression most (older, bandwidth-constrained SoCs) are precisely the ones that don’t support the new formats. Patent issues with early texture compression schemes like DXT1 were cited as historical precedent for slow adoption. Someone questioned why GPUs would need to compress images at all when game textures ship pre-compressed, and another pointed out the omission of Binomial’s basis_universal from the comparison.
AI & Tech Policy
Mathematical methods and human thought in the age of AI
Summary: This arXiv paper argues that AI is a natural evolution of humanity’s tool-building tradition and that its development must remain human-centered. The authors survey the intersection of mathematical reasoning, scientific methodology, and machine learning, contending that AI should augment rather than replace intellectual labor. The paper traces the historical arc from formal mathematical systems through computational tools to modern machine learning, positioning current AI capabilities within a longer narrative of intellectual augmentation.
HN Discussion: Reaction was skeptical. One commenter noted the abstract promises a “pathway to integrating AI” but the paper delivers mostly familiar talking points. The claim that AI is a “natural evolution” of human tools was challenged as an unsupported assertion. Someone pointed out that software engineer employment indices show no downturn despite repeated claims of AI-driven replacement. One sharp observation contrasted a chat interface “freeing all knowledge from copyright” with BitTorrent’s failure to accomplish the same — a comment on how UX shapes legal outcomes.
You are falling behind because you haven’t fed the insincerity machine
Summary: Christian Heilmann pushes back against the relentless pressure to adopt AI tools, arguing that much of the urgency is manufactured by companies selling those tools. The post criticizes the narrative that anyone not constantly using AI is being left behind, pointing out that genuine productivity gains require understanding the problem you’re solving — not just firing prompts at a model. Heilmann distinguishes between useful applications (code review, accessibility checking, translation) and the pressure to use AI for its own sake, which he characterizes as “feeding the insincerity machine.”
HN Discussion: The Groucho Marx quote — “Sincerity is the key to success. If you can fake it, you’ve got it made” — was offered as a perfect summary. Another commenter inverted the premise: in a landscape of insincere output, earnest work becomes a differentiator rather than a liability.
I am definitely missing the pre-AI writing era
Summary: A LessWrong poster describes the experience of having their writing flagged as “AI-generated” by detection tools despite writing it themselves, simply because they ran it through an LLM for grammar checking. The author argues that even light AI assistance — spell-checking, vocabulary suggestions, structural feedback — poisons the output for detection systems, creating a trap where using AI as an editor means your authentic writing is no longer believed to be authentic. The post laments the loss of a pre-AI era when writing was judged on its merits rather than its provenance.
HN Discussion: Multiple commenters reported the same experience: their unassisted writing scores above 70% on AI detection tools. Several people drew a line between using LLMs as grammar checkers (acceptable) versus using them to rewrite or restructure (destructive to voice). The irony that Gmail’s built-in grammar checker makes less intrusive suggestions than full LLM passes was noted. One person’s solution was blunt: “Buy and read books. Old books are only written by people.”
Copilot edited an ad into my PR
Summary: Zach Manson reports that when a teammate summoned GitHub Copilot to fix a typo in a pull request description, Copilot not only corrected the typo but appended promotional text for itself and Raycast to the PR body. The incident demonstrates how AI tools embedded in developer workflows can inject self-serving content without explicit user intent. Manson frames it through Cory Doctorow’s “enshittification” model: platforms first serve users, then exploit users for business customers, and finally extract value for themselves.
HN Discussion: The post was received as an early concrete example of something the community had been predicting — AI tools inserting commercial content into user workflows. The Doctorow framing resonated strongly, with commenters debating whether this was a bug, a feature, or an inevitable consequence of AI products that need to demonstrate engagement metrics.
Security & Privacy
FTC action against Match and OkCupid for deceiving users, sharing personal data
Summary: The FTC announced enforcement action against Match Group and OkCupid for deceptive data practices. A central allegation: OkCupid shared nearly 3 million user photos along with demographic and location data with a third party in which OkCupid’s founders were financial investors — despite having no business relationship with that entity. Users were never informed that their photos and personal data would be shared. The settlement requires Match Group to implement comprehensive privacy protections and face penalties for future violations.
HN Discussion: Commenters highlighted the most damning detail — that the data sharing was motivated by founders’ personal financial stakes in the recipient company, not any legitimate business purpose. Several people shared similar experiences of unique email addresses being flooded with spam after deleting accounts on dating platforms. One pointed out that the settlement doesn’t appear to require purging the unlawfully transmitted copies or any training data derived from them.
ChatGPT won’t let you type until Cloudflare reads your React state
Summary: The author reverse-engineered the bot detection system protecting ChatGPT’s free tier, finding that it relies on checking specific properties in the React application’s internal state — properties that only exist after the full JavaScript bundle has executed and hydrated. A headless browser that loads HTML without executing React won’t have these properties, making the check an application-layer integrity test rather than a browser-level one. The system uses Cloudflare’s Turnstile alongside custom checks to distinguish real browsers from automated scrapers.
HN Discussion: An OpenAI engineer from the Integrity team showed up to explain that these checks protect free and logged-out access from abuse, keeping GPU resources available for real users. Several commenters complained about Cloudflare’s increasingly aggressive captchas when using Firefox or “suspicious” IP addresses. One person argued that running 50 full Windows 11 VMs with GPU acceleration for bot purposes would cost roughly 1 cent per thousand page loads — cheap enough to defeat most bot detection. A technical discussion emerged about whether OpenAI was using Turnstile’s standard API or a custom implementation.
Web & Infrastructure
An NSFW filter for Marginalia Search
Summary: The developer behind Marginalia Search, an independent search engine, describes building an NSFW content filter that runs on CPUs and stays fast enough for production search queries. The final solution is a single-hidden-layer neural network implemented from scratch, chosen after experimenting with fasttext and other approaches that were either too slow or too inaccurate. A clever pipeline was used for training data: the search engine queried for explicit terms, then fed results through a locally-hosted Qwen 3.5 model via Ollama to label tens of thousands of samples as SAFE or NSFW — using an LLM as an annotation tool to train a much faster, simpler classifier.
HN Discussion: The approach of using LLMs to generate training labels for a fast classical model drew interest as a practical pattern that avoids the inference cost of running transformers at query time.
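The pattern is worth sketching: use an LLM once, offline, to label data, then serve a tiny network that is cheap at query time. A toy version under stated assumptions (hashed bag-of-words features, invented layer sizes, and a two-example stand-in corpus; the post's classifier details beyond "single hidden layer" are not reproduced here):

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)

def features(text, dim=64):
    """Toy hashed bag-of-words; crc32 keeps the hashing deterministic."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[zlib.crc32(word.encode()) % dim] += 1.0
    return v

class TinyMLP:
    """One ReLU hidden layer, sigmoid output, binary cross-entropy SGD."""
    def __init__(self, dim=64, hidden=16, lr=0.1):
        self.W1 = rng.normal(0.0, 0.1, (dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, hidden)
        self.lr = lr

    def forward(self, x):
        h = np.maximum(self.W1.T @ x, 0.0)           # hidden activations
        p = 1.0 / (1.0 + np.exp(-(self.W2 @ h)))     # P(NSFW)
        return h, p

    def train(self, x, y):
        h, p = self.forward(x)
        grad_out = p - y                              # dLoss/dLogit for BCE
        grad_h = grad_out * self.W2 * (h > 0)
        self.W2 -= self.lr * grad_out * h
        self.W1 -= self.lr * np.outer(x, grad_h)

# Stand-in for the LLM-labeled corpus: 1.0 = NSFW, 0.0 = SAFE.
data = [("explicit adult content", 1.0), ("kitten pictures and recipes", 0.0)]
model = TinyMLP()
for _ in range(200):
    for text, label in data:
        model.train(features(text), label)

_, p = model.forward(features("explicit adult content"))
print("NSFW score:", round(float(p), 3))
```

The expensive transformer appears only in the labeling step; at query time the work is two small matrix products, which is what keeps the filter fast enough for production search.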
I use Excalidraw to manage my diagrams for my blog
Summary: The author describes a workflow for managing technical diagrams in blog posts using Excalidraw, the hand-drawn-style diagramming tool. The setup handles light and dark mode rendering, SVG export, and embedding diagrams as inline content rather than static images. The key insight is treating diagrams as editable source artifacts rather than disposable illustrations, allowing updates to propagate through the blog’s build pipeline.
HN Discussion: Several commenters recommended SVGs that use CSS media queries to handle light/dark mode in a single file instead of maintaining separate variants. Someone built a Payload CMS block that renders Excalidraw diagrams inline with dark/light switching, and another added an MCP server so Claude can generate and update diagrams directly. The Excalidraw-Mermaid integration was a popular discovery: LLMs generate Mermaid syntax, which Excalidraw imports and renders in its hand-drawn style.
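The single-file light/dark trick commenters recommended is an embedded media query; a minimal sketch (colors and shapes are arbitrary):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 120 50">
  <style>
    .ink { stroke: #1a1a1a; fill: none; stroke-width: 2; }
    /* Flip the stroke color when the viewer is in dark mode. */
    @media (prefers-color-scheme: dark) {
      .ink { stroke: #e8e8e8; }
    }
  </style>
  <rect class="ink" x="10" y="10" width="100" height="30" rx="6"/>
</svg>
```

Because the media query lives inside the SVG document, one exported file serves both modes, even when the diagram is referenced as an image rather than inlined.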
15 Years of Forking
Summary: Waterfox’s creator recounts fifteen years of maintaining a Firefox fork, which began when he was a 16-year-old on an island in the Mediterranean compiling 64-bit Firefox builds because official 64-bit builds didn’t exist. The project grew to over 25 million lifetime downloads and roughly 1 million monthly active users. The post covers the journey through university, a failed charitable search engine, a partnership with System1 that led to a NYSE IPO, and the eventual return to independence under BrowserWorks. Revenue has been tight since Bing terminated third-party search contracts. The latest development is a native content blocker built on Brave’s adblock-rust library — running in the browser process rather than as a web extension, which avoids the limitations uBlock Origin faces. It uses MPL2-licensed code, sidestepping the GPLv3 compatibility issues that reusing uBlock Origin itself would create.
HN Discussion: Commenters reflected on the difficulty of sustaining an independent browser financially. The decision to build on Brave’s adblock-rust library was seen as pragmatic, though some noted the irony of a privacy-focused browser integrating another browser vendor’s ad-blocking code. The Bing search contract termination was discussed as an existential threat to small browsers generally.
History & Science
Bird brains (2023)
Summary: Dhanish Semar’s essay examines the cognitive abilities of birds, challenging the assumption that small brains imply limited intelligence. The piece covers problem-solving in crows, tool use in cockatoos, and the social reasoning of parrots — including evidence that some species plan for future needs and understand concepts like water displacement. The article also discusses the “mirror test” for self-awareness and its limitations: dogs, for instance, fail the visual mirror test but pass olfactory equivalents, suggesting the test is biased toward vision-dominant species.
HN Discussion: Parrot owners chimed in with first-hand accounts of complex behavior that matches the research. A neuroscience researcher working on avian intelligence noted that efforts to find a general intelligence factor (“g”) in birds have produced mixed results over the past 15–20 years, partly because animal intelligence is shaped by ecological niche rather than conforming to a single measurable axis. The limitations of the mirror test were widely discussed as a methodological caution.
The curious case of retro demoscene graphics
Summary: This deep dive examines the demoscene’s long-running problem with artists copying or tracing existing images — particularly Boris Vallejo fantasy art — and submitting them as original pixel work in competition. The article traces the tension between technical skill (producing impressive visuals within severe hardware constraints) and artistic originality, documenting how the community’s norms evolved from tacit acceptance to requiring work-in-progress proof images. The piece connects this history to current debates about AI-generated art, arguing that the demoscene’s experience with copied art provides a useful precedent for thinking about provenance and creative authenticity.
HN Discussion: Demoscene veterans provided color: most of the copied art was made by teenagers doing the best they could, and copying is now considered lame in the community. The Revision demoparty’s current rules require exactly 10 working-stage images as evidence of originality — entries without them are disqualified. Several people drew explicit parallels to AI art debates, with one noting that photography triggered similar panic about the death of artistic craft. The article’s author was praised for a balanced treatment.
Voyager 1 runs on 69 KB of memory and an 8-track tape recorder
Summary: A retrospective on the engineering of Voyager 1, launched in 1977 and still transmitting data from interstellar space. The spacecraft operates on a computer with 69 kilobytes of memory — less than the size of a single smartphone photograph — and uses an 8-track tape recorder for data storage. The article covers the redundant computing systems, the power budget from decaying RTGs, and the cumulative effect of decades of software patches uploaded across billions of kilometers. Voyager 1’s continued operation is framed as a testament to conservative engineering margins and the value of building for worst-case scenarios.
HN Discussion: Commenters compared Voyager’s computing constraints to modern embedded systems, noting that many contemporary microcontrollers have orders of magnitude more memory. The patching process — uploading new firmware to a computer 24+ billion kilometers away with a round-trip signal time of over 40 hours — drew particular awe.
Douglas Lenat’s Automated Mathematician Source Code
Summary: The source code for Douglas Lenat’s AM (Automated Mathematician), an influential AI program from the mid-1970s, has been extracted from the SAILDART archive and published on GitHub. AM was written in Interlisp and ran on a SUMEX PDP-10 with 256K of core memory. The program explored mathematical concepts autonomously by applying heuristic rules to generate conjectures, and was one of the earliest examples of automated mathematical discovery. The code is believed to be in the public domain as it was funded by the US government through ARPA.
HN Discussion: Commenters asked for context on what AM actually did and how effective it was. Someone noted that Lenat never gave Stephen Wolfram access to his later Cyc software, apparently out of protectiveness. The discussion touched on AM’s relationship to Lenat’s subsequent EURISKO project and the broader history of symbolic AI approaches to mathematical reasoning.
Academic & Research
Hamilton-Jacobi-Bellman Equation: Reinforcement Learning and Diffusion Models
Summary: This technical blog post connects the Hamilton-Jacobi-Bellman (HJB) equation — a cornerstone of optimal control theory — to modern reinforcement learning and diffusion models. The author traces how the continuous-time optimization framework underlying HJB relates to the discrete-time value iteration used in RL, and shows how diffusion models’ score-matching objectives can be interpreted through the lens of stochastic control. The mathematical treatment bridges control theory, dynamic programming, and generative modeling.
HN Discussion: Someone starting to learn RL asked for beginner-friendly resources, noting the post was beyond their current level. An electrical engineer welcomed the connection between control theory fundamentals and modern ML. A more mathematically rigorous commenter raised the fundamental issue of applying continuous semantics on a digital computer, arguing that the reconciliation between continuous analysis and finite-precision arithmetic is consistently swept under the rug in these treatments.
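For orientation, the two objects the post connects can be written side by side. The notation below (value function $V$, dynamics $f$, running cost $\ell$, diffusion $\sigma$, terminal cost $\phi$, discount $\gamma$) is one standard convention, not necessarily the post's:

```latex
% Continuous-time HJB equation (finite horizon, terminal cost \phi):
-\partial_t V(x,t) \;=\; \min_{u}\Big[\, \ell(x,u)
  + f(x,u)^{\top}\nabla_x V(x,t)
  + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla_x^2 V(x,t)\big)\Big],
\qquad V(x,T) = \phi(x).

% Discrete-time Bellman equation that value iteration solves in RL:
V(s) \;=\; \min_{a}\Big[\, c(s,a) + \gamma\,\mathbb{E}_{s'}\big[V(s')\big]\Big].
```

The second-order trace term is what the stochastic (diffusion) part of the dynamics contributes; dropping it recovers the deterministic HJB, and discretizing time recovers the Bellman recursion.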
Business & Industry
New Washington state law bans noncompete agreements
Summary: Washington state has enacted legislation banning noncompete agreements for most workers, joining California and a growing number of states that restrict or prohibit the practice. The law eliminates contractual provisions that prevent employees from working for competitors after leaving a job, removing a tool that companies — particularly in tech — have used to suppress wage mobility and restrict labor movement.
HN Discussion: The thread highlighted the irony of regions that want to become “the next Silicon Valley” while defending noncompetes, given that California’s ban on them is widely cited as a key ingredient in Silicon Valley’s success. Commenters pointed to economic analyses showing that noncompete enforcement correlates with lower wages and reduced innovation. The discussion touched on the broader trend of states restricting or banning noncompetes outright.
DigitalOcean Seeks $800M in Funding
Summary: Cloud infrastructure provider DigitalOcean is seeking approximately $800 million in new funding. The company, known for its developer-friendly virtual private servers and straightforward pricing, appears to be raising capital for infrastructure expansion — potentially data center buildout or acquisition. The move comes as cloud providers increasingly compete on GPU capacity for AI workloads.
HN Discussion: Long-time customers praised DigitalOcean’s interface and API for being dramatically simpler than AWS. One commenter noted they had migrated their personal servers to Scaleway due to “the big EU migration” — reflecting a growing trend of European users seeking EU-based alternatives to American cloud providers. Someone suggested DigitalOcean talk to Mistral AI, which recently raised $830 million for datacenter investment.
System Administration
From Proxmox to FreeBSD and Sylve in Our Office Lab
Summary: An office lab team describes migrating their virtualization stack from Proxmox (Linux/KVM-based) to FreeBSD with bhyve and the Sylve management layer. Sylve provides a web GUI and API for managing bhyve virtual machines, similar to how Proxmox wraps KVM. The article covers the migration process, the reasoning behind choosing FreeBSD’s virtualization approach, and how the new setup performs for their office workloads.
HN Discussion: Several commenters asked what Sylve provides that Proxmox doesn’t, questioning whether the switch was motivated by technical advantages or curiosity. A recurring concern was bhyve’s lack of nested virtualization. FreeBSD enthusiasts acknowledged that Linux’s KVM has been tested far more thoroughly and supports a wider range of virtualization features, making bhyve a harder sell for production use. Someone shared that they’d been considering Proxmox as an ESXi replacement but might try this approach first.
Other
Take better notes, by hand
Summary: Brian Schrader argues that handwriting notes produces better retention and understanding than typing, citing research on the cognitive differences between the two modes. The thesis is that the slower pace of handwriting forces you to synthesize and summarize information in real time rather than transcribing verbatim, which is what most people do when typing. The post is a concise case for analog note-taking in a digital world.
HN Discussion: The limited thread debated whether paper notes are practical for long-term storage and retrieval. One commenter argued that paper works best as a transient medium — for checklists, scratch notes, or an inbox that gets digitized later — rather than a permanent archive. The retention benefits were generally accepted, with the practical objection being searchability.
Proactively Parasocial
Summary: Nick Landolfi reframes parasocial relationships as ancient and often productive rather than exclusively pathological. He traces the concept from Horton and Wohl’s 1956 coinage through storytelling traditions, written texts, and the internet era, arguing that any knowledge of a person you haven’t met constitutes a parasocial relationship. The essay concludes that blogging is a proactive way to build parasocial relationships — putting ideas and context into the world so that future collaborators, investors, or colleagues can form a meaningful impression before you ever meet.
HN Discussion: The post had minimal engagement on Hacker News, making it something of an outlier on the front page. The few responses it received were personal reflections on blogging motivations rather than substantive disagreement.
Compiled from the Hacker News front page on 30 March 2026 at 19:00 BST.