Hacker News Evening Brief: 2026-04-22
Tim Cook’s handover to John Ternus dominates the conversation, a 27B-parameter model challenges the big labs’ moats, and a programmer spends six years getting Linux to run on Windows 98. There is also a surprisingly popular 5x5 pixel font, a book of bodega cats, and an interactive explainer that walks you through the math of GPS. Here is what 30 fresh stories on the front page actually said.
Business & Industry
“Another Day Has Come” — Gruber on Tim Cook’s legacy and Ternus’s succession
Summary: John Gruber writes that Tim Cook’s transition to executive chairman — handing the CEO role to John Ternus, a hardware engineer — is a striking contrast to the Jobs-to-Cook handover of 2011. That transition was forced by illness; this one is a planned departure by a CEO leaving on top. Gruber calls Cook the “GOAT” by the numbers and highlights his famous shareholder-meeting line: “When we work on making our devices accessible by the blind, I don’t consider the bloody ROI.” He also notes that putting a hardware-first leader like Ternus in charge sends a signal about what Apple intends to prioritise next.
HN Discussion: Commenters shared personal stories about how Apple’s accessibility features transformed family members’ lives — one described a blind mother who relies entirely on iPhone and iPad, something JAWS on Windows could never match. Others pushed back gently on Gruber’s generous assessment, wishing Cook had invested Apple’s massive cash hoard into “wow” software experiences rather than Hollywood productions. A few drew parallels to Ballmer’s Microsoft departure, noting that even successful CEOs can become optically mismatched to the era ahead.
Anker made its own chip to bring AI to all its products
Summary: The Verge reports that Anker — best known for charging cables and power banks — has designed a custom silicon chip called “Thus” that embeds a neural-network processor into its product line. The first application is a pair of earbuds with real-time AI noise cancellation that runs entirely on-device, avoiding cloud round-trips. Anker plans to extend the chip to smart-home cameras, power banks, and other peripherals, creating a unified hardware AI layer across categories that currently rely on smartphone apps or cloud services.
HN Discussion: Several commenters admitted they had no idea Anker designed its own silicon, assuming the company was purely a private-label accessories brand. Reactions split between admirers of the vertical-integration play and sceptics who questioned whether consumers actually want AI features in charging cables. One commenter noted the irony of a company famous for simple hardware now joining the “AI in everything” trend.
AI & Tech Policy
Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model
Summary: Alibaba’s Qwen team has released Qwen3.6-27B, a dense 27-billion-parameter model that achieves coding-benchmark scores comparable to models many times larger. The release includes weights for local deployment and a web demo. The team highlights that the model can run on consumer hardware — Simon Willison reported it quantised to 16.8 GB on an M5 Pro with 128 GB RAM, generating at roughly 25 tokens per second — making it one of the strongest open-weight models accessible without a data centre.
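The reported footprint is easy to sanity-check with back-of-the-envelope arithmetic (a sketch of the standard estimate; the exact quantisation format is not given in the post):

```python
# Rough memory footprint of a quantized dense model:
# params * bits_per_weight / 8 bytes. This ignores overhead for
# embeddings, quantisation scales, and the KV cache.

def quantized_size_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# 27B parameters at common quantisation levels:
for bits in (4, 5, 8, 16):
    print(f"{bits}-bit: {quantized_size_gb(27, bits):.1f} GB")
```

At 27B parameters, 5 bits per weight works out to 16.875 GB, so the reported 16.8 GB figure is consistent with a mid-range quantisation such as a 5-bit scheme.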
HN Discussion: The thread was dominated by practical deployment reports: users running the model on M4 and M5 Macs with 32–128 GB RAM, comparing its SVG-generation and coding ability favourably to Opus 4.7 at a fraction of the cost. Several commenters asked the perennial question of what competitive advantage closed-model labs retain when open-weight models deliver 95% of the capability on consumer silicon. One user posted CodePen demos of a pelican on a bicycle and a dragon eating a hotdog, generated entirely by the model as an SVG.
Our eighth generation TPUs: two chips for the agentic era
Summary: Google Cloud unveiled its eighth-generation TPU architecture, built specifically for the “agentic era” of AI workloads where models run longer, multi-step tasks rather than single-turn queries. The new design uses two complementary chips — one optimised for training and one for inference — connected by a high-bandwidth interconnect that Google claims reduces token-generation latency for agent loops. The announcement is notable because it signals Google’s belief that the dominant AI workload pattern is shifting from batch training to sustained, interactive agent execution.
HN Discussion: Commenters observed that Google has been quietly building market share while rivals grab headlines, attributing the success to vertical integration — Google designs the chips, runs the data centres, and ships the models, eliminating the friction that pure chip vendors or pure cloud providers face. Several noted that Gemini consistently produces fewer tokens per answer than GPT or Claude, and speculated this reflects a deliberate training strategy that prioritises efficiency over verbose reasoning traces.
ChatGPT Images 2.0
Summary: OpenAI has released Images 2.0, the next iteration of its image generation model, available in ChatGPT, the Codex agent, and the API. The update adds support for multiple aspect ratios, a “thinking” mode with built-in reasoning, and multilingual text rendering that handles Japanese, Korean, Chinese, Hindi, and Bengali characters embedded inside generated images. OpenAI frames the release not as a creative toy but as a “visual workflow platform” — images as structured outputs for design, education, and development tasks, with outputs reaching 2K resolution in the API.
HN Discussion: Simon Willison shared results from testing the model on a “Where’s Waldo” prompt with a raccoon holding a ham radio, finding the output plausible but imperfect. Another commenter ran an elaborate 64-Pokémon-grid prompt that tested style-conditioning rules (8-bit for single-digit Pokédex numbers, charcoal for two-digit, Ukiyo-e for three-digit) — Images 2.0 followed the logic correctly while a competing model applied styles by row rather than by number and misidentified several Pokémon. Several commenters noted the “uncanny valley” feeling of looking at photorealistic images where no human participated in composition, photography, or design.
Martin Fowler: Technical, Cognitive, and Intent Debt
Summary: Martin Fowler and Kent Beck appeared on stage with Gergely Orosz at the Pragmatic Summit, comparing today’s AI shift to earlier technology transitions. Fowler introduced a tripartite framework for the debt AI introduces: technical debt (messy generated code), cognitive debt (developers losing understanding of systems they did not write), and intent debt (the gap between what a human actually wants and what the model produced, which widens when prompts are vague). Beck drew a parallel to Larry Wall’s “three virtues of a programmer” — laziness, impatience, and hubris — arguing that AI threatens laziness because it makes it too easy to generate code without thinking through the abstraction first.
HN Discussion: One commenter pushed back on Beck’s framing, arguing that “moving up abstraction layers” has always created cognitive debt — assembly to Python is a bigger leap than human-to-model — and that thinking deeply about a problem does not require expressing those thoughts as domain-driven code objects. Another noted that Fowler’s “intent debt” maps closely to YAGNI, but most teams use YAGNI to justify not building abstractions rather than the opposite. A senior engineer who has actually worked alongside people mentioned in the article claimed his LLM-managed projects score better on traditional quality metrics than his pre-AI work.
Coding Models Are Doing Too Much
Summary: A researcher documents the “over-editing” problem in AI coding models: asked to fix a single off-by-one error, models rewrite entire functions, add unnecessary input validation, rename variables, and introduce helper functions that were never requested. The author builds a benchmark that measures structural divergence between a minimal correct fix and the model’s actual output, showing that even functionally correct edits can make code review dramatically harder because the diff becomes unrecognisable. The paper proposes training models to be “faithful editors” that change only what is strictly necessary.
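A minimal version of such a divergence metric can be sketched with the standard library’s difflib (my illustration of the idea, not the paper’s actual benchmark):

```python
import difflib

def edit_divergence(minimal_fix: str, model_output: str) -> float:
    """Changed lines relative to the minimal correct fix.
    0.0 means the model changed exactly what was necessary;
    values above 1.0 mean more churn than the fix itself."""
    diff = difflib.unified_diff(
        minimal_fix.splitlines(), model_output.splitlines(), lineterm="")
    changed = sum(1 for line in diff
                  if line.startswith(("+", "-"))
                  and not line.startswith(("+++", "---")))
    total = max(len(minimal_fix.splitlines()), 1)
    return changed / total

minimal = "def f(xs):\n    return xs[-1]\n"
overedited = ("def last_element(items):\n"
              "    if not items:\n"
              "        raise ValueError('empty')\n"
              "    return items[-1]\n")
print(edit_divergence(minimal, minimal))     # identical fix: no divergence
print(edit_divergence(minimal, overedited))  # rename + validation: high churn
```

The over-edited version here is functionally fine, which is exactly the paper’s point: correctness metrics alone would score it perfectly while the reviewer sees an unrecognisable diff.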
HN Discussion: No comments had been posted at time of writing.
Startups Brag They Spend More Money on AI Than Human Employees
Summary: 404 Media reports on a growing trend of AI-startup CEOs posting LinkedIn updates celebrating that their token-compute bills exceed their human-payroll costs. Swan AI’s CEO Amos Bar-Joseph went viral boasting that his four-person company spent $113K in a single month on Claude usage, framing it as a positive signal of growth and a deliberate strategy to “scale with intelligence, not headcount.” The article treats this as a performative metric — the AI equivalent of revenue vanity — and questions whether any of these companies have demonstrated that the spending translates into shipping better products.
HN Discussion: One commenter compared the brag to “a trucking company bragging about how much fuel they’re using,” arguing that cost-per-output is the only metric that matters and by that measure many startups are doing worse than last year, just more expensively. Another suggested the real audience is investors: these posts are signalling rounds, not operational competence. Several noted that tokens have simply replaced lines-of-code as the dumbest productivity metric available.
Surveillance Pricing: Exploiting Information Asymmetries
Summary: The LPE Project examines “surveillance pricing” — the practice where companies use personal data to charge different customers different prices for the same product. The article traces how dynamic pricing evolved from airline yield management to retail algorithmic pricing enabled by data brokers, behavioural tracking, and real-time price-adjustment algorithms. The authors argue this represents an extreme information asymmetry: the seller knows your purchase history, location, demographics, and price sensitivity, while you have no visibility into the pricing logic being applied to you.
HN Discussion: No comments had been posted at time of writing.
Security & Privacy
GitHub CLI now collects pseudoanonymous telemetry
Summary: GitHub has added opt-out telemetry collection to the gh CLI tool. The data includes command names, arguments, timestamps, and a device identifier, all labelled “pseudoanonymous.” The change landed via a small PR that simply removed the environment-variable gate that had kept telemetry off — meaning it is now on by default for every user. GitHub’s stated rationale is understanding feature usage patterns to prioritise development work.
HN Discussion: The thread was sharply critical. One commenter noted that in CI/CD pipelines and bastion-host environments, any outbound connection to GitHub can break network policies — telemetry on by default means these environments may fail or leak data unintentionally. Another contrasted gh with git, which has never phoned home in twenty years: git is entirely local until you explicitly push. A third commenter drew a line between analytics (data attached to a link or artifact, deleted when that artifact is deleted) and surveillance (data that follows the user across sessions). Several suggested a verbose dry-run mode that shows exactly what would be sent without actually sending it.
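The dry-run idea from the thread is straightforward to sketch: a client that renders exactly the payload it would send, and only transmits when explicitly enabled. The class and field names below are hypothetical; gh’s real telemetry schema is not documented in the thread:

```python
import json
import time

class Telemetry:
    """Hypothetical telemetry client: opt-in, with a dry-run mode
    that prints the payload instead of sending it anywhere."""

    def __init__(self, enabled: bool = False, dry_run: bool = True):
        self.enabled, self.dry_run = enabled, dry_run
        self.sent = []  # stand-in for events actually transmitted

    def record(self, command: str, args: list[str]):
        if not self.enabled:
            return None  # default is off: nothing recorded, nothing sent
        event = {"command": command, "args": args, "ts": int(time.time())}
        if self.dry_run:
            print(json.dumps(event, indent=2))  # show, don't send
            return event
        self.sent.append(event)  # stand-in for the network call
        return event

t = Telemetry(enabled=True, dry_run=True)
t.record("pr", ["create", "--fill"])
```

This inverts both defaults the thread objected to: collection is opt-in rather than opt-out, and transmission requires a second explicit step.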
Tech Tools & Projects
DuckDB 1.5.2 – SQL database that runs on laptop, server, in the browser
Summary: DuckDB 1.5.2 is a patch release of the in-process analytical SQL database, featuring incremental performance improvements and bug fixes. The release announcement reiterates DuckDB’s core positioning: a single binary that runs anywhere (laptop, server, browser via WebAssembly) and handles analytical workloads that would normally require a server cluster. The project has gained traction among data engineers who want PostgreSQL-style query semantics without provisioning infrastructure.
HN Discussion: A data engineer called DuckDB “a generational technology innovation” with “insanely good ergonomics” for the data sizes most teams actually deal with. Others shared benchmarks — one linked a Java JDBC benchmark showing fast modifications enabled by newly added user-defined functions. A commenter noted that DuckDB also runs inside Excel via the free xlwings Lite add-in, enabling Jupyter-like notebook workflows within spreadsheets.
MuJoCo – Advanced Physics Simulation
Summary: MuJoCo is an open-source physics engine for robotics simulation, maintained by Google DeepMind, supporting contact-rich dynamics that make it the standard tool for reinforcement-learning training in robotics. The GitHub page highlights recent additions including the MuJoCo Playground RL environment wrapper, which bundles classic DeepMind control benchmarks alongside newer scenarios. The engine runs on macOS and does not require NVIDIA hardware, making it accessible to teams without GPU clusters.
HN Discussion: Several commenters shared their own MuJoCo projects: one person is training a Unitree G1 humanoid robot and praised the engine for not requiring the NVIDIA software stack; another built differential-policy training for quadruped locomotion; and a third noted that “StuffMadeHere” used MuJoCo to simulate a mini-golf course in a recent YouTube video. A commenter who runs MuJoCo simulations in the browser linked a demo of a humanoid walking on a virtual sheet.
Parallel Agents in Zed
Summary: Zed has added support for running multiple AI agents in parallel within a single editor session. The feature lets developers kick off several agent tasks simultaneously — for example, one agent refactoring a module while another writes tests — and merges the results into the same codebase. This addresses a common bottleneck where developers must wait for one agent to finish before starting the next task.
HN Discussion: Comments were brief and positive — one user had just installed Zed with vim mode the night before, another called it “becoming more and more useful by the day.” No substantive critique had appeared at time of writing.
Show HN: Broccoli, one-shot coding agent on the cloud
Summary: Broccoli is an open-source “one-shot” coding agent that connects to Jira, reads a ticket, plans an approach, generates code, and submits a pull request — all in a single automated pass. The readme includes detailed setup instructions for self-hosting on your own infrastructure, and the authors argue that teams should invest in building their own agent harness rather than depending on third-party services.
HN Discussion: A commenter with a similar Jira-connected setup that stops at “analysis and approach” said they were taking inspiration from Broccoli to push their system all the way to code generation. Another praised the detailed setup instructions in the readme, and a third agreed with the philosophy of building on top of existing agent frameworks like Claude Code or Codex rather than starting from scratch.
Windows 9x Subsystem for Linux
Summary: A developer known as Hailey has spent six years building WSL9x, a project that runs a modern Linux kernel inside Windows 98. The work involves reimplementing Linux system-call translation on top of the Win9x kernel, effectively creating a reverse-WSL: instead of running Windows binaries on Linux, it runs Linux binaries on Windows 9x. The project is an extraordinary feat of reverse engineering, requiring deep understanding of both the Win9x internal APIs and Linux kernel expectations.
HN Discussion: Commenters were stunned by the technical achievement, with one asking whether Hailey is “a wizard” and comparing it to the joke about a mathematician proving a theorem is trivial after two hours of explanation. Others compared it to CoLinux and Cygwin — earlier projects that bridged the Windows/Linux gap — and noted the delicious irony that this appears on HN the same day as the “vibe-coded Show HN” thread: one person spent six years understanding Win9x internals, while the other thread is full of apps prompted into existence in 20 minutes. A commenter shared a direct Codeberg link since the original toot required a social-media hop-through.
5x5 Pixel Font for Tiny Screens
Summary: A designer has created a complete 5x5-pixel bitmap font that renders every printable ASCII character within a 25-pixel grid. The font sits in the space where 4x4 is too cramped to distinguish letters like “M” from “N” but 6x6 is wasteful on microcontroller displays with severe memory constraints. Each glyph is hand-tuned for legibility at extreme resolution, with considerations for how characters interact in sequences.
HN Discussion: Commenters debated the minimum viable grid: one argued you need at least 7 vertical pixels to accommodate descenders on “g” and “y” while keeping lowercase shorter than uppercase, meaning a practical minimum is 8x6 with inter-character spacing. Another noted that 3x2 is the same resolution as braille (rotated 90 degrees) and wondered whether a system could be both visually and finger-readable. A reference to 1x5 subpixel rendering and a discussion of CJK scripts at similar resolutions rounded out the thread.
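At this scale a glyph is just 25 bits, which makes the storage format almost trivial. A sketch of how such a font might be packed and rendered — the bit patterns below are my own illustrations, not the designer’s glyphs:

```python
# Each glyph is five 5-bit rows, packed MSB-first (leftmost pixel
# is the high bit). "M" and "N" are the classic legibility test:
# at 4x4 their middle pixels collide; at 5x5 they stay distinct.
GLYPHS = {
    "A": [0b01110, 0b10001, 0b11111, 0b10001, 0b10001],
    "M": [0b10001, 0b11011, 0b10101, 0b10001, 0b10001],
    "N": [0b10001, 0b11001, 0b10101, 0b10011, 0b10001],
}

def render(ch: str) -> str:
    """Expand a packed glyph into a 5x5 block of '#' and '.'."""
    rows = GLYPHS[ch]
    return "\n".join(
        "".join("#" if row & (1 << (4 - col)) else "." for col in range(5))
        for row in rows)

print(render("M"))
print()
print(render("N"))
```

Five rows of five bits is 25 bits per glyph, so the full printable-ASCII set fits in well under half a kilobyte, which is the whole appeal on memory-constrained microcontroller displays.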
Prefill-as-a-Service: KVCache of Next-Generation Models Could Go Cross-Datacenter
Summary: This arxiv paper proposes “Prefill-as-a-Service” — an architecture where the KV cache generated during the prefill phase of LLM inference is stored and served as a shared cross-datacentre resource, allowing multiple inference nodes to reuse the same precomputed context without each one running the full prefill pass. The idea is that as models grow larger and context windows expand, prefill compute becomes the dominant bottleneck, and sharing KV caches across inference servers could dramatically reduce total compute costs for multi-user deployments.
HN Discussion: One commenter questioned whether this is materially different from standard content-addressable caching, just with larger files, tighter time sensitivity, and per-user scoping. They suggested the bigger efficiency gains would come from scheduling agent tasks during off-peak hours rather than cache-sharing at the KV level, noting that agents are “chatty” and need low-latency turn-by-turn responses that make batch scheduling difficult. Another proposed an async-queue model where non-urgent agent tasks get scheduled opportunistically when capacity is available.
Web & Infrastructure
Scoring Show HN submissions for AI design patterns
Summary: A researcher analysed Show HN submissions over several months and found that the number of submissions has roughly tripled, with a growing share exhibiting visual design patterns strongly associated with AI-generated frontends: gradient hero sections, rounded-card grids, emoji iconography, and generic sans-serif typography. The author built a scoring system that rates each submission on a “vibe-coded-ness” scale by detecting these patterns programmatically. The headline was revised during the day — the original “Show HN submissions tripled and are now mostly vibe-coded” was softened to focus on scoring rather than declaring a majority.
HN Discussion: Simon Willison noted that side-projects are inherently time-constrained and AI saves time, so the trend is expected — the more interesting conversation is about which visual patterns AI converges on and why. Another commenter pointed out that the analysis is already outdated because different model versions produce distinct archetypes (Opus 4.5/4.6 has a noticeably different style than earlier versions), so lumping all AI-generated sites together misses model-specific fingerprints. A third compared the situation to “Eternal September,” noting that more submissions means more exploration but also much lower signal-to-noise.
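A pattern detector of the kind described reduces to counting tell-tale markup features. The heuristics below are my own toy stand-ins, not the author’s actual scoring system:

```python
import re

# Each signal is a regex over the page source; the patterns target
# the archetypes named in the analysis (gradients, rounded cards,
# emoji iconography). Tailwind-style class names are assumed.
SIGNALS = {
    "gradient_hero": re.compile(r"linear-gradient|bg-gradient-to-"),
    "rounded_cards": re.compile(r"rounded-(?:xl|2xl|3xl)"),
    "emoji_icons":   re.compile(r"[\U0001F300-\U0001FAFF]"),
}

def vibe_score(html: str) -> float:
    """Fraction of signals present: 0.0 (none) to 1.0 (all)."""
    hits = sum(1 for rx in SIGNALS.values() if rx.search(html))
    return hits / len(SIGNALS)

landing = """<div class="bg-gradient-to-r from-purple-500">
  <div class="rounded-2xl shadow">🚀 Ship faster</div></div>"""
print(vibe_score(landing))
```

The model-fingerprint objection from the thread maps onto this directly: a single SIGNALS table lumps every model together, whereas per-model tables would let the same machinery distinguish archetypes.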
Columnar Storage Is Normalization
Summary: A short essay argues that columnar database storage is conceptually equivalent to database normalisation. The author reasons that normalising a row-oriented table — splitting it so that each column lives in its own structure with a foreign-key-like join — produces the same logical arrangement that a columnar engine uses as its physical layout. The piece uses concrete examples: a user table split into user_attributes and user_posts maps directly to how a columnar engine keeps each column in a separate memory buffer.
HN Discussion: Commenters pushed back on the analogy’s precision. One argued that normalisation is a logical design concept while columnar storage is a physical implementation detail — conflating the two can mislead more than clarify. Another connected it to Domain-Key Normal Form from relational theory. A third pointed out that for nested datasets and arrays, the columnar equivalence breaks down: a JSON array of objects cannot be flattened into independent columns without either stringifying the array (losing queryability) or exploding it into many rows (changing the data model).
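The essay’s equivalence is easiest to see in code: the fully “normalised” form of a row table is literally one array per column, joined by position. A sketch of the idea, not of any particular engine:

```python
# Row-oriented: one record per user, columns interleaved in memory.
rows = [
    {"id": 1, "name": "ada",   "age": 36},
    {"id": 2, "name": "grace", "age": 45},
]

# Columnar / "fully normalised": each column is its own contiguous
# structure, joined implicitly by position (a stand-in for a key).
cols = {
    "id":   [1, 2],
    "name": ["ada", "grace"],
    "age":  [36, 45],
}

# An aggregate touches one buffer instead of every record:
avg_age_rows = sum(r["age"] for r in rows) / len(rows)
avg_age_cols = sum(cols["age"]) / len(cols["age"])
assert avg_age_rows == avg_age_cols

# The commenters' counterexample: a nested array has no clean columnar
# split -- posts must either explode into extra rows (changing the data
# model) or stay opaque (losing queryability).
posts = {1: ["lovelace.txt"], 2: ["cobol.md", "flowmatic.md"]}
exploded = [(uid, p) for uid, ps in posts.items() for p in ps]
print(exploded)
```

The explosion at the end is where the commenters’ logical-versus-physical distinction does real work: the row count changed, which a purely physical storage choice should never do.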
History & Science
Alberta startup sells no-tech tractors for half price
Summary: An Alberta-based startup is selling mechanically simple, electronics-free tractors at roughly half the price of modern equivalents. The tractors lack GPS guidance, telemetry screens, and software-locked service schedules — the very features that have made contemporary farm equipment expensive and difficult to repair without dealer authorisation. The story sits at the intersection of right-to-repair advocacy and agricultural economics, offering farmers a deliberately low-tech alternative to John Deere’s increasingly closed ecosystem.
HN Discussion: A commenter who ran a 1970s Massey Ferguson shared that the old machines were “clunky and heavy” but entirely fixable — hot-wired without a key, with an air filter that was literally a pipe bubbling through engine oil. Others framed the demand as a reaction to manufacturer lock-in rather than a rejection of technology itself, arguing there is room for an OEM that keeps an open ecosystem and wins users by choice. One commenter wanted exactly this for cars: an EV without tracking and touchscreens but keeping heated seats and power windows.
3.4M Solar Panels
Summary: Mark’s Blog has published a data-driven analysis of American solar-farm installations, tracking roughly 3.4 million individual panels across the country. The post visualises geographic distribution, panel orientation, tilt angles, and capacity factors, revealing significant regional variation — some states with the highest solar potential (Florida, Texas) have relatively few installations, while others with moderate sun punch well above their weight. The dataset was assembled from public permitting records and satellite imagery.
HN Discussion: Commenters were struck by Florida’s low installation numbers despite abundant sunshine, attributing it to regulatory barriers and utility-company lobbying. One shared that a friend with under 10 kW of panels is now 97% off-grid in Florida’s hot, humid climate — installed not for cost savings but for hurricane resilience. Off-grid users described their setups: 7 kW of panels with 40 kWh of lithium batteries and a rarely-used generator. Others asked for histograms of azimuth and tilt angles, curious about regional installation patterns.
Ultraviolet corona discharges on treetops
Summary: Penn State researchers have captured ultraviolet corona discharges — faint electrical glows — at the tips of tree leaves during thunderstorms. The phenomenon occurs when the electric field beneath a storm cloud is strong enough to ionise the air around sharp leaf tips, producing UV emission without a full lightning strike. The team used UV-sensitive cameras alongside conventional video to record the effect, publishing their findings in an atmospheric science journal. The corona glow is invisible to the naked eye but represents a measurable atmospheric-electricity interaction between forests and storm systems.
HN Discussion: One commenter pushed back on the headline, noting there is no actual photograph of glowing treetops — only UV-camera video with a visible-light overlay. Another shared a personal experience of standing near a lightning strike and seeing “purple tentacles” reaching up from leaves, then expressed confusion because the article states the effect is invisible to the naked eye. A fascinating tangent: lightning strikes stimulate fungi to produce mushrooms, and some Japanese shiitake cultivators now use electrical shockwaves to increase yields by over 200%.
Who Killed the Florida Orange?
Summary: Slate examines the collapse of Florida’s commercial orange industry, tracing it from a once-dominant agricultural sector to a fraction of its former output. The article covers citrus greening disease (huanglongbing), real-estate development consuming grove land, and the decline in fresh-squeezed orange juice quality as the industry shifted to concentrate and flavour packs. The result is a near-total loss of a food crop that defined a state’s identity and economy for over a century.
HN Discussion: Commenters drew parallels to the Gros Michel banana collapse, another monoculture wiped out by disease. Several noted that supermarket orange juice “doesn’t even taste like oranges” anymore, and that fresh-squeezed remains the benchmark against which all commercial products fail. One commenter framed the collapse as premature monoculture failure driven by human stress — disease pressure, land-use change, and the economic incentive to prioritise volume over quality.
How does GPS work?
Summary: An interactive explainer walks readers through the mathematics and engineering of GPS from first principles. The article starts with the basic principle — measuring time-of-flight from satellite to receiver — and builds up through the geometry of trilateration, the role of atomic clocks on satellites, and the relativistic corrections (both special and general) that would cause GPS to drift by kilometres per day if ignored. The interactive format lets readers manipulate variables and see how errors propagate.
HN Discussion: One commenter linked Bartosz Ciechanowski’s GPS explainer as a complementary resource. Another described visiting a fundamental station that uses very-long-baseline interferometry of quasar radio signals to measure its own position to sub-millimetre accuracy — the same infrastructure that calibrates satellite orbits. A third described the engineering challenge of building a GPS network from scratch: antenna design, frequency selection, and detecting signals below thermal noise. The flat-earth joke made its obligatory appearance.
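The core computation such explainers build up to can be sketched compactly: convert time-of-flight to ranges, then solve for position by iterative least squares. This is a 2-D toy with a perfect clock; real receivers track four satellites and solve for their own clock bias as a fourth unknown:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def trilaterate(sats, times, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton on range residuals: find (x, y) whose distances
    to the satellites match the measured time-of-flight ranges."""
    x, y = guess
    ranges = [t * C for t in times]
    for _ in range(iters):
        # Residuals r_i = |p - s_i| - rho_i and the Jacobian rows,
        # which are unit vectors from each satellite toward p.
        J, r = [], []
        for (sx, sy), rho in zip(sats, ranges):
            d = math.hypot(x - sx, y - sy)
            J.append(((x - sx) / d, (y - sy) / d))
            r.append(d - rho)
        # Solve the 2x2 normal equations (J^T J) dp = -J^T r by hand.
        a = sum(jx * jx for jx, _ in J)
        b = sum(jx * jy for jx, jy in J)
        c = sum(jy * jy for _, jy in J)
        gx = sum(jx * ri for (jx, _), ri in zip(J, r))
        gy = sum(jy * ri for (_, jy), ri in zip(J, r))
        det = a * c - b * b
        x -= (c * gx - b * gy) / det
        y -= (a * gy - b * gx) / det
    return x, y

# Three satellites, true receiver at (1000, 2000) metres:
sats = [(0.0, 20_000.0), (15_000.0, 18_000.0), (-12_000.0, 16_000.0)]
truth = (1000.0, 2000.0)
times = [math.hypot(truth[0] - sx, truth[1] - sy) / C for sx, sy in sats]
print(trilaterate(sats, times))
```

The relativity claim is also checkable by arithmetic: the net satellite clock offset of roughly 38 microseconds per day, multiplied by c, accumulates to about 11 km of range error per day, which is where the “kilometres per day” figure comes from.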
Youth Suicides Declined After Creation of National Hotline
Summary: The New York Times reports that youth suicide rates in the US have declined following the creation of the 988 Suicide & Crisis Lifeline, a nationwide three-digit number launched to replace the previous 10-digit system. The study analysed pre- and post-launch data across states and found a statistically significant reduction in suicide attempts and completions among young people, attributed to faster access to crisis intervention and reduced barriers to seeking help.
HN Discussion: The thread took a sombre turn with commenters sharing difficult personal experiences with crisis response. One described calling a crisis line in California and finding that ambulances are only dispatched if you physically witness an attempt — otherwise police arrive, often untrained, and the person in crisis is taken into custody and stripped of rights pending medical evaluation. Several noted that the Trump administration had terminated the 988 Lifeline’s LGBTQ Youth Specialized Services program, cutting a pathway that connected young callers to trained counsellors.
System Administration
Garbage Collection Without Unsafe Code
Summary: Nick Fitzgerald has built safe-gc, a garbage collection library for Rust that uses zero unsafe blocks — not in the API and not in the implementation, enforced by a forbid(unsafe_code) pragma at the crate root. The key innovation is replacing pointer-based GC references with index-based offsets into a backing vector, which the Rust type system can track without raw-pointer unsafety. The trade-off is ergonomics: every GC-managed reference must be wrapped in Gc<T>, and users must manually implement a Trace trait to enumerate outgoing edges.
HN Discussion: Several Rust developers weighed in on the index-versus-pointer debate. One argued that replacing pointers with indexes doesn’t actually eliminate memory-safety concerns — it just converts them from compile-time-checkable pointer bugs into logic bugs the compiler cannot detect. Another noted the ergonomic cost of wrapping every reference in Gc<T> and suggested that nullable Gc types (instead of Option<Gc<T>>) could reduce branch-predictor pressure. A commenter working on C FFI said they explored safe-gc for removing unsafe code from a library but ultimately took a different approach of mapping valid pointers at the FFI boundary.
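The index-based trick at the heart of safe-gc translates to any language: a handle is an offset into a backing array, and collection is mark-and-sweep over those offsets. A Python sketch of the shape, not of the crate’s actual API:

```python
class Heap:
    """GC heap where a 'reference' is an index into self.slots.
    Each object lists its outgoing edges -- the Trace analogue."""

    def __init__(self):
        self.slots = []    # None marks a freed slot
        self.roots = set()

    def alloc(self, value, edges=()):
        self.slots.append({"value": value, "edges": list(edges)})
        return len(self.slots) - 1  # the Gc<T> handle is just an index

    def collect(self):
        # Mark: walk outgoing edges from the roots.
        marked, stack = set(), list(self.roots)
        while stack:
            i = stack.pop()
            if i in marked or self.slots[i] is None:
                continue
            marked.add(i)
            stack.extend(self.slots[i]["edges"])
        # Sweep: free everything unreached.
        for i in range(len(self.slots)):
            if i not in marked:
                self.slots[i] = None

h = Heap()
a = h.alloc("root object")
b = h.alloc("reachable", edges=[a])
h.alloc("garbage")  # no handle retained, no incoming edge
h.roots.add(b)
h.collect()
print([s["value"] if s else None for s in h.slots])
```

The sketch also illustrates the thread’s main objection: holding a stale index after a collect is still a bug, it just surfaces as wrong data rather than as memory unsafety the compiler could rule out.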
Academic & Research
Drunk Post: Things I’ve learned as a senior engineer (2021)
Summary: A Substack essay originally written in 2021 collects candid observations from years of senior engineering experience. The author argues that constraints you don’t choose lead to better product decisions than constraints you invent, that HN and r/programming comments are mostly worthless for deep technical insight, and that the best part of software engineering is meeting people who think about problems the same way you do. The tone is informal — the “drunk” framing signals candour rather than rigour — but the observations have resonated with readers five years on.
HN Discussion: Commenters had mixed reactions to the claim that HN comments are worthless — one replied “LOL can’t disagree with that opinion.” Several confirmed the “don’t meet your heroes” observation, with one sharing that they paid $5K for a course by a hero only to realise the instructor was making it up like everyone else. A counterpoint: for every 50 engineers someone meets, maybe one is “here for the craft” while the rest want a 9-to-5, making genuine technical conversation rare. A link-shortener operator shared that running on shared hosting with only cron and a database forced better product decisions than a VPS would have.
Other
Making RAM at Home [video]
Summary: A YouTube video documents the process of fabricating RAM chips from scratch in a home-lab setting. The project involves building a clean room from a backyard shed with positive-pressure air filtration, depositing semiconductor materials, and patterning circuits using DIY photolithography. The video is part of a broader “HackerFab” movement that publishes open-source tools and resources for amateur chip manufacturing, lowering the barrier to semiconductor experimentation from industrial fab costs to garage-lab budgets.
HN Discussion: One commenter who had watched the video considered posting it to HN but wasn’t sure it fit the site’s scope. Another referenced the creator’s previous video about building a clean room from a shed, calling the positive-pressure particle-counting setup “almost mystical.” A third joked about the timeline of unfulfilled predictions: 1999 promised flying cars, 2024 promised LLM robots, and 2026 delivers homemade RAM. Someone asked how the capacitor value is read and recharged during the refresh cycle, admitting they still don’t fully understand transistors.
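The refresh question at the end of the thread has a standard textbook answer: a DRAM read is destructive, so the sense amplifier both reads the cell and rewrites it, and refresh is just a row read whose result is thrown away. A behavioural sketch of that cycle, nothing circuit-level:

```python
import itertools

class DramCell:
    """One transistor, one capacitor. Charge leaks between accesses."""
    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self):
        self.charge *= 0.8  # decay per tick (illustrative rate)

def sense_and_restore(cell):
    """The sense amp compares the cell against a midpoint reference,
    then drives the bitline fully high or low -- which rewrites the
    cell. Reading IS restoring; refresh just discards the bit."""
    bit = cell.charge > 0.5
    cell.charge = 1.0 if bit else 0.0  # write-back
    return bit

cell = DramCell(1)
for tick in itertools.count():
    cell.leak()
    if tick % 3 == 2:  # periodic refresh: read + restore, result unused
        assert sense_and_restore(cell) is True
    if tick > 20:
        break
print("bit survived with refresh:", cell.charge > 0.5)
```

With the illustrative decay rate above, four unrefreshed ticks drop the charge below the sense threshold, which is exactly why the refresh interval must beat the leak rate.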
Bodega Cats of New York
Summary: A photo book and website documenting the cats that live in New York City bodegas. Each cat serves an implicit pest-control function while becoming a neighbourhood fixture — customers know them by name, take photos, and sometimes adopt their kittens. The project has grown into a community archive of the informal relationships between small-business owners, their feline co-workers, and the neighbourhoods they serve.
HN Discussion: Commenters shared their own bodega-cat encounters: one described an orange cat named Ice Spice who birthed kittens that now “wander in and own the place, whining at customers to open the doors.” Another noted the realisation that the cats’ primary purpose is rat control, which prompted “quite the chuckle.” Someone recommended the related book Shop Cats of New York, and a commenter joked about the sequel: Bodega Rats of New York.
XOR’ing a register with itself is the idiom for zeroing it out. Why not SUB?
Summary: Raymond Chen on the Old New Thing blog asks why XOR eax, eax is the universal assembly idiom for zeroing a register when SUB eax, eax produces the same result in the same number of bytes. The two are interchangeable as far as the architecture is concerned: both leave the register at zero and both clear the carry flag, so the flags alone do not separate them, and the preference is largely convention dating back to the 8086 era. The convention is now self-reinforcing: modern x86 processors specifically recognise XOR of a register with itself as a dependency-breaking zeroing idiom, while recognition of the SUB form has been less consistent across microarchitectures, making XOR the safe default.
HN Discussion: One commenter explained the hardware-level reasoning: XOR is a single logic-gate operation where all bits fire simultaneously, while SUB requires carry propagation from LSB to MSB through a chain of gates, making XOR faster on most ALU designs. Another shared a historical detail — on some IBM processors, XOR with identical source and destination inhibited parity/ECC error checking, meaning it could clear a register with bad parity without triggering a machine check. Someone spotted a steganographic opportunity: using XOR for “zero” and SUB for “one” in compiled output to hide messages in executables.
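The steganographic aside at the end of the thread is fun to make concrete: since either idiom is a valid way to zero a register, the choice itself becomes a one-bit channel. A toy encoder over textual mnemonics; hiding this in real compiled output would require controlling the compiler’s instruction selection:

```python
def encode(message: bytes) -> list[str]:
    """One zeroing instruction per bit: XOR encodes 0, SUB encodes 1.
    Every line zeroes eax either way, so the listing looks innocent."""
    bits = "".join(f"{b:08b}" for b in message)
    return ["xor eax, eax" if bit == "0" else "sub eax, eax"
            for bit in bits]

def decode(listing: list[str]) -> bytes:
    bits = "".join("0" if ins.startswith("xor") else "1" for ins in listing)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

listing = encode(b"hi")
print(decode(listing))
```

Eight instructions per hidden byte is a terrible bitrate, which is presumably why the commenter framed it as an opportunity rather than a product.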
That wraps up the evening brief. See you tomorrow morning.