HN Morning Brief - March 16, 2026


Good morning! Today’s Hacker News morning brief covers the top 30 stories from March 16, 2026.

AI & Tech Policy

1. Canada’s Bill C-22 mandates mass metadata surveillance (597 points, 168 comments)

Canada’s Bill C-22, the Lawful Access Act, has been introduced and represents a significant expansion of government surveillance capabilities. The bill is divided into two main sections: one addressing law enforcement access to personal information from communication service providers, and another establishing the Supporting Authorized Access to Information Act (SAAIA). While the government claims to have scaled back the warrantless access provisions of the previous Bill C-2, critics argue the new legislation still carries dangerous backdoor surveillance risks that could fundamentally alter the privacy landscape in Canada.

The bill introduces a new term, “electronic service provider,” which extends far beyond traditional telecom and internet providers to potentially include internet platforms like Google and Meta. All such providers would be obligated to provide reasonable assistance for testing surveillance capabilities and would be required to keep such requests secret. Perhaps most concerning, the bill mandates that core providers retain categories of metadata, including transmission data, for periods of up to one year, an expansion of obligations compared to previous legislation.

Key Discussion Points:

  • Commenters highlighted a loophole in the bill where judges can waive the requirement to provide a copy of the warrant to subjects if deemed “justified in the circumstances”
  • Many drew parallels to similar surveillance frameworks in other Five Eyes countries and expressed concerns about global information sharing
  • Some argued that investigative work should be difficult by design, citing Blackstone’s ratio about preferring to let ten guilty persons escape rather than one innocent suffer
  • Critics warned of the potential for abuse by future governments, with one commenter stating “It’s a preparation for wildly unpopular measures in the next ~10 years”

Original Article: https://www.michaelgeist.ca/2026/03/a-tale-of-two-bills-lawful-access-returns-with-changes-to-warrantless-access-but-dangerous-backdoor-surveillance-risks-remains/


8. A visual gallery of open-weight LLM architectures

Sebastian Raschka has published a comprehensive visual gallery comparing the architectures of major open-weight LLMs, serving as an educational resource for researchers and practitioners. The gallery collects architecture figures and fact sheets from Raschka’s detailed comparisons of models including DeepSeek V3, Gemma 3, and others ranging from 8B to 671B parameters. The visualization allows users to click on figures to enlarge them and navigate to corresponding article sections, with models organized by scale, decoder type, attention mechanisms, and key architectural details.

The resource has been updated as recently as March 15, 2026, and is now available as a physical poster through Zazzle for those who prefer a tangible reference. The gallery helps researchers understand the architectural evolution and key differences between various open-source models, from dense decoders with standard GQA to sparse MoE architectures with more complex attention patterns like MLA (Multi-Head Latent Attention). One particularly interesting comparison is between dense models like Llama 3 and MoE models like DeepSeek, highlighting the trade-offs between parameter efficiency and computational complexity.
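To connect the terminology above to something concrete, here is a minimal sketch (not taken from the gallery) of the grouped-query attention idea: several query heads share one key/value head, which shrinks the KV cache. The tensor sizes and PyTorch usage are purely illustrative.

```python
import torch

# Toy sizes: 8 query heads share 2 key/value heads (a 4:1 GQA grouping).
batch, seq, d_head = 1, 4, 16
n_q_heads, n_kv_heads = 8, 2
group = n_q_heads // n_kv_heads  # query heads per shared KV head

q = torch.randn(batch, n_q_heads, seq, d_head)
k = torch.randn(batch, n_kv_heads, seq, d_head)  # only 2 KV heads need caching
v = torch.randn(batch, n_kv_heads, seq, d_head)

# Expand the shared KV heads so every query head has a matching K/V to attend to.
k = k.repeat_interleave(group, dim=1)  # -> (1, 8, 4, 16)
v = v.repeat_interleave(group, dim=1)

attn = torch.softmax(q @ k.transpose(-2, -1) / d_head**0.5, dim=-1)
out = attn @ v
print(out.shape)  # torch.Size([1, 8, 4, 16]); the KV cache held only 2 of the 8 heads
```

MLA pushes the same cache-size motivation further by projecting keys and values into a compressed latent, at the cost of extra projection complexity.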

Key Discussion Points:

  • One commenter noted that while LLM architectures have seen many refinements in the seven years since GPT-2, there have been no fundamental innovations
  • Another pointed out hybrid architectures like Qwen 3.5 that incorporate linear attention variants as a significant innovation
  • Readers compared the gallery to the Neural Network Zoo from the Asimov Institute as a similar visualization resource
  • Several expressed interest in seeing the architectural evolution laid out as a family tree or influence diagram

Original Article: https://sebastianraschka.com/llm-architecture-gallery/


9. LLMs can be exhausting (169 points, 124 comments)

Tom Johnell explores the mental exhaustion that can result from working with LLMs like Claude and Codex for extended coding sessions. He identifies several contributing factors, including degradation in prompt quality as fatigue sets in, slow feedback loops that stretch into painful multi-hour cycles, and the psychological toll of constantly switching between manual exploration and AI assistance. The author notes that when he’s tired and half-assing prompts, he interrupts LLMs more frequently, which leads to worse outcomes, creating a vicious cycle.

Johnell suggests recognizing when you’re not getting joy out of writing great prompts as an important signal to take a break. He recommends ensuring you have clarity about your desired end-state before submitting prompts, so you’re confident the AI will “CRUSH IT.” For tasks with slow feedback loops like parsing large files, he suggests spinning up new sessions with clearly stated problems and expectations to avoid context bloat and the cognitive overhead of constantly restarting experiments.

Key Discussion Points:

  • Many related to the mental fatigue, comparing the experience to being an instructor for a reckless Formula 1 autopilot
  • One commenter noted that you become more of a code reviewer than a coder when using LLMs, which is a different skill set
  • Several discussed the problem of corporate mandates for AI usage requiring large PR reviews, with one commenter calling this a “red flag”
  • Others found working asynchronously with agents, letting them idle while you finish your own work, helped reduce the exhausting pace

Original Article: https://tomjohnell.com/llms-can-be-absolutely-exhausting/


17. Quillx is an open standard for disclosing AI involvement in software projects (19 points, 22 comments)

Quillx has been introduced as an open standard designed to provide transparency about AI involvement in software development projects. The standard creates a framework for developers to disclose which parts of their codebase or documentation were generated or significantly assisted by AI tools. This aims to address the growing concern about AI-generated code and the need for proper attribution, allowing users and downstream consumers to make informed decisions about the software they use.

The standard is designed to be both human-readable and machine-parseable, enabling automated tools to check for AI disclosures in codebases. It provides schemas for different levels of AI involvement, from minor assistance to complete generation, and allows for detailed attribution including which AI tools were used and what prompts were involved. The project is hosted on GitHub and invites community input to refine the standard as the ecosystem around AI-assisted development continues to evolve.
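As an illustration only, here is a sketch of what a machine-checkable disclosure record and validator could look like. The field names and levels below are hypothetical and are not taken from the Quillx specification.

```python
# Hypothetical sketch: these field names and levels are NOT from the Quillx spec;
# they only illustrate how a machine-parseable disclosure might be validated.
from dataclasses import dataclass

LEVELS = {"none", "assisted", "generated"}  # made-up involvement levels

@dataclass
class Disclosure:
    path: str        # file or directory the record covers
    level: str       # degree of AI involvement
    tool: str = ""   # e.g. which assistant or model was used

def validate(records: list[Disclosure]) -> list[str]:
    """Return a list of problems found in the disclosure records."""
    return [f"{r.path}: unknown level {r.level!r}" for r in records if r.level not in LEVELS]

records = [Disclosure("src/parser.py", "generated", "example-llm"),
           Disclosure("docs/README.md", "assisted")]
print(validate(records) or "all disclosures valid")
```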

Key Discussion Points:

  • Commenters debated whether AI disclosure should be required or optional, with some suggesting it should be treated like any other tool
  • One noted that AI-generated code often has distinct patterns that could be detected automatically anyway
  • Several pointed out the parallel to existing disclosure standards like SPDX for software licenses
  • Some questioned whether disclosure matters if the end result works correctly

Original Article: https://github.com/QAInsights/AIx


24. ASCII and Unicode quotation marks (9 points, 0 comments)

This article by Markus Kuhn, hosted at the University of Cambridge Computer Laboratory, provides a technical discussion of the differences between ASCII quotation marks and their Unicode equivalents. Written in 2007, it remains relevant as developers continue to grapple with the practical issues that arise from “smart quotes” and other Unicode punctuation being substituted in text processing pipelines. The author explains the various Unicode code points for different quotation mark styles and the encoding issues that can occur when these are not handled consistently.

The piece is particularly relevant in an era where copy-pasting from word processors and websites can inadvertently introduce fancy quotation marks that break code or parsing scripts. It serves as a reminder of the importance of being explicit about character encodings and the perils of silent character substitution that can introduce hard-to-debug issues. The article includes practical examples and recommendations for handling quotation marks in a cross-platform, encoding-agnostic way.
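A small Python sketch of the kind of defensive normalization this topic suggests: mapping common typographic punctuation back to ASCII before it reaches a parser. The character set covered here is illustrative, not exhaustive, and is not taken from the article.

```python
# Map a handful of common typographic characters back to ASCII equivalents
# before handing text to tools that expect plain quotes and hyphens.
QUOTE_MAP = str.maketrans({
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u201C": '"',   # left double quotation mark
    "\u201D": '"',   # right double quotation mark
    "\u2013": "-",   # en dash
    "\u2014": "--",  # em dash
    "\u00A0": " ",   # no-break space
})

def asciiize(text: str) -> str:
    """Replace typographic punctuation with plain ASCII equivalents."""
    return text.translate(QUOTE_MAP)

print(asciiize("\u201Csmart quotes\u201D can silently break na\u00efve parsers"))
```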

Original Article: https://www.cl.cam.ac.uk/~mgk25/ucs/quotes.html


29. What is agentic engineering? (124 points, 75 comments)

Simon Willison provides an introduction to the emerging field of “agentic engineering,” which refers to designing and building systems that leverage AI agents to accomplish complex tasks through autonomous decision-making and multi-step reasoning. The article explains how agentic engineering differs from traditional AI assistant usage by creating systems where AI agents can plan, execute, and iterate on tasks with minimal human intervention. This represents a shift from prompt engineering to system engineering where the goal is to create reliable, reusable agent workflows.

Willison outlines key patterns in agentic engineering including task decomposition, tool use, memory systems, and error recovery mechanisms. He discusses how modern agents are becoming increasingly sophisticated at breaking down complex problems into manageable sub-tasks, using external tools like APIs and databases, and recovering from failures through iterative improvement. The article serves as both an introduction for newcomers and a reference for practitioners looking to build more sophisticated agent-based systems.
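As a rough illustration of the loop these patterns describe (plan, call a tool, observe, recover from errors), here is a minimal Python sketch. The stub model and the search_docs tool are placeholders standing in for an LLM call and a real integration; none of this is taken from Willison’s guide.

```python
# Minimal agent-loop sketch: a "model" picks an action, a tool runs it,
# the observation is fed back, and failures are surfaced to the next step.
def search_docs(query: str) -> str:
    # Hypothetical tool; a real agent would hit an API or a database.
    return f"3 results for {query!r}"

TOOLS = {"search_docs": search_docs}

def stub_model(task: str, history: list[str]) -> dict:
    # Stand-in for an LLM call: plan one search, then finish with the result.
    if not history:
        return {"action": "search_docs", "args": {"query": task}}
    return {"action": "finish", "args": {"answer": history[-1]}}

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        step = stub_model(task, history)
        if step["action"] == "finish":
            return step["args"]["answer"]
        try:
            observation = TOOLS[step["action"]](**step["args"])
        except Exception as exc:  # error recovery: feed the failure back in
            observation = f"tool error: {exc}"
        history.append(observation)
    return "gave up after max_steps"

print(run_agent("agentic engineering patterns"))
```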

Key Discussion Points:

  • One commenter drew parallels to the shift from monolithic applications to microservices as an analogy for agentic engineering
  • Several discussed the challenges of debugging agent systems when things go wrong, noting the difficulty of tracing decisions
  • Others pointed out the importance of human oversight and the risks of fully autonomous systems
  • Some debated whether “agentic engineering” is a meaningful new field or just a marketing term for existing AI engineering practices

Original Article: https://simonwillison.net/guides/agentic-engineering-patterns/what-is-agentic-engineering/


Security & Privacy

16. Glassworm is back: A new wave of invisible Unicode attacks hits repositories (248 points, 154 comments)

A new wave of Glassworm attacks is compromising hundreds of GitHub repositories using invisible Unicode characters to hide malicious payloads. The threat actor, first identified nearly a year ago, has returned with a coordinated campaign affecting GitHub, npm, and VS Code. The attacks use Private Use Area (PUA) Unicode characters that render as nothing in virtually every editor, terminal, and code review interface, allowing attackers to encode malicious payloads directly inside what appear to be empty strings. When JavaScript encounters these strings, a small decoder extracts the real bytes and passes them to eval().

The March 2026 wave has affected at least 151 repositories according to GitHub code search, with notable compromises including projects from Wasmer, Reworm (1,460 stars), and opencode-bench from anomalyco. The attacks are sophisticated, with malicious injections arriving in realistic commits that don’t appear suspicious on first review. The decoded payloads have previously been used to fetch and execute second-stage scripts using Solana as a delivery channel, capable of stealing tokens, credentials, and secrets.
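One practical mitigation is scanning sources for invisible characters before review. Here is a minimal sketch in Python (not from the article) that flags Private Use Area code points, assuming UTF-8 source files.

```python
# Scan source files for Private Use Area (PUA) code points, which render as
# nothing in most editors and can be used to hide encoded payloads.
import sys
import unicodedata
from pathlib import Path

def find_pua_chars(path: Path):
    """Yield (line_number, column, code_point) for every PUA character found."""
    text = path.read_text(encoding="utf-8", errors="replace")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            # 'Co' is the Unicode general category for private-use characters.
            if unicodedata.category(ch) == "Co":
                yield lineno, col, f"U+{ord(ch):04X}"

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        for lineno, col, cp in find_pua_chars(Path(arg)):
            print(f"{arg}:{lineno}:{col}: private-use character {cp}")
```

A pre-commit hook or CI job running a check like this would catch the “empty string” payloads described above, independent of whether the reviewer’s editor renders them.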

Key Discussion Points:

  • Commenters argued that GitHub should provide built-in warnings for invisible Unicode characters, similar to their secret scanning feature
  • One noted that it baffles them any maintainer would merge code with eval() calls without understanding what it does
  • Several discussed using ASCII-only enforcement or gitattributes to prevent these attacks
  • Others pointed out that the mere presence of eval() should be a red flag, regardless of invisible characters

Original Article: https://www.aikido.dev/blog/glassworm-returns-unicode-attack-github-npm-vscode


20. Federal Right to Privacy Act – Draft legislation (64 points, 35 comments)

A draft Federal Right to Privacy Act has been proposed in response to growing concerns about digital surveillance and data collection by both government and private entities. The legislation aims to establish comprehensive federal privacy protections that would apply across all states, creating a unified framework for how personal data can be collected, used, and shared. The draft bill includes provisions requiring explicit consent for data collection, limits on data retention, and requirements for data minimization.

The legislation draws inspiration from international privacy frameworks like GDPR but attempts to balance privacy concerns with legitimate business and security needs. It includes enforcement mechanisms through the FTC and allows for private rights of action in certain circumstances. The draft has sparked significant debate about the appropriate balance between privacy, innovation, and national security interests, with various stakeholders providing feedback on how to refine the framework.

Key Discussion Points:

  • Commenters debated whether federal preemption of state privacy laws would be beneficial or harmful
  • Some noted the difficulty of legislating rapidly evolving technology in a way that doesn’t stifle innovation
  • Others pointed out the need for strong enforcement mechanisms to make any privacy law meaningful
  • Several discussed the parallels and differences with existing frameworks like GDPR and CCPA

Original Article: https://righttoprivacyact.github.io


Tech Tools & Projects

3. Chrome DevTools MCP (2025) (439 points, 182 comments)

Google has released Chrome DevTools MCP, which enables coding agents to directly connect to active browser sessions for debugging and analysis. The enhancement allows agents to reuse existing browser sessions that are already signed in, access active debugging sessions in the DevTools UI, and seamlessly transition between manual and AI-assisted debugging. Users can select elements in the Elements panel or network requests and ask their coding agent to investigate issues, with the agent gaining full access to DevTools capabilities.

The new auto-connect feature builds on Chrome’s existing remote debugging capabilities and requires Chrome M144 or later. When configured with the --autoConnect option, the MCP server will connect to an active Chrome instance and request a remote debugging session. Chrome shows a dialog asking for user permission each time a debugging session is requested and displays a “Chrome is being controlled by automated test software” banner while debugging is active to ensure transparency.
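For context, the auto-connect flow sits on top of Chrome’s long-standing remote debugging protocol. The sketch below (not the MCP interface itself) lists the debuggable pages of a Chrome instance, assuming it was started with --remote-debugging-port=9222.

```python
import requests

# Chrome's remote debugging endpoint enumerates attachable targets as JSON.
# Assumes: chrome --remote-debugging-port=9222 is already running locally.
targets = requests.get("http://localhost:9222/json", timeout=5).json()

for target in targets:
    if target.get("type") == "page":
        print(target["title"], target["url"])
        # Each page exposes a DevTools Protocol WebSocket a client can attach to:
        print("  ws endpoint:", target["webSocketDebuggerUrl"])
```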

Key Discussion Points:

  • One commenter described using Playwright with Claude Code to intercept all requests and responses, creating strongly typed APIs for any website
  • Others debated whether MCP is “dead” or still useful, with some arguing centralized remote MCP servers are incredibly useful in enterprise environments
  • Some noted the security implications of giving agents access to your authenticated browser sessions
  • One commenter shared that a similar skill already exists for this use case: chrome-cdp-skill

Original Article: https://developer.chrome.com/blog/chrome-devtools-mcp-debug-your-browser-session


4. How I write software with LLMs (95 points, 34 comments)

Stavros Korokithakis shares his personal approach to writing software with assistance from LLMs. He describes using AI as a force multiplier that allows him to move faster and explore more options than would be possible manually, while still maintaining his role as the architect and decision-maker. The article covers practical tips for effective LLM-assisted development including how to structure prompts, when to ask for code versus when to ask for explanations, and how to maintain code quality while leveraging AI generation.

Korokithakis emphasizes that he still reviews all AI-generated code carefully and uses his deep understanding of software architecture to guide what the AI produces. He discusses the importance of providing clear context and constraints to get useful output, and shares his experience with different LLMs for various coding tasks. The article is particularly valuable for developers looking to adopt LLM assistance without sacrificing their own understanding and control over their codebase.

Key Discussion Points:

  • Commenters related to using LLMs as assistants rather than replacements for human expertise
  • Some shared their own workflows for balancing AI assistance with manual coding
  • Others discussed the importance of maintaining the ability to understand and modify AI-generated code
  • Several noted that different developers have found different optimal balances between manual and AI-assisted coding

Original Article: https://www.stavros.io/posts/how-i-write-software-with-llms/


11. Stop Sloppypasta (241 points, 107 comments)

Stop Sloppypasta is a campaign and manifesto arguing against the practice of pasting raw LLM output directly into communications with other people. The site makes the case that sharing unedited AI-generated text creates an effort asymmetry where the sender expends almost no effort but the recipient must still read and comprehend the text. The author argues that writing is thinking, and by shortcutting the writing process with LLMs, senders reduce their own comprehension and create cognitive debt.

The manifesto also addresses the trust problem created by LLM output, noting that recipients have no way to know whether the sender verified the content or what their actual level of expertise is. LLMs write authoritatively with the confidence of an expert, which removes signals readers previously used to gauge expertise. The result is erosion of trust and an additional verification tax placed on recipients who must now question everything they receive, even when it appears authoritative.

Key Discussion Points:

  • One commenter shared an even more nightmarish version: AI-generated product specs dumped directly into Jira tickets
  • Several noted that people don’t mind AI content as long as it’s “their AI” but have a visceral reaction to someone else’s AI output
  • Some suggested the endgame might be “Dead Internet Theory” where AI creates content for AI to browse
  • Others discussed the importance of AI etiquette as something we’ll all need to learn

Original Article: https://stopsloppypasta.ai/


14. //go:fix inline and the source-level inliner (131 points, 55 comments)

The Go team has introduced the //go:fix inline directive alongside a source-level inliner. Unlike the compiler’s inliner, which is a code-generation optimization, this is tooling that rewrites source: a library author marks a function with //go:fix inline, and tools such as go fix and gopls replace calls to it with the function’s body directly in the caller’s code. The article explains how the source-level inliner works and when the directive is appropriate.

The main use case is API evolution. A deprecated helper or thin wrapper can be marked for inlining so that downstream callers are migrated mechanically rather than by hand, after which the old API can eventually be removed. Performing this rewrite while preserving behavior is subtler than it sounds: the inliner must avoid capturing names, keep evaluation order and side effects intact, and adjust imports, all of which make a correct source-level inliner harder to build than it might appear.

Key Discussion Points:

  • Commenters discussed the balance between inlining for performance versus binary size
  • Some noted that the new directive provides better control than the existing compiler flags
  • Others shared experiences with inlining in other compilers and languages
  • Several expressed interest in benchmarking the impact of different inlining strategies

Original Article: https://go.dev/blog/inliner


15. SpiceCrypt: A Python library for decrypting LTspice encrypted model files (28 points, 4 comments)

SpiceCrypt is a Python library that can decrypt LTspice encrypted model files: semiconductor device models that vendors distribute in encrypted form. LTspice is a widely-used SPICE simulation tool, and many semiconductor vendors provide encrypted versions of their device models to protect intellectual property while still allowing designers to use them in simulations. This library reverse-engineers the encryption scheme, allowing users to access the underlying model parameters.

The existence of such a tool raises interesting questions about intellectual property protection in the semiconductor industry and the security of model encryption schemes. While vendors encrypt models to protect their proprietary device characteristics, the encryption must be decryptable by LTspice during simulation, meaning the key must be present somewhere. SpiceCrypt demonstrates that these encryption schemes can be vulnerable to reverse engineering, potentially giving designers access to model parameters that vendors intend to keep secret.

Key Discussion Points:

  • One noted the irony of needing to decrypt models to properly simulate with them
  • Others discussed the legal and ethical implications of reverse-engineering encrypted models
  • Some pointed out that this could be valuable for educational purposes and understanding semiconductor physics
  • Several expressed surprise that vendors would use encryption that can be easily broken

Original Article: https://github.com/jtsylve/spice-crypt


30. An experiment to use GitHub Actions as a control plane for a PaaS (15 points, 9 comments)

This article describes an experiment in using GitHub Actions as the control plane for a Platform-as-a-Service (PaaS) deployment. Instead of building a custom dashboard and control interface, the author leveraged GitHub’s existing workflow system to manage deployments, scaling, and other operational tasks. By treating GitHub workflows as the primary interface, the PaaS gains version control, audit logging, and familiar tooling without having to build custom infrastructure.

The experiment demonstrates that GitHub Actions can serve as a surprisingly capable control plane for operational systems. Workflows can trigger deployments, respond to monitoring alerts, manage scaling events, and even perform complex orchestration tasks. This approach trades some flexibility and real-time control for simplicity, auditability, and integration with existing development workflows, potentially making it a good fit for smaller teams or internal tools.
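The core mechanic is driving operations through workflow runs. As a generic sketch (not the author’s code), an external system could trigger a deployment via GitHub’s workflow_dispatch REST endpoint like this; the repository name, workflow file, and inputs are placeholders.

```python
import os
import requests

# Hypothetical names: the org, repo, workflow file, and inputs are placeholders.
OWNER, REPO, WORKFLOW = "example-org", "example-app", "deploy.yml"

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/dispatches",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    # workflow_dispatch inputs are passed as strings.
    json={"ref": "main", "inputs": {"environment": "production", "replicas": "3"}},
    timeout=30,
)
resp.raise_for_status()  # GitHub returns 204 No Content on success
print("deployment workflow queued")
```

Every such dispatch leaves a run in the Actions history, which is exactly the audit trail the article leans on.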

Key Discussion Points:

  • Some questioned whether GitHub Actions has the reliability and latency characteristics needed for a control plane
  • Others noted the benefits of having everything in git with full history and rollbacks
  • Several discussed the trade-offs between this approach and purpose-built control planes like Kubernetes operators
  • One pointed out that this approach works well when your team already lives in GitHub

Original Article: https://towlion.github.io


Web & Infrastructure

2. The 49MB web page (443 points, 211 comments)

A developer auditing news websites discovered that loading a single New York Times article triggered 422 network requests and downloaded 49MB of data. The page took two minutes to settle, highlighting the extraordinary bloat in modern news sites. To put this in perspective, the author notes that this single page represents more data than Windows 95 (28 floppy disks) or roughly 10-12 full MP3 songs from 2006. On 2006-era average broadband speeds (1.5 Mbps), this page would have taken several minutes to load.

The analysis reveals that news websites are running complex programmatic ad auctions directly in users’ browsers, with dozens of concurrent bidding requests to exchanges like Rubicon Project and Amazon Ad Systems. This requires downloading, parsing, and compiling megabytes of JavaScript, taxing the main thread and generating heat in mobile devices. The author argues this represents a hostile architecture where publishers are trading long-term reader retention for short-term CPM pennies from programmatic advertising.
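Anyone can reproduce this kind of audit by exporting a HAR file from the browser’s Network panel. The sketch below (not from the article) totals transfer sizes by content type; it assumes Chrome’s optional _transferSize field where present and falls back to the standard bodySize.

```python
import json
from collections import Counter

# Load a HAR file previously exported from the browser's Network panel.
# "article.har" is a placeholder filename.
with open("article.har", encoding="utf-8") as f:
    har = json.load(f)

entries = har["log"]["entries"]
total_bytes = 0
by_type = Counter()
for entry in entries:
    response = entry["response"]
    # Chrome records compressed wire size in "_transferSize"; fall back to bodySize.
    size = response.get("_transferSize") or max(response.get("bodySize", 0), 0)
    total_bytes += size
    mime = response["content"].get("mimeType", "unknown").split(";")[0]
    by_type[mime] += size

print(f"{len(entries)} requests, {total_bytes / 1e6:.1f} MB transferred")
for mime, size in by_type.most_common(5):
    print(f"{mime:30s} {size / 1e6:6.1f} MB")
```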

Key Discussion Points:

  • One commenter shared that their developers managed to hit 750MB per open website by pre-loading all videos on a page
  • Many discussed using browser developer tools with bandwidth throttling to test on slower connections
  • Some noted that the majority of the 49MB was video content (36MB), not just ads and trackers
  • Others pointed out that “savvy” web surfers are a rounding error and publishers target the masses who don’t understand the technical issues

Original Article: https://thatshubham.com/blog/news-audit


10. How far can you go with IX Route Servers only? (19 points, 0 comments)

This technical blog post explores the question of how far you can route internet traffic using only Internet Exchange (IX) route servers. Route servers at internet exchanges help peers establish connections without having to configure individual bilateral peering sessions. The article examines the limitations of relying solely on route servers and discusses scenarios where this approach works versus where you’d need direct peering relationships.

The analysis is particularly relevant for network operators and organizations looking to optimize their routing infrastructure. By understanding the coverage and limitations of route servers, network engineers can make informed decisions about when to invest in direct peering versus relying on the shared infrastructure provided by internet exchanges.

Original Article: https://blog.benjojo.co.uk/post/how-far-can-you-get-with-ix-route-servers


History & Science

5. Electric motor scaling laws and inertia in robot actuators (53 points, 9 comments)

This article from Robot Daycare explores the fundamental scaling laws governing electric motors used in robot actuators, with a particular focus on how inertia affects performance as motors scale. The author analyzes how motor characteristics change with size and discusses the implications for designing robotic systems that need to achieve specific performance characteristics. This technical analysis is valuable for robotics engineers looking to optimize actuator design for specific applications.

The discussion of scaling laws helps understand fundamental limits and trade-offs in motor design. As motors get larger, the relationship between torque, speed, and inertia changes in predictable ways that can be described by scaling relationships. Understanding these relationships allows designers to make informed decisions about motor selection and sizing based on their application requirements, whether they’re building small manipulator arms or large humanoid robots.
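As a small numeric illustration of why inertia matters in actuator sizing (not taken from the article), the sketch below reflects a load inertia through different gear ratios and computes the classic inertia-matching ratio for a purely inertial load. The values are made up.

```python
# Reflected inertia through a gearbox: the motor "sees" the load inertia
# divided by the square of the gear ratio N.
J_motor = 2.0e-5   # rotor inertia, kg*m^2 (made-up value)
J_load = 8.0e-3    # joint/link inertia at the output, kg*m^2 (made-up value)

for N in (5, 10, 20, 40):
    reflected = J_load / N**2          # load inertia as seen by the motor
    total = J_motor + reflected
    print(f"N={N:3d}  reflected={reflected:.2e}  total at motor={total:.2e} kg*m^2")

# For a purely inertial load, the ratio that equalizes motor and reflected
# inertia (and maximizes load acceleration per unit motor torque) is:
N_match = (J_load / J_motor) ** 0.5
print(f"inertia-matching ratio ~ {N_match:.1f}")
```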

Original Article: https://robot-daycare.com/posts/actuation_series_1/


7. What every computer scientist should know about floating-point arithmetic (1991) [pdf] (37 points, 2 comments)

This classic paper by David Goldberg, originally published in ACM Computing Surveys in 1991, remains essential reading for anyone working with numerical computations. The paper provides a comprehensive overview of floating-point arithmetic, explaining how floating-point numbers are represented, the various rounding modes that can be used, and the implications for numerical accuracy in computations. It covers the IEEE 754 standard and provides practical examples of common pitfalls in floating-point calculations.

Despite being more than 30 years old, the paper’s content remains highly relevant. Modern processors still use the same fundamental floating-point representations described in the paper, and the same numerical accuracy issues continue to plague software that doesn’t account for floating-point precision issues. The paper is frequently referenced in computer science courses and is considered essential background for anyone doing serious numerical work.
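A few of the pitfalls Goldberg covers can be reproduced in any IEEE 754 environment; for example, in Python, which uses double-precision floats:

```python
# Classic floating-point pitfalls, reproduced in IEEE 754 double precision.
import math

print(0.1 + 0.2 == 0.3)          # False: neither 0.1 nor 0.2 is exactly representable
print(f"{0.1 + 0.2:.17g}")       # 0.30000000000000004

# Absorption near the limit of precision: at 1e16 the spacing between
# adjacent doubles is 2, so adding 1 does nothing.
x = 1e16
print((x + 1) - x)               # 0.0

# Naive summation accumulates rounding error; math.fsum compensates for it.
values = [0.1] * 10
print(sum(values), math.fsum(values))   # 0.9999999999999999 vs 1.0
```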

Original Article: https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf


22. Cannabinoids remove plaque-forming Alzheimer’s proteins from brain cells (2016) (99 points, 62 comments)

This 2016 research from the Salk Institute describes how cannabinoids can help remove amyloid beta, the plaque-forming protein associated with Alzheimer’s disease, from brain cells. The study found that cannabinoids such as THC and CBD appear to clear these toxic proteins by stimulating the cellular waste disposal system. This suggests potential therapeutic applications for cannabinoids in treating or preventing Alzheimer’s disease.

The research represents an important finding in the field of neurodegenerative disease research, as amyloid beta accumulation is a hallmark of Alzheimer’s pathology. By demonstrating a mechanism by which cannabinoids might help clear these proteins, the study opens new avenues for therapeutic development. However, it’s important to note that this is early research and much more work would be needed to translate these findings into effective treatments for humans.

Key Discussion Points:

  • Commenters discussed the state of research since this study was published and whether follow-up studies have validated the findings
  • Some noted the complexity of cannabinoid chemistry and how different compounds may have different effects
  • Others pointed out the challenges of drug development and the long timeline from basic research to approved treatments
  • Several discussed the societal and regulatory challenges around researching cannabis-derived compounds

Original Article: https://www.salk.edu/news-release/cannabinoids-remove-plaque-forming-alzheimers-proteins-from-brain-cells/


23. A Visual Introduction to Machine Learning (2015) (347 points, 30 comments)

This interactive visual introduction to machine learning from r2d3.us provides an intuitive, visual approach to understanding core machine learning concepts. Through interactive visualizations, the site helps readers understand concepts like linear regression, classification, overfitting, and the bias-variance tradeoff without requiring mathematical background. The visualizations allow users to see how algorithms behave and how changing parameters affects outcomes, building intuition for machine learning fundamentals.

Despite being published in 2015, this resource remains popular as an introduction to machine learning. Its visual approach makes it accessible to beginners who might be intimidated by more mathematical treatments. The interactive nature allows for experimentation and exploration, helping readers develop intuition that they can then apply when working with actual machine learning algorithms and datasets.
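The overfitting the site visualizes can be reproduced in a few lines with scikit-learn. The sketch below (unrelated to r2d3’s own code) shows training accuracy pulling away from test accuracy as a decision tree is allowed to grow deeper.

```python
# Overfitting illustration: deeper trees memorize the training set while
# test accuracy stops improving (or degrades).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 3, 10, None):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={depth!s:>4}  "
          f"train acc={clf.score(X_train, y_train):.2f}  "
          f"test acc={clf.score(X_test, y_test):.2f}")
```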

Key Discussion Points:

  • Many commenters recommended this as a great starting point for beginners
  • Some noted that while it’s old, the fundamental concepts haven’t changed
  • Others suggested complementary resources for diving deeper into the mathematical foundations
  • Several shared stories of using this resource to teach ML concepts to students or colleagues

Original Article: https://r2d3.us/visual-intro-to-machine-learning-part-1/


26. Learning athletic humanoid tennis skills from imperfect human motion data (142 points, 29 comments)

Researchers have demonstrated a method for teaching humanoid robots athletic tennis skills using imperfect human motion data as training examples. The LATENT project addresses the challenge that perfect motion capture data is expensive and difficult to obtain, while imperfect data from YouTube videos and other sources is abundant. By developing algorithms that can learn robustly from noisy, imperfect examples, the system can acquire complex athletic movements without requiring high-quality motion capture.

The work represents progress in robot learning, particularly in the domain of humanoid robotics where acquiring complex motor skills is challenging. By showing that robots can learn from imperfect human demonstrations, this research opens up possibilities for learning from the vast amounts of human motion data available on the internet. The tennis demonstrations show the robot performing swings and movements that closely approximate human tennis technique, learned from video examples rather than precise motion capture.

Original Article: https://zzk273.github.io/LATENT/


28. A Plain Anabaptist Story: The Hutterites (38 points, 3 comments)

This article provides a detailed look at the Hutterites, a communal Anabaptist group that has maintained traditional ways of life for centuries. The Hutterites live in communities called colonies, sharing property and labor in accordance with their religious beliefs. They are known for their distinctive dress, German dialect, and commitment to communal living with all things held in common.

The piece explores Hutterite history, beliefs, and current way of life, providing insight into a community that has successfully maintained its traditions while adapting to modern technological and economic realities. Unlike some other Anabaptist groups like the Amish, Hutterites typically embrace certain modern technologies when they can be integrated without violating their core beliefs about community and separation from worldly concerns.

Original Article: https://ulmer457718.substack.com/p/a-plain-anabaptist-story-the-hutterites


Academic & Research

6. Lies I was told about collaborative editing, Part 2: Why we don’t use Yjs (35 points, 15 comments)

This blog post from Moment discusses why they chose not to use Yjs, a popular CRDT-based library, for their collaborative editing needs. The author explains various assumptions and promises they were told about CRDTs and Yjs specifically, and how those didn’t match their actual experience. The post provides valuable insight into the trade-offs and challenges of implementing collaborative editing systems in real applications.

The discussion covers technical details about performance, memory usage, and the complexity of debugging CRDT-based systems. The author also discusses alternative approaches they considered or tried, providing a nuanced view of the collaborative editing landscape. This technical discussion is valuable for anyone building real-time collaborative applications and evaluating different technology choices.
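For context on what the post is weighing, here is a toy CRDT (a grow-only counter) in Python. It is not how Yjs works internally, but it illustrates the property that makes CRDTs attractive for collaborative editing: replicas can be updated independently and merged in any order, always converging to the same state.

```python
# Toy grow-only counter CRDT: each replica tracks its own count, and merging
# takes the element-wise maximum, so merges are commutative and idempotent.
class GCounter:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1) -> None:
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

a, b = GCounter("node-a"), GCounter("node-b")
a.increment(3)
b.increment(5)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 8 8: both replicas converge
```

Real text CRDTs are far more involved (they must order character insertions, handle deletions, and manage metadata growth), which is where the performance and debugging concerns discussed in the post come from.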

Original Article: https://www.moment.dev/blog/lies-i-was-told-pt-2


13. The Linux Programming Interface as a university course text (73 points, 6 comments)

The Linux Programming Interface (TLPI) by Michael Kerrisk is being adopted as a textbook for university courses on Linux systems programming. The book, which comprehensively covers Linux system programming from file I/O to processes and threads, provides a thorough foundation for students learning to develop applications that directly interact with the Linux kernel and system libraries. Its adoption as course material reflects its status as the definitive reference for Linux programming.

The availability of TLPI as course material is significant for computer science education, as it provides students with a comprehensive resource that covers both the basics and advanced topics in Linux system programming. The book’s practical examples and thorough explanations make it suitable for classroom use, and its completeness as a reference means students can continue to use it throughout their careers as Linux developers.

Original Article: https://man7.org/tlpi/academic/index.html


Business & Industry

18. What makes Intel Optane stand out (2023) (196 points, 140 comments)

This 2023 analysis explores what made Intel’s Optane memory technology unique and why it failed to achieve widespread market adoption despite its technical merits. Optane combined aspects of DRAM and flash memory, offering the persistence of flash at speeds much closer to DRAM, with densities and price points between the two technologies. The article explains the technical innovations that made Optane possible and the market challenges it faced.

Optane represented a significant technical achievement in memory technology, using Intel’s 3D XPoint technology to create a new class of non-volatile memory. However, it struggled to find a clear market position, being too expensive to replace NAND flash but not dense enough to replace DRAM. The article discusses how memory hierarchies work and why Optane’s positioning between existing memory tiers made it difficult to justify in most applications, despite its technical advantages.

Key Discussion Points:

  • Commenters discussed the difficulty of introducing new memory technologies that don’t slot cleanly into existing tiers
  • Some shared experiences using Optane and noted its impressive performance when it could be justified
  • Others noted that the software ecosystem didn’t fully adapt to take advantage of Optane’s characteristics
  • Several debated whether Optane’s failure was market-driven or if Intel could have positioned it differently

Original Article: https://blog.zuthof.nl/2023/06/02/what-makes-intel-optane-stand-out/


19. The emergence of print-on-demand Amazon paperback books (132 points, 97 comments)

This article examines the emergence and impact of print-on-demand paperback books on Amazon, framing it as part of the “enshittification” of digital platforms. The author argues that print-on-demand has flooded Amazon’s marketplace with low-quality books, making it harder for readers to find quality content. The economics of print-on-demand mean that essentially anyone can publish a book without upfront costs, leading to massive quantity with variable quality.

The piece explores how this affects readers, authors, and the publishing ecosystem more broadly. While print-on-demand lowers barriers to entry and allows niche authors to publish work that traditional publishers would reject, it also creates discoverability challenges and potential quality issues. The author situates this within broader concerns about how digital platforms’ incentives can lead to degraded quality and user experience over time.

Key Discussion Points:

  • Commenters debated whether print-on-demand is overall positive or negative for the literary ecosystem
  • Some noted that quality filtering and discovery have always been challenges in publishing
  • Others discussed parallels to other markets where lowered barriers lead to floods of low-quality content
  • Several shared experiences trying to find quality books on Amazon amid the print-on-demand flood

Original Article: https://www.alexerhardt.com/en/enshittification-amazon-paperback-books/


26. Nasdaq’s Shame (281 points, 93 comments)

This article from Keubiko criticizes Nasdaq’s handling of the recent trading disruptions and technical issues that have plagued the exchange. The author argues that Nasdaq, as one of the world’s premier stock exchanges, should have better systems and processes in place to prevent or quickly resolve trading halts and other technical problems. The piece details specific incidents and criticizes the exchange’s transparency and communication during these events.

The criticism touches on broader concerns about the reliability and resilience of critical financial infrastructure. As trading has become increasingly electronic and algorithmic, the stakes of exchange reliability have grown. The article questions whether Nasdaq is investing sufficiently in its technical infrastructure and operational processes, and whether regulatory oversight is adequate to ensure exchanges maintain the reliability that markets depend on.

Key Discussion Points:

  • Commenters shared frustration with trading halts and the impact on their portfolios and trading strategies
  • Some defended Nasdaq, noting the complexity of running a global exchange and the inevitability of occasional issues
  • Others discussed the incentives exchanges face to invest in reliability versus other priorities
  • Several debated whether exchanges should be regulated more strictly as critical infrastructure

Original Article: https://keubiko.substack.com/p/nasdaqs-shame


System Administration

12. Separating the Wayland compositor and window manager (267 points, 124 comments)

This blog post discusses the architectural decision to separate the Wayland compositor and window manager functionality into separate components. The author explains the benefits of this separation, including clearer separation of concerns, the ability to mix and match different compositors and window managers, and improved modularity. The River project serves as an example of this architectural approach in practice.

The discussion covers technical details about how Wayland compositors and window managers interact, and what happens when these responsibilities are separated. The author explores both the technical challenges and the philosophical differences between the monolithic approach and the separated approach. This technical discussion is valuable for Linux desktop developers and anyone interested in the architecture of modern display servers.

Key Discussion Points:

  • Many commenters discussed their experiences with different Wayland compositors and window managers
  • Some noted the historical evolution from X11’s monolithic architecture to Wayland’s more modular approach
  • Others debated whether the separation is worth the additional complexity
  • Several shared their preferences for different combinations of compositors and window managers

Original Article: https://isaacfreund.com/blog/river-window-management/


27. Bandit: A 32bit baremetal computer that runs Color Forth [video] (51 points, 2 comments)

Bandit is a 32-bit bare-metal computer designed to run Color Forth, a variant of the Forth programming language. The video demonstrates the computer and its capabilities, showing how it boots directly into a Forth environment without an operating system. This minimalist approach represents an interesting exploration of alternative computing architectures and programming paradigms.

Color Forth is an unusual dialect of Forth that uses colors as part of its syntax, providing a visual dimension to the programming experience. Running on bare metal means the system boots directly into this programming environment, with Color Forth serving as both the programming language and the operating system interface. This represents a minimalist computing philosophy that strips away layers of abstraction to provide direct, immediate interaction with the machine.

Original Article: https://www.youtube.com/watch?v=HK0uAKkt0AE


Other

21. Bus travel from Lima to Rio de Janeiro (152 points, 60 comments)

This travelogue chronicles an epic bus journey from Lima, Peru to Rio de Janeiro, Brazil, crossing multiple South American countries. The author describes the experience of long-distance bus travel in South America, including the logistics, the border crossings, the changing landscapes, and the people encountered along the way. The piece provides a vivid picture of this mode of travel that remains common in many parts of the world but would be unfamiliar to many Western travelers.

The journey highlights the diversity of South America and the interconnectedness of the continent through its bus networks. Unlike air travel that flies over the landscape, bus travel provides ground-level exposure to the geography, cultures, and daily life of the regions crossed. The article serves as both a practical travel resource and an engaging narrative about the experience of slow travel.

Original Article: https://kenschutte.com/lima-to-rio-by-bus/


That’s it for this morning’s Hacker News brief! Stay tuned for more updates throughout the day.