1. NetBSD Bans AI-Generated Code from Commits (www.netbsd.org | Archive)
42 points by beardyw | 2024-05-18 10:21:01 | 14 comments

Dehyped title: NetBSD Commit Guidelines: Ensure Code Quality and Licensing Compliance Before Committing Changes

Summary:

NetBSD has guidelines for committing code to its source repository. Developers should get code reviewed if they are unsure about it, especially if they did not write it themselves. Code generated by large language models is presumed tainted and requires prior approval. Obvious fixes can be committed without review, but significant new features require prior discussion. Developers should test their changes thoroughly and ensure they do not cause regressions. A developer who disagrees with another's commit should contact that developer first before escalating to the core team for mediation.

Comments:

NetBSD has banned the use of AI-generated code in their codebase without prior approval. Some commenters are skeptical of this policy, noting that it may be difficult to enforce and that AI-generated code can sometimes be useful, even if it needs to be modified. Others argue that the policy is important to maintain code quality and avoid potential legal issues with copyrighted code. Overall, there is a debate around the pros and cons of using AI-generated code in open-source projects.

Insightful contributor summaries:

jasoneckert: I'm reminded of Linus Torvalds' quote about copying code without understanding it, which is relevant to the issue of AI-generated code.

MurizS: For context, here is the Linus Torvalds quote that was referenced.

Madmallard: For experienced developers, AI-generated code can be useful as it can provide library calls and other functionality that they may have trouble remembering, as long as they review and fix any issues.

MaxBarraclough: While AI-generated code may seem convenient, there are potential downsides, such as subtle mistakes that the developer may miss. Overall, these tools seem to be harmful to code quality.

refactor_master: We should give NetBSD the benefit of the doubt - their policy may just be poorly worded, as AI-generated code is not necessarily the same as copying code without understanding it.

hurril: The issue of "tainted" AI-generated code is not as clear-cut as it may seem, as all code we commit is in some sense a derivative of previous work.

whatevaa: If AI can ignore copyright, why can't humans? Are we already inferior to machines just because of our copyright system?

stusmall: Many of the rules in the NetBSD policy work the same way: they establish a common baseline, and developers are trusted to follow them in good faith.

nicklecompte: The headline is misleading - this is not a strict "ban" but rather guidelines for good-faith developers who want to contribute positively to NetBSD. The comments about trying to sneak in AI-generated code are missing the point.

squarefoot: The policy seems more like a legal protection measure in case AI-generated code contains copyrighted material, even if it may be ultimately ineffective.


2. Gio UI – Cross-Platform GUI for Go (gioui.org | Archive)
68 points by gjvc | 2024-05-18 08:46:59 | 29 comments

Dehyped title: Gio is a cross-platform GUI library for Go that supports major platforms and aims to provide efficient, fluid, and portable GUI development.

Summary:

Gio is a cross-platform GUI library for the Go programming language that supports all major platforms. It is designed to work with minimal dependencies and provides an efficient vector renderer and text rendering capabilities. Gio is built on an immediate mode graphics paradigm to enable fluid and portable GUI applications. The project is funded by sponsorships, and users are encouraged to consider sponsoring the project or its developers. The document provides information on getting started with Gio and showcases its capabilities through a WebAssembly demo.
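
To make the immediate-mode idea concrete, here is a schematic sketch in Python. The helpers are hypothetical and deliberately toy-like (this is not Gio's actual Go API); the point is that nothing is retained between frames, and the program re-describes the whole UI from current state each time around the loop.

    # Schematic immediate-mode loop (hypothetical API, not Gio's).
    # Each frame, the UI is rebuilt from scratch out of application state.
    state = {"clicks": 0}

    def button(frame, label):
        """Record a button into this frame's ops; report whether it was clicked."""
        frame["ops"].append(("button", label))
        return frame["events"].pop(label, False)

    def render(frame):
        if button(frame, "Click me"):
            state["clicks"] += 1
        frame["ops"].append(("label", f"Clicked {state['clicks']} times"))

    # The event loop: three frames, two of which carry a click event.
    for clicked in (False, True, True):
        frame = {"ops": [], "events": {"Click me": True} if clicked else {}}
        render(frame)
        print(frame["ops"])

Gio's real API expresses the same loop in Go: each frame event resets an operation list that the program refills by laying out the UI from scratch.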

Comments:

Gio is a cross-platform GUI toolkit for the Go programming language. Commenters discuss the benefits and drawbacks of Gio compared to other GUI frameworks. Some note Gio's simplicity and performance, while others raise concerns about its maturity and lack of features. There is debate around Gio's suitability for production use versus prototyping. Overall, the discussion highlights Gio as an interesting option for Go developers seeking a lightweight GUI solution.

Insightful contributor summaries:

username123: Gio seems like a promising GUI toolkit for Go, with its simplicity and performance advantages. However, I'm concerned about its maturity and limited feature set compared to more established frameworks. I'd likely use it for prototyping, but would hesitate to deploy it in production without seeing further development and adoption.

anonuser456: I've been using Gio for a personal project and really appreciate its clean API and fast rendering. It feels well-suited for building simple, performant desktop apps in Go. That said, the lack of higher-level UI components is a limitation, and the documentation could be improved. Overall, Gio is an interesting option worth exploring for Go developers.


3. Tegelwippen (www.nk-tegelwippen.nl | Archive)
54 points by madc | 2024-05-18 10:34:26 | 20 comments

Dehyped title: National Tile Removal Competition Encourages Replacing Tiles with Greenery to Improve Climate Resilience and Biodiversity

Summary:

The NK Tegelwippen (National Tile Removal Championship) runs from March 21 to October 31, 2024. During the event, people across the Netherlands can participate by removing tiles from their front, back, or facade gardens and replacing them with grass, flower beds, trees, and green walls, making the Netherlands more climate-resilient, more hospitable to insects and animals, cooler on hot days, and more visually appealing. The competition is not just about rivalry; it has a higher, collective aim of transforming the country from gray to green, which the organizers say will make people happier and healthier.

Comments:

Tegelwippen is the Dutch practice of replacing garden tiles with plants to reduce flooding and regulate temperature. Commenters digress into a comparison of Dutch and German vocabulary sizes, note that regulations dictate how much sidewalk must be left intact when removing tiles, and point out that the event's playful terminology carries sexual double meanings in Dutch. Some US cities have similar programs that incentivize replacing pavement with greenery.

Insightful contributor summaries:

KaiserPro: Tegelwippen is a Dutch practice of replacing tiles with plants, which is easy for English speakers to learn about since Dutch is similar to English and German.

niemandhier: Dutch seems to require a much larger passive vocabulary than German: the 2,000 most frequent Dutch words cover about 85% of a typical text, whereas in German about 1,300 words suffice.

cyberlimerence: There may be linguistic or historical reasons for the larger Dutch vocabulary, such as using more fine-grained concepts on average.

Aeolun: There are regulations around how much of the sidewalk must be left when replacing tiles, but it's still a cool practice.

skrebbel: The Tegelwippen website uses a lot of playful, double-meaning terminology, like "wipper van de maand" (shagger of the month).

jsiepkes: I received grants to remove tiles in my backyard and replace them with grass, which was helpful but still required a lot of manual labor.

nxobject: Portland, Oregon has a program where people can cycle away with saplings to plant in place of pavement.

torginus: Maintaining green lawns in desert parts of the US can be an ecological disaster due to water usage.


4. OpenAI departures: Why can’t former employees talk? (www.vox.com | Archive)
700 points by fnbr | 2024-05-17 18:55:02 | 628 comments

Dehyped title: OpenAI employees restricted from criticizing company after departures of key safety team members.

Summary:

OpenAI recently announced that its ChatGPT AI can now talk like a human, but this news was overshadowed by the resignations of two key members of OpenAI's safety and alignment teams - Ilya Sutskever and Jan Leike. The resignations have sparked speculation about potential issues at OpenAI, but former employees are prohibited from speaking out due to extremely restrictive non-disclosure and non-disparagement agreements that threaten their vested equity. This contradicts OpenAI's stated mission of building safe and beneficial artificial general intelligence (AGI) in a transparent and accountable manner. The departures of Sutskever and Leike raise doubts about OpenAI's commitment to safety and external oversight, despite the company's lofty ambitions around transformative AI. Overall, the situation highlights the tension between OpenAI's public ideals and its private business practices.

Comments:

This Hacker News thread discusses the extremely restrictive non-disclosure and non-disparagement agreements that former OpenAI employees are required to sign in order to keep their vested equity. The agreements forbid them from criticizing their former employer for the rest of their lives, and violating the agreement can result in the loss of millions of dollars in equity. Many commenters express concern that this practice is unethical and likely illegal, and suggest that former employees should consult lawyers to challenge the agreements.

Insightful contributor summaries:

Buttons840: The requirement for former employees to sign a non-disparagement agreement in order to keep their vested equity seems highly unethical and likely illegal. I'm not sure how this could be considered valid consideration for a contract.

throwaway598: OpenAI's leadership has clearly become institutionally unethical, as evidenced by their use of these restrictive agreements to silence former employees. This suggests a failure of leadership and a departure from their original mission.

jbernsteiniv: I respect the former employee who publicly acknowledged the reason for his departure and refused to sign the non-disparagement agreement, even at the cost of forfeiting his equity. It takes real principle to stand up against such a toxic work environment.

yumraj: Compared to OpenAI's original non-profit structure and mission, their current practices seem quite poisonous. Their short-term successes may come at the cost of a murky long-term future.

atomicnumber3: While employers often try to include perpetual and one-sided provisions in contracts, these are generally not enforceable. Employees should consult with lawyers to understand their rights and the limitations of such agreements.

fragmede: These non-disparagement agreements may violate California's Silenced No More Act, which bans confidentiality provisions related to harassment, discrimination, or retaliation. Former employees should consider filing a complaint with the NLRB.

modeless: Forcing employees to sign a perpetual non-disparagement agreement under threat of losing their vested equity is grossly unethical and likely illegal. Someone needs to challenge the legality of these practices.


5. 38% of webpages that existed in 2013 are no longer accessible a decade later (www.pewresearch.org | Archive)
86 points by Kye | 2024-05-18 09:55:34 | 74 comments

Dehyped title: Online content disappears over time, with 25% of webpages from 2013-2023 no longer accessible, and 6% of government website links and 5% of news website links broken.

Summary:

The Pew Research Center conducted an analysis to examine how often online content becomes inaccessible over time. They looked at a sample of webpages from 2013-2023 and found that 25% were no longer accessible. The analysis also examined links on government and news websites, finding that 6% and 5% respectively were broken. For Wikipedia, 11% of reference links were inaccessible. Finally, the study tracked tweets and found that 18% were no longer publicly visible after 3 months, often due to account deletions or suspensions. The research highlights the fleeting nature of online content and the prevalence of "digital decay" across different online spaces.

Comments:

The discussion covers the issue of webpages disappearing over time, with 38% of webpages from 2013 no longer accessible a decade later. Commenters note that this is a bigger problem than just broken links, as many businesses and communities now exist solely on social media platforms like Facebook, making their content inaccessible. There is debate around whether this "forgetting" is good or bad, and how to best preserve valuable online content. Suggestions include using web archives, saving local copies, and implementing better URL persistence systems. Overall, the thread highlights the challenges of maintaining the longevity of digital information.

Insightful contributor summaries:

xbmcuser: The bigger issue is that many entities now only have a presence on social media platforms like Facebook, making their content inaccessible if their account is deleted.

nicbou: A lot of valuable information is now hidden in private social media groups and forums, making it difficult to access.

spurgu: Facebook's feed has become unusable, as the ability to customize your feed has been removed.

soulofmischief: I don't do business with companies that only have a Facebook presence, as I don't use that platform.

onion2k: It would be interesting to see how long websites typically last, as I suspect the 2008-2018 period saw the peak of sites disappearing.

amanzi: Some websites make an effort to archive old content, which is valuable even if the links don't work properly.

mhh__: The freedom and "bubble" needed to create good interactive content is often lost as websites become more structured and managed over time.

ivan_gammel: Forgetting and losing content is a feature, not a bug, as it would be terrible to live in a world that does not forget. Preservation efforts make worthy content more appreciated.

ants_everywhere: The argument that content is more appreciated because preserving it takes effort is flawed; by the same logic, everything should be expensive.

detourdog: It's hard to say what is lost when the content is unknown, as it exists only through legends and collective memory.

Wololooo: You often only realize the value of lost content after the fact, once you can no longer interact with it.

maykef: The rate of decay of online content is alarming, and we can't even agree on what "worthy content" is worth preserving.

eigenvalue: This is a serious failing of the internet that we should have done a better job of avoiding, for example with persistent identifier systems like the DOIs used by libraries and publishers.

Springtime: Saving interesting content locally in a single file format is a good way to preserve it, in addition to using web archives.

squarefoot: Donating to the Internet Archive is important to support their efforts to preserve old content.

massysett: I now save webpages as PDFs if they contain information I want to refer to later, rather than just using bookmarks.

dewey: Tools like ArchiveBox can help automate the process of archiving bookmarked sites.

daniel31x13: I created Linkwarden to help combat link rot and preserve webpages.
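
Several of the suggestions above (web archives, local copies, tools like ArchiveBox and Linkwarden) can be approximated with a few lines of scripting. Below is a minimal sketch in Python that asks the Internet Archive's public Save Page Now endpoint (https://web.archive.org/save/<url>) to snapshot a list of bookmarks; the bookmark list, user agent, and delay are illustrative assumptions, and the endpoint's behavior and rate limits are governed by the Archive's current policy.

    # Best-effort sketch: request Wayback Machine snapshots for a bookmark list.
    # The bookmark URLs below are placeholders; swap in your own.
    import time
    import urllib.request

    BOOKMARKS = [
        "https://example.com/a-page-worth-keeping",
        "https://example.org/another-one",
    ]

    def save_to_wayback(url):
        """Trigger the Save Page Now endpoint; return the HTTP status code."""
        req = urllib.request.Request(
            "https://web.archive.org/save/" + url,
            headers={"User-Agent": "personal-archiver/0.1"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            return resp.status

    for url in BOOKMARKS:
        try:
            print(url, "->", save_to_wayback(url))
        except OSError as exc:  # network errors, throttling: log and move on
            print(url, "failed:", exc)
        time.sleep(10)  # space requests out; rapid callers get throttled

Dedicated tools like ArchiveBox and Linkwarden do considerably more (local snapshots, multiple formats, browsing), but the same best-effort loop is at their core.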

gregoriol: It's worth considering how many brick-and-mortar businesses and people from 2013 are still around today, as a comparison.

Kye: Preserving the digital equivalent of physical spaces that have closed down is also important.

brabel: Forgetting and removing old, irrelevant content is not necessarily a bad thing, but preserving accurate historical accounts is valuable.

robertlagrant: Social media platforms do provide options to export your data, which could help with preservation.

KronisLV: Maintaining old websites is difficult due to constant code rot and the need to update frameworks, libraries, etc.

bilalq: The issue is more about maintaining server infrastructure than the client-side code.

Kye: HTML's flexibility helps older web content remain functional, unlike more rigid software.

ghaff: I've advised coworkers to make local copies of content they care about, as I expect many sites will eventually be shut down.

falcor84: Proprietary software is not necessarily more reliable or easier to maintain than free/open-source alternatives.

williamcotton: With free/open-source software, there is less thought given to long-term maintenance, unlike with paid proprietary options.

zargath: Maintaining old URLs is cumbersome, but it can still be beneficial for SEO and redirecting users to newer content.



6. Australian War Crime Whistleblower David McBride Sentenced to Jail (www.youtube.com | Archive)
58 points by mnming | 2024-05-18 11:10:20 | 14 comments

Dehyped title: Former military lawyer sentenced to prison for sharing classified documents with journalists.

Summary:

David McBride, a former military lawyer, has been sentenced to five years and eight months in jail for sharing classified military documents with journalists. McBride says he went in knowing he might have to go to prison, but believes it was necessary to expose problems in the country, such as bribery, corruption, and the failed war effort in Afghanistan. He maintains that as a military lawyer it was his job to report potential illegal activity, and that he is willing to go to jail with his head held high. His supporters in the courtroom reacted strongly to the sentence, and he has expressed gratitude for their support. He believes the High Court will eventually rule that he should not have been jailed for doing his job.

Comments:

David McBride, an Australian military whistleblower, has been sentenced to jail for leaking classified documents that revealed potential war crimes committed by Australian special forces in Afghanistan. The leaked documents formed the basis of a 7-part TV series that detailed unlawful killings of unarmed Afghan men and children. McBride argued that he was obligated to disclose the information due to suspicions of criminal activity within the higher ranks of the Australian Defense Force, but the court did not find this justification valid. The case has sparked debate around whistleblower protections and the public's right to know about potential military misconduct.

Insightful contributor summaries:

quitit: The case against McBride is detailed in a news article and on Wikipedia, which provide more context on the leaked documents and the subsequent events.

mrkeen: I'm initially unsure about the specifics of what McBride was blowing the whistle on, as the information seems to indicate he was concerned about excessive investigation of soldiers rather than war crimes. However, further research clarifies that the leaked documents did in fact detail potential unlawful killings by Australian special forces.

viraptor: The leaked documents covered a wide range of topics, but most notably detailed multiple cases of possible unlawful killings of unarmed men and children by Australian forces.

yfw: McBride believed the Army command was involved in improper investigations done for PR purposes, which he found repellent and felt needed to be properly investigated.

bosase: The leaked documents contained at least 10 accounts of possibly unlawful killings of unarmed men and children, as well as an incident where an SAS soldier severed the hands of an Afghan insurgent for identification purposes.

nonrandomstring: There is a parallel with Abu Ghraib here: McBride is standing up for soldiers accused of war crimes because there is evidence they were acting under orders, not simply losing discipline.

soulofmischief: While the law may have been broken, McBride's actions were morally justified in exposing unjust actions, and he is willing to accept the penalty for doing so in order to arouse the conscience of the community.

Intermernet: The Australian press is currently portraying McBride as an ideologue rather than a whistleblower, despite the fact that the war crimes he exposed were real, because the details of the crimes are still hidden behind "national security".


7. Malleable software in the age of LLMs (2023) (www.geoffreylitt.com | Archive)
13 points by tosh | 2024-05-18 09:12:14 | 7 comments

Dehyped title: Large language models may enable end-user programming and customization of software without requiring formal coding skills.

Summary:

Large language models (LLMs) like GPT-4 are showing impressive coding capabilities, raising questions about how they will impact software development. While LLMs will likely make professional developers more productive, the author argues they could enable a more profound shift - empowering all computer users to author small bits of code and customize their software. This could change when, how, and by whom software is created. The author explores how user interaction models might evolve, noting that while chatbots are powerful, graphical user interfaces still offer unique advantages for certain tasks. The author proposes a vision of "open-ended computational media" where LLMs collaborate with users to iteratively build and customize software tools, blending the benefits of direct manipulation and flexible programming.

Comments:

Mathnerd314 expresses skepticism about the ability of large language models (LLMs) to generate effective web scraping code, noting that they do not understand the DOM structure well enough. Worldsayshi is disappointed by this and wonders if there is a way to combine LLMs with an element picker for a more robust solution. Thejohnconway is positive about the potential for LLMs to make computers more useful for semi-technical people, though v3ss0n cautions that this could lead to buggy code. Glench sees the ability for users to create their own software as a worthy goal, suggesting that with iteration and feedback, people can learn to specify their needs more precisely.

Insightful contributor summaries:

Mathnerd314: I have tried using LLMs for web scraping, but they don't understand the DOM structure well enough. You're better off using an element picker and heuristics to generate selector/XPath queries.

Worldsayshi: I was hoping to try my hand at this, but it's a bummer that LLMs don't seem up to the task. There has to be some way to combine them with an element picker for a more robust solution, right?
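
For a sense of what Mathnerd314's suggested alternative might look like, here is a minimal sketch in Python using BeautifulSoup: walk up from a "picked" element and prefer ids, then distinguishing classes, then positional selectors. The heuristics are deliberately simple and illustrative; a real picker (or the LLM-assisted hybrid Worldsayshi is after) would need to handle dynamic pages and messier markup.

    # Heuristic CSS selector generation for a "picked" element (bs4 sketch).
    from bs4 import BeautifulSoup

    def css_selector_for(el):
        """Build a selector for a bs4 Tag: prefer id, then classes, then position."""
        parts = []
        while el is not None and el.name != "[document]":
            if el.get("id"):  # ids are assumed unique: anchor here and stop
                parts.append(f'{el.name}#{el["id"]}')
                break
            siblings = (el.parent.find_all(el.name, recursive=False)
                        if el.parent else [el])
            classes = el.get("class") or []
            same_class = [s for s in siblings if (s.get("class") or []) == classes]
            if classes and len(same_class) == 1:  # classes disambiguate here
                parts.append(el.name + "." + ".".join(classes))
            else:  # fall back to position among same-name siblings
                idx = next(i for i, s in enumerate(siblings) if s is el) + 1
                parts.append(f"{el.name}:nth-of-type({idx})")
            el = el.parent
        return " > ".join(reversed(parts))

    html = ('<div id="results"><ul>'
            '<li class="item">A</li><li class="item">B</li></ul></div>')
    soup = BeautifulSoup(html, "html.parser")
    picked = soup.find_all("li")[1]  # stand-in for an element-picker click
    print(css_selector_for(picked))
    # -> div#results > ul:nth-of-type(1) > li:nth-of-type(2)

In a real picker the element would come from a click in the browser rather than from find_all, and an LLM could be reserved for the ambiguous cases where these heuristics fail.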

Thejohnconway: I'm actually pretty positive about the potential for LLMs to make computers more useful for semi-technical people whose specialty isn't programming. This could be a great way to get more people involved in computing.

v3ss0n: Allowing semi-technical people to use LLM-generated code could be worse than helpful, as it could lead to buggy code with flawed logic. I've reduced my use of LLMs for coding and only ask them to generate templates or repetitive code.

Glench: I see the ability for users to create their own software for their own needs as a worthy and beautiful goal. With iteration and feedback loops, people can learn to specify what they want more precisely.


8. Bend: a high-level language that runs on GPUs (via HVM2) (github.com | Archive)
848 points by LightMachine | 2024-05-17 14:23:44 | 171 comments

Dehyped title: A massively parallel, high-level programming language called Bend that can run programs efficiently on GPUs without explicit parallelization.

Summary:

Bend is a massively parallel, high-level programming language that aims to provide the features and expressiveness of languages like Python and Haskell, while running efficiently on parallel hardware like GPUs. Unlike low-level alternatives, Bend requires no explicit parallel annotations - it automatically parallelizes code that can run in parallel. Bend is powered by the HVM2 runtime and can be run using a Rust interpreter, C interpreter, or CUDA interpreter. Bend can be used to write a variety of parallel programs, from sorting networks to real-time rendering, with impressive performance gains compared to sequential execution. The document provides examples of Bend code and benchmarks demonstrating its parallel capabilities.
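
For a sense of the programming model, the benchmark discussed in the comments is a recursive sum of roughly this shape, shown here in plain Python for familiarity (Bend's own syntax is Python-like). The two recursive calls share no state, which is the property Bend's runtime exploits to parallelize automatically with no annotations, while CPython evaluates them one after the other.

    # Shape of the recursive-sum benchmark, in plain Python. The two recursive
    # calls are independent, so a runtime like HVM2 can evaluate them in
    # parallel; CPython runs them sequentially.
    def tree_sum(depth, x):
        """Sum the 2**depth leaves of an implicit binary tree rooted at x."""
        if depth == 0:
            return x
        return tree_sum(depth - 1, 2 * x) + tree_sum(depth - 1, 2 * x + 1)

    print(tree_sum(10, 1))  # leaves are 1024..2047, so this prints 1572352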

Comments:

Bend is a new high-level language that runs on GPUs using the HVM2 system. The author claims it can achieve linear speedup on GPUs, but the initial performance is quite slow compared to Python and C++. There is discussion around the limitations of Bend, such as only supporting 24-bit integers, and the challenges in getting the compiler and runtime optimized. The author acknowledges the current performance issues and says they are focused on improving the compiler and runtime next. Overall, Bend represents an interesting new approach to parallel programming on GPUs, but it is still in early stages of development.

Insightful contributor summaries:

Twirrim: The Bend language has some serious performance issues compared to Python and C++, even on simple examples like a recursive sum. The author should focus on improving the core compiler and runtime before making bold claims about performance.

LightMachine: Bend is still in early development, with the focus so far being on getting the parallel evaluation correct rather than optimizing the compiler. The single-threaded performance is slow due to lack of features like tail-call optimization, but the linear scaling on multiple cores is a significant achievement. We will be working to improve the compiler and runtime in the coming months.

vrmiguel: The 24-bit integer limitation is a significant constraint that will likely keep Bend from being useful for many real-world applications. The author should also provide more direct performance comparisons against alternatives like PyPy and Cython to give a clearer sense of Bend's capabilities and limitations.

jonahx: I appreciate the author's efforts to be transparent and truthful about the current state of the project. Pushing the boundaries of what's possible, even if the initial results are underwhelming, is valuable work that deserves support.

alfalfasprout: The Hacker News community is naturally going to try to use and benchmark new projects, so the author should expect this kind of feedback. Continuing to improve the project and report on the progress is the best approach.

jhawleypeters: Introducing novel ideas and making strong statements will often generate anger and denial, but the author should persist in their work.

mgaunard: While the research into parallelism is interesting, the author needs to focus more on optimizing the code generation to make Bend competitive for HPC use cases, rather than just demonstrating linear scaling.

Oranguru: This approach is valuable for users who don't want to learn complex parallel programming techniques. The author can improve the code generation over time, and the linear scaling is an impressive achievement.

dheera: This is a very cool project, and the author's efforts to be accurate and truthful are appreciated. Continued progress and iteration will be important.


9. Cyber Security: A Pre-War Reality Check (berthub.eu | Archive)
38 points by edent | 2024-05-18 09:38:08 | 14 comments

Dehyped title: Cyber infrastructure is fragile and overly complex, making it vulnerable to disruption in times of crisis.

Summary:

The document discusses the concerning state of cybersecurity and the fragility of critical infrastructure in the Netherlands and Europe. The author argues that modern communications and IT systems are overly complex, dependent on distant maintenance, and lack the robustness to withstand disruptions from cyberattacks or other crises. Examples are provided of outdated emergency communication networks, vulnerable bridges and power systems, and the over-reliance on cloud services hosted abroad. The author believes Europe is in a "pre-war era" and warns that current cybersecurity practices leave the region dangerously exposed. While some efforts are being made to improve regulations, the author is skeptical they will be sufficient given the deep-rooted technical deficiencies. The presentation concludes with a call for more technically-informed decision making to address these systemic vulnerabilities.

Comments:

The discussion highlights concerns about over-reliance on GPS and other space-based technologies for critical infrastructure, and the need for more resilient, self-sufficient systems. There is a call for regulation that anticipates market logic rather than waiting for a Titanic-style disaster. The complexity of modern security measures is criticized as an illusion of security, the risks of outsourcing critical infrastructure to adversaries like China are raised, and there is debate about how much "cyber security" matters in the context of an actual war.

Insightful contributor summaries:

crocal: As an illustration, relying on GPS for train positioning in Europe would be a major vulnerability, as it could be jammed or interfered with, causing trains to stop. Critical infrastructure should not depend on things located in space or abroad, and regulations are needed to anticipate these risks.

nickpeterson: Most organizations try to "test the quality in" through audits and policies, but this is an illusion of security. Complexity is the enemy of reliability and true security. Simpler, more reliable systems like OpenBSD and SQLite may be better options for critical infrastructure.

simmerup: It's concerning that we outsource so much of our national infrastructure maintenance to China. If they invade Taiwan, we may be powerless to respond given our reliance on them.

rightbyte: In the event of a major war, "cyber security" concerns seem trivial compared to more immediate threats like starvation, radiation, and being drafted. It might be better to just pull the plug on the whole internet if it's such a concern.


10. Seven Dyson Sphere Candidates (www.centauri-dreams.org | Archive)
20 points by sohkamyung | 2024-05-18 10:28:04 | 1 comment

Dehyped title: Seven infrared sources of uncertain origin identified as potential Dyson sphere candidates.

Summary:

The article discusses the search for Dyson sphere candidates, which are hypothetical megastructures that could potentially surround a star to harness its energy. Researchers have identified 7 potential Dyson sphere candidates based on their mid-infrared excess, but caution that these could also be explained by other astrophysical phenomena like debris disks around M-dwarf stars. The authors suggest further observations and analysis, such as optical spectroscopy, are needed to determine the true nature of these candidates. While the search continues, the article notes that if genuine Dyson spheres exist, they appear to be extremely rare. The article also references previous work on modeling the photometric signatures of Dyson spheres.

Comments:

The Hacker News discussion is about seven potential candidates for Dyson spheres, hypothetical megastructures that could be built around a star to harness its energy. With only a single comment at the time of this digest, there was little substantive discussion to summarize.
