Overview
Description
Statistics
- 3 Posts
- Totolink
- A8000RU
Fediverse
🚨 CRITICAL: Totolink A8000RU (7.1cu.643_b20200521) suffers from OS command injection (CVE-2026-7203). Remote, unauthenticated attackers can fully compromise affected routers. No patch confirmed — disable remote mgmt & isolate. https://radar.offseq.com/threat/cve-2026-7203-os-command-injection-in-totolink-a80-b3a02d32 #OffSeq #Vuln #IoTSec
💥 CVE-2026-7155: CRITICAL OS command injection in Totolink A8000RU (7.1cu.643_b20200521). Exploitable remotely, no auth needed. Disable remote mgmt & restrict access until patch. Details: https://radar.offseq.com/threat/cve-2026-7155-os-command-injection-in-totolink-a80-1189da9b #OffSeq #Vulnerability #CVE2026_7155 #IoTSecurity
@addison Great points on maintainability, security, and sustainability! Here are my thoughts on this.
First, the security issues. These come in two variants: an LLM introduces a bug into a library where none existed before, or an LLM faithfully translates buggy behavior from the original into the reimplemented library. IMO, the latter case is hard to fault the translator for, and an argument can be made that, for “load-bearing bugs”, the correct action isn’t so clear. My gut feeling is that the right thing to do in this case is to fix the bug in the original and then update/regenerate the translation.
The former case is by no means unique to LLMs. For example, (human-executed) Rust reimplementations of archiving utilities have introduced Zip Slip vulnerabilities such as CVE-2025-29787 or CVE-2025-68705. We tend to hold coding agents to a significantly higher standard than humans here (one I think they eventually _will_ reach anyway), but the question of who introduces more bugs in reimplementations is already far from a foregone conclusion.
This brings us to maintainability. Again, there are two issues here: first, that no one knows the generated code, and second, the question of updating it. I think that, regardless of our feelings about the matter, slopped code is here to stay. It already accounts for significant chunks of the open source code out there (https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point), and as these agents continue to improve, this share will only grow. We have, unfortunately, left the era in which a project’s developers collectively knew all of their code (although it can be argued that this was never true in the first place, given maintainer drift and so on).
The fact that this code is truly “write only” in that no human reads it at all takes this a bit further for sure. I’m not sure what the eventual implications of this are (such as https://dpc.pw/posts/i-dont-want-your-prs-anymore/), and it personally makes me sad, but I do think that code is somewhere on the path to becoming mostly an intermediate representation between specification and compilation. People used to write assembly, then in earlier days of compilers, they would sometimes hand-optimize compiler-produced assembly, but even this gradually stopped as compilers improved (e.g., the latest reference to this practice I can find is 2006 https://www.cs.fsu.edu/~whalley/papers/tecs06.pdf). We still learn assembly and the compilation process in Computer Organization in undergrad, and it’s important for some disciplines of Computer Science, but it’s definitely a somewhat niche topic. Source code seems to be on a similar trajectory.
Upgradeability is closely related to this. IMO, extending this “write-only” reimplementation with new features beyond what’s in the upstream library is a bad idea. Development should continue on the original library that the original developers are familiar with; the translation can then be fully regenerated on demand. This process exists already but is obviously wasteful. I don’t personally see big issues with translating diffs instead, though it could certainly be that I’m missing something. After all, this whole thing is experimental!
Finally, sustainability is a tricky one. There are a lot of pieces to this: fair use of training data, energy, brainrot, economic shockwaves, etc. That’s all hard to pick apart. But dispatching agents can be the right _technical_ solution to many tasks, and I personally don’t feel that properly using them is antithetical to the research process (for example, it can lead to MUCH better implemented and more reliable experiment harnesses).
Thanks again for taking the time to write your thoughts down; looking forward to more discussion!
📰 CISA Discovers 'FIRESTARTER' Backdoor on Federal Cisco Firewall; Malware Survives Patches
🔥 CISA finds new 'FIRESTARTER' backdoor on a federal agency's Cisco firewall. The malware survives patches and firmware updates, allowing persistent access. Exploited CVE-2025-20333 & CVE-2025-20362. #CyberSecurity #CISA #Backdoor #Cisco