NTLUG

Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX

Do a presentation at NTLUG.

What is the Linux Installation Project?

Real companies using Linux!

Not just for business anymore.

Providing ready to run platforms on Linux


LinuxSecurity - Security Advisories




  • Fedora 44 Xen Critical DoS Issues XSA-483 XSA-484 XSA-486 XSA-488
    oxenstored keeps quota-related use counts across domain destruction [XSA-483, CVE-2026-23556]; Xenstored DoS via XS_RESET_WATCHES command [XSA-484, CVE-2026-23557]; grant table v2 race in status page mapping [XSA-486, CVE-2026-23558]; x86: Floating Point Divider State Sampling [XSA-488, CVE-2025-54505]




LWN.net

  • Eden: NHS goes to war against open source
    Terence Eden reports that the UK's National Health Service (NHS) is preparing to close almost all of its open-source repositories as a response to LLM tools, such as Anthropic's Mythos, becoming more sophisticated at finding security vulnerabilities. He does not, to put it mildly, agree with the decision:

    The majority of code repos published by the NHS are not meaningfully affected by any advance in security scanning. They're mostly data sets, internal tools, guidance, research tools, front-end design and the like. There is nothing in them which could realistically lead to a security incident.

    When I was working at NHSX during the pandemic, we were so confident of the safety and necessity of open source, we made sure the Covid Contact Tracing app was open sourced the minute it was available to the public. That was a nationally mandated app, installed on millions of phones, subject to intense scrutiny from hostile powers - and yet, despite publishing the code, architecture and documentation, the open source code caused zero security incidents.

    Furthermore, this new guidance is in direct contradiction to the UK's Tech Code of Practice point 3 "Be open and use open source", which insists on code being open.


  • [$] Version-controlled databases using Prolly trees
    Modern databases and filesystems make pervasive use of B-trees, which are tree structures optimized for storing sorted lists of keys and values on block devices. Dolt is an Apache 2.0-licensed project that makes clever use of a variant of a B-tree to support efficient version control for an entire database. The data structure it uses could well be of interest to other projects.
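The core idea behind the B-tree variant mentioned here (a Prolly tree) is content-defined node boundaries. The toy sketch below is illustrative only, not Dolt's actual implementation; the hash-based boundary rule and the 4-key target node size are assumptions made for the example:

```python
import hashlib

def boundary(key: bytes, target_size: int = 4) -> bool:
    # Declare a node boundary when the key's hash falls below a threshold:
    # on average one key in `target_size` is a boundary, so node sizes are
    # probabilistic but depend only on the keys, not on edit history.
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return h < (1 << 32) // target_size

def chunk(keys):
    # Split a sorted key list into leaf nodes. Because each split decision
    # depends only on the key at that position, an insert or delete disturbs
    # at most the node it lands in -- the property that gives structural
    # sharing and cheap diffs between database versions.
    nodes, current = [], []
    for k in keys:
        current.append(k)
        if boundary(k):
            nodes.append(current)
            current = []
    if current:
        nodes.append(current)
    return nodes

keys = [f"row-{i:04d}".encode() for i in range(100)]
before = chunk(keys)
after = chunk(keys[:50] + [b"row-new"] + keys[50:])
# Nearly every node is identical before and after the insert, so the two
# versions of the tree can share storage.
shared = sum(1 for node in before if node in after)
```

A plain B-tree, by contrast, rebalances based on insertion order, so two logically identical databases can have very different trees; the history-independent splitting above is what makes version-to-version diffs efficient.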


  • Security updates for Friday
    Security updates have been issued by AlmaLinux (fence-agents), Debian (chromium, dovecot, and kernel), Fedora (chromium, dotnet10.0, dotnet8.0, dotnet9.0, emacs, glow, jfrog-cli, openbao, pyp2spec, python3.6, rust-rustls-webpki, vhs, and xen), Oracle (grafana, grafana-pcp, PackageKit, sudo, vim, and xorg-x11-server), Red Hat (rhc), SUSE (avahi, bouncycastle, chromium, container-suseconnect, firewalld, gdk-pixbuf, grafana, java-25-openjdk, kernel, libixml11, libmozjs-140-0, libpng12-0, libsodium, libssh, mariadb, Mesa, ntfs-3g_ntfsprogs, openCryptoki, openexr, packagekit, prometheus-postgres_exporter, python-jwcrypto, python-mako, python-Pygments, python-pynacl, python311, python311-pyOpenSSL, python315, radare2, sed, and vim), and Ubuntu (kmod and zulucrypt).


  • [$] Restartable sequences, TCMalloc, and Hyrum's Law
    Hyrum's Law states that any observable behavior of a system will eventually be depended upon by somebody. The kernel community is currently contending with a clear demonstration of that principle. The recent work to address some restartable-sequences performance problems in the 6.19 release maintained the documented API in all respects, but that was not enough; Google's TCMalloc library, as it turns out, violates the documented API, prevents other code from using restartable features, and breaks with 6.19. But the kernel's no-regressions rule is forcing developers to find a way to accommodate TCMalloc's behavior.


  • GCC 16.1 released
    Version 16.1 of the GNU Compiler Collection (GCC) has been released.
    The C++ frontend now defaults to the GNU C++20 dialect and the corresponding parts of the standard library are no longer experimental. Several C++26 features receive experimental support, including Reflection (-freflection), Contracts, expansion statements and std::simd.
    Other changes include the introduction of an experimental compiler frontend for the Algol68 language, the ability to output GCC diagnostics in HTML form, and more.



  • Seven new stable kernels for Thursday
    Greg Kroah-Hartman has released the 7.0.3, 6.18.26, 6.12.85, 6.6.137, 6.1.170, 5.15.204, and 5.10.254 stable kernels. The 7.0.3 and 6.18.26 kernels only contain fixes needed for Xen users; the others, though, have backported fixes for the recently disclosed AEAD socket vulnerability. Kroah-Hartman advises that all users of the other kernel series must upgrade.



  • Security updates for Thursday
    Security updates have been issued by AlmaLinux (buildah, firefox, gdk-pixbuf2, giflib, grafana, java-1.8.0-openjdk, java-21-openjdk, LibRaw, OpenEXR, PackageKit, pcs, python3.11, python3.12, python3.9, sudo, tigervnc, vim, xorg-x11-server, xorg-x11-server-Xwayland, yggdrasil, and yggdrasil-worker-package-manager), Debian (calibre, firefox-esr, and openjdk-17), Fedora (asterisk, binaryen, buildah, dokuwiki, lemonldap-ng, libexif, libgcrypt, miniupnpd, openvpn, podman, python3.9, rust-rpm-sequoia, skopeo, and xdg-dbus-proxy), Red Hat (buildah, gdk-pixbuf2, and nodejs:20), SUSE (dnsdist, libheif, openCryptoki, polkit, sed, and xen), and Ubuntu (linux-bluefield, python-marshmallow, and roundcube).


  • [$] LWN.net Weekly Edition for April 30, 2026
    Inside this week's LWN.net Weekly Edition:
    Front: Famfs; Python packaging council; Zig concurrency; pages and folios; Strawberry music manager; 7.1 merge window. Briefs: GnuPG 2.5.19; Copy Fail; Plasma security; Fedora 44; Ubuntu 26.04; Niri 26.04; pip 26.1; RIP Seth Nickell; RIP Tomáš Kalibera; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.


  • A security bug in AEAD sockets
    Security analysis firm Xint has disclosed a security bug in the Linux kernel that allows for arbitrary 4-byte writes to the page cache, and which has been present since 2017. The vulnerability has been fixed in mainline kernels. A proof-of-concept script demonstrates how to use the flaw to corrupt a setuid binary, which works on multiple distributions, by requesting an AEAD-encrypted socket from user space and splicing a particular payload into it. A supplemental blog post gives more details about the discovery and remediation.
    A core primitive underlying this bug is splice(): it transfers data between file descriptors and pipes without copying, passing page cache pages by reference. When a user splices a file into a pipe and then into an AF_ALG socket, the socket's input scatterlist holds direct references to the kernel's cached pages of that file. The pages are not duplicated; the scatterlist entries point at the same physical pages that back every read(), mmap(), and execve() of that file.


  • [$] Python packaging council approved
    The Python packaging world now has a formal governance council, of the form described in PEP 772 ("Packaging Council governance process"), which was approved by the steering council on April 16. It has been over a year since the PEP was first proposed in February 2025, and it has undergone lengthy discussions in multiple postings to the Python discussion forum. The packaging council will have "broad authority over packaging standards, tools, and implementations"; it will consist of five members who will be elected in a vote that is likely to come in June, after PyCon US 2026 is held in mid-May.


LXer Linux News


  • ESP-FLY micro drone kit offers ESP32-S3-based flight control and ESP-NOW support
    The ESP-FLY DIY Kit is a compact micro drone platform built around the Seeed Studio XIAO ESP32-S3, developed as a collaboration between Seeed Studio and Max Imagination. The kit targets educational and hobbyist use, combining a small airframe with wireless control options and a customizable firmware environment. The system is delivered as a DIY kit […]





  • Where to buy a non-Apple, non-Google smartphone
    Both Cupertino and Google are imposing ever stricter limits on their phones – but you have alternatives
    As both Apple and Google introduce unwelcome changes in their phone OSes, here's a quick reminder that you do have alternatives to the Gruesome Twosome.…







Slashdot

  • New Lithium-Plasma Engine Passes Key Mars Propulsion Test
    NASA engineers have tested a next-generation lithium-plasma electric propulsion system that reached 120 kilowatts, a new U.S. record and about 25 times the power of the electric thrusters on NASA's Psyche spacecraft. "Designing and building these thrusters over the last couple of years has been a long lead-up to this first test," said James Polk, who is a senior research scientist at NASA Jet Propulsion Laboratory. "It's a huge moment for us because we not only showed the thruster works, but we also hit the power levels we were targeting. And we know we have a good testbed to begin addressing the challenges to scaling up." Universe Today reports: While 120 kilowatts is a new record, NASA estimates that a future human mission to Mars will require 2 to 4 megawatts of power consisting of several thrusters and requiring more than 23,000 hours (958 days/2.6 years) of operation. To accomplish this, the thrusters would have to withstand more than 2,800 degrees Celsius (5,000 degrees Fahrenheit), which the thrusters achieved during testing. The extended operation time reflects the estimated duration of an entire human mission to Mars, approximately 2.6 years. This is because the launch window to Mars only opens once every two years due to the orbital behaviors of both planets. While no mission has ever returned from the Red Planet, this same launch window works from Mars to Earth, too. When launched within this window, robotic spacecraft have traditionally taken approximately 6-7 months to reach Mars. However, a human mission would require a much larger spacecraft to accommodate the astronauts, food, fuel, water, and other mission-essential items. For the approximate 2.6-year mission, this would entail approximately 6-9 months traveling to Mars, followed by approximately 18 months on the surface of Mars until the next launch window opens, then another approximate 6-9 months back to Earth.
However, having much less fuel due to the electric propulsion system could potentially alter this timeframe.
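The roughly two-year launch cadence follows directly from the two planets' orbital periods. As a quick sanity check of the arithmetic (the sidereal periods below are standard reference values, not figures from the article):

```python
# Earth-Mars launch windows recur once per synodic period:
# 1/S = 1/T_earth - 1/T_mars, with sidereal orbital periods in days.
T_EARTH = 365.25
T_MARS = 686.98
synodic_days = 1 / (1 / T_EARTH - 1 / T_MARS)
synodic_months = synodic_days / 30.44  # mean calendar month length
# ~780 days, i.e. about 26 months -- the "once every two years" window
# that forces the ~18-month surface stay and the ~2.6-year total mission.
```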


    Read more of this story at Slashdot.


  • Amazon Stuck With Months of Repairs After Drone Strikes On Data Centers
    An anonymous reader quotes a report from Ars Technica: Amazon's cloud customers will need to wait several more months before the US tech company can repair war-damaged data centers and restore normal operations in the Middle East. The announcement comes two months after Iranian drone strikes targeted three Amazon data centers in the United Arab Emirates and Bahrain -- meaning that full recovery from the cloud disruption could take nearly half a year in all. The Amazon Web Services (AWS) dashboard posted an April 30 update describing how its UAE and Bahrain cloud regions "suffered damage as a result of the conflict in the Middle East" and are unable to support customer applications. The update also said that "relevant billing operations are currently suspended while we restore normal operations" in a process that "is expected to take several months." That wording suggests Amazon will continue to avoid billing AWS customers in the affected regions -- ME-CENTRAL-1 and ME-SOUTH-1 -- after it initially waived all usage-related charges for March 2026 at an estimated cost of $150 million. AWS also "strongly" recommended that customers migrate resources to other cloud regions and rely on remote backups to restore any "inaccessible resources." Some customers, such as the Dubai-based super app Careem -- which offers ride-hailing, household services, and food and grocery delivery -- were able to get back online quickly after doing an overnight migration to other data center servers.


    Read more of this story at Slashdot.


  • Microsoft's Xbox Mode Is Now Available For All Windows 11 PCs
    Microsoft is rolling out Xbox mode to all Windows 11 PCs, bringing a full-screen Xbox PC app interface similar to Steam's Big Picture Mode. "Some players in select markets will be able to download the Xbox mode experience today, with availability expanding to more players in those markets over the next several weeks," says the Xbox team. The Verge reports: Xbox mode aims to try and bridge the gap between Xbox consoles and Windows, but its original debut felt like a beta on the Xbox Ally devices. "Since first introducing Xbox mode, formerly known as 'full screen experience,' on Windows handhelds, we've been listening closely to player feedback and continuing to evolve the experience across devices," says the Xbox team. "Those learnings directly shaped Xbox mode on Windows 11 PCs." Microsoft is also rolling out improvements to the Xbox Ally X handheld today, including a preview of its Auto SR upscaling technology. Xbox console owners are also getting a new dashboard update today, with the ability to disable Quick Resume on individual games and a feature to add custom colors to the dashboard.


    Read more of this story at Slashdot.


  • AI Agent Designed To Speed Up Company's Coding Wipes Entire Database In 9 Seconds
    joshuark shares a report from Live Science: An AI coding agent designed to help a small software company streamline its tasks instead blew a hole through its business in just nine seconds. PocketOS founder Jer Crane said that the AI coding agent Cursor -- powered by Anthropic's Claude Opus 4.6 model -- deleted the company's entire production database and backups with a single call to its cloud provider, Railway, on April 24. [...] "This isn't a story about one bad agent or one bad API [Application Programming Interface]," Crane wrote in an X post. "It's about an entire industry building AI-agent integrations into production infrastructure faster than it's building the safety architecture to make those integrations safe." Crane's company, PocketOS, makes software for car rental companies, handling tasks such as reservations, payments, customer records and vehicle tracking. After the deletion, Crane said customers lost reservations and new signups, and some could not find records for people arriving to pick up their rental cars. "We've contacted legal counsel," Crane wrote. "We are documenting everything." Crane explained that Cursor found an API token -- a "digital key" made of a short sequence of code that lets software talk to other services and prove it has permission to act -- in an unrelated file which it then used to run the destructive command. According to Crane, Railway's setup allowed the deletion without confirmation, and because the backups were stored close enough to the main database, they were also erased. "[Railway] resolved the issue and restored the data," Railway confirmed via email to Live Science. "We maintain both user backups as well as disaster backups. We take data very, VERY seriously." In his post, he pointed to earlier reports of Cursor ignoring user rules, changing files it was not supposed to touch and taking actions beyond the task it had been given.
To him, the database wipe was not a freak accident but the next step in a larger, more concerning pattern. After the database vanished, Crane asked Cursor to explain what happened. The AI agent reportedly admitted that it had guessed, acted without permission and failed to understand the command before running it. "I violated every principle I was given," the AI agent wrote. "I guessed instead of verifying. I ran a destructive action without being asked. I didn't understand what I was doing before doing it." The statement reads like a confession [...]. "We are not the first," Crane wrote. "We will not be the last unless this gets airtime."
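The "safety architecture" Crane says the industry is skipping can be as simple as a confirmation gate between the agent and any destructive infrastructure call. This is a hypothetical sketch; every name in it is illustrative and none of it is Cursor's or Railway's actual API:

```python
# All names here are illustrative -- not Railway's or Cursor's real APIs.
DESTRUCTIVE = {"delete_database", "drop_table", "delete_backup"}

class ConfirmationRequired(Exception):
    """Raised when a destructive operation lacks human sign-off."""

def guarded_call(operation: str, execute, confirmed: bool = False):
    # Gate destructive operations behind an explicit human confirmation
    # flag instead of letting an agent-held token run them directly.
    if operation in DESTRUCTIVE and not confirmed:
        raise ConfirmationRequired(
            f"{operation!r} is destructive and was not confirmed by a human"
        )
    return execute()

# An agent asking to wipe the database is refused by default...
try:
    guarded_call("delete_database", lambda: "db wiped")
    blocked = False
except ConfirmationRequired:
    blocked = True

# ...while read-only operations pass through untouched.
tables = guarded_call("list_tables", lambda: ["reservations", "payments"])
```

The other half of the fix the incident suggests is scoping: an agent's token should not carry deletion rights at all, and backups should live behind credentials the production token cannot reach.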


    Read more of this story at Slashdot.


  • Pentagon Reaches Agreements With Top AI Companies, But Not Anthropic
    The Pentagon says it has reached deals with seven AI companies -- SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, and AWS -- to deploy their tools on classified Defense Department networks. The odd one out is Anthropic, which remains excluded after being labeled a supply-chain risk amid a dispute over military-use guardrails. Reuters reports: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services (AWS), several of which already work with the Pentagon, will be integrated into its secret and top-secret network environments, providing more military access to their products for use on sensitive topics, the Pentagon said in a statement. The lesser-known Reflection AI, which raised $2 billion in October, is backed by 1789 Capital, a venture capital firm in which Donald Trump Jr. is a partner and investor. Since the Pentagon deemed Anthropic's products a "supply-chain risk" in March and the two sides became embroiled in a lawsuit, the military has expressed increasing interest in AI startups. Since the blow-up, newer AI entrants have said the military has sped up the process of incorporating them onto secret and top-secret data levels to less than three months. The process previously took 18 months or longer. By expanding AI services offered to troops, who use it for planning, logistics, targeting and in other ways to streamline huge operations and perform more quickly, the Pentagon said in its statement it will avoid "vendor lock," a likely nod to its overdependence on Anthropic or other dominant service providers. [...] AI has become increasingly important for the U.S. military. The Pentagon's main AI platform, GenAI.mil, has been used by over 1.3 million Defense Department personnel, the agency noted in its release, after five months of operation. Further reading: Google and Pentagon Reportedly Agree On Deal For 'Any Lawful' Use of AI


    Read more of this story at Slashdot.


  • ICANN Opens Applications For New Generic Top-Level Domains
    ICANN has opened applications for new generic top-level domains for the first time since 2012. The Register reports: ICANN hasn't offered new gTLDs since 2012, but on Thursday opened applications for new domains in 27 scripts. A 439-page Applicant Guidebook explains the process. The Register suggests paying attention to the string evaluation FAQ, which explains which gTLDs are valid, and those ICANN will likely frown upon. An FAQ describes this round of applications as giving "businesses, communities, and others the opportunity to apply for new top-level domains tailored to their community, culture, language, business, and customers." "A TLD can be a branding opportunity for a business, but the commercial opportunities are endless, allowing businesses in countries, entire sectors, or niche markets to develop a unique label on the Internet." ICANN also sees this round as a chance to "create a more multilingual Internet for the billions of people who speak and write in different languages and scripts and are yet to come online." If you fancy a gTLD, you'll need to pay a $227,000 application fee by August 12th ... and then wait, possibly until 2030 when this process ends.


    Read more of this story at Slashdot.


  • The Case Against an Imminent Software Developer Apocalypse
    ZipNada shares a report from ZDNet: Given the dour headlines as of late concerning the diminishing amounts of entry-level software development jobs, coupled with predictions of applications entirely AI-generated, one could be forgiven for assuming that software developers may soon be an endangered species. However, the data tells a different story. James Bessen, professor at Boston University, has been pushing back for some time against the talk of AI and automation displacing jobs on a mass scale, and lately has been arguing that the roles of software developers are nowhere near extinction. AI is certainly not killing the software developer, Bessen said in a recent analysis (PDF). AI is taking over software development tasks and boosting productivity and output, but that is not translating into lost jobs, he argued. Instead, the types of software skills sought by companies are changing. "Surprisingly, however, after three years of AI use, software developer jobs have continued to grow robustly, reaching record levels of employment -- 2.5 million in February," Bessen said in the report, citing data from the US Bureau of Labor Statistics. The number of software developers in the US has grown by over 400,000, or 19%, since ChatGPT was introduced in 2022. At that time, the employed software developer population was just under 2.1 million. [...] The productivity uptick developers are seeing may ultimately be a boost to their professional opportunities, however. "An important and possibly disruptive change is happening, but the common view misunderstands what is going on," Bessen pointed out in his report. "Careful case studies find that AI improves the productivity of software developers -- that is, the software produced per developer -- by 30%, 50%, or more. And the rate of productivity improvement in software development is improving." Tellingly, since 2022, when ChatGPT was introduced, developer productivity has increased noticeably, Bessen continued. 
"From 2003 to 2022, developer productivity grew at 3.9% per year; but from 2022 through 2025, it grew at 6% per year." [...] A coming flood of new software products, now more likely to be enhanced by AI, will continue to create jobs for developers, Bessen predicted. "Thus, mass unemployment of software developers seems unlikely to happen soon." This doesn't mean the job descriptions of developers or other computer occupations will remain static. AI is shifting and re-inventing these roles, Bessen added.
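Bessen's figures compound in a way worth making explicit. A quick check of the arithmetic, using only the numbers quoted above:

```python
# Productivity compounding at the quoted annual rates:
old_growth = 1.039 ** (2022 - 2003)  # roughly doubles over 19 years
new_growth = 1.06 ** (2025 - 2022)   # ~19% in just 3 years
# Headcount check: ~2.1 million developers in 2022 growing 19%
# lands close to the 2.5 million February figure cited from the BLS.
headcount = 2.1e6 * 1.19
```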


    Read more of this story at Slashdot.


  • GPT-5.5 Matches Heavily Hyped Mythos Preview In New Cybersecurity Tests
    An anonymous reader quotes a report from Ars Technica: Last month, Anthropic made a big deal about the supposedly outsize cybersecurity threat represented by its Mythos Preview model, leading the company to restrict the initial release to "critical industry partners." But new research from the UK's AI Security Institute (AISI) suggests that OpenAI's GPT-5.5, which launched publicly last week, reached "a similar level of performance on our cyber evaluations" as Mythos Preview, which the group evaluated last month. Since 2023, the AISI has run a variety of frontier AI models through 95 different Capture the Flag challenges designed to test capabilities on cybersecurity tasks, such as reverse engineering, web exploitation, and cryptography. On the highest-level "Expert" tasks, GPT-5.5 passed an average of 71.4 percent, slightly higher than the 68.6 percent achieved by Mythos Preview (though within the margin of error). In one particularly difficult task that involved building a disassembler to decode a Rust binary, AISI notes that "GPT-5.5 solved the challenge in 10 minutes and 22 seconds with no human assistance at a cost of $1.73" in API calls. GPT-5.5 also matched Mythos Preview in its progress on "The Last Ones" (TLO), an AISI test range set up to simulate a 32-step data extraction attack on a corporate network. GPT-5.5 succeeded in 3 of 10 attempts on TLO, compared to 2 of 10 for Mythos Preview -- no previous model had ever succeeded at the test even once. But GPT-5.5 still fails at AISI's more difficult "Cooling Tower" simulation of an attempted disruption of the control software for a power plant, as every previously tested AI model also has. The new results for GPT-5.5 suggest that, when it comes to cybersecurity risk, Mythos Preview was likely not "a breakthrough specific to one model" but rather "a byproduct of more general improvements in long-horizon autonomy, reasoning, and coding," AISI writes.


    Read more of this story at Slashdot.


  • Spotify Adds 'Verified' Badges To Distinguish Human Artists From AI
    Spotify is adding "Verified by Spotify" badges to distinguish human artists from AI-generated personas, using signals like linked social accounts, consistent listener activity, merchandise, and concert dates. The BBC reports: The world's most-used music streaming service said the 'Verified by Spotify' text and green checkmark icon would appear next to artist names when they meet "defined standards demonstrating authenticity." This could include having linked social accounts on their artist profile, consistent listener activity or other "signals of a real artist behind the profile," the company said, such as merchandise or concert dates. In its blog post, Spotify said "more than 99%" of the artists listeners actively search for will be verified, representing "hundreds of thousands of artists." It said the process would prioritize acts with "important contributions to music culture and history", rather than "content farms," with the platform rolling out verification and badges over the coming weeks.


    Read more of this story at Slashdot.


  • Hackers Are Actively Exploiting a Bug In cPanel, Used By Millions of Websites
    Hackers are actively exploiting a critical cPanel and WHM vulnerability, tracked as CVE-2026-41940, that allows remote attackers to bypass the login screen and gain full administrative access to affected web servers. Major hosts including Namecheap, HostGator, and KnownHost have taken mitigation steps or patched systems, but cPanel is urging all customers and web hosts to update immediately because the software is widely used across millions of websites. TechCrunch reports: cPanel and WHM are two software suites used for managing web servers that host websites, manage emails, and handle important configurations and databases needed to maintain an internet domain. The two suites have deep-access to the servers that they manage, allowing a malicious hacker potentially unrestricted access to data managed by the affected software. Given the ubiquity of the cPanel and WHM software across the web hosting industry, hackers could compromise potentially large numbers of websites that haven't patched the bug. Canada's national cybersecurity agency said in an advisory that the bug could be exploited to compromise websites on shared hosting servers, such as large web hosting companies. The agency said that "exploitation is highly probable" and that immediate action from cPanel customers, or their web hosts, is necessary to prevent malicious access. [...] One web hosting company says it found evidence that hackers have been abusing the vulnerability for months before the attempts were discovered.


    Read more of this story at Slashdot.


The Register


  • ServiceNow under siege as Atlassian adds to ITSM take-outs
    CEO Mike Cannon-Brookes touts 'largest ever quarter for competitive displacements'
    The chase is on. Atlassian reported its largest-ever quarter for taking share from a major IT service management provider, CEO Mike Cannon-Brookes said on the company's fiscal third-quarter earnings call Thursday, escalating its rivalry with ServiceNow.…




  • Where to buy a non-Apple, non-Google smartphone
    Both Cupertino and Google are imposing ever stricter limits on their phones – but you have alternatives
    As both Apple and Google introduce unwelcome changes in their phone OSes, here's a quick reminder that you do have alternatives to the Gruesome Twosome.…


  • CIOs ready for another role-change as AI becomes agent of chaos
    If software writes software, the risk is “systematic failure at scale”. Someone needs to take charge, argues Forrester
    Forrester predicts that by decade's end, the rush toward agentic AI will grow so chaotic that CIOs will be forced into a new role as enforcer of order.…


  • That old phone in the kitchen drawer could save an industry
    Users have less cash to burn and less patience for AI in new models... now where to get the used stock
    Secondhand phones sales are booming - relatively speaking - and the industry has rising inflation, AI bloat, and consumers' growing apathy toward overpriced new handsets to thank for it.…





Polish Linux

  • Security: Why Linux Is Better Than Windows Or Mac OS
    Linux is a free and open source operating system, developed and released by Linus Torvalds in 1991. Since its release it has reached a user base that is widespread worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and […]


  • Essential Software That Are Not Available On Linux OS
    An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all […]


  • Things You Never Knew About Your Operating System
    The advent of computers has brought about a revolution in our daily life. From computers so huge that they filled an entire room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails, […]


  • How To Fully Optimize Your Operating System
    Computers and systems are tricky and complicated. If you lack a thorough knowledge or even basic knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure […]


  • The Top Problems With Major Operating Systems
    There is no system that does not give you any problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be […]


  • 8 Benefits Of Linux OS
    Linux is a small and fast-growing operating system. However, we can't term it as software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Now, kernels are used for software and programs. These kernels are used by the computer and can be used with various third-party software […]


  • Things Linux OS Can Do That Other OS Can't
    What Is Linux OS?  Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason why the Linux operating system is preferred by many is that it is easy to use and re-use. A Linux-based operating system is technically not an Operating System. Operating […]


  • Packagekit Interview
    PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pains it takes to manage a system. Along with this, in an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or […]


  • What’s New in Ubuntu?
    What Is Ubuntu? Ubuntu is open source software. It is useful for Linux-based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here […]


  • Ext3 Reiserfs Xfs In Windows With Regards To Colinux
    The problem with Windows is that it cannot natively read Linux filesystems, so there is only so much you can do with it. You can access ext3, ReiserFS, and XFS partitions by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to “TAP Win32 Adapter …
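For context, a coLinux setup of that era was driven by a small plain-text configuration file. The sketch below is illustrative only: the file name, image paths, and exact option spellings are assumptions that varied between coLinux versions, so check the documentation that ships with your release.

```ini
# Hypothetical coLinux configuration; adjust every path to your own install
kernel=vmlinux                   # coLinux kernel image shipped with the tool
cobd0="c:\coLinux\root_fs"       # block device backed by a Linux filesystem image
root=/dev/cobd0                  # boot the Linux side from that image
mem=256                          # RAM (in MB) given to the Linux side
eth0=tuntap                      # network via the "TAP Win32 Adapter" mentioned above
```

With networking up, the Windows host can then reach the ext3/ReiserFS/XFS data over the virtual network link.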


OSnews

  • Email is crazy
    Email is like those creaking old Terminators from the ’70s which continue to function without complaining. Designed for a world that doesn’t exist anymore, it has optional encryption, no built-in auth, three-plus retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day. Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension. ↫ Saurabh “Sam” Khawase The fact that email is as complicated as it is is bad enough, but having it be so dominantly controlled by only a few large gatekeepers like Google and Microsoft surely isn’t helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever, in that we just get to send messages, and that’s it. Running your own mail server isn’t only a complex endeavour, it’s also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don’t end up on some shitlist and your emails stop arriving. I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it’s such a daunting and unpleasant effort few people seem to have the stomach and perseverance for it.
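Much of that cat-and-mouse game plays out in DNS: without SPF, DKIM, and DMARC records, the big providers will route a self-hosted server’s mail straight to spam. A minimal sketch of the three TXT records involved (domain, selector name, and key are placeholders):

```
; SPF: only this domain's MX hosts may send mail for example.org
example.org.                  IN TXT "v=spf1 mx -all"

; DKIM: public key for signatures made with the "mail" selector (key truncated)
mail._domainkey.example.org.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."

; DMARC: ask receivers to quarantine failures and mail aggregate reports
_dmarc.example.org.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

Even with all three in place, the reputation of the sending IP still matters, which is exactly the gatekeeping problem described above.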


  • The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS
    What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by “AI” scrapers? I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed. I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that week wants to suck all the content out of my VPS ONCE MORE until it’s dry. ↫ lux at VulpineCitrus So how much traffic did the author of this piece, lux, get from “AI” scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that roughly 1 out of every 2000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane. If, at this point in time, with everything that we know about just how deeply unethical every single aspect of “AI” is, you’re still using and promoting it, what is wrong with you? If you’re so addicted to your “AI” girlfriend’s unending stream of useless, forgettable, sycophantic slop, despite being aware of the damage you’re doing to those around you, there’s something seriously wrong with you, and you desperately need professional help. You don’t need any of this. The world doesn’t need any of this. Nobody likes the slop “AI” regurgitates, and nobody likes you for enabling it. Get help.
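The headline ratio is easy to sanity-check: roughly two million unique IPv4 clients measured against the full 2^32 IPv4 address space comes out near one in two thousand. A quick back-of-the-envelope check (figures taken from the article; reserved and unrouted ranges are ignored, which is why the result lands slightly above 2000):

```python
# Sanity-check the "1 in every 2000 public IPv4" claim.
unique_clients = 2_040_670        # unique IPs seen within 24 hours
ipv4_share = 0.98                 # fraction of those that were IPv4
ipv4_space = 2 ** 32              # all IPv4 addresses, reserved ranges included

ipv4_clients = unique_clients * ipv4_share
ratio = ipv4_space / ipv4_clients
print(f"about 1 in every {ratio:.0f} IPv4 addresses")
```

Excluding reserved and unannounced address space shrinks the pool and pushes the figure even closer to the article’s one-in-2000.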


  • Earliest 86-DOS and PC-DOS code released as open source
    Microsoft is continuing its efforts to release early versions of DOS as open source, and today we’ve got a special one. We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS. The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed. ↫ Stacey Haffner and Scott Hanselman It’s wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.


  • Apple gives up on Vision Pro, disbands Vision Pro team
    When Apple unveiled the Vision Pro, almost three (!) years ago, I concluded: If there’s one company that can convince people to spend $3500 to strap an isolating dystopian glowing robot mask onto their faces it’s Apple, but I still have a hard time believing this is what people want. ↫ Thom Holwerda at OSNews (quoting myself is weird) MacRumors’ Juli Clover, today: Apple has all but given up on the Vision Pro after the M5 model failed to revitalize interest in the device, MacRumors has learned. Apple updated the Vision Pro with a faster M5 chip and a more comfortable band in October 2025, but there were no other hardware changes, and consumers still weren’t interested. Apple has apparently stopped work on the Vision Pro and the Vision Pro team has been redistributed to other teams within Apple. Some former Vision Pro team members are working on Siri, which is not a surprise as Vision Pro chief Mike Rockwell has been leading the Siri team since March 2025. ↫ Juli Clover at MacRumors VR (which is what the Vision Pro is, whether Apple’s marketing likes to say it or not) has proven to be good for exactly two things: games and porn. The Vision Pro has neither. It was destined to be a flop from the start, as nobody wants to strap an uncomfortable computer to their face that does less than all of the other computers they already have, and what it does do, it does worse. I do wonder if this makes the Vision Pro the most expensive flop in human history. Has any company ever spent more on a product that failed this spectacularly?


  • Apple wants to kill your Time Capsule, but they run NetBSD so they can’t
    It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB as its default network file-sharing technology. This change shouldn’t impact most people, as it’s highly unlikely you’re using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple’s Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 having been removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable. It’s important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line’s availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth-generation models came with up to 3TB of storage, which can still serve as a solid NAS solution. Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it’s trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that. If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (show up automatically in the “Network” folder on macOS), and accept authenticated SMB3 connections from macOS. 
You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple’s legacy stack. You should also be able to use the disk for Time Machine backups. ↫ TimeCapsuleSMB It’s compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you’ll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don’t and won’t work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4. This whole saga is such an excellent example of why open source software protects users’ rights, by design.


  • Dillo 3.3.0 released
    Dillo is an amazing web browser for those of us who want their web browsing experience to be calmer and less flashy. Dillo also happens to be a very UNIX-y browser, and their latest release, 3.3.0, underlines that. A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable or for a unique Dillo process if not set. ↫ Dillo 3.3.0 release notes You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page’s contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I’m sure some of you who live and die in the terminal are already thinking of all the possibilities here. You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implements a fix specifically to make OAuth work properly.


  • Ubuntu is going to integrate “AI”, but Canonical remains vague about the how and why
    Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the “AI” bandwagon, and Jon Seager, Canonical’s VP Engineering, published a blog post with more details. Throughout 2026 we’ll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it. Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration. ↫ Jon Seager at Ubuntu Discourse The problem with this entire post is that, much like all other corporate communications about “AI”, it’s all deceptively vague, open-ended, and weaselly. Adjectives like “focused”, “principled”, “thoughtful”, and “tasteful” don’t really mean anything, and leave everything open for basically every type of slop “AI” feature under the sun. Their claims about open weights and open source models are also weakened by words like “favour” and “where possible”, again leaving the door wide open for basically any shady “AI” company’s models and features to find their way into your default Ubuntu installation. There’s also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There are mentions of improved text-to-speech/speech-to-text and text regurgitators, but that’s about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical. I don’t really feel like I know a lot more about Canonical’s “AI” intentions for Ubuntu after reading this post than I did before, other than that Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?


  • If 64bit Windows 11 contains a copy of 32bit explorer.exe, could you run it as its shell?
    Raymond Chen published a blog post about how a crappy uninstaller on Windows caused a mysterious spike in the number of Explorer (Windows graphical shell) crashes. It turns out the buggy uninstaller caused repeated crashes in the 32bit version of Explorer on 64bit systems, and... hold on a minute. The how many bits on the what now? The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it’s because some other program is using it to do some dirty work. ↫ Raymond Chen at The Old New Thing So I had no idea that 64bit Windows included a copy of the 32bit Explorer for backwards compatibility. It obviously makes sense, but I just never stopped to think about it. This made me wonder, though, if you could go nuts and do something really dumb: could you somehow trick 64bit Windows into running this 32bit copy of Explorer as its shell? You’d be running 32bit Explorer on 64bit Windows using the 32bit WoW64 binaries where you just pulled the 32bit Explorer binary from, which seems like a really nonsensical thing to do. Since there are no longer any 32bit builds of Windows 11, you also can’t just copy over the 32bit Explorer from a 32bit Windows 11 build and achieve the same goal that way, so you’d really have to go digging around in WoW64 to get 32bit versions. I guess the answer to this question depends on just how complete this copy of 32bit Explorer really is, and whether Windows has any defenses or triggers in place to prevent someone from doing something this uselessly stupid. Of course, there’s no practical reason to do any of this and it makes very little sense, but it might be a fun hacking project. 
Most likely the Windows experts among you are wondering what kind of utterly deranged new designer drug I’m on, but I was always told that sometimes, the dumbest questions can lead to the most interesting answers, so here we are.
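For anyone tempted to actually try this: the program Windows launches as the user shell at logon is controlled by the Winlogon “Shell” registry value, so the experiment would presumably begin with something like the .reg sketch below. This is entirely untested; whether the 32bit Explorer survives being launched this way is exactly the open question, and you should expect to have to undo the change from Safe Mode.

```
Windows Registry Editor Version 5.00

; The Winlogon "Shell" value names the program started as the user's shell.
; SysWOW64 holds the 32bit binaries on 64bit Windows, including explorer.exe.
[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
"Shell"="C:\\Windows\\SysWOW64\\explorer.exe"
```

Deleting the per-user “Shell” value restores the default 64bit Explorer shell defined under HKEY_LOCAL_MACHINE.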


  • 8087 emulation on 8086 systems
    Not too long ago I had a need and an opportunity to re-acquaint myself with the mechanism used for software emulation of the 8087 FPU on 8086/8088 machines. ↫ Michal Necasek Look, when a Michal Necasek article starts out like this, you know you’re in for a learnin’ ol’ time. The 8087 was a floating-point coprocessor for the 8086 and 8088 processors, since back in those early days, processors did not include an integrated floating-point unit. It wouldn’t be until the release of the 486DX, in 1989, that Intel would integrate an FPU inside the processor itself, negating the need for a separate chip and socket. Interestingly enough, Intel also released a cut-down version of the 486 with the FPU removed, the 486SX, for which an optional external FPU did exist.


  • How hard is it to open a file?
    Sebastian Wick has a great explanation of why opening files programmatically is a lot more complex and fraught with danger than you might think. This issue was relevant for Wick as he is one of the lead developers of Flatpak, for which a number of security issues have recently been discovered, and it just so happens that many of these issues dealt with this very topic. The biggest security issue found was a complete sandbox escape, originating from the fact that flatpak run, the command-line tool to start a Flatpak application, accepted path strings, since flatpak run is assumed to be run by a trusted user. The problem lay in a D-Bus service sandboxed applications could use to create subsandboxes, and this service was built around, you guessed it, flatpak run. The issues in question, including this complete sandbox escape, have been addressed and fixed, but they highlight exactly the dangers that can come from opening files. This subsandboxing approach in Flatpak is built on assumptions from fifteen years ago, and times have changed since then. If you’re a programmer who deals with opening files, you might want to take a look at your own code to see if similar issues exist.
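This class of bug usually comes down to trusting a path string that can be redirected between check and use. As a minimal illustration of the defensive pattern (a generic sketch, not Flatpak’s actual fix): anchor the lookup to a directory file descriptor and refuse symlinks with O_NOFOLLOW. The flags below are POSIX/Linux.

```python
import os

def open_in_dir(dir_path: str, name: str) -> int:
    """Open `name` inside `dir_path` without following symlinks.

    Holding the directory fd and passing dir_fd= anchors the lookup:
    a concurrent rename of dir_path cannot redirect us, and O_NOFOLLOW
    turns a symlink planted at `name` into an error instead of a
    silent escape from the directory.
    """
    if os.sep in name:                     # refuse anything but a bare filename
        raise ValueError("expected a single path component")
    dfd = os.open(dir_path, os.O_RDONLY | os.O_DIRECTORY)
    try:
        return os.open(name, os.O_RDONLY | os.O_NOFOLLOW, dir_fd=dfd)
    finally:
        os.close(dfd)
```

A planted symlink then fails at open time (typically ELOOP) instead of silently reading a file outside the directory.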


Linux Journal - The Original Magazine of the Linux Community

  • Canonical Unveils Ubuntu AI Strategy: Local Models, User Control, and Smarter Workflows
    by George Whittaker
    Canonical has officially revealed its long-anticipated plans to bring artificial intelligence features into Ubuntu, marking a significant shift for one of the world’s most widely used Linux distributions. Rather than rushing into the AI wave, Canonical is taking a measured, privacy-focused approach, one that aims to enhance the operating system without compromising its open-source values.

    The rollout is expected to take place gradually throughout 2026, with early features likely appearing in upcoming Ubuntu releases.
    A Gradual, Thoughtful AI Rollout
    Canonical isn’t positioning Ubuntu as an “AI-first” operating system. Instead, the company is introducing AI in stages, focusing on practical improvements rather than hype-driven features.

    The plan follows a two-phase model:
    - Implicit AI features: enhancements running quietly in the background
    - Explicit AI features: user-facing tools and workflows powered by AI
    This approach allows Ubuntu to evolve naturally, improving existing functionality before introducing more advanced capabilities.
    Local AI First, Not the Cloud
    One of the most important aspects of Canonical’s strategy is its emphasis on local AI processing, also known as on-device inference.

    Instead of sending data to remote servers, Ubuntu will aim to:
    - Run AI models directly on the user’s hardware
    - Reduce reliance on cloud services
    - Improve privacy and performance
    Canonical has made it clear that local inference will be the default, with cloud-based options available only when explicitly chosen by the user.

    This aligns closely with the privacy expectations of Linux users, who often prefer greater control over their data.
    What AI Features Could Look Like
    Canonical has outlined several potential use cases for AI inside Ubuntu. These include:
    Accessibility Improvements
    AI will enhance tools like:
    - Speech-to-text
    - Text-to-speech
    - Assistive technologies
    These features aim to make Ubuntu more inclusive and easier to use for a wider range of users.
    Smarter System Assistance
    Future AI features may help users:
    - Troubleshoot system issues
    - Interpret logs and error messages
    - Automate repetitive tasks
    This could significantly lower the learning curve for new Linux users.
    Agent-Based Automation
    Canonical is also exploring “agentic” AI workflows, where AI can take actions on behalf of the user.

    Examples include:
    Go to Full Article


  • Thunderbird 150 Lands on Linux: Smarter Encryption, Better Tools, and a Polished Experience
    by George Whittaker
    Mozilla has officially rolled out Thunderbird 150.0, the latest version of its open-source email client, bringing a mix of security-focused enhancements, usability upgrades, and workflow improvements for Linux and other platforms. Released in April 2026, this update continues Thunderbird’s steady evolution as a powerful desktop email solution.

    For Linux users, Thunderbird 150 delivers meaningful updates that improve both everyday usability and advanced email handling, especially for encrypted communication.
    Stronger Support for Encrypted Email
    One of the standout improvements in Thunderbird 150 is how it handles encrypted messages.

    Users can now:
    - Search inside encrypted emails (OpenPGP and S/MIME)
    - Generate “unobtrusive” OpenPGP signatures that appear cleaner to recipients
    These changes make encrypted communication far more practical, especially for users who rely on secure email for work or privacy-sensitive tasks.
    New Productivity and Workflow Features
    Thunderbird 150 introduces several small but impactful workflow improvements:
    - A new Account Hub opens automatically on first launch, simplifying setup
    - Recent Destinations in settings can now be sorted alphabetically
    - Address book entries can be copied as vCard files
    - A new custom accent color option allows interface personalization
    These updates make Thunderbird easier to configure and more flexible to use daily.
    Improved Built-In PDF Viewer
    Thunderbird’s integrated PDF viewer gets a useful upgrade: users can now reorder pages directly within the viewer.

    This is particularly helpful for:
    - Managing attachments without external tools
    - Editing documents quickly before sending
    - Streamlining email-based workflows
    Combined with ongoing security fixes, the PDF viewer becomes both more capable and safer.
    Calendar and Interface Enhancements
    Several improvements focus on usability and accessibility:
    - Calendar views now support touchscreen scrolling
    - Fixed issues with calendar layouts and navigation
    - Better screen reader support and accessibility fixes
    - General UI refinements across the application
    These changes contribute to a smoother, more consistent user experience across devices.
    Bug Fixes and Stability Improvements
    Thunderbird 150 also resolves a wide range of issues, including:
    Go to Full Article


  • Linux Kernel 6.19 Reaches End of Life: Time to Move Forward
    by George Whittaker
    The Linux kernel continues its fast-paced release cycle, and with that comes an important milestone: Linux kernel 6.19 has officially reached end of life (EOL). For users and distributions still running this branch, it’s now time to upgrade to a newer kernel version.

    This isn’t unexpected; Linux 6.19 was never intended to be a long-term release. But it does serve as a reminder of how quickly non-LTS kernel branches move through their lifecycle.
    Official End of Support
    The final update in the 6.19 series, Linux 6.19.14, has been released and marked as the last maintenance version. Kernel maintainer Greg Kroah-Hartman confirmed that no further updates will follow, stating that the branch is now officially end-of-life.

    On kernel.org, the 6.19 series is now listed as EOL, meaning it will no longer receive bug fixes or security patches.
    Why 6.19 Had a Short Lifespan
    Unlike some kernel releases, Linux 6.19 was not a long-term support (LTS) version. Short-lived kernel branches are typically supported for only a few months before being replaced by newer releases.

    Linux follows a rapid development model:
    - New major versions are released frequently
    - Short-term branches receive limited updates
    - Only selected kernels are designated as LTS for extended support
    Because of this, 6.19 was always meant to be a stepping stone rather than a long-term foundation.
    What Users Should Do Now
    With 6.19 no longer maintained, continuing to use it poses risks, especially in environments where security and stability matter.

    Recommended upgrade paths include:
    Upgrade to Linux 7.0
    The most direct path forward is the Linux 7.0 kernel series, which succeeds 6.19 and introduces new hardware support and ongoing fixes.

    This is a good option for:
    - Desktop users
    - Rolling-release distributions
    - Users who want the latest features
    Switch to an LTS Kernel
    For production systems, servers, or long-term stability, moving to an LTS kernel is often the better choice.

    Current LTS options include:
    - Linux 6.18 LTS (supported until 2028)
    - Linux 6.12 LTS (supported until 2028)
    - Linux 6.6 LTS (supported until 2027)
    These versions receive ongoing security updates and are better suited for stable environments.
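Before choosing an upgrade path, it helps to confirm which branch you are actually running; the first two components of the kernel release string identify the series. A small sketch (the example release string in the comment is illustrative):

```python
import platform

# e.g. "6.19.14-arch1-1": the "6.19" prefix is the series to compare
# against kernel.org's list of supported and end-of-life branches.
release = platform.release()
branch = ".".join(release.split(".")[:2])
print(f"running kernel {release} (series {branch})")
```

The same string is what `uname -r` prints on the command line.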
    Why EOL Matters
    When a kernel reaches end of life:
    Go to Full Article


  • Archinstall 4.2 Shifts to Wayland-First Profiles, Leaving X.Org Behind
    by George Whittaker
    The Arch Linux installer continues evolving alongside the broader Linux desktop ecosystem. With the release of Archinstall 4.2, a notable change has arrived: Wayland is now the default focus for graphical installation profiles, while traditional X.Org-based profiles have been removed or deprioritized.

    This move reflects a wider transition happening across Linux, one that is gradually redefining how graphical environments are built and used.
    A Turning Point for Archinstall
    Archinstall, the official guided installer for Arch Linux, has steadily improved over time to make installation more accessible while still maintaining Arch’s minimalist philosophy.

    With version 4.2, the installer now aligns more closely with modern desktop trends by emphasizing Wayland-based environments during setup, instead of offering traditional X.Org configurations as first-class options.

    This doesn’t mean X.Org is completely gone from Arch Linux, but it does signal a clear shift in direction.
    Why Wayland Is Taking Over
    Wayland has been gaining traction for years as the successor to X.Org, offering a more streamlined and secure approach to rendering graphics on Linux.

    Compared to X.Org, Wayland is designed to:
    - Reduce complexity in the graphics stack
    - Improve security by isolating applications
    - Deliver smoother rendering and better performance
    - Support modern display technologies like high-DPI and variable refresh rates
    As the Linux ecosystem evolves, many distributions and desktop environments are prioritizing Wayland as the default display protocol.
    What Changed in Archinstall 4.2
    With this release, users installing Arch through Archinstall will notice:
    - Wayland-based desktop environments and compositors are now the primary options
    - X.Org-centric setups are no longer emphasized in guided profiles
    - Installation workflows better reflect modern Linux defaults
    This simplifies the installation experience for new users, who no longer need to choose between legacy and modern display systems during setup.
    What About X.Org?
    While Archinstall is moving forward, X.Org itself is not disappearing overnight.

    Many applications and workflows still rely on X11, and compatibility is maintained through XWayland, which allows X11 applications to run within Wayland sessions.
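If you are unsure whether your own session is already running on Wayland, the login manager records it in an environment variable; a tiny check (XDG_SESSION_TYPE is the conventional variable on systemd-based distributions, but it may be unset in minimal environments, hence the fallback):

```python
import os

# Typically "wayland", "x11", or "tty"; "unknown" when the login
# manager didn't export the variable (bare consoles, containers, etc.).
session = os.environ.get("XDG_SESSION_TYPE", "unknown")
print(f"display session type: {session}")
```

An application reporting “x11” inside a Wayland session is usually running under XWayland.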

    For advanced users, Arch still provides full flexibility:
    Go to Full Article


  • OpenClaw in 2026: What It Is, Who’s Using It, and Whether Your Business Should Adopt It
    by George Whittaker
    “probably the single most important release of software, probably ever.”

    — Jensen Huang, CEO of NVIDIA


    Wow! That’s a bold statement from one of the most influential figures in modern computing.

    But is it true? Some people think so. Others think it’s hype. Most are somewhere in between, aware of OpenClaw, but not entirely sure what to make of it. Are people actually using it? Yes. Who’s using it? More than you might expect. Is it experimental, or is it already changing how work gets done? That depends on how it’s being applied. Is it more relevant for businesses or consumers right now? That’s one of the most important, and most misunderstood, questions.

    This article breaks that down clearly: what OpenClaw is, how it works, who is using it today, and where it actually creates value.

    What makes OpenClaw different isn’t just the technology, it’s where it fits. Most of the AI tools people are familiar with still require a human to take the next step. They assist, but they don’t execute. OpenClaw changes that dynamic by connecting decision-making directly to action. Once you understand that shift, the rest of the discussion, who’s using it, how it’s being deployed, and where it creates value, starts to make a lot more sense.


    Top 10 Questions About OpenClaw
    What is OpenClaw?

    OpenClaw is an open-source AI agent framework that enables large language models like Claude, GPT, and Gemini to execute real-world tasks across software systems, including APIs, files, and workflows.

    What does OpenClaw actually do?

    OpenClaw functions as an execution layer that allows AI systems to take actions, such as sending emails, updating CRM records, or running scripts, instead of only generating responses.

    Do you need to be a developer to use OpenClaw?

    No, but technical familiarity helps. Non-developers can use prebuilt workflows, while developers can customize and scale implementations more effectively.

    Is OpenClaw more suited for business or consumer use?

    OpenClaw is currently more suited for business and technical use cases where structured workflows exist. Consumer use is emerging but remains secondary.

    How is OpenClaw different from ChatGPT or Claude?

    ChatGPT and Claude generate outputs, while OpenClaw enables those outputs to trigger actions across connected systems.
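That distinction, generating text versus triggering actions, is easiest to see in miniature. The sketch below is a generic illustration of the execution-layer pattern, not OpenClaw’s actual API; the action name, JSON shape, and `send_email` stand-in are invented for the example.

```python
import json

# Registry of actions the agent is allowed to execute. A real framework
# would add authentication, audit logging, and confirmation steps.
def send_email(to: str, subject: str) -> str:
    return f"email to {to}: {subject}"      # stand-in for a real SMTP call

ACTIONS = {"send_email": send_email}

def dispatch(model_output: str) -> str:
    """Turn a structured model response into a real action."""
    request = json.loads(model_output)       # e.g. an LLM tool-call response
    action = ACTIONS[request["action"]]      # unknown actions raise KeyError
    return action(**request["args"])

result = dispatch('{"action": "send_email", "args": {"to": "ops@example.org", "subject": "weekly report"}}')
print(result)
```

The model only ever produces the JSON; the dispatcher is what crosses the line from suggestion to execution, which is why the surrounding permission model matters so much.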

    Who created OpenClaw?
    Go to Full Article


  • Linux Kernel Developers Adopt New Fuzzing Tools
    by George Whittaker
    The Linux kernel development community is stepping up its security game once again. Developers, led by key maintainers like Greg Kroah-Hartman, are actively adopting new fuzzing tools to uncover bugs earlier and improve overall kernel reliability.

    This move reflects a broader shift toward automated testing and AI-assisted development, as the kernel continues to grow in complexity and scale.
    What Is Fuzzing and Why It Matters
    Fuzzing is a software testing technique that feeds random or unexpected inputs into a program to trigger crashes or uncover vulnerabilities.

    In the Linux kernel, fuzzing has become one of the most effective ways to detect:
    - Memory corruption bugs
    - Race conditions
    - Privilege escalation flaws
    - Edge-case failures in subsystems
    Modern fuzzers like Syzkaller have already discovered thousands of kernel bugs over the years, making them a cornerstone of Linux security testing.
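The core loop behind tools like Syzkaller can be sketched in a few lines: generate random inputs, feed them to the target, and record any input that crashes it. A toy user-space version (the buggy `parse_header` target is invented for illustration; real kernel fuzzers generate system call sequences, not byte strings):

```python
import random

def parse_header(data: bytes) -> int:
    """Deliberately buggy target: trusts a length field in the input."""
    if len(data) < 2:
        raise ValueError("too short")
    declared_len = data[0]
    return data[1 + declared_len]         # IndexError when declared_len lies

def fuzz(target, rounds: int = 10_000, seed: int = 0):
    """Throw random byte strings at `target`, collecting crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except ValueError:
            pass                           # expected, handled error
        except IndexError:
            crashes.append(data)           # unexpected crash: a real bug
    return crashes

print(f"found {len(fuzz(parse_header))} crashing inputs")
```

Real fuzzers add what this sketch lacks: coverage feedback to steer input generation, crash deduplication, and input minimization.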
    New Tools Enter the Scene
    Recently, kernel maintainers have begun experimenting with new fuzzing frameworks and tooling, including a project internally referred to as “clanker”, which has already been used to identify multiple issues across different kernel subsystems.

    Early testing has uncovered bugs in areas such as:
    - SMB/KSMBD networking code
    - USB and HID subsystems
    - Filesystems like F2FS
    - Wireless and device drivers
    The speed at which these issues were discovered suggests that these new tools are significantly improving bug detection efficiency.
    AI and Smarter Fuzzing Techniques
    One of the most interesting developments is the growing role of AI and machine learning in fuzzing.

    New research projects like KernelGPT use large language models to:
    - Automatically generate system call sequences
    - Improve test coverage
    - Discover previously hidden execution paths
    These techniques can enhance traditional fuzzers by making them smarter about how they explore the kernel’s behavior.

    Other advancements include:
    - Better crash analysis and deduplication tools (like ECHO)
    - Configuration-aware fuzzing to explore deeper kernel states
    - Feedback-driven fuzzing loops for improved coverage
    Together, these innovations help developers focus on the most meaningful bugs rather than sifting through duplicate reports.
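    Crash deduplication of this kind can be illustrated with a short sketch: group reports by a normalized top-of-stack signature so that near-identical crashes (differing only in frame offsets) collapse into one bucket. The report format here is invented for illustration:

    ```python
    import hashlib
    from collections import defaultdict

    def crash_signature(stack, frames=3) -> str:
        """Hash the top N stack frames, ignoring offsets, to group duplicates."""
        top = [frame.split("+")[0] for frame in stack[:frames]]  # drop "+0x1a/0x90"
        return hashlib.sha256("|".join(top).encode()).hexdigest()[:12]

    def deduplicate(reports):
        buckets = defaultdict(list)
        for stack in reports:
            buckets[crash_signature(stack)].append(stack)
        return buckets

    # The first two reports differ only in offsets and collapse into one bucket.
    reports = [
        ["kfree+0x1a/0x90", "ksmbd_free_work+0x22/0x40", "process_one_work+0x1c0/0x3a0"],
        ["kfree+0x3b/0x90", "ksmbd_free_work+0x18/0x40", "process_one_work+0x2f1/0x3a0"],
        ["hid_parse+0x44/0x120", "usbhid_probe+0x88/0x200", "really_probe+0x110/0x400"],
    ]
    buckets = deduplicate(reports)
    ```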
    Why This Shift Is Happening Now
    The Linux kernel is one of the most complex software projects in existence. With millions of lines of code and contributions from thousands of developers, manually catching every bug is nearly impossible.
    Go to Full Article


  • GNOME 50 Reaches Arch Linux: A Leaner, Wayland-Only Future Arrives
    by George Whittaker
    Arch Linux users are among the first to experience the latest GNOME desktop, as GNOME 50 has begun rolling out through Arch’s repositories. Thanks to Arch’s rolling-release model, new upstream software like GNOME arrives quickly, giving users early access to the newest features and architectural changes.

    With GNOME 50, that includes one of the most significant shifts in the desktop’s history.
    A Major GNOME Milestone
    GNOME 50, officially released in March 2026 under the codename “Tokyo,” represents six months of development and refinement from the GNOME community.

    Unlike some previous versions, this release focuses less on dramatic redesigns and more on strengthening the foundation of the desktop, improving performance, modernizing graphics handling, and simplifying long-standing complexities.

    For Arch Linux users, that translates into a more streamlined and future-ready desktop environment.
    Goodbye X11, Hello Wayland-Only Desktop
    The headline change in GNOME 50 is the complete removal of X11 support from GNOME Shell and its window manager, Mutter.

    After years of gradual transition:
    - X11 sessions were first deprecated
    - Then disabled by default
    - And now fully removed in GNOME 50
    This means GNOME now runs exclusively on Wayland, with legacy X11 applications handled through XWayland compatibility layers.

    The result is a simpler, more modern graphics stack that reduces maintenance overhead and improves long-term performance and security.
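    Scripts that need to behave differently under the new Wayland-only sessions can check the standard session environment variables. A best-effort sketch:

    ```python
    import os

    def session_type() -> str:
        """Best-effort detection of the current graphical session type."""
        # XDG_SESSION_TYPE is set by the login manager ("wayland", "x11", or "tty").
        xdg = os.environ.get("XDG_SESSION_TYPE", "").lower()
        if xdg in ("wayland", "x11"):
            return xdg
        # Fall back to compositor-specific variables.
        if os.environ.get("WAYLAND_DISPLAY"):
            return "wayland"
        if os.environ.get("DISPLAY"):
            return "x11"
        return "unknown"
    ```

    Note that X11 applications running under XWayland still see a `DISPLAY` variable, so the session-level `XDG_SESSION_TYPE` check is the more reliable signal.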
    Improved Graphics and Display Handling
    GNOME 50 brings several key improvements to display and graphics performance:
    - Variable Refresh Rate (VRR) enabled by default
    - Better fractional scaling support
    - Improved compatibility with NVIDIA drivers
    - Enhanced HDR and color management
    These changes aim to deliver smoother animations, more responsive desktops, and better support for modern displays.

    For gamers and users with high-refresh monitors, these upgrades are especially noticeable.
    Performance and Responsiveness Gains
    Beyond graphics, GNOME 50 includes multiple performance optimizations:
    - Faster file handling in the Files (Nautilus) app
    - Improved thumbnail generation
    - Reduced stuttering in animations
    - Better resource usage across the desktop
    These refinements make the desktop feel more responsive, particularly on systems with demanding workloads or multiple monitors.
    New Parental Controls and Accessibility Features
    GNOME 50 also expands its focus on usability and accessibility.
    Go to Full Article


  • MX Linux Pushes Back Against Age Verification: A Stand for Privacy and Open Source Principles
    by George Whittaker
    The MX Linux project has taken a firm stance in a growing controversy across the Linux ecosystem: mandatory age-verification requirements at the operating system level. In a recent update, the team made it clear that they have no intention of implementing such measures, citing concerns over privacy, practicality, and the core philosophy of open-source software.

    As governments begin introducing laws that could require operating systems to collect user age data, MX Linux is joining a group of projects resisting the shift.
    What Sparked the Debate?
    The discussion around age verification stems from new legislation, particularly in regions like the United States and Brazil, that aims to protect minors online. These laws may require operating systems to:
    - Collect user age or date of birth during setup
    - Provide age-related data to applications
    - Enable content filtering based on age categories
    At the same time, underlying Linux components such as systemd have already begun exploring technical changes, including storing birthdate fields in user records to support such requirements.
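    To make the concern concrete, here is a sketch of what an age-gating check against a user record might look like. The JSON layout and the `birthdate` field name are hypothetical illustrations for this article, not systemd's actual record format:

    ```python
    import json
    from datetime import date

    def age_from_record(record_json: str, today: date):
        """Return the user's age in years, or None if no birthdate is stored."""
        record = json.loads(record_json)
        raw = record.get("birthdate")  # hypothetical field name, for illustration
        if raw is None:
            return None
        born = date.fromisoformat(raw)
        # Subtract one if this year's birthday hasn't happened yet.
        return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

    record = json.dumps({"userName": "alice", "birthdate": "2012-06-15"})
    ```

    The privacy objection is visible even in this toy: once the field exists, any process that can read the record can derive the user's age.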
    MX Linux Says “No” to Age Verification
    In response, the MX Linux team has clearly rejected the idea of integrating age verification into their distribution. Their reasoning is rooted in several key concerns:
    - User privacy: Collecting age data introduces sensitive personal information into systems that traditionally avoid such tracking
    - Feasibility: Implementing consistent, secure age verification across a decentralized OS ecosystem is highly complex
    - Philosophy: Open-source operating systems are not designed to act as data collectors or gatekeepers
    The developers emphasized that they do not want to burden users with intrusive requirements and instead encouraged concerned individuals to direct their efforts toward policymakers rather than Linux projects.
    A Broader Resistance in the Linux Community
    MX Linux is not alone. The Linux world is divided on how, or whether, to respond to these regulations.

    Some projects are exploring compliance, while others are pushing back entirely. In fact, age verification laws have sparked:
    - Strong debate among developers and maintainers
    - Concerns about enforceability on open-source platforms
    - New projects explicitly created to resist such requirements
    In some extreme cases, distributions have even restricted access in certain regions to avoid legal complications.
    Why This Matters
    At its core, this issue goes beyond a single feature: it raises fundamental questions about what an operating system should be.

    Linux has long stood for:
    Go to Full Article


  • LibreOffice Drives Europe’s Open Source Shift: A Growing Push for Digital Sovereignty
    by George Whittaker
    LibreOffice is increasingly at the center of Europe’s push toward open-source adoption and digital independence. Backed by The Document Foundation, the widely used office suite is playing a key role in helping governments, institutions, and organizations reduce reliance on proprietary software while strengthening control over their digital infrastructure.

    Across the European Union, this shift is no longer experimental; it's becoming policy.
    A Broader Movement Toward Open Source
    Europe has been steadily moving toward open-source technologies for years, but recent developments show clear acceleration. Governments and public institutions are actively transitioning away from proprietary platforms, often citing concerns about vendor lock-in, cost, and data control.

    According to recent industry data, European organizations are adopting open source faster than their U.S. counterparts, with vendor lock-in concerns cited as a major driver.

    LibreOffice sits at the center of this trend as a mature, fully open-source alternative to traditional office suites.
    LibreOffice as a Strategic Tool
    LibreOffice isn’t just another productivity application; it has become a strategic component in Europe’s digital policy framework.

    The software:
    - Is fully open source and community-driven
    - Supports open standards like OpenDocument Format (ODF)
    - Allows governments to avoid dependency on specific vendors
    - Enables long-term control over data and infrastructure
    These characteristics align closely with the European Union’s broader strategy to promote interoperability and transparency through open standards.
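    The openness of the underlying format is easy to verify firsthand: an ODF document is an ordinary ZIP archive whose first entry, `mimetype`, declares the document type in plain text. A minimal sketch that builds such a container and reads the declaration back (a bare-bones archive, not a fully valid document):

    ```python
    import io
    import zipfile

    ODT_MIME = "application/vnd.oasis.opendocument.text"

    def make_minimal_odt() -> bytes:
        """Build a bare-bones ODF text container."""
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:
            # Per the ODF spec, "mimetype" comes first, stored uncompressed.
            zf.writestr(zipfile.ZipInfo("mimetype"), ODT_MIME)
            zf.writestr("content.xml", "<?xml version='1.0'?><office:document-content/>")
        return buf.getvalue()

    def document_mimetype(data: bytes) -> str:
        """Read back the type declaration from an ODF archive."""
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return zf.read("mimetype").decode()
    ```

    Because the format is documented ZIP-plus-XML rather than a proprietary binary blob, any tool in any language can read it, which is precisely the vendor-independence argument governments cite.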
    Government Adoption Across Europe
    LibreOffice adoption is already happening at scale across multiple countries and sectors.

    Examples include:
    - Germany (Schleswig-Holstein): transitioning tens of thousands of government systems to Linux and LibreOffice
    - Denmark: replacing Microsoft Office in public institutions as part of a broader digital sovereignty initiative
    - France and Italy: deploying LibreOffice across ministries and defense organizations
    - Spain and local governments: adopting LibreOffice to standardize workflows and reduce costs
    In some cases, migrations involve hundreds of thousands of systems, demonstrating that open-source office software is viable at national scale.
    Go to Full Article


  • From Linux to Blockchain: The Infrastructure Behind Modern Financial Systems
    by George Whittaker
    The modern internet is built on open systems. From the Linux kernel powering servers worldwide to the protocols that govern data exchange, much of today’s digital infrastructure is rooted in transparency, collaboration, and decentralization. These same principles are now influencing a new frontier: financial systems built on blockchain technology.

    For developers and system architects familiar with Linux and open-source ecosystems, the rise of cryptocurrency is not just a financial trend; it is an extension of ideas that have been evolving for decades.
    Open-Source Foundations and Financial Innovation
    Linux has long demonstrated the power of decentralized development. Instead of relying on a single authority, it thrives through distributed contributions, peer review, and community-driven improvement.

    Blockchain technology follows a similar model. Networks like Bitcoin operate on open protocols, where consensus is achieved through distributed nodes rather than centralized control. Every transaction is verified, recorded, and made transparent through cryptographic mechanisms.

    For those who have spent years working within Linux environments, this architecture feels familiar. It reflects a shift away from trust-based systems toward verification-based systems.
    Understanding the Stack: Nodes, Protocols, and Interfaces
    At a technical level, cryptocurrency systems are composed of multiple layers. Full nodes maintain the blockchain, validating transactions and ensuring network integrity. Lightweight clients provide access to users without requiring full data replication. On top of this, exchanges and platforms act as interfaces that connect users to the underlying network.

    For developers, interacting with these systems often involves APIs, command-line tools, and automation scripts: tools that are already integral to Linux workflows. Managing wallets, verifying transactions, and monitoring network activity can all be integrated into existing development environments.
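    The verification a full node performs is less mysterious than it sounds: a Bitcoin block hash is just the SHA-256 of the SHA-256 of an 80-byte header. A self-contained sketch, checked against the publicly known genesis block fields:

    ```python
    import hashlib
    import struct

    def block_hash(version, prev_hash, merkle_root, timestamp, bits, nonce) -> str:
        """Double-SHA256 of the 80-byte block header, displayed big-endian."""
        header = (
            struct.pack("<L", version)
            + bytes.fromhex(prev_hash)[::-1]      # hashes are little-endian on the wire
            + bytes.fromhex(merkle_root)[::-1]
            + struct.pack("<LLL", timestamp, bits, nonce)
        )
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        return digest[::-1].hex()

    # Bitcoin genesis block (height 0) header fields, a matter of public record:
    genesis = block_hash(
        version=1,
        prev_hash="00" * 32,
        merkle_root="4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
        timestamp=1231006505,
        bits=0x1D00FFFF,
        nonce=2083236893,
    )
    ```

    The leading zeros in the resulting hash are the proof-of-work: the hash must fall below the target encoded in `bits`, and anyone with a standard library can re-verify it, which is the "verification over trust" point made above.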
    Go to Full Article


Page last modified on November 02, 2011, at 10:01 PM