
Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX



LinuxSecurity - Security Advisories

  • SciLinux: SLSA-2021-1512-1 Important: postgresql on SL7.x x86_64
    postgresql: Reconnection can downgrade connection security settings (CVE-2020-25694) * postgresql: Multiple features escape "security restricted operation" sandbox (CVE-2020-25695) * postgresql: TYPE in pg_temp executes arbitrary SQL during SECURITY DEFINER execution (CVE-2019-10208) For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other [More...]

  • RedHat: RHSA-2021-1512:01 Important: postgresql security update
    An update for postgresql is now available for Red Hat Enterprise Linux 7. Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability.

  • [$] Pyodide: Python for the browser
    Python in the browser has long been an item on the wish list of many in the Python community. At this point, though, JavaScript has well-cemented its role as the language embedded into the web and its browsers. The Pyodide project provides a way to run Python in the browser by compiling the existing CPython interpreter to WebAssembly and running that binary within the browser's JavaScript environment. Pyodide came about as part of Mozilla's Iodide project, which has fallen by the wayside, but Pyodide is now being spun out as a community-driven project.

  • Why Sleep Apnea Patients Rely on a CPAP Machine Hacker (Vice)
    Vice takes a look at the SleepyHead system for the management of CPAP machines.
    The free, open-source, and definitely not FDA-approved piece of software is the product of thousands of hours of hacking and development by a lone Australian developer named Mark Watkins, who has helped thousands of sleep apnea patients take back control of their treatment from overburdened and underinvested doctors. The software gives patients access to the sleep data that is already being generated by their CPAP machines but generally remains inaccessible, hidden by proprietary data formats that can only be read by authorized users (doctors) on proprietary pieces of software that patients often can’t buy or download.

  • Making eBPF work on Windows (Microsoft Open Source Blog)
    The Microsoft Open Source Blog takes a look at implementing eBPF support in Windows. "Although support for eBPF was first implemented in the Linux kernel, there has been increasing interest in allowing eBPF to be used on other operating systems and also to extend user-mode services and daemons in addition to just the kernel. Today we are excited to announce a new Microsoft open source project to make eBPF work on Windows 10 and Windows Server 2016 and later. The ebpf-for-windows project aims to allow developers to use familiar eBPF toolchains and application programming interfaces (APIs) on top of existing versions of Windows. Building on the work of others, this project takes several existing eBPF open source projects and adds the “glue” to make them run on Windows."

  • Announcing coreboot 4.14
    The coreboot firmware project has released version 4.14. "These changes have been all over the place, so that there's no particular area to focus on when describing this release: We had improvements to mainboards, to chipsets (including much welcomed work to open source implementations of what has been blobs before), to the overall architecture."

  • Two stable kernels
    Stable kernels 5.10.36 and 5.4.118 have been released. They both contain important fixes throughout the tree. Users should upgrade.

  • Security updates for Tuesday
    Security updates have been issued by Debian (hivex), Fedora (djvulibre and thunderbird), openSUSE (monitoring-plugins-smart and perl-Image-ExifTool), Oracle (kernel and kernel-container), Red Hat (kernel and kpatch-patch), SUSE (drbd-utils, java-11-openjdk, and python3), and Ubuntu (exiv2, firefox, libxstream-java, and pyyaml).

  • DragonFly BSD 6.0
    DragonFly BSD 6.0 has been released. "This version has a revamped VFS caching system, various filesystem updates including HAMMER2, and a long list of userland updates."

  • [$] The second half of the 5.13 merge window
    By the time the last pull request was acted on and 5.13-rc1 was released, a total of 14,231 non-merge commits had found their way into the mainline. That makes the 5.13 merge window larger than the entire 5.12 development cycle (13,015 commits) and just short of all of 5.11 (14,340). In other words, 5.13 looks like one of the busier development cycles we have seen for a little while. About 6,400 of these commits came in after the first-half summary was written, and they include a number of significant new features.

  • Security updates for Monday
    Security updates have been issued by Debian (libxml2), Fedora (autotrace, babel, kernel, libopenmpt, libxml2, mingw-exiv2, mingw-OpenEXR, mingw-openexr, python-markdown2, and samba), openSUSE (alpine, avahi, libxml2, p7zip, redis, syncthing, and vlc), and Ubuntu (webkit2gtk).

  • Kernel prepatch 5.13-rc1
    The first 5.13 kernel prepatch is out for testing, and the merge window is closed for this development cycle. "This was - as expected - a fairly big merge window, but things seem to have proceeded fairly smoothly. Famous last words." In the end, 14,231 non-merge changesets were pulled into the mainline during the merge window — more than were seen during the entire 5.12 cycle.

  • An IEEE statement on the UMN paper
    The IEEE, whose Symposium on Security and Privacy conference had accepted the "hypocrite commits" paper for publication, has posted a statement [PDF] on the episode.
    The paper was reviewed by four reviewers in the Fall S&P 2021 review cycle and received a very positive overall rating (2 Accept and 2 Weak Accept scores, putting it in the top 5% of submitted papers). The reviewers noted that the fact that a malicious actor can attempt to intentionally add a vulnerability to an open source project is not new, but also acknowledged that the authors provide several new insights by describing why this might be easier than expected, and why it might be difficult for maintainers to detect the problem. One of the PC members briefly mentioned a possible ethical concern in their review, but that comment was not significantly discussed any further at the time; we acknowledge that we missed it.

    The statement concludes with some actions to be taken by IEEE to ensure that ethically questionable papers are not accepted again.

  • [$] Noncoherent DMA mappings
    While it is sometimes possible to perform I/O by moving data through the CPU, the only way to get the required level of performance is usually for devices to move data directly to and from memory. Direct memory access (DMA) I/O has been well supported in the Linux kernel since the early days, but there are always ways in which that support can be improved, especially when hardware adds some challenges of its own. The somewhat confusingly named "non-contiguous" DMA API that was added for 5.13 shows the kinds of things that have to be done to get the best performance on current systems.

  • Five new stable kernels
    New stable kernels 5.12.2, 5.11.19, 5.10.35, 5.4.117, and 4.19.190 have been released. They contain a relatively short list of updates throughout the tree; users of those series should upgrade.

  • Security updates for Friday
    Security updates have been issued by Debian (mediawiki and unbound1.9), Fedora (djvulibre and samba), Mageia (ceph, messagelib, and pagure), openSUSE (alpine and exim), Oracle (kernel and postgresql), Scientific Linux (postgresql), and Ubuntu (thunderbird and unbound).

  • An Interview With Linus Torvalds: Open Source And Beyond - Part 2 (Tag1)
    The second half of the interview with Linus Torvalds on the Tag1 Consulting site has been posted.
    I think one of the reasons Linux succeeded was exactly the fact that I actually did NOT have a big plan, and did not have high expectations of where things would go, and so when people started sending me patches, or sending me requests for features, to me that was all great, and I had no preconceived notion of what Linux should be. End result: all those individuals (and later big companies) that wanted to participate in Linux kernel development had a fairly easy time to do so, because I was quite open to Linux doing things that I personally had had no real interest in originally.

LXer Linux News

  • testssl.sh – Testing TLS/SSL Encryption Anywhere on Any Port
    testssl.sh is a free and open-source, feature-rich command-line tool used for checking TLS/SSL-enabled services for supported ciphers, protocols, and some cryptographic flaws on Linux/BSD servers. It can also be run on macOS and Windows using MSYS2 or Cygwin.

  • What is fog computing?
    In the early days, computers were big and expensive. There were few users in the world, and they had to reserve time on a computer (and show up in person) to have their punchcards processed. Systems called mainframes made many innovations and enabled time-shared tasks on terminals (like desktop computers, but without their own CPU).

  • Use the Alpine email client in your Linux terminal
    Email is an important communications medium and will remain so for the foreseeable future. I have used many different email clients over the last 30 years, and Thunderbird is what I have used the most in recent years. It is an excellent and functional desktop application that provides all the features that most people need—including me.

  • How to generate and backup a gpg keypair on Linux
    GNU Privacy Guard (gpg) is the GNU project's free and open source implementation of the OpenPGP standard. The gpg encryption system is called “asymmetric” because it is based on public key cryptography: we encrypt a document with the public key of a recipient, who will be the only one able to decrypt it, since they own the associated private key. In this tutorial we will see how to generate and create a backup of a gpg keypair.
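    A minimal sketch of the generate-and-backup workflow described above, assuming GnuPG 2.1 or later; the name, email address, and file names are all placeholders:

```shell
#!/bin/sh
set -e

# Use a throwaway keyring so the example does not touch ~/.gnupg
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a keypair non-interactively.
# "default default never" = default algorithm, default capabilities, no expiry.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Example User <user@example.com>" default default never

# Back up both halves of the keypair as ASCII-armored files
gpg --armor --export-secret-keys user@example.com > secret-backup.asc
gpg --armor --export user@example.com > public-backup.asc

# The backup can later be restored with: gpg --import secret-backup.asc
```

    Keep the secret-key backup offline: anyone holding it (and the passphrase, if one is set) controls the key.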

  • How to Install Vagrant in Linux
    This series is focused on Vagrant with VirtualBox as the provider. From the previous article, you might have an understanding of what a provider is. VirtualBox is the default provider for Vagrant; it is cross-platform and can run on Windows, Linux, and macOS.
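    As a sketch of what such a setup looks like, a minimal Vagrantfile that explicitly selects VirtualBox as the provider might read as follows (the box name and resource sizes are illustrative choices, not the article's):

```ruby
# Vagrantfile -- minimal example; "ubuntu/focal64" is just an illustrative box
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  # Explicitly select VirtualBox and size the virtual machine
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus = 2
  end
end
```

    Running `vagrant up` in the directory containing this file downloads the box (if needed) and boots the VM; `vagrant ssh` then logs into it.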

  • LFCA: Basic Security Tips to Protect Linux System – Part 17
    Now more than ever, we are living in a world where organizations are constantly bombarded by security breaches aimed at acquiring highly sensitive and confidential data, which is highly valuable and makes for a huge financial reward.

  • How to create a custom rpm repository on Linux
    RPM is a recursive acronym for RPM Package Manager: it is the low-level package manager used in all the Red Hat family of distributions, such as Fedora and Red Hat Enterprise Linux. An rpm package contains software that is meant to be installed using this package management system, and rpm packages are usually distributed via software repositories. In this tutorial we learn how to create a custom rpm repository and how to configure our distribution to use it as a software source.
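    As a sketch of the client side: once a directory of packages has been indexed with a tool such as `createrepo_c /path/to/repo`, a dnf/yum system can be pointed at it with a definition file like this (the section name, repository name, and URL are placeholders):

```ini
# /etc/yum.repos.d/custom.repo -- example client-side definition;
# the baseurl and names below are placeholders
[custom]
name=My custom RPM repository
baseurl=http://repo.example.com/custom/
enabled=1
# In a real deployment, sign your packages and set gpgcheck=1
gpgcheck=0
```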

  • Run Linux on Refurbished Mini PCs – Storage – Part 4
    In this article we consider hard disk drives, which form a central part of every modern PC. If you have lots of documents, music, photos, and videos, you'll need plenty of disk space. This series recommends what to choose when buying a refurbished Mini PC to run Linux as a desktop computer.

  • My Little Contribution to GNOME 40
    GNOME 40 is finally out and I'm happy to say a small contribution of mine made it into the release. My contribution adds a new feature to GNOME System Monitor version 40. Few articles about GNOME 40 mention it, but some power users might find my contribution useful.


Slashdot

  • NASA's OSIRIS-REx Spacecraft Heads For Earth With Asteroid Sample
    Obipale shares a press release from NASA: After nearly five years in space, NASA's Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) spacecraft is on its way back to Earth with an abundance of rocks and dust from the near-Earth asteroid Bennu. On Monday, May 10, at 4:23 p.m. EDT the spacecraft fired its main engines full throttle for seven minutes -- its most significant maneuver since it arrived at Bennu in 2018. This burn thrust the spacecraft away from the asteroid at 600 miles per hour (nearly 1,000 kilometers per hour), setting it on a 2.5-year cruise towards Earth. After releasing the sample capsule, OSIRIS-REx will have completed its primary mission. It will fire its engines to fly by Earth safely, putting it on a trajectory to circle the sun inside of Venus' orbit. After orbiting the Sun twice, the OSIRIS-REx spacecraft is due to reach Earth Sept. 24, 2023. Upon return, the capsule containing pieces of Bennu will separate from the rest of the spacecraft and enter Earth's atmosphere. The capsule will parachute to the Utah Test and Training Range in Utah's West Desert, where scientists will be waiting to retrieve it. "OSIRIS-REx's many accomplishments demonstrated the daring and innovative way in which exploration unfolds in real time," said Thomas Zurbuchen, associate administrator for science at NASA Headquarters. "The team rose to the challenge, and now we have a primordial piece of our solar system headed back to Earth where many generations of researchers can unlock its secrets." To realize the mission's multi-year plan, a dozen navigation engineers made calculations and wrote computer code to instruct the spacecraft when and how to push itself away from Bennu. After departing from Bennu, getting the sample to Earth safely is the team's next critical goal. This includes planning future maneuvers to keep the spacecraft on course throughout its journey.

    Read more of this story at Slashdot.

  • Biden Administration Approves Nation's First Major Offshore Wind Farm
    The Biden administration gave approval Tuesday to the nation's first commercial-scale offshore wind farm, which is scheduled to begin construction this summer. The New York Times reports: The Vineyard Wind project calls for up to 84 turbines to be installed in the Atlantic Ocean about 12 nautical miles off the coast of Martha's Vineyard, Mass. Together, they could generate about 800 megawatts of electricity, enough to power about 400,000 homes. The administration estimates that the work will create about 3,600 jobs. The project would dwarf the scale of the country's two existing wind farms, off the coasts of Virginia and Rhode Island. Together, they produce just 42 megawatts of electricity. In addition to Vineyard Wind, a dozen other offshore wind projects along the East Coast are now under federal review. The Interior Department has estimated that by the end of the decade, some 2,000 turbines could be churning in the wind along the coast from Massachusetts to North Carolina. Electricity generated by the Vineyard Wind turbines will travel via cables buried six feet below the ocean floor to Cape Cod, where they would connect to a substation and feed into the New England grid. The company said that it expects to begin delivering wind-powered electricity in 2023. The Biden administration said that it intended to fast-track permits for other projects off the Atlantic Coast and that it would offer $3 billion in federal loan guarantees for offshore wind projects and invest in upgrades to ports across the United States to support wind turbine construction. [...] The administration has pledged to build 30,000 megawatts of offshore wind in the United States by 2030. It's a target the White House has said would spark $12 billion in capital investments annually, supporting 77,000 direct and indirect jobs by the end of the decade. If Mr. Biden's offshore wind targets are met, it could avoid 78 million metric tons of carbon dioxide emissions, while creating new jobs and even new industries along the way, the administration said.

  • Ford Patents Tech That Could Scan Billboards and Show Associated In-Car Ads
    An anonymous reader quotes a report from Motor1: Roads are lined with unattractive billboards many of us ignore on our daily commutes, but Ford's new tech will make sure we don't miss them anymore. The system works by scanning the billboards, interpreting the information on the sign, and delivering the most useful bits right into the vehicle's display. It sounds invasive and distracting, with a side of Orwellian creepiness tossed on top for good measure. For now, though, this is just a patent application and may never see implementation, but it's not difficult to see how this could be useful to automakers and advertisers. Ford's application says the tech could display an advertiser's products or services, directions to the store, or the phone number. It's not a stretch to imagine a future where you're driving down the road, and your car sees a sign for your favorite restaurant, prompting you to place an order because the vehicle knows Thursday is take-out night. Cars are only getting infused with more technology designed to assist people in their day-to-day lives, and this would be another avenue to do just that, creating a tailored driving experience. It could also force advertisers to pay Ford for access to its fleet of billboard-scanning-equipped cars, expanding revenue streams beyond the car itself. In a comment to Motor1, Ford says the company submits "patents on new inventions as a normal course of business, but they aren't necessarily an indication of new business or product plans."

  • Forests the Size of France Regrown Since 2000, Study Suggests
    An area of forest the size of France has regrown naturally across the world in the last 20 years, a study suggests. The BBC reports: The restored forests have the potential to soak up the equivalent of 5.9 gigatons (Gt) of carbon dioxide - more than the annual emissions of the US, according to conservation groups. A team led by WWF used satellite data to build a map of regenerated forests. Forest regeneration involves restoring natural woodland through little or no intervention. This ranges from doing nothing at all to planting native trees, fencing off livestock or removing invasive plants.   The Atlantic Forest in Brazil gives reason for hope, the study said, with an area roughly the size of the Netherlands having regrown since 2000. In the boreal forests of northern Mongolia, 1.2 million hectares of forest have regenerated in the last 20 years, while other regeneration hotspots include central Africa and the boreal forests of Canada. The researchers warned that forests across the world face "significant threats." "Despite 'encouraging signs' with forests along Brazil's Atlantic coast, deforestation is such that the forested area needs to more than double to reach the minimal threshold for conservation," the report says.

  • Apple Faces UK Class Action for App Store Overcharging
    Apple is facing a London lawsuit over claims it overcharged nearly 20 million U.K. customers for App Store purchases, yet another legal headache for the tech giant fighting lawsuits across the world. Bloomberg reports: Apple's 30% fee is "excessive" and "unlawful" the claimants said in a press release Tuesday. The claim, filed at London's Competition Appeal Tribunal on Monday, calls for the U.S. firm to compensate U.K. iPhone and iPad users for years of alleged overcharging. They estimate that Apple could face paying out in excess of 1.5 billion pounds ($2.1 billion). "Apple is abusing its dominance in the app store market, which in turn impacts U.K. consumers," said Rachael Kent, the lead claimant in the case and a professor at King's College London. She teaches the ways in which consumers interact with and depend upon digital platforms. The legal challenges come as Apple faces a backlash -- with billions of dollars in revenue on the line -- from global regulators and some developers who say its fees and other policies are unjust and self-serving. Last month, the European Commission sent a statement of objections to the firm, laying out how it thinks Apple abused its power as the "gatekeeper" for music-streaming apps on its store. The suit alleges that Apple deliberately shuts out potential competition and forces ordinary users to use its own payment processing system, generating unlawfully excessive levels of profit for the company. The claimants say any U.K. user of an iPhone or iPad who purchased paid apps, subscriptions or made other in-app purchases since October 2015 is entitled to compensation. "We believe this lawsuit is meritless and welcome the opportunity to discuss with the court our unwavering commitment to consumers and the many benefits the App Store has delivered to the U.K.'s innovation economy," Apple said in an emailed statement. "The commission charged by the App Store is very much in the mainstream of those charged by all other digital marketplaces," Apple said. "In fact, 84% of apps on the App Store are free and developers pay Apple nothing. And for the vast majority of developers who do pay Apple a commission because they are selling a digital good or service, they are eligible for a commission rate of 15%."

  • Some Countries Have No COVID-19 Jabs At All
    The World Health Organization says nearly a dozen countries -- many of them in Africa -- are still waiting to get vaccines. Those last in line on the continent along with Chad are Burkina Faso, Burundi, Eritrea and Tanzania. From a report: "Delays and shortages of vaccine supplies are driving African countries to slip further behind the rest of the world in the COVID-19 vaccine rollout and the continent now accounts for only 1% of the vaccines administered worldwide," WHO warned Thursday. And in places where there are no vaccines, there's also the chance that new and concerning variants could emerge, said Gian Gandhi, UNICEF's COVAX coordinator for Supply Division.   "So we should all be concerned about any lack of coverage anywhere in the world," Gandhi said, urging higher-income countries to donate doses to the nations that are still waiting. While the total of confirmed COVID-19 cases among them is relatively low compared with the world's hot spots, health officials say that figure is likely a vast undercount: The countries in Africa still waiting for vaccines are among those least equipped to track infections because of their fragile health care systems. Chad has confirmed only 170 deaths since the pandemic began, but efforts to stop the virus entirely here have been elusive. Although the capital's international airport was closed briefly last year, its first case came via someone who crossed one of Chad's porous land borders illegally.

  • California Ban On Gas-Powered Cars Would Rewrite Plug-In Hybrid Rules
    An anonymous reader quotes a report from CNET: As of now, California wants to implement an 80-20 mix where 80% of new cars sold will be totally electric or hydrogen-powered, and 20% may still feature a plug-in hybrid powertrain. Essentially, automakers will still be able to plop an engine under the hood come 2035. However, PHEVs will need to follow far more stringent definitions of the powertrain. California wants any plug-in hybrid to achieve 50 miles of all-electric range to meet the categorization -- a huge ask. Only two plug-in hybrids in recent years meet that criteria: the Chevrolet Volt (no longer on sale) and the Polestar 1 (soon to exit production). To achieve such a lofty range, automakers need to fit larger batteries, and when you're talking about a big battery and an internal-combustion engine, things get complex (and costly) quickly. But that's not all the state will need. To qualify under these regulations, future PHEVs will need to be capable of driving on electric power alone throughout their charged range. So, no software to flick on the engine for a few moments to recoup some lost energy. While these regulations would actually benefit drivers by shifting PHEVs away from "compliance cars" to something far more usable, the complexities may just turn automakers to focus exclusively on EVs. It all remains to be seen, however, since the plans remain open for public comment until June 11 of this year. After that, the board will vote and detail a full proposal later this year.

  • Impossible Burgers Are Coming To US Schools
    Impossible Foods has secured Child Nutrition Labels for its Impossible Burger products, which means they can now be part of school nutrition programs in the US. Engadget reports: To obtain the CN Labels, USDA's Food and Nutrition Services had to evaluate the plant-based meat's product formulation, as well as the company's quality control procedures and manufacturing processes. Now that it has acquired CN Labels for its products, the company is launching K-12 pilot programs this month in partnership with several school districts. The Palo Alto Unified School District in California, the Aberdeen School District in Washington, the Deer Creek Public Schools in Edmond, Oklahoma and the Union City Public Schools in Union City, Oklahoma will be using Impossible's faux meat in a variety of dishes for their menu. Those dishes include tacos, frito pies and spaghetti with Impossible meat sauce. Other school districts can easily obtain Impossible products from suppliers to add them to their menus, as well.

  • eBay Embraces NFTs
    eBay is joining the NFT frenzy, telling Reuters today that going forward it will allow the sales of NFTs on its platform, a mainstream embrace that follows billions of dollars in NFT purchases over the past few months. TechCrunch reports: The e-commerce company seems poised to slowly build up sales of digital collectibles on the platform, starting with a smaller group of verified sellers on the platform. "In the coming months, eBay will add new capabilities that bring blockchain-driven collectibles to our platform," eBay exec Jordan Sweetnam told them. eBay has invested heavily in infrastructure for physical collectibles like trading cards, as well as items like sneakers and watches which they help verify for buyers.

  • Amazon and Others Ordered To Slash Diesel Pollution From Warehouse Trucks
    Southern California has adopted a new air pollution rule aimed at slashing noxious emissions from warehouse trucks that move goods sold by Amazon and other e-commerce retailers. Ars Technica reports: Diesel pollution from heavy trucks causes everything from asthma to heart attacks, and even Parkinson's disease. Previously, such pollution tended to be concentrated around shipping ports and highways, but the growth of e-commerce has created a new source that is affecting neighborhoods farther inland. There are nearly 34,000 warehouses enclosing 1.17 billion square feet of space in the Los Angeles region alone. The rule, which was adopted late last week by a 9-4 vote of the South Coast Air Quality Management District (AQMD), would cover around 3,300 warehouses that are larger than 100,000 square feet. The rule seeks to reduce the amount of diesel particulate matter and nitrogen oxides produced by trucks serving these facilities. The district covers more than 17 million people, or nearly half the state's population.   The way the South Coast AQMD is approaching warehouse-related pollution is novel. Rather than attempting to control traffic flow to and from the facilities, the regulator will require warehouse owners to take various steps to reduce pollution in the area. That could include buying electric or fuel-cell trucks, adding solar panels to the building roofs, or installing air filters at nearby homes, hospitals, and schools. Each of these measures is assigned a point value, and warehouse operators must achieve a certain total to offset the emissions from their truck traffic. If they cannot meet the goal through mitigation measures, they can pay a fee instead. South Coast AQMD is phasing in compliance depending on the size of the facility. Warehouses that are over 250,000 square feet must meet their goals by June 30, 2022. 
Warehouses over 150,000 square feet must comply by the same day the following year, and those over 100,000 square feet get until June 30, 2024. Amazon's typical warehouses, for example, range in size from 600,000 to 1 million square feet. [...] The new rule is expected to save 150 to 300 lives and prevent 2,500 to 5,800 asthma attacks between 2022 and 2031. Overall, the public health benefits could be as large as $2.7 billion over the same timeframe.

  • Army of Fake Fans Boosts China's Messaging on Twitter
    China's ruling Communist Party has opened a new front in its long, ambitious war to shape global public opinion: Western social media. From a report: Liu Xiaoming, who recently stepped down as China's ambassador to the United Kingdom, is one of the party's most successful foot soldiers on this evolving online battlefield. He joined Twitter in October 2019, as scores of Chinese diplomats surged onto Twitter and Facebook, which are both banned in China. Since then, Liu has deftly elevated his public profile, gaining a following of more than 119,000 as he transformed himself into an exemplar of China's new sharp-edged "wolf warrior" diplomacy, a term borrowed from the title of a top-grossing Chinese action movie. "As I see it, there are so-called 'wolf warriors' because there are 'wolfs' in the world and you need warriors to fight them," Liu, who is now China's Special Representative on Korean Peninsula Affairs, tweeted in February. His stream of posts -- principled and gutsy ripostes to Western anti-Chinese bias to his fans, aggressive bombast to his detractors -- were retweeted more than 43,000 times from June through February alone. But much of the popular support Liu and many of his colleagues seem to enjoy on Twitter has, in fact, been manufactured.   A seven-month investigation by the Associated Press and the Oxford Internet Institute, a department at Oxford University, found that China's rise on Twitter has been powered by an army of fake accounts that have retweeted Chinese diplomats and state media tens of thousands of times, covertly amplifying propaganda that can reach hundreds of millions of people -- often without disclosing the fact that the content is government-sponsored. More than half the retweets Liu got from June through January came from accounts that Twitter has suspended for violating the platform's rules, which prohibit manipulation. 
Overall, more than one in ten of the retweets 189 Chinese diplomats got in that time frame came from accounts that Twitter had suspended by Mar. 1. But Twitter's suspensions did not stop the pro-China amplification machine. An additional cluster of fake accounts, many of them impersonating U.K. citizens, continued to push Chinese government content, racking up over 16,000 retweets and replies before Twitter kicked them off late last month and early this month, in response to the AP and Oxford Internet Institute's investigation.

  • Voice Actor Reportedly Responsible For Amazon Alexa Revealed
    An anonymous reader quotes a report from The Verge: Amazon's Alexa has a voice familiar to millions: calm, warm, and measured. But like most synthetic speech, its tones have a human origin. There was someone whose voice had to be recorded, analyzed, and algorithmically reproduced to create Alexa as we know it now. Amazon has never revealed who this "original Alexa" is, but journalist Brad Stone says he tracked her down, and she is Nina Rolle, a voiceover artist based in Boulder, Colorado. The claim comes from Stone's upcoming book on the tech giant, Amazon Unbound, an excerpt of which is published here in Wired. Neither Amazon nor Rolle confirmed or denied Stone's reporting, which he says is based on conversations with the professional voiceover community, but Rolle's voice alone makes for a compelling case. Here's how Stone writes up the process of selecting Alexa's voice: "Believing that the selection of the right voice for Alexa was critical, [then-Amazon exec Greg] Hart and colleagues spent months reviewing the recordings of various candidates that GM Voices produced for the project, and presented the top picks to Bezos. The Amazon team ranked the best ones, asked for additional samples, and finally made a choice. Bezos signed off on it. Characteristically secretive, Amazon has never revealed the name of the voice artist behind Alexa. I learned her identity after canvassing the professional voice-over community: Boulder, Colorado-based voice actress and singer Nina Rolle. Her professional website contains links to old radio ads for products such as Mott's Apple Juice and the Volkswagen Passat -- and the warm timbre of Alexa's voice is unmistakable. Rolle said she wasn't allowed to talk to me when I reached her on the phone in February 2021. When I asked Amazon to speak with her, they declined."

    Read more of this story at Slashdot.

  • Chinese TV Maker Skyworth Under Fire For Excessive Data Collection That Users Call Spying
    Chinese television maker Skyworth has issued an apology after a consumer found that his set was quietly collecting a wide range of private data and sending it to a Beijing-based analytics company without his consent. From a report: A network traffic analysis revealed that a Skyworth smart TV scanned for other devices connected to the same local network every 10 minutes and gathered data that included device names, IP addresses, network latency and even the names of other Wi-Fi networks within range, according to a post last week on the Chinese developer forum V2EX. The data was sent to the Beijing-based firm Gozen Data, the forum user said. Gozen is a data analytics company that specialises in targeted advertising on smart TVs, and it calls itself China's first "home marketing company empowered by big data centred on family data." The user did not identify himself, and efforts to contact the person received no reply. However, the post quickly picked up steam, touching a nerve among Chinese consumers and prompting angry comments. "Isn't this already the criminal offence of spying on people?" asked one user on a Chinese financial news portal. "Whom will the collected data be sold to, and who is the end user of this data?"

    Read more of this story at Slashdot.

  • East Coast Facing Gas Shortage Due To Ransomware Attack
    New submitter TheCowSaysMoo writes: Gas stations from Florida to Virginia began running dry and prices at the pump jumped on Tuesday as the shutdown of the biggest U.S. fuel pipeline by hackers extended into a fifth day and sparked panic buying by motorists. About 7.5% of gas stations in Virginia and 5% in North Carolina had no fuel on Tuesday as demand jumped 20%, tracking firm GasBuddy said. Prices rose to their highest in more than six years, and Georgia suspended sales tax on gas until Saturday to ease the strain on consumers. North Carolina declared an emergency. Colonial Pipeline has forecast that it will not substantially restore operations of the 5,500-mile pipeline network that supplies nearly half of the East Coast's fuel until the end of the week. The company preventively shut the pipeline on Friday after hackers locked its computers and demanded ransom, underscoring the vulnerability of U.S. energy infrastructure to cyberattack.

    Read more of this story at Slashdot.

  • Google Plans To Double AI Ethics Research Staff
    Alphabet's Google plans to double the size of its team studying artificial-intelligence ethics in the coming years, as the company looks to strengthen a group that has had its credibility challenged by research controversies and personnel defections. From a report: Vice President of Engineering Marian Croak said at The Wall Street Journal's Future of Everything Festival that the hires will increase the size of the responsible AI team that she leads to 200 researchers. Additionally, she said that Alphabet Chief Executive Sundar Pichai has committed to boost the operating budget of a team tasked with evaluating code and product to avert harm, discrimination and other problems with AI. "Being responsible in the way that you develop and deploy AI technology is fundamental to the good of the business," Ms. Croak said. "It severely damages the brand if things aren't done in an ethical way." Google announced in February that Ms. Croak would lead the AI ethics group after it fired the division's co-head, Margaret Mitchell, for allegedly sharing internal documents with people outside the company. Ms. Mitchell's exit followed criticism of Google's suppression of research last year by a prominent member of the team, Timnit Gebru, who says she was fired because of studies critical of the company's approach to AI. Mr. Pichai pledged an investigation into the circumstances around Ms. Gebru's departure and said he would seek to restore trust.

    Read more of this story at Slashdot.

The Register

  • Blessed are the cryptographers, labelling them criminal enablers is just foolish
    Preserving privacy is hard. I know because when I tried, I quickly learned not to play with weapons
    Column Nearly a decade ago I decided to try my hand as a cryptographer. It went about as well as you might expect. I’d gotten the crazy idea to write a tool that would encrypt Twitter’s direct messages - sent in the clear - so that your private communications would truly be private, visible to no one, including Twitter.…

  • Intel throws sand in the face of 'musclebooks' with 10nm Tiger Lake tech
    11th-gen Core H has nice new touches, but pitch is usual 'a new PC will be faster and smaller and lighter than an old PC' promise
    Intel is talking up a new generation of laptop and mobile workstation CPUs that it says will deliver modest performance gains and lighten laptops for power users.…

  • LibreBMC project to open source baseboard management controllers with security as a priority
    Freely available from the hardware schematics to OpenPOWER cores on an FPGA, to the firmware on top
    The OpenPOWER Foundation, formed to promote IBM's open-source POWER instruction set architecture (ISA), on Monday said it is putting together a new working group to develop LibreBMC, claimed to be the first baseboard management controller (BMC) designed with open source software and hardware.…

  • US postal service goes all in on AI
    Plus: Google boffin who resigned over AI ethics controversy, joins Apple
    In Brief What do you know? The US Postal Service uses AI technology and has GPU servers running computer vision algorithms to track items being delivered across the country.…



  • AMD Publishes Radeon Rays 4.1 As Open-Source
    Last year Radeon Rays 4.0 brought Vulkan support while dropping OpenCL and at the same time no longer being open-source... This GPU-accelerated ray intersection library used by the likes of Radeon ProRender is out today with version 4.1 and now it's back to being open-source...

  • LibreOffice Begins Landing GTK4 Support Code
    Ahead of this week's LibreOffice 7.2 Alpha and the feature freeze / branching next month, initial GTK4 toolkit support code has begun landing in this open-source office suite...

  • Microsoft Bringing eBPF Support To Windows
    eBPF has been one of the greatest Linux kernel innovations of the past decade and now Microsoft has decided to bring this "revolutionary technology" to Windows Server and Windows 10...

  • Daemon Engine 0.52 Beta Continues Advancing The id Tech 3 Open-Source Code In 2021
    The Daemon engine that has been in development for many years as part of the Unvanquished open-source game project released their long-awaited 0.52 beta ahead of the game's next beta later in the week. Daemon was originally based on the open-source id Tech 3 game engine but in 2021 continues pushing ahead working on features like WebAssembly support and renderer enhancements...

  • NVIDIA GeForce RTX 3090 - Windows vs. Linux GPU Compute Performance
    Following the recent RTX 30 series Linux gaming benchmarks and RTX 30 compute comparison, I was curious how the Linux performance for the flagship GeForce RTX 3090 graphics card compares to the Windows 10 performance in various GPU compute workloads. Well, here are those benchmarks for those wondering about Vulkan / OpenCL / CUDA / OptiX compute performance between Windows and Linux with the very latest NVIDIA drivers.

  • Linux 5.13 Features From Apple M1 To New GPU Support, Security Additions
    Following the two week merge window, feature development on the Linux 5.13 kernel is slated to end today with the release of Linux 5.13-rc1. Here is a look at some of the most interesting new features and improvements for this kernel that in turn should debut as stable around the end of June.

  • China Is Launching A New Alternative To Google Summer of Code, Outreachy
    The Institute of Software Chinese Academy of Sciences (ISCAS) in cooperation with the Chinese openEuler Linux distribution have been working on their own project akin to Google Summer of Code and Outreachy for paying university-aged students to become involved in open-source software development...

  • Linux 5.10 LTS Will Be Maintained Through End Of Year 2026
    Linux 5.10 as the latest Long Term Support release when announced was only going to be maintained until the end of 2022 but following enough companies stepping up to help with testing, Linux 5.10 LTS will now be maintained until the end of year 2026...



  • OpenIndiana Hipster 2021.04 released
    After another 6 months have passed we are proud to announce the release of our 2021.04 snapshot. The images are available at the usual place. As usual we have automatically received all updates that have been integrated into illumos-gate. The major changes are new versions of Firefox and Thunderbird, multiple NVIDIA drivers to choose from, and a lot more. For those unaware, OpenIndiana is a distribution of illumos, which in turn is the continuation of the last open source Solaris version before Oracle did what it does best and messed everything up.

  • Parallels Desktop 16.5 review: Windows comes to Apple Silicon (sort of)
    After sixteen major releases, you might think there’s not much left to be added to Parallels Desktop – and for the vast majority of Mac users who are still using Intel CPUs, there isn’t. For them, this update to the popular virtualisation software tidies up a few bugs and adds support for the latest version of the Linux kernel, but that’s largely it. Overall it’s not even consequential enough to warrant a full ticking up of the version number. Yet arguably, this is the most significant release of Parallels Desktop since it first appeared in 2006. Just as version one unlocked the potential of Apple’s then-recent switch to the Intel architecture, this one breaks new ground by allowing you to install and run Windows 10 on Apple Silicon. They conclude it’s a great first release, but that it still has a way to go.

  • OpenBSD 6.9 released
    OpenBSD 6.9 has been released. This release focuses a lot on improving support for certain platforms, such as powerpc64, mainly for modern POWER9 systems such as the Blackbird (which we reviewed late last year) and Talos II (which I have here now for review), arm64, and preliminary support for Apple's ARM M1 architecture. There is way, way more in this release, of course, so feel free to peruse the release notes. On a related note, I recently bought an HP Visualize C3750 PA-RISC workstation, and it's been pretty much impossible to get my hands on a proper copy of HP-UX 11i v1 that works on the machine. As such, in the interim, I installed OpenBSD on it, and it's been working like a charm. I still need to set up and try X, but other than that, it's been a very pleasant experience. Effortless installation, good documentation, and friendlier to use than I expected.

  • EU accuses Apple of App Store antitrust violations
    The European Commission is issuing antitrust charges against Apple over concerns about the company’s App Store practices. The Commission has found that Apple has broken EU competition rules with its App Store policies, following an initial complaint from Spotify back in 2019. Specifically, the Commission believes Apple has a “dominant position in the market for the distribution of music streaming apps through its App Store.” The EU has focused on two rules that Apple imposes on developers: the mandatory use of Apple’s in-app purchase system (for which Apple charges a 30 percent cut), and a rule forbidding app developers from informing users of other purchasing options outside of apps. The Commission has found that the 30 percent commission fee, or “Apple tax” as it’s often referred to, has resulted in higher prices for consumers. “Most streaming providers passed this fee on to end users by raising prices,” according to the European Commission. As predicted, and entirely reasonable. This is only the first step in the process, and Apple will have the opportunity to respond. If found guilty, Apple could face a fine of more than 22 billion euro (10 percent of its annual revenue), or be forced to change its business model.

  • Microsoft announces Windows 10 May 2021 Update (version 21H1)
    The Windows 10 May 2021 Update has been finalized and Build 19043.928 is likely to be the release candidate. Unsurprisingly, the May 2021 Update will begin rolling out to millions of users around the world in May, and it will ship with a few minor improvements, mostly for enterprise customers. Microsoft has officially named the version 21H1 update as “May 2021 Update” and published the final bits in the Release Preview Channel. I wish Microsoft would rethink its obtuse versioning and naming scheme for Windows, because none of this makes any sense to me anymore. This is a small update, and mostly focused on remote work scenarios in the enterprise.

  • Linux 5.12 released
    Linux 5.12 brings Intel Variable Rate Refresh (VRR/Adaptive-Sync), Radeon RX 6000 series overclocking support, mainline support for the Nintendo 64, the Sony PlayStation 5 DualSense controller driver, CXL 2.0 Type-3 memory device support, KFENCE, dynamic preemption capabilities, Clang link-time optimizations, laptop support improvements, and much more. A decently sized release. My favourite is definitely adding N64 support to the kernel.

  • Arm announces Neoverse V1, N2 platforms and CPUs, CMN-700 Mesh
    Today, we’re pivoting towards the future and the new Neoverse V1 and Neoverse N2 generation of products. Arm had already tested the new products last September, teasing a few characteristics of the new designs, but falling short of disclosing more concrete details about the new microarchitectures. Following last month’s announcement of the Armv9 architecture, we’re now finally ready to dive into the two new CPU microarchitectures as well as the new CMN-700 mesh network. These are looking really good.

  • Apple will reportedly face EU antitrust charges this week
    I'm linking to The Verge, since the original FT article is locked behind a paywall. The European Commission will issue antitrust charges against Apple over concerns about the company’s App Store practices, according to a report from the Financial Times. The commission has been investigating whether Apple has broken EU competition rules with its App Store policies, following an initial complaint from Spotify back in 2019 over Apple’s 30 percent cut on subscriptions. The European Commission opened up two antitrust investigations into Apple’s App Store and Apple Pay practices last year, and the Financial Times only mentions upcoming charges on the App Store case. It’s not clear yet what action will be taken. I'm glad both the US and EU are turning up the heat under Apple (and the other major technology companies), since their immense market power and clear-cut cases of abuse have to end. I am a strict proponent of doing what the United States used to be quite good at, and that's breaking Apple and Google up into smaller companies forced to compete with one another and the rest of the market. The US has done it countless times before, and it should do it again. In this specific case, Apple should be divided up into Mac hardware, mobile hardware, software (macOS, iOS, and applications), and services. This would breathe immense life into the market, and would create countless opportunities for others to come in and compete. The US has taken similar actions with railroads, oil, airplanes, and telecommunications, and the technology market should be no different.

  • iOS 14.5, macOS 11.3 released
    iOS 14.5 is a major update with a long list of new features, including the ability to unlock an iPhone with an Apple Watch, 5G support for dual-SIM users, new emoji characters, an option to select a preferred music service to use with Siri, crowdsourced data collection for Apple Maps accidents, AirPlay 2 support for Fitness+, and much more. The update also introduces support for AirTags and Precision Finding on the iPhone 12 models, and it marks the official introduction of App Tracking Transparency. There is a long list of bug fixes, with Apple addressing everything from AirPods switching issues to the green tint that some users saw on iPhone 12 models. A big update for such a small version number, and a lot of good stuff in there. Apple also released macOS Big Sur 11.3, which is a smaller update than the iOS one, but still contains some nice additions such as better touch integration for running iOS apps on the Mac and improved support for game controllers.

  • Microsoft is building a new app store for Windows 10
    Microsoft is working on a brand-new Store app for Windows 10 that will introduce a modern and fluid user interface, as well as bring changes to the policies that govern what kind of apps can be submitted to the store by developers. According to sources familiar with the matter, this new Store will pave the way to a revitalized storefront that's more open to both end users and developers. The biggest change is that Microsoft will supposedly allow developers to host unpackaged, unaltered, bog-standard Win32 applications in the Store. Right now, even Win32 applications need to be packaged as MSIX, but this requirement is going away. The Microsoft Store definitely needs a lot of love, but I feel like the problem isn't the Store itself; it's just how messy and fragmented managing applications on Windows really is.

  • Using a PowerBook in 2021
    It has been recently announced that the venerable TenFourFox web browser for PowerPC (PPC) Macs was going to cease regular development, which rekindled my interest in playing around with my trusty PowerBook G4, which only gets occasional use if I'm testing a PowerPC version of some of my own software. Such is the way of aging hardware and software: the necessity to support them wanes over time, but it raises the question of how useful an 18-year-old laptop can be in 2021. Can it still be useful, or is it relegated to a hobbyist's endeavors? As usual, the internet and networking are the hurdles.

  • Ubuntu 21.04 released
    Today, Canonical released Ubuntu 21.04 with native Microsoft Active Directory integration, Wayland graphics by default, and a Flutter application development SDK. Separately, Canonical and Microsoft announced performance optimization and joint support for Microsoft SQL Server on Ubuntu. Ubuntu 21.04 is an important release, if only because of the switch to Wayland, following in Fedora's footsteps. Ubuntu did opt out of shipping GNOME 40, though, so it comes with 3.38 instead. The step to Wayland is surely going to cause problems for some people, but overall, I think it's high time and Wayland is pretty much as ready as it's ever going to be. Remember, Wayland is not X, as I said a few months ago: Wayland is not X11. Let me repeat that. Wayland is not X11. If you need the functionality that X11 delivers, then you shouldn’t be using Wayland. This is like buying a Mac and complaining your Windows applications don’t work. With NVIDIA finally seeming to get at least somewhat on board, and X11 development basically having dried up, the time for Wayland is now.

  • Why does trying to break into the NT 3.1 kernel reboot my 486DX4 machine?
    While installing Windows NT 3.1 worked perfectly, I really like to tinker with my retro stuff. The Windows NT 3.1 CD comes with the full set of debugging symbols, I'm curious to investigate why NetDDE throws an error into the event log, and the system crashes with a specific EISA ethernet card (which might be due to faulty hardware), so I decided to dive into kernel debugging. Setting up kernel debugging is straightforward, once you realize you should use the i386kd executable supplied with Windows NT 3.1 instead of kd/ntkd from the current Windows 10 development kit. As soon as I want to break in (using Ctrl-C in i386kd), the target machine reboots instead of providing a kdb prompt. Such an obscure question and bug, and yet, there's someone providing a detailed answer, and a fix.

  • Here’s everything new in Android 12 Developer Preview 3
    The next version of Android remains focused on developers until the first beta launches next month. With that in mind, we’re diving into today’s release of Android 12 DP3 to find all the new features. Mostly small changes, still, and many of them seem specific to Google's own devices.

  • University banned from contributing to Linux kernel after intentionally submitting vulnerable code
    A statement from the University of Minnesota Department of Computer Science & Engineering: Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel. The research method used raised serious concerns in the Linux Kernel community and, as of today, this has resulted in the University being banned from contributing to the Linux Kernel. We take this situation extremely seriously. We have immediately suspended this line of research. We will investigate the research method and the process by which this research method was approved, determine appropriate remedial action, and safeguard against future issues, if needed. We will report our findings back to the community as soon as practical. This story is crazy. It turns out researchers from the University of Minnesota were intentionally trying to introduce vulnerabilities into the Linux kernel as part of some research study. This was, of course, discovered, and kernel maintainer Greg Kroah-Hartman immediately banned the entire university from submitting any code to the Linux kernel. Replying to the researcher in question, Kroah-Hartman wrote: You, and your group, have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work. Now you submit a new series of obviously-incorrect patches again, so what am I supposed to think of such a thing? They obviously were _NOT_ created by a static analysis tool that is of any intelligence, as they all are the result of totally different patterns, and all of which are obviously not even fixing anything at all. So what am I supposed to think here, other than that you and your group are continuing to experiment on the kernel community developers by sending such nonsense patches?
Our community does not appreciate being experimented on, and being "tested" by submitting known patches that are either do nothing on purpose, or introduce bugs on purpose. If you wish to do work like this, I suggest you find a different community to run your experiments on, you are not welcome here. Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems. This is obviously the only correct course of action, and the swift response by the university is the right one.

Linux Journal - The Original Magazine of the Linux Community

  • eBPF for Advanced Linux Infrastructure Monitoring
    by Odysseas Lamztidis   
    A year has passed since the pandemic left us spending the better part of our days sheltering inside our homes. It has been a challenging time for developers, sysadmins, and entire IT teams, for that matter, who began to juggle the task of monitoring and troubleshooting an influx of data within their systems and infrastructures as the world was forced online. To do their job properly, free, open-source technologies like Linux have become increasingly attractive, especially amongst Ops professionals and sysadmins in charge of maintaining growing and complex environments. Engineers, as well, are using more open-source technologies, largely due to the flexibility and openness they offer, versus commercial offerings that come with high prices and stringent feature lock-ins.

    One emerging technology in particular, eBPF, has made its appearance in multiple projects, including commercial and open-source offerings. Before discussing the community surrounding eBPF and its growth during the pandemic, it’s important to understand what it is and how it’s being utilized. eBPF, or extended Berkeley Packet Filter, was originally introduced as BPF back in 1992 in a paper by Lawrence Berkeley Laboratory researchers as a rule-based mechanism to filter and capture network packets. Filters would be implemented to run inside a register-based Virtual Machine (VM), which itself would exist inside the Linux Kernel. After several years of inactivity, BPF was extended to eBPF, featuring a full-blown VM to run small programs inside the Linux Kernel. Since these programs run from inside the Kernel, they can be attached to a particular code path and be executed when it is traversed, making them perfect for building applications for packet filtering, performance analysis, and monitoring.

    Originally, it was not easy to create eBPF programs, as the programmer needed to know an extremely low-level language. However, the community around that technology has evolved considerably through their creation of tools and libraries to simplify and speed up the process of developing and loading an eBPF program inside the Kernel. This was crucial for creating a large number of tools that can trace system and application activity down to a very granular level. The image that follows demonstrates this, showing the sheer number of tools that exist to trace various parts of the Linux stack.
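    The classic BPF design described above (filter rules executed by a small register-based VM over each packet) can be illustrated with a toy, userspace-only sketch. This is purely a teaching model: the instruction names, the single accumulator, and the sample packets are invented for illustration and bear no relation to real eBPF bytecode or the in-kernel verifier.

```python
# Toy model of the BPF idea: a filter is a list of instructions run by a
# tiny register machine, and the return value is the verdict on a packet.

def run_filter(program, packet):
    """Interpret `program` over `packet` (a bytes object); return the
    verdict (non-zero = accept, 0 = drop)."""
    acc = 0  # single accumulator register, as in classic BPF
    for op, arg in program:
        if op == "ld_byte":    # load the packet byte at offset `arg`
            acc = packet[arg]
        elif op == "jeq_ret":  # if acc == arg[0], return verdict arg[1]
            if acc == arg[0]:
                return arg[1]
        elif op == "ret":      # unconditional verdict
            return arg
    return 0

# "Accept only IPv4": check the version/IHL byte of a raw IP header.
ipv4_only = [
    ("ld_byte", 0),
    ("jeq_ret", (0x45, 1)),  # version 4, header length 5 -> accept
    ("ret", 0),              # anything else -> drop
]

print(run_filter(ipv4_only, bytes([0x45, 0x00])))  # 1 (accepted)
print(run_filter(ipv4_only, bytes([0x60, 0x00])))  # 0 (dropped)
```

Real BPF works the same way in spirit, but the filter runs inside the kernel at the attachment point, so no packet data has to cross into userspace just to be discarded.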
        Go to Full Article          

  • How to set up a CrowdSec multi-server installation
    by Manuel Sabban    Introduction  CrowdSec is an open-source and collaborative security solution built to secure Internet-exposed Linux services, servers, containers, or virtual machines with a server-side agent. It is a modernized version of Fail2ban, which was a great source of inspiration to the project founders.
    CrowdSec is free (under an MIT License) and its source code is available on GitHub. The solution leverages a log-based IP behavior analysis engine to detect attacks. When the CrowdSec agent detects an aggression, it offers different types of remediation to deal with the IP behind it (access prohibition, captcha, 2FA authentication, etc.). The report is curated by the platform and, if legitimate, shared across the CrowdSec community so users can also protect their assets from this IP address.
    A few months ago, we added some interesting features to CrowdSec when releasing v1.0.x. One of the most exciting ones is the ability of the CrowdSec agent to act as an HTTP REST API to collect signals from other CrowdSec agents. Thus, it is the responsibility of this special agent to store and share the collected signals. We will call this special agent the LAPI server from now on.
    Another feature worth noting is that mitigation no longer has to take place on the same server as detection. Mitigation is done using bouncers. Bouncers rely on the HTTP REST API served by the LAPI server.
    Goals  In this article we’ll describe how to deploy CrowdSec in a multi-server setup with one server sharing signals.
    Both server-2 and server-3 are meant to host services. You can take a look at our Hub to see which services CrowdSec can help you secure. Last but not least, server-1 is meant to host the following local services:
      • the local API needed by bouncers
      • the database, fed by the three local CrowdSec agents and the online CrowdSec blocklist service
    As server-1 is serving the local API, we will call it the LAPI server.
    We chose to use a PostgreSQL backend for the CrowdSec database in order to allow high availability. This topic will be covered in future posts. If you are OK without high availability, you can skip step 2.
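    As a concrete sketch of connecting server-2's agent to server-1's LAPI, the fragment below shows the general shape of the agent-side credentials file in CrowdSec v1.x. The hostname, port, and password are placeholders; real credentials are produced when the machine is registered on the LAPI server (e.g. with `cscli machines add`), so treat this as an illustration rather than a copy-paste configuration.

```yaml
# /etc/crowdsec/local_api_credentials.yaml on server-2 (agent side)
# Placeholder values -- generate real credentials by registering the
# machine on server-1, the LAPI server.
url: http://server-1:8080
login: server-2
password: <password-generated-at-registration>
```

With this in place, the agent pushes its signals to the LAPI server instead of keeping everything locally, and bouncers on any of the three machines can query the same API.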
        Go to Full Article          

  • Develop a Linux command-line Tool to Track and Plot Covid-19 Stats
    by Nawaz Abbasi    It’s been over a year and we are still fighting the pandemic in almost every aspect of our life. Thanks to technology, we have various tools and mechanisms to track Covid-19 related metrics, which help us make informed decisions. This introductory-level tutorial discusses developing one such tool at the Linux command line, from scratch.
    We will start with introducing the most important parts of the tool: the APIs and the commands. We will be using two APIs for our tool, the COVID19 API and the Quickchart API, and two key commands, curl and jq. In simple terms, the curl command is used for data transfer and the jq command is used to process JSON data.
    The complete tool can be broken down into two key steps:

    1. Fetching (GET request) data from the COVID19 API and piping the JSON output to jq to extract only the global data (or, similarly, country-specific data).
     $ curl -s --location --request GET '' | jq -r '.Global'
     {
       "NewConfirmed": 561661,
       "TotalConfirmed": 136069313,
       "NewDeaths": 8077,
       "TotalDeaths": 2937292,
       "NewRecovered": 487901,
       "TotalRecovered": 77585186,
       "Date": "2021-04-13T02:28:22.158Z"
     }
    2. Storing the output of step 1 in variables and calling the Quickchart API using those variables, to plot a chart. Subsequently piping the JSON output to jq so as to filter only the link to our chart.
     $ curl -s -X POST \
         -H 'Content-Type: application/json' \
         -d '{"chart": {"type": "bar", "data": {"labels": ["NewConfirmed (${newConf})", "TotalConfirmed (${totConf})", "NewDeaths (${newDeath})", "TotalDeaths (${totDeath})", "NewRecovered (${newRecover})", "TotalRecovered (${totRecover})"], "datasets": [{"label": "Global Covid-19 Stats (${datetime})", "data": [${newConf}, ${totConf}, ${newDeath}, ${totDeath}, ${newRecover}, ${totRecover}]}]}}}' \
       | jq -r '.url'

     That’s it! Now we have our data plotted out in a chart:
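    For readers who prefer a single script to the curl/jq pipeline, the following sketch reproduces the same two steps in Python. It is a hedged illustration: the network calls are replaced by a captured sample payload (the numbers from the example output above), so only the JSON selection and the Quickchart request-body construction are shown.

```python
import json

# Sample response in the shape returned by the COVID19 API summary
# endpoint; the numbers are the ones shown in the article's example.
sample = """
{"Global": {"NewConfirmed": 561661, "TotalConfirmed": 136069313,
            "NewDeaths": 8077, "TotalDeaths": 2937292,
            "NewRecovered": 487901, "TotalRecovered": 77585186,
            "Date": "2021-04-13T02:28:22.158Z"}}
"""

# Step 1, the equivalent of `jq -r '.Global'`: select the Global object.
stats = json.loads(sample)["Global"]

# Step 2: build the Quickchart payload, substituting the fetched numbers
# into the labels and dataset, as the shell version does with ${newConf},
# ${totConf}, and so on.
keys = ("NewConfirmed", "TotalConfirmed", "NewDeaths",
        "TotalDeaths", "NewRecovered", "TotalRecovered")
labels = [f"{k} ({stats[k]})" for k in keys]
chart = {"chart": {
    "type": "bar",
    "data": {"labels": labels,
             "datasets": [{
                 "label": f"Global Covid-19 Stats ({stats['Date']})",
                 "data": [stats[k] for k in keys]}]}}}

# In the full tool this dict would be POSTed to the Quickchart API with
# curl or urllib.request; here we just show the first label.
print(labels[0])  # NewConfirmed (561661)
```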

        Go to Full Article          

  • FSF’s LibrePlanet 2021 Free Software Conference Is Next Weekend, Online Only
    by George Whittaker    On Saturday and Sunday, March 20th and 21st, 2021, free software supporters from all over the world will log in to share knowledge and experiences, and to socialize with others in the free software community. This year’s theme is “Empowering Users,” and the keynote speakers will be Julia Reda, Nathan Freitas, and Nadya Peek. Free Software Foundation (FSF) associate members and students attend gratis at the Supporter level.
    You can see the schedule and learn more about the conference at, and participants are encouraged to register in advance at
    The conference will also include workshops, community-submitted five-minute Lightning Talks, Birds of a Feather (BoF) sessions, and an interactive “exhibitor hall” and “hallway” for socializing.

  • Review: The New weLees Visual LVM, a new style of LVM management, has been released
    by George Whittaker    Maintaining the storage system is a daily job for system administrators. Linux provides users with a wealth of storage capabilities and powerful built-in maintenance tools. However, these tools are hardly friendly to system administrators, and considerable effort is generally required to master them.
    As Linux's built-in storage model, LVM provides users with plenty of flexible management modes to fit various needs. For users who can fully utilize its functions, LVM can meet almost any need. But the premise is a thorough understanding of the LVM model, dozens of commands, and their accompanying parameters.
    A graphical interface would dramatically simplify both the learning curve and day-to-day operation of LVM, in much the same way as the partition tools widely used on Windows and Linux. Although command scripts are suitable for daily, automated tasks, scripts cannot handle every LVM function; for instance, many tasks still require manual calculation and processing.
    Significant effort has been spent on this problem. Several graphical LVM management tools are now available; some are built into Linux distributions and others are developed by third parties. But one critical problem remains: the needs of remote machines and headless servers are completely ignored.
    Visual LVM Remote solves this. The tool's front end is built on the HTTP protocol, so users can perform management operations from any smart device that can connect to the storage server.
    Visual LVM is developed by weLees Corporation and supports all Linux distributions. In addition to working with remote/headless servers, it supports more advanced LVM features than the various off-the-shelf graphical LVM management tools.
    Dependencies of Visual LVM Remote  Visual LVM Remote can work on any Linux distribution that includes the two components below:
    UI of Visual LVM Remote  The UI is concise: partitions, physical volumes, and logical volumes are displayed by disk layout, so disk and volume-group information can be taken in at a glance. In addition, detailed information about an object is displayed in the information bar below when the mouse hovers over it.

  • Nvidia Linux drivers causing random hard crashes and now a major security risk still not fixed after 5+ months
    The recent fiasco with Nvidia trying to block Hardware Unboxed from future GPU review samples over the content of their review is one example of how they choose to play this game. This hatred is not only shared by reviewers, but also by developers and especially Linux users.
    The infamous Torvalds videos still traverse the web today as Nvidia conjures up another evil plan to suck up more of your money and market share. This is not just a one-off case; oh, how I wish it was. I just want my computer to work.
    If anyone has used Sway-WM with an Nvidia GPU, I’m sure they will remember the --my-next-gpu-wont-be-nvidia option.
    These are a few examples of many.
    The Nvidia Linux drivers have never been good, but whatever has been happening at Nvidia for the past decade has to stop today. The topic in question is this bug: []
    This bug causes hard, irrecoverable crashes on drivers 440+. The issue is still happening 5+ months later with no end in sight. At first, users could work around it by using an older DKMS driver along with an LTS kernel. Today this is no longer possible: many Linux distributions are dropping the old kernels, and the old DKMS driver cannot build against the new ones. Users are now FORCED into this “choice”:
    {Use an older driver and risk security implications} or {“use” the new drivers that cause random irrecoverable crashes.}
    This issue is only going to become more prevalent, as the kernel is a core dependency by definition. This is just another example of the implications of an unsafe older kernel causing issues for users:
    If you use Linux or care about the implications of a GPU monopoly, consider AMD. Nvidia is already rearing its ugly head and AMD is actually putting up a fight this year.

  • Parallel shells with xargs: Utilize all your cpu cores on UNIX and Windows
    by Charles Fisher    Introduction  One particular frustration with the UNIX shell is the inability to easily schedule multiple, concurrent tasks that fully utilize the CPU cores present on modern systems. The example of focus in this article is file compression, but the problem arises with many computationally intensive tasks, such as image/audio/media processing, password cracking and hash analysis, database Extract, Transform, and Load, and backup activities. It is understandably frustrating to wait for gzip * running on a single CPU core while most of a machine's processing power lies idle.
    This can be understood as a weakness of the first decade of Research UNIX, which was not developed on machines with SMP. The Bourne shell did not emerge from the 7th edition with any native syntax or controls for cohesively managing the resource consumption of background processes.
    Utilities have haphazardly evolved to perform some of these functions. The GNU version of xargs is able to exercise some primitive control in allocating background processes, which is discussed at some length in the documentation. While the GNU extensions to xargs have proliferated to many other implementations (notably BusyBox, including the release for Microsoft Windows, example below), they are not POSIX.2-compliant, and likely will not be found on commercial UNIX.
    Historic users of xargs will remember it as a useful tool for directories that contained too many files for echo * or other wildcards to be used; in this situation xargs is called to repeatedly batch groups of files with a single command. As xargs has evolved beyond POSIX, it has assumed a new relevance which is useful to explore.
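    A minimal sketch of that newer usage, assuming GNU or BusyBox xargs (the -P flag is the non-POSIX extension discussed above): compress a batch of files with up to four gzip processes running concurrently instead of one.

```shell
# Create a scratch directory with a few sample files to compress.
mkdir -p /tmp/xargs-demo
cd /tmp/xargs-demo
for i in 1 2 3 4 5 6 7 8; do
    printf 'sample data %s\n' "$i" > "file$i.txt"
done

# -P 4 keeps up to four gzip processes running at once;
# -n 1 hands each invocation a single file name.
printf '%s\n' file*.txt | xargs -P 4 -n 1 gzip

ls    # each file.txt has been replaced by file.txt.gz
```

    With -P 0, GNU xargs runs as many processes at once as it can; POSIX xargs offers no equivalent, which is the portability gap explored in the rest of the article.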

    Why is POSIX.2 this bad?  A clear understanding of the lack of cohesive job scheduling in UNIX requires some history of the evolution of these utilities.

  • Bypassing Deep Packet Inspection: Tunneling Traffic Over TLS VPN
    by Dmitriy Kuptsov   
    In some countries, network operators employ deep packet inspection techniques to block certain types of traffic. For example, Virtual Private Network (VPN) traffic can be analyzed and blocked to prevent users from sending encrypted packets over such networks.

    By observing that HTTPS works all over the world (it is configured for an extremely large number of web servers) and cannot be easily analyzed (the payload is usually encrypted), we argue that VPN tunneling can be organized in the same manner: by masquerading VPN traffic as TLS (or its older version, SSL), we can build a reliable and secure network. Packets sent over such tunnels can cross multiple domains with various (strict and not so strict) security policies. Although SSH could potentially be used to build such a network, we have evidence that in certain countries connections made over such tunnels are analyzed statistically: if network utilization by such tunnels is high, bursts exist, or connections are long-lived, then the underlying TCP connections are reset by network operators.

    Thus, here we make an experimental effort in this direction: first, we describe different VPN solutions that exist on the Internet; second, we describe our experimental effort with Python-based software and Linux, which allows users to create VPN tunnels using the TLS protocol and tunnel small office/home office (SOHO) traffic through them.
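    As a concrete illustration of the masquerading idea (this is not the authors' Python tool, and the hostnames and ports are hypothetical), an off-the-shelf TLS wrapper such as stunnel can carry an existing VPN session over port 443 so that it resembles ordinary HTTPS traffic:

```
; Hypothetical client-side stunnel.conf: wrap a local VPN client's TCP
; session in TLS and send it to a server listening on the HTTPS port.
[vpn-over-tls]
client  = yes
accept  = 127.0.0.1:1194      ; local endpoint the VPN client connects to
connect = vpn.example.com:443 ; remote stunnel server; looks like HTTPS
```

    On the server side, a matching stunnel instance (with client = no) terminates the TLS session and forwards the plaintext to the real VPN daemon.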
    Virtual private networks (VPNs) are crucial in the modern era. By encapsulating a client’s traffic and sending it inside protected tunnels, users can obtain network services that would otherwise be blocked by a network operator. VPN solutions are also useful for accessing a company’s intranet. For example, corporate employees can access the internal network securely by establishing a VPN connection and directing all traffic through the tunnel towards the corporate network. This way they can reach services that would otherwise be impossible to get from the outside world.
    There are various solutions that can be used to build VPNs. One example is the Host Identity Protocol (HIP) [7]. HIP is a layer-3.5 solution (it is in fact located between the transport and network layers) and was originally designed to split the dual role of IP addresses as identifier and locator. For example, a company called Tempered Networks uses the HIP protocol to build secure networks (for examples, see [4]).

  • How to Save Time Running Automated Tests with Parallel CI Machines
    by Artur Trzop   
    Automated tests are part of many programming projects, ensuring the software behaves as expected. The bigger the project, the larger the test suite can be. This can result in automated tests taking a lot of time to run. In this article you will learn how to run automated tests faster with parallel Continuous Integration (CI) machines and what problems can be encountered. The article covers common parallel testing problems, based on Ruby & JavaScript tests.
    Slow automated tests
    Automated tests can be considered slow when programmers stop running the whole test suite on their local machines because it is too time-consuming. Most of the time you use CI servers such as Jenkins, CircleCI, or GitHub Actions to run your tests on an external machine instead of your own. When a test suite runs for an hour, it is not efficient to run it on your own computer; browser end-to-end tests for a web project can take a really long time to execute. But running tests on a CI server for an hour is not efficient either. As a developer you need a fast feedback loop to know whether your software works fine, and automated tests should help you with that.
    Split tests between many CI machines to save time
    A way to save time is to make the CI build as fast as possible. When you have tests taking, e.g., 1 hour to run, you can leverage your CI server config and set up parallel jobs (parallel CI machines/nodes). Each of the parallel jobs can run a chunk of the test suite.

    You need to divide your tests between the parallel CI machines. With a 60-minute test suite you can run 20 parallel jobs, where each job runs a small set of tests, and this should save you time. In an optimal scenario you would run tests for 3 minutes per job.

    How do you make sure each job runs for 3 minutes? As a first step you can apply a simple solution: sort all of your test files alphabetically and divide them by the number of parallel jobs. Each of your test files can have a different execution time, depending on how many test cases there are per file and how complex each test case is, so you can end up with test files divided in a suboptimal way, and this is problematic. The image below illustrates a suboptimal split of tests between parallel CI jobs, where one job runs too many tests and ends up being a bottleneck.
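    The naive alphabetical split can be sketched in a few lines of shell (the file names and job count here are hypothetical, and real CI providers expose the job index through their own environment variables):

```shell
# Simulate a test suite of six spec files.
mkdir -p /tmp/ci-demo
cd /tmp/ci-demo
for f in a b c d e f; do touch "${f}_spec.rb"; done

N=3     # total number of parallel CI jobs
JOB=0   # index of this job, 0..N-1

# Sort the files and take every N-th one, offset by the job index.
files=$(ls *_spec.rb | sort | awk -v n="$N" -v i="$JOB" '(NR - 1) % n == i')
echo "$files"
```

    Job 0 ends up with a_spec.rb and d_spec.rb. Each file's runtime still varies, which is exactly the suboptimal-split problem described above; a smarter split needs recorded timings per file.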

  • The KISS Web Development Framework
    by Blake McBride   
    Perhaps the most popular platform for applications is the web. There are many reasons for this, including portability across platforms, no need to update the program, data backup, sharing data with others, and many more. This popularity has driven many of us to the platform.

    Unfortunately, the platform is a bit complex. Rather than developing in a single environment, with web applications it is necessary to create two halves of a program utilizing vastly different technologies. On top of that, there are many additional challenges, such as communications and security between the two halves.

    A typical web application would include all of the following building blocks:
     Front-end layout (HTML/CSS)
     Front-end functionality (JavaScript)
     Back-end server code (Java, C#, etc.)
     Communications (REST, etc.)
     Authentication
     Data persistence (SQL, etc.)
    All these don't even touch on the other pieces that are not part of your application proper, such as the server (Apache, Tomcat, etc.), the database server (PostgreSQL, MySQL, MongoDB, etc.), the OS (Linux, etc.), the domain name, DNS, yadda, yadda, yadda.

    The tremendous complexity notwithstanding, most application developers mainly have to concern themselves with the six items listed above. These are their main concerns.

    Although there are many fine solutions available for these main concerns, in general, these solutions are siloed, complex, and incongruent. Let me explain.

    Many solutions are siloed because they are single-solution packages that are complete within themselves and disconnected from other pieces of the system.

    Some solutions are so complex that they can take years to learn well. Developers can struggle more with the framework they are using than the language or application they are trying to write. This is a major problem.

    Lastly, by incongruent I mean that the siloed tools do not naturally fit well together. A bunch of glue code has to be written, learned, and supported to fit the various pieces together. Each tool has a different feel, a different approach, a different way of thinking.

    Frustrated with all of these problems, I wrote the KISS Web Development Framework. At first it was just a collection of solutions I had developed, but it later evolved into a single, comprehensive web development framework. KISS, an open-source project, was specifically designed to solve these exact challenges.

    KISS is a single, comprehensive, fully integrated web development framework that includes integrated solutions for:

     Custom HTML controls
     Easy communications with the back-end, with built-in authentication
     Browser cache control (so the user never has to clear their cache)
     A variety of general-purpose utilities

Page last modified on October 08, 2013, at 07:08 PM