NTLUG

Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX

Do a presentation at NTLUG.

What is the Linux Installation Project?

Real companies using Linux!

Not just for business anymore.

Providing ready-to-run platforms on Linux


LinuxSecurity - Security Advisories





  • RedHat: RHSA-2020-0520:01 Important: firefox security update
    An update for firefox is now available for Red Hat Enterprise Linux 7. Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability.



LWN.net

  • Security updates for Monday
    Security updates have been issued by Debian (evince, postgresql-9.4, and thunderbird), Fedora (ksh and libxml2), openSUSE (hostapd and nextcloud), Red Hat (chromium-browser, firefox, flash-plugin, and ksh), and SUSE (firefox and thunderbird).


  • NetBSD 9.0 released
    The NetBSD 9.0 release is out. "This is the seventeenth major release of the NetBSD operating system and brings significant improvements in terms of hardware support, quality assurance, security, along with new features and hundreds of bug fixes." Significant new features include Arm64 support, better virtualization support, kernel address-space layout randomization, and more; see the release notes for details.


  • Kernel prepatch 5.6-rc2
    The 5.6-rc2 kernel prepatch is out for testing. Linus says: "More than half the rc2 patch is actually Documentation updates, because the kvm docs got turned into RST. Another notable chunk is just tooling updates, which is about 50/50 perf updates (much of it due to header file syncing) and - again - kvm".


  • OpenSSH 8.2 released
    OpenSSH 8.2 is out. This release removes support for the ssh-rsa key algorithm, which may disrupt connectivity to older servers; see the announcement for a way to check whether a given server can handle newer, more secure algorithms. Also new in this release is support for FIDO/U2F hardware tokens.



  • [$] Keeping secrets in memfd areas
    Back in November 2019, Mike Rapoport made the case that there is too much address-space sharing in Linux systems. This sharing can be convenient and good for performance, but in an era of advanced attacks and hardware vulnerabilities it also facilitates security problems. At that time, he proposed a number of possible changes in general terms; he has now come back with a patch implementing a couple of address-space isolation options for the memfd mechanism. This work demonstrates the sort of features we may be seeing, but some of the hard work has been left for the future.


  • Security updates for Friday
    Security updates have been issued by Debian (debian-security-support, postgresql-11, and postgresql-9.6), Fedora (cutter-re, firefox, php-horde-Horde-Data, radare2, and texlive-base), openSUSE (docker-runc), Oracle (kernel), Red Hat (sudo), and Ubuntu (firefox).


  • [$] Revisiting stable-kernel regressions
    Stable-kernel updates are, unsurprisingly, supposed to be stable; that is why the first of the rules for stable-kernel patches requires them to be "obviously correct and tested". Even so, for nearly as long as the kernel community has been producing stable update releases, said community has also been complaining about regressions that make their way into those releases. Back in 2016, LWN did some analysis that showed the presence of regressions in stable releases, though at a rate that many saw as being low enough. Since then, the volume of patches showing up in stable releases has grown considerably, so perhaps the time has come to see what the situation with regressions is with current stable kernels.


  • Security updates for Thursday
    Security updates have been issued by Arch Linux (dovecot, firefox, ksh, and webkit2gtk), Debian (firefox-esr and openjdk-8), Mageia (exiv2, flash-player-plugin, python-waitress, and vim and neovim), openSUSE (pcp and rubygem-rack), Oracle (kernel), Red Hat (sudo), and Slackware (libarchive).



LXer Linux News


  • How to List Cron Jobs in Linux
    Cron is a scheduling daemon that allows you to schedule the execution of tasks at specified intervals. These tasks are called cron jobs, and they can be scheduled to run at a given minute, hour, day of the month, month, day of the week, or any combination of these; a short sketch follows below.
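
    To make the schedule format concrete, here is a hedged sketch (mine, not from the article; the script path is hypothetical). A crontab entry lists minute, hour, day of month, month and day of week, followed by the command to run:
      # m   h   dom  mon  dow   command
      30    6   *    *    *     /usr/local/bin/nightly-backup.sh
    You can list the current user's cron jobs with crontab -l, or another user's with sudo crontab -u username -l.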


  • Minicomputers and The Soul of a New Machine
    The Command Line Heroes podcast is back, and this season it covers the machines that run all the programming languages I covered last season. As the podcast staff puts it...


  • Switching between different Linux distributions without losing data
    So you installed the latest version of a Linux distribution, but want to switch to another variant of Linux, or want to upgrade to the latest version of a distribution and prefer a clean install over a regular upgrade? Done carelessly, this will surely make you lose your personal data.



  • How to Install Lighttpd on Debian 9
    In this tutorial, we will demonstrate how to install and deploy Lighttpd on a Debian 9 VPS with FPM/FastCGI support. Lighttpd is a free, open-source and high-performance web server developed by Jan Kneschke. It has a low memory footprint when compared to other web servers and is specially designed for speed-critical environments. It is secure, fast, and can handle up to 10,000 connections in parallel on a single server. It is used by many websites, including YouTube, Bloglines, WikiMedia, and many more. Lighttpd comes with a rich set of features, such as FastCGI, SCGI, Auth, URL-Rewriting, Output-Compression, event mechanism, and more. These features combined make for a compelling and high-performance web server solution.
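
    As a minimal, hedged sketch of the very first step on a stock Debian 9 system (the tutorial itself goes on to cover FPM/FastCGI configuration):
      $ sudo apt update
      $ sudo apt install lighttpd
      $ sudo systemctl enable --now lighttpd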







Slashdot

  • Tesla Teardown Finds Electronics 6 Years Ahead of Toyota and VW
    Elon Musk's Tesla technology is far ahead of the industry giants, a new report has concluded. From the report: This is the takeaway from Nikkei Business Publications' teardown of the Model 3, the most affordable car in the U.S. automaker's all-electric lineup, starting at about $33,000. What stands out most is Tesla's integrated central control unit, or "full self-driving computer." Also known as Hardware 3, this little piece of tech is the company's biggest weapon in the burgeoning EV market. It could end the auto industry supply chain as we know it. One stunned engineer from a major Japanese automaker examined the computer and declared, "We cannot do it." The module -- released last spring and found in all new Model 3, Model S and Model X vehicles -- includes two custom, 260-sq.-millimeter AI chips. Tesla developed the chips on its own, along with special software designed to complement the hardware. The computer powers the cars' self-driving capabilities as well as their advanced in-car "infotainment" system.   This kind of electronic platform, with a powerful computer at its core, holds the key to handling heavy data loads in tomorrow's smarter, more autonomous cars. Industry insiders expect such technology to take hold around 2025 at the earliest. That means Tesla beat its rivals by six years. The implications for the broader auto industry are huge and -- for some -- frightening. Tesla built this digital nerve center through a series of upgrades to the original Autopilot system it introduced in 2014. What was also called Hardware 1 was a driver-assistance system that allowed the car to follow others, mostly on highways, and automatically steer in a lane. Every two or three years, the company pushed the envelope further, culminating in the full self-driving computer.
          



  • Google Ends Its Free Wi-Fi Program, Station
    Google said on Monday that it is winding down Google Station, a program that rolled out free Wi-Fi in more than 400 railway stations in India and "thousands" of other public places in several additional pockets of the world. The company worked with a number of partners on the program. From a report: Caesar Sengupta, VP of Payments and Next Billion Users at Google, said the program, launched in 2015, helped millions of users surf the internet -- a first for many -- and not worry about the amount of data they consumed. But as mobile data prices got cheaper in many markets including India, Google Station was no longer as necessary, he said. The company plans to discontinue the program this year. Additionally, it had become difficult for Google to find a sustainable business model to scale the program, the company said, which in recent years expanded Station to Indonesia, Mexico, Thailand, Nigeria, Philippines, Brazil and Vietnam. The company launched the program in South Africa just three months ago.
          




  • Israeli Soldiers Tricked Into Installing Malware by Hamas Agents Posing as Women
    Members of the Hamas Palestinian militant group have posed as young teenage girls to lure Israeli soldiers into installing malware-infected apps on their phones, a spokesperson for the Israeli Defence Force (IDF) said today. From a report: Some soldiers fell for the scam, but IDF said they detected the infections, tracked down the malware, and then took down Hamas' hacking infrastructure. IDF said Hamas operatives created Facebook, Instagram, and Telegram accounts and then approached IDF soldiers. According to IDF spokesperson Brigadier General Hild Silberman, Hamas agents posed as new Israeli immigrants in order to excuse their lacking knowledge of the Hebrew language. IDF investigators said they tracked accounts for six characters used in the recent social engineering campaign. The accounts were named Sarah Orlova, Maria Jacobova, Eden Ben Ezra, Noa Danon, Yael Azoulay, and Rebecca Aboxis, respectively. Soldiers who engaged in conversations were eventually lured towards installing one of three chat apps, named Catch & See, Grixy, and Zatu, where the agents promised to share more photos.
          



  • UK To Spend $1.6 Billion on World's Best Climate Supercomputer
    The U.K. said it will spend 1.2 billion pounds ($1.6 billion) on developing the most powerful weather and climate supercomputer in the world. From a report: The program aims to improve weather and climate modeling by the government forecaster, the Met Office, Business Secretary Alok Sharma said in a statement Monday. The machine will replace the U.K.'s existing supercomputer, which is already one of the 50 most powerful in the world. "Come rain or shine, our significant investment for a new supercomputer will further speed up weather predictions, helping people be more prepared for weather disruption from planning travel journeys to deploying flood defenses," said Sharma, who will preside over the annual round of United Nations climate talks in Glasgow, Scotland, in November. With Britain hosting the year-end climate summit, Prime Minister Boris Johnson is seeking to showcase the U.K.'s leadership in both studying the climate and reducing global greenhouse gas emissions. His government plans to use data generated by the new computer to inform policy as it seeks to spearhead the fight against climate change.
          



  • Samsung's Second Foldable Smartphone, $1,380 Galaxy Z Flip, is Dead on Arrival, Too
    Evan Rodgers, reporting for Input: When Samsung released the Galaxy Z Flip, its newest folding phone, at midnight this past Friday, I was one of many who wasn't able to snag one due to low stock here in New York City. So here I am, refreshing my order page while I watch the lucky few who did manage to get one put them through their paces online. Though most YouTubers and reviewers seem to be enjoying the phone, durability is a question, and at $1,380 here in the U.S., it's a good one. At Unpacked, where Samsung announced the Z Flip, the company made a big deal about the "Ultra Thin Glass" that covers the display. One could be forgiven, then, for assuming that the display has all the scratch-resistant properties of glass, but in a durability test by JerryRigEverything on YouTube, that doesn't seem to be the case. In the video you can see Zack (the YouTuber) leaving permanent scratches on the display with his tools and even his fingernails.
          



  • Many Businesses Still Love COBOL
    TechRadar shares some surprising results from a new survey of enterprises using COBOL and mainframe technologies: According to a survey by Micro Focus, which follows up on data gathered in a 2017 survey, 70 percent favor modernization as an approach for implementing strategic change. This is opposed to replacing or retiring their key COBOL applications as they continue to provide a low-risk and effective means of transforming IT to support digital business initiatives... This is further supported by the results of the survey with an increase in the size of the average application code base, which grew from 8.4 million lines in 2017 to 9.9 million this year, showing continued investment, re-use and expansion in core business systems. "92 percent of respondents felt as though their organization's COBOL applications are strategic in comparison to 84 percent of respondents in 2017," according to the official survey results. The survey spanned 40 different countries, and involved COBOL-connected architects, developers, development managers and IT executives. "COBOL's credentials as a strong digital technology appear to be set for another decade," according to Micro Focus' senior vice president of application modernization and connectivity. "With 60 years of experience supporting mission-critical applications and business systems, COBOL continues to evolve as a flexible and resilient computer language that will remain relevant and important for businesses around the world."
          



  • Uber and Lyft Are Creating Traffic, Not Reducing It
    The Wall Street Journal remembers how five years ago, Uber's co-founder "was so confident that Uber's rides would prompt people to leave their cars at home that he told a tech conference: 'If every car in San Francisco was Ubered there would be no traffic.'"   He was wrong. Rather than the apps becoming a model of algorithm-driven efficiency, drivers in major cities cruise for fares without passengers an estimated 40% of the time. Multiple studies show that Uber and Lyft have pulled people away from buses, subways and walking, and that the apps add to the overall amount of driving in the U.S. A study published last year by San Francisco County officials and University of Kentucky researchers in the journal Science Advances found that over 60% of the slowdown of traffic speeds in San Francisco between 2010 and 2016 was due to the introduction of the ride-hail companies...   The reversal of ride-hailing from would-be traffic hero to congestion villain is the sort of unintended consequence that has become a recurring feature of Silicon Valley disruption. Companies seeking rapid growth by reinventing the way we do things are delivering solutions that sometimes create their own problems... Silicon Valley is particularly prone to focusing on positive potential effects of new technologies given a decadeslong culture of utopian ideals, said Fred Turner, a Stanford University communications professor who has written a book on the topic... Tech companies tend to have an engineering-like, narrow focus on solving specific problems, often missing the broader picture as a result. "You're not rewarded for seeing the landscape within which your device will be deployed," he said... [I]n hindsight, some of the pitfalls -- such as cars cruising empty between passengers -- seem obvious...  Riders also take car trips that wouldn't have happened before Uber and Lyft. Bruce Schaller, a transportation consultant and former New York City official who has studied the topic, said in his paper that surveys in numerous cities found roughly 60% of riders in Ubers and Lyfts would have walked, biked, taken public transit or stayed home if a ride-hail car hadn't been available.
          



  • Mark Zuckerberg Again Calls for Big Tech to be Regulated
    Mark Zuckerberg wrote an op-ed published in The Financial Times "once again calling for more regulation of Big Tech," reports MarketWatch, "even if it affects his company's bottom line." Zuckerberg has previously called for more government regulation of internet companies, and reiterated his arguments in favor of laws covering four major areas: elections, harmful content, privacy and data portability. "I don't think private companies should make so many decisions alone when they touch on fundamental democratic values," he wrote, adding: "We have to balance promoting innovation and research against protecting people's privacy and security."   Zuckerberg warned that regulation could have "unintended consequences, especially for small businesses that can't do sophisticated data analysis and marketing on their own...."   At his Munich appearance, Zuckerberg spoke about what type of regulation he envisioned: "Right now there are two frameworks that I think people have for existing industries — there's like newspapers and existing media, and then there's the telco-type model, which is 'the data just flows through you', but you're not going to hold a telco responsible if someone says something harmful on a phone line... I actually think where we should be is somewhere in between," he said, according to Reuters.   Reuters also reports that Zuckerberg said Facebook is already employing 35,000 people to review online content and implement security measures.  "Those teams and Facebook's automated technology currently suspend more than 1 million fake accounts each day, he said, adding that 'the vast majority are detected within minutes of signing up.'"
          



  • IOTA Cryptocurrency Shut Down Its Entire Network After a Wallet Breach
    The nonprofit organization behind the IOTA cryptocurrency shut down its entire network this week after someone exploited a vulnerability in their wallet app to steal funds. ZDNet reports: The attack happened this week, Wednesday, on February 12, 2020, according to a message the foundation posted on its official Twitter account. According to a status page detailing the incident, within 25 minutes of receiving reports that hackers were stealing funds from user wallets, the IOTA Foundation shut down "Coordinator," a node in the IOTA network that puts the final seal of approval on any IOTA currency transactions. The never-before-seen move was meant to prevent hackers from executing new thefts, but also had the side-effect of effectively shutting down the entire IOTA cryptocurrency... IOTA members said hackers used an exploit in "a third-party integration" of Trinity, a mobile and desktop wallet app developed by the IOTA Foundation. Based on current evidence, confirmed by the IOTA team, it is believed that hackers targeted at least 10 high-value IOTA accounts and used the Trinity exploit to steal funds. Sunday the team released "a safe version" of their Trinity Desktop "to allow users to check their balance and transactions. This version (1.4.0) removes the vulnerability announced on 12th February 2020..." Their status page advised users to contact a member of the IOTA Foundation if their balance looks incorrect. "Please be aware that there are unfortunately active imposters posing as IOTA Foundation personnel on our Discord. Therefore it is important that you directly initiate contact with the IF or mod team yourself..." "The Coordinator remains down for now as we finalise our remediation plan. You will not be able to send value transactions."
          



The Register




  • Xerox hopes wining and dining HP shareholders will convince them of takeover
    Just edible gold rather than briefcases filled with the actual stuff
    A plate of oysters, followed by genuine Japanese Wagyu sirloin? Washed down with a bottle or two of Screaming Eagle Cabernet '92? These are just some of the levers Xerox may pull to convince HP Inc shareholders to cash out when it wines and dines them this week.…








Phoronix

  • D-Bus Broker 22 Released With Option To Use Newer Kernel Features
    With BUS1 not looking like it will come to fruition anytime soon as an in-kernel IPC mechanism and the kernel module for it not being touched since last March, the same developers continue pushing ahead with Dbus-Broker as the user-space implementation focused on D-Bus compatibility while being higher performing and more reliable than D-Bus itself...




  • GCC 8.4 + GCC 9.3 Compilers Coming Soon
    While GCC 10 will be releasing in the next month or two as the annual feature update to the GNU Compiler Collection, GCC 8.4 is expected for release soon along with GCC 9.3...






  • A Quick Look At The Blender 2.82 Performance On Intel + AMD CPUs
    With Blender 2.82 having released on Friday, this weekend we've begun our benchmarking of this new Blender release as the leading open-source 3D modeling solution currently available. Here are some preliminary v2.81 vs. v2.82 figures on different higher-end Intel and AMD processors...



Polish Linux

  • Security: Why Linux Is Better Than Windows Or Mac OS
    Linux is a free and open-source operating system, first released in 1991 and developed by Linus Torvalds. Since its release it has reached a user base that is widespread worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and...


  • Essential Software That Are Not Available On Linux OS
    An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all...


  • Things You Never Knew About Your Operating System
    The advent of computers has brought about a revolution in our daily life. From computers so huge they filled a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails...


  • How To Fully Optimize Your Operating System
    Computers and systems are tricky and complicated. If you lack a thorough, or even basic, knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure...


  • The Top Problems With Major Operating Systems
    There is no system which does not give you any problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be...


  • 8 Benefits Of Linux OS
    Linux is a small and fast-growing operating system. However, we can't term it as software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Now, kernels are used for software and programs. These kernels are used by the computer and can be used with various third-party software...


  • Things Linux OS Can Do That Other OS Can’t
    What Is Linux OS?  Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason a Linux-based operating system is preferred by many is that it is easy to use and re-use. A Linux-based operating system is technically not an operating system. Operating...


  • Packagekit Interview
    PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pain it takes to maintain a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or...


  • What’s New in Ubuntu?
    What Is Ubuntu? Ubuntu is open-source software. It is useful for Linux-based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available...


  • Ext3 Reiserfs Xfs In Windows With Regards To Colinux
    The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS and XFS filesystems by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to "TAP Win32 Adapter...


OSnews

  • What made the 1960s CDC6600 supercomputer fast?
    Besides the architectural progress, the CDC6600 was impressive for its clock speed of 10 MHz. This may not sound like much, but consider that this was a physically very large machine entirely built from discrete resistors and transistors in the early '60s. Not a single integrated circuit was involved. For comparison, the PDP-8, released in 1965 and also based on discrete logic, had a clock speed of 1.5 MHz. The first IBM PC, released 20 years later, was clocked at less than half the speed of the CDC6600 despite being based on integrated circuits. The high clock rate is even more impressive when comparing it to more recent (hobbyist) attempts to design CPUs with discrete components such as the MT15, the Megaprocessor or the Monster6502. Although these are comparatively small designs based on modern components, none of them get to even a tenth of the CDC6600's clock speed. A detailed look at the speed of the CDC6600.


  • How Windows 10X runs Win32 applications
    Microsoft released its first emulator for Windows 10X today, allowing developers to get a first look at the new operating system variant for dual-screen devices. Microsoft wants to give developers a head start on optimizing apps before devices launch later this year, so this basic emulator provides an early look at Windows 10X before it's finalized. My first thoughts? Windows 10X feels like a slightly more modern version of Windows 10 that has been cleaned up for future devices. In Windows 10X, everything is new. There's none of the old Win32 code and applications lying around, or fallbacks to old Win32 dialogs. Everything is a Modern application (or whatever they call it these days), including things like the file manager; the traditional Explorer is gone. While Windows 10X does support Win32 applications, they run in a container. As detailed in this video from Microsoft (select the video titled "How Windows 10X runs UWP and Win32 apps!"), Windows 10X has three containers: Win32, MSIX, and Native. Win32 applications run inside a single Win32 container, capable of running pretty much anything "classic" you can throw at it, such as Win32, WinForms, Electron, and so on. MSIX containers are basically slightly more advanced classic applications, and these containers run inside the Win32 container as well. The Native container runs all the modern/UWP applications. The Win32 container is actually a lot more involved than you might think. As you can see in the overview diagram from the video, the container contains a kernel, drivers, the needed files, a registry, and so on. It's effectively an entire traditional Win32 Windows operating system running inside Windows 10X. Applications running inside the Win32 container are entirely isolated from the rest of the host Windows 10X operating system, and Windows 10X interacts with them through specialised, performance-optimised RDP clients, one for each Win32 application. This seems to finally be what many of us have always wanted out of a next-generation Windows release: move all the cruft and compatibility to a glorified virtual machine, so that the remainder of the operating system can be modernised and improved without having to take compatibility into account. For now, Windows 10X seems focused on dual-screen devices, but a lot of people in the know seem to think this is the actual future of Windows. Time will tell if this is actually finally really the case, but this does look promising.


  • Apple store workers should be paid for time waiting to be searched, court rules
    Apple has $209 billion in cash on hand. California law requires Apple Inc. to pay its workers for being searched before they leave retail stores, the California Supreme Court decided unanimously Thursday. A group of Apple workers filed a class-action lawsuit against the tech giant, charging they were required to submit to searches before leaving the stores but were not compensated for the time those searches required. The U.S. 9th Circuit Court of Appeals, where the case is now pending, asked the California Supreme Court to clarify whether state law requires compensation. In a decision written by Chief Justice Tani Cantil-Sakauye, the court said an industrial wage order defines hours worked as "the time during which an employee is subject to the control of an employer, and includes all the time the employee is suffered or permitted to work, whether or not required to do so." I repeat, Apple has $209 billion in cash on hand. Since it's really hard to imagine how much even just one billion dollars really is, this demonstration should give you a very good idea. One billion dollars is way, way, way more than you think it is. Apple has 209 times that in cash on hand.


  • How the CIA used Crypto AG encryption devices to spy on countries for decades
    For more than half a century, governments all over the world trusted a single company to keep the communications of their spies, soldiers and diplomats secret. The company, Crypto AG, got its first break with a contract to build code-making machines for U.S. troops during World War II. Flush with cash, it became a dominant maker of encryption devices for decades, navigating waves of technology from mechanical gears to electronic circuits and, finally, silicon chips and software. But what none of its customers ever knew was that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company's devices so they could easily break the codes that countries used to send encrypted messages. The article is behind a paywall, sadly, but I figured it's important enough to link to.


  • NEXTSPACE: a NeXTSTEP-like desktop environment for Linux
    NEXTSPACE is a desktop environment that brings a NeXTSTEP look and feel to Linux. I try to keep the user experience as close as possible to the original NeXT's OS. It is developed according to the OpenStep User Interface Guidelines. I want to create a fast, elegant, reliable, and easy to use desktop environment with maximum attention to user experience (usability) and visual maturity. In the future I would like to see it as a platform where applications will be running with a taste of NeXT's OS. Core applications such as Login, Workspace, and Preferences are the base for future application development and examples of style and application integration methods. NEXTSPACE is not just a set of applications loosely integrated to each other. It is a core OS with frameworks, mouse cursors, fonts, colors, animations, and everything I think will help users to be effective and happy. KDE, GNOME, Xfce, and later MATE and Cinnamon have sucked up so much of the Linux desktop space that there's very little room left for anything else. You're either mainly a Qt desktop, or mainly a GTK+ desktop, and anything that isn't based on either of those toolkits will either waste time recreating lots of wheels, or accept that half, or more, of your applications are Qt or GTK+-based, at which point the temptation to run one of the aforementioned desktop environments becomes quite strong. This project, while very welcome and having my full support and attention, will have a very hard time, but that's not going to deter me from being hopeful against all odds. Reading through the documentation and descriptions, it does seem the developers have the right attitude. They're not claiming to take on the other players; they just want to make something that appeals to and works for them.


  • KDE Plasma 5.18 LTS released
    A brand new version of the Plasma desktop is now available. In Plasma 5.18 you will find neat new features that make notifications clearer, settings more streamlined and the overall look more attractive. Plasma 5.18 is easier and more fun to use, while at the same time allowing you to be more productive when it is time to work. A lot of welcome changes and polish, and I'm particularly pleased with the death of the insipid cashew menu that resided in the top-right of the KDE desktop. You had to dive into the settings to remove it, but now it's been replaced by a global edit mode that's entirely invisible until you enable it, following in the footsteps of similar edit modes in Cinnamon and other user interfaces.


  • MATE 1.24 released
    After about a year of development, the MATE Desktop team have finally released MATE 1.24. A big thank you to all contributors who helped to make this happen. This release contains plenty of new features, bug-fixes, and general improvements. That's an impressive list. I prefer Cinnamon and GNOME 3 (after lots of tweaking!) over MATE, but I'm glad MATE exists as a no-nonsense, relatively conservative continuation of GNOME 2.


  • Dissecting the Windows Defender driver
    For the next couple (or maybe more) posts I'll be explaining how WdFilter works. I've always been very interested in how AVs work (nowadays I would say EDRs, though) and their development at kernel level. And since, unfortunately, I don't have access to the source code of any, my only chance is to reverse them (or to write my own). And of course, what better product to check than the one written by the company who developed the OS. For those who don't know, WdFilter is the main kernel component of Windows Defender. Roughly, this driver works as a Minifilter from the load order group "FSFilter Anti-Virus"; this means that it is attached to the File System stack (actually, quite high: a big altitude) and handles I/O operations in some Pre/Post callbacks. Not only that, this driver also implements other techniques to get information on what's going on in the system. The goal of this series of posts is to have a solid understanding of how this works under the hood. Not for the faint of heart.


  • Microsoft stuffs ads in the Windows Start menu targeting Firefox users
    Microsoft has now started to show a text ad for its new Chromium-based Edge in the all apps list. The ad, which shows up under the 'Suggested' listing in the Start menu, recommends using the new version of Microsoft Edge. Surprisingly, the ad is targeting Firefox users. If you have Firefox as your default browser, you might see the advertisement or suggestion in the Start menu. Depending on whether you're actively using Firefox or other browsers, the recommendation may or may not show up. "Still using Firefox? Microsoft Edge is here," the ad label reads, and it includes a link to download the Chromium-based browser. Don't use operating systems like Windows or iOS, which are nothing but bait-and-switch vessels for ads.


  • The story of the audacious, visionary, totally calamitous iPad of the 90s
    Of course, AT&T wasn't the company that ended up bringing us most of the tech predicted in the "You Will" ads. But it did bring that tablet device to market. It's called the EO Personal Communicator 440, and while not the first mass-manufactured tablet computer — that honor goes to the GRiDPad, a device sold by Radio Shack's corporate parent Tandy — the EO is generally considered one of the first tablets with mobile connectivity. Released by AT&T in 1993, not long after the telecom giant bought a majority stake in its maker EO, it was a tantalizing glance into the future. Any article on the EO is an article I will post (I'm a simple man), but that website's fonts and font colours give me a headache.


Linux Journal - The Original Magazine of the Linux Community

  • Linux Journal Ceases Publication: An Awkward Goodbye
    by Kyle Rankin    IMPORTANT NOTICE FROM LINUX JOURNAL, LLC: On August 7, 2019, Linux Journal shut its doors for good. All staff were laid off and the company is left with no operating funds to continue in any capacity. The website will continue to stay up for the next few weeks, hopefully longer for archival purposes if we can make it happen.  –Linux Journal, LLC
     


     
    Final Letter from the Editor: The Awkward Goodbye

    by Kyle Rankin

    Have you ever met up with a friend at a restaurant for dinner, then after dinner you both step out to the street and say a proper goodbye, only to find when you leave that you are both walking in the same direction? So now, you get to walk together awkwardly until the true point where you part, and then you have another, second goodbye, that's much more awkward.

    That's basically this post. 

    So, it was almost two years ago that I first said goodbye to Linux Journal and the Linux Journal community in my post "So Long and Thanks for All the Bash". That post was a proper goodbye. For starters, it had a catchy title with a pun. The post itself had all the elements of a proper goodbye: part retrospective, part "Thank You" to the Linux Journal team and the community, and OK, yes, it was also part rant. I recommend you read (or re-read) that post, because it captures my feelings about losing Linux Journal way better than I can muster here on our awkward second goodbye. 

    Of course, not long after I wrote that post, we found out that Linux Journal wasn't dead after all! We all actually had more time together and got to work fixing everything that had caused us to die in the first place. A lot of our analysis of what went wrong and what we intended to change was captured in my article...


  • Oops! Debugging Kernel Panics
    by Petros Koutoupis   
    A look into what causes kernel panics and some utilities to help gain more information.

    Working in a Linux environment, how often have you seen a kernel panic? When it happens, your system is left in a crippled state until you reboot it completely. And, even after you get your system back into a functional state, you're still left with the question: why? You may have no idea what happened or why it happened. Those questions can be answered though, and the following guide will help you root out the cause of some of the conditions that led to the original crash.

    Figure 1. A Typical Kernel Panic

    Let's start by looking at a set of utilities known as kexec and kdump. kexec allows you to boot into another kernel from an existing (and running) kernel, and kdump is a kexec-based crash-dumping mechanism for Linux.
     Installing the Required Packages
    First and foremost, your kernel should have the following components statically built in to its image:
      CONFIG_RELOCATABLE=y
      CONFIG_KEXEC=y
      CONFIG_CRASH_DUMP=y
      CONFIG_DEBUG_INFO=y
      CONFIG_MAGIC_SYSRQ=y
      CONFIG_PROC_VMCORE=y
    You can find this in /boot/config-`uname -r`.
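
    One quick way to verify those options (my own sketch, not part of the original article) is to grep the running kernel's configuration:
      $ grep -E 'CONFIG_(RELOCATABLE|KEXEC|CRASH_DUMP|DEBUG_INFO|MAGIC_SYSRQ|PROC_VMCORE)=' /boot/config-$(uname -r)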

    Make sure that your operating system is up to date with the latest-and-greatest package versions:
      $ sudo apt update && sudo apt upgrade  
    Install the following packages (I'm currently using Debian, but the same should and will apply to Ubuntu):
      $ sudo apt install gcc make binutils linux-headers-`uname -r` kdump-tools crash `uname -r`-dbg
    Note: Package names may vary across distributions.

    During the installation, you will be prompted with questions to enable kexec to handle reboots (answer whatever you'd like, but I answered "no"; see Figure 2).

    Figure 2. kexec Configuration Menu

    And to enable kdump to run and load at system boot, answer "yes" (Figure 3).

    Figure 3. kdump Configuration Menu
     Configuring kdump
    Open the /etc/default/kdump-tools file, and at the very top, you should see the following:
        ...


  • Loadsharers: Funding the Load-Bearing Internet Person
    by Eric S. Raymond   
    The internet has a sustainability problem. Many of its critical services depend on the dedication of unpaid volunteers, because they can't be monetized and thus don't have any revenue stream for the maintainers to live on. I'm talking about services like DNS, time synchronization, crypto libraries—software without which the net and the browser you're using couldn't function.

    These volunteer maintainers are the Load-Bearing Internet People (LBIP). Underfunding them is a problem, because underfunded critical services tend to have gaps and holes that could have been fixed if there were more full-time attention on them. As our civilization becomes increasingly dependent on this software infrastructure, that attention shortfall could lead to disastrous outages.

    I've been worrying about this problem since 2012, when I watched a hacker I know wreck his health while working on a critical infrastructure problem nobody else understood at the time. Billions of dollars in e-commerce hung on getting the particular software problem he had spotted solved, but because it masqueraded as network undercapacity, he had a lot of trouble getting even technically-savvy people to understand where the problem was. He solved it, but unable to afford medical insurance and literally living in a tent, he eventually went blind in one eye and is now prone to depressive spells.

    More recently, I damaged my ankle and discovered that although there is such a thing as minor surgery on the medical level, there is no such thing as "minor surgery" on the financial level. I was looking—still am looking—at a serious prospect of either having my life savings wiped out or having to leave all 52 of the open-source projects I'm responsible for in the lurch as I scrambled for a full-time job. Projects at risk include the likes of GIFLIB, GPSD and NTPsec.

    That refocused my mind on the LBIP problem. There aren't many Load-Bearing Internet People—probably on the close order of 1,000 worldwide—but they're a systemic vulnerability made inevitable by the existence of common software and internet services that can't be metered. And, burning them out is a serious problem. Even under the most cold-blooded assessment, civilization needs the mean service life of an LBIP to be long enough to train and acculturate a replacement.

    (If that made you wonder—yes, in fact, I am training an apprentice. Different problem for a different article.)

    Alas, traditional centralized funding models have failed the LBIPs. There are a few reasons for this:
        ...


  • Documenting Proper Git Usage
    by Zack Brown   
    Jonathan Corbet wrote a document for inclusion in the kernel tree, describing best practices for merging and rebasing git-based kernel repositories. As he put it, it represented workflows that were actually in current use, and it was a living document that hopefully would be added to and corrected over time.

    The inspiration for the document came from noticing how frequently Linus Torvalds was unhappy with how other people—typically subsystem maintainers—handled their git trees.

    It's interesting to note that before Linus wrote the git tool, branching and merging was virtually unheard of in the Open Source world. In CVS, it was a nightmare horror of leechcraft and broken magic. Other tools were not much better. One of the primary motivations behind git—aside from blazing speed—was, in fact, to make branching and merging trivial operations—and so they have become.

    One of the offshoots of branching and merging, Jonathan wrote, was rebasing—altering the patch history of a local repository. The benefits of rebasing are fantastic. They can make a repository history cleaner and clearer, which in turn can make it easier to track down the patches that introduced a given bug. So rebasing has a direct value to the development process.

    On the other hand, used poorly, rebasing can make a big mess. For example, suppose you rebase a repository that has already been merged with another, and then merge them again—insane soul death.

    So Jonathan explained some good rules of thumb. Never rebase a repository that's already been shared. Never rebase patches that come from someone else's repository. And in general, simply never rebase—unless there's a genuine reason.

    Since rebasing changes the history of patches, it relies on a new "base" version, from which the later patches diverge. Jonathan recommended choosing a base version that was generally thought to be more stable rather than less—a new version or a release candidate, for example, rather than just an arbitrary patch during regular development.
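
    As a hypothetical illustration of that advice (my sketch, not from Jonathan's document; the tag and branch names are invented), rebasing a private branch onto a well-known release-candidate base might look like this:
      $ git fetch origin --tags
      $ git rebase v5.6-rc1 my-private-work
    The second command checks out my-private-work and replays its patches on top of the v5.6-rc1 tag; per the rules above, this is only safe while the branch has not been shared.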

    Jonathan also recommended, for any rebase, treating all the rebased patches as new code, and testing them thoroughly, even if they had been tested already prior to the rebase.

    "If", he said, "rebasing is limited to private trees, commits are based on a well-known starting point, and they are well tested, the potential for trouble is low."

    Moving on to merging, Jonathan pointed out that nearly 9% of all kernel commits were merges. There were more than 1,000 merge requests in the 5.1 development cycle alone.


  • Understanding Python's asyncio
    by Reuven M. Lerner   
    How to get started using Python's asyncio.

    Earlier this year, I attended PyCon, the international Python conference. One topic, presented at numerous talks and discussed informally in the hallway, was the state of threading in Python—which is, in a nutshell, neither ideal nor as terrible as some critics would argue.

    A related topic that came up repeatedly was that of "asyncio", a relatively new approach to concurrency in Python. Not only were there formal presentations and informal discussions about asyncio, but a number of people also asked me about courses on the subject.

    I must admit, I was a bit surprised by all the interest. After all, asyncio isn't a new addition to Python; it's been around for a few years. And, it doesn't solve all of the problems associated with threads. Plus, it can be confusing for many people to get started with it.

    And yet, there's no denying that after a number of years when people ignored asyncio, it's starting to gain steam. I'm sure part of the reason is that asyncio has matured and improved over time, thanks in no small part to much dedicated work by countless developers. But, it's also because asyncio is an increasingly good and useful choice for certain types of tasks—particularly tasks that work across networks.

    So with this article, I'm kicking off a series on asyncio—what it is, how to use it, where it's appropriate, and how you can and should (and also can't and shouldn't) incorporate it into your own work.
     What Is asyncio?
    Everyone's grown used to computers being able to do more than one thing at a time—well, sort of. Although it might seem as though computers are doing more than one thing at a time, they're actually switching, very quickly, across different tasks. For example, when you ssh in to a Linux server, it might seem as though it's only executing your commands. But in actuality, you're getting a small "time slice" from the CPU, with the rest going to other tasks on the computer, such as the systems that handle networking, security and various protocols. Indeed, if you're using SSH to connect to such a server, some of those time slices are being used by sshd to handle your connection and even allow you to issue commands.

    All of this is done, on modern operating systems, via "pre-emptive multitasking". In other words, running programs aren't given a choice of when they will give up control of the CPU. Rather, they're forced to give up control and then resume a little while later. Each process running on a computer is handled this way. Each process can, in turn, use threads, sub-processes that subdivide the time slice given to their parent process.


  • RV Offsite Backup Update
    by Kyle Rankin   
    Having an offsite backup in your RV is great, and after a year of use, I've discovered some ways to make it even better.

    Last year I wrote a feature-length article on the data backup system I set up for my RV (see Kyle's "DIY RV Offsite Backup and Media Server" from the June 2018 issue of LJ). If you haven't read that article yet, I recommend checking it out first so you can get details on the system. In summary, I set up a Raspberry Pi media center PC connected to a 12V television in the RV. I connected an 8TB hard drive to that system and synchronized all of my files and media so it acted as a kind of off-site backup. Finally, I set up a script that would attempt to sync over all of those files from my NAS whenever it detected that the RV was on the local network. So here, I provide an update on how that system is working and a few tweaks I've made to it since.
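
    The article doesn't reproduce the sync script itself, but a minimal sketch of the idea might look like the following (the hostname and paths are invented for illustration):
      #!/bin/sh
      # Sync media to the RV only when its Raspberry Pi answers on the local network.
      if ping -c 1 -W 2 rv-pi.local >/dev/null 2>&1; then
          rsync -av --delete /mnt/nas/media/ rv-pi.local:/mnt/rv-backup/media/
      fi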
     What Works
    Overall, the media center has worked well. It's been great to have all of my media with me when I'm on a road trip, and my son appreciates having access to his favorite cartoons. Because the interface is identical to the media center we have at home, there's no learning curve—everything just works. Since the Raspberry Pi is powered off the TV in the RV, you just need to turn on the TV and everything fires up.

    It's also been great knowing that I have a good backup of all of my files nearby. Should anything happen to my house or my main NAS, I know that I can just get backups from the RV. Having peace of mind about your important files is valuable, and it's nice knowing in the worst case when my NAS broke, I could just disconnect my USB drive from the RV, connect it to a local system, and be back up and running.

    The WiFi booster I set up on the RV also has worked pretty well to increase the range of the Raspberry Pi (and the laptops inside the RV) when on the road. When we get to a campsite that happens to offer WiFi, I just reset the booster and set up a new access point that amplifies the campsite signal for inside the RV. On one trip, I even took it out of the RV and inside a hotel room to boost the weak signal.


  • Another Episode of "Seems Perfectly Feasible and Then Dies"--Script to Simplify the Process of Changing System Call Tables
    by Zack Brown   
    David Howells put in quite a bit of work on a script, ./scripts/syscall-manage.pl, to simplify the entire process of changing the system call tables. With this script, it was a simple matter to add, remove, rename or renumber any system call you liked. The script also would resolve git conflicts, in the event that two repositories renumbered the system calls in conflicting ways.

    Why did David need to write this patch? Why weren't system calls already fairly easy to manage? When you make a system call, you add it to a master list, and then you add it to the system call "tables", which is where the running kernel looks up which kernel function corresponds to which system call number. Kernel developers need to make sure system calls are represented in all relevant spots in the source tree. Renaming, renumbering and making other changes to system calls involves a lot of fiddly little details. David's script simply would do everything right—end of story no problemo hasta la vista.

    Arnd Bergmann remarked, "Ah, fun. You had already threatened to add that script in the past. The implementation of course looks fine, I was just hoping we could instead eliminate the need for it first." But, bowing to necessity, Arnd offered some technical suggestions for improvements to the patch.

    However, Linus Torvalds swooped in at this particular moment, saying:

    Ugh, I hate it.

    I'm sure the script is all kinds of clever and useful, but I really think the solution is not this kind of helper script, but simply that we should work at not having each architecture add new system calls individually in the first place.

    IOW, we should look at having just one unified table for new system call numbers, and aim for the per-architecture ones to be for "legacy numbering".

    Maybe that won't happen, but in the _hope_ that it happens, I really would prefer that people not work at making scripts for the current nasty situation.

    And the portcullis came crashing down.

    It's interesting that, instead of accepting this relatively obvious improvement to the existing situation, Linus would rather leave it broken and ugly, so that someone someday somewhere might be motivated to do the harder-yet-better fix. And, it's all the more interesting given how extreme the current problem is. Without actually being broken, the situation requires developers to put in a tremendous amount of care and effort into something that David's script could make trivial and easy. Even for such an obviously "good" patch, Linus gives thought to the policy and cultural implications, and the future motivations of other people working in that region of code.

    Note: if you're mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.


  • Experts Attempt to Explain DevOps--and Almost Succeed
    by Bryan Lunduke   
    What is DevOps? How does it relate to other ideas and methodologies within software development? Linux Journal Deputy Editor and longtime software developer, Bryan Lunduke isn't entirely sure, so he asks some experts to help him better understand the DevOps phenomenon.

    The word DevOps confuses me.

    I'm not even sure "confuses me" quite does justice to the pain I experience—right in the center of my brain—every time the word is uttered.

    It's not that I dislike DevOps; it's that I genuinely don't understand what in tarnation it actually is. Let me demonstrate. What follows is the definition of DevOps on Wikipedia as of a few moments ago:

    DevOps is a set of software development practices that combine software development (Dev) and information technology operations (Ops) to shorten the systems development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.

    I'm pretty sure I got three aneurysms just by copying and pasting that sentence, and I still have no clue what DevOps really is. Perhaps I should back up and give a little context on where I'm coming from.

    My professional career began in the 1990s when I got my first job as a Software Test Engineer (the people that find bugs in software, hopefully before the software ships, and tell the programmers about them). During the years that followed, my title, and responsibilities, gradually evolved as I worked my way through as many software-industry job titles as I could:
     • Automation Engineer: people that automate testing software.
     • Software Development Engineer in Test: people that make tools for the testers to use.
     • Software Development Engineer: aka "Coder", aka "Programmer".
     • Dev Lead: "Hey, you're a good programmer! You should also manage a few other programmers but still code just as much as you did before, but, don't worry, we won't give you much of a raise! It'll be great!"
     • Dev Manager: like a Dev Lead, with less programming, more managing.
     • Director of Engineering: the manager of the managers of the programmers.
     • Vice President of Technology/Engineering: aka "The big boss nerd man who gets to make decisions and gets in trouble first when deadlines are missed."
    During my various times with fancy-pants titles, I managed teams that included:
        ...


  • DNA Geometry with cadnano
    by Joey Bernard   
    This article introduces a tool you can use to work on three-dimensional DNA origami. The package is called cadnano, and it's currently being developed at the Wyss Institute. With this package, you'll be able to construct and manipulate the three-dimensional representations of DNA structures, as well as generate publication-quality graphics of your work.

    Because this software is research-based, you won't likely find it in the package repository for your favourite distribution, in which case you'll need to install it from the GitHub repository.

    Since cadnano is a Python program, written to use the Qt framework, you'll need to install some packages first. For example, in Debian-based distributions, you'll want to run the following commands:
      sudo apt-get install python3 python3-pip  
    I found that installation was a bit tricky, so I created a virtual Python environment to manage module installations.

    Once you're in your activated virtualenv, install the required Python modules with the command:
      pip3 install pythreejs termcolor pytz pandas pyqt5 sip  
    After those dependencies are installed, grab the source code with the command:
      git clone https://github.com/cadnano/cadnano2.5.git  
    This will grab the Qt5 version. The Qt4 version is in the repository https://github.com/cadnano/cadnano2.git.

    Changing directory into the source directory, you can build and install cadnano with:
      python setup.py install  
    Now your cadnano should be available within the virtualenv.

    You can start cadnano simply by executing the cadnano command from a terminal window. You'll see an essentially blank workspace, made up of several empty view panes and an empty inspector pane on the far right-hand side.

    Figure 1. When you first start cadnano, you get a completely blank work space.

    In order to walk through a few of the functions available in cadnano, let's create a six-strand nanotube. The first step is to create a background that you can use to build upon. At the top of the main window, you'll find three buttons in the toolbar that will let you create a "Freeform", "Honeycomb" or "Square" framework. For this example, click the honeycomb button.

    Figure 2. Start your construction with one of the available geometric frameworks.


  • Running GNOME in a Container
    by Adam Verslype   
    Containerizing the GUI separates your work and play.

    Virtualization has always been a rich man's game, and more frugal enthusiasts—unable to afford fancy server-class components—often struggle to keep up. Linux provides free high-quality hypervisors, but when you start to throw real workloads at the host, its resources become saturated quickly. No amount of spare RAM shoved into an old Dell desktop is going to remedy this situation. If a properly decked-out host is out of your reach, you might want to consider containers instead.

    Instead of virtualizing an entire computer, containers allow parts of the Linux kernel to be portioned into several pieces. This occurs without the overhead of emulating hardware or running several identical kernels. A full GUI environment, such as GNOME Shell, can be launched inside a container, with a little gumption.

    You can accomplish this through namespaces, a feature built in to the Linux kernel. An in-depth look at this feature is beyond the scope of this article, but a brief example sheds light on how these features can create containers. Each kind of namespace segments a different part of the kernel. The PID namespace, for example, prevents processes inside the namespace from seeing other processes running in the kernel. As a result, those processes believe that they are the only ones running on the computer. Each namespace does the same thing for other areas of the kernel as well. The mount namespace isolates the filesystem of the processes inside of it. The network namespace provides a unique network stack to processes running inside of them. The IPC, user, UTS and cgroup namespaces do the same for those areas of the kernel as well. When the seven namespaces are combined, the result is a container: an environment isolated enough to believe it is a freestanding Linux system.
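
    As a brief, hedged demonstration (not from the article), the unshare utility from util-linux drives these same kernel namespaces from the command line; the following gives a shell its own PID and mount namespaces, so a process listing inside it sees almost nothing:
      $ sudo unshare --pid --fork --mount-proc /bin/bash
      # ps aux    # inside the namespace, only this shell and ps itself are visible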

    Container frameworks will abstract the minutia of configuring namespaces away from the user, but each framework has a different emphasis. Docker is the most popular and is designed to run multiple copies of identical containers at scale. LXC/LXD is meant to create containers easily that mimic particular Linux distributions. In fact, earlier versions of LXC included a collection of scripts that created the filesystems of popular distributions. A third option is libvirt's lxc driver. Contrary to how it may sound, libvirt-lxc does not use LXC/LXD at all. Instead, the libvirt-lxc driver manipulates kernel namespaces directly. libvirt-lxc integrates into other tools within the libvirt suite as well, so the configuration of libvirt-lxc containers resembles those of virtual machines running in other libvirt drivers instead of a native LXC/LXD container. It is easy to learn as a result, even if the branding is confusing.

