You've mastered a Linux tool, but that hard-earned knowledge came at the cost of frequent usage, reading the manual pages, and using a search engine to avoid the bad examples out there.
So what incentive do you have to learn new utilities and replace the tools you already know? Here are a few reasons:
- You want to be more productive and do more in less time, and a different tool can provide that.
- A different tool might better match the way you work. It is nice to use a tool that behaves just the way you expect.
- A new tool challenges how you do things. This is important because as you improve, so do the tools and technology around you. It is good when a utility forces you to think outside the box.
This article offers a few interesting new tools to consider using. When evaluating a new tool, consider the community around it, whether it's easy to use, and if it has the functionality you need.
[ Boost your command line skills. Download A sysadmin's guide to Bash scripting. ]
One last thing: The topic of "replacement tools" is always controversial, so be open-minded and try them. There is nothing wrong with the original tools mentioned in the article; these are just options that might help you work better.
Also, for obvious reasons, this article doesn't cover every available tool. Consider this list as a starting point.
Before starting
Here are some things to keep in mind as you try out these new tools:
- You should be familiar with Linux's command-line interface (CLI). If you're not, read this article to get started.
- Some of these utilities may not be on your system and will require elevated privileges to install with tools like RPM.
- It might be better to install some tools under your user, rather than system-wide, with installers like pip.
OK, it's time to try some new tools.
htop and glances: Better than top
The top utility is one of the best general-purpose resource monitoring tools on Linux. It has nice features like saving stats into a file and sorting columns by criteria.
[ Learn what the first five lines of Linux's top command tell you. ]
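For instance, here is a minimal sketch of those two features, assuming the procps-ng version of top (verify the flags with top --help on your system):
$ top -b -n 1 -o %MEM > top-snapshot.txt
This runs top in batch mode for a single iteration, sorts the process list by memory usage, and saves the output to a file.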
In the same spirit, the htop command displays more information (like how hard each CPU core is working). Below is a sample session showing how to filter, sort, and search processes using htop:
What makes this tool stand apart? The user interface gives you access to powerful operations with ease.
To install htop on RPM-based distributions:
$ sudo dnf install -y htop
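Once installed, you can also steer htop from the command line. A hedged sketch (option names from recent htop releases; consult htop --help if yours differs):
$ htop -u $USER
$ htop --sort-key PERCENT_MEM
The first call shows only your own processes, and the second starts with the process list sorted by memory usage.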
Glances is another tool that gives you lots of information about your system, much like htop:
Why is there another tool like htop? Well, glances has several features that make it interesting:
- It can run in server mode, allowing you to connect to it using a web browser or with a REST client.
- It can export results in several formats, including Prometheus.
- You can write plugins to extend it in Python.
To install it, you can use a virtual environment or do a user installation:
$ pip install --user glances
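As a quick, hedged illustration of the server and export features mentioned above (option names as I recall them from the glances documentation; some modes require extra Python packages):
$ glances -w
$ glances --export prometheus
The first command starts the built-in web server and REST API, and the second exposes the collected stats in Prometheus format.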
smem: When you're focused on memory
Utilities like top, htop, and glances give you a full array of details about your server, but what if you are concerned only about memory utilization? In that case, smem is a great option:
It is possible to filter by user, show totals, group usage by user, and even create plots with matplotlib.
To install smem on Fedora Linux:
$ sudo dnf install -y smem
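Here are a few hedged examples of the filtering and reporting options mentioned above (based on my reading of the smem man page; double-check them with smem --help):
$ smem -t -k
$ smem -u -t -k
$ smem -P firefox
The first shows per-process usage with totals and human-readable units, the second groups usage by user, and the third filters processes whose command line matches firefox.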
ripgrep: Faster than grep
The grep utility is probably one of the most well-known filtering tools; if you've ever needed to search files for a pattern, chances are you used grep.
[ Happy with the usual option? Download the Linux grep command cheat sheet. ]
A nice replacement for grep is ripgrep. It is fast and has modern features that grep doesn't have:
- It can export the output to JSON format. This is a great feature for data capture or interaction with other scripts.
- It provides automatic recursive directory searches, skipping hidden files and common ignorable backup files.
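As a small, hedged example of the JSON output (the --json flag is part of ripgrep; the search pattern, directory, and jq filter here are only placeholders):
$ rg --json 'death' data/ | jq -r 'select(.type == "match") | .data.path.text'
Each match is emitted as a JSON object, so another script, or jq as shown here, can pick out fields such as the file path or line number.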
Start by comparing a regular recursive grep that only looks inside files with the extension *.ipynb, using a case-insensitive search:
$ time grep --dereference-recursive --ignore-case --count --exclude '.ipynb_*' --include '*.ipynb' death COVIDDATA/
COVIDDATA/.ipynb_checkpoints/Curve-checkpoint.ipynb:0
COVIDDATA/.ipynb_checkpoints/EUCDC-checkpoint.ipynb:37
COVIDDATA/.ipynb_checkpoints/Gammamulti-checkpoint.ipynb:11
COVIDDATA/.ipynb_checkpoints/Gammapivot-checkpoint.ipynb:11
# ... Omitted output
COVIDDATA/tweakers/zzcorwav.ipynb:10
real 0m0.613s
user 0m0.505s
sys 0m0.105s
Note that it shows the Jupyter .ipynb_checkpoints/* checkpoint files. Next, see ripgrep (rg) in action:
$ time rg --ignore-case --count --type 'jupyter' death COVIDDATA/
COVIDDATA/tweakers/zzcorwav.ipynb:10
COVIDDATA/tweakers/zzbenford.ipynb:2
COVIDDATA/tweakers/EUCDC.ipynb:19
COVIDDATA/Modelpivot.ipynb:9
COVIDDATA/experiment/zzbenford.ipynb:2
COVIDDATA/experiment/zzcorwavgd.ipynb:10
# ... Omitted output
COVIDDATA/experiment/zzcasemap.ipynb:13
real 0m0.068s
user 0m0.087s
sys 0m0.071s
The command line is shorter, and rg skips the Jupyter checkpoint files without any extra help. Check below to see rg working with a few flags:
Install ripgrep on Fedora Linux using DNF:
$ sudo dnf install ripgrep
drill (ldns): More informative than dig or nslookup
If you need to find the internet protocol (IP) address behind a given DNS name, you probably use dig or nslookup. These commands have been around so long that they have gone in and out of deprecation over the years. A more modern tool that offers the same functionality is drill (from the ldns project). Say you want to see the MX (mail exchanger) records for the nasa.org domain:
$ dig @8.8.8.8 nasa.org MX +noall +answer +nocmd
nasa.org. 3600 IN MX 5 mail.h-email.net.
The drill command gives you the same information, plus some more:
$ drill @8.8.8.8 mx nasa.org
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 50948
;; flags: qr rd ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;; nasa.org. IN MX
;; ANSWER SECTION:
nasa.org. 3600 IN MX 5 mail.h-email.net.
;; AUTHORITY SECTION:
;; ADDITIONAL SECTION:
;; Query time: 126 msec
;; SERVER: 8.8.8.8
;; WHEN: Sun Jul 10 14:31:48 2022
;; MSG SIZE rcvd: 58
What does this mean to you?
- drill can be used as a drop-in replacement for dig.
- It is good to have a separate implementation of DNS tools to troubleshoot and diagnose bugs.
Distribution maintainers and application developers have more compelling arguments to use ldns:
- Some distributions, like Arch Linux, call for dns-tools removal and use ldns instead because of dependency management and bugs.
- ldns has nice bindings for Python 3.
Here is a small program that can query the MX records for a given list of domains:
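A minimal sketch of such a program, assuming the python3-ldns bindings (the domain list and output format are placeholders; see the pyldns documentation for the exact API):
#!/usr/bin/env python3
# Sketch: print the MX records for a list of domains using python3-ldns.
# The domain list below is only an example.
import ldns

DOMAINS = ["nasa.org", "redhat.com"]

# Build a resolver from the local stub resolver configuration
resolver = ldns.ldns_resolver.new_frm_file("/etc/resolv.conf")

for domain in DOMAINS:
    pkt = resolver.query(domain, ldns.LDNS_RR_TYPE_MX,
                         ldns.LDNS_RR_CLASS_IN, ldns.LDNS_RD)
    if pkt and pkt.answer():
        for rr in pkt.answer().rrs():
            print(rr)
    else:
        print(f"No MX answer for {domain}")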
Install ldns on Fedora Linux like this:
$ sudo dnf install -y python3-ldns ldns-utils ldns
Rich-CLI: One CLI to render all formats
Let's face it: It is quite annoying to use different tools to render different data types nicely on the command-line interface (CLI).
For example, here's a JSON file (no special filtering):
$ /bin/jq '.' ./.thunderbird/pximovka.default-default/sessionCheckpoints.json
{
"profile-after-change": true,
"final-ui-startup": true,
"quit-application-granted": true,
"quit-application": true,
"profile-change-net-teardown": true,
"profile-change-teardown": true,
"profile-before-change": true
}
An XML file:
$ /bin/xmllint ./opencsv-source/checkstyle-suppressions.xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC "-//Puppy Crawl//DTD Suppressions 1.0//EN" "http://www.puppycrawl.com/dtds/suppressions_1_0.dtd">
<suppressions>
<suppress files="." checks="LineLength"/>
<suppress files="." checks="whitespace"/>
<suppress files="." checks="HiddenField"/>
<suppress files="." checks="FinalParameters"/>
<suppress files="." checks="DesignForExtension"/>
<suppress files="." checks="JavadocVariable"/>
<suppress files="." checks="AvoidInlineConditionals"/>
<suppress files="." checks="AvoidStarImport"/>
<suppress files="." checks="NewlineAtEndOfFile"/>
<suppress files="." checks="RegexpSingleline"/>
<suppress files="." checks="VisibilityModifierCheck"/>
<suppress files="." checks="MultipleVariableDeclarations"/>
</suppressions>
A markup file? A CSV file? A Python script? You see where this is going: a different application for each type. Some of them offer syntax colorization, and others do not. If you want pagination, you most likely need to pipe the output to less, but then you can kiss colorization goodbye.
[ Free download: Advanced Linux commands cheat sheet. ]
Enter Rich-CLI (an application that's part of the Textualize project) to the rescue. Below, I revisit the two files I opened before, this time using rich. First, here is the JSON file:
$ rich ./.thunderbird/pximovka.default-default/sessionCheckpoints.json
{
"profile-after-change": true,
"final-ui-startup": true,
"quit-application-granted": true,
"quit-application": true,
"profile-change-net-teardown": true,
"profile-change-teardown": true,
"profile-before-change": true
}
Next, here is the XML file I demonstrated earlier:
$ rich ./opencsv-source/checkstyle-suppressions.xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC "-//Puppy Crawl//DTD Suppressions 1.0//EN"
"http://www.puppycrawl.com/dtds/suppressions_1_0.dtd">
<suppressions>
<suppress files="." checks="LineLength"/>
<suppress files="." checks="whitespace"/>
<suppress files="." checks="HiddenField"/>
<suppress files="." checks="FinalParameters"/>
<suppress files="." checks="DesignForExtension"/>
<suppress files="." checks="JavadocVariable"/>
<suppress files="." checks="AvoidInlineConditionals"/>
<suppress files="." checks="AvoidStarImport"/>
<suppress files="." checks="NewlineAtEndOfFile"/>
<suppress files="." checks="RegexpSingleline"/>
<suppress files="." checks="VisibilityModifierCheck"/>
<suppress files="." checks="MultipleVariableDeclarations"/>
</suppressions>
See the demo below for rendering multiple file types with a single command:
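A few hedged examples of what that looks like (the file names are placeholders, and --pager is how I recall rich-cli handling pagination; run rich --help to confirm):
$ rich app.py
$ rich data.csv
$ rich README.md --pager
The first two render syntax-highlighted Python and a formatted CSV table, while the third pages a Markdown file without losing colorization.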
Installation is trivial with pip:
$ pip install --user rich-cli
Wrap up
You don't need to settle for the default tools that come with the Linux operating system. Many Linux tools offer new functionality that will make you more productive. And if more people use them, they will become the default tools.
Also, when evaluating any tool, look at its community and how often it is updated for bugs and new features. An active community is as important as the tool itself.
About the author
Proud dad and husband, software developer and sysadmin. Recreational runner and geek.