Shell Redirect, cat as editor, other tricks & bug examples
- Rava
- Contributor
- Posts: 4650
- Joined: 11 Jan 2011, 02:46
- Distribution: XFCE 5.0 x86_64 + 4.0 i586
- Location: Forests of Germany
Shell Redirect, cat as editor, other tricks & bug examples
Standard in, out, and error
There are three standard sources of input and output for a program. Standard input usually comes from the keyboard if it’s an interactive program, or from another program if it’s processing the other program’s output. The program usually prints to standard output, and sometimes prints to standard error. These three file descriptors (you can think of them as “data pipes”) are often called STDIN, STDOUT, and STDERR.
Sometimes they’re not named, they’re numbered! The built-in numberings for them are 0, 1, and 2, in that order. By default, if you don’t name or number one explicitly, you’re talking about STDOUT.
So 2>&1 redirects stderr to whatever stdout currently points at, while 2>1 redirects stderr into a file called 1 (which is probably not what you want).
Also, the redirects happen in order. So if you say 2>&1 >/dev/null, it first redirects stderr to point at what stdout currently points at (which is probably a no-op), then redirects stdout to /dev/null. You probably want >/dev/null 2>&1
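A quick way to see the ordering rule in action, using a throwaway helper function (emit is just a stand-in for any program that writes one line to each stream):

```shell
# 'emit' is a stand-in command that writes to both streams:
emit() { echo "to stdout"; echo "to stderr" >&2; }

# Wrong order: stderr is pointed at stdout's OLD target (the terminal)
# before stdout itself is sent to /dev/null -- the error still shows:
emit 2>&1 >/dev/null      # prints: to stderr

# Right order: stdout goes to /dev/null first, then stderr follows it:
emit >/dev/null 2>&1      # prints nothing
```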
HTH!
Sources:
http://www.xaprb.com/blog/2006/06/06/wh ... l-21-mean/
http://stackoverflow.com/questions/9390 ... 1-dev-null
Cheers!
Yours Rava
-
- Full of knowledge
- Posts: 2564
- Joined: 25 Jun 2014, 15:21
- Distribution: 3.2.2 Cinnamon & KDE5
- Location: London
Shell Redirect
@Rava
Thanks for the references. Still trying to get my head round that obtuse terminology.
Update
So it seems one has to treat each '>' as a kind of directive, executed in sequential order. And to be quite pedantic: 8)
Code:
guest@porteus:~$ ls /tmp 1>/dev/null 1>t
guest@porteus:~$ cat t
dump.log
kde-guest/
ksocket-guest/
usm/
guest@porteus:~$ ls /tmp 1>t 1>/dev/null
guest@porteus:~$ cat t
guest@porteus:~$ ls -l t
-rw-r--r-- 1 guest guest 0 Apr 17 20:15 t
Linux porteus 4.4.0-porteus #3 SMP PREEMPT Sat Jan 23 07:01:55 UTC 2016 i686 AMD Sempron(tm) 140 Processor AuthenticAMD GNU/Linux
NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2) MemTotal: 901760 kB MemFree: 66752 kB
- Rava
- Contributor
- Posts: 4650
- Joined: 11 Jan 2011, 02:46
- Distribution: XFCE 5.0 x86_64 + 4.0 i586
- Location: Forests of Germany
Shell Redirect
You first have to realize that redirecting either STDERR or STDOUT to the other, and then "both" into the data nirvana /dev/null, is not the same as putting something into a file.
A redirect into a file is always >filename, be it a "real" filename or one from /dev.
The /dev ones can also be used to read from, like reading zeros from /dev/zero to fill a file, or reading random data from /dev/random or /dev/urandom...
(/dev/random and /dev/urandom do differ, look it up!)
But when you want to discard both STDERR and STDOUT you have to use the >&; without the & it is just a redirect into a file.
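As a small sketch of reading from those /dev files (dd and the temp file names here are just for illustration):

```shell
# Reading from the special /dev files with dd:
dd if=/dev/zero of=/tmp/zeros$$ bs=1024 count=1 2>/dev/null   # 1 KiB of zero bytes
dd if=/dev/urandom of=/tmp/rand$$ bs=16 count=1 2>/dev/null   # 16 random bytes

ls -l /tmp/zeros$$ /tmp/rand$$    # sizes: 1024 and 16
rm /tmp/zeros$$ /tmp/rand$$       # clean up the demo files
```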
Bogomips wrote: So it seems one has to treat each '>' as a kind of directive, executed in sequential order.
That's right so far.
But it is more complex than that. Like I wrote:
2>&1 redirects stderr to whatever stdout currently points at, while 2>1 redirects stderr into a file called 1 (which you probably don't want)
So, you have to know what you want. echo "test" >file either deletes the contents of an already existing file named "file" and puts in the new line "test", or, when the file does not exist, creates it and then writes "test" into it.
In contrast to that, echo "test" >>file appends the text to the end of the file, which is often needed when you write, say, a log file. You surely don't want a single new entry to delete all the existing data in the one and only log file of your script!
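A minimal illustration of the difference (the file name is arbitrary):

```shell
log=/tmp/demo.$$
echo "first entry"  >"$log"    # '>' truncates (or creates) the file
echo "second entry" >>"$log"   # '>>' appends: the first line survives
echo "oops"         >"$log"    # '>' again: both earlier lines are gone
cat "$log"                     # prints only: oops
rm "$log"                      # clean up the demo file
```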
Try looking for bash scripting tutorials; there are quite a few online, and just try out all the examples. There are even a few free O'Reilly books out there, with the examples available online as downloadable script archives.
The best way to learn is to play around with the stuff...
ls /tmp 1>/dev/null 1>t
This is just writing the STDOUT of your program into two files: into the real file "t", and also into the "garbage" device, which would make no sense in the above context.
When you need to write some output into several log files, but not all output into the same log files, you could have stuff like so:
ls /tmp >/tmp/programme$$/log1 >>/tmp/programme$$/log2
ls /whatever >/tmp/programme$$/log1
ls /whatever2 >/tmp/programme$$/log2
[...]
(The "$$" part is quite handy, since it is replaced with the process ID of your script and so makes a unique file or folder name. That is important when a script uses a file to store some data to read back later: it would result in bugs when the script is started more than once and all these instances try to read and write the same file!)
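One caveat about the first line above: the shell does not duplicate output across two plain redirections on the same command; only the last target actually receives the data, while the earlier ones are merely opened and truncated. If the goal really is to send one output into two logs at once, tee is the usual tool; a sketch, reusing the $$ naming convention from above:

```shell
dir=/tmp/programme$$           # same $$ naming convention as above
mkdir -p "$dir"

# tee writes its input to the named file AND to its own stdout,
# which we then append into the second log:
ls /tmp | tee "$dir/log1" >>"$dir/log2"
# log1 now holds a fresh copy of the listing; log2 got it appended.

rm -r "$dir"                   # clean up the demo directory
```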
Remember, you do not need the 1 in 1>file-name when you have not redirected STDERR or STDOUT elsewhere, since a bare > always addresses STDOUT.
I recommend you learn that: the 1 can be added, but often it is not, and knowing this helps you read and understand other people's scripts.
HTH

Cheers!
Yours Rava
-
- Full of knowledge
- Posts: 2564
- Joined: 25 Jun 2014, 15:21
- Distribution: 3.2.2 Cinnamon & KDE5
- Location: London
Shell Redirect
Rava wrote: The best way to learn is to play around with the stuff... This is just writing the STDOUT of your program into two files, into the real file "t", and also into the "garbage" device, which would make no sense in the above context. (ls /tmp 1>/dev/null 1>t)
That was an experiment.

Now, further experiment in support of contention:
Code:
guest@porteus:~$ ls -sh g h f
32K f 12K g 20K h
guest@porteus:~$ cat g 1>s
guest@porteus:~$ cat h 1>t
guest@porteus:~$ ls -sh s t
12K s 20K t
guest@porteus:~$ cat f 1>s 1>t
guest@porteus:~$ ls -sh s t
0 s 32K t
- Rava
- Contributor
- Posts: 4650
- Joined: 11 Jan 2011, 02:46
- Distribution: XFCE 5.0 x86_64 + 4.0 i586
- Location: Forests of Germany
Shell redirect, cat as editor, other tricks & bug examples
Bogomips wrote: It seems clear that we are not writing to two files, and that with the first redirect the file s is opened for writing with truncate, then the second redirect is processed and the file t is written to, which means that STDOUT remains pointed at t and no longer has anything to do with s. Is this what was meant in the above quote, that two files were being written to?
What you do there is not redirecting INPUT or OUTPUT streams, but copying / writing data into files.
There is a huge difference between redirecting one stream to another, e.g. pointing STDERR to wherever STDOUT points at the moment (or redirecting STDIN, via "<"), and writing data into files.
Sadly, writing data into files is also called a "redirect", which does not make it any easier and can be a source of confusion.
The issue is, all of the above is just what you told bash to do.
With cat FILE > ANOTHER-FILE you write all data of FILE into ANOTHER-FILE.
With cat FILE >>ANOTHER-FILE you append the data from FILE to the end of ANOTHER-FILE.
Here some examples:
Code:
rava@porteus:/tmp$ type l
l is aliased to `ls -oa --color=auto --time-style=long-iso'
rava@porteus:/tmp$ echo jgdsjgsdjghds >1
rava@porteus:/tmp$ echo kjhdskjlhkjhshkdgsuzsdhsdflöjsksgjsdg>2
rava@porteus:/tmp$ l 1 2
-rw-r--r-- 1 rava 14 2015-04-30 03:54 1
-rw-r--r-- 1 rava 39 2015-04-30 03:54 2
rava@porteus:/tmp$ cat 1>3
^Z
[1]+ Stopped
rava@porteus:/tmp$ cat 1 >3
rava@porteus:/tmp$ l 1 2 3
-rw-r--r-- 1 rava 14 2015-04-30 03:54 1
-rw-r--r-- 1 rava 39 2015-04-30 03:54 2
-rw-r--r-- 1 rava 14 2015-04-30 03:54 3
rava@porteus:/tmp$ echo jgsdjgd>>3
rava@porteus:/tmp$ l 1 2 3
-rw-r--r-- 1 rava 14 2015-04-30 03:54 1
-rw-r--r-- 1 rava 39 2015-04-30 03:54 2
-rw-r--r-- 1 rava 22 2015-04-30 03:55 3
rava@porteus:/tmp$ cat 1 2 3 >4
rava@porteus:/tmp$ l 1 2 3 4
-rw-r--r-- 1 rava 14 2015-04-30 03:54 1
-rw-r--r-- 1 rava 39 2015-04-30 03:54 2
-rw-r--r-- 1 rava 22 2015-04-30 03:55 3
-rw-r--r-- 1 rava 75 2015-04-30 03:55 4


Joking aside, back to serious business:
What happened with the line "cat 1>3"?
Why did it wait, and why did I have to send the cat to sleep [and later kill it]?
Because that was not cat FILE (a file named "1"), but a redirect of STDOUT (also known as "1", see the post above) of the program cat into a file named "3". And since I gave cat no data (no STDIN) to put into the file "3", cat waited for some input. See
Code:
# man cat
Also, the above is an example of how a single piece of missing whitespace (let's assume you or a colleague of yours made it by mistake) makes a script buggy. And only that one line. If the script is full of case, do, done, elif, else and such, the offending line might not be spotted at once. It could be that the line only gets executed in a certain OS environment, or with another sound card than yours, or on a different architecture; then the bug would affect all users with that architecture or sound card, but never you (or your assumed colleague).
Coding is great when there are no bugs, and can get nasty when there are some.

But that is the same with any coding, be it shell or higher languages like C or Perl or Python. One single wrong character can create a disaster: like not erasing some temp files and folders, but erasing all files from all mounted devices... in the end killing the whole running OS. (When the buggy script was started by root.)
____________________________________________________________
With the cat trick (not to be confused with a hat trick </lame pun is:lame>), you can create your first easy and self-coded text editor like so:
Code:
rava@porteus:/tmp$ cat 1>5
Line One
Line Two
A one two three four...
Busted flat in Baton Rouge, hmmm, heading for the trains
Feelin' near as faded as my jeans
EOF
^C
rava@porteus:/tmp$ l 5
-rw-r--r-- 1 rava 137 2015-04-30 04:00 5
rava@porteus:/tmp$ cat 5
Line One
Line Two
A one two three four...
Busted flat in Baton Rouge, hmmm, heading for the trains
Feelin' near as faded as my jeans
EOF
But when another Linux has no mc installed, I use vi. That means knowing what "<esc>:wq" is about. Or "<esc>:q!" at times.

Info: Current Porteus does not use vim but the smaller implementation from busybox. Also, there is no "vim" to be found in current Porteus, just a symlink from "vi" to busybox.
That is how busybox works: it looks at the name it was called by and then acts like that program would, enabling one binary to act as many: vi, ls, cat, grep... And since symlinks use almost no disk space at all, this approach is ideal for all super small systems. [*]
Also, when you start coding in earnest, start using geany! Syntax highlighting makes coding fun! And easier to debug!
But back to the example of the failed boot of an OS that is new to you. You still want to write down some findings and your thoughts on why the boot failed, and what you want to try next. With the above cat trick you can still create a file and write text into it.
Just be aware that when you want to add even more lines, do not use cat 1>info, because that deletes everything you already put into the file "info"; you have to use cat 1>>info
Code:
rava@porteus:/tmp$ cat 1>info
line 1
line 2
^C
rava@porteus:/tmp$ l info
-rw-r--r-- 1 rava 14 2015-04-30 04:54 info
rava@porteus:/tmp$ cat info
line 1
line 2
rava@porteus:/tmp$ cat 1>info
line 3
line 4
^C
rava@porteus:/tmp$ cat info
line 3
line 4
rava@porteus:/tmp$ cat 1>info
line 1
line 2
^C
rava@porteus:/tmp$ cat 1>>info
line 3
line 4
^C
rava@porteus:/tmp$ cat info
line 1
line 2
line 3
line 4
Code:
#!/bin/sh
# [...]
echo -en ${bld}${blu}
cat <<EOF
[...]
mount -o loop,offset=307200 /path/cd.nrg /mnt/mountpoint <- does not work for Nero cddb images, only for Nero's own kind of data ISO images!
EOF
What's with the bld and blu stuff in the example?
${bld} ${blu} and some others are escape codes for colour, bold and such. Just escape sequences I named accordingly for coding convenience.
I created a file that defines these colour "codes" and include it into scripts with this line:
test ! "$ECHO_COLORS"x = "x" && test -f $ECHO_COLORS && . $ECHO_COLORS
Of course, $ECHO_COLORS has to be known to the shell, or else the above would fail. (${bld} and such would then all be empty and just do nothing; no harm done, just no colours for my script...)
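As an illustration, here is a minimal sketch of such a setup. The variable names bld and blu come from the post; the ANSI escape values and the rst name are assumptions for the example:

```shell
# Sketch of what an echo_colors file might contain (standard ANSI codes;
# the 'rst' name is our assumption, not from the post):
#   bld=$'\e[1m'     # bold
#   blu=$'\e[34m'    # blue foreground
#   rst=$'\e[0m'     # reset attributes

# The guarded include from the post: source the file only if the
# variable is set AND the file exists -- otherwise ${bld} and friends
# stay empty and the script simply prints without colour:
test ! "$ECHO_COLORS"x = "x" && test -f "$ECHO_COLORS" && . "$ECHO_COLORS"

echo -e "${bld}${blu}important message${rst}"
```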
I once needed to partition a laptop hard disk, and since that laptop was not able to start an OS from USB, I had to run the GParted Live CD. But that gave no write permission into /usr/local/bin (it was part of its live CD ISO, used for its own programs and tweaks). Surely not the way it should be: /usr/local is, like the name says, Unix System Resources / local, meant for the local machine and the current user. Making sub-folders of /usr/local read-only breaks the meaning of what /usr/local should be for. *shrug*
But I still wanted my scripts to have colour, so I had to change my hard-coded insert of /usr/local/bin/echo_colors into something more flexible.
Prior to having the need for the GParted ISO, I mainly used Slax. (The Slax remix was not born at that time, that came later, and we all know into what neat OS the Slax remix then evolved, yes?)

How did I change my old hard-coded approach? I changed it into one that includes echo_colors wherever it might be, for any Linux to be used, and now I only have to change the path to /usr/local/bin/echo_colors if such a Linux does not support it...
This line makes echo_colors known to my scripts:
Code:
# grep ECHO_COLORS /etc/profile*
profile.local:ECHO_COLORS=/usr/local/bin/echo.colors
Why profile.local? To keep my own code separated from the code of standard Porteus. Also, that way I can port my own code more easily to any other Linux.
Sure, I need to add a line to /etc/profile to make that work, like so:
Code:
tail -n 4 /etc/profile
## ## Rava addition: yes, we want to use a /etc/profile.local ^-^
if [ -f /etc/profile.local ];then
. /etc/profile.local
fi
_________________________________________________________
But honestly, re-reading my post above, it really looks like me being some kind of nutty professor </yet another lame pun, about a famous movie this time> teacher, sorry for that. I could have structured it all much better, but I am too tired and too lazy to sub-edit the whole post into a proper howto article.

___________________
[*] A symlink itself occupies almost no space: its size is just the length of the path of the target file the symlink points to.
You can even store info (text) without using any disk space at all. How so? Just use an empty file and put the info you wanted into the file's name.
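Both footnote tricks can be checked directly in a shell (the link name and the note text are made up for the demo):

```shell
cd /tmp
# The symlink's own size equals the length of the target path
# (/usr/local/bin/echo_colors is 26 characters), no matter how big
# the target is -- the target does not even have to exist:
ln -s /usr/local/bin/echo_colors mylink
ls -l mylink                               # size column shows 26

# The "info in the file name" trick: a 0-byte file whose name is the note
touch "boot failed - retry without acpi"
ls -l "boot failed - retry without acpi"   # size 0, note preserved

rm mylink "boot failed - retry without acpi"
```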
Cheers!
Yours Rava
-
- Full of knowledge
- Posts: 2564
- Joined: 25 Jun 2014, 15:21
- Distribution: 3.2.2 Cinnamon & KDE5
- Location: London
Re: Shell Redirect, cat as editor, other tricks & bug examples
Another cat trick has occurred to me. Appending to file in script:
Code:
Instead of:
guest@porteus:~$ [[ -f t ]] && rm t
guest@porteus:~$ ls -l t
/bin/ls: cannot access t: No such file or directory
Use:
guest@porteus:~$ cat /dev/null>t
guest@porteus:~$ ls -l t
-rw-r--r-- 1 guest guest 0 May 9 14:45 t
guest@porteus:~$ ls /tmp>>t
guest@porteus:~$ ls -l t
-rw-r--r-- 1 guest guest 40 May 9 14:46 t
- Rava
- Contributor
- Posts: 4650
- Joined: 11 Jan 2011, 02:46
- Distribution: XFCE 5.0 x86_64 + 4.0 i586
- Location: Forests of Germany
Re: Shell Redirect, cat as editor, other tricks & bug examples
... but you don't have to use /dev/null to explicitly empty a file.
If you want to start a fresh logfile, just use echo "whatever info" >/tmp/log$$
Using just a single ">" always truncates the contents of the file, while ">>" always appends.
So, when you need a clean file to put info in, just use ">" for the first write into the file.
BTW, the "$$" as part of a filename only makes sense inside a script, where it expands to the script's process ID. Typed interactively, $$ expands to the PID of the shell itself, so it stays the same for the whole session.
(Using /tmp/whatever$$ is not 100% failsafe, but way better than having a script write to just /tmp/whatever. Just remember to delete all temp files prior to exiting the script.)
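On the "not 100% failsafe" point: a commonly used, more robust alternative to hand-built /tmp/whatever$$ names is mktemp, combined with a trap for the cleanup; a sketch (the file name template is arbitrary):

```shell
# mktemp creates a uniquely named file atomically, so two script
# instances can never collide on the same temp file:
tmpfile=$(mktemp /tmp/myscript.XXXXXX)

# A trap removes the temp file even if the script exits early:
trap 'rm -f "$tmpfile"' EXIT

echo "some log entry" >"$tmpfile"
```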
Cheers!
Yours Rava
-
- Full of knowledge
- Posts: 2564
- Joined: 25 Jun 2014, 15:21
- Distribution: 3.2.2 Cinnamon & KDE5
- Location: London
Re: Shell Redirect, cat as editor, other tricks & bug examples
The situation being addressed (mainly concerning data files) is one where somewhere further down in the code there is a loop in which a file is appended to using only >>. At some stage the file has to be initialised, otherwise we will be appending to rubbish.
Or the file can be appended to from different places in the code, and we don't know where the first write to the file will take place.
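That initialise-once, append-everywhere pattern can be sketched like this; `: >file` is a compact way to truncate or create a file without writing anything (the same effect as the cat /dev/null trick above):

```shell
out=/tmp/report$$
: >"$out"            # initialise once: create/truncate, write nothing

for dir in /tmp /var; do
    # every later write may now safely append, wherever it happens:
    ls "$dir" >>"$out"
done

rm "$out"            # clean up the demo file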

- Rava
- Contributor
- Posts: 4650
- Joined: 11 Jan 2011, 02:46
- Distribution: XFCE 5.0 x86_64 + 4.0 i586
- Location: Forests of Germany
Re: Shell Redirect, cat as editor, other tricks & bug examples
^
Okay, valid ideas. Still, an "echo -n >/tmp/whatever" will do the same.

Cheers!
Yours Rava