From: Carl Edquist <edquist@cs.wisc.edu>
To: Zachary Santer <zsanter@gmail.com>
Cc: libc-alpha@sourceware.org, coreutils@gnu.org, p@draigbrady.com
Subject: Re: RFE: enable buffering on null-terminated data
Date: Mon, 11 Mar 2024 06:54:11 -0500 (CDT)	[thread overview]
Message-ID: <8c490a55-598a-adf6-67c2-eb2a6099620a@cs.wisc.edu> (raw)
In-Reply-To: <CABkLJULka=Ox-WVNfqzeLYs1dX0h7ovnfjeRdqGSFcqVMJ47KQ@mail.gmail.com>

On Sun, 10 Mar 2024, Zachary Santer wrote:

> On Sun, Mar 10, 2024 at 4:36 PM Carl Edquist <edquist@cs.wisc.edu> wrote:
>>
>> Out of curiosity, do you have an example command line for your use case?
>
> My use for 'stdbuf --output=L' is to be able to run a command within a
> bash coprocess.

Oh, cool, now you're talking!  ;)


> (Really, a background process communicating with the parent process 
> through FIFOs, since Bash prints a warning message if you try to run 
> more than one coprocess at a time. Shouldn't make a difference here.)

(Kind of a side-note ... bash's limited coprocess handling was a 
long-standing annoyance for me in the past, to the point that I wrote a 
bash coprocess management library to handle multiple active coprocesses 
and give convenient methods for interaction.  Perhaps the trickiest bit 
about having multiple coprocesses open at once (which I suspect is the 
reason support was never added to bash) is that you don't want the second 
and subsequent coprocesses to inherit the pipe fds of the coprocesses 
already open.  That can result in deadlock if, for instance, you close 
your write end to coproc1, but coproc1 keeps waiting for input because 
coproc2 also holds a copy of the write end of the pipe to coproc1's 
input.  So you need to be smart about having each new coprocess first 
close all the fds associated with the other coprocesses.

Word to the wise: you might encounter this issue (coproc2 prevents coproc1 
from seeing its end-of-input) even though you are rigging this up yourself 
with FIFOs rather than bash's coproc builtin.)
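
(For illustration only, here's a minimal sketch of that fd hygiene, 
rigged up by hand with FIFOs -- the filenames and the {IN1}-style fd 
variables are made up for this example:)

 	mkfifo in1 out1 in2 out2

 	sed 's/^/1: /' <in1 >out1 &           # "coproc1"
 	exec {IN1}>in1 {OUT1}<out1            # parent's handles to coproc1

 	# Start "coproc2" with coproc1's fds explicitly closed in the child.
 	# Without the {IN1}>&- {OUT1}<&- closes, coproc2 would inherit the
 	# write end of in1, and coproc1 would never see EOF below.
 	sed 's/^/2: /' <in2 >out2 {IN1}>&- {OUT1}<&- &
 	exec {IN2}>in2 {OUT2}<out2

 	exec {IN1}>&-                         # now coproc1 really gets EOF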


> See coproc-buffering, attached.

Thanks!

> Without making the command's output either line-buffered or unbuffered, 
> what I'm doing there would deadlock. I feed one line in and then expect 
> to be able to read a transformed line immediately. If that transformed 
> line is stuck in a buffer that's still waiting to be filled, then 
> nothing happens.
>
> I swear doing this actually makes sense in my application.

Yeah makes sense!  I am familiar with the problem you're describing.

(In my coprocess management library, I effectively run every coproc with 
--output=L by default, by eval'ing the output of 'env -i stdbuf -oL env', 
because most of the time for a coprocess, that's what's wanted/necessary.)
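
(For anyone following along: on GNU/Linux at least, stdbuf(1) does its 
thing by preloading libstdbuf.so via LD_PRELOAD and passing its settings 
in environment variables, so the idea is roughly the sketch below.  The 
library path varies by system, and piping through sed to add 'export' is 
just one way to make the settings stick for child processes:)

 	# 'env -i stdbuf -oL env' prints only the variables stdbuf sets,
 	# something like (path varies by system):
 	#
 	#   LD_PRELOAD=/usr/libexec/coreutils/libstdbuf.so
 	#   _STDBUF_O=L
 	#
 	# Export those and every coprocess spawned afterwards behaves as if
 	# it had been started under 'stdbuf -oL':

 	eval "$(env -i stdbuf -oL env | sed 's/^/export /')"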


... Although, for your example coprocess use, where the shell both 
produces the input for the coproc and consumes its output, you might be 
able to simplify things by making the producer and consumer separate 
processes.  Then you could do a simpler 'producer | filter | consumer' 
pipeline without having to worry about buffering at all.  But if the 
producer and consumer need to be in the same process (e.g., they share 
state and are logically interdependent), then yeah, that's where you need 
a coprocess for the filter.
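
(Something like this, to be concrete -- the producer/consumer names here 
are just stand-ins:)

 	# with a one-way flow, the default block buffering in the pipes
 	# is harmless; no stdbuf needed
 	producer () { printf '%s\n' one/two three/four; }
 	consumer () { while IFS= read -r line; do echo "got: $line"; done; }

 	producer | cut -d/ -f1 | consumer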

... On the other hand, if the issue is that someone is producing one line 
at a time _interactively_ (that is, inputting text or commands from a 
terminal), then you might argue that the performance hit for unbuffered 
output will be insignificant compared to time spent waiting for terminal 
input.


> $ ./coproc-buffering 100000
> Line-buffered:
> real    0m17.795s
> user    0m6.234s
> sys     0m11.469s
> Unbuffered:
> real    0m21.656s
> user    0m6.609s
> sys     0m14.906s

Yeah, this makes sense in your particular example.

It looks like expand(1) uses putchar(3), so in unbuffered mode this 
translates to one write(2) call for every byte.  sed(1) is not quite as 
bad - in unbuffered mode it appears to output the line and the newline 
terminator separately, so two write(2) calls per line.

So in both cases (but especially for expand), line buffering reduces the 
number of write(2) calls.
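
(If you want to see that for yourself, something along these lines counts 
the write(2) calls for a single line of input -- assuming a linux-ish 
strace; adjust to taste:)

 	printf 'a\tb\n' | strace -f -e trace=write stdbuf -o0 expand \
 	    2>&1 >/dev/null | grep -c 'write('

 	printf 'hello\n' | strace -f -e trace=write stdbuf -o0 sed s/h/H/ \
 	    2>&1 >/dev/null | grep -c 'write('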

(Although given your time output, you might say the performance hit for 
unbuffered is not that huge.)


> When I initially implemented this thing, I felt lucky that the data I 
> was passing in were lines ending in newlines, and not null-terminated, 
> since my script gets to benefit from 'stdbuf --output=L'.

:thumbsup:


> Truth be told, I don't currently have a need for --output=N.

Mmm-hmm  :)


> Of course, sed and all sorts of other Linux command-line tools can 
> produce or handle null-terminated data.

Definitely.  So in the general case, it theoretically seems just as 
useful to buffer output on nul bytes.

Note that for GNU sed in particular, there is a -u/--unbuffered option, 
which will effectively give you line-buffered output, including buffering 
on nul bytes with -z/--null-data.
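
(So for example, a null-data sed filter flushes record-by-record in a 
coprocess without any stdbuf help -- the SEDZ name here is arbitrary:)

 	coproc SEDZ { sed -u -z 's/foo/bar/'; }

 	printf 'some/path with foo\0' >&${SEDZ[1]}
 	IFS='' read -rd '' -u ${SEDZ[0]} REC     # comes back right away
 	echo "[$REC]"                            # [some/path with bar]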

... I'll be honest though, I am having trouble imagining a realistic 
pipeline that filters filenames with embedded newlines using expand(1) 
;)

...

But, I want to be a good sport here and contrive an actual use case.

So for fun, say I want to use cut(1) (which performs poorly when 
unbuffered) in a coprocess that takes null-terminated file paths on input 
and outputs the first directory component (which possibly contains 
embedded newlines).

The basic command in the coprocess would be:

 	cut -d/ -f1 -z

but with the default block buffering for pipe output, that will hang (the 
problem you describe) if you expect to read a record back from it after 
each record sent.
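
(That is, something like this will just sit there, with the record stuck 
in cut's output buffer -- the CUT name is arbitrary:)

 	coproc CUT { cut -d/ -f1 -z; }

 	printf 'a\nb/c\nd/efg\0' >&${CUT[1]}
 	IFS='' read -rd '' -u ${CUT[0]} DIR    # hangs; the block buffer
 	                                       # never fills up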


The unbuffered approach works, but (as discussed) is pretty inefficient:

 	stdbuf --output=0  cut -d/ -f1 -z


But, if we swap nul bytes and newlines before and after cut, then we can 
run cut with regular newline line buffering, and get the desired effect:

 	stdbuf --output=0 tr '\0\n' '\n\0' |
 	stdbuf --output=L cut -d/ -f1      |
 	stdbuf --output=0 tr '\0\n' '\n\0'


The embedded newlines in filenames will be passed by tr(1) to cut(1) as 
embedded nul bytes, cut will line-buffer its output, and the second tr 
will restore the original embedded newlines & null-terminated records.

Note that unbuffered tr(1) will still output its translated input in 
blocks (with fwrite(3)) rather than a byte at a time, so tr will 
effectively give buffered output with the same size as the input records.

(That is, newline or null-terminated input records will effectively 
produce newline or null-terminated output buffering, respectively.)


I'd venture to guess that most of the standard filters could be made to 
pass along null-terminated records as line-buffered records the same way. 
Might even package it into a couple of convenience functions to set them 
up:


 	swap_znl () { stdbuf -o0 tr '\0\n' '\n\0'; }

 	nulterm_via_linebuf () { swap_znl | stdbuf -oL "$@" | swap_znl; }


Then, for example, stand it up with bash's coproc:

 	$ coproc DC1 { nulterm_via_linebuf cut -d/ -f1; }

 	$ printf 'a\nb/c\nd/efg\0' >&${DC1[1]}
 	$ IFS='' read -rd '' -u ${DC1[0]} DIR
 	$ echo "[$DIR]"
 	[a
 	b]
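
(And later records keep flowing the same way, e.g.:)

 	$ printf 'just-one-component\0' >&${DC1[1]}
 	$ IFS='' read -rd '' -u ${DC1[0]} DIR
 	$ echo "[$DIR]"
 	[just-one-component]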

(or however else you manage your coprocesses.)

It's a workaround, and it gives you the kind of buffering you'd get from 
a hypothetical 'stdbuf --output=N', but to be fair the extra data 
shoveling is not exactly free.

...

So ... again in theory I also feel like a null-terminated buffering mode 
for stdbuf(1) (and setbuf(3)) is kind of a missing feature.  It may just 
be that nobody has actually had a real need for it.  (Yet?)



> I'm running bash in MSYS2 on a Windows machine, so hopefully that 
> doesn't invalidate any assumptions.

Ooh.  No idea.  Your strace and sed might have different options than 
mine.  Also, I am not sure whether the pipe and fd duplication semantics 
differ from Linux.  But, based on the examples & output you're giving, I 
think we're on the same page for the discussion.


> Now setting up strace around the things within the coprocess, and only 
> passing in one line, I now have coproc-buffering-strace, attached. 
> Giving the argument 'L', both sed and expand call write() once. Giving 
> the argument 0, sed calls write() twice and expand calls it a bunch of 
> times, seemingly once for each character it outputs. So I guess that's 
> it.

:thumbsup:  Yeah that matches what I was seeing also.


Thanks for humoring the peanut gallery here :D

Carl
