[Python-ideas] `to_file()` method for strings

Nick Eubank nickeubank at gmail.com
Tue Mar 22 23:58:28 EDT 2016


Thanks for the thoughts, both!

I'm not opposed to `a = str.read_file()` -- it does require knowing about
classes to grok, but it's super readable and intuitive to look at (i.e.,
Pythonic?).
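
To make that concrete, here's a hypothetical sketch of the reading side
(read_file is an illustrative name; nothing here is settled):

    # Hypothetical classmethod -- slurp a whole file into a string:
    a = str.read_file('data.txt', encoding='utf-8')

    # What it would wrap, in today's Python:
    with open('data.txt', encoding='utf-8') as f:
        a = f.read()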

Regarding bytes, I was thinking `to_file()` would take a handful of
arguments to support unusual encodings or bytes, but leave the default as
utf-8 text.
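
Roughly this shape, as a sketch (the exact signature is entirely up for
bikeshedding):

    # Hypothetical writing side, defaulting to utf-8 text:
    'spam and eggs'.to_file('out.txt')
    'spam and eggs'.to_file('out.txt', encoding='latin-1')  # unusual encoding

    # What the first call would wrap, in today's Python:
    with open('out.txt', 'w', encoding='utf-8') as f:
        f.write('spam and eggs')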

On Tue, Mar 22, 2016 at 8:52 PM Andrew Barnert <abarnert at yahoo.com> wrote:

> On Mar 22, 2016, at 20:29, Alexander Belopolsky <
> alexander.belopolsky at gmail.com> wrote:
>
>
> On Tue, Mar 22, 2016 at 11:06 PM, Nick Eubank <nickeubank at gmail.com>
> wrote:
>
>> it seems a simple `to_file` method on strings (essentially wrapping a
>> context-manager) would be really nice
>
>
> -1
>
> It is rare that you would want to write just a single string to a file.
>
>
> I do it all the time in other languages when dealing with smallish files.
> Python's very nice file-object concept, its slant toward iterator-based
> processing, and its amazingly consistent ecosystem mean that the same
> issues don't apply, so I'd rarely do the same thing. But for users
> migrating to Python from another language, or using Python occasionally
> while primarily using another language, I can see it being a lot more
> attractive.
>
> Also, you're neglecting the atomic-write issue. Coming up with a good API
> for an iterative atomic write is hard; for single-string write-all, it's
> just an extra flag on the function.
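>
> A minimal sketch of that single-string atomic case (write_atomic is an
> illustrative name, not part of the proposal; os.replace is atomic on both
> POSIX and Windows):
>
>     import os
>     import tempfile
>
>     def write_atomic(path, text, encoding='utf-8'):
>         # Write to a temp file in the same directory, then atomically
>         # swap it into place so readers never see a partial file.
>         dirname = os.path.dirname(os.path.abspath(path))
>         fd, tmp = tempfile.mkstemp(dir=dirname)
>         try:
>             with os.fdopen(fd, 'w', encoding=encoding) as f:
>                 f.write(text)
>             os.replace(tmp, path)
>         except BaseException:
>             os.remove(tmp)
>             raise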
>
> In most cases you write several strings and/or do other file operations
> between opening and closing a file stream. The proposed to_file() method
> may become an attractive nuisance leading to highly inefficient code.
> Remember: opening or closing a file is still, in most setups, a mechanical
> operation that involves moving macroscopic physical objects, not just
> electrons.
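>
> (The nuisance pattern would be something like calling the hypothetical
> method in a loop:
>
>     # Hypothetical misuse: one open/close cycle per record -- and,
>     # absent an append flag, each call would clobber the last.
>     for record in records:
>         (format(record) + '\n').to_file('log.txt')
>
> paying the mechanical open/close cost once per iteration.)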
>
>
> All the more reason to assemble the whole thing in memory and write it all
> at once, rather than streaming it out. Then you don't have to worry about
> how good the buffering is, what happens if there's a power failure (or just
> an exception, if you don't use with statements) halfway through, etc. It's
> definitely going to be as fast and as safe as possible if all of the
> details are done by the stdlib instead of user code. (I trust Python's
> buffering in 3.6, but not in 2.x or 3.1--and I've seen people even in
> modern 3.x try to "optimize" by opening files in raw mode and writing 7
> bytes here and 18 bytes there, which is going to be much slower than
> concatenating onto a buffer and writing blocks at a time...)
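>
> For instance, the assemble-then-write pattern being argued for here is
> just this (records and format_record stand in for whatever you happen to
> be producing):
>
>     parts = [format_record(r) for r in records]   # build it all in memory
>     with open('out.txt', 'w', encoding='utf-8') as f:
>         f.write(''.join(parts))                   # one big buffered write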
>
>