Tuple Comprehension ???

avi.e.gross at gmail.com avi.e.gross at gmail.com
Tue Feb 21 16:09:04 EST 2023


Axy,

Nobody denies there are many ways to make a good design. But people have
different priorities, which include not just tradeoffs between elements of a
design but also equally important factors like efficiency, deadlines, and not
breaking too badly with the past.

You can easily enough design your own sub-language of sorts within the
Python universe. Simply write your own module(s) and import them. Choose
brand-new names for your functions, or carefully replace existing ones:
figure out which namespace a function is defined in, then create a new
function with the same name that may call the old function internally by
referring to it explicitly.

So if you want a new max(), either create an axy_max(), or bind the original
max to a name like original_max and write your own max() that, after doing
its own processing, may call original_max.
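
A minimal sketch of that shadowing idea (the custom handling is left as a
stub; original_max is just a name I chose):

    import builtins

    original_max = builtins.max   # keep a handle on the built-in

    def max(*args, **kwargs):
        # Any custom argument handling would go here, before
        # deferring to the original built-in.
        return original_max(*args, **kwargs)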

Here is an example. Suppose you want the maximum value in a nested structure
like:

nested = [ 1, [2, 3, [4, 5], 6], 7]

This contains parts at several levels, including an inner list that holds
yet another inner list. Using max(nested) will not work, even if some
intuition says it should; comparing an int to a list raises a TypeError.

If you want your own version of max() to be flexible enough to handle this
case too, you might find a flattener function, or a function that checks the
depth of a structure such as a list or tuple, and apply it as needed until
you have arguments suitable to hand to original_max. Your max() may end up
being recursive, peeling back one level and calling itself on the results,
which may then peel back another level.
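
A hedged sketch of that recursive approach (deep_max is a hypothetical name
of my own; it only peels lists and tuples):

    def deep_max(obj):
        # Recurse into nested lists/tuples, then compare the
        # per-branch maxima with the ordinary built-in max().
        if isinstance(obj, (list, tuple)):
            return max(deep_max(item) for item in obj)
        return obj

    nested = [1, [2, 3, [4, 5], 6], 7]
    print(deep_max(nested))   # 7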

But the people who built aspects of Python chose not to solve every single
case, however elegant that might be, and instead solved several common cases
fairly efficiently.

What we often end up having here are pseudo-religious discussions that
rarely get anywhere. For most scenarios, albeit with some exceptions, it
really is of no importance to discuss what SHOULD HAVE BEEN. Some errors
should be slated to be fixed, or at least flagged with stronger warnings in
the documentation. And proposals for future changes to Python can be
submitted, though most will be ignored.

What people often do not do is ask a question that is easier to deal with.
Asking WHY a feature is the way it is can be a decent question. Asking how
to work around a feature, such as whether there is some module out there
that implements it another way with some other function call, is another
good question. COMPLAINING about something that has been done and used for a
long time is sometimes viewed differently, especially if it suggests the
people who did it were stupid or even inconsistent.

Appealing to make-believe rules you choose to live your life by also tends
not to work. As you note, overengineering can cause even more problems than
a simple, consistent design, though the simple design can also yield a
rather boring and useless product.

Too much of what happens under the table in Python is hidden, and if you
really study those details, you might see how a seemingly trivial task, like
asking to create a new object of some class, can result in a cascade of code
being run, which in turn triggers further cascades as the object is
assembled and modified through a surprising number of lookups and executions
of dunder methods on the classes and metaclasses it is based on, as well as
on other objects like descriptors and decorators. Since our processors are
faster, we can afford a design that does this much for you, and we have
garbage collection to deal with the many bits and pieces created and
abandoned along the way. So our higher-level designs can often look fairly
simple and even elegant, but the complexity now must live somewhere.
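
You can watch a small piece of that cascade yourself. A toy example (class
names are my own invention) that only scratches the surface:

    class Meta(type):
        def __call__(cls, *args, **kwargs):
            print("Meta.__call__ fires when you write Widget()")
            return super().__call__(*args, **kwargs)

    class Widget(metaclass=Meta):
        def __new__(cls):
            print("Widget.__new__ allocates the instance")
            return super().__new__(cls)

        def __init__(self):
            print("Widget.__init__ fills it in")

    w = Widget()   # prints all three lines, in that order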

I hate to bring up an analogy, as my experience suggests people will take it
as meaning way more (or less) than I intend. Many languages, especially
early on, hated to fail. Heck, machines crashed. So an elegant design had to
be overlaid with endless testing to avoid the darn errors. Compilers had to
try to catch them even earlier, so you could not pass a function an argument
of the wrong type. And you had to test explicitly for other things at run
time: to avoid dividing by zero or taking the square root of a negative
number, or to see whether a list was empty ...

Python allows, and often even encourages, a different paradigm: raise errors
when needed, but mainly just TRY something and be prepared to deal with
failure. That too is an elegant design, just a very different one. And you
can do BOTH. Heck, you can use many styles of programming as the language
keeps being extended. There is no one right way most of the time, even if
someone once said there is.
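
The contrast in miniature, using the usual toy case of grabbing the first
element of a possibly empty list:

    items = []

    # Look before you leap: test first.
    first = items[0] if items else None

    # Easier to ask forgiveness: just try, and handle the failure.
    try:
        first = items[0]
    except IndexError:
        first = None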

So if the standard library provides one way to do something, it may not be
the only way, and it may not match what you want. Sometimes a request is met
by adding options, like default=value for max(), and sometimes by allowing
the user to supply the key function through another keyword argument. That
literally lets you use max() to calculate what min() does, or to do
something entirely different, like find the longest word in a sentence, or
the shortest:

>>> words = ("A sentence with sesquipedalian words like "
...          "disestablishmentarianism to measure length").split()
>>> words
['A', 'sentence', 'with', 'sesquipedalian', 'words', 'like',
'disestablishmentarianism', 'to', 'measure', 'length']

So length can be measured by len(), and this asks for the word whose length
is the maximum:

>>> max(words, key=len)
'disestablishmentarianism'

The max() function here does not care; it simply uses whatever key function
you supply.

Simply writing a function that returns the negative of the length suffices
to make this act like min():

>>> def neglen(arg): return -len(arg)
... 
>>> neglen("hello")
-5
>>> max(words, key=neglen)
'A'

My point is that, in one sense, many standard library functions have already
had features added to them in ways that do not break existing programs, and
they do allow fairly interesting problems to be solved. But sometimes such
features mean either not accepting some argument formats that another
function supports, or needing to change lots of functions at the same time
to keep them in sync. That is often not possible, may be highly expensive,
or may simply be done gradually over many releases. I believe extending list
comprehensions, and generator expressions of that sort, to also cover sets
and dictionaries may have been done incrementally, even if today it all
seems like a unified set of features designed in the same general way.
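
For what it is worth, here is the family as it stands today; note that the
parenthesized form is a generator expression, not the "tuple comprehension"
of the subject line:

    squares_list = [x * x for x in range(5)]        # list comprehension
    squares_set = {x * x for x in range(5)}         # set comprehension
    squares_dict = {x: x * x for x in range(5)}     # dict comprehension
    squares_gen = (x * x for x in range(5))         # generator expression
    squares_tup = tuple(x * x for x in range(5))    # nearest thing to a
                                                    # tuple comprehension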

So my PERSONAL view is that when looking at issues, we should NOT settle on
a single principle and then insist it is the one and only principle to guide
us. We need to evaluate multiple ideas and not rule any out prematurely. We
then need to weigh the ideas in specific cases and see if anything
dominates, or if a few in the proper combination make a reasonable
compromise. As far as compilers and interpreters go, I submit they would
"love" really short variable names and would rather not see oodles of
whitespace that is only there to be ignored, nor comments to ignore. But
generally we don't care, because what we focus on is the experience of the
human programmers who read, write, and maintain the code. For them, we
generally want variables long enough to carry some amount of meaning.
Similarly, many other features are tradeoffs, and although it is likely
cheaper at run time to rewrite:

    if deeply.nested.object.fieldname > 9:
        deeply.nested.object.fieldname = 0

as something more like:

    obj = deeply.nested.object
    if obj.fieldname > 9:
        obj.fieldname = 0

it makes sense to allow the programmer to choose either way, and either have
a compiler or interpreter optimize regions like this that keep referring to
the same nested attributes, or pay the cost of re-traversing them, even if
that takes a millionth of a second longer.

Now, if I am designing something under my own control, boy do I take
shortcuts when prototyping and ignore lots of "rules" while I build the
skeleton. Once I am somewhat satisfied, I might start applying many of the
principles I use: adding lots of comments, naming variables more
meaningfully, perhaps refactoring for efficiency, changing the code to
handle special cases, setting up to detect and catch some errors and do
something appropriate, writing a manual page, and so on. Lots of things are
important then, but not so much in the early stages.

Python is a work in progress built by lots of people, and I am regularly
amazed at how well much of it hangs together despite some areas where it is
a tad inconsistent. But I am pretty sure many of the inconsistencies have
been handled in ways you will not see unless you search for them. There must
be other implementations out there for functions that handle edge cases, or
that try several approaches and return the ones that do not generate errors.

Do you prefer languages that pretty much require you to declare a separate
function for each number and combination of argument types, like
max(int, int) -> int
max(int, int, int) -> int
max(int, int, int, int) -> int
max(int, double) -> double
...
max(int, double, uint16) -> double

And so on? I know languages that let you write generic frameworks and then
internally generate dozens of such variants as needed at compile time. Each
variant can be optimized, but so many functions carry their own costs.

But a language like Python takes a different tack: one function alone can do
many things with ease, including handling any number of arguments, defaults
for some, and many kinds of arguments, especially ones that are compatible
or implement some protocol. When you view things that way, the designs of
max() and sum() may well make quite a bit more sense, and so may the reason
they are not identically designed.
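
As a rough sketch of how a single Python function can absorb those variants
(my_max and its sentinel are names I made up; this is not how the real
built-in is implemented):

    _MISSING = object()   # sentinel, so that default=None stays usable

    def my_max(*args, key=None, default=_MISSING):
        # Accept both my_max(iterable) and my_max(a, b, c, ...).
        items = list(args[0]) if len(args) == 1 else list(args)
        if not items:
            if default is _MISSING:
                raise ValueError("my_max() arg is an empty sequence")
            return default
        keyfunc = key if key is not None else (lambda x: x)
        best = items[0]
        for item in items[1:]:
            if keyfunc(item) > keyfunc(best):
                best = item
        return best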



-----Original Message-----
From: Python-list <python-list-bounces+avi.e.gross=gmail.com at python.org> On
Behalf Of Axy via Python-list
Sent: Tuesday, February 21, 2023 2:37 PM
To: python-list at python.org
Subject: Re: Tuple Comprehension ???

On 21/02/2023 19:11, avi.e.gross at gmail.com wrote:
> In your own code, you may want to either design your own functions, or use
them as documented or perhaps create your own wrapper functions that
carefully examine what you ask them to do and re-arrange as needed to call
the function(s) you want as needed or return their own values or better
error messages.  As a silly example, this fails:
>
> max(1, "hello")
>
> Max expects all arguments to be of compatible types. You could write your
own function called charMax() that converts all arguments to be of type str
before calling max() or maybe call max(... , key=mycompare) where compare as
a function handles this case well.
>
> The key point is that you need to adapt yourself to what some function you
want to use offers, not expect the language to flip around at this point and
start doing it your way and probably breaking many existing programs.
>
> Yes, consistency is a good goal. Reality is a better goal.

I don't think overengineering is a good thing. Good design utilizes
associativity so a person don't get amazed by inconsistency in things that
expected to be similar.

Axy.

--
https://mail.python.org/mailman/listinfo/python-list


