[Tutor] Running multiple python scripts from one python script

ose micah osaosemwe at yahoo.com
Fri Oct 25 12:55:27 EDT 2019


 It is dynamic because I would not know what the content of the python scripts is. The input is unpredictable; it changes randomly every day (hence the need for live feeds). Based on this constantly changing value, new python scripts must be created and executed against an endpoint infrastructure. Do not look at the naming convention for the python scripts: based on the settings in the infrastructure's endpoint, the scripts must be run sequentially, or else one script could affect the running of another.
If not, I would have run each as a function in the main script.
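
For illustration, a minimal sketch of what running them sequentially could look like with subprocess (the script names here are placeholders, not the real generated files):

import subprocess
import sys

# hypothetical paths for the dynamically generated scripts
generated_scripts = ['/tmp/delete_line3.py', '/tmp/update_feed.py']

for script in generated_scripts:
    # subprocess.run() blocks until the script finishes, so the next
    # one only starts after the previous one has completed
    subprocess.run([sys.executable, script], check=True)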
    On Friday, October 25, 2019, 12:26:09 PM EDT, Alan Gauld <alan.gauld at yahoo.co.uk> wrote:  
 
 On 25 October 2019, at 14:48, ose micah <osaosemwe at yahoo.com> wrote:

> dynamically creates python scripts on the fly (from live feeds), 

I'm not sure what you mean by live feeds, but can you explain what different kinds of scripts you produce for the first case, "delete_line3.py"?

Based on the name, it doesn't sound too dynamic in nature. Could it not be turned into a function driven by some parameters?
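
A rough sketch of that idea (the function name, signature and file handling are assumptions made up for illustration, not your actual code):

# Hypothetical parameter-driven replacement for a generated delete_line3.py:
# one prewritten function, with the changing parts passed in as arguments.
def delete_line(path, line_number):
    with open(path) as f:
        lines = f.readlines()
    del lines[line_number - 1]      # drop the requested line (1-based)
    with open(path, 'w') as f:
        f.writelines(lines)

# The live feed would then just supply the arguments, e.g.:
# delete_line('/tmp/data.txt', 3)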

> These newly created scripts must be deployed in less than 2 mins max, but cannot be deployed simultaneously,

By deployed I assume you mean executed? If so, executing the code directly will be much, much faster than spinning up a subprocess or two for each task.

># create script1
>
>with open('/tmp/delete_line3.py', 'w+') as f:
>    ...
>

This is the bit that is puzzling me. Why must this be dynamically created? What prevents it from being prewritten code?

>extProc1 = sp.Popen([sys.executable, '/tmp/delete_line3.py'],  stdout=sp.PIPE)
>

If you must execute dynamically created python code files, then using exec(open(fn).read()) is probably preferable to starting a subprocess.
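
A minimal sketch of that approach, run in order for each generated file (the file names here are only placeholders):

# Run each generated script in-process, one after another, instead of
# launching a separate interpreter for each one.
for fn in ['/tmp/delete_line3.py', '/tmp/delete_line4.py']:   # hypothetical paths
    with open(fn) as f:
        code = f.read()
    # exec() runs the code in this process; a fresh globals dict per
    # script keeps one script's names from leaking into the next
    exec(code, {'__name__': '__main__'})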


  

