Re-raising a RuntimeError - good practice?

Victor Hooi victorhooi at gmail.com
Thu Oct 24 21:09:41 EDT 2013


Hi,

Thanks to @Steven D'Aprano and @Andrew Berg for your advice.

The advice seems to be that I should move my exception handling higher up, and try to handle it all in one place:

for job in jobs: 
    try: 
        try: 
            job.run_all() 
        except Exception as err:  # catch *everything* 
            logger.error(err) 
            raise 
    except (SpamError, EggsError, CheeseError): 
        # We expect these exceptions, and ignore them. 
        # Everything else is a bug. 
        pass 

That makes sense, but I'm sorry, I'm still a bit confused.

Essentially, my requirements are:

    1. If any job raises an exception, end that particular job, and continue with the next job.
    2. Be able to differentiate between different exceptions in different stages of the job. For example, if I get an IOError in self.export_to_csv() versus one in self.gzip_csv_file(), I want to be able to handle them differently. Often this may just mean logging a slightly different friendly error message to the logfile.

Am I still able to handle 2. if I handle all exceptions in the "for job in jobs" loop? How will I be able to distinguish between the same type of exception being raised by different methods?
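
Just to check my understanding of 2., is something like this the idea? (Rough sketch only - the exception class names and the file path below are made up, and it assumes Python 3 for the "raise ... from" syntax.) Each stage wraps its own failures in a stage-specific exception, so the top-level loop can stay the single place that handles them but can still tell the stages apart:

    import logging

    logger = logging.getLogger('loader')

    # Hypothetical stage-specific exceptions (names are mine):
    class CsvExportError(Exception):
        pass

    class GzipError(Exception):
        pass

    class LoadJob:
        def __init__(self, friendly_name, export_sql_file):
            self.friendly_name = friendly_name
            self.export_sql_file = export_sql_file

        def export_to_csv(self):
            try:
                with open(self.export_sql_file) as f:
                    self.export_sql_statement = f.read()
            except IOError as e:
                # Chain the original IOError so its traceback is preserved.
                raise CsvExportError(self.export_sql_file) from e

        def gzip_csv_file(self):
            try:
                pass  # gzip the exported CSV here
            except IOError as e:
                raise GzipError(self.friendly_name) from e

        def run_all(self):
            self.export_to_csv()
            self.gzip_csv_file()

    jobs = [LoadJob('daily_load', 'export.sql')]

    for job in jobs:
        try:
            job.run_all()
        except CsvExportError as err:
            # Same underlying IOError, but I know it came from the export stage.
            logger.error('Could not read the export SQL for %s: %s', job.friendly_name, err)
        except GzipError as err:
            logger.error('Could not gzip the CSV for %s: %s', job.friendly_name, err)

Is that the kind of structure you had in mind, or is there a simpler way?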

Also, @Andrew Berg - you mentioned I'm just swallowing the original exception and then raising a new RuntimeError - I'm guessing that's bad practice, right? If I use just a bare "raise"

        except Exception as err:  # catch *everything* 
            logger.error(err) 
            raise 

that will just re-raise the original exception, right?
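
(I did do a quick standalone test to check that - the file path is just a dummy, and it prints the same IOError twice, once from the logging line and once from the outer handler:)

    def failing():
        try:
            open('/no/such/file')    # raises IOError (FileNotFoundError)
        except Exception as err:     # catch *everything*
            print('logging:', err)
            raise                    # bare raise: the original IOError propagates, traceback intact

    try:
        failing()
    except IOError as e:
        print('caught the original IOError again:', e)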

Cheers,
Victor

On Thursday, 24 October 2013 15:42:53 UTC+11, Andrew Berg wrote:
> On 2013.10.23 22:23, Victor Hooi wrote:
> > For example:
> >
> >     def run_all(self):
> >         self.logger.debug('Running loading job for %s' % self.friendly_name)
> >         try:
> >             self.export_to_csv()
> >             self.gzip_csv_file()
> >             self.upload_to_foo()
> >             self.load_foo_to_bar()
> >         except RuntimeError as e:
> >             self.logger.error('Error running job %s' % self.friendly_name)
> > ...
> >     def export_to_csv(self):
> >     ...
> >         try:
> >             with open(self.export_sql_file, 'r') as f:
> >                 self.logger.debug('Attempting to read in SQL export statement from %s' % self.export_sql_file)
> >                 self.export_sql_statement = f.read()
> >                 self.logger.debug('Successfully read in SQL export statement')
> >         except Exception as e:
> >             self.logger.error('Error reading in %s - %s' % (self.export_sql_file, e), exc_info=True)
> >             raise RuntimeError
>
> You're not re-raising a RuntimeError. You're swallowing all exceptions and then raising a RuntimeError. Re-raise the original exception in
> export_to_csv() and then handle it higher up. As Steven suggested, it is a good idea to handle exceptions in as few places as possible (and
> as specifically as possible). Also, loggers have an exception method, which can be very helpful in debugging when unexpected things happen,
> especially when you need to catch a wide range of exceptions.
>
> --
> CPython 3.3.2 | Windows NT 6.2.9200 / FreeBSD 10.0
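
One other thing I noticed in Andrew's reply - the logger "exception" method. If I've understood it correctly, it logs the message at ERROR level and appends the current traceback automatically, so I wouldn't need exc_info=True any more. A minimal standalone example of what I mean (dummy file path again):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger('loader')

    try:
        open('/no/such/file')
    except Exception:
        # Logs at ERROR level and includes the traceback of the active exception.
        logger.exception('Error reading in the export SQL file')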


