From fps-pd at optusnet.com.au Thu Mar 1 00:41:13 2018
From: fps-pd at optusnet.com.au (Peter Dwyer)
Date: Thu, 01 Mar 2018 16:41:13 +1100
Subject: [melbourne-pug] Jobs at Reece
In-Reply-To: <85po4o8uht.fsf@benfinney.id.au>
References: <85po4o8uht.fsf@benfinney.id.au>
Message-ID: 

Hi Ben

I know someone else who works at Reece. I would love to join and learn a lot of hands-on Python but I don't have a lot of experience. I have just started to study Java with OUA and also Python through Udemy. I also have intermediate skills in MS Excel.

I might be able to help 'pay' my way by helping out in your Accounts Receivable (debt collection) ** I will work for ½ the pay as a contractor since I have my own ABN - if you are interested give me a call before I'm snapped up...

Cheers

Peter Dwyer
Ph: 0432-699-779

On 1/3/18, 12:17 pm, "melbourne-pug on behalf of Ben Finney via melbourne-pug" wrote:

    Richard Jones writes:

    > We're looking for some Python folks to come join the team at Reece.
    > […] We're implementing a bunch of things in Python and Django (green
    > fields - Python 3 all the way) in a micro-service architecture to
    > support Reece's business. It's a great place to work, I'm enjoying it
    > and it's a bunch of lovely people to work with. If that sounds
    > interesting to you please get in touch!

    It's hard to know whether it sounds interesting, there are no specifics
    about what the job is (other than the technologies involved :-)

    Can you say what Reece does, and what the new hires would be put to work doing? Thanks!

    -- 
     \     “My aunt gave me a walkie-talkie for my birthday. She says if |
      `\    I'm good, she'll give me the other one next year.” —Steven |
    _o__)   Wright |
    Ben Finney

    _______________________________________________
    melbourne-pug mailing list
    melbourne-pug at python.org
    https://mail.python.org/mailman/listinfo/melbourne-pug

From ed at pythoncharmers.com Thu Mar 1 18:13:22 2018
From: ed at pythoncharmers.com (Ed Schofield)
Date: Fri, 2 Mar 2018 10:13:22 +1100
Subject: [melbourne-pug] Next Melbourne Python meeting - Monday 5 March
Message-ID: 

Hi all!

We're looking forward to our second Python meetup for 2018 next week, on Monday 5 March. We have three talks planned:

*1. Fred Rotbart: Hierarchical Temporal Memory in Python, part 2* (30-45 minutes)

Fred will give a refresher (for those who missed his talk in February) and then pick up where he left off last time, with various fancy demos of what's possible with Hierarchical Temporal Memory for learning patterns powerfully from small(ish) datasets.

*2. Adel Fazel: Web data wrangling for beginners* (20 minutes)

Adel will give an introductory talk about using Python for data wrangling, accessing web APIs, parsing JSON data, and manipulating it with Pandas. He will demonstrate this by accessing the New York Times API.

*3. Ed Schofield: AlphaZero - background, how it works, and a general Python implementation* (20 minutes)

AlphaZero is a major recent advance in self-play-based reinforcement learning from DeepMind that can learn complex 2-player strategy games like Go and Chess from scratch (with no human knowledge) and quickly surpass human capabilities. Ed will review the algorithm, how it works, what its future applications could be, and a general-purpose Python package for implementing it.

*4. Announcements and pizza*

*When:* 5.45pm for mingling; talks from 6pm to 7.30pm

*Where:* Outcome-Hub Co-Working Space, Suite 1, 121 Cardigan Street, Carlton

*How to get there:* Walk 12 minutes north from Melbourne Central station.
*Afterwards:* drinks on Lygon Street

*Sponsorship:* many thanks to Outcome Hub for providing the venue and Python Charmers for ongoing sponsorship.

*RSVP:* Please respond on Meetup.com so we can track numbers: https://www.meetup.com/Melbourne-Python-Meetup-Group/

We hope to see you there! :-D

*Next meeting:* our next meeting will be on Monday 7 May 2018. (We'll skip April because of Easter.) We're still looking for speakers for May and beyond, so please get in contact if you'd like to speak!

Best wishes,
Ed

--
Dr. Edward Schofield
Python Charmers
http://pythoncharmers.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From miked at dewhirst.com.au Fri Mar 9 02:41:01 2018
From: miked at dewhirst.com.au (Mike Dewhirst)
Date: Fri, 9 Mar 2018 18:41:01 +1100
Subject: [melbourne-pug] Joblib question
Message-ID: 

https://media.readthedocs.org/pdf/joblib/latest/joblib.pdf

I'm trying to make the following code run in parallel on separate CPU cores but haven't had any success.

    def make_links(self):
        for db in databases:
            link = create_useful_link(self, Link, db)
            if link:
                scrape_db(self, link, db)

This is a web scraper which is working nicely in a leisurely sequential manner. databases is a list of urls with gaps to be filled by create_useful_link() which makes a link record from the Link class. The self instance is a source of attributes for filling the url gaps. self is a chemical substance and the link record url field when clicked in a browser will bring up that external website with the chemical substance selected for researching by the viewer. If successful, we then fetch the external page and scrape a bunch of interesting data from it and turn that into substance notes. scrape_db() doesn't return anything but it does create up to nine other records.

        from joblib import Parallel, delayed

        class Substance( etc ..
            ...
            def make_links(self):
                #Parallel(n_jobs=-2)(delayed(
                #    scrape_db(self, create_useful_link(self, Link, db), db) for db in databases
                #))

I'm getting a TypeError from Parallel delayed() - can't pickle generator objects

So my question is how to write the commented code properly? I suspect I haven't done enough comprehension.

Thanks for any help

Mike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alito at organicrobot.com Fri Mar 9 03:30:09 2018
From: alito at organicrobot.com (Alejandro Dubrovsky)
Date: Fri, 9 Mar 2018 19:30:09 +1100
Subject: [melbourne-pug] Joblib question
In-Reply-To: 
References: 
Message-ID: <302ec160-ff59-a46c-2f20-14dab3350def@organicrobot.com>

delayed is a decorator, so it takes a function or a method. You are passing it a generator instead.

    def make_links(self):
        Parallel(n_jobs=-2)(
            delayed(scrape_db)(self, create_useful_link(self, Link, db), db) for db in databases
        )

should work, but it will only parallelise over the scrape_db calls, not the create_useful_link calls I think. Which of the two do you want to parallelise over? Or were you after parallelising both?

On 09/03/18 18:41, Mike Dewhirst wrote:
> https://media.readthedocs.org/pdf/joblib/latest/joblib.pdf
>
> I'm trying to make the following code run in parallel on separate CPU cores but haven't had any success.
>
>     def make_links(self):
>         for db in databases:
>             link = create_useful_link(self, Link, db)
>             if link:
>                 scrape_db(self, link, db)
>
> This is a web scraper which is working nicely in a leisurely sequential manner. databases is a list of urls with gaps to be filled by create_useful_link() which makes a link record from the Link class. The self instance is a source of attributes for filling the url gaps. self is a chemical substance and the link record url field when clicked in a browser will bring up that external website with the chemical substance selected for researching by the viewer. If successful, we then fetch the external page and scrape a bunch of interesting data from it and turn that into substance notes. scrape_db() doesn't return anything but it does create up to nine other records.
>
>     from joblib import Parallel, delayed
>
>     class Substance( etc ..
>         ...
>         def make_links(self):
>             #Parallel(n_jobs=-2)(delayed(
>             #    scrape_db(self, create_useful_link(self, Link, db), db) for db in databases
>             #))
>
> I'm getting a TypeError from Parallel delayed() - can't pickle generator objects
>
> So my question is how to write the commented code properly? I suspect I haven't done enough comprehension.
>
> Thanks for any help
>
> Mike
>
>
> _______________________________________________
> melbourne-pug mailing list
> melbourne-pug at python.org
> https://mail.python.org/mailman/listinfo/melbourne-pug
>
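For reference, the delayed(func)(args) pattern Alejandro describes can be sketched as a stand-alone toy like this. The function bodies and URLs below are placeholders rather than Mike's real Django code, and the threading backend shown at the end is only a suggestion for IO-bound scraping, not something taken from the thread (joblib is assumed to be installed):

    # Toy illustration of the delayed(func)(args) pattern from the reply above.
    from joblib import Parallel, delayed

    databases = ["db-a", "db-b", "db-c"]

    def create_useful_link(db):
        # stand-in for building a Link record; returns a URL (or None)
        return "https://example.org/%s" % db

    def scrape_db(link, db):
        # stand-in for the real scraper; just returns something inspectable
        return (db, link)

    # delayed(scrape_db) wraps the *callable*; calling the result records a
    # lazy (function, args, kwargs) task without running anything yet.
    # Writing delayed(scrape_db(...)) instead passes an already-evaluated
    # result (or, inside a generator expression, the generator itself),
    # which is what triggers the pickling error.
    tasks = (delayed(scrape_db)(create_useful_link(db), db) for db in databases)

    # backend="threading" suits an IO-bound scraper: no pickling of arguments
    # and no extra worker processes. The default process-based backend is the
    # better fit for CPU-bound work.
    results = Parallel(n_jobs=4, backend="threading")(tasks)
    print(results)

Note that only the scrape_db calls run in the workers; create_useful_link still runs sequentially in the parent while the task list is being built, which matches Alejandro's caveat. n_jobs=-2 (as used in the thread) means "all cores but one"; for a handful of URLs a small fixed thread pool is usually enough.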
From news02 at metrak.com Fri Mar 9 20:33:02 2018
From: news02 at metrak.com (paul sorenson)
Date: Fri, 9 Mar 2018 17:33:02 -0800
Subject: [melbourne-pug] Joblib question
In-Reply-To: 
References: 
Message-ID: <9d5a67bc-aee6-0a1d-488c-641f42930ce7@metrak.com>

Mike,

Are there unique features of joblib that you need to use?

Scraping web pages is often a good candidate for asyncio based models.

cheers

On 03/08/2018 11:41 PM, Mike Dewhirst wrote:
> https://media.readthedocs.org/pdf/joblib/latest/joblib.pdf
>
> I'm trying to make the following code run in parallel on separate CPU cores but haven't had any success.
>
>     def make_links(self):
>         for db in databases:
>             link = create_useful_link(self, Link, db)
>             if link:
>                 scrape_db(self, link, db)
>
> This is a web scraper which is working nicely in a leisurely sequential manner. databases is a list of urls with gaps to be filled by create_useful_link() which makes a link record from the Link class. The self instance is a source of attributes for filling the url gaps. self is a chemical substance and the link record url field when clicked in a browser will bring up that external website with the chemical substance selected for researching by the viewer. If successful, we then fetch the external page and scrape a bunch of interesting data from it and turn that into substance notes. scrape_db() doesn't return anything but it does create up to nine other records.
>
>     from joblib import Parallel, delayed
>
>     class Substance( etc ..
>         ...
>         def make_links(self):
>             #Parallel(n_jobs=-2)(delayed(
>             #    scrape_db(self, create_useful_link(self, Link, db), db) for db in databases
>             #))
>
> I'm getting a TypeError from Parallel delayed() - can't pickle generator objects
>
> So my question is how to write the commented code properly? I suspect I haven't done enough comprehension.
>
> Thanks for any help
>
> Mike
>
>
> _______________________________________________
> melbourne-pug mailing list
> melbourne-pug at python.org
> https://mail.python.org/mailman/listinfo/melbourne-pug

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
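As a point of comparison for Paul's suggestion, a bare-bones asyncio version of the fetch step might look like the sketch below. aiohttp is assumed to be available, and the URLs and parse() are placeholders standing in for the real scraping and note-creation logic:

    # Rough sketch of an asyncio/aiohttp approach to the same kind of job.
    import asyncio
    import aiohttp

    urls = [
        "https://example.org/db1?substance=123",
        "https://example.org/db2?substance=123",
    ]

    def parse(html):
        # placeholder for the scraping / note-creation step
        return len(html)

    async def fetch(session, url):
        async with session.get(url) as response:
            return await response.text()

    async def scrape_all(urls):
        async with aiohttp.ClientSession() as session:
            # fire off all requests and wait for them together
            pages = await asyncio.gather(*(fetch(session, url) for url in urls))
        return [parse(page) for page in pages]

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(scrape_all(urls)))

The trade-off is that the parsing and any ORM writes stay synchronous; this only overlaps the network waits, which is usually where the time goes in a scraper like this.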
From miked at dewhirst.com.au Sat Mar 10 01:03:59 2018
From: miked at dewhirst.com.au (Mike Dewhirst)
Date: Sat, 10 Mar 2018 17:03:59 +1100
Subject: [melbourne-pug] Joblib question
In-Reply-To: <9d5a67bc-aee6-0a1d-488c-641f42930ce7@metrak.com>
References: <9d5a67bc-aee6-0a1d-488c-641f42930ce7@metrak.com>
Message-ID: <17cef1d9-5c06-d344-0f75-61dfe2a80784@dewhirst.com.au>

On 10/03/2018 12:33 PM, paul sorenson wrote:
>
> Mike,
>
> Are there unique features of joblib that you need to use?
>

I was seduced by "Parallel". On reading the docs a little more diligently it seems well suited to parallel computation with heavy compute-bound stuff like scientific number crunching and disk caching results to prevent re-computing.

> Scraping web pages is often a good candidate for asyncio based models.
>

I think I'm being seduced by io in the name. I do judge books by their cover so I think I'll read asyncio

Thanks Paul

Mike

>
> cheers
>
> On 03/08/2018 11:41 PM, Mike Dewhirst wrote:
>> https://media.readthedocs.org/pdf/joblib/latest/joblib.pdf
>>
>> I'm trying to make the following code run in parallel on separate CPU cores but haven't had any success.
>>
>>     def make_links(self):
>>         for db in databases:
>>             link = create_useful_link(self, Link, db)
>>             if link:
>>                 scrape_db(self, link, db)
>>
>> This is a web scraper which is working nicely in a leisurely sequential manner. databases is a list of urls with gaps to be filled by create_useful_link() which makes a link record from the Link class. The self instance is a source of attributes for filling the url gaps. self is a chemical substance and the link record url field when clicked in a browser will bring up that external website with the chemical substance selected for researching by the viewer. If successful, we then fetch the external page and scrape a bunch of interesting data from it and turn that into substance notes. scrape_db() doesn't return anything but it does create up to nine other records.
>>
>>     from joblib import Parallel, delayed
>>
>>     class Substance( etc ..
>>         ...
>>         def make_links(self):
>>             #Parallel(n_jobs=-2)(delayed(
>>             #    scrape_db(self, create_useful_link(self, Link, db), db) for db in databases
>>             #))
>>
>> I'm getting a TypeError from Parallel delayed() - can't pickle generator objects
>>
>> So my question is how to write the commented code properly? I suspect I haven't done enough comprehension.
>>
>> Thanks for any help
>>
>> Mike
>>
>>
>> _______________________________________________
>> melbourne-pug mailing list
>> melbourne-pug at python.org
>> https://mail.python.org/mailman/listinfo/melbourne-pug
>

From miked at dewhirst.com.au Sat Mar 10 01:04:02 2018
From: miked at dewhirst.com.au (Mike Dewhirst)
Date: Sat, 10 Mar 2018 17:04:02 +1100
Subject: [melbourne-pug] Joblib question
In-Reply-To: <302ec160-ff59-a46c-2f20-14dab3350def@organicrobot.com>
References: <302ec160-ff59-a46c-2f20-14dab3350def@organicrobot.com>
Message-ID: 

On 9/03/2018 7:30 PM, Alejandro Dubrovsky wrote:
> delayed is a decorator, so it takes a function or a method. You are passing it a generator instead.
>
>     def make_links(self):
>         Parallel(n_jobs=-2)(
>             delayed(scrape_db)(self, create_useful_link(self, Link, db), db) for db in databases
>         )
>
> should work,

Yes it does :) Thank you Alejandro

> but it will only parallelise over the scrape_db calls, not the create_useful_link calls I think. Which of the two do you want to parallelise over? Or were you after parallelising both?
I think I probably want to use Celery (thanks Ed for the suggestion) or similar so I can loop through (currently) nine databases and kick off a scrape_db() task for each. Then each scrape_db task looks for (currently) ten data items of specific interest. Having scraped a data item we need to get_or_create (this is in Django) the specific data note and add the result to whatever is there. That data note update might be a bottleneck with more than one scrape_db task in parallel retrieving the same data item; say aqueous solubility. We want aqueous solubility from all databases in the same note so the user can easily compare different values and decide which value to use. So parallelising everything might eventually be somewhat problematic. It all has to squeeze through Postgres atomic transactions right at the end. I suppose this is a perfect example of an IO bound task. Also, another thing is that the app is (currently) all server side. I'm not (yet) using AJAX to update the screen when the data becomes available. Cheers Mike > > On 09/03/18 18:41, Mike Dewhirst wrote: >> https://media.readthedocs.org/pdf/joblib/latest/joblib.pdf >> >> I'm trying to make the following code run in parallel on separate CPU >> cores but haven't had any success. >> >> def make_links(self): for db in databases: link = >> create_useful_link(self, Link, db) if link: scrape_db(self, link, db) >> >> This is a web scraper which is working nicely in a leisurely >> sequential manner.? databases is a list of urls with gaps to be >> filled by create_useful_link() which makes a link record from the >> Link class. The self instance is a source of attributes for filling >> the url gaps. self is a chemical substance and the link record url >> field when clicked in a browser will bring up that external website >> with the chemical substance selected for researching by the viewer. >> If successful, we then fetch the external page and scrape a bunch of >> interesting data from it and turn that into substance notes. >> scrape_db() doesn't return anything but it does create up to nine >> other records. >> >> ???????? from joblib import Parallel, delayed >> >> ???????? class Substance( etc .. >> ???????????? ... >> ???????????? def make_links(self): >> ???????????????? #Parallel(n_jobs=-2)(delayed( >> ???????????????? #??? scrape_db(self, create_useful_link(self, Link, >> db), db) for db in databases >> ???????????????? #)) >> >> I'm getting a TypeError from Parallel delayed() - can't pickle >> generator objects >> >> So my question is how to write the commented code properly? I suspect >> I haven't done enough comprehension. >> >> Thanks for any help >> >> Mike >> >> >> _______________________________________________ >> melbourne-pug mailing list >> melbourne-pug at python.org >> https://mail.python.org/mailman/listinfo/melbourne-pug >> > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug From miked at climate.com.au Sat Mar 10 01:13:33 2018 From: miked at climate.com.au (Mike Dewhirst) Date: Sat, 10 Mar 2018 17:13:33 +1100 Subject: [melbourne-pug] Joblib question In-Reply-To: References: <302ec160-ff59-a46c-2f20-14dab3350def@organicrobot.com> Message-ID: I've run the process a couple of times and there doesn't seem to be an appreciable difference. Both methods take enough time to boil the kettle. I know that isn't proper testing. 
It might be difficult to test timing accurately when we are waiting on websites all over the world to respond. Might set up a long running test to try and smooth out the differences. M On 10/03/2018 5:04 PM, Mike Dewhirst wrote: > On 9/03/2018 7:30 PM, Alejandro Dubrovsky wrote: >> delayed is a decorator, so it takes a function or a method. You are >> passing it a generator instead. >> >> def make_links(self): >> ????Parallel(n_jobs=-2)(delayed(scrape_db)(self, >> create_useful_link(self, Link, db), db) for db in databases >> ) >> >> should work, > > Yes it does :) Thank you Alejandro > >> but it will only parallelise over the scrape_db calls, not the >> create_useful_link calls I think. Which of the two do you want to >> parallelise over? Or were you after parallelising both? > > I think I probably want to use Celery (thanks Ed for the suggestion) > or similar so I can loop through (currently) nine databases and kick > off a scrape_db() task for each. Then each scrape_db task looks for > (currently) ten data items of specific interest. Having scraped a data > item we need to get_or_create (this is in Django) the specific data > note and add the result to whatever is there. > > That data note update might be a bottleneck with more than one > scrape_db task in parallel retrieving the same data item; say aqueous > solubility. We want aqueous solubility from all databases in the same > note so the user can easily compare different values and decide which > value to use. > > So parallelising everything might eventually be somewhat problematic. > It all has to squeeze through Postgres atomic transactions right at > the end. I suppose this is a perfect example of an IO bound task. > > Also, another thing is that the app is (currently) all server side. > I'm not (yet) using AJAX to update the screen when the data becomes > available. > > Cheers > > Mike > >> >> On 09/03/18 18:41, Mike Dewhirst wrote: >>> https://media.readthedocs.org/pdf/joblib/latest/joblib.pdf >>> >>> I'm trying to make the following code run in parallel on separate >>> CPU cores but haven't had any success. >>> >>> def make_links(self): for db in databases: link = >>> create_useful_link(self, Link, db) if link: scrape_db(self, link, db) >>> >>> This is a web scraper which is working nicely in a leisurely >>> sequential manner.? databases is a list of urls with gaps to be >>> filled by create_useful_link() which makes a link record from the >>> Link class. The self instance is a source of attributes for filling >>> the url gaps. self is a chemical substance and the link record url >>> field when clicked in a browser will bring up that external website >>> with the chemical substance selected for researching by the viewer. >>> If successful, we then fetch the external page and scrape a bunch of >>> interesting data from it and turn that into substance notes. >>> scrape_db() doesn't return anything but it does create up to nine >>> other records. >>> >>> ???????? from joblib import Parallel, delayed >>> >>> ???????? class Substance( etc .. >>> ???????????? ... >>> ???????????? def make_links(self): >>> ???????????????? #Parallel(n_jobs=-2)(delayed( >>> ???????????????? #??? scrape_db(self, create_useful_link(self, Link, >>> db), db) for db in databases >>> ???????????????? #)) >>> >>> I'm getting a TypeError from Parallel delayed() - can't pickle >>> generator objects >>> >>> So my question is how to write the commented code properly? I >>> suspect I haven't done enough comprehension. 
>>> Thanks for any help
>>>
>>> Mike
>>>
>>>
>>> _______________________________________________
>>> melbourne-pug mailing list
>>> melbourne-pug at python.org
>>> https://mail.python.org/mailman/listinfo/melbourne-pug
>>>
>>
>> _______________________________________________
>> melbourne-pug mailing list
>> melbourne-pug at python.org
>> https://mail.python.org/mailman/listinfo/melbourne-pug
>

--
Climate Pty Ltd
PO Box 308 Mount Eliza Vic 3930 Australia +61
T: 03 9034 3977
M: 0411 704 143

From miked at dewhirst.com.au Tue Mar 13 00:01:35 2018
From: miked at dewhirst.com.au (Mike Dewhirst)
Date: Tue, 13 Mar 2018 15:01:35 +1100
Subject: [melbourne-pug] Joblib question
In-Reply-To: 
References: <302ec160-ff59-a46c-2f20-14dab3350def@organicrobot.com>
Message-ID: 

On 10/03/2018 5:13 PM, Mike Dewhirst wrote:
> I've run the process a couple of times and there doesn't seem to be an appreciable difference. Both methods take enough time to boil the kettle. I know that isn't proper testing. It might be difficult to test timing accurately when we are waiting on websites all over the world to respond. Might set up a long running test to try and smooth out the differences.

Mmmmmmmm.

    parallel.py:547: UserWarning: Multiprocessing-backed parallel loops cannot be nested below threads, setting n_jobs=1
      **self._backend_args)

I think I'll go back to sequential scraping and slap myself on the wrist for premature optimisation.

>
> M
>
> On 10/03/2018 5:04 PM, Mike Dewhirst wrote:
>> On 9/03/2018 7:30 PM, Alejandro Dubrovsky wrote:
>>> delayed is a decorator, so it takes a function or a method. You are passing it a generator instead.
>>>
>>>     def make_links(self):
>>>         Parallel(n_jobs=-2)(
>>>             delayed(scrape_db)(self, create_useful_link(self, Link, db), db) for db in databases
>>>         )
>>>
>>> should work,
>>
>> Yes it does :) Thank you Alejandro
>>
>>> but it will only parallelise over the scrape_db calls, not the create_useful_link calls I think. Which of the two do you want to parallelise over? Or were you after parallelising both?
>>
>> I think I probably want to use Celery (thanks Ed for the suggestion) or similar so I can loop through (currently) nine databases and kick off a scrape_db() task for each. Then each scrape_db task looks for (currently) ten data items of specific interest. Having scraped a data item we need to get_or_create (this is in Django) the specific data note and add the result to whatever is there.
>>
>> That data note update might be a bottleneck with more than one scrape_db task in parallel retrieving the same data item; say aqueous solubility. We want aqueous solubility from all databases in the same note so the user can easily compare different values and decide which value to use.
>>
>> So parallelising everything might eventually be somewhat problematic. It all has to squeeze through Postgres atomic transactions right at the end. I suppose this is a perfect example of an IO bound task.
>>
>> Also, another thing is that the app is (currently) all server side. I'm not (yet) using AJAX to update the screen when the data becomes available.
>>
>> Cheers
>>
>> Mike
>>
>>>
>>> On 09/03/18 18:41, Mike Dewhirst wrote:
>>>> https://media.readthedocs.org/pdf/joblib/latest/joblib.pdf
>>>>
>>>> I'm trying to make the following code run in parallel on separate CPU cores but haven't had any success.
>>>> >>>> def make_links(self): for db in databases: link = >>>> create_useful_link(self, Link, db) if link: scrape_db(self, link, db) >>>> >>>> This is a web scraper which is working nicely in a leisurely >>>> sequential manner.? databases is a list of urls with gaps to be >>>> filled by create_useful_link() which makes a link record from the >>>> Link class. The self instance is a source of attributes for filling >>>> the url gaps. self is a chemical substance and the link record url >>>> field when clicked in a browser will bring up that external website >>>> with the chemical substance selected for researching by the viewer. >>>> If successful, we then fetch the external page and scrape a bunch >>>> of interesting data from it and turn that into substance notes. >>>> scrape_db() doesn't return anything but it does create up to nine >>>> other records. >>>> >>>> ???????? from joblib import Parallel, delayed >>>> >>>> ???????? class Substance( etc .. >>>> ???????????? ... >>>> ???????????? def make_links(self): >>>> ???????????????? #Parallel(n_jobs=-2)(delayed( >>>> ???????????????? #??? scrape_db(self, create_useful_link(self, >>>> Link, db), db) for db in databases >>>> ???????????????? #)) >>>> >>>> I'm getting a TypeError from Parallel delayed() - can't pickle >>>> generator objects >>>> >>>> So my question is how to write the commented code properly? I >>>> suspect I haven't done enough comprehension. >>>> >>>> Thanks for any help >>>> >>>> Mike >>>> >>>> >>>> _______________________________________________ >>>> melbourne-pug mailing list >>>> melbourne-pug at python.org >>>> https://mail.python.org/mailman/listinfo/melbourne-pug >>>> >>> >>> _______________________________________________ >>> melbourne-pug mailing list >>> melbourne-pug at python.org >>> https://mail.python.org/mailman/listinfo/melbourne-pug >> > > From matt.trentini at gmail.com Mon Mar 26 01:39:04 2018 From: matt.trentini at gmail.com (Matt Trentini) Date: Mon, 26 Mar 2018 05:39:04 +0000 Subject: [melbourne-pug] MicroPython meetup Message-ID: Hi folks, Apologies for the cross-post, hope it's not considered too rude! I suspect some of you may be interested in MicroPython, an implementation of Python designed to run on microcontrollers. We've been running a monthly MicroPython meetup out at CCHS, a great Hackerspace in Hawthorn, and this month's event is coming up Wednesday evening. Some folks who attend are working on projects being implemented with MicroPython, others are hacking on the language itself and some are complete newbies trying to figure out what the language is and what can be done with it. All are welcome. :) I'm happy to share some hardware you can use on the night so you can make LEDs blink or something. :) If you're interested then register at the link above and come along! Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL:
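For anyone wondering what the "make LEDs blink" exercise from Matt's announcement looks like, the classic MicroPython starter is only a few lines. The pin number is an assumption here: 2 drives the on-board LED on many ESP8266/ESP32 development boards, but it varies by board.

    # Classic MicroPython blink sketch. Adjust the pin number for your board;
    # 2 is a common on-board LED on ESP8266/ESP32 dev boards.
    from machine import Pin
    import time

    led = Pin(2, Pin.OUT)

    while True:
        led.value(not led.value())   # toggle the LED
        time.sleep(0.5)              # half a second between toggles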