stock quotes off the web, py style

Friedrich Rentsch anthra.norell at bluewin.ch
Wed May 16 18:02:50 EDT 2018



On 05/16/2018 06:21 PM, Mike McClain wrote:
> On Wed, May 16, 2018 at 02:33:23PM +0200, Friedrich Rentsch wrote:
> <snip>
>> I didn't know the site you mention. I've been getting quotes from
>> Yahoo daily. The service they discontinued was for up to 50 symbols
>> per page. I now parse a separate page of some 500K of html for each
>> symbol! This site is certainly more concise and surely a lot faster.
>      Thank you sir for the response and code snippet.
> As it turns out iextrading.com doesn't supply data on mutuals which
> are the majority of my portfolio so they are not going to do me much
> good after all.
>      If you please, what is the URL of one stock you're getting from
> Yahoo that requires parsing 500K of html per symbol? That's better
> than not getting the quotes.
>      If AlphaVantage ever comes back up, they send 100 days quotes for
> each symbol and I only use today's and yesterday's, but it is easy to
> parse.
>
>> You would do multiple symbols in a loop which you enter with an open
>> urllib object, rather than opening a new one for each symbol inside
>> the loop.
>      At the moment I can't see how to do that but will figure it out.
> Thanks for the pointer.
>
> Mike
> --
> "There are three kinds of men. The ones who learn by reading. The
> few who learn by observation. The rest of them have to pee on the
> electric fence for themselves." --- Will Rogers
I meant to check out AlphaVantage myself and registered, since it 
appears to be a kind of interest group. I wasn't aware it was down, 
because I haven't yet tried to log on. But I hope to do so when it 
comes back up.

The way I get quotes from Yahoo is a hack: 1. Get a quote on the Yahoo 
web page. 2. Copy the URL 
(https://finance.yahoo.com/quote/IBM?p=IBM&guccounter=1). 3. Compose 
such URLs in a loop, one symbol at a time, and read nearly 600K of HTML 
text for each of them. 4. Parse the text for the numbers I want to 
extract. Needles in a haystack: slow for a large set of symbols and 
grossly inefficient in terms of data traffic.

Forget my last suggestion, "You would do multiple symbols . . ." -- that 
was wrong. You have to open a urllib object for every symbol, the same 
way you'd open a file for every file name.
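A minimal sketch of that loop, for what it's worth. The URL is composed 
per symbol as described above (I dropped the &guccounter=1 part); the 
regex is only a guess at the JSON Yahoo embeds in the page, so it would 
need adjusting against the real page source:

```python
import re
import urllib.request

def quote_url(symbol):
    # Compose the per-symbol URL copied from the Yahoo quote page
    return "https://finance.yahoo.com/quote/%s?p=%s" % (symbol, symbol)

# Hypothetical pattern -- Yahoo's markup changes, so check the actual page
PRICE_RE = re.compile(r'"regularMarketPrice"\s*:\s*{\s*"raw"\s*:\s*([0-9.]+)')

def extract_price(html):
    # The needle in the haystack: one number out of ~600K of HTML
    m = PRICE_RE.search(html)
    return float(m.group(1)) if m else None

def fetch_quotes(symbols):
    prices = {}
    for sym in symbols:
        # One urlopen per symbol, the same way you'd open a file per name
        with urllib.request.urlopen(quote_url(sym)) as page:
            html = page.read().decode("utf-8", errors="replace")
        prices[sym] = extract_price(html)
    return prices
```

Parsing with a regex instead of eval() also sidesteps the safety issue 
mentioned below.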

And thanks to the practitioners for the warnings against using 'eval'. I 
have hardly ever used it, never in online communications. So my 
awareness level is low. But I understand the need to be careful.

Frederic
